
Governance for AI Agents Explained: 5 Key Risks & How to Manage Them in IT

Héloïse Rozès
CEO and co-founder
May 20, 2025

The field of AI is evolving rapidly, bringing forth a new generation of tools known as artificial intelligence (AI) agents. “AI agent” is the new hype word, and you have probably already read it somewhere else. In this article we will take a deep dive together into the concepts of agents, autonomous agents, and multi-agent systems. Let’s start with some proper definitions, as at Corma we are scientists and engineers at heart.

The rise of AI agents requires a fundamental shift in governance approaches. This shift introduces the concept of AI governance, emphasizing the need to adapt traditional frameworks, standards, and best practices to ensure the safety, ethical operation, transparency, and accountability of autonomous and intelligent AI systems.

An AI agent refers to a system or program capable of performing tasks on behalf of a user or another system, either autonomously by designing its workflow and utilising tools independently, or with human-in-the-loop oversight for critical decision-making or task execution. AI models underpin the decision-making and adaptability of these agents, enabling them to analyze data, learn, and respond to changing environments. LangChain defines an AI agent as a system that uses an LLM to decide the control flow of an application.

In contrast, software refers to a set of instructions, data, or programs used to operate computers and execute specific tasks. SaaS (Software as a Service) is a cloud model where providers host and manage applications, delivering access to users over the internet via multi-tenant architecture and subscription licensing. A licence is an official permission or permit to do, use, or own something, whereas an account is an arrangement with an organisation to keep a record of transactions or interactions. For example, an employee might have an account with Salesforce, through which the employer keeps precise records of the employee’s customer interactions and sales activities. The employee would also have a licence provided by Salesforce to access and use specific parts of the Salesforce software.
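To make LangChain’s definition concrete, here is a minimal sketch of what it means for an LLM to decide the control flow of an application: the model, not a fixed script, picks which tool runs next. The LLM call is stubbed out with a keyword rule, and the tool names are illustrative assumptions, not part of any real framework.

```python
# A minimal sketch of "an LLM decides the control flow": the model picks
# the next tool instead of a hard-coded pipeline. The LLM is stubbed.

def llm_choose_tool(task: str, tool_descriptions: dict) -> str:
    """Stub standing in for a real LLM call that returns a tool name."""
    # A real implementation would prompt a hosted model with the task and
    # the tool descriptions, then parse the tool name out of its answer.
    return "search" if "find" in task.lower() else "summarise"

def search(task: str) -> str:
    return f"search results for: {task}"

def summarise(task: str) -> str:
    return f"summary of: {task}"

TOOLS = {"search": search, "summarise": summarise}

def run_agent(task: str) -> str:
    tool_name = llm_choose_tool(task, TOOLS)  # the model controls the flow
    return TOOLS[tool_name](task)

print(run_agent("find last quarter's SaaS spend report"))
```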

AI agents can make decisions and perform tasks autonomously, but they also introduce complex challenges in governance, compliance, and security. At the same time, traditional SaaS (Software as a Service) tools remain critical to business operations. Issues around SaaS have traditionally been handled with SaaS Management Platforms. Organizations now face the dual challenge of managing both traditional SaaS licences and these emerging AI agents effectively. This is where Corma steps in, offering a unified platform to map, centralise, and automate IT governance across all generations of digital systems. The autonomy and complexity of AI agents also bring new risks, including additional security, compliance, and operational challenges that organizations must proactively manage.

To clarify the differences between automation, AI workflows, and AI agents, and to help organizations choose the right approach for each use case, it helps to work from a framework that highlights the key decision-making criteria and governance considerations for each solution. When building AI agents, it is essential to integrate governance, security, and compliance from the outset, rather than treating them as afterthoughts.

To effectively govern AI agents, it’s essential to understand their diverse types. Capable agents can now autonomously perform complex tasks such as writing code, processing refunds, and manipulating live data, which increases the need for robust governance frameworks to address security, compliance, and ethical challenges.

Off-the-shelf agents cover a wide range of solutions, from general-purpose products like OpenAI’s agents to specialised, pre-built options available on platforms such as LangChain, which also offers a robust framework for building agents. There can, however, be confusion between tools like Agno or CrewAI and what qualifies as an in-house agent: an agent built with these tools can still be considered developed in-house.

Note: For each agent type, clear ownership and accountability must be established among stakeholders—including developers, security teams, compliance officers, and business leaders—to ensure effective oversight and management.

In-house Built Agents
  • Description: Internally developed by the organisation’s own teams
  • Advantages: Deep integration with internal systems; highly customisable
  • Challenges: Requires substantial internal resources; slower deployment; must define rules and manage security risks, including potential harmful content
  • Compliance: Must ensure ongoing regulatory compliance and explainability; strong rule and policy enforcement required
  • Data Collection: Organisation is responsible for secure, ethical data handling and quality control

Off-the-Shelf Agents
  • Description: Purchased from third-party vendors; general-purpose or specialised solutions (e.g. OpenAI Operator)
  • Advantages: Quick deployment
  • Challenges: Limited customisation; reduced control over rules; increased security and harmful-content risks
  • Compliance: Limited control over vendor compliance with regulations and ethical standards
  • Data Collection: Restricted visibility into data usage, privacy, and consent mechanisms

Agent-Building Tools
  • Description: Platforms like Agno or CrewAI that simplify custom agent creation
  • Advantages: Easier and faster to build custom agents
  • Challenges: Less customisable than in-house; requires monitoring for security, rules adherence, and harmful content
  • Compliance: Must validate that generated agents meet compliance and ethical requirements
  • Data Collection: Data flows and storage may be opaque, complicating privacy oversight

Horizontal Agents
  • Description: Generalist models applicable across industries and tasks
  • Advantages: Broad, versatile usage
  • Challenges: Shallow expertise; unpredictable behaviour; incorrect decisions; security risks must be managed
  • Compliance: Harder to ensure compliance across diverse use cases
  • Data Collection: Aggregates broad data, increasing privacy and consent risks

Vertical Agents
  • Description: Specialised for specific industries such as IT, healthcare, or finance
  • Advantages: High accuracy and deep domain expertise
  • Challenges: Limited scope; must address sector-specific rules, security risks, and harmful content
  • Compliance: Must meet strict, sector-specific regulatory and ethical standards
  • Data Collection: Handles sensitive domain data requiring robust privacy safeguards

When considering compliance and data handling, it is essential to uphold ethical standards and implement full lifecycle management for agents, ensuring continuous oversight from development through deployment and maintenance.

AI agents must be continuously monitored to detect model drift, ensure policy adherence, and maintain safety and ethical standards throughout their lifecycle, and they must operate within established governance frameworks to mitigate risks and ensure responsible operation. Actions with high risk or the potential for unpredictable behaviour or incorrect decisions should require explicit human approval to maintain accountability and control, and tracking and auditing those actions is essential for compliance and risk mitigation, helping to prevent security breaches and the spread of harmful content.
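As an illustration of explicit human approval for high-risk actions, here is a minimal sketch of an approval gate, assuming a simple allow-list policy. The action names and the policy are illustrative assumptions, not a reference implementation.

```python
# A hedged sketch of a human-in-the-loop gate: actions the policy marks as
# high-risk are held for explicit approval before the agent may execute them.

HIGH_RISK_ACTIONS = {"process_refund", "delete_account", "change_permissions"}

def requires_approval(action: str) -> bool:
    return action in HIGH_RISK_ACTIONS

def execute(action: str, params: dict) -> str:
    if requires_approval(action):
        answer = input(f"Approve '{action}' with {params}? [y/N] ")
        if answer.strip().lower() != "y":
            return f"{action} blocked pending human approval"
    print(f"AUDIT: executing {action} {params}")  # every action is logged
    return f"{action} done"

print(execute("send_status_email", {"to": "it@example.com"}))
print(execute("process_refund", {"order": "A-1042", "amount": 99.0}))
```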

The Power of Multi-Agent Systems (MAS)

A Multi-Agent System (MAS) involves multiple autonomous agents interacting and collaborating to achieve complex objectives that exceed the capabilities of individual agents alone. In MAS:

  • Each agent operates autonomously with specialised roles.
  • Agents communicate and coordinate actions explicitly (direct messaging) or implicitly (through shared environments). Agents connect using frameworks or protocols that enable seamless integration, observability, and governance across a wide range of tools.
  • MAS can involve cooperative interactions (agents working towards shared goals), competitive interactions (agents competing), or hierarchical arrangements where higher-level agents delegate tasks to lower-level ones.
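As a minimal sketch of these coordination patterns, the example below uses a simple in-process message bus: agents with specialised roles communicate explicitly via direct messages, and a higher-level planner delegates a task to a lower-level executor. All class and role names are illustrative assumptions.

```python
# A minimal MAS sketch: explicit (direct-message) coordination over a
# shared bus, with hierarchical delegation from planner to executor.

from collections import defaultdict

class MessageBus:
    """Shared channel for explicit, direct agent-to-agent messaging."""
    def __init__(self):
        self.inboxes = defaultdict(list)

    def send(self, recipient: str, message: str) -> None:
        self.inboxes[recipient].append(message)

class Agent:
    def __init__(self, name: str, bus: MessageBus):
        self.name, self.bus = name, bus

    def tell(self, recipient: str, message: str) -> None:
        self.bus.send(recipient, f"{self.name}: {message}")

    def read(self) -> list:
        return self.bus.inboxes.pop(self.name, [])

bus = MessageBus()
planner = Agent("planner", bus)    # higher-level agent
executor = Agent("executor", bus)  # lower-level agent

# Hierarchical delegation: the planner hands a task to the executor.
planner.tell("executor", "deprovision unused Notion licences")
for msg in executor.read():
    print(f"executor received -> {msg}")
```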

Examples of MAS applications include:

  • Smart Home Assistants: In a connected home, different AI agents (such as a thermostat, lighting controller, and security system) communicate and coordinate to optimise comfort, energy usage, and safety for the residents.
  • Online Food Delivery Platforms: Multiple agents work together to process an order: recommendation agents suggest meals, payment agents handle transactions, and logistics agents coordinate delivery with restaurants and drivers.
  • Personalised Healthcare Systems: Specialised medical agents collaborate by analysing patient data from different medical perspectives (diagnostics, medication management, rehabilitation) to create integrated treatment plans.

Technical standards like the Model Context Protocol can facilitate secure and compliant communication between agents in complex multi-agent systems.
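As a hedged sketch of what that can look like in practice, the example below exposes a single tool to agents following the quickstart pattern of the official MCP Python SDK (the `mcp` package). The tool, its name, and its return value are assumptions for illustration, not a real Corma or MCP API.

```python
# A hedged MCP sketch (assumes `pip install mcp`): a server exposing one
# hypothetical tool that agents can call over a governed, standard channel.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("licence-lookup")

@mcp.tool()
def licence_status(user_email: str) -> str:
    """Hypothetical tool: report a user's licence status to a calling agent."""
    # A real server would query the organisation's identity or asset system.
    return f"{user_email}: 1 active Salesforce licence"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```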

Data Privacy Considerations in AI Agent Governance

Data privacy stands at the heart of effective AI agent governance, especially as AI agents increasingly handle sensitive information across a wide range of real-world applications. Whether agents operate in customer service, healthcare, finance, or IT automation, the way they access, process, and store data introduces significant risks that must be managed through proper governance frameworks.

One of the primary concerns in governing AI agents is the potential for data breaches and unauthorized access to sensitive data. As AI systems become more capable and autonomous, the risk of exposing personal or confidential information grows—making robust security and compliance measures non-negotiable. Technical controls such as encryption, strong authentication mechanisms, and granular access controls are essential to ensure that only authorized users and agents can access sensitive information.
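The sketch below illustrates one of those controls, granular access: each agent identity carries explicit scopes, and every read of sensitive data is checked against them. The scope names, agent identities, and resources are illustrative assumptions.

```python
# A minimal sketch of scoped access control for agents: no scope, no data.

AGENT_SCOPES = {
    "support-agent": {"read:customer_profile"},
    "billing-agent": {"read:customer_profile", "read:payment_info"},
}

def can_access(agent_id: str, scope: str) -> bool:
    return scope in AGENT_SCOPES.get(agent_id, set())

def fetch_payment_info(agent_id: str, customer_id: str) -> str:
    if not can_access(agent_id, "read:payment_info"):
        raise PermissionError(f"{agent_id} lacks scope read:payment_info")
    return f"payment record for {customer_id}"

print(fetch_payment_info("billing-agent", "C-001"))  # allowed
try:
    fetch_payment_info("support-agent", "C-001")     # denied
except PermissionError as err:
    print(err)
```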

Effective AI agent governance also requires organizations to establish clear policies and procedures for data collection, processing, and retention. Agents should be designed to collect only the minimum data necessary to perform their tasks, and to retain that data only as long as required. This principle of data minimization not only reduces the attack surface for potential security threats but also aligns with global regulatory requirements, such as those outlined in the EU AI Act.
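In code, data minimization can be as simple as filtering a record down to an allow-list of fields before it ever reaches the agent. The field names below are illustrative assumptions.

```python
# A hedged sketch of data minimization: the agent only ever sees the
# fields its task actually requires.

ALLOWED_FIELDS = {"customer_id", "open_ticket", "product_tier"}

def minimize(record: dict) -> dict:
    """Pass through only the allow-listed fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

full_record = {
    "customer_id": "C-001",
    "open_ticket": "T-774",
    "product_tier": "enterprise",
    "home_address": "12 rue de la Paix",      # never needed for triage
    "card_number": "**** **** **** 4242",     # never needed for triage
}

print(minimize(full_record))
# {'customer_id': 'C-001', 'open_ticket': 'T-774', 'product_tier': 'enterprise'}
```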

Transparency and accountability are key components of any governance framework. Organizations must ensure that AI agents act in ways that are explainable and auditable, with clear operational boundaries and audit trails that allow for continuous monitoring of agent behavior. This helps identify vulnerabilities, enforce policies, and demonstrate compliance with data privacy regulations.
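An audit trail can start as simply as one structured, timestamped log entry per agent action, as in the standard-library sketch below; the agent and action names are illustrative assumptions.

```python
# A minimal audit-trail sketch: every agent action becomes a structured,
# timestamped JSON log line that reviewers can search and replay.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent_audit")

def audit(agent_id: str, action: str, detail: dict) -> None:
    """Append one structured entry per agent action."""
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "detail": detail,
    }))

audit("support-agent", "read_profile", {"customer_id": "C-001"})
```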

For example, consider an AI-powered customer support agent that interacts with users via natural language. Such an agent may access customer profiles, transaction histories, or even payment information. Without proper governance, there is a risk that the agent could inadvertently expose or misuse this data, leading to real-world consequences such as data breaches, loss of customer trust, and regulatory penalties.

The EU AI Act and similar regulatory frameworks emphasize the need for organizations to implement end-to-end risk management and lifecycle management for AI agents. This includes regular evaluation of agent behavior, continuous monitoring for security incidents, and prompt response to any identified risks. By embedding data privacy into the full lifecycle of AI agent development and deployment, organizations can ensure their agents operate safely, ethically, and in compliance with evolving standards.

In summary, data privacy is a foundational pillar of AI agent governance. By prioritizing technical controls, clear governance policies, and ongoing risk management, organizations can protect sensitive data, maintain customer trust, and ensure their AI agents act responsibly in the real world.

The Role of Corma in Managing SaaS and AI Agents

Corma provides a unified platform designed specifically to address governance challenges posed by both traditional SaaS tools and emerging AI agents. Our vision is to become the world’s leading unified and automated IT platform for all businesses.

Here’s how Corma helps:

  1. Unified Oversight: Gain complete visibility into all SaaS applications and AI agents within your organisation, whether built in-house or purchased off-the-shelf.
  2. Automated Licence Management: Track active licences across SaaS tools and AI agent usage, identify unused resources, and reduce software expenses accordingly.
  3. Compliance Assurance: Ensure adherence to regulatory requirements through robust monitoring and reporting features for both software licences and autonomous agent deployments, functioning as an efficient Identity and Access Management system.
  4. Seamless Onboarding/Offboarding: Automate (de)provisioning workflows for employees interacting with SaaS applications and AI agents.
  5. Full Lifecycle Management: Support the full lifecycle management of AI agents, from development and deployment to ongoing oversight, ensuring continuous governance and adaptive controls at every stage.
  6. Risk Mitigation: Proactively detect unauthorized apps or rogue agents through continuous monitoring, keeping AI agents within governance frameworks, mitigating the new security risks introduced by autonomous systems, and securing your entire IT environment.

As organizations increasingly adopt traditional SaaS tools alongside sophisticated AI agents, trust becomes fundamental for effective governance. Corma’s approach combines automation with transparency by providing real-time insights into software usage patterns and agent behaviours.

Skello saved over €2,000/month and 100+ IT hours

A prime example is Skello, a fast-growing B2B SaaS HR software provider, which leveraged Corma to automate IT operations and optimise its software stack. To manage 556 SaaS applications and 14,000 licences, Skello deployed Corma’s SaaS Management Platform to automate onboarding and offboarding for dozens of employees, identify over 100 shadow IT apps, and save hundreds of hours annually (the equivalent of one full-time employee). These efficiencies enabled Skello to quickly recoup Corma’s costs, cutting unused licences and reducing expenses by over €2,000 per month on Notion alone. As their Lead Cloud & IT Manager put it, Corma “is a game changer for automating IT, for example our onboarding and offboarding processes. It’s a big time-saver for our IT team and HR department.”

Q&A

Q: What is the role of AI agent governance in enterprise IT?

A: AI agent governance ensures secure, compliant operation of autonomous agents by managing access control, data privacy, and regulatory requirements like SOC 2 or ISO 27001.

Q: How do multi-agent systems (MAS) improve IT automation?

A: Multi-agent systems (MAS) coordinate multiple AI agents to perform complex IT workflows, enhancing efficiency, task delegation, and cross-system automation.

Q: How does Corma simplify SOC 2 and ISO 27001 compliance with AI agents?

A: Corma automates user provisioning, generates audit-ready reports, and enforces least-privilege access to meet SOC 2 and ISO 27001 obligations for both SaaS tools and AI agents.

