INTEGRATION OF AI AGENTS INTO DEFI PLATFORMS: LEGAL IMPLICATIONS AND REGULATIONS

Giovanni Piccirillo

An in-depth analysis of how AI agents are transforming decentralized finance (DeFi), enhancing efficiency while introducing regulatory challenges. This article explores the integration of AI into DeFi, its benefits and risks, and its compliance with key regulatory frameworks such as MiCA, the ESMA guidelines and the AI Act. Learn how AI-driven automation impacts trading, portfolio management, and financial accessibility while addressing concerns related to security, transparency, and legal liability. Discover insights on regulatory gaps, potential solutions, and the future of AI in decentralized finance.

Introduction

In recent years, decentralized finance (more commonly known as DeFi) has emerged as a disruptive force in the global financial landscape, offering financial services without the need for traditional intermediaries through the use of blockchain technology. At the same time, artificial intelligence has made significant progress, leading to the development of AI agents capable of operating autonomously in various contexts. The integration of these agents into DeFi platforms represents a new frontier, promising to revolutionize the financial sector through the automation and optimization of services.

This convergence between AI and DeFi offers potential benefits, such as increased operational efficiency and accessibility to advanced financial services. However, it also raises complex legal and regulatory issues. Existing regulations, such as the MiCA Regulation and the ESMA guidelines, have been developed primarily to address the challenges posed by crypto-assets and may not adequately cover the peculiarities introduced by the integration of AI agents into DeFi platforms.

Furthermore, the adoption of AI agents in this context raises questions regarding data privacy, transaction security and legal liability in the event of malfunctions or incorrect decisions made by autonomous systems. It is therefore essential to carefully analyze the legal and regulatory implications of this integration to ensure sustainable and safe growth of the sector.

This article aims to explore in depth the definition and use of AI agents in DeFi platforms, evaluating their benefits and associated risks. We will analyze the main legal and regulatory implications, with particular attention to key regulations such as MiCA and the ESMA guidelines, and discuss challenges related to privacy, security and legal liability. Finally, we will present possible future scenarios and provide recommendations for complying with existing regulations, supporting the analysis with concrete examples and relevant case studies.

Benefits and risks of using AI Agents in DeFi

AI agents are software systems designed to perform specific tasks autonomously, learning and adapting through machine learning algorithms. In the context of DeFi, these agents can automate processes such as managing portfolios, executing trading strategies, and optimizing transactions. For example, an AI agent can analyze market data in real time to execute trading operations on decentralized platforms, quickly reacting to price changes without human intervention.
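
As a purely illustrative sketch of this kind of autonomous loop (not the implementation of any specific platform), the snippet below polls a price feed and trades on a simple moving-average crossover; `fetch_price`, `execute_trade` and the threshold values are all assumptions standing in for a real oracle, exchange connector and trained model.

```python
import time

# Hypothetical sketch of an autonomous trading agent's core loop.
# fetch_price() and execute_trade() stand in for calls to a price oracle
# and a decentralized exchange; the moving-average crossover rule is a
# deliberately simple placeholder for a learned trading model.

def moving_average(prices: list[float], window: int) -> float:
    """Average of the most recent `window` prices."""
    recent = prices[-window:]
    return sum(recent) / len(recent)

def run_agent(fetch_price, execute_trade, poll_seconds: float = 5.0) -> None:
    history: list[float] = []
    while True:
        price = fetch_price()          # e.g. read from an on-chain oracle
        history.append(price)
        if len(history) >= 20:
            fast = moving_average(history, 5)
            slow = moving_average(history, 20)
            if fast > slow * 1.01:     # short-term momentum: buy signal
                execute_trade("buy", price)
            elif fast < slow * 0.99:   # short-term weakness: sell signal
                execute_trade("sell", price)
        time.sleep(poll_seconds)       # react continuously, without human input
```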

The integration of AI agents into decentralized finance is profoundly transforming the financial sector, offering significant benefits but also presenting new challenges.

One of the main benefits of adopting AI agents in DeFi is the increase in operational efficiency. These agents are capable of rapidly analyzing large volumes of data, identifying patterns and trends that may escape the human eye. This enables more informed and timely investment decisions, optimizing trading and portfolio management strategies. For example, AI agents can constantly monitor markets, executing real-time trades on behalf of users, which reduces the risk of losses due to delays or human errors.

Furthermore, the automation offered by AI agents makes financial services more accessible. Even individuals with limited technical skills can participate in DeFi, relying on these agents to handle complex operations. This democratizes access to advanced financial tools, broadening the user base and promoting financial inclusion.

However, implementing AI agents in DeFi also comes with a number of risks. Security is a primary concern: AI agents, if compromised, can become vectors for cyber attacks, putting user funds and the integrity of platforms at risk. The autonomous nature of these agents means they can execute transactions without direct human supervision, increasing the potential for unauthorized or fraudulent transactions.

Another critical aspect concerns the transparency and comprehensibility of the algorithms used. AI agents often operate as "black boxes", whose internal decision-making logic is not easily understood by users. This opacity can breed distrust and make it difficult for users to assess the risks associated with the operations performed by agents.

Furthermore, relying on low-quality or manipulated input data can lead AI agents to make incorrect decisions, resulting in financial losses. Dependence on accurate, up-to-date data is critical, and any compromise in data quality can have significant impacts on agent performance.

Finally, widespread adoption of AI agents could contribute to greater market volatility. Large-scale automated decisions can amplify market movements, leading to more pronounced and potentially destabilizing price swings. This effect is particularly relevant in emerging markets such as cryptocurrencies, where liquidity can be limited and price fluctuations more pronounced.

In conclusion, while AI agents offer promising opportunities to improve efficiency and accessibility in DeFi, it is essential to carefully address the associated risks. A balanced approach, combining technological innovation with robust security measures and a clear understanding of potential dangers, will be crucial to fully exploit the benefits of AI agents in the decentralized financial sector.

Legal and regulatory implications

At present, the integration of artificial intelligence agents into decentralized finance platforms represents an emerging and rapidly evolving area, for which there are no specific regulations at the global or Italian level. However, broader regulations exist that could impact this sector and need to be analyzed to understand the challenges that may arise.

MiCA Regulation

The MiCA Regulation[1] certainly represents the first step towards the harmonization of rules relating to crypto-assets in the European Union. MiCA establishes uniform requirements for the issuance, offering to the public and provision of services related to crypto-assets, with the aim of protecting investors and preserving the integrity of the financial market.

However, MiCA was primarily designed to regulate traditional crypto-assets, focusing on centralized entities and on the issuance and management of crypto-assets under direct human control. AI agents in DeFi platforms, by contrast, are designed to operate autonomously, making decisions in real time and interacting dynamically with various financial protocols. This decision-making autonomy and adaptability represent a significant departure from the traditional operational structures envisioned by MiCA, as operational decisions are no longer directly attributable to specific individuals or entities, but to AI systems that learn and evolve over time.

This technological evolution raises fundamental questions regarding legal liability and regulatory compliance. If an AI agent makes a mistake or acts in a manner that does not comply with regulations, it becomes complex to determine who should be held responsible: the creator of the algorithm, the operator of the platform or the end user who interacted with the agent? This ambiguity is not adequately addressed in the current MiCA framework, suggesting the need for a revision of the definitions and categories provided by the regulation to include these new operational entities.

Furthermore, the dynamic interaction of AI agents with various DeFi protocols introduces additional complexities in terms of supervision and control. The decentralized and autonomous nature of these interactions makes it difficult for regulators to monitor and ensure compliance with existing regulations. Therefore, it may be necessary to develop new supervision tools and methodologies that take into account the peculiarities introduced by AI agents in DeFi platforms.

In short, while MiCA represents a pioneering regulatory framework for crypto-assets, the integration of AI agents into DeFi platforms highlights areas where the regulation may need updating or expansion. Addressing these challenges will require ongoing dialogue between policymakers, technologists and industry players to ensure that innovation can thrive within a clear and accountable regulatory environment.

Qualification of crypto-assets as financial instruments

The same reasoning made so far also applies to the classification of crypto-assets and their qualification as financial instruments.

On 17 December 2024, ESMA[2] published guidelines that outline the criteria for determining when crypto-assets should qualify as financial instruments. These guidelines aim to provide clarity on the classification of crypto-assets, helping national competent authorities to assess the nature of such instruments on a case-by-case basis. The main objective is to ensure consistent application of existing regulations, such as MiFID II[3], ensuring that crypto-assets that fall within the definition of financial instruments are subject to the same regulations applied to traditional financial instruments.

However, the integration of AI agents into decentralized finance platforms introduces new complexities to the regulatory landscape. These AI agents are designed to operate autonomously, executing transactions and interacting with smart contracts without direct human intervention. Their ability to learn, adapt and make decisions in real time distinguishes them from traditional entities under current regulations.

This autonomy raises questions about how to classify the activities carried out by AI agents and what the appropriate regulatory framework for such operations is. The ESMA guidelines focus mainly on the intrinsic nature of crypto-assets to determine their qualification as financial instruments. However, when an autonomous AI agent interacts with a crypto-asset, the nature of the activity could change, moving from a simple transaction to a more complex operation that takes on the characteristics of a regulated financial service.

For example, if an AI agent manages investment portfolios, executes algorithmic trading operations or offers financial advice based on machine learning algorithms, such activities could fall into the categories of regulated financial services under MiFID II. However, the absence of a human intermediary complicates the direct application of existing regulations, as responsibilities and obligations under the law are traditionally attributed to natural or legal persons.

Furthermore, the decentralized nature of DeFi platforms, combined with the autonomy of AI agents, makes it difficult for regulators to effectively monitor and supervise these activities. Transactions can occur pseudonymously, across multiple jurisdictions, and without a centralized point of control, challenging traditional financial regulatory enforcement mechanisms.

Therefore, there is a clear need for an update of the existing regulatory framework or the development of new specific guidelines that take into account the peculiarities introduced by AI agents in DeFi platforms. This could include defining new legal categories for AI agents, establishing specific transparency and accountability requirements for their operations, and developing advanced technological tools for monitoring and supervising their activities.

Thus, while the ESMA guidelines represent a step forward in the regulation of crypto-assets, the technological evolution represented by AI agents in DeFi platforms requires further reflection and regulatory adaptation. Only through a proactive and collaborative approach between regulators, technologists and market operators will it be possible to ensure that innovation can thrive in a safe and regulation-compliant environment.

AI ACT Regulation

The regulation known as the AI Act[4], which came into force on 1 August 2024, represents the first global attempt to provide a comprehensive regulatory framework for artificial intelligence within the European Union. Adopted to ensure the responsible, safe and transparent development of AI systems, the regulation is based on a risk-oriented approach, classifying AI applications according to their potential impact on fundamental rights, security and the market. The main objective of the AI Act is to establish clear requirements for the development, implementation and use of AI systems, promoting innovation and competitiveness without compromising the protection of individuals and the stability of markets.

Specifically, the AI Act classifies AI systems based on the level of risk associated with their use: unacceptable risk, high risk, limited or minimal risk and systemic risk.

Regarding "unacceptable risk", the AI Act prohibits specific uses of AI to prevent harm and abuse, including:

  • Manipulation of people causing physical or psychological harm.
  • Exploitation of vulnerabilities (e.g. age, disability).
  • Social scoring with disproportionate consequences.
  • Data collection for crime predictions.
  • Creation of biometric databases through unauthorized scraping.
  • Interpretation of emotions in work or educational settings.
  • Biometric-based classification for discriminatory inferences.
  • Some uses of real-time remote biometric identification.

The "high risk" system[5] instead, they are subject to stringent requirements, including conformity assessments, detailed technical documentation and transparency obligations[6]. All the sectors listed in Annex III of the AI ​​ACT are considered high risk with the clarification that, however, a system included in Annex III is not considered high risk if used for limited procedural tasks or without direct impact on people's rights.

Finally, "systemic risk" refers to the capacity of a general-purpose AI model[7] to have a high and significant impact on the European Union market, with potential negative effects on public health, safety, fundamental rights and society. The conditions for identifying systemic risk are the following:

  • A key indicator is the amount of computing resources used in training: a model is presumed to present systemic risk if this exceeds 10²⁵ floating-point operations.
  • Even without reaching this threshold, a model can be classified as presenting systemic risk if it combines: a) a high amount of computation; b) a high market impact (at least 10,000 registered users in the EU).
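
The two conditions can be restated as a simple check. In the sketch below, the 10²⁵ FLOP threshold and the 10,000-user indicator are the figures quoted above, while the function and parameter names are invented for illustration.

```python
# Illustrative restatement of the two systemic-risk conditions above.
# The thresholds are the figures quoted in the text; the function and
# parameter names are invented for this example.

FLOP_THRESHOLD = 1e25        # training-compute threshold cited by the AI Act
EU_USER_THRESHOLD = 10_000   # market-impact indicator quoted above

def presents_systemic_risk(training_flops: float,
                           high_compute: bool,
                           eu_registered_users: int) -> bool:
    # Key indicator: training compute above 10^25 floating-point operations.
    if training_flops > FLOP_THRESHOLD:
        return True
    # Even below the threshold: high compute combined with high market
    # impact (at least 10,000 registered users in the EU).
    return high_compute and eu_registered_users >= EU_USER_THRESHOLD

print(presents_systemic_risk(3e25, False, 0))       # True: over the threshold
print(presents_systemic_risk(8e24, True, 25_000))   # True: compute + impact
```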

In the context of integrating AI into decentralized platforms such as DeFi, the AI Act plays a crucial role. These applications, potentially classified as high-risk systems, must comply with stringent criteria in terms of transparency and accountability. Developers and operators of such agents will therefore need to ensure compliance with the requirements of the AI Act, which may include implementing measures to ensure the accuracy, robustness and security of AI systems, as well as transparency towards users regarding the operation of the agents.

Legal responsibility

As already highlighted in the previous points, one of the main regulatory challenges concerns legal liability for the actions performed by AI agents. We have to ask ourselves: in the event of an error or harmful behavior on the part of an AI agent, who is actually legally responsible? Is it the creator of the agent, the user who deployed it, or the DeFi platform that hosts it?

The issue of legal liability is particularly complex in the DeFi space. Given the decentralized nature of these platforms and the operational autonomy of AI agents, determining who is responsible in case of errors, financial losses or fraudulent activities becomes a challenge.

To examine the basis of legal liability more closely, it must be divided into two sub-categories: civil liability and criminal liability.

Civil liability for AI agents involves different actors. Manufacturers can be held liable for design defects, errors in model training, or negligence in maintenance and upgrades. Users, especially professionals or companies, are required to use the systems in accordance with the instructions provided, regularly checking their performance to prevent risks. In some cases, liability may be shared between manufacturer and user, especially when the damage results both from an intrinsic defect in the agent and from improper or negligent use.

The attribution of criminal liability to an AI agent, however, raises complex questions. Being devoid of consciousness and intentionality, agents cannot themselves be held directly liable. However, developers could be investigated for omitted controls or negligent design, while end users could be criminally liable if they knowingly use the systems for illicit purposes. Supplier companies can also incur criminal liability if they do not adequately supervise their technologies. On an ethical level, issues emerge related to the transparency of algorithms, the prevention of bias and the protection of human rights, aspects central to the responsible use of artificial intelligence.

It is therefore necessary to clearly establish legal responsibilities within smart contracts and to define precise guidelines that outline the roles and responsibilities of developers, platform operators and end users.

Case studies and real examples

The integration of AI agents into decentralized finance platforms is rapidly evolving, offering case studies that highlight both innovative potential and emerging regulatory challenges.

A significant example is represented by Griffin AI, an on-chain network designed to assign each agent a digital identity, permissionless access, and transaction capabilities. This structure allows AI agents to operate autonomously within the DeFi ecosystem, performing complex tasks such as algorithmic trading and portfolio management. However, this autonomy and direct access to funds raise critical questions regarding compliance with regulations such as the European Union's AI Act. Platforms like Griffin AI must therefore ensure that their agents operate in compliance with these regulations, implementing measures for risk assessment and operational transparency.

Another relevant case is the collaboration between Frax Finance and the IQ project, which led to the development of a technological platform focused on AI agents, inspired by the AIVM model. This initiative aims to make DeFi more intuitive and accessible by enabling AI agents to operate autonomously on the blockchain. However, the implementation of such agents requires careful consideration of existing regulations, such as MiCAR and the ESMA guidelines. These regulations establish requirements for transparency, investor protection and risk management, which are key to ensuring that the integration of AI agents occurs in a compliant and safe manner.

Finally, platforms like AgentOS combine machine learning and blockchain tools for the development of AI agents, supporting on-chain transactions and training agents with data from both the blockchain and external sources, such as social media. This integration raises additional regulatory issues, in particular regarding data protection and privacy, aspects regulated by the General Data Protection Regulation (GDPR) in the European Union[8]. Platforms must therefore implement appropriate measures to ensure compliance with data protection regulations, ensuring that user information is handled in a secure and transparent manner.

These examples illustrate how innovation in integrating AI agents into DeFi platforms must be balanced with a rigorous focus on regulatory compliance. The organizations involved must take a proactive approach, carefully monitoring the evolution of regulations and implementing measures that ensure safety, transparency and user protection within the DeFi ecosystem.

Final considerations and proposals for an adequate regulatory framework

In light of the evidence and considerations set out so far, it follows that, to address these challenges, it may be necessary to develop a specific regulatory framework that considers the peculiarities of integrating AI agents into DeFi platforms. This could include introducing transparency requirements regarding the operation of AI agents serving DeFi, implementing security measures to prevent malicious behavior, and clearly defining legal responsibilities in the event of malfunctions or abuse. Additionally, regulatory authorities may consider creating specific certifications or licenses for AI agent developers and operators in the DeFi context, ensuring that such entities meet high standards of security and regulatory compliance.

It would also be of great importance to bridge the gaps between the MiCA Regulation and the integration of AI agents, and likewise between MiFID II and AI agents.

For the purposes of compliance with MiCAR, the following solutions could be implemented:

  • the introduction of a transparent algorithmic supervision system, i.e. a framework that requires developers to integrate transparency mechanisms within AI agents. These should include regular audits and records of decisions to ensure traceability of operations (a minimal sketch of such a decision record follows this list);
  • shared responsibility defined through smart contracts: smart contracts should include specific clauses allocating responsibility among the creators of the algorithm, the platform operators and the end users, so that, even in the event of AI agent errors, it is clear who must legally answer for them;
  • the introduction of a certification system for AI agents operating in the DeFi field. This standard could be supervised by a Community body similar to that envisaged by MiCA for crypto-asset service providers;
  • an update of MiCAR to include a new category for AI agents, distinguishing them from traditional human operators. This would officially recognize their operational autonomy and provide dedicated compliance rules.
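
To make the first proposal more concrete, the sketch below shows one hypothetical way to implement "records of decisions": an append-only, hash-chained log whose integrity an auditor can verify after the fact. The `AgentDecisionLog` class and its fields are invented for illustration and do not correspond to any existing standard.

```python
import hashlib
import json
import time

# Hypothetical, append-only decision log for an AI agent. Each entry is
# chained to the previous one by hash, so tampering with any record
# breaks the chain and is detectable on audit.

class AgentDecisionLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, agent_id: str, action: str, rationale: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "agent_id": agent_id,
            "action": action,          # e.g. "swap 1 ETH -> USDC"
            "rationale": rationale,    # decision criteria used by the agent
            "timestamp": time.time(),
            "prev_hash": prev_hash,    # chains each entry to the one before
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the chain; any tampering breaks a hash link."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

Such a log could live off-chain with periodic hash anchoring on-chain, or entirely on-chain; either way, the point is that every autonomous decision leaves a verifiable trace for supervisors and auditors.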

To reduce, instead, the inconsistencies between the MiFID II directive and the use of AI agents in the qualification of crypto-assets as financial instruments, the following actions are proposed:

  • the creation of a new category of automated financial services, updating MiFID II with a section dedicated to "Automated Financial Services" to cover activities managed autonomously by AI agents, such as automated algorithmic trading, AI-based financial advice and portfolio management through autonomous agents;
  • the development of blockchain-based technologies to monitor the operations performed by AI agents in real time, ensuring that all transactions comply with MiFID II standards (a minimal sketch of such a monitor follows this list);
  • mandatory transparency and explainability requirements, whereby DeFi platforms would be obliged to provide clear and accessible explanations of the decision-making criteria used by AI agents, so as to reduce the risks related to the algorithmic "black box".
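
As a minimal sketch of the monitoring idea in the second point (combined with the explainability requirement in the third), the hypothetical checker below validates each agent-submitted transaction against an illustrative rule set. The notional limit, the mandatory rationale field and all names are assumptions for illustration, not actual MiFID II requirements.

```python
from dataclasses import dataclass

# Hypothetical real-time compliance monitor for agent-submitted
# transactions. The rule set (a per-transaction notional limit and a
# mandatory human-readable rationale) is invented for illustration; a
# real MiFID II rule book would be far richer.

@dataclass
class AgentTransaction:
    agent_id: str
    asset: str
    side: str            # "buy" or "sell"
    notional_eur: float
    rationale: str       # explanation of the decision criteria used

MAX_NOTIONAL_EUR = 100_000.0   # illustrative per-transaction limit

def check_transaction(tx: AgentTransaction) -> list[str]:
    """Return a list of rule violations; an empty list means compliant."""
    violations = []
    if tx.side not in ("buy", "sell"):
        violations.append(f"unknown side: {tx.side}")
    if tx.notional_eur > MAX_NOTIONAL_EUR:
        violations.append(f"notional {tx.notional_eur:.2f} EUR exceeds limit")
    if not tx.rationale.strip():
        violations.append("missing decision rationale (explainability)")
    return violations

# Usage: block the trade and alert a supervisor when violations exist.
tx = AgentTransaction("agent-42", "ETH/USDC", "buy", 250_000.0, "")
for problem in check_transaction(tx):
    print("BLOCKED:", problem)
```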

Everything set out and proposed so far is intended to highlight that, while the integration of AI agents into DeFi platforms offers significant opportunities for innovation in the financial sector, it is essential that this technological progress be accompanied by adequate regulatory development. Only through a clear and comprehensive regulatory framework will it be possible to guarantee investor protection, transaction security and trust in the emerging decentralized financial system.


_______________________________________________________________________________

[1] See EU Regulation 2023/1114.

[2] For further information see "Guidelines on the conditions and criteria for the qualification of crypto-assets as financial instruments", ESMA75453128700-1323, 17 December 2024.

[3] See Directive 2014/65/EU.

[4] See Regulation (EU) 2024/1689.

[5] See Article 6 of Regulation (EU) 2024/1689.

[6] See Article 17 of Regulation (EU) 2024/1689.

[7] General-purpose AI models are models trained with large amounts of data to perform various tasks and often serve as the basis for other AI systems.

[8] See Regulation (EU) 2016/679.