Imagine you are an investor in a decentralized finance (DeFi) protocol. One day, you discover that your account has been emptied by an automated operation performed by AlphaTrade, an AI Agent specializing in algorithmic trading. The AI, designed to maximize profits by exploiting price changes on decentralized platforms, identified an opportunity on a new liquidity pool and executed a series of high-frequency speculative trades. Due to a bug in the underlying smart contract code, however, the transactions triggered a vulnerability that drained the liquidity pool, causing millions in losses.
The platform on which AlphaTrade operated disclaims any involvement: the smart contract is "trustless" and executes its code automatically, without human intervention. The AI developers claim they bear no direct responsibility, because AlphaTrade acted according to its training, based on a reinforcement learning model that allowed it to make independent decisions.
At this point, an unprecedented legal dilemma arises: who is responsible for this damage?
- The DeFi protocol, which integrated the vulnerable smart contract?
- The AI developers, who designed an autonomous agent without ethical or legal constraints?
- The users themselves, who accepted the risks of operating in a trustless environment?
This hypothetical case, inspired by real events in the crypto sector, is just one of many situations in which the interaction between AI Agents and smart contracts raises complex legal issues. When Artificial Intelligence makes autonomous decisions on the blockchain, who is responsible? And above all, is current law capable of managing such scenarios?
In traditional law, liability for a wrongful action falls on natural or legal persons: a company, a developer or a user. However, when an AI Agent acts without direct supervision, the chain of responsibility becomes nebulous. The AI is not a legal entity; it cannot be sued or hold assets. The smart contract, for its part, automatically executes immutable instructions, with no possibility of interpretation or external intervention.
We are faced with a legal paradox:
- Smart contracts, being self-executing and decentralized, effectively rule out modifying or revoking an incorrect action. How, then, do you handle an error or a breach of contract?
- Can the law impose constraints on an artificial intelligence that acts in a trustless system, without intermediaries and without a central authority?
These questions clearly expose the current regulatory gaps. The law still rests on identifiable responsible parties, but AI Agents and smart contracts operate in a decentralized ecosystem, without a clearly attributable entity. This asymmetry between technology and jurisprudence opens the debate on new models of responsibility and on how the legal system can adapt to the evolution of Artificial Intelligence on the blockchain.
The interaction between AI Agents and smart contracts opens up new scenarios in law, posing questions that challenge traditional legal categories. This article explores the current regulatory framework, identifying the existing rules that could apply to these technologies, their limitations, and the areas still lacking clear regulation.
AI Agents and smart contracts compared
AI Agents are artificial intelligence systems designed to perform tasks autonomously, making decisions based on external inputs and learning models. Unlike traditional deterministic software, these agents do not simply follow predefined instructions, but adapt their behavior in response to environmental variables, thanks to techniques such as machine learning and natural language processing.
In the context of blockchain, AI Agents are used to manage operations without the need for direct human intervention. They can interact with smart contracts, analyze on-chain data to make financial decisions, or execute transactions based on pre-set criteria. Their ability to operate in a fully automated and decentralized manner makes them powerful tools, but also raises crucial questions about their predictability and the ability to limit their action in the event of errors or abuse.
On a technical level, an AI Agent can be structured as a multi-agent system, in which different modules collaborate to analyze the context, optimize strategies and implement decisions. When these agents are integrated with smart contracts, their actions become irreversible, since every action is recorded on the blockchain and cannot be modified ex post. This feature guarantees transparency and efficiency on the one hand, but amplifies the risks of unpredictable or incorrect behaviour on the other.
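Purely as an illustration of this modular structure, the following minimal Python sketch models a hypothetical agent whose modules cooperate to analyze the context, choose a strategy and execute the decision. All class names and the toy trading rule are invented for the example and do not refer to any real framework or protocol.

```python
from dataclasses import dataclass

# Hypothetical market snapshot the agent reasons about.
@dataclass
class MarketContext:
    asset: str
    price: float
    moving_average: float

class ContextAnalyzer:
    """Module that gathers and summarizes external data (here: a fake feed)."""
    def observe(self) -> MarketContext:
        # A real agent would query on-chain and off-chain sources instead.
        return MarketContext(asset="ETH", price=1900.0, moving_average=2000.0)

class StrategyModule:
    """Module that turns observations into a decision."""
    def decide(self, ctx: MarketContext) -> str:
        # Toy rule: buy when the price is 5% below its moving average.
        if ctx.price < ctx.moving_average * 0.95:
            return "BUY"
        return "HOLD"

class ExecutionModule:
    """Module that carries out the decision (here it only prints it)."""
    def execute(self, decision: str, ctx: MarketContext) -> None:
        print(f"{decision} {ctx.asset} at {ctx.price}")

class TradingAgent:
    """Minimal agent loop: observe -> decide -> act."""
    def __init__(self) -> None:
        self.analyzer = ContextAnalyzer()
        self.strategy = StrategyModule()
        self.executor = ExecutionModule()

    def step(self) -> None:
        ctx = self.analyzer.observe()
        decision = self.strategy.decide(ctx)
        self.executor.execute(decision, ctx)

if __name__ == "__main__":
    TradingAgent().step()
```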
Smart contracts, for their part, are computer protocols that automatically execute contractual clauses when certain conditions occur, without the need for human intervention or intermediaries. Developed on blockchain, they guarantee transparency, security and immutability, reducing the risk of manipulation or unilateral violations.
Unlike a traditional contract, which is based on an agreement between the parties and on the possibility of turning to an authority to enforce one's rights, a smart contract executes its conditions immediately and irrevocably. This means that, once activated, the code operates autonomously, with no possibility of interpretation or subsequent modification.
These characteristics make smart contracts particularly suitable for financial transactions, digital asset management and process automation, but they also raise issues in terms of legal liability, especially when they interact with AI Agents that make decisions without human supervision.
The integration between AI Agents and smart contracts represents a significant evolution in the automation of digital transactions. Thanks to their advanced decision-making capabilities, AI Agents can analyze data in real time, identify favorable conditions and activate smart contracts without human intervention, creating an autonomous contract execution ecosystem on blockchain.
The process occurs through three main phases (a code sketch follows the list):
- Monitoring: the AI Agent monitors the context in which it operates, acquiring information from on-chain and off-chain sources. It can analyze market prices, financial trends, weather conditions (in the case of insurance contracts) or other relevant parameters.
- Decision: once the data has been processed, the AI determines whether the conditions set by the smart contract are satisfied and decides whether to activate it. This process occurs autonomously, based on machine learning algorithms or predefined rules.
- Execution: if the criteria are met, the AI Agent sends a transaction to the blockchain, activating the smart contract. The latter, in a self-executing manner, carries out the planned actions, such as transferring funds, updating registries or activating services.
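To make the three phases concrete, here is a minimal sketch in Python using the web3.py library. The RPC endpoint, contract address, ABI and the conditionsMet()/settle() functions are hypothetical placeholders invented for the example, not a real protocol; a production agent would also need error handling, gas management and risk controls.

```python
import os
from web3 import Web3

# Phase 1 - Monitoring: connect to the chain and read on-chain state.
# The RPC URL, contract address and ABI are placeholders for this example.
w3 = Web3(Web3.HTTPProvider(os.environ["RPC_URL"]))
contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # hypothetical address
    abi=[
        {"name": "conditionsMet", "type": "function", "stateMutability": "view",
         "inputs": [], "outputs": [{"name": "", "type": "bool"}]},
        {"name": "settle", "type": "function", "stateMutability": "nonpayable",
         "inputs": [], "outputs": []},
    ],
)

# Phase 2 - Decision: the agent (here a trivial rule, in practice possibly an
# ML model) checks whether the smart contract's conditions are satisfied.
should_trigger = contract.functions.conditionsMet().call()

# Phase 3 - Execution: if the criteria are met, sign and send the transaction
# that activates the contract's self-executing logic.
if should_trigger:
    account = w3.eth.account.from_key(os.environ["AGENT_PRIVATE_KEY"])
    tx = contract.functions.settle().build_transaction({
        "from": account.address,
        "nonce": w3.eth.get_transaction_count(account.address),
    })
    signed = account.sign_transaction(tx)
    # Attribute is named rawTransaction in older web3.py releases.
    tx_hash = w3.eth.send_raw_transaction(signed.raw_transaction)
    print("Contract activated, tx:", tx_hash.hex())
```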
The problem of legal responsibility
Returning to the question of the legal and criminal liability of AI Agents, all of this raises crucial questions, especially when serious errors can be committed autonomously by artificial intelligences. The much-discussed question is always the same: "Who is legally and criminally responsible if an AI makes a mistake?" The common position suggests that responsibility lies with humans, such as the programmers or users of the AI. However, this approach is too simplistic when dealing with strong AI systems, which, although created by humans, can learn autonomously and make unpredictable decisions.
In this context, a difficulty emerges in attributing responsibility in case of errors caused by AI, since the acts performed by these technologies could be the result of autonomous learning or unexpected decisions, which are beyond the direct control of the creator or user. The legal debate focuses on whether responsibility should be attributed exclusively to the humans involved or whether the introduction of autonomous responsibility for AI is necessary, recognizing a sort of legal personality for these advanced systems.
In line with these reflections, the Israeli jurist Gabriel Hallevy has analyzed various paradigms of criminal responsibility applicable to AI. Among these, the "perpetration through another" paradigm suggests that the human (e.g., the programmer or the end user) can be held responsible for instructing the AI to commit a crime. In the context of strong AI, however, where learning and decisions take place autonomously, the problem emerges of attributing responsibility directly to the AI, in a way similar to the criminal liability of legal entities.
It is important to understand that, in a traditional legal context, criminal liability is usually attributed to an individual who has acted maliciously or negligently. However, when dealing with autonomous AI Agents, as in the case of financial bots or automatic trading systems, this principle of attributability becomes more difficult to apply.
In this regard, Hallevy hypothesizes a parallel with the system introduced by Italian Legislative Decree 231/2001 on the attribution of crimes to companies, suggesting that even AI entities could be held criminally liable for acts committed autonomously, unless their behavior can be traced back to human error.
In light of this reflection, it appears clear that there is a "responsibility gap" and that this could open up scenarios of potential irresponsibility.
Regulatory gaps
The advent and spread of AI Agents and smart contracts have highlighted significant gaps in the current regulatory framework, raising questions regarding their regulation and legal implications.
First of all, there is the absence of legal personality for AI. In traditional law, legal responsibility is attributed to entities with legal personality, such as individuals or legal entities. AI Agents, lacking awareness and intentionality, do not fall into this category. Their growing operational autonomy has prompted discussions on the possibility of granting them an "electronic legal personality", in order to facilitate the attribution of responsibility and the management of any damage caused by their actions. The idea, however, remains the subject of debate, with conflicting opinions on its desirability and feasibility.
A second gap concerns smart contracts as "self-executing" tools lacking legal flexibility. Smart contracts are software programs that automatically execute operations when certain conditions occur, following the logic "if this happens... then do that". This automation ensures efficiency and reduces the need for intermediaries, but the self-executing nature imposes limits in terms of legal flexibility. Any change or adaptation of the contractual conditions requires technical intervention on the code, making it difficult to align with the dynamics of traditional law, which often requires interpretation, negotiation and contextual adaptation. Moreover, the rigidity of smart contracts can conflict with legal principles that provide remedies and adjustments in unexpected or exceptional situations.
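As a purely illustrative toy simulation in Python (not actual on-chain code), the sketch below shows why this rigidity arises: the condition and the consequence are fixed in the code, so adapting the "contract" to new circumstances means rewriting and redeploying it rather than renegotiating it. The escrow scenario and all names are invented for the example.

```python
from dataclasses import dataclass

# Toy, off-chain simulation of a smart contract clause: the condition and the
# consequence are hard-coded, mirroring the "if this happens... then do that"
# logic. Changing either one means changing (and redeploying) the code itself.
@dataclass
class EscrowClause:
    buyer: str
    seller: str
    amount: float
    delivery_confirmed: bool = False

    def execute(self) -> str:
        # Self-executing rule: once the condition holds, payment is released
        # automatically; there is no room for interpretation or renegotiation.
        if self.delivery_confirmed:
            return f"Transfer {self.amount} from {self.buyer} to {self.seller}"
        return "No action: condition not met"

clause = EscrowClause(buyer="Alice", seller="Bob", amount=100.0)
clause.delivery_confirmed = True   # the triggering event occurs
print(clause.execute())            # the clause executes as written, no appeal
```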
The combination of these regulatory gaps therefore requires careful review and, probably, legislative reform to ensure that AI Agents and smart contracts can effectively integrate into the existing legal system, ensuring clarity in legal liability and operational flexibility.
Case studies on AI Agents and smart contracts in practice
Several case studies, across different domains, illustrate the integration of AI and smart contracts and their legal implications.
First of all, there are cases concerning DAOs and decentralized governance:
- In 2016, The DAO, a Decentralized Autonomous Organization on Ethereum, raised over $150 million in Ether. A vulnerability in its code allowed an attacker to drain approximately $50 million in funds. The Ethereum community responded with a hard fork to return funds to investors, creating two separate blockchains: Ethereum and Ethereum Classic. This incident raised questions about the legal liability of developers and investors in DAOs.
- In 2022, a lawsuit involved bZx DAO, accused of negligence for failing to properly secure a DeFi protocol, leading to the theft of $55 million. The lawsuit implicated co-founders and members of the DAO, underscoring the possibility of shared liability in the event of operational failures.
- Furthermore, in December 2024, a federal judge ruled that Lido DAO and its venture capital investors could be held liable for the sale of unregistered securities, treating the DAO as a general partnership. This decision reinforced the need for clearer regulation of DAOs and their members.
Then there are case studies concerning AI in financial trading:
- The use of AI-powered trading bots is widespread in financial markets to automate trades. However, some bots have executed unauthorized trades or manipulated markets, leading to significant losses. For example, in 2021, a trading bot caused abnormal fluctuations in the cryptocurrency market, raising concerns about how to assign legal liability for such actions.
- In scenarios where bots cause harm, victims may consider legal action against the exchanges hosting such bots or the developers responsible for their operation. However, the decentralized and pseudonymous nature of many trading platforms makes identification and accountability of the actors involved complicated.
Finally, there are case studies on copyright and AI-driven smart contracts:
- Smart contracts can automate the creation and sale of digital content, such as NFTs (Non-Fungible Tokens), music and art. For example, an artist can program a smart contract to automatically release an NFT when a music track is published, ensuring transparency and traceability in transactions.
At the same time, automation in the generation and sale of content via AI can lead to copyright conflicts. For example, if an AI generates a work based on copyrighted material without permission, questions arise about who owns the rights to the resulting work and how licenses should be enforced. The lack of legal personality for AI further complicates the attribution of responsibility and the management of copyright.
These cases highlight the need for an updated regulatory framework that addresses the unique challenges posed by the integration of AI and smart contracts in the financial and creative sectors.
Conclusions
The discussion on the legal and criminal liability of AI Agents is not only about assigning blame, but also about the need to develop an adaptable legal system that recognizes the growing autonomy of these technologies. As experts such as Ivan Salvadori have highlighted, it is crucial that programmers and end users adopt diligent practices in the design and use of AI, in order to prevent the risk of wrongdoing.
Therefore, to effectively navigate the challenges posed by AI and smart contracts, the different actors involved must each play their part:
- Jurists must develop specialized skills to interpret and apply the law in a technologically advanced context, addressing issues such as legal liability and intellectual property.
- Regulators must create a flexible regulatory framework that can adapt quickly to technological evolution, balancing innovation with the need to protect consumers and society.
- Developers must integrate legal and ethical considerations into the software lifecycle, adopting privacy-by-design practices and ensuring algorithmic transparency.
The future challenge will lie, for everyone, in the balance between innovation and legal protection, creating a regulatory framework that reflects the evolving technological reality.