PRIVACY AND DATA PROTECTION IN THE USE OF AI AGENTS ON BLOCKCHAIN: LEGAL, TECHNICAL AND ETHICAL PERSPECTIVES

Giovanni Piccirillo

This academic paper, titled Privacy and Data Protection in the Use of AI Agents on Blockchain: Legal, Technical, and Ethical Perspectives, explores the complex intersection between artificial intelligence (AI) and blockchain technologies, with a specific focus on how their convergence challenges existing data protection laws such as the GDPR (General Data Protection Regulation). The paper presents a thorough analysis of the legal, technical, and ethical implications of using decentralized AI systems, addressing issues like data immutability, automated profiling, the right to be forgotten, and the role of actors in decentralized networks. It also discusses real-world case studies such as Ocean Protocol, Fetch.ai, SingularityNET, and Bittensor. The article offers recommendations for policymakers, developers, and businesses, and proposes emerging solutions including zero-knowledge proofs, privacy-preserving techniques, and hybrid on-chain/off-chain architectures. This document is essential reading for professionals and researchers interested in legal compliance, responsible AI design, and blockchain governance.

Abstract

The convergence between artificial intelligence (AI) and blockchain technologies is giving rise to a new generation of intelligent decentralized systems, capable of operating autonomously, transparently and immutably. However, this integration poses new and complex challenges regarding privacy and personal data protection, especially in light of current regulations such as the General Data Protection Regulation (GDPR). This article aims to critically analyze the points of friction between the fundamental principles of data protection (such as minimization, storage limitation, right to be forgotten) and the intrinsic characteristics of blockchain and AI agents, such as data immutability and automated profiling.

Through an interdisciplinary analysis, the contribution offers a technical overview of the two technologies, explores the main legal issues and proposes a theoretical framework of responsibility in decentralized systems. Furthermore, some emerging solutions are presented—such as privacy-preserving approaches, the use of zero-knowledge proofs, hybrid architectures and self-sovereign identity models—capable of mitigating the conflict between transparency and confidentiality.

The work concludes with some operational recommendations for developers, policy makers and businesses, underlining the urgency of a dialogue between technological innovation and regulatory adaptation, in order to guarantee a sustainable balance between efficiency, transparency and protection of fundamental rights.

Introduction

In recent years, the integration between artificial intelligence (AI) and blockchain technologies has generated growing academic and industrial interest, opening up unprecedented scenarios in terms of automation, security, traceability and decentralized governance. AI allows the development of intelligent agents capable of analyzing data, making autonomous decisions and learning from context, while blockchain offers a transparent and immutable infrastructure, capable of guaranteeing trust between parties who are not necessarily trusted.

This technological convergence is giving rise to new paradigms such as autonomous economic agents, cognitive DAOs and distributed decision-making systems, which promise to revolutionize sectors such as the supply chain, decentralized finance (DeFi), healthcare and digital identity. However, the joint adoption of AI and blockchain raises important legal and ethical questions, especially in matters of privacy and the protection of personal data.

Unlike centralized architectures, where the subjects responsible for data processing are relatively easy to identify, in decentralized and autonomous systems substantial ambiguities emerge around roles, responsibilities, rights and obligations. Furthermore, the persistence of data on the blockchain, the algorithmic opacity of some AI models, and the growing trend towards automated profiling call into question the actual compatibility of these technologies with regulations such as the General Data Protection Regulation (GDPR).

The goal of this article is to analyze, from a multidisciplinary perspective, the main regulatory and technical challenges related to the use of AI agents operating on blockchain infrastructures, with particular attention to the impacts on privacy, the legal framework of reference, and emerging technological solutions capable of mitigating risks.

The fusion between artificial intelligence and blockchain, while offering significant opportunities, exposes profound critical issues regarding confidentiality, data control and respect for fundamental rights. On the one hand, AI needs large volumes of data, often personal or sensitive, in order to function effectively through training and optimization processes; on the other hand, the blockchain is based on immutability and transparency, characteristics that are difficult to reconcile with the principles of storage limitation, right to be forgotten and data rectification established by privacy regulations, such as the GDPR.

In particular, the interaction between AI agent and blockchain can involve:

  • The immutable recording of data generated or used by the AI, making its deletion or modification impossible;
  • The uncontrolled dissemination of information through public blockchains, where each node has access to the same copy of the data;
  • The difficulty in identifying the data controller in decentralized architectures, where artificial intelligence operates autonomously and distributed;
  • The presence of automated profiling which, if not correctly regulated, can lead to discrimination or violations of individual rights.

Furthermore, the opacity of some AI models (especially those based on deep learning) makes it complex to clearly explain the logic underlying automated decisions, a requirement connected to Article 22 of the GDPR. The lack of algorithmic transparency, combined with the immutability of distributed ledgers, thus poses not only technical but also legal and philosophical challenges, which require a rethinking of the current regulatory system or, at least, the adoption of technological solutions compatible with existing regulatory principles.

Legal framework of reference: GDPR (EU) and other relevant regulations

The protection of personal data represents one of the central nodes in the legal debate on the use of artificial intelligence on blockchain infrastructures. Among the most advanced and complete regulations globally stands Regulation (EU) 2016/679 (the GDPR), which became applicable in 2018 with the aim of harmonizing data protection across the European Union and guaranteeing the protection of citizens' fundamental rights in the digital context.

The GDPR applies to any processing of personal data, regardless of the technology used, and therefore also includes automated, decentralized and distributed ledger-based systems. For the purposes of the regulation:

  • Personal data, under Article 4, paragraph 1, of the GDPR, means any information relating to an identified or identifiable natural person (“data subject”). An identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier, or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that person.

This broad and technologically neutral definition implies that, in the context of blockchain, even a public wallet address or a hash of personal data, if traceable to a natural person through reasonably accessible means, falls within the notion of personal data.

  • Processing, under Article 4, paragraph 2, means any operation or set of operations performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction.

It follows that any activity carried out by an AI agent that involves the collection, analysis, profiling, prediction or storage of personal data – even in decentralized environments – falls within the notion of automated processing. Particularly relevant in this context is Article 22 of the GDPR, which governs decisions based solely on automated processing, including profiling, that produce legal effects or similarly significant impacts on the data subject.

Where such decisions are made by autonomous AI agents on blockchain, the question of responsibility for the processing becomes central, as does the obligation to guarantee mechanisms of transparency, human intervention and protection of the data subject, even in the absence of a centralized entity.

In the context of blockchain and AI architectures, these definitions potentially extend to wallet addresses that can be associated with individuals, hashes of personal data, outputs generated by AI agents about specific individuals, and any other data that, even indirectly, allows a person to be identified.
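This point about hashed identifiers can be made concrete with a minimal sketch (the email addresses are fabricated for illustration): because hashing is deterministic, a digest recorded on-chain remains linkable to a person by anyone able to enumerate candidate identities.

```python
import hashlib

# Illustrative sketch: hashing personal data does not, by itself,
# anonymize it. SHA-256 is deterministic, so a digest stored
# immutably on-chain can be tested against any candidate identity.
def sha256_hex(value: str) -> str:
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

on_chain_record = sha256_hex("alice@example.com")  # published on-chain

# Anyone holding a list of known identities can test for a match:
known_emails = ["bob@example.com", "alice@example.com"]
matches = [e for e in known_emails if sha256_hex(e) == on_chain_record]
print(matches)  # the digest links straight back to the person
```

This is why a bare hash of personal data is generally treated as pseudonymized, not anonymized, data.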

The fundamental principles of the GDPR, which are particularly critical in this context, include:

  • purpose limitation and data minimization (art. 5),
  • storage limitation (art. 5(1)(e)),
  • transparency and accountability (art. 5(1)(a) and art. 5(2)),
  • the right to be forgotten (art. 17), data portability (art. 20) and the right not to be subject to automated decisions without adequate safeguards (art. 22).

These rights clash with intrinsic characteristics of the technologies analyzed, such as the immutability of the blockchain, which prevents data from being deleted or modified once recorded, and the opacity of algorithmic decisions in some AI models.

Alongside the GDPR, other international regulations are also starting to regulate these issues. The California Consumer Privacy Act (CCPA) and its evolution, the California Privacy Rights Act (CPRA), introduce similar rights (access, deletion, opt-out from the sale of data) and impose transparency obligations, but adopt a more flexible approach.

Other relevant examples include:

  • the Brazilian LGPD law,
  • the Personal Information Protection Law (PIPL) in China,
  • the guidelines of the OECD and of the EDPB (European Data Protection Board),
  • the recommendations of ENISA on security and decentralization.

However, no current legislation fully and exhaustively addresses the legal complexity resulting from the combination of AI agents and distributed systems. There is therefore a need for regulatory updates, extensive interpretation of the provisions in force or, alternatively, the adoption of technological approaches compatible by design with regulatory principles.

The principles of the GDPR applied to Blockchain and AI: data minimization, purpose limitation, storage limitation

The GDPR is based on a series of general principles which guide any processing of personal data. Among these, the most problematic in the context of the convergence between blockchain and artificial intelligence are the principles of data minimization, purpose limitation and storage limitation. These principles often come into tension with the intrinsic nature of the two technologies.

Data minimization

The minimization principle requires that the personal data processed be adequate, relevant and limited to what is necessary for the purposes for which they are processed. In other words, the collection and storage of superfluous data must be avoided.

In the context of artificial intelligence, this principle is often difficult to apply: AI models, particularly those based on machine learning and deep learning, tend to absorb and memorize large amounts of data, often not fully determined ex ante. When such data is subsequently recorded immutably on a blockchain, any redundancy or excess becomes permanent, in contrast with the spirit of the law.

Purpose limitation

This principle states that personal data must be collected for specific, explicit and legitimate purposes, and subsequently processed in a manner compatible with these purposes.

In the case of AI agents on blockchain, it is complex to limit the use of data a priori: agents can learn and act autonomously, reacting to environmental stimuli and online interactions. Moreover, the distributed, permissionless nature of many public blockchains makes it difficult to ensure that third parties do not subsequently process the data for unintended purposes, effectively circumventing central control over the purpose of the processing.

Storage limitation

The storage limitation principle requires that data be retained for no longer than is necessary to achieve the purposes for which they were collected, with an obligation of erasure or anonymization at the end of that period.

Blockchain, however, is designed to ensure the persistence and immutability of data. Once information is recorded in an on-chain transaction, it can no longer be deleted or modified, making compliance with this principle technically impossible unless hybrid architectural solutions (e.g. off-chain storage) or advanced techniques such as cryptographic obfuscation, commitments or non-reversible hashes are adopted.
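The hybrid off-chain pattern just mentioned can be sketched as follows. This is a simplified illustration, not a real protocol: a Python dict stands in for the erasable off-chain store, and the content hash stands in for the immutable on-chain reference.

```python
import hashlib
import json

off_chain_store: dict[str, dict] = {}  # erasable store (stand-in)

def record(data: dict) -> str:
    """Store data off-chain; return the hash that would go on-chain."""
    payload = json.dumps(data, sort_keys=True).encode("utf-8")
    ref = hashlib.sha256(payload).hexdigest()  # immutable reference
    off_chain_store[ref] = data
    return ref

def erase(ref: str) -> None:
    """Honor an erasure request by deleting the off-chain payload."""
    off_chain_store.pop(ref, None)

ref = record({"subject": "user-42", "purpose": "kyc"})
erase(ref)
print(ref in off_chain_store)  # False: data gone, only the hash survives
```

Note that, as discussed above, even the residual hash may still qualify as personal data if the payload is guessable, which is why salted commitments are often preferred over bare content hashes.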

Problems with the right to be forgotten

Among the fundamental rights recognized by the GDPR, the right to be forgotten – formally the right to erasure (art. 17) – represents one of the most problematic provisions to apply in blockchain-based environments. Under this provision, the data subject has the right to obtain, in certain circumstances, the erasure of their personal data without undue delay, in particular when:

  • the data are no longer necessary for the purposes for which they were collected;
  • the data subject withdraws consent or objects to the processing;
  • the data has been processed unlawfully.

In the context of blockchain architectures, the technical realization of this right is almost impossible for a structural reason: the immutability of the data. Information recorded on a public blockchain, distributed among all nodes participating in the network, cannot be modified or deleted ex post, precisely to guarantee the integrity and security of the entire system.

This feature, despite being one of the main strengths of blockchain technology, stands in stark contrast to the data subject's right to delete their data. Even when technical measures are adopted, such as hashing or encrypting the data, such approaches are not equivalent to true erasure but rather to a form of inaccessibility, which does not always meet regulatory requirements.

In the case of an AI agent, the problem is further exacerbated: an intelligent agent may have learned from the very data that the data subject asks to have removed, giving rise to implicit memorization in trained models. Removing such data would, in some cases, require retraining the model, which is technically burdensome and, in some contexts, impractical.

Furthermore, when AI's automated decisions are recorded on blockchain for reasons of auditability or transparency, an additional layer of information permanence is created which prevents the impact that a specific piece of data has had on a decision-making process from being retroactively removed.

No complete solution to the problem has been found; partial solutions have therefore been proposed that better fit the dynamics described above, such as:

  • the use of off-chain systems to store data, with only an on-chain reference pointing to data that remains erasable;
  • the tokenization of consent, which allows dynamic control over processing by the user;
  • the use of cryptographic technologies (e.g. zero-knowledge proofs) which allow the verification of a condition without exposing the data in clear text.

However, these solutions still remain limited, experimental or not universally adopted, highlighting the need for an evolution of the regulatory framework or an architectural rethink to reconcile the immutability of the blockchain with the individual rights guaranteed by the GDPR.
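The "tokenization of consent" idea listed above can be sketched as a registry that an AI agent must consult before every processing operation, so the data subject retains dynamic control. All names here are hypothetical, and the on-chain token mechanics are abstracted away.

```python
# Minimal sketch of tokenized consent: a registry of (subject, purpose)
# grants that can be revoked at any time. A compliant agent checks the
# registry before processing, rather than assuming consent is permanent.
class ConsentRegistry:
    def __init__(self) -> None:
        self._grants: set[tuple[str, str]] = set()

    def grant(self, subject: str, purpose: str) -> None:
        self._grants.add((subject, purpose))

    def revoke(self, subject: str, purpose: str) -> None:
        self._grants.discard((subject, purpose))

    def allowed(self, subject: str, purpose: str) -> bool:
        return (subject, purpose) in self._grants

registry = ConsentRegistry()
registry.grant("user-42", "profiling")
print(registry.allowed("user-42", "profiling"))   # consent active
registry.revoke("user-42", "profiling")
print(registry.allowed("user-42", "profiling"))   # consent withdrawn
```

The key design choice is that consent is evaluated at processing time, not recorded once immutably, which is what makes withdrawal of consent technically meaningful.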

Accountability and role of actors

The principle of accountability, enshrined in Article 5, paragraph 2, of the GDPR, requires that the data controller not only comply with the principles relating to the protection of personal data, but also be able to demonstrate that compliance. However, the identification of the data controller within decentralized systems based on AI and blockchain represents one of the most controversial and unresolved challenges of the current legal framework.

In traditional systems, the data controller is easily identifiable as the body, whether public or private, that determines the purposes and means of the processing. On the contrary, in decentralized architectures:

  • there is no central entity that controls the entire processing operation;
  • decisions can be made by autonomous agents, such as smart contracts or self-executing AI agents;
  • the data can be processed by a network of distributed nodes, each of which operates independently but contributes to the validation and preservation of the data.

This structure directly challenges the traditional approach of the GDPR, making it difficult to attribute responsibility to a single actor.

In particular, three main critical issues emerge:

  • Who can be held responsible for processing carried out by an AI agent deployed on a public blockchain: the creator of the agent, the node that runs the code, or the user who interacts with the system?
  • In a truly decentralized system, no single entity controls the means of processing, since they are often predefined deterministically in the code (smart contracts). This makes the figure of the controller distributed or even unidentifiable under classical criteria.
  • Developers, smart contract deployers, node operators, front-end operators and even end users could be considered, depending on the case, controllers, processors or joint controllers, resulting in complexity in the allocation of obligations and the attribution of legal responsibilities.

To address these ambiguities, several theoretical proposals have been put forward:

  • the adoption of the concept of collective control, which sees responsibility as shared between multiple actors;
  • the creation of bridge entities (e.g. foundations, legally recognized DAOs) that act as legal intermediaries between the decentralized network and the regulatory system;
  • the application of strict liability models by default, which attribute controllership to the person who first initiated the processing or made it public.

However, these solutions remain the subject of doctrinal debate and have not yet been fully endorsed by case law or by supervisory authorities. In the absence of a regulatory review or specific guidelines for decentralized systems, the effective accountability of actors in AI+blockchain ecosystems remains a gray area, which requires prudence, legal innovation and compliance-oriented design right from the design phase (privacy by design and by default).

Shared responsibilities between developers, users and platforms

In the decentralized context of the interaction between artificial intelligence and blockchain, the traditional notion of legal responsibility, typical of centralized systems, is replaced by a distributed matrix of responsibility, in which multiple subjects - developers, users, platforms and infrastructure operators - contribute, directly or indirectly, to the processing of personal data.

This distribution carries the risk of fragmented responsibility, where no actor is fully responsible, but all contribute to the creation of an environment which, as a whole, produces significant legal effects. Faced with this, the need to recognize and analyze shared responsibilities is becoming increasingly clear, adopting multidimensional and multilevel approaches.

Developers

Developers of AI agents and smart contracts play a crucial role in determining the means of processing. Through the source code, they define:

  • what data is collected,
  • how they are processed and stored,
  • to what extent the algorithm makes automated decisions.

In many cases, such decisions occur ex ante, i.e. before the actual processing begins, making developers potentially controllers or joint controllers within the meaning of the GDPR, especially when their code is intended to process personal data in a systematic and predictable way.

Users

Users, understood both as data providers and as subjects who trigger transactions or AI agents, actively participate in the processing. Although their liability is often limited to the individual level, it can acquire legal relevance:

  • when a user employs a smart contract to process third-party data,
  • when its activity involves profiling, predictions or automated decisions on other subjects.

In the absence of a central figure, the user can become a de facto data controller, especially where they set the purposes and use AI agents for their own ends, for example on DeFi platforms, NFT marketplaces or decentralized social networks.

Platforms and interfaces (front-end)

Even the application interfaces (dApps, web front-ends) through which users interact with AI agents and smart contracts are not exempt from liability. The platform operator can:

  • monitor data flows,
  • establish the conditions of access to the service,
  • influence the behavior of AI or the configuration of smart contracts.

In such cases, the party who controls the front-end could be considered a controller or joint controller of the processing, or at least a processor (data processor) pursuant to art. 28 GDPR. Some European regulators have already underlined that the interface for accessing decentralized technologies can itself constitute a relevant control point, and is therefore subject to regulatory obligations.

In summary, in decentralized and automated systems, there is no clear separation between developer, user and platform manager as each contributes, to a different extent, to the definition of the purposes and means of processing. A model of cooperative and transparent responsibility is therefore desirable, in which each actor is put in a position to:

  • know and evaluate the legal implications of their actions,
  • adopt appropriate technical and organizational measures,
  • ensure compliance right from design (compliance by design).

Pseudonymity vs anonymity

One of the main tensions between public blockchains and the protection of personal data concerns the often misunderstood distinction between pseudonymity and anonymity. While many blockchain systems present themselves as “privacy-friendly” tools, they actually offer only a limited form of identity protection, which is not always compatible with the regulatory requirements of the GDPR and other privacy laws.

Pseudonymity on the blockchain

In public blockchains, each user is identified by a unique alphanumeric address (e.g. a wallet), which does not contain information directly attributable to a natural person. However, this does not guarantee anonymity, only pseudonymity. Article 4(5) of the GDPR defines pseudonymised data as:

“personal data that can no longer be attributed to a specific data subject without the use of additional information”.

Such data remains subject to data protection legislation, as, with reasonably available means (e.g. analysis of transactions, connection with IP addresses, cross-referencing with KYC data), it is often possible to re-identify the user behind a wallet.

Several academic studies and investigations by public authorities have demonstrated how the combined analysis of metadata, timestamps, spending patterns and publicly available data can deanonymize users even on networks such as Bitcoin or Ethereum, compromising the confidentiality and informational self-determination of data subjects.
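One widely studied deanonymization technique is the common-input-ownership heuristic: addresses that appear together as inputs of the same transaction are assumed to share an owner, so transactions can be merged into address clusters. The sketch below (with fabricated transactions) implements the heuristic with a simple union-find structure.

```python
from collections import defaultdict

def cluster_addresses(transactions: list[list[str]]) -> list[set[str]]:
    """Group addresses that co-sign inputs of the same transaction."""
    parent: dict[str, str] = {}

    def find(a: str) -> str:
        parent.setdefault(a, a)
        while parent[a] != a:          # walk to the cluster root,
            parent[a] = parent[parent[a]]  # compressing the path
            a = parent[a]
        return a

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    for inputs in transactions:
        for addr in inputs:
            find(addr)                 # register every address seen
        for addr in inputs[1:]:
            union(inputs[0], addr)     # co-spent inputs share an owner

    groups: dict[str, set[str]] = defaultdict(set)
    for addr in list(parent):
        groups[find(addr)].add(addr)
    return list(groups.values())

# Fabricated example: addr2 links the first two transactions together.
txs = [["addr1", "addr2"], ["addr2", "addr3"], ["addr4"]]
print(cluster_addresses(txs))
```

Even this trivial heuristic collapses three pseudonymous addresses into one cluster; real forensics tools combine it with timing, amounts and exchange KYC data.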

Lack of effective anonymity

Anonymity, understood as the objective and definitive impossibility of tracing the identity of the data subject, is not guaranteed by default by public blockchains. In fact:

  • the recorded information is transparent, immutable and accessible by anyone;
  • each operation is traceable over time and can be associated with a specific address;
  • transactions, if analyzed with blockchain forensics tools, can reveal social relationships, consumption behaviors and capital flows.

These characteristics contrast with the principle of data minimization and with the idea of "privacy by default" set out in Article 25 of the GDPR.

Legal and operational consequences

The misunderstanding between pseudonymity and anonymity can have serious legal implications for those who develop, use or manage blockchain-based applications. In particular:

  • data processed on-chain may need to be considered personal data, with all the consequent regulatory responsibilities;
  • the difficulty of assigning clear roles in the processing hinders the exercise of data subjects' rights (access, rectification, objection, erasure);
  • the supervisory authorities could consider a system that does not implement adequate technical and organizational measures to guarantee effective data protection to be non-compliant.

Partial solutions are offered by protocols that implement advanced privacy mechanisms, such as zk-SNARKs (zero-knowledge proofs), ring signatures or commitment schemes, but their use is still limited and often not compatible with the main interoperability and auditability standards.
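Of the mechanisms just listed, commitment schemes are the simplest to illustrate. A salted hash commitment is hiding (a random nonce blinds the value) and binding (the value cannot be changed after the commitment is published). The sketch below is purely illustrative, not production cryptography.

```python
import hashlib
import secrets

# Commit/reveal sketch: publish only the commitment on-chain; reveal the
# value and nonce off-chain, only to parties entitled to verify it.
def commit(value: bytes) -> tuple[str, bytes]:
    nonce = secrets.token_bytes(32)  # random blinding factor, kept secret
    digest = hashlib.sha256(nonce + value).hexdigest()
    return digest, nonce             # digest is published, nonce is not

def verify(digest: str, nonce: bytes, value: bytes) -> bool:
    return hashlib.sha256(nonce + value).hexdigest() == digest

c, n = commit(b"date_of_birth=1990-01-01")
print(verify(c, n, b"date_of_birth=1990-01-01"))  # the honest reveal passes
print(verify(c, n, b"date_of_birth=1991-01-01"))  # a tampered value fails
```

Unlike the bare hash discussed earlier, the random nonce prevents dictionary attacks on the published digest, which is why salted commitments sit closer to the GDPR's pseudonymization safeguards.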

Privacy by design & by default

One of the fundamental pillars of the GDPR is the principle of “privacy by design and by default” (art. 25), which requires data controllers to incorporate data protection mechanisms right from the system design phase and to guarantee, by default, the minimum necessary level of processing.

In the context of the convergence between artificial intelligence and blockchain, this principle takes on even greater relevance, given the structural difficulty of modifying or correcting a system once made public, especially if immutable and decentralized.

Privacy by design

Applying the by design principle means developers must:

  • analyze privacy risks from the early stages of the product life cycle (data protection impact assessment, or DPIA),
  • limit data collection to what is strictly necessary,
  • implement privacy-preserving technologies, such as pseudonymization, encryption, ZKP, and hybrid off-chain/on-chain architectures,
  • design AI models that are interpretable, verifiable and transparent (explainable AI),
  • document design decisions with explicit reference to the principles of the GDPR and any international standards (e.g. ISO/IEC 27701).

In the case of smart contracts or AI agents deployed on blockchain, this implies an advance design responsibility, as ex post intervention is often impossible or ineffective.

Privacy by default

The principle of by default requires that, by default, a system:

  • only process the data strictly necessary for each specific purpose,
  • does not expose personal data to third parties or the network unless expressly requested by the user,
  • adopt conservative configurations in terms of visibility, access and retention period.

In AI+Blockchain systems, this can translate into:

  • wallets and digital identities configured for maximum confidentiality,
  • machine learning parameters selected to avoid overfitting on personal data,
  • granular informed consent mechanisms, which allow the user to specifically decide which data to share and for what purposes.

To effectively implement privacy by design and by default, the following are recommended:

  • the use of ethical AI development frameworks (such as those promoted by the European Commission or the OECD),
  • the adoption of distributed data governance models, which define roles, responsibilities and audit procedures,
  • the use of DevSecOps models, in which security and compliance are integrated into the continuous software development cycle,
  • the use of controlled versioning and rollback systems even in decentralized environments, to prevent damage in the event of errors or vulnerabilities.

A decentralized and compliant AI cannot be the result of subsequent corrective interventions, but must arise from careful, interdisciplinary planning oriented towards respect for the person. Only by incorporating the principles of privacy by design and by default from the beginning will it be possible to build intelligent, reliable and truly sustainable infrastructures in the long term.

Case Studies: Real Projects Integrating AI and Blockchain

The integration between artificial intelligence and blockchain is now a concrete reality, with numerous active projects experimenting with decentralized models of computing, learning and data management. However, the solutions adopted to enable these synergies pose significant impacts in terms of privacy, which deserve to be analyzed on a case-by-case basis. Among the most representative projects, Ocean Protocol, Fetch.ai, SingularityNET and Bittensor stand out.

Ocean Protocol and data monetization in a decentralized way

Ocean Protocol is a platform that allows you to exchange, monetize and analyze data in a decentralized way, via a Data Marketplace based on smart contracts. It uses blockchain to manage authorizations and controlled access to datasets, and aims to encourage the sharing of data between public and private entities, keeping control in the hands of the data holder.

Ocean adopts a hybrid architecture, in which data remains off-chain, and only authorizations (access tokens) are registered on-chain. It introduces a model of data sovereignty, in which data providers maintain control over their information, deciding who can access it, when and under what conditions. Despite this, the use of AI models trained on external data can entail risks of inference and re-identification, especially in the absence of advanced anonymization or differential privacy techniques.

Ocean Protocol strives to comply with GDPR principles, but genuine transparency into how algorithms use data remains a critical issue, especially when black-box AI models are integrated.

Fetch.ai and autonomous agents in the machine-to-machine economy

Fetch.ai develops an infrastructure based on autonomous AI agents that interact within an "Open Economic Framework", making decisions and concluding transactions in an automated way. Agents represent human, corporate or institutional interests, and are capable of learning, bargaining and adapting to complex economic contexts.

With Fetch.ai, each agent acts as a decentralized, pseudonymous entity, with interactions tracked on its blockchain. The data used and generated by agents may include sensitive information or information potentially attributable to the end user, such as preferences, location, transaction history. It is not always clear, however, where responsibility for data processing lies, nor whether it is possible to revoke or correct a decision made by a completely autonomous agent.

Fetch.ai proposes an evolved model of decentralized AI, but raises questions about accountability, auditability and the right to object to automated decisions (art. 22 GDPR), given the difficulty of tracing a decision back to a legally identifiable subject.

SingularityNET and decentralized AI-as-a-Service

SingularityNET is a platform for offering and consuming AI services in a decentralized manner, where developers and businesses can upload, sell or access AI modules using AGIX tokens. The goal is to create a global artificial intelligence market, managed by a distributed network.

In this case, the AI models are available in modular form, but the management of input and output data is left to the responsibility of individual operators. There are no centralized mechanisms to verify privacy compliance by model providers, creating a situation that resembles an “algorithmic jungle”. In the absence of an integrated self-sovereign identity (SSI) system, users can lose control over their data once it is sent to an AI module.

Despite the innovativeness of the project, the absence of integrated and enforceable protection standards represents a critical vulnerability for regulatory compliance and the protection of data subjects.

Bittensor and the blockchain of collective intelligence

Bittensor is a decentralized neural network in which distributed nodes contribute computational power and AI models, receiving economic incentives (TAO tokens) in exchange. Each node participates in a network that evaluates and rewards learning contributions, creating an ecosystem of incentivized collective intelligence.

With Bittensor, AI models evolve based on shared inputs and feedback, but the system does not always distinguish between anonymous, pseudonymous and personal data. There are no specific tools for anonymizing training data or managing consent. The highly technical and distributed nature of the project makes it difficult to ensure transparency and accountability, especially regarding secondary uses of the data.

Therefore, despite being one of the most sophisticated implementations of decentralized AI, the absence of explicit regulatory controls or privacy-by-design tools exposes the project to potential risks of non-compliance and information abuse.

Final reflections

The convergence between artificial intelligence and blockchain represents one of the most fascinating and complex challenges of our time. The potential of these tools - in terms of automation, efficiency, disintermediation and security - is extraordinary, but clashes with the equally urgent need to protect people's fundamental rights, in particular privacy, informational self-determination and transparency.

As the analysis revealed, many of the structural features of blockchain — such as immutability, transparency and decentralization — are in potential conflict with key principles of the GDPR, such as data minimization, retention limitation and the right to be forgotten.

These conflicts should not be read as absolute incompatibilities, but rather as trade-offs to be governed with technical, regulatory and ethical tools capable of mediating between innovation and the protection of people. Only through responsible design from the outset, rather than compliance retrofitted after the fact, will it be possible to develop truly sustainable systems.

To effectively address the highlighted challenges, it is appropriate to articulate differentiated concrete actions for key players in the sector:

For policy makers, it is necessary to promote specific technical and regulatory guidelines for the combined use of AI and blockchain, overcoming the current legal ambiguity. It is also necessary to provide regulatory sandboxes in which decentralized, AI-driven models can be tested in a controlled manner. Finally, it is important to support research and the development of international standards for privacy-preserving technologies (e.g. zero-knowledge proofs, secure multi-party computation, homomorphic encryption).
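A flavor of how such privacy-preserving technologies work can be given with a cryptographic commitment, the primitive on which many zero-knowledge constructions build. The sketch below is not a true zero-knowledge proof (a real ZKP allows verification without any reveal at all); it only illustrates the commit-and-reveal step: a party can publish a commitment on-chain without disclosing the underlying value, and later prove what that value was.

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[str, bytes]:
    """Commit to a value without revealing it: publish only the hash of
    the value plus a random nonce. The nonce makes the commitment hiding
    (the value cannot be guessed) and the hash makes it binding
    (the committer cannot later claim a different value)."""
    nonce = secrets.token_bytes(16)
    commitment = hashlib.sha256(nonce + value).hexdigest()
    return commitment, nonce

def verify(commitment: str, value: bytes, nonce: bytes) -> bool:
    """Check a revealed value and nonce against a published commitment."""
    return hashlib.sha256(nonce + value).hexdigest() == commitment

# A smart contract or regulator stores only the commitment string;
# the prover reveals value and nonce only if and when required.
c, n = commit(b"age=34")
assert verify(c, b"age=34", n)       # honest reveal checks out
assert not verify(c, b"age=21", n)   # binding: the value cannot be swapped
```

In a full zero-knowledge protocol, the prover would instead convince the verifier of a statement about the committed value (for example, "age is over 18") without ever revealing the value itself.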

For developers, it is essential to adopt a privacy-by-design and privacy-by-default approach, starting from the architectural design phase. Explainable AI tools and transparent governance models must also be integrated into intelligent systems. Finally, hybrid (off-chain/on-chain) solutions and granular access-control techniques should be favored for the management of sensitive data.
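One design pattern developers commonly use to reconcile an immutable ledger with the right to erasure is crypto-shredding: only encrypted data is written on-chain, while the decryption key is kept in a mutable off-chain store; deleting the key renders the immutable ciphertext permanently unreadable. The sketch below is illustrative only, with hypothetical names, and uses a toy XOR cipher; a production system would use an authenticated cipher such as AES-GCM.

```python
import secrets

class KeyVault:
    """Off-chain, mutable key store. Deleting a key ("crypto-shredding")
    makes the corresponding immutable ciphertext permanently unreadable,
    approximating erasure on an append-only ledger."""

    def __init__(self):
        self._keys = {}

    def new_key(self, subject_id: str, length: int) -> bytes:
        key = secrets.token_bytes(length)
        self._keys[subject_id] = key
        return key

    def get(self, subject_id: str):
        return self._keys.get(subject_id)

    def shred(self, subject_id: str) -> None:
        # The erasure step: the ciphertext stays on-chain, but nothing
        # can decrypt it anymore.
        self._keys.pop(subject_id, None)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy one-time pad for illustration; use an AEAD cipher in practice.
    return bytes(b ^ k for b, k in zip(data, key))

vault = KeyVault()
record = b"alice@example.com"
key = vault.new_key("alice", len(record))

# Only the ciphertext is written to the immutable ledger.
on_chain_blob = xor_cipher(record, key)

assert xor_cipher(on_chain_blob, vault.get("alice")) == record  # readable
vault.shred("alice")
assert vault.get("alice") is None  # key gone: the blob is now unrecoverable
```

The on-chain record never changes, yet after shredding the key the personal data is no longer accessible to anyone, which is the argument usually made for this pattern's compatibility with the right to be forgotten.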

Finally, for companies and market operators, it is necessary to evaluate the ethical and legal risks in the selection and implementation of combined AI and blockchain technologies. They should also invest in the continuous training of technical and legal staff on privacy, responsible AI and decentralized compliance. Lastly, it is important to promote active transparency towards users, including through participatory governance and verification mechanisms.