Generative artificial intelligence has taken hold in organisations at a speed that few IT or legal managers anticipated. ChatGPT, OpenAI's flagship tool, is now used in thousands of European companies, often without any formal framework, internal policy or risk analysis. For managers, DPOs and compliance officers, the question is no longer whether their staff use ChatGPT, but how to do so in a legally secure way.
1. ChatGPT and the GDPR: what does the legal framework actually say?
Before adopting or tolerating the use of ChatGPT in the workplace, it is important to understand the legal framework in which the tool operates. The GDPR and artificial intelligence law are not yet clearly articulated with one another, and this is precisely where the first risks lie.
Data controller or data processor: a strategic grey area
When a company uses the OpenAI API, the relationship is relatively clear-cut: OpenAI acts as a processor and the company as a data controller. But when ChatGPT is used directly via the web interface, the boundary is more blurred. OpenAI may take on the role of joint controller, or even independent controller, for certain operations, in particular the training of its models. This ambiguity has direct consequences for enterprise AI compliance: who is responsible in the event of a leak or unlawful processing?
The positions of the European authorities
Several data protection authorities have already taken action. The Italian authority (Garante) suspended access to ChatGPT in 2023, and the French CNIL has conducted investigations. The GDPR impact assessments required by these authorities reveal real shortcomings: a lack of transparency about the data collected, the absence of a clear legal basis for training, and difficulties in exercising data subjects' rights. AI compliance in Belgium is also on the radar of the Data Protection Authority (DPA), which is keeping a close eye on the practices of local companies.
GDPR, AI Act and new obligations to come
The European AI Act, which entered into force in 2024, adds a further layer of regulatory complexity. General-purpose AI systems such as ChatGPT are now subject to transparency and technical documentation obligations. For businesses, this means that AI data governance can no longer be improvised: it must be structured, documented and auditable.
2. What are the practical risks for businesses?
The data protection risks associated with the use of ChatGPT are not theoretical. They arise on a daily basis, often without the organisation being aware of them. Here are the most critical examples.
Sensitive data inserted in prompts
The main vulnerability is behavioural. An employee who pastes a customer contract, HR data, financial information or source code into a ChatGPT prompt transfers that information to OpenAI's servers, located outside the EU. Unless configured otherwise, this personal data can be used to train the models. Transferring data outside the EU triggers specific GDPR obligations (standard contractual clauses, transfer risk assessments) that are rarely met in practice.
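As a first technical safeguard, some organisations scrub prompts before they leave the network. The sketch below is illustrative only: the regex patterns and function name are assumptions, and pattern matching catches obvious identifiers (email addresses, IBANs), not all personal data. It is not a substitute for a full data loss prevention solution.

```python
import re

# Illustrative patterns only; real deployments need far broader coverage
# (names, addresses, national registry numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}(?: ?[A-Z0-9]{4}){3,7}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace recognisable identifiers with a placeholder
    before the prompt leaves the company's perimeter."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

A wrapper like this can sit in front of any LLM call, so the redaction is applied consistently rather than left to each employee's judgement.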
Shadow AI: a major organisational risk
Shadow AI refers to the undeclared, unsupervised and uncontrolled use of AI tools by employees. It is one of the most dangerous blind spots for generative AI security and data protection: the organisation does not know what data is shared, with which tools, under what conditions. Without an inventory or a policy, it is impossible to guarantee compliance or to react in the event of an incident.
Sanctions, reputation and legal liability
Penalties under the GDPR can reach 4% of annual worldwide turnover. But beyond the fines, it is contractual liability towards customers, loss of trust and reputational damage that pose the most immediate risks for SMEs. A company that cannot demonstrate control over its data flows exposes itself to audits, litigation and weakened commercial relationships.

3. What guarantees does OpenAI offer today?
OpenAI has gradually strengthened its contractual and technical arrangements to meet European regulatory requirements. These guarantees are real, but they require companies to take an active approach.
Data Processing Addendum (DPA)
OpenAI's Data Processing Addendum (DPA) is a contractual agreement governing data processing carried out via the API. It specifies OpenAI's obligations as a processor, the security measures applied and the conditions for transfers. This document is essential for any company wishing to use OpenAI's services in a GDPR-compliant framework. But beware: it does not automatically apply to ChatGPT used via the consumer interface.
Privacy settings and disabling training
OpenAI now allows users and businesses to disable the use of their data for model training. This option, accessible in the account settings or via the API, is a minimum requirement for any organisation concerned about enterprise AI compliance. It does not solve every problem, but it significantly reduces exposure.
Hosting, data transfer and security
The data processed by OpenAI is hosted in the United States. The legal framework for these transfers outside the EU rests on standard contractual clauses (SCCs) and, since 2023, on the EU-US Data Privacy Framework. The security measures OpenAI offers include encryption of data in transit and at rest, strict access controls and security certifications (SOC 2). These elements should be documented in your record of processing activities.
4. Good practice for correct and controlled use
Compliance cannot be decreed: it must be built methodically, combining an organisational framework, legal analysis and human training.
Implementing an internal AI policy
Every organisation should draw up a formal policy for the use of AI tools, specifying which tools are authorised, in which contexts and with which data. The policy should distinguish professional from personal use, define the categories of data excluded from prompts, and provide for control mechanisms. It is the first line of defence against shadow AI.
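Such a policy is easier to enforce when it is machine-readable rather than buried in a PDF. A minimal sketch, in which the tool names and data-category tags are hypothetical examples, not recommendations:

```python
# Hypothetical machine-readable AI usage policy: which tools are approved,
# and which data categories may never appear in a prompt.
POLICY = {
    "approved_tools": {"chatgpt-enterprise", "internal-llm"},
    "forbidden_data": {"customer", "hr", "financial", "source_code"},
}

def is_request_allowed(tool: str, data_categories: set) -> bool:
    """A prompt is allowed only if the tool is on the approved list
    and none of its tagged data categories is forbidden."""
    return (
        tool in POLICY["approved_tools"]
        and not (data_categories & POLICY["forbidden_data"])
    )
```

Encoding the rules this way lets the same policy drive both the written document employees sign and the automated checks in an internal AI gateway.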
Carrying out an impact assessment (DPIA)
A Data Protection Impact Assessment (DPIA) is mandatory whenever processing is likely to result in a high risk to individuals. Using ChatGPT on customer, HR or financial data generally meets this criterion. The DPIA identifies the risks, documents them and defines appropriate mitigation measures, an essential step in AI data governance.
Training teams in AI data governance
Technology is not enough without human awareness. Training employees on the risks associated with prompts, good information-sharing practices and GDPR obligations is an investment that directly reduces legal risk. A trained team generates fewer generative AI security incidents and is better able to spot anomalies.

5. How Iterates supports Belgian SMEs on the road to compliant and strategic AI
At Iterates, we support Belgian SMEs in implementing an AI strategy that is both effective and rigorously compliant. Our approach combines legal expertise, technical mastery and strategic vision to ensure that AI becomes a lever, not a risk.
GDPR audit & mapping of AI uses
We start with a complete inventory: which AI tools are used in your organisation, by whom, on what data and in which flows? This mapping identifies pockets of shadow AI and forms the basis of a comprehensive data protection policy, and of a solid, defensible AI compliance posture in Belgium.
Technical support (architecture, security, flow control)
We help you design a technical architecture that minimises risk: data compartmentalisation, anonymisation of prompts, deployment of on-premise or European cloud solutions, and controls on outbound flows. The aim is to guarantee generative AI security without sacrificing operational efficiency.
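An outbound flow control can be as simple as an egress check that only lets AI traffic reach vetted endpoints. The sketch below is a simplified illustration; the host names are assumptions, and a production setup would typically enforce this at the proxy or firewall level rather than in application code:

```python
from urllib.parse import urlparse

# Hypothetical allow-list: e.g. an EU-hosted gateway plus one vetted vendor.
ALLOWED_AI_HOSTS = {"ai-gateway.internal.example", "api.openai.com"}

def check_egress(url: str) -> bool:
    """Return True only if an outbound AI call targets an approved host."""
    return urlparse(url).hostname in ALLOWED_AI_HOSTS
```

Routing all LLM traffic through one checkpoint like this also produces the audit trail needed to demonstrate control over data flows.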
Integration of secure AI tools into your IT ecosystem
We select and integrate AI solutions tailored to your business needs, compliant with the GDPR and compatible with your existing infrastructure. Whether sovereign alternatives to ChatGPT, locally deployed models or secure configurations of the OpenAI API, we support you every step of the way, from strategy to implementation.


