AI projects in companies don't fail because the technology doesn't work. They fail because adoption is ill-conceived, rushed or driven by the wrong people for the wrong reasons. After two years of widespread use of tools such as ChatGPT, Copilot or Gemini in organisations, the same patterns of failure are being repeated. Here are the five most common mistakes - and what to do instead.
Mistakes 1 and 2: Confusing adoption with deployment
The first mistake most organisations make is to treat generative AI as a tool to be installed rather than as a transformation to be supported. Deploying a tool is technical. Adopting it is human. The two are not managed in the same way, and confusing them is the starting point for most projects that come to nothing concrete.
Mistake 1 - Launching a POC without a post-launch strategy
The Proof of Concept has become the obligatory initiation rite for any AI project. A volunteer team, a limited use case, a few weeks of testing, an enthusiastic report. Then... nothing. The POC stays in a drawer, the team goes back to business as usual, and the organisation waits for the next POC to feel innovative.
The problem isn't the POC itself - it's the lack of an answer to the question that should precede it: if this test is conclusive, what happens next? A POC without a deployment roadmap, an identified sponsor, or a budget for what comes next is not a pilot. It is a one-off demonstration. Before launching anything, define the success criteria that will trigger the move to scale - and get management to commit to them.
Mistake 2 - Measuring adoption by the number of activated licences
“We've deployed Copilot to 200 employees” is a phrase you often hear in AI project reviews. What you hear less often is how many people are actually using it, for what purposes, and with what measurable results. Activating a licence is not the same as adopting a tool. It only means buying the opportunity to use it.
Real adoption can be measured in several ways: weekly usage rates, types of tasks automated, time saved per business profile, perceived quality of outputs. Without these indicators, it's impossible to know whether the investment is producing value - or whether you're paying for 200 licences so that 20 people can use them occasionally.
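To make this concrete, here is a minimal sketch of how a weekly usage rate could be computed from tool activity logs. The log format, field names and activity threshold below are illustrative assumptions, not a reference to any specific vendor's reporting.

from collections import defaultdict

licences = 200

# Hypothetical usage log: one row per user per week (user, ISO week, prompts sent)
log = [
    ("alice", "2025-W01", 14),
    ("bob",   "2025-W01", 2),
    ("alice", "2025-W02", 9),
    ("carol", "2025-W02", 6),
]

# Count a user as "active" only above a meaningful activity threshold,
# so that one-off curiosity doesn't inflate the adoption figure
ACTIVE_THRESHOLD = 5
active_by_week = defaultdict(set)
for user, week, prompts in log:
    if prompts >= ACTIVE_THRESHOLD:
        active_by_week[week].add(user)

for week in sorted(active_by_week):
    users = active_by_week[week]
    print(f"{week}: {len(users)} active users ({len(users) / licences:.1%} of licences)")

A figure like “1.0% of licences actively used” tells a very different story from “200 licences deployed” - and that gap is exactly what this indicator is meant to expose.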
Mistakes 3 and 4: Underestimating the human dimension
The technology is the simplest part of an AI project. Resistance is always human - fears, habits, power games, legitimate questions about the impact on jobs. Ignoring these dimensions doesn't make them go away: they come back in the form of passive resistance, non-use, or risky workarounds.
Mistake 3 - Not training, or training only once
Training in generative AI is often reduced to a one-hour webinar on the day of deployment. It's not enough, and everyone knows it - including those who organise it. Mastery of a tool like ChatGPT or Mistral is not acquired by watching a demo. It is built through practice, experimentation and hands-on exposure to real use cases tied to each employee's job.
What works: short, repeated training sessions, anchored in each team's day-to-day tasks, with practical exercises. A lawyer doesn't need to know how a transformer works - they need to know how to write an effective prompt to analyse a contract. Training by business profile, not by tool: that is what radically changes buy-in.
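By way of illustration, a business-profile prompt for that contract scenario might look like the following. This is a hypothetical example; the structure matters more than the exact wording.

"You are assisting a commercial lawyer. Analyse the contract below and report three things: (1) clauses that deviate from our standard terms, (2) clauses that are missing (liability cap, termination, data protection), (3) ambiguous wording worth renegotiating. Quote each clause verbatim and limit your assessment to one sentence per point. [contract text]"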
Mistake 4 - Ignoring resistance instead of working on it
“My employees don't want to use AI” is a phrase often heard as an admission of failure. In reality, it is valuable information that most managers don't dig into deeply enough. Behind any resistance to generative AI there is almost always one of three things: a fear of losing one's job or perceived value, a legitimate mistrust of the reliability of the outputs, or a lack of understanding of what the tool can actually do for one's day-to-day work.
None of these three forms of resistance can be resolved by talking about innovation. They have to be addressed with transparency about the organisation's intentions, with access to the right tools and concrete examples of time saved, and - above all - by involving employees in the choice of use cases rather than imposing a top-down tool on them.
Mistake 5: Neglecting legal and compliance risks
The enthusiasm surrounding generative AI has often outpaced legal and compliance teams. In many organisations, employees use consumer AI tools for sensitive tasks - drafting contracts, summarising confidential meetings, analysing customer data - without anyone having assessed the real implications of these practices.
Mistake 5a - Letting employees choose their own tools
Shadow IT existed before AI. With generative AI, it has changed scale. An employee who copies and pastes a confidential contract into ChatGPT to summarise it may have just transmitted sensitive data to a US server subject to the Cloud Act, exposed that data to reuse for model training, and created a security incident that the CISO will discover - if they discover it at all - several months later.
The answer is not to ban. Prohibition without an alternative creates exactly the behaviour it seeks to avoid, only less visibly. The answer is to provide an approved solution that is easy to access and of sufficient quality that unapproved tools become unnecessary. A clear AI policy, a vetted set of tools, and honest communication about the reasons for these choices: this is how you regain control.
Mistake 5b - Believing that the GDPR covers everything
The GDPR is a necessary framework, but it does not answer all the questions posed by generative AI. Intellectual property of outputs, liability in the event of factual error, traceability of AI-assisted decisions: these are all subjects on which European regulation is still taking shape, and on which your organisation must take a position before it faces a dispute.
The European AI Act, which is currently being rolled out, will gradually impose obligations according to the risk level of the systems used. Organisations that have already mapped their AI uses will have a head start. Those that discover their obligations only when compliance deadlines arrive will pay the price of improvisation.
Do you recognise your organisation in these mistakes? That's what Iterates is for.
These five mistakes are not faults. They are predictable stages in an adoption that was not sufficiently prepared. The good news: they can all be corrected. The key is to act before bad habits become too entrenched.
Iterates supports organisations in structuring their AI adoption - from auditing current practices to defining a clear policy, through team training and the choice of tools best suited to each context. No generic solution: an approach tailored to your reality.


