Blog · Strategy · April 27, 2026

AI Skills Gap 2026: Why Training Falls Short and Ownership Matters

53% of companies cite lack of AI skills as the top challenge. Why more training won't close the gap and the 3 steps that truly help.

Fabian Ulitzka · 8 min read

Last week I was at Helaba’s Maintower with 25 leaders from Corporate Real Estate. They plan spaces for over 250,000 workplaces across Germany. They hadn’t come to have AI explained to them. They came because they need to understand the work they’re designing spaces for tomorrow.

That need ties directly to one of the hardest numbers in this year’s Bitkom AI study: 53% of German companies cite a lack of competence as the biggest hurdle to adopting AI. The reflex is always the same: more training, more workshops, more courses. I believe that misses the point. Systematically.


What the Bitkom AI study 2026 really shows

What does the AI competence gap actually measure?

The Bitkom AI Study 2026 (n=604 German companies) gives three figures that together paint a clear picture. 41% of German companies actively use AI — a doubling from last year. 48% plan to deploy AI. That means AI is either already in use or on a concrete roadmap for roughly 90% of companies.

But the study also shows where adoption stalls. 53% name their employees’ lack of competence as the main obstacle. Ahead of data protection, process integration, and everything else. Large companies with more than 500 employees report usage over 60%. Mid‑sized firms are well below that. The scaling gap isn’t between “having tools” and “not having tools.” It’s between “tools deployed” and “people enabled.”

  • 41% – companies actively use AI (double vs. last year)
  • 48% – plan to deploy AI
  • 53% – lack of competence as the biggest hurdle

Why the training reflex falls short

Here it gets interesting. A parallel study from the BCG Henderson Institute (n=1,488 employees) shows something disturbing. Even employees who are trained and who use AI regularly can fall into a new kind of overload. It’s called AI Brain Fry — and it’s measurable.

Employees who actively supervise AI report 33% more decision fatigue, 39% more severe errors, and 39% higher intent to quit. In marketing roles the share affected is 26%, higher than any other function studied. The strain doesn’t rise with AI usage. It rises with oversight of AI. BCG also finds a clear threshold pattern: three AI tools is the productivity sweet spot. At four tools, performance drops even though effort increases.

  • 33% – more decision fatigue with active AI oversight
  • 39% – more severe errors under oversight
  • 39% – higher intent to quit
  • 3 tools – productivity optimum; performance falls at 4+

This changes the diagnosis fundamentally. The competence gap is not primarily a knowledge problem. It’s a responsibility problem. No training program will fix it simply by giving people more knowledge about tools whose outputs they’re expected to own without being able to fully oversee them.


Why more agents don’t mean less responsibility

Who holds responsibility in an agentic organization?

From working alone to being accountable for half a dozen agents

A year ago I worked alone. A laptop, a calendar, a handful of recurring tasks. Today I’m accountable for the output of half a dozen agents that write briefings, research, draft content, and aggregate data around the clock. What’s multiplied isn’t just my workload. It’s my burden of responsibility for things I no longer do myself.

This is the invisible scaling we haven’t been talking about. We celebrate outputs. We measure time saved. We show slides with hours reclaimed. What we don’t measure is what it does to people who suddenly carry responsibility for ten times the output without being able to oversee ten times the detail.

The reflex in these discussions is always: multi‑agent systems. One agent supervises five others. A co‑CEO agent reviews everything. A QA agent catches errors. That’s too simplistic. You must fine‑tune each of those agents, especially in complex processes where teams depend on one another. You need very detailed knowledge about when they hallucinate, where they make assumptions, and which decisions they must not make. The question “Who’s accountable?” isn’t answered by having a QA agent review the others. That only moves the problem up one level.

Who is even willing to carry it?

The second layer of the problem is rarely said out loud. When AI takes over repetitive tasks, the remaining work requires judgment, responsibility, and active co‑creation. Not every employee is built for that. Many people prefer ordered, recurring work — and for good reasons. That kind of role delivers stability, clarity, and the satisfying feeling of being finished at the end of the day.

This is not a value judgment. It’s an observation every leader recognizes when they look honestly at their team. But that type of work loses ground in an agentic organization. Ordered, recurring tasks are the natural use case for agents, and they migrate there. What remains for people is responsibility for the result.

That reframes the conversation about AI competence. It’s not mainly about teaching people how to use AI. It’s about building ownership and the willingness to stand behind results they no longer produce in detail. You can develop that willingness. But it must be discussed first, not last. And it must be reflected in hiring, leadership, and team development — not left until the AI rollout.

What the data says

Market data validates both points. Gartner predicts that over 40% of active agentic‑AI projects will be abandoned by the end of 2027. Not for lack of technology, but because of unclear business value, rising costs, and weak risk control. McKinsey reports in State of AI Trust 2026 that only about one third of organizations reach maturity level three or higher in strategy, governance, and agentic‑AI control. A Berkeley analysis sums it up: 80% of organizations report risky agent behavior, from unauthorized data access to unexpected system interactions. Only 21% have a mature governance model to handle it.

  • >40% – agentic‑AI projects abandoned by 2027
  • 1/3 – reach maturity level 3+ in strategy/governance
  • 80% – report risky agent behaviors
  • 21% – have a mature governance model

This is the new limit for scaling agentic organizations. Not the technology. The limit is people’s ability to carry the responsibility that comes with each additional agent. Some people want that. Many do not. Both reactions are legitimate. Both must have a place in your strategy.


Three steps that actually help

How do you start now?

If the competence gap is a responsibility problem, classic training programs are the wrong answer. The effective interventions are surprisingly unspectacular: three steps we followed at faive and that I recommend to any team aiming for more than short‑term efficiency gains. All three assume the ownership question is settled within the team.

  1. Processes must be visible end to end before you automate them. You can’t be accountable for what you don’t understand. Before the first agent runs in a process, be clear: What are the steps? What decisions get made? What are the inputs and outputs for each step? Where is human judgment indispensable? This work used to be considered too time‑consuming. Consultants charged a lot for it. Today you can do it in a lunch hour.

  2. Use AI to capture your processes. Practically: record a recurring process with Loom or a screen capture. Upload the recording or transcript to a large language model and use a clearly structured prompt. At faive we use this variant:

"You are a process analyst. Attached is a video or transcript of an employee performing a recurring task. Produce: (1) a Mermaid flowchart of the individual steps, (2) a list of all decisions and their criteria, (3) a list of inputs and outputs for each step, (4) markers for where human judgment is indispensable. Keep everything concise."

What used to take two weeks of process documentation now takes about an hour. And that visibility is the precondition for briefing an agent sensibly.
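
To make the expected deliverable concrete: for a hypothetical invoice‑approval workflow (an invented example, not one from the study or the Helaba workshop), the Mermaid flowchart the prompt asks for might look like this:

  flowchart TD
      A[Receive invoice by email] --> B[Extract vendor and amount]
      B --> C{Amount above approval limit?}
      C -- yes --> D[Human sign-off by team lead]
      C -- no --> E[Auto-approve and book]
      D --> F[Archive in document system]
      E --> F

The diagram is plain text, so the team can correct it line by line in the same meeting where they review it.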

  3. Define decision rights before the agent runs. Deloitte and Berkeley argue the same: agents need clear decision rights. What may the agent decide alone? What must it escalate? Which outputs require a human sign‑off? Answering these questions before rollout separates a productive agent from a compliance risk with a friendly interface.
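
One lightweight way to answer those questions is to write the decision rights down as a reviewable artifact instead of leaving them in people’s heads. Below is a minimal sketch in Python; the names (Authority, DecisionRight, the example policy) are illustrative, not a scheme Deloitte or Berkeley prescribe.

  # Minimal sketch: decision rights as an explicit, version-controlled artifact.
  # All names and the example policy are illustrative.
  from dataclasses import dataclass
  from enum import Enum

  class Authority(Enum):
      DECIDE = "agent decides alone"
      ESCALATE = "agent must hand off to a human"
      SIGN_OFF = "output needs human sign-off before release"

  @dataclass(frozen=True)
  class DecisionRight:
      action: str            # what the agent is about to do
      authority: Authority   # how far it may go on its own
      rationale: str         # why the line is drawn here

  POLICY = [
      DecisionRight("draft internal briefing", Authority.DECIDE,
                    "low risk; reviewed downstream anyway"),
      DecisionRight("publish external content", Authority.SIGN_OFF,
                    "brand and legal exposure"),
      DecisionRight("contact a customer directly", Authority.ESCALATE,
                    "the relationship is owned by a human"),
  ]

  def authority_for(action: str) -> Authority:
      """Look up the authority level before the agent executes an action."""
      for right in POLICY:
          if right.action == action:
              return right.authority
      return Authority.ESCALATE  # unknown actions default to a human

  print(authority_for("publish external content"))  # Authority.SIGN_OFF

The value is not in the code but in the forced explicitness: every action maps to exactly one authority level, and anything the policy doesn’t cover escalates by default.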

Conclusion

The Bitkom figure — 53% citing a competence gap — is real. But it doesn’t measure what we often assume. It doesn’t primarily say German employees lack knowledge about AI. It says German organizations lack two things: structures that let people carry AI‑related responsibility, and people who are willing to accept that responsibility instead of clinging to ordered tasks that will disappear in an agentic world.

Responsibility doesn’t scale with tools. It scales with process knowledge and with people willing to take ownership. Both are cheaper and faster to build today than ever before. If you start one thing this week, record a recurring process in your team and have AI turn it into a diagram. You’ll be surprised by what you see — and by who in your team you should talk to first.

Interested?

Let's find out together how we can implement these approaches in your organization.

Schedule a conversation now