AI training is set for the mainstream

Written by
Ben Wirz

In our European Learning & Work Funding Report 2026, published in late January, we predicted that AI training would proliferate this year. It was one of the five predictions featured in the report.

This article dives into our thinking on this subject…

---

For years, AI skills have been treated as provisional: something to pick up informally, learn on the job, or leave to the motivated few. That framing is now obsolete.

AI training is becoming a formalised domain not because the technology has stabilised (quite the opposite!), but because our understanding of what people need to know is crystallising. Foundational capabilities are no longer speculative, and role-level patterns are emerging. As a result, AI training is shifting from an individual responsibility to a systemic one. We are approaching the point at which AI behaves like infrastructure.

We have already observed significant activity in this space in 2026. For example, Swiss company Scholé AI raised $3M to fund its AI upskilling work with large international enterprises, including Canal+, Swisscom, Decathlon, Visa and Bank of America. Many other companies are either already focused on this area or pivoting towards it - see Ivee and Mendo. Among later-stage companies, Multiverse has pivoted into AI skills training, accelerated by its acquisition of Stackfuel, which enables it to upskill 100,000 German workers and expand its footprint in the market. Both Scholé AI and Multiverse focus on upskilling employees via their employers, but there are others that focus on upskilling individuals, sometimes distributing via governments, such as Sana’s partnership with the Swedish government, which we cover below.

The end of tool-centric learning

The first wave of AI education was dominated by tools: prompt courses, platform tutorials, and “how to use model X” playbooks. That made sense in a period of rapid experimentation, but it was always likely to be a transitional phase.

Tool-centric learning does not scale - it fragments quickly. And it creates a false sense of competence that collapses the moment interfaces or models change.

What is replacing it is a clearer, more durable foundation that we should all be learning:

  • How AI systems reason and where they break
  • How to supervise, challenge, and collaborate with machines
  • How to judge outputs rather than generate inputs
  • How responsibility, accountability, and risk shift in AI-mediated work

These will become baseline capabilities, and as that baseline becomes more legible, we expect it to be taught deliberately, assessed consistently, and delivered at population scale.

Sweden and the reframing of AI as a public good

Sweden’s AI reform is a signal event, not necessarily because of the (fantastic) tools involved, but because of the philosophy underneath it.

By giving millions of citizens, including young people and government officials, access to advanced AI tools and structured learning environments, the Swedish government is implicitly arguing that AI competence is no longer optional for economic participation. It belongs in the same category as digital literacy, language, and numeracy.

This is a meaningful break from the assumption, prevalent to date, that AI upskilling ‘should’ be employer-led or market-driven. That assumption fails in three ways:

  1. It widens inequality - access to AI capability follows income, sector, and organisational maturity.
  2. It arrives too late - by the time people encounter AI training at work, habits and skill gaps are already entrenched.
  3. It underestimates compounding effects - early AI fluency radically improves long-term adaptability.

Sweden’s approach suggests a different model is emerging: the state guarantees the floor; the market decides the ceiling.

Foundations first, context always

One of the clearest patterns now emerging is the separation between foundational AI capability and contextual application.

Foundational skills are horizontal. They apply across roles and sectors, evolve slowly, and benefit from standardisation. This makes them well-suited to delivery through formal education systems, public workforce programmes, and nationally recognised frameworks. Alternatively, OpenAI, Anthropic and other now-household names in the AI space could deliver this training themselves, leveraging their existing brands and obvious distribution advantages.

Conversely, contextual skills are vertical. They are specific to industries, workflows, risk profiles, and organisational cultures. They change quickly and are best developed close to the work itself. This distinction matters because it prevents a common failure mode: trying to teach “AI for everything” in one place, or worse, teaching nothing until specificity is available.

In our view, the future model is layered: i) governments and public institutions deliver durable foundations, ii) platforms and employers contextualise those foundations inside real work, and iii) training becomes continuous, modular, and role-aware.

This has significant workforce and workplace ramifications. As foundational AI skills become more widely distributed, workforce strategy will change.

When baseline AI fluency is assumed, organisations can hire for judgment, domain understanding, and adaptability, then rapidly contextualise AI capability post-hire.

With this in mind, we could expect:

  • Faster onboarding into AI-augmented roles
  • Sector-specific AI academies and credentials
  • Training designed around workflows rather than tools

In this world, AI training is about learning how work changes when AI is present.

Why this accelerates now

With the above insights in mind, our prediction that AI training will become one of the most heavily funded categories in learning & work in 2026 looks like the natural outcome of several converging forces:

  • Clearer definitions of foundational AI competence
  • Government recognition of AI as civic infrastructure
  • Platform maturity that enables embedded, adaptive learning
  • Employer pressure to reskill entire workforces, not just technical teams

We are moving from improvisation to intentional design, from fragmented experimentation to systems-level thinking. The remaining question is not whether AI training will formalise, but who will shape it, and to what ends.

Let's look at what all of this could be telling founders, policymakers and employers...

For founders:

Founders building in AI training should assume two things. First, foundations will be commoditised: government-backed initiatives and large platforms will define and distribute baseline AI competence at scale. Competing there will be difficult and margins will be thin. Second, context is where value concentrates. In our view, the real opportunity lies in:

  • Sector-specific training layers
  • Role-aware learning embedded into workflows
  • Systems that translate general AI capability into measurable performance

Winning companies will help people do their actual jobs better in AI-mediated environments, rather than 'teach AI skills'.

For policymakers:

In our view, treating AI training as an optional workforce initiative could entrench inequality and slow national adaptability. Treating it as infrastructure creates compounding returns.

This implies:

  • Embedding AI foundations into formal education early
  • Standardising competency frameworks that transfer across sectors
  • Partnering with platforms rather than attempting to build everything in-house

The goal is to future-proof people, rather than future-proof jobs.

For employers:

Employers shouldn't be waiting for “finished” AI skills.

Foundational capability will increasingly be present, but unevenly applied. The competitive advantage shifts to organisations that can:

  • Contextualise AI effectively within their workflows
  • Redesign roles around human judgment and machine leverage
  • Make learning continuous rather than episodic

In practice, this means moving beyond generic AI training and investing in systems that align AI capability with how work actually gets done.

Of course, there are ways this all goes horribly wrong, or less horribly but more inefficiently.

What could go wrong?

The formalisation of AI training is not inevitable progress. It is a design challenge, and there are several ways it can fail. First, if centralised, foundational training risks becoming a box-ticking exercise that quickly falls out of date. Second, AI training can become a convenient substitute for harder conversations: upskilling does not automatically resolve job redesign, power shifts, accountability gaps, or ethical risk.

We're excited to meet companies building in this space, so if this is you, please do get in touch. We'd love to learn about your vision and what you're building.
