Disruption Isn’t Coming – It’s Here: What Enterprise Leaders Said About AI in 2025 (and What to Do Next)

AI has officially moved from “innovation lab” to “boardroom agenda.” Research from several sources – including Bloomberg’s surveys – suggests the dominant driver isn’t just curiosity or incremental growth; it’s disruption risk. In a December 2025 enterprise AI strategy survey of 604 senior executives at large companies (5,000+ employees), 36% said AI is their #1 strategic priority, and another 47% put it in their top three.

That’s not a trend. That’s a mandate.

What Leaders Are Doing Right Now (And How Widespread AI Has Become)

One of the most striking findings: AI strategy is no longer optional. 97% of surveyed enterprises said they already have an AI implementation or strategy in place, and the remaining 3% said they plan to develop one. In other words, nearly every large enterprise is already in motion – across sectors ranging from financial services and healthcare to telecom/media and consumer industries.

The survey also gives a reality check on how executives perceive disruption:

  • For the industry overall, 38% rated AI disruption as “very high” and 42% as “high.”
  • For their own company, leaders were slightly more confident: 27% “very high,” 41% “high.”

That “we’ll be fine” optimism might be natural – but it also creates a dangerous gap: many organizations believe they’re more resilient than the market they compete in.

The Business Goals Behind AI: Efficiency First, Revenue Second

When executives were asked to rank the objectives of AI initiatives, the #1 answer was clear:

  • Operational efficiency ranked as the top goal (47% placed it as most important).
  • Revenue generation came next (21%).
  • Customer experience ranked third.
  • Headcount reduction was notably not the primary driver – it ranked below efficiency, revenue, and experience.

This matters because it signals where ROI expectations are landing first: cost-to-serve, cycle time, throughput, and productivity.

The “Where Are We” Maturity Snapshot: From Experimentation to Scaling

When asked what best describes their current AI technology strategy, organizations landed across a spectrum:

  • 14% are still evaluating LLM options
  • 21% are training LLMs with company/industry data
  • 35% are developing and testing AI-based tools
  • 30% are scaling AI tools across the organization

That distribution is important: a meaningful share is already shifting from pilots to enterprise rollout – where governance, security, integration, and operating models become the real bottlenecks.

Budgets Are Rising – And Leaders Expect Measurable Impact

On investment, the survey indicates AI funding is not flat:

  • Respondents expect AI budgets to rise ~14% on average over the next 12 months
  • The largest group projected increases in the 11–20% range

On outcomes, expectations are optimistic but not “science fiction”:

  • Executives expect AI to drive an average ~7% gain in sales and profit over the next 2–3 years

Over the next 3 years, they estimate:

  • Operating costs: +4% net change (a subtle but telling number – often reflecting new AI spend alongside savings)
  • Productivity: +10% gain

This combination signals what many leaders are living through: AI is not “free savings.” It’s an investment cycle – productivity upside is expected, but spend shifts into platforms, infrastructure, talent, and change management.

The Model Strategy Reality: Multi-Model is Becoming the Default

On how organizations leverage generative AI models in products/services:

  • 44% use third-party models from multiple providers
  • 32% use a mix of proprietary and third-party models
  • Only 5% are exclusively proprietary/in-house

When asked which third-party providers they rely on most heavily, respondents most cited:

  • OpenAI GPT models (74%)
  • Google Gemini (62%)
  • Followed by a tier of additional providers (AWS, IBM, Anthropic, Meta, etc.)

Translation: vendor concentration risk is real, and so is the need for architecture that supports portability, governance, and cost control across models.
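The portability point above can be sketched in code. Below is a minimal, hypothetical illustration (the class names, provider names, and pricing numbers are all invented for this sketch, not any vendor's actual SDK or rates): a thin routing layer that keeps application code independent of any single model provider, with a governance flag and a simple cost ceiling deciding which backend handles a request.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

# Hypothetical sketch: a provider-agnostic routing layer so application code
# never calls a vendor SDK directly. Names and prices are illustrative only.

@dataclass
class ModelConfig:
    provider: str
    cost_per_1k_tokens: float  # used for simple cost-aware routing
    approved: bool             # governance flag: only approved models may run

class ModelRouter:
    def __init__(self) -> None:
        self._backends: Dict[str, Tuple[ModelConfig, Callable[[str], str]]] = {}

    def register(self, name: str, config: ModelConfig,
                 complete: Callable[[str], str]) -> None:
        self._backends[name] = (config, complete)

    def complete(self, prompt: str, max_cost: float) -> str:
        # Route to the cheapest *approved* backend under the cost ceiling.
        candidates = [
            (cfg.cost_per_1k_tokens, name, fn)
            for name, (cfg, fn) in self._backends.items()
            if cfg.approved and cfg.cost_per_1k_tokens <= max_cost
        ]
        if not candidates:
            raise RuntimeError("no approved model within cost ceiling")
        _, _, fn = min(candidates, key=lambda c: (c[0], c[1]))
        return fn(prompt)

# Stub backends standing in for real vendor API calls.
router = ModelRouter()
router.register("vendor-a", ModelConfig("A", 0.03, approved=True),
                lambda p: f"[vendor-a] {p}")
router.register("vendor-b", ModelConfig("B", 0.01, approved=True),
                lambda p: f"[vendor-b] {p}")
router.register("vendor-c", ModelConfig("C", 0.002, approved=False),
                lambda p: f"[vendor-c] {p}")  # cheapest, but not yet approved

print(router.complete("Summarize Q3 results", max_cost=0.05))
# routes to vendor-b: cheapest backend that passes the governance check
```

The design choice this illustrates: swapping or adding a provider becomes a `register` call plus a policy update, rather than a rewrite of application code – which is what makes portability, governance, and cost control tractable across multiple models.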

Employee Adoption Is Already Material – But Uneven

AI tools aren’t just centralized in engineering teams. The survey’s weighted average suggests ~46% employee engagement with AI tools today.

That’s massive – and it introduces a new challenge: when adoption spreads faster than governance, you get shadow AI, inconsistent data handling, and uneven quality.

Workforce Impact: More Transformation Than Reduction

A headline many leaders miss: the survey suggests AI is not automatically a “job-cutting machine.”

  • 62% expect net headcount to increase due to AI, averaging ~4% growth over three years

Role impact is already visible:

  • 68% see AI automating routine tasks
  • 60% say it’s changing job responsibilities
  • 54% are creating new AI-related roles
  • 45% see reduced need for certain roles

This strongly points to a future where winners invest in reskilling, operating model redesign, and adoption enablement, not just tools.

The Roadblocks: Security, Data, Quality, ROI, and Infrastructure

When executives ranked barriers to AI adoption, the top concerns were:

  1. Data privacy & cybersecurity risks
  2. Availability of clean data
  3. High implementation costs / uncertain ROI
  4. Availability of AI infrastructure
  5. Workforce displacement & reskilling challenges

This is the “real enterprise AI” problem set: not prompts, not demos – risk, readiness, and repeatability.

If you’re a business or technology leader, the message is simple:

Your peers aren’t asking whether to do AI. They’re trying to make it work at scale.

And the upside executives are targeting is tangible:

  • A credible path to single-digit improvement in sales/profit contribution over a 2–3 year horizon (~7% expected average)
  • Potential for double-digit productivity improvement over three years (~10% expected average)
  • Strong anticipated savings in areas like technology/software development, customer service, finance/ops/HR, and manufacturing processes

But the same research makes something else clear: benefits don’t materialize from “AI tools” alone. They materialize when AI is treated like any other enterprise capability – with:

  • clear governance and KPIs,
  • a data foundation you can trust,
  • secure and scalable infrastructure,
  • and a workforce operating model that can absorb change.

In short: AI advantage is built, not bought.

Here’s how MILL5 can help you turn “AI urgency” into measurable enterprise outcomes – through its three core pillars: Strategy, Build, and Operate.

1) Strategy: Make AI Investable, Governable, and Aligned to Value

The survey signals that leadership is increasingly integrating AI into executive agendas – yet many organizations still struggle with ROI clarity, risk, and prioritization. MILL5 can help you:

  • Define an enterprise AI strategy tied to business outcomes (efficiency, revenue, experience) – not “random use cases”
  • Establish AI governance (decision rights, model policies, data handling standards, approvals, and KPIs)
  • Build a use-case portfolio and value roadmap (quick wins + durable platforms)
  • Create a practical risk and compliance framework focused on privacy, cybersecurity, model governance, and responsible AI – directly addressing the top roadblocks leaders cite
  • Design an AI operating model: who owns what, how work gets funded, and how solutions move from pilot → production → scale

Outcome: leaders stop debating “should we” and start executing “what, why, and how fast – with what controls.”

2) Build: Turn Your Data, Infrastructure, and Models Into Repeatable Delivery

The research highlights infrastructure and clean data as persistent blockers, alongside rising budgets and expanding deployments. MILL5 helps you build what scaling organizations need:

  • A secure, scalable AI/LLM architecture (including multi-model patterns – important given how many enterprises use multiple third-party providers)
  • Data readiness: pipelines, quality controls, governance, and “clean data” programs aligned to critical domains
  • LLMOps / MLOps foundations: deployment, versioning, evaluation, guardrails, monitoring, auditability
  • High-impact implementations such as:
    • Customer service augmentation
    • Internal knowledge assistants
    • Process automation for finance/ops/HR
    • Engineering productivity accelerators
  • Production-grade controls for privacy, security, and access, tackling the survey’s #1 adoption barrier head-on

Outcome: you move from prototypes to reliable, scalable AI products that can be reused across functions.

3) Operate: Keep AI Safe, Adopted, and Improving After Go-Live

This is where many AI programs stall – especially as employee use expands and roles change. MILL5 can help you operationalize AI with:

  • Run + monitor: performance, drift, hallucination risk controls, uptime, incident response, and cost optimization
  • Model governance: audit trails, evaluation frameworks, and compliance readiness
  • Change enablement: training pathways, adoption programs, and new ways of working – aligned to the survey’s reality that AI is reshaping responsibilities and creating new roles
  • Continuous improvement loops: feedback, prompt/model tuning, workflow optimization, and KPI tracking

Outcome: AI becomes a managed enterprise capability – not a collection of experiments that slowly decay.

A Practical Next Step

If your organization is feeling the pressure (and the opportunity), start with a focused engagement with MILL5:

  • AI strategy + value discovery workshop
  • Data & AI readiness assessment
  • Pilot-to-scale roadmap with governance, architecture, and operating model baked in

Because the enterprises seeing real returns are doing the unglamorous work: turning AI into a system that can scale – securely, responsibly, and repeatedly.

Contact our AI specialists at ai@mill5.com.