
OpenAI report: Enterprise AI is still in the “early innings”

Insights into AI usage from 1 million business customers.
December 11, 2025



The AI giant has opened a window into how businesses use its tools, highlighting a growing gap between leaders and laggards.

Use of AI tools and large language models (LLMs) by large enterprises is still in the “early innings,” according to OpenAI’s first state of enterprise AI report.

The research, conducted by OpenAI’s economics research team, brings together, for the first time, aggregated usage data from the one million business customers using OpenAI’s tools. Researchers also surveyed 9,000 workers across almost 100 enterprises.

“For much of the past three years, the visible impact of AI has been most apparent among consumers,” wrote OpenAI’s Chief Economist, Ronnie Chatterji, in his foreword for the report. “However, the history of general purpose technologies—from steam engines to semiconductors—shows that significant economic value is created after firms translate underlying capabilities into scaled use cases. Enterprise AI now appears to be entering this phase, as many of the world’s largest and most complex organizations are starting to use AI as core infrastructure.”

“Enterprise problems also present the hardest technical challenges for frontier intelligence, requiring reliability, safety, and security at scale,” Chatterji added.

Usage patterns

The report found that ChatGPT message volume has grown 8x since November 2024, and API reasoning token consumption per organization has increased 320x over the same period. More structured workflows, where companies have built custom GPTs or spun up purpose-built Projects workspaces, also grew 19x. These stats show that not just the volume but also the sophistication of usage has been growing among enterprise users.

The most widely deployed GPTs tend to codify institutional knowledge into reusable assistants, or automate common enterprise workflows by integrating with internal systems. For example, the Spanish bank BBVA was cited in the report as having more than 4,000 custom GPTs.

Technology companies specifically are using the OpenAI API at a rate 5x higher year-over-year for in-app assistants and search, agentic workflows, coding and developer tools, customer support, and data extraction and analysis.

Productivity impact

Enterprise users save a modest 40–60 minutes per day because of these tools, with data science, engineering, and communications workers reporting gains of 60–80 minutes per day. Heavy users report far larger gains of more than 10 hours per week. Some 75% of respondents report faster or higher-quality work, especially in technical domains like coding or data analysis.

For engineers specifically, 73% report faster code delivery, but it’s unclear if that means simply writing code, or the more complex task of deploying it into production.

While we know self-reported productivity gains need to be taken with a pinch of salt, there is a clear sense that these AI tools can lower barriers to more complex work. Coding-related messages increased 36% for workers outside of technical functions, and 75% of users said they can now complete tasks they previously could not perform. Concerns over the attendant risks of allowing non-developers to write code aren’t broached in the report.

Mind the gap

We know there are AI leaders and laggards, but OpenAI’s research suggests a widening chasm. What they define as “frontier employees” – those operating in the 95th percentile of all users – send 6x more messages than a median user, and engage across far more task types. These gaps are widest for writing, coding, and analysis tasks, with the gap reaching 17x for coding tasks alone – the largest relative gap in the findings.

Frontier firms are generating approximately 2x more messages per seat than the median enterprise and 7x more messages to GPTs. “These firms have shown an appetite to invest in the infrastructure and operating models required to embed AI as a core organizational capability rather than a peripheral productivity tool,” the report notes. It cites customers like Intercom, BBVA, Lowe’s, Indeed, Moderna, and Oscar Health as AI leaders.

“Whether this gap widens or contracts will depend on how organizations approach change management and their ability to build the systems, skills, and operating models required to successfully deploy AI,” the report notes. 

This aligns with the latest DORA report, which concluded that AI tools are not a panacea but rather an amplifier of existing engineering habits. The report suggests that leaders need to set clear mandates, secure resources, align teams, and create space for experimentation if they are to get the full value from these tools.

While the US is the biggest market for OpenAI’s business tools, Europe is growing fast. France and the Netherlands are among the fastest-growing enterprise markets worldwide, and the UK and Germany now rank among the largest outside the US.

What’s next?

The report paints a clear picture of a fast-emerging landscape in which enterprises are still working out how to operationalize deep LLM usage beyond individual employees querying ChatGPT.


For example, only one in four enterprises has given OpenAI secure access to company data inside core tools to enable context-aware AI. While this suggests a stubborn (and understandable) distrust of these tools at the enterprise level, it also potentially limits how effectively they can be used.

“The primary constraints for organizations are no longer model performance or tooling, but rather organizational readiness,” the report concluded.