Uber's rapid move to AI-driven coding is delivering speed, but it is also pushing costs far beyond what the company had planned. CTO Praveen Neppalli Naga says tools like Claude Code have already forced a rethink of its AI budget.

What started as a push to make software development faster is now turning into an expensive lesson for tech companies experimenting heavily with AI. At Uber, the move towards AI-powered coding tools has accelerated so quickly that even carefully planned budgets are struggling to keep up. The company's top technology executive has now admitted that the scale of adoption has gone far beyond expectations, forcing a rethink of how much money needs to be set aside for AI in the coming years.

Speaking to The Information, Uber CTO Praveen Neppalli Naga said the company's AI spending assumptions have already been exceeded, largely due to the rapid uptake of advanced coding tools like Claude Code, developed by Anthropic. "I'm back to the drawing board, because the budget I thought I would need is blown away already," he said, an indication of how quickly costs have spiralled.

The comment comes at a time when Uber is seeing a fundamental change in how software is built internally. AI is no longer just assisting engineers with suggestions or auto-complete features. Instead, it is increasingly taking over the process of writing code itself. According to Naga, Uber is now witnessing what he describes as "agentic software engineering," where AI systems independently generate code with minimal human involvement.

The numbers show just how deep this change runs. Around 1,800 code changes every week at Uber are now written entirely by its internal AI coding agent, without direct human input. Nearly 95 per cent of the company's engineers use AI tools every month, and close to 70 per cent of the code that gets committed is generated by these systems. In just a few months, Uber's internal AI agent has gone from contributing less than 1 per cent of code changes to roughly 8 per cent.

Rising costs likely driven by new pricing models of Claude

While the productivity gains are clear, the financial impact is becoming harder to ignore.
A key factor behind the rising costs is likely the way AI tools are priced. Anthropic has overhauled its enterprise pricing for Claude, moving away from fixed subscription tiers to a hybrid model that combines lower per-user seat fees with mandatory usage commitments. Under this structure, companies may pay a relatively modest monthly fee per developer, but they must also commit to a pre-estimated level of token consumption. Tokens, which represent units of text processed by AI systems, form the basis of billing. Even if actual usage falls short, companies still have to pay for the committed amount. The removal of earlier enterprise discounts has added to the cost pressure.

For engineering teams, this means budgeting is no longer as simple as counting the number of users. Instead, they must now predict how much AI computation their teams will consume, a task that becomes difficult when usage is experimental or rapidly growing.

The rise of "tokenmaxxing" culture

The surge in AI adoption has also led to a new phenomenon within tech companies — the idea of "tokenmaxxing." As tokens become a measurable unit of AI usage, some organisations are informally tracking how much engineers are spending on these tools. At companies like Meta, reports suggest that internal dashboards are even being used to rank employees based on token usage, turning it into a form of competition.

While some see this as a sign of engineers embracing AI, others warn that it could encourage inefficient use of resources. Critics argue that high token consumption does not necessarily translate to better outcomes. Instead, it could lead to wasteful spending, especially at a time when AI budgets are already under pressure.
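To make the budgeting problem concrete, the hybrid seat-plus-commitment arithmetic described above can be sketched as a simple calculation. All figures here are hypothetical illustrations, not Anthropic's actual rates: the key point is that the committed token volume acts as a billing floor, so a team pays for it even when real usage comes in lower.

```python
def monthly_ai_bill(seats, seat_fee, committed_tokens,
                    actual_tokens, price_per_million_tokens):
    """Estimate a monthly bill under a hypothetical hybrid model:
    a flat per-seat fee plus a committed token volume that is
    billed in full even if actual usage falls short of it."""
    # The commitment is a floor: unused committed tokens are still paid for.
    billable_tokens = max(committed_tokens, actual_tokens)
    token_cost = billable_tokens / 1_000_000 * price_per_million_tokens
    return seats * seat_fee + token_cost

# A 100-developer team commits to 500M tokens a month at a made-up
# $10 per million tokens, with a made-up $30 seat fee.
under_use = monthly_ai_bill(100, 30, 500_000_000, 300_000_000, 10)
over_use = monthly_ai_bill(100, 30, 500_000_000, 800_000_000, 10)
print(under_use)  # 8000.0 — only 300M tokens used, but the full 500M is billed
print(over_use)   # 11000.0 — usage above the commitment is billed as consumed
```

The asymmetry is what makes forecasting hard: under-estimating the commitment means paying overage on rapidly growing usage, while over-estimating means paying for tokens nobody consumed.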