The Financial Times reported this week that Amazon employees have found a workaround for their company's AI usage leaderboard: run the internal AI tool on trivial tasks, inflate the token count, climb the rankings. This is tokenmaxxing: the practice of maximizing token consumption as a proxy for AI engagement and productivity.
And Amazon isn't alone. Meta built an internal leaderboard called "Claudeonomics" ranking its roughly 85,000 employees by token consumption. In one 30-day window, total usage exceeded 60 trillion tokens.
The coverage has framed this mostly as a culture story: gamification gone wrong. That framing misses the point. The story here isn't employee discipline or pressure to perform; it's organizations' inability to connect AI spend to business value. Volume fills the vacuum because it's the only number they have.
When the Metric Becomes the Target
Token count is legible. It appears in dashboards, produces numbers, and creates the impression of oversight. But it's just a billing unit. What's happening at Amazon, and probably at many organizations that haven't made the news, is the predictable result of treating a billing unit as a performance signal.
The Financial Times piece notes that Amazon told employees tokenmaxxing would not factor into performance reviews. Yet multiple employees reported feeling pressure to top the leaderboard anyway. That gap between policy and behavior is exactly what happens when organizations haven't given people a better signal, one that connects AI usage to actual business outcomes.
What a Better Signal Actually Looks Like
Token count only tells you what the model charged. It doesn't capture the true cost of the workflow, or whether the workflow actually worked. Every agent action triggers downstream activity: API calls, data services, retry loops, third-party integrations. And every action should have a measurable, quality-based outcome. None of that shows up in a token dashboard.
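To make the distinction concrete, here's a minimal sketch of the idea in Python. Everything in it is illustrative: the class, the prices, and the workflow names are hypothetical, not any particular vendor's API. The point is that a per-workflow view combines token spend, downstream costs, and outcome quality into one number a token dashboard can't produce.

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowRun:
    """One agent workflow execution. All fields and prices are illustrative."""
    tokens: int = 0
    downstream_costs: dict = field(default_factory=dict)  # API calls, retries, etc.
    succeeded: bool = False  # did the run produce a usable result?

    def total_cost(self, usd_per_1k_tokens: float = 0.01) -> float:
        # True cost = model tokens plus every downstream charge the run triggered.
        return self.tokens / 1000 * usd_per_1k_tokens + sum(self.downstream_costs.values())

def cost_per_successful_outcome(runs: list[WorkflowRun]) -> float:
    """Total spend divided by successful runs: a value signal, not a volume signal."""
    successes = sum(r.succeeded for r in runs)
    total = sum(r.total_cost() for r in runs)
    return total / successes if successes else float("inf")

runs = [
    WorkflowRun(tokens=120_000, downstream_costs={"search_api": 0.40, "retries": 0.15}, succeeded=True),
    WorkflowRun(tokens=900_000, downstream_costs={"search_api": 0.10}, succeeded=False),
]
# The second run dominates the token count but contributes nothing to outcomes,
# so a token leaderboard would rank it highest while cost-per-outcome penalizes it.
```

A leaderboard built on `tokens` rewards the second run; a metric built on `cost_per_successful_outcome` exposes it.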
An organization tracking token consumption can still be completely blind to where its AI spend is going and how well it is working. That's the trap Amazon's leaderboard built. Without visibility into what AI usage is actually producing, people are left optimizing for the only target available. That's how you end up with engineers running trivial tasks to satisfy a metric with no proven connection to business value.
Connecting spend to outcomes closes that gap. Revenium's AI Economic Control System gives engineering and finance teams a shared view of who's using what, what it's actually costing, and whether it's generating value, so token count never has to be the default metric.
Tokenmaxxing is a symptom of measuring AI by volume instead of value. If you want to know the real ROI and impact of AI in your organization, book a demo to see how Revenium gives your team metrics worth optimizing.



