AI is here. Not in the "coming soon" sense that people have been saying for a decade. It's here in the sense that I use it every day to do work that would have taken me significantly longer just two years ago. I have direct experience with the productivity gains, and it's apparent to anyone using AI daily — the output per person has increased dramatically in a very short time. That kind of shift isn't a gimmick. It's structural.
I'm a software engineer, and I'm telling you — the displacement has already started. It's small right now, hard to measure, easy to dismiss. But the trajectory is obvious if you're paying attention. The same tools I use daily mean companies need fewer engineers to ship the same output. Multiply that across legal, finance, marketing, customer support, data analysis — basically any job that involves sitting at a computer and processing information — and you're looking at a wave that's going to reshape the labor market in ways we've never seen before.
The common rebuttal is that AI is just a tool — it makes humans more productive, not redundant. That's true right now. But a tool that can do 50% of your job today and 80% next year has a pretty obvious endpoint, and I don't see a convincing argument for why that curve stops before it reaches most of what knowledge workers do.
I built The Cook Index to try to track this. It's my attempt at measuring AI's real-world displacement impact over time, because the truth is nobody has great data on this yet. We're all just watching and trying to read the signals.
So here's the concern: what happens when the displacement gets serious? Not 2% of jobs. Not 5%. What happens when 20 or 30 percent of white-collar workers get pushed out over the next decade?
The UBI Problem
The immediate answer everyone reaches for is Universal Basic Income. Just give people money. Simple, right?
Except it's not. The US government is already $39 trillion in debt, running a $1.8 trillion annual deficit, and spending over $1 trillion per year just on interest payments. We can barely fund Social Security — a program people paid into their entire working lives. The federal budget is about $7.1 trillion and we only collect $5.3 trillion in revenue. A meaningful UBI of $2,000/month for every American adult would cost over $6 trillion per year. That's nearly doubling the entire federal budget when we can't even cover what we're already spending. The math doesn't work. Not even close.
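That arithmetic is easy to check. A quick sketch, using the ~260 million adult figure the rest of this post also assumes (round numbers, not census data):

```python
# Napkin check on the cost of a full, universal UBI.
# 260M adults is the round figure used elsewhere in this post, not official data.
ADULTS = 260_000_000
MONTHLY_UBI = 2_000
FEDERAL_REVENUE = 5.3e12  # annual federal tax revenue, per the figures above

annual_cost = ADULTS * MONTHLY_UBI * 12
print(f"Annual cost: ${annual_cost / 1e12:.1f} trillion")        # $6.2 trillion
print(f"Share of revenue: {annual_cost / FEDERAL_REVENUE:.0%}")  # ~118%
```

The program would consume more than all current federal revenue by itself, before a single dollar goes to anything else.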
So I've always dismissed UBI as a nice idea with no viable funding mechanism.
Why I've Always Viewed Most Taxes as Counterproductive
I've historically been pretty skeptical of corporate taxes beyond what's needed for basic infrastructure, defense, and essential services. The logic is straightforward: when you tax a company, they have less money to expand, which means fewer jobs, slower wage growth, and less economic activity. The tax collects revenue but suppresses the thing that generates the revenue in the first place. It's a self-defeating loop.
This isn't some ideological stance. It's just how the mechanics work in a labor-driven economy. If a company's growth means hiring people, and hiring people means more taxpayers and more consumer spending, then taxing that company's ability to grow is indirectly taxing everyone downstream.
I've believed this for a long time. And then I realized AI breaks the entire model.
The Equation Changes
Here's what clicked for me: the traditional argument against corporate taxes assumes that corporate expansion creates jobs. Tax the company less, they hire more people, everyone benefits. But what happens when corporate expansion doesn't create jobs anymore?
If a company's growth just means spinning up more compute — more AI agents, more automated pipelines, more machine-driven output — then their expansion doesn't employ anyone new. The money that would have gone to new employees instead flows to shareholders, stock buybacks, and executive compensation. It's dead weight in the system.
That changes the tax equation entirely.
You can tax those AI-driven profits without suppressing employment, because there's no employment to suppress. The margin structure can absorb significant taxation once inference costs come down, which hasn't fully happened yet; right now the AI companies are burning billions subsidizing usage. But the trajectory of compute cost reduction over the past few decades suggests the economics will eventually work, even if the current moment looks unsustainable. And when that happens, these companies will be printing money at margins where taxation doesn't change corporate behavior at all.
That's basically the holy grail of tax policy: a tax that generates revenue without distorting the market.
The Napkin Math Might Actually Work
Let me walk through the numbers, because this is where it gets interesting.
Current total US corporate profits are about $3.8 trillion annually. The big tech companies alone — Google, Apple, Microsoft, Amazon, Meta, NVIDIA — pull in roughly $530 billion in annual profit combined.
Now assume AI makes the economy significantly more efficient over the next decade. Say corporate profits roughly double on the AI boost. That's optimistic but not unreasonable if you believe these tools are as transformative as they appear.
Here's the key insight: you don't need to give UBI to everyone. For this math, I'm assuming roughly a third of US adults are displaced and two-thirds remain employed. That's not as wild as it sounds — around 40% of the US economy is concentrated in finance, insurance, professional services, and information technology. That's the purely desk- and computer-driven work most directly exposed to AI automation. The other 60% — healthcare, retail, trades, transportation, food service, construction — involves physical work that AI can't touch anytime soon. So a world where a third of workers get displaced while two-thirds keep working isn't hopeful padding; it's a reasonable read of where the exposure actually is. Still napkin math, but napkin math that lines up.
Technically this isn't UBI anymore — it's targeted basic income, since it's not universal. But the mechanics are the same and UBI is the term everyone knows, so I'll keep using it.
Under that assumption, you're funding a floor for about 87 million people, not 260 million.
Stack the funding sources:
- A 25% automation/windfall tax on the new AI-driven corporate profit gains: ~$950 billion
- A modest surtax on the still-employed workforce: ~$50-75 billion
- A VAT or automation tax on AI-generated output: ~$200-300 billion
Total funding pool: roughly $1.2-1.3 trillion.
Divide that across 87 million recipients: roughly $1,150-1,250 per month.
That's not luxury. But it's a floor. It covers basic food and utilities, and it covers rent if you're splitting a place with a partner or roommates — which would probably become more common out of necessity. Shared living arrangements aren't some dystopian outcome, they're how most of human history worked and how plenty of people already live. At ~$1,200/month with a roommate splitting a $1,400 rental, the math starts to breathe. And the companies paying into it are still enormously profitable — they just can't hoard all of the gains from replacing their entire workforce with machines.
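Since this is all napkin math anyway, it's worth writing it down so the assumptions are explicit. Every input below is an assumption from this post, not a forecast:

```python
# Funding stack from above, low and high estimates in dollars.
# All figures are this post's assumptions, not projections.
RECIPIENTS = 87_000_000  # ~1/3 of US adults, per the displacement assumption

sources = {
    "windfall tax on AI profit gains": (950e9, 950e9),  # 25% of a $3.8T gain
    "surtax on employed workforce":    (50e9,  75e9),
    "VAT on AI-generated output":      (200e9, 300e9),
}

low = sum(lo for lo, _ in sources.values())
high = sum(hi for _, hi in sources.values())
high = round(high / 1e11) * 1e11  # round the pool to the nearest $0.1T, as in the text

for pool in (low, high):
    monthly = pool / RECIPIENTS / 12
    print(f"${pool / 1e12:.1f}T pool -> ${monthly:,.0f}/month")
```

Run it and the two endpoints come out around $1,149 and $1,245 a month, which is where the $1,150-1,250 range comes from.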
Everyone Has Incentive to Make This Work
Here's why I think this actually happens, despite my general skepticism about government getting anything right on time: the companies themselves need it.
Google and Amazon's business models require consumers with disposable income. If nobody has money, nobody clicks ads and nobody buys products. At some point, these companies look at their declining revenue and realize that funding UBI isn't charity — it's customer acquisition. Henry Ford figured this out a hundred years ago when he paid workers enough to buy the cars they built. The AI version is funding the floor so people can still participate in the economy.
The political math works too. If 87 million adults depend on these payments, they vote. And they vote for whoever keeps the checks coming. The pitch writes itself: "We're not punishing success. We're redirecting gains from the machines that replaced you, and the companies still make enormous profits." That sells across party lines in a way traditional redistribution never has.
The Fairness Question
There's an obvious objection here: who decides who works and who gets the check? What about the nurse working 12-hour shifts watching her neighbor collect $1,200/month to sit at home? Isn't that unfair?
Nobody decides. The market does. Your job either still exists or it doesn't. If AI replaces your role, you're displaced and you qualify. If your role still exists and you're employed, you don't. It's the same basic logic as unemployment insurance today — you don't get to opt in because you'd rather not work.
And here's the thing — the floor shouldn't be sexy. $1,200/month in a shared apartment eating cheap food is survival, not a lifestyle. The nurse making $90K has her own place, a car, vacations, savings. Most people want more than a floor. They want stuff. They want options. That's enough motivation to keep working for the majority of people who still can.
There will always be some people who would rather take the floor and do nothing. That's fine. That's already true today with existing safety nets, and the economy doesn't collapse because of it. The system doesn't need 100% participation to function — it just needs enough people choosing to work because they want more than the minimum. And most people do.
The Messy Middle
I'm not naive enough to think this transition will be smooth. The US is terrible at getting ahead of problems. The New Deal, TARP, the COVID stimulus — all reactive, all ugly, all implemented after things had already gotten bad.
There will be a window — maybe 5 to 10 years — where displacement is real but the policy response hasn't caught up yet. That's the uncomfortable period. Unemployment climbs, political tension spikes, and people suffer while legislators argue about the details.
But historically, the US is surprisingly good at messy, imperfect, last-minute responses that keep things from completely falling apart. I trust that pattern holds. Not because politicians are competent, but because the alternative — mass economic collapse — is bad for everyone, including the people with the power to prevent it.
There Might Be a Better Model Than UBI
Everything above assumes the government taxes corporations and writes checks. But there's an alternative that might work better — and it already exists in the real world.
Since 1976, Alaska has been running the Permanent Fund. The state created an $83 billion trust fund from oil revenue, invested it, and since 1982 has paid a dividend every year to every eligible resident. In 2025, each Alaskan received $1,000. The logic is straightforward: oil is a shared natural resource that belongs to all Alaskans, so the profits from extracting it should be shared.
Nobody in Alaska thinks of their dividend as welfare. They think of it as their share of something they collectively own. The psychology is completely different from a government handout.
Here's why this matters for AI: every model that exists today was trained on the collective output of human knowledge — books, articles, code, conversations, research, art. All of it written and created by people over generations. AI is essentially a compression of everything humanity has produced. If oil belongs to Alaskans because it's a shared natural resource, there's a real argument that the productivity gains from AI belong to everyone too — because AI couldn't exist without all of us contributing the knowledge it learned from.
Instead of taxing corporate profits and redistributing cash, the government could mandate that corporations issue equity into a national citizen fund — effectively giving every American an ownership stake in the companies that automated their jobs. The profit distributions from that fund become your income.
The napkin math actually comes out slightly better than UBI. If AI doubles corporate profits to $7.6 trillion and the government holds a 25% equity stake, that generates roughly $1.9 trillion in annual distributions. Spread across 87 million displaced adults, that's about $1,820 per month — meaningfully more than the tax-funded UBI model.
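The same kind of sketch works for the citizen-fund version, with the same assumed inputs as the tax model (doubled profits, a 25% stake, 87 million recipients):

```python
# Citizen-fund version of the napkin math. Inputs are this post's assumptions.
profits = 7.6e12         # assumed: AI roughly doubles today's ~$3.8T in profits
stake = 0.25             # assumed equity share held by the citizen fund
recipients = 87_000_000  # same displaced population as the UBI model

distributions = profits * stake
monthly = distributions / recipients / 12
print(f"Annual distributions: ${distributions / 1e12:.1f} trillion")  # $1.9 trillion
print(f"Per recipient: ${monthly:,.0f}/month")                        # $1,820
```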
It also solves problems that straight UBI doesn't. Distributions scale with company performance, so if prices go up, profits go up, and your payout goes up with them — unlike a fixed UBI check that gets eaten by inflation. And politically, "every American owns a piece of the automated economy" is an easier sell than "we're taxing companies to give you money."
The catch is obvious: a 25% government stake in private corporations is a radical move that would face enormous resistance. But if the alternative is economic collapse or a welfare state that strips people of dignity, the ownership model starts to look like the least bad option.
The Final Evolution of Capitalism
Here's the part that surprised me about my own thinking: if you follow the logic all the way through, the endgame could actually be good.
If AI drives production costs toward near-zero on most goods and services, and the profit surplus funds a floor for everyone, and basic needs are covered — then average quality of life goes up, even for people who never work a traditional job.
Think about what a lot of people deal with right now. They work 40-50 hours a week, commute, have limited time for their families, and are one bad month from financial ruin. If UBI covers the baseline and AI makes everything cheaper — food, energy, entertainment, medical diagnostics — then the average person's daily experience might actually improve — even if it comes with trade-offs like smaller, more shared living spaces.
The people who still want to build things — start businesses, create, develop real estate, or do whatever new types of work emerge that we can't predict yet — they do disproportionately well because they're operating on top of the floor instead of fighting to stay above it. And the people who don't want to grind away at a desk job can opt out — or are forced to.
Where This Falls Apart
I've been making a case, and I believe the general direction. But I'd be dishonest if I didn't acknowledge the holes.
The entire funding model assumes corporate profits double from AI efficiency gains. That's a big assumption. Previous technology waves — computers, the internet — didn't double total corporate profits. They mostly just reshuffled who captured them. If every company adopts AI, they're all more efficient, but so are their competitors. Margins get competed away rather than expanding. The profits might not grow the way this napkin math needs them to.
Political will is probably the biggest single point of failure. This post assumes the government eventually acts. But "eventually" could be 20 years, not 5. The companies that would be taxed have enormous lobbying power, and the US still hasn't meaningfully taxed big tech on anything. I hand-wave this with "historically we figure it out," but that might just be survivorship bias — we remember the saves, not the times we let people suffer for decades.
The post also doesn't address inflation. If 87 million people receive $1,200/month, landlords and grocery stores know that money exists. Prices adjust upward. The floor rises but so does the cost of standing on it. This is probably the strongest economic criticism of any UBI model and I don't have a clean answer for it.
And physical labor might not be as safe as I'm suggesting. Robotics is advancing alongside AI. Warehouse automation, self-driving vehicles, automated food prep — the "40% desk work exposure" number could grow significantly if physical automation catches up.
There's also a wildcard that could flip the entire model on its head. If compute gets cheap enough that the big tech companies lose their competitive moat, you could end up with millions of micro-businesses instead of six mega-corporations. One person running a game studio, a design agency, a film production company — all powered by commoditized AI. If that happens, profits naturally distribute across millions of small players instead of concentrating at the top. The wealth concentration problem solves itself without any policy intervention. The twist is that this is the best possible outcome for everyone except the UBI funding model — because you can't extract $950 billion from six companies if those six companies don't have outsized profits anymore. UBI becomes both unnecessary and unfundable at the same time.
The most realistic version is probably somewhere in the middle: a handful of companies own the infrastructure — the data centers, the base models, the compute networks — while millions of people build on top of them. Concentrated infrastructure profits fund some version of redistribution, while the fragmented application layer creates enough micro-opportunity that not everyone needs it. Kind of like how the internet already works — AWS and Google Cloud make the real money, everyone else builds on the rails.
But even in that middle ground, there are still millions of people who end up on the floor — and that's where the hardest problem lives.
Being on UBI isn't necessarily a win even if you're technically fed and housed. A lot of people's sense of purpose and identity is tied to what they do for work. Take that away and hand them a survival check and you might not get a population of people enjoying their freedom. You might get widespread depression, addiction, and social decay. Having your basic needs met and having a meaningful life aren't the same thing, and nothing in this model accounts for that.
So is this optimistic? Yeah, probably. There are enough unknown unknowns here to make anyone nervous. But the alternative — pretending AI displacement isn't coming and having no framework for how to respond — seems worse than having a rough theory that's directionally right even if the details are wrong.
The irony isn't lost on me. I've spent most of my adult life believing that minimal taxation and maximum market freedom produce the best outcomes. And I still think that's true — in a labor-driven economy. But AI is creating something different. If you extrapolate the current trajectory out over a couple of decades — AI gets cheaper, companies figure out how to actually wire it up (most haven't yet), and corporate growth becomes increasingly decoupled from human employment — then the old rules start producing outcomes that are net bad for everyone.
The final evolution of capitalism might just be capitalism funding its own safety net, not because anyone forces it to, but because the math stops working any other way. That's the hopeful case. The honest case is that nobody knows — not me, not economists, not the people building the AI. We're all just watching the trajectory and trying to make sense of where it lands.
If I had to pick which outcome I'm rooting for, it's not UBI or equity redistribution — it's the one where compute gets cheap enough that millions of people just build things and sell to each other. That world doesn't need a government safety net. It just needs the rails to be accessible and the cost of creation to keep dropping. A world full of micro-entrepreneurs making and trading things they actually care about — that's the version of the future I'd actually want to live in.
I started writing this expecting to scare myself. Instead I ended up somewhere in between — not confident, but less terrified than I thought I'd be.
