Why AI Optimism Keeps Rising With Income

Technology · Snopher Intel · 6 min read
Artificial intelligence has a class problem, and the polling is getting harder to ignore.

For all the executive hype around productivity, automation, and a supposedly brighter future, the public mood is much darker. AI optimism rises with income for a pretty simple reason: the people closest to the upside are often buffered from the downside, while everyone else is being asked to trust a system that already looks rigged.

That split matters for the future of work, public policy, and basic legitimacy. And it tells us something uncomfortable about the current AI boom: it isn't being sold to the country as a shared civic project. It's being sold as an efficiency machine.

The polls show skepticism, not excitement

The broad public isn't buying the sales pitch. A Quinnipiac University poll released April 16, 2025, found that 55% of Americans think AI will do more harm than good in their day-to-day lives. Among Americans with household incomes below $50,000, that number climbs to 59%.

That's not a fringe result. It's a warning sign.

The same Quinnipiac survey found people expect job losses from AI, even if many workers still believe their own jobs will survive. That contradiction makes sense. People can see the weather changing without knowing exactly which roof will blow off first. Separate reporting has pointed in the same direction: 70% of Americans say AI will reduce job opportunities generally, while 30% worry about their own jobs becoming obsolete.

And that's the public mood executives keep trying to wave away as confusion or fear of change. But the data tells a different story. People aren't irrational. They're reading incentives.

[Chart: Stanford HAI public opinion data] Public attitudes toward AI remain divided, with skepticism still running deep | Image via Snopher

People have watched companies use technology gains before. The pattern is familiar: promises of liberation, followed by layoffs, speedups, and a little more power concentrated at the top. Why would this time inspire trust?

Why wealthier workers see promise where others see risk

Higher-income Americans tend to sit in jobs where AI looks less like a pink slip and more like an assistant. Researchers at the Washington Center for Equitable Growth have found that workplace exposure to AI is higher among workers with more education and higher wages. Pew Research Center reported that in 2022, average hourly earnings in the jobs most exposed to AI were $33, compared with $20 in the jobs least exposed.

That gap helps explain the optimism divide better than any glossy keynote ever could.

If you're a lawyer, consultant, software engineer, analyst, or manager, AI may cut down on drafting, summarizing, coding, scheduling, or the endless churn of email and slide decks. It can look like a booster for output and status. If you're in a lower-wage job, though, AI often shows up differently: as surveillance, scheduling software, customer-service replacement, automated screening, or another excuse to thin payroll.

So yes, wealthier workers are often more optimistic. They're more likely to encounter AI as augmentation rather than discipline.

But there's a catch. Some of the loudest AI believers are also sitting in the blast zone. Software workers who cheered early coding tools as force multipliers are starting to notice what happens when management hears "one engineer can now do the work of three." The phrase "vibe-coding" became a kind of shorthand for this era's carelessness — shipping fast, trusting generated code, and pretending technical debt is somebody else's problem. That's not a mature software strategy. It's a liability with good branding.

Vibe-coding is, frankly, a bad idea. Sloppy code doesn't become less sloppy because a machine wrote it at scale.

The benefits are real, but they aren't being shared evenly

None of this means AI has no value. Even skeptical polling has shown Americans see benefits in areas like medical advances. And there are real productivity gains in some white-collar tasks. Drafting documents, searching internal knowledge, spotting patterns in large data sets, transcribing meetings, accelerating routine coding work — these are useful capabilities.

Still, useful for whom? That's the buried question under nearly every AI argument.

When a hospital uses AI to help identify a disease earlier, that's one thing. When a company uses AI to squeeze more labor from fewer people, that's another. The technology is the same; the politics aren't. Public distrust isn't really about whether machine learning can do impressive things. It's about who captures the savings, who absorbs the mistakes, and who gets blamed when the system fails.

Look, ordinary people understand distributional politics even if they don't use that phrase. They know a productivity boom can leave them poorer if wages don't rise, staffing gets cut, and services get worse. They know a chatbot can save a company money while making customer support miserable. They know automation often arrives with a smiling promise and leaves behind a smaller payroll.

[Chart: Stanford HAI public opinion data] Trust in AI remains weak even as adoption spreads through offices and public services | Image via Snopher

And there is a second layer to this, one executives rarely like discussing: the infrastructure itself. AI systems run on expensive chips, giant data centers, and staggering power demand. Those costs don't disappear. They get socialized through utility strain, local tax fights, water use, subsidies, and public tolerance for industrial buildouts that many communities never asked for.

For people already skeptical of concentrated corporate power, that doesn't look like progress. It looks like another transfer — public resources feeding private scale.

Distrust grows when institutions ask for faith they haven't earned

Americans aren't just judging AI tools. They're judging the institutions deploying them.

The Quinnipiac findings also showed concern that businesses and government aren't doing enough around AI. That may be the least surprising result of all. The firms leading the AI race have spent years insisting they can move fast and regulate themselves later. Governments, meanwhile, have mostly oscillated between techno-boosterism and half-finished oversight frameworks.

So the public is being told to stay calm while schools wrestle with cheating and deskilling, workplaces flirt with replacement, and basic standards for safety, transparency, and accountability remain thin. That's not a recipe for confidence.

But distrust also has a class texture. Wealthier Americans often have more room to experiment with new tools, absorb mistakes, and capture upside. A bad AI output in a strategy memo is annoying. A bad AI output in a benefits denial, hiring screen, rent-setting system, or school discipline process can alter someone's life. The same technology can feel playful at the top and punitive further down.

That's the part the boosters keep missing. AI isn't landing on a flat social field. It's landing in a country already defined by inequality, weak worker bargaining power, and a long record of tech gains flowing upward first.

If the AI era is going to win trust, it has to change who it serves

The class divide in AI optimism won't close because people get a better demo. It will close only if the economics change.

That means workers need a claim on the productivity gains, not just exposure to the disruptions. It means clear rules for automated decision-making, real audit trails, and consequences when companies deploy systems that discriminate, hallucinate, or quietly erode job quality. It means schools and public agencies should be cautious, not dazzled. And it means policymakers should stop confusing investor enthusiasm with public consent.

So yes, affluent professionals may remain the warmest constituency for AI for a while yet. They're the most likely to see it as a co-pilot, not a threat. But even that optimism has limits if companies turn every efficiency gain into another round of cuts. The people building these systems should pay attention to that. The first wave of believers is not guaranteed to stay believers.

And the broader public? They're not wrong to be skeptical. They've heard this song before. If the AI economy keeps rewarding owners, trimming labor, and externalizing the mess, distrust won't be a temporary backlash. It'll be the most rational response available.

The next phase of the AI debate won't be decided by benchmark scores or product launches. It'll be decided by who gets protected, who gets paid, and who gets replaced — and people are already making up their minds.