AI is the Software. You are the Hardware.

AI won’t replace humans — but it will redefine their role. A deep dive into abstraction, specialization, and what AI is really doing to us.

History doesn’t repeat itself, but it often rhymes.

And this isn’t a post about the Industrial Revolution and job losses.

To set the context, I need to start with history, and I’ll try to keep it concise. So, please extend your attention spans for a bit — I promise it will make sense in a moment, and it’ll reframe how you think about AI.

Background

In the early days of computing, hardware was the hero — but software gradually took over as the faster, cheaper, and more flexible way to build things. Hardware didn’t disappear; it became specialized, modular, and increasingly abstracted. The focus shifted from knowing how to build a circuit to knowing which chipset or module to use — and from “how” to “what.” Still, without understanding the how, customizing the what remains difficult.

As the software industry exploded, it brought a fair amount of slop with it — from the dot-com bubble to lingering copy-paste digital solutions in regions that came online later. Even today, you still hear “your business needs a website” as if that’s reason enough.

But did software replace hardware? Of course not; software still needs hardware to run. What it did was make hardware less visible:

  • It abstracted the hardware
  • It commoditized it
  • It shifted the power balance

I asked ChatGPT to contextualize, and it gave me this odd metaphor (presented as-is):

Software Ate the World — But It Ate Hardware Like a Snake Swallowing a Pig

So — What’s the point?

The point — AI and Humans

We are witnessing an AI explosion in which the human element will remain, but will become increasingly specialized. You keep hearing the line below, and it’s true.

AI is not going to replace humans. But a human who uses AI will replace one who doesn’t.

Now let’s move that line back into the old story of software and hardware. There, it would have been:

Software is not going to replace hardware. But hardware that runs software will replace hardware that can’t.

That resonates! Or does it? And what about the GPT metaphor above? 😏

Will the demand for AI applications continue to grow, and will it require increasingly specialized human resources? Will humans get abstracted? Will humans get commoditized?

Let’s break it down

There is no doubt that the demand for AI will continue to grow. I will come back to the supply side later; for now, there is no reason to believe that humans will stop demanding better AI as long as they can afford it.

Can humans be abstracted?

Abstraction is hiding complexity behind a simple interface.

When humans are abstracted, it means you interact with a role, a function, or a system, not the person behind it.
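To make that concrete in software terms, here is a minimal, purely illustrative sketch in Python (the class and function names are mine, not from any real system): the caller talks to a simple interface and never learns, or cares, what sits behind it.

    from abc import ABC, abstractmethod

    class SupportChannel(ABC):
        # The simple interface: all the caller ever sees.
        @abstractmethod
        def resolve(self, ticket: str) -> str:
            ...

    class Chatbot(SupportChannel):
        # One implementation happens to be software.
        def resolve(self, ticket: str) -> str:
            return f"[bot] auto-resolved: {ticket}"

    class HumanAgent(SupportChannel):
        # Another happens to be a person, whose name the caller never asks for.
        def __init__(self, name: str) -> None:
            self.name = name

        def resolve(self, ticket: str) -> str:
            return f"[agent] resolved by a person: {ticket}"

    def file_ticket(channel: SupportChannel, ticket: str) -> str:
        # The caller deals with the role, not with whoever fills it.
        return channel.resolve(ticket)

    print(file_ticket(Chatbot(), "reset my password"))
    print(file_ticket(HumanAgent("Asha"), "refund request"))

Swap the bot for a person, or one person for another, and the caller never notices. That is abstraction, and the two examples below do exactly the same thing to people.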

Let’s take customer support. You use a chatbot → it escalates to a human → a generic support agent → ticket resolved. You rarely know who helped you. The person is abstracted. They might show their name, but few of us remember it unless we want to complain.

Or let’s take gig workers. You tap Uber → someone drives you. Do you think of them as a person with dreams or just a function of transport?

Yes. We humans are already abstracted.

Can humans be commoditized?

Wait — Isn’t that already a thing?

We have always heard “You are not irreplaceable.” That is true; only the difficulty of finding a replacement varies.

That’s just another way of saying “You are a cog”, or, just for fun, “LaaC: Labor as a Cog”.

Yes. We humans are already commoditized.


What does that mean?

AI is outpacing humans, and the systems we have already built to abstract and commoditize human work make it likely that many human roles will be replaced, either by AI or by humans using AI.

Denial

Many people are in denial about what AI can do for them. And that denial is precisely what gives an AI-enabled human a particular superpower.

There is an elitist bully-ism that says, “See — it did this wrong, it can’t replace us.” What they forget is that they are a thirty-year-old comparing themselves to a five-year-old that is underfed and held back on purpose.

And yet AI catches up, and eventually it gets it right. What they really mean is, “I can see the mistake, so it can’t replace me.” But it can replace others who make more mistakes.

It is good to be skeptical about AI. The job of a human will be to verify what AI does and correct it when it is wrong.

The AI bubble theory

People say we are seeing an AI bubble. That may or may not be true. I do not see a demand-side constraint, and I do not see as many stupid AI apps as we saw websites during the dot-com bubble. If things keep going this way, a bubble will form eventually; that much is human nature. But I do not see it yet.

I would be more concerned about the supply side, which might just kill the AI companies. We may reach a point of diminishing profitability, as the cash burn may not be sustainable in the long term. If the economy crashes and nobody is left to buy AI subscriptions, who is going to pay the bills for the huge server farms? Or some method or hardware may be discovered that lets large models run on small, affordable machines, in which case the large companies are going to feel the pinch. Perhaps some regulation will compel them to stop in certain regions, but then AI will simply proliferate elsewhere.

The AGI theory

The AI we mostly hear about today revolves around LLMs (large language models). What most people do not realize, especially those who never spent years in machine learning before LLMs and transformers (not the movie), is that we do not exactly understand how LLMs work. We understand the architecture and the math behind transformers, yet we still do not fully grasp why certain intelligent behaviors emerge, or how these models generalize far beyond what they are explicitly trained to do.

The question is — is that even “artificial” at all? Are we witnessing a mysterious natural phenomenon? Is it some digital-natural-intelligence?

Ironically, the real “artificial” part of intelligence was back in the days when we hand-coded algorithms for specific problems, for example, face recognition.

AGI will come into existence someday — the real question is when. It may be too costly to build, and it may not even have a practical use in the society we live in today. It may not hate humans, but it may not like us either — or it may simply leave us behind.

(Some statements have been clarified at the end)


Final thoughts

What we have seen with AI cannot be unseen. We are never going back to the old ways, and many people who are in the denial phase will eventually come to terms with the new reality.

AI is not going to replace you; AI-enabled humans will replace you.

Here’s a quote from “The Incredibles” that I will leave you with:

When everyone’s super, no one will be.

Factual Clarifications

A few clarifications that I’d like to point out. These are not inline because they would break the flow of the post.

About: “We do not exactly understand how LLMs work”

  • We do understand the architecture and math behind transformers and LLMs.
  • We don’t fully understand how specific capabilities emerge — why certain behaviors, reasoning, or compositionality suddenly appear at scale.

About: “Is it some digital-natural-intelligence?”

  • I was speaking metaphorically; calling it natural intelligence or a natural phenomenon is controversial.
  • At times, their behavior feels less like something artificial and more like a complex emergent phenomenon — raising philosophical questions about the nature of intelligence itself.

About: “AGI will come into existence someday.”

  • This is speculation, not a factual inevitability.
  • This discussion is not about niche or domain-specific AGI.
  • AGI isn’t fantasy, but it also isn’t inevitable, especially under economic, physical, and infrastructural constraints.
  • If AGI does arrive, whether soon or in the distant future, it could dramatically alter our trajectory — but it’s not yet clear when or even if it will become feasible.