On AI, Impact, And Job Displacement

By Daniel Rosehill

15 Sep 2025

The Jewish New Year looms, which always gives me pause to jot down some more reflective thoughts than I usually find time to engage in.

While I love open-source everything and used to keep busy making YouTube videos, I keep a fairly low profile about what I do professionally - partly because it's always in a state of flux and partly because... well, self-promotion isn't really my jam.

But here's the short version:

I've spent most of my career to date in tech marketing (although my mother, who just about operates a tablet, insists I would have been a great doctor - some things never change). I like writing and creative pursuits, build computers for fun, and would get bored simply writing code. So it's an interesting career path that treads along the fault lines.

When I first arrived in Israel, I began exploring contract positions - fillers, essentially, while I found my feet. Oddly, I've come to not only embrace this way of working but also fervently believe that it's the future. Strangely, it's also fed the other way: former (in-house) positions have turned into client relationships.

While clumps of my hair have been expended trying to remember the document classification codes in Green Invoice, overall, it's been a positive voyage. Which brings me to: a client conversation I didn't get to finish.

The AI job displacement wave isn't coming—it's here.

While tech leaders debate timelines and politicians offer platitudes about "upskilling," we're missing the fundamental question: What happens when our entire economic system is built on human productivity, but humans are no longer the most productive?

This isn't another think piece about whether AI will take your job. It's about recognizing that our current framework for measuring human worth—economic output—is about to become obsolete, and we have a narrow window to build something better.

Does The Yardstick For Quantifying Business Success Even Have To Be ... Money!?

Once upon a time, salt was used as currency. Will measuring value by generating wealth for shareholders be viewed, one day, as equally absurd? Image: Flux Dev

For the past several years, I've been helping Sir Ronald Cohen with communications.

This has proven to be an interesting point of comparison with my more typical client base of small to medium technology companies. Sir Ronald's mission resonates with me because when surveying the evolving state of X (technology, industry) I frequently find myself asking "nice product, but what's the bigger picture?" This was true as much then - when we started working together - as it is now, when we are in the throes of the great advance of AI into our workplaces and lives.

Another reason why I've gravitated, over time, to a frame of thinking that goes beyond my immediate concerns of ensuring my employability and financial provision: It's hard to be closely involved in technology marketing for too long without becoming at least slightly jaded, however passionate you remain about technology as a whole.

The more enmeshed you become in technology, and the more "mature" your client base, the more you realise that everybody, big and small, is just bringing a kind of buggy piece of code to market - hoping that it doesn't flop and that other people will like it, too. The more time you spend among developers, the more you realise that somebody is going to need to explain the genius of this idea in a better language than memes. You see why your job exists. But you also see some of the more dubious stuff that goes on: bought awards; impossible features cooked up during conference calls with investors. You become a little jaded; and conflicted.

The "military grade encryption" on your brochure? Bog standard AES-256, buddy (invented, by the way, in 2001 and yes, 'Swiss Banks' have probably moved onto something better already...). I was once paid to review VPNs. Normal friends (who don't use Linux) periodically remember that fact and ask me for VPN advice.

My stock response remains that they're basically all the same and may all be equally shady. I don't know. Telling someone who wishes to remain anonymous in 2025 to take up residence in a cave in Afghanistan has historical precedent proving it to be insufficient advice. A homing pigeon? Elon Musk might have an idea.
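
(On the "military grade" point above: here's a minimal sketch, in Python, of just how bog standard that encryption is - a few calls to the widely used cryptography library and you, too, are wielding AES-256. The key, nonce, and message below are made up for illustration.)

```python
# A minimal sketch: "military grade encryption" is an off-the-shelf library call.
# (Illustrative only; the message below is made up.)
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the much-vaunted 256-bit key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # fresh random nonce per message

ciphertext = aesgcm.encrypt(nonce, b"nothing to see here", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"nothing to see here"
```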

Business Value Creation Is Kind Of Absurd When You Think About It

There are two behemoths of the Israeli tech world which I have never worked with and – at the risk of spurning invites for cocktails (Wix does throw excellent “events”) – I like to hold out as examples of why I think that the disconnect between paper value and real value has long since passed a threshold beyond which some kind of serious re-engineering is in order.

Monday.com enjoys a market valuation that is frankly eye-watering. But at its heart, it’s a CRUD app with some imaginative marketing, blessed with the chutzpah to realize that calling a product a “work OS” will not infringe anyone’s trademark.

Boiled down to its technical nuts and bolts, it writes stuff to a database and makes it look pretty, thereby making it less awful for knowledge workers to keep track of projects and "ping" the person sitting on the other end of the desk. It does not cure cancer, cannot (unlike Suno) compose funny songs, nor does it bring us closer to world peace (if you have not already done so, please now visit Suno and make AI songs).

It is also decent at what it does and (like Wix) even if it’s not my cup of tea, I agree that it deserves to be profitable. But it’s the hyperbole that gets me stuck: it’s just hard to swallow the idea that, in a world swimming in microplastics, social division, and war, it represents the pinnacle of human achievement.

If we measure success and value purely by the aggregation of money, then Monday.com is “worth” many multiples more than CureVac — which was a pioneer in mRNA vaccine research during COVID-19 — or other small research-intensive biotechs that have saved lives. And yet, despite their real life-saving contributions, these companies’ societal impact is hugely under-captured by how the market values them.

Monday.com makes a profit, and reliably so. But when we try to define its counterpoint - something that is profitable and also makes an appreciable contribution to the betterment of our species, society, and planet - finding the right vocabulary, oddly, becomes awfully hard.

Impact: reframing value creation through a holistic - and planetary - lens. Image: Flux DEV.

Impact: Redefining The Terms Of Capitalism

ChatGPT is my eternal companion through many fascinating rabbit holes (what? yours too!?). Exploring them is a guilty pleasure.

Plumbing those, I find myself thinking more and more about how the things we are wont to dismiss as inconsequential details, or as "baked-in-stone" facts of life, tend to have the most outsized ripple effects.

AI's greatest attribute: as a computer program, it exhibits no judgement and so is the ultimate sounding board for those ideas you thought of at the gym but assumed were too dumb to even post on Reddit. Like: what if the whole world spoke the same language?

As you might be able to tell, I'm a sucker for standardization and am likely one of very few earthlings who counts visiting the headquarters of the International Organization for Standardization (ISO) among their latent ambitions (I have better ones, I promise). Did you ever think about how much money could be saved if the envelope of standardization were pushed as far as it can go?

Ask ChatGPT to model the savings if the world consolidated on one single currency and be prepared to get some mind-blowing projections.

What if the world agreed that, whether you're a Muslim or a Jew, a Greek or a research scientist in Antarctica, nobody has majorly strong feelings about what voltage the national electricity grid runs on?

220V, I contend, is just about as good a voltage as any, and the Type F plug is a fine delivery mechanism.

Yet globally we insist on using 15 different plug connectors, and national power grids operate on 14 different standards.

Taking the bus, or walking, will reduce your personal carbon footprint, sure. But few consider that some 17 million tonnes of plastic could have been avoided had we held a World Plug Forum in which Trump, Kim Jong Un et al. had a really good heart-to-heart about why we need all those pesky adapters anyway.
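
For the curious, here's how a figure of that order of magnitude might be sanity-checked with back-of-envelope arithmetic. To be clear, every constant in the little Python sketch below is my own rough assumption, not a sourced statistic - the point is the shape of the calculation, not the exact number.

```python
# Back-of-envelope only: every constant here is an assumption, not a sourced figure.
ADAPTERS_AND_CHARGERS_PER_YEAR = 4e9   # assumed plug-specific plastic units made annually
PLASTIC_PER_UNIT_KG = 0.15             # assumed plastic per adapter/charger, in kg
YEARS_OF_NON_STANDARDISATION = 30      # assumed decades of fragmented plug standards

total_tonnes = (ADAPTERS_AND_CHARGERS_PER_YEAR
                * PLASTIC_PER_UNIT_KG
                * YEARS_OF_NON_STANDARDISATION) / 1000   # kg -> tonnes

print(f"~{total_tonnes / 1e6:.0f} million tonnes")  # ~18 million tonnes under these inputs
```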

Food for thought: vast amounts of plastic waste could be eliminated if humanity agreed to standardisation on non-contentious issues, like plug types (we think). Generation: World Plug Forum as imagined by Leonardo.ai and Flux DEV.

Consider this from the vantage point of an alien visitor to planet Earth. The only thing more bananas than the idea of an international plug summit might be the fact that nobody thought of convening one in the first place!

The Alien's Guide to Human Economics

As this blog is already weird enough, let's run with the alien analogy a little while longer.

If you had the opportunity to break bread with that alien, you might, perhaps, also be asked to explain how the whole "money" thing works.

It makes the world go around, you may wryly remind your visitor. But if it's so important - who sets the rules as to how it's aggregated and who decided what constitutes "value" in the first place?

If asked that question, you would probably reach for a whistle-stop tour of human history - explaining how we once bartered, then fixed value on commodities like salt, then gold, and how today currency isn't even backed by something tangible.

You may then jostle over to cryptocurrency - the idea that the whole system should be decentralized - then get sidetracked to discussing the perils of anarchy, before concluding, with a sigh, that the whole enterprise of money is very complicated and - like the climate crisis - kind of a mess.

Tunnel vision, however, is a powerful force. And imagining encounters with aliens is a useful mechanism for challenging assumptions arising from a familiar environment. Amid trying to explain the difference between a hardware wallet and a digital one (I own one and still have no good idea), the alien may wish to dig into the whole concept of value creation.

We can all agree that having a balance at the bank - or on the blockchain - is more sensible than having to keep a herd of camels out the back to swap out whenever your TV breaks ("I think it's just a cable issue. Would you settle for the smaller one?"). But the terms through which value creation is viewed are less obvious.

Our world is deeply interconnected. Yet the rules that govern profitmaking in business fly in the face of this reality. Generation: Leonardo.

The Zero-Sum Fallacy

Value creation in market economies labors under the implicit assumption that the world is a zero-sum game.

Yet all around us we see evidence screaming out that things are not so.

These range from the metaphysical - we can model the likelihood that we have breathed in a molecule that once passed through any given person on Earth, and find the answer to be almost 100% - to the scientific, like the established patterns of cause and effect in climate change.
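
(For the sceptical: the "shared breath" claim is the classic back-of-envelope estimate, sketched below in Python. The molecule counts are rough assumed constants, and the calculation assumes the breath in question has long since mixed evenly through the atmosphere.)

```python
# The classic "shared breath" estimate, sketched with rough assumed constants.
import math

MOLECULES_PER_BREATH = 2e22       # assumed: molecules in one exhaled breath (~1 litre)
MOLECULES_IN_ATMOSPHERE = 1e44    # assumed: total molecules in Earth's atmosphere

# Chance that one inhaled molecule came from one specific historical breath,
# assuming that breath has fully mixed through the atmosphere by now.
p_single = MOLECULES_PER_BREATH / MOLECULES_IN_ATMOSPHERE

# Chance that at least one of the molecules you just inhaled overlaps with it.
p_overlap = 1 - math.exp(-p_single * MOLECULES_PER_BREATH)

print(f"{p_overlap:.0%}")  # ~98% with these numbers, i.e. "almost 100%"
```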

Efforts to measure these have been framed mostly in terms of social "goodness" - voluntary reporting frameworks and the like.

But why should it be that way to begin with?

The legal codes of liberal democracies attempt to ensure that one's right to pursue happiness and success is guaranteed so long as others are not harmed in the process. These safeguards, however, are often not extended to business - where the fiction that corporations are their own "entities" is stretched into impunity from the normal responsibility for their actions.

Who sets these rules, anyway?

These are the fundamental questions that Sir Ronald's work sets out to address (disclosure: I was involved in the manuscript preparation for the forthcoming second edition of Sir Ronald's book, Impact: Reshaping Capitalism To Drive Real Change, and am paraphrasing the ideas rather than stating them exactly).

AI As Elevating Human Potential: The Productivity Paradox

So now that he's been introduced, I'll share an anecdote:

I was chatting today with Sir Ronald and a colleague. Knowing me to be something of an AI boffin, Sir Ronald asked whether I had any thoughts to share on the role AI would play in reshaping employment.

Just as he did, a rocket siren warned of an incoming projectile from Yemen, and I had to summarily end my participation in the Zoom call without providing an answer. So let me lay out here some of what I would have articulated:

Is AI ready for "production"? And the type of production that could take your job? Flux

The Reality of AI Capabilities

The potential for AI to eliminate jobs is understated and overstated at the same time.

Spend a day using the code development tools of today - even the state-of-the-art ones - and it's hard to really believe that something this fumbling could truly eliminate the role of developers.

By the same token, the predictions of industry luminaries have consistently been overly bullish, and there is no good reason to believe that today's prognostications are any better grounded in reality.

None of this is to negate the massive advantages which generative AI tools have quickly proffered. But they require close supervision to be anything like effective, much less reliable.

This, however, oversimplifies: the curve is not linear. AI's ability to displace human employment is far more pronounced in some fields - where it is already a fait accompli - than in others.

The first jobs to be subsumed by AI will be those which it is most reliable at doing. Think replicable results rather than the tasks on which AI shines in benchmarks (great for marketing literature and hype; less relevant to operational business calculus). There is, I suggest, a good litmus test for knowing which side of the divide your job is on; but it involves understanding something about why artificial intelligence isn't really akin to human thinking.

The TL;DR: under the hood, AI is a statistical predictive model. It is a parrot and a trickster. But its most misleading characteristic is conjuring the idea that its innate home is the world of words (and by extension ideas) rather than numbers.

AI "thinks" by computing what the next part of a word (token) should be based upon patterns it has learned to detect and emulate. At scale, this begins to looks like thinking and the prediction feels like talking to a cognitive partner. However, this is illusory. This knowledge isn't just for dinner parties: it's a rubric for assessing the ability of AI to displace human jobs. Its creative potential is most reliable when asked to replicate upon a familiar pattern that it has seen a lot of in training.

Ask an AI tool to develop a nice React website for your business? It will turn out something smashing and you will proclaim that frontend development is headed the way of the dinosaurs.

But venture further up the totem pole of human thought and you will quickly encounter its limitations. Even intellectual tasks which may be thought of as "less" challenging but potentially fruitful for AI - say, providing a summary of the main news events - will start to look very iffy.

The reason? The template may be somewhat pattern-based, but the analysis a human analyst brings can't easily be whittled down into code. It's the actual thinking that AI will be challenged to replicate - quite wonderfully, because we're all so unique in how we exercise that faculty.

The Looming Grunt Work Apocalypse

I do not worry about a great bot takeover because I do not believe that the bots will ever be capable of the feat. I do however foresee the imminent collapse of grunt work. This is basically a certainty. The only variables to manage are our responses to it. Those can be guided, in measure, by our framing.

Here's my motto for today: what's AI-able should be AI-able.

In other words: if a task is so rule-based and pattern-based that it can be reliably offloaded to a robot, then the world will be a net beneficiary when a sentient human is relieved of a task that does not allow their G-d-given human potential to flourish and shine. Almost. The missing link in the chain is our friend impact.

Poke around almost any area of human labor and you won't be hard pressed to find examples. Among those in my own fiefdom: SEO content writing.

Keyword-stuffing is a nonsensical proposition: there is already more "content" on the internet than anybody could consume in a lifetime. Multiplying the generation of content through AI is another failed strategy: We can all ramp up the production of mediocre AI-produced content, but if human focus doesn't increase in tandem, we'll all just be fighting over the same 'pie' - with worse goods to offer.

The Democratization of Capability

So I see a role for AI to play here that is actually counterintuitive: freeing humans of the notion that they can brute force victory over one another. The democratization of AI is to thank for this development. We're all wielding (almost) equally powerful swords.

Those menacing robots, I believe, will force us to compete on our human characteristics. But what to do about all those unemployed call center workers?

Upskilling alone may not be enough to fend off AI job displacement. Generation: Flux

We Can't All Be Prompt Engineers: Why Mass "Upskilling" Isn't A Sufficient Response

Prompt engineering gets some stick in the AI world but - having done everything from MCP to voice agents - I remain a firm advocate for the value of careful iterative prompt writing as a foundational skill in working with AI.

You may agree with my contention that from a high vantage point we should celebrate AI's potential to relieve humans of the monotony of grunt work. But that philosophical take will not be of much comfort to a family whose bread is won by those jobs - or by workers who may be unqualified for other ones.

The drumbeat of upskilling and retraining beats loud here, and I have probably reached for this at times as the obvious antidote: don't do something a robot could do; learn to master operating the robot. It's reflexive. It feels good to argue that humans should shift into a higher gear. Yet it glosses over some obvious hurdles: what if people were most content and fulfilled doing what they previously did? What if those people have no interest in upskilling? Do we just toss them on the garbage heap? I think that would be shortsighted and cruel.

The conclusion I gravitate towards oddly ends up dovetailing with the central idea that Sir Ronald articulates in Impact.

If we allow humans to generate wealth - even only enough 'wealth' to subsist - on the basis of their economic productivity alone, then the advent of AI means that the whole system is like a Jenga tower about to fall apart: the system has no built-in mechanism that will allow humans and non-remunerated robots to compete fairly.

Why would anybody even have thought about this when those first ideals were being formulated?

The Path Forward: Recalibrating Impact

So where do I get to, in short?

The AI job displacement tidal wave is going to be grisly and it's going to be very real. The chorus urging those displaced by this first wave to "just upskill" will be very loud. It may take some trial and error for the fact that this is not a one-stop solution to become self-evident.

Hence, I actually find myself liking more the idea of some kind of a societal recalibration of impact - of what we can do with our lives that's deemed worthy enough to be rewarded with enough tradeable currency to take care of our kids, go on vacation, or simply enjoy a good bottle of wine after a hard day at work.

The Implementation Challenge

The downside?

It's a complicated idea with lots of nuance. It's hard to roll into a good soundbite. And there's an awful lot to be figured out in how it can actually be implemented. The upside is big, though: I believe that if we can get this right, the AI revolution will come to be regarded as a hugely positive one. Not just technologically. But for all of humanity. That, oddly enough, sounds like good impact.

We're at a critical juncture.

The AI displacement wave is beginning, but we still have time to build new frameworks before the old ones completely collapse. However, this window won't stay open forever.

The companies and societies that answer these fundamental questions thoughtfully will thrive.

Those that don't will face increasing instability and conflict.

This isn't just about preventing job displacement—it's about creating a world where human potential can flourish in ways we've never imagined.

Where the elimination of grunt work becomes liberation rather than devastation. Where AI becomes the tool that finally allows us to build an economy based on human flourishing rather than mere survival.

The technology is advancing whether we're ready or not.

The question is: will we use this moment to build something worthy of our highest aspirations, or will we let it happen to us?

That choice is still ours to make. But not for much longer.

Daniel Rosehill

Writer, technologist, and entrepreneur based in Jerusalem, Israel