How to Lie About Ethics
She steps onto the stage, TED-lit and perfectly rehearsed.
“Our AI is built with responsibility at its core,” she says, voice quivering on cue.
“We’ve consulted ethicists, community leaders, and global stakeholders to ensure we’re doing this the right way.”
Translation: We emailed a diversity consultant, skimmed the reply, and turned it into a keynote bullet point.
“This isn’t just about innovation, it’s about trust.”
Translation: The legal team updated the disclaimers, and she hopes you’ll confuse that for ethics.
The applause hits right on schedule. Her smile says humble genius, but the subtext is scripted down to the breath.
This plays out over and over at conferences across the globe.
The actors and set designs change, but the monologue stays the same.
They tell you their AI is built with empathy, with caution, and with some vaguely defined sense of humanity.
They beg for trust with one hand while the other quietly monetizes your behavior.
This is the era of “ethical AI”: a brand identity, not a practice.
The Ethics Stage Show
It always starts the same way: build first, optimize for profit, market like hell.
Somewhere down the line, someone in legal gets twitchy about headlines, and suddenly there’s an “ethics initiative.”
So, they hire someone.
Usually a former academic, maybe a philosopher with a coding bootcamp certificate.
This person is brilliant, passionate, and almost completely powerless.
They get to write a blog post, maybe organize a panel.
If they’re lucky, they’ll create an internal “AI Ethics Principles” doc that nobody building the actual product will read.
And when someone notices that the new recommendation engine is disproportionately tanking results for certain demographics, what happens?
The update gets pushed anyway, and the meeting about it keeps getting rescheduled… forever.
How to Say Nothing with Confidence
This is the part where companies roll out their press releases, filled with beautifully ambiguous statements:
“We strive to reduce bias.”
“We are committed to transparency.”
“We believe AI should benefit humanity.”
You’ll notice what’s missing: measurable goals, concrete definitions, any kind of consequence.
Strive. Commit. Believe.
These are words designed not to bind. You can’t sue someone for failing to strive hard enough.
The real trick of ethical AI is that it sounds like a moral framework, but functions as PR.
It’s the art of appearing to say something, while committing to nothing.
Bias at Scale
What happens when AI leaves the lab and hits your life?
That algorithm filtering your resume was trained on decades of human hiring decisions. All those tiny human preferences for alma mater, tone of voice, and names that “sound professional” have now been coded into your automated rejection.
That AI assigning your credit score based on shopping behavior and browser history didn’t decide to discriminate; it just found patterns, and those patterns correlate uncomfortably well with race, class, and gender.
That content algorithm feeding your social media is optimized for engagement, not truth. If rage bait keeps you scrolling, rage is what you’ll get.
Your insurance premium gets calculated by an AI that factors in your social media posts. It learned that people who post about certain hobbies, foods, or life events correlate with higher claims.
The algorithm doesn’t know why yoga enthusiasts file fewer claims; it just prices accordingly.
The machines aren’t broken. They work exactly as designed.
They just weren’t designed with your well-being in mind.
Complexity as Camouflage
Invariably, when the harm becomes visible, what do the companies say?
“The model is too complex to fully explain.”
“We’re still investigating.”
“It’s a challenge across the industry.”
Is it an accident if it keeps happening? Or is it the plan?
Here’s the thing, though: these platitudes work.
They work because regulators aren’t sure what to ask, users don’t know where to start, and executives are thrilled to explain that it’s all very nuanced and very technical.
But it’s not. Not really.
You don’t need a PhD in machine learning to understand that systems inherit the values of the people who build them.
If a system is trained on profit and optimized for speed, fairness never even makes it into the code.
A Case Study in Corporate Amnesia
Remember Microsoft Tay? The Twitter bot that turned into a Nazi in under 24 hours?
What did Microsoft say afterward?
Not “we messed up.”
Not “we forgot how the internet works.”
They said it was an important learning opportunity.
Translation: No one thought this through, but let’s rebrand the failure as thoughtfulness.
The standard playbook goes something like this:
- Deploy recklessly.
- Watch it go wrong.
- Frame the damage as part of the journey.
- Launch a blog post about “AI for Good.”
Build Like You Mean It
Let’s pretend, for a moment, that companies meant it.
What would ethical AI development require?
- Building diverse teams with real decision-making power
- Starting projects by asking: Who could this hurt? Who benefits? Who gets left behind?
- Testing systems before launch, especially on vulnerable communities
- Ditching the “move fast” culture when human lives are impacted
- Empowering internal ethics teams to veto projects
- Killing products that are fundamentally harmful, even if they’re profitable
This doesn’t happen.
You know it, and I know it, because that kind of ethics doesn’t scale. It doesn’t pump up your stock price, and it doesn’t get you acquired.
If you want to know whether a company takes AI ethics seriously, here’s the test:
When someone in the room says, “We shouldn’t build this,” what happens next?
Do they listen? Do they ask why and who it might harm? Do they pause?
Or do they nod, thank them for their feedback, and move forward like nothing was said?
That moment, the moment someone raises a red flag, is the only moment that matters.
It’s easy to talk about principles in a press release, but try convincing your C-suite to kill a lucrative feature.
If the ethics conversation always ends in a product launch, then it was never a conversation.
It’s Not AI That’s the Problem
It’s the people.
It’s the incentives.
It’s the culture of tech that worships disruption and treats caution as a liability.
The machine isn’t moral or immoral, just obedient.
And no, slapping an “AI for Humanity” slide onto your investor deck doesn’t change that.
So, the next time a CEO promises their AI is ethical, ask yourself:
Are they building systems that deserve your trust?
Or just crafting excuses you’ll feel guilty about doubting?
The answer is usually hiding in what’s left unsaid, and who’s left out of the room.
Want more unfiltered takes on corporate ethics theater? Follow Artificial Insights for analysis that doesn’t mistake good intentions for good outcomes.