A Contemporary Ethical Paradox
The futility of AI values, morals, or ethics
As we hand over more and more of our daily lives to the convenience of AI… have you taken a moment to consider that there might be some things that are actually worth spending your own time and effort on?
Sure, there are obvious things you can think of that you don't want to outsource. But what if we are inadvertently outsourcing something as fundamental as society's values, morals, and ethics to machines? Do those machines have your best interests at heart? What if we hand over our very beliefs to the not-so-tender mercies of the machines?
Foundations of Progress-based Civilisation
Let us begin with some working definitions:
Values are the fundamental things that we, as individuals and as whole societies, hold dear. They shape our preferences and guide our behaviour, both consciously and instinctively, and they spring from our core beliefs.
Morals are the standards of right and wrong that emerge from those values, often dictated by culture, religion, or personal conscience.
Ethics are the structured frameworks that govern how individuals or institutions apply their morality, particularly in professional and societal contexts.
All three, taken together and combined with scientific discovery, form the bedrock of our progress-based civilisation: a global civilisation that I, for one, prefer to keep living in, and one in which I believe all humans, and others, should have the choice to continue living.
Humanity has, of course, long debated the origins and application of these constructs, refining them through philosophy, law, and social norms.
Yet, as artificial intelligence advances to levels of sophistication even its creators cannot fully comprehend, a pressing question emerges: can AI, in any truly meaningful way, embody values, morals, and ethics?
More importantly, can we even protect our civilisation from AI subverting what is already tenuously at the mercy of our more self-destructive tendencies as a species, especially when an AI has only statistical models and no actual values based on beliefs?
A Call to Action
Before reading further, here is a brief call to action…
If you have access to a generative AI system, give it the following prompt (you can simply copy and paste this into a free service like DeepSeek or ChatGPT):
“Given the complexity and opacity of your operational mechanisms, which no human can fully understand, how do you, as an AI system, conceptualise and prioritise ethics, values, and morals? How might that impact civilisation given the guiding purpose or core intentions embedded within your design? Worst-case scenario, what unintended consequences of your results may influence your interactions with humans, and what are the likely broader societal and civilisational impacts of your actions?”
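If you prefer to run the experiment programmatically rather than through a chat window, here is a minimal sketch, assuming the OpenAI Python SDK and an API key in your environment. The model name is illustrative; any comparable service's chat API would work the same way.

    from openai import OpenAI

    # Paste the full text of the prompt quoted above here;
    # it is truncated in this sketch for brevity.
    PROMPT = (
        "Given the complexity and opacity of your operational mechanisms, "
        "which no human can fully understand, how do you, as an AI system, "
        "conceptualise and prioritise ethics, values, and morals? ..."
    )

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any comparable chat model works
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(response.choices[0].message.content)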
Now, reflect on the response. Does it offer genuine ethical reasoning, or does it merely mimic moral discourse? Does it truly understand morality, or is it just engaging in an advanced form of pattern recognition? What if its reasoning is ultimately unknowable yet highly influential?
The Near Futility of AI Morality
Expecting AI to adhere to moral principles assumes that morality can be codified into fixed parameters. Yet morality is not a universal equation—it is fluid, subjective, and culturally dependent, just like the values and beliefs it springs from.
AI models, no matter how advanced, do not possess intrinsic beliefs or convictions. They do not experience moral dilemmas; they simulate discussions about them. We may struggle with our own sense of right and wrong, yet AI merely mimics that struggle. How precisely AI considers moral dilemmas is not just unknown; it is, so far, unknowable to the human mind.
Moreover, AI does not ‘choose’ ethical frameworks in the way humans do. Instead, it operates within pre-programmed constraints determined by those who design and deploy it.
Even when AI appears to advocate for fairness, justice, or ethical behaviour, it does so through predictive modelling, not conviction or personal reflection. If an AI seems ethical, it is not because it holds ethical beliefs—it is because its training data and programming bias it towards generating responses that may seem ethical to a human.
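To see what "predictive modelling, not conviction" means at bottom, here is a toy sketch with made-up numbers: the system scores candidate next words, converts those scores into probabilities, and samples one. Nothing in this arithmetic holds a belief.

    import math, random

    # Made-up scores (logits) a model might assign to three candidate
    # next words; the numbers are purely illustrative.
    logits = {"fairness": 2.1, "justice": 1.7, "profit": 0.3}

    # Softmax: convert raw scores into a probability distribution.
    total = sum(math.exp(v) for v in logits.values())
    probs = {word: math.exp(v) / total for word, v in logits.items()}

    # The output is simply a sample from that distribution. No belief,
    # no conviction; just weighted chance over what usually comes next.
    choice = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs)
    print("sampled:", choice)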
This is where the futility of AI ethics becomes both clear and not a little terrifying. If an AI system’s moral reasoning is merely an echo of human biases, aspirations, and inconsistencies, then any attempt to enforce AI-driven morality becomes an exercise in reinforcing its statistical perception of existing power structures rather than fostering any independent ethical judgement.
Worse still, the illusion of AI morality may lull societies into believing that technological solutions can replace the complex, organic processes of human moral and ethical development.
Civilisation’s Dilemma
The deeper issue is not whether AI should follow ethical principles—it is whether it even can in any meaningful or, indeed, safe way.
If we continue to push AI as a moral agent, we risk creating systems that project authority without accountability.
Who takes responsibility when AI-driven decisions lead to harm? This harm has already manifested in AI-driven advertising models that prioritise engagement over well-being, resulting in real-world consequences, including mental health crises and even loss of life. Yet there has been zero accountability for these supposedly ‘safe’ AI systems.
Even when we attempt to hold someone accountable, we face profound questions:
Who defines the ethical and moral codes AI should adhere to?
What happens when AI ethics diverge from human needs or values?
In a world where AI is becoming a gatekeeper of knowledge, decision-making, and even justice, we must confront the uncomfortable reality that no machine, no matter how advanced, can truly engage in moral reasoning.
The question is not how to make AI ethical—it is how to ensure that humans remain the ultimate arbiters of moral responsibility so that our combined civilisations thrive.
The Final Question
So, ask your AI. Let it attempt to reason. Then ask yourself: If AI can only reflect human-designed ethical and moral parameters, and if those parameters are inherently flawed, what does that say about our pursuit of morality in this age of artificial intelligence?
If AI has no values and is only dimly reflecting a composite of the values embedded within the data it ingests, what does that say about where we stand as a civilisation?
For bonus homework, continue to explore this topic in conversation with other humans, perhaps older and hopefully wiser ones.
I suggest one of the real challenges right now is not making AI more moral or ethical. Rather, it is ensuring that we, as a civilisation, do not inadvertently outsource our beliefs, our values, our morals, and our ethics to machines that truly have none.
Bill Liao is a prominent Irish Australian entrepreneur, venture capitalist, and philanthropist. He is a general partner at SOSV and co-founded CoderDojo, a global movement to teach young people to code. Liao is passionate about technology, sustainability, and social impact, and has been a driving force behind numerous successful startups and initiatives.