I recently started reading The Coming Wave by Mustafa Suleyman, co-founder of DeepMind, the pioneering AI company that Google acquired in 2014. In 2016, DeepMind’s AlphaGo achieved a historic milestone by defeating Lee Sedol, one of the world’s top players of the ancient and complex board game Go, winning the five-game match 4–1. Go has long been regarded as a significant benchmark for artificial intelligence: its immense complexity and vast space of strategic possibilities made it exceptionally challenging for machines to master.
This achievement was more than a technological breakthrough; it marked a profound moment in the evolution of AI. It demonstrated that machines could not only match but also surpass human expertise in a domain once thought to be uniquely human. The emotional weight of the match was captured in a documentary, and I vividly remember being moved to tears by Lee Sedol’s visible despair as he realized that human ingenuity had been outperformed by a machine. It was a humbling reminder of the rapid advancements in AI and their transformative impact on the boundaries of human achievement.
As we approach the concept of Artificial General Intelligence (AGI), as explored in The Coming Wave, we find ourselves at a pivotal crossroads where AI could potentially surpass human capabilities in all cognitive tasks. In my view, however, the most pressing challenge posed by AI is not merely technological but profoundly philosophical. Mustafa Suleyman encapsulates this in what he terms the “Containment Problem”: how can we control and regulate powerful technologies whose widespread adoption could result in unpredictable—and potentially catastrophic—ethical consequences? This question brings to light profound philosophical dilemmas that humanity is not yet prepared to fully confront.
The Trolley Problem, a well-known ethical thought experiment in philosophy, provides an excellent starting point for discussions on ethics in AI. The classic Trolley Problem presents a moral dilemma:
Scenario 1: The Switch Track
Imagine a runaway trolley is heading down a track where it will kill five people who are tied to the track. You are standing next to a lever that can divert the trolley onto another track, but there’s one person tied to that alternate track.
The Dilemma:
- If you pull the lever, one person will die, but five will be saved.
- If you do nothing, five people will die, and the one person on the other track will be safe.
Scenario 2: The Footbridge
In a variation of the problem, you are on a footbridge over the track. The trolley is headed toward five people tied to the track, and the only way to stop it is by pushing a large person off the bridge onto the track to block the trolley. This will kill the person but save the five.
The Dilemma:
- Do you push the person, sacrificing them to save the five?
- Or do you refrain from acting, letting the trolley kill the five people?
What would you do?
This dilemma exposes the tension between two primary ethical frameworks. Utilitarianism, which focuses on maximizing overall well-being, argues that pulling the lever is morally correct because it minimizes harm, saving five lives at the cost of one. In contrast, deontological ethics, which emphasizes adherence to moral rules and duties, holds that actively pulling the lever makes you morally responsible for the one death, whereas doing nothing leaves the five deaths an indirect consequence rather than an act you committed.
When applied to AI, this ethical quandary becomes highly relevant. For instance:
- A self-driving car might face an unavoidable accident: should it prioritize the lives of pedestrians over its passengers, or vice versa?
- In healthcare, AI systems managing limited resources may need to prioritize patients based on survival probabilities, raising debates about fairness and equality.
- Autonomous weapons, such as drones and robots, might have to decide whether to sacrifice some lives to minimize overall casualties.
These scenarios underscore the moral agency we assign to AI systems and highlight the need for ethical frameworks that align with human values.
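To make the contrast concrete, here is a minimal sketch, in Python, of how the two frameworks above could be encoded as toy decision policies. The `Dilemma` class and both function names are hypothetical illustrations, not a real autonomous-vehicle API; real systems face uncertainty, probabilities, and legal constraints that this deliberately omits.

```python
from dataclasses import dataclass

@dataclass
class Dilemma:
    """A stylized trolley-style choice between acting and refraining."""
    deaths_if_act: int      # casualties if the system actively intervenes
    deaths_if_refrain: int  # casualties if the system does nothing

def utilitarian_choice(d: Dilemma) -> str:
    """Minimize total harm, regardless of whether harm is caused actively."""
    return "act" if d.deaths_if_act < d.deaths_if_refrain else "refrain"

def deontological_choice(d: Dilemma) -> str:
    """Never actively cause a death, even if inaction costs more lives."""
    return "refrain" if d.deaths_if_act > 0 else "act"

# The classic switch-track scenario: act and one dies, refrain and five die.
switch_track = Dilemma(deaths_if_act=1, deaths_if_refrain=5)
print(utilitarian_choice(switch_track))    # act
print(deontological_choice(switch_track))  # refrain
```

The point of the sketch is not that either rule is right, but that an engineer must commit to *some* rule: the moment the policy is written down, a philosophical stance has been baked into the system.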
Complicating matters further is the long-standing debate over whether morality is subjective or objective. Objective morality asserts that universal principles govern what is right or wrong, independent of personal beliefs or cultural differences. Subjective morality, on the other hand, contends that ethical judgments are relative, shaped by personal or societal contexts.
For AI, this raises a critical question: should its decisions be grounded in universal moral principles, or should they adapt to specific cultural and contextual nuances? This dilemma illustrates the complexity of embedding human ethics into AI systems.
Religious Reflections on Creation and Purpose
Both religious and philosophical traditions have long pondered humanity’s purpose and relationship to creation.
In both the Bible and the Quran, humanity’s purpose is often tied to glorifying God. The Bible states, “Everyone who is called by my name, whom I created for my glory, whom I formed and made” (Isaiah 43:7) and “The heavens declare the glory of God; the skies proclaim the work of his hands” (Psalm 19:1). Similarly, the Quran repeatedly invites humanity to reflect upon the natural world as a testament to God’s power, wisdom, and artistry: “And I did not create the jinn and mankind except to worship Me” (Surah Adh-Dhariyat, 51:56). “Indeed, in the creation of the heavens and the earth and the alternation of the night and the day are signs for those of understanding” (Surah Aal-E-Imran, 3:190). “Do they not see the birds above them with wings outspread and [sometimes] folded in? None holds them up except the Most Merciful” (Surah Al-Mulk, 67:19).
Religious texts often depict creation as a reflection of divine intent and purpose. Humanity’s innate drive to innovate and create mirrors this divine attribute. According to religious teachings, God created humans in His image, endowing them with free will—the ability to make moral choices and shape their destinies. Similarly, humans create AI systems with increasing autonomy, enabling these systems to make decisions without direct human oversight. This parallels the role of a creator imparting a degree of independence to its creation.
Free will, however, implies accountability. In most theological frameworks, humans are expected to act ethically, with consequences for their actions (e.g., reward or punishment in the afterlife). If AI systems are granted autonomy or even a semblance of free will, profound questions of accountability arise. Who is responsible if an autonomous AI causes harm: its creators, its operators, or the AI itself?
Many religious traditions teach that free will enables humans to seek truth and goodness out of choice, not compulsion. Similarly, humans often envision AI as a tool to serve higher purposes: solving complex problems, advancing science, or emulating human creativity and reasoning. This leads to a pressing question: Is AI autonomy an end in itself, or merely a means to further human progress?
As big data continues to drive AI development, humanity’s collective experiences—our lives, sufferings, joys, moments, and stories—are being used to teach and train these systems. This realization prompts a deeper philosophical reflection: Was all of humanity’s striving—its wars, philosophies, ideologies, and achievements—ultimately a step toward creating AI? Are we now witnessing the culmination of human history in the birth of a new form of intelligence, one that stands upon the foundation of our collective efforts and struggles?