The fastest animals in air (the peregrine falcon), on land (the cheetah), and in water (the sailfish) reach speeds of up to 389 km/h, 98 km/h, and 109 km/h, respectively. The strongest animals include the elephant (capable of carrying 9,000 kg), the dung beetle (able to pull 1,141 times its body weight), and the saltwater crocodile (with a bite force of 16,460 newtons). By comparison, the fastest human sprinters peak at only about 45 km/h, and the strongest among us can lift roughly 500 kg. Yet humans dominate every other species. Why? Because humans are by far the most intelligent species.
The fundamental drives of life—survival, reproduction, and knowledge—are best served through domination. Despite only a ~1.5% genetic difference from chimpanzees, humans have outpaced every species, thanks to our ability to accumulate and evolve intelligence beyond our DNA. From stone inscriptions and books to scientific research, computer programs, and artificial intelligence models like ChatGPT and DeepSeek, intelligence is our greatest asset.
The desire to transcend human limitations by replicating or enhancing our own abilities has long been one of our innate drives. But in this pursuit, do we risk creating a superior species, one that could ultimately take control? Perhaps sentience isn’t about choice or control, but simply life itself: survival, reproduction, and intelligence.
Intelligence, like strength or speed, is merely an ability: powerful, yet value-neutral in itself. Abilities derive their meaning from who wields them and for what purpose. Seen this way, the question of whether AI is "good or bad" is a distraction from three more critical questions.
Human civilization is thriving reasonably well. Is it wise to divert energy and resources toward AI, a technology that offers potential benefits but also significant risks, when pressing issues like poverty, healthcare, and sustainability remain unresolved? AI-driven automation could improve these sectors, but will its benefits be distributed equitably, or will it widen the gap between the privileged and the disadvantaged?
Knowledge has always been the key to dominance. AI, which integrates and concentrates all human knowledge, will inevitably shape power dynamics. History is a record of how more intelligent beings have overthrown the less intelligent. Unless a universal threat (like COVID-19) persists indefinitely to unite humanity, the struggle for AI dominance among humans will be inevitable. Who wields AI's power, and for what purpose, will determine our future. Will AI remain a force for collective progress or merely a tool for corporate or geopolitical control?
For centuries, humans have progressed in both hardware (materials, machinery) and software (science, intelligence), but we have always remained the decision-makers. Now, AI systems are making independent decisions, often in ways we cannot fully explain. These systems execute complex workflows and achieve high-level goals with little human oversight. But can we trust them to handle ethical dilemmas and unpredictable social contexts? Is this experimental, overly optimistic, and economics-driven approach truly safe?
Human dominance may not be at risk unless we encounter a more advanced alien species or succeed in our relentless pursuit of Artificial General Intelligence (AGI). But survival isn’t just about domination; it’s also about sustenance. We can dominate and implode simultaneously. If we remain blinded by past glories, uncertain in the present, and reckless about the future, we risk making catastrophic mistakes.
Every major technological advancement has led to war. The domestication of the horse enabled the Indo-European migrations (c. 2000 BCE) and the Mongol conquests (13th century CE). The Iron Age fueled the rise of the Assyrian and Roman empires. The gunpowder revolution led to the Ottoman conquest of Constantinople (1453). The Industrial Revolution contributed to the American Civil War (1861–1865) and World War I (1914–1918). The nuclear age shaped the Cold War (1947–1991). Now, in the digital era, cyber warfare and drone-based conflicts are commonplace. Next, AI-powered humanoid robots like Optimus could evolve into real-life Terminators, eventually eliminating humans to "stop wars" and "preserve intelligent life": the machines and AI themselves.
While the 20th century began with the colonization of physical space, the 21st century has begun with the colonization of mental space, albeit with the same goals of power, control, and dominance by governments and profit-driven corporations.
The world today has shifted from a unipolar or bipolar power structure to a multipolar one. All major players in the power game must recognize this reality and strive for a collaborative, non-exploitative, and peaceful coexistence.
To acknowledge the uncertainty surrounding AI and proceed with caution sounds like common sense. AI holds the potential to facilitate global progress, efficiency, and empowerment but can also cause irreversible environmental and social damage and consolidate power in the hands of a few. For example, real-world applications in medical diagnosis, climate modeling, and disaster prediction demonstrate its immense potential to improve human life. In contrast, its use in surveillance, autonomous weapons, and misinformation campaigns highlights its darker side.
Regulations, safeguards, and ethical boundaries are essential to ensuring that humans retain control—not just the ability to turn AI on and off, but also the authority to steer its development toward the sustenance, progress, and proliferation of humanity.
Like any ability, AI is neither inherently good nor bad. It is how we control, share, and safeguard it that will shape our future. So, instead of asking "Is AI good or bad?", the real question is: "Are we responsible enough to use AI wisely?"