AI can perform calculations, but does it have the capacity to care?
By Abdulhamid Abdullahi Aliyu
When most people hear the phrase “Artificial Intelligence” (AI), their minds often drift toward futuristic fantasies: robots that think like humans, machines plotting to overthrow their creators, or computers smarter than their inventors. Science fiction has fed us these images for decades. Yet, beyond Hollywood thrillers, AI is already here, quietly shaping the world around us. It answers customer queries through chatbots, selects the next movie you’ll watch on Netflix, predicts what story appears at the top of your newsfeed, and even decides whether a bank approves your loan.
But this growing presence of AI in our daily lives forces us to confront a pressing question: how intelligent is artificial intelligence?
The honest answer is that AI is not a brain. It is not some mystical creation that understands, feels, or reasons like humans do. What appears to be “thinking” in AI is essentially mathematics—machines processing massive datasets, detecting patterns, and making predictions based on those patterns. Take medicine, for instance. AI can analyse thousands of X-rays or MRI scans in minutes, flagging possible signs of disease with astonishing speed. Yet, it does not comprehend illness, nor does it share in the burden of delivering a life-changing diagnosis. It only “sees” shapes, signals, and recurring features in data.
This distinction raises a critical debate: Is AI genuinely intelligent, or is it just an extraordinary mimic?
Human intelligence is not simply about solving problems or recalling information. It is a rich blend of memory, imagination, intuition, creativity, and moral reasoning. It includes the ability to feel empathy, wrestle with ethical dilemmas, or create art that expresses the soul. AI has none of these. It has no emotions, no conscience, no instinct for right and wrong. When it generates a song, writes an essay, or navigates a self-driving car, it is not exercising creativity or judgment. It is reproducing patterns learned from the data it has been trained on.
Yet, to dismiss AI as a hollow imitation would be unfair. Its capabilities, in specific domains, far exceed human performance. Banks now rely heavily on AI systems to monitor millions of transactions, detecting fraud almost instantly, a feat that no team of human auditors could achieve at the same scale. In agriculture, AI-driven weather forecasts and soil sensors enable farmers to predict rainfall, manage crops effectively, and enhance food security. In education, adaptive learning platforms can tailor lessons to meet each student’s unique learning style, giving teachers powerful tools to reach struggling learners. These are not gimmicks; they are reshaping how we live, work, and think.
Still, with such benefits come significant dangers. The real problem arises when society overestimates AI’s intelligence, attributing to it a wisdom it does not possess. Algorithms are only as good as the data they consume, and data is often flawed. Recruitment systems trained on biased records have been caught replicating discrimination, silently excluding qualified women or minorities. Predictive policing tools fed with skewed crime statistics risk unfairly targeting entire communities, reinforcing cycles of distrust and marginalisation.
Even more worrying is the human temptation to outsource too much decision-making to machines. When schools, governments, or businesses rely heavily on AI, they risk eroding the human capacity for critical thinking. Societies that allow machines to make moral or civic decisions run the risk of dulling their own judgment, a peril that no amount of computing power can rectify.
This is why interrogating the “intelligence” of AI is not just an academic exercise; it is a civic responsibility. Policymakers must move beyond lip service and regulate how AI is designed and deployed, ensuring that it serves the public good rather than private profit alone. Technology companies must become more transparent about how their algorithms operate, particularly when these systems impact jobs, justice, and access to essential services. Citizens, too, have a role to play. Digital literacy must become as fundamental as reading and writing, empowering people to understand what AI can and, crucially, what it cannot do.
Ultimately, the irony of AI is this: the real intelligence lies not inside the machine but in the humans who create, guide, and govern it. AI can calculate faster than any brain, but it cannot care about the consequences of those calculations. It can analyse data at lightning speed, but it cannot empathise with the human beings behind the numbers. That is the dividing line between computation and compassion, between efficiency and wisdom.
If we maintain this distinction, AI will remain a powerful tool that amplifies human potential, rather than one that diminishes it. The smartest move is to resist the illusion that machines are thinking entities and instead recognise them for what they are: products of human ingenuity, useful only to the extent that we wield them responsibly.
In the end, the future of AI will not be dictated by algorithms, but by people. The question is not whether AI can become truly intelligent; it cannot. The real question is whether humans will remain wise enough to use it well.
Abdulhamid Abdullahi Aliyu writes on disaster management, humanitarian response, and national development.