AI

AI and the uncertain tomorrow of journalism

By Rabi Ummi Umar

Technology has always been transformative, easing burdens, accelerating processes, and reducing the demands on human effort. The emergence of artificial intelligence (AI) is no exception. 

According to the International Data Corporation (IDC), AI is projected to add $19.9 trillion to the global economy by 2030, representing 3.5% of global GDP. Small wonder nations are scrambling to embrace its promise, racing to uncover new applications and transformative capabilities across industries.

AI has simplified nearly everything, from routine office work to academic research. Yet, it embodies the very phrase “double-edged sword”. Like every innovation before it, it carries both promise and peril. Social media once dazzled with its vast resources for learning, but soon revealed its darker influence, with studies linking its overuse to declining academic performance. 

Could AI be following the same path? The questions now hang heavy: What is the result of excessive reliance on AI in journalism and communication? What happens to our collective intellect when we let machines think for us? Will AI make man redundant, or will it sharpen our creativity?

Already, teachers in secondary schools and universities lament students’ overdependence on AI. Before this wave, young learners combed through pages of books and libraries, piecing together assignments and research with sweat and patience. 

That very process gave them a broader horizon of knowledge. Now, the temptation is to let AI provide shortcuts. Is it truly an aid or a crutch? For journalism, the stakes are even higher. AI now creates deepfakes, fabricates news, disseminates disinformation, and facilitates copyright theft at an alarming rate. 

Fake content often passes for truth, staining reputations and distracting journalists from developmental reporting as they are forced instead into endless fact-checking.

Yes, using AI to polish grammar, punctuation, and spelling is helpful. But handing over the soul of reporting—the storytelling itself—to machines erodes accuracy, credibility, and that irreplaceable human touch. Readers can sense when a piece lacks heartbeat. 

Journalism, at its core, thrives on ethics, context, and empathy. AI cannot carry those values. The danger is clear: unchecked dependence on AI undermines the profession of communication. Anyone can now generate a passable article and publish it online, blurring the line between trained journalists and casual content creators. 

The profession risks losing its gatekeeping role if carelessly diluted. So, what does the future hold? Are we surrendering decades of built expertise to algorithms? Will there be a conscious regulation of AI use? How far are we willing to go to defend the integrity of journalism?

What is certain is that AI brings challenges but also opportunities. With discipline, ethical restraint, and wisdom, journalists can harness AI for richer storytelling without compromising their responsibility to the truth. 

The future of journalism in the AI era depends not on machines, but on the choices of those who hold the pen.

Rabi Ummi Umar can be reached via rabiumar058@gmail.com.

Ethical AI, public health reforms dominate resolutions as IMAN concludes 26th conference

The Islamic Medical Association of Nigeria (IMAN) has rounded off its 26th Annual General Meeting and Scientific Conference in Kaduna with strong calls for ethical regulation of artificial intelligence in healthcare and the elimination of harmful cultural practices that threaten public health.

The five-day hybrid conference, held at Arewa House from December 1–5, brought together 1,018 delegates from across Nigeria and beyond.

Discussions focused on the conference theme, “Artificial Intelligence in Healthcare,” alongside subthemes on reproductive health, harmful customs, medical ethics, palliative care, and the rising burden of non-communicable diseases.

Experts highlighted concerns about the rapid growth of AI technologies outpacing legal and religious guidance, the persistence of female genital mutilation and vaccine refusal, and the risk of AI reducing human compassion in clinical care.

Delegates also noted that cultural and religious misconceptions continue to hinder timely uptake of vaccinations, contraception, and modern treatment.

At the end of deliberations, IMAN resolved to push for Islamically grounded legal frameworks for AI and reproductive technologies, intensified collaboration with religious leaders to dispel myths, and stronger national ethical oversight of AI.

The Association also called for improved training of Muslim health professionals, development of AI-based accident-prevention systems, and expansion of telemedicine nationwide.

IMAN expressed gratitude to President Bola Ahmed Tinubu, Kaduna State Governor Sen. Uba Sani, Jigawa State Governor Mal. Umar A. Namadi, the Emir of Zazzau, and other health sector leaders for their support and hospitality.

NERDC chief renews calls for wider AI adoption in Nigerian schools

By Sabiu Abdullahi

The Executive Secretary of the Nigerian Educational Research and Development Council (NERDC), Prof. Salisu Shehu, has renewed calls for a stronger embrace of Artificial Intelligence (AI) across the country’s education system.

Prof. Shehu made the appeal during the AI in-Practice Forum held in Lagos on Wednesday, 3rd December 2025.

He thanked the organizers and participants, saying the gathering showed a shared national resolve to upgrade teaching and learning through new technological tools.

He explained that NERDC has made notable progress in introducing AI-related skills and concepts into the updated Basic and Senior Secondary Education Curriculum.

He restated the Council’s commitment to expanding innovation driven by AI within the school system.

According to him, the revised curriculum now features vital digital knowledge areas including coding, programming, artificial intelligence and robotics.

He said these additions are aimed at preparing young Nigerians for opportunities in a fast-changing digital era.

Prof. Shehu also commended the National Information Technology Development Agency (NITDA) for its steady partnership with NERDC in producing the Digital Literacy Curriculum for Basic Education.

He praised NITDA for helping shape the country’s digital learning framework and stressed the need to strengthen this cooperation.

He appealed to NITDA to widen its collaboration with NERDC, especially as the Digital Literacy Curriculum and the Digital Technologies Curriculum move into the implementation phase.

He pointed out that developing a curriculum is a major step, but effective delivery calls for continuous teamwork, capacity enhancement, infrastructure and coordinated support at the national level.

The Executive Secretary attended the forum with his Special Assistant (Technical), Dr. Garba Gandu, and the Head of the Policy and Programmes Unit, Dr. Oladiran Famade. Both officials were acknowledged for playing key roles in NERDC’s digital advancement efforts.

The AI in-Practice Forum brought together specialists, government representatives, educators and technology stakeholders.

The event focused on practical measures for expanding AI use in Nigerian schools and added to ongoing efforts to prepare learners for the demands of the future.

AI can perform calculations, but does it have the capacity to care?

By Abdulhamid Abdullahi Aliyu

When most people hear the phrase “Artificial Intelligence” (AI), their minds often drift toward futuristic fantasies: robots that think like humans, machines plotting to overthrow their creators, or computers smarter than their inventors. Science fiction has fed us these images for decades. Yet, beyond Hollywood thrillers, AI is already here, quietly shaping the world around us. It answers customer queries through chatbots, selects the next movie you’ll watch on Netflix, predicts what story appears at the top of your newsfeed, and even decides whether a bank approves your loan.

But this growing presence of AI in our daily lives forces us to confront a pressing question: how intelligent is artificial intelligence?

The honest answer is that AI is not a brain. It is not some mystical creation that understands, feels, or reasons like humans do. What appears to be “thinking” in AI is essentially mathematics—machines processing massive datasets, detecting patterns, and making predictions based on those patterns. Take medicine, for instance. AI can analyse thousands of X-rays or MRI scans in minutes, flagging possible signs of disease with astonishing speed. Yet, it does not comprehend illness, nor does it share in the burden of delivering a life-changing diagnosis. It only “sees” shapes, signals, and recurring features in data.

This distinction raises a critical debate: Is AI genuinely intelligent, or is it just an extraordinary mimic?

Human intelligence is not simply about solving problems or recalling information. It is a rich blend of memory, imagination, intuition, creativity, and moral reasoning. It includes the ability to feel empathy, wrestle with ethical dilemmas, or create art that expresses the soul. AI has none of these. It has no emotions, no conscience, no instinct for right and wrong. When it generates a song, writes an essay, or navigates a self-driving car, it is not exercising creativity or judgment. It is reproducing patterns learned from the data it has been trained on.

Yet, to dismiss AI as a hollow imitation would be unfair. Its capabilities, in specific domains, far exceed human performance. Banks now rely heavily on AI systems to monitor millions of transactions, detecting fraud almost instantly—a feat that no team of human auditors could achieve at the same scale. In agriculture, AI-driven weather forecasts and soil sensors enable farmers to predict rainfall, manage crops effectively, and enhance food security. In education, adaptive learning platforms can tailor lessons to meet each student’s unique learning style, giving teachers powerful tools to reach struggling learners. These are not gimmicks; they are reshaping how we live, work, and think.

Still, with such benefits come significant dangers. The real problem arises when society overestimates AI’s intelligence, attributing to it a wisdom it does not possess. Algorithms are only as good as the data they consume, and data is often flawed. Recruitment systems trained on biased records have been caught replicating discrimination, silently excluding qualified women or minorities. Predictive policing tools fed with skewed crime statistics risk unfairly targeting entire communities, reinforcing cycles of distrust and marginalisation.

Even more worrying is the human temptation to outsource too much decision-making to machines. When schools, governments, or businesses heavily rely on AI, they risk eroding human capacity for critical thinking. Societies that allow machines to make moral or civic decisions run the risk of dulling their own judgment, a peril that no amount of computing power can rectify.

This is why interrogating the “intelligence” of AI is not just an academic exercise; it is a civic responsibility. Policymakers must move beyond lip service and regulate how AI is designed and deployed, ensuring that it serves the public good rather than private profit alone. Technology companies must become more transparent about how their algorithms operate, particularly when these systems impact jobs, justice, and access to essential services. Citizens, too, have a role to play. Digital literacy must become as fundamental as reading and writing, empowering people to understand what AI can and, crucially, what it cannot do.

Ultimately, the irony of AI is this: the real intelligence lies not inside the machine but in the humans who create, guide, and govern it. AI can calculate faster than any brain, but it cannot care about the consequences of those calculations. It can analyse data at lightning speed, but it cannot empathise with the human beings behind the numbers. That is the dividing line between computation and compassion, between efficiency and wisdom.

If we maintain this distinction, AI will remain a powerful tool that amplifies human potential, rather than one that diminishes it. The smartest move is to resist the illusion that machines are thinking entities and instead recognise them for what they are: products of human ingenuity, useful only to the extent that we wield them responsibly.

Ultimately, the future of AI will not be dictated by algorithms, but by people. The question is not whether AI can become truly intelligent; it cannot. The real question is whether humans will remain wise enough to use it well.

Abdulhamid Abdullahi Aliyu writes on disaster management, humanitarian response, and national development.

Drones, AI will be deployed to combat oil bunkering, maritime crimes — Naval Chief

By Anwar Usman

The chief of naval staff, Idi Abbas, has said that the Nigerian Navy will adopt advanced technology, including drones and artificial intelligence, to modernise its operations and tackle maritime crime across the country’s waterways.

Speaking during his screening by the senate on Wednesday, Abbas said the navy would prioritise technological innovation over traditional fuel-heavy patrols for a smarter, faster and more cost-efficient approach to maritime security.

He said: “We will incorporate more technology, including the use of drones, to tackle maritime crime. A lot of resources are currently wasted fuelling boats to reach remote areas. Technology will help us respond faster and more effectively.”

He further stated that the navy was fully committed to improving operational efficiency and reducing costs through innovation, adding that surveillance tools would be central to preventing oil theft and illegal bunkering.

“We already have structures in place to curb maritime crimes, but I intend to incorporate more technology, especially drones,” he said.

Abbas, while responding to a question from Olamilekan Solomon, senator representing Ogun West and chairman of the senate committee on appropriation, said oil theft persists mainly in hard-to-reach creeks and coastal areas.

He added: “The theft may appear minimal individually, but when accumulated, it becomes substantial. We’re exploring drone technology to monitor and control these leakages.”

Abbas reaffirmed the navy’s commitment to its total spectrum maritime strategy, which, he said, addresses major security challenges such as piracy, oil theft, kidnapping, and banditry.

Recall that Tinubu nominated Abbas as chief of naval staff; Olufemi Oluyede as chief of defence staff; Wahidi Shaibu as chief of army staff; and Kennedy Aneke as chief of air staff; while Emmanuel Undiendeye was retained as chief of defence intelligence.

The Google gauntlet and the grandfather’s trust: An African lesson in peace

By Hauwa Mohammed Sani, PhD

I thought I was making a simple, kind gesture—choosing an older gentleman’s cab late one night after a long flight. I figured it would be an easy ride. What unfolded next wasn’t just a navigation problem; it was a bizarre, real-time collision between the old way of the world and the new, AI-driven one. This taxi ride truly happened to me last week.

​It was late, the kind of late where the airport lights look sickly and the air is thick with fatigue. I needed a ride. Looking over the line of sleek, modern taxis, my eye landed on one driven by an old man—a true gentleman of the road, old enough to be my own grandfather. A small surge of pity, mixed with a desire to give him the fare, made me choose him. Little did I know, I wasn’t just hopping into a cab; I was walking into a generational drama.

​The man knew the general area of my destination, but finding the exact estate became an odyssey. We drove, we turned, we asked passersby—a frantic, real-world search in a fog of darkness and street names. Frustrated, I reviewed the apartment information on my phone and saw a contact number within the address details. I called it.

​The voice on the other end was bright and American. “Oh, that’s my apartment, but I live in the U.S.,” she cheerfully informed me. “I’ll have someone call you.”

​True to her word, a local contact called back. “I’ve sent you the location,” she said. “Just Google it.”

​And there was the rub. My driver—a man whose mind held a living map of the city’s every alley and backstreet—and I, a modern traveller, stared at each other. Neither of us was familiar with using Google Maps.

​The poor old man was desperate. “What are the landmarks? Describe the building!” he pleaded into the night air. The girl on the phone, however, was stubbornly one-dimensional: “Just follow the GPS. Google the location.”

​That’s when it hit us both. In that moment, the taxi cab became a time capsule. Here were two people operating on landmarks, intuition, and human description, battling against an AI generation that has completely outsourced its sense of direction. Simple communication—a left at the bakery, a right past the big tree—was utterly lost.

The driver was absolutely fuming. He kept grumbling, “Where is our sense of reasoning? The machine is programming them!” To him, this reliance on tech wasn’t progress; it was the crippling of a fundamental human skill. He saw creativity and simple reason dying, replaced by a glowing screen that gives an answer but can’t hold a conversation.

​We eventually found the place, not by Google, but by a final, desperate, human description from a local. But the lesson lingered: Technology is fantastic, but sometimes, when it replaces basic common sense, it really can feel useless. We need to remember how to read the world, not just the map.

The Climax: The Race for the Flight

The next day, it was time for my return. The old man—who I now affectionately called Papa—had promised to pick me up. He came, but he was late. I kept calling, reminding him of my flight and the town’s busy roads. He assured me we would take an “outskirt” route with no traffic.

We found otherwise.

The clock was racing, and the roads were choked. In his confusion, the poor man even pulled into a station to buy fuel, a detour that felt catastrophic. But the beautiful part? He kept accepting his mistakes. He was frantic, not defensive. We kept running against the clock, fuelled by mutual anxiety.

By the time we reached the terminal, the counter was closed.

“Hajiya,” he said, addressing me, his Hausa passenger, with the honorific. “Don’t worry about the fare. Just run. Run and make your flight first.”

I rushed in and had to beg the counter staff to issue my ticket. I became the last passenger on the flight, all thanks to a desperate sprint.

The Unbreakable Trust

A display of profound, inter-tribal trust eclipsed that moment of panic. Here was Papa, a Yoruba man, sending off Hajiya, a Hausa woman, without a dime for his service, instructing me not to worry about payment until I was safely at my destination.

He kept calling me after I took off, checking on my travel and praying I made my connection. Not once did he mention money.

It wasn’t until I reached out and said, “Papa, please send me your account details,” that the drama of the day resumed (as expected, getting that detail was another adventure!). But in the long run, I paid Baba a generous amount—one he met with a flood of heartfelt prayers for my future.

This journey, from a confusing GPS battle to a race against the clock, taught me the most significant lesson: amidst all the conflict and generational friction, there is still peace and trust in connection. 

As I work on our research for the University of Essex London on conflict resolution and prepare for my ‘Build Peace’ conference in Barcelona, I realise that sometimes the greatest examples of peace aren’t in treaties, but in a simple promise between a Yoruba taxi driver and his Hausa passenger.

Hauwa Mohammed Sani, PhD, teaches at the Department of English and Literary Studies, Ahmadu Bello University, Zaria.

“AI is neither a friend nor an enemy” – Dr. Maida

By Fatima Badawi

Scholars, educators and policymakers converged at Bayero University, Kano this week for the 5th International Conference of the Nigeria Centre for Reading Research and Development (NCRRD). Held under the theme “Reading Research and Practice: The Implication of Artificial Intelligence,” the conference examined how AI-driven technologies are reshaping reading instruction, literacy assessment, publishing and access to texts across Nigeria and the larger Global South.

The opening session featured a keynote address by Dr. Aminu Maida, delivered on his behalf by Dr. Isma’il Adegbite. Dr. Maida, a leading figure in Nigeria’s technology and telecommunications space, set the tone by urging researchers and practitioners to treat AI as both an opportunity and a responsibility: a tool that can expand access to reading materials and personalised learning, but one that must be governed by inclusive policy and literacy-centred design.

The conference’s intellectual programme was anchored by lead papers from eminent figures in Nigerian education and development. Professor Sadiya Daura, Director General of the National Teachers’ Institute (NTI), presented her lead paper on teacher preparation for AI-enhanced classrooms, arguing that pre-service and in-service teacher education must integrate digital literacies and critical appraisal of algorithmic tools. Professor Mohammed Laminu Mele, the Vice-Chancellor of the University of Maiduguri, addressed infrastructure and equity, highlighting that without targeted investment in connectivity and localized content, AI risks widening existing literacy gaps in underserved communities.

In her remarks, Professor Amina Adamu, Director of the Nigeria Centre for Reading Research and Development, framed the conference’s aims around actionable outcomes: stronger university–school partnerships, pilot programmes that deploy AI tools for mother-tongue reading instruction, and an ethics working group to develop guidelines for the use of automated assessment and adaptive reading platforms. She emphasised the Centre’s commitment to research that is directly useful to classrooms and communities in Northern Nigeria, and thanked the partners who have supported the Centre from its inception to date. International and local partners participating in the conference include QEDA, Ubongo, NERDC, UBEC, Plain and USAID, among many others.

Panel discussions explored concrete applications: AI-assisted text-to-speech and speech-to-text for low-resource languages; automated item generation for formative reading assessments; and data-driven reading interventions that preserve local genres and oral traditions rather than replacing them. Most of the papers presented stressed that technology pilots must be accompanied by teacher coaching, community engagement and open-access content.

Participants included university academics, representatives from teacher education institutions, ministry officials, civil society literacy advocates and publishing professionals. The conference closed with a call for a multi-stakeholder roadmap: investment in localized datasets and annotated corpora for Nigerian languages, professional development pathways for teachers, and research ethics protocols to ensure that AI systems amplify, rather than marginalize, local knowledge and reading practices.

Organisers said the 5th NCRRD conference will feed into pilot projects and policy briefs to be shared with educational authorities and development partners. Delegates left with a clear message: AI’s promise for reading and literacy is real, but realising it will require literacy-centred design, purposeful investment and a sustained partnership between researchers, teachers and communities.

China introduces Artificial Intelligence education in schools

By Muhammad Abubakar 

China has taken a significant step in preparing its next generation for the digital future by introducing artificial intelligence (AI) education across primary and secondary schools. 

The Ministry of Education has announced that AI will now be included in the national curriculum, with lessons ranging from basic coding and machine learning concepts to discussions on the ethical implications of technology.

Officials say the program aims to build students’ digital literacy and give them early exposure to skills critical in the 21st-century economy. 

Pilot projects in cities such as Shanghai and Shenzhen have already shown strong interest, with students using AI-powered tools in mathematics, language learning, and creative projects.

Educators emphasise that the initiative is not only about technical training but also about fostering innovation, problem-solving, and responsible use of emerging technologies. “We want our children to understand AI as both a tool and a responsibility,” said an education ministry spokesperson.

The move reflects China’s broader ambition to lead in AI development globally, while also addressing concerns that young people must be equipped to navigate a rapidly changing technological landscape.

The coming age of AI, knowledge, conscience, and the future of human creativity

By Ibraheem A. Waziri

Artificial Intelligence has arrived, and in many ways, it is already surpassing humankind in numerous tasks – from information retrieval and decision-making to writing essays, diagnosing illnesses, and simulating human conversations.

The rapid advancement of AI over the past decade is no longer a marvel; it is a living reality. With its relentless progress, we are standing on the cusp of a new era, an age in which the human mind and artificial intelligence may become intimately intertwined, both physically and cognitively. 

Over the next ten to twenty years, we can expect to witness the rise of brain-chip implants, neural devices capable of recording thoughts and memories, and integrating them with external data in real-time. This development, already underway in advanced laboratories, will redefine the limits of human cognition. Learning may no longer require years of study. Instead, information could be uploaded directly into the brain, rendering traditional education models obsolete or significantly transformed. 

The barriers to knowledge acquisition—once dependent on time, resources, and access—would essentially vanish. Everyone might stand on equal ground when it comes to information. In this sense, AI could appear to be the long-awaited solution to humanity’s historic struggle with ignorance. A world where information is no longer hoarded but instantly shared would mark a fundamental shift in human civilisation. 

Yet, in this possible future, one thing remains uniquely human: our conscience. The power of choice, the intention behind our actions, and the moral compass guiding our decisions remain beyond the reach of AI. The Islamic prophetic saying “Innamal a’malu binniyat” (“actions are judged by intentions”) takes on renewed weight. When knowledge becomes universally accessible, what will distinguish one person from another is no longer what they know, but how and why they use it. 

AI may provide the tools, but only our conscience can determine their application. In this new world, the essence of being human—the power to choose, to discern, and to act with purpose—becomes our most valuable trait. 

In writing and speech, large language models (LLMs) have dramatically reduced the burden of expression. AI tools can correct grammar, enhance clarity, and structure arguments. In this way, AI handles the “form,” allowing humans to focus more on “substance”: the meaning, purpose, and ethical significance of their message. 

Yet the human mind’s natural tendency to ask questions, to imagine, and to critique will not diminish. If anything, it will deepen. Humans are not passive recipients of knowledge; we are also its interpreters, critics, and re-creators. Far from becoming complacent in the presence of AI, people will begin to question it, reshape it, and rise above it. 

The reason is simple: the human mind cannot stagnate. It searches for meaning and thrives in ambiguity. Our ability to reflect, imagine, and dwell on abstract ideas remains unmatched. AI can mimic patterns and predict outcomes, but it cannot experience wonder, nor can it feel regret, nor grapple with moral ambiguity. 

Creativity itself arises from three essential human components: conscience, emotions, and environment. AI may support this triad; it may even challenge or stimulate it, but it cannot generate it. AI is a product of creativity, not its source. And it cannot be the source of what it did not create. 

By automating routine tasks, AI liberates the human mind to think more deeply and act more boldly. It frees us from mechanical repetition, allowing for higher-order thinking, innovation, and artistry. Writers, thinkers, inventors, and designers now have more time for exploration and imagination, which remain the core of human advancement. 

This evolving relationship mirrors humanity’s relationship with the Divine. Just as no human can rival the wisdom or creative force of God, AI can never match the core of our humanity. It cannot outfeel us. It cannot outdo us. It cannot outvalue us. It cannot possess conscience, consciousness, or emotion: the divine triad that defines who we are. 

When AI becomes fully integrated into daily life, at work, in education, healthcare, governance, and homes, we won’t become less human. In fact, we will become more human. We will have to let go of much of the mechanical and embrace the reflective. We will have more space to think, more time to connect, and more clarity to imagine. 

And in this space, we may at last pursue what has always eluded us, even in our most extraordinary scientific and industrial feats: wisdom. While AI may provide us with access to vast amounts of information, only the human soul, guided by conscience, can discern what is just, what is meaningful, and what is beautiful. 

AI does not represent the end of humanity. It is the beginning of a new chapter, one filled with tools of immense potential. But as with all tools, their value depends on the hands that use them. In the age of AI, the true measure of a person will no longer be what they know, but why they act and how they choose to use what they have. 

AI may become the great equaliser of knowledge, but it is only the human conscience that can give that knowledge direction, purpose, and value. And that is a gift no machine can replicate.

Redefining relevance: The strategic role of accountants in an AI-driven era

By Sunusi Abubakar Birnin Kudu

Accounting, traditionally seen as the process of recording, summarising, analysing, and reporting the financial transactions of individuals, businesses, or other organisations, is facing a transformative shift driven by technological advancement, especially artificial intelligence (AI). AI-powered accounting software has taken over many of the routine tasks performed by accountants. 

AI now automates core accounting tasks such as categorisation, data entry, and reconciliation, and these tools efficiently deliver real-time financial statements and modern finance metrics. This shift has created fears of job displacement and professional irrelevance among accounting students and accountants, and it calls on accountants to adapt to the changes or risk becoming irrelevant. A study published by Forbes supports these concerns, noting that the fear of being replaced by automation was among the factors that led over 300,000 accountants and auditors to leave their jobs between 2019 and 2021. 

However, there is a contrary narrative. A recent survey by DataSnipper, an automation platform, indicates that auditing and accounting job vacancies rose by 25% in 2024. This was attributed to high demand for accounting personnel, the retirement of those in practice, and the role of AI in cutting down auditors’ repetitive work. The survey also found that 83% of auditors worldwide are more likely to stay with firms that have AI initiatives. 

These findings illustrate a key truth. AI has posed both threats and opportunities to accountants and the accounting profession. However, the determining factor lies in how accountants respond to them. 

Although AI can perform many accounting functions that accountants carry out, it can’t replace the human judgment required to weigh up different variables and make an informed decision. For this reason, accountants might have a respite. However, they need to evolve from being financial reporters to becoming strategic advisors, leveraging financial data analytics (DA) to interpret data, advise their clients, and enhance organisational performance.

Financial data analytics in accounting involves making critical financial decisions for an organisation. It enables accountants to keep track of the overall organisation’s functions. Accountants with DA knowledge can help organisations to make informed decisions. They can assist organisations in maintaining records, budgeting and financial forecasting, and setting targets and projections with high accuracy.

An accountant can use DA to guide company-employee relations by establishing key performance indicators to analyse employees’ overall financial impact on the company. Through AI-driven analytic tools like Zoho and Qlik, accountants can simplify complex financial circumstances into useful information. 

Furthermore, in tax consultancy and advisory services, accountants can use financial data analytics to guide clients through tax planning and compliance. They can also liaise with revenue agencies for efficient revenue collection. Data analytics tools can be harnessed by accountants based on the nature and circumstances of the clients. 

Accountants who transition from ordinary financial data processing to advanced financial data interpretation tend to be more relevant to the accounting profession. Adopting data analytics helps accountants stay relevant in a competitive labour market and improves their professionalism and expertise. 

The accounting profession is no longer limited to classification, summarisation, and reporting. It requires accurate data analysis and informed decisions. AI is an opportunity for accountants, not a threat. Accountants shouldn’t resist this development but rather adapt to it, harness it, and grow. This is the only way to redefine their relevance in an AI-driven era.

Sunusi Abubakar (ACA in view) wrote this from Arawa B. Akko Local Government, Gombe State.