AI

Understanding ChatGPT and addressing issues of concern

By Ismail Ismail Tijjani

A large language model (LLM) is a subset of generative AI that focuses on generating human-like text based on the input it receives. Evidence shows how good it is at generating creative text formats, like poems, code, scripts, musical pieces, emails, letters, etc. ChatGPT played a significant role in bringing LLMs to wider public attention, though it wasn't the first. I will use ChatGPT throughout this article because of its popularity, though there are other well-known models like Gemini, Bard, LaMDA and more.

Let me provide a very simple description of how ChatGPT works. Just imagine you enter a library and ask the librarian a question. The librarian will first try to understand your question and then scan the shelves, looking for books they think might contain your specific answer. Using their records and expertise, they connect related stories from different books and give you the best possible answer. This is what ChatGPT does in a few seconds.

ChatGPT underwent training on an extensive and diverse internet dataset covering a wide spectrum of subjects, styles and perspectives. Its core capability lies in the transformer architecture, a neural network designed primarily for language processing: it encodes an input text, analyzes its structure and meaning, and decodes it to produce an output by predicting the most likely next word in a sequence.
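To make the next-word idea concrete, here is a toy sketch in Python. The probability table is invented purely for illustration; a real transformer learns billions of parameters from its training data rather than using a hand-written lookup, but the greedy "pick the most likely next word" loop is the same in spirit.

```python
# Toy illustration of next-word prediction, the core idea behind LLMs.
# A real model learns these probabilities from vast text corpora; here
# we hard-code a tiny table so the generation loop is runnable.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.6, "end": 0.4},
}

def generate(start, max_words=5):
    words = [start]
    while len(words) < max_words:
        choices = bigram_probs.get(words[-1])
        if not choices:
            break
        # Greedy decoding: pick the most probable next word.
        next_word = max(choices, key=choices.get)
        if next_word == "end":
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # the cat sat down
```

Real chatbots sample from these probability distributions instead of always taking the top word, which is why the same prompt can yield different answers.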

Certainly, the introduction of new technologies often sparks heated debates. Critics often strive to oppose and even reverse these advancements. However, their efforts typically falter in the end. Some critics may genuinely misunderstand the technology, while others, perhaps a majority, are driven by the pursuit of publicity rather than accurate assessment.

ChatGPT was no exception. When it was launched in November 2022, some people argued that it would make students lazy, cause job losses for editors, fuel plagiarism, breach copyrights, steal people's data, exhibit sentience and intentional bias, spread disinformation, create deepfakes, and much more. We will discuss some of these concerns below. Some of the allegations are true and have already been addressed, while others are false.

ChatGPT lacks the ability to discern whether information is biased, false or misleading. It operates based on its programmed structure and produces results accordingly. OpenAI, the creator of ChatGPT, has taken measures to enhance the model. Although the technical details haven't been disclosed, this likely involves implementing guardrails and filtering mechanisms to address accusations of misinformation, bias, falsehood and more.

For students and researchers, ChatGPT will serve like an advanced internet search engine that generates output after consulting multiple webpages, saving the time and stress of hopping between sites. It will in no way make students lazy. However, some concerns related to students' use will be discussed in a later article.

Its ability to remember previous prompts, though impressive, doesn't imply sentience. It is merely a clever technique within the transformer architecture that lets the model carry context across a conversation. This raises separate questions about the path to achieving true artificial general intelligence (AGI), discussed in my previous article.

The impact of ChatGPT on jobs is a complex and nuanced issue with both potential downsides and upsides. While some job losses are inevitable, they will likely be offset by the creation of new ones. Adapting to this changing landscape through education, reskilling and responsible policymaking is key to ensuring a future where AI benefits everyone.

Despite the evident improvements seen since ChatGPT’s initial release, OpenAI must continue to dedicate significant resources to refining its model. This is crucial not only to mitigate legal risks but also to enhance its accuracy and reliability for responsible public use. While striving for absolute perfection is unrealistic, reaching a consistently high level of trustworthiness should be a priority. Additionally, users must be mindful of the model’s limitations and exercise critical judgment, fact-checking, and verification before relying on its output.

AI is here for good. Innovation often sparks a variety of perspectives, and AI is not an exception. Some people believe that AI has the potential to solve some of the world’s most pressing problems, such as poverty, hunger, climate change, corruption and disease. Others are concerned about the potential for misuse, such as the development of autonomous weapons systems or the use of AI to manipulate people.

AI is not like any other innovation we have seen before in the history of humankind. It is among the most powerful of all, and it is likely to be among the last innovations we will ever need.

AI is already making significant positive impacts in various industries, such as healthcare, finance, retail, manufacturing and many others. Of course, like any industry, there may be individuals with malicious intent in the AI sector who are willing to exploit it for negative purposes. For example, I recently came across an app that digitally "undresses" images of women with unsettling precision, which raised concerns. However, the actions of such individuals should not lead to the shutdown of the entire industry.

We don’t shut down the arms and weapons industry because of terrorists, the financial industry because of fraud, the biotechnology industry because of bioweapons or social media because of misinformation and hate speech. Instead, we regulate them by establishing governing bodies to oversee their operations and foster collaboration between top companies, stakeholders, researchers and the government to develop effective solutions. This approach will also be applied to AI.

Our primary focus should be on humanity. It is crucial for everyone to actively participate and collaborate in order to develop effective solutions that will propel us and the industry forward as a unified whole.

Path to Artificial General Intelligence (AGI)

The year 2023 stood out as an exceptional period in the AI industry, marking the moment when the masses truly connected with the essence of AI.

AI has been around for years, primarily utilized in backend functions like relevance ranking, personalization, spam detection, and more. ChatGPT was the remarkable innovation that astonished the world, revealing the true potential of AI. While it may not have surprised researchers who had already witnessed AI capabilities in the lab, its impact on a broader audience is undeniable, to the extent that many non-technical individuals use the words AI and ChatGPT interchangeably, thinking they mean the same thing.

Other notable innovations in 2023 include Hugging Face, Google Bard, CapCut and many others.

Are these innovations clearing a path to Artificial General Intelligence (AGI), which informally means machines reaching human-level intelligence? This question remains unclear to researchers, as there are two camps with differing opinions on the matter. Some believe AGI is imminent, while others hold a more skeptical view.

Yann LeCun, the chief AI scientist at Meta, is among those with a skeptical perspective. He argues that language models like ChatGPT, which people cite as evidence that AGI is imminent, are not even as smart as a cat, which is the truth, and he believes AGI will take decades or even a century, a point of view that I share.

For machines to achieve true intelligence, they must possess both cognitive and metacognitive abilities. While significant advancements have been made in cognitive AI, bridging the gap to metacognitive intelligence remains the key barrier, and researchers are diligently seeking solutions to overcome this challenge. Metacognitive intelligence necessitates the ability to sense the environment and effectively process and interpret sensory signals. Note that this discussion concerns machines becoming intelligent at all, not as intelligent as a human being, which is what AGI means. This clearly shows that we are nowhere near AGI.

The timeline of AGI is not simply a matter of waiting; it depends on the pace of research and innovation. Improvements in advanced neural networks, symbolic reasoning, embodied cognition, and unsupervised and reinforcement learning will go a long way toward clearing the path to AGI.

The path to AGI is not a solitary trek for AI researchers. It demands a symphony of minds, where scientists, psychologists, engineers and other researchers from diverse fields join hands in a grand collaborative effort. Only through their combined expertise and tireless dedication can we hope to unlock the secrets of true machine intelligence.

Artificial Intelligence: The good, the bad and the ugly

By Haruna Chiroma

Artificial Intelligence, commonly known as AI, has recently garnered significant attention in mainstream media outlets such as BBC, CNN, Al-Jazeera, Daily Trust Newspaper, Forbes, New York Times, Wired, and others. It is widely considered to be the most talked-about scientific discipline globally at present. AI is like a smart and helpful digital friend. It’s a computer system trained to perform tasks that usually require human intelligence, such as learning, understanding language, and solving societal problems.

The AI-based computer system learns from experience and adjusts to new information, making it a bit like a digital wizard that can handle various tasks independently. Siri on Apple devices, Google Assistant on Android phones, Amazon's Alexa, facial recognition capabilities, Facebook's language translation feature and friend suggestions on Facebook are examples of AI systems in operation, and they illustrate how AI impacts everyday tasks.

The influence of AI on our daily lives is increasing across various domains, including security, small and medium enterprises, education, communication, health, business, entertainment, transportation, homes and workplaces. The realm of AI is a double-edged sword. While we have elucidated the opportunities and benefits, there are growing concerns surrounding risks, ethical considerations, job displacement, potential threats, and legal conflicts. Here, I will delve into the positive aspects, reserving a discussion of some negative dimensions later in the article.

AI is the foundation for transformative technologies like the widely discussed ChatGPT, which has over 18 million active users daily. Now, the GPT Store has been launched for business. A non-invasive AI device has been invented that reads what a human is thinking, converts it to text and displays the text on a computer screen for everyone to read. The research was conducted and pilot-tested at the University of Sydney. Such a device has multifaceted benefits for humans. For example, anyone with a speech impairment can use it to communicate their thoughts and wants without talking.

There is an AI tool for converting text to video that only requires the user to write a story in text; prompt the tool with the story, and a video based on it will be generated. I foresee the possibility of rapidly integrating text-to-video converters into phones in the near future. Imagine that with a simple request like, "Hey Siri, Alexa or Google Assistant, turn my story into a cool video," you're on your way to experiencing your tale in vibrant animations and vivid scenes. With its AI prowess, Siri makes storytelling not just a written adventure but a visual journey for all to enjoy. The text-to-video converter can potentially revolutionize the movie industry by reducing the cost and time of making movies. Content creators like skit makers can use such a tool to create short videos that engage their followers.

In a remarkable leap forward for technology, a cutting-edge AI tool has emerged that revolutionizes how we experience videos by effortlessly transcending language boundaries. It seamlessly translates a video's spoken words into different languages, showing the same person speaking each language and opening up new possibilities for global communication. The tool operates as a user-friendly interface where videos are uploaded and transformed into a linguistic tapestry. It not only translates spoken words but also adapts captions and subtitles, preserving the original intent and emotions of the content. Filmmakers and content creators worldwide have already begun incorporating such tools into their creative process, giving viewers a more inclusive and immersive experience. As stories seamlessly unfold in multiple languages, the tool adds a new layer of depth to digital storytelling.

Let's now turn to the dark side of AI. Should AI systems attain or surpass human intelligence, there exists the potential for these systems to make decisions that could lead to the extinction of the human race, or to decide to go to war with humans. A recent incident in South Korea exemplifies the risks: an industrial robot designed to identify boxes mistakenly perceived a worker carrying a box as a box, with fatal consequences. Legal conflicts further highlight the challenges posed by AI advancements.

The New York Times has initiated a court case against OpenAI, the owner of ChatGPT, in which Microsoft holds a significant investment. The lawsuit alleges copyright infringement by ChatGPT, prompting OpenAI to assert that developing a powerful system like ChatGPT without some level of copyright implications is unfeasible. This legal dispute opens up discussions on copyright in the AI era. Google is also facing AI-related legal action, with a patent infringement suit filed by Singular Computing over the Tensor Processing Unit, Google's AI processor.

Concerns about job displacement loom large, with an estimated 800 million jobs expected to be replaced by AI in 2024 alone. Additionally, the unethical use of AI tools to generate false or misleading information disseminated through social media raises significant concerns about potential threats to coming democratic elections in Asia, the USA, South America, and the UK, potentially leading to civil unrest.

On a final note, criticisms have emerged regarding AI tools employed in recruitment processes, with accusations of bias and ethical concerns. In a recent publication on the responsible use of AI systems, Shelton Leipzig categorizes AI systems into three groups: low risk, high risk and prohibited, based on the varying levels of risk associated with each system. Certain situations are deemed inappropriate for deploying AI at all, as exemplified by its exclusion from voting during elections.

Some AI systems are considered very low risk, such as those employed in video games or product recommendation systems on e-commerce sites. However, most AI systems fall into the high-risk category, including those used in recruitment and financial applications; 140 use cases were identified within this classification. These multifaceted challenges underscore the complex landscape surrounding AI development and deployment.     

Haruna Chiroma, University Professor of Artificial Intelligence, wrote from the University of Hafr Al Batin, Saudi Arabia, via freedonchi@yahoo.com.

AI takes center stage: A look at the powerful new Galaxy S24 lineup

By Sabiu Abdullahi

The dust has settled after the dazzling launch of the Samsung Galaxy S24 series, and the tech world is abuzz with excitement (and a touch of scrutiny).

While not a complete overhaul, the S24 lineup boasts refined features, AI-powered innovations, and battery life improvements that promise to solidify Samsung’s position as a smartphone leader.

Let’s dive deep into what makes these new flagships tick.

Key Features:

AI Focus: The S24 series prioritizes artificial intelligence, with enhanced camera processing, AI-powered search tools, and personalized user experiences. The ProVisual AI Engine elevates photo quality, while AI Edit Suggestions offer helpful tweaks for your snapshots.

Display Upgrades: While the S24 retains its Full HD+ resolution, the S24 Plus gets a glorious Quad HD+ boost. Both models now boast a staggering 2,600 nits peak brightness, ensuring vibrant visuals even in harsh sunlight. Additionally, variable 120Hz refresh rates adjust to optimize battery life.

Camera Powerhouse: All three models pack a punch with their camera systems. The S24 and S24 Plus sport a 50MP main sensor, 12MP ultrawide, and 10MP telephoto lens, while the S24 Ultra ups the ante with a groundbreaking 200MP main sensor and improved zoom capabilities.

Long-lasting Power: Battery life receives a welcome boost, with the S24 housing a 4,000mAh battery and the S24 Plus carrying a hefty 4,900mAh cell. The S24 Ultra retains its impressive 5,000mAh battery, ensuring you stay connected all day long.

Price and Availability: The S24 and S24 Plus maintain their familiar price points of $799 and $999 respectively, while the S24 Ultra sees a $100 bump to $1,300. Pre-orders are already open, with in-store sales kicking off on January 31st.

Conclusion

The Galaxy S24 series may not be a revolutionary leap, but it delivers a refined and powerful offering for Android enthusiasts. The focus on AI, upgraded displays, camera improvements, and extended battery life solidifies Samsung’s commitment to cutting-edge technology. Whether you’re a die-hard Samsung fan or simply looking for the best Android experience, the S24 series deserves a closer look.

Welcome to 2024 – the Digital Age!

By Ismaila Academician

People often frown at content generated using AI. And I believe there is another set of people who copy and paste AI-generated content without any consideration or editing. Perhaps the former group relies on the latter to pass its judgment. But I think both groups misunderstand the idea: one misuses AI, the other passes its opinions subjectively. Both fail to understand that AI is here not to do the actual work but to help us do it better.

Literally, intelligence refers to the capacity to understand principles, facts or meanings and apply them in practice. Artificial, on the other hand, implies something made rather than occurring naturally. Put the two together and you have a clear picture of what AI is.

As a domain, Artificial Intelligence or AI, is a branch of computer science that aims to create machines capable of “thinking” and “acting” intelligently, much like humans. This could encompass various forms of intelligence, such as linguistic, biological and mathematical intelligence.

AI is a byproduct of human intelligence. It’s a human construct with limited and subjective experiences. It’s like a mirror reflecting our cognitive abilities. AI’s intelligence, designed to mimic our thought processes and actions, is a derivative of human intelligence. AI is currently available in various forms. The one we are most familiar with is ChatGPT. There are also thousands of machines in numerous industries doing remarkable jobs.

One of the key differences between AI and humans is predictability. AI is predictable as it operates based on pre-defined patterns subject to human manipulation. In contrast, we humans are unpredictable. We’re capable of creativity and spontaneity. We assume personality traits and express emotions. Human power is inimitable!

For instance, AI can recognise images of a cat but can never "feel or understand" what a cat is in reality, because it doesn't possess a mind of its own. AI can tell you the weather condition of your location, whether it's cold, hot, sunny, hazy or raining, without feeling any of it. It does not have feelings, though it can describe them.

However, another striking difference between AI and humans is 'Consciousness'. AI can neither assume nor replicate human consciousness. AI strictly operates based on algorithms designed by humans. The greater the data input, the bigger the data output, and vice versa.

There's a common misconception or fear that AI will replace us and render us jobless. But that's far from the truth. AI was primarily created to complement our abilities. It's a tool designed to help us do our jobs more efficiently and effectively, to enhance our skills, and to unearth and explore our hidden talents. AI is NOT here to do the work for us, but rather to teach us how to do the work faster, better and smarter.

Artificial Intelligence is not merely a trend. It's a constant human companion, like dogs and cats, that will remain useful and loyal to humans as long as humanity stands. As we steadily navigate through the Digital Age, understanding AI is no longer a choice but a necessity. It's crucial to know, learn and utilise the power of AI for productivity.

Sometimes, change can be difficult to cope with. But resisting change means missing out on the opportunities that come along with it. Instead of viewing AI as a threat, we should embrace it as a tool for improvement, a lifelong companion that's here to make our lives better.

Embrace AI, embrace the future!

Ismaila Academician can be reached via; 07034413534 or his email: ismailaacademician@gmail.com

Can AI surpass human intelligence?

By Muhammad Ubale Kiru

Whether Artificial Intelligence (AI) can surpass human intelligence is a complex and debated topic. Many scientists, AI users and observers have argued over whether what we see in movies about AI surpassing human intelligence will come true. I have asked this question several times, and colleagues at work and friends on social media have asked me whether this myth can come true. Since then, I have been gathering the momentum, strength and proof to answer this question.

However, something triggered my urge to share my thoughts on this question today, after I received a notification from OpenAI, the company that developed the famous ChatGPT, informing users about its new "Terms of Use and Privacy Policy." One of the newly updated clauses says, "We have clarified that we may collect information you provide us, such as when you participate in our events or surveys."

The above statement has directly or indirectly revealed that if you agree to use ChatGPT, you must accept that OpenAI will collect personal user information for research and training purposes. A non-specialist will not understand the implications or consequences of that. One may think it is business as usual, since social media companies like Facebook and X (formerly Twitter) also collect users' personal information for business and quality-assurance purposes.

So, what is the real implication here?

It is simple. AI and machine-learning algorithms rely heavily on data to learn, like weeds drawing nourishment from a host plant. The more data they consume, the more intelligent they become. Most of us are already using AI to handle our day-to-day activities and problems. For instance, tasks that used to take me seven days to complete can now be done in 10 minutes. I am handing over my tasks to AI to handle for me.

Each time I ask AI to handle my task, AI learns the task more and more. Humans perfect their skills through constant and regular learning. Now I'm handing over most of my tasks to AI; AI learns while I lose, because previously I learned from my work experience, and now AI does the work for me. AI is becoming more intelligent and capable, while I am becoming less intelligent and less capable. By the way, I'm not the only one in this mess. Nowadays, even programmers, who rely on constant practice to improve their coding skills, are using AI to generate code or programs that used to take months to complete.

ChatGPT, for example, is used by millions of users daily. When it was first developed, they used random internet data to train its learning models. Now, they are using real-time human input (data) to train the AI. If you look at the core foundation of any AI in the world, it is designed to capitalise on learning from its environment. Our phones are AI-enabled, laptops are AI-enabled, web apps, games, calendars—everything is now AI-enabled. The more we use AI, the more AI learns about us.

Today, your phone keypad knows more about your words and thoughts than you know yourself. As you begin typing, it completes the rest for you. So, with time, your AI-enabled devices would learn more about you than you could ever learn about yourself. Thus, what is left of us if AI has learned everything about us? In Sun Tzu’s book, The Art of War, he says, “Knowing your enemy is akin to winning half the battle. Understanding their strengths and weaknesses provides a strategic advantage that can pave the way to victory.”
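That keypad learning can be sketched as a tiny word-frequency model. This is a deliberate simplification, and the `Autocomplete` class below is invented for illustration (real keyboards use far richer neural models), but it shows how suggestions sharpen as more of your typing is observed.

```python
from collections import Counter

# Hypothetical sketch of a keyboard that "learns" your words: it keeps
# a frequency table of everything typed and completes prefixes with
# the word you use most often.
class Autocomplete:
    def __init__(self):
        self.freq = Counter()

    def observe(self, text):
        # Learn from everything the user types.
        self.freq.update(text.lower().split())

    def suggest(self, prefix):
        matches = [w for w in self.freq if w.startswith(prefix.lower())]
        # The most frequently typed match wins; None if nothing matches.
        return max(matches, key=lambda w: self.freq[w], default=None)

kb = Autocomplete()
kb.observe("good morning good afternoon good night")
kb.observe("goodness gracious")
print(kb.suggest("goo"))  # good
```

The more you type, the better the guesses, which is exactly the dynamic the author describes: the device accumulates a statistical picture of you.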

The question of whether AI can start a revolution or take over the world, as we have seen in movies, is another debate for another day. Tesla CEO Elon Musk and AI researcher Lex Fridman are among the few people in the world consistently voicing concern about the potential danger of AI, and they have continued to call for regulation before AI gets out of hand. The technology has staggering growth potential and is advancing at a breakneck pace.

To this end, I urge policymakers and regulatory bodies to take necessary precautions before AI gets out of control. AI is undoubtedly powerful, and if unleashed without caution, it can create devastating chaos.

Let me hear your thoughts in the comment box.

(c) Muhammad Ubale Kiru

We have the simple Artificial Intelligence to secure our rail tracks

By Hamid Al-Hassan Hamid

I wrote twice about possible attacks on our rail tracks; it was just a matter of time. This, in my opinion, is just a test run; expect more to come if we continue to neglect simple and sincere advice due to ineptitude and corruption. Rail tracks are not left alone, on their own, anywhere in the world. They are protected, monitored and secured, through determination and sincerity of purpose. How many souls would have been lost had the train derailed and crashed! How disastrous!

Again, with all our tech universities, we cannot build local drones to fly 24/7 and monitor at least our rail tracks. The only thing our professors are good at is attacking another person who became a professor that they do not like.

The technology we need to curb these security challenges is too expensive to buy; we do not have the money. But it is cheaper to develop, and we can do that locally.

I once reached out to MTN, asking how much it would cost me to connect drones that would fly across the country, especially our forests, for intelligence gathering. I would build the server, and they would provide the network without internet data; I don't need the internet. They gave me two options:

1. Pay 150,000 naira monthly to connect as many drones as possible nationwide.

2. Make them partners in the project, and I will not have to pay a dime.

They needed confirmation and approval from appropriate security bodies. It has been about a year or so now. Getting the interest of the appropriate security bodies alone is more complex than quantum physics.

In Africa, the only thing we love is physical cash, but I don’t blame us. I just pray that God cures our sickness soon.

We need to establish tech defence companies that are private entities, not owned by the government.

Artificial Intelligence has many practical use cases in Africa. In addition, it will be easier to implement here, because the biggest fear about Artificial Intelligence elsewhere is that it will compete with humans for jobs and take those jobs away.

Africans don’t want jobs; they just want to have something to eat throughout the week. Forget about the rampant cry of unemployment. As soon as you employ, you will begin to see. Artificial Intelligence will have no resistance in Africa, especially in security.

What shall we do?

I have been getting messages and comments from brothers trying to help with the private defence tech company startup. Some proposed sending proposals to either the Minister of Communications and Digital Economy, Professor Isa Ali Ibrahim Pantami, or Vice-President Yemi Osinbajo. Some also proposed promoting the idea in media houses until it reaches the ears of those in power.

First of all, I have access to Professor Pantami through childhood friends who can meet him whenever they want to. I also know people that can reach the VP. But I disdain the idea of sending proposals.

This is what I am doing at the moment:

I have a team of four individuals with backgrounds in the military and tech. We are making plans to partner with anybody (with genuine sincerity) interested in starting something simple that can be pushed into the market for testing and continue building from there.

At this point, what we want is to partner with the research department of any Nigerian university, or with military institutions like the Air Force Institute of Technology (AFIT) or the Nigerian Defence Academy (NDA). We want to start by building an AI that will be able to:

1. Identify faces at entrances through cameras.

2. Log check-in and checkout time of each face.

3. Determine if anyone checked in but did not check out, and flag such cases for analysis of why the checkout did not occur. A checkout may be missed because the camera did not capture the face, or because another exit was used; in the latter case, we would want to know whether using that alternate exit was valid, and improve the AI's accuracy on missed faces.

4. Print daily, weekly, monthly and yearly statistics on check-ins and checkouts, per individual and for all entrants.

5. Try predicting possible movements of each individual based on the data collected as they grow.

6. Send silent alerts to mobile phones of respective security personnel on duty if a breach in the entry is detected, for example, an individual using an entrance or exit that is not within their jurisdiction.
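As a rough, hypothetical sketch of points 2 to 4 above, the check-in/checkout ledger could start as simply as this (all names here are invented for illustration; the face-recognition step in point 1 would come from a computer-vision library such as OpenCV and is represented below by plain ID strings):

```python
from datetime import datetime

# Minimal check-in/checkout ledger. Each recognized face is an ID
# string; a real system would attach this to a camera pipeline.
class EntryLog:
    def __init__(self):
        self.records = []        # completed visits: (face_id, in, out)
        self.open_entries = {}   # face_id -> check-in time, not yet out

    def check_in(self, face_id, when):
        self.open_entries[face_id] = when

    def check_out(self, face_id, when):
        check_in_time = self.open_entries.pop(face_id, None)
        self.records.append((face_id, check_in_time, when))

    def missing_checkouts(self):
        # Point 3: anyone who checked in but never checked out.
        return sorted(self.open_entries)

log = EntryLog()
log.check_in("staff-041", datetime(2024, 1, 8, 8, 0))
log.check_in("staff-112", datetime(2024, 1, 8, 8, 5))
log.check_out("staff-041", datetime(2024, 1, 8, 16, 30))
print(log.missing_checkouts())  # ['staff-112']
```

Aggregating `records` by day, week or month would give the statistics in point 4, and the same flagging logic could feed the silent alerts in point 6.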

We can develop the AI, create the server, and assist with the statistics as part of our responsibility.

We can start by using cheap Android phones as cameras at the respective entrances and exits, connecting them to the server via Wi-Fi; this greatly cuts costs at the initial stage.

We want to grow the system gradually by later introducing drones to fly outside and see if they can recognise personnel who have been logged in the building at various entrances, identify the cars they use, log their number plates, identify which canteen in the vicinity they like to take coffee at, and so on. Then we would gradually scale to state and federal levels.

It is very simple. But can corruption and corrupt individuals allow this?

Hamid is a social commentator, an expert in AI and writes from Sudan.