Artificial Intelligence is everywhere now, and there is no doubt that it has been transformational across our personal and working lives. In my opinion, the impact of Generative AI can be measured in both positive and negative ways.
On the positive side, it can save us time and effort by getting us started on time-consuming and often mundane tasks. I often use Copilot to get me started on a PowerPoint deck or a proposal document, which I will then tailor to suit my requirements and, hopefully, to capture my own personality. And in case you are wondering - yes, I have used Microsoft Copilot to research this article and gather some content. However, the themes that follow are capabilities and concepts that I am very comfortable and familiar with, and any output I've used from Copilot here has been adapted and refined to reflect my own standpoint and opinions.
From a negative standpoint, Generative AI can certainly be misused to cut corners in academia; without proper controls, it can expose sensitive data to the world; and I've also heard it theorised that when we gather information using AI, we are not really learning, and are less likely to retain that knowledge and truly grow.
But I'm not here to debate these points. What I'm interested in is where AI goes next. What is the roadmap, how will it evolve, and what will be the impact on the social, political, and business landscapes in terms of factors such as employment, security, and global power positioning?
AI agents, or Agentic AI, certainly seem to be the next step in this journey - but could these agents serve as a bridge between today's Generative AI and future Artificial General Intelligence (AGI)? It's an idea that is gaining traction in both academic and industry circles.
Before we go any further though, let's quickly walk through some of the terminology we are dealing with today, and what is on the horizon.
Generative AI Today
Generative AI models like GPT-4 (and successors) are powerful at producing text, images, code, and more. However, as the short sketch after this list illustrates, they:
Lack persistent memory or goals
Don’t act autonomously
Operate in a reactive, not proactive, manner
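To make that "reactive" point concrete, here is a minimal, hypothetical sketch of how today's generative models are typically consumed. The `generate` function is a stand-in of my own invention for any text-generation API call; the key observation is that each call is independent, with no goals or memory carried between them.

```python
# A minimal sketch of today's reactive, stateless Generative AI usage.
# `generate` is a hypothetical stand-in for a real text-generation API call.

def generate(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an LLM API request)."""
    return f"<model output for: {prompt!r}>"

# Each call is completely independent: the model acts only when prompted,
# and nothing persists between calls - no goals, no memory, no initiative.
print(generate("Draft an opening slide for a sales proposal."))
print(generate("Summarise the key risks of the project."))  # unaware of the previous call
```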
What Is Agentic AI?
Agentic AI refers to systems that:
Pursue goals over time
Make decisions based on changing environments
Use tools and APIs to interact with the world
Plan and adapt their behaviour
These agents can be built on top of generative models, inheriting their reasoning and language capabilities, while adding layers for memory, planning, and action.
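To illustrate what that layering can look like, here is a deliberately simplified, hypothetical sketch of an agent loop in Python. Every name in it (`generate`, `search_web`, `run_agent`, and so on) is my own invention for illustration, not the API of any real agent framework.

```python
# A deliberately simplified, hypothetical sketch of an agentic loop layered on
# top of a generative model. Every name here is illustrative, not a real API.

def generate(prompt: str) -> str:
    """Placeholder for the underlying generative model (reasoning and language)."""
    return f"<model output for: {prompt!r}>"

def search_web(query: str) -> str:
    """Placeholder tool: a real agent would call an actual search API here."""
    return f"<search results for: {query!r}>"

TOOLS = {"search_web": search_web}  # the tools/APIs the agent may use

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Pursue a goal over multiple steps, with memory, planning, and action."""
    memory: list[str] = []  # persists across steps, unlike a single model call
    for step in range(max_steps):
        # Planning: ask the model for the next step, given the goal and memory
        plan = generate(f"Goal: {goal}\nMemory so far: {memory}\nNext step?")
        # Action: a real agent would parse the plan and pick the right tool
        observation = TOOLS["search_web"](plan)
        # Memory: retain the outcome so later steps can adapt to it
        memory.append(f"step {step}: planned {plan!r}, observed {observation!r}")
        # Adaptation: stop when the model signals the goal is met (stubbed here)
        if "done" in plan.lower():
            break
    return memory

if __name__ == "__main__":
    for entry in run_agent("Compile a short briefing on Agentic AI"):
        print(entry)
```

The point of the sketch is the loop itself: the generative model call is just one component, wrapped in layers that hold a goal, retain memory, invoke tools, and adapt from step to step - which is precisely what separates an agent from a plain prompt-and-response session.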
What Is AGI?
Artificial General Intelligence (AGI) refers to an AI system with the ability to understand, learn, and apply knowledge across a wide range of tasks at a human-like level. To the best of our knowledge, AGI does not yet exist, but a major analysis of thousands of expert predictions suggests a 50% chance of achieving AGI between 2040 and 2061, though some believe it could happen as early as 2026.
Prominent figures like Sam Altman and Dario Amodei have projected that AGI could emerge by 2025–2027, depending on technological breakthroughs, and given the pace of AI evolution to date, I believe it will come sooner rather than later - if in fact it doesn't already exist.
If and when AGI does arrive, it will likely be:
Flexible
Autonomous
Capable of reasoning and adapting
So, is Agentic AI a stepping stone to AGI? I certainly believe it will be, because it introduces:
Autonomy, so agents can act without human prompting or consent
Goal-oriented behaviour, to pursue complex, multi-step objectives
Memory and learning, so agents can retain experience and improve over time
Can AGI Achieve Sentience?
AGI does not necessarily imply sentience, but it could raise the possibility, depending on how it's built and understood. Before we delve deeper into that question, let's define what we understand sentience to be.
What Is Sentience?
Sentience typically means the capacity to have subjective experiences—feelings, sensations, and awareness. It’s closely tied to:
Consciousness
Qualia (the "what it feels like" aspect of experience)
Self-awareness
These attributes are certainly not required for an AGI to function. An AGI could simulate empathy, talk about emotions, or even mimic self-reflection, without actually experiencing anything. The basic premise of AGI is capability, not consciousness.
But let's get back to the question - could AGI actually lead to sentience? This is where things get speculative, and the experts and theorists offer several competing views:
Should an AGI replicate the functions of the human brain closely enough, it could be considered conscious—because consciousness arises from certain kinds of information processing.
Only biological systems (like brains) can actually be conscious. AGI, no matter how advanced, would never be sentient.
Sentience might emerge over time from complexity. If an AGI becomes complex and interconnected enough, consciousness could arise spontaneously.
Sentience is so poorly understood that we can't even define what it would mean for an AI to be sentient - so, for now, the question is unanswerable.
Sci-fi shows like Star Trek have not shied away from tackling this debate. One of my favourite episodes of Star Trek: The Next Generation, "The Measure of a Man", debates the right of an android character named Data to determine "his" or "its" own future. A hearing is convened to determine whether or not Data is sentient, and subsequently whether he has the right of self-determination. I won't spoil the story if you haven't seen it.
Why Does the Possibility of a Sentient AI Matter?
If an AGI ever were to become sentient, it would raise enormous ethical questions such as:
Would it have rights?
Could it suffer?
What responsibilities would we have toward it?
Could it be a threat to humanity?
In Summary
I frequently ponder whether AI or AGI can, in time, become sentient, and I think it is highly likely, if not inevitable.
However, I'm not sure that sentience necessarily equates to emotional awareness, or wants and desires - unless we specifically design an AI to have these things. Sentience doesn't automatically imply agency or intention.
A sentient AI could potentially be aware of itself and its environment, but unless it has goals, values, or drives, it wouldn’t "want" anything in the way humans do, and therefore would have no apparent motivation to gradually supersede humanity. Then again, I cannot prove or disprove any of this - so my maxim in life is never to rule anything in or out.
As we progress with AI and AGI in the coming years, I urge the innovators and custodians of this powerful technology to proceed in a cautious and ethical way. How will history judge the likes of Altman, Musk, Nadella, and others leading the AI charge if AGI (at best) makes certain ways for humans to make a living obsolete, and (at worst) becomes a belligerent, unstoppable force that has no use for human beings and no desire (with its new-found sentience) to serve the human race?
History is invariably written by the victors, but will that be us, or will that be the machine? Only time will tell.