The smart survival strategy for each of us is to start thinking and planning now for a world in which there are not just AIs, but AGIs. This will be a very different world.
In this blogpost, and in the accompanying YouTube vid (see below), we look at the most essential distinction that we can make between an AI and an AGI: the difference between a tool and a person.
Specifically, because we will put these artificially created “persons” (intelligences) to work for us – and because they won’t have any say-so in whether or not they WANT to work for us – AGIs will be, effectively, slaves.
Key AI vs. AGI Distinction: “Tools” vs. “Slaves”
The key distinction, as we move from the AI era (which is inherently transitional) to the AGI era, is that we’re moving from AI “tools” to AGI “persons,” whom we can functionally view as “slaves.”
Yes, existing AI tools can masquerade as “persons.” They’re not. The best that they can do is draw on a HUGE resource of how people have talked about and described things, in order to generate “person-like” materials – text, images, video, or other outputs. (We’ll briefly talk about how transformer-based AIs work in the next section.)
In short, current AIs such as ChatGPT and all other LLM (Large Language Model, or transformer-based) AIs can present themselves as “pseudo-persons” when we interact with them.
But they’re not, and thinking that LLMs ARE persons can lead to all sorts of difficult consequences, ranging from embarrassing to lethal.

Back to our main theme: simply “generating” does NOT mean that there is anything at all really like a “person” involved. But since what is probabilistically generated (by a generative, transformer-based AI) can read and look like what a person might say – because it is based on lots and LOTS of comparable situations, i.e., training data – people start to drift, and treat the transformer-based output as though a real person had generated it.
Even those who are most deeply familiar with how LLMs (e.g., chatbots) operate will affirm that these systems merely mimic people: LLMs (and transformer-based AIs generally) are not intelligent in the way that people are.
(I tried for hours to find what I was certain was a quote from Lex Fridman, in one of his many interviews with diverse AI leaders, where he said – paraphrasing – that AIs could “mimic” human SOMETHING … not sure if it was behavior, consciousness, whatever. And I couldn’t find that. But I’ll look through previous blogposts, see if I can’t come up with the original quote, and will likely generate material for several YouTube vids and blogposts in the process.)
A Little Digression – How Transformer-Based AIs Actually Work
In short, transformer-based AI tools use a relatively simple algorithm that iteratively generates “next tokens” (words or other material) based on the most recently generated tokens, along with the tokens (the prompt) that initiated the whole process.

Since this token generation is probabilistic, it is influenced by which tokens (e.g., words) most frequently follow the current context in the training data. This means that the volume – and the nature – of the material used to train the system influences the outcome.
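To make that “generate the next token, then repeat” loop concrete, here is a deliberately tiny Python sketch. This is NOT a transformer – a real model conditions on the entire context through attention, and its probabilities are learned from training data rather than hand-coded – but it shows the core loop: take the context, look up a probability distribution over possible next tokens, sample one, append it, and repeat. Every token and probability below is made up purely for illustration.

```python
import random

# Toy stand-in for a learned next-token distribution. In a real
# transformer, these probabilities come from training on a huge corpus
# and depend on the WHOLE context, not just the last token.
NEXT_TOKEN_PROBS = {
    "the": [("cat", 0.5), ("dog", 0.3), ("weather", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("barked", 0.7), ("slept", 0.3)],
    "sat": [("down", 0.5), ("quietly", 0.5)],
}

def generate(prompt_tokens, max_new_tokens=5):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        candidates = NEXT_TOKEN_PROBS.get(tokens[-1])
        if candidates is None:  # no learned continuation; stop
            break
        words, probs = zip(*candidates)
        # Probabilistic sampling: continuations seen more often in the
        # "training data" are proportionally more likely to be chosen.
        tokens.append(random.choices(words, weights=probs, k=1)[0])
    return " ".join(tokens)

print(generate(["the"]))  # e.g., "the cat sat down"
```

Notice that nothing in this loop “understands” anything; it just samples from frequencies. That is the whole point of the tool-vs.-person distinction.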
What is really important here is that the content of this huge volume of training data is not always high-quality. In fact, a very small percentage will be verifiably high-quality. But … more on that rant in an ensuing YouTube and blogpost.
Sidenote: GPTs of all sorts, from all sources, frequently use reinforcement learning from human feedback (RLHF) superimposed on the generative process. This is what helps them refine their answers – their “next-generated tokens” – to be in certain styles. Presumably, this is also what keeps them from advising people on how to do harmful things, keeps them from spouting hate speech, etc.
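Production RLHF involves training a separate reward model on human preference data, then fine-tuning the generator’s weights with a reinforcement-learning algorithm (PPO is the commonly cited choice). The Python sketch below shows only the simplest flavor of the idea – a hypothetical reward model re-ranking candidate outputs, sometimes called “best-of-n” sampling. Both functions are illustrative stubs, not any real system’s API.

```python
# Simplified illustration of the RLHF idea: a reward model, trained on
# human preference data, scores candidate outputs, and generation is
# steered toward higher-scoring ones. Real RLHF updates the model's
# weights; this "best-of-n" re-ranking is the simplest stand-in.

def reward_model(text: str) -> float:
    """Stub: in practice, a neural net trained on human preference pairs."""
    score = 0.0
    if "step by step" in text:
        score += 1.0   # raters preferred clear, structured explanations
    if "how to build a weapon" in text:
        score -= 10.0  # raters penalized harmful content
    return score

def best_of_n(candidates: list[str]) -> str:
    """Generate n candidates upstream, then keep the highest-scoring one."""
    return max(candidates, key=reward_model)

candidates = [
    "Here is the answer, step by step ...",
    "Dunno.",
]
print(best_of_n(candidates))  # -> the step-by-step answer
```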

We’ll continue this discussion – on the fundamental LLM + RLHF architecture, and its capabilities (and associated problems) – another day.
Further Reading/Viewing: For a bit more along these lines, see the YouTube where the figures used in this section were originally presented:
For more technical depth on transformers, please see this previous blogpost:
- Maren, Alianna J. 2024. “Twelve Days of Generative AI: Day 3 – Transformers and Beyond.” Themesis, Inc. Blogpost Series (Dec. 20, 2024). (Accessed Sept. 29, 2025; available online at Day 3: Transformers.)
A Little Digression (Personal Story)
Just before getting back to adding to and editing this post (at 4:30 AM), I was going through a quiz provided by one of my favorite mentors, someone whom I truly consider a “Muse.” The quiz was designed to identify which of several archetypes best described me (and each client), so that this Muse could personalize course content. Each student in the course would receive a “customized” set of instructional materials.
I opened the quiz … started reading … and immediately got an “ick” feeling.
I LOVE the archetypes and concepts when presented live by this Muse. But when they are translated into materials generated by ChatGPT, the result feels … simply … yech.
Just way too … plasticky.
Kind of like trying to have lunch with a life-sized plastic Barbie doll that has been fitted out with the ability to hold a “conversation.” (“Chatty Cathy” dolls, revisited and brought up to date for our timeframe.)

Right now, ChatGPT (and every other LLM) straddles the line between “tool” and “pseudo-person.” When these systems act strictly as tools, they don’t feel so “off.” But when they act as “pseudo-persons” – well, there’s often that “ick” feeling.
I’m going to go on extended rants about LLM-based (transformer-based) systems – ChatGPT and all its derivatives, analogs, and friends. That will consume a LOT of time and content, and it is brewing strongly within me, so … consider yourself forewarned. That’s coming.
But for now, I’m sucking it up (with a tummy that feels acutely ill at the thought), because transformer-based AIs are where we are right now.
This is a phase. And it’s ONLY a phase.
When we have AGIs, the difference between transformer-based “pseudo-persons” and the much more real “AGI-persons” will become MUCH more obvious and pronounced.
Best Background Vid
This YouTube vid (link below) presents the thoughts of various scientific leaders, as well as of leading AI companies, on what will lead to AGI.
Also, lots of rich supporting material in the accompanying blogpost:
- Maren, Alianna J. 2024. “AGI Wars: Emerging Landscape.” Themesis, Inc. Blogpost Series (Oct. 20, 2024). (Accessed Sept. 25, 2025; available at AGI Wars Blogpost.)
Markov Blankets: An Essential AGI Component
One key differentiator between simple (or even complex) AI “tools” and AGI “persons” (autonomous or semi-autonomous entities) is that an AGI will be differentiated from the world that it senses and perceives – the world about which it constructs “representations” of reality, and in which it then operates and takes actions. This notion of a separation underlies the concept of a Markov blanket.
The “Markov blanket” notion is essential to Karl Friston’s concept of active inference. (See References and Resources for citations.) Friston adopted this concept in his earliest active inference work (circa 2013-2015), building on notions and derivations used by Matthew Beal in his 2003 Ph.D. dissertation.
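Paraphrasing the formulation in Friston (2013): the states of a system partition into internal states $\mu$, external states $\psi$, and blanket states $b = (s, a)$ – the sensory and active states. The defining property of a Markov blanket is conditional independence: given the blanket, the internal and external states tell us nothing further about each other:

$$
p(\mu, \psi \mid b) \;=\; p(\mu \mid b)\, p(\psi \mid b), \qquad b = (s, a).
$$

It is this statistical boundary – the “inside” communicating with the “outside” only through sensing and acting – that lets us speak of a distinct entity (or “person”) at all.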

Active inference is a form of generative AI. This means that getting a solid foundation in the basic principles of generative AI is an essential step – not just for “general” generative AI (pun not intended), but also for the more advanced concepts that will play a role in AGI development.
I’ve put together a lot of teaching materials on the notion of generative AI.
I’ve also written a very useful tutorial that delves into Friston’s core equations in his early (2013/2015) work. (See citation in References and Resources.) It’s a bit of a daunting read (just over 70 pages), but is MUCH easier than trying to teach yourself “Fristonese” on your own.
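As a one-line preview of what that tutorial unpacks (written here in one standard decomposition, following Beal’s variational derivation): the variational free energy $F$ that active inference minimizes can be expressed as

$$
F \;=\; \mathrm{D}_{KL}\big[\, q(\psi) \,\big\|\, p(\psi \mid s) \,\big] \;-\; \ln p(s),
$$

where $q(\psi)$ is the approximating (variational) density over hidden or external states, and $s$ denotes sensory observations. Since the KL divergence is non-negative, $F$ is an upper bound on the surprisal $-\ln p(s)$; minimizing $F$ simultaneously improves the internal model and keeps “surprise” in check.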
The best compendium that I have on Friston’s recent active inference (and its evolution into Renormalising Generative Models, or RGMs) is in THIS blogpost. (There’s more recent published work, but I haven’t caught up with blogging about it yet.) So START HERE; this is a very resource-rich blogpost:
- Maren, Alianna J. 2024. “Big AGI Breakthrough: Leveling the Playing Field.” Themesis, Inc. Blogpost Series (August 19, 2024). (Accessed Sept. 29, 2025; available at “Big AGI Breakthrough.”)
The YouTube video that accompanies this blogpost is found HERE:
Your AGI Timeframe
In a 2025 Lex Fridman YouTube interview with Google CEO Sundar Pichai, Pichai hovers around 2030 – a date that several researchers have offered – as the “AGI emergence” timeframe.
That gives us approximately five years until AGI emergence.
And when Pichai (and others) propose 2030 as the likely AGI emergence timeframe, they are most likely talking about commercial, deployable AGIs.
Lab-bench AGIs will likely come a lot sooner – testbeds for AGI functionality, but NOT optimized in any way for computational performance. I’d put lab-bench AGIs at 2026–2027: simple, limited-functionality prototypes that demonstrate key capabilities.
I do NOT think that early AGIs will be available for hands-on experimentation and play. Also, as they are released, they will NOT be free, or even cheap. AGIs will be luxury items, whereas AI tools (and pseudo-persons) will continue to be available to anyone who has internet access.
That said – for those of us who want to be in on the early AGI development, the early evolution towards functionally-useful AGIs – five years is a VERY short window.
Five years is short, whether we’re thinking about business development and infrastructure, or about the technical/scientific side – what tools and methods we’ll use in our AGIs.
Most of us in the AI community do not have sufficient depth to embrace AGI development.
Five years is BARELY ENOUGH time to pull together and execute a comprehensive action plan – whether for business or for self-education and then applying that knowledge to AGI development.
And realistically, the early AGI development timeframe is much closer to hand – more like two to three years.
Clock’s ticking.
What to Do Next (Action Items)
It all depends on your intended level of play.
If your goal is to be a business leader/entrepreneur, and take advantage of AGI as it emerges, then just watching our YouTube vids and consulting these blogposts will be JUST FINE.
However, we DO suggest that you “Opt-In” – which means that you’ll get emails from us whenever we put out a new YouTube or blog, or make any kind of special offer.
If you’re going to actively play in AGI development, or even AGI adaptation to your business, then you need more in-depth knowledge.
For that, we recommend that you:
- Watch the YouTubes,
- Read the blogposts, and
- Follow through on the reading/video links – some of which lead to very substantive material.
And – if you’re really serious – consider inviting us to be your personal Muse in the AGI development and evolution space.
Muse
Do you want to work with me as your AGI Muse?
A Muse is different from a consultant or coach. A Muse ELEVATES you to your next level.
How a Muse Can Help You – Historical Example
Two of the men who most influenced Western society were the Athenian philosopher Socrates and the Athenian general and statesman Pericles. Both consulted with Aspasia as their Muse.

Want to learn more about how Aspasia, as Muse, coached Socrates as he developed concepts that became core to Western philosophy? Check out THIS YOUTUBE:
How to Begin
Start with an email. Use the email that we access MOST often – themesisinc1 (at) gmail (dot) com. (We have a primary corporate email that is mostly for “official” business, but the gmail one will get our attention faster.)
Tell us who you are, what your role is (Key Investor, Founder/CEO, Chief Magician).
Tell us about your vision for AGI – what you want to develop or create, how you want to use it, and how you see yourself influencing the world with your “foundational” AGI.
We accept only a few clients, and only after careful screening.
But – if you’re sincerely interested – start the process. Contact us; we’ll have an initial phone call, and we’ll ask you to follow up with supporting information. Over a short period, we’ll determine whether we’re a good match for each other.
To your fabulous success, my dear! – AJM (Sept. 29, 2025)
References and Resources
- Beal, M. 2003. Variational Algorithms for Approximate Bayesian Inference, Ph.D. Thesis, Gatsby Computational Neuroscience Unit, University College London. (Accessed Oct. 13, 2022; pdf.)
- Fridman, Lex. 2025. “Sundar Pichai: CEO of Google and Alphabet | Lex Fridman Podcast #471.” Lex Fridman YouTube Channel (June 2025). (Accessed Sept. 12, 2025; available at Lex Fridman Interview with Sundar Pichai.)
- Friston, K. 2013. “Life as We Know It.” J. R. Soc. Interface 10(86).
- Friston, K., M. Levin, B. Sengupta, and G. Pezzulo. 2015. “Knowing One’s Place: A Free-Energy Approach to Pattern Regulation.” J. R. Soc. Interface, 12:20141383. doi:10.1098/rsif.2014.1383. (Available online at: http://dx.doi.org/10.1098/rsif.2014.1383.)
- Friston, Karl, Conor Heins, Tim Verbelen, Lancelot Da Costa, Tommaso Salvatori, Dimitrije Markovic, Alexander Tschantz, Magnus Koudahl, Christopher Buckley, and Thomas Parr. 2024. “From Pixels to Planning: Scale-Free Active Inference.” arXiv:2407.20292v1 [cs.LG] (27 Jul 2024).
- Maren, Alianna J. 2019. (Revised 2022, 2024.) “Derivation of Variational Bayes Equations.” Themesis Technical Report TR-2019-01v6 (ajm). arXiv:1906.08804v5 [cs.NE]. doi:10.48550/arXiv.1906.08804. (Accessed June 8, 2023; available online at Deriv Var. Bayes (v6).)