Neuro-Symbolic AGI: Emerging Groundswell Shift

Three key indicators that the shift from a “simple” AI architecture (transformers + reinforcement learning) to a “real” (and more complex) neuro-symbolic AI is FINALLY happening:

  • Crucial research papers – addressing the important issues that MUST be resolved to create functional neuro-symbolic AI,
  • Attention in the business world – business leaders KNOW that our current AI approach (transformers + RL) is just not cutting it; WAY too many unresolved and unresolvable problems, and we’re seeing “neuro-symbolic AI” overviews in the glossy business magazines (Forbes, Fortune) – a CLEAR INDICATOR that the money segment is looking beyond current systems, and – MOST IMPORTANT –
  • JOBS – I saw two job postings in the last few months. (A recruiter directly reached out to me for one of them.) BRIEF description below, and I’ve got an upcoming YouTube on this.

YouTube: “Neuro-Symbolic AI Manager: Engineer or Magician?”


Jobs First

There has been no point in talking about neuro-symbolic AI (or any form of AGI beyond transformers + RL) until for-real jobs started to show up – which is NEW. (Starting around February/March, and I’m writing this in late June, 2025.)

These are REAL job postings – with Tech Recruiters hired by the various companies to screen and present candidates. And with very well-known companies/institutions. (YouTube discussion of these new jobs forthcoming.)

Typically, they’re looking to start teams – so they’re making their first hires, for “Neuro-Symbolic AI Manager” roles.

I have all sorts of comments to make about the NATURE of those postings (which is why I declined to apply when the recruiter found me). The important thing is: “neuro-symbolic AI” jobs are starting to show up.

“Neuro-symbolic” will force us to PIVOT in fundamental ways:

  • PIVOT our whole AI educational system.
  • PIVOT how we present ourselves in our online profiles.
  • PIVOT our expectations for what AI will be – in the relatively near, “start planning now” future.

The world is shifting – seismic ripples right now.

I have a YouTube on this. Filmed, not yet edited or posted.

When it’s done, those of you who are “opted-in” to Themesis will hear about it – and of course, those of you who’ve subscribed to the Themesis YouTube channel will hear about it. And I’ll update this blogpost.

But let’s move on.


The Business World Is Starting to Pay Attention

After more than two years of over-zealous subscription to the generative-AI (“gen-AI”) craze, there is now GROWING recognition of all the faults and foibles of gen-AI. 

What’s more, people are recognizing that we can’t fix gen-AI from “within” the gen-AI system. That is – we can’t make it (that much) better by adding on more … more nodes, more weights, more processing power … larger context windows … even more reinforcement-learning-based feedback training. 

All that we’ve done has been to bury the problem. 

The answer that is emerging – in a LOT of people’s minds – is to switch focus. Go from gen-AI to neuro-symbolic AI. 

Recognition of this is now so widespread that there is THIS ARTICLE in (of all surprising places) Fortune magazine: 

  • Meyer, David. 2024. “Generative AI Can’t Shake Its Reliability Problem. Some Say ‘Neurosymbolic AI’ Is the Answer.” Fortune. (Dec. 4, 2024). LINK

And DAMN, I’m so sorry – this takes us to a “subscribers-only” page. But EVEN BETTER, here’s an article available on Yahoo! by the same author, likely the very same article.

And here’s an article in a similar vein from Forbes: 

  • Bridgwater, Adrian. 2025. “Why We Need Neuro-Symbolic AI.” Forbes. (Feb. 6, 2025). LINK

This one you CAN read … if you have not yet exhausted your “four free articles” from Forbes.

The message is clear – “neuro-symbolic” is now a thing. 

And far from being in the realm of “far-off crazy” – neuro-symbolic AI will likely take the lead by the end of 2025. 

HUGE game-shifter. 

For all of us. 

This is just MASSIVE. 


The Real Reason That We Don’t Have AGI Yet (Neuro-Symbolic Architecture Complexity)

The real reason that we don’t already have neuro-symbolic AI – or, ONE of the real reasons – is that it is going to be more architecturally complex than simple transformer-based AI, or even transformers + RL.

But if we want to create intelligence that can become equivalent to or (at some point, in some ways) actually surpass human intelligence, we need more complex architectures.

The simple probability-based approach of transformers won’t cut it, no matter HOW much “reinforcement” we give to this method.

I wrote about this as far back as 2023 – actually, I created THIS YouTube:

Maren, Alianna J. 2023. “Key Concepts for a New Class of Neural Networks.” Themesis, Inc. YouTube Channel.

If you want to look at the neurophysiology behind neural networks – with some ideas as to what to do NEXT (i.e., go beyond simple neural networks) – here’s THIS YouTube vid:

Maren, Alianna J. 2023. “CORTECONs and AGI: Powerful Inspirations from Brain Processes.”

But for a more recent take, look at this article by Wan et al. (2024) on neuro-symbolic architectures. (See the Resources and References section for full citation.) This is a useful study, because it addresses the architectural/computational challenges in AGI creation.

Wan et al. note in their Abstract that:

“Our studies reveal that neuro-symbolic models suffer from inefficiencies on off-the-shelf hardware, due to the memory-bound nature of vector-symbolic and logical operations, complex flow control, data dependencies, sparsity variations, and limited scalability. Based on profiling insights, we suggest cross-layer optimization solutions and present a hardware acceleration case study for vector symbolic architecture to improve the performance, efficiency, and scalability of neuro-symbolic computing.”
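To make “vector-symbolic operations” concrete, here is a minimal hyperdimensional-computing sketch (my illustration, NOT code from the Wan et al. paper) of the bind/bundle operations they profile. Notice that everything is elementwise arithmetic over very wide vectors – which is exactly why these workloads tend to be memory-bound on off-the-shelf hardware. The function names and the dimension D are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # a typical hypervector width in VSA demonstrations

def random_hv():
    # Random bipolar hypervector: each element is +1 or -1.
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    # Binding (elementwise multiply) associates a role with a filler;
    # the result is nearly orthogonal to both inputs.
    return a * b

def bundle(*vs):
    # Bundling (elementwise majority vote) superposes concepts;
    # the result stays similar to each input.
    return np.sign(np.sum(vs, axis=0))

def similarity(a, b):
    # Normalized dot product, in [-1, 1].
    return float(a @ b) / D

color, shape = random_hv(), random_hv()
red, circle = random_hv(), random_hv()

# Encode "a red circle" as a bundle of role-filler bindings.
scene = bundle(bind(color, red), bind(shape, circle))

# Query: unbind the color role to (approximately) recover "red".
recovered = bind(scene, color)
print(similarity(recovered, red))     # clearly positive (about 0.5)
print(similarity(recovered, circle))  # near zero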

What we like most about the Wan et al. approach is that it is a coherent and logical effort to address the problems associated with simple gen-AI methods – problems that will not be overcome simply by adding in more and more processing units (of the same kind).

In other words, “Stargate” will not solve the problem.

Nor will many of the (useful, but still limited) efforts to create smaller gen-AI systems that draw from larger, “foundation” models – because the problems inherent in the large-scale systems will be perpetuated in the smaller-scale ones.

A different approach is needed.

As Wan et al. offer in their Introduction:

“Neural methods are highly effective in extracting complex features from data for vision and language tasks. On the other hand, symbolic methods enhance explainability and reduce the dependence on extensive training data by incorporating established models of the physical world, and probabilistic representations enable cognitive systems to more effectively handle uncertainty, resulting in improved robustness under unstructured conditions.”

Rather than go into more detail, we gently encourage the interested reader to consult the original Wan et al. (2024) article.
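As a toy illustration of that division of labor (mine, not from the Wan et al. paper – the class names, probabilities, and rule table are all made-up assumptions): a stand-in “neural” module emits probabilistic percepts, and a symbolic layer enforces hard logical constraints on top of them, then renormalizes what survives.

```python
# Toy neuro-symbolic pipeline: a (stand-in) neural module emits
# probabilistic percepts; a symbolic layer applies logical rules.

def neural_perception(image):
    # Stand-in for a trained network: returns class probabilities.
    # Hard-coded here purely for illustration.
    return {"cat": 0.55, "dog": 0.40, "car": 0.05}

RULES = {
    # Symbolic background knowledge about each class.
    "cat": {"is_animal": True},
    "dog": {"is_animal": True},
    "car": {"is_animal": False},
}

def symbolic_filter(probs, required):
    # Keep only hypotheses consistent with the logical constraints,
    # then renormalize the surviving probabilities.
    kept = {k: p for k, p in probs.items()
            if all(RULES[k].get(attr) == val for attr, val in required.items())}
    total = sum(kept.values())
    return {k: p / total for k, p in kept.items()}

probs = neural_perception(image=None)
# Context supplies a symbolic constraint: the scene contains an animal.
posterior = symbolic_filter(probs, {"is_animal": True})
print(max(posterior, key=posterior.get))  # cat
```

The neural part handles messy perception; the symbolic part contributes world knowledge the network never had to learn from data – exactly the complementarity the quoted passage describes.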

These authors do point out the computational difficulties inherent in creating multi-representation AGI systems, which is very likely the key reason that we’ve not seen AGI to date. (And we were hopelessly optimistic in thinking that Google had solved this problem with Veo-2.)



Resources and References

AGI (Neuro-Symbolic) Architectures

  • Wan, Zishen, et al. 2024. “Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture.” IEEE Trans. on Circuits and Systems for Artificial Intelligence (September 2024). Also available on arXiv: arXiv:2409.13153v2 [cs.AR] (23 Sep 2024). (Accessed Oct. 7, 2024; available at arXiv.)

The following arXiv paper is the one I mentioned in the vid – it’s my 70+-page “tutorial” on Karl Friston’s core equations in his 2013/2015 papers:

  • Maren, Alianna J. 2024. “Derivation of Variational Bayes.” arXiv:1906.08804v6 [cs.NE]. (Rev 6: 18 Aug 2024). (Accessed July 23, 2025; available at arXiv link.)
