Writing to Get Your Next Job: Five Essential Rules

Whether you’re currently employed or actively seeking (as in, job-hunting is your full-time occupation), one of the most important things you can do is build your Portfolio. We talked about your Portfolio in this previous blogpost, with examples of how to use GitHub, LinkedIn, and your personal domain as “Portfolio bases.” Your…

Pivoting to AGI: What to Read/Watch/Do This Weekend

We are moving from a generative-AI era to an AGI era. What that means, in the simplest technical terms, is that we’re pivoting from “single-algorithm systems” to systems that intrinsically involve multiple major subsystems and multiple control structures. We’re closing out a 50-year connectionist AI era. This era began with…
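As a purely illustrative sketch (not drawn from the post, with hypothetical class and subsystem names), the structural difference between a “single-algorithm system” and a system that coordinates multiple subsystems through a control structure might look like this in Python:

# Illustrative sketch only: contrasting a "single-algorithm system" with a
# system that coordinates multiple subsystems through a control structure.
# Class names and the routing logic are hypothetical, not from the post.

class SingleAlgorithmSystem:
    """Generative-AI-era pattern: one model, one forward pass."""
    def respond(self, prompt: str) -> str:
        return f"[generated continuation of: {prompt}]"


class MultiSubsystemSystem:
    """Several subsystems plus a control structure deciding how to use them."""
    def __init__(self) -> None:
        self.generator = SingleAlgorithmSystem()   # one subsystem among several
        self.memory: list[str] = []                # a second, persistent subsystem

    def respond(self, prompt: str) -> str:
        # Control structure: record the exchange, route to the generator, and
        # condition the reply on what the memory subsystem retains.
        self.memory.append(prompt)
        draft = self.generator.respond(prompt)
        if len(self.memory) > 1:
            draft += f" (drawing on {len(self.memory) - 1} earlier exchanges)"
        return draft


if __name__ == "__main__":
    system = MultiSubsystemSystem()
    print(system.respond("Why move beyond single-algorithm AI?"))
    print(system.respond("What does a control structure add?"))

The point of the sketch is purely structural: once more than one major subsystem is involved, some control structure has to decide which subsystem acts, when, and on what.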

CORTECONs: AGI in 2024-2025 – R&D Plan Overview

By the end of 2024, we anticipate having a fully functional CORTECON (COntent-Retentive, TEmporally-CONnected) framework in place. This will be the core AGI (artificial general intelligence) engine. This is all very straightforward: a calm, steady development that we expect will unfold rather smoothly. The main internal…

Evolution of NLP Algorithms through Latent Variables: Future of AI (Part 3 of 3)

AJM Note: ALTHOUGH NEARLY COMPLETE, references and discussion are still being added to this blogpost. This note will be removed once the blogpost is completed, anticipated over the Memorial Day weekend, 2023. Note updated 12:30 AM, Hawai’i Time, Tuesday, May 29, 2023. This blogpost accompanies a YouTube vid on the same topic of the Evolution…

New Neural Network Class: Framework: The Future of AI (Part 2 of 3)

We want to identify how and where the next “big breakthrough” will occur in AI, and we use three tools or approaches to do so. The Quick Overview: get the quick overview with this YouTube #short. The Full YouTube: Maren, Alianna J. 2023. “A New Neural Network Class: Creating the…

Kuhnian Normal and Breakthrough Moments: The Future of AI (Part 1 of 3)

Over the past fifty years, there have only been a few Kuhnian “paradigm shift” moments in neural networks. We’re ready for something new!

The Future of AI: Part 0 (Prelude) – Reductio ad Absurdum

A few weeks ago, I went to the Lihue-based Kaua’i Farmer’s Market for locally-grown fresh veggies. And, of course, I swung by the Kaua’i Master Gardeners booth. Now, I’d visited with these guys before. And yes, they knew that I taught AI at Northwestern. But still, my jaw dropped when the first question that one…

Variational Free Energy and Active Inference: Pt 5

The End of This Story: This blogpost brings us to the end of a five-part series on variational free energy and active inference. Essentially, we’ve focused only on the first of those two topics: variational free energy. Specifically, we’ve been after Karl Friston’s Eqn. 2.7 in his 2013 paper, “Life as We Know It,” and similarly…

Variational Free Energy and Active Inference: Pt 4

Today, we interpret the expressions q(Psi | r) and p(Psi, s, a, r | m) in Friston’s (2013) “Life as We Know It” (Eqn. 2.7) and Friston et al.’s (2015) “Knowing One’s Place” (Eqn. 3.2). This discussion moves forward from where we left off in the previous post, identifying how Friston’s notation builds on Beal’s (2003)…
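For readers following the series, here is a hedged sketch of the quantity those densities appear in, written in the notation of this excerpt (q(Psi | r) as the variational density over external states Psi given internal states r, and p(Psi, s, a, r | m) as the generative density under model m); this is the standard variational free energy decomposition, not a quotation of Friston’s Eqn. 2.7:

\[
F \;=\; \mathbb{E}_{q(\Psi \mid r)}\!\left[ \ln q(\Psi \mid r) \;-\; \ln p(\Psi, s, a, r \mid m) \right]
  \;=\; D_{\mathrm{KL}}\!\left( q(\Psi \mid r) \,\middle\|\, p(\Psi \mid s, a, r, m) \right) \;-\; \ln p(s, a, r \mid m).
\]

Because the Kullback-Leibler term is non-negative, minimizing F with respect to q(Psi | r) pushes the variational density toward the posterior p(Psi | s, a, r, m).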

Variational Free Energy and Active Inference: Pt 3

When we left off in our last post, we’d determined that Friston (2013) and Friston et al. (2015) reversed the P and Q notation commonly used for the Kullback-Leibler divergence. Just as a refresher, we’re posting those last two images again. The following Figure 1 was originally Figure 5 in last week’s…
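For reference, here is one commonly used form of the Kullback-Leibler divergence in the variational-inference convention (Q as the approximating distribution, P as the reference distribution), which is the convention the excerpt says Friston’s papers reverse:

\[
D_{\mathrm{KL}}\!\left( Q \,\|\, P \right) \;=\; \sum_{x} Q(x) \,\ln \frac{Q(x)}{P(x)} \;\ge\; 0 .
\]

The divergence is asymmetric, so swapping the roles of P and Q changes its value; that is why the reversed notation matters when reading the equations.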