
Themesis, Inc.

Where AI Equals Physics


Tag: conscience

AGI: Google’s Mixture of Depths is a Baby Step Towards Artificial General Intelligence (AGI)

Tags: LLMs, mixture-of-depths, mixture-of-experts, MoD, MoE

Published April 14, 2024
Categorized as: A Resource, A Resource – News or Blogpost, A Resource – Papers, A Resource – YouTube, Artificial General Intelligence, Artificial Intelligence. Tagged: conscience, sentient-intelligence


