Without intrinsic motivation, the ability to hypothesize beyond training, or the capacity for real-world experimentation, AI will remain a tool that processes human-created information rather than a creator of new knowledge.
Does an AI produce knowledge?
The LLM can interpolate within the knowledge space it’s been trained on, filling in gaps by blending concepts in novel ways. However, it will stay within the convex hull of its training data, constrained by the boundaries of what it has learned.
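The convex-hull picture can be made concrete: a query point lies inside the convex hull of a set of training points exactly when it can be written as a convex combination of them, that is, with non-negative weights that sum to one. The sketch below (a minimal illustration using scipy, with random points standing in for whatever representation one has of the training data) poses that membership test as a linear feasibility problem.

```python
# Rough illustration of the "convex hull" claim: a query point q lies in the
# convex hull of training points X iff there exist weights w >= 0, sum(w) = 1,
# with X^T w = q. This is a linear feasibility problem.
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(X, q):
    """X: (n_points, dim) training points; q: (dim,) query point."""
    n = X.shape[0]
    # Equality constraints: X^T w = q and sum(w) = 1.
    A_eq = np.vstack([X.T, np.ones((1, n))])
    b_eq = np.concatenate([q, [1.0]])
    # Any feasible w (with the default bounds w >= 0) means q is inside the hull.
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.success

X = np.random.rand(100, 5)                  # stand-in for training points
print(in_convex_hull(X, X.mean(axis=0)))    # True: the mean is always inside
print(in_convex_hull(X, np.full(5, 2.0)))   # False: far outside the data
```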
Open and closed loop learning
Closed-loop learning is almost a law of nature. All sustainable natural systems apply it, and so do most successful human endeavors.
Despite the obvious benefit of, and often the outright need for, closed-loop learning, we fail to implement it in many contexts where it would clearly help. A toy contrast between the two modes is sketched below.
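The sketch tracks a drifting quantity with two estimators (the numbers are made up for illustration): an open-loop estimator that commits to its initial guess, and a closed-loop one that feeds the observed error back into its estimate.

```python
# Toy contrast between open-loop and closed-loop estimation of a drifting
# quantity. The open-loop estimator never revisits its guess; the closed-loop
# estimator measures, compares, and adjusts on every step.
import random

true_value = 10.0
open_loop_estimate = 8.0          # fixed guess, never corrected
closed_loop_estimate = 8.0
learning_rate = 0.2

for step in range(50):
    true_value += 0.1                               # the world drifts
    observation = true_value + random.gauss(0, 0.5)
    # Closed loop: feed the error between observation and estimate back in.
    error = observation - closed_loop_estimate
    closed_loop_estimate += learning_rate * error
    # Open loop: no feedback, the estimate stays where it started.

print(f"true value:        {true_value:.2f}")
print(f"open-loop error:   {abs(true_value - open_loop_estimate):.2f}")
print(f"closed-loop error: {abs(true_value - closed_loop_estimate):.2f}")
```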
Subjective experiences from an information differential
A hypothesis that I’d like to investigate is that subjective experiences arise as a consequence of state changes in an algorithm. I provisionally call such state changes information differentials.
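One possible way to make such a differential quantitative, sketched below with made-up numbers, is to measure the KL divergence between an algorithm's belief state before and after it processes a new observation.

```python
# Purely illustrative: quantifying a state change ("information differential")
# as the KL divergence between an algorithm's belief state before and after
# it processes a new observation.
import numpy as np
from scipy.stats import entropy

belief_before = np.array([0.25, 0.25, 0.25, 0.25])   # uninformed belief
belief_after  = np.array([0.70, 0.10, 0.10, 0.10])   # belief after an observation

# scipy.stats.entropy(p, q) with two arguments returns KL(p || q).
differential = entropy(belief_after, belief_before, base=2)
print(f"information differential: {differential:.3f} bits")
```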
Can an AI take responsibility?
A mantra repeated several times at a healthcare conference I attended recently is that only humans, not AI, can take responsibility for something. This made me think more deeply about what it really means to take responsibility and what, if anything, sets humans and AIs apart in this respect.
Emergent properties misunderstood
Most systems have the properties they have because they were designed that way, either by humans or by nature, not because they “emerged”.
A segue into information theory
Information theory may be useful for understanding active inference. At a minimum, it offers alternative perspectives on the quantities used in active inference theory, such as surprise, KL divergence, and entropy. This post provides a very short introduction to information theory.
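As a taste, the snippet below computes these three quantities from their standard textbook definitions, in bits, for small made-up distributions.

```python
# Standard definitions of surprise, entropy, and KL divergence, in bits.
import numpy as np

def surprise(p_x):
    """Surprise (self-information) of an outcome with probability p_x."""
    return -np.log2(p_x)

def entropy(p):
    """Expected surprise of a discrete distribution p."""
    p = np.asarray(p, float)
    return float(-np.sum(p * np.log2(p)))

def kl_divergence(p, q):
    """Extra surprise incurred by believing q when the true distribution is p."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log2(p / q)))

print(surprise(0.5))                          # 1 bit
print(entropy([0.5, 0.5]))                    # 1 bit (fair coin)
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))  # > 0: the beliefs differ
```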
Active inference lecture notes
I have set out to gain some insight into active inference, a theory that provides a unified framework for perception, learning, decision making, and action. I will share my “lecture notes”, combined with my own comments, in this and future posts. My focus is on the mathematical models and the necessary algorithms.
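As a small preview of the kind of mathematics involved, the sketch below evaluates one central quantity, the variational free energy F = E_q[log q(s) - log p(o, s)] = KL(q(s) || p(s)) - E_q[log p(o | s)], for a toy discrete generative model with made-up numbers.

```python
# Minimal sketch: variational free energy for a discrete generative model,
#   F = KL( q(s) || p(s) ) - E_q[ log p(o | s) ]
# evaluated for toy numbers (two hidden states, two possible observations).
import numpy as np

p_s = np.array([0.5, 0.5])            # prior over hidden states p(s)
likelihood = np.array([[0.9, 0.2],    # p(o | s): rows = observations,
                       [0.1, 0.8]])   #            columns = hidden states
q_s = np.array([0.8, 0.2])            # approximate posterior q(s)
observation = 0                       # index of the observed outcome

def free_energy(q_s, p_s, likelihood, o):
    complexity = np.sum(q_s * np.log(q_s / p_s))        # KL(q || prior)
    accuracy = np.sum(q_s * np.log(likelihood[o, :]))   # E_q[log p(o | s)]
    return complexity - accuracy

print(f"free energy: {free_energy(q_s, p_s, likelihood, observation):.3f} nats")
```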
Is artificial intelligence a threat to humanity?
I’m worried about irreversible climate change, nuclear war, war on rationality, isolationism, extreme nationalism, intolerance, pandemics, the declining mental health of the young, religious extremism, bioterrorism, and many other things. AI doesn’t make it to my top 10 list. Why?