Some pundits claim that AIs pose an existential threat to humankind. I argued against this in an earlier post with the help of the observations that intelligence is orthogonal to will, that any action, whether beneficial or detrimental to humans, requires a will to take that action, and that an AI doesn’t have a will in any meaningful way. In my previous post I also claimed that an AI doesn’t really produce any new knowledge, which is why it is difficult to see how it would even acquire superhuman knowledge. This raises three questions:
- What is knowledge?
- What is required to gain new knowledge?
- Why does an AI not have what it takes?
Starting with the first question, the classical definition of knowledge is justified true belief (JTB).
- Justification is the reasoning or evidence that supports a belief. It refers to the process or grounds upon which a person forms a belief. For a belief to count as knowledge, it can’t just be a lucky guess—it needs to be based on something solid, like evidence, logic, or reliable testimony.
- Truth is the actual state of the universe; a belief is true if it corresponds to that state.
- Belief is a personal hypothesis about the state of some aspect of the universe. A belief supervenes on a brain state.
This definition implies that knowledge is a certain kind of belief, a mental state. A belief is held by something or “somebody” (in the brain), about something (outside the brain, or, in the case of self-awareness, about some other mental state or function of the same brain). Beliefs guide our actions and enable us to navigate the universe in a more or less adaptive way, as explained in this post.
I find the justified true belief concept less than useful for several reasons:
- If adhered to literally, there would be very little knowledge in the world. Not even the pinnacle of science, quantum field theory, would count as knowledge, since we know that it is not entirely true. It is, however, a very good approximation to the truth in several domains and a useful foundation for much of today’s technology.
- Since beliefs are held to guide our actions, it is more important that they are useful in terms of the actions they guide us towards than that they are true. One example is the belief that all snakes are poisonous. It is patently untrue, but holding it increases the chances of survival of humans in snake-rich environments, so it is nevertheless useful (a small simulation after this list makes this concrete). (We can call this kind of belief metaphorical knowledge.)
- The word belief hints at dualism, that there is a believer, a self, that believes something about something external to the believer. I reject the concept of a self (if I can) and therefore find the word belief at least unintuitive. For more arguments see this post and this post.
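To make the snake example above concrete, here is a small simulation sketch. It is my own illustration with invented numbers, not a claim about real snake ecology: an agent holding the false belief that all snakes are poisonous avoids every snake and survives every encounter, while an agent acting on the (true) low base rate of dangerous snakes occasionally gets bitten.

```python
# Sketch of a "useful false belief": the numbers below are made up for illustration.
import random

random.seed(0)

POISONOUS_RATE = 0.2   # assumed fraction of snakes that are actually dangerous
DEATH_IF_BITTEN = 0.5  # assumed chance that a bite from a dangerous snake is fatal
ENCOUNTERS = 10_000

def survives(avoid_all_snakes: bool) -> bool:
    """One snake encounter; returns True if the agent survives."""
    poisonous = random.random() < POISONOUS_RATE
    if avoid_all_snakes:
        return True  # false belief "all snakes are poisonous" -> always back away
    # "True" belief: most snakes are harmless, so the agent sometimes handles them.
    handles_snake = random.random() < 0.5
    if handles_snake and poisonous:
        return random.random() > DEATH_IF_BITTEN
    return True

for policy in (True, False):
    rate = sum(survives(policy) for _ in range(ENCOUNTERS)) / ENCOUNTERS
    print(f"avoid_all_snakes={policy}: survival rate {rate:.3f}")
```

The false but cautious belief yields the higher survival rate, which is all that matters for its usefulness.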
With inspiration from another theory of knowledge, pragmatism, I’d therefore like to define knowledge simply as useful information. The information is stored as a mental state. If a piece of information increases our chances of reaching an objective in the universe, then it is useful, whether true, approximately true, or false, and therefore knowledge.
We seek and acquire knowledge about the world through research, engineering, journalism, arts, sports, and many other activities. We usually do it to promote an objective; we have a purpose. If our objective is to negotiate a tricky stretch of single track on our mountain bike, then we seek knowledge about obstacles, inclines, the friction coefficient of the surface, etc. (It is likely that we use action inference for doing so.) If our objective is to go to Mars, then we need knowledge about astronomy, physics, psychology, rocket science, and much more. The most basic objectives that we need knowledge for are of course survival and procreation.
Gaining new knowledge is, with the above definition, tantamount to establishing the usefulness of a piece of tentative information (a hypothesis). It is often not possible to assess the usefulness of information without actual interaction with the universe, in the form of the closed loop learning that I wrote about in this earlier post. This is for instance very much the case when forming new hypotheses in physics or other natural sciences. A more mundane example is finding information about the conditions of a winter road. I myself often perform an informal test of the friction coefficient of a winter road with a brake test, to assess how fast it is safe to drive.
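To make the brake-test example concrete, here is a rough sketch of the arithmetic. The numbers and function names are my own illustration: brake from a known speed, measure the stopping distance, estimate the friction coefficient from v² = 2μgd, and use that estimate to judge a safe speed for the visible stretch of road ahead.

```python
# Closed-loop sketch: run a small experiment (a brake test), then use the result
# to guide action (how fast to drive). Numbers are illustrative.
import math

G = 9.81  # gravitational acceleration, m/s^2

def friction_from_brake_test(speed_kmh: float, stopping_distance_m: float) -> float:
    """Estimate the tyre-road friction coefficient from a brake test (v^2 = 2*mu*g*d)."""
    v = speed_kmh / 3.6
    return v ** 2 / (2 * G * stopping_distance_m)

def safe_speed_kmh(mu: float, sight_distance_m: float) -> float:
    """Highest speed from which you can stop within the visible stretch of road."""
    return math.sqrt(2 * mu * G * sight_distance_m) * 3.6

mu = friction_from_brake_test(speed_kmh=50, stopping_distance_m=60)  # icy-road example
print(f"estimated friction coefficient: {mu:.2f}")
print(f"safe speed for 100 m of visibility: {safe_speed_kmh(mu, 100):.0f} km/h")
```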
The creation of new knowledge about the world thus requires (at least) these things:
- A universe that we wish to learn about.
- A motivation to learn, i.e., to gain information.
- An ability to come up with useful tentative information (hypotheses) about the universe.
- Tools, energy, and skills to perform experiments in the universe to establish the usefulness of the information.
While the universe is out there for humans and AIs alike, AIs today are lacking in the other three departments.
Today’s AIs are not given a general will to learn about the universe, only about very limited or artificial parts of it. They are “prodded” into learning by machine learning engineers during training but are not designed to learn continuously and widely.
As I claimed in my previous post, LLMs can’t say anything about what’s outside their training data with any confidence. It still takes a human to come up with a hypothesis outside the current body of knowledge that may eventually lead to a paradigm shift. (Why humans can come up with these ideas is an interesting but separate question.)
Performing experiments in an environment is also difficult for AIs, except in some limited cases where different types of reinforcement learning can be applied, e.g., AlphaGo Zero. AIs still can’t dig like an archeologist or do chemical experiments in a laboratory. Nor can they invent the necessary new experiments that have never been done before.
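As a contrast to open-ended experimentation, here is a toy sketch of the narrow kind of closed-loop learning that reinforcement learning gives an AI. It is my own illustration, not how AlphaGo Zero works: an epsilon-greedy agent learning which of three slot-machine arms pays best, inside a tiny, fully specified world where every possible experiment is already defined by the designer.

```python
# Epsilon-greedy bandit: the agent's entire "universe" is three payout probabilities,
# and every experiment it can run is a single pull of one arm.
import random

random.seed(1)

TRUE_PAYOUTS = [0.2, 0.5, 0.8]  # hidden reward probabilities of the three arms
estimates = [0.0, 0.0, 0.0]     # the agent's learned "knowledge" of the arms
pulls = [0, 0, 0]
EPSILON = 0.1                   # fraction of the time the agent experiments at random

for step in range(5_000):
    if random.random() < EPSILON:
        arm = random.randrange(3)                        # explore: run an experiment
    else:
        arm = max(range(3), key=lambda a: estimates[a])  # exploit what it already "knows"
    reward = 1.0 if random.random() < TRUE_PAYOUTS[arm] else 0.0
    pulls[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]  # incremental average update

print("learned payout estimates:", [round(e, 2) for e in estimates])
```

The agent does acquire useful information here, but only about a universe that its designers have completely enumerated in advance; nothing in the loop lets it pose a question the environment wasn’t built to answer.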
Without intrinsic motivation, the ability to hypothesize beyond training, or the capacity for real-world experimentation, AI will remain a tool that processes human-created information rather than a creator of new knowledge.