It takes a human to find the dragons

Some pundits claim that AIs pose an existential threat to humankind. I argued against this belief in an earlier post, with the help of the observation that intelligence and competence are orthogonal to will, and that any action, whether beneficial or detrimental to humans, requires will to take that action. In my previous post I also claimed that an AI doesn’t really produce any new knowledge, which is why it is difficult to see how it would even acquire superhuman knowledge. This raises three questions:

  1. What is knowledge?
  2. What is required to gain new knowledge?
  3. Why does an AI not have what it takes?

Starting with the first question, the classical definition of knowledge is justified true belief (JTB).

  • Justification is the reasoning or evidence that supports your belief. It refers to the process or grounds upon which a person forms a belief. For a belief to count as knowledge, it can’t just be a lucky guess—it needs to be based on something solid, like evidence, logic, or reliable testimony.
  • Truth is the true state of the universe.
  • Belief is a personal hypothesis about the state of some aspect of the universe. A belief supervenes on a brain state.

This definition implies that knowledge is a certain kind of belief, a mental state. A belief is about something outside the brain or, in the case of self-awareness, about some other mental state or function of the same brain. Beliefs guide our actions and enable us to navigate the universe in a more or less adaptive way as explained in this post.

Justified true belief is not a particularly useful definition, for at least two reasons.

  • If adhered to literally, there would be very little knowledge in the world. Not even the pinnacle of science, quantum field theory, would count as knowledge, since we know that it is not entirely true. It is a very good approximation to the truth, though, and a useful foundation for much of today’s technology.
  • Since beliefs are held to guide our actions, it is more important that they are useful, in terms of the actions they guide us towards, than that they are true. One example is the belief that all snakes are poisonous. It is patently untrue, but if held it increases the chances of survival of humans in snake-rich environments, so it is nevertheless useful. (We can call this kind of belief metaphorical knowledge.)

With inspiration from another tradition in epistemology, pragmatism, I’d therefore like to define knowledge as justified useful belief (JUB). If a belief increases our chances of reaching an objective in the universe, then it is useful, whether true, approximately true, or false, and therefore knowledge.

We seek and acquire knowledge about the world through research, engineering, journalism, arts, sports, and many other activities. We usually do it to promote an objective; we have a purpose. If our objective is to negotiate a tricky stretch of single track on our mountain bike, then we seek knowledge about obstacles, inclines, the friction coefficient of the surface, and so on. It is likely that we use action inference for doing so. If our objective is to go to Mars, then we need knowledge about astronomy, physics, psychology, rocket science, and much more. The most basic objectives that we need knowledge for are of course survival and procreation.

Gaining new knowledge is, with the above definition, tantamount to establishing the usefulness of a tentative belief (a hypothesis). It is often not possible to assess the usefulness of a belief without actual interaction with the universe, in the form of the closed-loop learning that I wrote about in an earlier post. This is very much the case when forming new hypotheses about physics or other natural sciences. A more mundane example is forming a belief about the conditions of a winter road. I myself often perform an informal test of the friction coefficient of a winter road with a brake test, to assess how fast it is safe to drive.
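To make that loop concrete, here is a minimal sketch in Python of how such a belief might be formed and revised. The numbers, the noise model, and the function names are made up for illustration; only the stopping-distance relation v = √(2·μ·g·d) is standard physics.

```python
import math
import random

def brake_test(true_mu: float) -> float:
    """One closed-loop interaction with the world: brake and observe the
    deceleration, which gives a noisy estimate of the friction coefficient
    (simulated here with made-up noise)."""
    return max(0.05, random.gauss(true_mu, 0.03))

def safe_speed_kmh(mu: float, stopping_distance_m: float = 50.0) -> float:
    """The usefulness of the belief lies in the action it licenses:
    v_max = sqrt(2 * mu * g * d) is the speed from which we can still stop
    within the given distance on a surface with friction coefficient mu."""
    g = 9.81
    return math.sqrt(2.0 * mu * g * stopping_distance_m) * 3.6

belief_mu = 0.5   # tentative belief: "the road is only moderately slippery"
true_mu = 0.15    # the actual (unknown to us) state of the winter road

for _ in range(3):
    observed = brake_test(true_mu)                  # experiment in the universe
    belief_mu = 0.5 * belief_mu + 0.5 * observed    # revise the belief
    print(f"belief mu = {belief_mu:.2f}, safe speed = {safe_speed_kmh(belief_mu):.0f} km/h")
```

The point is not the arithmetic but the shape of the loop: a hypothesis is only promoted to knowledge after it has been tested against the world and shown to guide action well.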

The creation of new knowledge about the world thus requires (at least) these things:

  • A motivation to learn.
  • A universe that we wish to learn about.
  • An ability to come up with useful tentative beliefs (hypotheses) about the universe.
  • Tools and skills to perform experiments in the universe to establish the usefulness of the belief.
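As a rough illustration only (the names are hypothetical, not any existing framework), these four requirements can be read as the components of a single learning loop:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class KnowledgeSeeker:
    # 1. A motivation: scores how well an outcome serves the objective.
    objective: Callable[[float], float]
    # 2. The universe: answers an experiment with an outcome.
    environment: Callable[[str], float]
    # 3. The ability to propose tentative beliefs (hypotheses).
    propose: Callable[[], Iterable[str]]
    # 4. Tools and skills: turn a belief into a concrete experiment.
    design_experiment: Callable[[str], str]

    def learn(self) -> list[tuple[str, float]]:
        """Keep the beliefs whose experimental outcomes score well;
        under the JUB definition, those are the ones that become knowledge."""
        knowledge = []
        for hypothesis in self.propose():
            outcome = self.environment(self.design_experiment(hypothesis))
            if self.objective(outcome) > 0:
                knowledge.append((hypothesis, outcome))
        return knowledge
```

Remove any one of the four components and the loop never closes, which is the situation today's AIs find themselves in.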

While the universe is out there for humans and AIs alike, AIs today are lacking in the other three departments.

Today’s AIs are not given a general motivation to learn about the universe, except about very limited or artificial parts of it. They are “prodded” into any learning by machine learning engineers. Also, most AIs don’t have an innate purpose and they don’t take action. Strictly speaking, they can therefore not even hold a belief, and consequently cannot gain or hold knowledge (which is a kind of belief). Conceptually it is more correct to say that an AI holds information rather than beliefs, since it lacks intentions.

As I claimed in my previous post, LLMs can’t say anything about what’s outside their training data with any confidence. It still takes a human to come up with a belief outside the current body of information that may eventually lead to a paradigm shift. (Why humans can come up with these ideas is an interesting but separate question.)

Also, performing experiments in an environment is difficult for AIs, except in some limited cases where different types of reinforcement learning can be applied, e.g., AlphaGo Zero. AIs still can’t dig like an archaeologist or do chemical experiments in a laboratory, or indeed invent the necessary new experiments that have never been done before.

Without intrinsic motivation, the ability to hypothesize beyond training, or the capacity for real-world experimentation, AI will remain a tool that processes human-created information rather than a creator of new knowledge.
