Are Large Language Models Formalized Intuition?
I recently had the idea that what Large Language Models (LLMs) do is essentially a computational approach to what the psychiatrist Carl Gustav Jung called intuition. I told ChatGPT about this idea, and the conversation that followed resulted in the following article.
Intuition is commonly defined as an unconscious, feeling-like apprehension of relationships. It enables humans to make quick judgments based on experience without consciously tracing every intermediate step. In cognitive psychology, it corresponds to what Daniel Kahneman calls “System 1”: a mode of thinking that is automatic, associative, and experience-based.
Intuition as Implicit Pattern Recognition
Intuitive decisions are not the product of some mystical instinct but rather the condensation of experience into implicit knowledge. Humans recognize patterns and probabilities without formulating them explicitly. These implicit representations arise through repeated perception and emotional evaluation of situations. Intuition is therefore not irrational, but pre-rational, a precursor of rational understanding.
The “Reasoning” of Language Models
Large Language Models (LLMs) such as GPT-5 operate according to a very different mechanism from the human brain, and yet the results often appear similar.
An LLM is trained on billions of text examples and learns which word sequences, sentence structures, and semantic relationships typically occur together. It forms implicit representations of meaning in high-dimensional vector spaces.
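To make this concrete, here is a minimal Python sketch of the distributional idea behind such representations: a word is described by the company it keeps, and words used in similar contexts end up with similar vectors. The toy corpus, window size, and raw co-occurrence counting below are invented for illustration only; real LLMs learn dense embeddings by gradient descent at vastly larger scale.

```python
# Toy illustration: distributional co-occurrence vectors, not an actual LLM.
# The corpus and window size below are invented for demonstration only.
from collections import defaultdict
from math import sqrt

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "stocks fell on weak earnings",
    "markets rose on strong earnings",
]

window = 2  # words within this distance count as "occurring together"
cooc = defaultdict(lambda: defaultdict(int))

for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                cooc[w][words[j]] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    """Represent a word by how often it co-occurs with every vocabulary word."""
    return [cooc[word][v] for v in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Words used in similar contexts end up with similar vectors.
print(cosine(vector("cat"), vector("dog")))       # relatively high
print(cosine(vector("cat"), vector("earnings")))  # relatively low
```

Even this crude counting scheme reproduces the basic effect: "cat" and "dog" land close together in the vector space, while "cat" and "earnings" land far apart, without anyone having stated a rule about animals or finance.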
When generating a response, the model does not apply explicit rules of logic. It performs no deductive reasoning; instead, it recognizes probabilistic patterns, much like human intuition “grasps” a situation rather than logically deriving it.
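A toy sketch of this style of generation, assuming a simple bigram frequency table in place of an actual neural network: each next word is sampled in proportion to how often it has been observed to follow the current one, and no rule of inference is applied anywhere. The training sentences are made up for this example.

```python
# Toy illustration of probabilistic generation: the next word is drawn from
# learned co-occurrence frequencies, not derived by explicit logical rules.
# The training sentences below are invented for this sketch.
import random
from collections import defaultdict

sentences = [
    "the model predicts the next word",
    "the model learns patterns from text",
    "the network predicts likely continuations",
]

# Count which word follows which (a bigram table).
follows = defaultdict(lambda: defaultdict(int))
for s in sentences:
    words = s.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def next_word(word):
    """Sample a continuation in proportion to how often it was observed."""
    candidates = follows[word]
    if not candidates:
        return None
    options, counts = zip(*candidates.items())
    return random.choices(options, weights=counts, k=1)[0]

# Generate a short continuation one probabilistic step at a time.
w = "the"
output = [w]
for _ in range(5):
    w = next_word(w)
    if w is None:
        break
    output.append(w)
print(" ".join(output))
```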
Quasi-Intuitive Inference
One could therefore describe the “reasoning” of LLMs as quasi-intuitive:
- It operates unconsciously, beyond explicit rule application.
- It is grounded in the condensation of massive experiential data.
- It functions associatively rather than syllogistically.
However, models lack two crucial dimensions of human intuition:
1. Phenomenological consciousness: they do not feel their decisions.
2. Semantic intentionality: they do not understand what their statements refer to.
Analytical Representations of the Intuitive
Nevertheless, neural networks can be seen as a kind of formalized intuition. Whereas human intuition arises unconsciously from experience, an LLM’s “intuition” is produced through an analytical procedure: the training process of backpropagation. What appears spontaneous and emotion-driven in humans is here encoded mathematically in weights and vector spaces, an algorithmic distillation of collective experience.
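As a rough illustration of how backpropagation distills data into weights, here is a minimal NumPy sketch. The toy task (XOR), the layer sizes, and the learning rate are arbitrary choices for demonstration, not anything an LLM actually uses; the point is only that repeated exposure to examples leaves its trace as numbers in the weight matrices.

```python
# Minimal sketch of backpropagation with NumPy: experience (data) is
# "condensed" into numeric weights by repeatedly nudging them to reduce error.
# The toy task (XOR), layer sizes, and learning rate are chosen arbitrarily.
import numpy as np

rng = np.random.default_rng(0)

# Toy "experience": inputs and target outputs (the XOR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights: the network's "intuition" before training.
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros(1)

lr = 0.5
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

    # Backward pass: gradients of the mean squared error w.r.t. each weight.
    err = out - y
    d_out = err * out * (1.0 - out)
    dW2 = h.T @ d_out / len(X)
    db2 = d_out.mean(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    dW1 = X.T @ d_h / len(X)
    db1 = d_h.mean(axis=0)

    # Gradient descent: the data leaves its trace in the weights.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(out, 2))  # after training, typically close to [[0], [1], [1], [0]]
```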
To put it succinctly:
A neural network is the analytical reconstruction of intuition derived from data.
Conclusion
Large Language Models are neither rational thinkers nor random generators. They represent a new form of cognitive activity: a synthetic intuition, emerging not from personal experience but from vast datasets.
In a certain sense, they mirror the workings of human intuition, only without consciousness, emotion, or self-reference.
Claus D. Volko (with help from ChatGPT)