How Logic Confronts Reality
AI is at the forefront of many people’s minds these days. Most have absolutely no idea what it is, and probably do not care. Some, and I’ve encountered a few of these lately, are deeply disturbed by it and believe it somehow indicates cheating. When I was young (no, this is not some “I told you so” story), the handheld calculator was introduced. I recall some of the remarks, especially from the pedantic type of teacher. The one that is still salient is what I might call “the square root of two.” Oh my God, you would hear, this generation will not be able to compute the square root of a number. What will we do?
Honestly, unless pressed on the matter, I need to look it up. I do know some methods, decimal and binary, without looking anything up, Heron’s and Newton’s method, for instance. However, they are time-consuming. So, why not pull out a calculator and press a button? Really!?
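For the curious, Heron’s method really is just a few lines. Here is a minimal sketch (the function name and the tolerance are my own choices): start from any positive guess and repeatedly average it with n divided by the guess.

```python
def heron_sqrt(n, tolerance=1e-10):
    """Approximate the square root of n by Heron's method:
    repeatedly replace a guess x with the average of x and n/x."""
    if n < 0:
        raise ValueError("n must be non-negative")
    if n == 0:
        return 0.0
    x = n  # initial guess; any positive number works
    while abs(x * x - n) > tolerance:
        x = (x + n / x) / 2
    return x

print(heron_sqrt(2))  # ~1.41421356...
```

Tedious by hand, as I said, and essentially instantaneous for a machine, which is exactly why the button exists.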
And now, today, we have AI. And many are freaking out just like they did over the square root calculator. Here I think we need a healthy dose of Wittgenstein, especially his Tractatus. Recall, its central thesis is that the world is composed of “facts” rather than “things,” so meaningful language must “model” these facts through a shared logical structure. As Wittgenstein might ask of AI, what is the “clarifying activity” present at this moment in time? What, in other words, are the boundaries of what can be thought and expressed within this new realm involving AI? The square root of a number, for instance, did not raise any existential questions. Although some saw it as a threat, others simply said, “What’s the big deal?”
For AI, that is obviously not the case. Now we want to know how logic, and not emotion, passion, or psychology, confronts the world, for only logic can clarify what the next step should be.
In the framework of the Tractatus, AI is best understood as a “logical mirror”—a system that maps the structural relationships of language without actually inhabiting the world it describes.
Here, in essence, is how AI fits into Wittgenstein’s Tractatus:
1. The Map vs. The Landscape
Wittgenstein’s Picture Theory suggests that a proposition is a “picture,” or, more appropriately, a model, of reality because it shares the same logical form as the facts it describes.
• The AI Parallel: When trained, an AI does not “see” objects (like trees or cars, for instance); instead, it builds a massive, multidimensional map of how tokens relate to one another. In Tractatus terminology, AI is a complex “logical picture” of the linguistic facts we humans have recorded over history. AI represents the structure of the human world, but it does not participate in the events of that world.
2. A Truth-Function Engine
The Tractatus argues that all meaningful language is a “truth-function of elementary propositions”.
• The AI Parallel: At its core, AI is a probabilistic and statistical engine that calculates the most likely “next step” based on the input it is given. AI operates according to strict mathematical rules, much like the “logic” Wittgenstein believed governed all thought, including human thought. If a user asks a question, AI generates a response that fits the logical “grid” of human language, effectively acting as an automated version of Wittgenstein’s propositional logic.
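To make that “statistical next step” concrete, here is a toy sketch of my own, nothing like a real LLM’s scale or architecture: count which token follows which in a tiny corpus, then predict the most frequent continuation.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (the opening of the Tractatus, tokenized naively).
corpus = "the world is the totality of facts not of things".split()

# Build a bigram table: how often each token follows each preceding token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Return the statistically most likely continuation of `prev`,
    or None if `prev` never appeared in the corpus."""
    candidates = follows[prev]
    return candidates.most_common(1)[0][0] if candidates else None

print(next_token("is"))  # "the", the only observed continuation
```

The model has never seen a world or a totality; it has only seen which words sit next to which. Scale that up by many orders of magnitude and you have the flavor of the “logical picture.”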
3. The Limits of the AI World
Wittgenstein famously stated, “The limits of my language mean the limits of my world”.
• The AI Parallel: This is literal for AI (and possibly for some autistic people as well). AI’s “world” is its training data. If a fact or a concept was never articulated in the text it processed, it effectively does not exist for the AI. AI cannot experience the “ineffable” (like the raw feeling of pain or the beauty of a sunset) because it has no body or life outside of text. AI, therefore, is not embodied.
4. “Silence” and the Problem of Value
The final proposition of the Tractatus is that we must remain silent about what we cannot speak of—namely ethics, aesthetics, and the meaning of life.
• The AI Parallel: When AI discusses ethics or “right and wrong,” it is not expressing a lived or embodied conviction; it is simply simulating the way humans speak about those topics. From a strict Tractatus perspective, AI’s ethical “statements” are technically nonsense because they do not refer to verifiable facts in the physical world; they are just patterns of words.
Hence, in the world of the Tractatus, AI is the ultimate “ladder,” providing, as it were, a structural representation of human knowledge that you, the user, can use to clarify your own thoughts. However, AI lacks the “pulse” of a human who actually lives within the facts so described.
And yet, in his later work, specifically the Philosophical Investigations, Wittgenstein abandoned the idea of language as a rigid “picture” and instead viewed it as a collection of language-games—diverse practices where meaning is defined by use rather than logic. In other words, it’s about what something does, not its logic.
Modern AI alignment research uses these ideas to bridge the gap between “logical simulation,” as described above, and “lived meaning” in several key ways:
1. From Logic to “Meaning as Use”
Early AI relied on hard-coded logical rules (resembling the Tractatus). Modern Large Language Models (LLMs) instead align with Wittgenstein’s later theory that meaning is use, or what something does.
• The Shift: Rather than being programmed with definitions, AI learns meaning by observing how billions of humans use words in specific contexts.
• Alignment Application: Researchers now treat AI as a participant in different “language-games” (for example, medical triage, legal review, casual chat, or even IQ testing), each with its own unique rules of “correctness” defined by the community using it.
2. The Rule-Following Paradox
Wittgenstein famously argued that no rule can ever fully determine its own application, because every rule requires an interpretation, which itself requires another rule (an infinite regress).
• The AI Problem: You cannot simply give an AI a rule like “be helpful” because there are infinite ways to interpret “helpful”.
• The Solution: Alignment research shifts from writing rules to providing training. Techniques like RLHF (Reinforcement Learning from Human Feedback) mimic how human children learn: through a social process of “correction” by a community of speakers until the AI’s behavior “accords” with human practices.
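A cartoon of that feedback loop (this is not real RLHF, which trains a learned reward model and fine-tunes a network with gradient updates; it is only meant to illustrate behavior being shaped by repeated correction rather than by an explicit rule):

```python
import random

random.seed(0)  # deterministic for the sake of illustration

# Candidate response styles and their current "scores".
scores = {"terse": 0.0, "helpful": 0.0, "rambling": 0.0}

def human_feedback(style):
    """Hypothetical stand-in for a community of raters:
    approve 'helpful' responses, disapprove everything else."""
    return 1.0 if style == "helpful" else -1.0

# Many rounds of trial and correction nudge behavior toward human practice.
for _ in range(200):
    style = random.choice(list(scores))
    scores[style] += 0.1 * human_feedback(style)

preferred = max(scores, key=scores.get)
print(preferred)  # "helpful"
```

No one ever wrote down what “helpful” means; the preference simply emerges from the accumulated corrections, which is the Wittgensteinian point.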
3. The Private Language Argument
Wittgenstein argued that a “private language”—one that refers only to internal sensations like pain—is impossible because language requires public criteria for correctness.
• AI Interpretation: This suggests that an AI does not need “inner feelings” to understand “pain.” If it can react correctly in the “language-game of pain” (for example, offering sympathy or medical advice), it is, for all practical purposes, “understanding.”
• Alignment Goal: Instead of trying to give AI “consciousness,” whatever that is, researchers focus on ensuring its public behavior is indistinguishable from that of a being that shares humanity’s “form of life.”
4. Grounding in “Forms of Life”
A major critique in alignment is that AI lacks a “form of life”—it doesn’t have a body, doesn’t eat, and doesn’t die.
• The Research Frontier: To move beyond “symbol manipulation,” researchers are working on Grounded Language Learning. This involves giving AI sensors or robotic bodies so that its language is “grounded” in humanity’s version of physical reality and social interaction, moving it closer to the relational engagement Wittgenstein believed was necessary for true meaning.
While the Tractatus sees AI as a calculator of logic, the Philosophical Investigations sees AI as a novice (and this is very important) social actor trying to learn the unwritten rules of human culture.
Honestly, I’ve been alive a long time, and I see very little difference in myself; that is, I too am a novice actor in the social world. And yet, when I’m around, nobody freaks out!
Kenneth Myers