Confession of an IQ Hacker, Part II
I never expected the first confession to become anything more than a curiosity, and I still do not: an odd little artifact of a life spent tinkering with systems that were never meant to be airtight. Yet here I am again, blathering, more amused and perhaps more honest than before. If the first confession was about the why of my IQ-hacking escapades, this one is about the how: the practical, analog, pre-internet machinery that made the whole enterprise possible. And maybe, in the process, it is also about the strange psychology of someone who never cared about the number attached to his name, but cared deeply about the machinery behind the number.
If Part I was the blueprint, Part II is the field manual.
Origins of a Method
Long before large language models, before search engines, before the internet became the collective prosthetic memory of humanity, there were libraries. And before libraries, there were people like me—kids who took apart radios, televisions, clocks, and occasionally their own reputations, just to see how the gears meshed.
By the time I encountered my first high-range IQ test, I had already internalized a worldview: every system can be reverse-engineered if you understand the rules it pretends not to have. The Mega Test was simply the first time I realized that the rules of the game allowed me to bring the entire library with me. That was the moment the Chinese Room stopped being a philosophical puzzle and became a practical engineering challenge.
The question was no longer “What is my IQ?” but “What is the IQ of the tools I can assemble?”
The Analogue Arsenal
People today assume that “hacking” implies computers. But in the 1980s and early 1990s, hacking meant something closer to carpentry: you built your tools by hand. You shaped them. You sharpened them.
You learned their quirks. And if you were clever, you could make them do things their creators never imagined.
What follows is a reconstruction of the toolkit I built—part memory, part archaeology, part confession.
1. The Dictionary Triangulation Method
This was the method I described in Part I, but it deserves a deeper dive because it became the backbone of everything else.
The idea was simple: If an analogy is a relationship between meanings, then the meanings can be extracted, compared, and ranked.
The workflow:
• Look up every term in an unabridged dictionary.
• Extract the “atoms”—the smallest semantic units.
• Cross-reference those atoms in crossword dictionaries, thesauri, and word-frequency lists.
• Identify overlaps, a quasi-Venn diagram of intersecting meanings.
• Rank the overlaps by density.
• Eliminate outliers.
• Repeat until quasi-convergence.
It was slow and methodical, but shockingly effective. It was also the closest thing to a manual semantic network one could build before WordNet existed.
The beauty of the method was that it didn’t require understanding—only correlation. It was the Chinese Room with improved furniture.
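The triangulation workflow above is easy to mechanize today. Here is a minimal sketch: each term is reduced to a set of semantic "atoms" (hand-extracted from dictionaries in the original method), and candidate answers are ranked by overlap density with the stem. The atom sets below are illustrative stand-ins, not real dictionary data.

```python
# Dictionary triangulation, sketched: represent each term as a set of
# semantic "atoms", then rank candidates by overlap density with the stem.

def overlap_density(stem_atoms, candidate_atoms):
    """Fraction of the candidate's atoms shared with the stem."""
    if not candidate_atoms:
        return 0.0
    return len(stem_atoms & candidate_atoms) / len(candidate_atoms)

# Toy data standing in for hand-extracted dictionary atoms.
stem = {"cutting", "tool", "blade", "hand"}
candidates = {
    "scissors": {"cutting", "tool", "blade", "two", "hand"},
    "hammer":   {"striking", "tool", "hand"},
    "cloud":    {"vapor", "sky", "weather"},
}

# Rank by density, eliminate outliers implicitly by taking the top.
ranked = sorted(candidates,
                key=lambda w: overlap_density(stem, candidates[w]),
                reverse=True)
print(ranked[0])  # highest overlap density wins
```

Note that the function needs no notion of meaning, only set intersection: correlation without understanding, exactly as the method demanded.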
2. The Sequence Dissection Bench
Number sequences were the other half of the HRIQ universe. And unlike verbal items, they were often built from recognizable mathematical families.
Before the OEIS existed, I built my own miniature version using:
• Prime tables
• Factorization tables
• Logarithm tables
• Recreational math books
• CRC handbooks
• Notebooks filled with hand-computed differences and ratios
The method was always the same:
1. Compute first differences.
2. Compute second differences.
3. Check for polynomial fits.
4. Check for multiplicative patterns.
5. Check for alternating rules.
6. Check for digit-level patterns.
7. Check for base conversions.
8. Check for alphabetic encodings.
If all else failed, I would consult my “sequence graveyard”—a binder of oddities I had catalogued over the years. Many test creators reused structures without realizing it. Pattern recognition is a powerful thing when you’ve seen enough patterns.
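Steps 1 through 3 above can be sketched in a few lines: take successive differences until a row becomes constant, which signals a polynomial of that degree. The hand-computed tables did exactly this, only slower.

```python
# Difference-table analysis: a constant k-th difference row means the
# sequence fits a degree-k polynomial.

def differences(seq):
    """First differences of a sequence."""
    return [b - a for a, b in zip(seq, seq[1:])]

def polynomial_degree(seq, max_depth=6):
    """Apparent polynomial degree of seq, or None if none is found."""
    level = list(seq)
    for depth in range(max_depth + 1):
        if len(level) < 2:
            return None              # ran out of terms before converging
        if len(set(level)) == 1:     # constant row: degree found
            return depth
        level = differences(level)
    return None

print(polynomial_degree([1, 4, 9, 16, 25, 36]))  # squares: degree 2
```

A geometric sequence like 2, 4, 8, 16, 32 never produces a constant row, so the routine correctly reports no polynomial fit and the analysis moves on to step 4, multiplicative patterns.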
3. The Anagrammatic Mill
Some HRIQ tests loved verbal puzzles that required rearranging letters, spotting hidden words, or identifying obscure terms. Without digital anagram solvers, I relied on:
• Scrabble dictionaries
• Crossword puzzle dictionaries
• Word-frequency lists
• Cryptogram-solving guides
I developed a habit of alphabetizing the letters of every candidate word by hand. Once alphabetized, comparing patterns became trivial. If two words shared the same alphabetized signature, they were anagrams. If not, they weren’t.
It was mechanical, almost meditative. And it worked.
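The alphabetized-signature trick is trivial to mechanize. Here is a sketch; the word list is a toy stand-in for a Scrabble dictionary.

```python
# Two words are anagrams exactly when their sorted letters match.

def signature(word):
    """Alphabetized letters of a word: the anagram signature."""
    return "".join(sorted(word.lower()))

words = ["listen", "silent", "enlist", "tinsel", "stone"]
target = "inlets"

# Flip through the "dictionary" comparing signatures.
anagrams = [w for w in words if signature(w) == signature(target)]
print(anagrams)
```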
4. The Encyclopedic Cross-Reference Engine (ECRE)
Some items required knowledge so obscure that no dictionary could help. For these, I used:
• Encyclopedias (Britannica, World Book)
• Specialized encyclopedias (mythology, biology, art, music)
• Atlases
• Almanacs
The trick was not to read the entries, but to map them. I would extract:
• Hierarchies
• Taxonomies
• Chronologies
• Causal chains
Then I would match the structure of the test item to the structure of the encyclopedia entry. It was like fitting a key to a lock.
5. The Meta-Testmaker Analyzer
After taking enough tests, I realized something important: Test creators have signatures.
Some favored biological metaphors. Some favored mathematical elegance. Some favored trickery. Some favored obscurity for obscurity’s sake.
By studying the patterns of a particular author, I could often predict the shape of an answer before analyzing the content. It was like learning a composer’s style—you could recognize the melody even before hearing the notes.
6. The Index-Card Database
Before computers were common, I built a physical database using index cards. Each card contained:
• A word
• Its definition
• Its synonyms
• Its antonyms
• Its morphological variants
• Its frequency rank
• Its crossword dictionary associations
Over time, the box grew into a miniature semantic universe. When I encountered a new test, I would flip through the cards, looking for patterns. It was slow, but it was mine.
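The card format maps naturally onto a record type. Here is a sketch of what one card might look like if the box were rebuilt in software; the sample entry and its frequency rank are illustrative, not taken from the original cards.

```python
# One "index card" as a record, and the box as a keyed collection.
from dataclasses import dataclass, field

@dataclass
class Card:
    word: str
    definition: str
    synonyms: list = field(default_factory=list)
    antonyms: list = field(default_factory=list)
    variants: list = field(default_factory=list)        # morphological
    frequency_rank: int = 0
    crossword_tags: list = field(default_factory=list)  # assoc. clues

box = {}  # word -> Card
box["arcane"] = Card("arcane", "understood by few; mysterious",
                     synonyms=["esoteric", "obscure"],
                     antonyms=["common"],
                     frequency_rank=18745)

# "Flipping through the cards" becomes a filter over the box.
esoteric_cards = [c for c in box.values() if "esoteric" in c.synonyms]
print([c.word for c in esoteric_cards])
```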
7. Early Computer Scripts
By the late 1980s and early 1990s, I had access to early personal computers. They were primitive by today’s standards, but they were powerful enough for:
• Factorization scripts
• Polynomial-fit routines
• Permutation generators
• Word-list searchers
• Simple neural-network experiments
These programs were not sophisticated, but they were fast. And speed mattered.
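A factorization script of that era would have looked much like this sketch: plain trial division, well within the reach of an early personal computer.

```python
# Trial-division factorization: the workhorse of the early scripts.

def factorize(n):
    """Prime factors of n in ascending order, with multiplicity."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:            # whatever remains is prime
        factors.append(n)
    return factors

print(factorize(360))  # [2, 2, 2, 3, 3, 5]
```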
8. The Psychological Filter
One of the most underrated tools in my arsenal was psychological profiling—not of myself, but of the test creators.
If a test item felt “forced,” it usually meant the creator had a specific trick in mind. If it felt “natural,” it usually meant the answer was structurally simple. If it felt “clever,” it usually meant the creator wanted to show off.
Understanding the psychology behind the puzzle often revealed the puzzle itself.
The Turnkey Moment
Over time, these methods stopped being separate tools and became a single integrated system. I could sit down with a test, open my notebooks, dictionaries, and tables, and begin the process with almost mechanical precision.
It was no longer a challenge of intelligence. It was a challenge of engineering.
And that was the moment I realized something profound: I wasn’t measuring myself. I was measuring the system I had built.
The library had an IQ. The notebooks had an IQ. The algorithms had an IQ. The correlations had an IQ.
I was simply the operator.
Rise of the Machine
When large language models arrived, I recognized them instantly—not as something alien, but as something familiar. They were the Chinese Room with a billion more books. They were the dictionary triangulation method at industrial scale. They were the sequence dissection bench with infinite memory. They were the anagrammatic mill with perfect recall.
They were, in short, the turnkey operation I had spent decades building by hand.
And just like that, the game changed.
HRIQ tests grew more complex, but not more profound. They grew horizontally—more obscure references, more convoluted sequences, more noise—but not vertically. They did not ascend the hierarchy of complexity. They simply sprawled.
The machines could handle the sprawl. Humans could not. And so the era of analog IQ hacking quietly ended.
Reflections on a Life of Hacking IQ
People sometimes ask whether I regret it—whether I feel I “cheated” or “gamed the system.” But the truth is, I never cared about the number. I cared about the machinery. I cared about the process. I cared about the elegance of building a system that could solve problems without understanding them.
In that sense, I was never hacking IQ tests. I was hacking the idea of intelligence itself.
And now, watching LLMs do the same thing at a scale I could never have imagined, I feel something unexpected: not envy, not obsolescence, but kinship.
They are my descendants—not biologically, but philosophically. They are the Chinese Room with better lighting. They are the library with a pulse. And they are here to stay.
Closing Thoughts
If Part I was a confession, Part II is an elegy: a lament for a time when intelligence was something you could tinker with using nothing but books, notebooks, and stubborn curiosity. The world has moved on. The machines have taken up the mantle. The Chinese Room has become a cathedral.
But somewhere in a box in my attic, there is still a stack of index cards, now yellowing with age, filled with definitions, synonyms, and scribbled notes. Of course, they are merely relics now, but also reminders.
Intelligence was never a number. It was a craft. And for a brief moment in history, I felt I was one of its craftsmen.
Kenneth Myers