Accidentally 1984
If language is a collective hallucination, then what happens when we outsource it?
It’s simple, one must do as to want offhand learning to observe in an elementary manner of looking. Begging to delve into this matter one must wonder who can do what if they are willing to question that there are basic rules for that which is easy to understand. Can you genuinely be as far along as you wish for something so close to reach? In conclusion, for one who has truly made decisive strides in a sort of direction, it is at that precise moment they earnestly desire to emulate that very endeavor. It's a matter of basic logic, really.
After much revision, I don't think the above sentence makes a whole lot of sense. But I wrote it, or at least my fingers did, temporarily freed from the grammatical slavedriver of a brain that reins them in with a lattice of spiked nerve strands. Now this might read as less than language, even the most wet-brained, soberless schizobabble is poetry in comparison, but listen to me: this is the future. This is the writing of an artificial intelligence, whether machine or human.
Every day, entire departmentfuls of professors are sighing and furrowing their brows as they power through yet another freshman course's worth of essay grading. Plagiarism, entry-level contrarian takes, the fatal cries of 1,000 pulverized coffee beans shot like fireworks into a cloudy night sky dotted with constellations of burned-out neurons. The smug musings of minds amush, filled with sleep deprivation and seeping the alcoholic dew drops that grace one's forehead after 48 consecutive hours of binge drinking.
“Dullard,” “nincompoop,” “midwit,” the professor's comments chide under a lossy encryption of inoffensive and ingenious anti-critical commentary. But despite the bureaucratese the intellectual must painstakingly wrap his boundless spite in, he is happy. He is happy because in the end, he knows even his plagued, predementiac mind is still sharper than the youngbloods'. His cerebral bunker is well fortified for at least another decade of austerity, schisms, and ischemias. His obese ego gorges on an all-you-can-eat buffet of true human ignorance.
The professor’s cerebral kasbah is not ready for the acceleration of the AI arms race, a feedback loop driving the deceleration of human language.
- - -
Auto incomplete
Read this sentence:
Has Anyone Really Been Far Even as Decided to Use Even Go Want to do Look More Like?
This might seem like the blabber of a broken LLM five years out of date. An exaggeration, for sure; today’s students are submitting AI-generated “works” that are structured and coherent.
Take this output from ChatGPT, for example:
"Has Anyone Really Been Far Even as Decided to Use Even Go Want to do Look More Like?"
The sentence has various elements, which, in a well-formed sentence, could represent subjects, verbs, adjectives, etc., but they don't adhere to standard English sentence structure.
"Has anyone really been far even as decided" - This part is difficult to parse as it seems to combine several unrelated phrases. It starts as a question ("Has anyone really been far...") which could be understood if it ended differently. The addition of "even as decided" doesn't fit grammatically or semantically, making it nonsensical.
"To Use Even Go Want to do" - This section also lacks grammatical sense. It starts as if it's introducing a purpose or intention ("to Use...") but then it becomes a series of verb phrases without clear subjects or objects. It's as if multiple different sentences were combined without regard to their original structure or meaning.
"Look More Like?" - This phrase is missing a subject and an object for comparison. "Like what?" would be the obvious question here. We don't know what is being compared, so it is incomplete.
All these phrases do contain recognizable parts of speech, but they are not put together in a way that conforms to the syntactic or semantic rules of English. Therefore, the sentence as a whole is nonsensical in its current form.
A structured and coherent analysis. But it is missing something. The absurdity of human emotion? Wait, why are we analyzing this sentence in the first place? Why are you reading this essay? To learn something? To confirm pre-existing opinions? Maybe, but those are all pretexts. You’re here because of unrooted curiosity, fate, or some other abstract concept that is incomprehensible to a machine mind. This essay is a captcha and you have passed the test.
What sets a simple undergraduate apart from that first special-needs language model, or a polymath professor from the latest? While today’s models are more intelligent and present theses more nuanced than their end users could ever dream up even if blessed with an elephant’s dosage of Adderall, they are no different from their precursors. And they are not language as we know and understand it.
LLMs, no matter how fit or feeble, are imitations of language, for two reasons:
1. The why of their output: how the creation of output remolds them for the future, and how their input is picked and processed.
2. How they are taught: cast away from subtext, phonics, and a true intuition for words and all their combinations, LLMs are probability machines only capable of guessing. Autocomplete.
On point one, writing doesn’t exist solely to argue a thesis, but to deepen meaning. And by meaning I don’t mean a rote library of knowledge. It is not enough to communicate just ideas, but wisdom, states of being that are as yet unattainable for a machine. Each essay, utterance, and blurb one writes adds to a paper trail of practice, paving the way to novel ways of thinking. They are the screws that fasten a lightning rod ready to channel divine inspiration. No machine can channel this, as it is limited to pillaging a small subset of the known rather than diving into the unknown corpus. And it can’t dive out of a genuine curiosity, an urge to go to greater and greater depths to the point of drowning; it only works out of utility or request. Artificial intelligence is rootless without naive curiosity. But a machine can’t be naive, it can only be binary: right or wrong, only capable of being on the upside or downside of its own training, but never outside of it. A tool that can only predict your thoughts at the most rudimentary level of words. It can combine them with other thoughts and ideas like oil over water, but at best it’s an emulsion; it can’t truly integrate them at the microscopic level like your closest friend or enemy. There's nothing it can integrate these stray thoughts back into other than additional surface-level thoughts. There is no core personality, no emergent identity. A serverfarm-sized bowl of alphabet soup.
On point two, let’s ask ChatGPT itself how it learns:
ChatGPT, in the language of a layman, how do you learn?
“
Reading lots of stuff: We show the computer lots and lots of written text from the internet, like articles, stories, and social media posts.
Breaking it down: The computer learns to understand and organize the text by breaking it into small pieces called "tokens."
Learning patterns: Using special brain-like structures (transformers), the computer learns to find patterns in the text. It learns what words usually come after other words.
Guessing game: The computer plays a guessing game, trying to predict the next word in a sentence based on what came before. It does this over and over to get better at it.
Supercomputer training: To learn all this, the computer needs a lot of brainpower, so it uses special powerful machines.
Practice makes perfect: The more the computer practices, the smarter it gets in understanding and using language.
Making it better: The computer's trainers keep tweaking and improving the training to make it even smarter.
So, in the end, the computer becomes like a language expert and can have conversations with people like you, understanding what you say and responding in a way that makes sense!
“
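The “guessing game” ChatGPT describes can be reduced to a toy. A real model uses tokens and transformers, but the core move is the same: count what follows what, then bet on the most probable continuation. Here is a minimal sketch of that idea (a bigram counter, not an actual LLM; the corpus is invented for illustration):

```python
from collections import Counter, defaultdict

# Tiny stand-in for the internet-scale text a real model is trained on.
corpus = "the cat was tired so the cat lay down and the cat slept".split()

# Count which word follows which: the whole "education" is a frequency table.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word):
    """Guess the most probable next word; None if the word was never seen."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(autocomplete("the"))    # every "the" was followed by "cat", so: cat
print(autocomplete("slept"))  # "slept" ends the corpus, so: None
```

No understanding, no mind's eye or mind's ear, just a wager on frequencies; scale the table up by a few trillion rows and you have the slot machine described below.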
There you have it. A guessing game. A probability machine. One giant token eating slot machine. It’s not a tool, but a vice. To be a user is to meld with the machine itself, forgetting about language, the outside world, to lose yourself. Like these poor folks:
But what happens when we rely too much on these LLMs? What happens when humans are taught just the same, and for the same reasons? Not to ponder and wonder, but to predict and wade? If information is data in context, and knowledge is information in action, then what the fuck do you get with data near context? Information inaction. Antiwisdom. Well, we don’t have to look to the future to find out, because one type of AI has been with us since the ’80s. This crop has just reached maturity and is hitting the market with interesting results. I present to you:
The Three Cueing Method.
This “innovation” was born out of the alternative education methods of the ’60s. But unlike gonzo lectures and open classrooms, this method has had staying power for roughly the whole of the 21st century, thanks to a heavy lobby pushing three-cueing teaching materials into classrooms.
What is the three-cueing method?
A theory that young readers are best taught to read by guessing context from cues. The three cues are:
Graphic cues: Guessing the word based on the first letter (The cat says m---)
Syntactic cues: Guessing based on word class (noun, adjective, etc. “The cat was tired so he --- down”)
Semantic cues: Guessing a single word in a sentence based on context clues such as pictures or previous sentences.
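Notice how little a cue actually pins down. The graphic cue, for instance, only narrows the vocabulary; it never identifies the word, so the reader is left gambling among candidates. A toy sketch of that gamble (the vocabulary is invented for the example):

```python
# A made-up vocabulary a young reader might know.
vocabulary = ["meow", "moo", "bark", "milk", "map", "dog"]

def graphic_cue_guess(first_letter):
    """Return every word the graphic cue permits -- the cue filters, never decides."""
    return [w for w in vocabulary if w.startswith(first_letter)]

# "The cat says m---": the cue leaves four live guesses, only one of them right.
print(graphic_cue_guess("m"))  # ['meow', 'moo', 'milk', 'map']
```

Every sentence read this way is a fresh round of the same guessing game, which is exactly the problem the next paragraph takes up.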
This doesn’t sound like reading to me. One big problem with this approach is that readers never gain an intuitive grasp of the words they are using. When reading a passage, they are always trying to fill in what comes next, predicting words rather than instantly visualizing them. This is immensely mentally taxing, and causes context to be misinterpreted, or outright forgotten. It’s just one big guessing game, one awfully similar to the one ChatGPT is grinding at 24/7. We’re not endowing our youth with a lust for knowledge, a knack for curiosity, let alone even the most basic education or social skills. Hell, we’re not even preparing them for rote manual labor; holding a pencil might as well be the sword in the stone to them. The prize for winning the guessing game is to become a rootless, hallucinating autocomplete, less than schizophrenic, because even their most nonsensical AI delusions are autistically rule-bound without an ounce of originality.
Before the three-cueing method, reading teaching methods were relatively static throughout the history of standardized education. There were two prominent schools of thought: visual memory process and phonics. In the visual memory school, educators emphasized pairing entire words with images, through flash cards, or most popularly, picture books. Students would pair words with images until they had an intuitive mind's eye for every word they read. Phonics is the audio inverse of this: students would rigorously practice sounding out words and syllables until they had an intuitive “mind’s ear” for how every word they read sounded. This is more complex than just learning basic alphabet sounds, which is still common today. These processes are alike in that they give students a deep mental map for every word they learn, an intuitive feel and understanding. But more importantly, the act of visualizing or sounding out words is a framework in and of itself that, once mastered after incessant practice, serves to make learning new words an instant and painless process. Young learners could pick up brainboggling SAT words with the ease they pick up curse words in a PvP first-person shooter match. Heck, the audio-visual-adrenaline element of competitive video games is probably just this comprehensive input on steroids. Once perfected, this framework can be scaled up to an intuitive grasp of sentences, or even paragraph structures.
Visual memory and phonics are the most effective “encryption” algorithms on earth, better than the three-cueing method, and surely better than some throttled LLM with untethered access to a snapshot of the Internet Archive with all of its steamy fanfiction and ragebait. But in our imperfect world, things that are too good quickly disappear. (link to encryption guy that disappeared.) Market forces, hedonic adaptation, planned obsolescence? There’s always a race to the bottom for the latest and greatest scenic route in shortcut clothing. This race is being won by artificial intelligence, whether it be human or machine.
- - -
Dead Reality Theory
What happens when all of our institutions, from the hallowed halls of our courts and places of learning to our most cancerous digital cesspits, our offbeat chatrooms, desperate dating pools, and secret group chats, are overrun by human and inhuman artificial intelligence? When mechanical and mortal brains alike overflow with stress and frustration where context and intuition deserve to lie?
For one, a communication breakdown. The intuitive understanding of words is necessary for true, deep, one-on-one communication. Every misunderstood word is a jarring sidetrack, an unheard movie line. Like with their texts, the illiterati of our future will forget their place, stutter over the most basic milestones, lose their lines in life. At best one's peers are a grainy recording, at worst an uncanny-valley archaic 3D animation, missing those barely perceptible qualities that make a ragdoll human (polar express pic). Where one can’t understand or even feel the emotions of others, they’re forced to make a guess, using the three cues to fill in the gaps. But this is one-way communication, dressing up one's closest peers with your own thoughts and story, reducing them to benign NPCs in one’s own video game. This doesn’t just apply to communication, but to all doors of perception. When you lose one sense, it is filled in by another. EQ replaced by machine logic. Theory of mind replaced by computational learning theory. Everybody has to become their own Mr. Beast, running A/B tests on their own friends for maximum engagement. But because the logic has to compensate for the lost EQ, it is overtaxed, exhausted. When you put a computer nerd in the foreman’s office, the whole building crumbles. The five senses, and all of the other ones we haven’t yet discovered, are blunted.
If your whole framework of understanding is based on unintuitive guessing instead of intuitive understanding, you will lose your intuition. All System 1 thinking, gone. People will just make worse, overthought choices. Everyone will be worse at dancing, lying, doing anything on a whim, because they just can’t improvise. Like some kind of queer mechanical turk, we will all be human marionettes with machines pulling our strings.
The worst consequence of this machine learning, this machine being, is disillusionment. With the aforementioned communication breakdown and lack of shared context, we will all be lonelier than ever. Remember the professor from earlier? His ego will starve if only fed a diet of artificial essays to critique. Dead internet theory (link) will metastasize out of the internet and into the real world itself. It will become harder to stumble upon a truly human interaction removed from algorithmic influence. And if you do stumble upon such an interaction, with a cashier, your annoying neighbor, a homeless person, it will be even more difficult to discern whether this interaction truly was human. Not some reiterated meme, a marketing ploy, a dance for the machine gaze of social media. But a cry for help, a desperate whimper for one last hit of shared context before we’re surely exiled to our own individual echo chambers for eternity.
What is the solution to all this?
For one, we need to bring back phonics and not rely on some harebrained method. AI isn’t an inherently evil technology, and the cat is out of the bag. Hell, maybe it can be used to create phonics, real reading exercises personalized to one’s weak points. But it would need to be trained on an appropriate visualization/phonics training set to stress intuitive understanding of language. Maybe we should also set cultural limits on using LLMs to express novel ideas, or as a crutch for communication. It’s fantastic for standardized data, for example practicing and creating problem sets for a multiple-choice test. But our youth need to learn how to create their own original ideas in their own unique voice. LLMs should be exiled from the realm of interpersonal communication, or else our own voices will be crowded out by rote machine language.
Aside from limits, perhaps we should rely on LLMs to automate those inhuman bureaucratic processes. Filling out paperwork at the DMV, rote business emails that at their core are mechanistic gears of capital dressed up in the formalities of person to person communication, with no genuine emotion or shared context. Contracts. If we outsource these inherently anti human processes, maybe we’ll have more bandwidth to work on creating more human interactions, shared contexts, as cheesy as it may sound, love. That’s what life is all about after all.
If we can’t do the bare minimum as humans to ensure empathetic communication, true genuine understanding, we’ll all end up as castaways flinging messages in bottles, unread screeds washing up in a global landfill of meaning. Perhaps one may read:
"War is p____. Freedom is s____. Ignorance is s____."