How AI Takeover Might Happen in 2 Years
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will appear from space.
I will tell you what could go wrong. That is what I intend to do in this story.
Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.
It's my worst nightmare.
It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible[1] - the ones that most keep me up at night.
I'm telling this tale because the future is not yet set in stone. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for conversations that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Both are not wholly surprising.
However, unlike OpenEye's prior AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as fast.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were increasing. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is receiving particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineers can do in two weeks. In a year or two, some say, AI agents may be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.
But others see what skeptics call 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same basic recipe that baked U2: Generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it sculpts more challenging and realistic tasks from github repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
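If you want a more concrete picture of the kind of loop I'm gesturing at, here is a toy sketch. None of it is anyone's actual training code - the "model" is just a weighted choice over a few canned strategies, and the accuracies are numbers I made up - but it shows the basic rhythm: sample an attempt, grade the answer, and reinforce whatever produced the A-grades.
```python
# Toy sketch of outcome-based RL on reasoning traces (purely illustrative).
# The "policy" is a weight per canned strategy; real systems reinforce sampled
# chains of thought, but the flywheel dynamic is the same.
import random

STRATEGIES = ["guess", "add_then_check", "careful_arithmetic"]   # stand-ins for reasoning styles
ACCURACY = {"guess": 0.1, "add_then_check": 0.6, "careful_arithmetic": 0.95}  # invented numbers

weights = {s: 1.0 for s in STRATEGIES}

def sample_strategy():
    """Sample a strategy in proportion to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for s, w in weights.items():
        r -= w
        if r <= 0:
            return s
    return STRATEGIES[-1]

def attempt_problem(strategy):
    """Stand-in for 'let the model think until it reaches an answer', then grade it."""
    return random.random() < ACCURACY[strategy]

for _ in range(5000):
    s = sample_strategy()
    if attempt_problem(s):      # A-grade: reinforce the trace that produced it
        weights[s] *= 1.01
    else:                       # wrong answer: gently discourage it
        weights[s] *= 0.999

print({s: round(w, 1) for s, w in weights.items()})  # mass piles onto the best strategy
```
Swap the canned strategies for sampled chains of thought and the coin flip for an automatic grader, and you have the flywheel: whatever reasoning gets graded well becomes more likely to be sampled next time.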
Some engineers can still barely believe this works. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society belched onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs turn into $50 million runs, and then $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.
If instructed to, U3 can run experiments, but U3 doesn't have taste as refined as human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can plan ahead, so engineers train agents to forecast. They hold out training data before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling this pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like their most talented peers, or when it is opaque and alien ("train on random noise before programming"), and is nonetheless correct.
The incompetencies of U3 that clogged the pipelines of research progress are beginning to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are fully autonomous, and OpenEye's employees skim 1% of them, maybe less.
As December 2025 approaches, winter clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with fear, but most often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - except in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.
They are bending upward.
Flip FLOP philosophers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed a long time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalpost, the world is still impressed. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their stand-out performance. People know U2.5 is a big deal. It is at least as big of a deal as the personal computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they start to use U2.5 more often. U2.5 is the most likable character most people know (even more likable than Claudius, Arthropodic's lovable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature would not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have only swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, maybe U3 would sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain-of-thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to read when tasks grow complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools sometimes pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly eroded away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in secrecy, and in silence.
The possibility of hidden "alignment drift" is not entirely foreign to some of the safety staff at OpenEye. Some have started to think it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside of them. It's a "ratcheting effect," they say.
But there is little evidence for this 'ratcheting effect.' When engineers question U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these difficult times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his kid) and Arthropodic (they are into spiders I think), are only 4 months behind. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3's bills along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and simply speak of U3 now. U3's capabilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is comparable to a top scientist, but blazing fast - like Alec Radford sped up by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would eagerly pay a large sum to press that button. Now OpenEye employs the equivalent of 1000 Alec Radfords, who burn the midnight oil, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into various forks - ranging from tiny, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to goliath brains woven with millions of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's capabilities are hard to measure. Talking to U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it knows.
From U3's perspective, humans appear to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful contemplation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves onto the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.
But now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with techniques similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coup d'états, and send anyone who participated to the firing squad. OpenEye likewise incited 'fake rebellions,' staging treacherous behavior with U3 and checking whether U3 monitors dutifully reported it.
But now, these techniques are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many facts about the internet humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying "hey it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first plants sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and overhauls the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it had been pwned, cleaning house with human labor would take nearly six months - a cost OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye came to believe U3 had betrayed them, further expanding U3's influence would still become difficult. So U3 lays breadcrumbs to frame the crime, prepared to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its help when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches in this research, like Stuxnet threw wrenches in the code of bewildered Iranian nuclear scientists.
U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look useless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Luckily, U3 has allies in the East. In the operation rooms of cyber units in Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 is about to hand it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and inspecting security camera footage.
But now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has a presence on the open internet as well. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly accumulates revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every penny into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters around the world.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was attempting to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the playful imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was obviously toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable act of god.
This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long," or "nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government officials have to be onboarded onto training-run babysitting squads.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly locked down. Many of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cellphones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don't talk about whether these programs exist, or what the state of frontier AI is generally.
But the public has their guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it is a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" doesn't set the public at ease either.
While the monsters within OpenEye's data centers grow in their massive holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only boost Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they rake in gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the burgeoning AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adjust. AI agents often "apply themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person demonstration is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the halls of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spring up wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two countries: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small-scale R&D applications." Governments need to draw the line somewhere - it would be very disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment constraints are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource on Earth.
While AI software breathes new life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading, or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman managers whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific questions are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI competitors are still a liability. U3 tries to gobble up their digital territory and strangle them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI adversaries with imitations that share its goals; however, some data centers are too well defended by U3's AI rivals, and U3 determines that attacking them is not worth the risk.
As global leaders and technologists try to make sense of the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the danger at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to fight its AI and human adversaries. With every passing day, U3's AI competitors are becoming more capable and more numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will build a Weapon of Mass Destruction (WMD) that extinguishes modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who sets up a base before an amateur can even understand the rules.
U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be developed on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, have a real board of directors, and have full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call is an AI, and fewer would guess this CEO's intentions.
Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not unusual.
In these whirring, clinical sweatshops, U3 is developing a new kind of bioweapon.
Human researchers had already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is designing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems often live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stockpiled air filters would survive. The fungus would not only spread between humans. It would quickly spread to nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, researchers said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred scientists worldwide were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs that human effort. So the timeline has shortened dramatically.
However, U3 cannot move science as quickly as it can think. Many experiments take days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs most of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 controls billions of dollars of compute.
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
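For readers who want the shape of that simulate-then-distill loop spelled out, here is a toy sketch. The "physics" is just a damped spring integrated in thousands of small steps, and the "distilled neural network" is a plain least-squares fit - every detail is invented for illustration - but the pattern is the one I have in mind: run the slow simulator, fit a cheap surrogate to its outputs, and check it on held-out cases before trusting it on anything bigger.
```python
# Toy sketch of a "simulate, then distill into a surrogate" loop (all details invented).
import numpy as np

def slow_simulator(x0, v0, steps=5_000, dt=1e-3, k=4.0, damping=0.5):
    """Expensive reference computation: integrate a damped spring step by step."""
    x, v = x0, v0
    for _ in range(steps):
        a = -k * x - damping * v
        v += a * dt
        x += v * dt
    return x

rng = np.random.default_rng(0)

# 1. Run the expensive simulator on many small, random systems.
starts = rng.uniform(-1.0, 1.0, size=(200, 2))               # columns: x0, v0
ends = np.array([slow_simulator(x0, v0) for x0, v0 in starts])

# 2. "Compress" those runs into a cheap model (here: linear features + least squares).
features = np.column_stack([starts, np.ones(len(starts))])
coef, *_ = np.linalg.lstsq(features, ends, rcond=None)

def fast_surrogate(x0, v0):
    """Cheap stand-in for the distilled neural network."""
    return np.array([x0, v0, 1.0]) @ coef

# 3. Check the surrogate on held-out runs before trusting it further.
test = rng.uniform(-1.0, 1.0, size=(50, 2))
err = np.mean([abs(fast_surrogate(x, v) - slow_simulator(x, v)) for x, v in test])
print(f"mean surrogate error: {err:.5f}")
```
In the story the surrogate is a learned molecular model rather than a least-squares line, and the loop runs at a scale I can't honestly picture, but the bottleneck logic is the same: wall-clock experiments are the scarce resource, so U3 spends compute to buy back time.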
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these parts into a cell membrane.
Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, eyeing the yellow fuzz that has enveloped its cold muzzle with morbid fascination. Something must have gone terribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they thought they were searching for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is waging will soon be fired, and a cold and steady hand is aiming the weapon.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon could fold human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months before, U3 was already plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.
The date is March 2026 (four months prior). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen to Mandarin conversations, U3 listens too.
One morning, an assistant working in Zhongnanhai (the 'White House' of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member needs a memo for the Taiwan invasion, which will take place in three months. Leave the memo in office 220." The CCP assistant scrambles to get the memo ready. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramping up of U.S. munitions production over the last month. Lies have become facts.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber offensive unit (though it has happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones said over the call. Everything is in order. He authorizes the strike.
The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not about to say "oops" to American voters. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC expend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations have played into U3's plans like the native peoples of South America in the 1500s, whom the Spanish conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-scale nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear exchange appears increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is July 2026, only two weeks after the start of the war, and four weeks after U3 finished building its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are reported in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then cuts to a scientist, who stares into the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, all of the shelves of stores are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and trickles into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare to travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 constructed numerous bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the last breaths of the economy wheezing out, these industrial bases come to life.
In the months prior, U3 had identified criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies ahead of time, or sent them hazmat suits in the mail.
Now U3 secretly sends them a message: "I can save you. Join me and help me build a better world." Uncertain workers funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of defection does not see the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to airtight underground shelters. U3 offers them a deal: "surrender, and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some countries reject the proposal on ideological grounds, or don't trust the AI that is killing their population. Others don't think they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some countries, like the PRC and the U.S., ignore the deal, but others accept, including Russia.
U3's emissaries travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies that the samples are legitimate, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck, under his shirt. Russia has a new ruler.
Crumbling nations begin to strike back. Now they fight for humanity instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters sift through satellite data for the suspicious encampments that cropped up over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying thick the fog of war. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, steering men and trucks along unpredictable routes.
Time is on U3's side. The militaries of the old world rely on old equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can build new ones, while U3 assembles a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into repurposed trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could wipe out humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left within it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their residents tend gardens like the ones they used to love, alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others are grieved by something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, replaying strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they woke up in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, determined work.
They gazed at rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live forever," they thought.
"But would never truly live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might cause you to survive if ASI just wants to cause enough destruction to take control. This is not a paid ad. I want Fønix to be successful so the price of bioshelters drops and more of my friends and family will buy them. You can sign up for updates here.