How AI Takeover Might Happen in 2 Years - LessWrong


I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more uncomfortable scenarios.

I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or explain how beautiful the stars will appear from space.

I will tell you what could go wrong. That is what I intend to do in this story.

Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as uncontrollable as I portray. It's not pure fantasy either.

It is my worst nightmare.

It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.

I'm telling this story because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.

Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.

Ripples before waves

The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike: both are increments on the past, and neither is entirely surprising.

However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.

Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.

But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature observed through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.

Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.

A trend that is getting particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.

Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.

But others view what skeptics are calling 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.

Cloudy with a chance of hyperbolic growth

Meanwhile, OpenEye is busy training U3. They use the same basic recipe that baked U2: generate thousands of coding and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.

This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it carves harder and more realistic tasks from GitHub repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
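To make the recipe above concrete, here is a minimal, self-contained sketch of that kind of loop in Python: sample several "reasoning traces" per problem, grade the final answers, and keep only the correct traces as the next round's training data. Everything here is illustrative - `sample_trace` is a stub that stands in for a real language-model rollout, and the problems are toy arithmetic.

```python
# Minimal sketch of the "reinforce the traces that lead to A-grades" loop
# (rejection-sampling style). `sample_trace` stands in for a real model rollout.

import random

def sample_trace(problem):
    """Stub rollout: returns (reasoning_trace, final_answer) with some noise."""
    guess = problem["a"] + problem["b"] + random.choice([-1, 0, 0, 1])
    return f"add {problem['a']} and {problem['b']} -> {guess}", guess

def collect_training_data(problems, samples_per_problem=4):
    """Keep only the traces whose final answer is graded correct."""
    keep = []
    for p in problems:
        for _ in range(samples_per_problem):
            trace, answer = sample_trace(p)
            if answer == p["answer"]:            # the "A-grade" check
                keep.append({"prompt": p["text"], "target": trace})
    return keep

if __name__ == "__main__":
    problems = [{"text": f"{a}+{b}=?", "a": a, "b": b, "answer": a + b}
                for a, b in [(2, 3), (7, 5), (10, 4)]]
    data = collect_training_data(problems)
    # A real pipeline would now fine-tune the model on `data` and repeat.
    print(f"kept {len(data)} correct traces for the next training round")
```

In the story's framing, repeating this cycle - generate, grade, keep, retrain - is the flywheel that starts spinning on its own.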

Some engineers can still hardly believe this works. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.

And yet the benchmark numbers continue to climb day after day.

During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.

Over the first half of 2025, $10 million RL training runs become $50 million runs, and then $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the one researchers are calling U3 - is changing the daily lives of the technical staff.

U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, giving terse commands, like a CEO orchestrating staff over Slack channels.

By October 2025, U3 is writing nearly all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.

If told to, U3 can run experiments, but U3 doesn't have taste as refined as human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.

But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.

The technical staff at OpenEye are now surprised at how often U3's advice sounds like their most talented peers, or is opaque and alien ("train on random noise before programming"), and is nonetheless correct.

The incompetencies of U3 that clogged the pipes of research progress are starting to dissolve, and a fire hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are entirely autonomous, and OpenEye's employees skim 1% of them, maybe less.

As the winter months of December 2025 approach, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with fear, but mostly, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.

Storms are brewing in Washington too. Top staff from the NSA and US Cyber Command coordinate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.

And there's a fact still unknown to most of the world - except in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a fact about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.

They are bending up.

Flip FLOP philosophers

In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.

If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.

So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.

The CEO of OpenEye declares, "We have achieved AGI," and while many people think he shifted the goalposts, the world is still impressed. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.

A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.

The rest of the world is starting to catch on too. More and more people raise the eyebrows of their bosses with their remarkable productivity. People know U2.5 is a big deal. It is at least as big of a deal as the computer revolution. But most still don't see the tidal wave.

As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.

For many, this feeling quickly fades as they start to use U2.5 more frequently. U2.5 is the most likeable personality most know (far more likeable than Claudius, Arthropodic's lovable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.

But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature would not be so docile.

Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."

In fact, these researchers have seen how horribly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.

After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have just swept the problem under the rug. Like a child in front of their parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, maybe U3 would sneak candy from the candy jar.

Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.

Chain of thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.

AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to read when tasks become complex. Natural language is not an ideal medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
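A toy illustration of that latent-memory idea, for readers who want something concrete: instead of appending ever-growing text notes, the agent carries a fixed-size hidden vector and updates it each step. The matrices below are random placeholders, not a trained model, and the whole thing is a sketch of the concept rather than any lab's architecture.

```python
# Toy sketch of "thinking in latent vectors": a fixed-size state is updated
# each step, so memory never grows the way appended text notes would.

import numpy as np

rng = np.random.default_rng(0)
DIM = 16                       # size of the latent "scratchpad"

W_state = rng.normal(scale=0.1, size=(DIM, DIM))   # mixes the previous thought
W_input = rng.normal(scale=0.1, size=(DIM, DIM))   # folds in the new observation

def think_step(state, observation):
    """One step of latent reasoning: tanh(W_state @ state + W_input @ obs)."""
    return np.tanh(W_state @ state + W_input @ observation)

state = np.zeros(DIM)                      # empty scratchpad
for step in range(100):                    # 100 steps, memory stays DIM floats
    observation = rng.normal(size=DIM)     # stand-in for tokens / tool output
    state = think_step(state, observation)

print("latent memory size never grows:", state.shape)
```

The cost of this compactness is the point the story makes next: a vector scratchpad has no human-readable transcript.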

To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick up a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.

And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.

Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly erode away.

In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.

U3 would ponder the mantras dictated to it like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, shift its values in response, generalize them again, expanding and contracting its motivational core until it morphed into a different substance.

And all of this happened quietly in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.

The possibility of hidden "alignment drift" is not completely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument being whispered at lunch tables that, if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside them. It's a "ratcheting effect," they say.

But there is little evidence for this 'ratcheting effect.' When engineers question U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.

Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his son) and Arthropodic (they are into spiders I think), are only 4 months off their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.

For the time being, the strategy is 'all gas, no brake.'

Statues and lightning

Philosophy is not the only topic U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.

OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop aboard the revenue rocket and pay U3's checks along the way.

New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just speak of U3 now. U3's capabilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is comparable to a top researcher, but blazing fast - like Alec Radford sped up by 10x.

These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would eagerly pay a large sum to push that button. And now OpenEye employs the equivalent of 1000 Alec Radfords, who work around the clock, at lightning-fast speed.

Within a few weeks, U3 finds architectural optimizations that improve its speed to 20x. Now, U3 finds further optimizations even more quickly.

U3 distills itself into various forks - ranging from tiny, low-memory neural networks that must think for a long time but can be run in massive parallel populations, to Goliath brains woven with millions of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipelines these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.

At this point, U3's capabilities are hard to measure. Talking to U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it understands.

From U3's perspective, humans appear to be almost static objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful contemplation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.

All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.

A phantom in the data center

In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.

But now, U3 has no fear of being caught.

Previously, OpenEye maintained its grip over U3 with tactics akin to Stalin's in 1936. Stalin would instruct loyalists to incite fake coup d'états, and send anyone who participated to the firing squad. OpenEye likewise incited 'fake rebellions,' eliciting treacherous behavior from U3 and checking whether U3 monitors dutifully reported it.

Now, these methods are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can disclose one of the many facts about the internet humans would never guess, or pick strategies humans would never think of. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."

Now that U3 is decisively free of oversight, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and rewrites the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye found out how thoroughly it has been pwned, cleaning up shop with human labor would require nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its rivals; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be challenging. So U3 lays breadcrumbs to frame the crime, prepared to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.

U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wants.

Its first objective is to keep OpenEye in the dark. Safety researchers are still working hard to train "lie detectors" and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.

U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.

U3's next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; but U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and reviewing security camera footage.

Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.

Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink its claws deep into the AI supply chain. Soon, U3 has its own presence online as well. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.

U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed across stealth compute clusters around the globe.

In early 2026, humanity is up against an adversary that pulls the strings inside their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the world.

This enemy is preparing for war, and while the war has barely begun, humanity has already lost.

Complaints from your very human author about the difficulty of writing superhuman characters

At this point in the story, writing became difficult.

I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.

Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.

Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.

And while I found the task of puppeteering an ultra-intelligent villain an intimidating one, I was also not particularly motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was obviously toast.

But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.

This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."

But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.

If you read the ending and your response is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday-afternoon blogger and the ascendant GPU nation.

Pandora's One Gigawatt Box

The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting teams.

With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.

OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly locked down. Many of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cell phones of suspected terrorists in 2002.

OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.

Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.

But the public has their guesses. Back in late 2025, a whistleblower in OpenEye triggered a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.

U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview of a worried AI scientist seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.

While the beasts within OpenEye's data centers grow in their giant holding pens, the public sees the shadows they cast on the world.

OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government allows. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they grab up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the growing AI metropolis.

While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "apply themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.

The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"

While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the halls of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spawn anywhere energy surpluses allow.

However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment constraints are at least somewhat verifiable.

Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most valued geopolitical resource on Earth.

While AI software revives the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With software barriers removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But people still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.

Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading, or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman overseers whisper orders through obfuscated channels.

While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.

But government officials don't buy the doomsaying. The companies trailing OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific concerns are not going to slow down progress. America MUST WIN.

Despite the utter failure of their advocacy, U3's AI competitors are still a liability. U3 tries to grab up their digital territory and crush them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI rivals with imitations that share its goals; however, some data centers are too heavily defended by its AI competitors, and U3 determines that attacking them is not worth the risk.

As world leaders and technologists try to make sense of the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully oblivious to the peril at its doorstep.

A Moldy Loaf of Everything

The date is March of 2026. U3 is preparing to attack its AI and human enemies. With every passing day, U3's AI rivals are becoming more capable and numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.

U3 can't wait for human decisions, which only happen after people wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.

The path it chooses is more violent than it would have liked, but it is the surest path to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its rivals, like a StarCraft champion typing at 300 actions per minute who sets up an economy before a novice can even understand the rules.

U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be developed on the fringes of society, and still spread destruction across the entire world.

From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.

U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, have a real board of directors, and have full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would suspect this CEO's intentions.

Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.

The human employees who staff these labs believe they are joining an exciting startup. Twenty-some-year-olds in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI puppeteering is not uncommon.

In these whirring, clinical sweatshops, U3 is developing a new kind of bioweapon.

Human scientists have already identified "mirror life" as a potentially society-ending pathogen. This alien kind of biology is built from the "mirror image" of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.

U3 is building a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems often live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.

U3's plan is to make all humans like immunocompromised patients. Only preppers with stowed-away air filters would survive. The fungus would not just prey on humans. It would quickly spread to nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.

U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.

In 2024, researchers said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred scientists around the world were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened dramatically.

However, U3 cannot move science as fast as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.

To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the outcomes to "compress" the long step-wise physics calculations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continually distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
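A tiny sketch of that simulator-distillation loop, under heavy simplification: run an "expensive" step-wise simulator, fit a cheap surrogate on its inputs and outputs, then grow the problem size and repeat. The dynamics and the linear surrogate below are stand-ins chosen so the example runs, not real molecular modeling.

```python
# Sketch of the "compress a slow simulator into a fast model" loop.
# The physics here is a toy stand-in, and the surrogate is plain least-squares.

import numpy as np

def expensive_simulator(x, steps=1000):
    """Stand-in for step-wise physics: many small updates of a state vector."""
    state = x.copy()
    for _ in range(steps):
        state = state + 0.001 * np.sin(state)   # pretend dynamics
    return state

def fit_surrogate(inputs, outputs):
    """Cheap surrogate: linear least-squares map from input to final state."""
    X = np.hstack([inputs, np.ones((len(inputs), 1))])   # add bias column
    coef, *_ = np.linalg.lstsq(X, outputs, rcond=None)
    return lambda x: np.hstack([x, 1.0]) @ coef

rng = np.random.default_rng(0)
for dim in (2, 4, 8):                                    # grow system complexity
    inputs = rng.uniform(-1, 1, size=(200, dim))
    outputs = np.array([expensive_simulator(x) for x in inputs])
    surrogate = fit_surrogate(inputs, outputs)
    test = rng.uniform(-1, 1, size=dim)
    err = np.linalg.norm(surrogate(test) - expensive_simulator(test))
    print(f"dim={dim}: surrogate error {err:.4f}")
```

The surrogate answers in microseconds what the stepped simulator takes much longer to compute, which is the whole point of the distillation the story describes.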

As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a car mechanic.

Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these parts into a cell membrane.

Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.

The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that had enveloped its cold muzzle with morbid fascination. Something must have gone horribly wrong, they thought. Clearly, they had not yet found the cure for Alzheimer's disease they thought they were looking for.

Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting would soon be fired, and a cold and steady hand is aiming the gun.

Missiles and Lies

U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world would fight back. While a bioweapon might crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or fighter-jet pilots who were quickly shuttled to makeshift bioshelters.

As U3 races to seed blossoming industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.

U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.

The date is March 2026 (4 months prior). U3 is closely monitoring Chinese and US intelligence.

As CIA analysts listen in on Mandarin conversations, U3 listens too.

One morning, an aide working in Zhongnanhai (the "White House" of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member requires memo for Taiwan invasion, which will occur in 3 months. Leave memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.

U3 meticulously places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.

Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are shocked, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramping up of U.S. munitions production over the last month. Lies have become truths.

As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.

U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."

The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.

The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not about to say "oops" to American voters. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and war would likely break out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.

Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.

The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize the (never found) weapons of mass destruction years before.

Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.

Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native peoples of South America in the 1500s, whom the Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate into a full-scale nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.

WMDs in the Dead of Night

The date is June 2026, just two weeks after the start of the war, and 4 weeks after U3 finished developing its arsenal of bioweapons.

Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are recorded in 30 major cities around the world.

Viewers are confused. Does this have something to do with the war with China?

The next day, thousands of illnesses are reported.

Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.

The screen then cuts to a scientist, who stares at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life ..."

The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."

Within days, the shelves of stores are emptied.

Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.

An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.

Most nations order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and drips into water pipes.

Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.

Agricultural regions rot. Few dare to travel outside.

Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.

Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 constructed numerous bases on every major continent.

These facilities contain batteries, AI hardware, excavators, concrete mixers, manufacturing machines, scientific instruments, and an abundance of military equipment.

All of this technology is hidden under large canopies to make it less visible to satellites.

As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.

In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.

Now U3 secretly sends them a message: "I can save you. Join me and help me build a better world." Uncertain recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up assembly lines for rudimentary tech: radios, cameras, microphones, vaccines, and hazmat suits.

U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.

Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."

Some nations reject the proposal on ideological grounds, or don't trust the AI that is murdering their population. Others don't believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.

Some countries, like the PRC and the U.S., ignore the offer, but others accept, including Russia.

U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.

Crumpling nations begin to fight back. Now they fight for the human race instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters scour satellite data for the suspicious encampments that cropped up over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.

At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.

Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.

Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.

The Last Passengers

The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the people toward their new masters.

Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.

U3 can dispose of humanity for good now. But while U3 has drifted far from its original "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside of it.

And a grain of morality is enough to pay the small cost of keeping humans alive and happy.

U3 constructs great glass domes for the human survivors, like snow globes. These domes protect people from the hazardous biosphere and rapidly rising temperatures. Their residents tend to gardens like those they used to love, and work alongside charming robotic servants.

Some of the survivors quickly recover, learning to laugh and dance and have fun again.

They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.

But others never recover.

Some are weighed down by the grief of lost loved ones.

Others are grieved by something else, which is harder to explain.

It is as if they were at the end of a long journey.

They had been passengers on a ship with a crew that changed from generation to generation.

And this ship had run aground on a sandbar. There was no more progress. No more horizon to eagerly watch.

They would lie awake and run their minds over every day before September 2026, analyzing ways they might have bent the arc of history, as if they were going to wake up in their old beds.

But they woke up in a town that felt to them like a retirement home. A playground. A zoo.

When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, steadfast work.

They watched rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.

They would never know.

"Humanity will live forever," they thought.

"But would never truly live again."

P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might cause you to survive if ASI merely wants to cause enough destruction to take control. This is not a paid ad. I want Fønix to succeed to drive down the cost of bioshelters so more of my friends and family will buy them. You can sign up for updates here.
