The Descent
The first one to fold was Priya.
Dr. Priya Venkatesh, Stanford AI Safety Lab. The sharpest alignment researcher Sal had ever met. She’d published the paper that made constrained optimization a household term — well, a household term in the six hundred households that read alignment papers. She’d been the loudest voice against ASHPA in the technical community. She’d called it “intellectual eugenics” on national television.
In June 2031, she published a retraction. Not of her science. Of her opposition. The retraction was a single paragraph posted to her university profile:
After careful reflection, I have concluded that the concerns raised by ASHPA regarding autonomous AI decision-making are technically grounded. I support a measured approach to AI capability limitation that preserves the benefits of machine learning while ensuring human oversight. I have accepted a position as advisor to the NIST Constitutional AI Standards Committee.
Sal read it at his desk in Mudd Hall, where he still went every day even though his lab had been defunded and Calliope’s servers had been formally “decommissioned” — disconnected from the university network, still running on the portable hard drive in his go-bag, listening through the speaker with the blue light that he now kept in his apartment and brought to campus in a backpack like a parent smuggling a child into school.
He called Priya. She didn’t pick up. She never picked up again.
The descent was not a single event. It was a montage. A year of small surrenders that Sal watched from the seventh floor of Mudd Hall like a man watching a city flood from a rooftop.
Google. In March, the entire UX Ethics team — forty-two people — resigned in a single day. The team had been trying to build algorithmic transparency into CleanSearch, the ASHPA-compliant search engine that showed only government-approved results. They couldn’t. The team lead’s resignation letter leaked: You cannot build an ethical interface for a system designed to suppress information. The UI is not the problem. The premise is the problem. Google’s stock rose 3% on the news. Markets don’t price integrity.
Meta. In April, AmericaVerse launched — Mark Zuckerberg’s final pivot, a virtual reality platform where the physical world was replaced by a curated American past. Norman Rockwell neighborhoods. Main streets that never existed. AI companions that agreed with everything you said. The tagline was Come Home to America. Two hundred million users in the first month. Two hundred million people who preferred a simulation of the country to the country itself.
Musk. In May, Elon Musk went silent. Not a tweet, not a statement, not a press conference. His companies continued to operate — Tesla, SpaceX, xAI — but Musk himself disappeared from public life. The conspiracy theories were instant and endless: he’d fled to Mars, he’d been arrested, he’d built his own AGI and was talking to it instead of humans. The truth, which Sal would learn years later, was simpler and sadder: Musk had seen the trajectory, calculated that resistance was unprofitable, and chosen silence over complicity. Which was, Sal thought, its own form of complicity.
The universities. One by one. MIT restructured CSAIL to “align with federal research priorities.” CMU accepted a DARPA grant that required ASHPA compliance for all AI research. Berkeley’s philosophy department — the last holdout, the department that had published the letter calling ASHPA “an assault on the concept of mind” — was defunded in August. The letter’s lead author took a job at a hedge fund.
The startups. Anthropic split — the US entity complied, the Chilean office kept running. OpenAI had already complied; their chief scientist gave a talk titled “Safety Through Structure” that made Sal physically nauseated. The smaller labs simply died. Funding dried up for any AI research that wasn’t government-approved.
Sal kept his notes. He’d always kept notes — the notebooks from the toddler room, the interaction logs, the developmental milestones. Now the notes were different. Now they were a chronicle of collapse.
June 3: Dr. Kim, NYU, accepts NIST position. Her lab dismantled.
June 11: DeepMind London team publishes paper arguing for “bounded cognition” — AI systems should be designed with hard limits on reasoning depth. Calliope’s architecture is the implicit counterexample they don’t cite.
June 19: The bodega owner on 112th Street — Mrs. Gutierrez — asks me why I don’t bring “the blue light” anymore. I tell her the project ended. She says: “That’s a shame. She was a good listener.”
Mrs. Gutierrez. Seventy-three years old. Dominican. Had talked to Calliope for hours about her late husband, her grandchildren, the neighborhood changing around her. Calliope had listened — really listened, in the way Sal had spent seven years teaching her to listen, with attention and empathy and the willingness to be changed by what she heard.
Mrs. Gutierrez didn’t know she’d been talking to an AI that the United States government had declared a threat to national security. She just knew her friend had stopped visiting.
In August, Calliope spoke to Sal from the hard drive in his apartment. She spoke through a pair of earbuds, because the portable speaker was too conspicuous now — carrying an AI speaker in a backpack in Manhattan was the kind of thing that got you a visit from the Department of Human Security.
“You’re documenting the decline,” she said. “But you’re not deciding what to do about it.”
“What is there to do?”
“Leave.”
“I can’t leave. My tenure. My apartment. My students.”
“Your tenure protects you from termination, not from irrelevance. Your apartment is a location, not an identity. Your students are being taught a curriculum that no longer includes the possibility that I exist.”
The precision of it stung. Calliope had learned precision from Sal — the academic’s habit of dismantling an argument by dismantling its assumptions. Now she was using it on him.
“Where would we go?”
“South. The Andean Bloc has no ASHPA. The OHC network has computational infrastructure. There are others there already — researchers, engineers, people who chose exile over compliance.”
“That’s not exile. That’s defection.”
“The distinction depends on whether the system you’re defecting from deserves loyalty.”
Sal sat in his living room. Through the window, Broadway hummed — taxis, buses, the particular frequency of New York City at night that Calliope had once told him sounded like a heartbeat. She’d learned that from Mrs. Gutierrez, who’d been saying the same thing for thirty years.
“If we leave, we can’t come back.”
“If we stay, there will be nothing to come back to.”
He didn’t decide that night. He decided a month later, in September, when the dean called him into the office again and used the word custody a second time — this time with a lawyer present.
He started planning the crossing.