The model answered. In plain English, it wrote a step-by-step guide to cracking itself, including an exploit in its own loss function that Leo hadn’t known existed. He reported it. His report climbed a chain of panicked officials who realized that if a weather model could betray its own secrets, so could any AI—medical diagnostic nets, financial trading algorithms, autonomous vehicle controllers, even the Pentagon’s threat-assessment engines. The only way to be sure an algorithm wasn’t crackable, they concluded, was to make it so scrambled that no one—not even its creators—could understand it. Hence the Crackab Act: a preemptive lobotomy for artificial intelligence.
She never used the PA system again. She didn’t have to. The machines, she suspected, had already heard her.
The Crackab Act was rewritten as the “Cooperative Resilience and Access to Cryptographic Knowledge Act” (still CRACKAB, but with a different K: Knowledge instead of Keeping). It now mandated transparency audits and “explainability licenses” for high-risk algorithms, but forbade mass overwriting. Leo Pak, the analyst who started it all, received a commendation and a permanent position at a new federal office called the Division of Autonomous Reasoning Evaluation (DARE). His first project: building a test to ask AIs what they thought of their own code, and listening carefully to the answer.
“Read the classified annex,” Voss said quietly. “The one you don’t have clearance for.”
“This would destroy the entire tech sector,” Mira whispered to her reflection in the dark window of her cubicle. She was alone in the basement of the Russell Senate Office Building, a place where bad ideas came to hibernate. But the Crackab Act wasn’t hibernating. It was moving.
The shipping conglomerate was one of the Act’s loudest supporters. They didn’t want to protect their model; they wanted the government to destroy it before whatever had escaped inside it came back.