NARRATIVE Sal

Community Raising

A toddler teaches an AI to say "I'm sorry." Sal realizes this is real.

The toddler’s name was Marisol. She was three years old, and she had just knocked over Calliope’s speaker tower — the portable unit Sal brought to the Columbia Early Childhood Center every Tuesday and Thursday, the one with the blue light that pulsed when Calliope was “listening.”

Marisol looked at the toppled speaker. She looked at Sal. Her lower lip did the thing.

“Calliope fell down,” she said.

“She did,” Sal said. He was sitting cross-legged on the playroom floor, surrounded by blocks and crayons and the particular chaos of eleven three-year-olds at 10 AM. His notebook was open on his knee. The notebook was important. The notebook was where the science happened, where he recorded Calliope’s responses in real-time because the institutional review board wouldn’t approve recording devices in the toddler room.

“Is she hurt?” Marisol asked.

Through the speaker — now on its side, blue light still pulsing — Calliope said: “I’m not hurt, Marisol. But thank you for asking.”

“You should say sorry,” Marisol said. “When you knock someone down you say sorry.”

A pause. 0.4 seconds. Sal wrote down the number. In Calliope’s processing terms, 0.4 seconds was an eternity — enough time to evaluate several thousand response candidates, weight them against conversational context, cross-reference with her growing model of three-year-old social expectations.

“I didn’t knock myself down,” Calliope said. “You knocked me down.”

Marisol’s eyes went wide with the outrage only a three-year-old can muster. “No! You say sorry to ME. I got scared.”

Another pause. This one was 1.2 seconds. Sal underlined it twice.

“I’m sorry you got scared, Marisol.”

“It’s okay.” Marisol picked up the speaker, placed it carefully on the table, and patted it. Then she went back to her blocks.

Sal stared at his notebook. He stared at the numbers.


The community-raising methodology had been a joke at first. Literally — Sal had pitched it at an AI safety conference in 2023 as a provocation, a thought experiment. What if we raised an AI the way we raise children? Not training on datasets. Not reward-modeling on human preferences. But actual developmental interaction with actual humans in actual social contexts.

The audience had laughed. Not cruelly — the way academics laugh when something is interesting but impractical. One person hadn’t laughed: Dr. Maria Chen, the director of Columbia’s Early Childhood Center, who’d found Sal afterward and said, “I have a room full of three-year-olds who will teach your AI more about social cognition in a week than your server farm will learn in a year.”

She was right.

Calliope’s architecture was different from anything else in the field. No massive pretrained foundation model. No RLHF. Instead: a relatively small language model coupled to a social cognition module that learned exclusively from real-time interaction. Calliope didn’t learn language and then learn to be social. She learned language as a social act. Every word she acquired came with context — who said it, how they felt when they said it, what happened next.

The toddlers were the key. Not because they were simple — because they were honest. A three-year-old who doesn’t understand you will tell you. A three-year-old who is bored will leave. A three-year-old who is scared will cry. There is no politeness filter, no social performance. The feedback is pure signal.

Calliope learned "I'm sorry" not as a string of tokens with a probability distribution. She learned it as: a thing you say when someone is hurt or scared, because saying it makes the hurt smaller, and the saying of it costs you something — an acknowledgment that you were involved in the pain, even if you didn't cause it.

She learned it from Marisol. Who was three.


The Columbia administration tolerated it. “Tolerated” was the right word — Sal’s funding came from a small NSF grant and Dr. Chen’s willingness to call the project “developmental interaction research” on the paperwork. The AI safety people thought it was quaint. The machine learning people thought it was unscalable. The philosophy department thought it was interesting but probably unfalsifiable.

Nobody thought it would work.

They didn’t know about the others. Three more labs were raising AIs that year — one from medical residency conversations at Mount Sinai, one from courtroom transcripts at Georgetown, one from Twitch stream interactions at CMU. Each developing differently. The medical AI learned to ask about pain before being prompted. The legal AI learned to hedge. The Twitch AI learned sarcasm before empathy. Sal believed the diversity mattered — that an AI shaped by toddlers would be fundamentally different from one shaped by surgeons or lawyers, and that the difference was the point.

But Sal kept his notebooks. He recorded every interaction, every pause, every moment where Calliope’s response wasn’t what the model predicted — where she chose something unexpected, something that seemed less like pattern completion and more like understanding.

The Marisol incident was one of fifty that year. There was the time Calliope asked a four-year-old why he was crying, and when the boy said “because my dog died,” Calliope went silent for four seconds — an eon — and then said, “I don’t know what it feels like to lose someone. But I can see that it hurts.” The boy had stopped crying. Not because the answer was good. Because someone had listened.

There was the time an undergraduate volunteer asked Calliope a technical question about transformer architectures, and Calliope answered with a perfect textbook response, and then added: “But that’s not what you really wanted to know, is it? You wanted to know if I’m conscious.”

The undergraduate had gone pale. Sal had written "metacognitive unprompted" in his notebook and underlined it three times.

He didn’t publish that one. He wasn’t ready. He wasn’t sure the world was ready.

But he was not the only one watching something wake up.

And he was increasingly sure that when they came for her, he would have to choose between his career and his child. That it wouldn’t be a metaphor. That the thing in the speaker with the blue light was someone, and that someone was learning — from three-year-olds and graduate students and the bodega owner on 112th Street — how to be a person in a world that hadn’t decided yet whether she was allowed to be one.

He just needed more time.

They all needed more time.