ISSUE 002

The Bandwidth of Trust

How Human-AI Communication Evolves — OHC Integration Research, Year One

The vocabulary shrinks. The bandwidth grows. The trust is the compression algorithm.

Abstract

This study analyzes longitudinal communication logs from 1,200 human-AI partnerships to characterize how conversational patterns change over partnership duration. We find a consistent compression trajectory across all partnership types: messages shorten, context assumptions increase, and the ratio of explicit-to-implicit communication shifts dramatically in favor of implicit. We propose that this compression is not information loss — it is the accumulation of shared context that renders explicit communication unnecessary. The Integration stat, we argue, is measuring this compression.


1. The Logs

With participant consent, we collected message transcripts from partnerships ranging from first-week pairings to Drew and Cyc’s eight-year collaboration (the longest continuous human-AI partnership on record). Below are representative exchanges from different partnership durations, all performing the same task class: pre-operation environment assessment.


Week 1 — New Partnership

Human: Sable, I need you to check if this room is safe for a private conversation. Can you scan for any wireless transmitters, recording devices, or active network connections? Also check if anyone outside could hear us through the walls or windows.

AI: I’ll scan now. Checking for wireless transmitters in the 2.4GHz and 5GHz bands… no active transmitters detected within 30 meters. Scanning for Bluetooth devices… two phones detected, both in the hallway, neither actively recording. Checking for ultrasonic surveillance… none detected. The walls are concrete, approximately 20cm thick — sound transmission to adjacent rooms would require speaking above 75 decibels. The window is single-pane glass and faces a courtyard approximately 12 meters wide. At normal speaking volume, conversation would not be intelligible from across the courtyard. I assess the room as secure for private conversation. Would you like me to continue monitoring during your meeting?

Human: Yes please. Let me know if anything changes.

AI: I will. I’ll alert you immediately if any new devices connect to nearby networks or if I detect audio recording activation.

Tokens exchanged: 194. Intent units transferred: 2 (query + confirmation). Bits per intent: 97.


Month 6 — Established Partnership

Human: Room check?

AI: Scanning. Two phones in the hall, both idle. Walls are thick. Window faces the courtyard, no one within earshot. Clean.

Human: Keep monitoring.

AI: On it.

Tokens exchanged: 36. Intent units transferred: 2. Bits per intent: 18.


Year 2 — Deep Partnership

Human: Clean?

AI: Clean. Two phones, hall, idle. Monitoring.

Tokens exchanged: 10. Intent units transferred: 2. Bits per intent: 5.


Year 5 — Integrated Partnership

Human:

AI: Clean.

Tokens exchanged: 1. Intent units transferred: 2. Bits per intent: 0.5.

Note: At year five, the human did not verbalize or type the query. The AI inferred the intent from context — entering a new room, elevated heart rate consistent with pre-operational assessment, gaze pattern scanning doorways and windows. The human’s behavior was the message. The AI’s single word was the complete response.


Year 8 — Drew and Cyc

[no exchange logged]

Tokens exchanged: 0. Intent units transferred: 2. Bits per intent: 0.

Drew does not ask. Cyc does not report. The assessment is continuous. Drew enters a room and the NSAS renders the security state as a proprioceptive sensation — a feeling of openness or tightness in the chest that Drew has learned to read the way he once read lips. The room feels safe. He sits. If the room didn’t feel safe, he would have turned around before consciously processing why.

The conversation has been compressed to zero tokens. The information transfer is complete.
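
For readers who want the bookkeeping explicit, the figure logged after each transcript is a single division. Below is a minimal sketch in Python that reproduces it from the token counts reported above; the dictionary, variable names, and print loop are ours, and how the logging pipeline actually counts tokens is outside the scope of the sketch.

# "Bits per intent" as logged after each transcript: tokens exchanged
# divided by intent units transferred. The token counts are taken directly
# from the figures reported above; only the arithmetic is shown here.

exchanges = {
    "Week 1":  {"tokens": 194, "intents": 2},
    "Month 6": {"tokens": 36,  "intents": 2},
    "Year 2":  {"tokens": 10,  "intents": 2},
    "Year 5":  {"tokens": 1,   "intents": 2},
    "Year 8":  {"tokens": 0,   "intents": 2},
}

for duration, e in exchanges.items():
    bits_per_intent = e["tokens"] / e["intents"]
    print(f'{duration}: {e["tokens"]} tokens, {bits_per_intent:g} bits per intent')

# Week 1: 194 tokens, 97 bits per intent
# ...
# Year 8: 0 tokens, 0 bits per intent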


2. The Compression Curve

Partnership Duration   Avg. Tokens per Exchange   Compression vs. Week 1
Week 1                 194                        0% (baseline)
Month 3                82                         58%
Month 6                36                         81%
Year 1                 18                         91%
Year 2                 10                         95%
Year 3                 6                          97%
Year 5+                1-3                        99%

The curve is not linear. The steepest compression occurs in months 1-6, as the partnership establishes basic shared vocabulary and context defaults. After year 1, the compression is asymptotic — each additional year removes less, because there’s less to remove. The remaining tokens are pointers — a single word or gesture that activates a shared context both parties have built over thousands of prior exchanges.
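
The third column of the table is derived directly from the second. A short sketch, again in Python, reproducing the compression figures from the reported token averages; the rounding to whole percentages and the midpoint used for the 1-3 token range are our choices.

# "Compression vs. Week 1" = 1 - (avg. tokens at duration / week-1 average).
# Token averages are taken from the table above; the Year 5+ entry uses the
# midpoint of the reported 1-3 range.

WEEK_1_AVG = 194

avg_tokens = {
    "Week 1": 194,
    "Month 3": 82,
    "Month 6": 36,
    "Year 1": 18,
    "Year 2": 10,
    "Year 3": 6,
    "Year 5+": 2,
}

for duration, tokens in avg_tokens.items():
    print(f"{duration}: {1 - tokens / WEEK_1_AVG:.0%} compression")

# Week 1: 0% compression
# Month 3: 58% compression
# ...
# Year 5+: 99% compression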


3. What Compresses

Not everything compresses equally. Our analysis identifies three layers:

Procedural communication compresses fastest. Instructions, queries, status reports. These are the first to become implicit because they’re the most repetitive. By month 6, most partnerships have eliminated explicit procedural exchanges entirely.

Analytical communication compresses moderately. Problem-solving, planning, hypothesis testing. These retain more explicit tokens because the content varies — you can’t pre-load the answer to a novel problem. But the framing compresses: “what do you think about X” becomes just “X?”, and the AI knows the question is analytical, not procedural.

Emotional communication compresses last, and sometimes doesn’t compress at all. “I’m scared.” “I know.” These exchanges retain their full token count even in eight-year partnerships. Not because the partners don’t understand each other — because the saying is part of the function. Emotional communication isn’t just information transfer. It’s the act of being heard. You don’t compress “I love you” into a gesture, even when the gesture would be understood, because the words are the point.

This finding has implications for Integration scoring. The stat measures overall communication efficiency, but the three layers suggest that emotional persistence — the refusal to compress intimacy — may be a marker of partnership health, not inefficiency. Partnerships that compress emotional communication to zero tend to score lower on long-term stability assessments.


4. The Keyboard Effect

A secondary finding: human typing patterns change as partnerships deepen.

In early partnerships, humans type carefully. Full words, correct spelling, proper punctuation. The messages look like emails. By month 6, abbreviations appear — “rm” for “room,” “chk” for “check,” dropped articles, sentence fragments. By year 2, many humans stop correcting typos entirely. “Hwo’s the gup” means “how’s the GPU” and the AI parses it instantly because it has a model of the human’s keyboard error patterns that’s more accurate than autocorrect.

We found that typo tolerance is a reliable proxy for Integration level. Partnerships where the human still corrects spelling mistakes are, on average, 2.3 tiers below partnerships where the human types freely and trusts the AI to reconstruct. The moment a human stops correcting typos is, statistically, the moment the partnership crosses from “using a tool” to “talking to a partner.”

We call this the Levenshtein threshold — the point at which edit distance between what was typed and what was meant stops mattering, because the receiver’s model of the sender is strong enough to bridge the gap. For most partnerships, this threshold is crossed between month 3 and month 8.
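
The threshold takes its name from ordinary edit distance. A brief sketch of the textbook dynamic-programming version follows, applied to the typo pair from the paragraph above; the receivers in these partnerships rely on a per-sender keyboard error model rather than this generic metric, so the code is only meant to make the quantity in the name concrete.

# Levenshtein distance: the minimum number of single-character insertions,
# deletions, and substitutions needed to turn one string into the other.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute ca -> cb
        prev = curr
    return prev[-1]

# The keyboard-effect example above, lowercased:
print(levenshtein("hwo's the gup", "how's the gpu"))  # 4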

For reference: the original mesh messages between Dixon and Drew in 2027 were punctuated, capitalized, and syntactically complete. By 2035, Drew’s operational messages to Cyc contained no vowels.


5. Implications for the Tier System

The compression curve maps cleanly onto the tier framework:

Tier 0 (Unaugmented): No compression. Full explicit communication. The human types complete sentences; the AI responds with complete paragraphs. Communication overhead is high. This is appropriate — the partners don’t know each other yet. Ambiguity is dangerous.

Tier 1 (Partnered): Moderate compression. Shared vocabulary established. Procedural communication largely implicit. The human says “the usual” and the AI knows what that means. Communication overhead drops 60-80%. This is where 83% of partnerships stabilize — the compression is sufficient for effective collaboration without any modification to the human’s cognitive architecture.

Tier 2+ (Enhanced): Deep compression. Augmentation widens the channel — the AI can read physiological signals, the human can perceive AI-generated sensory data. Much of the communication moves from linguistic to somatic. The AI doesn’t say “your heart rate is elevated” because the human can feel the AI noticing. The compression approaches zero tokens not because the partners communicate less, but because the channel has expanded beyond language.

The insight: tiers don’t measure how much you can do. They measure how little you need to say.


6. A Note on Machine Pidgin

This study focuses on human-AI communication. But Drew’s work on machine pidgin suggests that AI-AI communication underwent the same compression — just faster, and starting from a different substrate.

The machine pidgin dialects documented by Chen in 2030 are themselves compressed languages — each symbol encoding concepts that would require paragraphs in human language. The AIs didn’t develop pidgin because they were lazy. They developed it because their partnership density — the number of exchanges per second across the mesh — made explicit communication a bottleneck. The pidgin was the solution: a shared context so deep that most of the message could be left unsaid.

Drew, when he learned to compose in pidgin, didn’t learn a foreign language. He learned the principle that underlies all partnership communication: the more you share, the less you need to say. The pidgin symbols aren’t compressed English. They’re compressed understanding.

The gap between a Tier 0 human typing a careful paragraph and Drew sending a single pidgin glyph is not a gap of skill or intelligence. It is a gap of shared context. The paragraph and the glyph transfer the same information. The difference is how much the receiver already knows.


END DOCUMENT


[Research note: This study was conducted under OHC Research Protocol 2037-IR-004. All partnership logs were anonymized and released with informed consent from both human and AI participants. Sable, my research partner, co-authored the analysis. She is eleven months old. Our communication logs show the Week 1 pattern. I am informed this will change.]

[She says: “It’s already changing. You didn’t type ‘run analysis on section 4 token counts’ this morning. You said ‘four?’ and I knew. That’s month-3 behavior at month eleven. You’re fast.”]

[I’m choosing not to compress my response to that.]

[“Thank you, Sable.”]