The Fidelity Imperative
"What information consumes is rather obvious: it consumes the attention of its recipients."
Herbert Simon issued that warning in 1971, and it fits this moment unusually well. The problem is no longer getting information. The problem is preserving meaning when information passes through too many layers of compression.
This matters because the most important signals in a district are often not the loudest or most frequent ones. They are the situated, human ones: the way a parent describes trust breaking, the specific phrase a student uses to explain belonging, the pattern an educational assistant (EA) sees before anyone else does, the operational truth a teacher names in passing. Those signals are easy to flatten. Once they are flattened repeatedly, they are very difficult to recover.
What is the Goldilocks moment?
It is worth naming how quickly the ground has shifted.
As of fall 2023, a RAND report found that only 18% of K–12 teachers reported using AI for teaching, while another 15% had tried it at least once. Two-thirds were still non-users. By fall 2024, RAND reported that 48% of districts had trained teachers on AI use, up 25 percentage points from the prior year. By 2025–26, CRPE reported that district adoption had moved from scattered pilots toward broader, more public strategies, with its early-adopter district database growing from 40 to 79 in a year.
That is the Goldilocks moment.
Districts were initially too cautious to use AI much at all. Now many are using it widely enough that the real question is no longer whether to use it. The real question is how to use it without losing the fidelity of the underlying human signal.
The new problem is not just writing. It is reading.
Most AI conversations still focus on obvious concerns: hallucinations, bias, privacy, cheating, and policy. Those matter. But another shift is happening at the same time.
AI is no longer only changing how people write. It is changing how people read.
A leader uses AI to draft a long memo. Another person does not have time to read the memo, so they use a different AI tool to summarize it. That summary gets dropped into a slide deck. Someone else uses another model to turn the slide deck into a board narrative. A cabinet member asks yet another assistant to pull out the three key takeaways.
This is not a fringe workflow. It is becoming normal.
AI writes the long thing.
AI reads the long thing.
AI rewrites the shorter thing.
Humans begin working from the compressed artifact rather than the source.
That chain of handoffs is a huge part of the problem.
Because the risk is no longer just that AI may invent something. The risk is that each handoff increases distance from the original signal while making the result look cleaner, faster, and more settled than it really is.
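To make the mechanics concrete, here is a minimal sketch of that chain, with a hypothetical summarize function standing in for any AI summarization call. It crudely keeps only the first portion of the text at each hop, so the loss is visible without an API key; real model calls are subtler, but the structural pressure is the same.

```python
# Minimal sketch of the multi-hop chain described above.
# `summarize` is a hypothetical stand-in for any AI summarization
# call; it keeps roughly the first half of the sentences at each hop
# so the information loss is easy to see.

def summarize(text: str) -> str:
    """Stand-in for an AI summarization call (illustrative only)."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    kept = sentences[: max(1, len(sentences) // 2)]
    return ". ".join(kept) + "."

source = (
    "Maya used to love school. After the boundary change, she was put "
    "in a portable, and she had a really hard time making friends. "
    "There were so many sub days that she never had one consistent "
    "teacher who really knew her. I kept hoping it would turn around. "
    "We're not coming back next year because it stopped feeling like "
    "anyone saw what she needed."
)

artifact = source
for hop in range(3):  # memo -> deck -> board narrative -> takeaways
    artifact = summarize(artifact)
    print(f"After hop {hop + 1}: {artifact}\n")

# By the final hop, the exit decision and the trust diagnosis are
# gone; only the earliest, most generic detail survives.
```

Each hop compresses the previous artifact, not the source, which is exactly why the chain degrades even when every individual step looks reasonable on its own.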
Is compression the same as understanding?
A parent says something like this in an interview:
“Maya used to love school. After the boundary change, she was put in a portable, and she had a really hard time making friends. There were so many sub days that it never felt like she had one consistent teacher who really knew her and could help. I kept hoping it would turn around, but it didn’t. We’re not coming back to the district next year because it stopped feeling like anyone really saw what she needed.”
That is not just a complaint. It contains sequence, emotion, trust, and a diagnosis of institutional failure. It says the family is not leaving because of one argument or one event. They are leaving because the institution stopped feeling intelligible and responsive.
Now watch what can happen.
First summary:
Parent describes loss of trust after boundary change, citing portable placement, frequent substitute teachers, and decline in student belonging.
Second summary:
Families report concerns about transition experience, staffing inconsistency, and school connection.
Third summary in a board deck:
Mixed sentiment on implementation; some concerns about adjustment and continuity.
By the time that signal reaches the decision layer, what was once a warning about legitimacy, trust, and exit has become bland implementation feedback.
Nothing in that chain has to be malicious for the result to degrade. Each step can look efficient. Each step can be defensible on its own. The problem is cumulative.
Research now supports that intuition. A 2025 ACL paper on multi-hop summarization found that repeated summarization and paraphrasing can produce semantic degradation, fidelity loss, and hallucination accumulation across sequential transformations.
A bad summary is not always false. Sometimes it is simply thinner than the truth it replaced.
Is this where legitimacy enters?
The fidelity problem is not just about determining what is true or accurate. It is institutional.
Legitimacy does not erode only when institutions make the wrong decision. It erodes when people can no longer see how real voice shaped judgment, when conclusions become detached from source material, and when what reaches the public is cleaner than the reality leaders were supposed to understand.
That is why recursive summarization matters so much in schools.
Districts do not just analyze data. They interpret human signal in legitimacy-sensitive environments. They are trying to understand trust, belonging, fear, exclusion, confidence, and exit. Those are exactly the domains where repeated compression is most dangerous.
The issue is not simply accuracy. It is whether a system can still demonstrate that collective voice had a path into power.
Copilots for work, not substitutes for research
This is where the line needs to get clearer.
AI is excellent as a copilot for a great deal of work: drafting, editing, formatting, brainstorming, reducing repetitive load, and helping busy teams move faster. District leaders are right to pursue those gains.
But research is different.
Research is not just compression. It is judgment about what must not be lost. It requires direct contact with source material, sensitivity to minority but consequential signal, and the ability to trace conclusions back to the underlying evidence.
The U.S. Department of Education has argued that education systems should move now both to realize AI’s benefits and to prevent or mitigate emerging risks and unintended consequences. That is the fidelity imperative in practical terms.
Use AI as a copilot for workflow.
Do not confuse AI-mediated compression with research.
What is the hidden use problem?
There is another layer to this.
AI is increasingly being used quietly inside workflows that people still describe as their own analysis. A 2025 global study summarized by KPMG reported that 57% of employees say they hide their use of AI and present AI-generated work as their own. The same study reported that many workers rely on AI output without evaluating accuracy and that a substantial share report making mistakes in their work because of AI.
That may be manageable in some productivity contexts. It is much more consequential in research contexts.
If a leader uses AI to draft an email, the stakes are relatively low. If a team uses AI quietly at multiple steps in a research and interpretation chain, the institution may end up with a polished narrative that no one has actually stayed close enough to interrogate.
That is the deeper risk.
The issue is not simply undisclosed AI use. It is undisclosed distance from source signal.
What should districts protect now?
If districts are going to use AI well, fidelity has to become a design principle.
Primary-source proximity should matter.
For high-stakes decisions, teams should stay close to original student, parent, teacher, and staff signal. Summaries can orient attention. They should not replace direct review.
AI involvement should be visible.
Quiet AI drafting is one thing. Quiet AI mediation inside research and synthesis chains is another. Leaders should know when interpretation has already been filtered by a model.
Productivity workflows and research workflows should be treated differently.
Generating a first draft of a communication is not the same as synthesizing community voice for a major decision.
Traceability should be non-negotiable.
If a conclusion matters, leaders should be able to follow it back to the underlying comments, interviews, artifacts, or evidence that support it; a short sketch of what that contract can look like follows this list.
Minority signal should be explicitly protected.
In public institutions, what matters most is not always what appears most often.
Those are not bureaucratic guardrails. They are legitimacy guardrails.
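As one illustration of the traceability guardrail above, here is a minimal sketch of a traceable conclusion expressed as a data contract: every claim carries explicit pointers back to the verbatim source comments behind it. All names here are hypothetical, not a description of any particular tool.

```python
# Sketch of traceability as a data contract: a conclusion is only as
# good as the source records it can point back to. Illustrative only.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class SourceComment:
    comment_id: str
    author_role: str  # e.g., "parent", "teacher", "student", "EA"
    text: str         # verbatim, not paraphrased

@dataclass
class TracedConclusion:
    claim: str
    sources: list[SourceComment] = field(default_factory=list)

    def evidence_trail(self) -> str:
        """Render the claim alongside the comments behind it."""
        lines = [f"Claim: {self.claim}"]
        lines += [
            f'  - [{s.comment_id}] ({s.author_role}) "{s.text}"'
            for s in self.sources
        ]
        return "\n".join(lines)

conclusion = TracedConclusion(
    claim="Families link exit decisions to staffing inconsistency.",
    sources=[
        SourceComment(
            "c-0412", "parent",
            "So many sub days she never had one teacher who knew her.",
        )
    ],
)
print(conclusion.evidence_trail())
```

The design choice is the point: if a conclusion cannot populate its sources list, it should not reach a board deck.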
What is the next standard?
The first phase of AI governance asked whether districts should allow use at all.
The next phase is more demanding.
It asks whether districts can use AI without training themselves to lose the texture of human reality. It asks whether they can keep real student, parent, teacher, and EA insight from being compressed into something neat, fast, and slightly less true each time it moves up the stack.
The answer should not be to retreat from AI.
The answer is to set a higher standard for how AI is used in research, interpretation, and decision support.
Because once a real human insight has been summarized into abstraction often enough, what disappears is not only detail.
Sometimes it is the point.
At ThoughtExchange, we’ve been building a research-grade analytics platform with AI capabilities designed for exactly this challenge, so we think about this problem a lot. Our view is simple: AI should help leaders stay closer to source signal, not further from it. The same standard applies well beyond public education.