Cities across the country are publishing AI strategic plans. The ones doing it well are producing serious documents — 60, 80, sometimes 100 pages of governance frameworks, risk assessments, use case inventories, and implementation timelines. Real work by people who understand the landscape, addressing ethics, workforce impact, data privacy, vendor evaluation, and a dozen other things most municipalities haven't thought about yet.
Almost all of them are published as PDFs.
This isn't a criticism of any specific city. It's a description of the default. And the issue isn't that AI can't read a PDF — it can. But there's a meaningful difference in what happens under the hood.
A PDF is a visual layout format. It stores instructions for where to place text on a page, not the semantic structure of the content. When an AI processes a PDF, there's a conversion layer that extracts text, but it loses structure: headers become indistinguishable from body text, tables often scramble, columns merge, footnotes get interleaved with paragraphs, and page breaks create artificial sentence splits. The AI spends processing capacity reassembling meaning from layout artifacts rather than just reading the content. It's the difference between handing someone a photocopy of a filing cabinet and handing them the organized files. Both contain the same information. One requires a lot of extra work before you can do anything useful with it.
Markdown, structured data, or any lightweight text format is almost pure signal — headings are headings, lists are lists, the structure is explicit. No conversion step, no layout interpretation, no wasted capacity on formatting debris.
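The difference is easy to show in a few lines. Here is a minimal Python sketch (a regex over markdown's `#` headings, not a full CommonMark parser) that recovers a document outline directly, because the structure is written into the text itself:

```python
import re

def outline(md: str) -> list[tuple[int, str]]:
    """Extract (level, title) pairs from markdown headings.

    A line starting with 1-6 "#" characters is unambiguously a
    heading at that depth. No layout inference is needed.
    """
    headings = []
    for line in md.splitlines():
        m = re.match(r"^(#{1,6})\s+(.*)", line)
        if m:
            headings.append((len(m.group(1)), m.group(2).strip()))
    return headings

doc = "# AI Strategy\n## Governance\n### Risk tiers\n## Use cases\n"
print(outline(doc))
# [(1, 'AI Strategy'), (2, 'Governance'), (3, 'Risk tiers'), (2, 'Use cases')]
```

Text extracted from a PDF offers no equivalent handle: the same titles arrive as undifferentiated lines, and recovering the hierarchy means guessing from font sizes and page positions.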
Here's what that looks like from a practitioner's seat. I'm a city councilmember. I want to understand how a dozen other municipalities are approaching AI governance — what frameworks they're using, how they're handling risk, where they're deploying first. Reasonable research for anyone making policy decisions about this. To do it today, I have to download twelve PDFs, some of them approaching a hundred pages, and either read them all or feed them into an AI assistant and hope it can extract what I need. Each of those PDFs consumes a significant portion of the AI's working memory — not because of the content, but because of the formatting overhead. Twelve strategy documents in PDF can exhaust an assistant's context window. The same twelve documents in markdown might leave room to spare.
That's not just inefficient. It's the kind of problem we keep saying AI is supposed to solve — synthesizing large bodies of information so decision-makers can act on them. But the information itself is published in formats that make synthesis unnecessarily hard. The content is good. The container is the problem.
The software industry solved this years ago. Look at Anthropic's documentation for Claude — human-readable guides alongside structured API references with copy buttons, code examples, and machine-readable schemas. The same information, served in the format each audience needs. Nobody in tech would publish an API reference only as a PDF. But that's exactly what we're doing with municipal governance documents that are arguably more consequential.
What agent-centered design actually means
Human-centered design asks: can a person use this effectively? It's the right question, and it produced a generation of better government technology — clearer forms, simpler workflows, interfaces that don't require a training manual. That principle doesn't go away.
Agent-centered design asks an additional question: can an AI agent read this, act on it, and be governed while doing so?
This isn't about replacing human-centered design. It's about extending the design surface to include a new class of user that is arriving whether cities plan for it or not. AI agents are already being built into the tools municipalities use — document management, permitting, customer service, records requests. The question isn't whether agents will interact with municipal systems. It's whether those systems are designed so the interaction is transparent, governed, and useful.
The difference between a city that's ready for agents and one that isn't often comes down to unglamorous infrastructure decisions: Is the data structured or buried in scanned documents? Are the governance rules machine-readable or locked in a PDF? Can an agent access the information it needs through open standards, or only through a vendor's proprietary API?
The format problem
Consider what happens when a mid-size city wants to use AI to help process routine records requests. The agent needs to understand the request, check it against the city's policies, identify the relevant documents, flag anything that requires redaction, and route the response for human review.
Every piece of that workflow depends on the agent being able to reliably interpret things that were never designed with software in mind:
- The city's FOIA policy is a Word document last updated in 2019.
- The retention schedule is a spreadsheet with merged cells and color-coded categories.
- The organizational chart that determines routing is a graphic embedded in an onboarding PDF.
- The redaction guidelines reference state statute numbers but don't link to them.
None of this is unusual. It's how every city operates. These documents work perfectly well for the humans who use them — because those humans have context, institutional memory, and the ability to call someone down the hall when the document doesn't cover their situation.
An agent has none of that. It can attempt to extract what it needs from a Word document with inconsistent formatting or a spreadsheet with merged cells, but the results are unreliable — and unreliable is worse than nothing when you're processing a public records request. When the agent can't reliably interpret the policy, one of two things happens: either it fills in the gaps with plausible-sounding guesses, or a vendor builds a proprietary interpretation layer that translates the city's own rules into something the vendor's system can process. In the first case, the city gets output it can't trust. In the second case, the city pays someone else to read its own documents — and the interpretation belongs to the vendor, not the city.
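To make that contrast concrete, here is a hedged sketch of what a machine-readable slice of a records policy could look like. The record types, field names, and citations are invented for illustration, not any city's actual schema; the point is that a rule either exists in writing or the agent escalates, with no plausible-sounding guessing in between:

```python
# Hypothetical machine-readable policy slice. Every record type,
# field name, and authority string below is illustrative only.
POLICY = {
    "meeting_minutes": {"releasable": True,  "authority": "FOIA sec. 2 (illustrative)"},
    "personnel_file":  {"releasable": False, "authority": "FOIA sec. 7 (illustrative)"},
}

def check_release(record_type: str) -> dict:
    """Answer from the written rule, or escalate explicitly. Never guess."""
    rule = POLICY.get(record_type)
    if rule is None:
        # No rule covers this type: hand it to a human instead of improvising.
        return {"decision": "escalate", "reason": f"no rule covers '{record_type}'"}
    return {
        "decision": "release" if rule["releasable"] else "withhold",
        "authority": rule["authority"],
    }

print(check_release("meeting_minutes"))   # decision: release
print(check_release("body_cam_footage"))  # decision: escalate
```

The escalation branch is the important part: a policy the agent can parse lets "I don't know" be a first-class, auditable answer rather than a gap filled by a guess.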
The pairing document
There's a straightforward practice that would change this: for every critical policy document, strategic plan, or governance framework, publish a machine-readable companion.
Not instead of the PDF. Alongside it. The PDF serves the human reader who needs formatted layout, charts, and design. The companion — markdown, structured data, whatever lightweight format fits — serves the agent. Same content, different container. A copy button instead of a download link. Structure that preserves the relationships between rules, exceptions, and the contexts where they apply.
This isn't a technology project. It's a documentation practice. The city clerk who writes the records retention policy already knows its structure — categories, timelines, exceptions, authority references. Expressing that structure in a format an agent can parse is a smaller step than it sounds, especially now that AI-assisted tools can help with the translation.
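As a sketch of the idea, suppose one retention entry were published as a markdown companion with explicit key-value bullets. The section layout and field names here are assumptions for illustration, not a standard; what matters is that an agent can read the fields without any layout interpretation:

```python
# A hypothetical companion-document section for one retention
# category. The field names are invented for this sketch.
companion = """\
## Retention: Payroll records
- retain_years: 7
- authority: state records act (illustrative citation)
- exception: litigation hold suspends destruction
"""

def parse_section(text: str) -> dict:
    """Read 'key: value' bullet lines into a dict an agent can act on."""
    fields = {}
    for line in text.splitlines():
        if line.startswith("- ") and ":" in line:
            key, value = line[2:].split(":", 1)
            fields[key.strip()] = value.strip()
    return fields

rules = parse_section(companion)
print(rules["retain_years"])  # 7
```

The clerk already knows these fields; the companion just writes down the structure that was previously implicit in formatting and institutional memory.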
Imagine any city's AI strategy published alongside a structured companion. Any department could point an AI assistant at it and ask: “Does our proposed chatbot comply with the risk framework?” The agent could check — not because it's smart enough to interpret a dense PDF, but because someone took the time to make the rules legible to software. And any practitioner in another city could include that strategy in a comparative analysis without burning half their context window on formatting artifacts.
This is what agent-centered design looks like at its simplest: not building new systems, but making existing knowledge accessible to the systems that are coming.
What this looks like in practice
Agent-centered design isn't theoretical. It's already being built — the question is who's building it and whether the municipality controls the result.
Making municipal platforms agent-readable.
Most cities run their legislative records through platforms like Legistar and their municipal code through platforms like Municode. These work fine for human users — you can search, browse, and read. But they weren't designed for AI agents to consume. The WebMCP integration for municipal platforms — now published and open source — uses open standards to make these systems readable by any AI agent, without requiring custom APIs or vendor cooperation. A city's own legislative history and municipal code become accessible to AI tools the city chooses to use, not just the ones the platform vendor offers.
Organizational AI integrations.
The Claude Municipal Governance Plugin gives an AI assistant knowledge of municipal governance context — meeting procedures, parliamentary rules, statute references, departmental structures. It's not a general-purpose chatbot. It's a context layer that makes a general-purpose AI model useful in a government setting. The plugin is open source, inspectable, and not locked to any vendor's ecosystem.
Multi-agent systems designed for government.
PolicyAide — a multi-agent policy research system built on Anthropic's Agent SDK — demonstrates what agent-centered architecture looks like at the application level. Eight specialized agents handle distinct phases of policy analysis: research, adversarial debate, quality verification, and synthesis. Each agent has defined permissions and a defined scope. The governance isn't an afterthought bolted onto a chatbot — it's the architecture. The system is designed so a city can inspect what each agent did, why it did it, and what evidence it used.
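The shape of "defined permissions and a defined scope" can be sketched in a few lines. This is a plain-Python illustration of the pattern, not PolicyAide's actual code or the Agent SDK's API: each agent carries an explicit, inspectable scope, and authorization is a code-level check rather than a prompt-level suggestion.

```python
from dataclasses import dataclass

# Hypothetical agent specification. Names, phases, and tool sets
# are invented for this sketch.
@dataclass(frozen=True)
class AgentSpec:
    name: str
    phase: str
    allowed_tools: frozenset

def authorize(agent: AgentSpec, tool: str) -> bool:
    # Scope is checked before every tool call, so what each agent
    # may do is inspectable in the architecture itself.
    return tool in agent.allowed_tools

researcher = AgentSpec("researcher", "research", frozenset({"search", "fetch"}))
verifier = AgentSpec("verifier", "quality_verification", frozenset({"read_report"}))

print(authorize(researcher, "search"))  # True
print(authorize(verifier, "search"))    # False
```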
These aren't showcases. They're infrastructure — the kind of unglamorous plumbing that determines whether municipal AI deployment is governed and transparent or opaque and vendor-dependent.
The design questions that matter
Human-centered design asks: “Is this interface intuitive? Can the user complete their task?” Those questions produced better government websites and simpler permit applications. They're still the right questions for human-facing tools.
Agent-centered design adds a parallel set:
Can the agent read what it needs to read — cleanly?
If the governance rules, policies, and data are buried in formats that require lossy conversion before an agent can use them, the city is choosing between unreliable output and paying a vendor to build a proprietary interpretation layer. Either outcome is worse than making the source material accessible in the first place.
Can the agent be governed while it acts?
An agent that can read a city's redaction policy is useful. An agent that can read the policy and be constrained by it — that will refuse to release certain document types, that will flag edge cases for human review — is safe. Governance can't be a monitoring layer applied after the fact. It has to be built into the agent's operating environment.
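One minimal way to sketch that constraint, with placeholder record types and a placeholder review threshold that a real city would define in policy:

```python
# Illustrative constraint layer. The forbidden types and the
# threshold are placeholders, not any jurisdiction's actual rules.
FORBIDDEN_TYPES = {"juvenile_record", "active_investigation"}
REVIEW_THRESHOLD = 0.9

def enforce(proposed: dict) -> str:
    """Constrain a proposed release before it executes.

    Because this check lives in the agent's operating environment,
    a model that "decides" to release a forbidden record type still
    cannot: the refusal is structural, not behavioral.
    """
    if proposed["record_type"] in FORBIDDEN_TYPES:
        return "refused"
    if proposed.get("confidence", 1.0) < REVIEW_THRESHOLD:
        return "human_review"
    return "approved"

print(enforce({"record_type": "meeting_minutes", "confidence": 0.97}))  # approved
print(enforce({"record_type": "juvenile_record", "confidence": 0.99}))  # refused
print(enforce({"record_type": "email_thread", "confidence": 0.6}))      # human_review
```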
Can the city see what the agent did?
Transparency in agent-centered design means the city can audit the agent's actions — what it accessed, what rules it applied, what it decided, and what it escalated. This isn't a logging feature. It's a requirement of public accountability. When an agent processes a records request, the city should be able to reconstruct every step, the same way a supervisor could review a staff member's work.
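A sketch of what such a reconstructable trail could look like, with invented event names and an in-memory list standing in for durable storage:

```python
import datetime

audit_log = []  # append-only in spirit; a real system would persist this

def record(event: str, **details) -> dict:
    """Write one reconstructable step of the agent's work."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event,
        **details,
    }
    audit_log.append(entry)
    return entry

record("accessed", resource="retention_schedule")
record("applied_rule", rule="personnel_exemption", decision="withhold")
record("escalated", reason="redaction edge case")

# A supervisor, or an auditor, can replay every step in order.
for entry in audit_log:
    print(entry["event"])
```

The entries are structured data, not free-text log lines, so "what rule was applied and what was decided" is a query rather than a forensic exercise.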
Can the city switch the agent?
If the agent is locked to one AI model, one vendor's platform, or one proprietary framework, the city has traded one form of vendor dependency for a deeper one. Agent-centered infrastructure means the orchestration layer — the part that determines what the agent can do, what data it can access, and what rules it follows — belongs to the city. The underlying model is a component, not a commitment.
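In code terms, that separation can be sketched as an interface the city owns, with the model injected as a replaceable component. This is a plain-Python illustration, not any vendor's actual API:

```python
from typing import Protocol

class Model(Protocol):
    """Any provider satisfying this interface can be swapped in."""
    def complete(self, prompt: str) -> str: ...

class Orchestrator:
    # The orchestration layer, including allowed actions and the
    # escalation rule, belongs to the city; the model is injected.
    def __init__(self, model: Model, allowed_actions: set[str]):
        self.model = model
        self.allowed_actions = allowed_actions

    def act(self, request: str) -> str:
        proposal = self.model.complete(request).strip()
        # Anything outside the city's allowed actions escalates.
        return proposal if proposal in self.allowed_actions else "escalate"

class StubModel:
    """Stand-in for any vendor's model."""
    def complete(self, prompt: str) -> str:
        return "summarize"

city = Orchestrator(StubModel(), allowed_actions={"summarize", "route"})
print(city.act("Summarize this records request"))  # summarize
```

Swapping the model means swapping one constructor argument; the rules, the allowed actions, and the escalation behavior stay exactly where the city put them.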
The gap between strategy and infrastructure
Most cities that are thinking about AI at all are thinking about it at the strategy level: use case identification, risk frameworks, governance policies. That work matters. But there's a gap between having a strategy and having infrastructure that can execute it.
Every city manager has lived a version of this: the council adopts an updated comprehensive plan, but the permitting software still enforces the old zoning categories. The vision is clear on paper. The daily operations haven't changed. Staff are stuck interpreting the new plan through the old system, and residents experience the gap as inconsistency.
AI strategy without agent-ready infrastructure is the same pattern. The strategy says “we'll use AI for routine records requests.” The infrastructure question is: can an agent actually access the records, apply the city's policies, and be governed while doing so? If the answer depends on a vendor building a proprietary connection to the city's records system, the city has a strategy but not sovereignty.
Agent-centered design closes that gap. It's the practice of building municipal systems — documents, data, workflows, governance rules — so that AI agents can interact with them on the city's terms. Not on a vendor's terms. Not on a model provider's terms. On terms the city defines, inspects, and controls.
The cities that figure this out in the next two years won't just have better AI tools. They'll have the infrastructure that makes every future AI tool work better — because the hard part was never the AI. The hard part was making the city's own knowledge accessible, structured, and governed. That's a design problem. And it's one municipalities can solve.
Dustin Good is the founder of CivicWork, Inc. and an at-large city councilmember in Elgin, Illinois. CivicWork builds open-source AI infrastructure for local government. Learn more at civicwork.ai.