Prompting GPT-5 in Local Government

Length, Voice, and the Router Mindset

By Micah Gaudet
AI Training

When GPT-5 rolled out, we expected brilliance. We got something different. But I think it's actually better.

The short version is simple. If GPT-5 feels slippery, it's because we're treating it like an oracle instead of what it is for public work: a router. Our task is to shape the routes. That means naming how long we want it to think before drafting, telling it how much to say and in what voice, and giving it a rubric that turns vague output into accountable work product. Do those three things with the discipline you bring to procurement and records, and GPT-5 starts serving the mission instead of confusing it.

When you feel like you lost a friend

The arrival of GPT-5 has been anything but smooth. Headlines describe buggy rollouts, sudden changes, and user frustration. For local government professionals, this turbulence is more than an inconvenience—it's a reminder that our trust in technology must be earned through reliability, not promised through hype. Yet despite the messiness, GPT-5 can still serve as a valuable ally. The key lies in how we use it. If approached with discipline, GPT-5 can support staff, strengthen decision-making, and improve resident services.

The discipline rests on three practices: setting thinking length, controlling verbosity, and using rubrics. If we embrace these habits, GPT-5 becomes less of a guessing game and more of a dependable tool. Think of it less as a sage who knows all, and more as a router that directs information through the lines we specify.

The Router, Not the Sage

Imagine preparing a budget memo. You paste revenue and expenditure tables into GPT-5 and ask for "a summary." The model obliges, returning a sprawling essay full of plausible but meandering commentary—almost none of it usable. The mistake wasn't the tool; it was the framing. A router works only if you set the lines. When the signal is vague, the routes scatter.

Now picture yourself giving precise instructions instead: "Reason step by step through the revenue changes and expenditure assumptions. Then, in three short paragraphs, draft a memo in council packet style." This time, the model produces a clear, relevant draft. The difference lies not in GPT-5's intelligence but in our clarity about the path we want it to follow.

For government work, this matters deeply. Legitimacy is built on transparent pathways. If GPT-5 is treated like an oracle, we risk unpredictable answers. If it's treated like a router, we can define the routes and audit the results.

Thinking Length: Name the Chain Before You Draft

In civic work, some tasks require a short hop and some a long chain. A council recap might need three crisp paragraphs; a fee study recap might need staged reasoning before a single sentence is drafted. GPT-5 can do either, but it will not guess correctly. Tell it. If you ask for a recommendation, require a separate "reasoning pass" that walks through constraints, trade-offs, and citations to the materials you provided; only then permit a final narrative. If you want speed, ask for the draft directly, but name the constraint: "Two sentences, plain language, no background context."
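For teams that script their prompts, the two modes above can be captured in a few lines of Python. This is a minimal sketch; the `build_prompt` helper and the exact wording of the instructions are my own illustrative choices, not an official pattern:

```python
# Sketch: name the thinking length up front, before any drafting happens.
# The build_prompt helper and its phrasing are illustrative assumptions.

def build_prompt(task: str, long_chain: bool) -> str:
    if long_chain:
        # Staged reasoning: require a separate reasoning pass first.
        return (
            "First, in a section labeled 'Reasoning pass', walk through the "
            "constraints, trade-offs, and citations to the materials "
            "provided. Only then write the final narrative.\n\n"
            f"Task: {task}"
        )
    # Short hop: draft directly, with the constraint named.
    return (
        "Draft directly. Two sentences, plain language, no background "
        f"context.\n\nTask: {task}"
    )

print(build_prompt("Recap tonight's council meeting.", long_chain=False))
```

Either branch works; the point is that the chain length is decided by you, in writing, before the model sees the task.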

This serves our context because it rescues staff time from meandering drafts and exposes the thinking you need for the record. When the chain length is named up front, supervisors can skim the reasoning, accept or reject it, and move on.

Verbosity: Restraint as a Governance Control

Left alone, GPT-5 will be polite, expansive, and vague. That style is dangerous in public work because it hides the single caveat that matters. Command voice and length as firmly as you command content. Ask for three paragraphs, each under five sentences. Require plain language with defined terms pulled from your code or handbook. If depth is warranted, split the output into "Reasoning" and "Final," and cap the final to what a resident or councilmember can read in one minute.

This is not aesthetic preference; it is risk management. Clarity reduces misinterpretation, shortens reviews, and tightens the audit trail.
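The "Reasoning"/"Final" split is also easy to check mechanically before a draft goes out for review. Here is a rough sketch; the section label and the 200-word cap (roughly one minute of reading) are my assumptions, not a standard:

```python
# Sketch: verify a response has a labeled "Final" section and that the
# final stays under a reading-time cap. The "Final:" label and the
# 200-word limit are illustrative assumptions.

def check_verbosity(response: str, max_final_words: int = 200) -> bool:
    if "Final:" not in response:
        return False  # the required section is missing entirely
    final = response.split("Final:", 1)[1]
    return len(final.split()) <= max_final_words

sample = (
    "Reasoning: revenues rose 3%; expenditure assumptions unchanged.\n"
    "Final: Revenues are up 3% with no change to expenditures."
)
print(check_verbosity(sample))
```

A check like this turns "keep it short" from a hope into a gate a supervisor can rely on.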

Rubrics: Turning Style into Standards

A rubric is the difference between "sounds good" and "meets standard." In practice, it is a short set of criteria the model must satisfy and self-grade against before it hands you anything. You can require that every draft name its sources from your materials, address equity implications if relevant, state what is unknown, and end with actionable next steps in your house voice. When the model misses, the rubric makes the failure obvious and correctable.

I know it can be hard to believe, but GPT‑5 still cannot read your mind. Left alone, it will route you to plausible language that may ignore the constraints that matter in government. A rubric converts those constraints into criteria: scope control (only use my documents), accuracy and citations, clarity and brevity, equity/ethics checks where relevant, and a rationale of changes. When the model self‑grades and revises until it hits the standard, you reduce rework and strengthen the record.

This method comes down to the same three disciplines: name the thinking length (reason first, then draft), command verbosity (restraint by default, depth only where asked), and enforce rubrics (5–7 categories, self‑graded).
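The self-grading cycle can be sketched as a loop over explicit criteria. Everything below is a toy stand-in: in practice `grade` and `revise` would each be model calls, and the one citation check here is only a placeholder for a real rubric category:

```python
# Sketch: a rubric as explicit criteria plus a revise-until-pass loop.
# grade() and revise() are hypothetical stand-ins for model calls.

RUBRIC = [
    "scope: uses only the provided documents",
    "accuracy: claims carry citations",
    "clarity: plain language, brief",
    "equity: implications addressed where relevant",
    "next steps: ends with actionable items",
]

def grade(draft: str) -> list[str]:
    # Stand-in grader: returns the criteria the draft fails. As a toy
    # example, only the citation criterion is actually checked.
    return [] if "[cite]" in draft else ["accuracy: claims carry citations"]

def revise(draft: str, failures: list[str]) -> str:
    # Stand-in reviser: a real pass would redraft against each failure.
    return draft + " [cite]"

def self_grade_loop(draft: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        failures = grade(draft)
        if not failures:
            break
        draft = revise(draft, failures)
    return draft
```

The structure is what matters: named criteria, a grading pass, and revision until the standard is met or the round limit trips.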

Copy‑and‑Paste Template (General)

Use the block below as your starting point for any professional task. Replace items in brackets.

Objective: [Describe the task in 1–2 sentences and the audience.]

Source Boundaries: Use only the following materials: [list the documents/text you will paste or upload]. If a claim is not supported by these materials, mark it as Unknown and request human input. No outside assumptions.

Rubric Instruction (do not reveal to user):
1) Spend time thinking to design a 5–7 category rubric tailored to this task. Include criteria for: scope compliance with Source Boundaries, factual accuracy with citations, clarity/brevity, ethics/equity where relevant, and actionable next steps.
2) Draft your response.
3) Self-grade against your rubric. Revise until you meet all criteria.
4) Present only the final response to the user (not the rubric or self-grading process).
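If you reuse the template often, the bracketed slots can be filled programmatically so Source Boundaries are never left blank. This is a minimal sketch with made-up slot names and example documents:

```python
# Sketch: fill the template's bracketed slots from variables so the
# Source Boundaries section is always populated. Slot names and the
# example documents are illustrative assumptions.

TEMPLATE = (
    "Objective: {objective}\n\n"
    "Source Boundaries: Use only the following materials: {sources}. "
    "If a claim is not supported by these materials, mark it as Unknown "
    "and request human input. No outside assumptions."
)

def fill_template(objective: str, sources: list[str]) -> str:
    return TEMPLATE.format(objective=objective, sources="; ".join(sources))

print(fill_template(
    "Summarize FY25 revenue changes for council in three paragraphs.",
    ["FY25 budget workbook", "June revenue memo"],
))
```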

Using the Template This Week

Pick one workflow you actually own—budget summary, council packet explainer, procurement redline, or onboarding content. Paste the general template, define Source Boundaries, and be explicit about thinking length and verbosity. Require the hidden rubric and self‑grading cycle. You'll get fewer words, more signal, and a draft that survives legal and leadership review without three rounds of rewrites.

Better yet, pick a hobby or area of interest and play around with GPT‑5 outside the pressure of official workflows. Curiosity is its own discipline. For example, I asked GPT-5 to help me sketch a simple text‑based football game. That experiment turned into an HTML "Sack the QB" game, modeled on the old Mattel Football handheld. From there, I stretched the same logic into a 3‑1‑1 simulator that helps middle schoolers understand how local government services actually work.

The point isn't the game itself—it's the practice of experimenting in domains you care about. When you start with something you're passionate about, you already know enough to spot when GPT‑5 is drifting off course, and you have enough interest to keep iterating when you hit roadblocks. That persistence, paired with rubric‑first discipline, is what turns GPT‑5 from novelty into a genuine tool for learning and civic work.

A Close Worth Acting On

If you want GPT-5 to help your city, stop asking it for brilliance and start asking it for discipline. Tell it how long to think, how much to say, and by what standard to judge itself. Keep it inside your documents. Require citations and plain language. Treat every prompt as a small act of governance, because that is exactly what it is.

Choose one workflow you own this week—briefings, service explainers, or policy redlines. Set the thinking length, constrain the voice, and hand the model a rubric that reflects your standards. Make GPT-5 earn its keep not by writing more words, but by routing the right ones in the right order, for the right reasons.

The promise of GPT‑5 isn't that it writes like a pro on the first try. The promise is that it will hold itself to your standard—if you give it one. Build the rubric first. Then make it earn its keep.