The End Times Will Be Passive-Voiced
What happens when AI turns lawyer-speak into policy? Nothing good.
Whenever someone hands me a contract to review—a purchase agreement when I’m buying a car, a lease when I’m kicking a kid out of the house, a waiver when I sign up for one more therapy app—I always make the same joke while jumping right to the signature line:
“Your lawyers are better than mine. I’ll just sign it.”
Look, contracts are boring. As prose goes, they suck.
I can read an entire contract word-for-word and walk away dumber than when I started. Who was supposed to pay what by when? Nobody knows. Passive voice obfuscates everything important.
“The documents shall be destroyed…”
“Payment must be made…”
“All warranties subject to change…”
Just tell me what you want me to do, contract guy. I’ll sign it.
Scaling the Passive Voice
My annoyance with the passive voice usually lives only in my mind. The ADHD brain doesn’t cope well with ambiguity, and lawyer-speak seems to love it.
But a recent article (“The Passive Voice in Artificial Intelligence Language: Algorithmic Neutrality and the Disappearance of Agency”) illustrates what happens when AI tools eat up and amplify the passive voice of lawyers and other bureaucrats.
With no intention whatsoever, AI tools use the foggy terms a lawyer might employ purposefully, but the result is the same: a loss of agency, an illusion of authority, and a false promise of neutrality.
Now imagine if we built governing systems on the back of that list of flaws. Yikes.
Why do lawyers use passive voice?
You can imagine a list of reasons why a lawyer would intentionally use the passive voice, even when it makes contracts, legislation, and judicial opinions an unrewarding slog:
To convey a detached and objective tone;
To center the action, which must take place in the face of change, rather than the actor, who can be replaced in shifting conditions;
To avoid responsibility;
To maintain confidentiality; or
To improve the sentence, as contract drafting expert Ken Adams has argued.
But, as long as I’ve known Ken, he’s always been more likely to chalk bad contract drafting up to lazy copy-pasting than to conscious decision-making. Our old method of bureaucratic language craft has arguably lacked thoughtful choice, and that’ll only be amplified by AI.
And boy can I validate Ken’s argument after spending time in the vast contract database that is Law Insider. So much awkward writing mirrored so many times in contract language handed down within law firms and among tech tools.
Now, add some AI compute and multiply that by near-infinity. What kind of society will our passive-voiced lawyerly robot overlords build for us?
Passivity as Political Tool
Thing is, you know what AI writing looks like. You recognize it. It’s just off.
But weirdly, that offness looks a lot like bad legal writing.
You see similar patterns: frequent use of AI-typical words like “essential” and “unleash,” repetitive sentence structures, explanations with no justifications, obvious errors, childish storytelling, inconsistent style, etc.
But these stylistic norms are not just offensive to editors; they perpetuate power structures.
According to a new article from author Agustin V Startari, the passive voice is often used to empower the state, giving dictates the illusion of dispassionate truth. The fact that AI tools reflect that style will, according to Startari, negatively impact policy-making and enforcement.
As Startari writes, “the passive voice has served as a tool for masking responsibility and simulating objectivity.” More than a simple linguistic stumble, the passive voice can be a means of justifying power without accountability, especially for autocratic regimes.
That agent-less objectivity is seductive. It echoes politicians’ calls for both “common sense” and “trusting the experts.” It perpetuates the belief in a right answer that we could obey if we’d just get those smarmy bureaucrats out of the way.
“It has been determined…”
“It is required…”
“It shall be done…”
And now, artificial intelligence tools are being trained on publicly available data, much of which is coded in that bureaucratic and legalistic language.
As Startari says, this power-enabling grammar weapon becomes amplified at the scale of AI.
So who cares? How does this impact the world that lawyers inhabit?
A Passive-Voiced Dystopia
Think about how you use artificial intelligence tools, then imagine that use as a policy-making feedback loop.
Do you use AI to take notes during meetings? Draft letters or even briefs? Legislation, maybe?
That activity not only relies on artificial intelligence; it also becomes its new training data, perpetuating a cycle of passive language that turns into power.
You can imagine a future in which AI is used as a policy recommendation scapegoat with the illusion of authority, laying groundwork for the new “I was just following orders.”
You can envision an unoriginal legislature blaming the AI when sameness gets amplified; innovation becomes a grammatical remix of past norms rather than a response to constituents’ needs. Policy imagination becomes constrained by statistical probability, not moral urgency or democratic will.
If every sycophantic and confident AI sentence begins with “Experts say…” when no such consensus exists, will that stand in for stakeholder engagement? Will we end up with a new, unfair rule by the majority, as any AI dissent is low-probability output and therefore rare?
This is not hypothetical. We see today what happens when legislation is written passively but interpreted actively. America’s Executive branch is using Congress’s language—written to favor vague goals so the Executive has necessary wiggle room in how it executes—as an open door to purposefully undermine the legislators’ intent.
It all promises to get worse. Read the book Abundance and you’ll see a call for a Manhattan Project for bureaucracy; bureaucratically trained AI undermines that, and politicians are happy to take advantage.
Can the Legal Industry be More Active?
With the kind of copy-pasting we see in contracts, opinions, and legislation, lawyers have traded the utility of passive voice for a culture of passive voice. What can we do to change that?
In terms of AI outputs, the cat’s probably out of the bag. No amount of newly active writing will compensate for the century of bureaucratic training language the AI has already incorporated (though you should still write with less passive voice).
Instead, people who care about legal systems and power structures should choose to be active. Identify a human author of policy proposals. Design ways to surface low-probability but high-moral-weight ideas. Build tools that audit whether AI-generated language is just formally plausible or substantively grounded. Recognize that machine-generated drafts are not neutral or authoritative, then insist on accountability.
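To make the auditing idea concrete: even a crude script can surface passive constructions for human review. The sketch below is a rough heuristic I’m offering as an assumption-laden illustration, not a real NLP tool — it just flags “to be” forms followed by a word that looks like a past participle, which is how simple passive-voice linters tend to start. A serious audit tool would use a proper parser.

```python
import re

# Rough heuristic: a form of "to be" (including the lawyerly "shall be"
# and "must be") followed by something that looks like a past participle.
# The participle list is illustrative, not exhaustive.
BE_FORMS = r"\b(is|are|was|were|be|been|being|shall be|must be)\b"
PARTICIPLE = r"\b(\w+ed|\w+en|made|done|paid|kept|held)\b"
PASSIVE = re.compile(BE_FORMS + r"\s+" + PARTICIPLE, re.IGNORECASE)

def passive_hits(text: str) -> list[str]:
    """Return the passive-looking phrases found in text, for human review."""
    return [m.group(0) for m in PASSIVE.finditer(text)]
```

Run it over the contract clauses quoted earlier — `passive_hits("The documents shall be destroyed.")` flags “shall be destroyed”, while an active rewrite like “The seller will destroy the documents” comes back clean. The point isn’t accuracy; it’s that agent-less grammar is mechanically detectable, so there’s no excuse for tools not to surface it.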
Scaling passive voice can go sideways via paths we don’t anticipate, but we can be the wizards behind the machines. We can insist on designing for accountability and agency.
And we must, before my click-through-contract behavior becomes the AI-enabled governing norm.
Keep building.
-Mike