As someone whose career started as the Internet was becoming the place where we discuss everything from deep technical topics to the day’s news, I was exposed to Brandolini’s law even before the term was coined.

Alberto Brandolini stated that “The bullshit asymmetry: the amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.” It’s telling that this was said on Twitter, back when it was a place where we all spent hours debating the fine points, and lack thereof, of anything.

This was in 2013. Since then the world has become more online, more impersonal, and more tribal. With the popularization of deepfakes and later generative AI, Brandolini’s observation gained an entirely new set of implications—stuff straight out of Simulacra and Simulation.

But there’s a much less catastrophic version of this that impacts day-to-day knowledge work, powered by the availability and push for AI tools.

Almost four years into the LLM era, it should be clear how useful AI tools can be for designing and building things, especially soft things like software; people building hard things don’t seem to be having as good a time just yet.

And yet it should also be obvious that these tools are pretty good at doing average work. As a personal example, I am much worse at front-end development than the average developer. Using AI to help build user interfaces immediately boosts my productivity to a level it would take me years of studying to achieve. On the other hand, I am a pretty proficient distributed systems engineer, so when using AI for these tasks I tend to direct it to do exactly what I want, faster.

You can see this in the way I instruct LLMs. For front-end development my prompts are like “I want you to build this thing that’s like Datadog for this data source,” whereas my prompts for back-end systems tend to be more like “We need to build a semantic event bus. The events will be CQRS but semantic, not CDC. Make sure that we enable pluggable observers for events. Follow the architecture in ARCHITECTURE.md.”

There’s a whole conversation about working styles when using coding agents and how much one should or should not micromanage the AI model, but that’s for another day. The point I am making here is that LLMs are excellent at building average versions of things, and very often, average is exactly what you need. But not always, which is why GPT can pass bar exams and still be a shitty attorney, for example.

Another interesting factor here is AI’s inherent sycophancy. AI models are “yes, and” machines: they work by completing what was given as input. This means that unless intentionally prompted to avoid it, most AI models will do their best to agree with you even if you are talking nonsense.

If you consider that half of us are below average at most things, having a yes-man of a machine that is trained to try and agree with you even on topics where you are not an expert is the perfect environment for boosting the Dunning–Kruger effect: the systematic tendency of people with low ability in a specific area to give overly positive assessments of that ability.

When you have a hot take on something you don’t know much about, the AI will almost never tell you “It’s great that you are interested in this topic! Here are some links to understand more about it and make a proper assessment.” It will instead confirm that “you are absolutely right!” and give you some very average arguments to back your case.

This is not fundamentally new: anyone who has spent hundreds of hours building a project plan or technical design has been the victim of drive-by opinions, where folks who have done zero research on a topic but have a lot of big feelings are very keen on sharing them with you—and, hopefully, with an audience they believe will clap at their commonsense opinion.

Over the years, the way I’ve found to manage these distractions is by asking people to show their work. It’s great that you have thoughts about this—could you please show me how exactly you propose it could be different? Could you explain your proposed solution to this and that problem? Could you link me to your references and show your work? How does this fit project timelines and other external and organizational factors?

This is usually enough to kill drive-by opinions. Often the person won’t invest the time to actually learn about the problem and will back out, letting people who are actually invested do the work. Sometimes they will put in the effort, and you might end up in the wonderful scenario where they make a good case for change and everyone benefits. At the very least, you’ll have another person in the organization who has been educated about the problem and its implications.

Unfortunately, AI neutralizes my approach completely. Instead of doing actual research (AI-aided or not), the person will often just go to their favorite LLM, upload a few documents or some code, and say “Make my case for me.” Being the good yes-man that it is, the LLM will produce an argument that is plausible, but not correct. It will be a long document. It will look and feel like a legit rebuttal. If you skim through it, you are going to see all the keywords and themes you’d expect to see.

It will take an actual read of the document to work out that it’s full of factual errors, opinions stated as facts, and big holes in the arguments caused by poor context engineering from the prompter. You might want to point these out one by one, but if the person didn’t do their research to begin with, there’s no incentive to do it now; they’ll probably just throw your comments into the LLM again and say “respond to this by defending my point.”

This creates a loop of Brandolini’s law: not only is the original bullshit cheap to produce, but even if you spend a disproportionate amount of time refuting it, generating a new bullshit response to your comments is even cheaper. It’s a hellish version of the Chinese Room thought experiment, where you are debating a machine that has no idea about anything but can look like it does.

As AI usage broadens, education and accountability around what it means to put your name on AI-generated work will likely be the long-term answer. But in the meantime, I’ve learned that the best thing is to stop engaging with slop dressed up as argument. Not only because it traps you in this hellish loop, but because it cheapens the hours you spend doing the research, thinking through the problem, and writing down your work.