When Doctors Publish AI-Generated Advice, Everybody Loses

By Paul von Zielbauer

December 12, 2025

I have a problem with the growing number of doctors churning out inhuman amounts of longevity and wellness advice on social media. I say inhuman because more and more of what I see some doctors posting sure looks and sounds like unvarnished AI.

Why it bothers me so much isn't complicated.

Trading on your licensed credentials to communicate information that you didn't write yourself violates the doctor-patient (or doctor-reader, on social media) covenant. To those in whom much trust is invested, much is expected. Pretending ChatGPT crapspackle is your own doctorly ruminations is exactly what is not expected.

It's offensive because we give doctors authority we don't extend to most other people. They've spent so many years learning how the human body functions and falls apart. When a physician tells you something, her or his opinion comes wrapped in our presumption of trust.

Using AI to dispense longevity and wellness advice destroys that trust, and it teaches readers and patients not to extend it in the first place. That is why it troubles me that a growing number of doctors appear to be publishing AI-generated health advice under their own names.

On Substack, for instance, physicians have built substantial followings offering guidance on building physical and nutritional habits, avoiding cardiovascular disease and keeping your brain healthy, among many other topics. But when I started looking more closely at some of their published work, too many pieces carried the tell-tale staccato "Not X, but Y!" structure, an unearned certitude or a robotic delivery, or all the above: the DNA (pun intended) of machine-made "content."

To test my hunch, I ran selections from six health and wellness professionals through GPTZero, a leading AI text analyzer. The tool isn't infallible, but the patterns it identified matched what my eyes were telling me.

One physician with more than 21,000 subscribers posts lengthy articles almost every day, a punishing pace for anyone, let alone a busy doctor. Several of her posts registered with GPTZero as entirely AI-generated.

That doctor did not respond to the questions I asked her. When I followed up again, she blocked me.

A cardiac surgeon whose newsletter targets fellow physicians produces elaborately designed posts that GPTZero flagged as substantially machine-written. He did not respond to my request for comment, either. A third doctor's work alternates between what sounds like genuine personal reflection and passages with the telltale cadences of ChatGPT — those clipped sentences, the metaphors that strain a little too hard. He also declined to comment.

One writer who uses "Dr." before her name and writes about medical wellness for thousands of subscribers turned out, upon a little investigating, to be a dentist, not a physician. Her posts appear to be AI-generated. She didn't respond to my questions.

The physicians who did talk with me were more nuanced. An orthopedic surgeon who's written about health for years said he sometimes uses AI to edit his work or adjust the tone. But one recent post of his included lines like "Comfort can be a coffin" and "Your body doesn't negotiate." He maintains he wrote most of it himself; GPTZero suggested otherwise.

Of course, doctors can use AI responsibly: to vet ideas, edit their written thoughts for clarity, and challenge their theses. Certainly not every doctor using AI does so in bad faith. Writing is hard, and physicians are as busy as or busier than the rest of us.

But I think there's something unethical about physicians building audiences on the strength of their credentials and then handing the actual work to a machine. Readers come for the doctor's judgment. If the judgment is being outsourced, what exactly are they getting?

"Mindless to write, mindless to read," one physician told me when I asked why they thought AI-generated articles written by fellow doctors still managed to accumulate plenty of likes and comments. "I feel like it steals potential intellectual capital from readers."

Annie Fenn, a doctor who writes about brain health and clearly does her own writing, said she sometimes congratulates subscribers for finishing her harder posts. "Just hanging in there and trying to grasp complex topics is one way to build brain resilience," she told me. "I want them to feel good about reading stuff that's hard."

That seems right to me. When doctors skip that process while trading on the trust their titles confer, something has gone sideways. Call it what you will; at bottom, it's a deliberate lack of transparency. Don't patients and readers deserve better?

To find out more about Paul Von Zielbauer and read features by other Creators Syndicate writers and cartoonists, visit the Creators Syndicate website at www.creators.com.

Photo credit: National Cancer Institute at Unsplash

Aging With Strength