My friend Catbird wrote:
Hi Roger—
I just saw Carla Hayden on PBS NewsHour. She made a remark (something to the effect of) “maybe librarians aren’t the flashiest people, but they’re trusted” that took me right to “information without the bun.”
Information Without the Bun was the name of my blog on the Times Union website from 2008 to 2021.
I got excited to be reminded of you. 

… it was a happy surprise!

I hope you are sufficiently happy in your life right now.
And in case I forgot, happy Father’s Day.
This was very touching. I diligently strive to provide accurate information on this blog. On Facebook, I often post police reports about traffic jams and parking restrictions because it seems useful.
Librarians, by training and perhaps upbringing, want information disseminated. That’s why shutting down the Voice of America, PBS, NPR (Protect My Public Media!), Radio Free Europe, and gutting the Smithsonian breaks my heart.
Tactics designed to make us more stupid are extremely troubling to me: book bans, or firing people and scrubbing departmental websites over “DEI,” which is what happened to the former Librarian of Congress, Carla Hayden. Incidentally, I never met Dr. Hayden, but librarians I know in real life who have are monumentally impressed with her.
However, it’s challenging, and it’s becoming increasingly difficult to always get it right. The things I see on Facebook and other social media that are stated as fact but are wrong cause me some mental pain.
From WIRED: “When I read a tweet about four noted Silicon Valley executives being inducted into a special detachment of the United States Army Reserve, including Meta CTO Andrew ‘Boz’ Bosworth, I questioned its veracity. It’s tough to discern truth from satire in 2025, in part because of social media sites owned by Bosworth’s company. But it indeed was true. According to an official press release, they’re in the Army now, specifically Detachment 201.”
Of COURSE, Steven Levy didn’t believe it. The concept seems absurd.
The liar’s dividend
John Oliver discussed AI Slop on Last Week Tonight. He explains “why you’ve been seeing more AI-generated content online [and] the harm it can do.” This leads to a more toxic spinoff: the liar’s dividend.
From Cambridge Core: “This study addresses the phenomenon of misinformation about misinformation, or politicians ‘crying wolf’ over fake news. Strategic and false claims that stories are fake news or deepfakes may benefit politicians by helping them maintain support after a scandal.”
From the Brennan Center: Scholars “posit that liars aiming to avoid accountability will become more believable as the public becomes more educated about the threats posed by deepfakes. The theory is simple: when people learn that deepfakes are increasingly realistic, false claims that real content is AI-generated become more persuasive too… Deepfakes amplify uncertainty.”
And there are other AI informational flaws. From The New York Times: They Asked an AI Chatbot Questions. The Answers Sent Them Spiraling. “Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.”
I try to double- or even triple-check items I post. But if I muff it once in a while, it’s not for lack of trying.
“Librarians, by training and perhaps upbringing, want information disseminated.” Your sentence makes so much sense!! I do wonder sometimes whether the things we read or see, even in mainline media sources, are true or not.