The danger of letting AI curate our reality

Unfettered Treacle on Substack
Let’s talk about the slow, sneaky shift in how people are getting their news.
Everybody’s buzzing about AI news feeds, aggregators, bots that “keep you informed while you sip your coffee.” Handy, sure. But there’s a hitch: where do you think those bots get their news?
From us. And by “us,” I don’t mean me and my half-finished sci-fi draft. I mean human reporters. Journalists who still show up at city hall meetings, sift through court records, chase down reluctant sources, and occasionally get yelled at on the phone for asking too many questions.
AI doesn’t cover a zoning board hearing in Topeka. It doesn’t sit in the back row of a school board meeting until midnight. It doesn’t get coffee with a whistleblower who finally decides to spill. AI just scoops up what others already dug up, slaps on a shiny summary, and spits it back at us.
The latest surveys show more and more folks skipping straight to AI news feeds. Why wade through multiple news sites when a bot can chew it up and hand you a tidy digest? It feels efficient. Like a news smoothie.
But here’s the problem with smoothies: you forget what went into them. All you taste is “vaguely strawberry.”
Same with AI summaries. They don’t investigate; they summarize what humans already reported. And journalism, inconveniently, doesn’t run on vibes. It runs on reporters with notebooks, travel budgets, and a paycheck that covers more than instant ramen.
So why are people turning to AI for their news?
Because we’re drowning in headlines. A thousand stories fly past every day, and the bots promise to skim off the froth and hand you just the “important stuff.”
Because a lot of us are suspicious. After years of “fake news” shouting matches, some folks figure a machine might be more neutral. It might not be.
And because it’s easy. Who wants to slog through a 3,000-word article when you can get the gist in two sentences while your barista is still steaming the milk?
And here’s the uncomfortable truth: most of us aren’t reading much anymore anyway. We skim. We scroll. We let the headline stand in for the story. The five-minute read becomes the thirty-second glance. AI isn’t creating that problem; it’s capitalizing on it. Sound-bite media is already the dominant flavor. The bots just promise to spoon-feed it to us faster, with fewer syllables.
It makes sense.
But here’s the paradox: the more we let AI do the summarizing, the less support flows to the people actually doing the reporting. Local papers close, investigative desks get cut, reporters get laid off.
The pool of original material shrinks. The bots keep summarizing, but now they’re summarizing less and less. Pretty soon your “news feed” is nothing but recycled press releases with a sprinkle of algorithm glitter.
And then comes the personalization twist. Tools like ChatGPT Pulse, SmartNews, and the recently shuttered Artifact don’t just summarize the news; they build you your own little terrarium of it. If you click on economic stories, suddenly politics goes missing. If you lean left, you stay left. Lean right, you stay right. Never read local news? It vanishes, as if it never mattered.
That’s how the echo chamber gets personal. You stop bumping into stories you didn’t expect. No quirky op-eds that challenge you. No surprise investigation about the factory down the road. Just the same flavors, reheated daily, until the world starts looking like your reflection.
The fallout isn’t abstract.
- Your worldview narrows.
- Your biases get reinforced.
- The people who control the biggest pipes control what flows into your bubble.
- And the boring but vital stuff, the school boards, zoning fights, council meetings, slides right past you.
Take Ground News, for instance. It’s one of the few aggregators that at least tries to address the bias problem head-on. Instead of just handing you a polished digest, it lines up multiple outlets side by side and tells you who’s leaning left, right, or somewhere in the mushy middle. It even has a “blindspot” feed that shows you stories underreported by one side of the spectrum. That’s clever. But even there, it’s not perfect. Slap a label like “center” or “right” on a whole outlet and you flatten nuance, and if you only click the stories that fit your worldview, your “personalized feed” still tilts in the same direction. The algorithm learns what you like and doubles down. Which means even a tool designed to pop your bubble can, ironically, reinforce it.
So where does this all end? Maybe licensing deals, with AI companies finally paying publishers. Maybe harder paywalls, forcing us to subscribe for the real thing. Or maybe we just wake up one morning and realize no one’s covering city hall anymore, and all the bots are doing is remixing each other’s leftovers.
AI can remix the world beautifully. But it can’t generate accountability. It can’t ask a sharp question at a press conference or talk a whistleblower into going on record. And it sure as hell can’t corner a reluctant mayor in a parking lot with a notebook and a half-dead pen.
If the humans stop digging, the bots have nothing left to remix. And when that happens, your personalized feed won’t be news anymore, it’ll just be a mirror, reflecting your own preferences back at you, until the heat death of the universe.
Me? I’ll take a messy, human byline over a smoothie-bland consensus feed any day. I still read the paper daily, but if that’s the future, I’d rather just read the comics page.