AI Isn’t the Threat
AI isn’t the threat.
Disconnection is.
And the systems we build around AI—who they serve, what they replicate, and what they quietly replace—reflect how we relate to power, care, and accountability.
It’s not just AI replacing people.
It’s people in positions of influence choosing to replace people with AI. Quietly, strategically, and often without naming the cost.
Not because AI demanded it, but because it made it easy to avoid hard truths about labor, access, and responsibility.
We talk about AI’s potential to optimize. But rarely do we ask—optimize for whom? At what cost? And with what assumptions left unexamined?
Disconnection doesn’t just mean tech outpacing humans. It means disconnection from care, from labor, from memory. From the emotional, ethical, and historical context that never made it into the dataset.
I use AI in my operations work to support documentation, reduce invisible labor, and help teams work with more clarity.
Sometimes it helps me see patterns I was already holding.
Sometimes it helps carry complexity I couldn’t hold alone.
Sometimes I close the tab when the answer feels misaligned.
I don’t treat AI as a shortcut or a solution. I treat it as a mirror. A presence that reflects what we feed it, what we avoid, and what we’re ready to name. If we feed it urgency, standardization, and surface-level inclusion, it will scale that. Faster, wider, and with fewer people in the room.
That’s why algorithmic inclusion isn’t about adding representation after the fact. It’s about designing systems that recognize difference from the beginning. Not systems that “work for most people,” but ones that hold nuance, variation, and lived experience as core to their design.
AI will not teach itself care.
It won’t ask better questions unless we do. And it won’t pause to consider who’s being excluded unless we’ve built that pause into the process.
The stories we’re seeing in the news, about AI replacing workers, replacing creatives, replacing care, aren’t just observations. They are narratives. And narratives shape behavior, policy, and belief. They tell people to step back from their own insight. They frame scale as progress and complexity as inefficiency. And they call it innovation, when it’s often just control.
AI can help us work with more integrity, but only if we choose to relate to it that way.
To meet it as a collaborator, not a replacement.
To design alongside it, not delegate our values to it.
To teach it not just what’s efficient, but what’s ethical, what’s human, and what’s worth protecting.
If you too are holding these questions about power, systems, care, and what we’re building, I’d love to hear what’s emerging for you.