
Why I switched from ChatGPT to Claude

3 min read

and why I’m not going back... for the most part


I didn’t switch to Claude because someone told me to. I switched because ChatGPT kept missing the mark in a way I just couldn’t keep ignoring.

For a long time, ChatGPT was my default flavor. It was what everyone was using. It was funny (thanks, me). It was what most tutorials referenced. We had nicknames (Chad being a personal favorite of mine). But the reality is that Chad was just... fine. And fine is a word that will come up a lot when I talk about Chat. It’s a capable tool. It just wasn’t my tool.

And here’s what finally did it for me.

I was going through a period where I needed real, raw advice and practical help. Not therapy. Not careful, cushioned responses that tiptoed around the actual answer I was looking for. ChatGPT has a tendency to lead with emotional validation before it gets to the point, and when everyone in your life is already handling you carefully, having your AI do it too gets real old, real fast.

I understand why it’s built that way. I’m sure it’s helpful for a lot of people. It just wasn’t helpful for me.

Claude doesn’t do that. It treats you like an adult with a question who came for an answer. It can read the room when you need something gentler, but it doesn’t default to fragile. That difference sounds small until you’re on the receiving end of it every single day.

I’ll be honest though, that was only the first thing.


The second “thing” was much bigger, and it meant much more to me as a veteran.

When news broke that OpenAI was in conversations to allow government use of their models in ways that conflicted with their original commitments, I had to be done. Not because AI and government can’t coexist; they can, and in my opinion, should. But because Anthropic looked at a powerful entity with a lot of money and said “no, this goes against our ethics,” and meant it.

That matters to me. A lot.

I spent seven years in the U.S. Navy. I understand the weight of institutional commitments and what it means when an organization actually stands behind theirs. Anthropic standing by their values when it would have been very easy, and very lucrative, not to is the kind of thing I notice. It’s the kind of thing that builds trust.

I’m not here to tell you what to think about OpenAI. I’m just telling you what I think, and what I think is that the values of the people building your tools matter.


The third thing was simply about me, not the tools.

Around the same time I made the switch, I was leaning hard into something I’d always done but never fully claimed: building. Not just using tools. Actually building with them. Agents, automations, systems that solve real operational problems. That’s where I come alive.

Claude is a builder’s tool. The way it reasons, the way it holds context, the way it pushes back when something doesn’t make sense, all of it is oriented toward making something. ChatGPT is great at a lot of things. Image generation, quick lookups, casual use. But when I sit down to build, Claude is who I want in the room.

That’s not a dig. That’s just fit.


So that’s why I switched. Tone, values, and identity. Three things that have nothing to do with benchmark scores and ROI and everything to do with how a tool actually feels to use over time.

This blog exists because I kept learning things I wanted to write down, and because writing things down is how I actually learn them. If something I figured out the hard way saves you time or frustration, that’s the whole point. If something I say misses the mark or there are questions, I appreciate the feedback. This is how we (and AI) learn.

Open source learning. Girl bosses don’t gatekeep.
