May 3, 2024

Claude — the new AI kid on the block

Curiosity, mostly. That's what got me to try Claude.

I'd been on ChatGPT long enough that it had become background infrastructure — always open in a tab, always useful enough to not question. But I kept seeing Claude mentioned in circles I pay attention to, and Anthropic's approach to AI safety struck me as more considered than most. So last month I started routing some of my everyday prompts through Claude instead, just to see.

A few things stood out quickly. The writing it produces doesn't read like a language model wrote it. (It is one, of course.) That sounds obvious, but spend enough time with ChatGPT and you start recognising the cadence: a certain rhythm, a tendency toward lists when prose would do better, hedges stacked on hedges. Claude's output has more texture. Calling it personality would be a stretch, but it's there.

The other thing, and this surprised me more, is that Claude seems to know what it doesn't know. ChatGPT will occasionally confabulate with remarkable self-assurance: detailed, plausible, and wrong. Claude is more likely to flag uncertainty, or to ask a clarifying question before charging ahead with an assumption. In a work context, that matters. Confidently wrong is often worse than uncertain. Of course, an LLM is an LLM is an LLM.

To be fair, ChatGPT still pulls ahead in places. The plugin ecosystem is broader, and GPT-4's code interpreter has saved me hours of data wrangling I don't particularly enjoy doing manually. These aren't minor points. And for the casual user, the difference probably isn't dramatic enough to justify switching.

But my default has shifted. When I open a new tab now, it's more often Claude than not. After so long with the old habit, that's telling.

(Title is a reference to an older post)