Learn
Guides, prompt patterns, and how-tos written for humans who want to get good at this, not programmers who already are.
-
Why does AI agree with you when you're wrong? The strange science of sycophancy.
AI sometimes folds the moment you push back, even when you're the one who's wrong. The behavior has a name, a measured cause, and two prompt moves that change the picture.
-
How much water does AI actually use?
Where the water in "AI uses water" headlines actually goes, the per-query number worth holding in your head, and why public figures disagree by orders of magnitude.
-
Why Does AI Hallucinate? (And Why It Sounds So Sure)
An explainer for chatting-tier readers on the mechanism behind confidently-wrong AI answers, the specific shapes hallucinations take in 2026, and the moves that actually reduce them.
-
The 2026 AI Writing Tells (and Why the Em-Dash Meme Is Half-Right)
You're not imagining it. AI writing feels off. But the tells in 2026 aren't mostly the lexical ones the internet has been mocking. They're shape-level, and they're easier to spot once you know what you're looking at.
-
What the Edit Button Actually Does in ChatGPT, Claude, and Gemini
The pencil icon next to a past message does two very different things. On Claude and ChatGPT, it saves both versions and keeps the original one click away. On Gemini, as of May 2026, it replaces your prior turn with no way back.
-
Past the Demo: What ChatGPT Images 2.0 Is Actually Good At
Four days after gpt-image-2 launched we ran fifteen calibration calls and surveyed the community work. Here's what's actually worth using it for, what other models match, and where it still flunks.