[Header image: An old-fashioned electric typewriter on a desk, with a piece of paper coming out of it showing faces and animals made out of typed characters.]


From Saturday, March 15th, 2025

Patterns, Samples, and Artificial Knowledge

Buckle up, everybody. The AI understander has logged on.

Human brains are wired to anticipate and recognize patterns. The quiet thump of footsteps coming up behind you; the sinuous stripes of a tiger’s fur, undulating as it quietly crouches in the bushes with knife, fork, and barbecue sauce clutched in its paws; the halting rhythm of popcorn in the microwave as it finishes up its popping and starts to burn. In particular, there’s something in us that responds to patterns in language. I don’t necessarily mean literary, sophisticated forms of language: I mean that people still put fragments of Radiohead lyrics in their Instagram bios, and that we all grew up enthralled by the three-syllable rhythms of Dr. Seuss.

But what if I told you that writing that kind of thing is now easy?

[Screenshot of a chat with ChatGPT. The user says: "Write a short Dr. Seuss-style poem about driving with broken brake lines." ChatGPT replies: "Oh, the places you’ll go— but not where you planned! For your brake lines are broken, your stopping’s unmanned! You roll down the street, you pick up some speed, you pump on the pedal— but stop? No, indeed!"]

Well, it’s not quite top-tier Dr. Seuss material, I guess, but that comes down to the prompt, right? I could have told it to write about something more Dr. Seuss-like: animals, breakfast foods, headwear, that kind of thing. But the fact that it understood the prompt - the fact that it knows who Dr. Seuss is, that it presumably knows what a trisyllabic meter is, that it knows which words rhyme and which don’t - that makes it smart, right?

I’m not so sure? If a human sat down to write a Dr. Seuss-style poem and produced that result, we could probably conclude that the human knew those things, or at least had an intuitive sense of them. But at this point, in year three of their mildly terrifying reign, I’m pretty sure that large language models are mostly just good at replicating patterns. That doesn’t mean they make good decisions about what to replicate, or that they know why they’re doing it. But they feel smart, partly because they hack the human brain by reproducing patterns we find interesting, and partly because they mix those patterns together.
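To make "replicating patterns" concrete, here's a deliberately tiny toy of my own (nothing like a real LLM's architecture, which uses neural networks rather than lookup tables): a bigram model that "writes" by chaining word pairs it has seen before, with no notion of meaning, meter, or rhyme.

```python
import random
from collections import defaultdict

# Toy training text: the words from ChatGPT's poem above.
corpus = (
    "you roll down the street you pick up some speed "
    "you pump on the pedal but stop no indeed"
).split()

# Record which words have followed which. This is the entire "model."
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def babble(start, length=8, seed=0):
    """Generate text by repeatedly sampling a next word that has
    actually followed the current word in the training text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word never appeared mid-sentence
        out.append(rng.choice(options))
    return " ".join(out)

print(babble("you"))
```

Every word it emits is locally plausible, because each pair was copied from real text; the output just has no idea where it's going. Real models track vastly richer patterns than word pairs, but the "feels fluent without understanding" effect starts this small.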


Tagged as computers, coding, brains, music.