kottke.org posts about ChatGPT

AI Image Feedback Loop

Data artist Robert Hodgin recently created a feedback loop between Midjourney and ChatGPT-4 – he prompted MJ to create an image of an old man in a messy room wearing a VR headset, asked ChatGPT to describe the image, then fed that description back into MJ to generate another image, and did that 10 times. Here was the first image:

[Image: AI-generated image of an old man in a messy room wearing a VR headset]

And here’s one of the last images:

[Image: AI-generated image of an old man in a cloudy room wearing a VR headset]
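In outline, the loop is easy to express in code. Here's a minimal Python sketch; both functions are hypothetical stand-ins for the Midjourney and ChatGPT steps, not real API calls:

```python
# A minimal sketch of Hodgin's feedback loop. Both functions are
# hypothetical stand-ins for the Midjourney and ChatGPT steps,
# not real API calls.
def generate_image(prompt: str) -> str:
    """Hypothetical: render the prompt as an image, return a file path."""
    ...

def describe_image(image_path: str) -> str:
    """Hypothetical: ask a vision-capable model to describe the image."""
    ...

prompt = "an old man in a messy room wearing a VR headset"
for _ in range(10):
    image = generate_image(prompt)
    prompt = describe_image(image)  # each description seeds the next image
```

Each pass pushes the scene through two lossy models in a row, which is part of why the messy room drifts toward that cloudy one.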

Recursive art like this has a long history – see Alvin Lucier’s I Am Sitting in a Room from 1969 – but Hodgin’s project also hints at the challenges facing AI companies seeking to keep their training data free of material created by AI. Ted Chiang has encouraged us to “think of ChatGPT as a blurry jpeg of all the text on the Web”:

It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.

And we already know what you get if you recursively save JPEGs.
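You can reproduce that generation loss in a few lines of Python with the Pillow imaging library; photo.jpg is a placeholder input file:

```python
# Demonstrate JPEG generation loss: decode and re-encode the same image
# over and over. Every save at quality=75 throws away information, so
# artifacts accumulate relative to the original.
from PIL import Image  # pip install pillow

img = Image.open("photo.jpg").convert("RGB")  # placeholder input
for generation in range(100):
    img.save("copy.jpg", format="JPEG", quality=75)  # lossy re-encode
    img = Image.open("copy.jpg")

img.save("generation_100.jpg")  # noticeably degraded vs. photo.jpg
```

(With some encoders the damage plateaus after a number of rounds; resizing or recoloring between saves keeps it compounding, which is closer to what the Midjourney/ChatGPT loop does.)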

See also La Demoiselle d’Instagram, I Am Sitting in a Room (with a video camera), Google Image Search Recursion, and Dueling Carls.


Ayo Edebiri Draws a New Yorker Cartoon

In June 2021 (pre The Bear), New Yorker cartoonist Zoe Si coached Ayo Edebiri through the process of drawing a New Yorker cartoon. The catch: neither of them could see the other’s work in progress. Super entertaining.

I don’t know about you, but Si’s initial description of the cartoon reminded me of an LLM prompt:

So the cartoon is two people in their apartment. One person has dug a hole in the floor, and he is standing in the hole and his head’s poking out. And the other person is kneeling on the floor beside the hole, kind of like looking at him in a concerned manner. There’ll be like a couch in the background just to signify that they’re in a house.

Just for funsies, I asked ChatGPT to generate a New Yorker-style cartoon using that prompt. Here’s what it came up with:

[Image: a New Yorker-style cartoon depicting a man standing in a hole in the floor of an apartment, holding a shovel, with only his head and shoulders visible; a woman floats beside him with a concerned expression]

Oh boy. And then I asked it for a funny caption and it hit me with: “I said I wanted more ‘open space’ in the living room, not an ‘open pit’!” Oof. ChatGPT, don’t quit your day job!
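If you wanted to reproduce the experiment outside the ChatGPT UI, the equivalent call to OpenAI's image API looks roughly like this. A sketch using the openai Python SDK (v1); the model name, size, and prompt wording are illustrative:

```python
# Hedged sketch: generating the cartoon via OpenAI's image API rather
# than the ChatGPT UI described above. Adjust model/size to whatever
# is current.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "A New Yorker-style single-panel cartoon: two people in their apartment. "
    "One person has dug a hole in the floor and stands in it with his head "
    "poking out; the other kneels beside the hole, looking concerned. "
    "A couch in the background signifies they're in a house."
)

result = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024")
print(result.data[0].url)  # URL of the generated cartoon
```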


ChatGPT Made Me Cry and Other Adventures in AI Land

[Image: ChatGPT answers a question about what kottke.org is]

[Yesterday I spent all day answering reader questions for the inaugural Kottke.org Ask Me Anything. One of them asked my opinion of the current crop of AI tools and I thought it was worth reprinting the whole thing here. -j]

Q: I would love to know your thoughts on AI, and specifically the ones that threaten us writers. I know you’ve touched on it in the past, but it seems like ChatGPT and the like really exploded while you were on sabbatical. Like, you left and the world was one way, and when you returned, it was very different. –Gregor

A: I got several questions about AI and I haven’t written anything about my experience with it on the site, so here we go. Let’s start with two facts:

  1. ChatGPT moved me to tears.
  2. I built this AMA site with the assistance of ChatGPT. (Or was it the other way around?)

Ok, the first thing. Last month, my son skied at a competition out in Montana. He’d (somewhat inexplicably) struggled earlier in the season at comps, which was tough for him to go through and for us as parents to watch. How much do we let him figure out on his own vs. how much support/guidance do we give him? This Montana comp was his last chance to get out there and show his skills. I was here in VT, so I texted him my usual “Good luck! Stomp it!” message the morning of the comp. But I happened to be futzing around with ChatGPT at the time (the GPT-3.5 model) and thought, you know, let’s punch this up a little bit. So I asked ChatGPT to write a good luck poem for a skier competing at a freeski competition at Big Sky.

In response, it wrote a perfectly serviceable 12-line poem in three stanzas that was on topic, made narrative sense, and rhymed. And when I read the last line, I burst into tears. So does that make ChatGPT a soulful poet of rare ability? No. I’ve thought a lot about this and here’s what I think is going on: I was primed for an emotional response (because my son was struggling with something really important to him, because I was feeling anxious for him, because he was doing something potentially dangerous, because I haven’t seen him too much this winter) and ChatGPT used the language and methods of thousands of years of writing to deliver something a) about someone I love, and b) in the form of a poem (which is often an emotionally charged form) – both of which I had explicitly asked for. When you’re really in your feelings, even the worst movie or the cheesiest song can resonate with you and move you – just the tiniest bit of narrative and sentiment can send you over the edge. ChatGPT didn’t really make me cry…I did.

But still. Even so. It felt a little magical when it happened.

Now for the second part. I would say ChatGPT (mostly the new GPT-4 model), with a lot of hand-holding and cajoling from me, wrote 60-70% of the code (PHP, JavaScript, CSS, SQL) for this AMA site. And we easily did it in a third of the time it would have taken me by myself, without having to look something up on Stack Overflow every four minutes or endlessly consulting CSS and PHP reference guides or tediously writing tests, etc. etc. etc. In fact, I never would have even embarked on building this little site-let had ChatGPT not existed…I would have done something much simpler and more manual instead. And it was a *blast*. I had so much fun and learned so much along the way.

I’ve also been using ChatGPT for some other programming projects – we whipped the Quick Links into better shape (it can write Movable Type templating code…really!) and set up direct posting of the site’s links to Facebook via the API rather than through Zapier (saving me $20/mo in the process). It has really turbo-charged my ability to get shit done around here and has me thinking about all sorts of possibilities.
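For the curious, the Zapier replacement boils down to one HTTP call to Facebook's Graph API. Here's a rough Python sketch, not the site's actual PHP; the page ID, token, and API version are placeholders:

```python
# Hedged sketch: publish a link to a Facebook Page's feed via the
# Graph API. Placeholders throughout -- this is the shape of the call,
# not the site's actual implementation.
import requests

PAGE_ID = "YOUR_PAGE_ID"               # placeholder
PAGE_ACCESS_TOKEN = "YOUR_PAGE_TOKEN"  # placeholder; needs pages_manage_posts

def post_link(url: str, message: str) -> dict:
    """POST to /{page-id}/feed with a link attachment."""
    resp = requests.post(
        f"https://graph.facebook.com/v19.0/{PAGE_ID}/feed",  # version illustrative
        data={"link": url, "message": message, "access_token": PAGE_ACCESS_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"id": "<page-post-id>"}
```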

I keep using the word “we” here because coding with ChatGPT – and this is where it starts to feel weird in an uncanny valley sort of way – feels like a genuine creative collaboration. It feels like there is a “someone” on the other side of that chat, a something that’s really capable but also needs a lot of hand-holding. Just. Like. Me. There’s a back and forth. We both screw up and take turns correcting each other’s mistakes. I ask it please and tell it thank you. ChatGPT lies to me; I gently and non-judgmentally guide it in a more constructive direction (as you would with a toddler). It is the fucking craziest, weirdest thing and I don’t really know how to think about it.

There have only been a few occasions in my life when I’ve used or seen some new technology that felt like magic. The first time I wrote & ran a simple BASIC program on a computer. The first time I used the web. The first time using a laptop with wifi. The first time using an iPhone. Programming with ChatGPT over the past few weeks has felt like magic in the same way. While working on these projects with ChatGPT, I can’t wait to get out of bed in the morning to pick up where we left off last night (likely too late last night), a feeling I honestly have not consistently felt about work in a long time. I feel giddy. I feel POWERFUL.

That powerful feeling makes me uneasy. We shouldn’t feel so suddenly powerful without pausing to interrogate where that power comes from, who ultimately wields it, and who it will benefit and harm. The issues around these tools are complex & far-reaching and I’m still struggling to figure out what to think about it all. I’m persuaded by arguments that these tools offer an almost unprecedented opportunity for “helping humans be creative and express themselves” and that machine/human collaboration can deepen our understanding and appreciation of the world around us (as has happened with chess and go). I’m also persuaded by Ted Chiang’s assertion that our fears of AI are actually about capitalism – and we’ve got a lot to fear from capitalism when it comes to these tools, particularly given the present dysfunction of US politics. There is just so much potential power here and many people out there don’t feel uneasy about wielding it – and they will do what they want without regard for the rest of us. That’s pretty scary.

Powerful, weird, scary, uncanny, giddy – how the hell do we collectively navigate all that?

(Note: ChatGPT didn’t write any of this, nor has it written anything else on kottke.org. I used it once while writing a post a few weeks ago, basically as a smart thesaurus to suggest adjectives related to a topic. I’ll let you know if/when that changes – I expect it will not for quite some time, if ever. Even in the age of Ikea, there are still plenty of handcrafted furniture makers around and, in the same way, I suspect the future availability of cheap good-enough AI writing/curation will likely increase the demand and value for human-produced goods.)


The Octopus Test for Large Language Model AIs

In 2020, before the current crop of large language models (LLMs) like ChatGPT and Bing, Emily Bender and Alexander Koller wrote a paper on their limitations called Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. In the paper, Bender and Koller describe an “octopus test” as a way of thinking about what LLMs are capable of and what they aren’t. A recent profile of Bender by Elizabeth Weil for New York magazine (which is worth reading in its entirety) summarizes the octopus test thusly:

Say that A and B, both fluent speakers of English, are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable. A and B start happily typing messages to each other.

Meanwhile, O, a hyperintelligent deep-sea octopus who is unable to visit or observe the two islands, discovers a way to tap into the underwater cable and listen in on A and B’s conversations. O knows nothing about English initially but is very good at detecting statistical patterns. Over time, O learns to predict with great accuracy how B will respond to each of A’s utterances.

Soon, the octopus enters the conversation and starts impersonating B and replying to A. This ruse works for a while, and A believes that O communicates as both she and B do – with meaning and intent. Then one day A calls out: “I’m being attacked by an angry bear. Help me figure out how to defend myself. I’ve got some sticks.” The octopus, impersonating B, fails to help. How could it succeed? The octopus has no referents, no idea what bears or sticks are. No way to give relevant instructions, like to go grab some coconuts and rope and build a catapult. A is in trouble and feels duped. The octopus is exposed as a fraud.

The paper’s official title is “Climbing Towards NLU: On Meaning, Form, and Understanding in the Age of Data.” NLU stands for “natural-language understanding.” How should we interpret the natural-sounding (i.e., humanlike) words that come out of LLMs? The models are built on statistics. They work by looking for patterns in huge troves of text and then using those patterns to guess what the next word in a string of words should be. They’re great at mimicry and bad at facts. Why? LLMs, like the octopus, have no access to real-world, embodied referents. This makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter, as philosopher Harry Frankfurt, author of On Bullshit, defined the term. Bullshitters, Frankfurt argued, are worse than liars. They don’t care whether something is true or false. They care only about rhetorical power – if a listener or reader is persuaded.
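To make “using those patterns to guess the next word” concrete, here's a toy bigram model in Python. It's an illustration of the statistical idea only; real LLMs work at vastly larger scale with learned representations, not raw counts:

```python
# Toy bigram "language model": predict the next word purely from
# co-occurrence counts, with no referents and no world -- pattern
# matching in the spirit of Bender & Koller's octopus.
from collections import Counter, defaultdict

corpus = "the bear ran and the man ran and the bear won".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count what follows each word

def next_word(prev: str) -> str:
    """Most statistically likely continuation -- no understanding involved."""
    return follows[prev].most_common(1)[0][0]

print(next_word("the"))  # -> 'bear' (seen twice, vs. 'man' once)
```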

The point here is to caution against treating these AIs as if they are people. Bing isn’t in love with anyone; it’s just free-associating from an (admittedly huge) part of the internet.

This isn’t an exact analogue, but I have a car that can drive itself under certain circumstances (not Tesla’s FSD) and when I turn self-drive on, it feels like I’m giving control of my car to a very precocious 4-year-old. Most of the time, this incredible child pilots the car really well, better than I can, really – it keeps speed, lane positioning, and distance to forward traffic very precisely – so much so that you want to trust it as you would a licensed adult driver. But when it actually has to do something that requires making a tough decision or thinking, it will either give up control or do something stupid or dangerous. You can’t ever forget the self-driver is like a 4-year-old kid mimicking the act of driving and isn’t capable of thinking like a human when it needs to. You forget that and you can die. (This has the odd and (IMO) under-appreciated effect, when self-drive is engaged, of shifting your role from operator of the car to babysitter of the operator of the car. Doing a thing and watching something else do a thing so you can take over when it screws up are two very different things, and I think that until more people realize that, it’s going to keep causing unnecessary accidents.)


Ted Chiang: “ChatGPT Is a Blurry JPEG of the Web”

This is a fantastic piece by writer Ted Chiang about large-language models like ChatGPT. He likens them to lossy compression algorithms:

What I’ve described sounds a lot like ChatGPT, or most any other large-language model. Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.

Reframing the technology in that way turns out to be useful in thinking through some of its possibilities and limitations:

There is very little information available about OpenAI’s forthcoming successor to ChatGPT, GPT-4. But I’m going to make a prediction: when assembling the vast amount of text used to train GPT-4, the people at OpenAI will have made every effort to exclude material generated by ChatGPT or any other large-language model. If this turns out to be the case, it will serve as unintentional confirmation that the analogy between large-language models and lossy compression is useful. Repeatedly resaving a jpeg creates more compression artifacts, because more information is lost every time. It’s the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse.

Indeed, a useful criterion for gauging a large-language model’s quality might be the willingness of a company to use the text that it generates as training material for a new model. If the output of ChatGPT isn’t good enough for GPT-4, we might take that as an indicator that it’s not good enough for us, either.

Chiang has previously spoken about how “most fears about A.I. are best understood as fears about capitalism”.

I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.

Let’s think about it this way. How much would we fear any technology, whether A.I. or some other technology, how much would you fear it if we lived in a world that was a lot like Denmark or if the entire world was run sort of on the principles of one of the Scandinavian countries? There’s universal health care. Everyone has child care, free college maybe. And maybe there’s some version of universal basic income there.

Now if the entire world operates according to – is run on those principles, how much do you worry about a new technology then? I think much, much less than we do now.

See also Why Computers Won’t Make Themselves Smarter. (via @irwin)