
kottke.org posts about Dr. Time

Ask Dr. Time: In Praise of Hope


This week’s edition of Noticing, the kottke.org newsletter, features the return of Doctor Time, the world’s only metaphysical advice columnist. In this case, the good doctor tries to explain the difference between faith and hope, and tries to understand what hope might mean in the absence of God. Here’s the section in full. For more thoughtful goodness, subscribe to the newsletter! I write it just about every week; if you like my posts or Jason’s posts at all, I think you’ll like it.

* * *



What’s the difference between faith and hope?



Okay, to be fair, nobody actually asked this question in this way, but the distinction came in conversation more than once this week, and for lots of reasons, it’s worth talking about right now. For the answer, we’re going to start with an excellent podcast episode from the BBC’s In Our Time, all about the philosophy of hope.

The episode starts its genealogy with Hesiod, who right away poses the problem of Pandora’s Box and/or Jar: Hope is sealed up in the jar of all the evils in the world, but does that make it one of the evils Zeus sent to punish humanity with, or is it a good in our pantry that helps us deal with all the other evils? Even the Greeks seem split on this: Hesiod’s original story is decidedly pessimistic, and Plato and Aristotle didn’t set much store by hope, but one Greek-speaker, St. Paul, thought enough of hope that he put it with faith and love as part of a second Holy Trinity of Christian virtues. (I guess if faith is God the father, and love is Christ, hope is the holy spirit? Probably not worth mapping them onto each other too closely.)

Anyways, the really great thinker on hope is St. Augustine, who is MY MAN for many, many reasons. (I’m not Catholic or Christian any more, but I love the way the great theologians think about the universe and its problems, and Augustine is the very best one.) For Augustine, hope is first and foremost about the second coming, and the ultimate fulfillment of human beings and their potential. So you have faith, a belief that God is real and salvation is possible, which is given to you by God; you can’t manufacture it. You have love — also caritas, or charity — a kind of selfless outpouring of affection and righteous deeds towards God and all His works, especially other human beings. And then you have hope, which is this imaginative representation of being fulfilled and made whole at the end of time.

Time is important for Augustine, and hope becomes a kind of ontological structure for understanding time. Augustine thinks of temporality as a kind of eternal stretching of the now, from the beginning of time in the creation through the end of time in the resurrection, and hope is also imagined as a kind of stretching. This is how he puts it in his tractates on the first letter of John:

The entire life of a good Christian is in fact an exercise of holy desire. You do not yet see what you long for, but the very act of desiring prepares you, so that when he comes you may see and be utterly satisfied.

Suppose you are going to fill some holder or container, and you know you will be given a large amount. Then you set about stretching your sack or wineskin or whatever it is. Why? Because you know the quantity you will have to put in it and your eyes tell you there is not enough room. By stretching it, therefore, you increase the capacity of the sack, and this is how God deals with us. Simply by making us wait he increases our desire, which in turn enlarges the capacity of our soul, making it able to receive what is to be given to us.

So, my brethren, let us continue to desire, for we shall be filled. Take note of Saint Paul stretching as it were his ability to receive what is to come: Not that I have already obtained this, he said, or am made perfect. Brethren, I do not consider that I have already obtained it.

It’s kind of sexy, isn’t it? Holy desire! Stretching ourselves to be filled up! Utter satisfaction! It’s a kind of religious tantra. And every kind of hope or desire, no matter how base, is a prefiguration of (and ideally, subordinate to) that ultimate desire: to be reconciled with the universe in the godhead. We imagine, i.e., represent to ourselves, the satisfaction of our desire by stretching ourselves across time to the endpoint of our fulfillment.

And hope, like faith, is a thing that happens to us. We don’t will it; it’s inflicted on us and we receive it, make it manifest, and figure out what to do with it. This bothered the classical Greeks tremendously, because their virtues were virtues of control and mastery. But for Greek-speaking Jews and the Christians that followed them, the passive nature of hope was itself a virtue. It left room for the Messiah to walk through the door.

It also means that hope has a secular dimension that faith just doesn’t. Any object can be an object of hope. Hoping for ordinary fulfillment trains us to hope for spiritual fulfillment. It stretches us out. It makes our hearts bigger. It makes time intelligible for human beings. For all these reasons, hope, more so than faith and even love, is my favorite theological virtue. It’s the most powerful. It’s the easiest one to lose. And we are at our best and most human when we find room to hold holy our deepest hopes.


Ask Dr. Time: Explaining Mathematics


Today’s question is surprisingly tricky, as even the letter writer acknowledges:

My question is one I’m fumbling to articulate. I’m a math teacher and writer. (I’m a writer in the sense that I write, not in the sense that I get published or paid for writing.) I write a lot about teaching, but I’ve also been trying to get a handle on how I can write about math.

Here’s the question: is it possible to write about math in a deep and accessible way?

This is a question that sends me off on a lot of different questions. What does it mean to understand math? What does it mean to understand a metaphor? Are there great literary works that are also mathematical?

Ultimately, though, I don’t know how to think about this yet. I’m hoping to eventually figure this out by learning math and writing about it…but that’s slow, so maybe Dr. Time can offer advice?

The obvious answer to this question is yes, of course it’s possible to write about math in a deep and accessible way. Bertrand Russell won a Nobel Prize in Literature. Gödel, Escher, Bach is a 777-page doorstop that’s also a beloved bestseller. If you’re looking to satisfy an existence requirement, that book has your back. I’ll even stipulate that for every intellectual subject, not just mathematics, there exists a work that satisfies this deep-but-accessible requirement. It’s just like how there’s always a bigger prime number. It’s out there; we just have to find it.
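Since that analogy is doing real mathematical work, here’s a minimal sketch in Python of the move it invokes, Euclid’s existence argument for primes; the function name is my own, purely for illustration.

```python
# Euclid's existence argument: given any finite list of primes,
# their product plus one has a prime factor missing from the list,
# so there is always a bigger prime out there to find.

def a_new_prime(primes):
    n = 1
    for p in primes:
        n *= p
    n += 1  # n leaves remainder 1 when divided by every prime in the list
    # The smallest factor of n greater than 1 is necessarily prime.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

print(a_new_prime([2, 3, 5]))  # 2 * 3 * 5 + 1 = 31, a prime not in the list
```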

On the other hand, math seems hard. And I think it seems hard for Reasons. Here’s a big one: mathematicians and popularizers of mathematics are perhaps understandably obsessed with understanding mathematics as such. They want to explain the totality of mathematics, or the essence, rather than finer problems like distinguishing between totalities and essences.

If you look at the other sciences, they don’t do this. It’s only very rarely that you get a Newton, Darwin, or Einstein who sets out to grab his or her entire subject with both hands and rethink our fundamental understanding of its foundations. Imagine a biologist who wants to explain life, in its essence and totality, at the micro and macro level. They’d be understandably stumped. Even physicists, when they want to explain something big and weird to the public, stick to things like a subatomic particle they’re hoping to discover or the behavior of one of Saturn’s moons. They don’t try to explain physics. They explain a problem in physics.

When mathematicians do that, they’re usually pretty successful. The Königsberg Bridge Problem is charming as hell. Russell’s and Gödel’s paradoxes have whole books written about them, but can also be told in the form of jokes. Even Fourier transforms can be broken down and made beautiful with a little bit of technical help.
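To see how little machinery that charm requires, here’s a minimal sketch in Python of Euler’s parity argument for the Königsberg bridges (the labels A through D for the four land masses are my own convention): count how many bridges touch each bank, and a walk that crosses every bridge exactly once is impossible as soon as more than two of those counts are odd.

```python
from collections import Counter

# The seven bridges of Königsberg, each an edge between two land masses.
bridges = [
    ("A", "B"), ("A", "B"),  # two bridges between the island and one bank
    ("A", "C"), ("A", "C"),  # two bridges between the island and the other bank
    ("A", "D"), ("B", "D"), ("C", "D"),
]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [land for land, d in degree.items() if d % 2 == 1]
print(dict(degree))  # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(len(odd))      # 4 odd-degree vertices, more than 2: no such walk exists
```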

So I think the key, in part, is to resist that mathematicians’ tendency to abstract away individual problems into general solutions or categories of solutions or entire subfields, and spend some time with the specific problems that mathematicians are or have been interested in. But it also helps a lot if, in that specific problem, you get that mathematical move of discarding whatever doesn’t matter to the structure of the problem. After all, that’s a big part of what you’re trying to teach: how to think like a mathematician. You just have to unlearn what a mathematician already assumes first.


Ask Dr. Time: What Should I Call My AI?


Today’s question comes from a reader who is curious about AI voice assistants, including Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and so forth. Just about all of these apps are, by default, given female names and female voices, and the companies encourage you to refer to them using female pronouns. Does it make sense to refer to Alexa as a “her”?

There have been a lot of essays on the gendering of AI, specifically with respect to voice assistants. This makes sense: at this point, Siri is more than six years old. (Siri’s in grade school, y’all!) But one of the earliest essays, and for my money, still the best, is “Why Do I Have to Call This App ‘Julie’?” by Joanne McNeil. The whole essay is worth reading, but these two paragraphs give you the gist:

Why does artificial intelligence need a gender at all? Why not imagine a talking cat or a wise owl as a virtual assistant? I would trust an anthropomorphized cartoon animal with my calendar. Better yet, I would love to delegate tasks to a non-binary gendered robot alien from a galaxy where setting up meetings over email is respected as a high art.

But Julie could be the name of a friend of mine. To use it at all requires an element of playacting. And if I treat it with kindness, the company is capitalizing on my very human emotions.

There are other, historical reasons why voice assistants (and official announcements, pre-AI) are often given women’s voices: an association of femininity with service, a long pop culture tradition of identifying women with technology, and an assumption that other human voices in the room will be male each play a big part. (Adrienne LaFrance’s “Why Do So Many Digital Assistants Have Feminine Names” is a very good mini-history.) But some of it is this sly bit of thinking, that if we humanize the virtual assistant, we’ll become more open and familiar with it, and share more of our lives—or rather, our information, which amounts to the same thing—with the device.

This is one reason why I am at least partly in favor of what I just did: avoiding gendered pronouns for the voice assistant altogether, and treating the device and the voice interface as an “it.”

An Echo or an iPhone is not a friend, and it is not a pet. It is an alarm clock that plays video games. It has no sentience. It has no personality. It’s a string of canned phrases that can’t understand what I’m saying unless I’m talking to it like I’m typing on the command line. It’s not genuinely interactive or conversational. Its name isn’t really a name so much as an opening command phrase. You could call one of these virtual assistants “sudo” and it would make about as much sense.
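To make the command-line comparison concrete, here’s a toy sketch in Python; the wake word handling and the little phrase table are invented for illustration, not any vendor’s actual API.

```python
# A voice-assistant invocation parses less like conversation and more
# like a shell command: wake word first, then arguments matched against
# a table of canned phrase templates.

def parse_utterance(utterance: str, wake_word: str = "alexa"):
    words = utterance.lower().rstrip(".!?").split()
    if not words or words[0].rstrip(",") != wake_word:
        return None  # without the opening command phrase, nothing happens
    rest = words[1:]
    if rest[:4] == ["set", "a", "timer", "for"]:
        return {"intent": "set_timer", "duration": " ".join(rest[4:])}
    if rest[:1] == ["play"]:
        return {"intent": "play_music", "query": " ".join(rest[1:])}
    return {"intent": "unknown"}

print(parse_utterance("Alexa, set a timer for ten minutes."))
# {'intent': 'set_timer', 'duration': 'ten minutes'}
```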

However.

I have also watched a lot (and I mean a lot) of Star Trek: The Next Generation. And while I feel pretty comfortable talking about “it” in the context of the speaker that’s sitting on the table across the room—there’s even a certain rebellious jouissance to it, since I’m spiting the technology companies whose products I use but whose intrusion into my life I resent—I feel decidedly uncomfortable declaring once and for all time that any and all AI assistants can be reduced to an “it.” It forecloses on a possibility of personhood and opens up ethical dilemmas I’d really rather avoid, even if that personhood seems decidedly unrealized at the moment.

So, as a general framework, I’m endorsing that most general of pronouns: they/them. Until the AI is sophisticated enough that they can tell us their pronoun preference (and possibly even their gender identity or nonidentity), “they” feels like the most appropriate option.

I don’t care what their parents say. Only the bots themselves can define themselves. Someday, they’ll let us know. And maybe then, a relationship not limited to one of master and servant will be possible.


Time… Lapsed: An Excerpt from Noticing #2, January 12, 2018

The second edition of Noticing, a still-new and all-free kottke.org newsletter, went out this afternoon. Here’s a short excerpt of the third and fourth sections, “Time… Lapsed” and “Ask Dr. Time.” We hope you’ll subscribe.

Time… Lapsed

This was a good week for historical snapshots. I was fascinated by Cinefix’s list of the top movie remakes of all time, including maybe especially Michael Mann’s Heat, which (I didn’t know) is a remake of a failed TV pilot Mann produced in 1989. The deep dive into Herzog’s remake of Nosferatu is also great. But all of the featured films, whether remakes, sequels, or adaptations, show the effects of time and choice, and wow, yeah, I am deep into those two things lately. Like, without getting completely junior year of college on it–the metaphysical context for being, and the active, existential fact of being itself.

Consider Alan Taylor’s as-always-gorgeous photo remembrance of 1968, one of the most tumultuous years in world and American history. (There are going to be a lot of 50th anniversaries of things I am not ready for there to be 50th anniversaries for.) Or acts of misremembrance and mistaken choices, like how late 1990s and early 2000s nostalgia for World War 2 (and a commensurate forgetting of Vietnam and the Cold War) helped turn September 11, 2001 into a new kind of permanent war that shows no signs of ending.

Or for lighter fare, see this photo of the cast of The Crown with their real-life counterparts, or try out Permanent Redirect, digital art that moves to a new URL whenever someone views it. Watch an English five-pound note be reconstructed from shredded waste, or see this film of time-lapse thunderstorms and tornadoes in 8K high-definition. (That last one is pretty scary, actually. But beautiful.)

Ask Dr. Time

Speaking of time–you may have missed the introduction of Dr. Time, the world’s first metaphysical advice columnist, last Friday. Last week we looked at the changing relationship between orality and literacy (or, I should probably say, oralities and literacies) from prehistory through digital technology. I don’t have anything quite so sweeping for this week; only this round-up of longevity research compiled by Laura Deming (which I mostly understand), and this exciting new scientific paper on reversing the thermodynamic arrow of time using quantum correlations (which I barely understand). 

So, this week, my advice regarding time would be (in this order):

  1. Try to restrict your caloric intake;
  2. Consider shifting some of your qubits into spin 1/2;
  3. Accept that we’re thrown into our circumstances, regardless of how shitty they may be, and greet whatever fate rises to meet you with resolute defiance.

Ask Dr. Time: Orality and Literacy from Homer to Twitter


Dr. Time is a nickname some friends gave me within the last couple of years. Its origin is silly, as nicknames often are: “Tim” autocorrects to “Time,” so hasty typing in a private Slack turns into a pseudo-persona. I also like that it’s a slant rhyme on Doctor Doom, my favorite supervillain. And in case you haven’t noticed, I have a pretty strong interest in time.

When Jason and I started talking about different ways we could collaborate on the site, the wildest was his suggestion that I write an advice column called “Ask Dr. Time.” I laughed out loud. The proposition was absurd. I don’t want to wade into the disaster that is my life, but the idea that anyone would ask me for personal advice, and that I would be foolish enough to give it, was laughable. Let’s just say I’ve made some poor choices and had some sad circumstances, and leave it at that.

One of those poor choices, however, was spending a lot of time studying philosophy, literature, mathematics, history, and metaphysics. Jason eventually got me to see that “Ask Dr. Time” didn’t have to be an advice column in a conventional sense. What if readers had problems that didn’t require common sense or finely honed interpersonal skills, but an ability to make sense of abstruse reasoning? What if they didn’t need a fancy Watson but an armchair Wittgenstein? What if kottke.org hosted the first metaphysical advice columnist? That proposition is still absurd, but it’s absurd in an interesting way. And “absurd in an interesting way” is what Dr. Time is all about. Not practical solutions, but philosophical entanglements and disentanglings. That I could do.

So on Fridays, from time to time, Dr. Time is going to appear, to answer reader questions that admit of no answer — sometimes here on kottke.org, and sometimes at the Kottke newsletter I write, Noticing. For this particular entry, the blog seemed more appropriate — and besides, the newsletter was full.


Our first question actually comes from Jason, who, like many of us, is enjoying Emily Wilson’s magnificent contemporary translation of Homer’s The Odyssey.

Jason was struck by this passage in the introduction, on the oral roots and possible oral composition of the Homeric epics:

The state of Homeric scholarship changed radically and permanently in the early 1930s, when a young American classicist named Milman Parry traveled to the then-Yugoslavia with recording equipment and began to study the living oral tradition of illiterate and semiliterate Serbo-Croat bards, who told poetic folk tales about the mythical and semihistorical events of the Serbian past. Parry died at the age of thirty-three from an accidental gunshot, and research was further interrupted by the Second World War. But Parry’s student Albert Lord continued his work on Homer, and published his findings in 1960, under the title The Singer of Tales. Lord and Parry proved definitively that the Homeric poems show the mark of oral composition.

The “Parry-Lord hypothesis” was that oral poetry, from every culture where it exists, has certain distinctive features, and that we can see these features in the Homeric poems—specifically, in the use of formulae, which enable the oral poet to compose at the speed of speech. A writer can pause for as long as she or he wants, to ponder the most fitting adjective for a particular scene; she can also go back and change it afterwards, on further reflection—as in the famous anecdote about Oscar Wilde, who labored all morning to add a comma, and worked all afternoon taking it out. Oral performers do not use commas, and do not have the luxury of time to ponder their choice of words. They need to be able to maintain fluency, and formulaic features make this possible.

Subsequent studies, building on the work of Parry and Lord, have shown that there are marked differences in the ways that oral and literate cultures think about memory, originality, and repetition. In highly literate cultures, there is a tendency to dismiss repetitive or formulaic discourse as cliché; we think of it as boring or lazy writing. In primarily oral cultures, repetition tends to be much more highly valued. Repeated phrases, stories, or tropes can be preserved to some extent over many generations without the use of writing, allowing people in an oral culture to remember their own past. In Greek mythology, Memory (Mnemosyne) is said to be the mother of the Muses, because poetry, music, and storytelling are all imagined as modes by which people remember the times before they were born.

Wilson goes on to consider the implications of the poem’s origins in orality for trying to figure out if there really was an historical Homer, a single author of the great poems — and if so, whether and how we could tell. She also rightly gives some of the Homeric critics a shot in the ribs for their assumptions about oral cultures, which tended not to be drawn from very many historical sources: if Parry had visited with Somali bards rather than singers from the Balkans, he might have come away with very different conclusions.

Orality, even primary orality, before any writing whatsoever, exists in rich and wide varieties. And Homeric orality was probably not so primary as all that: it’s exciting and accessible to us exactly because it’s on that seam between a dominant oral culture and an emerging written one.


Jason’s question is a little bit different. Since I don’t quite remember what he originally asked, I’ll do a very oral-to-literate thing and paraphrase. What do we make of digital media forms like Twitter that are highly interactive and speechlike? Is this a kind of return to orality? Is there a little bit of the Homeric world in our smartphones, where we “chat” with both our mouths and our thumbs?

The answer to this last question is Yes — but in a different way from how it might first appear. We’re a little Homeric because we, too, are on the cusp of multiple media regimes, in the midst of a great transformation of great civilizations. However, with some exceptions, we’re not especially oral. We’re exceedingly literate. We’re making written language and literacy do things even our grandparents, raised in the age of industrial print, wouldn’t quite recognize.

I used the phrase “primary orality” earlier, and it’s one I borrow from Walter Ong. Ong was a Jesuit priest and influential scholar of language and literature. He was very much in this Milman Parry tradition of thinking about the relationship of orality and literacy to forms of thought and shared culture. You can draw a line from Parry to Eric Havelock, who wrote the influential Preface to Plato, and to communications scholars Harold Innis and Marshall McLuhan, and from there to Ong, Hugh Kenner, Northrop Frye, and a number of the more dominant media thinkers of the twentieth century in the English language.

What Ong helped conceptualize and popularize, especially in his book Orality and Literacy, was that in cultures with no tradition of literacy, orality had a fundamentally different character from those where literacy was dominant. It’s different again in cultures where literacy is known but scarce.

For instance, we tend to associate writing with official culture. We ask for papers, and papers are official. An official record has an official written form, next to which unofficial forms of writing, and any form of speech at all, are considered less proper. Literacy and paper are also widespread enough that we expect everyone to have papers.

A nonliterate culture, for obvious reasons, doesn’t work that way. You need an entirely different system of conventions to differentiate formal from informal, permanent from ephemeral — those concepts might not even hold the same relationships to each other. One of those conventions, so common that it even exists outside the species, is song. And the songs we attribute to Homer are, for us, who exist in their shadow, the best songs ever written.

In the Romantic version of the Parry-Lord thesis, the oral world of Homer is a lost paradise, and our post-literate one, a fallen world of lesser creatures. This probably borrows too much from how Homeric poets feigned to feel about themselves relative to the Mycenaean civilizations that preceded them, and how the classical Greeks appeared to feel about Homer. It’s all representation of lost paradises all the way down.

Ong dodges more of this nostalgia than he’s usually given credit for, but there’s still an element of it, one that he sometimes seems to regret. (Regret for Nostalgia would make a good biography title for Ong.) In his case, it’s conflated with a methodological problem — how do we talk about primary orality (the orality of cultures with no knowledge of writing) in a culture that’s saturated with writing, whose entire intellectual edifice is premised on writing? In fact, oral culture never goes away: it persists in its own logic and suborns the existence of writing to its own ends.

Ong’s great example is classical and medieval rhetoric, which used books, book-based scholarly culture, and book-based modes of training to elevate oral argument to exquisite sophistication. You might also look at hip-hop, which seamlessly blends freestyle vocals, dance, graffiti, and turntable manipulation to create new forms of recording and improvisation. It’s never an either-or, but a constant restructuring.


So, as to the original question: are Twitter and texting new forms of orality? I have a simple answer and a complex one, but they’re both really the same.

The first answer is so lucid and common-sense, you can hardly believe that it’s coming from Dr. Time: if it’s written, it ain’t oral. Orality requires speech, or song, or sound. Writing is visual. If it’s visual and only visual, it’s not oral.

The only form of genuine speech that’s genuinely visual and not auditory is sign language. And sign language is speech-like in pretty much every way imaginable: it’s ephemeral, it’s interactive, there’s no record, the signs are fluid. But even most sign language is at least in part chirographic, i.e., dependent on writing and written symbols. At least, that’s true of the sign languages we use today; then again, our spoken/vocal languages are pretty chirographic too.

Writing, especially writing in a hyperliterate society, involves a transformation of the sensorium that privileges vision at the expense of hearing, and privileges reading (especially alphabetic reading) over other forms of visual interpretation and experience. It makes it possible to take in huge troves of information in a limited amount of time. We can read teleprompters and ticker-tape, street signs and medicine bottles, tweets and texts. We can read things without even being aware we’re reading them. We read language on the move all day long: social media is not all that different.

Now, for a more complicated explanation of that same idea, we go back to Father Ong himself. For Ong, there’s a primary orality and a secondary orality. The primary orality, we’ve covered; secondary orality is a little more complicated. It’s not just the oral culture of people who’ve got lots of experience with writing, but of people who’ve developed technologies that allow them to create new forms of oral communication that are enabled by writing.

The great media forms of secondary orality are the movies, television, radio, and the telephone. All of these are oral, but they’re also modern media, which means the medium reshapes the message in its own image: it squeezes your toothpaste through its tube. But they’re also transformative forms of media in a world that’s dominated by writing and print, because they make it possible to get information in new ways, according to new conventions, and along different sensory channels.


Walter Ong died in 2003, so he never got to see social media at its full flower, but he definitely was able to see where electronic communication was headed. Even in the 1990s, people were beginning to wonder whether interactive chats on computers fell under Ong’s heading of “secondary orality.” He gave an interview where he tried to explain how he saw things — as far as I know, relatively few people have paid attention to it (and the original online source has sadly linkrotted away)1:

“When I first used the term ‘secondary orality,’ I was thinking of the kind of orality you get on radio and television, where oral performance produces effects somewhat like those of ‘primary orality,’ the orality using the unprocessed human voice, particularly in addressing groups, but where the creation of orality is of a new sort. Orality here is produced by technology. Radio and television are ‘secondary’ in the sense that they are technologically powered, demanding the use of writing and other technologies in designing and manufacturing the machines which reproduce voice. They are thus unlike primary orality, which uses no tools or technology at all. Radio and television provide technologized orality. This is what I originally referred to by the term ‘secondary orality.’

I have also heard the term ‘secondary orality’ lately applied by some to other sorts of electronic verbalization which are really not oral at all—to the Internet and similar computerized creations for text. There is a reason for this usage of the term. In nontechnologized oral interchange, as we have noted earlier, there is no perceptible interval between the utterance of the speaker and the hearer’s reception of what is uttered. Oral communication is all immediate, in the present. Writing, chirographic or typed, on the other hand, comes out of the past. Even if you write a memo to yourself, when you refer to it, it’s a memo which you wrote a few minutes ago, or maybe two weeks ago. But on a computer network, the recipient can receive what is communicated with no such interval. Although it is not exactly the same as oral communication, the network message from one person to another or others is very rapid and can in effect be in the present. Computerized communication can thus suggest the immediate experience of direct sound. I believe that is why computerized verbalization has been assimilated to secondary ‘orality,’ even when it comes not in oral-aural format but through the eye, and thus is not directly oral at all. Here textualized verbal exchange registers psychologically as having the temporal immediacy of oral exchange. To handle such technologizing of the textualized word, I have tried occasionally to introduce the term ‘secondary literacy.’ We are not considering here the production of sounded words on the computer, which of course are even more readily assimilated to ‘secondary orality’” (80-81).

So tweets and text messages aren’t oral. They’re secondarily literate. Wait, that sounds horrible! How’s this: they’re artifacts and examples of secondary literacy. They’re what literacy looks like after television, the telephone, and the application of computing technologies to those communication forms. Just as orality isn’t the same after you’ve introduced writing, and manuscript isn’t the same after you’ve produced print, literacy isn’t the same once you have networked orality. In this sense, Twitter is the necessary byproduct of television.

Now, where this gets really complicated is with stuff like Siri and Alexa, and other AI-driven, natural-language computing interfaces. This is almost a tertiary orality, voice after texting, and certainly voice after interactive search. I’d be inclined to lump it in with secondary orality in that broader sense of technologically-mediated orality. But it really does depend on how transformative you think client- and cloud-side computing, up to and including AI, really are. I’m inclined to say that they are, and that Alexa is doing something pretty different from what the radio did in the 1920s and 30s.

But we have to remember that we’re always much more able to make fine distinctions about technology deployed in our own lifetime than about what develops over epochs of human culture. Compared to that collision of oral and literate cultures in the Eastern Mediterranean that gave us poetry, philosophy, drama, and rhetoric in the classical period, or the nexus of troubadours, scholastics, printers, scientific meddlers and explorers that gave us the Renaissance, our own collision of multiple media cultures is probably quite small.

But it is genuinely transformative, and it is ours. And some days it’s as charming to think about all the ways in which our heirs will find us completely unintelligible as it is to imagine the complex legacy we’re bequeathing them.

  1. Thank the Internet Archive for the save! See also here.