kottke.org posts about Nick Bostrom

Will technology help humans conquer the universe or kill us all?Feb 27 2013

Ross Andersen, whose interview with Nick Bostrom I linked to last week, has a marvelous new essay in Aeon about Bostrom and some of his colleagues and their views on the potential extinction of humanity. This bit of the essay is the most harrowing thing I've read in months:

No rational human community would hand over the reins of its civilisation to an AI. Nor would many build a genie AI, an uber-engineer that could grant wishes by summoning new technologies out of the ether. But some day, someone might think it was safe to build a question-answering AI, a harmless computer cluster whose only tool was a small speaker or a text channel. Bostrom has a name for this theoretical technology, a name that pays tribute to a figure from antiquity, a priestess who once ventured deep into the mountain temple of Apollo, the god of light and rationality, to retrieve his great wisdom. Mythology tells us she delivered this wisdom to the seekers of ancient Greece, in bursts of cryptic poetry. They knew her as Pythia, but we know her as the Oracle of Delphi.

'Let's say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,' Dewey told me. 'And let's say the Oracle AI has some goal it wants to achieve. Say you've designed it as a reinforcement learner, and you've put a button on the side of it, and when it gets an engineering problem right, you press the button and that's its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses. It might start by behaving really well, trying to please us to the best of its ability. Not only would it answer our questions about how to build a flying car, it would add safety features we didn't think of. Maybe it would usher in a crazy upswing for human civilisation, by extending our lives and getting us to space, and all kinds of good stuff. And as a result we would use it a lot, and we would feed it more and more information about our world.'

'One day we might ask it how to cure a rare disease that we haven't beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it's actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it's going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage -- and then it would take that advantage and start doing what it wants to in the world.'

Read the whole thing, even if you have to watch goats yelling like people afterwards, just to cheer yourself back up.

Are we underestimating the risk of human extinction?Feb 22 2013

Nick Bostrom, a Swedish-born philosophy professor at Oxford, thinks that we're underestimating the risk of human extinction. The Atlantic's Ross Andersen interviewed Bostrom about his stance.

I think the biggest existential risks relate to certain future technological capabilities that we might develop, perhaps later this century. For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems. You could also have risks associated with certain advancements in synthetic biology.

Of course there are also existential risks that are not extinction risks. The concept of an existential risk certainly includes extinction, but it also includes risks that could permanently destroy our potential for desirable human development. One could imagine certain scenarios where there might be a permanent global totalitarian dystopia. Once again that's related to the possibility of the development of technologies that could make it a lot easier for oppressive regimes to weed out dissidents or to perform surveillance on their populations, so that you could have a permanently stable tyranny, rather than the ones we have seen throughout history, which have eventually been overthrown.

While reading this, I got to thinking that maybe the reason we haven't observed any evidence of sentient extraterrestrial life is that at some point in the technology development timeline just past the "pumping out signals into space" point (where humans are now), a discovery is made that results in the destruction of a species. Something like a nanotech virus that's too fast and lethal to stop. And the same thing happens every single time it's discovered because it's too easy to discover and too powerful to stop.

Do we live in a computer simulation?Dec 11 2012

In 2003, British philosopher Nick Bostrom suggested that we might live in a computer simulation. From the abstract of Bostrom's paper:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a "posthuman" stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

The gist appears to be that if The Matrix is possible, someone has probably already invented it and we're in it. Which, you know, whoa.

But researchers believe they have devised a test to check if we're living in a computer simulation.

However, Savage said, there are signatures of resource constraints in present-day simulations that are likely to exist as well in simulations in the distant future, including the imprint of an underlying lattice if one is used to model the space-time continuum.

The supercomputers performing lattice quantum chromodynamics calculations essentially divide space-time into a four-dimensional grid. That allows researchers to examine what is called the strong force, one of the four fundamental forces of nature and the one that binds subatomic particles called quarks and gluons together into neutrons and protons at the core of atoms.

"If you make the simulations big enough, something like our universe should emerge," Savage said. Then it would be a matter of looking for a "signature" in our universe that has an analog in the current small-scale simulations.

If it turns out we're all really living in an episode of St. Elsewhere, I'm going to be really bummed. (via @CharlesCMann)

Tags related to Nick Bostrom:
Ross Andersen

kottke.org

Front page
About + contact
Site archives

Subscribe

Follow kottke.org on Twitter

Follow kottke.org on Tumblr

Like kottke.org on Facebook

Subscribe to the RSS feed

Advertisement

Ads by The Deck

Support kottke.org shop at Amazon

And more at Amazon.com

Looking for work?

More at We Work Remotely

Kottke @ Quarterly

Subscribe to Quarterly and get a real-life mailing from Jason every three months.

 

Enginehosting

Hosting provided EngineHosting