
kottke.org posts about mathematics

The size of life: the differing scales of living things

posted by Jason Kottke   Aug 10, 2017

In the first in a series of videos, Kurzgesagt tackles one of my favorite scientific subjects: how the sizes of animals govern their behaviors, appearance, and abilities. For instance, because the volume (and therefore mass) of an organism increases with the cube of its length (e.g. if you double the length/height of a dog, its mass increases roughly 8 times), dropping differently sized animals from high up produces vastly different outcomes (a mouse lands safely, an elephant splatters everywhere).

The bit in the video about how insects can breathe underwater because of the interplay between the surface tension of water and their water-repellant outer layers is fascinating. The effect of scale also comes into play when considering the longevity of NBA big men, how fast animals move, how much animals’ hearts beat, the question of fighting 100 duck-sized horses or 1 horse-sized duck, and shrinking people down to conserve resources.

When humans get smaller, the world and its resources get bigger. We’d live in smaller houses, drive smaller cars that use less gas, eat less food, etc. It wouldn’t even take much to realize gains from a Honey, I Shrunk Humanity scheme: because of scaling laws, a height/weight proportional human maxing out at 3 feet tall would not use half the resources of a 6-foot human but would use somewhere between 1/4 and 1/8 of the resources, depending on whether the resource varied with volume or surface area. Six-inch-tall humans would potentially use 1728 times fewer resources.
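The arithmetic behind those fractions is just the square-cube law; a quick sketch (the exponents are the only real content here, the numbers are illustrative):

```python
# Toy check of the scaling-law arithmetic above: resource use that scales
# with volume goes as the cube of height; surface-area costs go as the
# square. Illustrative only, not physiology.

def relative_use(height_ratio, exponent):
    """Resource use relative to a full-size human (height_ratio = 1)."""
    return height_ratio ** exponent

# A 3-foot human vs. a 6-foot human (height ratio 1/2):
volume_cost = relative_use(0.5, 3)   # 1/8 for volume-scaled resources
surface_cost = relative_use(0.5, 2)  # 1/4 for surface-scaled resources

# A 6-inch human (height ratio 1/12), volume-scaled:
tiny_cost = relative_use(1 / 12, 3)  # 1/1728

print(volume_cost, surface_cost, tiny_cost)
```

The real answer lands between the two exponents because some costs (food) track volume while others (clothing, heat loss) track surface area.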

See also The Biology of B-Movie Monsters, which is perhaps the most-linked article in the history of kottke.org.

Systemic racism in America explained in just three minutes

posted by Jason Kottke   Jun 07, 2017

This short video shows several ways in which systemic racism is still very much alive and well in the United States in 2017. See also Race Forward’s video series featuring Jay Smooth.

“What Is Systemic Racism?” is an 8-part video series that shows how racism shows up in our lives across institutions and society: Wealth Gap, Employment, Housing Discrimination, Government Surveillance, Incarceration, Drug Arrests, Immigration Arrests, Infant Mortality… yes, systemic racism is really a thing.

The reason why this matters should be obvious. Just like extra effort can harness the power of compound interest in knowledge and productivity, even tiny losses that occur frequently can add up to a large deficit. If you are constantly getting dinged in even small ways just for being black, those losses add up and compound over time. Being charged more for a car and other purchases means less life savings. Less choice in housing results in higher prices for property in less desirable neighborhoods, which can impact choice of schools for your kids, etc. Fewer callbacks for employment means you’re less likely to get hired. Even if you do get the job, if you’re late for work even once every few months because you get stopped by the police, you’re a little more likely to get fired or receive a poor evaluation from your boss. Add up all those little losses over 30-40 years, and you get exponential losses in income and social status.

And these losses often aren’t small at all, to say nothing of drug offenses and prison issues; those are massive life-changing setbacks. The war on drugs and racially selective enforcement have hollowed out black America’s social and economic core. There’s a huge tax on being black in America and unless that changes, the “American Dream” will remain unavailable to many of its citizens.

Compound interest applied to learning

posted by Jason Kottke   Jun 06, 2017

How are some people more productive than others? Are they smarter or do they just work a little bit harder than everyone else? In 1986, mathematician and computer scientist Richard Hamming gave a talk at Bell Communications Research about how people can do great work, “Nobel-Prize type of work”. One of the traits he talked about was possessing great drive:

Now for the matter of drive. You observe that most great scientists have tremendous drive. I worked for ten years with John Tukey at Bell Labs. He had tremendous drive. One day about three or four years after I joined, I discovered that John Tukey was slightly younger than I was. John was a genius and I clearly was not. Well I went storming into Bode’s office and said, “How can anybody my age know as much as John Tukey does?” He leaned back in his chair, put his hands behind his head, grinned slightly, and said, “You would be surprised Hamming, how much you would know if you worked as hard as he did that many years.” I simply slunk out of the office!

What Bode was saying was this: “Knowledge and productivity are like compound interest.” Given two people of approximately the same ability and one person who works ten percent more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity — it is very much like compound interest. I don’t want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime.

Thinking of life in terms of compound interest could be very useful. Early and intensive investment in something you’re interested in cultivating — relationships, money, knowledge, spirituality, expertise, etc. — often yields exponentially better results than even marginally less effort.
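Bode's claim can be made concrete with a toy model: assume output compounds yearly at a rate proportional to effort. The rates below are invented purely for illustration; only the shape of the result matters.

```python
# Toy model of "knowledge and productivity are like compound interest":
# two workers of equal ability, one putting in 10% more effort. The
# base_rate is an arbitrary assumption, not a measured quantity.

base_rate = 0.50   # assumed yearly compounding rate for the baseline worker
years = 25         # roughly a career

ordinary = (1 + base_rate) ** years
driven = (1 + base_rate * 1.10) ** years  # 10% more effort, every year

ratio = driven / ordinary
print(round(ratio, 2))  # the 10%-harder worker more than doubles the output
```

Under these (made-up) numbers, a 10% edge in effort compounds into more than a 2x edge in lifetime output, which is exactly Hamming's point.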

See also this metaphor for how cultural, technological, and scientific changes happen. (via mr)

Climate change is shifting cherry blossom peak-bloom times

posted by Jason Kottke   Apr 10, 2017

Kyoto Cherry Blossom Chart

Records of when the cherry blossoms appear in Kyoto date back 1200 years. (Let’s boggle at this fact for a sec…) But as this chart of peak-bloom dates shows, since the most recent peak in 1829, the cherry blossoms have been arriving earlier and earlier in the year.

From its most recent peak in 1829, when full bloom could be expected to come on April 18th, the typical full-flowering date has drifted earlier and earlier. Since 1970, it has usually landed on April 7th. The cause is little mystery. In deciding when to show their shoots, cherry trees rely on temperatures in February and March. Yasuyuki Aono and Keiko Kazui, two Japanese scientists, have demonstrated that the full-blossom date for Kyoto’s cherry trees can predict March temperatures to within 0.1°C. A warmer planet makes for warmer Marches.

Temperature and carbon-related charts like this one are clear portraits of the Industrial Revolution, right up there with oil paintings of the time. I also enjoyed the correction at the bottom of the piece:

An earlier version of this chart depicted cherry blossoms with six petals rather than five. This has been amended. Forgive us this botanical sin.

Gotta remember that flower petals are very often numbered according to the Fibonacci sequence.

Abacus use can boost math skills (and other lessons on learning)

posted by Jason Kottke   Mar 07, 2017

Abacus

The abacus counting device dates back thousands of years but has, in the past century, been replaced by calculators and computers. But studies show that abacus use can have an effect on how well people learn math. In this excerpt adapted from his new book Learn Better, education researcher Ulrich Boser writes about the abacus and how people learn.

Researchers from Harvard to China have studied the device, showing that abacus students often learn more than students who use more modern approaches.

UC San Diego psychologist David Barner led one of the studies, and he argues that abacus training can significantly boost math skills with effects potentially lasting for decades.

“Based on everything we know about early math education and its long-term effects, I’ll make the prediction that children who thrive with abacus will have higher math scores later in life, perhaps even on the SAT,” Barner told me.

Ignore the hyperbolic “and it changed my life” in the title…this piece is interesting throughout. For example, this passage on the strength of the mind-body connection and the benefits of learning by doing:

When I first watched high school abacus whiz Serena Stevenson, her hand gestures seemed like a pretentious affectation, like people who wear polka-dot bow ties. But it turned out that her finger movements weren’t really all that dramatic, and on YouTube, I watched students with even more theatrical gesticulations. What’s more, the hand movements turned out to be at the heart of the practice, and without any arm or finger motions, accuracy can drop by more than half.

Part of the explanation for the power of the gestures goes to the mind-body connection. But just as important is the fact that abacus makes learning a matter of doing. It’s an active, engaging process. As one student told me, abacus is like “intellectual powerlifting.”

Psychologist Rich Mayer has written a lot about this idea, and in study after study he has shown that people gain expertise by actively producing what they know. As he told me: “Learning is a generative activity.”

I’d never heard of the concept of overlearning before:

Everybody from actors learning lines, to musicians learning new songs, to teachers trying to impart key facts to students has observed that learning has to “sink in” in the brain. Prior studies and also the new one, for example, show that when people learn a new task and then learn a similar one soon afterward, the second instance of learning often interferes with and undermines the mastery achieved on the first one.

The new study shows that overlearning prevents against such interference, cementing learning so well and quickly, in fact, that the opposite kind of interference happens instead. For a time, overlearning the first task prevents effective learning of the second task — as if learning becomes locked down for the sake of preserving mastery of the first task. The underlying mechanism, the researchers discovered, appears to be a temporary shift in the balance of two neurotransmitters that control neural flexibility, or “plasticity,” in the part of the brain where the learning occurred.

“These results suggest that just a short period of overlearning drastically changes a post-training plastic and unstable [learning state] to a hyperstabilized state that is resilient against, and even disrupts, new learning,” wrote the team led by corresponding author Takeo Watanabe, the Fred M. Seed Professor of Cognitive Linguistic and Psychological Sciences at Brown.

Play around with this trippy Julia set fractal

posted by Jason Kottke   Feb 17, 2017

Julia Set Fractal

Yay! It’s Fractal Friday! (It’s not, I just made that up.) But anyway, courtesy of Christopher Night, you can play around with this Julia set fractal. It works in a desktop browser (by moving the mouse) or on your phone (by dragging your finger).

The Julia set, if you don’t remember, goes thusly: Let f(z) be a complex rational function from the plane into itself, that is, f(z) = p(z)/q(z), where p(z) and q(z) are complex polynomials. Then there is a finite number of open sets F1, …, Fr, that are left invariant by f(z) which, uh, is um… yay! Fractal Friday! The colors are so pretty!
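If the formal definition isn't landing, the simplest polynomial case — f(z) = z² + c — is a few lines of code. The constant c = -1 and the iteration cap are arbitrary choices for illustration:

```python
# Minimal escape-time sketch of a Julia set for f(z) = z^2 + c. A point is
# (approximately) in the filled Julia set if iterating f from it never
# escapes past radius 2. c = -1 (the "basilica" set) is an arbitrary choice.

def in_julia(z, c=complex(-1, 0), max_iter=100):
    for _ in range(max_iter):
        if abs(z) > 2:
            return False
        z = z * z + c
    return True

# Render a coarse ASCII view on [-1.5, 1.5] x [-1.5, 1.5].
for i in range(20):
    y = 1.5 - 3.0 * i / 19
    print("".join(
        "#" if in_julia(complex(-1.5 + 3.0 * j / 39, y)) else "."
        for j in range(40)
    ))
```

Moving the mouse in Night's toy is essentially sweeping c around the plane and re-rendering this picture at much higher resolution.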

Mesmerizing strobe light sculptures

posted by Jason Kottke   Jan 12, 2017

If you spin these sculptures by artist John Edmark at a certain speed and light them with a strobe, they appear to animate in slow, trippy ways.

Blooms are 3-D printed sculptures designed to animate when spun under a strobe light. Unlike a 3D zoetrope, which animates a sequence of small changes to objects, a bloom animates as a single self-contained sculpture. The bloom’s animation effect is achieved by progressive rotations of the golden ratio, phi (ϕ), the same ratio that nature employs to generate the spiral patterns we see in pinecones and sunflowers. The rotational speed and strobe rate of the bloom are synchronized so that one flash occurs every time the bloom turns 137.5º (the angular version of phi).

The effect seems computer generated (but obviously isn’t) and is better than I anticipated. (via colossal)
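That 137.5º figure ("the angular version of phi") falls straight out of the golden ratio; a quick check:

```python
import math

# The golden angle: divide the full circle in the golden ratio. This is
# the per-frame rotation Edmark synchronizes the strobe to.

phi = (1 + math.sqrt(5)) / 2
golden_angle = 360 * (1 - 1 / phi)  # equivalently 360 / phi**2

print(round(golden_angle, 1))  # 137.5
```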

Update: While not as visually smooth as his sculptures, Edmark’s rotation of an artichoke under strobe lighting deftly demonstrates the geometric rules followed by plants when they grow.

Here we see an artichoke spinning while being videotaped at 24 frames-per-second with a very fast shutter speed (1/4000 sec). The rotation speed is chosen to cause the artichoke to rotate 137.5º — the golden angle — each time a frame is captured, thus creating the illusion that the leaves are moving up or down the surface of the artichoke. The reason this works is that the artichoke grows by producing new leaf one at a time, with each new leaf positioned 137.5º around the center from the previous leaves. So, in a sense, this video reiterates the artichoke’s growth process.

(via @waxpancake)

Update: This similar sculpture by Takeshi Murata is quite impressive as well.

(via @kevmaguire)

The Monty Hall Problem, explained

posted by Jason Kottke   Sep 20, 2016

The Monty Hall Problem is one of those things that demonstrates just how powerful a pull common sense has on the human reasoning process. The problem itself is easily stated: there are three doors and behind one of them there is a prize and behind the other two, nothing. You choose a door in hopes of finding the prize and then one of the other two doors is opened to reveal nothing. You are offered the opportunity to switch your guess to the other door. Do you take it?

Common sense tells you that switching wouldn’t make any difference. There are two remaining doors, the prize is randomly behind one of them, why would switching yield any benefit? But as the video explains and this simulation shows, counterintuition prevails: you should switch every time.

America was introduced to the difficulty of the problem by Marilyn vos Savant in her column for Parade magazine in 1990.1 In a follow-up explanation of the question, vos Savant offered a quite simple “proof” of the always-switch strategy (from Wikipedia). Assuming you pick door #1, here are the possible outcomes:

Door 1 | Door 2 | Door 3 | Result if you stay | Result if you switch
-------|--------|--------|--------------------|---------------------
Car    | Goat   | Goat   | Wins car           | Wins goat
Goat   | Car    | Goat   | Wins goat          | Wins car
Goat   | Goat   | Car    | Wins goat          | Wins car

As you can see, staying yields success 33% of the time, while switching wins two times out of three (67%), a result verified by a properly written simulator. In his Straight Dope column, Cecil Adams (after initially getting it wrong) explained it like so:

A friend of mine did suggest another way of thinking about the problem that may help clarify things. Suppose we have the three doors again, one concealing the prize. You pick door #1. Now you’re offered this choice: open door #1, or open door #2 and door #3. In the latter case you keep the prize if it’s behind either door. You’d rather have a two-in-three shot at the prize than one-in-three, wouldn’t you? If you think about it, the original problem offers you basically the same choice. Monty is saying in effect: you can keep your one door or you can have the other two doors, one of which (a non-prize door) I’ll open for you.
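vos Savant's table is also easy to check empirically. Here's a minimal simulation sketch (not the simulator linked above), with Monty always opening a non-prize door the player didn't pick:

```python
import random

# Simulate Monty Hall: stick vs. switch over many trials.

def play(switch, rng):
    prize = rng.randrange(3)
    choice = rng.randrange(3)
    # Monty opens a door that is neither the player's choice nor the prize.
    opened = next(d for d in range(3) if d != choice and d != prize)
    if switch:
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == prize

rng = random.Random(0)
trials = 100_000
stay_wins = sum(play(False, rng) for _ in range(trials))
switch_wins = sum(play(True, rng) for _ in range(trials))
print(stay_wins / trials, switch_wins / trials)  # ~0.33 vs. ~0.67
```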

See also the case of the plane and the conveyor belt (sorry not sorry, I couldn’t resist).

  1. I was a religious reader of Parade and remember this column and the resulting furor very clearly. Re: the furor, it’s so interesting to note in hindsight (being 16 and clueless at the time) how much of the response was men who clearly were trying to put the smackdown on a prominent, intelligent woman and they just got totally owned.

The infamous Collatz Conjecture

posted by Jason Kottke   Aug 09, 2016

For a recent episode of Numberphile, David Eisenbud explains the Collatz Conjecture, a math problem that is very easy to understand but has an entire book devoted to it and led famous mathematician Paul Erdős to say “this is a problem for which mathematics is perhaps not ready”.

The problem is easily stated: start with any positive integer and if it is even, divide it by 2 and if odd multiply it by 3 and add 1. Repeat the process indefinitely. Where do the numbers end up? Infinity? 1? Loneliness? Somewhere in-between? My favorite moment of the video:

16. Whoa, a very even number.

I love math and I love this video. (via df)
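The rule stated above fits in a few lines, and whether the loop below always terminates is exactly the open conjecture:

```python
# The Collatz rule: halve if even, triple-and-add-one if odd, until 1.

def collatz(n):
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```

Small starting values can wander surprisingly far: 27 takes 111 steps and climbs to 9232 before collapsing to 1.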

The fractal and geometric beauty of plants

posted by Jason Kottke   Jul 12, 2016

Plant Geometry

Plant Geometry

Plant Geometry

When you look at some plants, you can just see the mathematics behind how the leaves, petals, and veins are organized.

From Winnie Cooper to math whiz

posted by Jason Kottke   Jun 03, 2016

As a child, Danica McKellar played Winnie Cooper on The Wonder Years. After the show was over, McKellar had difficulty breaking away from other people’s perceptions of her. But in college, she discovered an aptitude for mathematics, went on to have a theorem named after her — not because she was famous but because she’d helped prove it — and forged a new identity. (via @stevenstrogatz)

P vs. NP and the Computational Complexity Zoo

posted by Jason Kottke   May 20, 2016

When Grade-A nerds get together and talk about programming and math, a popular topic is P vs NP complexity. There’s a lot to P vs NP, but boiled down to its essence, according to the video:

Does being able to quickly recognize correct answers [to problems] mean there’s also a quick way to find [correct answers]?

Most people suspect that the answer to that question is “no”, but it remains famously unproven.

In fact, one of the outstanding problems in computer science is determining whether questions exist whose answer can be quickly checked, but which require an impossibly long time to solve by any direct procedure. Problems like the one listed above certainly seem to be of this kind, but so far no one has managed to prove that any of them really are so hard as they appear, i.e., that there really is no feasible way to generate an answer with the help of a computer.
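The asymmetry in that quote can be seen in miniature with subset sum (a standard NP problem, used here as an illustration): checking a proposed answer is a one-liner, while finding one naively means trying exponentially many subsets.

```python
from itertools import combinations

# Subset sum: verification is fast, naive search is exponential.

def verify(numbers, subset, target):
    """Fast: is this proposed subset actually a solution?"""
    return set(subset) <= set(numbers) and sum(subset) == target

def solve(numbers, target):
    """Slow: brute-force search over all 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = solve(nums, 9)
print(answer, verify(nums, answer, 9))
```

P vs. NP asks, roughly, whether every problem with a fast `verify` also admits a fast `solve`.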

The rarified air of the NBA 7-footer

posted by Jason Kottke   May 20, 2016

Muggsey Yao

According to this 2011 article on Mark Eaton and other 7-footers who have played in the NBA, there are fewer than 70 American men between 20 and 40 who are 7 feet tall…and more than 1 in 6 of them play in the NBA.

The curve shaped by the CDC’s available statistics, however, does allow one to estimate the number of American men between the ages of 20 and 40 who are 7 feet or taller: fewer than 70 in all. Which indicates, by further extrapolation, that while the probability of, say, an American between 6’6” and 6’8” being an NBA player today stands at a mere 0.07%, it’s a staggering 17% for someone 7 feet or taller.

Being seven feet tall is absurdly tall and comes with a whole host of challenges, from bumping one’s head on door frames to difficulty finding clothes to health issues. Some of these difficulties arise out of simple geometry: as height and width increase, volume increases more quickly.1

  1. See also one of my favorite links ever, The Biology of B-Movie Monsters.

Some prime numbers are illegal in the United States

posted by Jason Kottke   May 06, 2016

The possession of certain prime numbers is illegal in the US. For instance, one of these primes can be used to break a DVD’s copyright encryption.

How many digits of pi does NASA use?

posted by Jason Kottke   Mar 18, 2016

Mathematicians have calculated pi out to more than 13 trillion decimal places, a calculation that took 208 days. NASA’s Marc Rayman explains that in order to send out probes and slingshot them accurately throughout the solar system, NASA needs to use only 15 decimal places, or 3.141592653589793. How precise are calculations with that number? This precise:

The most distant spacecraft from Earth is Voyager 1. It is about 12.5 billion miles away. Let’s say we have a circle with a radius of exactly that size (or 25 billion miles in diameter) and we want to calculate the circumference, which is pi times the radius times 2. Using pi rounded to the 15th decimal, as I gave above, that comes out to a little more than 78 billion miles. We don’t need to be concerned here with exactly what the value is (you can multiply it out if you like) but rather what the error in the value is by not using more digits of pi. In other words, by cutting pi off at the 15th decimal point, we would calculate a circumference for that circle that is very slightly off. It turns out that our calculated circumference of the 25 billion mile diameter circle would be wrong by 1.5 inches. Think about that. We have a circle more than 78 billion miles around, and our calculation of that distance would be off by perhaps less than the length of your little finger.
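Rayman's arithmetic is easy to redo with exact decimal math. Note this sketch truncates pi at the 15th decimal rather than matching whatever rounding convention JPL used, so the figure comes out a bit different from the quoted 1.5 inches; the point — a sub-finger-length error on a 78-billion-mile circumference — survives either way.

```python
from decimal import Decimal, getcontext

# How far off is the circumference of a 25-billion-mile-diameter circle
# if pi is cut at the 15th decimal? Pi to 30 places is hard-coded below.

getcontext().prec = 50
PI_30 = Decimal("3.141592653589793238462643383279")
PI_NASA = Decimal("3.141592653589793")

diameter_miles = Decimal(25_000_000_000)
error_miles = diameter_miles * (PI_30 - PI_NASA)
error_inches = error_miles * 63360  # inches per mile

print(error_inches)  # a fraction of an inch
```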

When was humanity’s calculation of pi accurate enough for NASA? In 1424, Persian astronomer and mathematician Jamshid al-Kashi calculated pi to 17 digits.

Space filling curves

posted by Jason Kottke   Feb 19, 2016

From 3Blue1Brown, a quick video showing some space-filling curves.

17 equations that changed the world

posted by Jason Kottke   Jan 28, 2016

17 Equations

In the book In Pursuit of the Unknown, Ian Stewart discusses how equations from the likes of Pythagoras, Euler, Newton, Fourier, Maxwell, and Einstein have been used to build the modern world.

I love how as time progresses, the equations get more complicated and difficult for the layperson to read (much less understand) and then Boltzmann and Einstein are like, boom!, entropy is increasing and energy is proportional to mass, suckas!

The beauty of mathematics

posted by Jason Kottke   Dec 18, 2015

The “hidden” mathematics and order behind everyday objects & phenomena like spinning tops, dice, magnifying glasses, and airplanes. (via @stevenstrogatz)

Einstein’s first proof

posted by Jason Kottke   Nov 19, 2015

Steven Strogatz walks us through the first mathematical proof Albert Einstein did when he was a boy: a proof of the Pythagorean theorem.

Einstein, unfortunately, left no such record of his childhood proof. In his Saturday Review essay, he described it in general terms, mentioning only that it relied on “the similarity of triangles.” The consensus among Einstein’s biographers is that he probably discovered, on his own, a standard textbook proof in which similar triangles (meaning triangles that are like photographic reductions or enlargements of one another) do indeed play a starring role. Walter Isaacson, Jeremy Bernstein, and Banesh Hoffman all come to this deflating conclusion, and each of them describes the steps that Einstein would have followed as he unwittingly reinvented a well-known proof.

Twenty-four years ago, however, an alternative contender for the lost proof emerged. In his book “Fractals, Chaos, Power Laws,” the physicist Manfred Schroeder presented a breathtakingly simple proof of the Pythagorean theorem whose provenance he traced to Einstein.

Of course, that breathtaking simplicity later became a hallmark of Einstein’s work in physics. See also this brilliant visualization of the Pythagorean theorem.

P.S. I love that two of the top three most popular articles on the New Yorker’s web site right now are about Albert Einstein.

Brilliant visualization of the Pythagorean Theorem

posted by Jason Kottke   Oct 26, 2015

Everyone knows that the square of the hypotenuse of a right triangle is equal to the sum of the squares of the other two sides. What this video presupposes is, fuck yeah math!

PhotoMath iOS app can do your homework for you

posted by Jason Kottke   Oct 15, 2015

PhotoMath

Some iOS apps still seem like magic. Case in point: PhotoMath. Here’s how it works. You point your camera at a math problem and PhotoMath shows the answer. It’ll even give you a step-by-step explanation and solution.

The mathematical secrets of Pascal’s triangle

posted by Jason Kottke   Sep 16, 2015

Pascal’s triangle1 is a simple arrangement of numbers in a triangle…rows are formed by the successive addition of numbers in previous rows. But out of those simple rows comes deep and useful mathematical relationships related to probability, fractals, squares, and binomial expansions. (via digg)

  1. As the video says, Pascal was nowhere near the discoverer of this particular mathematical tool. By the time he came along in 1653, the triangle had already been described in India (possibly as early as the 2nd century B.C.) and later in Persia and China.
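The construction described above — each row from pairwise sums of the previous row, with 1s on the ends — is a natural few-liner:

```python
# Build the first n rows of Pascal's triangle by successive addition.

def pascal_rows(n):
    rows = [[1]]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([1] + [a + b for a, b in zip(prev, prev[1:])] + [1])
    return rows

for row in pascal_rows(6):
    print(row)
# Row k sums to 2^k, and its entries are the binomial coefficients C(k, i).
```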

Cool furniture alert: the Fibonacci Shelf

posted by Jason Kottke   Jul 29, 2015

The Fibonacci Shelf by designer Peng Wang might not be the most functional piece of furniture, but I still want one.

Fibonacci Shelf

Fibonacci Shelf

The design of the shelf is based on the Fibonacci sequence of numbers (0, 1, 1, 2, 3, 5, 8, 13, 21, …), which is related to the Golden Rectangle. When assembled, the Fibonacci Shelf resembles a series of Golden Rectangles partitioned into squares. (via ignant)

Fibonacci sequence hidden in ordinary division problem

posted by Jason Kottke   Jul 06, 2015

If you divide 1 by 999,999,999,999,999,999,999,998,999,999,999,999,999,999,999,999 (that’s 999 quattuordecillion btw), the Fibonacci sequence neatly pops out. MATH FTW!

Fibonacci division

At the end of Carl Sagan’s Contact (spoilers!), the aliens give Ellie a hint about something hidden deep in the digits of π. After a long search, a circle made from a sequence of 1s and 0s is found, providing evidence that intelligence was built into the fabric of the Universe. I don’t know if this Fibonacci division thing is on quite the same level, but it might bake your noodle if you think about it too hard. (via @stevenstrogatz)

Update: From svat at Hacker News, an explanation of the magic behind the math.

It’s actually easier to understand if you work backwards and arrive at the expression yourself, by asking yourself: “If I wanted the number that starts like 0.0…000 0…001 0…001 0…002 0…003 0…005 0…008 … (with each block being 24 digits long), how would I express that number?”

(thx, taylor)
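You can reproduce svat's observation with exact integer arithmetic: the divisor is 10⁴⁸ − 10²⁴ − 1, and the 24-digit blocks of the decimal expansion of its reciprocal are successive Fibonacci numbers.

```python
# The 48-digit divisor from the post is 10**48 - 10**24 - 1.
n = 10**48 - 10**24 - 1
assert n == int("999999999999999999999998" + "9" * 24)

# First 11 blocks of 1/n: scale up by 10**(24*12), integer-divide, and
# slice the result into 24-digit blocks.
digits = str(10**(24 * 12) // n).zfill(24 * 11)
blocks = [int(digits[i:i + 24]) for i in range(0, len(digits), 24)]
print(blocks)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```

The trick works because the Fibonacci generating function is x/(1 − x − x²); plugging in x = 10⁻²⁴ gives exactly 10²⁴/n. (The base-10 classic is 1/89, the two-digit version of the same divisor.)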

Web Mandelbrot

posted by Jason Kottke   May 04, 2015

Mandelbrot

This web app allows you to explore the Mandelbrot set interactively…just click and zoom. I had an application like this on my computer in college, but it only went a few zooms deep before crashing. There was nothing quite like zooming in a bunch of times on something that looked like a satellite photo of a river delta and seeing something that looked exactly like the view you started with. (via @stevenstrogatz)
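The core of any such app is an escape-time test; a minimal sketch (the iteration cap is an arbitrary choice):

```python
# A point c is in the Mandelbrot set if iterating z -> z^2 + c from zero
# stays bounded (equivalently, |z| never exceeds 2).

def mandelbrot(c, max_iter=200):
    """Return the escape iteration, or max_iter if c appears to be in the set."""
    z = 0j
    for i in range(max_iter):
        if abs(z) > 2:
            return i
        z = z * z + c
    return max_iter

print(mandelbrot(0j), mandelbrot(complex(-1, 0)), mandelbrot(complex(1, 1)))
```

Coloring each pixel by its escape iteration produces the familiar psychedelic bands; zooming just means re-running this over a smaller window of the plane.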

Finding Zero

posted by Jason Kottke   Apr 23, 2015

Finding Zero

The latest book from Amir Aczel, who has written previously about the compass, the Large Hadron Collider, and Fermat’s Last Theorem, is Finding Zero: A Mathematician’s Odyssey to Uncover the Origins of Numbers…in particular, the number zero.

Finding Zero is an adventure filled saga of Amir Aczel’s lifelong obsession: to find the original sources of our numerals. Aczel has doggedly crisscrossed the ancient world, scouring dusty, moldy texts, cross examining so-called scholars who offered wildly differing sets of facts, and ultimately penetrating deep into a Cambodian jungle to find a definitive proof.

The NY Times has a review of the book, written by another Amir, Amir Alexander, who wrote a recent book on infinitesimals, aka very nearly zero. (via @pomeranian99)

The beauty of pi puts infinity within reach

posted by Jason Kottke   Mar 13, 2015

I’m dreading it. No hope of solving any equations that day, what with the pie-eating contests, the bickering over the merits of pi versus tau (pi times two), and the throwdowns over who can recite more digits of pi. Just stay off the streets at 9:26:53, when the time will approximate pi to ten places: 3.141592653.

The New Yorker’s Steven Strogatz on why pi matters.

Dancing mathematics

posted by Jason Kottke   Mar 02, 2015

Dancing Math

Mathematical functions depicted as stick figure dance moves. (via @mulegirl)

The Infinite Hotel Paradox

posted by Jason Kottke   Feb 19, 2015

In a lecture given in 1924, German mathematician David Hilbert introduced the idea of the paradox of the Grand Hotel, which might help you wrap your head around the concept of infinity. (Spoiler alert: it probably won’t help…that’s the paradox.) In his book One Two Three… Infinity, George Gamow describes Hilbert’s paradox:

Let us imagine a hotel with a finite number of rooms, and assume that all the rooms are occupied. A new guest arrives and asks for a room. “Sorry,” says the proprietor, “but all the rooms are occupied.” Now let us imagine a hotel with an infinite number of rooms, and all the rooms are occupied. To this hotel, too, comes a new guest and asks for a room.

“But of course!” exclaims the proprietor, and he moves the person previously occupying room N1 into room N2, the person from room N2 into room N3, the person from room N3 into room N4, and so on…. And the new customer receives room N1, which became free as the result of these transpositions.

Let us imagine now a hotel with an infinite number of rooms, all taken up, and an infinite number of new guests who come in and ask for rooms.

“Certainly, gentlemen,” says the proprietor, “just wait a minute.”

He moves the occupant of N1 into N2, the occupant of N2 into N4, and occupant of N3 into N6, and so on, and so on…

Now all odd-numbered rooms became free and the infinity of new guests can easily be accommodated in them.

This TED video created by Jeff Dekofsky explains that there are similar strategies for finding space in such a hotel for infinite numbers of infinite groups of people and even infinite amounts of infinite numbers of infinite groups of people (and so on, and so on…) and is very much worth watching:

(via brain pickings)

A regular expression for finding prime numbers

posted by Jason Kottke   Feb 12, 2015

Given that there’s so much mathematicians don’t know about prime numbers, you might be surprised to learn that there’s a very simple regular expression for detecting prime numbers:

/^1?$|^(11+?)\1+$/

If you’ve got access to Perl on the command line, try it out with some of these (just replace [number] with any integer):

perl -wle 'print "Prime" if (1 x shift) !~ /^1?$|^(11+?)\1+$/' [number]

An explanation is here which I admit I did not quite follow. A commenter at Hacker News adds a bit more context:

However while cute, it is very slow. It tries every possible factorization as a pattern match. When it succeeds, on a string of length n that means that n times it tries to match a string of length n against a specific pattern. This is O(n^2). Try it on primes like 35509, 195341, 526049 and 1030793 and you can observe the slowdown.
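If Perl isn't handy, the same unary-string trick works in Python's `re` module. The idea: n ones match `(11+?)\1+` exactly when they can be split into two or more equal blocks of at least two ones — i.e. when n is composite.

```python
import re

# Regex primality test on unary strings. Note the single backslash in the
# backreference \1 (inside a raw string).

def is_prime(n):
    return re.fullmatch(r"1?|(11+?)\1+", "1" * n) is None

print([n for n in range(2, 30) if is_prime(n)])
```

As the Hacker News commenter notes, it's cute but quadratic at best, since the engine tries every candidate block length by backtracking.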