Terrapattern is a search engine for satellite images. You click on a specific feature of interest on a map and the site returns results that match it. For instance, here are the locations of solar panels in NYC.
There are only four cities currently represented (Pittsburgh, New York, San Francisco, and Detroit) but this is already super cool to play around with. (via @genmon)
This map was compiled using the autocomplete results for “how much does a * cost” for every country in the world.
Some notable desires: Mexican tummy tucks, Brazilian prostitutes, Albanian nose jobs, Russian MiGs, Lebanese PS3s, and Japanese watermelons.
See also the desire map of the US.
In Alaska, people search for the cost of a gallon of milk. In Alabama and Florida, people search for the cost of abortions. In other states, vasectomies, facelifts, and taxis are popular searches. The map was compiled using the autocomplete results for “how much does a * cost”… for each of the 50 states. (via mr)
This is pretty neat…all the objects are zoomable and rotatable in the browser. (via prosthetic knowledge)
Lantern is a search engine for the books, periodicals, and catalogs contained in the Media History Digital Library. If you are a fan or student of pre-1970s American film and broadcasting, this looks like a goldmine. Here are some of the periodical titles and the years available:
Movie Classic 1931-1937
Home Movies and Home Talkies 1932-1934
Talking Machine World 1921-1928
(via candler blog)
Whenever I start to feel sick, I hit the Internet and start searching for more information about my symptoms. When a doctor writes me a prescription and I start feeling something unexpected, I search the web for side effects. And I’m not the only one whose first instinct is to turn my head and search. So many of us have adopted this behavior that researchers are gathering valuable information by studying our search queries and “have for the first time been able to detect evidence of unreported prescription drug side effects before they were found by the Food and Drug Administration’s warning system.”
Facebook’s new Graph Search can be used to find some very unusual, disturbing, and potentially dangerous things. Like “Married people who like Prostitutes”, “Family members of people who live in China and like Falun Gong”, and “Islamic men interested in men who live in Tehran, Iran”.
Earlier today I shared a quick way to read a links-only version of your Twitter stream using Twitter’s new “people you follow” search filter. More than three years ago, Twitter removed @replies to people you don’t follow from people’s streams… e.g. if I follow Jack Dorsey on Twitter and you don’t, you won’t see my “@jack That’s great, congrats!” tweet in your stream. With the “people you follow” search filter, you now have the option of seeing all those @replies again: just do a search for some gibberish with the not operator in front of it. (But obviously not that gibberish because then you’ll miss tweets with that link in it. Get yer own gibberish!)
Two things that I wish worked but don't: -@ and -# for searches that exclude @replies and #hashtags.
Update: Andy Baio reminds me that you can filter out @replies and #hashtags with “-filter:replies” and “-filter:hashtags”. Which makes things a bit more interesting. Using the “people you follow” filter in combination with other filters, you can see your Twitter stream in all sorts of different ways:
- Only links
- Only links excluding Foursquare, Instagram, or whatever…
- Without links
- Without links and @replies (which is kind of an amazingly old school way to read Twitter)
You can also use it to read your stream with certain terms excluded…say if you didn’t want to read anything about the Presidential candidates, SXSW, Rupert Murdoch, the Yankees, or Gawker. I know other tools let you filter tweets in your stream in different ways, but this is the first time Twitter allows people to do it on their site, even if it is through the back door.
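Putting the pieces together, here are a few example searches that combine the operators mentioned above (with the nonsense term "qzxjv" standing in for your own gibberish; the excluded terms are just illustrations):

```
-qzxjv filter:links                     only links
-qzxjv filter:links -instagram          only links, minus anything mentioning Instagram
-qzxjv -filter:links                    no links
-qzxjv -filter:links -filter:replies    no links, no @replies (old school Twitter)
-qzxjv -sxsw -yankees                   mute topics you're sick of
```

Remember to have the "people you follow" option selected for each of these, or you'll be searching all of Twitter.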
A few weeks ago, Twitter added an option to search the tweets of only the people you follow. This is useful for several different reasons (try searching for [recent pop culture key phrase] to see what I mean) but for those who use Twitter primarily to find cool links to read/watch, it’s an unexpected gift. To view your Twitter stream filtered to include only tweets containing links, just do a search for “http”. Simple but powerful.
ps. Who knows if they're interested in this or not, but by a) making their entire archive available to search and b) allowing people to limit their search to their friends + 1-2 degrees of separation, Twitter could significantly improve on the search experience offered by Google et al in maybe 25-30% of all search cases. This is what Google is attempting to do with Google+ but Twitter could beat them to the punch.
Update: The search above, while quick, is also dirty in that it will include non-link tweets like "My favorite protocol is HTTP". The official Twitter way is to use "filter:links", which will avoid that problem.
This is mesmerizing: using Google Image Search and starting with a transparent image, this video cycles through each subsequent related image, over 2900 in all.
If you search Wolfram Alpha for “planes overhead”, it returns a list of planes passing over your current location along with a sky map of where to look.
The most fun on the internet right now: go to Google and search for “do a barrel roll” (no quotes). Whee!
Google is thirteen today…back in 1998 when the site was still hosted at http://google.stanford.edu, Keith Dawson gave the search engine its first online coverage in English on the fondly remembered Tasty Bits From the Technology Front.
This site, one of the few rigorous academic research projects on Web searching, presents a demonstration database — only 25M documents — that already blows past most of the existing search engines in returning relevant nuggets. Google employs a concept of Page Rank derived from academic citation literature. Page Rank equates roughly to a page’s importance on the Web: the more inbound links a page has, and the higher the importance of the pages linking to it, the higher its Page Rank.
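That description maps directly onto the standard power-iteration formulation of PageRank. Here's a minimal sketch in Python (the three-page link graph is made up for illustration, and real PageRank handles complications like dangling pages that this ignores):

```python
# Minimal PageRank by power iteration: a page's score grows with the
# number of inbound links and with the scores of the linking pages.
links = {
    "a": ["b", "c"],  # page "a" links out to "b" and "c"
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform scores
    for _ in range(iterations):
        # every page keeps a small baseline score...
        new = {p: (1 - damping) / n for p in pages}
        # ...and passes the rest of its score to the pages it links to
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
# "c" has two inbound links (from "a" and "b"), so it ranks highest.
```

The scores always sum to 1, so they behave like the probability that a random surfer ends up on each page.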
Austin Kleon explicitly tied the last two posts together and fed Kurt Vonnegut’s story shape graphs into Google Correlate’s search by drawing feature. This is SO GOOD.
This is kind of amazing…you draw a graph and Google Correlate finds query terms whose popularity matches the drawn curve. I drew a bell curve, a very rough one peaking in 2007, and it matches a bunch of searches for “myspace”.
This fits beautifully with the previous post about Vonnegut’s story shape graphs.
Dorothy Gambrell looked up all of the state names on Google and made a map of what the autocomplete suggestions were. Here’s part of it:
Lots of sports and schools.
The merchant, Vitaly Borker, 34, who operates a Web site called decormyeyes.com, was charged with one count each of mail fraud, wire fraud, making interstate threats and cyberstalking. The mail fraud and wire fraud charges each carry a maximum sentence of 20 years in prison. The stalking and interstate threats charges carry a maximum sentence of five years.
He was arrested early Monday by agents of the United States Postal Inspection Service. In an arraignment in the late afternoon in United States District Court in Lower Manhattan, Judge Michael H. Dolinger denied Mr. Borker’s request for bail, stating that the defendant was either “verging on psychotic” or had “an explosive personality.” Mr. Borker will be detained until a preliminary hearing, scheduled for Dec. 20.
Have you noticed that when you search Google for the answer to a mathematical calculation, the only result it lists is Google’s own? I mean, just look at this obvious result tampering:
This “hard-coding” of calculation answers as the top search result goes against the company’s supposed policy promising completely algorithmic and unbiased results. How are other mathematical calculation sites supposed to compete against the Mountain View search and math giant? What if 45 times 12 isn’t actually 540? (I checked the calculation on Wolfram Alpha several times and on my iPhone calculator and 540 appears to be correct. For now.)
And this isn’t even Google’s most egregious transgression. As Eric Meyer points out, Google is blocking private correspondence between private parties. That means that grandmothers aren’t getting necessary information about erectile dysfunction, people aren’t finding out where they can play Texas Hold ‘Em online, and the queries of Nigerian foreign ministers are going unanswered. There are millions of dollars sitting in a bank somewhere and all they need is a loan to get it out! Google! This. Is. Un. Acce. Ptable!
P.S. I think this “research” is obvious and the conclusions are misleading and biased. But then I don’t have a Ph.D. from Harvard, so what do I know?
Steven Levy on how Google’s search algorithm has changed over the years.
Take, for instance, the way Google’s engine learns which words are synonyms. “We discovered a nifty thing very early on,” Singhal says. “People change words in their queries. So someone would say, ‘pictures of dogs,’ and then they’d say, ‘pictures of puppies.’ So that told us that maybe ‘dogs’ and ‘puppies’ were interchangeable. We also learned that when you boil water, it’s hot water. We were relearning semantics from humans, and that was a great advance.”
But there were obstacles. Google’s synonym system understood that a dog was similar to a puppy and that boiling water was hot. But it also concluded that a hot dog was the same as a boiling puppy. The problem was fixed in late 2002 by a breakthrough based on philosopher Ludwig Wittgenstein’s theories about how words are defined by context. As Google crawled and archived billions of documents and Web pages, it analyzed what words were close to each other. “Hot dog” would be found in searches that also contained “bread” and “mustard” and “baseball games” — not poached pooches. That helped the algorithm understand what “hot dog” — and millions of other terms — meant. “Today, if you type ‘Gandhi bio,’ we know that bio means biography,” Singhal says. “And if you type ‘bio warfare,’ it means biological.”
Or in simpler terms, here’s a snippet of a conversation that Google might have with itself:
A rock is a rock. It’s also a stone, and it could be a boulder. Spell it “rokc” and it’s still a rock. But put “little” in front of it and it’s the capital of Arkansas. Which is not an ark. Unless Noah is around.
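The context trick Singhal describes is what's now called distributional semantics: words that show up with the same neighbors get similar "meaning" profiles. Here's a toy sketch of the idea (emphatically not Google's actual system; the three-document corpus is made up for illustration):

```python
# Toy distributional semantics: represent each term by the counts of
# words that co-occur with it, then compare terms by cosine similarity.
from collections import Counter

docs = [
    "cute dogs play in the park",
    "cute puppies play in the park",
    "boil water on the stove",
]

def context_counts(term, docs):
    """Count the words that co-occur with `term` across documents."""
    counts = Counter()
    for doc in docs:
        words = doc.split()
        if term in words:
            counts.update(w for w in words if w != term)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = lambda c: sum(v * v for v in c.values()) ** 0.5
    return dot / (norm(a) * norm(b)) if a and b else 0.0

# "dogs" and "puppies" share contexts, so they look interchangeable;
# "dogs" and "water" barely overlap, so they don't.
sim_dog_puppy = cosine(context_counts("dogs", docs), context_counts("puppies", docs))
sim_dog_water = cosine(context_counts("dogs", docs), context_counts("water", docs))
```

With billions of documents instead of three, the same co-occurrence signal is what keeps "hot dog" away from boiling puppies.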
It didn’t feature an athletic woman with a flimsy bra throwing a hammer through a screen, but I thought Google’s Super Bowl ad was pretty well done:
Google announced their public DNS server today. I’m using it right now. There’s been a bunch of speculation as to why Google is offering this service for free but the reason is pretty simple: they want to speed up people’s Google search results. In 2006, Google VP Marissa Mayer told the audience at the Web 2.0 conference that slowing a user’s search experience down even a fraction of a second results in fewer searches and less customer satisfaction.
Marissa ran an experiment where Google increased the number of search results to thirty. Traffic and revenue from Google searchers in the experimental group dropped by 20%.
Ouch. Why? Why, when users had asked for this, did they seem to hate it?
After a bit of looking, Marissa explained that they found an uncontrolled variable. The page with 10 results took .4 seconds to generate. The page with 30 results took .9 seconds.
Half a second delay caused a 20% drop in traffic. Half a second delay killed user satisfaction.
Former Amazon employee Greg Linden backs up Mayer’s claim:
This conclusion may be surprising — people notice a half second delay? — but we had a similar experience at Amazon.com. In A/B tests, we tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue.
Google is developing their next-generation search engine and needs your help in testing it out.
For the last several months, a large team of Googlers has been working on a secret project: a next-generation architecture for Google’s web search. It’s the first step in a process that will let us push the envelope on size, indexing speed, accuracy, comprehensiveness and other dimensions. The new infrastructure sits “under the hood” of Google’s search engine, which means that most users won’t notice a difference in search results.
After I heard Microsoft’s announcement of yet-another-iteration of their search engine (named Bing), I went to look at the stats for kottke.org for the past month to see how many visitors each search engine sent to the site. I couldn’t believe how dominant Google was.
Google | 262,946 | 93.8%
MS Live | 4,307 | 1.5%
Yahoo | 4,036 | 1.4%
MSN | 2,796 | 1.0%
It’s a small sample and doesn’t match up with Comscore’s numbers (Google: 64.2%, Yahoo: 20.4%, MS: 8.2%), but wow. As a comparison, the numbers for a year ago for kottke.org had Google at 91%, Yahoo at 4.9%, and Live at 0.7%.
At some event called the Churchill Club Top Tech Trends, VC Steve Jurvetson had an interesting idea about the future direction of search.
He said the aggregate power of distributed human activity will trump centralized control. His main point was that Google, and other search engines that analyze the Web and links, are much less useful than a (theoretical) search engine that knows not what people have linked to (as Google does), but rather what pages are open on people’s browsers at the moment that people are searching. “All the problems of search would be solved if search relevance was ranked by what browsers were displaying,” he said.
I like that idea a lot, but it got me thinking: how many instances of Firefox can you run on a cheapo Linux box, how many tabs could you have open in each of those browsers, and would that be more or less cost effective than the search term gaming that currently happens? In other words, good luck with that!
If you’re skeptical of WolframAlpha (as I was), you should watch this introduction by Stephen Wolfram. The comparison to Google (usually “is WolframAlpha a Google killer?”) is not a good one but the new service could learn a little something from the reigning champion: hide the math. One of the geniuses of Google is that it took simple input and gave simple output with a whole lot of complexity in between that no one saw and few people cared about. Plus the underlying premise of the complex computation was simplified, branded (PageRank!), and became a value proposition for Google: here’s what the web itself thinks is important about your query.
Here’s a small and nerdy measure of the huge change in the executive branch of the US government today. Here’s the robots.txt file from whitehouse.gov yesterday:
And it goes on like that for almost 2400 lines! Here’s the new Obamafied robots.txt file:
That’s it! BTW, the robots.txt file tells search engines what to include and not include in their indexes. (thx, ian)
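For reference, a minimal robots.txt looks something like this (a generic example, not the actual whitehouse.gov file):

```
# Applies to all crawlers: keep /private/ out of the index, allow the rest
User-agent: *
Disallow: /private/
```

A 2400-line version is the same idea repeated over and over, one Disallow per path.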
Update: Nearly four months later, the White House’s robots.txt file is still short…only four lines.
TinEye is an image search engine. You give it an image and it’ll find it on the web for you. If it works — I didn’t get to try it too much because it was down — this is great for chasing down attribution and finding other pix by the same photographer and such. (via master kalina)
Today, we’re announcing an initiative to help bring more magazine archives and current magazines online, partnering with publishers to begin digitizing millions of articles from titles as diverse as New York Magazine, Popular Mechanics, and Ebony.
At least I think it’s a few magazines…it might be thousands but there’s no way (that I can find) to view a list of magazines on offer.