Not too long ago I was watching a bunch of TED talks at home, when I came across one about a new kind of search engine. Now you might be saying to yourself, “Wow this guy is boring, watching lectures on search engines for fun.” And you’d probably be right aside from the fact that I’m awesome and you know it. But I encourage you to watch the video for yourself as my description of what this is and what it does may not quite do it justice.
Every person born within the last 40 years probably knows by now that you can access the wealth of human knowledge from your laptop, cellphone, mp3 player, etc. And what is the most common method of doing that? A search engine, like Google. You can go to Google and type in just about anything and receive some sort of meaningful result, along with 30 pages of porn relating to that search. There are all sorts of tips and tricks one can use to refine a search and glean data from the web that would otherwise be obscured or buried in messy page links. But what if you could present a search engine with a string of human-recognizable input and receive not a series of web pages regarding your query but a well-formatted page of data with a specific relation to your exact query? You would probably be using something like Wolfram Alpha.
Wolfram Alpha is what one might call a ‘computational knowledge engine’: rather than crawling the web for pages, it draws on vast curated data sets to return meaningful, specific answers based on human-recognizable input. So if I search for something like ‘solar x-rays August 1 2010 to August 2 2010’ (borrowed from the Wolfram Alpha blog), it will return a page showing a table of the mean, low, and high solar flux for those days, along with a graph plotting the x-ray activity of the sun between those two dates. Now, go try this at http://www.wolframalpha.com/ and you’ll see what I’m describing; then type the same string into Google. The first two page links aren’t even related to the question. That’s because Google is simply reporting back a list of pages queried from a contextual index, whereas Wolfram Alpha is computing over structured data, organizing it and formatting it for consumption by a person.
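Incidentally, Wolfram also exposes this behavior programmatically through its web API. As a rough sketch in Python, a query like the one above could be sent to the v2 `query` endpoint; note that a real request needs an AppID from the Wolfram|Alpha developer portal, and `YOUR_APP_ID` below is just a placeholder:

```python
from urllib.parse import urlencode

# Placeholder only -- a real key comes from the Wolfram|Alpha developer portal.
APP_ID = "YOUR_APP_ID"

def build_query_url(query: str, app_id: str = APP_ID) -> str:
    """Build a Wolfram|Alpha v2 Full Results API URL for a natural-language input."""
    params = urlencode({
        "input": query,        # the human-recognizable query string
        "appid": app_id,       # your developer AppID
        "format": "plaintext", # ask for plaintext pods in the XML response
    })
    return "http://api.wolframalpha.com/v2/query?" + params

# The same query from the example above, ready to fetch with any HTTP client.
url = build_query_url("solar x-rays August 1 2010 to August 2 2010")
print(url)
```

The response comes back as XML broken into “pods” (the tables and graphs you see on the site), which you could then parse and display however you like.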
You can actually give it an equation and it will solve it for you and then explain the math behind it. You can ask it to compute digits of Pi, ask it for a definition of human nature, or even ask it to compare caffeine to capsaicin, which displays images of both molecular structures as well as comparing their molecular weights and chemical identifiers. I have found a few queries that it still has trouble with, but it’s learning all the time and getting better with every question posed.
I feel like this concept is the future of how humans will interact with and traverse the knowledge we gain.
Update: I’m curious to see whether Wolfram Alpha will ever take advantage of all of the metadata you can glean from social networks; this ties in to my last post. Though the folks at Twitter would have to “turn on the firehose” of data to the public if Wolfram is to use Twitter updates too.