Sometimes it feels like Google understands me better than some actual humans, but alas, that’s not really the case. Every answer served up by the search engine has only been made possible because somewhere in the world, an actual human has already thought about it.
Now, I’m not saying that the Google of the future will start creating its own content, but a level of artificial intelligence never seen before will change the way it answers our queries.
Google’s first release of its Knowledge Graph was rolled out over a couple of days in May. It’s the fruit of two years’ labour since Google acquired Metaweb – a then five-year-old start-up company that “maintains an open database of things in the world”. Since the acquisition, Google has continued to grow this database, which now contains somewhere in the region of 500 million entities and 3.5 billion attributes and connections.
By crawling and indexing every public database and using that information, along with data gathered from its own products, Google can now start to understand connections between ‘things’. It can begin to recognise that a word is not just a string of characters: it represents something, and when one word appears next to another particular word – ‘New’ next to ‘Jersey’, say – the pair means something completely different again.
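To make the idea concrete, here’s a toy sketch of how entities, attributes and connections might be represented. This is purely illustrative – the entity names, attributes and structure are my own invention and bear no relation to Google’s actual implementation – but it shows the key point: ‘Jersey’ and ‘New Jersey’ resolve to entirely different things, not overlapping strings.

```python
# A toy knowledge graph: entities are distinct 'things' with attributes,
# and connections (triples) relate one entity to another.
# All names and attributes here are hypothetical, for illustration only.

entities = {
    "new_jersey": {"type": "US state", "capital": "trenton"},
    "jersey": {"type": "island", "location": "English Channel"},
    "trenton": {"type": "city"},
}

# Connections are (subject, relation, object) triples between entities.
connections = [
    ("trenton", "capital_of", "new_jersey"),
]

def describe(entity_id):
    """Return a short description of what an entity actually is."""
    return f"{entity_id}: {entities[entity_id]['type']}"

# Two different entities, even though one's name contains the other's:
print(describe("jersey"))      # jersey: island
print(describe("new_jersey"))  # new_jersey: US state
```

A string-matching engine would treat ‘Jersey’ as a substring of ‘New Jersey’; a graph of entities treats them as unrelated nodes, each with its own attributes and connections.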
But how will this really change the user experience? Well, it’s not going to revolutionise the way we search overnight, but you can already see examples of how the SERPs are changing for certain terms. The purpose of the Knowledge Graph, Google says, is to give users more contextual information. One criticism, though, is that Google is making it less necessary to actually leave the results page. Of course, this criticism is usually voiced by companies that have an agenda to get you on their site – or their clients’ websites – and are worried about how they’ll need to adapt.

But Google wants to take care of its client – the user: the person who wants an answer to a question, or a solution to a problem, and who is getting more and more impatient. Rather than simply pointing you in the direction of a website that might be able to answer your question, Google will try to understand the context of what you’re looking for, find the answer, and serve up the information you need – either on its own results page, or by presenting you with other qualified web documents for more thorough and accurate information.
All of this progress is working toward the ultimate end goal in the world of search – creating The Star Trek Computer: the loyal, all-knowing assistant who is there by your side (or in your pocket), serving up answers, giving advice and basically holding your hand along the way.
At a recent presentation by the Google Search Team, advancements in their speech recognition technology were unveiled, and it was pretty impressive stuff. I don’t know whether they purposefully used the term ‘speech recognition’ as opposed to ‘voice’, but it certainly sums up what this technology can do: understand what you’re actually talking about, not just match sounds to strings of data.
Underpinned by the Knowledge Graph, this speech recognition technology has the ability to understand natural language. This contextual understanding of human knowledge, combined with access to the arsenal of information available across the entire internet, will change not only the way we use search engines, but the way we move through life.
So, what does the future of search look like? Well, from where I’m standing, it looks very bright indeed. I must admit, I’ve always been a bit more Red Dwarf than Star Trek, but whatever style of sci-fi tickles your fancy, there always seems to be one recurring theme when TV producers project the ideal future – the AI assistant. So, whether it was ‘Ziggy’ in Quantum Leap, or my personal favourite, the rather passive-aggressive Holly from RD, there’s no doubt about it – we all want one, and now, thanks to some hardcore Trekkies, we can all soon have one.