
An investor in Siri Inc., Shawn Carolan of Menlo Ventures, recently discussed the technology behind Siri with Bloomberg. He described how Siri is able to take voice recognition data and determine intent: according to Carolan, Siri treats the recognized speech as "one big block" and maps "strings of words across" a group of 10 domains of expertise.
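Carolan's description is high level, but the basic idea of mapping a string of recognized words onto a small set of domains can be sketched with a toy classifier. The sketch below is purely illustrative and assumes a simple keyword-overlap score; the domain names and keyword lists are invented for the example and are not Siri's actual domains or method.

# Toy illustration of mapping a recognized utterance onto a small set of
# "domains of expertise" by keyword overlap. Domains and keywords are invented.

DOMAIN_KEYWORDS = {
    "restaurants": {"table", "dinner", "reservation", "restaurant", "eat"},
    "weather":     {"rain", "forecast", "temperature", "umbrella", "sunny"},
    "navigation":  {"directions", "route", "drive", "nearest", "traffic"},
}

def classify_domain(utterance: str) -> str:
    """Return the domain whose keyword set overlaps most with the utterance."""
    words = set(utterance.lower().split())
    scores = {domain: len(words & keywords)
              for domain, keywords in DOMAIN_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify_domain("Book me a table for dinner tonight"))  # -> restaurants

A real system would of course use far richer language models than keyword overlap, but the shape of the problem, one utterance scored against a fixed set of domains, is the same.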
According to technology journalist Robert Cringely, this method is similar to patents owned by the search portal Excite dating from 1994. He wrote the following:
Here’s how the ArchiText (later Excite) search engine worked. Every query was stripped to its significant words — subjects, objects, verbs and adjectives — then each query became a vector in a multidimensional space with each unique word being a dimension. “How do space rockets stay in orbit when they are flying through space?” would become a vector string one unit long for each of those words but two units long for the word “space.” This bit of semantic DNA was then mapped against an index of millions of web pages that had all been similarly converted to multidimensional vectors. It was quick, scalable, concentrated the processing load on the indexing where it didn’t bog down retrieval, and could reliably return pages like “Why satellites fall from the sky” that might answer the question even though none of the same words were used.
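What Cringely describes is essentially a bag-of-words vector-space model. The sketch below is a minimal reconstruction of that idea, not Excite's actual code: each query and page becomes a vector of word counts (so "space", appearing twice, contributes two units along that dimension), pages are indexed up front, and retrieval ranks them by cosine similarity to the query. The stop-word list and sample pages are invented for the example.

import math
from collections import Counter

# Function words dropped before vectorization; the quote keeps only
# subjects, objects, verbs and adjectives. This stop-word list is invented.
STOP_WORDS = {"how", "do", "in", "when", "they", "are", "through",
              "the", "a", "an", "why"}

def to_vector(text):
    """Turn text into a word-count vector (a Counter keyed by word)."""
    words = [w.strip("?.,!").lower() for w in text.split()]
    return Counter(w for w in words if w and w not in STOP_WORDS)

def cosine(v1, v2):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v1[w] * v2[w] for w in v1)
    norm = math.sqrt(sum(c * c for c in v1.values())) * \
           math.sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0

# Indexing step: convert each page to a vector once, up front.
pages = {
    "Why satellites fall from the sky": to_vector(
        "satellites orbit the earth and fall from the sky when drag slows them"),
    "Rocket launch schedule": to_vector(
        "upcoming rocket launch schedule and ticket prices"),
}

query = to_vector("How do space rockets stay in orbit when they are flying through space?")
print(query["space"])  # 2 -- 'space' appears twice, so that dimension is two units long

# Retrieval: rank the pre-indexed pages by similarity to the query vector.
for title, vec in sorted(pages.items(), key=lambda kv: cosine(query, kv[1]), reverse=True):
    print(title, round(cosine(query, vec), 3))

Note that a plain count-vector model like this one only matches on shared terms; returning a relevant page that shares none of the query's words, as the quote describes, implies an additional layer on top of the raw vectors (something along the lines of latent semantic indexing), which is not shown here.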
What do you think is going to happen? Share any thoughts and opinions below!
Source: Cringely