Answer Search Engines Might Be The Future In The Era Of AI
Abstract: Although search engines today work in pretty much the same way they did two decades ago, and people have long gotten used to browsing through result pages to find the content they need, the rise of the mobile Internet is generating new demands for information search. Traditional search engines can no longer satisfy users who want answers as quickly as possible in their fragmented time. New ways of searching are bound to emerge.
As the biggest search-engine company in the world, Google recently reached a market value of 500 billion dollars, surpassing even the combined market value of Facebook and Alibaba. In contrast to this sky-high valuation, Google's search product has hardly changed in form or service over the last twenty years: users type in keywords, and the search engine lists out millions of related webpages.
Here comes the question: can search engines last long using the same approach to finding information as twenty years ago?
The answer is no. As mobile devices gradually replace PCs as the main medium through which Internet users receive information in their daily lives, the inconvenience of using search engines on smartphones' small screens has created great new demands. For instance, it is particularly awkward to open multiple webpages from the search results on a phone. In other scenarios, such as driving, users cannot get the information they want from their smartphones immediately, since their hands are busy with the wheel and cannot operate the phone. Seen from this angle, current search engines are outdated.
Scenario One: Language barriers
In 2014, Google acquired Quest Visual, a company dedicated to augmented-reality translation technology, which lets a smartphone read text through its camera and then translate the recognized words into a target language. With Quest Visual's technology, users can take out their phones, scan foreign-language text anytime, and get the translation they need immediately. This augmented-reality translation combines sophisticated text recognition with machine translation, both fundamental to getting real-time translation assistance from a smartphone.
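The two-stage pipeline described above, recognize then translate, might be sketched roughly as follows. Both stages are stubbed out for illustration; the function names and tiny phrase table are invented here, not Quest Visual's actual API.

```python
# Minimal sketch of a camera-to-translation pipeline (illustrative only).

def ocr_extract_text(camera_frame: bytes) -> str:
    """Stub for text recognition: a real system would run an OCR model
    over the camera frame and return the recognized string."""
    # For illustration, pretend the camera saw a French sign.
    return "sortie"

# A toy phrase table standing in for a full machine-translation model.
PHRASE_TABLE = {
    "sortie": "exit",
    "entrée": "entrance",
}

def translate(text: str) -> str:
    """Stub machine translation: look the phrase up in a tiny table."""
    return PHRASE_TABLE.get(text.lower(), text)

def augmented_reality_translate(camera_frame: bytes) -> str:
    """Chain the two stages: recognize the text, then translate it."""
    return translate(ocr_extract_text(camera_frame))

print(augmented_reality_translate(b"<raw image bytes>"))  # exit
```

The point of the sketch is the composition: the translation quality a user sees is bounded by both stages, which is why the article calls the two underlying technologies "fundamental".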
Scenario Two: Getting information from the “outside brain” when talking to people
In a recent Google Glass commercial, a young man wears his Google Glass on a date, during which he searches for relevant information about his date through the glasses and is thereby able to show his best side and win over the girl.
Conversation between people is another scenario that requires a real-time search service. AI technology can work as a second, "outside" brain, finding the answers we need on the Internet immediately to keep the conversation flowing. In this sense, such technology can boost the quality and efficiency of our daily conversations and help us understand each other better.
Such a function hasn't been achieved yet, since Google Glass still faces obstacles in both technology and user experience. On the one hand, Google Glass has a resolution of only 640*480, which obviously does not provide the best visual experience. On the other hand, wearing Google Glass in some social settings makes you stand out like a sore thumb, and the potential exposure of privacy makes the people socializing with you uncomfortable. Additionally, accurate image recognition and immediate response still require more advanced AI. I believe future technology will make it possible for wearable gadgets to become users' second brains, in charge of analyzing, searching, and providing information. These gadgets' appearance, however, must be designed to be low-key so as not to raise privacy concerns.
Scenario Three: Within seconds, in sports games
This future scenario takes a little imagination to picture. In table tennis, for example, points are decided within seconds, so what really matters besides skill is reaction speed. For table tennis rookies, it is extremely hard to respond to quick attacks. A future wearable AI gadget could analyze the rebound and movement of the ball and predict the outcome of an attack faster than the human brain. In sports competitions, driving, rescue missions, and other activities that demand quick responses, such wearable gadgets would prove incredibly helpful and give users an "edge". Such technologies therefore have almost immeasurable business value.
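The core of such a prediction can be illustrated with a deliberately simple sketch: extrapolate the ball's next position from two recent observations, assuming constant velocity. A real system would need proper physics and vision models; the numbers below are made up.

```python
# Hedged sketch: predict where a ball will be a fraction of a second ahead
# by linearly extrapolating its two most recent observed positions.

def predict_position(p1, p2, dt_observed, dt_ahead):
    """Given two observed (x, y) positions dt_observed seconds apart,
    extrapolate the position dt_ahead seconds after p2 (constant velocity)."""
    vx = (p2[0] - p1[0]) / dt_observed
    vy = (p2[1] - p1[1]) / dt_observed
    return (p2[0] + vx * dt_ahead, p2[1] + vy * dt_ahead)

# Ball moved from (0, 10) to (5, 9) in one time unit;
# predict where it will be two time units later.
print(predict_position((0.0, 10.0), (5.0, 9.0), 1.0, 2.0))  # (15.0, 7.0)
```

Even this toy model makes the article's point concrete: a machine can produce such an estimate in microseconds, well inside a human reaction time of a few hundred milliseconds.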
In the three scenarios above, users' needs and the technological requirements vary, but they have one thing in common: the wearable gadget needs to detect or sense the user's need and provide an answer or solution immediately. Thus we have every reason to believe that the next-generation search engine may be the ultimate solution to the needs described above, since future search engines should provide information support anytime and anywhere, helping users make immediate decisions and act. "Search engines in the future should be able to provide users with accurate answers to their questions instead of hundreds of thousands of webpages," Google's former CEO Eric Schmidt said in 2009. "I believe this is the direction the search-engine industry will work toward, and that the future belongs to answer search engines."
Answer search engines differ from question-based engines in that the former do not require users to input a complete question, or even text at all. In other words, an answer search engine can provide answers to any form of inquiry.
So when can we have such technology?
What search engines do today is list numerous webpages related to the keywords a user has entered. Browsing through piles of webpages just to find an answer is no easy task for anyone. To deliver the best answer conveniently, two issues must be solved first:
1. Knowing what exactly users are looking for. Statistics show that when users use voice input they tend to say a complete sentence, while with text they prefer to type two or three keywords. When users fail to provide enough information (for example, "Apple" could be the brand or simply the fruit), the system cannot tell what they actually mean. The answer search engine must therefore be able to analyze and predict the user's intent from context and semantic meaning.
a. Analysis of common needs. The system can predict a user's needs by tracking and analyzing similar users' search habits and history.
b. Analysis of individual needs. This can be done by analyzing the user's own search history and matching the keyword against that record. For example, if a user has searched for Apple the tech company many times before, the system will lead him to pages about iPhones rather than to pages full of the fruit.
c. Cues from location. For instance, a user walking around an electronics market is probably searching for Apple's phones rather than the fruit. Location cues are especially important when searching for local-life information, such as restaurants.
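These three signals might be combined into a single ranking score per candidate meaning, as in the sketch below. The weights and signal values are invented for illustration; a real engine would learn them from data.

```python
# Hedged sketch: combine the three disambiguation signals above
# (overall popularity, personal history, location) into one score.

def score_sense(popularity, personal_hits, location_match,
                w_pop=0.3, w_hist=0.5, w_loc=0.2):
    """Weighted sum of normalized signals for one candidate sense."""
    return w_pop * popularity + w_hist * personal_hits + w_loc * location_match

def disambiguate(query, senses):
    """Pick the candidate sense with the highest combined score."""
    return max(senses, key=lambda s: score_sense(
        s["popularity"], s["personal_hits"], s["location_match"]))

# "Apple" typed by a user who often searches for iPhones,
# while standing in an electronics market:
candidates = [
    {"name": "Apple Inc.",    "popularity": 0.6, "personal_hits": 0.9, "location_match": 1.0},
    {"name": "apple (fruit)", "popularity": 0.4, "personal_hits": 0.1, "location_match": 0.0},
]
print(disambiguate("apple", candidates)["name"])  # Apple Inc.
```

A linear weighted sum is the simplest possible combiner; the design point is that no single signal decides alone, so a strong personal-history or location cue can override raw popularity.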
2. Understanding the content of webpages and finding the best match for the question. Thanks to natural language processing (NLP), machines can now understand human language to some extent; NLP involves technologies such as speech recognition, character recognition, lexical analysis, and parse-tree building.
Compared with understanding text, image recognition is in fact more challenging. An image recognition system can usually recognize only a certain class of things, since recognizing other things requires a completely different approach. Image recognition is therefore incredibly effective in narrow scenarios such as reading license plates or recognizing faces, but rather useless when facing the whole wide world, which is simply too diverse.
Luckily, some leading IT enterprises overseas have opened up the AI tools they have been working on for years to the general public for free, including Facebook's open-source deep-learning hardware design Big Sur, Google's TensorFlow, IBM's SystemML, and Microsoft's DMTK. While contributing to the development of the field, these enterprises will also gather experience and momentum and achieve major technological breakthroughs with the help of great minds everywhere.
When a system can truly understand both users' actual needs and the content of webpages, finding the answers users are looking for is not that difficult. However, two more issues need to be solved: differentiating simple searches (who, where, or when) from complicated searches (why or how), and logically integrating answers scattered across different webpages.
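The first of these two issues can be sketched as a trivial router that looks at the question word, along the simple/complicated split named above. The word lists and routing here are illustrative, not any production engine's logic.

```python
# Sketch: route "simple" factoid questions (who/where/when), which a single
# fact can answer, differently from "complicated" ones (why/how), whose
# answer must be assembled from several sources.

SIMPLE_WORDS = {"who", "where", "when"}
COMPLEX_WORDS = {"why", "how"}

def classify_question(question: str) -> str:
    """Return 'simple', 'complex', or 'unknown' based on the leading word."""
    words = question.lower().split()
    first = words[0] if words else ""
    if first in SIMPLE_WORDS:
        return "simple"
    if first in COMPLEX_WORDS:
        return "complex"
    return "unknown"

print(classify_question("When was Google founded?"))  # simple
print(classify_question("Why is the sky blue?"))      # complex
```

A real engine would of course classify intent with learned models rather than a keyword list, but the router makes the distinction concrete: simple queries can be answered by fact lookup, while complicated ones trigger the harder step of integrating answers from multiple pages.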
The answer search engine is a natural trend of the times. Although search engines have stayed pretty much the same for two decades and people have long gotten used to browsing through result pages, the rise of the mobile Internet is generating new demands for search. Traditional search engines can no longer satisfy users' need to find answers fast in their fragmented time. Without a doubt, the era of the answer search engine is coming.
[This article is published and edited with authorization from the author @YiZhao; please credit the source and include a hyperlink when reproducing it.]
Translated by Garrett Lee (Senior Translator at PAGE TO PAGE), working for TMTpost.