During his recent Web Summit presentation, Mike Schroepfer, Facebook’s chief technology officer, explained how Facebook’s AI technology works and what implications it could have for existing platforms. The social network believes there is already too much content on the Internet, creating a growing need for accurate recognition software.
At the conference, Facebook announced that it is developing an artificial intelligence system that can identify photo content in much the same way humans do. The endeavor is not entirely new to the public, which has already seen similar announcements from competing companies.
According to Schroepfer, the new system will be among the first to see the world through human eyes. At present, computer systems rely on pixels to understand photo content, but developers consider this approach outdated and inefficient given the sheer volume of information on the Internet.
Once the new recognition software has been tested, Facebook plans to create an assistant called Visual Q&A to help visually impaired people understand what pictures illustrate. Accessibility is not the only motivation for a visual assistant: Schroepfer explained that online platforms need a solid infrastructure to support the ever-growing volume of Internet information.
In addition to the social uses of Facebook’s AI technology, Schroepfer plans to apply recognition algorithms to create a teleporter – a system that gives users the illusion of actually living in the virtual environment they have selected. This will only be possible if photos can be accurately identified based on the information they contain, he concluded.
Many tests and demonstrations have been run to prove the efficiency of Facebook’s new system. Developers began with smaller texts and then moved on to larger inputs, which had to be deciphered for the AI system to correctly answer 100,000 questions. Results showed that the new program is 30% faster than previous ones and requires one-tenth the training data.
Developers hope to someday combine the two systems to obtain even better results. Computers will then not only be able to see like humans, but also to understand human language better. Together, these capabilities should make it easier for users to interact with their computers.
Image source: www.countdown.org