A picture is worth a thousand words. But for Google, sometimes words are more useful. Finding a way to describe an image automatically and accurately is therefore an important goal for the company. Researchers at Google and at Stanford University have independently developed software that not only recognises individual objects but also understands more complicated scenes involving several objects and activities.
While Google's initial focus is on cataloguing the content of images on the internet, the algorithm could be applied much further, for example to help future robots interact with the objects around them.
The researchers also see another implication worth investigating: surveillance. As a simple example, a CCTV camera could not only identify an object but also explain what it is doing. Such a camera would be smart enough to distinguish pedestrians crossing the road from people causing a disturbance in the same place.
The artificial-intelligence method works by combining two neural networks, systems of interconnected computing units with the capacity to learn patterns from data. The first network recognises the individual elements in an image, and the second uses natural language processing to describe them in a sentence.
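The two-network idea can be illustrated with a minimal sketch. This is not Google's or Stanford's actual system: the vocabulary, dimensions, and weights below are invented for illustration, the "vision" network is reduced to a single linear map standing in for a convolutional network, and the "language" network is a one-layer recurrent decoder. A real system would train both networks jointly on image-caption pairs; here the weights are random, so the output sentence is meaningless but the data flow is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative vocabulary and sizes (assumptions, not from the article).
VOCAB = ["<start>", "<end>", "a", "dog", "cat", "runs", "sits"]
EMBED, HIDDEN, FEAT = 8, 16, 16

# Network 1 ("vision"): stand-in for a convolutional network,
# reduced here to one linear projection of the raw pixels.
W_enc = rng.normal(size=(FEAT, 32 * 32))

def encode(image):
    """Flatten a 32x32 image and project it to a feature vector."""
    return np.tanh(W_enc @ image.ravel())

# Network 2 ("language"): a single-layer RNN conditioned on the
# image features, emitting one word of the caption per step.
W_embed = rng.normal(size=(len(VOCAB), EMBED))
W_h = rng.normal(size=(HIDDEN, HIDDEN + EMBED))
W_out = rng.normal(size=(len(VOCAB), HIDDEN))

def caption(image, max_len=10):
    """Greedy decoding: the image features seed the RNN state."""
    h = encode(image)                  # FEAT == HIDDEN in this sketch
    word = VOCAB.index("<start>")
    out = []
    for _ in range(max_len):
        x = np.concatenate([h, W_embed[word]])
        h = np.tanh(W_h @ x)           # update the recurrent state
        word = int(np.argmax(W_out @ h))  # pick the most likely next word
        if VOCAB[word] == "<end>":
            break
        out.append(VOCAB[word])
    return out

print(caption(rng.normal(size=(32, 32))))
```

With trained weights, the same loop is what lets the software move from naming single objects to producing a sentence about a whole scene.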
However, the technology is not yet perfect; it is still prone to mistakes when interpreting a scene. According to several artificial-intelligence specialists, the software still has a long way to go before it reaches the ultimate goal: understanding on a par with a human being.