Google’s annual conference, Google I/O 2017, was held at the outdoor Shoreline Amphitheatre in California during May 17–19, 2017. The conference opened with a keynote from Google CEO Sundar Pichai, which brought few brand-new announcements but delivered many significant updates to products introduced at last year’s Google I/O 2016.
Sundar Pichai began the keynote with a milestone: seven of Google’s most important products and platforms have each scaled up to more than one billion monthly active users.
Sundar Pichai attributed this milestone to the growth of mobile and smartphones, and of course to computing itself, which continues to evolve rapidly with each passing day.
Last year, Google described an important shift in computing from a mobile-first to an AI-first approach. The mobile-first era made Google reimagine every product it was working on, changing the user-interaction model through multi-touch, location, identity, payments, and more.
Similarly, in today’s AI-first world, Google is rethinking all of its products, applying machine learning and AI to solve users’ problems at scale. Every product now behaves and performs differently thanks to machine learning.
For example, he said that Google Search now ranks results using machine learning, and in Google Maps, Street View automatically recognizes restaurant signs, street signs, and more.
The introduction of Smart Reply in Allo was well received last year, and this year Google began rolling out Smart Reply to more than one billion Gmail users. Machine learning systems have learned to be conversational, which is genuinely impressive.
There is a huge shift in how users interact with computing. Mobile brought multi-touch, letting Google evolve interaction beyond the mouse and keyboard. The new modalities are voice and vision, which make interaction with computers more natural and engaging. People already use voice as input across many Google products because computers have become much better at understanding speech: Google’s word error rate has improved significantly since last year, even in very noisy environments.
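Word error rate, the speech-recognition metric cited here, is simply the word-level edit distance between what was said and what was transcribed, normalized by the length of the reference. As a minimal illustration (this is the standard definition, not Google’s internal implementation), it can be computed with dynamic programming:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance between the
    reference transcript and the hypothesis, divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution out of six reference words: 1/6
print(wer("turn on the living room lights", "turn on the living room light"))
```

A lower score means a better recognizer; an error rate around 5% (roughly one wrong word in twenty) is the kind of improvement the keynote was referring to.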
With respect to Google Home, Sundar Pichai mentioned that deep learning recently enabled multi-user support, so Google Home can now recognize up to six people in a household and personalize the experience for each of them. Voice is thus becoming an important modality across Google products.
The same is true of vision. Computer vision has seen great improvements: Google can now understand the attributes of images much better, and improved image recognition is being used across its products. Pixel, Google’s world-class smartphone, has a best-in-class camera, and Google can even clean up noisy low-light shots into normal-looking pictures.
The next step goes further: if something is obstructing the photo you want to take, Google can now remove the obstruction and give you a clear, unobstructed picture.
With vision at this clear inflection point, Sundar Pichai announced Google’s new initiative, Google Lens. Google Lens is a set of vision-based computing capabilities that can understand what you are looking at. It will first roll out in Google Assistant and Google Photos, and later in other products.
For example, if you want to identify a flower you have seen, you can invoke Google Lens from the Assistant, point your phone at the flower, and Lens will tell you what flower it is. At a friend’s place, you no longer need to type in the network name and password: just point Google Lens at the router’s barcode and your phone connects to the Wi-Fi. Point your phone at any restaurant on the street and Google brings up that restaurant’s details on screen. Google has truly started understanding images and videos.
Google was built on its ability to understand text and web pages; that ability is now extending to images and videos. For this machine-learning and AI world, Google is also rethinking its computational architecture and building AI-first data centers.
With this vision, Google last year launched Tensor Processing Units (TPUs): custom hardware for machine learning that is 15–30 times faster and 30–80 times more power-efficient than CPUs. Last year’s TPUs were optimized for inference; this year Google announced a new generation, called Cloud TPUs, optimized for both training and inference.
Sundar Pichai also spoke about important advances in Google’s technical infrastructure for the AI era. Cloud TPUs are coming to Google Compute Engine, and Google wants to provide a wide range of hardware that lays the foundation for significant progress. Google is bringing its AI efforts together at Google.ai, which will focus on state-of-the-art research, on tools and infrastructure such as TensorFlow and Cloud TPUs, and on applied AI. Google is taking these AI advances and applying them to harder, newer problems across a wide range of disciplines, providing machine-learning tools that help people do what they do, better. Beyond this problem-solving work, Google is also doing some simple, fun things. For example, AutoDraw offers suggestions as you draw, much like the suggestions you get while typing text.
Google is applying machine learning across all of its products, but most importantly in Google Search and the Google Assistant.
Let’s check out some highlights from the announcements Google made at I/O 2017.
Google Lens
Google Lens is a new initiative from Google that helps you recognize things you don’t know, such as a bird, a flower, or even a new café. All you have to do is point your camera at the object and Google Lens will do the work and bring up the details you want. For now, Google Lens will be integrated into Google Assistant and Google Photos.
Google.ai
Google.ai, as its tagline says, is about ‘Bringing the benefits of AI to everyone’: an initiative to democratize the benefits of the latest machine-learning research. It is a centralized resource providing news and documentation about Google’s latest projects and research, along with opportunities to experience some of its experimental technology. It offers open access to documentation that can help professionals across a variety of industries, such as education and medicine.
Google for Jobs
Job seekers have long asked for a single, central place where all job openings are available. Google will now aggregate job listings from various posting sites and display them within search results. Google for Jobs aims to overcome the challenge of connecting job seekers with all available job information in one place.
Google Assistant for iPhone
Google Assistant has received some major enhancements, and the good news is that it is now available for download from the App Store. Users comparing the iOS version to Siri call it better in some respects, though slightly underwhelming on Apple’s platform. The advantages of Google Assistant on iPhone are its third-party integrations and connected-device control capabilities.
Android O
Android O doesn’t bring anything fancy, but it does deliver the nuts and bolts that make it a faster, better version that also saves battery. The highlight of this OS release is picture-in-picture: you no longer have to exit an app while watching a video. Press the home button and the video collapses into a smaller, movable window that keeps playing while you perform other actions and tasks.
Standalone VR Headsets
Google has already expanded into more advanced and expensive headsets, and is now developing its first standalone VR headsets in partnership with Lenovo and HTC. Previously, a computer or smartphone was required to power a VR experience; now, using WorldSense technology, the new standalone headsets can track your precise movements in space.
Krify is a multinational IT service provider with core competency in iOS and Android mobile apps built on advanced development technologies. Contact us to convert your great app idea into a successful mobile app.