Google’s annual developer conference, Google I/O 2017, was held at the outdoor Shoreline Amphitheatre in Mountain View, California, from May 17 to 19, 2017. The conference opened with a keynote from Google CEO Sundar Pichai, which brought few brand-new announcements but plenty of updates to announcements made at last year’s Google I/O 2016.
Sundar Pichai started the keynote with a milestone: Google now has over one billion active users, and seven of its most important products and platforms have each scaled past a billion monthly active users.
He attributed this milestone to the growth of mobile and smartphones, but of course it also rests on computing itself, which is evolving rapidly with each passing day.
Last year, Google described an important shift in computing from a mobile-first to an AI-first approach. The mobile-first era made Google reimagine every product it was working on, changing the user-interaction model around multi-touch, location, identity, payments, and more.
Similarly, in today’s AI-first world, Google is rethinking all of its products, applying machine learning and AI to solve users’ problems at scale. Every product now behaves and performs differently because of machine learning.
For example, he said that Google Search now ranks results using machine learning, while in Google Maps, Street View automatically recognizes restaurant signs, street signs, and more.
Smart Reply and Allo were well received when introduced last year, and this year Google started rolling out Smart Reply to more than one billion Gmail users. Machine learning systems, in other words, have learned to be conversational.
There has also been a huge shift in how users interact with computing. Mobile brought multi-touch, and Google evolved interaction beyond the mouse and keyboard. The new modalities are voice and vision, two ways of interacting with computing that are more natural and engaging. People already use voice as an input across many Google products, because computers have become much better at understanding speech. Speech recognition has improved significantly since last year, as evidenced by a lower word error rate, even in very noisy environments.
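As context for that benchmark: word error rate (WER) is the word-level edit distance between what the recognizer produced and a reference transcript, divided by the length of the reference. A minimal Python sketch of the standard computation (the example sentences are made up for illustration):

    # Word error rate: word-level edit distance between a reference
    # transcript and a recognizer hypothesis, divided by reference length.
    def word_error_rate(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + cost) # substitution
        return dp[len(ref)][len(hyp)] / max(len(ref), 1)

    # One substituted word in a six-word reference gives a WER of 1/6 (~0.17).
    print(word_error_rate("turn on the living room lights",
                          "turn on the living room light"))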
On Google Home, Sundar Pichai mentioned that deep learning recently made multi-user support possible: Google Home can now recognize up to six people in a household and personalize the experience for each of them. Voice, then, is becoming an important modality across Google products.
The same is true of vision. Computer vision has improved considerably: Google now understands the attributes of images much better, and image recognition is being used across its products. The Pixel, Google’s flagship smartphone, has a best-in-class camera, and Google can now turn even a noisy low-light photo into a clean one.
Coming next: if something is obstructing the photo you want to take, Google will be able to remove the obstruction and leave you with a clear picture of your subject.
Calling this a clear inflection point for vision, Sundar Pichai announced Google’s new initiative, Google Lens. Google Lens is a set of vision-based computing capabilities that can understand what you are looking at. It will roll out first in Google Assistant and Google Photos, and later in other products.
For example, if you want to identify a flower you have seen, you can invoke Google Lens from the Assistant, point your phone at the flower, and Google Lens will tell you which flower it is. At a friend’s place, you no longer need to type in the Wi-Fi username and password: just point Google Lens at the barcode on the router and your phone connects to the network. Point your phone at any restaurant on the street, and Google will bring up that restaurant’s details on the screen. Google, in short, has started understanding images and videos.
Google was built on its ability to understand text and web pages; that ability is now extending to images and videos. Evolving for a machine learning and AI world, Google is rethinking its computational architecture and building AI-first data centers.
With this vision, Google launched Tensor Processing Units (TPUs) last year: custom hardware for machine learning that is 15 to 30 times faster and 30 to 80 times more power-efficient than CPUs. Last year’s TPUs were optimized for inference; this year, Google announced a new generation, called Cloud TPUs, optimized for both training and inference.
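The keynote showed no code, but here is a rough sense of what “optimized for both training and inference” means in practice: a TensorFlow program points at a Cloud TPU and runs both phases there. A minimal sketch using the TensorFlow 2.x distribution API, which post-dates this announcement; the TPU name is a placeholder:

    import tensorflow as tf

    # "my-tpu" is a placeholder for the name of a provisioned Cloud TPU.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)

    # Any model built inside the strategy's scope is placed on the TPU,
    # so both training and inference run on the same hardware.
    strategy = tf.distribute.TPUStrategy(resolver)
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
    # model.fit(train_dataset, epochs=5)  # training on the TPU
    # model.predict(test_batch)           # inference on the TPU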
Sundar Pichai also highlighted significant advances in Google’s technical infrastructure for the AI era. He announced that Cloud TPUs are now available on Google Compute Engine, with the aim of providing a broad range of hardware to drive progress. Google is consolidating its AI efforts under Google.ai, which focuses on research, tools such as TensorFlow, and applied AI, targeting complex problems across many disciplines. Google is also democratizing its machine learning tools: AutoDraw, for instance, offers a fun way to draw, with Google providing suggestions much like text prediction.
Google is applying machine learning across all of its products, but the most important are Google Search and the Google Assistant.
Let’s check out some highlights from the announcements Google made at I/O 2017.
Google Lens
Google Lens is a new initiative from Google that helps you recognize things you don’t know: a bird, a flower, or even a new café. All you have to do is point your camera at the thing, and Google Lens does the work and brings up the details you want. For now, Google Lens is integrated with Google Assistant and Google Photos.
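Lens itself does not expose a public API, but the closest developer-facing analogue of “point a camera at something and get an answer” is Google’s Cloud Vision API. A minimal sketch, assuming the google-cloud-vision Python client is installed, credentials are configured, and flower.jpg is a photo you took:

    from google.cloud import vision

    # Label an image roughly the way Lens identifies a flower or a café.
    client = vision.ImageAnnotatorClient()
    with open("flower.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(label.description, round(label.score, 2))  # e.g. Flower 0.97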
Google.ai
Google.ai, as its tagline says, is about ‘bringing the benefits of AI to everyone’. It is an initiative to democratize the benefits of the latest machine learning research: a centralized resource providing news and documentation on Google’s latest projects and research, along with opportunities to try some of its experimental technology. It offers open access to documentation useful to professionals in a variety of industries, such as education and medicine.
Google for Jobs
Job seekers have long asked for a single central place where all job openings are available. Google for Jobs aggregates job listings from various posting sites and displays them within search results, tackling the challenge of connecting job seekers with all the information about available jobs in one place. Leveraging Google’s search capabilities, it also offers advanced filtering options and personalized recommendations, making it easier to find employment opportunities.
Google Assistant for iPhone
Google Assistant has received some major enhancements, and the good news is that it is now available for download on iOS. Users are already comparing it with Siri; some call it the better assistant, even if it feels slightly underwhelming on the iPhone. Its advantages there are its third-party integrations and its ability to control connected devices.
Android O
Android O does not bring anything fancy, but it does bring the nuts and bolts that make Android faster, better, and easier on the battery. The highlight of this release is picture-in-picture: you no longer have to exit a video app. Press the home button and the video collapses into a smaller, movable window that keeps playing while you perform other actions and tasks.
Standalone VR Headsets
Google is expanding into more advanced headsets: it is developing its first standalone VR headsets in partnership with Lenovo and HTC. Previously, experiencing VR required the power of a computer or a smartphone; the new standalone headsets need neither, and use WorldSense technology to precisely track your movement in space.
Krify is a multinational IT services provider with core competencies in iOS and Android mobile app development using advanced technologies. Contact us to convert your great app idea into a successful mobile app.