




Google doubles down on AI at I/O 2018 – reveals Google Duplex, VPS in Maps & more

At this year's annual Google I/O developer conference, Google signaled a shift toward new areas of focus for the organization: Artificial Intelligence, Machine Learning and Augmented Reality. One theme ran through most of the keynotes – the use of AI, and its subset Machine Learning, to drive most of Google's products.

Here are the key announcements from the conference.

Camera for Directions – Google Maps unveils Visual Positioning System (VPS)

Google knows how hard it can be to find your way in an unfamiliar locale, even with GPS. Google VP Aparna Chennapragada announced one of the standout features of the conference: an updated Visual Positioning System (VPS) in Google Maps navigation.

If you tend to get lost, you no longer have to wonder whether you are walking in the same direction as the blue dot on your map. Instead, you can point your camera at your surroundings, and simple directions – turn left, turn right, or go straight – appear on screen.

You simply hold up the phone and open the camera; the system identifies where you are standing by comparing what the camera sees against Google's database of Street View images for the area. Once it has fixed your location, an arrow on the screen tells you whether to go right or left.
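Google has not detailed how VPS matches the camera feed to Street View, but the flow described above amounts to a nearest-match lookup over stored imagery with known positions. The sketch below is purely illustrative – the dummy descriptor vectors stand in for real visual features:

import numpy as np

# (descriptor of a stored Street View image, its known lat/lon) -- all dummy values
street_view_db = [
    (np.array([0.9, 0.1, 0.0]), (19.0760, 72.8777)),
    (np.array([0.1, 0.9, 0.2]), (19.0761, 72.8790)),
    (np.array([0.0, 0.2, 0.9]), (19.0755, 72.8801)),
]

def localize(camera_descriptor):
    # Return the position of the stored image most similar to the camera view.
    best = max(street_view_db, key=lambda entry: float(camera_descriptor @ entry[0]))
    return best[1]

print(localize(np.array([0.85, 0.2, 0.1])))  # -> (19.076, 72.8777), the closest match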


Other additions to Google Maps include a "For You" tab that shows nearby points of interest, along with a "Your Match" feature that tries to tailor recommendations to your tastes. One intended use is sharing lists of places with friends instead of having to recall names from memory.

Google Duplex – AI-based Assistant can schedule your appointments

The company announced that Assistant will soon be able to make appointments and reservations on your behalf, even when they cannot be booked online. The feature is powered by a technology called Google Duplex and will roll out as an experiment soon.

Google Duplex can hold natural conversations and perform practical tasks over the phone. Thanks to improvements in Natural Language Processing and advances in Deep Learning, it carries out human-like conversations, complete with speech disfluencies like "uhmm" or "hmm".

In a stunning, if slightly disconcerting, demo, Google CEO Sundar Pichai showed Google Assistant talking realistically with real people in automated voice calls.

How does Google Duplex Work?

At the heart of Google Duplex is a recurrent neural network (RNN) built using TensorFlow Extended (TFX). To make the Assistant's voice sound human, its creators combine a concatenative text-to-speech (TTS) engine with a synthesis TTS engine, which lets the system vary the tone of the machine's voice.


Speech disfluencies like those mentioned above are added to make the voice more believably human. The system also understands when to give a slow response and when to answer quickly, using low-confidence models or faster approximations where appropriate. To train the system in a new domain, the developers used real-time supervised training – much like a teacher instructing a student on a subject with various examples.
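Google has not released Duplex's model, but the core idea of a recurrent network – a hidden state that carries context from one step of the conversation to the next – can be sketched in a few lines of plain numpy. All names and sizes below are illustrative, not Duplex's:

import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 16, 32  # illustrative sizes

W_xh = rng.normal(0, 0.1, (hidden_dim, input_dim))   # input-to-hidden weights
W_hh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # hidden-to-hidden: the "recurrent" part
b_h = np.zeros(hidden_dim)

def rnn_step(x_t, h_prev):
    # The new state depends on the current input AND the previous state,
    # which is how the network carries conversational context forward.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(hidden_dim)
for x_t in rng.normal(size=(5, input_dim)):  # 5 dummy time steps (e.g. words heard so far)
    h = rnn_step(x_t, h)
print(h.shape)  # (32,) -- a context vector a decoder could turn into the next reply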

Android P focuses on Artificial Intelligence and Digital Well-being

Digital Well-Being

Google's 2018 initiatives include fighting phone addiction. To that end, the company has built a dashboard that tracks how often you use your apps, which notifications you receive, and how often you unlock your phone. Users get a breakdown for any app, and developers can link directly to their app's dedicated section of the dashboard.


App Timer helps users limit the time they spend in an app each day, with a nudge that warns them as they approach the limit; once the limit is up, the app icon is grayed out for the rest of the day. Meanwhile, Wind Down helps users get to sleep with features like graying out the screen to prompt them to put the device down.
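The App Timer behavior described above is easy to picture as a small state machine: track today's usage, nudge near the limit, gray out at the limit. A minimal sketch in Python – the class, thresholds, and messages are invented for this example, not Android's actual API:

from datetime import timedelta

class AppTimer:
    def __init__(self, daily_limit: timedelta, nudge_at: float = 0.9):
        self.daily_limit = daily_limit
        self.nudge_at = nudge_at          # warn at 90% of the limit
        self.used_today = timedelta()     # reset at midnight in a real system

    def record_usage(self, session: timedelta) -> str:
        self.used_today += session
        if self.used_today >= self.daily_limit:
            return "limit reached: gray out app icon until midnight"
        if self.used_today >= self.daily_limit * self.nudge_at:
            return "nudge: approaching your daily limit"
        return "ok"

timer = AppTimer(daily_limit=timedelta(minutes=30))
print(timer.record_usage(timedelta(minutes=20)))  # ok
print(timer.record_usage(timedelta(minutes=8)))   # nudge
print(timer.record_usage(timedelta(minutes=5)))   # limit reached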

Adaptive Battery and Adaptive Brightness

Google uses AI in many interesting ways, one of which is Adaptive Battery. AI sits at the core of the operating system: working with DeepMind, Google directs power to the apps Android P knows you are actively using, while pushing background processes onto the low-power cores.

Meanwhile, Adaptive Brightness observes how you manually adjust the screen in different environments and gradually learns from it. The end goal is that you will no longer have to change the brightness yourself.
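Conceptually, Adaptive Brightness learns a mapping from ambient light to the brightness the user tends to pick. The real feature uses on-device machine learning; the least-squares fit below is only a toy illustration of that mapping, with made-up observations:

import numpy as np

# (ambient lux, brightness the user manually chose, 0..1) -- dummy observations
samples = np.array([(5, 0.10), (50, 0.30), (400, 0.55), (3000, 0.80), (10000, 0.95)])
lux, chosen = samples[:, 0], samples[:, 1]

# Perceived light is roughly logarithmic, so fit brightness ~ a*log(lux) + b.
a, b = np.polyfit(np.log(lux), chosen, deg=1)

def suggest_brightness(ambient_lux: float) -> float:
    return float(np.clip(a * np.log(ambient_lux) + b, 0.0, 1.0))

print(round(suggest_brightness(150), 2))  # a mid-range suggestion for a dim indoor room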

Gesture-based navigation

The next big announcement concerns Android's launcher, which has been redesigned around a gesture-based navigation system. Swipe up from the bottom of the screen and your most recently used apps appear at the top of the screen in a carousel.

Where does Artificial Intelligence come in? The suggestions above the carousel are based on a study of your usage habits. Google calls this App Actions: the system uses Machine Learning, a subset of AI, to learn your habits and figure out what to surface and when.
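Conceptually, App Actions ranks suggestions from your usage history. The real system is a learned model; this hour-of-day frequency count is just a toy stand-in, with invented actions:

from collections import Counter, defaultdict

history = [  # (hour of day, action) -- a dummy log of past behavior
    (8, "open: news"), (8, "open: email"), (8, "open: news"),
    (18, "call: mom"), (18, "open: maps"), (18, "call: mom"),
]

by_hour = defaultdict(Counter)
for hour, action in history:
    by_hour[hour][action] += 1

def suggest(hour: int, k: int = 2):
    # Return the k actions most often taken around this hour.
    return [action for action, _ in by_hour[hour].most_common(k)]

print(suggest(8))   # ['open: news', 'open: email']
print(suggest(18))  # ['call: mom', 'open: maps']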


Google Lens

The company also announced a new version of Google Lens in beta. On the select Android devices that get Google Lens in their native camera apps, you can access the feature by opening the camera and tapping the Lens icon in the bottom-right corner.

Tapping the icon activates Google Lens, which lets you point your smartphone camera at objects and sights to pull up more information about them. Google Lens is also getting a real-time finder that examines what your camera is capturing even before you tap the display, processing the pixels with Machine Learning to surface more details and more relevant search tags.

Because Google Lens has AI built in, it gives users a bigger picture of their environment and can offer contextual suggestions for the objects around them. For instance, scanning a nearby coffee shop can bring up the café's menu, prices, and opening hours. Lens can also copy text from a book and save it to your phone.

Another key Google Lens feature is Style Match, built using object recognition and Machine Learning. Point your camera at an outfit or a bag you like, and Lens will help you buy the item online and even show you similar styles.
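A common way to build this kind of feature is to embed images as vectors and return catalog items whose embeddings sit closest to the photographed item. Google has not published Style Match's internals, so the sketch below – with dummy embeddings in place of a trained vision model – only illustrates that general approach:

import numpy as np

catalog = {  # item name -> (dummy) embedding from a vision model
    "red handbag":   np.array([0.9, 0.1, 0.2]),
    "blue sneakers": np.array([0.1, 0.8, 0.3]),
    "red clutch":    np.array([0.85, 0.15, 0.25]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def style_match(query_embedding, k=2):
    # Rank catalog items by visual similarity to the query photo.
    scored = sorted(catalog.items(),
                    key=lambda kv: cosine(query_embedding, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

photo = np.array([0.88, 0.12, 0.22])  # embedding of the user's photo
print(style_match(photo))             # ['red handbag', 'red clutch']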

Artificial Intelligence in Google Photos for effective editing and sharing

Google Photos already offers ready-to-use editing options – cropping, adjusting brightness, applying filters – after you take a photo. With Artificial Intelligence, though, Google Photos will suggest edits based on the content of the picture, letting you fix a photo with a single tap.


Google Photos can already scan printed photos and digitize them. Soon it will be able to tell who is in a photo and offer to share it with that person. AI will also be used to suggest colorization of black-and-white photos, recommending which colors best suit each part of the picture.

Conclusion

Google has unveiled exciting opportunities for Artificial Intelligence development and application across many fields; the above are the most exciting announcements from Google I/O 2018. Want to learn more? Catch up with Google's latest Artificial Intelligence developments on Google AI.
