What you need to know from today’s Google IO: Chatty AI, collab tools, TPU v4 chips, quantum computing

Google today opened its developer conference, the aptly named Google IO, with a somber nod to the ongoing COVID-19 pandemic from Alphabet CEO Sundar Pichai.

“In some places, people are beginning to live their lives again as cases decline. Other places like Brazil and my home country of India are going through their most difficult moments yet. We are thinking of you and hoping for better days ahead,” Pichai said, speaking outdoors at the Chocolate Factory’s Mountain View campus.

Last year, the coronavirus outbreak prompted Google to cancel its IO show entirely.

Pichai detailed new collaborative features in Google Workspace and several advances in AI software and hardware, including a promising conversational technology called Language Model for Dialogue Applications (LaMDA).

Google is using the term Smart Canvas to refer to a dozen enhancements added to Workspace that aim to improve collaboration and connect distinct apps like Docs, Sheets, Slides, and Meet.

“With Smart Canvas, we’re bringing together the content and connections that transform collaboration, into a richer, better experience,” explained Javier Soltero, VP and general manager of Google Workspace. “For over a decade, we’ve been pushing documents away from being just digital pieces of paper, and toward collaborative linked content inspired by the web. Smart Canvas is our next big step.”

As an example, Soltero described a scenario in which a team is collaborating on a shared Doc and the assisted writing feature suggests changing the word “Chairman” to “Chairperson” to “avoid a gendered term.”

A related initiative, discussed toward the end of the opening keynote, is Google’s work to revise its digital image processing algorithms to better photograph diverse skin tones in the Android Camera app and elsewhere.

Other Smart Canvas enhancements include: @-mentions of team members in Docs and (soon) Sheets, through which additional information like job title, location, and contact information can be made available; table templates in Docs; the ability to present Docs, Sheets, and Slides content in Meet events; and a pageless format in Docs for better viewing across multiple screen sizes, among others.

AI better at chatting back

Pichai then reviewed Google’s advances in AI over the past 22 years, focusing on language translation and image recognition. He described how natural language advances like the Transformer neural network architecture in 2017 and BERT in 2019 have made computers more capable of understanding natural language queries.

“Today I’m excited to share our latest breakthrough in natural language understanding, LaMDA, it’s a language model for dialogue applications,” he explained. “And it’s open domain, which means it’s designed to converse on any topic.”

Pichai then illustrated LaMDA’s conversational skills by presenting a conversation about Pluto between a person and LaMDA, with the AI model responding as if it were the dwarf planet. Missing from the sample dialogue were any of the nonsensical statements or misunderstandings that anyone who has engaged with conversational AI inevitably encounters, though LaMDA is still capable of messing up.

“It’s really impressive to see how LaMDA can carry on a conversation about any topic,” said Pichai. “It’s amazing how sensible and interesting the conversation is. Yet it’s still early research, so it doesn’t get everything right. Sometimes it can give nonsensical responses.”

Pichai said further work is being done to ensure LaMDA, which builds on research described in a 2020 paper, meets Google’s standards for fairness, accuracy, safety and privacy. Clearly, Google is keen to avoid a Microsoft Tay-grade fiasco whenever it gets around to integrating LaMDA into its own services, such as Search and Assistant.

Pichai also announced revised AI hardware, Google’s Tensor Processing Unit (TPU) v4. More than twice as fast as TPU v3, TPU v4 chips can be connected into supercomputers called pods that consist of 4,096 processors capable of delivering one exaflop, or 10^18 floating point operations per second.

“Think about it this way, if 10 million people were on their laptops right now, then all of those laptops put together would almost match the computing power of one exaflop,” said Pichai.

“This is the fastest system we’ve ever deployed at Google, and a historic milestone for us. Previously to get an exaflop, you needed to build a custom supercomputer, but we already have many of these deployed today.”
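Pichai’s figures can be sanity-checked with a little arithmetic: dividing one exaflop across a 4,096-chip pod gives the implied per-chip throughput, and dividing it across 10 million laptops gives the per-laptop rate his analogy assumes. The pod size and exaflop figure come from the keynote; the script below is just an illustrative back-of-the-envelope calculation.

```python
# Back-of-the-envelope check of the TPU v4 pod figures from the keynote.
POD_FLOPS = 1e18        # one exaflop: 10^18 floating point operations per second
CHIPS_PER_POD = 4096    # TPU v4 chips in a pod, per the keynote
LAPTOPS = 10_000_000    # laptops in Pichai's analogy

per_chip = POD_FLOPS / CHIPS_PER_POD   # implied throughput of a single TPU v4 chip
per_laptop = POD_FLOPS / LAPTOPS       # implied throughput of a single laptop

print(f"Per TPU v4 chip: {per_chip / 1e12:.0f} teraflops")
print(f"Per laptop (implied): {per_laptop / 1e9:.0f} gigaflops")
```

The implied ~100 gigaflops per laptop is a plausible ballpark for a consumer machine, which is roughly what makes the 10-million-laptop analogy hang together.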

Pichai said Google will soon have dozens of TPU v4 pods in its data centers, and these will be made available to Google Cloud customers later this year.

Going quantum

Google is also opening a Quantum AI Campus in Santa Barbara, California, which incorporates the company’s first quantum data center, a quantum hardware research lab, and a quantum chip fab.

Before ceding the stage to more esoteric, developer-specific presentations, Pichai also previewed a novel 3D video conferencing system called Project Starline.

“Using high resolution cameras and custom built depth sensors, we capture your shape and appearance from multiple perspectives, and then fuse them together to create an extremely detailed real time 3D model,” Pichai explained, noting that the company developed novel compression and streaming technology to shrink the massive amount of data, send it over the network, and display it on a novel light-field display that makes it look like you’re talking to a real person.

Pichai said Google intends to expand access to Project Starline to healthcare and media partners. ®

