Google Developers Blog


6th November 2018

Recap: Build Actions For Your Community

Posted by Leon Nicholls, Developer Programs Engineer

In March, we announced the "Build Actions for Your Community" Event Series. These events are run by Google Developers Groups (GDG) and other community groups to educate developers about Actions on Google through local meetup events.

The event series has now ended, having spanned 66 countries with a total of 432 events. These events reached 19,400 developers, with women making up 21% of attendees.

Actions on Google is of interest to developers globally, from Benin City, Nigeria, to Valparaíso, Chile, Hyderabad, India, Košice, Slovakia, and Omaha, Nebraska.

Developers in these cities experienced hands-on learning, including codelabs and activities to design and create Actions for their communities.

Many developers see creating Actions for the Google Assistant as a way of applying machine learning to solve real-world problems. Here, for example, are the winners of the #IndiaBuildsActions Campaign:

You can try Meditation Daily to help you relax, English King to learn about grammar, or Voice Cricket to play a game of cricket.

We also got valuable feedback directly from developers about how to improve the Actions on Google APIs and documentation. We learned that developers want to build Actions for feature phones and want the Assistant to support more languages. Developers also asked for more codelabs, more workshops, and more samples (we've since added a third codelab).

It was exciting to see how many developers shared their experiences on social media.

"Event series was impressive, Awesome and amazing. Knowledge well acquired" (Nigeria)

"The experience I had with the participants was unforgettable. Thank you" (Philippines)

It was also very encouraging to see that 76% of developers are likely to build new Actions and that most developers rated the Actions on Google platform better than other platforms.

Thanks to everybody who organized, presented, and attended these events all around the world. For even more events, join a local GDG DevFest to share ideas and learn about developing with Google's technologies. We can't wait to see what kinds of Actions you create for the Google Assistant!

Want more? Head over to the Actions on Google community to discuss Actions with other developers. Join the Actions on Google developer community program and you could earn a $200 monthly Google Cloud credit and an Assistant t-shirt when you publish your first app.


15th November 2018

Registration now open for DevFest OnAir!

Posted by Erica Hanson, Program Manager in Developer Relations

We're excited to announce the first official DevFest OnAir! DevFest OnAir is an online conference taking place on December 11th and 12th featuring sessions from DevFest events around the globe. These community-led developer sessions are hosted by GDGs (Google Developer Groups) focused on community, networking and learning about Google technologies. With over 500 communities and DevFest events happening all over the world, DevFest OnAir brings this global experience online for the first time!

Exclusive content.

DevFest OnAir includes exclusive content from Google in addition to content from the DevFest community. Watch content from up to three tracks at any time:

  • Cloud
  • Mobile
  • Voice, Web & more

Sessions cover multiple products such as Android, Google Cloud Platform, Firebase, Google Assistant, Flutter, machine learning with TensorFlow, and mobile web.

Tailored to your time zone.

Anyone can join, no matter where you are. We're hosting three broadcasts around the world, around the clock, so there's a convenient time to tune in, whether you're at home or at work.

Ask us a question live.

Our live Q&A forum will be open throughout the online event to spark conversation and get you the answers you need.


Join the fun with interactive trivia during DevFest OnAir, where you can win something special!

Every participant who tunes in live on December 11th will receive one free month of learning on Qwiklabs.

Sign up now

Registration is free. Sign up here.

Learn more about DevFest 2018 here and find a DevFest event near you here.

GDGs are local groups of developers interested in Google products and APIs. Each GDG group can host a variety of technical activities for developers - from just a few people getting together to watch the latest Google Developers videos, to large gatherings with demos, tech talks, or hackathons. Learn more about GDG here.

Follow us on Twitter and YouTube.


2nd November 2018

Migrating G Suite extensions from Chrome Web Store to G Suite Marketplace

Originally posted on the Google Cloud Blog by Greg Brosman, Product Manager, G Suite Marketplace

Starting today, we're making it possible for you to access all of your favorite G Suite extensions in one place by bringing add-ons and web apps from the Chrome Web Store into the G Suite Marketplace.

If you're not familiar with the G Suite Marketplace, it's the app store for G Suite. Whether you want to boost your productivity, take control of your calendar or do more from within your inbox, you can browse more than a thousand options to customize how you work in G Suite. IT admins also have the ability to manage access and controls of apps from within the G Suite Marketplace—like whitelisting app access for users or installing an app for an entire domain (read more about best practices here). If you're an admin, you can access the marketplace from within the Admin console (Go to Tools > G Suite Marketplace).

How to migrate existing apps if you're a developer

Going forward, new G Suite extensions will be listed only on the G Suite Marketplace to make it easier for you to manage your listings. This includes all G Suite apps with add-ons, like Docs, Sheets and Drive. If you have existing apps listed on the Chrome Web Store, you'll have 90 days to migrate them. Here are specific instructions for editor add-ons, Drive v3 apps, and Drive v2 apps to get that process started. Ratings and reviews will be included in the migration, and existing users will continue to be able to use their apps.

We look forward to seeing your apps on G Suite Marketplace!


31st October 2018

Sean Medlin: A ‘Grow with Google Developer Scholarship’ Success Story

Originally published on the Udacity blog by Stuart Frye, VP for Business Development

This deserving scholarship recipient overcame incredible odds to earn this opportunity, and he's now on the path to achieving a career dream he's harbored since childhood!

Sean Medlin is a young man, but he's already experienced a great deal of hardship in his life. He's had to overcome the kinds of obstacles that too often stop people's dreams in their tracks, but he's never given up. Sustained by a lifelong love for computers, an unshakeable vision for his future, and a fierce commitment to learning, Sean has steadfastly pursued his life and career goals. He's done so against the odds, often without knowing whether anything would pan out.

Today, Sean Medlin is a Grow with Google Developer Scholarship recipient, on active duty in the US Air Force, with a Bachelor of Computer Science degree. He's married to a woman he says is "the best in the world" and he's just become a father for the second time. It's been a long journey for a boy who lost his sister to cancer before he'd reached adulthood, and whose official education record listed him as having never made it past the eighth grade.

But Sean keeps finding a way forward.

The experience of getting to know people like Sean is almost too powerful to describe, but experiences like these are at the heart of why the Grow with Google Developer Scholarship is such an impactful initiative for us. It's one thing to read the numbers at a high level, and feel joy and amazement that literally thousands of deserving learners have been able to advance their lives and careers through the scholarship opportunities they've earned. However, it's an entirely different experience to witness the transformative power of opportunity at the individual, human level. One person. Their life. Their dreams. Their challenges, and their successes.

It's our pleasure and our honor to introduce you to Sean Medlin, and to share his story.

You've spoken about your love for computers; when did that begin?

When I was around eight or nine, I inherited a computer from my parents and just started picking it apart and putting it back together. I fell in love with it and knew it was something I wanted to pursue as a career. By the time I reached the seventh grade I decided on a computer science degree, and knew I was already on the path to it—I was top of math and science in my class at that point.

And then things changed for your family. What happened?

My sister, who was just a year old at the time, was diagnosed with cancer; stage four. For the next several years, she fought it, and at one point beat it; but unfortunately, it came back. When she relapsed, she started receiving treatments at a research hospital about eight hours from where we lived. Because of this, our family was constantly separated. My brothers and I usually stayed at family and friends' houses. Eventually, my parents pulled us out of school so we could travel with them. We stayed at hotels or the Ronald McDonald House, really wherever we could find a place to stay. We eventually moved to Memphis, where the hospital is located. During all of this, I was homeschooled, but I really didn't learn a whole lot, given the circumstances. When my sister passed away, our family went through a terrible time. I personally took it hard and became lackadaisical. Eventually, I decided that regardless of what wrenches life was throwing me, I would not give up on my dream.

So you were still determined to further your education; what did you do?

Well, in what was my senior year, I decided to start thinking about college. I started googling, and the first thing I discovered is that I needed a high school diploma. So I found my way to the education boards in Oklahoma. I learned that I was never properly registered as a homeschool student. So my record shows that I dropped out of school my eighth grade year. I was pretty devastated. My only option was to go and get my GED, so that's what I did.

Computer science was still your passion; were you able to start pursuing it after earning your GED?

Well, I had to take a lot of prerequisites before I could even start a computer science degree. I mean, a lot! Which was frustrating, because it took more money than I had. I tried applying for financial aid, but I wasn't able to get very much. I looked like an eighth grade dropout with a GED. That's all anyone saw.

So you found another way to pay for your schooling; what was that?

I decided to join the United States Air Force. I couldn't pay for my own education anymore, and the Air Force was offering tuition assistance. That was the best option I had. I have no military history in my family, and at first my friends and family were against the idea, worried I'd be overseas too much. But I was determined I was going to finish school and get my computer science degree and work in this field.

It sounds like the work you started doing in the Air Force wasn't really related to your desired career path, but you were still able to continue your education?

That's right. The career path I joined was supposedly tech-related, but it wasn't. I enlisted as a munition systems technology troop, or in other words, an ammo troop. It wasn't really in line with my goals, but the tuition assistance made it possible for me to keep studying computer science online. There was a tuition assistance cap though, and between that, and how much my supervisors were willing to approve, I was only able to take two classes per semester. But I kept plugging away, even using my own money to pay for some of it. It took me eight years while working in the Air Force, but I completed my computer science degree last March. I finished with a 3.98 GPA and Summa Cum Laude, the highest distinction!

That's an outstanding accomplishment, congratulations! Did you feel ready to enter the field and start working at that point?

Not at all! I definitely learned that I wasn't prepared for the programming world based just off my bachelor's degree. It taught me all the fundamentals, which was great. I learned the theory, and how to program, but I didn't really learn how to apply what I'd learned to real-world situations.

You'd had a great deal of experience with online learning by that point; is that where you went looking to determine your next steps?

Yes! I tried everything. I did some free web development boot camps. I discovered Udemy, and tried a bunch of their courses, trying to learn different languages. Then I found Udacity. I started off with free courses. I really fell in love with Java, and that's what initially brought me to Udacity's Android courses. The satisfaction of making an app, it just pulled me in. It was something I could show my wife, and my friends. I knew it was what I wanted to pursue.

And then you heard about the Google Scholarship?

Well, I was actually working out how I was going to pay for a Nanodegree program myself when the scholarship opportunity emerged. I applied, and was selected for the challenge course. I knew when I got selected, that I only had three months, and that they were going to pick the top 10 percent of the students, after those three months were up, to get the full scholarship. My son was only about a year old then, and my wife became pregnant again right when I found out about the scholarship. I told her, "I'm going to knock this course out as fast as possible. But I need you to help me buckle down." She took care of my son as much as possible, and I finished the challenge course in about two weeks. I was determined. I wanted to show I could do it. Afterwards, I became one of the student mentors and leaders, and constantly stayed active in the channels and forums. I just did as much as I could to prove my worth.

Those efforts paid off, and you landed a full Google Scholarship for the Android Basics Nanodegree program. And now you have some good news to share, is that right?

Yes, I successfully completed the Android Basics Nanodegree program on July 29th!

How are you approaching your career goals differently now?

Well, completing the projects in my Nanodegree program really improved my confidence and performance in technical interviews. When I first graduated with my bachelor's degree, I applied for a few jobs and went through a couple technical interviews. I felt completely lost, and became nervous about doing them going forward. Once I completed the Nanodegree program, I went through another technical interview and felt so prepared. I knew every answer, and I knew exactly what I was talking about.

As it turns out, you've actually earned new opportunities within the Air Force. Can you tell us about that?

The base I'm at is considered an IT hub for the Air Force, and the Air Force recently decided to start building mobile apps organically, utilizing our service members. Soon after this was decided, senior leadership began searching for the best and brightest programmers to fill this team. I was not only recommended, but they looked over my projects from the Nanodegree program, and deemed I was one of the most qualified! Normally, opportunities like this are strictly prohibited to anyone outside the requested Air Force specialty code, so I wasn't getting my hopes up. That restriction didn't stop senior leadership. As of right now, I'm part of the mobile app team, and the only ammo troop developing mobile apps for the Air Force, in the entire world!

So what does the future hold for you next?

I feel like the last 15 years of my life have been leading up to where I'm at now. I want to pursue a job as a software developer—an Android developer, in Silicon Valley! Ever since I was a kid, I've had the dream of being a developer at Blizzard. I was a huge World of Warcraft nerd during my homeschooled years. However, I'm okay if I fall a little short of that. I really just want to be surrounded by other programmers. I want to learn from them. It's what I've always wanted. To become a programmer. The idea of leaving the military is really scary though. The thought of not being able to get a job … it's scary, it's a lot of different emotions. But my aspiration is to become a full-time software developer for a big tech company, in a nice big city.

How does your wife feel about all of this?

My wife is the best woman in the world. She wants to follow me wherever the wind takes us. She's very proud of me, and I'm very proud of her too. She does a lot. I wouldn't be able to do what I do without her. That's for sure.

I think I speak for everyone at Udacity when I say that no one here has any doubt you'll achieve whatever you set out to achieve!

It's often said that hindsight is 20/20, and in hindsight, it's tempting to say we helped create the Grow with Google Developer Scholarship just for people like Sean. To say that, however, would be doing a disservice to him. His journey, and his accomplishments, are unique. The truth is, we didn't know who we'd meet when we launched this initiative. Yet here we are today, celebrating all that Sean has accomplished!

To have played a role in his story is an honor we couldn't have predicted, but it's one we'll treasure always.

Sean, congratulations on your success in the scholarship program, and for everything you've achieved. Whether you elect to stay in the military, or make your way to California with your family, we know you'll continue to do great things!

Growing Careers and Skills Across the US

Grow with Google is a new initiative to help people get the skills they need to find a job. Udacity is excited to partner with Google on this powerful effort, and to offer the Developer Scholarship program.

Grow with Google Developer scholars come from different backgrounds, live in different cities, and are pursuing different goals in the midst of different circumstances, but they are united by their efforts to advance their lives and careers through hard work, and a commitment to self-empowerment through learning. We're honored to support their efforts, and to share the stories of scholars like Sean.


12th November 2018

Elevating user trust in our API ecosystem

Posted by Andy Wen, Group Product Manager

Google API platforms have a long history of enabling a vibrant and secure third-party app ecosystem for developers—from the original launch of OAuth which helped users safeguard passwords, to providing fine-grained data-sharing controls for APIs, to launching controls to help G Suite admins manage app access in the workplace.

In 2018, we launched Gmail Add-ons, a new way for developers to integrate their apps into Gmail across platforms. Gmail Add-ons also offer a stronger security model for users because email data is only shared with the developer when a user takes action.

We've continually strengthened these controls and policies over the years based on user feedback. While the controls that are in place give people peace-of-mind and have worked well, today, we're introducing even stronger controls and policies to give our users the confidence they need to keep their data safe.

To provide additional assurances for users, today we are announcing new policies, focused on Gmail APIs, which will go into effect January 15, 2019. We are publishing these changes in advance to provide time for developers who may need to adjust their apps or policies to comply.

Of course, we encourage developers to migrate to Add-ons where possible as their preferred platform for the best privacy and security for users (developers also get the added bonus of listing their apps in the G Suite Marketplace to reach five million G Suite businesses). Let's review the policy updates:


To better ensure that user expectations align with developer uses, the following policies will apply to apps accessing user data from consumer Google accounts (Note: as always, G Suite admins have the ability to control access to their users' applications. Read more.).

Appropriate Access: Only permitted Application Types may access these APIs.

Users typically interact with their email directly through email clients and productivity tools. Applications that access a user's email without regular, direct interaction from the user (for example, services that provide reporting or monitoring) will trigger additional warnings, and we will require users to regrant access at regular intervals.

How Data May Not Be Used: 3rd-party apps accessing these APIs must use the data to provide user-facing features and may not transfer or sell the data for other purposes such as targeting ads, market research, email campaign tracking, and other unrelated purposes. (Note: Gmail users' email content is not used for ads personalization.)

As an example, consolidating data from a user's email for their direct benefit, such as expense tracking, is a permitted use case. Consolidating the expense data for market research that benefits a third party is not permitted.

We have also clarified that human review of email data must be strictly limited.

How Data Must Be Secured: It is critical that 3rd-party apps handling Gmail data meet minimum security standards to minimize the risk of data breach. Apps will be asked to demonstrate secure data handling with assessments that include: application penetration testing, external network penetration testing, account deletion verification, reviews of incident response plans, vulnerability disclosure programs, and information security policies.

Applications that only store user data on end-user devices will not need to complete the full assessment but will need to be verified as non-malicious software. More information about the assessment will be posted here in January 2019. Existing Applications (as of this publication date) will have until the end of 2019 to complete the assessment.

Accessing Only Information You Need: During application review, we will be tightening compliance with our existing policy on limiting API access to only the information necessary to implement your application. For example, if your app does not need full or read access and only requires send capability, we require you to request narrower scopes so the app can only access data needed for its features.
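To make the "narrower scopes" guidance concrete, here is a minimal sketch of how an app might map its features to the narrowest Gmail scope that supports each one instead of requesting full access up front. The scope URLs are real Gmail API scopes; the feature names and the helper function itself are hypothetical illustrations, not part of any Google library.

```javascript
// Real Gmail OAuth scope URLs, ordered roughly from narrowest to broadest.
const GMAIL_SCOPES = {
  send: 'https://www.googleapis.com/auth/gmail.send',
  readonly: 'https://www.googleapis.com/auth/gmail.readonly',
  modify: 'https://www.googleapis.com/auth/gmail.modify',
  full: 'https://mail.google.com/',
};

// Hypothetical helper: resolve each app feature to the narrowest scope
// that supports it, so the consent request matches what the app does.
function scopesForFeatures(features) {
  const needed = new Set();
  for (const feature of features) {
    if (feature === 'sendMail') needed.add(GMAIL_SCOPES.send);
    else if (feature === 'readMail') needed.add(GMAIL_SCOPES.readonly);
    else if (feature === 'labelMail') needed.add(GMAIL_SCOPES.modify);
    else throw new Error(`Unknown feature: ${feature}`);
  }
  return [...needed];
}
```

For example, an app whose only email feature is sending would request just the `gmail.send` scope rather than `https://mail.google.com/`.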

Additional developer help documentation will be posted in November 2018 so that developers can assess the impact to their app and begin planning for any necessary changes.

Application Review

All apps accessing the restricted scopes will be required to submit an application review starting on January 15, 2019. If a review is not submitted by February 15, 2019, then new grants from Google consumer accounts will be disabled after February 22, 2019 and any existing grants will be revoked after March 31, 2019.

Application reviews will be submitted from the Google API Console. To ensure related communications are received, we encourage developers to update project roles (learn more) so that the associated email addresses or email group are up-to-date.

For more details about the restricted scope app verification, please visit this FAQ.


8th October 2018

More granular Google Account permissions with Google OAuth and APIs

Posted by Adam Dawes, Senior Product Manager

Google offers a wide variety of APIs that third-party app developers can use to build features for Google users. Granting access to this data is an important decision. Going forward, consumers will get more fine-grained control over what account data they choose to share with each app.

Over the next few months, we'll start rolling out an improvement to our API infrastructure. We will show each permission that an app requests one at a time, within its own dialog, instead of presenting all permissions in a single dialog*. Users will have the ability to grant or deny permissions individually.

To prepare for this change, there are a number of actions you should take with your app:

  • Review the Google API Services: User Data Policy and make sure you are following it.
  • Before making an API call, check to see if the user has already granted permission to your app. This will help you avoid insufficient permission errors which could lead to unexpected app errors and a bad user experience. Learn more about this by referring to documentation on your platform below:
    • Documentation for Android
    • Documentation for the web
    • Documentation for iOS
  • Request permissions only when you need them. You'll be able to stage when each permission is requested, and we recommend being thoughtful about doing this in context. You should avoid asking for multiple scopes at sign-in, when users may be using your app for the first time and are unfamiliar with the app's features. Bundling together a request for several scopes makes it hard for users to understand why your app needs the permission and may alarm and deter them from further use of your app.
  • Provide justification before asking for access. Clearly explain why you need access, what you'll do with a user's data, and how they will benefit from providing access. Our research indicates that these explanations increase user trust and engagement.
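The "check before you call" advice above amounts to verifying that the scopes your call needs are a subset of the scopes the user has already granted. This is a hedged, dependency-free sketch of that guard; the function names, and the idea of passing in an `apiCall` and `requestConsent` callback, are illustrative assumptions rather than any Google SDK API.

```javascript
// Hypothetical helper: before calling an API, verify the user has
// already granted every scope the call needs, to avoid
// insufficient-permission errors at runtime.
function hasGrantedScopes(grantedScopes, requiredScopes) {
  const granted = new Set(grantedScopes);
  return requiredScopes.every((scope) => granted.has(scope));
}

// Sketch of the guard an app might place around an API call:
// proceed if authorized, otherwise request only the missing scopes
// in context rather than failing with an error.
function callIfAuthorized(grantedScopes, requiredScopes, apiCall, requestConsent) {
  if (hasGrantedScopes(grantedScopes, requiredScopes)) {
    return apiCall();
  }
  const missing = requiredScopes.filter((s) => !grantedScopes.includes(s));
  return requestConsent(missing);
}
```

In a real app the granted-scope list would come from your platform's sign-in SDK (see the platform documentation linked above) rather than being passed in by hand.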

An example of contextual permission gathering

These changes will begin to roll out to new clients starting this month and will get extended to existing clients at the beginning of 2019. Google continues to invest heavily in our developer tools and platforms. Together with the changes we made last year, we expect this improvement will help increase transparency and trust in our app ecosystem.

We look forward to working with you through this change. If you have feedback, please comment below. Or, if you have any technical questions, please post them on stackoverflow under the google-oauth tag.

*Our different login scopes (profile, email, and openid) are all combined in the same consent dialog and don't need to be requested separately.


5th October 2018

Share your #DevFest18 story!

Posted by Erica Hanson, Developer Communities Program Manager

GDGs in over 80 countries are planning a DevFest this year!

Our GDG community is very excited as they aim to connect with 100,000 developers at 500 DevFests around the world to learn, share and build new things.

Most recently, GDG Nairobi hosted the largest developer festival in Kenya. On September 22nd, DevFest Nairobi engaged 1,200+ developers, from 26+ African countries, with 37% women in attendance! They had 44 sessions, 4 tracks and 11 codelabs facilitated by 5 GDEs (Google Developer Experts) among other notable speakers. The energy was so great, #DevFestNairobi was trending on Twitter that day!

GDG Tokyo held their third annual DevFest this year on September 1st, engaging with over 1,000 developers! GDG Tokyo hosted 42 sessions, 6 tracks and 35 codelabs by partnering with 14 communities specializing in technology including 3 women-led communities (DroidGirls, GTUG Girls, and XR Jyoshibu).

Share your story!

Our community is interested in hearing about what you learned at DevFest. Use #DevFestStories and #DevFest18 on social media. We would love to re-share some of your stories here on the Google Developers blog and Twitter! Check out a few great examples below.

Learn more about DevFest 2018 here and find a DevFest event near you here.



4th October 2018

Four tips for building great transactional experiences for the Google Assistant

Posted by Mikhail Turilin, Product Manager, Actions on Google

Building engaging Actions for the Google Assistant is just the first step in your journey for delivering a great experience for your users. We also understand how important it is for many of you to get compensated for your hard work by enabling quick, hands-free transactional experiences through the Google Assistant.

Let's take a look at some of the best practices you should consider when adding transactions to your Actions!

1. Use Google Sign-In for the Assistant

Traditional account linking requires the user to open a web browser and manually log in to a merchant's website. This can lead to higher abandonment rates for a couple of reasons:

  1. Users need to enter a username and password, which they often can't remember
  2. Even if the user started the conversation on Google Home, they have to use a mobile phone to log in to the merchant's website

Our new Google Sign-In for the Assistant flow solves this problem. By implementing this authentication flow, your users will only need to tap twice on the screen to link their accounts or create a new account on your website. Connecting individual user profiles to your Actions gives you an opportunity to personalize your customer experience based on your existing relationship with a user.

And if you already have a loyalty program in place, users can accrue points and access discounts with account linking with OAuth and Google Sign-In.

Head over to our step-by-step guide to learn how to incorporate Google Sign-In.
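In the conversation webhook, the outcome of the Google Sign-In flow is reported back with a status your fulfillment can branch on. The sketch below models only that branching; the status values follow the `'OK'` / `'CANCELLED'` / `'ERROR'` convention used by Actions on Google helper results, while the function name and response shape are hypothetical (a real webhook would use the Actions on Google client library).

```javascript
// Hypothetical sketch of handling a sign-in result in fulfillment.
// `signin` stands in for the parsed sign-in helper result.
function handleSignInResult(signin) {
  if (signin && signin.status === 'OK') {
    // Account is linked: personalize using the user's profile.
    return { linked: true, prompt: 'Thanks for signing in! Your account is linked.' };
  }
  // Sign-in was cancelled or failed: fall back to a guest
  // experience instead of a conversational dead end.
  return { linked: false, prompt: 'No problem, you can continue as a guest.' };
}
```

Offering a graceful guest path for the non-`'OK'` cases keeps the two-tap linking flow optional rather than a gate on the whole Action.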

2. Simplify the order process with a re-ordering flow

Most people prefer to use the Google Assistant quickly, whether they're at home or on the go. So if you're a merchant, you should look for opportunities to simplify the ordering process.

Choosing a product from a list of many dozens of items takes a really long time. That's why many consumers enjoy the ability to quickly reorder items when shopping online. Implementing reordering with Google Assistant provides an opportunity to solve both problems at the same time.

Reordering is based on a user's history of previous purchases. You will need to implement account linking to identify returning users. Once the account is linked, look up the order history on your backend and present the choices to the user.
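As a rough illustration of the backend side, here is a hedged sketch of turning a linked user's order history into a short list of reorder suggestions. The function name and the order data shape are assumptions for this example; the one design point it encodes is keeping the choice list small, which matters in a voice interface.

```javascript
// Hypothetical sketch: given a user's past orders, suggest the few
// items they order most often, capped so a voice prompt stays short.
function reorderSuggestions(orderHistory, maxChoices = 3) {
  const counts = new Map();
  for (const order of orderHistory) {
    for (const item of order.items) {
      counts.set(item, (counts.get(item) || 0) + 1);
    }
  }
  // Rank items by how often they were ordered, most frequent first.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, maxChoices)
    .map(([item]) => item);
}
```

The resulting list could then be read out as a simple "Would you like your usual…?" prompt rather than a long menu.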

Just Eat, an online food ordering and delivery service in the UK, focuses on reordering as one of their core flows because they expect their customers to use the Google Assistant to reorder their favorite meals.

3. Use Google Pay for a more seamless checkout

Once a user has decided they're ready to make a purchase, it's important to provide a quick checkout experience. To help, we've expanded payment options for transactions to include Google Pay, a fast, simple way to pay online, in stores, and in the Google Assistant.

Google Pay reduces customer friction during checkout because it's already connected to users' Google accounts. Users don't need to go back and forth between the Google Assistant and your website to add a payment method. Instead, users can share the payment method that they have on file with Google Pay.

Best of all, it's simple to integrate – just follow the instructions in our transactions docs.

4. Support voice-only Actions on the Google Home

At I/O, we announced that voice-only transactions for Google Home are now supported in the US, UK, Canada, Germany, France, Australia, and Japan. A completely hands-free experience will give users more ways to complete transactions with your Actions.

Here are a few things to keep in mind when designing your transactions for voice-only surfaces:

  • Build easy-to-follow dialogue, because users won't see the dialogue or suggestion chips that are available on phones.
  • Avoid inducing choice paralysis. Focus on a few simple choices based on customer preferences collected during their previous orders.
  • Localize your transactional experiences for new regions – learn more here.
  • Don't forget to enable your transactions to work on smart speakers in the console.
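For instance, a fulfillment could branch on the surface's capabilities, folding choices into speech when no screen is present. The helper below is our own sketch; only the capability string comes from the platform:

```javascript
// Illustrative helper (not part of the Actions on Google client library):
// tailor a response to the surface's capabilities. The capability string
// matches the one Actions on Google uses for screen output.
var SCREEN_OUTPUT = 'actions.capability.SCREEN_OUTPUT';

function buildResponse(capabilities, speech, chips) {
  var response = {speech: speech};
  if (capabilities.indexOf(SCREEN_OUTPUT) >= 0) {
    // Phones and Smart Displays can show tappable suggestion chips.
    response.suggestions = chips;
  } else {
    // Voice-only: fold the choices into the spoken prompt instead.
    response.speech = speech + ' You can say: ' + chips.join(', ') + '.';
  }
  return response;
}
```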

Learn more tips in our Conversation Design Guidelines.

As we expand support for transactions in new countries and on new Google Assistant surfaces, now is the perfect time to make sure your transactional experiences are designed with users in mind so you can increase conversion and minimize drop-off.


3rd October 2018 |

Make money from your Actions, create better user experiences

Posted by Tarun Jain, Group PM, Actions on Google

The Google Assistant helps you get things done across the devices you have at your side throughout your day--a bedside smart speaker, your mobile device while on the go, or even your kitchen Smart Display when winding down in the evening.

One of the common questions we get from developers is: how do I create a seamless path for users to complete purchases across all these types of devices? Another: how can I better personalize my experience for users on the Assistant with privacy in mind?

Today, we're making these easier for developers with support for digital goods and subscriptions, and Google Sign-in for the Assistant. We're also giving the Google Assistant a complete makeover on mobile phones, enabling developers to create even more visually rich integrations.

Start earning money with premium experiences for your users

While we've offered transactions for physical goods for some time, starting today, you will also be able to offer digital goods, including one-time purchases like upgrades--expansion packs or new levels, for example--and even recurring subscriptions, directly within your Action.

Users can complete these transactions while in conversation with your Action through speakers, phones, and Smart Displays. This will be supported in the U.S. to start, with more locales coming soon.

Headspace, for example, now offers Android users an option to subscribe to their plans, meaning users can purchase a subscription and immediately see an upgraded experience while talking to their Action. Try it for yourself by telling your Google Assistant, "meditate with Headspace."

Volley added digital goods to their role-playing game Castle Master so users could enhance their experience by purchasing upgrades. Try it yourself by asking your Google Assistant to "play Castle Master."

You can also ensure a seamless premium experience as users move between your Android app and Action for Assistant by letting users access their digital goods across their relationship with you, regardless of where the purchase originated. You can manage your digital goods for both your app and your Action in one place, in the Play Console.
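For illustration, a fulfillment webhook that receives the user's entitlements could gate premium content like this (the field names are simplified and illustrative, not the exact platform payload):

```javascript
// Sketch of checking whether a user already owns a digital good, so the
// Action can unlock the same premium content as the Android app.
// Entitlements are reported per Android package; field names here are
// simplified for illustration.
function ownsSku(packageEntitlements, packageName, sku) {
  return packageEntitlements.some(function (pkg) {
    return pkg.packageName === packageName &&
        pkg.entitlements.some(function (e) { return e.sku === sku; });
  });
}
```

Because purchases are managed in one place, the same check works whether the user bought the item in your app or in your Action.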

Simplified account linking and user personalization

Once your users have access to a premium experience with digital goods, you will want to make sure your Action remembers them. To help with that, we're also introducing Google Sign-In for the Assistant, a secure authentication method that simplifies account linking for your users and reduces user drop-off during login. Google Sign-In provides the most convenient way to log in, with just a few taps. With Google Sign-In, users can even use just their voice to log in and link accounts on smart speakers with the Assistant.

In the past, account linking could be a frustrating experience for your users; having to manually type a username and password--or worse, create a new account--breaks the natural conversational flow. With Google Sign-In, users can now create a new account with just a tap or confirmation through their voice. Most users can even link to their existing accounts with your service using their verified email address.

For developers, Google Sign-In also makes it easier to support login and personalize your Action for users. Previously, developers needed to build an account system and support OAuth-based account linking in order to personalize their Action. Now, you have the option to use Google Sign-In to support login for any user with a Google account.

Starbucks added Google Sign-In for the Assistant to enable users of their Action to access their Starbucks Rewards™ accounts and earn stars for their purchases. Since adding Google Sign-In for the Assistant, they've seen login conversion nearly double for their users versus their previous implementation that required manual account entry.

Check out our guide on the different authentication options available to you, to understand which best meets your needs.

A new visual experience for the phone

Today, we're launching the first major makeover for the Google Assistant on phones, bringing a richer, more interactive interface to the devices we carry with us throughout the day.

Since the Google Assistant made its debut, we've noticed that nearly half of all interactions with the Assistant today include both voice and touch. With this redesign, we're making the Assistant more visually assistive for users, combining voice with touch in a way that gives users the right controls in the right situations.

For developers, we've also made it easy to bring great multimodal experiences to life on the phone and other Assistant-enabled devices with screens, including Smart Displays. This presents a new opportunity to express your brand through richer visuals and with greater real estate in your Action.

To get started, you can now add rich responses to customize your Action for visual interfaces. With rich responses you can build visually engaging Actions for your users with a set of plug-and-play visual components for different types of content. If you've already added rich responses to your Action, these will work automatically on the new mobile redesign. Be sure to also check out our guidance on how and when to use visuals in your Action.
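For example, a webhook response that pairs speech with a basic card might be assembled like this (the field names follow the rich response format; the helper function and content are our own illustration):

```javascript
// Illustrative sketch: build a rich response payload pairing a spoken
// answer with a basic card for surfaces that have a screen. The helper
// and content are made up; field names follow the rich response format.
function buildCardResponse(title, text, speech) {
  return {
    richResponse: {
      items: [
        {simpleResponse: {textToSpeech: speech}},
        {basicCard: {title: title, formattedText: text}}
      ]
    }
  };
}
```

If a surface has no screen, the card item is simply not rendered, so the same payload degrades gracefully to speech.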

Below you can find some examples of the ways some partners and developers have already started to make use of rich responses to provide more visually interactive experiences for Assistant users on phones.

You can try these yourself by asking your Google Assistant to "order my usual from Starbucks," "ask H&M Home to give inspiration for my kitchen," "ask Fitstar to workout," or "ask Food Network for chicken recipes."

Ready to get building? Check out our documentation on how to add digital goods and Google Sign-In for Assistant to create premium and personalized experiences for your users across devices.

To improve your visual experience for phone users, check out our conversation design site, our documentation on different surfaces, and our documentation and sample on how you can use rich responses to build with visual components. You can also test and debug your different types of voice, visual, and multimodal experiences in the Actions simulator.

Good luck building, and please continue to share your ideas and feedback with us. Don't forget that once you publish your first Action you can join our community program* and receive your exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit.

*Some countries are not eligible to participate in the developer community program, please review the terms and conditions


10th October 2018 |

Start a new .page today

Posted by Ben Fried, VP, CIO, & Chief Domains Enthusiast

Update: .page is now open for general registration! Find participating registrar partners at

Today we're announcing .page, the newest top-level domain (TLD) from Google Registry.

A TLD is the last part of a domain name, like .com in "" or .google in "". The .page TLD is a new opportunity for anyone to build an online presence. Whether you're writing a blog, getting your business online, or promoting your latest project, .page makes it simple and more secure to get the word out about the unique things you do.

Check out 10 interesting things some people and businesses are already doing on .page:

  1. Ellen.Page is the website of Academy Award®-nominated actress and producer Ellen Page that will spotlight LGBTQ culture and social causes.
  2. Home.Page is a project by the digital media artist Aaron Koblin, who is creating a living collection of hand-drawn houses from people across the world. Enjoy free art daily and help bring real people home by supporting revolving bail.
  3. ChristophNiemann.Page is the virtual exhibition space of illustrator, graphic designer, and author Christoph Niemann.
  4. Web.Page is a collaboration between a group of designers and developers who will offer a monthly online magazine with design techniques, strategies, and inspiration.
  5. CareerXO.Page by Geek Girl Careers is a personality assessment designed to help women find tech careers they love.
  6. TurnThe.Page by Insurance Lounge offers advice about the transition from career to retirement.
  7. WordAsImage.Page is a project by designer Ji Lee that explores the visualizations of words through typography.
  8. Membrane.Page by Synder Filtration is an educational website about spiral-wound nanofiltration, ultrafiltration, and microfiltration membrane elements and systems.
  9. TV.Page is a SaaS company that provides shoppable video technology for e-commerce, social media, and retail stores.
  10. Navlekha.Page was created by Navlekhā, a Google initiative that helps Indian publishers get their content online with free authoring tools, guidance, and a .page domain for the first 3 years. Since the initiative debuted at Google for India, publishers are creating articles within minutes. And Navlekhā plans to bring 135,000 publishers online over the next 5 years.

Security is a top priority for Google Registry's domains. To help keep your information safe, all .page websites require an SSL certificate, which helps keep connections to your domain secure and helps protect against things like ad malware and tracking injections. Both .page and .app, which we launched in May, will help move the web to an HTTPS-everywhere future.

.page domains are available now through the Early Access Program. For an extra fee, you'll have the chance to get the perfect .page domain name from participating registrar partners before standard registrations become available on October 9th. For more details about registering your domain, check out We're looking forward to seeing what you'll build on .page!


28th September 2018 |

Google Fonts launches Japanese support

Posted by the Google Fonts team

The Google Fonts catalog now includes Japanese web fonts. Since shipping Korean in February, we have been working to optimize the font slicing system and extend it to support Japanese. The optimization efforts proved fruitful—Korean users now transfer on average over 30% fewer bytes than our previous best solution. This type of ongoing optimization is a major goal of Google Fonts.

Japanese presents many of the same core challenges as Korean:

  1. Very large character set
  2. Visually complex letterforms
  3. A complex writing system: Japanese uses several distinct scripts (explained well by Wikipedia)
  4. More character interactions: Line layout features (e.g. kerning, positioning, substitution) break when they involve characters that are split across different slices

The impact of the large character set made up of complex glyph contours is multiplicative, resulting in very large font files. Meanwhile, the complex writing system and character interactions forced us to refine our analysis process.

To begin supporting Japanese, we gathered character frequency data from millions of Japanese webpages and analyzed them to inform how to slice the fonts. Users download only the slices they need for a page, typically avoiding the majority of the font. Over time, as they visit more pages and cache more slices, their experience becomes ever faster. This approach is compatible with many scripts because it is based on observations of real-world usage.

Frequency of the popular Japanese and Korean characters on the web

As shown above, Korean and Japanese have a relatively small set of characters that are used extremely frequently, and a very long tail of rarely used characters. On any given page most of the characters will be from the high frequency part, often with a few rarer characters mixed in.

We tried fancier segmentation strategies, but the most performant method for Korean turned out to be simple:

  1. Put the 2,000 most popular characters in a slice
  2. Put the next 1,000 most popular characters in another slice
  3. Sort the remaining characters by Unicode codepoint number and divide them into 100 equally sized slices
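In code, the three-step strategy might look like this sketch (the character data and function name are illustrative; the real pipeline operates on font tables rather than strings):

```javascript
// Sketch of the slicing scheme above: characters arrive sorted by
// descending frequency; the long tail is re-sorted by codepoint and split
// into 100 equally sized slices. Sizes come from the strategy in the post.
function sliceByStrategy(charsByFrequency) {
  var popular = charsByFrequency.slice(0, 2000);   // most popular slice
  var next = charsByFrequency.slice(2000, 3000);   // next 1,000 characters
  var rest = charsByFrequency.slice(3000)
      .sort(function (a, b) { return a.codePointAt(0) - b.codePointAt(0); });
  var slices = [popular, next];
  var size = Math.ceil(rest.length / 100);
  for (var i = 0; i < rest.length; i += size) {
    slices.push(rest.slice(i, i + size));
  }
  return slices;
}
```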

A user of Google Fonts viewing a webpage will download only the slices needed for the characters on the page. This yielded great results, as clients downloaded 88% fewer bytes than a naive strategy of sending the whole font. While brainstorming how to make things even faster, we had a bit of a eureka moment, realizing that:

  1. The core features we rely on to efficiently deliver sliced fonts are unicode-range and woff2
  2. Browsers that support unicode-range and woff2 also support HTTP/2
  3. HTTP/2 enables the concurrent delivery of many small files

In combination, these features mean we no longer have to worry about queuing delays as we would have under HTTP/1.1, and therefore we can do much more fine-grained slicing.
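Each slice is declared to the browser with a CSS unicode-range descriptor. Here's a simplified sketch of how a slice's sorted codepoints could be collapsed into such a descriptor (our own illustration, not the actual Google Fonts pipeline):

```javascript
// Collapse a slice's codepoints into a CSS unicode-range descriptor value,
// e.g. [0x3041, 0x3042, 0x3043, 0x4E00] -> "U+3041-3043, U+4E00".
// A simplified illustration of what the served @font-face CSS expresses.
function toUnicodeRange(codepoints) {
  var sorted = codepoints.slice().sort(function (a, b) { return a - b; });
  var parts = [];
  for (var i = 0; i < sorted.length; i++) {
    var start = sorted[i];
    // Extend the run while codepoints are contiguous.
    while (i + 1 < sorted.length && sorted[i + 1] === sorted[i] + 1) i++;
    var end = sorted[i];
    parts.push(start === end
        ? 'U+' + start.toString(16).toUpperCase()
        : 'U+' + start.toString(16).toUpperCase() + '-' +
          end.toString(16).toUpperCase());
  }
  return parts.join(', ');
}
```

Each @font-face rule carries one slice's unicode-range, and a browser fetches that slice's woff2 file only if the page actually uses a character in the range.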

Our analyses of the Japanese and Korean web show that most pages use mostly common characters, plus a few rarer ones. To optimize for this, we tested a variety of finer-grained strategies on the common characters for both languages.

We concluded that the following is the best strategy for Korean, with clients downloading 38% fewer bytes than our previous best strategy:

  1. Take the 2,000 most popular Korean characters, sort by frequency, and put them into 20 equally sized slices
  2. Sort the remaining characters by Unicode codepoint number, and divide them into 100 equally sized slices

For Japanese, we found that segmenting the first 3,000 characters into 20 slices was best, resulting in clients downloading 80% fewer bytes than they would if we just sent the whole font. Having sufficiently reduced transfer sizes, we now feel confident in offering Japanese web fonts for the first time!

Now that both Japanese and Korean are live on Google Fonts, we have even more ideas for further optimization—and we will continue to ship updates to make things faster for our users. We are also looking forward to future collaborations with the W3C to develop new web standards and go beyond what is possible with today's technologies (learn more here).

PS - Google Fonts is hiring :)


27th September 2018 |

Introducing new APIs to improve augmented reality development with ARCore

Posted by Clayton Wilkinson, Developer Platforms Engineer

Today, we're releasing updates to ARCore, Google's platform for building augmented reality experiences, and to Sceneform, the 3D rendering library for building AR applications on Android. These updates include algorithm improvements that will let your apps use less memory and CPU during longer sessions. They also include new functionality that gives you more flexibility over content management.

Here's what we added:

Supporting runtime glTF loading in Sceneform

Sceneform will now include an API to enable apps to load glTF models at runtime. You'll no longer need to convert glTF files to SFB format before rendering. This will be particularly useful for apps that have a large number of glTF models (like shopping experiences).

To take advantage of this new function -- and load models from the cloud or local storage at runtime -- use RenderableSource as the source when building a ModelRenderable.

private static final String GLTF_ASSET = "";

// When you build a Renderable, Sceneform loads its resources in the background while returning
// a CompletableFuture. Call thenAccept(), handle(), or check isDone() before calling get().
ModelRenderable.builder()
    .setSource(this, RenderableSource.builder().setSource(
        this, Uri.parse(GLTF_ASSET), RenderableSource.SourceType.GLTF2).build())
    .setRegistryId(GLTF_ASSET)
    .build()
    .thenAccept(renderable -> duckRenderable = renderable)
    .exceptionally(
        throwable -> {
          Toast toast =
              Toast.makeText(this, "Unable to load renderable", Toast.LENGTH_LONG);
          toast.setGravity(Gravity.CENTER, 0, 0);
          toast.show();
          return null;
        });

Publishing the Sceneform UX Library's source code

Sceneform has a UX library of common elements like plane detection and object transformation. Instead of recreating these elements from scratch every time you build an app, you can save precious development time by taking them from the library. But what if you need to tailor these elements to your specific app needs? Today we're publishing the source code of the UX library so you can customize whichever elements you need.

An example of interactive object transformation, powered by an element in the Sceneform UX Library.

Adding point cloud IDs to ARCore

Several developers have told us that when it comes to point clouds, they'd like to be able to associate points between frames. Why? Because when a point is present in multiple frames, it is more likely to be part of a solid, stable structure rather than an object in motion.

To make this possible, we're adding an API to ARCore that will assign IDs to each individual dot in a point cloud.

These new point IDs have the following elements:

  • Each ID is unique. Therefore, when the same value shows up in more than one frame, you know that it's associated with the same point.
  • Points that go out of view are lost forever. Even if that physical region comes back into view, a point will be assigned a new ID.
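An app could fold these two guarantees into a simple stability heuristic. The sketch below is our own illustration of consuming per-frame ID arrays, not the ARCore API itself:

```javascript
// Illustrative sketch (not the ARCore API): given per-frame arrays of point
// IDs, count consecutive appearances so an app can treat long-lived points
// as more likely to belong to stable structure than to moving objects.
function updateStability(counts, frameIds) {
  var next = {};
  frameIds.forEach(function (id) {
    // IDs are never reused once a point is lost, so any ID absent from the
    // current frame simply drops out of the map.
    next[id] = (counts[id] || 0) + 1;
  });
  return next;
}
```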

New devices

Last but not least, we continue to add ARCore support to more devices so your AR experiences can reach more users across more surfaces. These include smartphones as well as -- for the first time -- a Chrome OS device, the Acer Chromebook Tab 10.

Where to find us

You can get the latest information about ARCore and Sceneform on

Ready to try out the samples, or have an issue to report? Visit our projects hosted on GitHub:


26th September 2018 |

Code that final mile: from big data analysis to slide presentation

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Google Cloud Platform (GCP) provides infrastructure, serverless products, and APIs that help you build, innovate, and scale. G Suite provides a collection of productivity tools, developer APIs, extensibility frameworks and low-code platforms that let you integrate with G Suite applications, data, and users. While each solution is compelling on its own, users can get more power and flexibility by leveraging both together.

In the latest episode of the G Suite Dev Show, I'll show you one example of how you can take advantage of powerful GCP tools right from G Suite applications. BigQuery, for example, can help you surface valuable insight from massive amounts of data. However, regardless of "the tech" you use, you still have to justify and present your findings to management, right? You've already completed the big data analysis part, so why not go that final mile and tap into G Suite for its strengths? In the sample app covered in the video, we show you how to go from big data analysis all the way to an "exec-ready" presentation.

The sample application is meant to give you an idea of what's possible. While the video walks through the code a bit more, let's give all of you a high-level overview here. Google Apps Script is a G Suite serverless development platform that provides straightforward access to G Suite APIs as well as some GCP tools such as BigQuery. The first part of our app, the runQuery() function, issues a query to BigQuery from Apps Script then connects to Google Sheets to store the results into a new Sheet (note we left out CONSTANT variable definitions for brevity):

function runQuery() {
  // make BigQuery request
  var request = {query: BQ_QUERY};
  var queryResults = BigQuery.Jobs.query(request, PROJECT_ID);
  var jobId = queryResults.jobReference.jobId;
  queryResults = BigQuery.Jobs.getQueryResults(PROJECT_ID, jobId);
  var rows = queryResults.rows;

  // put results into a 2D array
  var data = new Array(rows.length);
  for (var i = 0; i < rows.length; i++) {
    var cols = rows[i].f;
    data[i] = new Array(cols.length);
    for (var j = 0; j < cols.length; j++) {
      data[i][j] = cols[j].v;
    }
  }

  // put array data into new Sheet
  var spreadsheet = SpreadsheetApp.create(QUERY_NAME);
  var sheet = spreadsheet.getActiveSheet();
  var headers = queryResults.schema.fields.map(function(field) {
    return field.name;
  });
  sheet.appendRow(headers); // header row
  sheet.getRange(START_ROW, START_COL,
      rows.length, headers.length).setValues(data);

  // return Sheet object for later use
  return spreadsheet;
}
It returns a handle to the new Google Sheet which we can then pass on to the next component: using Google Sheets to generate a Chart from the BigQuery data. Again leaving out the CONSTANTs, we have the 2nd part of our app, the createColumnChart() function:

function createColumnChart(spreadsheet) {
  // create & put chart on 1st Sheet
  var sheet = spreadsheet.getSheets()[0];
  var chart = sheet.newChart()
      .setChartType(Charts.ChartType.COLUMN)
      .addRange(sheet.getRange(START_CELL + ':' + END_CELL))
      .setPosition(1, 1, 0, 0) // anchor at top-left cell
      .build();
  sheet.insertChart(chart);

  // return Chart object for later use
  return chart;
}

The chart is returned by createColumnChart() so we can use that plus the Sheets object to build the desired slide presentation from Apps Script with Google Slides in the 3rd part of our app, the createSlidePresentation() function:

function createSlidePresentation(spreadsheet, chart) {
  // create new deck & add title+subtitle
  var deck = SlidesApp.create(QUERY_NAME);
  var [title, subtitle] = deck.getSlides()[0].getPageElements();
  title.asShape().getText().setText(QUERY_NAME);
  subtitle.asShape().getText().setText('via GCP and G Suite APIs:\n' +
      'Google Apps Script, BigQuery, Sheets, Slides');

  // add new slide and insert empty table
  var tableSlide = deck.appendSlide(SlidesApp.PredefinedLayout.BLANK);
  var sheetValues = spreadsheet.getSheets()[0].getRange(
      START_CELL + ':' + END_CELL).getValues();
  var table = tableSlide.insertTable(sheetValues.length, sheetValues[0].length);

  // populate table with data in Sheets
  for (var i = 0; i < sheetValues.length; i++) {
    for (var j = 0; j < sheetValues[0].length; j++) {
      table.getCell(i, j).getText().setText(String(sheetValues[i][j]));
    }
  }

  // add new slide and add Sheets chart to it
  var chartSlide = deck.appendSlide(SlidesApp.PredefinedLayout.BLANK);
  chartSlide.insertSheetsChartAsImage(chart); // add Sheets chart image

  // return Presentation object for later use
  return deck;
}

Finally, we need a driver application that calls all three one after another, the createBigQueryPresentation() function:

function createBigQueryPresentation() {
  var spreadsheet = runQuery();
  var chart = createColumnChart(spreadsheet);
  var deck = createSlidePresentation(spreadsheet, chart);
}
We left out some detail in the code above but hope this pseudocode helps kickstart your own project. Seeking a guided tutorial to building this app one step at a time? Do our codelab. Alternatively, go see all the code in our GitHub repo. After executing the app successfully, you'll see the fruits of your big data analysis captured in a presentable way in a Google Slides deck:

This isn't the end of the story; it's just one example of how you can leverage both platforms from Google Cloud. In fact, this was one of two sample apps featured in our Cloud NEXT '18 session this summer exploring interoperability between GCP & G Suite, which you can watch here:

Stay tuned as more examples are coming. We hope these videos plus the codelab inspire you to build on your own ideas.


21st September 2018 |

New experimental features for Daydream

Posted by Jonathan Huang, Senior Product Manager, Google AR/VR

Since we first launched Daydream, developers have responded by creating virtual reality (VR) experiences that are entertaining, educational and useful. Today, we're announcing a new set of experimental features for developers to use on the Lenovo Mirage Solo—our standalone Daydream headset—to continue to push the platform forward. Here's what's coming:

Experimental 6DoF Controllers

First, we're adding APIs to support positional controller tracking with six degrees of freedom—or 6DoF—to the Mirage Solo. With 6DoF tracking, you can move your hands more naturally in VR, just like you would in the physical world. To date, this type of experience has been limited to expensive PC-based VR with external tracking.

We've also created experimental 6DoF controllers that use a unique optical tracking system to help developers start building with 6DoF features on the Mirage Solo. Instead of using expensive external cameras and sensors that have to be carefully calibrated, our system uses machine learning and off-the-shelf parts to accurately estimate the 3D position and orientation of the controllers. We're excited about this approach because it can reduce the need for expensive hardware and make 6DoF experiences more accessible to more people.

We've already put these experimental controllers in the hands of a few developers and we're excited for more developers to start testing them soon.

Experimental 6DoF controllers

See-Through Mode

We're also introducing what we call see-through mode, which gives you the ability to see what's right in front of you in the physical world while you're wearing your VR headset.

See-through mode takes advantage of our WorldSense technology, which was built to provide accurate, low latency tracking. And, because the tracking cameras on the Mirage Solo are positioned at approximately eye-distance apart, you also get precise depth perception. The result is a see-through mode good enough to let you play ping pong with your headset on.

Playing ping pong with see-through mode on the Mirage Solo.

The combination of see-through mode and the Mirage Solo's tracking technology also opens up the door for developers to blend the digital and physical worlds in new ways by building Augmented Reality (AR) prototypes. Imagine, for example, an interior designer being able to plan a new layout for a room by adding virtual chairs, tables and decorations on top of the actual space.

Experimental app using objects from Poly, see-through mode and 6DoF Controllers to design a space in our office.

Smartphone Android Apps in VR

Finally, we're introducing the capability to open any smartphone Android app on your Daydream device, so you can use your favorite games, tools and more in VR. For example, you can play the popular indie game Mini Metro on a virtual big screen, so you have more space to view and plan your own intricate public transit system.

Playing Mini Metro on a virtual big screen in VR.

With support for Android Apps in VR, developers will be able to add Daydream VR support to their existing 2D applications without having to start from scratch. The Chrome team re-used the existing 2D interfaces for Chrome Browser Sync, settings and more to provide a feature-rich browsing experience in Daydream.

The Chrome app on Daydream uses the 2D settings within VR.

Try These Features

We've really loved building with these tools and can't wait to see what you do with them. See-through mode and Android Apps in VR will be available for all developers to try soon.

If you're a developer in the U.S., click here to learn more and apply now for an experimental 6DoF controller developer kit.


20th September 2018 |

Flutter Release Preview 2: Pixel-Perfect on iOS
Posted by the Flutter Team at Google

Flutter is Google's new mobile app toolkit for crafting beautiful native interfaces on iOS and Android in record time. Today, during the keynote of Google Developer Days in Shanghai, we are announcing Flutter Release Preview 2: our last major milestone before Flutter 1.0.

This release continues the work of completing core scenarios and improving quality, beginning with our initial beta release in February through to the availability of our first Release Preview earlier this summer. The team is now fully focused on completing our 1.0 release.

What's New in Release Preview 2

The theme for this release is pixel-perfect iOS apps. While we designed Flutter with highly brand-driven, tailored experiences in mind, we heard feedback from some of you who wanted to build applications that closely follow the Apple interface guidelines. So in this release we've greatly expanded our support for the "Cupertino" themed controls in Flutter, with an extensive library of widgets and classes that make it easier than ever to build with iOS in mind.

A reproduction of the iOS Settings home page, built with Flutter

Here are a few of the new iOS-themed widgets added in Flutter Release Preview 2:

And more have been updated, too:

As ever, the Flutter documentation is the place to go for detailed information on the Cupertino* classes. (Note that at the time of writing, we were still working to add some of these new Cupertino widgets to the visual widget catalog).

We've made progress to complete other scenarios also. Taking a look under the hood, support has been added for executing Dart code in the background, even while the application is suspended. Plugin authors can take advantage of this to create new plugins that execute code upon an event being triggered, such as the firing of a timer, or the receipt of a location update. For a more detailed introduction, read this Medium article, which demonstrates how to use background execution to create a geofencing plugin.

Another improvement is a reduction of up to 30% in our application package size on both Android and iOS. Our minimal Flutter app on Android now weighs in at just 4.7MB when built in release mode, a savings of 2MB since we started the effort — and we're continuing to identify further potential optimizations. (Note that while the improvements affect both iOS and Android, you may see different results on iOS because of how iOS packages are built).

Growing Momentum

As many new developers continue to discover Flutter, we're humbled to note that Flutter is now one of the top 50 active software repositories on GitHub:

We declared Flutter "production ready" at Google I/O this year; with Flutter getting ever closer to the stable 1.0 release, many new Flutter applications are being released, with thousands of Flutter-based apps already appearing in the Apple and Google Play stores. These include some of the largest applications on the planet by usage, such as Alibaba (Android, iOS), Tencent Now (Android, iOS), and Google Ads (Android, iOS). Here's a video on how Alibaba used Flutter to build their Xianyu app (Android, iOS), currently used by over 50 million customers in China:

We take customer satisfaction seriously and regularly survey our users. We promised to share the results back with the community, and our most recent survey shows that 92% of developers are satisfied or very satisfied with Flutter and would recommend Flutter to others. When it comes to fast development and beautiful UIs, 79% found Flutter extremely helpful or very helpful in both reaching their maximum engineering velocity and implementing an ideal UI. And 82% of Flutter developers are satisfied or very satisfied with the Dart programming language, which recently celebrated hitting the release milestone for Dart 2.

Flutter's strong community growth can be felt in other ways, too. On StackOverflow, we see fast growing interest in Flutter, with lots of new questions being posted, answered and viewed, as this chart shows:

Number of StackOverflow question views tagged with each of four popular UI frameworks over time

Flutter has been open source from day one. That's by design. Our goal is to be transparent about our progress and encourage contributions from individuals and other companies who share our desire to see beautiful user experiences on all platforms.

Getting Started

How do you upgrade to Flutter Release Preview 2? If you're on the beta channel already, it just takes one command:

$ flutter upgrade

You can check that you have Release Preview 2 installed by running flutter --version from the command line. If you have version 0.8.2 or later, you have everything described in this post.

If you haven't tried Flutter yet, now is the perfect time; the Flutter website has all the details to download Flutter and get started with your first app.

When you're ready, there's a whole ecosystem of example apps and code snippets to help you get going. You can find samples from the Flutter team in the flutter/samples repo on GitHub, covering things like how to use Material and Cupertino, approaches for deserializing data encoded in JSON, and more. There's also a curated list of samples that links out to some of the best examples created by the Flutter community.

You can also learn and stay up to date with Flutter through our hands-on videos, newsletters, community articles and developer shows. There are discussion groups, chat rooms, community support, and a weekly online hangout available to you to help you along the way as you build your application. Release Preview 2 is our last release preview. Next stop: 1.0!


10th September 2018 |

Build new experiences with the Google Photos Library API
Posted by Jan-Felix Schmakeit, Google Photos Developer Lead

As we shared in May, people create and consume photos and videos in many different ways, and we think it should be easier to do more with the photos people take, across more of the apps and devices we all use. That's why we created the Google Photos Library API: to give you the ability to build photo and video experiences in your products that are smarter, faster, and more helpful.

After a successful developer preview over the past few months, the Google Photos Library API is now generally available. If you want to build and test your own experience, you can visit our developer documentation to get started. You can also express your interest in joining the Google Photos partner program if you are planning a larger integration.

Here's a quick overview of the Google Photos Library API and what you can do:

Whether you're a mobile, web, or backend developer, you can use this REST API to utilize the best of Google Photos and help people connect, upload, and share from inside your app. We are also launching client libraries in multiple languages that will help you get started quicker.

Users have to authorize requests through the API, so they are always in the driver's seat. Here are a few things you can help your users do:

  • Easily find photos, based on
    • what's in the photo
    • when it was taken
    • attributes like media format
  • Upload directly to their photo library or an album
  • Organize albums and add titles and locations
  • Use shared albums to easily transfer and collaborate

Putting machine learning to work in your app is simple too. You can use smart filters, like content categories, to narrow down or exclude certain types of photos and videos and make it easier for your users to find the ones they're looking for.
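To make the smart filters concrete, here's a small sketch of building a request body for the API's mediaItems:search endpoint. The helper function and its defaults are my own illustration, not part of the API; the field names (contentFilter, includedContentCategories, mediaTypeFilter) follow the Library API's documented request shape at the time of writing, but check the current reference before relying on them:

```python
def build_search_body(categories=None, excluded=None, media_type=None, page_size=25):
    """Build a JSON body for a Photos Library API mediaItems:search call,
    combining content-category smart filters with a media-type filter."""
    filters = {}
    if categories or excluded:
        content = {}
        if categories:
            content["includedContentCategories"] = list(categories)
        if excluded:
            content["excludedContentCategories"] = list(excluded)
        filters["contentFilter"] = content
    if media_type:
        # e.g. "PHOTO" or "VIDEO"
        filters["mediaTypeFilter"] = {"mediaTypes": [media_type]}
    return {"pageSize": page_size, "filters": filters}
```

You would POST this body (with the user's OAuth token) to the mediaItems:search endpoint; the authorization step is what keeps users in the driver's seat.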

Thanks to everyone who provided feedback throughout our developer preview; your contributions helped make the API better. You can read our release notes to follow along with any new releases of our API. And, if you've been using the Picasa Web Albums API, here's a migration guide that will help you move to the Google Photos Library API.


12th September 2018 |

Sample Dialogs: The Key to Creating Great Actions on Google

Posted by Cathy Pearl, Head of Conversation Design Outreach
Illustrations by Kimberly Harvey

Hi all! I'm Cathy Pearl, head of conversation design outreach at Google. I've been building conversational systems for a while now, starting with IVRs (phone systems) and moving on to multi-modal experiences. I'm also the author of the O'Reilly book Designing Voice User Interfaces. These days, I'm keen to introduce designers and developers to our conversation design best practices so that Actions will provide the best possible user experience. Today, I'll be talking about a fundamental first step when thinking about creating an Action: writing sample dialogs.

So, you've got a cool idea for Actions on Google you want to build. You've brushed up on Dialogflow, done some codelabs, and figured out which APIs you want to use. You're ready to start coding, right?

Not so fast!

Creating an Action always needs to start with designing an Action. Don't panic; it's not going to slow you down. Planning out the design first will save you time and headaches later, and ultimately produces a better, more usable experience.

In this post, I'll talk about the first and most important component for designing a good conversational system: sample dialogs. Sample dialogs are potential conversational paths a user might take while conversing with your Action. They look a lot like film scripts, with dialog exchanges between your Action and the user. (And, like film scripts, they should be read aloud!) Writing sample dialogs comes before writing code, and even before creating flows.

When I talk to people about the importance of sample dialogs, I get a lot of nods and agreement. But when I go back later and say, "Hey, show me your sample dialogs," I often get a sheepish smile and an excuse as to why they weren't written. Common ones include:

  • "I'm just building a prototype, I can skip that stuff."
  • "I'm not worrying about the words right now—I can tweak that stuff later."
  • "The hard part is all about the backend integration! The words are the easy part."

First off, there is a misconception that "conversation design" (or voice user interface design) is just the top layer of the experience: the words, and perhaps the order of words, that the user will see/hear.

But conversation design goes much deeper. It drives the underlying structure of the experience, which includes:

  • What backend calls are we making?
  • What happens when something fails?
  • What data are we asking the user for?
  • What do we know about the user?
  • What technical constraints do we have, either with the technology itself or our own ecosystem?

In the end, these things manifest as words, to be sure. But thinking of them as "stuff you worry about later" will set you up for failure when it comes time for your user to interact with your Action. For example, without a sample dialog, you might not realize that your prompts all start with the word "Next", making them sound robotic and stilted. Sample dialogs will also show you where you need "glue" words such as "first" and "by the way".

Google has put together design guidelines for building conversational systems. They include an introduction to sample dialogs and why they're important:

Sample dialogs will give you a quick, low-fidelity sense of the "sound-and-feel" of the interaction you're designing. They convey the flow that the user will actually experience, without the technical distractions of code notation, complex flow diagrams, recognition-grammar issues, etc.

By writing sample dialogs, you can informally experiment with and evaluate different design strategies, such as how to promote the discoverability of new features or how to confirm a user's request (for example: should you use an implicit confirmation, an explicit confirmation, or no confirmation at all?).

Check out the Google I/O 2018 Action sample dialogs to see an example. (You can also take a look at the Google I/O 2018 Action code.)

Still not sure if you really need them? Let's hear from a developer who works on Actions, Jessica Dene Earley-Cha, who said in her recent Medium post:

Let's cover how this was built. Before any coding can happen, we need to build a Conversational Design. I originally had skipped this step because I thought that I could build the structure first and then fill in the content (like building a website). However, the structure is tied in with the content. I realized this when I was hitting walls that I thought were technical, but they were there because I didn't have a design.

She makes the great point that designing for conversational systems is different than designing for the web. With a conversational interface, the content itself is part of the structure, so design becomes even more important.

So now that you're (hopefully) convinced, let's discuss four of the common pitfalls developers can avoid by using sample dialogs:

PITFALL #1: Flooding the user with too much information

Suppose you're writing an Action for a bike shop: it can make repair and service appointments, give store hours, and list the latest deals. It's tempting to just start listing out options so the user will know everything they can do. Let's see what a sample dialog looks like using that strategy:

Hey Google, talk to Alyssa's Bike Shop.

Hi! Welcome to Alyssa's Bike Shop. Would you like to make an appointment for a repair, cancel an appointment, hear store hours, hear this week's deals, or hear next month's deals?

If you read this prompt out loud to someone else, it will quickly become apparent that too much information is being presented. Humans have a limited capacity for taking in audio, especially if it's the first time they're hearing it.

Here is a better way:

Hey Google, talk to Alyssa's Bike Shop.

Hi! Welcome to Alyssa's Bike Shop. I can help you make or cancel an appointment, get store hours, or tell you the latest deals. Which would you like?

Pro tip: an even better experience would be to leave out the "cancel" option if the user doesn't have any.

PITFALL #2: Keeping what your Action can do a secret

Here's a sample dialog representing a common mistake in many conversational experiences. In this example, the user is returning to an Action they've tried previously:

Hey Google, talk to Cathy's Astronomy Trivia Game.

This is Cathy's Astronomy Trivia Game. Welcome back!

Uhhh… what can I do??

When we become immersed in our design, we often forget that the user will be coming to the Action without the background we have. What they can do will not always be obvious.

Here's an improvement:

Hey Google, talk to Cathy's Astronomy Trivia Game.

This is Cathy's Astronomy Trivia Game. You can play solo, or join a team. What would you like to do?

Uh, I think I'll join a team.

Remember, you don't have to list all your options up front. Instead, try to guide the user through the process in a clear manner, and end with a call to action.

PITFALL #3: Not inviting your user to speak

Another common issue is not making it obvious when it's the user's turn to speak. As humans, we have many ways to let someone know it's their turn: words, intonation, eye gaze, and body language. For things like the Assistant, it's important to end the turn with a clear question or instruction. Here's an example of an Action that leaves the user hanging:

Okay Google, talk to Cool Animal Facts.

Welcome to Cool Animal Facts! What animal would you like to know about?

I love penguins!

Cool, penguins. Would you like to know about their habitat, or what they like to eat?

Oh, what do they like to eat!

Penguins can eat up to 13 pounds of fish, krill and squid a day! And did you know the fathers sit on the eggs?

Yes, I saw…

Meanwhile, the mothers go out to get fish.

(thinking) Wait... am I supposed to say something?

In this example we see two problems. First, the Action asked a rhetorical question. When the user tried to answer, the Action ignored them and kept on talking.

The second problem is that after the fact was given, nothing happened. There was no indication of what the user could do next.

Here's a better way:

Penguins can eat up to 13 pounds of fish, krill and squid a day! By the way, male penguins spend time sitting on the eggs, while the females go out to get fish. Now, would you like to hear more about penguins, or ask about another animal?

I'm done, thanks.

Sure, thanks for talking to Cool Animal Facts. Goodbye!

Pro tip: Users often want to have an explicit way out, such as saying "goodbye".

Pro tip 2: Make your grammar robust! For the question "Would you like to hear more about penguins, or ask about another animal?" be prepared to handle responses such as "yes", "no", and names of animals.
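As a rough sketch of what "robust grammar" means in practice, here's a toy router for that penguin question. It is purely illustrative (the intent names, word lists, and matching rules are mine, not any Actions on Google API): the point is that a sample dialog surfaces answers like "yes", "I'm done", or a bare animal name that the two suggested phrasings don't cover.

```python
ANIMALS = {"penguin", "lion", "dolphin", "elephant"}

def route_answer(utterance):
    """Roughly classify a reply to 'hear more about penguins, or ask
    about another animal?', handling yes/no and bare animal names,
    not just the two phrasings the prompt suggested."""
    words = [w.strip(",.?!'") for w in utterance.lower().split()]
    if any(w in ("yes", "yeah", "sure", "more") for w in words):
        return "more_penguins"
    if any(w in ("no", "nope", "done", "goodbye") for w in words):
        return "exit"
    for w in words:
        # Crude singularization so "lions" matches "lion".
        if w.rstrip("s") in ANIMALS:
            return "new_animal:" + w.rstrip("s")
    return "reprompt"
```

A real Action would lean on Dialogflow's intent matching rather than hand-rolled keyword lists, but the sample dialog is what tells you which intents and training phrases you need.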

PITFALL #4: Making your error prompts one size fits all

Gracefully handling the conversation when things go wrong is critical for any conversational system. Writing multiple error prompts for every turn in the conversation can seem daunting, but falling back on a default "I'm sorry, I didn't understand" will not help the user. Instead, each error prompt should be based on what's been happening up to that point and where the user currently is in the conversation. To make this easier, write sample dialogs for the error conditions too; they will make clear what belongs in each prompt.

Okay Google, talk to the Zombie Quiz.

Time for the Zombie Quiz! First question: which one of these should you do if you're being chased by a zombie: lock the door, run for the hills, or plant a garden?

(the user mumbles an answer that isn't understood)
Sorry, I didn't get that. Which one do you choose?

What? I don't remember them...

Sorry, I'm still having trouble. Which one do you choose?

(the user finally guesses an answer)
Sorry, that's incorrect. Question two…

Context-sensitive error handling would have gotten the user back on track:

Okay Google, talk to the Zombie Quiz.

Time for the Zombie Quiz! First question: which one of these should you do if you're being chased by a zombie: lock the door, run for the hills, or plant a garden?

(the user mumbles an answer that isn't understood)
Which one of these should you do if you're being chased by a zombie: lock the door; run for the hills, or plant a garden?

Uhh... I'm not sure... let me see... maybe the...

You can also say the number of the answer, like one, two, or three. Which one do you choose?

Oh, number three.

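The escalation in that second dialog can be sketched as a tiny function that picks a reprompt based on how many times the user has already stumbled at this turn. This is an illustrative sketch only (the prompt strings and three-strikes policy are mine); a real Action would also track which question the user is on:

```python
QUESTION = ("Which one of these should you do if you're being chased by a "
            "zombie: lock the door, run for the hills, or plant a garden?")

def error_prompt(attempt):
    """Pick a context-sensitive reprompt based on how many errors the
    user has hit at this turn, instead of one-size-fits-all 'Sorry'."""
    if attempt == 1:
        # First miss: just repeat the full question.
        return QUESTION
    if attempt == 2:
        # Second miss: offer an easier way to answer.
        return ("You can also say the number of the answer, like one, two, "
                "or three. Which one do you choose?")
    # Third miss and beyond: bail out gracefully rather than loop forever.
    return "Sorry, we seem to be having trouble. Let's try another question."
```

Writing the error-path sample dialogs first is what tells you how many escalation levels you need and what each one should say.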

Your pre-flight checklist

I know you're itching to take off and start drawing flows and writing code, but take time to write sample dialogs first. In the long run, it will make your coding easier, and you'll have fewer bugs to fix.

Here's a list of "Dos" to keep in mind when writing sample dialogs:

  • Check out the Conversation Design Guidelines for more help
  • Start your design by using written/spoken sample dialogs; diagrams of the detailed flow can come later
  • Read your sample dialogs out loud!
  • Make each sample dialog one path; they should not include branching
  • Write several "happy path" sample dialogs
  • Write several "error path" sample dialogs
  • Do a "table read" and have people unfamiliar with your sample dialog play the part of the user
  • Share your sample dialogs with everyone involved in building the Action, so everyone's on the same page
  • When testing, compare the actual working Action with the sample dialogs, to ensure it was implemented correctly
  • Iterate, iterate, iterate!

Happy writing!


6th September 2018 |

Launchpad Studio announces finance startup cohort, focused on applied-ML

Posted by Rich Hyndman, Global Tech Lead, Google Launchpad

Launchpad Studio is an acceleration program for the world's top startups. Founders work closely with Google and Alphabet product teams and experts to solve specific technical challenges and optimize their businesses for growth with machine learning. Last year we introduced our first applied-ML cohort focused on healthcare.

Today, we are excited to welcome the new cohort of Finance startups selected to participate in Launchpad Studio:

  • Alchemy (USA), bridging blockchain and the real world
  • Axinan (Singapore), providing smart insurance for the digital economy
  • Aye Finance (India), transforming financing in India
  • Celo (USA), increasing financial inclusion through a mobile-first cryptocurrency
  • Frontier Car Group (Germany), investing in the transformation of used-car marketplaces
  • Go-Jek (Indonesia), improving the welfare and livelihoods of informal sectors
  • GuiaBolso (Brazil), improving the financial lives of Brazilians
  • Inclusive (Ghana), verifying identities across Africa
  • m.Paani (India), (em)powering local retailers and the next billion users in India
  • Starling Bank (UK), improving financial health with a 100% mobile-only bank

These Studio startups have been invited from across nine countries and four continents to discuss how machine learning can be utilized for financial inclusion, stable currencies, and identification services. They are defining how ML and blockchain can supercharge efforts to include everyone and ensure greater prosperity for all. Together, data and user behavior are enabling a truly global economy with inclusive and differentiated products for banking, insurance, and credit.

Each startup is paired with a Google product manager to accelerate their product development, working alongside Google's ML research and development teams. Studio provides 1:1 mentoring and access to Google's people, network, thought leadership, and technology.

"Two of the biggest barriers to the large-scale adoption of cryptocurrencies as a means of payment are ease-of-use and purchasing-power volatility. When we heard about Studio and the opportunity to work with Google's AI teams, we were immediately excited as we believe the resulting work can be beneficial not just to Celo but for the industry as a whole." - Rene Reinsberg, Co-Founder and CEO of Celo

"Our technology has accelerated economic growth across Indonesia by raising the standard of living for millions of micro-entrepreneurs including ojek drivers, restaurant owners, small businesses and other professionals. We are very excited to work with Google, and explore more on how artificial intelligence and machine learning can help us strengthen our capabilities to drive even more positive social change not only to Indonesia, but also for the region." - Kevin Aluwi, Co-Founder and CIO of GO-JEK

"At Starling, we believe that data is the key to a healthy financial life. We are excited about the opportunity to work with Google to turn data into insights that will help consumers make better and more-informed financial decisions." - Anne Boden, Founder and CEO of Starling Bank

"At GuiaBolso, we use machine learning in different workstreams, but now we are doubling down on the technology to make our users' experience even more delightful. We see Studio as a way to speed that up." - Marcio Reis, CDO of GuiaBolso

Since launching in 2015, Google Developers Launchpad has become a global network of accelerators and partners with the shared mission of accelerating innovation that solves for the world's biggest challenges. Join us at one of our Regional Accelerators and follow Launchpad's applied ML best practices by subscribing to The Lever.


2nd August 2018 |

Google Developers Launchpad introduces The Lever, sharing applied-Machine Learning best practices

Posted by Malika Cantor, Program Manager for Launchpad

The Lever is Google Developers Launchpad's new resource for sharing applied-Machine Learning (ML) content to help startups innovate and thrive. In partnership with experts and leaders across Google and Alphabet, The Lever is operated by Launchpad, Google's global startup acceleration program. The Lever will publish the Launchpad community's experiences of integrating ML into products, and will include case studies, insights from mentors, and best practices from both Google and global thought leaders.

Peter Norvig, Google ML Research Director, and Cassie Kozyrkov, Google Cloud Chief Decision Scientist, are editors of the publication. Hear from them and other Googlers on the importance of developing and sharing applied ML product and business methodologies:

Peter Norvig (Google ML Research, Director): "The software industry has had 50 years to perfect a methodology of software development. In Machine Learning, we've only had a few years, so companies need to pay more attention to the process in order to create products that are reliable, up-to-date, have good accuracy, and are respectful of their customers' private data."

Cassie Kozyrkov (Chief Decision Scientist, Google Cloud): "We live in exciting times where the contributions of researchers have finally made it possible for non-experts to do amazing things with Artificial Intelligence. Now that anyone can stand on the shoulders of giants, process-oriented avenues of inquiry around how to best apply ML are coming to the forefront. Among these is decision intelligence engineering: a new approach to ML, focusing on how to discover opportunities and build towards safe, effective, and reliable solutions. The world is poised to make data more useful than ever before!"

Clemens Mewald (Lead, Machine Learning X and TensorFlow X): "ML/AI has had a profound impact in many areas, but I would argue that we're still very early in this journey. Many applications of ML are incremental improvements on existing features and products. Video recommendations are more relevant, ads have become more targeted and personalized. However, as Sundar said, AI is more profound than electricity (or fire). Electricity enabled modern technology, computing, and the internet. What new products will be enabled by ML/AI? I am convinced that the right ML product methodologies will help lead the way to magical products that have previously been unthinkable."

We invite you to follow the publication, and actively comment on our blog posts to share your own experience and insights.


31st July 2018 |

5 Tips for Developing Actions with the New Actions Console

Posted by Zachary Senzer, Product Manager

A couple of months ago at Google I/O, we announced a redesigned Actions console that makes developing your Actions easier than ever before. The new Actions console features a more seamless development experience that adapts your workflow from onboarding through deployment, with tailored analytics to help you manage your Actions post-launch. Simply select your use case during onboarding and the Actions console will guide you through the different stages of development.

Here are 5 tips to help you create the best Actions for your content using our new console.

1. Optimize your Actions for new surfaces with theme customization

Part of what makes the Actions on Google ecosystem so special is the vast array of devices that people can use to interact with your Actions. Some of these devices, including phones and our new smart displays, allow users to have rich visual interactions with your content. To help your Actions stand out, you can customize how these visual experiences appear to users of these devices. Simply visit the "Build" tab and go to theme customization in the Actions console where you can specify background images, typography, colors, and more for your Actions.

2. Start to make your Actions easier to discover with built-in intents

Conversational experiences can introduce complexity in how people ask to complete a task related to your Action: a user could ask for a game in thousands of different ways ("play a game for me", "find a maps quiz", "I want some trivia"). Figuring out all of the ways a user might ask for your Action is difficult. To make this process much easier, we're beginning to map the ways users might ask for your Action into a taxonomy of built-in intents to abstract away this difficulty.

We'll start to use the built-in intent you associated with your Action to help users more easily discover your content as we begin testing them against users' queries. We'll continue to add many more built-in intents over the coming months to cover a variety of use cases. In the Actions console, go to the "Build" tab, click "Actions", then "Add Action" and select one to get started.
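For developers using the Actions SDK rather than the console flow, associating a built-in intent roughly took the shape below in the action package. Treat this as an illustrative sketch only: the intent name follows the actions.intent.* taxonomy, but the description, conversation name, and URL are placeholders, and you should check the current Actions SDK reference for the exact schema.

```json
{
  "actions": [
    {
      "description": "Launch my trivia game",
      "name": "PLAY_GAME",
      "fulfillment": { "conversationName": "trivia" },
      "intent": { "name": "actions.intent.PLAY_GAME" }
    }
  ],
  "conversations": {
    "trivia": { "name": "trivia", "url": "https://example.com/fulfillment" }
  }
}
```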

3. Promote your Actions with Action Links

While we'll continue to improve the ways users find your Actions within the Assistant, we've also made it easier for users to find your Actions outside the Assistant. Driving new traffic to your Actions is as easy as a click with Action Links. You now have the ability to define hyperlinks for each of your Actions to be used on your website, social media, email newsletters, and more. These links will launch users directly into your Action. If used on a desktop, the link will take users to the directory page for your Action, where they'll have the ability to choose the device they want to try your Action on. To configure Action Links in the console, visit the "Build" tab, choose "Actions", and select the Action for which you would like to create a link. That's it!

4. Ensure your Actions are high-quality by testing using our web simulator and alpha/beta environments

The best way to make sure that your Actions are working as intended is to test them using our updated web simulator. In the simulator, you can run through conversational user flows on phone, speaker, and even smart display device types. After you issue a request, you can see the visual response, the request and response JSON, and any potential errors. For further assistance with debugging errors, you also have the ability to view logs for your Actions.

Another great opportunity to test your Actions is by deploying to limited audiences in alpha and beta environments. By deploying to the alpha environment, your Actions do not need to go through the review process, meaning you can quickly test with your users. After deploying to the beta environment, you can launch your Actions to production whenever you like without additional review. To use alpha and beta environments, go to the "Deploy" tab and click "Release" in the Actions console.

5. Measure your success using analytics

After you deploy your Actions, it's equally important to measure their performance. By visiting the "Measure" tab and clicking "Analytics" in the Actions console, you will be able to view rich analytics on usage, health, and discovery. You can easily see how many people are using and returning to your Actions, how many errors users are encountering, the phrases users are saying to discover your Actions, and much, much more. These insights can help you improve your Actions.

If you're new to the Actions console and looking for a quick way to get started, watch this video for an overview of the development process.

We're so excited to see how you will use the new Actions console to create even more Actions for more use cases, with additional tools to improve and iterate. Happy building!