Google Developers Blog


17th November 2017 |

Introducing our new developer YouTube Series: “Build Out”
Posted by Reto Meier & Colt McAnlis, Developer Advocates

Ever found yourself trying to figure out the right way to combine mobile, cloud, and web technologies, only to be lost in the myriad of available offerings? It can be challenging to know the best way to combine all the options to build products that solve problems for your users.

That's why we created Build Out, a new YouTube series where real engineers face off building fake products.

Each month, we (Reto Meier and Colt McAnlis) will present competing architectures to help show how Google's developer products can be combined to solve challenging problems for your users. Each solution incorporates a wide range of technologies, including Google Cloud, Android, Firebase, and TensorFlow (just to name a few).

Since we're engineers at heart, we enjoy a challenge—so each solution goes well past minimum viable product, and explores some of the more advanced possibilities available to solve the problem creatively.

Now, here's the interesting part. When we're done presenting, you get to decide which of us solved the problem better, by posting a comment to the video on YouTube. If you've already got a better solution—or think you know one—tell us about it in the comments, or respond with your own Build Out video to show us how it's done!

Episode #1: The Smart Garden.

In which we explore designs for gardens that care for themselves. Each design must be fully autonomous, learn from experience, and scale from backyard up to large-scale commercial gardens.

You can get the full technical details on each Smart Garden solution in this Medium article, including alternative approaches and best practices.

You can also listen to the Build Out Rewound Podcast, to hear us discuss our choices.


16th November 2017 |

Launchpad comes to Africa to support tech startups! Apply to join the first accelerator class
Posted by Andy Volk, Sub-Saharan Africa Ecosystem Regional Manager & Josh Yellin, Program Manager of Launchpad Accelerator

Earlier this year at Google for Nigeria, our CEO Sundar Pichai made a commitment to support African entrepreneurs building successful technology companies and products. Following up on that commitment, we're excited to announce Google Developers Launchpad Africa, our new hands-on, comprehensive mentorship program tailored exclusively to startups based in Africa.

Building on the success of our global Launchpad Accelerator program, Launchpad Africa will kick off as a three-month accelerator that provides African startups with over $3 million in equity-free support, working space, travel and PR backing, and access to expert advisers from Google, Africa, and around the world.

The first application period is now open through December 11, 9am PST, and the first class will start in early 2018. More classes will be hosted in 2018 and beyond.

What do we look for when selecting startups?

Each startup that applies to Launchpad Africa is evaluated carefully. Below are general guidelines behind our process to help you understand what we look for in our candidates.

All startups in the program must:

  • Be a technology startup.
  • Be based in Ghana, Kenya, Nigeria, South Africa, Tanzania, or Uganda (stay tuned for future classes, as we hope to add more countries).
  • Have already raised seed funding.

Additionally, we also consider:

  • The problem you're trying to solve. How does it create value for users? How are you addressing a real challenge for your home city, country, or Africa broadly?
  • Will you share what you learn for the benefit of other startups in your local ecosystem?

Anyone who spends time in the African technology space knows that the continent is home to some exciting innovations. Over the years, Google has worked with some incredible startups across Africa, tackling everything from healthcare and education to streamlining e-commerce and improving the food supply chain. We very much look forward to welcoming the first cohort of innovators for Launchpad Africa and to continuing to work together to drive innovation in the African market.


15th November 2017 |

Help users find, interact & re-engage with your app on the Google Assistant
Posted by Brad Abrams, Product Manager
Every day, users are discovering new ways the Google Assistant and your apps can help them get things done. Today we're announcing a set of new features to make it easier for users to find, interact, and re-engage with your app.

Helping users find your apps

With more international support and updates to the Google Assistant, it's easier than ever for users to find your app.
  • Updates to the app directory: We're adding what's new and what's trending sections in the app directory within the Assistant experience on your phone. These dynamic sections will constantly change and evolve, creating more opportunities for your app to be discovered by users in all supported locales where the Google Assistant and Actions on Google are available. We're also introducing autocomplete in the directory's search box, so, if a user doesn't quite remember the name of your app, it will populate as they type.
  • New subcategories: We've created subcategories in the app directory, so if you click on a category like "Food & Drink", apps are broken down into additional subcategories, like "Order Food" or "View a Menu." We're using your app's description and sample invocations to map users' natural search queries to the new task-based subcategories. The updated labelling taxonomy improves discovery for your app; it will now surface for users in all relevant subcategories depending on its various capabilities. This change will help you communicate to users everything your app can do, and creates new avenues for your app to be discovered – learn more here.
  • Implicit discovery: Implicit discovery is when a user is connected to your app using contextual queries (e.g., "book an appointment to fix my bike"), as opposed to calling for your app by name. We've created a new discovery section of the console to help improve your app's implicit discovery, providing instructions for creating precise action invocation phrases so your app will surface even when a user can't remember its name. Go here to learn more.
  • Badges for family-friendly apps: We're launching a new "For Families" badge on the Google Assistant, designed to help users find apps that are appropriate for all ages. All existing apps in the Apps for Families program will get the badge automatically. Learn about how your app can qualify for the "For Families" badge here.
  • International support: Users will soon be able to find your apps in even more languages because starting today, you can build apps in Spanish (US, MX and ES), Italian, Portuguese (BR) and English (IN). And in the UK, developers can now start building apps that have transactional capabilities. Watch the internationalization video to learn how to support multiple languages with Actions on Google.

Creating a more interactive user experience

Helping users find your app is one thing, but making sure they have a compelling, meaningful experience once they begin talking to your app is equally important – we're releasing some new features to help:
  • Speaker to phone transfer: We're launching a new API so you can develop experiences that start with the Assistant on voice-activated speakers like Google Home and can be passed off to users' phones. Need to send a map or complete a transaction using a phone? Check out the example below and click here to learn more.
  • Build personalized apps: To create a more personal experience for users, you can now enable your app to remember select information and preferences. Learn more here.
  • Better SSML: We recently rolled out an update to the web simulator which includes a new SSML audio design experience. We now give you more options for creating natural, quality dialog using newly supported SSML tags, including <prosody>, <emphasis>, <audio> and others. The new tag <par> is coming soon and lets you add mood and richness, so you can play background music and ambient sounds while a user is having a conversation with your app. To help you get started, we've added over 1,000 sounds to the sound library. Listen to a brief SSML audio experiment that shows off some of the new features here 🔊.
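As a rough sketch of how those tags combine (the greeting text, attribute values, and the audio URL below are made up for illustration; real sound URLs come from the sound library itself):

```xml
<speak>
  <emphasis level="moderate">Welcome back!</emphasis>
  <prosody rate="slow" pitch="-2st">Let's pick up where you left off.</prosody>
  <!-- Hypothetical placeholder clip; substitute a real sound-library URL. -->
  <audio src=""></audio>
</speak>
```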
  • Cancel event: Today when a user says "cancel" to end the conversation, your app never gets a chance to respond with a polite farewell message. Now you can get one last request to your webhook that you can use to clean up your fulfillment logic and respond to the user before they exit.
  • Account linking in conversation: Until today, users had to link their account to your app at the beginning of the interaction, before they had a chance to decide whether or not account linking was the right choice. With the updated AskForSignIn API, we're giving you the option of prompting users to link their account to your app at the most appropriate time of the experience.

Re-engaging with your users

To keep users coming back to your app, day after day, we're adding some additional features that you can experiment with – these are available this week for you to start testing and will roll out to users soon.
  • Daily updates: At the end of a great interaction with your app, a user might want to be notified of similar content from your app every day. To enable that we will add a suggestion chip prompting the user to sign up for a daily update. Check out the example below and go to the discovery section of the console to configure daily updates.
  • Push notifications: We're launching a new push notification API, enabling your app to push asynchronous updates to users. For the day trader who's looking for the best time to sell stock options, or the frugal shopper waiting for the big sale to buy a new pair of shoes, these alerts will show up as system notifications on the phone (and later to the Assistant on voice-activated speakers like Google Home).
  • Directory analytics: To give you more insight into how users are interacting with your app on the mobile directory so you can continue improving the experience for users, we've updated the analytics tools in the console. You will be able to find information about your app's rating, the number of pageviews, along with the number of conversations that were initiated from your app directory listing.
Phew! I know that was a lot to cover, but that was only a brief overview of the updates we've made and we can't wait to see how you'll use these tools to unlock the Google Assistant's potential in new and creative ways.


14th November 2017 |

Announcing TensorFlow Lite
Posted by the TensorFlow team
Today, we're happy to announce the developer preview of TensorFlow Lite, TensorFlow’s lightweight solution for mobile and embedded devices! TensorFlow has always run on many platforms, from racks of servers to tiny IoT devices, but as the adoption of machine learning models has grown exponentially over the last few years, so has the need to deploy them on mobile and embedded devices. TensorFlow Lite enables low-latency inference of on-device machine learning models.

It is designed from scratch to be:
  • Lightweight: Enables inference of on-device machine learning models with a small binary size and fast initialization/startup.
  • Cross-platform: A runtime designed to run on many different platforms, starting with Android and iOS.
  • Fast: Optimized for mobile devices, including dramatically improved model loading times and support for hardware acceleration.
More and more mobile devices today incorporate purpose-built custom hardware to process ML workloads more efficiently. TensorFlow Lite supports the Android Neural Networks API to take advantage of these new accelerators as they become available.
TensorFlow Lite falls back to optimized CPU execution when accelerator hardware is not available, which ensures your models can still run fast on a large set of devices.


The following diagram shows the architectural design of TensorFlow Lite:
The individual components are:
  • TensorFlow Model: A trained TensorFlow model saved on disk.
  • TensorFlow Lite Converter: A program that converts the model to the TensorFlow Lite file format.
  • TensorFlow Lite Model File: A model file format based on FlatBuffers that has been optimized for maximum speed and minimum size.
The TensorFlow Lite Model File is then deployed within a Mobile App, where:
  • Java API: A convenience wrapper around the C++ API on Android
  • C++ API: Loads the TensorFlow Lite Model File and invokes the Interpreter. The same library is available on both Android and iOS
  • Interpreter: Executes the model using a set of operators. The interpreter supports selective operator loading; without operators it is only 70KB, and 300KB with all the operators loaded. This is a significant reduction from the 1.5M required by TensorFlow Mobile (with a normal set of operators).
  • On select Android devices, the Interpreter will use the Android Neural Networks API for hardware acceleration, or default to CPU execution if none are available.
Developers can also implement custom kernels using the C++ API, that can be used by the Interpreter.


TensorFlow Lite already has support for a number of models that have been trained and optimized for mobile:
  • MobileNet: A class of vision models able to identify across 1000 different object classes, specifically designed for efficient execution on mobile and embedded devices
  • Inception v3: An image recognition model, similar in functionality to MobileNet, that offers higher accuracy but also has a larger size
  • Smart Reply: An on-device conversational model that provides one-touch replies to incoming conversational chat messages. First-party and third-party messaging apps use this feature on Android Wear.
Inception v3 and MobileNets have been trained on the ImageNet dataset. You can easily retrain these on your own image datasets through transfer learning.
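As a hedged sketch of that transfer-learning workflow in Keras (the five-class head and frozen base are illustrative choices; pass weights='imagenet' to start from the pretrained ImageNet weights instead of random ones):

```python
import tensorflow as tf

# Load MobileNet without its classification head and freeze its weights.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3),
    include_top=False,
    weights=None,  # use 'imagenet' to start from pretrained weights
    pooling='avg')
base.trainable = False

# Attach a new head for your own image classes (5 here, as a placeholder).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
```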

What About TensorFlow Mobile?

As you may know, TensorFlow already supports mobile and embedded deployment of models through the TensorFlow Mobile API. Going forward, TensorFlow Lite should be seen as the evolution of TensorFlow Mobile, and as it matures it will become the recommended solution for deploying models on mobile and embedded devices. With this announcement, TensorFlow Lite is made available as a developer preview, and TensorFlow Mobile is still there to support production apps.
The scope of TensorFlow Lite is large and still under active development. With this developer preview, we have intentionally started with a constrained platform to ensure performance on some of the most important common models. We plan to prioritize future functional expansion based on the needs of our users. The goals for our continued development are to simplify the developer experience, and enable model deployment for a range of mobile and embedded devices.
We are excited that developers are getting their hands on TensorFlow Lite. We plan to support and address our external community with the same intensity as the rest of the TensorFlow project. We can't wait to see what you can do with TensorFlow Lite.
For more information, check out the TensorFlow Lite documentation pages.
Stay tuned for more updates.
Happy TensorFlow Lite coding!


15th November 2017 |

Reminder: Grow with Google scholarship window closes soon
Posted by Peter Lubbers, Head of Google Developer Training

Last month, we announced the 50,000 Grow with Google scholarship challenge in partnership with Udacity. And today, we want to remind you to apply for the programs before the application window closes on November 30th.

In case you missed the announcement details, the Google-Udacity curriculum was created to help developers get the training they need to enter the workforce as Android or mobile web developers. Whether you're an experienced programmer looking for a career change or a novice looking for a start, the courses and the Nanodegree programs are built with your career goals in mind and prepare you for Google's Associate Android Developer and Mobile Web Specialist developer certifications.

The scholarship challenge is an exciting chance to learn valuable skills to launch or advance your career as a mobile or web developer. The program leverages world-class curriculum, developed by experts from Google and Udacity. These courses are completely free, and as a reminder the top 5,000 students at the end of the challenge will earn a full Nanodegree scholarship to one of the four Nanodegree programs in Android or web development.

To learn more, visit and submit your application before the scholarship window closes!


10th November 2017 |

Best practices to succeed with Universal App campaigns
Posted by Sissie Hsiao, VP of Product, Mobile App Advertising

It's almost time to move all your AdWords app install campaigns to Universal App campaigns (UAC). Existing Search, Display and YouTube app promo campaigns will stop running on November 15th, so it's important to start upgrading to UAC as soon as possible.

With UAC, you can reach the right people across all of Google's largest properties like Google Search, Google Play, YouTube and the Google Display Network — all from one campaign. Marketers who are already using UAC to optimize in-app actions are seeing 140% more conversions per dollar, on average, than other Google app promotion products.1

One of my favorite apps, Maven, a car sharing service from General Motors (GM), is already seeing great success with UAC. According to Kristen Alexander, Marketing Manager: "Maven believes in connecting people with the moments that matter to them. This car sharing audience is largely urban millennials and UAC helps us find this unique, engaged audience across the scale of Google. UAC for Actions helped us increase monthly Android registrations in the Maven app by 51% between April and June."

Join Kristen and others who are already seeing better results with UAC by following some best practices, which I've shared in these blog posts:

Steer Performance with Goals

Create a separate UAC for each type of app user that you'd like to acquire — whether that's someone who will install your app or someone who will perform an in-app action after they've installed. Then increase the daily campaign budget for the UAC that's more important right now.

Optimize for the Right In-app Action

Track all important conversion events in your app to learn how users engage with it. Then pick an in-app action that's valuable to your business and is completed by at least 10 different people every day. This will give UAC enough data to find more users who will most likely complete the same in-app action.

Steer Performance with Creative Assets

Supply a healthy mix of creative assets (text, images and videos) that UAC can use to build ads optimized for your goal. Then use the Creative Asset Report to identify which assets are performing "Best" and which ones you should replace.

Follow these and other best practices to help you get positive results from your Universal App campaigns once you upgrade.


  1. Google Internal Data, July 2017 


10th November 2017 |

Migrating to the new Play Games Services APIs

In version 11.6.0 of the Google Play services SDK, we are introducing a major change to how the APIs are structured. We've deprecated the GoogleApiClient class and introduced a decoupled collection of feature-specific API clients. This reduces the amount of boilerplate code required to access the Play Games Services APIs.

The change in the APIs is meant to make the APIs easier to use, thread-safe, and more memory efficient. The new API model also makes use of the Task model to give better separation of the concerns between your activity and handling the asynchronous results of the APIs. This programming model first appeared in Firebase and was well received. To dive in deeper into the wonderful world of Tasks, check out the blog series on tasks and the Tasks API developer guide.

As always, the developer documentation is a reliable source of information on these APIs, as well as all the other Google resources for developers. The Android Basic Samples project and Client Server Skeleton project have both been updated to use the Play Services API clients so you can see them in action. These sample projects are also the best place to report issues or problems you encounter using these APIs.

These changes seem big, but fear not! Using the Play Services API clients is very simple and will result in much less clutter in your code. There are three parts to using the API clients:

  1. Authentication is now explicitly done using the Google Sign-In client. This makes it clearer how to control the authentication process and the difference between a Google Sign-In identity and the Play Games Services identity.
  2. Convert all the Games.category static method calls to use the corresponding API client methods. This also includes converting PendingResult usages to use the Task class. The Task model helps greatly with separation of concerns in your code, and reduces the amount of multi-threaded complexity since tasks listeners are called back on the UI thread.
  3. Handling multi-player invitations is done explicitly through the turn-based and real-time multiplayer API clients. Since GoogleApiClient is no longer used, there is no access to the "connection hint" object which contains multi-player invitations. The invitation is now accessed through an explicit method call.


The details of the authentication process are found on the Google Developers website.

The most common use case for authentication is to use the DEFAULT_GAMES_SIGN_IN option. This option enables your game to use the games profile for the user. Since a user's games profile contains only a gamer tag (which your game can display like a name) and an avatar image, the actual identity of the user is protected. This eliminates the need for the user to consent to sharing any additional personal information, reducing the friction between the user and your game.

Note: The only additional Google Sign-In option that can be requested while still using only the games profile is requestServerAuthCode(). All others, such as requestIdToken(), require the user to consent to additional information being shared. Requesting them also prevents users from having a zero-tap sign-in experience.

One more note: if you are using the Snapshots API to save game data, you need to add the Drive.SCOPE_APPFOLDER scope when building the sign-in options:

private GoogleSignInClient signInClient;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    // other code here

    GoogleSignInOptions signInOption = new GoogleSignInOptions.Builder(
            GoogleSignInOptions.DEFAULT_GAMES_SIGN_IN)
            // If you are using Snapshots, add the Drive scope.
            .requestScopes(Drive.SCOPE_APPFOLDER)
            // If you need a server side auth code, request it here.
            .build();
    signInClient = GoogleSignIn.getClient(this, signInOption);
}

Since there can only be one user account signed in at a time, it's good practice to attempt a silent sign-in when the activity is resuming. This will have the effect of automatically signing in the user if it is valid to do so. It will also update or invalidate the signed-in account if there have been any changes, such as the user signing out from another activity.

private void signInSilently() {
    GoogleSignInOptions signInOption =
            new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_GAMES_SIGN_IN).build();
    GoogleSignInClient signInClient = GoogleSignIn.getClient(this, signInOption);
    signInClient.silentSignIn().addOnCompleteListener(this,
            new OnCompleteListener<GoogleSignInAccount>() {
                @Override
                public void onComplete(@NonNull Task<GoogleSignInAccount> task) {
                    // Handle UI updates based on being signed in or not.
                    // It is OK to cache the account for later use.
                    mSignInAccount = task.getResult();
                }
            });
}

@Override
protected void onResume() {
    super.onResume();
    signInSilently();
}

Signing in interactively is done by launching a separate intent. This is great! No more checking to see if errors have resolutions and then trying to call the right APIs to resolve them. Just start the activity, and get the result in onActivityResult().

Intent intent = signInClient.getSignInIntent();
startActivityForResult(intent, RC_SIGN_IN);

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent intent) {
    super.onActivityResult(requestCode, resultCode, intent);
    if (requestCode == RC_SIGN_IN) {
        // The Task returned from this call is always completed, no need to attach
        // a listener.
        Task<GoogleSignInAccount> task =
                GoogleSignIn.getSignedInAccountFromIntent(intent);
        try {
            GoogleSignInAccount account = task.getResult(ApiException.class);
            // Signed in successfully, show authenticated UI.
        } catch (ApiException apiException) {
            // The ApiException status code indicates the
            // detailed failure reason.
            // Please refer to the GoogleSignInStatusCodes class reference
            // for more information.
            Log.w(TAG, "signInResult:failed code=" + apiException.getStatusCode());
            new AlertDialog.Builder(MainActivity.this)
                    .setMessage("Sign-in failed")
                    .setNeutralButton(android.R.string.ok, null)
                    .show();
        }
    }
}

To determine if a user is signed in, you can call the GoogleSignIn.getLastSignedInAccount() method. This returns the GoogleSignInAccount for the user that is signed in, or null if no user is signed in.

if (GoogleSignIn.getLastSignedInAccount(/*context*/ this) != null) {
    // There is a user signed in, handle updating the UI.
} else {
    // Not signed in; update the UI.
}

Signing out is done by calling GoogleSignInClient.signOut(). There is no longer a Games specific sign-out.

signInClient.signOut().addOnCompleteListener(this,
        new OnCompleteListener<Void>() {
            @Override
            public void onComplete(@NonNull Task<Void> task) {
                // The user is signed out; update the UI.
            }
        });

Using Games Services API clients

In previous versions of Play Games Services, the general pattern of calling an API was something like this:

PendingResult<Stats.LoadPlayerStatsResult> result =
        Games.Stats.loadPlayerStats(mGoogleApiClient, false /* forceReload */);
result.setResultCallback(new ResultCallback<Stats.LoadPlayerStatsResult>() {
    @Override
    public void onResult(Stats.LoadPlayerStatsResult result) {
        Status status = result.getStatus();
        if (status.isSuccess()) {
            PlayerStats stats = result.getPlayerStats();
            if (stats != null) {
                Log.d(TAG, "Player stats loaded");
                if (stats.getDaysSinceLastPlayed() > 7) {
                    Log.d(TAG, "It's been longer than a week");
                }
                if (stats.getNumberOfSessions() > 1000) {
                    Log.d(TAG, "Veteran player");
                }
                if (stats.getChurnProbability() == 1) {
                    Log.d(TAG, "Player is at high risk of churn");
                }
            }
        } else {
            Log.d(TAG, "Failed to fetch Stats Data status: "
                    + status.getStatusMessage());
        }
    }
});

The API was accessed from a static field on the Games class and returned a PendingResult, to which you added a listener in order to get the result.

Now things have changed slightly. There is a static method to get the API client from the Games class, and the Task class has replaced the PendingResult class.

As a result, the new code looks like this:

GoogleSignInAccount mSignInAccount = GoogleSignIn.getLastSignedInAccount(this);

Games.getPlayerStatsClient(this, mSignInAccount).loadPlayerStats(true)
        .addOnCompleteListener(new OnCompleteListener<AnnotatedData<PlayerStats>>() {
            @Override
            public void onComplete(Task<AnnotatedData<PlayerStats>> task) {
                try {
                    AnnotatedData<PlayerStats> statsData =
                            task.getResult(ApiException.class);
                    if (statsData.isStale()) {
                        Log.d(TAG, "using cached data");
                    }
                    PlayerStats stats = statsData.get();
                    if (stats != null) {
                        Log.d(TAG, "Player stats loaded");
                        if (stats.getDaysSinceLastPlayed() > 7) {
                            Log.d(TAG, "It's been longer than a week");
                        }
                        if (stats.getNumberOfSessions() > 1000) {
                            Log.d(TAG, "Veteran player");
                        }
                        if (stats.getChurnProbability() == 1) {
                            Log.d(TAG, "Player is at high risk of churn");
                        }
                    }
                } catch (ApiException apiException) {
                    int status = apiException.getStatusCode();
                    Log.d(TAG, "Failed to fetch Stats Data status: "
                            + status + ": " + task.getException());
                }
            }
        });

So, as you can see, the change is not too big, but you will gain all the goodness of the Task API, and not have to worry about the GoogleApiClient lifecycle management.

The pattern of changes is the same for all the APIs. If you need more information, you can consult the developer website. For example, if you used Games.Achievements, you now need to use Games.getAchievementsClient().

The last major change to the Play Games Services APIs is the introduction of a new API class, GamesClient. This class handles support methods such as setGravityForPopups(), getSettingsIntent(), and also provides access to the multiplayer invitation object when your game is launched from a notification.

Previously the onConnected() method was called with a connection hint. This hint was a Bundle object that could contain the invitation that was passed to the activity when starting.

Now using the GamesClient API, if there is an invitation, your game should call signInSilently(); this call will succeed since the user is known from the invitation. Then retrieve the activation hint and process the invitation if present by calling GamesClient.getActivationHint():

Games.getGamesClient(MainActivity.this, mSignInAccount)
        .getActivationHint()
        .addOnCompleteListener(new OnCompleteListener<Bundle>() {
            @Override
            public void onComplete(@NonNull Task<Bundle> task) {
                try {
                    Bundle hint = task.getResult(ApiException.class);
                    if (hint != null) {
                        Invitation inv =
                                hint.getParcelable(Multiplayer.EXTRA_INVITATION);
                        if (inv != null && inv.getInvitationId() != null) {
                            // retrieve and cache the invitation ID
                        }
                    }
                } catch (ApiException apiException) {
                    Log.w(TAG, "getActivationHint failed: " + apiException);
                }
            }
        });

Handling failure

When a method call fails, Task.isSuccessful() will return false, and information about the failure can be accessed by calling Task.getException(). In some cases the exception is simply a non-success return value from the API call. You can check for this by casting to an ApiException:

if (task.getException() instanceof ApiException) {
    ApiException apiException = (ApiException) task.getException();
    status = apiException.getStatusCode();
}

In other cases, a MatchApiException can be returned, which contains an updated match data structure. It can be retrieved in a similar manner:

if (task.getException() instanceof MatchApiException) {
    MatchApiException matchApiException =
            (MatchApiException) task.getException();
    status = matchApiException.getStatusCode();
    match = matchApiException.getMatch();
} else if (task.getException() instanceof ApiException) {
    ApiException apiException = (ApiException) task.getException();
    status = apiException.getStatusCode();
}

If the status code is SIGN_IN_REQUIRED, this indicates that the player needs to be re-authenticated. To do this, call GoogleSignInClient.getSignInIntent() to sign in the player interactively.


The change from GoogleApiClient usage to more loosely coupled API clients will provide the benefits of less boilerplate code, clearer usage patterns, and thread safety. As you migrate your current game to the API clients, refer to these resources:

Best practices for Games:

Play Games Services Samples:

Android Basic Samples

Client Server Skeleton



8th November 2017 |

Fun new ways developers are experimenting with voice interaction
Posted by Amit Pitaru, Creative Lab

Voice interaction has the potential to simplify the way we use technology. And with Dialogflow, Actions on Google, and Speech Synthesis API, it's becoming easier for any developer to create voice-based experiences. That's why we've created Voice Experiments, a site to showcase how developers are exploring voice interaction in all kinds of exciting new ways.

The site includes a few experiments that show how voice interaction can be used to explore music, gaming, storytelling, and more. MixLab makes it easier for anyone to create music, using simple voice commands. Mystery Animal puts a new spin on a classic game. And Story Speaker lets you create interactive, spoken stories by just writing in a Google Doc – no coding required.

You can try the experiments through the Google Assistant on your phone and on voice-activated speakers like the Google Home. Or you can try them on the web using a browser like Chrome.

It's still early days for voice interaction, and we're excited to see what you will make. Visit to play with the experiments or submit your own.


7th November 2017 |

Announcing TensorFlow r1.4
Posted by the TensorFlow Team

TensorFlow release 1.4 is now public, and it's a big one! We're happy to announce a number of new and exciting features that we hope everyone will enjoy.


Keras

In 1.4, Keras has graduated from tf.contrib.keras to the core package tf.keras. Keras is a hugely popular machine learning framework, consisting of high-level APIs to minimize the time between your ideas and working implementations. Keras integrates smoothly with other core TensorFlow functionality, including the Estimator API. In fact, you may construct an Estimator directly from any Keras model by calling the tf.keras.estimator.model_to_estimator function. With Keras now in TensorFlow core, you can rely on it for your production workflows.
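As a quick sketch of what this looks like (the layer sizes, optimizer, and loss here are purely illustrative, not from the release notes), a small model built with tf.keras can be defined and compiled like so:

```python
import tensorflow as tf

# A minimal tf.keras model; the architecture is illustrative only.
inputs = tf.keras.Input(shape=(4,))
hidden = tf.keras.layers.Dense(8, activation='relu')(inputs)
outputs = tf.keras.layers.Dense(1)(hidden)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='sgd', loss='mse')

# The same model could then be wrapped for the Estimator API:
# estimator = tf.keras.estimator.model_to_estimator(keras_model=model)
```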

To get started with Keras, please read:

To get started with Estimators, please read:


Datasets

We're pleased to announce that the Dataset API has graduated to the core package tf.data. The 1.4 version of the Dataset API also adds support for Python generators. We strongly recommend using the Dataset API to create input pipelines for TensorFlow models because:

  • The Dataset API provides more functionality than the older APIs (feed_dict or the queue-based pipelines).
  • The Dataset API performs better.
  • The Dataset API is cleaner and easier to use.

We're going to focus future development on the Dataset API rather than the older APIs.
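As a minimal sketch (the values and transformations here are illustrative), an input pipeline built with the Dataset API chains a source, transformations, and batching:

```python
import tensorflow as tf

# Build a pipeline: source -> transform -> batch.
dataset = (tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])
           .map(lambda x: x * 2)  # apply a transformation to each element
           .batch(2))             # group elements into batches of two

# The 1.4 release also lets you build a Dataset from a Python generator:
def gen():
    for i in range(3):
        yield i

gen_dataset = tf.data.Dataset.from_generator(gen, output_types=tf.int32)
```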

To get started with Datasets, please read:

Distributed Training & Evaluation for Estimators

Release 1.4 also introduces the utility function tf.estimator.train_and_evaluate, which simplifies training, evaluation, and exporting Estimator models. This function enables distributed execution for training and evaluation, while still supporting local execution.

Other Enhancements

Beyond the features called out in this announcement, 1.4 also introduces a number of additional enhancements, which are described in the Release Notes.

Installing TensorFlow 1.4

TensorFlow release 1.4 is now available using standard pip installation.

# Note: the following command will overwrite any existing TensorFlow
# installation.
$ pip install --ignore-installed --upgrade tensorflow
# Use pip for Python 2.7
# Use pip3 instead of pip for Python 3.x

We've updated the documentation on tensorflow.org to 1.4.

TensorFlow depends on contributors for enhancements. A big thank you to everyone helping out with developing TensorFlow! Don't hesitate to join the community and become a contributor by developing the source code on GitHub or by answering questions on Stack Overflow.

We hope you enjoy all the features in this release.

Happy TensorFlow Coding!


6th November 2017 |

Resonance Audio: Multi-platform spatial audio at scale
Posted by Eric Mauskopf, Product Manager

As humans, we rely on sound to guide us through our environment, help us communicate with others and connect us with what's happening around us. Whether walking along a busy city street or attending a packed music concert, we're able to hear hundreds of sounds coming from different directions. So when it comes to AR, VR, games and even 360 video, you need rich sound to create an engaging immersive experience that makes you feel like you're really there. Today, we're releasing a new spatial audio software development kit (SDK) called Resonance Audio. It's based on technology from Google's VR Audio SDK, and it works at scale across mobile and desktop platforms.

Experience spatial audio in our Audio Factory VR app for Daydream and SteamVR

Performance that scales on mobile and desktop

Bringing rich, dynamic audio environments into your VR, AR, gaming, or video experiences without affecting performance can be challenging. There are often few CPU resources allocated for audio, especially on mobile, which can limit the number of simultaneous high-fidelity 3D sound sources for complex environments. The SDK uses highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality, even on mobile. We're also introducing a new feature in Unity for precomputing highly realistic reverb effects that accurately match the acoustic properties of the environment, reducing CPU usage significantly during playback.
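To give a sense of what Ambisonics-based spatialization means in practice, here is the textbook first-order B-format encoding of a mono source at a horizontal angle, sketched in plain Python. This is an illustrative simplification only, not Resonance Audio's implementation (which uses higher-order Ambisonics and optimized DSP):

```python
import math

def encode_first_order_ambisonics(sample, azimuth_rad):
    # Classic first-order B-format encoding of a mono sample at a given
    # horizontal angle (elevation omitted for brevity).
    w = sample / math.sqrt(2)           # omnidirectional component
    x = sample * math.cos(azimuth_rad)  # front/back component
    y = sample * math.sin(azimuth_rad)  # left/right component
    return w, x, y
```

Encoding every source this way lets a renderer mix any number of sources into a fixed number of channels before decoding for the listener, which is why the per-source cost stays low.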

Using geometry-based reverb by assigning acoustic materials to a cathedral in Unity

Multi-platform support for developers and sound designers

We know how important it is that audio solutions integrate seamlessly with your preferred audio middleware and sound design tools. With Resonance Audio, we've released cross-platform SDKs for the most popular game engines, audio engines, and digital audio workstations (DAW) to streamline workflows, so you can focus on creating more immersive audio. The SDKs run on Android, iOS, Windows, MacOS and Linux platforms and provide integrations for Unity, Unreal Engine, FMOD, Wwise and DAWs. We also provide native APIs for C/C++, Java, Objective-C and the web. This multi-platform support enables developers to implement sound designs once, and easily deploy their project with consistent sounding results across the top mobile and desktop platforms. Sound designers can save time by using our new DAW plugin for accurately monitoring spatial audio that's destined for YouTube videos or apps developed with Resonance Audio SDKs. Web developers get the open source Resonance Audio Web SDK that works in the top web browsers by using the Web Audio API.

DAW plugin for sound designers to monitor audio destined for YouTube 360 videos or apps developed with the SDK

Cutting-edge features for modeling complex sound environments

By providing powerful tools for accurately modeling complex sound environments, Resonance Audio goes beyond basic 3D spatialization. The SDK enables developers to control the direction acoustic waves propagate from sound sources. For example, when standing behind a guitar player, it can sound quieter than when standing in front. And when facing the direction of the guitar, it can sound louder than when your back is turned.

Controlling sound wave directivity for an acoustic guitar using the SDK

Another SDK feature is automatically rendering near-field effects when sound sources get close to a listener's head, providing an accurate perception of distance, even when sources are close to the ear. The SDK also enables sound source spread, by specifying the width of the source, allowing sound to be simulated from a tiny point in space up to a wall of sound. We've also released an Ambisonic recording tool to spatially capture your sound design directly within Unity, save it to a file, and use it anywhere Ambisonic soundfield playback is supported, from game engines to YouTube videos.

If you're interested in creating rich, immersive soundscapes using cutting-edge spatial audio technology, check out the Resonance Audio documentation on our developer site, let us know what you think through GitHub, and show us what you build with #ResonanceAudio on social media; we'll be resharing our favorites.


1st November 2017 |

Google Developers Launchpad Studio works with top startups to tackle healthcare challenges with machine learning
Posted by Malika Cantor, Developer Relations Program Manager

Google is an artificial intelligence-first company. Machine Learning (ML) and Cloud are deeply embedded in our product strategies and have been crucial thus far in our efforts to tackle some of humanity's greatest challenges - like bringing high-quality, affordable, and specialized healthcare to people globally.

In that spirit, we're excited to announce the first four startups to join Launchpad Studio, our 6-month mentorship program tailored to help applied-ML startups build great products using the most advanced tools and technologies available. Working side by side with experts from across Google product and research teams (including Google Cloud, Verily, X, Brain, and ML Research), we intend to support these startups on their journey to build successful applications, and to explore leveraging Google Cloud Platform, TensorFlow, Android, and other Google platforms. Launchpad Studio has also enlisted the expertise of a number of top industry practitioners and thought leaders to ensure Studio startups are successful in practice and in the long term.

These four startups were selected based on the novel ways they've found to apply ML to important challenges in the Healthcare industry. Namely:

  1. Reducing doctor burnout and increasing doctor productivity (Augmedix)
  2. Regaining movement in paralyzed limbs (BrainQ)
  3. Accelerating clinical trials and enabling value-based healthcare (Byteflies)
  4. Detecting sepsis (CytoVale)

Let's take a closer look:

Reducing Doctor Burnout and Increasing Doctor Productivity

Numerous studies have shown that primary care physicians currently spend about half of their workday on the computer, documenting in the electronic health records (EHR).

Augmedix is on a mission to reclaim this time and repurpose it for what matters most: patient care. When doctors use the service by wearing Alphabet's Glass hardware, their documentation and administrative load is almost entirely alleviated. This saves doctors 2-3 hours per day and dramatically improves the doctor-patient experience.

Augmedix has started leveraging advances in deep learning and natural language understanding to accelerate these efficiencies and offer additional value that further improves patient care.

Regaining Movement in Paralyzed Limbs

Motor disability following neuro-disorders such as stroke, spinal cord injury, and traumatic brain injury affects tens of millions of people each year worldwide.

BrainQ's mission is to help these patients back on their feet, restoring their ability to perform activities of daily living. BrainQ is currently conducting clinical trials in leading hospitals in Israel.

The company is developing a medical device that uses artificial intelligence tools to identify high-resolution spectral patterns in patients' brain waves, observed via electroencephalogram (EEG) sensors. These patterns are then translated into a personalized electromagnetic treatment protocol aimed at facilitating targeted neuroplasticity and enhancing patients' recovery.

Accelerating Clinical Trials and Enabling Value-Based Healthcare

Today, sensors are making it easier to collect data about health and disease. However, building a new wearable health application that is clinically validated and end-user friendly is still a daunting task. Byteflies' modular platform makes this whole process much easier and more cost-effective. Through their medical and signal processing expertise, Byteflies has made advances in the interpretation of multiple synchronized vital signs. This multimodal, high-resolution vital sign data is very useful for healthcare and clinical trial applications. With that level of data ingestion comes a great need for automated data processing. Byteflies plans to use ML to transform these data streams into actionable, personalized, and medically relevant data.

Early Sepsis Detection

Research suggests that sepsis kills more Americans than breast cancer, prostate cancer, and AIDS combined. Fortunately, sepsis can often be quickly mitigated if caught early on in patient care.

CytoVale is developing a medical diagnostics platform based on cell mechanics, initially for use in early detection of sepsis in the emergency room setting. It analyzes thousands of cells' mechanical properties using ultra high speed video to diagnose disease in a few minutes. Their technology also has applications in immune activation, cancer detection, research tools, and biodefense.

CytoVale is leveraging recent advances in ML and computer vision in conjunction with their unique measurement approach to facilitate this early detection of sepsis.

More about the Program

Each startup will get tailored, equity-free support, with the goal of successfully completing an ML-focused project during the term of the program. To support this process, we provide resources including deep engagement with engineers in Google Cloud, Google X, and other product teams, as well as Google Cloud credits. We also include both Google Cloud Platform and G Suite training in our engagement with all Studio startups.

Join Us

Based in San Francisco, Launchpad Studio is a fully tailored product development acceleration program that matches top ML startups and experts from Silicon Valley with the best of Google - its people, network, and advanced technologies - to help accelerate applied ML and AI innovation. The program's mandate is to support the growth of the ML ecosystem, and to develop product methodologies for ML.

Launchpad Studio is looking to work with the best and most game-changing ML startups from around the world. While we're currently focused on working with startups in the Healthcare and Biotech space, we'll soon be announcing other industry verticals, and any startup applying AI/ML technology to a specific industry vertical can apply on a rolling basis.


31st October 2017 |

Eager Execution: An imperative, define-by-run interface to TensorFlow
Posted by Asim Shankar and Wolff Dobson, Google Brain Team

Today, we introduce eager execution for TensorFlow.

Eager execution is an imperative, define-by-run interface where operations are executed immediately as they are called from Python. This makes it easier to get started with TensorFlow, and can make research and development more intuitive.

The benefits of eager execution include:

  • Fast debugging with immediate run-time errors and integration with Python tools
  • Support for dynamic models using easy-to-use Python control flow
  • Strong support for custom and higher-order gradients
  • Almost all of the available TensorFlow operations

Eager execution is available now as an experimental feature, so we're looking for feedback from the community to guide our direction.

To understand this all better, let's look at some code. This gets pretty technical; familiarity with TensorFlow will help.

Using Eager Execution

When you enable eager execution, operations execute immediately and return their values to Python without requiring a Session.run() call. For example, to multiply two matrices together, we write this:

import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()

x = [[2.]]
m = tf.matmul(x, x)

It's straightforward to inspect intermediate results with print or the Python debugger.

print(m)
# The 1x1 matrix [[4.]]

Dynamic models can be built with Python flow control. Here's an example of the Collatz conjecture using TensorFlow's arithmetic operations:

a = tf.constant(12)
counter = 0
while not tf.equal(a, 1):
    if tf.equal(a % 2, 0):
        a = a / 2
    else:
        a = 3 * a + 1
    counter += 1

Here, the use of the tf.constant(12) Tensor object promotes all math operations to tensor operations, so all return values will be tensors.


Gradients

Most TensorFlow users are interested in automatic differentiation. Because different operations can occur during each call, we record all forward operations to a tape, which is then played backwards when computing gradients. After we've computed the gradients, we discard the tape.

If you're familiar with the autograd package, the API is very similar. For example:

def square(x):
    return tf.multiply(x, x)

grad = tfe.gradients_function(square)

print(square(3.)) # [9.]
print(grad(3.)) # [6.]

The gradients_function call takes a Python function square() as an argument and returns a Python callable that computes the partial derivatives of square() with respect to its inputs. So, to get the derivative of square() at 3.0, invoke grad(3.0), which is 6.

The same gradients_function call can be used to get the second derivative of square:

gradgrad = tfe.gradients_function(lambda x: grad(x)[0])

print(gradgrad(3.)) # [2.]

As we noted, control flow can cause different operations to run, as in this example:

def abs(x):
    return x if x > 0. else -x

grad = tfe.gradients_function(abs)

print(grad(2.0)) # [1.]
print(grad(-2.0)) # [-1.]

Custom Gradients

Users may want to define custom gradients for an operation, or for a function. This may be useful for multiple reasons, including providing a more efficient or more numerically stable gradient for a sequence of operations.

Here is an example that illustrates the use of custom gradients. Let's start by looking at the function log(1 + e^x), which commonly occurs in the computation of cross entropy and log likelihoods.

def log1pexp(x):
    return tf.log(1 + tf.exp(x))
grad_log1pexp = tfe.gradients_function(log1pexp)

# The gradient computation works fine at x = 0.
print(grad_log1pexp(0.))
# [0.5]
# However, it returns a `nan` at x = 100 due to numerical instability.
print(grad_log1pexp(100.))
# [nan]
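To see why the naive form is unstable, here is the same issue sketched in plain Python (illustrative only, not TensorFlow code). For large x, exp(x) overflows even though log(1 + exp(x)) itself is almost exactly x; in float32, exp(100) is already inf, so the naive gradient exp(x) / (1 + exp(x)) evaluates to inf / inf = nan:

```python
import math

def log1pexp_naive(x):
    # Direct evaluation: math.exp overflows for large x (an OverflowError
    # for Python floats; inf in float32 already around x ~ 89).
    return math.log(1 + math.exp(x))

def log1pexp_stable(x):
    # Algebraically identical rewrite that never overflows:
    # for x > 0, log(1 + e^x) = x + log(1 + e^-x).
    return x + math.log1p(math.exp(-x)) if x > 0 else math.log1p(math.exp(x))

log1pexp_stable(1000.0)   # ~1000.0
# log1pexp_naive(1000.0)  # raises OverflowError
```

The stable rearrangement is the same idea the custom gradient exploits: compute exp(x) once, in an arrangement that avoids the overflowing intermediate.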

We can use a custom gradient for the above function that analytically simplifies the gradient expression. Notice how the gradient function implementation below reuses an expression (tf.exp(x)) that was computed during the forward pass, making the gradient computation more efficient by avoiding redundant computation.

@tfe.custom_gradient
def log1pexp(x):
    e = tf.exp(x)
    def grad(dy):
        return dy * (1 - 1 / (1 + e))
    return tf.log(1 + e), grad
grad_log1pexp = tfe.gradients_function(log1pexp)

# Gradient at x = 0 works as before.
print(grad_log1pexp(0.))
# [0.5]
# And now gradient computation at x = 100 works as well.
print(grad_log1pexp(100.))
# [1.0]

Building models

Models can be organized in classes. Here's a model class that creates a (simple) two layer network that can classify the standard MNIST handwritten digits.

class MNISTModel(tfe.Network):
    def __init__(self):
        super(MNISTModel, self).__init__()
        self.layer1 = self.track_layer(tf.layers.Dense(units=10))
        self.layer2 = self.track_layer(tf.layers.Dense(units=10))

    def call(self, input):
        """Actually runs the model."""
        result = self.layer1(input)
        result = self.layer2(result)
        return result

We recommend using the classes (not the functions) in tf.layers since they create and contain model parameters (variables). Variable lifetimes are tied to the lifetime of the layer objects, so be sure to keep track of them.

Why are we using tfe.Network? A Network is a container for layers and is a tf.layers.Layer itself, allowing Network objects to be embedded in other Network objects. It also contains utilities to assist with inspection, saving, and restoring.

Even without training the model, we can imperatively call it and inspect the output:

# Let's make up a blank input image
model = MNISTModel()
batch = tf.zeros([1, 1, 784])
print(batch.shape)
# (1, 1, 784)
result = model(batch)
print(result)
# tf.Tensor([[[ 0. 0., ...., 0.]]], shape=(1, 1, 10), dtype=float32)

Note that we do not need any placeholders or sessions. The first time we pass in the input, the sizes of the layers' parameters are set.

To train any model, we define a loss function to optimize, calculate gradients, and use an optimizer to update the variables. First, here's a loss function:

def loss_function(model, x, y):
    y_ = model(x)
    return tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_)

And then, our training loop:

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
for (x, y) in tfe.Iterator(dataset):
    grads = tfe.implicit_gradients(loss_function)(model, x, y)
    optimizer.apply_gradients(grads)

implicit_gradients() calculates the derivatives of loss_function with respect to all the TensorFlow variables used during its computation.

We can move computation to a GPU the same way we've always done with TensorFlow:

with tf.device("/gpu:0"):
    for (x, y) in tfe.Iterator(dataset):
        optimizer.minimize(lambda: loss_function(model, x, y))

(Note: we're taking a shortcut here by calling optimizer.minimize directly instead of storing the loss, but you could also use the apply_gradients() method; they are equivalent.)

Using Eager with Graphs

Eager execution makes development and debugging far more interactive, but TensorFlow graphs have a lot of advantages with respect to distributed training, performance optimizations, and production deployment.

The same code that executes operations when eager execution is enabled will construct a graph describing the computation when it is not. To convert your models to graphs, simply run the same code in a new Python session where eager execution hasn't been enabled, as shown in the MNIST example. The value of model variables can be saved and restored from checkpoints, allowing us to move easily between eager (imperative) and graph (declarative) programming. In this way, models developed with eager execution enabled can be easily exported for production deployment.

In the near future, we will provide utilities to selectively convert portions of your model to graphs. In this way, you can fuse parts of your computation (such as the internals of a custom RNN cell) for high performance, while keeping the flexibility and readability of eager execution.

How does my code change?

Using eager execution should be intuitive to current TensorFlow users. There are only a handful of eager-specific APIs; most of the existing APIs and operations work with eager enabled. Some notes to keep in mind:

  • As with TensorFlow generally, we recommend that if you have not yet switched from queues to the Dataset API (tf.data) for input processing, you should. It's easier to use and usually faster. For help, see this blog post and the documentation page.
  • Use object-oriented layers, like tf.layers.Conv2D() or Keras layers; these have explicit storage for variables.
  • For most models, you can write code so that it will work the same for both eager execution and graph construction. There are some exceptions, such as dynamic models that use Python control flow to alter the computation based on inputs.
  • Once you invoke tfe.enable_eager_execution(), it cannot be turned off. To get graph behavior, start a new Python session.

Getting started and the future

This is still a preview release, so you may hit some rough edges. To get started today:

There's a lot more to talk about with eager execution and we're excited… or, rather, we're eager for you to try it today! Feedback is absolutely welcome.


24th October 2017 |

Gmail Add-ons framework now available to all developers
Originally posted by Wesley Chun, G Suite Developer Advocate on the G Suite Blog

Email remains at the heart of how companies operate. That's why earlier this year, we previewed Gmail Add-ons—a way to help businesses speed up workflows. Since then, we've seen partners build awesome applications, and beginning today, we're extending the Gmail add-on preview to include all developers. Now anyone can start building a Gmail add-on.

Gmail Add-ons let you integrate your app into Gmail and extend Gmail to handle quick actions.

They are built using native UI context cards that can include simple text dialogs, images, links, buttons and forms. The add-on appears when relevant, and the user is just a click away from your app's rich and integrated functionality.

Gmail Add-ons are easy to create. You only have to write code once for your add-on to work on both web and mobile, and you can choose from a rich palette of widgets to craft a custom UI. Create an add-on that contextually surfaces cards based on the content of a message. Check out this video to see how we created an add-on to collate email receipts and expedite expense reporting.

As the video shows, there are three components to the app's core functionality. The first component is getContextualAddOn(), the entry point for all Gmail Add-ons, where data is compiled to build the card and render it within the Gmail UI. Since the add-on processes expense reports from email receipts in your inbox, createExpensesCard() parses the relevant data from the message and presents it in a form so your users can confirm or update values before submitting. Finally, submitForm() takes the data and writes a new row in an "expenses" spreadsheet in Google Sheets, which you can edit and tweak, and submit to your boss for approval.

Check out the documentation to get started with Gmail Add-ons, or if you want to see what it's like to build an add-on, go to the codelab to build ExpenseIt step-by-step. While you can't publish your add-on just yet, you can fill out this form to get notified when publishing is opened. We can't wait to see what Gmail Add-ons you build!


23rd October 2017 |

Introducing the Mobile Excellence Award to celebrate great work on Mobile Web
Posted by Shane Cassells, mSite Product Lead, EMEA

We recently partnered with Awwwards, an awards platform for web development and web design, to launch a Mobile Excellence Badge on awwwards.com and a Mobile Excellence Award to recognize great mobile web experiences.

Starting this month, every agency and digital professional that submits their website to Awwwards can be eligible for a Mobile Excellence Badge, a guarantee of the performance of their mobile version. The mobile website's performance will be evaluated by a group of experts and measured against specific criteria based on Google's mobile principles on speed and usability. When a site achieves a minimum score, it will be recognized with the new Mobile Excellence Badge. All criteria are listed at the Mobile Guidelines.

The highest scoring sites with the Mobile Excellence Badge will be nominated for Mobile Site of the Week. One of them will then go on to win Mobile Site of the Month.

All Mobile Sites of the Month will be candidates for Mobile Site of the Year, with the winner receiving a physical award at the Awwwards Conference in Berlin, 8-9 February 2018.

At a time when mobile plays a dominant role in how people access the web, web developers and web designers need to build websites that meet users' expectations. Today, 53% of mobile site visits are abandoned if pages take longer than 3 seconds to load [1], and despite the explosion of mobile usage, the performance and usability of existing mobile sites remain poor and far from meeting those expectations. At the moment, the average page load time is 22 seconds globally [2], which represents a massive missed opportunity for many companies, given the impact of speed on conversion and bounce rates [3].

If you've created a great mobile web experience and want it to receive a Mobile Excellence Badge and compete for the Mobile Excellence Award, submit your request here.


  1. Google Data, Aggregated, anonymized Google Analytics data from a sample of mWeb sites opted into sharing benchmark data, n=3.7K, Global, March 2016 

  2. Google Research, Global, sample of more than 900,000 mWeb sites across Fortune 1000 and Small Medium Businesses. Testing was performed using Chrome and emulating a Nexus 5 device on a globally representative 3G connection (1.6Mbps download speed, 300ms Round-Trip Time (RTT)). Tested on EC2 m3.medium instances, similar in performance to high-end smartphones, Jan. 2017. 

  3. Online Retail Experience Report 2017 


19th October 2017 |

Playtime 2017: Find success on Google Play and grow your business with new Play Console features

Originally Posted by Vineet Buch, Director of Product Management, Google Play Apps & Games on the Android Developers Blog
Today we kicked off our annual global Playtime series with back-to-back events in Berlin and San Francisco. Over the next month, we'll be hearing from many app and game developers in cities around the world. It has been an amazing 2017 for developers on Google Play: there are now more than 8 billion new installs per month globally.

To help you continue to take advantage of this opportunity, we're announcing innovations on Google Play and new features in the Play Console. Follow us on Medium where presenters will be posting their strategies, best practices, and examples to help you achieve your business objectives. As Google Play continues to grow rapidly, we want to help people understand our business. That's why we're also publishing the State of Play 2017 report that will be updated annually to help you stay informed about our progress and how we’re helping developers succeed.

Apps and games on Google Play bring your devices to life, whether they're phones and tablets, Wear devices, TVs, Daydream, or Chromebooks like the new Google Pixelbook. We're making it even easier for people to discover and re-engage with great content on the Play Store.

Recognizing the best

We're investing in curation and editorial to showcase the highest quality apps and games we love. The revamped Editors' Choice is now live in 17 countries and Android Excellence recently welcomed new apps and games. We also continue to celebrate and support indie games, recently announcing winners of the Indie Games Festival in San Francisco and opening the second Indie Games Contest in Europe for nominations.

Discovering great games

We've launched an improved home for games, with trailers and screenshots of gameplay, and two new browse destinations are coming soon: 'New' (for upcoming and trending games) and 'Premium' (for paid games).

Going beyond installs

We’re showing reminders to try games you’ve recently installed and we’re expanding our successful ‘live operations’ banners on the Play Store, telling you about major in-game events in popular games you’ve got on your device. We're also excited to integrate Android Instant Apps with a 'Try it Now' button on store listings. With a single tap, people can jump right into the app experience without installing.

The new games experience on Google Play

The Google Play Console offers tools which help you and your team members at every step of an app’s lifecycle. Use the Play Console to improve app quality, manage releases with confidence, and increase business performance.

Focus on quality

Android vitals were introduced at I/O 2017 and already 65% of top developers are using the dashboard to understand their app's performance. We're adding five new Android vitals and increasing device coverage to help you address issues relating to battery consumption, crashes, and render time. Better performing apps are favored by Google Play's search and discovery algorithms.
We're improving pre-launch reports and enabling them for all developers with no need to opt-in. When you upload an alpha or beta APK, we'll automatically install and test your app on physical, popular devices powered by Firebase Test Lab. The report will tell you about crashes, display issues, security vulnerabilities, and now, performance issues encountered.
When you install a new app, you expect it to open and perform normally. To ensure people installing apps and games from Google Play have a positive experience, and that developers benefit from being part of a trusted ecosystem, we are introducing a policy to disallow apps which consistently exhibit broken experiences on the majority of devices, such as crashing, closing, freezing, or otherwise functioning abnormally. Learn more in the policy center.

Release with confidence

Beta testing lets trusted users try your app or game before it goes to production so you can iterate on your ideas and gather feedback. You can now target alpha and beta tests to specific countries. This allows you to, for example, beta test in a country you're about to launch in, while people in other countries receive your production app. We'll be bringing country-targeting to staged rollouts soon.
We've also made improvements to the device catalog. Over 66% of top developers are using the catalog to ensure they provide a great user experience on the widest range of devices. You can now save device searches and see why a specific device doesn't support your app. Navigate to the device catalog and review the terms of service to get started.

Grow your subscriptions business

At I/O 2017 we announced that both the number of subscribers on Play and the subscriptions business revenue doubled in the preceding year. We're making it easier to set up and manage your subscription service with the Play Billing Library and, soon, new test instruments to simplify testing your flows for successful and unsuccessful payments.
We're helping you acquire and retain more subscribers. You can offer shorter free trials, at a minimum of three days, and we will now enforce one free trial at the app level to reduce the potential for abuse. You can opt-in to receive notifications when someone cancels their subscription and we're making it easier for people to restore a canceled subscription. Account hold is now generally available, where you can block access to your service while we get a user to fix a renewal payment issue. Finally, from January 2018 we're also updating our transaction fee for subscribers who are retained for more than 12 months.

Announcing the Google Play Security Reward Program

At Google, we have long enjoyed a close relationship with the security research community. Today we're introducing the Google Play Security Reward Program to incentivize security research into popular Android apps, including Google's own. The program will help us find vulnerabilities and notify developers with security recommendations on how to fix them. We hope to replicate the success of our other reward programs, and we invite developers and the research community to work with us to proactively improve the security of the Google Play ecosystem.

Stay up to date with Google Play news and tips



6th November 2017 |

Grow with Google scholarships for US Android and web developers
Posted by Peter Lubbers, Head of Google Developer Training
Today, we are excited to announce that we are offering 50,000 Udacity challenge scholarships in the United States through the Grow with Google initiative!
In case you missed the announcements in Pittsburgh earlier, the Grow with Google initiative represents Google's commitment to help drive the economic potential of technology through education. In addition to the Nanodegree scholarships, we are offering grants to organizations that train job-seekers with the digital tools they need.
Visit Grow with Google to learn more about this exciting initiative.
The Google-Udacity curriculum is designed to help developers get the training they need to enter the workforce as Android or mobile web developers. Whether you're an experienced programmer looking for a career change or a novice looking for a start, the courses and the Nanodegree programs are built with your career goals in mind, and prepare you for Google's Associate Android Developer and Mobile Web Specialist developer certifications.
Of the 50,000 Challenge Scholarships available, 25,000 will be available for aspiring developers with no experience. We've split the curriculum for new developers between these two courses:
We've also dedicated 25,000 scholarships for developers with more than one year of experience. For these developers, the curriculum will be divided between these two courses:
The top 5,000 students at the end of the challenge will earn a full Nanodegree scholarship to one of the four Nanodegree programs in Android or web development.
The application period closes on November 30th. To learn more about the scholarships and to apply, visit


10th October 2017 |

Introducing Dialogflow, the new name for API.AI
Posted by Ilya Gelfenbeyn, Lead Product Manager, on behalf of the entire Dialogflow team

When we started API.AI, our goal was to provide developers like you with an API to add natural language processing capabilities to your applications, services and devices. We've worked hard towards that goal and accomplished a lot partnering with all of you. But as we've taken a look at our work over the past year and where we're heading, from new features like our Analytics tool to the 33 prebuilt agents, we realized that we were doing so much more than just providing an API. So with that, we'd like to introduce Dialogflow – the new name for API.AI.

Our new name doesn't change the work we're doing with you or our mission. Dialogflow remains your end-to-end platform for building great conversational experiences, and our team will help you share what you've built with millions of users. In fact, here are two new features we've just launched to help you build those great experiences:

  1. In-line code editor: you can now write fulfillment logic, test it, and implement a functional webhook directly in the console.
  2. Multi-lingual agent support: building for multiple languages is now easier than ever. You can now add additional languages and locales to your new or existing agent.
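As a taste of what the in-line editor enables, here's a minimal fulfillment sketch in the style of the 2017-era (v1) webhook format, where the request carries `result.parameters` and the response returns `speech` and `displayText`. The greeting logic and the `name` parameter are made-up examples, not part of any particular agent:

```javascript
// Minimal fulfillment sketch in the v1 webhook style: read a parameter
// from the parsed request body and return a speech/displayText response.
// The greeting intent and the 'name' parameter are purely illustrative.
function buildFulfillment(requestBody) {
  var params = requestBody.result.parameters || {};
  var name = params.name || 'there';
  var reply = 'Hello, ' + name + '!';
  return {
    speech: reply,       // spoken response
    displayText: reply   // text shown on chat surfaces
  };
}
```

In the in-line editor, logic like this would typically run inside the provided Cloud Functions handler, which parses the request and sends the returned object back as JSON.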

Thanks for being a part of API.AI – we can't wait to see what we do together with Dialogflow. Head over to your developer console and give these new features a try. And, as always, contact us if you have any questions.

Hi from the Dialogflow team!


4th October 2017 |

Apps for the Google Assistant: new languages, devices and features!
Posted by Brad Abrams, Product Manager

As you may have seen, it's a big day for the Google Assistant with new features, new devices and new languages coming soon. But it's also a big day for developers like you, as Actions on Google is also coming to new devices and new languages, and getting better for building and sharing apps.

Say hallo, bonjour and kon'nichiwa to apps for the Google Assistant

Actions on Google is already available in English in the US, UK and Australia, and today we're adding new languages to the mix: German (de-DE), French (fr-FR), Japanese (ja-JP), Korean (ko-KR), and both French and English in Canada (en-CA, fr-CA). Starting this week, you can build apps for the Google Assistant in these new languages and soon, they'll be available via the Assistant! Users will soon be able to talk to apps like Zalando, Namatata and Drop the Puck, with more apps on the way. We can't wait to see what you build!

Apps are now available on new devices

Along with the new Pixelbook come apps for the Assistant. As soon as the Pixelbook hits shelves later this year, your apps will just work, with no extra steps from you! That said, as with every new surface, especially one with a screen, it's a good idea to make sure your app is in tip-top shape, including using high-quality images or adding images to make your conversations more visual.

With apps on Pixelbook, you'll be able to reach a whole new audience and give users the chance to explore your app on a bigger screen, while they get things done throughout their day.

And, in case you missed it, we also recently introduced apps on headphones optimized for the Google Assistant, as well as apps via the Assistant on Android TV.

Building Apps for Families

Today we shared how the Assistant is great for families—giving people the chance to connect, explore, learn and have fun with the Assistant. And from trivia to storytelling, you can now build Apps for Families and get a dedicated badge via the Assistant on your phone, letting people know your app is family friendly! Soon, users will be able to say "Ok Google, what's my Justice League superhero?" or, if you're looking for a game, "Ok Google, play Sports Illustrated Kids Trivia". Or "Ok Google, let's learn" for some educational fun.

To participate, you first need to make sure your app complies with the program policies and, after that, simply submit it for review. Once approved it will be live for anyone to try! You can learn more about that here. Apps for Families will only be available in US English at the start.

Apps made easy with templates and more

It's easier than ever to make your first (or fifth!) app. With new templates, you can create your own trivia game, flash card app or personality quiz for the Google Assistant without writing code or doing any complex server configurations. All you have to do is add some questions and answers via a Google Sheet. Within minutes, voilà, you can try it out on your Google Assistant and publish it! And if you want to try one today, just say "Ok Google, Play Planet Quiz."

We even provide pre-defined personalities when you create an app from the templates, offering a voice, tone and natural conversational feel for your app's users, without any additional work on your end.

If you prefer to code your own apps, we put a fresh coat of paint on our Actions Console UI to make it easier to create apps with tools like API.AI.

Transactions with the Assistant on phones

In May we announced that you could start building transactional apps for the Google Assistant on phones and starting this week in the US, you can submit your apps for review! To get a first look at how transactions will work, you'll soon be able to try out 1-800-Flowers, Applebee's, Panera and Ticketmaster.

Ready to give it a try for yourself? You can build and test transactional apps that include payments, status updates and follow-on actions here.

In addition to paying, with transactional apps, a user can see their order history, get status updates and more.

Community Program: Benefits for great apps

And, last up, to support your efforts in building apps for the Google Assistant and celebrate your accomplishments, we created a new developer community program. Starting with up to $200 in monthly Google Cloud credit and an Assistant t-shirt when you publish your first app, the perks and opportunities available to you will grow as you hit milestone after milestone, including your chance to earn a Google Home. And if you've already created an app, don't fret! Your perks are on the way!

Thanks for everything you do to make the Assistant more helpful, fun and interactive! It's been an exciting 10 months to see the platform expand to new languages and devices and to see what you've all created.


3rd October 2017 |

Introducing Cloud Firestore: Our New Document Database for Apps
Originally posted by Alex Dufetel on the Firebase Blog

Today we're excited to launch Cloud Firestore, a fully-managed NoSQL document database for mobile and web app development. It's designed to easily store and sync app data at global scale, and it's now available in beta.

Key features of Cloud Firestore include:

  • Documents and collections with powerful querying
  • iOS, Android, and Web SDKs with offline data access
  • Real-time data synchronization
  • Automatic, multi-region data replication with strong consistency
  • Node, Python, Go, and Java server SDKs

And of course, we've aimed for the simplicity and ease-of-use that is always top priority for Firebase, while still making sure that Cloud Firestore can scale to power even the largest apps.

Optimized for app development

Managing app data is still hard; you have to scale servers, handle intermittent connectivity, and deliver data with low latency.

We've optimized Cloud Firestore for app development, so you can focus on delivering value to your users and shipping better apps, faster. Cloud Firestore:

  • Synchronizes data between devices in real-time. Our Android, iOS, and JavaScript SDKs sync your app data almost instantly. This makes it incredibly easy to build reactive apps, automatically sync data across devices, and build powerful collaborative features. And if you don't need real-time sync, one-time reads are a first-class feature.
  • Uses collections and documents to structure and query data. This data model is familiar and intuitive for many developers. It also allows for expressive queries. Queries scale with the size of your result set, not the size of your data set, so you'll get the same performance fetching 1 result from a set of 100, or 100,000,000.
  • Enables offline data access via a powerful, on-device database. This local database means your app will function smoothly, even when your users lose connectivity. This offline mode is available on Web, iOS and Android.
  • Enables serverless development. Cloud Firestore's client-side SDKs take care of the complex authentication and networking code you'd normally need to write yourself. Then, on the backend, we provide a powerful set of security rules so you can control access to your data. Security rules let you control which users can access which documents, and let you apply complex validation logic to your data as well. Combined, these features allow your mobile app to connect directly to your database.
  • Integrates with the rest of the Firebase platform. You can easily configure Cloud Functions to run custom code whenever data is written, and our SDKs automatically integrate with Firebase Authentication, to help you get started quickly.
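As a sketch of the collection/document query style described above, here's a hedged one-time read using the Web SDK's `collection().where().get()` chain. Note that `db` is assumed to be an already-initialized `firebase.firestore()` instance, and the `cities` collection with its `population` field is a made-up example:

```javascript
// Illustrative one-time read in the Cloud Firestore query style:
// db is assumed to be an initialized firebase.firestore() instance;
// the 'cities' collection and 'population' field are made-up examples.
function getLargeCities(db, minPopulation) {
  return db.collection('cities')
    .where('population', '>=', minPopulation)
    .get()
    .then(function (snapshot) {
      // Each document snapshot exposes its fields via data().
      return snapshot.docs.map(function (doc) { return doc.data(); });
    });
}
```

Because queries scale with the result set rather than the data set, a helper like this performs the same whether `cities` holds a hundred documents or a hundred million.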

Putting the 'Cloud' in Cloud Firestore

As you may have guessed from the name, Cloud Firestore was built in close collaboration with the Google Cloud Platform team.

This means it's a fully managed product, built from the ground up to automatically scale. Cloud Firestore is a multi-region replicated database that ensures once data is committed, it's durable even in the face of unexpected disasters. Not only that, but despite being a distributed database, it's also strongly consistent, removing tricky edge cases to make building apps easier regardless of scale.

It also means that delivering a great server-side experience for backend developers is a top priority. We're launching SDKs for Java, Go, Python, and Node.js today, with more languages coming in the future.

Another database?

Over the last 3 years Firebase has grown to become Google's app development platform; it now has 16 products to build and grow your app. If you've used Firebase before, you know we already offer a database, the Firebase Realtime Database, which helps with some of the challenges listed above.

The Firebase Realtime Database, with its client SDKs and real-time capabilities, is all about making app development faster and easier. Since its launch, it has been adopted by hundreds of thousands of developers, and as its adoption grew, so did its usage patterns. Developers began using the Realtime Database for more complex data and to build bigger apps, pushing the limits of the JSON data model and the performance of the database at scale. Cloud Firestore is inspired by what developers love most about the Firebase Realtime Database, while also addressing its key limitations like data structuring, querying, and scaling.

So, if you're a Firebase Realtime Database user today, we think you'll love Cloud Firestore. However, this does not mean that Cloud Firestore is a drop-in replacement for the Firebase Realtime Database. For some use cases, it may make sense to use the Realtime Database to optimize for cost and latency, and it's also easy to use both databases together. You can read a more in-depth comparison between the two databases here.

We're continuing development on both databases and they'll both be available in our console and documentation.

Get started!

Cloud Firestore enters public beta starting today. If you're comfortable using a beta product you should give it a spin on your next project! Here are some of the companies and startups who are already building with Cloud Firestore:

Get started by visiting the database tab in your Firebase console. For more details, see the documentation, pricing, code samples, performance limitations during beta, and view our open source iOS and JavaScript SDKs on GitHub.

We can't wait to see what you build and hear what you think of Cloud Firestore!


28th September 2017 |

Generating Google Slides from Images using Apps Script

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

Today, we announced a collection of exciting new features in Google Slides—among these is support for Google Apps Script. Now you can use Apps Script for Slides to programmatically create and modify Slides, plus customize menus, dialog boxes and sidebars in the user interface.

Programming presentations with Apps Script

Presentations have come a long way—from casting hand shadows over fires in caves to advances in lighting technology (magic lanterns) to, eventually, (in)famous 35mm slide shows of your Uncle Bob's endless summer vacation. More recently, we have presentation software—like Slides—and developers have been able to write applications to create or update them. This is made even easier with the new Apps Script support for Google Slides. In the latest G Suite Dev Show episode, we demo this new service, walking you through a short example that automatically creates a slideshow from a collection of images.

To keep things simple, the chosen images are already available online, accessible by URL. For each image, a new (blank) slide is added, then the image is inserted. The key to this script is two lines of JavaScript (given an existing presentation and a link to each image):

      var slide = presentation.appendSlide(SlidesApp.PredefinedLayout.BLANK);
      var image = slide.insertImage(link);

The first line of code adds a new slide, while the other inserts an image on the new slide. Both lines are repeated for each image in the collection. While this initial, rudimentary solution works, the slide presentation it creates doesn't exactly fit the bill. It turns out that adding a few more lines makes the application much more useful. See the video for all the details.
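Putting those two lines in context, the surrounding loop can be sketched like this. The function and parameter names are illustrative rather than taken from the video, and `SlidesApp` is the Apps Script Slides service, so this runs in the Apps Script environment rather than a local runtime:

```javascript
// Sketch of the loop around the two key lines above: append one blank
// slide per image URL and insert the image onto it. SlidesApp is the
// Apps Script Slides service; names here are illustrative.
function addImageSlides(presentation, links) {
  links.forEach(function (link) {
    var slide = presentation.appendSlide(SlidesApp.PredefinedLayout.BLANK);
    slide.insertImage(link);
  });
}
```

From the script editor, calling something like `addImageSlides(SlidesApp.getActivePresentation(), links)` would append one image slide per link to the open deck.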

Getting started

To get started, check the documentation to learn more about Apps Script for Slides, or check out the Translate and Progress Bar sample Add-ons. If you want to dig deeper into the code sample from our video, take a look at the corresponding tutorial. And, if you love watching videos, check out our Apps Script video library or other G Suite Dev Show episodes. If you wish to build applications with Google Slides outside of the Apps Script environment and want to use your own development tools, you can do so with the Slides (REST) API—check out its documentation and video library.

With all these options, we look forward to seeing the applications you build with Google Slides!