Google Developers Blog

 

21st September 2018 |

New experimental features for Daydream

Posted by Jonathan Huang, Senior Product Manager, Google AR/VR

Since we first launched Daydream, developers have responded by creating virtual reality (VR) experiences that are entertaining, educational and useful. Today, we're announcing a new set of experimental features for developers to use on the Lenovo Mirage Solo—our standalone Daydream headset—to continue to push the platform forward. Here's what's coming:

Experimental 6DoF Controllers

First, we're adding APIs to support positional controller tracking with six degrees of freedom—or 6DoF—to the Mirage Solo. With 6DoF tracking, you can move your hands more naturally in VR, just like you would in the physical world. To date, this type of experience has been limited to expensive PC-based VR with external tracking.

We've also created experimental 6DoF controllers that use a unique optical tracking system to help developers start building with 6DoF features on the Mirage Solo. Instead of using expensive external cameras and sensors that have to be carefully calibrated, our system uses machine learning and off-the-shelf parts to accurately estimate the 3D position and orientation of the controllers. We're excited about this approach because it can reduce the need for expensive hardware and make 6DoF experiences more accessible to more people.

We've already put these experimental controllers in the hands of a few developers and we're excited for more developers to start testing them soon.

Experimental 6DoF controllers

See-Through Mode

We're also introducing what we call see-through mode, which gives you the ability to see what's right in front of you in the physical world while you're wearing your VR headset.

See-through mode takes advantage of our WorldSense technology, which was built to provide accurate, low-latency tracking. And, because the tracking cameras on the Mirage Solo are positioned approximately eye-distance apart, you also get precise depth perception. The result is a see-through mode good enough to let you play ping pong with your headset on.

Playing ping pong with see-through mode on the Mirage Solo.

The combination of see-through mode and the Mirage Solo's tracking technology also opens up the door for developers to blend the digital and physical worlds in new ways by building Augmented Reality (AR) prototypes. Imagine, for example, an interior designer being able to plan a new layout for a room by adding virtual chairs, tables and decorations on top of the actual space.

Experimental app using objects from Poly, see-through mode and 6DoF Controllers to design a space in our office.

Smartphone Android Apps in VR

Finally, we're introducing the capability to open any smartphone Android app on your Daydream device, so you can use your favorite games, tools and more in VR. For example, you can play the popular indie game Mini Metro on a virtual big screen, so you have more space to view and plan your own intricate public transit system.

Playing Mini Metro on a virtual big screen in VR.

With support for Android Apps in VR, developers will be able to add Daydream VR support to their existing 2D applications without having to start from scratch. The Chrome team re-used the existing 2D interfaces for Chrome Browser Sync, settings and more to provide a feature-rich browsing experience in Daydream.

The Chrome app on Daydream uses the 2D settings within VR.

Try These Features

We've really loved building with these tools and can't wait to see what you do with them. See-through mode and Android Apps in VR will be available for all developers to try soon.

If you're a developer in the U.S., click here to learn more and apply now for an experimental 6DoF controller developer kit.

 

20th September 2018 |

Flutter Release Preview 2: Pixel-Perfect on iOS
Posted by the Flutter Team at Google

Flutter is Google's new mobile app toolkit for crafting beautiful native interfaces on iOS and Android in record time. Today, during the keynote of Google Developer Days in Shanghai, we are announcing Flutter Release Preview 2: our last major milestone before Flutter 1.0.

This release continues the work of completing core scenarios and improving quality that began with our initial beta release in February and continued through the availability of our first Release Preview earlier this summer. The team is now fully focused on completing our 1.0 release.

What's New in Release Preview 2

The theme for this release is pixel-perfect iOS apps. While we designed Flutter with highly brand-driven, tailored experiences in mind, we heard feedback from some of you who wanted to build applications that closely follow the Apple interface guidelines. So in this release we've greatly expanded our support for the "Cupertino" themed controls in Flutter, with an extensive library of widgets and classes that make it easier than ever to build with iOS in mind.

A reproduction of the iOS Settings home page, built with Flutter

A number of new iOS-themed widgets have been added in Flutter Release Preview 2, and many existing widgets have been updated as well.

As ever, the Flutter documentation is the place to go for detailed information on the Cupertino* classes. (Note that at the time of writing, we were still working to add some of these new Cupertino widgets to the visual widget catalog).

We've also made progress on completing other scenarios. Taking a look under the hood, support has been added for executing Dart code in the background, even while the application is suspended. Plugin authors can take advantage of this to create new plugins that execute code upon an event being triggered, such as the firing of a timer or the receipt of a location update. For a more detailed introduction, read this Medium article, which demonstrates how to use background execution to create a geofencing plugin.

Another improvement is a reduction of up to 30% in our application package size on both Android and iOS. Our minimal Flutter app on Android now weighs in at just 4.7MB when built in release mode, a savings of 2MB since we started the effort — and we're continuing to identify further potential optimizations. (Note that while the improvements affect both iOS and Android, you may see different results on iOS because of how iOS packages are built).

Growing Momentum

As many new developers continue to discover Flutter, we're humbled to note that Flutter is now one of the top 50 active software repositories on GitHub.

We declared Flutter "production ready" at Google I/O this year; with Flutter getting ever closer to the stable 1.0 release, many new Flutter applications are being released, with thousands of Flutter-based apps already appearing in the Apple and Google Play stores. These include some of the largest applications on the planet by usage, such as Alibaba (Android, iOS), Tencent Now (Android, iOS), and Google Ads (Android, iOS). Here's a video on how Alibaba used Flutter to build their Xianyu app (Android, iOS), currently used by over 50 million customers in China.

We take customer satisfaction seriously and regularly survey our users. We promised to share the results back with the community, and our most recent survey shows that 92% of developers are satisfied or very satisfied with Flutter and would recommend Flutter to others. When it comes to fast development and beautiful UIs, 79% found Flutter extremely helpful or very helpful in both reaching their maximum engineering velocity and implementing an ideal UI. And 82% of Flutter developers are satisfied or very satisfied with the Dart programming language, which recently celebrated hitting the release milestone for Dart 2.

Flutter's strong community growth can be felt in other ways, too. On StackOverflow, we see fast growing interest in Flutter, with lots of new questions being posted, answered and viewed, as this chart shows:

Number of StackOverflow question views tagged with each of four popular UI frameworks over time

Flutter has been open source from day one. That's by design. Our goal is to be transparent about our progress and encourage contributions from individuals and other companies who share our desire to see beautiful user experiences on all platforms.

Getting Started

How do you upgrade to Flutter Release Preview 2? If you're on the beta channel already, it just takes one command:

$ flutter upgrade
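
If you're not already on the beta channel, you can switch to it first and then upgrade. Here's a minimal sketch, assuming the standard flutter channel command:

$ flutter channel beta
$ flutter upgrade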

You can check that you have Release Preview 2 installed by running flutter --version from the command line. If you have version 0.8.2 or later, you have everything described in this post.

If you haven't tried Flutter yet, now is the perfect time, and flutter.io has all the details to download Flutter and get started with your first app.

When you're ready, there's a whole ecosystem of example apps and code snippets to help you get going. You can find samples from the Flutter team in the flutter/samples repo on GitHub, covering things like how to use Material and Cupertino, approaches for deserializing data encoded in JSON, and more. There's also a curated list of samples that links out to some of the best examples created by the Flutter community.

You can also learn and stay up to date with Flutter through our hands-on videos, newsletters, community articles and developer shows. There are discussion groups, chat rooms, community support, and a weekly online hangout available to you to help you along the way as you build your application. Release Preview 2 is our last release preview. Next stop: 1.0!

 

10th September 2018 |

Build new experiences with the Google Photos Library API
Posted by Jan-Felix Schmakeit, Google Photos Developer Lead

As we shared in May, people create and consume photos and videos in many different ways, and we think it should be easier to do more with the photos people take, across more of the apps and devices we all use. That's why we created the Google Photos Library API: to give you the ability to build photo and video experiences in your products that are smarter, faster, and more helpful.

After a successful developer preview over the past few months, the Google Photos Library API is now generally available. If you want to build and test your own experience, you can visit our developer documentation to get started. You can also express your interest in joining the Google Photos partner program if you are planning a larger integration.

Here's a quick overview of the Google Photos Library API and what you can do:

Whether you're a mobile, web, or backend developer, you can use this REST API to utilize the best of Google Photos and help people connect, upload, and share from inside your app. We are also launching client libraries in multiple languages that will help you get started quicker.

Users have to authorize requests through the API, so they are always in the driver's seat. Here are a few things you can help your users do:

  • Easily find photos, based on
    • what's in the photo
    • when it was taken
    • attributes like media format
  • Upload directly to their photo library or an album
  • Organize albums and add titles and locations
  • Use shared albums to easily transfer and collaborate

Putting machine learning to work in your app is simple too. You can use smart filters, like content categories, to narrow down or exclude certain types of photos and videos and make it easier for your users to find the ones they're looking for.
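
To make that concrete, here's a minimal sketch of a content-category search that calls the REST API directly from Python with the requests library. The access token, category, and page size are illustrative assumptions you would replace with values from your own app and OAuth flow:

import requests

# Illustrative placeholder: an OAuth 2.0 access token granted by the user
# for a Google Photos Library API scope (e.g. photoslibrary.readonly).
ACCESS_TOKEN = 'ya29.your-user-access-token'

# Search the user's library for landscape photos, 25 items per page.
body = {
    'pageSize': 25,
    'filters': {
        'contentFilter': {'includedContentCategories': ['LANDSCAPES']},
        'mediaTypeFilter': {'mediaTypes': ['PHOTO']},
    },
}

resp = requests.post(
    'https://photoslibrary.googleapis.com/v1/mediaItems:search',
    headers={'Authorization': 'Bearer ' + ACCESS_TOKEN},
    json=body)
resp.raise_for_status()

for item in resp.json().get('mediaItems', []):
    print(item['filename'], item['baseUrl'])

Paging through larger result sets works the same way: pass the returned nextPageToken back as pageToken on the next request.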

Thanks to everyone who provided feedback throughout our developer preview; your contributions helped make the API better. You can read our release notes to follow along with any new releases of our API. And, if you've been using the Picasa Web Albums API, here's a migration guide that will help you move to the Google Photos Library API.

 

12th September 2018 |

Sample Dialogs: The Key to Creating Great Actions on Google

Posted by Cathy Pearl, Head of Conversation Design Outreach
Illustrations by Kimberly Harvey

Hi all! I'm Cathy Pearl, head of conversation design outreach at Google. I've been building conversational systems for a while now, starting with IVRs (phone systems) and moving on to multi-modal experiences. I'm also the author of the O'Reilly book Designing Voice User Interfaces. These days, I'm keen to introduce designers and developers to our conversation design best practices so that Actions will provide the best possible user experience. Today, I'll be talking about a fundamental first step when thinking about creating an Action: writing sample dialogs.

So, you've got a cool idea for an Action on Google that you want to build. You've brushed up on Dialogflow, done some codelabs, and figured out which APIs you want to use. You're ready to start coding, right?

Not so fast!

Creating an Action always needs to start with designing an Action. Don't panic; it's not going to slow you down. Planning out the design first will save you time and headaches later, and ultimately produces a better, more usable experience.

In this post, I'll talk about the first and most important component for designing a good conversational system: sample dialogs. Sample dialogs are potential conversational paths a user might take while conversing with your Action. They look a lot like film scripts, with dialog exchanges between your Action and the user. (And, like film scripts, they should be read aloud!) Writing sample dialogs comes before writing code, and even before creating flows.

When I talk to people about the importance of sample dialogs, I get a lot of nods and agreement. But when I go back later and say, "Hey, show me your sample dialogs," I often get a sheepish smile and an excuse as to why they weren't written. Common ones include:

  • "I'm just building a prototype, I can skip that stuff."
  • "I'm not worrying about the words right now—I can tweak that stuff later."
  • "The hard part is all about the backend integration! The words are the easy part."

First off, there is a misconception that "conversation design" (or voice user interface design) is just the top layer of the experience: the words, and perhaps the order of words, that the user will see/hear.

But conversation design goes much deeper. It drives the underlying structure of the experience, which includes:

  • What backend calls are we making?
  • What happens when something fails?
  • What data are we asking the user for?
  • What do we know about the user?
  • What technical constraints do we have, either with the technology itself or our own ecosystem?

In the end, these things manifest as words, to be sure. But thinking of them as "stuff you worry about later" will set you up for failure when it comes time for your user to interact with your Action. For example, without a sample dialog, you might not realize that your prompts all start with the word "Next", making them sound robotic and stilted. Sample dialogs will also show you where you need "glue" words such as "first" and "by the way".

Google has put together design guidelines for building conversational systems. They include an introduction to sample dialogs and why they're important:

Sample dialogs will give you a quick, low-fidelity sense of the "sound-and-feel" of the interaction you're designing. They convey the flow that the user will actually experience, without the technical distractions of code notation, complex flow diagrams, recognition-grammar issues, etc.

By writing sample dialogs, you can informally experiment with and evaluate different design strategies, such as how to promote the discoverability of new features or how to confirm a user's request (for example: should you use an implicit confirmation, an explicit confirmation, or no confirmation at all?).

Check out the Google I/O 2018 Action sample dialogs to see an example. (You can also take a look at the Google I/O 2018 Action code.)

Still not sure if you really need them? Let's hear from a developer who works on Actions, Jessica Dene Earley-Cha, who said in her recent Medium post:

Let's cover how this was built. Before any coding can happen, we need to build a Conversational Design. I originally had skipped this step because I thought that I could build the structure first and then fill in the content (like building a website). However, the structure is tied in with the content. I realized this when I was hitting walls that I thought were technical, but they were there because I didn't have a design.

She makes the great point that designing for conversational systems is different than designing for the web. With a conversational interface, the content itself is part of the structure, so design becomes even more important.

So now that you're (hopefully) convinced, let's discuss four of the common pitfalls developers can avoid by using sample dialogs:

PITFALL #1: Flooding the user with too much information

Suppose you're writing an Action for a bike shop: it can make repair and service appointments, give store hours, and list the latest deals. It's tempting to just start listing out options so the user will know everything they can do. Let's see what a sample dialog looks like using that strategy:

USER
Hey Google, talk to Alyssa's Bike Shop.

ACTION
Hi! Welcome to Alyssa's Bike Shop. Would you like to make an appointment for a repair, cancel an appointment, hear store hours, hear this week's deals, or hear next month's deals?

If you read this prompt out loud to someone else, it will quickly become apparent that too much information is being presented. Humans have a limited capacity for taking in audio, especially if it's the first time they're hearing it.

Here is a better way:

USER
Hey Google, talk to Alyssa's Bike Shop.

ACTION
Hi! Welcome to Alyssa's Bike Shop. I can help you make or cancel an appointment, get store hours, or tell you the latest deals. Which would you like?

Pro tip: an even better experience would be to leave out the "cancel" option if the user doesn't have any.

PITFALL #2: Keeping what your Action can do a secret

Here's a sample dialog representing a common mistake in many conversational experiences. In this example, the user is returning to an Action they've tried previously:

USER
Hey Google, talk to Cathy's Astronomy Trivia Game.

ACTION
This is Cathy's Astronomy Trivia Game. Welcome back!

USER
Uhhh… what can I do??

When we become immersed in our design, we often forget that the user will be coming to the Action without the background we have. What they can do will not always be obvious.

Here's an improvement:

USER
Hey Google, talk to Cathy's Astronomy Trivia Game.

ACTION
This is Cathy's Astronomy Trivia Game. You can play solo, or join a team. What would you like to do?

USER
Uh, I think I'll join a team.

Remember, you don't have to list all your options up front. Instead, try to guide the user through the process in a clear manner, and end with a call to action.

PITFALL #3: Not inviting your user to speak

Another common issue is not making it obvious when it's the user's turn to speak. As humans, we have many ways to let someone know it's their turn: words, intonation, eye gaze, and body language. For things like the Assistant, it's important to end the turn with a clear question or instruction. Here's an example of an Action that leaves the user hanging:

USER
Okay Google, talk to Cool Animal Facts.

ACTION
Welcome to Cool Animal Facts! What animal would you like to know about?

USER
I love penguins!

ACTION
Cool, penguins. Would you like to know about their habitat, or what they like to eat?

USER
Oh, what do they like to eat!

ACTION
Penguins can eat up to 13 pounds of fish, krill and squid a day! And did you know the fathers sit on the eggs?

USER
Yes, I saw…

ACTION
Meanwhile, the mothers go out to get fish.

USER
(thinking) Wait.. am I supposed to say something?

In this example we see two problems. First, the Action asked a rhetorical question. When the user tried to answer, the Action ignored them and kept on talking.

The second problem is that after the fact was given, nothing happened. There was no indication of what the user could do next.

Here's a better way:

ACTION
Penguins can eat up to 13 pounds of fish, krill and squid a day! By the way, male penguins spend time sitting on the eggs, while the females go out to get fish. Now, would you like to hear more about penguins, or ask about another animal?

USER
I'm done, thanks.

ACTION
Sure, thanks for talking to Cool Animal Facts. Goodbye!

Pro tip: Users often want to have an explicit way out, such as saying "goodbye".

Pro tip 2: Make your grammar robust! For the question "Would you like to hear more about penguins, or ask about another animal?" be prepared to handle responses such as "yes", "no", and names of animals.

PITFALL #4: Making your error prompts one size fits all

Gracefully handling the conversation when things go wrong is critical for any conversational system. Writing multiple error prompts for every turn in the conversation can seem daunting, but a one-size-fits-all default like "I'm sorry, I didn't understand" won't help the user. Instead, each error prompt should be based on what's been happening up until that point and where the user currently is. To make this easier, write sample dialogs for error conditions; they will make it much clearer what should go in the prompts.

USER
Okay Google, talk to the Zombie Quiz.

ACTION
Time for the Zombie Quiz! First question: which one of these should you do if you're being chased by a zombie: lock the door, run for the hills, or plant a garden?

USER
(thinking)

ACTION
Sorry, I didn't get that. Which one do you choose?

USER
What? I don't remember them...

ACTION
Sorry, I'm still having trouble. Which one do you choose?

USER
Argh!

ACTION
Sorry, that's incorrect. Question two…

Context-sensitive error handling would have gotten the user back on track:

USER
Okay Google, talk to the Zombie Quiz.

ACTION
Time for the Zombie Quiz! First question: which one of these should you do if you're being chased by a zombie: lock the door, run for the hills, or plant a garden?

USER
(thinking)

ACTION
Which one of these should you do if you're being chased by a zombie: lock the door, run for the hills, or plant a garden?

USER
Uhh.. I'm not sure… let me see...maybe the..

ACTION
You can also say the number of the answer, like one, two, or three. Which one do you choose?

USER
Oh, number three.

ACTION
Correct!

Your pre-flight checklist

I know you're itching to take off and start drawing flows and writing code, but take time to write sample dialogs first. In the long run, it will make your coding easier, and you'll have fewer bugs to fix.

Here's a list of "Dos" to keep in mind when writing sample dialogs:

  • Check out the Conversation Design Guidelines for more help
  • Start your design by using written/spoken sample dialogs; diagrams of the detailed flow can come later
  • Read your sample dialogs out loud!
  • Make each sample dialog one path; they should not include branching
  • Write several "happy path" sample dialogs
  • Write several "error path" sample dialogs
  • Do a "table read" and have people unfamiliar with your sample dialog play the part of the user
  • Share your sample dialogs with everyone involved in building the Action, so everyone's on the same page
  • When testing, compare the actual working Action with the sample dialogs, to ensure it was implemented correctly
  • Iterate, iterate, iterate!

Happy writing!

 

6th September 2018 |

Launchpad Studio announces finance startup cohort, focused on applied-ML

Posted by Rich Hyndman, Global Tech Lead, Google Launchpad

Launchpad Studio is an acceleration program for the world's top startups. Founders work closely with Google and Alphabet product teams and experts to solve specific technical challenges and optimize their businesses for growth with machine learning. Last year we introduced our first applied-ML cohort focused on healthcare.

Today, we are excited to welcome the new cohort of Finance startups selected to participate in Launchpad Studio:

  • Alchemy (USA), bridging blockchain and the real world
  • Axinan (Singapore), providing smart insurance for the digital economy
  • Aye Finance (India), transforming financing in India
  • Celo (USA), increasing financial inclusion through a mobile-first cryptocurrency
  • Frontier Car Group (Germany), investing in the transformation of used-car marketplaces
  • Go-Jek (Indonesia), improving the welfare and livelihoods of informal sectors
  • GuiaBolso (Brazil), improving the financial lives of Brazilians
  • Inclusive (Ghana), verifying identities across Africa
  • m.Paani (India), (em)powering local retailers and the next billion users in India
  • Starling Bank (UK), improving financial health with a 100% mobile-only bank

These Studio startups have been invited from across nine countries and four continents to discuss how machine learning can be utilized for financial inclusion, stable currencies, and identification services. They are defining how ML and blockchain can supercharge efforts to include everyone and ensure greater prosperity for all. Together, data and user behavior are enabling a truly global economy with inclusive and differentiated products for banking, insurance, and credit.

Each startup is paired with a Google product manager to accelerate their product development, working alongside Google's ML research and development teams. Studio provides 1:1 mentoring and access to Google's people, network, thought leadership, and technology.

"Two of the biggest barriers to the large-scale adoption of cryptocurrencies as a means of payment are ease-of-use and purchasing-power volatility. When we heard about Studio and the opportunity to work with Google's AI teams, we were immediately excited as we believe the resulting work can be beneficial not just to Celo but for the industry as a whole." - Rene Reinsberg, Co-Founder and CEO of Celo

"Our technology has accelerated economic growth across Indonesia by raising the standard of living for millions of micro-entrepreneurs including ojek drivers, restaurant owners, small businesses and other professionals. We are very excited to work with Google, and explore more on how artificial intelligence and machine learning can help us strengthen our capabilities to drive even more positive social change not only to Indonesia, but also for the region." - Kevin Aluwi, Co-Founder and CIO of GO-JEK

"At Starling, we believe that data is the key to a healthy financial life. We are excited about the opportunity to work with Google to turn data into insights that will help consumers make better and more-informed financial decisions." - Anne Boden, Founder and CEO of Starling Bank

"At GuiaBolso, we use machine learning in different workstreams, but now we are doubling down on the technology to make our users' experience even more delightful. We see Studio as a way to speed that up." - Marcio Reis, CDO of GuiaBolso

Since launching in 2015, Google Developers Launchpad has become a global network of accelerators and partners with the shared mission of accelerating innovation that solves for the world's biggest challenges. Join us at one of our Regional Accelerators and follow Launchpad's applied ML best practices by subscribing to The Lever.

 

2nd August 2018 |

Google Developers Launchpad introduces The Lever, sharing applied-Machine Learning best practices

Posted by Malika Cantor, Program Manager for Launchpad

The Lever is Google Developers Launchpad's new resource for sharing applied-Machine Learning (ML) content to help startups innovate and thrive. In partnership with experts and leaders across Google and Alphabet, The Lever is operated by Launchpad, Google's global startup acceleration program. The Lever will publish the Launchpad community's experiences of integrating ML into products, and will include case studies, insights from mentors, and best practices from both Google and global thought leaders.

Peter Norvig, Google ML Research Director, and Cassie Kozyrkov, Google Cloud Chief Decision Scientist, are editors of the publication. Hear from them and other Googlers on the importance of developing and sharing applied ML product and business methodologies:

Peter Norvig (Google ML Research, Director): "The software industry has had 50 years to perfect a methodology of software development. In Machine Learning, we've only had a few years, so companies need to pay more attention to the process in order to create products that are reliable, up-to-date, have good accuracy, and are respectful of their customers' private data."

Cassie Kozyrkov (Chief Decision Scientist, Google Cloud): "We live in exciting times where the contributions of researchers have finally made it possible for non-experts to do amazing things with Artificial Intelligence. Now that anyone can stand on the shoulders of giants, process-oriented avenues of inquiry around how to best apply ML are coming to the forefront. Among these is decision intelligence engineering: a new approach to ML, focusing on how to discover opportunities and build towards safe, effective, and reliable solutions. The world is poised to make data more useful than ever before!"

Clemens Mewald (Lead, Machine Learning X and TensorFlow X): "ML/AI has had a profound impact in many areas, but I would argue that we're still very early in this journey. Many applications of ML are incremental improvements on existing features and products. Video recommendations are more relevant, ads have become more targeted and personalized. However, as Sundar said, AI is more profound than electricity (or fire). Electricity enabled modern technology, computing, and the internet. What new products will be enabled by ML/AI? I am convinced that the right ML product methodologies will help lead the way to magical products that have previously been unthinkable."

We invite you to follow the publication, and actively comment on our blog posts to share your own experience and insights.

 

31st July 2018 |

5 Tips for Developing Actions with the New Actions Console

Posted by Zachary Senzer, Product Manager

A couple of months ago at Google I/O, we announced a redesigned Actions console that makes developing your Actions easier than ever before. The new Actions console features a more seamless development experience that guides your workflow from onboarding through deployment, with tailored analytics to help you manage your Actions post-launch. Simply select your use case during onboarding and the Actions console will guide you through the different stages of development.

Here are 5 tips to help you create the best Actions for your content using our new console.

1. Optimize your Actions for new surfaces with theme customization

Part of what makes the Actions on Google ecosystem so special is the vast array of devices that people can use to interact with your Actions. Some of these devices, including phones and our new smart displays, allow users to have rich visual interactions with your content. To help your Actions stand out, you can customize how these visual experiences appear to users of these devices. Simply visit the "Build" tab and go to theme customization in the Actions console where you can specify background images, typography, colors, and more for your Actions.

2. Start to make your Actions easier to discover with built-in intents

Conversational experiences can introduce complexity in how people ask to complete a task related to your Action--a user could ask for a game in thousands of different ways ("play a game for me", "find a maps quiz", "I want some trivia"). Figuring out all of the ways a user might ask for your Action is difficult, so to make this process much easier, we're beginning to map the ways users might ask for your Action into a taxonomy of built-in intents.

We'll start to use the built-in intent you associated with your Action to help users more easily discover your content as we begin testing them with users' queries. We'll continue to add many more built-in intents over the coming months to cover a variety of use cases. In the Actions console, go to the "Build" tab, click "Actions", then "Add Action" and select one to get started.

3. Promote your Actions with Action Links

While we'll continue to improve the ways users find your Actions within the Assistant, we've also made it easier for users to find your Actions outside the Assistant. Driving new traffic to your Actions is as easy as a click with Action Links. You now have the ability to define hyperlinks for each of your Actions to be used on your website, social media, email newsletters, and more. These links will launch users directly into your Action. If used on a desktop, the link will take users to the directory page for your Action, where they'll have the ability to choose the device they want to try your Action on. To configure Action Links in the console, visit the "Build" tab, choose "Actions", and select the Action for which you would like to create a link. That's it!

4. Ensure your Actions are high-quality by testing using our web simulator and alpha/beta environments

The best way to make sure that your Actions are working as intended is to test them using our updated web simulator. In the simulator, you can run through conversational user flows on phone, speaker, and even smart display device types. After you issue a request, you can see the visual response, the request and response JSON, and any errors. For further assistance with debugging errors, you also have the ability to view logs for your Actions.

Another great opportunity to test your Actions is by deploying to limited audiences in alpha and beta environments. By deploying to the alpha environment, your Actions do not need to go through the review process, meaning you can quickly test with your users. After deploying to the beta environment, you can launch your Actions to production whenever you like without additional review. To use alpha and beta environments, go to the "Deploy" tab and click "Release" in the Actions console.

5. Measure your success using analytics

After you deploy your Actions, it's equally important to measure their performance. By visiting the "Measure" tab and clicking "Analytics" in the Actions console, you will be able to view rich analytics on usage, health, and discovery. You can easily see how many people are using and returning to your Actions, how many errors users are encountering, the phrases users are saying to discover your Actions, and much, much more. These insights can help you improve your Actions.


If you're new to the Actions console and looking for a quick way to get started, watch this video for an overview of the development process.

We're so excited to see how you will use the new Actions console to create even more Actions for more use cases, with additional tools to improve and iterate. Happy building!

 

26th July 2018 |

Designing for the Google Assistant on Smart Displays

Posted by Saba Zaidi, Senior Interaction Designer, Google Assistant

Earlier this year we announced Smart Displays, a new category of devices with the Google Assistant built in, that augment voice experiences with immersive visuals. These new, highly visual devices can make it easier to convey complex information, suggest Actions, support transactions, and express your brand. Starting today, Smart Displays are available for purchase in major US retailers, both in-store and online.

Interacting through voice is fast and easy because speaking comes naturally to people, and unlike traditional visual interfaces, language doesn't constrain people to predefined paths. However, in audio-only interfaces it can be difficult to communicate detailed information like lists or tables, and nearly impossible to represent rich content like images, charts or a visual brand identity. Smart Displays allow you to create Actions for the Assistant that can respond to natural conversation, and also display information and represent your brand in an immersive, visual way.

Today we're announcing consumer availability of rich responses optimized for Smart Displays. With rich responses, you can use basic cards, lists, tables, carousels and suggestion chips, which give you an array of visual interactions for your Action, with more visual components coming soon. In addition, you can also create custom themes to more deeply customize your Action's look and feel.

If you've already built a voice-centric Action for the Google Assistant, not to worry, it'll work automatically on Smart Displays. But we highly recommend adding rich responses and custom themes to make your Action even more visually engaging and useful to your users on Smart Displays. Here are a few tips to get you started:

1. Consider using visual components instead of complex voice prompts

Smart Displays offer several visual formats for displaying information and facilitating user input. A carousel of images, a list or a table can help users scan information efficiently and then interact with a quick tap or swipe.

For example, consider a long, spoken prompt like: "Welcome to National Anthems! You can play the national anthems from 20 different countries, including the United States, Canada and the United Kingdom. Which would you like to hear?"

Instead of merely showing the transcript of that whole spoken prompt on the screen, a carousel of country flags makes it easy for users to scroll and tap the anthem they want to hear.

2. Use visual suggestions to streamline the conversation

Suggestion chips are a great way to surface recommendations, aid feature discovery and keep the conversation moving on Smart Displays.

In this example, suggestion chips can help users find the "surprise me" feature, find the most popular anthems, or filter anthems by region.
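
As a rough sketch of how that can look in fulfillment code, here's the kind of JSON a Dialogflow webhook can return to add a card and suggestion chips to a response, written out as a Python dict. It assumes the Dialogflow v2 webhook format with an Actions on Google payload; the spoken text, card content, image URL, and chip labels are all illustrative:

import json

# Illustrative webhook response: a simple spoken/displayed response, a basic
# card, and suggestion chips, using the Actions on Google rich response format.
webhook_response = {
    'payload': {
        'google': {
            'expectUserResponse': True,
            'richResponse': {
                'items': [
                    {'simpleResponse': {
                        'textToSpeech': 'Here are some anthems you might like.'}},
                    {'basicCard': {
                        'title': 'National Anthems',
                        'formattedText': 'Pick a suggestion below, or ask for any country.',
                        'image': {
                            'url': 'https://example.com/anthem-flags.png',
                            'accessibilityText': 'A grid of country flags'}}},
                ],
                'suggestions': [
                    {'title': 'Surprise me'},
                    {'title': 'Most popular'},
                    {'title': 'By region'},
                ],
            },
        }
    }
}

print(json.dumps(webhook_response, indent=2))

On a voice-only device the chips and card simply aren't rendered, so the same response still works there.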

3. Express your brand with themes

You can take advantage of new custom themes to differentiate your experience and represent your brand's persona, choosing a custom voice, background image or color, font style, or the shape of your cards to match your branding.

For example, an Action like California Surf Report could be themed in a more immersive and customized way.

4. Check out our library of developer resources

We offer more tips on designing and building for Smart Displays and other visual devices in our conversation design site and in our talk from I/O about how to design Actions across devices.

Then head to our documentation to learn how to customize the visual appearance of your Actions with rich responses. You can also test and tinker with customizations for Smart Displays in the Actions Console simulator.

Don't forget that once you publish your first Action you can join our community program* and receive your exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit.

We can't wait to see—quite literally—what you build next! Thanks for being a part of our community, and as always, if you have ideas or requests that you'd like to share with our team, don't hesitate to join the conversation.


*Some countries are not eligible to participate in the developer community program; please review the terms and conditions.

 

25th July 2018 |

New AIY Edge TPU Boards

Posted by Billy Rutledge, Director of AIY Projects

Over the past year and a half, we've seen more than 200K people build, modify, and create with our Voice Kit and Vision Kit products. Today at Cloud Next we announced two new devices to help professional engineers build new products with on-device machine learning (ML) at their core: the AIY Edge TPU Dev Board and the AIY Edge TPU Accelerator. Both are powered by Google's Edge TPU and represent our first steps towards expanding AIY into a platform for experimentation with on-device ML.

The Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite ML models on your device. We've learned that performance-per-watt and performance-per-dollar are critical benchmarks when processing neural networks within a small footprint. The Edge TPU delivers both in a package that's smaller than the head of a penny. It can accelerate ML inferencing on device, or can pair with Google Cloud to create a full cloud-to-edge ML stack. In either configuration, by processing data directly on-device, a local ML accelerator increases privacy, removes the need for persistent connections, reduces latency, and allows for high performance using less power.
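
The Edge TPU toolchain itself isn't covered here, but as a rough sketch of the kind of on-device workload it accelerates, this is what running a TensorFlow Lite model looks like with the standard Python Interpreter API. The model path and dummy input are placeholder assumptions, and the Edge TPU-specific compilation and delegation steps are not shown (depending on your TensorFlow version, the Interpreter may live under tf.contrib.lite rather than tf.lite):

import numpy as np
import tensorflow as tf

# Placeholder path; for the Edge TPU this would be a model compiled for the
# chip rather than a plain CPU TensorFlow Lite model.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy tensor matching the model's expected input shape and dtype.
dummy_input = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy_input)

# Run inference on-device and read back the first output tensor.
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]['index'])
print(output.shape)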

The AIY Edge TPU Dev Board is an all-in-one development board that allows you to prototype embedded systems that demand fast ML inferencing. The baseboard provides all the peripheral connections you need to effectively prototype your device — including a 40-pin GPIO header to integrate with various electrical components. The board also features a removable system-on-module (SOM) daughter board that can be directly integrated into your own hardware once you're ready to scale.

The AIY Edge TPU Accelerator is a neural network coprocessor for your existing system. This small USB-C stick can connect to any Linux-based system to perform accelerated ML inferencing. The casing includes mounting holes for attachment to host boards such as a Raspberry Pi Zero or your custom device.

On-device ML is still in its early days, and we're excited to see how these two products can be applied to solve real world problems — such as increasing manufacturing equipment reliability, detecting quality control issues in products, tracking retail foot-traffic, building adaptive automotive sensing systems, and more applications that haven't been imagined yet.

Both devices will be available online this fall in the US with other countries to follow shortly.

For more product information visit g.co/aiy and sign up to be notified as products become available.

 

24th July 2018 |

New Dialogflow features: how to use them to expand your Actions’ customer support capabilities

Posted by Mary Chen, Product Marketing Manager, and Ralfi Nahmias, Product Manager, Dialogflow

Today at Google Cloud Next '18, Dialogflow is introducing several new beta features to expand conversational capabilities for customer support and contact centers. Let's take a look at how three of these features can be used with the Google Assistant to improve the customer care experience for your Actions.

Create Actions smarter and faster with Knowledge Connectors Beta

Building conversational Actions for content-heavy use cases, such as FAQ or knowledge base answers, is difficult. Such content is often dense and unstructured, making accurate intent modeling time-consuming and prone to error. Dialogflow's Knowledge Connectors feature simplifies the development process by understanding and automatically curating questions and responses from the content you provide. It can add thousands of extracted responses directly to your conversational Action built with Dialogflow, giving you more time for the fun parts – building rich and engaging user experiences.

Try out Knowledge Connectors in this bike shop sample

Understand user texts better with Automatic Spelling Correction

When users interact with the Google Assistant through text, it's common and natural to make spelling and grammar mistakes. When mistypes occur, Actions may not understand the user's intent, resulting in a poor follow-up experience. With Dialogflow's Automatic Spelling Correction, Actions built with Dialogflow can automatically correct spelling mistakes, which significantly improves intent and entity matching. Automatic Spelling Correction uses similar technology to what's used in Google Search and other Google products.

Enable Automatic Spelling Correction to improve intent and entity matching
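
Spelling correction itself is a setting you turn on for the agent rather than something you call in code, but for context, here's a minimal sketch of sending a typed user query to a Dialogflow agent with the Dialogflow V2 Python client library. The project ID, session ID, and query text are placeholder assumptions, and authentication is assumed to come from a service account key referenced by GOOGLE_APPLICATION_CREDENTIALS:

# Assumes the `dialogflow` PyPI package (Dialogflow V2 client library).
import dialogflow_v2 as dialogflow

# Illustrative placeholders for the agent's GCP project and a conversation session.
PROJECT_ID = 'my-dialogflow-project'
SESSION_ID = 'example-session-123'

session_client = dialogflow.SessionsClient()
session = session_client.session_path(PROJECT_ID, SESSION_ID)

# A typed query with a deliberate misspelling ("bicycel") that spelling
# correction can help match to the right intent.
text_input = dialogflow.types.TextInput(
    text='book a bicycel repair', language_code='en-US')
query_input = dialogflow.types.QueryInput(text=text_input)

response = session_client.detect_intent(session=session, query_input=query_input)
print('Matched intent:', response.query_result.intent.display_name)
print('Response:', response.query_result.fulfillment_text)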

Assign a phone number to your Action with Phone Gateway Beta

Your Action can now be used as a virtual phone agent with Dialogflow's new Phone Gateway integration. Assign a working phone number to your Action built with Dialogflow, and it can start taking calls immediately. Phone Gateway allows you to easily implement virtual agents without needing to stitch together multiple services required for building phone applications.

Set up Phone Gateway in 3 easy steps

Dialogflow's Knowledge Connectors, Automatic Spelling Correction, and Phone Gateway are free for Standard Edition agents up to certain limits; for enterprise needs, see here for more options.

We look forward to the Actions you'll build with these new Dialogflow features. Give the features a try with the Cloud Next FAQ Action we made:

  • Download the Github sample
  • Say "Hey Google, talk to Next helper" on your Google Assistant-enabled device
  • Call +1 317-978-0364 (which uses Dialogflow's Phone Gateway)

And if you're new to developing for the Google Assistant, join our Cloud Next talk this Thursday at 9am – see you on the livestream or in person!

 

20th July 2018 |

10 must-see G Suite developer sessions at Google Cloud Next ‘18

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Google Cloud Next '18 is only a few days away, and this year, there are over 500 sessions covering all aspects of cloud computing, from G Suite to the Google Cloud Platform. This is your chance to learn first-hand how to build custom solutions in G Suite alongside other developers from Independent Software Vendors (ISVs), systems integrators (SIs), and industry enterprises.

G Suite's intelligent productivity apps are secure, smart, and simple to use, so why not integrate your apps with them? If you're planning to attend the event and are wondering which sessions you should check out, here are some sessions to consider:

  • "Power Your Apps with Gmail, Google Drive, Calendar, Sheets, Slides, and More!" on Tuesday, July 24th. Join me as I lead this session that provides a high-level technical overview of the various ways you can build with G Suite. This is a great place to start before attending deeper technical sessions.
  • "Power your apps with Gmail, Google Drive, Calendar, Sheets, Slides and more" on Monday, July 23rd and Friday, July 27th. Join me for one of our half-day bootcamps! Both are identical and bookend the conference—one on Monday and another on Friday, meaning you can do either one and still make it to all the other conference sessions. While named the same as the technical overview above, the bootcamps dive a bit deeper and feature more detailed tech talks on Google Apps Script, the G Suite REST APIs, and App Maker. The three (or more!) hands-on codelabs will leave you with working code that you can start customizing for your own apps on the job! Register today to ensure you get a seat.
  • "Automating G Suite: Apps Script & Sheets Macro Recorder" and "Enhancing the Google Apps Script Developer Experience" both on Tuesday, July 24th. Interested in Google Apps Script, our customized serverless JavaScript runtime used to automate, integrate, and extend G Suite? The first session introduces developers and ITDMs to new features as well as real business use cases while the other dives into recent features that make Apps Script more friendly for the professional developer.
  • "G Suite + GCP: Building Serverless Applications with All of Google Cloud" on Wednesday, July 25th. This session is your chance to attend one of the few hybrid talks that look at how to you can build applications on both the GCP and G Suite platforms. Learn about serverless—a topic that's become more and more popular over the past year—and see examples on both platforms with a pair of demos that showcase how you can take advantage of GCP tools from a G Suite serverless app, and how you can process G Suite data driven by GCP serverless functions. I'm also leading this session and eager to show how you can leverage the strengths of each platform together in the same applications.
  • "Build apps your business needs, with App Maker" and "How to Build Enterprise Workflows with App Maker" on Tuesday, July 24th and Thursday, July 26th, respectively. Google App Maker is a new low-code, development environment that makes it easy to build custom apps for work. It's great for business analysts, technical managers, or data scientists who may not have software engineering resources. With a drag & drop UI, built-in templates, and point-and-click data modeling, App Maker lets you go from idea to app in minutes! Learn all about it with our pair of App Maker talks featuring our Developer Advocate, Chris Schalk.
  • "The Google Docs, Sheets & Slides Ecosystem: Stronger than ever, and growing" and "Building on the Docs Editors: APIs and Apps Script" on Wednesday, July 25th and Thursday, July 26th, respectively. Check out these pair of talks to learn more about how to write apps that integrate with the Google Docs editors (Docs, Sheets, Slides, Forms). The first describes the G Suite productivity tools' growing interoperability in the enterprise with while the second focuses on the different integration options available to developers, either using Google Apps Script or the REST APIs.
  • "Get Productive with Gmail Add-ons" on Tuesday, July 24th. We launched Gmail Add-ons less than a year ago to help developers integrate their apps alongside Gmail. Check out this video I made to help you get up-to-speed on Gmail Add-ons! This session is for developers either new to Gmail Add-ons or want to hear the latest from the Gmail Add-ons and API team.

I look forward to meeting you in person at Next '18. In the meantime, check out the entire session schedule to find out everything it has to offer. Don't forget to swing by our "Meet the Experts" office hours (Tue-Thu), G Suite "Collaboration & Productivity" showcase demos (Tue-Thu), the G Suite Birds-of-a-Feather meetup (Wed), and the Google Apps Script & G Suite Add-ons meetup (just after the BoF on Wed). I'm excited at how we can use "all the tech" to change the world. See you soon!

 

20th July 2018 |

DevFest 2018 Kickoff!
Posted by Erica Hanson, Program Manager in Developer Relations

Google Developers is proud to announce DevFest 2018, the largest annual community event series for the Google Developer Groups (GDG) program. Hundreds of GDG chapters around the world will host their biggest and most exciting developer event of the year. These are often all-day or multi-day events with many speakers and workshops, highlighting a wide range of Google developer products. DevFest season runs from August to November 2018.

Our GDG organizers and communities are getting ready for the season, and are excited to host an event near you!

Whether you are an established developer, new to tech, or just curious about the community - come and check out #DevFest18. Everyone is invited!

For more information on DevFest 2018 and to find an event near you, visit the site.

 

30th July 2018 |

Hangouts Chat alerts & notifications... with asynchronous messages

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

While most chatbots respond to user requests in a synchronous way, there are scenarios when bots don't perform actions based on an explicit user request, such as for alerts or notifications. In today's DevByte video, I'm going to show you how to send messages asynchronously to rooms or direct messages (DMs) in Hangouts Chat, the team collaboration and communication tool in G Suite.

What comes to mind when you think of a bot in a chat room? Perhaps a user wants the last quarter's European sales numbers, or maybe, they want to look up local weather or the next movie showtime. Assuming there's a bot for whatever the request is, a user will either send a direct message (DM) to that bot or @mention the bot from within a chat room. The bot then fields the request (sent to it by the Hangouts Chat service), performs any necessary magic, and responds back to the user in that "space," the generic nomenclature for a room or DM.

Our previous DevByte video for the Hangouts Chat bot framework shows developers what bots and the framework are all about as well as how to build one of these types of bots, in both Python and JavaScript. However, recognize that these bots are responding synchronously to a user request. This doesn't suffice when users want to be notified when a long-running background job has completed, when a late bus or train will be arriving soon, or when one of their servers has just gone down. Recognize that such alerts can come not only from a bot but also from a monitoring application. In the latest episode of the G Suite Dev Show, learn how to integrate this functionality in either type of application.

From the video, you can see that alerts and notifications are "out-of-band" messages, meaning they can come in at any time. The Hangouts Chat bot framework provides several ways to send asynchronous messages to a room or DM, generically referred to as a "space." The first is the HTTP-based REST API. The other way is using what are known as "incoming webhooks."

The REST API is used by bots to send messages into a space. Since a bot will never be a human user, a Google service account is required. Once you create a service account for your Hangouts Chat bot in the developers console, you can download the credentials it needs to communicate with the API. Below is a short Python sample snippet that uses the API to send a message asynchronously to a space.

from apiclient import discovery
from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials

# Authorize as the bot's service account and build the Hangouts Chat API client.
SCOPES = 'https://www.googleapis.com/auth/chat.bot'
creds = ServiceAccountCredentials.from_json_keyfile_name(
        'svc_acct.json', SCOPES)
CHAT = discovery.build('chat', 'v1', http=creds.authorize(Http()))

# Post a plain-text message to the target space (a room or DM).
room = 'spaces/<ROOM-or-DM>'
message = {'text': 'Hello world!'}
CHAT.spaces().messages().create(parent=room, body=message).execute()

The alternative to using the API with service accounts is the concept of incoming webhooks. Webhooks are a quick and easy way to send messages into any room or DM without configuring a full bot, which makes them a good fit for monitoring apps. Webhooks also allow you to integrate your custom workflows, such as posting a message when a new customer is added to the corporate CRM (customer relationship management system), as well as the other scenarios mentioned above. Below is a Python snippet that uses an incoming webhook to communicate into a space asynchronously.

import requests
import json

# The incoming webhook URL generated for the target room or DM.
URL = 'https://chat.googleapis.com/...&thread_key=T12345'

# POST the JSON payload to post the message into the space asynchronously.
message = {'text': 'Hello world!'}
requests.post(URL, data=json.dumps(message))

Since incoming webhooks are merely endpoints you HTTP POST to, you can even use curl to send a message to a Hangouts Chat space from the command-line:

curl \
-X POST \
-H 'Content-Type: application/json' \
'https://chat.googleapis.com/...&thread_key=T12345' \
-d '{"text": "Hello!"}'

To get started, take a look at the Hangouts Chat developer documentation, especially the specific pages linked to above. We hope this video helps you take your bot development skills to the next level by showing you how to send messages to the Hangouts Chat service asynchronously.

 

27th June 2018 |

Launching the Indie Games Accelerator in Asia - helping gaming startups find success on Google Play

Posted by Anuj Gulati, Developer Marketing Manager, Google Play and Sami Kizilbash, Developer Relations Program Manager, Google

Emerging markets now account for more than 40% of game installs on Google Play. Rapid smartphone adoption in these regions presents a new base of engaged gamers that are looking for high quality mobile gaming experiences. At Google Play, we are focused on helping local game developers from these markets achieve their full potential and make the most of this opportunity.

Indie Games Accelerator is a new initiative to support top indie game startups from India, Indonesia, Malaysia, Pakistan, Philippines, Singapore, Thailand and Vietnam who are looking to supercharge their growth on Android. This four-month program is a special edition of Launchpad Accelerator, designed in close collaboration with Google Play, featuring a comprehensive gaming curriculum and mentorship from top mobile gaming experts.

Successful participants will be invited to attend two all-expense-paid gaming bootcamps at the Google Asia-Pacific office in Singapore, where they will receive personalized mentorship from Google teams and industry experts. Additional benefits include Google Cloud Platform credits, invites to exclusive Google and industry events, and more.

Visit the program website to find out more and apply now.

 

21st June 2018 |

Flutter Release Preview 1: Live from GMTC in Beijing

Posted by the Flutter Team at Google

Today at the GMTC front-end conference in Beijing, we announced Flutter Release Preview 1, signaling a new phase of development for Flutter as we move into the final stages of stabilization for 1.0.

Google I/O last month was something of a celebration for the Flutter team: having reached beta, it was good to meet with many developers who are learning, prototyping, or building with Flutter. In the immediate aftermath of Google I/O, we continue to see rapid growth in the Flutter ecosystem, with a 50% increase in active Flutter users. We've also seen over 150 individual Flutter events taking place across fifty countries: from New York City to Uyo, Nigeria; from Tokyo and Osaka in Japan to Nuremberg, Germany.

One common measure of community momentum is the number of GitHub stars, and we've also seen tremendous growth here, with Flutter becoming one of the top 100 software repos on GitHub in May.

Announcing Flutter Release Preview 1

Today we're taking another big step forward, with the immediate availability of Flutter Release Preview 1. It seems particularly auspicious to make this announcement in Beijing at the GMTC Global Front-End Conference. China has the third largest population of developers using Flutter, after the USA and India. Companies such as Alibaba and Tencent are already adopting Flutter for production apps, and there is a growing local community who are translating content and adding packages and mirrors for Chinese developers.

The shift from beta to release preview signals our confidence in Flutter's stability and quality, and a focus on bug fixing and stabilization as we head toward 1.0.

We've posted a longer article with details on what's new in Flutter Release Preview 1 over at our Medium channel. You can download Flutter Release Preview 1 directly from the Flutter website, or simply run flutter upgrade from an existing installation.

It's been fun to watch others encounter Flutter for the first time. This article, from an iOS developer who recently finished porting an iOS app to Flutter, is a positive endorsement of the project's readiness for real-world production use:

"I haven't been this excited about a technology since Ruby on Rails or Go… After dedicating years to learning iOS app dev in-depth, it killed me that I was alienating so many Android friends out there. Also, learning other cross platform frameworks at the time was super unattractive to me because of what was available… Writing a Flutter app has been a litmus test and Flutter passed the test. Flutter is something I feel like I can really invest in and most importantly, really enjoy using."

As we get ever closer to publishing our first release from the "stable" channel, we're ready for more developers to build and deploy solutions that use this Release Preview. There are plenty of training offerings to help you learn Flutter: from I/O sessions to newsletters to hands-on videos to developer shows. We're excited to see what you build!

 

5th June 2018 |

Google Developers Agency Program | 2018

Posted by Amit Chopra & Maggie Hohlfeld

Google Developers Agency Program | Awards

Two years, 225+ agencies, and 36 countries later, the Google Developers Agency Program has grown from a simple effort to connect with development agencies working on mobile apps into a global, exclusive program that recognizes and trains the best software agencies in the world.

The program's mission remains simple: identify, upskill, and promote top development agencies. It provides agencies with access to local events, hangouts, dedicated content, priority support from product and developer relations teams, and upcoming developer products.

Google Developers Agency Program | Logo

As a way to identify and promote top agencies that demonstrate excellence in Android development within the program, we first announced the "Certified Agency" Program at Google I/O in May 2015. Certification has now become the gold standard for Android development agencies and has helped push the agency ecosystem to improve as a whole.

Today we are pleased to share that we have now reached 50 Certified agencies from 15 different countries in the program.

Google Developers Agency Program | Award Night

We celebrated our newest class of Certified agencies at Google I/O, where it all began, and can't wait to see how much the program will have grown by this time next year.

Learn more on the Google Developers Agency Program page.

 

31st May 2018 |

Innovate with Google at the 2018 China-US Young Maker Competition!

Posted by Aimin Zhu, University Relations Manager, Google China

Following the announcement of the 2018 China-U.S. Young Maker Competition, we are very excited that there are already over 1,000 participants, with over a month left before the final submission deadline! Project submissions are open to all makers, developers, and students aged 18-40 in the United States. Check out the projects others are developing on the project submissions page.

Participants may choose to develop their projects using any platform, and makers and students in the US are encouraged to consider the many Google technologies and platforms available for building innovative solutions.

The project submission deadline is June 22, so there is still plenty of time to join the competition! If you have additional questions about the competition or the project submission process, please visit the contest FAQ.

The top 10 projects selected by the judges will win an all-expenses-paid trip to Beijing, China, to join the finals with Chinese makers on August 13-17. We look forward to meeting you at the final event!

For more details, please see the US divisional contest landing page hosted by Hackster.io.

 

30th May 2018 |

Creating AR Experiences for I/O: Our Process

Posted by Karin Levi, Product Marketing, ARCore

A few weeks ago at Google I/O, we released a major update to ARCore, Google's AR development platform. We added new APIs such as Cloud Anchors, which enable multi-user, collaborative AR experiences, and Augmented Images, which lets an app turn 2D images into triggers for 3D content. These updates will change the way we use AR today and enable developers to create richer, more immersive AR apps.

With these new capabilities, we decided to put our platform to the test, so we built real experiences that showcase how they come to life. All of the demos were presented in the I/O AR & VR sandbox area, and we open sourced them so you can see how simple it is to build these experiences. We're pretty happy with how they turned out and would love to share some learnings and insights from behind the scenes.

Light Board - Multiplayer game

Light Board is an AR multiplayer tabletop game where two players on floating game boards launch colored projectiles at each other.

While building Light Board, it was important for us to keep the end users in mind. We wanted it to be a simple, fun game for developers to try out while visiting the I/O sandbox. Visitors would only have a couple of minutes to play while passing through, so the game needed to let players (even non-gamers) pick it up and play with very little setup.

The artwork for Light Board was a major focus. Our mission for the look of the game was to align with the design and decor of I/O 2018, so that our app would feel like an extension of everything attendees saw around them. As a result, our design philosophy had three goals: bright accent colors, simple graphic shapes, and natural physical materials.

Left: Design for AR/VR Sandbox at I/O 2018. Right: Key art for Light Board game boards

The artwork was created in Maya and Cinema 4D. We created physically based materials for our models using Substance Painter. Just as continuous iteration is crucial for engineering, it is also important when creating art assets. With that in mind, we kept careful track of our content pipeline, even for this relatively simple project. This allowed us to quickly try out different looks and board styles before settling on our final design.

On the engineering front, we selected the Unity game engine as our development environment. Unity gave us a couple of important advantages. First, it is easy to get great-looking 3D graphics up and running right away. Second, the engine component is already complete, so we could immediately start iterating on gameplay code. As with the artwork, this allowed us to test gameplay options before making a final decision. Additionally, Unity gave us support for both Android and iOS with only a little extra work.

To handle the multiplayer aspect, we used the Firebase Realtime Database. We were concerned about network performance at the event and felt that the persistent nature of a database would make the game more tolerant of poor networks. As it turned out, it worked very well, and we got the ability to quit and rejoin games for free!
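The demo itself is written in C# inside Unity, but the underlying idea is easy to sketch. Below is a minimal, hypothetical illustration using the Firebase Admin SDK for Python; the database URL, paths, and field names are made up and are not the real game's state shape:

import firebase_admin
from firebase_admin import credentials, db

# Initialize the Admin SDK against a (hypothetical) Realtime Database instance.
cred = credentials.Certificate('service-account.json')
firebase_admin.initialize_app(cred, {'databaseURL': 'https://example-game.firebaseio.com'})

game = db.reference('games/demo-match')

# Write the initial game state once; because it is persisted in the database,
# a player who quits and rejoins simply reads it back instead of re-syncing peers.
game.set({'players': {'p1': {'score': 0}, 'p2': {'score': 0}}})

# Each shot is appended as a child node...
game.child('players/p1/shots').push({'angle': 42, 'power': 0.7})

# ...and the other client listens for changes to animate them.
def on_change(event):
    print(event.path, event.data)

game.listen(on_change)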

We had a lot of fun building Light Board and we hope people can use it as an example of how easy it can be to not only build AR apps, but to use really cool features like Cloud Anchors. Please check out our open source repo and give Light Board a try!

Just a line - Draw with your friends

In March, we released Just a Line, an Android app that lets you draw in the air with your phone. It's a simple experiment meant to showcase the power of ARCore. At Google I/O, we added Cloud Anchors to the app so that two people can draw at once in the same space, even if one of them is using Android and the other iOS.

Both apps were built natively: the Android version was written in Android Studio, and the iOS version in Xcode. ARCore's Cloud Anchors enable Just a Line to pair two phones, allowing users to draw simultaneously in a shared space. Pairing works across Android and iOS devices, and drawings are synchronized live through a Firebase Realtime Database. You can find the open-source code for iOS here and for Android here.

Illusive Images - Art exhibition comes to life

"Illusive Images" demo is an augmented gallery consisting of 3 artworks, each exploring a different augmented image use case and user experience. As one walks from side to side, around the object, or gazes in a specific direction, 2D artworks are married with 3D, inviting the viewer to enter into the space of the artwork spanning well beyond the physical frame.

Because our augmented images were driven by their visual design, we experimented a lot with image databases whose entries had varying numbers of trackable features. To get the best results we iterated quickly, resizing the artwork canvas and shifting and stretching its brightness and contrast levels. These variations helped us arrive at the best-tracking image without compromising the design intent.
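Iterating on variants like these is easy to script. As a rough illustration of the workflow (not our exact tooling), a small Pillow script can batch-generate candidates that you then score for trackability, for example with the arcoreimg tool that ships with the ARCore SDK. The file names and parameter ranges below are placeholders:

import os
from itertools import product
from PIL import Image, ImageEnhance

# Generate size/brightness/contrast variants of a candidate artwork so each
# one can be evaluated for trackability before settling on a final design.
SCALES = [0.5, 0.75, 1.0]
BRIGHTNESS = [0.8, 1.0, 1.2]
CONTRAST = [0.8, 1.0, 1.3]

os.makedirs('variants', exist_ok=True)
original = Image.open('artwork.png')

for i, (scale, brightness, contrast) in enumerate(product(SCALES, BRIGHTNESS, CONTRAST)):
    size = (int(original.width * scale), int(original.height * scale))
    variant = original.resize(size)
    variant = ImageEnhance.Brightness(variant).enhance(brightness)
    variant = ImageEnhance.Contrast(variant).enhance(contrast)
    variant.save('variants/artwork_{:02d}.png'.format(i))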

The app was built in Unity with ARCore, and the majority of the assets were created in Cinema 4D. Mograph animations were imported into Unity as FBX files and driven entirely by the position of the user in relation to the artwork. An example project can be found here.

To make your development experience easier, we open sourced all the demos our team built. We hope you find this useful! You can also visit our website to learn more and start building AR experiences today.

 

25th May 2018 |

Introducing the Data Studio Community Connector Codelab
Posted by Minhaz Kazi, Developer Advocate, Google Data Studio

Data Studio is Google's free, next-generation business intelligence and data visualization platform. Community Connectors for Data Studio let you build connectors to any internet-accessible data source using Google Apps Script. You can build Community Connectors for commercial, enterprise, and personal use. Learn how to build Community Connectors using the Data Studio Community Connector Codelab.

Use the Community Connector Codelab

The Community Connector Codelab explains how Community Connectors work and provides a step-by-step tutorial for creating your first Community Connector. You can get started if you have a basic understanding of JavaScript and web APIs, and you should be able to build your first connector in about 30 minutes using the Codelab.

If you have previously imported data into Google Sheets using Apps Script, you can use this Codelab to get familiar with Community Connectors and quickly port your code to fetch data directly into Data Studio.

Why create your own Community Connector

Community Connectors can help you quickly deliver an end-to-end visualization solution that is user-friendly and delivers high user value with low development effort. You can build reporting solutions for personal, public, enterprise, or commercial data, and also create explanatory visualizations.

  • If you provide a web-based service to customers, you can create template dashboards or even let your users build their own visualizations on top of their data from your service.
  • Within an enterprise, you can create serverless and highly scalable reporting solutions where you have complete control over your data and sharing features.
  • You can create an aggregate view of all your metrics across different commercial platforms and service providers while providing drill down capabilities.
  • You can create connectors to public and open datasets. Sharing these connectors will enable other users to quickly gain access to these datasets and dive into analysis directly without writing any code.

By building a Community Connector, you can go from scratch to a push button customized dashboard solution for your service in a matter of hours.

The following dashboard uses Community Connectors to fetch data from Stack Overflow, GitHub, and Twitter. Try using the date filter to view changes across all sources:

You can build your own connector to any preferred service and publish it in the Community Connector gallery. The Community Connector gallery now has over 90 Partner Connectors connecting to more than 450 data sources.

Once you have completed the Codelab, view the Community Connector documentation and sample code on the Data Studio open source repository to build your own connector.

 

29th May 2018 |

Let's hit the road! Join Google Developers Community Roadshow

Posted by Przemek Pardel, Developer Relations Program Manager, Regional Lead

This summer, the Google Developers team is touring 10 countries and 14 cities in Europe in a colorful community bus. We'll be visiting university campuses and technology parks to meet you locally and talk about our programs for developers and start-ups.

Join us to find out how Google supports developer communities. Learn about Google Developer Groups, the Women Techmakers program, and the various ways we engage with the broader developer community in Europe and around the world.

Our bus will stop at the following locations between 12:00 and 4:00 pm:

  • 4th June, Estonia, Tallinn
  • 6th June, Latvia, Riga
  • 8th June, Lithuania, Vilnius
  • 11th June, Poland, Gdańsk
  • 13th June, Poland, Poznań
  • 15th June, Poland, Kraków
  • 18th June, Slovenia, Ljubljana
  • 19th June, Croatia, Zagreb
  • 21st June, Bulgaria, Sofia

Want to meet us on the way? Sign up for the event in your city here.

What to expect:

  • Information: learn more about how Google supports developer communities around the world, from content and speakers to a global network
  • Networking: meet other community organizers from your city
  • Workshops: join some of our product workshops on tour (Actions on Google, Google Cloud, Machine Learning), and meet with Google teams
  • Fun: live music, games and more!

Are you interested in starting a new developer community or are you an organizer who would like to join the global Google Community Program? Let us know and receive an invitation-only pass to our private events.