Google Developers Blog

 

20th July 2018 |

10 must-see G Suite developer sessions at Google Cloud Next ‘18

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Google Cloud Next '18 is only a few days away, and this year, there are over 500 sessions covering all aspects of cloud computing, from G Suite to the Google Cloud Platform. This is your chance to learn first-hand how to build custom solutions in G Suite alongside other developers from Independent Software Vendors (ISVs), systems integrators (SIs), and industry enterprises.

G Suite's intelligent productivity apps are secure, smart, and simple to use, so why not integrate your apps with them? If you're planning to attend the event and are wondering which sessions you should check out, here are some sessions to consider:

  • "Power Your Apps with Gmail, Google Drive, Calendar, Sheets, Slides, and More!" on Tuesday, July 24th. Join me as I lead this session that provides a high-level technical overview of the various ways you can build with G Suite. This is a great place to start before attending deeper technical sessions.
  • "Power your apps with Gmail, Google Drive, Calendar, Sheets, Slides and more" on Monday, July 23rd and Friday, July 27th. Join me for one of our half-day bootcamps! Both are identical and bookend the conference—one on Monday and another on Friday, meaning you can do either one and still make it to all the other conference sessions. While named the same as the technical overview above, the bootcamps dive a bit deeper and feature more detailed tech talks on Google Apps Script, the G Suite REST APIs, and App Maker. The three (or more!) hands-on codelabs will leave you with working code that you can start customizing for your own apps on the job! Register today to ensure you get a seat.
  • "Automating G Suite: Apps Script & Sheets Macro Recorder" and "Enhancing the Google Apps Script Developer Experience" both on Tuesday, July 24th. Interested in Google Apps Script, our customized serverless JavaScript runtime used to automate, integrate, and extend G Suite? The first session introduces developers and ITDMs to new features as well as real business use cases while the other dives into recent features that make Apps Script more friendly for the professional developer.
  • "G Suite + GCP: Building Serverless Applications with All of Google Cloud" on Wednesday, July 25th. This session is your chance to attend one of the few hybrid talks that look at how to you can build applications on both the GCP and G Suite platforms. Learn about serverless—a topic that's become more and more popular over the past year—and see examples on both platforms with a pair of demos that showcase how you can take advantage of GCP tools from a G Suite serverless app, and how you can process G Suite data driven by GCP serverless functions. I'm also leading this session and eager to show how you can leverage the strengths of each platform together in the same applications.
  • "Build apps your business needs, with App Maker" and "How to Build Enterprise Workflows with App Maker" on Tuesday, July 24th and Thursday, July 26th, respectively. Google App Maker is a new low-code, development environment that makes it easy to build custom apps for work. It's great for business analysts, technical managers, or data scientists who may not have software engineering resources. With a drag & drop UI, built-in templates, and point-and-click data modeling, App Maker lets you go from idea to app in minutes! Learn all about it with our pair of App Maker talks featuring our Developer Advocate, Chris Schalk.
  • "The Google Docs, Sheets & Slides Ecosystem: Stronger than ever, and growing" and "Building on the Docs Editors: APIs and Apps Script" on Wednesday, July 25th and Thursday, July 26th, respectively. Check out these pair of talks to learn more about how to write apps that integrate with the Google Docs editors (Docs, Sheets, Slides, Forms). The first describes the G Suite productivity tools' growing interoperability in the enterprise with while the second focuses on the different integration options available to developers, either using Google Apps Script or the REST APIs.
  • "Get Productive with Gmail Add-ons" on Tuesday, July 24th. We launched Gmail Add-ons less than a year ago to help developers integrate their apps alongside Gmail. Check out this video I made to help you get up-to-speed on Gmail Add-ons! This session is for developers either new to Gmail Add-ons or want to hear the latest from the Gmail Add-ons and API team.

I look forward to meeting you in person at Next '18. In the meantime, check out the entire session schedule to find out everything it has to offer. Don't forget to swing by our "Meet the Experts" office hours (Tue-Thu), G Suite "Collaboration & Productivity" showcase demos (Tue-Thu), the G Suite Birds-of-a-Feather meetup (Wed), and the Google Apps Script & G Suite Add-ons meetup (just after the BoF on Wed). I'm excited at how we can use "all the tech" to change the world. See you soon!

 

20th July 2018 |

DevFest 2018 Kickoff!
Posted by Erica Hanson, Program Manager in Developer Relations

Google Developers is proud to announce DevFest 2018, the largest annual community event series for the Google Developer Groups (GDG) program. Hundreds of GDG chapters around the world will host their biggest and most exciting developer event of the year. These are often all-day or multi-day events with many speakers and workshops, highlighting a wide range of Google developer products. DevFest season runs from August to November 2018.

Our GDG organizers and communities are getting ready for the season, and are excited to host an event near you!

Whether you are an established developer, new to tech, or just curious about the community - come and check out #DevFest18. Everyone is invited!

For more information on DevFest 2018 and to find an event near you, visit the site.

 

28th June 2018 |

Hangouts Chat alerts & notifications... with asynchronous messages

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

While most chatbots respond to user requests in a synchronous way, there are scenarios when bots don't perform actions based on an explicit user request, such as for alerts or notifications. In today's DevByte video, I'm going to show you how to send messages asynchronously to rooms or direct messages (DMs) in Hangouts Chat, the team collaboration and communication tool in G Suite.

What comes to mind when you think of a bot in a chat room? Perhaps a user wants the last quarter's European sales numbers, or maybe, they want to look up local weather or the next movie showtime. Assuming there's a bot for whatever the request is, a user will either send a direct message (DM) to that bot or @mention the bot from within a chat room. The bot then fields the request (sent to it by the Hangouts Chat service), performs any necessary magic, and responds back to the user in that "space," the generic nomenclature for a room or DM.

Our previous DevByte video for the Hangouts Chat bot framework shows developers what bots and the framework are all about as well as how to build one of these types of bots, in both Python and JavaScript. However, those bots respond synchronously to a user request. This doesn't suffice when users want to be notified when a long-running background job has completed, when a late bus or train will be arriving soon, or when one of their servers has just gone down. Note that such alerts can come from a bot, but also from a monitoring application. In the latest episode of the G Suite Dev Show, learn how to integrate this functionality in either type of application.

From the video, you can see that alerts and notifications are "out-of-band" messages, meaning they can come in at any time. The Hangouts Chat bot framework provides several ways to send asynchronous messages to a room or DM, generically referred to as a "space." The first is the HTTP-based REST API. The other way is using what are known as "incoming webhooks."

The REST API is used by bots to send messages into a space. Since a bot will never be a human user, a Google service account is required. Once you create a service account for your Hangouts Chat bot in the developers console, you can download its credentials needed to communicate with the API. Below is a short Python sample snippet that uses the API to send a message asynchronously to a space.

# Use a service account to authorize as the bot, then build the Chat API client.
from apiclient import discovery
from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials

SCOPES = 'https://www.googleapis.com/auth/chat.bot'
creds = ServiceAccountCredentials.from_json_keyfile_name(
        'svc_acct.json', SCOPES)
CHAT = discovery.build('chat', 'v1', http=creds.authorize(Http()))

# Post a simple text message asynchronously to a room or DM ("space").
room = 'spaces/<ROOM-or-DM>'
message = {'text': 'Hello world!'}
CHAT.spaces().messages().create(parent=room, body=message).execute()

The alternative to using the API with service accounts is the concept of incoming webhooks. Webhooks are a quick and easy way to send messages into any room or DM without configuring a full bot, which makes them a good fit for monitoring apps. Webhooks also allow you to integrate your custom workflows, such as when a new customer is added to the corporate CRM (customer relationship management system), as well as others mentioned above. Below is a Python snippet that uses an incoming webhook to communicate into a space asynchronously.

import json
import requests

# The full webhook URL (including its query parameters) comes from the room's
# webhook settings; thread_key groups messages into a single thread.
URL = 'https://chat.googleapis.com/...&thread_key=T12345'
message = {'text': 'Hello world!'}
requests.post(URL, data=json.dumps(message))

Since incoming webhooks are merely endpoints you HTTP POST to, you can even use curl to send a message to a Hangouts Chat space from the command-line:

curl \
    -X POST \
    -H 'Content-Type: application/json' \
    'https://chat.googleapis.com/...&thread_key=T12345' \
    -d '{"text": "Hello!"}'

To get started, take a look at the Hangouts Chat developer documentation, especially the specific pages linked to above. We hope this video helps you take your bot development skills to the next level by showing you how to send messages to the Hangouts Chat service asynchronously.

 

27th June 2018 |

Launching the Indie Games Accelerator in Asia - helping gaming startups find success on Google Play

Posted by Anuj Gulati, Developer Marketing Manager, Google Play and Sami Kizilbash, Developer Relations Program Manager, Google

Emerging markets now account for more than 40% of game installs on Google Play. Rapid smartphone adoption in these regions presents a new base of engaged gamers that are looking for high quality mobile gaming experiences. At Google Play, we are focused on helping local game developers from these markets achieve their full potential and make the most of this opportunity.

Indie Games Accelerator is a new initiative to support top indie game startups from India, Indonesia, Malaysia, Pakistan, Philippines, Singapore, Thailand and Vietnam who are looking to supercharge their growth on Android. This four-month program is a special edition of Launchpad Accelerator, designed in close collaboration with Google Play, featuring a comprehensive gaming curriculum and mentorship from top mobile gaming experts.

Successful participants will be invited to attend two all-expense-paid gaming bootcamps at the Google Asia-Pacific office in Singapore, where they will receive personalized mentorship from Google teams and industry experts. Additional benefits include Google Cloud Platform credits, invites to exclusive Google and industry events, and more.

Visit the program website to find out more and apply now.

 

21st June 2018 |

Flutter Release Preview 1: Live from GMTC in Beijing

Posted by the Flutter Team at Google

Today at the GMTC front-end conference in Beijing, we announced Flutter Release Preview 1, signaling a new phase of development for Flutter as we move into the final stages of stabilization for 1.0.

Google I/O last month was something of a celebration for the Flutter team: having reached beta, it was good to meet with many developers who are learning, prototyping, or building with Flutter. Since Google I/O, we've continued to see rapid growth in the Flutter ecosystem, with a 50% increase in active Flutter users. We've also seen over 150 individual Flutter events taking place across fifty countries: from New York City to Uyo, Nigeria; from Tokyo and Osaka in Japan to Nuremberg, Germany.

One common measure of community momentum is the number of GitHub stars, and we've also seen tremendous growth here, with Flutter becoming one of the top 100 software repos on GitHub in May.

Announcing Flutter Release Preview 1

Today we're taking another big step forward, with the immediate availability of Flutter Release Preview 1. It seems particularly auspicious to make this announcement in Beijing at the GMTC Global Front-End Conference. China has the third largest population of developers using Flutter, after the USA and India. Companies such as Alibaba and Tencent are already adopting Flutter for production apps, and there is a growing local community who are translating content and adding packages and mirrors for Chinese developers.

The shift from beta to release preview with this release signals our confidence in the stability and quality of what we have with Flutter, and our focus on bug fixing and stabilization.

We've posted a longer article with details on what's new in Flutter Release Preview 1 over at our Medium channel. You can download Flutter Release Preview 1 directly from the Flutter website, or simply run flutter upgrade from an existing installation.

It's been fun to watch others encounter Flutter for the first time. This article from an iOS developer who has recently completed porting an iOS app to Flutter is a positive endorsement of the project's readiness for real-world production usage:

"I haven't been this excited about a technology since Ruby on Rails or Go… After dedicating years to learning iOS app dev in-depth, it killed me that I was alienating so many Android friends out there. Also, learning other cross platform frameworks at the time was super unattractive to me because of what was available… Writing a Flutter app has been a litmus test and Flutter passed the test. Flutter is something I feel like I can really invest in and most importantly, really enjoy using."

As we get ever closer to publishing our first release from the "stable" channel, we're ready for more developers to build and deploy solutions that use this Release Preview. There are plenty of training offerings to help you learn Flutter: from I/O sessions to newsletters to hands-on videos to developer shows. We're excited to see what you build!

 

5th June 2018 |

Google Developers Agency Program | 2018

Posted by Amit Chopra & Maggie Hohlfeld

Google Developers Agency Program | Awards

2 years, 225+ agencies and 36 countries later, the Google Developers Agency Program has grown from a simple effort to connect with development agencies working on mobile apps to a global, exclusive program that recognizes and trains the best software agencies in the world.

The program's mission remains simple: identify, upskill, and promote top development agencies. It provides agencies with access to local events, hangouts, dedicated content, priority support from product and developer relations teams, and upcoming developer products.

Google Developers Agency Program | Logo

To identify and promote top agencies that demonstrated excellence in Android development within the program, we first announced the "Certified Agency" Program at Google I/O in May 2015. Certification has now become the gold standard for Android development agencies, and has helped push the agency ecosystem to improve as a whole.

Today we are pleased to share that we have now reached 50 Certified agencies from 15 different countries in the program.

Google Developers Agency Program | Award Night

We celebrated our newest class of Certified agencies at Google I/O, where it all began, and can't wait to see how much the program will have grown by this time next year.

Learn more about the program by clicking on Google Developers Agency program.

 

31st May 2018 |

Innovate with Google at the 2018 China-US Young Maker Competition!

Posted by Aimin Zhu, University Relations Manager, Google China

Following the announcement of the 2018 China-U.S. Young Maker Competition, we are very excited that there are already over 1000 participants with over a month left before the final submission deadline! Project submissions are open to all makers, developers, and students age 18-40 in the United States. Check out the projects others are developing on the project submissions page.

Participants may choose to develop their projects using any platform. Makers and students in the US are encouraged to consider the many Google technologies and platforms available to build innovative solutions:

The project submission deadline is June 22, so there is still plenty of time to join the competition! If you have additional questions about the competition or the project submission process, please visit the contest FAQ.

The top 10 projects selected by the judges will win an all-expenses-paid trip to Beijing, China, to join the finals with Chinese makers on August 13-17. We look forward to meeting you at the final event!

For more details, please see the US divisional contest landing page hosted by Hackster.io.

 

30th May 2018 |

Creating AR Experiences for I/O: Our Process

Posted by Karin Levi, Product Marketing, ARCore

A few weeks ago at Google I/O we released a major update to ARCore, Google's AR development platform. We added new APIs like Cloud Anchors, which enable multi-user, collaborative AR experiences, and Augmented Images, which lets you turn 2D images into 3D experiences. All of these updates are going to change the way we use AR today and enable developers to create richer, more immersive AR apps.

With these new capabilities, we decided to put our platform to the test. So we built real experiences to showcase how they all come to life. All demos were presented at the I/O AR & VR sandbox area. We open sourced them to make sure you can see how simple it is to build these experiences. We're pretty happy with how they turned out and would love to share some learnings and insights from behind the scenes.

Light Board - Multiplayer game

Light Board is an AR multiplayer tabletop game where two players on floating game boards launch colored projectiles at each other.

While building Light Board, it was important for us to keep in mind who the end users are. We wanted it to be a simple, fun game for developers to try out while visiting the I/O sandbox. The developers would only have a couple of minutes to play while passing through, so it needed to allow players (even non-gamers) to pick it up and play with very little setup.

The artwork for Light Board was a major focus. Our mission for the look of the game was to align with the design and decor of I/O 2018. This way, our app would feel like an extension of everything the attendees saw around them. As a result, our design philosophy had three goals: bright accent colors, simple graphic shapes, and natural physical materials.

Left: Design for AR/VR Sandbox at I/O 2018. Right: Key art for Light Board game boards

The artwork was created in Maya and Cinema 4D. We created physically based materials for our models using Substance Painter. Just as continuous iteration is crucial for engineering, it is also important when creating art assets. With that in mind, we kept careful track of our content pipeline, even for this relatively simple project. This allowed us to quickly try out different looks and board styles before settling on our final design.

On the engineering front we selected the Unity game engine as our dev environment. Unity gives us a couple of important advantages. First, it is easy to get great looking 3D graphics up and running right away. Second, the engine component is already complete, so we could immediately start iterating on gameplay code. As with the artwork, this allowed us to test gameplay options before we made a final decision. Additionally, Unity gave us support for both Android and iOS with only a little extra work.

To handle the multiplayer aspect we used Firebase Realtime Database. We were concerned with network performance at the event, and felt that the persistent nature of a database would make it more tolerant of poor networks. As it turned out, it worked very well and we got the ability to quit and rejoin games for free!
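For the curious, the underlying pattern is easy to sketch with the Firebase JavaScript SDK (Light Board itself uses Unity's Firebase integration; the paths and field names below are hypothetical): every move is appended under the game's node, and both players subscribe to that node, so a client that drops and reconnects simply replays the stored moves.

// Illustrative multiplayer-state pattern with the Firebase Realtime Database.
// Assumes a Firebase project with development-friendly database rules.
const firebase = require('firebase/app');
require('firebase/database');

firebase.initializeApp({ databaseURL: 'https://<YOUR-PROJECT>.firebaseio.com' });
const game = firebase.database().ref('games/demo-game');

// Publish a local move; push() appends it with a unique, ordered key.
function launchProjectile(player, angle, power) {
  game.child('moves').push({ player, angle, power, t: Date.now() });
}

// Fires once per stored move on (re)connect, then for each new move live,
// which is what makes quitting and rejoining a game "free."
game.child('moves').on('child_added', snapshot => {
  const move = snapshot.val();
  console.log(move.player, 'fired at', move.angle, 'with power', move.power);
});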

We had a lot of fun building Light Board and we hope people can use it as an example of how easy it can be to not only build AR apps, but to use really cool features like Cloud Anchors. Please check out our open source repo and give Light Board a try!

Just a line - Draw with your friends

In March, we released Just a Line, an Android app that lets you draw in the air with your phone. It's a simple experiment meant to showcase the power of ARCore. At Google I/O, we added Cloud Anchors to the app so that two people can draw at once in the same space, even if one of them is using Android and the other iOS.

Both apps were built natively: The Android version was written in Android Studio, and the iOS version was built in Xcode. ARCore's Cloud Anchors enable Just a Line to pair two phones, allowing users to draw simultaneously in a shared space. Pairing works across Android and iOS devices, and drawings are synchronized live through a Firebase Realtime Database. You can find the open-source code for iOS here and for Android here.

Illusive Images - Art exhibition comes to life

"Illusive Images" demo is an augmented gallery consisting of 3 artworks, each exploring a different augmented image use case and user experience. As one walks from side to side, around the object, or gazes in a specific direction, 2D artworks are married with 3D, inviting the viewer to enter into the space of the artwork spanning well beyond the physical frame.

Due to the visual design nature of our augmented images, we experimented a lot with creating databases with varying degrees of features. To get the best results, we iterated quickly by resizing the canvas for the artwork. We also adjusted the brightness and contrast levels. These variations helped us achieve the most optimal image without compromising design intent.

The app was built in Unity with ARCore, with the majority of assets created in Cinema 4D. Mograph animations were imported into Unity as an FBX file, and driven entirely by the position of the user in relation to the artwork. An example project can be found here.

To make your development experience easier, we open sourced all the demos our team built. We hope you find this useful! You can also visit our website to learn more and start building AR experiences today.

 

25th May 2018 |

Introducing the Data Studio Community Connector Codelab
Posted by Minhaz Kazi, Developer Advocate, Google Data Studio

Data Studio is Google's free, next-gen business intelligence and data visualization platform. Community Connectors for Data Studio let you build connectors to any internet-accessible data source using Google Apps Script. You can build Community Connectors for commercial, enterprise, and personal use. Learn how to build Community Connectors using the Data Studio Community Connector Codelab.

Use the Community Connector Codelab

The Community Connector Codelab explains how Community Connectors work and provides a step-by-step tutorial for creating your first Community Connector. You can get started if you have a basic understanding of JavaScript and web APIs. You should be able to build your first connector in 30 minutes using the Codelab.

If you have previously imported data into Google Sheets using Apps Script, you can use this Codelab to get familiar with the Community Connectors and quickly port your code to fetch your data directly into Data Studio.
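For a sense of what you'll build, here's a minimal sketch (our illustration, with a hypothetical endpoint and fields) of the four functions every Community Connector defines in Apps Script; the Codelab walks through a complete, working version:

// Data Studio calls these four functions to drive a Community Connector.

// No authentication needed for this hypothetical public endpoint.
function getAuthType() {
  return { type: 'NONE' };
}

// No user-supplied configuration in this minimal example.
function getConfig(request) {
  return { configParams: [] };
}

// Describe the fields the connector can return.
function getSchema(request) {
  return {
    schema: [
      { name: 'day', label: 'Day', dataType: 'STRING',
        semantics: { conceptType: 'DIMENSION' } },
      { name: 'visits', label: 'Visits', dataType: 'NUMBER',
        semantics: { conceptType: 'METRIC' } },
    ],
  };
}

// Fetch rows and return only the requested fields, in the requested order.
function getData(request) {
  var fullSchema = getSchema(request).schema;
  var requested = request.fields.map(function(field) {
    return fullSchema.filter(function(s) { return s.name === field.name; })[0];
  });
  var records = JSON.parse(UrlFetchApp
      .fetch('https://example.com/api/visits') // hypothetical JSON endpoint
      .getContentText());
  var rows = records.map(function(record) {
    return { values: requested.map(function(s) { return record[s.name]; }) };
  });
  return { schema: requested, rows: rows };
}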

Why create your own Community Connector

Community Connectors can help you quickly deliver an end-to-end visualization solution that is user-friendly and delivers high user value with low development effort. Community Connectors can help you build a reporting solution for personal, public, enterprise, or commercial data, and also create explanatory visualizations.

  • If you provide a web based service to customers, you can create template dashboards or even let your users create their own visualization based on the users' data from your service.
  • Within an enterprise, you can create serverless and highly scalable reporting solutions where you have complete control over your data and sharing features.
  • You can create an aggregate view of all your metrics across different commercial platforms and service providers while providing drill down capabilities.
  • You can create connectors to public and open datasets. Sharing these connectors will enable other users to quickly gain access to these datasets and dive into analysis directly without writing any code.

By building a Community Connector, you can go from scratch to a push-button, customized dashboard solution for your service in a matter of hours.

The following dashboard uses Community Connectors to fetch data from Stack Overflow, GitHub, and Twitter. Try using the date filter to view changes across all sources:

This dashboard uses the following Community Connectors:

You can build your own connector to any preferred service and publish it in the Community Connector gallery. The Community Connector gallery now has over 90 Partner Connectors connecting to more than 450 data sources.

Once you have completed the Codelab, view the Community Connector documentation and sample code on the Data Studio open source repository to build your own connector.

 

29th May 2018 |

Let's hit the road! Join Google Developers Community Roadshow

Posted by Przemek Pardel, Developer Relations Program Manager, Regional Lead

This summer, the Google Developers team is touring 10 countries and 14 cities in Europe in a colorful community bus. We'll be visiting university campuses and technology parks to meet you locally and talk about our programs for developers and start-ups.

Join us to find out how Google supports developer communities. Learn about Google Developer Groups, the Women Techmakers program, and the various ways we engage with the broader developer community in Europe and around the world.

Our bus will stop in the following locations between 12:00 and 4:00 pm:

  • 4th June, Estonia, Tallinn
  • 6th June, Latvia, Riga
  • 8th June, Lithuania, Vilnius
  • 11th June, Poland, Gdańsk
  • 13th June, Poland, Poznań
  • 15th June, Poland, Kraków
  • 18th June, Slovenia, Ljubljana
  • 19th June, Croatia, Zagreb
  • 21st June, Bulgaria, Sofia

Want to meet us on the way? Sign up for the event in your city here.

What to expect:

  • Information: learn more about how Google supports developer communities around the world, from content and speakers to a global network
  • Networking: meet other community organizers from your city
  • Workshops: join some of our product workshops on tour (Actions on Google, Google Cloud, Machine Learning), and meet with Google teams
  • Fun: live music, games and more!

Are you interested in starting a new developer community or are you an organizer who would like to join the global Google Community Program? Let us know and receive an invitation-only pass to our private events.

 

22nd May 2018 |

Web Notifications API Support Now Available in FCM Send v1 API

Posted by Mertcan Mermerkaya, Software Engineer

We have great news for web developers who use Firebase Cloud Messaging to send notifications to clients! The FCM v1 REST API is now fully integrated with the Web Notifications API. This integration allows you to set icons, images, actions, and more for your Web notifications from your server! Better yet, as the Web Notifications API continues to grow and change, these options will be immediately available to you. You won't have to wait for an update to FCM to support them!

Below is a sample payload you can send to your web clients on Push API supported browsers. This notification would be useful for a web app that supports image posting. It can encourage users to engage with the app.

{
  "message": {
    "webpush": {
      "notification": {
        "title": "Fish Photos 🐟",
        "body": "Thanks for signing up for Fish Photos! You now will receive fun daily photos of fish!",
        "icon": "firebase-logo.png",
        "image": "guppies.jpg",
        "data": {
          "notificationType": "fishPhoto",
          "photoId": "123456"
        },
        "click_action": "https://example.com/fish_photos",
        "actions": [
          {
            "title": "Like",
            "action": "like",
            "icon": "icons/heart.png"
          },
          {
            "title": "Unsubscribe",
            "action": "unsubscribe",
            "icon": "icons/cross.png"
          }
        ]
      }
    },
    "token": "<APP_INSTANCE_REGISTRATION_TOKEN>"
  }
}

Notice that you are able to set new parameters, such as actions, which gives the user different ways to interact with the notification. In the example above, users have the option to choose from actions to like the photo or to unsubscribe.
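If your server runs Node.js, the Firebase Admin SDK (which calls the FCM v1 Send API under the hood) can deliver an equivalent message without hand-building the HTTP request. Here's a minimal sketch, assuming the firebase-admin package is installed and a service account key file sits at the hypothetical path ./service-account.json:

// Minimal server-side sketch: sending the payload above via firebase-admin.
const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.cert(require('./service-account.json')),
});

const message = {
  webpush: {
    notification: {
      title: 'Fish Photos 🐟',
      body: 'Thanks for signing up for Fish Photos!',
      icon: 'firebase-logo.png',
      // image, data, and click_action from the payload above can be set here too
      actions: [
        { title: 'Like', action: 'like', icon: 'icons/heart.png' },
        { title: 'Unsubscribe', action: 'unsubscribe', icon: 'icons/cross.png' },
      ],
    },
  },
  token: '<APP_INSTANCE_REGISTRATION_TOKEN>',
};

// send() resolves to a message ID string on success.
admin.messaging().send(message)
  .then(id => console.log('Sent:', id))
  .catch(console.error);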

To handle action clicks in your app, you need to add an event listener in the default firebase-messaging-sw.js file (or your custom service worker). If an action button was clicked, event.action will contain the string that identifies the clicked action. Here's how to handle the "like" and "unsubscribe" events on the client:

// Retrieve an instance of Firebase Messaging so that it can handle background messages.
const messaging = firebase.messaging();

// Add an event listener to handle notification clicks
self.addEventListener('notificationclick', function(event) {
  if (event.action === 'like') {
    // Like button was clicked
    const photoId = event.notification.data.photoId;
    like(photoId);
  } else if (event.action === 'unsubscribe') {
    // Unsubscribe button was clicked
    const notificationType = event.notification.data.notificationType;
    unsubscribe(notificationType);
  }

  event.notification.close();
});

The SDK will still handle regular notification clicks and redirect the user to your click_action link if provided. To see more on how to handle click actions on the client, check out the guide.

Since different browsers support different parameters on different platforms, it's important to check out the browser compatibility documentation to ensure your notifications work as intended. Want to learn more about what the Send API can do? Check out the FCM Send API documentation and the Web Notifications API documentation. If you're using the FCM Send API and you incorporate the Web Notifications API in a cool way, then let us know! Find Firebase on Twitter at @Firebase, and on Facebook and Google+ by searching "Firebase".

 

17th May 2018 |

Start making your business more accessible using Primer
Posted by Lisa Gevelber, VP Marketing Ads and Americas

Over one billion people in the world have some form of disability.

That's why we make accessibility a core consideration when we develop new products—from concept to launch and beyond. It's good for users and good for business: Building products that don't consider a diverse range of needs could mean missing a substantial group of potential users and customers.

But impairments and disabilities are as varied as people themselves. For designers, developers, marketers or small business owners, making your products and designs more accessible might seem like a daunting task. How can you make sure you're being more inclusive? Where do you start?

Today, Global Accessibility Awareness Day, we're launching a new suite of resources to help creators, marketers, and designers answer those questions and build more inclusive products and designs.

The first step is learning about accessibility. Simply start by downloading the Google Primer app and search "accessibility." You'll find five-minute lessons that help you better understand accessibility, and learn practical tips to start making your own business, products and designs more accessible, like key design principles for building a more accessible website. You may even discover that addressing accessibility issues can improve the user experience for everyone. For instance, closed captions can make your videos accessible to more people whether they have a hearing impairment or are sitting in a crowded room.

Next, visit the Google Accessibility page and discover free tools that can help you make your site or app more accessible for more people. The Android Developers site also contains a wide range of suggestions to help you improve the accessibility of your app.

We hope these resources will help you join us in designing and building for a more inclusive future. After all, an accessible web and world is a better one—both for people and for business.

"Excited to see the new lessons on accessibility that Primer launched today. They help us learn how to start making websites and products more accessible. With over 1 billion people in the world with some form of disability, building a more inclusive web is the right thing to do both for people and for business."

- Ari Balogh, VP Engineering

 

15th May 2018 |

Developing bots for Hangouts Chat

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

We recently introduced Hangouts Chat to general availability. This next-generation messaging platform gives G Suite users a new place to communicate and to collaborate in teams. It features archive & search, tighter G Suite integration, and the ability to create separate, threaded chat rooms. The key new feature for developers is a bot framework and API. Whether it's to automate common tasks, query for information, or perform other heavy-lifting, bots can really transform the way we work.

In addition to plain text replies, Hangouts Chat can also display bot responses with richer user interfaces (UIs) called cards which can render header information, structured data, images, links, buttons, etc. Furthermore, users can interact with these components, potentially updating the displayed information. In this latest episode of the G Suite Dev Show, developers learn how to create a bot that features an updating interactive card.

As you can see in the video, the most important thing when bots receive a message is to determine the event type and take the appropriate action. For example, a bot will perform any desired "paperwork" when it is added to or removed from a room or direct message (DM), generically referred to as a "space" in the vernacular.

Receiving an ordinary message sent by users is the most likely scenario; most bots do "their thing" here in serving the request. The last event type occurs when a user clicks on an interactive card. Similar to receiving a standard message, a bot performs its requisite work, including possibly updating the card itself. Below is some pseudocode summarizing these four event types and representing what a bot would likely do depending on the event type:

function processEvent(req, rsp) {
  var event = req.body; // event type received
  var message;          // JSON response message

  if (event.type == 'REMOVED_FROM_SPACE') {
    // no response as bot removed from room
    return;

  } else if (event.type == 'ADDED_TO_SPACE') {
    // bot added to room; send welcome message
    message = {text: 'Thanks for adding me!'};

  } else if (event.type == 'MESSAGE') {
    // message received during normal operation
    message = responseForMsg(event.message.text);

  } else if (event.type == 'CARD_CLICKED') {
    // user-click on card UI
    var action = event.action;
    message = responseForClick(
        action.actionMethodName, action.parameters);
  }

  rsp.send(message);
}

The bot pseudocode as well as the bot featured in the video respond synchronously. Bots performing more time-consuming operations or those issuing out-of-band notifications can send messages to spaces in an asynchronous way. This includes messages such as job-completed notifications, alerts if a server goes down, and pings to the Sales team when a new lead is added to the CRM (Customer Relationship Management) system.

Hangouts Chat bots aren't limited to JavaScript or Python, nor to Google Apps Script or Google App Engine. While using JavaScript running on Apps Script is one of the quickest and simplest ways to get a bot online within your organization, it can easily be ported to Node.js for a wider variety of hosting options. Similarly, App Engine allows for more scalability and supports additional languages (Java, PHP, Go, and more) beyond Python. The bot can also be ported to Flask for more hosting options. One key takeaway is the flexibility of the platform: developers can use any language, any stack, or any cloud to create and host their bot implementations. Bots only need to be able to accept HTTP POST requests coming from the Hangouts Chat service to function, as the sketch below illustrates.
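As a concrete illustration of that flexibility, here's a minimal sketch of the same event dispatch as a standalone Node.js service using Express (our example, not an official sample): all it does is accept the POSTed event JSON and reply with a message.

// Minimal Hangouts Chat bot endpoint as a standalone Express service.
const express = require('express');
const app = express();
app.use(express.json()); // Hangouts Chat POSTs JSON events

app.post('/', (req, res) => {
  const event = req.body;
  let message;

  if (event.type === 'REMOVED_FROM_SPACE') {
    return res.status(200).end(); // nothing to say once removed
  } else if (event.type === 'ADDED_TO_SPACE') {
    message = { text: 'Thanks for adding me!' };
  } else if (event.type === 'MESSAGE') {
    message = { text: 'You said: ' + event.message.text };
  } else if (event.type === 'CARD_CLICKED') {
    message = { text: 'You clicked: ' + event.action.actionMethodName };
  }

  res.json(message); // rendered as the bot's synchronous reply
});

app.listen(process.env.PORT || 8080);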

At Google I/O 2018 last week, the Hangouts Chat team leads and I delivered a longer, higher-level overview of the bot framework. This comprehensive tour of the framework includes numerous live demos of sample bots in a variety of languages and platforms. Check out our ~40-minute session below.

To help you get started, check out the bot framework launch post. Also take a look at this post for a deeper dive into the Python App Engine version of the vote bot featured in the video. To learn more about developing bots for Hangouts Chat, review the concepts guides as well as the "how to" for creating bots. You can build bots for your organization, your customers, or for the world. We look forward to all the exciting bots you're going to build!

 

15th May 2018 |

.app is now open for general registration
Posted by Christina Chiou Yeh, Google Registry

On May 1 we announced .app, the newest top-level domain (TLD) from Google Registry. It's now open for general registration so you can register your desired .app name right now. Check out what some of our early adopters are already doing on .app around the globe.

We begin our journey with sitata.app, which provides real-time travel information about events like protests or transit strikes. Looks all clear, so our first stop is the Caribbean, where we use thelocal.app and start exploring. After getting some sun, we fly to the Netherlands, where we're feeling hungry. Luckily, picnic.app delivers groceries, right to our hotel. With our bellies full, it's time to head to India, where we use myra.app to order the medicine, hygiene, and baby products that we forgot to pack. Did we mention this was a business trip? Good thing lola.app helped make such a complex trip stress free. Time to head home now, so we slip on a hoodie we bought on ov.app and enjoy the ride.

We hope these apps inspire you to also find your home on .app! Visit get.app to choose a registrar partner to register your domain.

 

9th May 2018 |

Introducing ML Kit

Posted by Brahim Elbouchikhi, Product Manager

In today's fast-moving world, people have come to expect mobile apps to be intelligent - adapting to users' activity or delighting them with surprising smarts. As a result, we think machine learning will become an essential tool in mobile development. That's why on Tuesday at Google I/O, we introduced ML Kit in beta: a new SDK that brings Google's machine learning expertise to mobile developers in a powerful, yet easy-to-use package on Firebase. We couldn't be more excited!



Machine learning for all skill levels

Getting started with machine learning can be difficult for many developers. Typically, new ML developers spend countless hours learning the intricacies of implementing low-level models, using frameworks, and more. Even for the seasoned expert, adapting and optimizing models to run on mobile devices can be a huge undertaking. Beyond the machine learning complexities, sourcing training data can be an expensive and time consuming process, especially when considering a global audience.

With ML Kit, you can use machine learning to build compelling features, on Android and iOS, regardless of your machine learning expertise. More details below!

Production-ready for common use cases

If you're a beginner who just wants to get the ball rolling, ML Kit gives you five ready-to-use ("base") APIs that address common mobile use cases:

  • Text recognition
  • Face detection
  • Barcode scanning
  • Image labeling
  • Landmark recognition

With these base APIs, you simply pass in data to ML Kit and get back an intuitive response. For example: Lose It!, one of our early users, used ML Kit to build several features in the latest version of their calorie tracker app. Using our text recognition base API and a custom-built model, their app can quickly capture nutrition information from product labels to input a food's content from an image.

ML Kit gives you both on-device and Cloud APIs, all in a common and simple interface, allowing you to choose the ones that fit your requirements best. The on-device APIs process data quickly and will work even when there's no network connection, while the cloud-based APIs leverage the power of Google Cloud Platform's machine learning technology to give a higher level of accuracy.

See these APIs in action on your Firebase console:

Heads up: We're planning to release two more APIs in the coming months. The first is a smart reply API that allows you to support contextual messaging replies in your app, and the second is a high-density face contour addition to the face detection API. Sign up here to give them a try!

Deploy custom models

If you're seasoned in machine learning and you don't find a base API that covers your use case, ML Kit lets you deploy your own TensorFlow Lite models. You simply upload them via the Firebase console, and we'll take care of hosting and serving them to your app's users. This way you can keep your models out of your APK/bundles which reduces your app install size. Also, because ML Kit serves your model dynamically, you can always update your model without having to re-publish your apps.

But there is more. As apps have grown to do more, their size has increased, harming app store install rates and potentially costing users more in data overages. Machine learning can further exacerbate this trend since models can reach tens of megabytes in size. So we decided to invest in model compression. Specifically, we are experimenting with a feature that allows you to upload a full TensorFlow model, along with training data, and receive in return a compressed TensorFlow Lite model. The technology behind this is evolving rapidly and so we are looking for a few developers to try it and give us feedback. If you are interested, please sign up here.

Better together with other Firebase products

Since ML Kit is available through Firebase, it's easy for you to take advantage of the broader Firebase platform. For example, Remote Config and A/B testing lets you experiment with multiple custom models. You can dynamically switch values in your app, making it a great fit to swap the custom models you want your users to use on the fly. You can even create population segments and experiment with several models in parallel.

Other examples include:

Get started!

We can't wait to see what you'll build with ML Kit. We hope you'll love the product like many of our early customers:

Get started with the ML Kit beta by visiting your Firebase console today. If you have any thoughts or feedback, feel free to let us know - we're always listening!

 

15th May 2018 |

Actions on Google at I/O: More ways to drive engagement and create rich, immersive experiences
Posted by Brad Abrams, Group Product Manager, Actions on Google

The Google Assistant is becoming even more conversational and visual – helping people get things done, save time and be more present. And developers like you have been a big part of this story, making the Assistant more useful across more than 500 million devices. Starbucks, Disney, Zyrtec, Singapore Airlines and many others are engaging with users through the Actions they've built. In total, the Google Assistant is ready to help with over 1 million Actions, built by Google and all of you.

Ever since we launched Actions on Google, our mission has been to give you the tools you need to create engaging Actions, making them a part of people's everyday lives. Just over the past six months we've made significant upgrades to our platform to bring us closer to that vision. We made improvements to help your Actions get discovered, opened Actions on Google to more languages, took a few steps toward making your Actions more creative and visually appealing, launched a new conversation design site, and last week announced a new program to invest in startups that push the Assistant ecosystem forward.


Today, I want to share how we're making it even easier for app and web developers to get started with the Google Assistant.

Welcoming Android and web developers

We've seen a lot of great Android developers build Actions that complement their mobile apps. You can already create a personal, connected experience across your Android app and the Actions you build for the Assistant. Now we're making it possible to extend your Android app experiences to the Assistant in even more ways.

Think of your Actions for the Google Assistant as a companion experience to your app that users can access at home or on the go, across phones, smart speakers, TVs, cars, watches, headphones, and, soon, Smart Displays. If you want to personalize some of the experiences from your Android app, account linking lets your users have a consistent experience whether they're in your app or interacting with your Action.

Seamless digital content subscriptions from Google Play

We added support for seamless digital subscriptions so your users can enjoy the content and digital goods they bought in the Google Play Store right in your Assistant Action. For example, since I'm a premium subscriber in the Economist's app, I can now enjoy their premium content on any Assistant-enabled device.

And while you can already help users complete transactions for physical goods, soon you will be able to offer digital goods and subscriptions directly from your Actions.

Fully customizable visuals for display surfaces

The Assistant blends conversation with rich visual interactions for phones, Smart Displays and TVs. We've made it so your Actions already work on these visual surfaces with no extra work.

Starting today, you can take this a step further and better customize the appearance of your Actions for visual surfaces by, among other things, controlling the background image, defining the typeface, and setting color themes used in your Action. Just head to the Actions console, make your changes and test them in the simulator today. These changes will be available on phones, TVs, and Smart Displays when they launch.

Here's an example screenshot from a demo Action:

And below, you can see how Volley was able to create a full screen immersive experience for their game "King for a Day." The ability to create customizable edge-to-edge visuals will launch for developers in the next few months.

Introducing App Actions

In the Android keynote today, we announced a new feature called App Actions. App Actions are a new way to raise the visibility of your Android app to users as they start their tasks. We look forward to creating another channel to reach more users that can engage with your App Actions in the Google Assistant.

App Actions will be available for all developers to try soon; please sign up here if you'd like to be notified.

Find new users and keep them coming back

After you've built an Action for the Assistant, you want to get lots of people engaged with your experience. You can already prompt your users to sign up for Action Notifications on their phones, and soon, we'll be expanding support so users can get notifications on smart speakers and Smart Displays. Today we're also announcing three updates aimed at helping more users discover your Actions and keeping them engaged on a daily basis.

Map your Actions to users' queries with built-in intents

Over the past 20 years, Google has helped connect people with the information, services and content they're looking for by organizing, ranking, and showing the most relevant experience for users. With built-in intents, we're bringing this expertise to use in the Google Assistant.

When someone says "Hey Google, let's play a maps quiz" they expect the Assistant to suggest relevant games that might pertain to geography. For that to happen, we need to understand the user's fundamental intent. This can be pretty difficult; just think of the thousands of ways a user could ask for a game.


To handle this complexity, we're beginning to map all the ways that people can ask for things into a taxonomy of built-in intents. Today, we're making the first set of these intents available to you so you can give the Assistant a deeper understanding of what your Action can do. As a result, the Assistant will be able to better understand and recommend Actions to meet a user's intent. We'll be rolling out hundreds of built-in intents in the coming months.

Today you can implement built-in intents in your action and test them in the simulator. You'll be able to use these in production soon.

Promote your Actions from anywhere a link works

We're now making it easier to drive traffic to your Actions with Action Links. These are hyperlinks you can use anywhere—your website, emails, blog, even social media channels like Facebook and Twitter—that deep link directly into your Action.

Now, when a developer like Headspace has something new to share, they can spread the word and drive engagement directly into their Action from across the web. Users can click on the link and jump into their Action's experience on phones and Smart Displays, and if they click the Action Link while on desktop, they can choose which Assistant-enabled device they'd like to use – from smart speakers to TVs. Go see an example on Headspace's website, or give their Action Link a try here.


If you've already built an Action and want to spread the word, starting today you can visit the Actions console to find your Action Links and get going.

Become a part of your users' daily routines

To consistently re-engage with users, you need to become a part of their daily habits. Google Assistant users can already use routines to execute multiple Actions with a single command, perfect for those times when users wake up in the morning, head out of the house, get ready for bed or many of the other tasks we perform throughout the day.

Now, with Routine Suggestions, after someone engages with your Action, you can prompt them to add your Action to their routines with just a couple of taps.

So when I leave the house for work each morning, I can have my Assistant order my Americano from Starbucks and play that premium content from the Economist.

You can enable your Action for Routine Suggestions in the console today, and it will be working in production soon.


And more...

Before you run off and start sharing Actions links to all of your followers on social media, check out some of the other announcements we're making here at I/O:

  • Better testing: Testing with real users is the best way to ensure your Action has high quality. Starting today, you can deploy your Actions—or updates to your Actions—to a limited set of users in pre-launch alpha and beta environments.
  • Voice transactions on smart speakers: Starting today, users in the US will be able to purchase goods via voice-activated speakers like Google Home, and this is coming to the UK, Australia, Canada, France, Germany, and Japan in the next few weeks.
  • A redesigned Actions console: The new onboarding experience allows you to choose from several categories to tailor your workflow, with a new UI to guide you through the stages of the developer workflow, making it faster and easier to build your Actions.
  • Improvements to the directory: Users can leave written reviews about your Actions while signed in, providing you praise and valuable feedback to fine-tune your Actions over time. We also introduced new dynamic sections—"Popular," "You Might Like" and "Editorial Picks"—in the Explore tab to create new ways for your Actions to be discovered by users.
  • The Google Assistant SDK for devices: We offer support for 14 locales, as well as for card visualization and media (news and podcasts). To see some of these features in action, check out our new poster maker experiment with Deeplocal, or stop by to see it at the Google Assistant I/O Sandbox.
  • Account Linking via Voice: We're launching a developer preview of Google Sign-In for the Assistant. Users will soon be able to connect or create an account with your Actions using just their voice, so there's no need to set up a separate account linking system for your users.
  • 500,000 developers on Dialogflow: the team hit a big milestone with over half a million developers building conversational experiences! Their new releases help you onboard faster, debug smarter, enrich natural language understanding quality, and build for new Google Assistant surfaces.

Extend your experiences to the Google Assistant

We're delighted to see that many of you are starting to test the waters in this emerging era of conversational computing. If you're already building mobile or web apps but haven't tried building conversational Actions for the Google Assistant just yet, now is the perfect time to get started. Start thinking of the companion experiences that could be a fit for the Google Assistant. We have easy-to-follow guides and a community program with rewards and Google Cloud credits to get you up and running in no time. We can't wait to try out your Actions soon!

 

8th May 2018 |

Introducing the Google Photos partner program

Posted by Jan-Felix Schmakeit, Google Photos Developer Lead

People create and consume photos and videos in many different ways, and we think it should be easier to do more with the photos you've taken, across all the apps and devices you use.

That's why we're introducing a new Google Photos partner program that gives you the tools and APIs to build photo and video experiences in your products that are smarter, faster and more helpful.

Building with the Google Photos Library API

With the Google Photos Library API, your users can seamlessly access their photos whenever they need them.

Whether you're a mobile, web, or backend developer, you can use this REST API to utilize the best of Google Photos and help people connect, upload, and share from inside your app.

Your user is always in the driver's seat. Here are a few things you can help them to do:

  • Easily find photos, based on
    • what's in the photo
    • when it was taken
    • attributes like description and media format
  • Upload directly to their photo library
  • Organize albums and add titles and locations
  • Use shared albums to easily transfer and collaborate

With the Library API, you don't have to worry about maintaining your own storage and infrastructure, as photos and videos remain safely backed up in Google Photos.

Putting machine intelligence to work in your app is simple too. You can use smart filters, like content categories, to narrow down or exclude certain types of photos and videos and make it easier for your users to find the ones they're looking for.

We've also aimed to take the hassle out of building a smooth user experience. Features like thumbnailing and cross-platform deep-links mean you can offload common tasks and focus on what makes your product unique.
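To make that concrete, here's a minimal sketch (our illustration, not production code) that searches a library for landscape photos via the REST endpoint and prints thumbnail URLs. It assumes you already hold an OAuth 2.0 access token with a Photos Library scope from your own auth flow, and a JavaScript runtime with the Fetch API available:

// Search the user's library for landscape photos using the Library API and
// print a thumbnail URL for each match.
const ACCESS_TOKEN = '<OAUTH2-ACCESS-TOKEN>'; // from your OAuth 2.0 flow

async function searchLandscapes() {
  const res = await fetch(
    'https://photoslibrary.googleapis.com/v1/mediaItems:search', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${ACCESS_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        pageSize: 25,
        filters: {
          contentFilter: { includedContentCategories: ['LANDSCAPES'] },
        },
      }),
    });
  const { mediaItems = [] } = await res.json();
  // Appending size parameters to baseUrl requests a server-side thumbnail.
  mediaItems.forEach(item =>
    console.log(item.filename, `${item.baseUrl}=w256-h256`));
}

searchLandscapes();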

Getting started

Today, we're launching a developer preview of the Google Photos Library API. You can start building and testing it in your own projects right now.

Get started by visiting our developer documentation where you can also express your interest in joining the Google Photos partner program. Some of our early partners, including HP, Legacy Republic, NixPlay, Xero and TimeHop are already building better experiences using the API.

If you are following Google I/O, you can also join us for our session to learn more.

We're excited for the road ahead and look forward to working with you to develop new apps that work with Google Photos.

 

15th May 2018 |

Ready for Production Apps: Flutter Beta 3

Posted by the Flutter Team at Google

This week at Google I/O, we're announcing the third beta release of Flutter, our mobile app SDK for creating high-quality, native user experiences on iOS and Android, along with showcasing new tooling partners, usage of Flutter by several high-profile customers, and announcing official support from the Material team.

We believe mobile development needs an upgrade. All too often, developers are forced to compromise between quality and productivity: either building the same application twice on both iOS and Android, or settling for a cross-platform solution that makes it hard to deliver the native experience that customers demand. This is why we built Flutter: to offer a new path for mobile development, focused foremost on native performance, advanced visuals, and dramatically improving developer velocity and productivity.

Just twelve months ago at Google I/O 2017, we announced Flutter and delivered an early alpha of the toolkit. Over the last year, we've invested tens of thousands of engineering hours preparing Flutter for production use. We've rewritten major parts of the engine for performance; added support for developing on Windows; published tooling for Android Studio and Visual Studio Code; integrated Dart 2; added support for more Firebase APIs, inline video, ads, charts, internationalization, and accessibility; addressed thousands of bugs; and published hundreds of pages of documentation. It's been a busy year, and we're thrilled to share the latest beta release with you!

Flutter offers:

  1. High-velocity development with features like stateful hot reload, which helps you quickly and easily experiment with your application without having to rebuild from scratch.
  2. Expressive and flexible designs with a layered, extensible architecture of rich, composable, customizable UI widget sets and animation libraries that enables designers' dreams to come to life.
  3. High-quality experiences across devices and platforms with our portable, GPU-accelerated renderer and ahead-of-time compilation to lightning-fast machine code.
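To make those three points concrete, here's a minimal sketch of a complete Flutter app (the names are ours, not part of the SDK): the entire UI is a tree of composable widgets, and stateful hot reload lets you edit that tree live without restarting the app.

```dart
import 'package:flutter/material.dart';

void main() => runApp(const HelloApp());

/// A complete Flutter app: everything on screen, from the app bar to
/// the text, is a widget composed into a single tree.
class HelloApp extends StatelessWidget {
  const HelloApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(title: const Text('Hello, Flutter')),
        body: const Center(child: Text('Edit me, then hot reload!')),
      ),
    );
  }
}
```

Run it with flutter run, change the Text string, and hot reload applies the edit in place while the app keeps running.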

Empowering Developers and Designers

As evidence of the power that Flutter can offer applications, 2Dimensions are this week releasing a preview of a new tool for creating powerful interactive animations with Flutter. Here's an example of the output of their software:

2D skeletal mesh animation created with 2Dimensions' tool, rendered in real time by Flutter

What you're seeing here is Flutter rendering 2D skeletal mesh animations on the phone in real time. This level of graphical horsepower comes from Flutter's use of the hardware-accelerated Skia engine, which draws every pixel to the screen, paired with the blazingly fast, ahead-of-time-compiled Dart language. But it gets better: note how the demo slider widget is translucently overlaid on the animation. Flutter seamlessly combines user interface widgets with 60fps animated graphics generated in real time, with the same code running on iOS and Android.

Here's what Luigi Rosso, co-founder of 2Dimensions, says about Flutter:

"I love the friction-free iteration with Flutter. Hot Reload sets me in a feedback loop that keeps me focused and in tune with my work. One of my biggest productivity inhibitors are tools that run slower than the developer. Flutter finally resets that bar."

One common challenge for mobile application creators is the transition from early design sketches to an interactive prototype that can be piloted or tested with customers. This week at Google I/O, Infragistics, one of the largest providers of developer tooling and components, are announcing their commitment to Flutter and demonstrating how they've set out to close the designer/developer gap even further with supportive tooling. Their Indigo Design to Code Studio enables designers to add interactivity to a Sketch design and generate a pixel-perfect Flutter application.

Customer Adoption

We launched Flutter Beta 1 just ten weeks ago at Mobile World Congress, and it is exciting to see the momentum since then, both on GitHub and in the number of published Flutter applications. Even though we're still building out Flutter, we're pleasantly surprised to see strong early adoption of the SDK, with some high-profile customer examples already published. One of the most popular is the companion app to the award-winning Hamilton Broadway musical, built by Posse Digital, with millions of monthly users and an average rating of 4.6 on the Play Store.

This week, Alibaba is announcing their adoption of Flutter for Xianyu, one of their flagship applications with over twenty million monthly active users. Alibaba praises Flutter for its consistency across platforms, the ease of generating UI code from designer redlines, and how quickly their native developers have learned Flutter. They are currently rolling out this updated version to their customers.

Another company now using Flutter is Groupon, who is prototyping and building new code for their merchant application. Here's what they say about using it:

"I love the fact that Flutter integrates with our existing app and our team has to write code just once to provide a native experience for both our apps. This significantly reduces our time to market and helps us deliver more features to our customers." Varun Menghani, Head of Merchant Product Management, Groupon

In the short time since the Beta 1 launch, we've seen hundreds of Flutter apps published to the app stores, across a wide variety of application categories. Here are a few examples of the diversity of apps being created with Flutter:

  • Abbey Road Studios are previewing Topline, a new version of their music production app.
  • AppTree provides a low-code enterprise app platform for brands like McDonald's, Stanford, Wayfair & Fermilab.
  • Birch Finance lets you manage and optimize your existing credit cards.
  • Coach Yourself offers mindfulness and cognitive-behavioral training.
  • OfflinePal collects nearby activities in one place, from concerts and theaters, to mountain hiking and tourist attractions.

Closer to home, Google continues to use Flutter extensively. One new example announced at I/O comes from Google Ads, who are previewing their new Flutter-based AdWords app that allows businesses to track and optimize their online advertising campaigns. Sridhar Ramaswamy, SVP for Ads and Commerce, says:

"Flutter provides a modern reactive framework that enabled us to unify the codebase and teams for our Android and iOS applications. It's allowed the team to be much more productive, while still delivering a native application experience to both platforms. Stateful hot reload has been a game changer for productivity."

New in Flutter Beta 3

Flutter Beta 3, shipping today at I/O, keeps us on the glide path toward our eventual 1.0 release with new features that complete core scenarios. Dart 2, our reboot of the Dart language with a focus on client development, is now fully enabled, with a terser syntax for building Flutter UIs. Beta 3 is world-ready, with localization support including right-to-left languages, and it provides significantly improved support for building highly accessible applications. New tooling provides a powerful widget inspector that makes it easier to see the visual tree for your UI and preview how widgets will look during development. We have emerging support for integrating ads through Firebase. And Visual Studio Code is now fully supported as a first-class development tool, with a dedicated Flutter extension.
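As a quick illustration of that terser syntax (a sketch in current Dart, with names of our own choosing): constructor keywords like new are now optional, and const is inferred where possible, so nested widget trees read far more cleanly.

```dart
import 'package:flutter/material.dart';

// Before Dart 2, widget constructors needed explicit keywords:
//   return new Center(child: const Text('Hello'));
// With Dart 2, `new` is optional and `const` is inferred in constant
// contexts, so the same tree is simply:
class Greeting extends StatelessWidget {
  const Greeting({super.key});

  @override
  Widget build(BuildContext context) {
    return Center(child: Text('Hello, Dart 2'));
  }
}
```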

The Material Design team has worked with us extensively since the start. We're happy to announce that as of today, Flutter is a first-class toolkit for Material, which means the Material and Flutter teams will partner to deliver even more support for Material Design. Of course, you can continue to use Flutter to build apps with a wide range of design aesthetics to express your brand.

More information about the new features in Flutter Beta 3 can be found on the Flutter blog on Medium. If you already have Flutter installed, a single command, flutter upgrade, gets you onto the latest build. Otherwise, you can follow our getting started guide to install Flutter on macOS, Windows, or Linux.

Roadmap to Release

Flutter has long been used in production at Google and by the public, even though we haven't yet released "1.0." We're approaching our 1.0 quality bar, and in the coming months you'll see us focus on some specific areas:

  1. Performance and size. We'll work on improving both the speed and consistency of Flutter's performance, and offer additional tools and documentation for diagnosing potential issues. We'll also reduce the minimum size of a Flutter application.
  2. Compatibility. We are continuing to grow our support for a broad range of device types, including older 32-bit devices and expanding our set of out-of-the-box iOS widgets. We're also working to make it easier to add Flutter to your existing Android or iOS codebase.
  3. Ecosystem. In partnership with the broader community, we continue to build out an ecosystem of packages that make it easy to integrate with the broad set of platform APIs and SDKs.

Like every software project, Flutter must balance time, quality, and features. We are targeting a 1.0 release within the next year, but we will continue to adjust the schedule as necessary. Since we're an open source project, our open issues are public, and work scheduled for upcoming milestones can be viewed in our GitHub repo at any time. We welcome your help along this journey to make mobile development amazing.

Whether you're at Google I/O in person or watching remotely, we have plenty of technical content to help you get up and running. In particular, we have numerous sessions on Flutter and Material Design, as well as a new series of Flutter codelabs and a Udacity course that is now open for registration.

Since last year, we've been on a great journey together with a community of early adopters. We get an electric feeling when we see the range of apps, experiments, plug-ins, and supporting tools that developers are starting to produce using Flutter, and we're only just getting started. Now is a great time to join us. Connect with us through the website at https://flutter.io, via Twitter at @flutterio, and in our Google group and Gitter chat room. We're excited to see what you build!

 

15th May 2018 |

Open Sourcing Seurat: bringing high-fidelity scenes to mobile VR

Posted by Manfred Ernst, Software Engineer

Great VR experiences make you feel like you're really somewhere else. To create deeply immersive experiences, there are a lot of factors that need to come together: amazing graphics, spatialized audio, and the ability to move around and feel like the world is responding to you.

Last year at I/O, we announced Seurat as a powerful tool to help developers and creators bring high-fidelity graphics to standalone VR headsets with full positional tracking, like the Lenovo Mirage Solo with Daydream. Seurat is a scene simplification technology designed to process very complex 3D scenes into a representation that renders efficiently on mobile hardware. Here's how ILMxLAB was able to use Seurat to bring an incredibly detailed 'Rogue One: A Star Wars Story' scene to a standalone VR experience.

Today, we're open sourcing Seurat to the developer community. You can now use Seurat to bring visually stunning scenes to your own VR applications and have the flexibility to customize the tool for your own workflows.

Behind the scenes - How Seurat works

Seurat takes advantage of the fact that VR scenes are typically viewed from within a limited viewing region, and uses this constraint to optimize the geometry and textures in your scene. It takes RGBD images (color and depth) as input and generates a textured mesh, targeting a configurable triangle count, texture size, and fill rate, to simplify scenes beyond what traditional methods can achieve.

To demonstrate what Seurat can do, here's a snippet from Blade Runner: Revelations, which launched today with the Lenovo Mirage Solo.

Blade Runner: Revelations by Alcon Interactive and Seismic Games

The Blade Runner universe is known for its stunning worlds, and in Revelations, you get to unravel a mystery around fugitive Replicants in the futuristic but gritty streets. To create the look and feel for Revelations, Seismic used Seurat to bring a scene of 46.6 million triangles down to only 307,000, improving performance by more than 100x with almost no loss in visual quality:

Original scene:

Seurat-processed scene:

If you're interested in learning more about Seurat or trying it out yourself, visit the Seurat GitHub page to access the documentation and source code. We're looking forward to seeing what you build!

 

5th May 2018 |

Install the Google I/O 2018 App and reserve seats for Sessions

Posted by Kerry Murrill, Google Developers Marketing

I/O is just a couple of days away! As we get closer, we hope you've had the chance to explore the schedule to make the most of the three festival days. In addition to customizing your schedule on google.com/io/schedule, you can now browse through our 150+ Sessions and dozens of Office Hours, App Reviews, and Codelabs via the Google I/O 2018 mobile app or the Action for the Assistant.

Apps: Android, iOS, Web (add to your mobile homescreen), Action for the Assistant

Here is a breakdown of all the things you can do with the mobile app this year:

Screenshots: the Schedule on iOS, Session details on Android, the Map on Android, and the Action on the Assistant

SCHEDULE

Browse, filter, and find Sessions, Office Hours, Codelabs, App Reviews and the recently added Meetups across 18 product areas.

Be sure to reserve seats for your favorite Sessions either in the app or at google.com/io/schedule. You can reserve as many Sessions as you'd like per day, but only one reservation per time slot is allowed. Reservations will be open until 1 hour before the start time for each Session. If a Session is full, you can join the waitlist and we'll automatically change your reservation status if any spots open up (you can now check your waitlist position on the I/O website). A portion of seats will still be available first-come, first-served for those who aren't able to reserve a seat in advance.

Most Sessions will be livestreamed and recordings will be available soon after. Want to celebrate I/O with your community? Find an I/O Extended viewing party near you.

In addition to attending Sessions, and participating in Office Hours and App Reviews, you'll have the opportunity to talk directly with Google engineers throughout the Sandbox space, which will feature multiple product demos and activations, and during Codelabs where you can complete self-paced tutorials.

Remember to save some energy for the evening! On Day 1, attendees are invited to the After Hours Block Party from 7-10PM. It will include dinner, drinks, and lots of fun, interactive experiences throughout the Sandbox space: a magic show, a diner, throwback treats, an Android-themed Bouncy World, MoDA 2.0, the I/O Totem stage, and lots of music! On Day 2, don't miss out on the After Hours Concert from 8-10PM, with food and drinks available throughout. The concert will be livestreamed so you can join from afar, too. Stay tuned to find out who's performing this year!

To make things easy for you, your starred and reserved events are always synced from your account across mobile, desktop, and the Assistant, so you can switch back and forth as needed. You can also filter down to only your starred and reserved events.

MAP

Guide yourself around Shoreline with the interactive map. Find your way to your next Session or see what's inside the Sandbox domes.

INFO & TRANSPORTATION

Find more information about onsite WiFi and content formats, plus travel tips for getting to Shoreline, including the shuttle schedule.

Keeping with tradition, the mobile app and Action for the Assistant will be open sourced after I/O. Until then, we hope they help you navigate the schedule and grounds for a great experience.

T-4 days… See you soon!