Google Developers Blog

 

20th March 2019 |

Update on the Google Groups Settings API

Posted by Zerzar Bukhari, Product Manager, G Suite

In February 2019, we announced upcoming changes to the Google Groups Settings API. Based on your feedback, we're making improvements to the Groups API to make it easier for you to assess the impact and take action. For the full list of changes, see this help center article.

When will API changes take effect?

The new features will be available starting March 25, 2019. It may take up to 72 hours for the features to roll out to everyone.

What's changing?

  • Property 'membersCanPostAsTheGroup' will not be merged into 'whoCanModerateContent'
  • Property 'messageModerationLevel' will continue to support MODERATE_NEW_MEMBERS (it will not be deprecated)
  • New property 'customRoleUsedInMergedSetting'
    • This will indicate if a group uses custom roles in one of the merged settings. If a group uses a custom role, review the permissions in the Groups interface. The Groups API doesn't support custom roles and may report incorrect values for permissions.
  • New properties representing all to-be-merged settings, as well as the new settings, will be added
  • New property 'whoCanDiscoverGroup' to indicate the upcoming behavior for 'showInGroupDirectory'

For complete details on the Groups Settings API behavior changes, please reference this table.
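
To make the check concrete, here is a minimal sketch (not an official snippet; the group address and access token are placeholders, and the OAuth setup is assumed to be done separately) that fetches a group's settings from the Groups Settings API REST endpoint so you can inspect properties such as whoCanDiscoverGroup and customRoleUsedInMergedSetting:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GroupSettingsCheck {
  public static void main(String[] args) throws Exception {
    // Placeholders: supply your own group address and an OAuth 2.0 access token
    // authorized for the https://www.googleapis.com/auth/apps.groups.settings scope.
    String groupEmail = "my-group@example.com";
    String accessToken = "ACCESS_TOKEN";

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://www.googleapis.com/groups/v1/groups/" + groupEmail + "?alt=json"))
        .header("Authorization", "Bearer " + accessToken)
        .GET()
        .build();

    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

    // The JSON body includes the settings discussed above, e.g.
    // "whoCanDiscoverGroup" and "customRoleUsedInMergedSetting".
    System.out.println(response.body());
  }
}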

 

17th March 2019 |

This is the Future of Finance

Posted by Roy Glasberg, Head of Launchpad

Launchpad's mission is to accelerate innovation and to help startups build world-class technologies by leveraging the best of Google - its people, network, research, and technology.

In September 2018, the Launchpad team welcomed ten of the world's leading FinTech startups to join their accelerator program, helping them fast-track their application of advanced technology. Today, March 15th, we will see this cohort graduate from the program at the Launchpad team's inaugural event - The Future of Finance - a global discussion on the impact of applied ML/AI on the finance industry. These startups are ensuring that everyone has relevant insights at their fingertips and that all people, no matter where they are, have access to equitable money, banking, loans, and marketplaces.

Tune in to the event from wherever you are via the livestream link.

The Graduating Class of Launchpad FinTech Accelerator San Francisco'19

  • Alchemy (USA), bridging blockchain and the real world
  • Axinan (Singapore), providing smart insurance for the digital economy
  • Aye Finance (India), transforming financing in India
  • Celo (USA), increasing financial inclusion through a mobile-first cryptocurrency
  • Frontier Car Group (Germany), investing in the transformation of used-car marketplaces
  • GO-JEK (Indonesia), improving the welfare and livelihoods of informal sectors
  • GuiaBolso (Brazil), improving the financial lives of Brazilians
  • JUMO (South Africa), creating a transparent, fair money marketplace for mobile users to access loans
  • m.Paani (India), (em)powering local retailers and the next billion users in India
  • Starling Bank (UK), improving financial health with a 100% mobile-only bank

Since joining the accelerator, these startups have made great strides and are going from strength to strength. Some recent announcements from this cohort include:

  • JUMO continues to make huge strides forward, including reaching more first-time banking customers as a result of improvements in its predictive capabilities.
  • The team at Aye Finance has just closed a $30m Series D equity round.
  • Starling Bank has created 150 new jobs in Southampton, received a £100m grant from a fund aimed at increasing competition and innovation in the British banking sector, and completed a £75m fundraise.
  • GuiaBolso ran a campaign to pay the bills of some of its users (the beginning of the year in Brazil is a time of high expenses and debts) and is having a significant impact on credit, with interest rates on loans coming in cheaper than traditional banks in 80% of cases.

We look forward to following the success of all our participating founders as they continue to make a significant impact on the global economy.

Want to know more about the Launchpad Accelerator? Visit our site, stay updated on developments and future opportunities by subscribing to the Google Developers newsletter and visit The Launchpad Blog.

 

11th March 2019 |

Launchpad Accelerator announces startup selections in Africa, Brazil, and India

Posted by Roy Glasberg, Founder of Launchpad Accelerator

For the past six years, Launchpad has connected startups from around the world with the best of Google - its people, network, methodologies, and technologies. We have worked with market leaders in over 40 countries across 6 regional programs (San Francisco, Brazil, Africa, Israel, India, and Tokyo). Launchpad also includes a new program in Mexico announced earlier this year, along with our Indie Games Accelerator and Google.org AI for Social Good Accelerator programs.

We are pleased to announce that the next cohort of startups has been selected for our upcoming programs in Africa, Brazil, and India. We reviewed over 1,000 applications for these programs, and were thoroughly impressed with the quality of startups that indicated their interest. The startups chosen represent those using technology to create a positive impact on key industries in their region and we look forward to supporting them and connecting them with startup ecosystems around the world.

In Africa, we have selected 12 startups from 6 African countries for our 3rd class in this region:

  • 54Gene (Nigeria) - Improving drug discovery by researching the genetically diverse African population
  • Data Integrated Limited (Kenya) - Automating and digitizing SME payments, connecting the street to high finance.
  • Instadiet.me (Egypt) - Connecting patients to credible nutritionists and dietitians to help them maintain a healthy and optimal weight online.
  • Kwara (Kenya) - Providing a rich digital banking platform to established fair lenders such as credit unions or savings and credit cooperatives (SACCOs), with an open API to enable and accelerate their inclusion into the formal financial ecosystem.
  • OkHi (Kenya) - A physical addressing platform for emerging markets - on a mission to enable the billions without a physical address to "be included."
  • PAPS (Senegal) - A logistics startup focused on last-mile delivery and the domestic market, with a strong client-care orientation, offering live tracking, an intelligent address system, and automatic dispatch.
  • ScholarX (Nigeria) - Connecting high potential students with funding opportunities to help them access an education
  • Swipe2pay (Uganda) - A web and mobile payments solution that democratizes electronic payments for SMEs by making it easy for them to accept mobile as a mode of payment.
  • Tambua Health Inc. (Kenya) - Turning a normal smartphone into a powerful, non-invasive diagnostic tool for Tuberculosis and Pneumonia. It uses a cough sound acoustic signature, symptoms, risk factors, and clinical information to come up with a diagnostic report.
  • Voyc.ai (South Africa) - A CX Research Platform that helps companies understand their customers by turning their customer research into insights, profiles, and customer journey maps.
  • WellaHealth (Nigeria) - A pharmacy marketplace for affordable, high-quality disease care driven by artificial intelligence starting with malaria.
  • Zelda Learning (South Africa) - Providing free online career guidance for students looking to enter university and linking them to funding and study opportunities.

In India, for our 2nd class, we are focused on seed to growth-stage startups that operate across a number of sectors using ML and AI to solve for India-specific problems:

  • Opentalk Pte Ltd - an app that connects people around the world to become better speakers and make new friends.
  • THB - Helping healthcare providers drive full potential value from their clinical data
  • Perceptiviti Data Solutions - An AI platform for insurance claim flagging, payment integrity, fraud, and abuse management
  • DheeYantra - Cognitive conversational AI for Indian vernacular languages
  • Kaleidofin - Customized financial solutions that combine multiple financial products such as savings, credit, and insurance in intuitive ways to help customers achieve their financial goals.
  • FinancePeer - A P2P lending company that connects lenders with borrowers online.
  • SmartCoin - A go-to app for providing credit access to the vastly underserved lower- and middle-income segments through advanced AI/ML models.
  • HRBOT - Using AI and Video Analytics to find employable candidates in tier 2 & 3 cities remotely.
  • Savera.ai - Remotely mapping roofs to reflect the attractiveness of a solar power plant for your roof, followed by chatbot based support to help you learn about solar (savings, RoI, reviews etc.) and connections to local service providers.
  • Adiuvo Diagnostics - Rapid wound infection assessment and management device.

In Brazil, we have chosen startups that are applying ML in interesting ways and are solving for local challenges.

  • Accountfy - SaaS platform focused on FP&A tools. Users upload trial balances, and financial statements are easily built from the accounting figures. Charts, alerts, reports and budgets can be created too.
  • Agilize - An online accounting firm that provides annual savings of $1,500, predictability, and transparency to small businesses through a friendly platform and massive automation.
  • Blu365 - An innovative, data-driven, customer-centric debt negotiation platform that has been positively transforming the relationship between companies and customers.
  • Estante Mágica - A free platform that, in partnership with schools, turns students into real authors, making children protagonists of their own stories.
  • Gesto - A health tech consulting firm that uses data science to intelligently manage health insurance.
  • Rebel - A data, tech, and analytics-driven platform whose mission is to lead the transformation of the financial services market in Brazil by empowering consumers.
  • SmarttBot - Empowering individuals with the best automated investment tools in order to give them an edge against bigger investors and financial institutions and improve their chances of making money.
  • Social Miner - A technology able to predict if an e-commerce visitor will buy or not and create experiences based on the consumer journey phases.

Applications are still open for Launchpad Accelerator Mexico - if you are a LATAM-based startup using technology to solve big challenges for that region, please apply to the program here.

As with all of our previous regional classes, these startups will benefit from customized programs, access to partners and mentors on the ground, and Google's support and dedication to their success.


Stay updated on developments and future opportunities by subscribing to the Google Developers newsletter, as well as The Launchpad Blog.

 

6th March 2019 |

Introducing Coral: Our platform for development with local AI

Posted by Billy Rutledge (Director) and Vikram Tank (Product Mgr), Coral Team

AI can be beneficial for everyone, especially when we all explore, learn, and build together. To that end, Google's been developing tools like TensorFlow and AutoML to ensure that everyone has access to build with AI. Today, we're expanding the ways that people can build out their ideas and products by introducing Coral into public beta.

Coral is a platform for building intelligent devices with local AI.

Coral offers a complete local AI toolkit that makes it easy to grow your ideas from prototype to production. It includes hardware components, software tools, and content that help you create, train and run neural networks (NNs) locally, on your device. Because we focus on accelerating NNs locally, our products offer speedy neural network performance and increased privacy — all in power-efficient packages. To help you bring your ideas to market, Coral components are designed for fast prototyping and easy scaling to production lines.

Our first hardware components feature the new Edge TPU, a small ASIC designed by Google that provides high-performance ML inferencing for low-power devices. For example, it can execute state-of-the-art mobile vision models such as MobileNet V2 at 100+ fps, in a power efficient manner.

Coral Camera Module, Dev Board and USB Accelerator

For new product development, the Coral Dev Board is a fully integrated system designed as a system on module (SoM) attached to a carrier board. The SoM brings the powerful NXP iMX8M SoC together with our Edge TPU coprocessor (as well as Wi-Fi, Bluetooth, RAM, and eMMC memory). To make prototyping computer vision applications easier, we also offer a Camera that connects to the Dev Board over a MIPI interface.

To add the Edge TPU to an existing design, the Coral USB Accelerator allows for easy integration into any Linux system (including Raspberry Pi boards) over USB 2.0 and 3.0. PCIe versions are coming soon, and will snap into M.2 or mini-PCIe expansion slots.

When you're ready to scale to production we offer the SOM from the Dev Board and PCIe versions of the Accelerator for volume purchase. To further support your integrations, we'll be releasing the baseboard schematics for those who want to build custom carrier boards.

Our software tools are based around TensorFlow and TensorFlow Lite. TF Lite models must be quantized and then compiled with our toolchain to run directly on the Edge TPU. To help get you started, we're sharing over a dozen pre-trained, pre-compiled models that work with Coral boards out of the box, as well as software tools to let you re-train them.

For those building connected devices with Coral, our products can be used with Google Cloud IoT. Google Cloud IoT combines cloud services with an on-device software stack to allow for managed edge computing with machine learning capabilities.

Coral products are available today, along with product documentation, datasheets and sample code at g.co/coral. We hope you try our products during this public beta, and look forward to sharing more with you at our official launch.

 

4th March 2019 |

.dev for all

Posted by Adam Seligman, VP, Developer Relations

Last week we announced the new .dev top-level domain (TLD) was open for Early Access registrations. As of today, .dev is available to anyone through your registrar of choice (typically $12-$15 for standard priced domains, varies by registrar).

We envision .dev as a home for developers. From tools to programming languages to blogs, .dev is the best place for all the amazing things that you build. Over the past few months, we've launched, or re-launched, many of our own developer sites on the new domain. Here are some of our favorites:

  • Learn how to build a better web at web.dev.
  • Start your open source journey with the right license. Did you know that without the right license, software isn't really open source? Opensource.dev explains why.
  • Learn how to build beautiful native apps on iOS and Android from a single codebase. Visit flutter.dev to learn more.
  • Join the TensorFlow community at tfhub.dev.
  • Analyze and tune your software with performance tracing for Android, Linux, and Chrome. Check out perfetto.dev.
  • Explore Google's open source JavaScript and WebAssembly engine at v8.dev
  • Get your hands on Puppeteer, a Node library that provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Get it at pptr.dev.

But we're not done yet! We've got big plans for .dev, and we'd like to invite you to join us. To start, everyone who applied for a ticket to Google I/O 2019 will get a .dev domain at no cost for one year. If you entered the drawing, check your inbox for your redemption code. We'll be moving more of our existing projects and launching some exciting things on .dev in the months to come. We can't wait to see what you build on .dev -- share what you create with #hellodotdev.

 

27th February 2019 |

A new space for Southeast Asian developers in Singapore

Posted by Sami Kizilbash, Developer Relations Program Manager

Last November, Raymond Chan, a data scientist at Chope, attended one of our first ML bootcamps for developers and start-ups in Southeast Asia. Over four days, he gained a deeper understanding of how to use Google Cloud Platform to better structure the approximately 775,000 records generated every day on Chope's real-time restaurant reservation booking platform. With this new knowledge, Chope has been able to use that data for more effective and timely decision-making, making it easier for customers to book restaurants.

Last week in Singapore, we opened the Developer Space @ Google Singapore—a space that brings together resources to help Southeast Asian developers, entrepreneurs and community groups grow, plus earn more with their businesses. This is the first physical space dedicated to developers that sits inside a Google office, so developers in Singapore can look forward to benefiting from insights, hands-on mentorship and networking opportunities with various teams working at our Asia Pacific headquarters.

Supporting startups and developers like Raymond, and helping them achieve their full potential, is something we're passionate about. In addition to the ML bootcamps, which we expect another 800 developers in Singapore to attend by the end of this year, we will run a range of workshops on the latest Google tools and technologies, as well as programs like LeadersLab and Indie Games Accelerator that fuel ecosystem growth. We will also support activities run by community groups like Google Developer Groups, Google Business Groups and Women Techmakers.

With developers and startups from Southeast Asia rapidly driving growth across the region, we can't think of a better place to open this new hub. Come join us throughout the year for an exciting roster of events and meet people who, like Raymond, are looking to build and scale great products. Check out our schedule of events here.

 

26th February 2019 |

Launching Flutter 1.2 at Mobile World Congress

Posted by the Flutter team

The Flutter team is coming to you live this week from Mobile World Congress in Barcelona, the largest annual gathering of the mobile technology industry. One year ago, we announced the first beta of Flutter at this same event, and since then Flutter has grown faster than we could have imagined. So it seems fitting that we celebrate this anniversary occasion with our first stable update release for Flutter.

Flutter 1.2

Flutter 1.2 is the first feature update for Flutter. We've focused this release on a few major areas:

  • Improved stability, performance and quality of the core framework.
  • Work to polish visual finish and functionality of existing widgets.
  • New web-based tooling for developers building Flutter applications.

Having shipped Flutter 1.0, we focused a good deal of energy in the last couple of months on improving our testing and code infrastructure, clearing a backlog of pull requests, and improving performance and quality of the overall framework. We have a comprehensive list of these requests in the Flutter wiki for those who are interested in the specifics. This work also included broader support for new UI languages such as Swahili.

We continue to make improvements to both the Material and Cupertino widget sets, to support more flexible usage of Material and continue to strive towards pixel-perfect fidelity on iOS. The latter work includes support for floating cursor text editing, as well as showing continued attention to minor details (for example, we updated the way the text editing cursor paints on iOS for a faithful representation of the animation and painting order). We added support for a broader set of animation easing functions, inspired by the work of Robert Penner. And we added support for new keyboard events and mouse hover support, in preparation for deeper support for desktop-class operating systems.

The plug-in team has also been busy in Flutter 1.2, with work well underway to support in-app purchases, as well as many bug fixes for video player, webview, and maps. And thanks to a pull request contributed by a developer from Intuit, we now have support for Android App Bundles, a new packaging format that helps in reducing app size and enables new features like dynamic delivery for Android apps.

Lastly, Flutter 1.2 includes the Dart 2.2 SDK, an update that brings significant performance improvements to compiled code along with new language support for initializing sets. For more information on this work, you can read the Dart 2.2 announcement.

(As an aside, some might wonder why this release is numbered 1.2. Our goal is to ship a 1.x release to the 'beta' channel on about a monthly basis, and to release an update approximately every quarter to the 'stable' channel that is ready for production usage. Our 1.1 release last month went to the beta channel, so 1.2 is our first stable update.)

New Tools for Flutter Developers

Mobile developers come from a variety of backgrounds and often prefer different programming tools and editors. Flutter itself supports different tools, including first-class support for Android Studio and Visual Studio Code as well as support for building apps from the command line, so we knew we needed flexibility in how we expose debugging and runtime inspection tools.

Alongside Flutter 1.2, we're delighted to preview a new web-based suite of programming tools to help Flutter developers debug and analyze their apps. These tools are now available for installation alongside the extensions and add-ins for Visual Studio Code and Android Studio, and offer a number of capabilities:

  • A widget inspector, which enables visualization and exploration of the tree hierarchy that Flutter uses for rendering.
  • A timeline view that helps you diagnose your application at a frame-by-frame level, identifying rendering and computational work that may cause animation 'jank' in your apps.
  • A full source-level debugger that lets you step through code, set breakpoints and investigate the call stack.
  • A logging view that shows activity you log from your application as well as network, framework and garbage collection events.

We plan to invest further in this new web-based tooling for both Flutter and Dart developers and, as integration for web-based experiences improves, we plan to build these services directly into tools like Visual Studio Code.

What's next for Flutter?

In addition to the engineering work, we took some time after Flutter 1.0 to document our 2019 roadmap, and you'll see that we've got plenty of work ahead of us.

A big focus for 2019 is growing Flutter beyond mobile platforms. At Flutter Live, we announced a project codenamed "Hummingbird", which brings Flutter to the web, and we plan to share a technical preview in the coming months. In addition, we continue to work on bringing Flutter to desktop-class devices; this requires work both at the framework level as described above, as well as the ability to package and deploy applications for operating systems like Windows and Mac, in which we're investing through our Flutter Desktop Embedding project.

Flutter Create: what can you do with 5K of Dart?

This week, we're also excited to launch Flutter Create, a contest that challenges you to build something interesting, inspiring, and beautiful with Flutter using five kilobytes or less of Dart code. 5K isn't a lot -- for a typical MP3 file, it's about a third of a second of music -- but we're betting you can amaze us with what you can achieve in Flutter with such a small amount of code.

The contest runs until April 7th, so you've got a few weeks to build something cool. We have some great prizes, including a fully-loaded iMac Pro developer workstation with a 14-core processor and 128GB of memory that is worth over $10,000! We'll be announcing the winners at Google I/O, where we'll have a number of Flutter talks, codelabs and activities.

In closing

Flutter is now one of the top 20 software repos on GitHub, and the worldwide community grows with every passing month. Between meetups in Chennai, India, articles from Port Harcourt, Nigeria, apps from Copenhagen, Denmark and incubation studios in New York City, USA, it's clear that Flutter continues to become a worldwide phenomenon, thanks to you. You can see Flutter in apps that have hundreds of millions of users, and in apps from entrepreneurs who are bringing their first idea to market. It's exciting to see the range of ideas you have, and we hope that we can help you express them with Flutter.

Attendees of a Flutter deep dive at Technozzare, SRM University.

Finally, we've recently launched a YouTube channel exclusively dedicated to Flutter. Be sure to subscribe at flutter.dev/youtube for shows including the Boring Flutter Development Show, Widget of the Week, and Flutter in Focus. You'll also find a new case study from Dream11, a popular Indian fantasy sports site, as well as other Developer Stories. See you there!

 

25th February 2019 |

Build Actions for the next billion users

Posted by Brad Abrams, Group Product Manager, Actions on Google

Before we look forward and discuss updates to Actions on Google for 2019, we wanted to recognize our global developer community for your tremendous work in 2018. We saw more than 4 times the number of projects created with Actions on Google this past year. And some of the most popular Action categories include Games and Trivia, Home Control, Music, Actions for Families, and Education – well done!

We hope to carry this enthusiasm forward, and at Mobile World Congress, we're announcing new tools so you can reach and engage with more people around the globe.

Building for the next billion users

The Google Assistant's now available in more than 80 countries in nearly 30 languages, and you've been busy making your Actions accessible in many of those locales.

One of the most exciting things we've seen in the last couple of years is happening in places where the next billion users are coming online for the first time. In these fast-growing countries like India, Indonesia, Brazil, and Mexico, voice is often the primary way users interact with their devices because it's natural, universal, and the most accessible input method for people who are starting to engage with technology for the first time in their lives.

Actions on Google coming to KaiOS and Android (Go Edition)

As more countries are coming online, we want to make it so you can reach and engage with these users as they're adopting the Google Assistant into their everyday lives with astonishing ease. There are tens of millions of users on Android Go and KaiOS in over 100 countries.

We'll be making your Actions available to Android Go and KaiOS devices in the next few months, so you should start thinking now about how to build for these platforms and users. Without any additional work required, your Actions will work on both operating systems at launch (unless, of course, your Action requires a screen with touch input). We'll also be launching a simulator so you can test your Actions to see how they look on entry-level Android Go smartphones and KaiOS feature phones.

A couple of partners have already built Actions with these new audiences in mind. Hello English, for example, created an Action to offer English lessons for users that speak Hindi, to create more opportunities for people through language learning. And Where is My Train? (WIMT) was built for the millions of Indians commuting daily, offering real-time locations and times for trains accessible by voice. Check out our developer docs for KaiOS and Android Go Edition, and start building for the next billion users.

Expanding capabilities to more languages and countries

And we're not just focused on a handful of emerging countries. We're always working to enable all of Actions on Google's tools so users can enjoy the best experience possible regardless of the country they live in or the language they speak—our work here never ends! Here's a snapshot of some of the progress we've made this past year:

  • New locales: Since last MWC, we've launched Actions on Google support for more languages and locales. You can now build Actions in 19 languages across 28 locales.
  • WaveNet voices: As we've launched Actions on Google in more languages, we've added more text-to-speech voice options for your Actions. And thanks to WaveNet advancements, we're introducing improved, more natural-sounding TTS voices for English (en-US, en-GB and en-AU), Dutch, French (fr-FR and fr-CA), German, Italian, Russian, Portuguese (Brazilian), Japanese, Korean, Polish, Danish and Swedish. You can listen to the upgraded voices here, and they'll start rolling out to your Actions in the coming weeks.
  • Transactions: You can now offer transactional experiences in 22 markets, up from just 1 since last MWC. If you're looking to incorporate transactions in your Actions, check out these tips.
  • Templates for the next billion users: If you're not yet familiar with templates, you can fill in a Google Sheet and publish an Action within minutes. Trivia and Personality Quiz templates are available in English (en-US and en-UK), French, German, Italian, Japanese, Korean, Portuguese, Spanish, Hindi and Indonesian. All you have to do is upload a Sheet in any of the languages above and your Actions will be live in those languages.

We've already talked about how busy the development community was this past year, and we've been hard at work to keep up! If you're looking to reach and engage with millions—even billions more users—now's a good time to start thinking about how your Action can make a difference in people's lives around the globe.

 

21st February 2019 |

Five new investments for the Google Assistant Investments program

Posted by Ilya Gelfenbeyn, Head of the Google Assistant Investments program

Last year, we announced the Google Assistant Investments program with the goal to help pioneering startups bring their ideas to life in the digital assistant ecosystem. Not only have we invested in some really great startups, we've also been working closely with these companies to make their services available to more users.

We're excited to be back to announce five new portfolio companies and catch up on the progress some of them have made this past year. With the next batch of investments, we're helping companies explore how digital assistants can improve the hospitality, insurance, fashion and education industries, and we have something for sports fans too.

Welcome to our new portfolio investments

First up, AskPorter. This London-based team was founded to make managing spaces simple, providing every property manager and occupant with a digital personal assistant. AskPorter is an AI-powered property management platform with a digital assistant called Porter. Porter assists with and takes care of all aspects of property management, such as guiding inspections, arranging viewings, troubleshooting maintenance issues and chasing payments.

GradeSlam is an on-demand, chat-based, personalized learning and tutoring service available across all subject areas. Sessions are conducted via chat, creating a learning environment that allows students to interact freely and personally with qualified educators. The Montreal-based team's service is already used by more than 150,000 students, teachers and administrators.

Aiva Health puts smart speakers in hospitals and senior communities to reduce response times and improve satisfaction for patients, seniors, and caregivers alike. Aiva understands patient requests and routes them to the most appropriate caregiver so they can respond instantly via their mobile app. The Aiva platform provides centralized IoT management, powering Smart Hospitals and Smart Communities.

StyleHacks (formerly Maison Me) was founded with the goal of empowering people to take back control of their style and wardrobe. With a conversational interface and personalized AI-powered recommendations, they're helping people live their most stylish lives. The team launched the "StyleHacks" Action for phones and Smart Displays in December 2018, helping people decide what to wear by providing personalized recommendations based on the weather and their preferences. And in the next few months, StyleHacks will also be able to help you shop for clothes you will actually wear. Just ask StyleHacks what to wear today.

StatMuse turns the biggest sports stars into your own personal sports commentator. Powered by the personalities of more than 25 sports superstars including Peyton Manning, Jerry Rice and Scott Van Pelt, fans can get scores, stats and recaps for the NBA, NFL, NHL and MLB dating back to 1876. To try it out, just say, "Hey Google, talk to StatMuse."

It's been almost a year since we launched the Investments program and we're happy to see how some of these companies are already using voice to broaden the Google Assistant's capabilities. If you're working on new ways for people to use their voice to get things done, or building new hardware devices for digital assistants, we'd like to hear from you.

 

21st February 2019 |

Launchpad Accelerator Mexico now accepting startup applications

Posted by Francisco Solsona, Developer Relations Manager, Google Hispanoamerica

The Latin American startup ecosystem is thriving. Many success stories from the region have served as inspiration for entrepreneurs and investors alike. To build upon this momentum, we believe it is important to continue supporting programs for entrepreneurs that are using technology to solve some of the region's biggest challenges.

That's why we are happy to announce Launchpad Accelerator Mexico, a program focused on helping startups throughout Latin America create attractive, scalable, and impactful products and technologies. This program has existed in different parts of the world, such as Tel Aviv, Israel; Lagos, Nigeria; and São Paulo, Brazil, and now comes to Mexico City thanks to our partnership with Centraal, a thriving co-working space.

Access to new technologies and technical experts is essential to guarantee startup success. With Google Cloud serving as the backbone for today's global startups, Launchpad Accelerator Mexico will help startups overcome technological challenges in Artificial Intelligence, Machine Learning, Android, and web solutions.

The inaugural program will last three months and will offer technical support on an initially defined, high impact project. The cohort of entrepreneurs will have access to a group of mentors from Google and other industry experts.

This program is for startups that:

  • Have already validated their business model and are working on product-market fit and traction
  • Are interested in developing their products using the following technologies: Artificial Intelligence, Machine Learning, Android, Google Cloud Platform, Web (Progressive Web Apps and Accelerated Mobile Pages)
  • Would like their technology/product leader or team to participate in the program's activities

Registration for the inaugural class is now open and startups can apply using this form. Registration remains open until March 15. The selected startups will be announced on March 21 and will start working with Google on April 29, 2019.

 

22nd February 2019 |

Hello, .dev!

Posted by Ben Fried, VP, CIO, & Chief Domains Enthusiast

Developers, designers, writers and architects: you built the web. You make it possible for the billions of people online today to do what they do. Have you ever tried to register your preferred domain name, only to find out it's not available? Today, Google Registry is announcing .dev, a brand new top-level domain (TLD) that's dedicated to developers and technology. We hope .dev will be a new home for you to build your communities, learn the latest tech and showcase your projects—all with a perfect domain name.

Check out what some companies, both big and small, are doing on .dev:

  • Want to build a website? Both GitHub.dev and grow.dev have you covered.
  • Trying to create more inclusive products? Visit accessibility.dev for digital accessibility solutions.
  • Learn about Slack's helpful tools, libraries, and SDKs at slack.dev.
  • Connect with Women Who Code at women.dev.
  • Who doesn't want to do more with their time? JetBrains.dev offers software solutions that make developers more productive.
  • Want to brush up on your skills (or learn new ones)? Check out Codecademy.dev.
  • Learn how to build apps on the Salesforce platform at crm.dev.
  • Interested in learning how to increase the agility and productivity of your data team? Visit dataops.dev.
  • Want to build & deploy serverless apps on a global cloud network? You can do that with Cloudflare at workers.dev.
  • Get a sneak peek of what's running under the hood of the Niantic Real World Platform at ar.dev.

Like our recent launches for .app and .page, this new domain will be secure by default because it requires HTTPS to connect to all .dev websites. This protects people who visit your site against ad malware and tracking injection by internet service providers, and from spying when using open WiFi networks. With every .dev website that's launched, you help move the web to an HTTPS-everywhere future.

Starting today at 8:00 a.m. PT and through February 28, .dev domains are available to register as part of our Early Access Program, where you can secure your desired domains for an additional fee. The fee decreases according to a daily schedule. Beginning on February 28, .dev domains will be available at a base annual price through your registrar of choice. To find out pricing from our participating partners, visit get.dev.

Google has already started using .dev for some of our own projects, like web.dev and opensource.dev. Visit get.dev to see what companies like Mozilla, Netflix, Glitch, Stripe, JetBrains and more are doing on .dev and get your own domain through one of our registrar partners. We look forward to seeing what you create on .dev!

 

15th February 2019 |

New UI tools and a richer creative canvas come to ARCore

Posted by Evan Hardesty Parker, Software Engineer

ARCore and Sceneform give developers simple yet powerful tools for creating augmented reality (AR) experiences. In our last update (version 1.6) we focused on making virtual objects appear more realistic within a scene. In version 1.7, we're focusing on creative elements like AR selfies and animation as well as helping you improve the core user experience in your apps.

Creating AR Selfies

Example of 3D face mesh application

ARCore's new Augmented Faces API (available on the front-facing camera) offers a high quality, 468-point 3D mesh that lets users attach fun effects to their faces. From animated masks, glasses, and virtual hats to skin retouching, the mesh provides coordinates and region specific anchors that make it possible to add these delightful effects.

You can get started in Unity or Sceneform by creating an ARCore session with the "front-facing camera" and Augmented Faces "mesh" mode enabled. Note that other AR features such as plane detection aren't currently available when using the front-facing camera. AugmentedFace extends Trackable, so faces are detected and updated just like planes, Augmented Images, and other trackables.

// Create an ARCore session that supports Augmented Faces for use in Sceneform.
public Session createAugmentedFacesSession(Activity activity) throws UnavailableException {
  // Use the front-facing (selfie) camera.
  Session session = new Session(activity, EnumSet.of(Session.Feature.FRONT_CAMERA));
  // Enable Augmented Faces.
  Config config = session.getConfig();
  config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
  session.configure(config);
  return session;
}
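
Because AugmentedFace extends Trackable, detected faces can be consumed each frame like any other trackable. The following is a rough sketch under that assumption (method and type names taken from the ARCore Java API; the rendering logic is left out):

// Sketch: iterate detected faces and read the mesh's center and region poses.
void updateFaceEffects(Session session) {
  for (AugmentedFace face : session.getAllTrackables(AugmentedFace.class)) {
    if (face.getTrackingState() != TrackingState.TRACKING) {
      continue;
    }
    // Center pose of the face mesh, plus region-specific anchors such as the nose tip.
    Pose centerPose = face.getCenterPose();
    Pose noseTipPose = face.getRegionPose(AugmentedFace.RegionType.NOSE_TIP);
    // Attach renderables or effects to these poses here.
  }
}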

Animating characters in your Sceneform AR apps

Another way version 1.7 expands the AR creative canvas is by letting your objects dance, jump, spin and move around with support for animations in Sceneform. To start an animation, initialize a ModelAnimator (an extension of the existing Android animation support) with animation data from your ModelRenderable.

void startDancing(ModelRenderable andyRenderable) {
  AnimationData data = andyRenderable.getAnimationData("andy_dancing");
  animator = new ModelAnimator(data, andyRenderable);
  animator.start();
}

Solving common AR UX challenges in Unity with new UI components

In ARCore version 1.7 we also focused on helping you improve your user experience with a simplified workflow. We've integrated "ARCore Elements" -- a set of common AR UI components that have been validated with user testing -- into the ARCore SDK for Unity. You can use ARCore Elements to insert AR interactive patterns in your apps without having to reinvent the wheel. ARCore Elements also makes it easier to follow Google's recommended AR UX guidelines.

ARCore Elements includes two AR UI components that are especially useful:

  • Plane Finding - streamlining the key steps involved in detecting a surface
  • Object Manipulation - using intuitive gestures to rotate, elevate, move, and resize virtual objects

We plan to add more to ARCore Elements over time. You can download the ARCore Elements app available in the Google Play Store to learn more.

Improving the User Experience with Shared Camera Access

ARCore version 1.7 also includes UX enhancements for the smartphone camera -- specifically, the experience of switching in and out of AR mode. Shared Camera access in the ARCore SDK for Java lets users pause an AR experience, access the camera, and jump back in. This can be particularly helpful if users want to take a picture of the action in your app.
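
As a rough illustration (class and method names assumed from the ARCore Java SDK, so treat this as a sketch rather than the definitive API), a shared-camera session is created with the SHARED_CAMERA feature, and the camera ID it exposes can then be opened through Android's Camera2 API while AR is paused:

// Sketch: create a session that shares the device camera with the app.
public Session createSharedCameraSession(Activity activity) throws UnavailableException {
  Session session = new Session(activity, EnumSet.of(Session.Feature.SHARED_CAMERA));
  // Handle for coordinating access between ARCore and the app's own camera use.
  SharedCamera sharedCamera = session.getSharedCamera();
  // Camera ID to open with the Android Camera2 API when ARCore is paused.
  String cameraId = session.getCameraConfig().getCameraId();
  return session;
}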

More details are available in the Shared Camera developer documentation and Java sample.

Learn more and get started

For AR experiences to capture users' imaginations they need to be both immersive and easily accessible. With tools for adding AR selfies, animation, and UI enhancements, ARCore version 1.7 can help with both these objectives.

You can learn more about these new updates on our ARCore developer website.

 

5th February 2019 |

Working together to improve user security

Posted by Adam Dawes

We're always looking for ways to improve user security both on Google and on your applications. That's why we've long invested in Google Sign In, so that users can extend all the security protections of their Google Account to your app.

Historically, there has been a critical shortcoming of all single sign in solutions. In the rare case that a user's Google Account falls prey to an attacker, the attacker can also gain and maintain access to your app via Google Sign In. That's why we're super excited to open a new feature of Google Sign In, Cross Account Protection (CAP), to developers.

CAP is a simple protocol that enables two apps to send and receive security notifications about a common user. It supports a standardized set of events including: account hijacked, account disabled, when Google terminates all the user's sessions, and when we lock an account to force the user to change their password. We also have a signal if we detect that an account could be causing abuse on your system.

CAP is built on several newly created Internet Standards, Risk and Incident Sharing and Coordination (RISC) and Security Events, that we developed with the community at the OpenID Foundation and IETF. This means that you should only have to build one implementation to be able to receive signals from multiple identity providers.

Google is now ready to send security events to your app for any user who has previously logged in using Google Sign In. If you've already integrated Google Sign In into your service, you can start receiving signals in just three easy steps:

  1. Enable the RISC API and create a Service Account on the project(s) where you set up Google Sign In. If you have clients set up in different projects for your web, Android and iOS apps, you'll have to repeat this for each project.
  2. Build a RISC Receiver. This means opening a REST API on your service where Google will be able to POST security event tokens (a minimal receiver sketch follows this list). When you receive these events, you'll need to validate they come from Google and then act on them. This may mean terminating your user's existing sessions, disabling the account, finding an alternate login mechanism or looking for other suspicious activity with the user's account.
  3. Use the Service Account to configure Google's pubsub with the location of your API. You should then start receiving signals, and you can start testing and then roll out this important new protection.
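
To illustrate step 2 only, here is a minimal, hypothetical Java sketch of a RISC receiver. It simply accepts POSTed security event tokens; a real implementation must run behind HTTPS and validate each token's signature, issuer, and audience against Google's published RISC configuration before acting on it, as described in the developer docs.

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class RiscReceiver {
  public static void main(String[] args) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
    server.createContext("/risc/events", exchange -> {
      try (InputStream body = exchange.getRequestBody()) {
        // The body is a security event token (a JWT). Verify it before acting on it.
        String securityEventToken = new String(body.readAllBytes(), StandardCharsets.UTF_8);
        System.out.println("Received security event token: " + securityEventToken);
        // TODO: validate the token, then e.g. end sessions, disable the account, or flag for review.
      }
      exchange.sendResponseHeaders(202, -1); // acknowledge receipt, no response body
      exchange.close();
    });
    server.start();
  }
}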

If you already use Google Sign In, please get started by checking out our developer docs. If you don't use Google Sign In, CAP is another great reason to do so to improve the security of your users. Developers using Firebase Authentication or Google Cloud Identity for Customers & Partners have CAP configured automatically - there's nothing you need to do. You can also post questions on Stack Overflow with the #SecEvents tag.

 

1st February 2019 |

NoSQL for the serverless age: Announcing Cloud Firestore general availability and updates

Posted by Amit Ganesh, VP Engineering & Dan McGrath, Product Manager

As modern application development moves away from managing infrastructure and toward a serverless future, we're pleased to announce the general availability of Cloud Firestore, our serverless, NoSQL document database. We're also making it available in 10 new locations to complement the existing three, announcing a significant price reduction for regional instances, and enabling integration with Stackdriver for monitoring.

Cloud Firestore is a fully managed, cloud-native database that makes it simple to store, sync, and query data for web, mobile, and IoT applications. It focuses on providing a great developer experience and simplifying app development with live synchronization, offline support, and ACID transactions across hundreds of documents and collections. Cloud Firestore is integrated with both Google Cloud Platform (GCP) and Firebase, Google's mobile development platform. You can learn more about how Cloud Firestore works with Firebase here. With Cloud Firestore, you can build applications that move swiftly into production, thanks to flexible database security rules, real-time capabilities, and a completely hands-off auto-scaling infrastructure.

Cloud Firestore does more than just core database tasks. It's designed to be a complete data backend that handles security and authorization, infrastructure, edge data storage, and synchronization. Identity and Access Management (IAM) and Firebase Auth are built in to help make sure your application and its data remain secure. Tight integration with Cloud Functions, Cloud Storage, and Firebase's SDK accelerates and simplifies building end-to-end serverless applications. You can also easily export data into BigQuery for powerful analysis, post-processing of data, and machine learning.

Building with Cloud Firestore means your app can seamlessly transition from online to offline and back at the edge of connectivity. This helps lead to simpler code and fewer errors. You can serve rich user experiences and push data updates to more than a million concurrent clients, all without having to set up and maintain infrastructure. Cloud Firestore's strong consistency guarantee helps to minimize application code complexity and reduces bugs. A client-side application can even talk directly to the database, because enterprise-grade security is built right in. Unlike most other NoSQL databases, Cloud Firestore supports modifying up to 500 collections and documents in a single transaction while still automatically scaling to exactly match your workload.
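
To make that concrete, here is a small sketch using the Cloud Firestore Java server client; the collection and field names are invented for illustration, and credentials are assumed to come from Application Default Credentials. It writes one document and then listens for real-time updates to the collection:

import com.google.cloud.firestore.DocumentSnapshot;
import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.FirestoreException;
import com.google.cloud.firestore.FirestoreOptions;
import com.google.cloud.firestore.QuerySnapshot;
import java.util.Map;

public class FirestoreSketch {
  public static void main(String[] args) throws Exception {
    Firestore db = FirestoreOptions.getDefaultInstance().getService();

    // Write (or overwrite) a document; collection and fields are illustrative.
    db.collection("scooters").document("scooter-42")
        .set(Map.of("battery", 87, "city", "San Francisco"))
        .get(); // block until the write is committed

    // Listen for real-time updates to the whole collection.
    db.collection("scooters").addSnapshotListener((QuerySnapshot snapshots, FirestoreException error) -> {
      if (error != null) {
        System.err.println("Listen failed: " + error);
        return;
      }
      for (DocumentSnapshot doc : snapshots.getDocuments()) {
        System.out.println(doc.getId() + " => " + doc.getData());
      }
    });

    // Keep the process alive briefly so listener callbacks can arrive.
    Thread.sleep(60_000);
  }
}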

What's new with Cloud Firestore

  • New regional instance pricing. This new pricing takes effect on March 3, 2019 for most regional instances, and is as low as 50% of multi-region instance prices.
    • Data in regional instances is replicated across multiple zones within a region. This is optimized for lower cost and lower write latency. We recommend multi-region instances when you want to maximize the availability and durability of your database.
  • SLA now available. You can now take advantage of Cloud Firestore's SLA: 99.999% availability for multi-region instances and 99.99% availability for regional instances.
  • New locations available. There are 10 new locations for Cloud Firestore:
    • Multi-region
      • Europe (eur3)
    • North America (Regional)
      • Los Angeles (us-west2)
      • Montréal (northamerica-northeast1)
      • Northern Virginia (us-east4)
    • South America (Regional)
      • São Paulo (southamerica-east1)
    • Europe (Regional)
      • London (europe-west2)
    • Asia (Regional)
      • Mumbai (asia-south1)
      • Hong Kong (asia-east2)
      • Tokyo (asia-northeast1)
    • Australia (Regional)
      • Sydney (australia-southeast1)

Cloud Firestore is now available in 13 regions.

  • Stackdriver integration (in beta). You can now monitor Cloud Firestore read, write and delete operations in near-real time with Stackdriver.
  • More features coming soon. We're working on adding some of the most requested features to Cloud Firestore from our developer community, such as querying for documents across collections and incrementing database values without needing a transaction.

As the next generation of Cloud Datastore, Cloud Firestore is compatible with all Cloud Datastore APIs and client libraries. Existing Cloud Datastore users will be live-upgraded to Cloud Firestore automatically later in 2019. You can learn more about this upgrade here.

Adding flexibility and scalability across industries

Cloud Firestore is already changing the way companies build apps in media, IoT, mobility, digital agencies, real estate, and many others. The unifying themes among these workloads include: the need for mobility even when connectivity lapses, scalability for many users, and the ability to move quickly from prototype to production. Here are a few of the stories we've heard from Cloud Firestore users.

When opportunity strikes...

In the highly competitive world of shared, on-demand personal mobility via cars, bikes, and scooters, the ability to deliver a differentiated user experience, iterate rapidly, and scale are critical. The prize is huge. Skip provides a scooter-sharing system where shipping fast can have a big impact. Mike Wadhera, CTO and Co-founder, says, "Cloud Firestore has enabled our engineering and product teams to ship at the clock-speed of a startup while leveraging Google-scale infrastructure. We're delighted to see continued investment in Firebase and the broader GCP platform."

Another Cloud Firestore user, digital consultancy The Nerdery, has to deliver high-quality results in a short period of time, often needing to integrate with existing third-party data sources. They can't build up and tear down complicated, expensive infrastructure for every client app they create. "Cloud Firestore was a great fit for the web and mobile applications we built because it required a solution to keep 40,000-plus users apprised of real-time data updates," says Jansen Price, Principal Software Architect. "The reliability and speed of Cloud Firestore coupled with its real-time capabilities allowed us to deliver a great product for the Google Cloud Next conferences."

Reliable information delivery

Incident response company Now IMS uses real-time data to keep citizens safe in crowded places, where cell service can get spotty when demand is high. "As an incident management company, real-time and offline capabilities are paramount to our customers," says John Rodkey, Co-founder. "Cloud Firestore, along with the Firebase Javascript SDK, provides us with these capabilities out of the box. This new 100% serverless architecture on Google Cloud enables us to focus on rapid application development to meet our customers' needs instead of worrying about infrastructure or server management like with our previous cloud."

Regardless of the app, users want the latest information right away, without having to click refresh. The QuintoAndar mobile application connects tenants and landlords in Brazil for easier apartment rentals. "Being able to deliver constantly changing information to our customers allows us to provide a truly engaging experience. Cloud Firestore enables us to do this without additional infrastructure and allows us to focus on the core challenges of our business," says Guilherme Salerno, Engineering Manager at QuintoAndar.

Real-time, responsive apps, happy users

Famed broadsheet and media company The Telegraph uses Cloud Firestore so registered users can easily discover and engage with relevant content. The Telegraph wanted to make the user experience better without having to become infrastructure experts in serving and managing data to millions of concurrent connections. "Cloud Firestore allowed us to build a real-time personalized news feed, keeping readers informed with synchronized content state across all of their devices," says Alex Mansfield-Scaddan, Solution Architect. "It allowed The Telegraph engineering teams to focus on improving engagement with our customers, rather than becoming real-time database and infrastructure experts."

On the other side of the Atlantic, The New York Times used Cloud Firestore to build a feature in The Times' mobile app to send push notifications updated in real time for the 2018 Winter Olympics. In previous approaches to this feature, scaling had been a challenge. The team needed to track each reader's history of interactions in order to provide tailored content for particular events or sports. Cloud Firestore allowed them to query data dynamically, then send the real-time updates to readers. The team was able to send more targeted content faster.

Delivering powerful edge storage for IoT devices

Athlete testing technology company Hawkin Dynamics was an early, pre-beta adopter of Cloud Firestore. Their pressure pads are used by many professional sports teams to measure and track athlete performance. In the fast-paced, high-stakes world of professional sports, athletes can't wait around for devices to connect or results to calculate. They demand instant answers even if the WiFi is temporarily down. Hawkin Dynamics uses Cloud Firestore to bring real-time data to athletes through their app dashboard, shown below.

"Our core mission at Hawkin Dynamics is to help coaches make informed decisions regarding their athletes through the use of actionable data. With real-time updates, our users can get the data they need to adjust an athlete's training on a moment-by-moment basis," says Chris Wales, CTO. "By utilizing the powerful querying ability of Cloud Firestore, we can provide them the insights they need to evaluate the overall efficacy of their programs. The close integrations with Cloud Functions and the other Firebase products have allowed us to constantly improve on our product and stay responsive to our customers' needs. In an industry that is rapidly changing, the flexibility afforded to us by Cloud Firestore in extending our applications has allowed us to stay ahead of the game."

Getting started with Cloud Firestore

We've heard from many of you that Cloud Firestore is helping solve some of your most timely development challenges by simplifying real-time data and data synchronization, eliminating server-side code, and providing flexible yet secure database authentication rules. This reflects the state of the cloud app market, where developers are exploring lots of options to help them build better and faster while also providing modern user experiences. This glance at Stack Overflow questions gives a good picture of some of these trends, where Cloud Firestore is a hot topic among cloud databases.

Source: StackExchange

We've seen close to a million Cloud Firestore databases created since its beta launch. The platform is designed to serve databases ranging in size from kilobytes to multiple petabytes of data. Even a single application running on Cloud Firestore is delivering more than 1 million real-time updates per second to users. These apps are just the beginning. To learn more about serverless application development, take a look through the archive of the recent application development digital conference.

We'd love to hear from you, and we can't wait to see what you build next. Try Cloud Firestore today for your apps.

 

25th January 2019 |

Google opens new innovation space in San Francisco for the developer community

Posted by Jeremy Neuner, Head of Launchpad San Francisco

Google's Developer Relations team is opening a new innovation space at 543 Howard St. in San Francisco. By working with more than a million developers and startups, we've found that something unique happens when we interact with our communities face-to-face. Talks, meetups, workshops, sprints, bootcamps, and social events not only provide opportunities for Googlers to authentically connect with users but also build trust and credibility as we form connections on a more personal level.

The space will be the US home of Launchpad, Google's startup acceleration engine. Founded in 2016, the Launchpad Accelerator has seen 13 cohorts graduate across 5 continents, reaching 241 startups. In 2019, the program will bring together top Google talent with startups from around the world who are working on AI-enabled solutions to problems in financial technology, healthcare, and social good.

In addition to its focus on startups, the Google innovation space will offer programming designed specifically for developers and designers throughout the year. For example, in tandem with the rapid growth of Google Cloud Platform, we will host hands-on sessions on Kubernetes, big data and AI architectures with Google engineers and industry experts.

Finally, we want the space to serve as a hub for industry-wide Developer Relations diversity and inclusion efforts, and we will partner with groups such as Manos Accelerator and dev/Mission to bring the latest technologies to underserved groups.

We designed the space with a single credo in mind, "We must continually be jumping off cliffs and developing our wings on the way down." The flexible design of the space ensures our community has a place to learn, experiment, and grow.

For more information about our new innovation space, click here.


 

17th January 2019 |

Scratch 3.0's new programming blocks, built on Blockly

Posted by Erik Pasternak, Blockly team Manager

Coding is a powerful tool for creating, expressing, and understanding ideas. That's why our goal is to make coding available to kids around the world. It's also why, in late 2015, we decided to collaborate with the MIT Media Lab on the redesign of the programming blocks for their newest version of Scratch.

Left: Scratch 2.0's code rendering. Right: Scratch 3.0's new code rendering.

Scratch is a block-based programming language used by millions of kids worldwide to create and share animations, stories, and games. We've always been inspired by Scratch, and CS First, our CS education program for students, provides lessons for educators to teach coding using Scratch.

But Scratch 2.0 was built on Flash, and by 2015, it became clear that the code needed a JavaScript rewrite. This would be an enormous task, so having good code libraries would be key.

And this is where the Blockly team at Google came in. Blockly is a library that makes it easy for developers to add block programming to their apps. By 2015, many of the web's visual coding activities were built on Blockly, through groups like Code.org, App Inventor, and MakeCode. Today, Blockly is used by thousands of developers to build apps that teach kids how to code.

One of our Product Managers, Champika (who earned her master's degree in Scratch's lab at MIT), believed Blockly could be a great fit for Scratch 3.0. She brought together the Scratch and Google Blockly teams for informal discussions. It was clear the teams had shared goals and values and could learn a lot from one another. Blockly brought a flexible, powerful library to the table, and the Scratch team brought decades of experience designing for kids.

Champika and the Blockly team together at I/O Youth, 2016.

Those early meetings kicked off three years of fun (and hard work) that led to the new blocks you see in Scratch 3.0. The two teams regularly traveled across the country to work together in person, trade puns, and pore over designs. Scratch's feedback and design drove lots of new features in Blockly, and Blockly made those features available to all developers.

On January 2nd, Scratch 3.0 launched with all of the code open source and publicly developed. At Google, we created two coding activities that showcase this code base. The first was Code a Snowflake, which was used by millions of kids as part of Google's Santa Tracker. The second was a Google Doodle that celebrated 50 years of kids coding and gave millions of people their first experience with block programming. As an added bonus, we worked with Scratch to include an extension for Google Translate in Scratch 3.0.

With Scratch 3.0, even more people are programming with blocks built on Blockly. We're excited to see what else you, our developers, will build on Blockly.

 

1st March 2019 |

Google+ APIs shutting down March 7, 2019

Update: This posting was updated on February 28, 2019 with important, recent changes to aspects of the shutdown covering Google+ Sign-in, Google+ APIs, and Google+ OAuth scope requests.

On March 7, 2019, we are shutting down the legacy Google+ APIs. This has been a progressive shutdown where calls to affected APIs began intermittently failing on January 28, 2019.

Developers should have received one or more emails listing recently used Google+ API methods in their projects. Whether or not an email was received, we strongly encourage developers to search for and remove any affected dependencies on Google+ APIs from their applications.

The most commonly used Google+ legacy APIs that are being shut down include:

Note that we have built new implementations for several people.get and people.getOpenIdConnect APIs that are documented as belonging to the legacy Google+ APIs, including those listed above. The new implementations will only return basic fields necessary for sign-in functionality. More information can be found below.

As previously announced, as part of these changes:

To help mitigate the impact of the shutdown, we have made the following changes to aspects of the Google+ APIs shutdown.

  • If you would like to test your application with the shutdown behavior before March 7, you may do so by joining this Google Group with a test user and then trying your application as that user. Be sure to test only with fake users and fake data. Do not test with your production user accounts or data. Note that you will see the new behavior for your fake users, regardless of which application you are testing with.
  • We have created a new implementation of several people.get and people.getOpenIdConnect APIs that will only return basic fields necessary for sign-in functionality such as name and email address, if authorized by the user. The new implementation only allows an app to retrieve the profile of the signed-in user, and can return only basic profile fields necessary for user sign-in functionality.
  • While we still recommend that developers migrate to alternative APIs such as Google Sign-in and Google People API, for cases where developers are unable to move over before March 7th, existing calls made to the legacy Google+ people.get and people.getOpenIdConnect APIs will automatically be served by this new implementation at the same HTTP endpoints as before.
  • Likewise, requests for some OAuth scopes will no longer fail as previously communicated. In most cases, scope requests such as those used for sign-in and usage not related to Google+ will no longer return an error. However, other scopes that authorized access to Google+ data, such as Circle and Stream information, will still not be granted. See the full outline of scope behavior here.
  • While we strongly encourage developers to migrate to the more comprehensive Google Sign-in authentication system, for cases where developers are unable to move over before March 7th, scopes required for Google+ sign-in will now be remapped to existing Google Sign-in (not Google+) scopes, which should allow these legacy applications to continue to use Google+ Sign-in until they can migrate.
  • We are working with third party developers to help manage the transition and may implement additional mitigations in limited cases where the issue would impact hundreds of thousands of users. For example, we may allow temporary access to legacy Google+ APIs for broken, non-social apps that are using the API primarily for sign-in purposes.

Developers should still remove any dependencies on Google+ APIs from their applications as failure to do so will most likely break their applications. Developers may consider alternative APIs such as Google Sign-in and Google People API for their needs.
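
For example, an app that only used the Google+ APIs to fetch the signed-in user's basic profile could call the People API's people/me endpoint instead. The sketch below is only illustrative—it assumes you have already obtained an OAuth access token with an appropriate profile/email scope (accessToken is a placeholder):

// Illustrative only: fetch the signed-in user's basic profile from the
// People API rather than the legacy Google+ people.get endpoint.
// `accessToken` stands in for a token obtained via Google Sign-in.
NSURL *url = [NSURL URLWithString:
    @"https://people.googleapis.com/v1/people/me?personFields=names,emailAddresses"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
[request setValue:[NSString stringWithFormat:@"Bearer %@", accessToken]
    forHTTPHeaderField:@"Authorization"];
[[[NSURLSession sharedSession]
    dataTaskWithRequest:request
      completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        // Parse the returned person resource (names, emailAddresses) here.
      }] resume];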

Google+ integrations for web or mobile apps are also being shut down. Please see this additional notice.

While we're sunsetting Google+ for consumers, we're investing in Google+ for enterprise organizations. They can expect a new look and new features -- more information is available in our blog post.

 

19th December 2018 |

Tasty: A Recipe for Success on the Google Home Hub

Posted by Julia Chen Davidson, Head of Partner Marketing, Google Home

We recently launched the Google Home Hub, the first ever Made by Google smart speaker with a screen, and we knew that a lot of you would want to put these helpful devices in the kitchen—perhaps the most productive room in the house. With the Google Assistant built into the Home Hub, you can use your voice—or your hands—to multitask during mealtime. You can manage your shopping list, map out your family calendar, create reminders for the week, and even help your kids out with their homework.

To make the Google Assistant on the Home Hub even more helpful in the kitchen, we partnered with BuzzFeed's Tasty, the largest social food network in the world, to bring 2,000 of their step-by-step tutorials to the Assistant, adding to the tens of thousands of recipes already available. With Tasty on the Home Hub, you can search for recipes based on the ingredients you have in the pantry, your dietary restrictions, cuisine preferences and more. And once you find the right recipe, Tasty will walk you through each recipe with instructional videos and step-by-step guidance.

Tasty's Action shows off how brands can combine voice with visuals to create next-generation experiences for our smart homes. We asked Sami Simon, Product Manager for BuzzFeed Media Brands, a few questions about building for the Google Assistant and we hope you'll find some inspiration for how you can combine voice and touch for the new category of devices in our homes.

What additive value do you see for your users by building an Action for the Google Assistant that's different from an app or YouTube video series, for example?

We all know that feeling when you have your hands in a bowl of ground meat and you realize you have to tap the app to go to the next step or unpause the YouTube video you were watching (I can attest to random food smudges all over my phone and computer for this very reason!).


With our Action, people can use the Google Assistant to get a helping hand while cooking, navigating a Tasty recipe just by using their voice. Without having to break the flow of rolling out dough or chopping an onion, we can now guide people on what to expect next in their cooking process. What's more, with the Google Home Hub, which has the added bonus of a display screen, home chefs can also quickly glance at the video instructions for extra guidance.

The Google Home Hub gives users all of Google, in their home, at a glance. What advantages do you see for Tasty in being a part of voice-enabled devices in the home?

The Assistant on the Google Home Hub enhances the Tasty experience in the kitchen, making it easier than ever for home chefs to cook Tasty recipes, either by utilizing voice commands or the screen display. Tasty is already the centerpiece of the kitchen, and with the Google Home Hub integration, we have the opportunity to provide additional value to our audience. For instance, we've introduced features like Clean Out My Fridge where users share their available ingredients and Tasty recommends what to cook. We're so excited that we can seamlessly provide inspiration and coaching to all home chefs and make cooking even more accessible.

How do you think these new devices will shape the future of digital assistance? How did you think through when to use voice and visual components in your Action?

In our day-to-day lives, we don't necessarily think critically about the best way to receive information in a given instance, but this project challenged us to create the optimal cooking experience. Ultimately we designed the Action to be voice-first to harness the power of the Assistant.

We then layered in the supplemental visuals to make the cooking experience even easier and make searching our recipe catalogue more fun. For instance, if you're busy stir frying, all the pertinent information would be read aloud to you, and if you wanted to quickly check what this might look like, we also provide the visual as additional guidance.

Can you elaborate on 1-3 key findings that your team discovered while testing the Action for the Home Hub?

Tasty's lens on cooking is to provide a fun and accessible experience in the kitchen, which we wanted to have come across with the Action. We developed a personality profile for Tasty with the mission of connecting with chefs of all levels, which served as a guide for making decisions about the Action. For instance, once we defined the voice of Tasty, we knew how to keep the dialogue conversational in order to better resonate with our audience.


Additionally, while most people have had some experience with digital assistants, their knowledge of how assistants work and ways that they use them vary wildly from person to person. When we did user testing, we realized that unlike designing UX for a website, there weren't as many common design patterns we could rely on. Keeping this in mind helped us to continuously ensure that our user paths were as clear as possible and that we always provided users support if they got lost or confused.

What are you most excited about for the future of digital assistance and branded experiences there? Where do you foresee this ecosystem going?

I'm really excited for people to discover more use cases we haven't even dreamed of yet. We've thoroughly explored practical applications of the Assistant, so I'm eager to see how we can develop more creative Actions and evolve how we think about digital assistants. As the Assistant will only get smarter and better at predicting people's behavior, I'm looking forward to seeing the growth of helpful and innovative Actions, and applying those to Tasty's mission to make cooking even more accessible.

What's next for Tasty and your Action? What additional opportunities do you foresee for your brand in digital assistance or conversational interfaces?

We are proud of how our Action leverages the Google Assistant to enhance the cooking experience for our audience, and excited for how we can evolve the feature set in the future. The Tasty brand has evolved its videos beyond our popular top-down recipe format. It would be an awesome opportunity to expand our Action to incorporate the full breadth of the Tasty brand, such as our creative long-form programming or extended cooking tutorials, so we can continue helping people feel more comfortable in the kitchen.

To check out Tasty's Action yourself, just say "Hey Google, ask Tasty what I should make for dinner" on your Home Hub or Smart Display. And to learn more about the solutions we have for businesses, take a look at our Assistant Business site to get started building for the Google Assistant.

If you don't have the resources to build in-house, you can also work with our talented partners that have already built Actions for all types of use cases. To make it even easier to find the perfect partner, we recently launched a new website that shows these agencies on a map with more details about how to get in touch. And if you're an agency already building Actions, we'd love to hear from you. Just reach out here and we'll see if we can offer some help along the way!

 

18th December 2018 |

Building the Shape System for Material Design

Posted by Yarden Eitan, Software Engineer

I am Yarden, an iOS engineer for Material Design—Google's open-source system for designing and building excellent user interfaces. I help build and maintain our iOS components, but I'm also the engineering lead for Material's shape system.

Shape: It's kind of a big deal

You can't have a UI without shape. Cards, buttons, sheets, text fields—and just about everything else you see on a screen—are often displayed within some kind of "surface" or "container." For most of computing's history, that's meant rectangles. Lots of rectangles.

But the Material team knew there was potential in giving designers and developers the ability to systematically apply unique shapes across all of our Material Design UI components. Rounded corners! Angular cuts! For designers, this means being able to create beautiful interfaces that are even better at directing attention, expressing brand, and supporting interactions. For developers, having consistent shape support across all major platforms means we can easily apply and customize shape across apps.

My role as engineering lead was truly exciting—I got to collaborate with our design leads to scope the project and find the best way to create this complex new system. Compared to systems for typography and color (which have clear structures and precedents like the web's H1-H6 type hierarchy, or the idea of primary/secondary colors), shape is the Wild West. It's a relatively unexplored terrain with rules and best practices still waiting to be defined. To meet this challenge, I got to work with all the different Material Design engineering platforms to identify possible blockers, scope the effort, and build it!

When building out the system, we had two high level goals:

  • Adding shape support for our components—giving developers the ability to customize the shape of buttons, cards, chips, sheets, etc.
  • Defining and developing a good way to theme our components using shape—so developers could set their product's shape story once and have it cascade through their app, instead of needing to customize each component individually.

From an engineering perspective, adding shape support held the bulk of the work and complexities, whereas theming had more design-driven challenges. In this post, I'll mostly focus on the engineering work and how we added shape support to our components.

Here's a rundown of what I'll cover here:

  • Scoping out the shape support functionality
  • Building shape support consistently across platforms is hard
  • Implementing shape support on iOS
    • Shape core implementation
    • Adding shape support for components
  • Applying a custom shape on your component
  • Final words

Scoping out the shape support functionality

Our first task was to scope out two questions: 1) What is shape support? and 2) What functionality should it provide? Initially our goals were somewhat ambitious. The original proposal suggested an API to customize components by edges and corners, with full flexibility on how these edges and corners look. We even thought about receiving a custom .png file with a path and converting it to a shaped component in each respective platform.

We soon found that having no restrictions would make it extremely hard to define such a system. More flexibility doesn't necessarily mean a better result. For example, it'd be quite a feat to define a flexible and easy API that lets you make a snake-shaped FAB and train-shaped cards. But those elements would almost certainly contradict the clear and straightforward approach championed by Material Design guidance.

This truck-shaped FAB is a definite "don't" in Material Design guidance.

We had to weigh the expense of time and resources against the added value for each functionality we could provide.

To solve these open questions, we decided to conduct a full weeklong workshop including team members from design, engineering, and tooling. It proved to be extremely effective. Even though there were a lot of inputs, we were able to narrow down which features were feasible and most impactful for our users. Our final proposal was to make the initial system support three types of shapes: square, rounded, and cut. These shapes can be achieved through an API customizing a component's corners.

Building shape support consistently across platforms (it's hard)

Anyone who's built for multiple platforms knows that consistency is key. But during our workshop, we realized how difficult it would be to provide the exact same functionality for all our platforms: Android, Flutter, iOS, and the web. Our biggest blocker? Getting cut corners to work on the web.

Unlike sharp or rounded corners, cut corners do not have a built-in native solution on the web.

Our web team looked at a range of solutions—we even considered the idea of adding background-colored squares over each corner to mask it and make it appear cut. Of course, the drawbacks there are obvious: Shadows are masked and the squares themselves need to act as chameleons when the background isn't static or has more than one color.

We then investigated the Houdini (paint worklet) API along with a polyfill, which initially seemed like a viable solution that would actually work. However, adding this support would require additional effort:

  • Our UI components use shadows to display elevation and the new canvas shadows look different than the native CSS box-shadow, which would require us to reimplement shadows throughout our system.
  • Our UI components also display a visual ripple effect when being tapped—to show interactivity. For us to continue using ripple in the paint worklet, we would need to reimplement it, as there is no cross-browser masking solution that doesn't incur significant performance hits.

Even if we'd decided to add more engineering effort and go down the Houdini path, the question of value vs cost still remained, especially with Houdini still being "not ready" across the web ecosystem.

Based on our research and weighing the cost of the effort, we ultimately decided to move forward without supporting cut corners for web UIs (at least for now). But the good news was that we had specced out the requirements and could start building!

Implementing shape support on iOS

After narrowing down the feature set, it was up to the engineers of each platform to go and start building. I helped build out shape support for iOS. Here's how we did it:

Core implementation

In iOS, the basic building block of user interfaces is the UIView class. Each UIView is backed by a CALayer instance that manages and displays its visual content. By modifying the CALayer's properties, you can change various aspects of its visual appearance, such as color, border, shadow, and geometry.

When we refer to a CALayer's geometry, we always talk about it in the form of a rectangle.

Its frame is built from an (x, y) pair for position and a (width, height) pair for size. The main API for manipulating the layer's rectangular shape is its cornerRadius property: setting a radius value rounds all four corners by that value. The notion of a rectangular backing and an easy API for rounded corners exists pretty much across the board for Android, Flutter, and the web. But things like cut corners and custom edges are usually not as straightforward. To be able to offer these features, we built a shape library that provides a generator for creating CALayers with specific, well-defined shape attributes.
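
For comparison, here's a minimal sketch of that built-in API—uniform rounding is the only per-layer shape control UIKit offers out of the box:

// Built-in UIKit behavior: cornerRadius rounds all four corners uniformly.
UIView *card = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
card.layer.cornerRadius = 8;      // every corner rounded by 8 points
card.layer.masksToBounds = YES;   // clip content to the rounded rectangle
// There is no comparable one-liner for cut corners or per-corner control,
// which is what the shape library described below adds.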

Thankfully, Apple provides the CAShapeLayer class, which subclasses CALayer and exposes a path property. Assigning a custom CGPath to this property allows us to create any shape we want.

With the path capabilities in mind, we then built a class that leverages the CGPath APIs and provides properties that our users will care about when shaping their components. Here is the API:

/**
An MDCShapeGenerating for creating shaped rectangular CGPaths.

By default MDCRectangleShapeGenerator creates rectangular CGPaths.
Set the corner and edge treatments to shape parts of the generated path.
*/
@interface MDCRectangleShapeGenerator : NSObject <MDCShapeGenerating>

/**
The corner treatments to apply to each corner.
*/
@property(nonatomic, strong) MDCCornerTreatment *topLeftCorner;
@property(nonatomic, strong) MDCCornerTreatment *topRightCorner;
@property(nonatomic, strong) MDCCornerTreatment *bottomLeftCorner;
@property(nonatomic, strong) MDCCornerTreatment *bottomRightCorner;

/**
The offsets to apply to each corner.
*/
@property(nonatomic, assign) CGPoint topLeftCornerOffset;
@property(nonatomic, assign) CGPoint topRightCornerOffset;
@property(nonatomic, assign) CGPoint bottomLeftCornerOffset;
@property(nonatomic, assign) CGPoint bottomRightCornerOffset;

/**
The edge treatments to apply to each edge.
*/
@property(nonatomic, strong) MDCEdgeTreatment *topEdge;
@property(nonatomic, strong) MDCEdgeTreatment *rightEdge;
@property(nonatomic, strong) MDCEdgeTreatment *bottomEdge;
@property(nonatomic, strong) MDCEdgeTreatment *leftEdge;

/**
Convenience to set all corners to the same MDCCornerTreatment instance.
*/
- (void)setCorners:(MDCCornerTreatment *)cornerShape;

/**
Convenience to set all edge treatments to the same MDCEdgeTreatment instance.
*/
- (void)setEdges:(MDCEdgeTreatment *)edgeShape;

By providing such an API, a user can customize only a corner or an edge, and the MDCRectangleShapeGenerator class above will create a shape with those properties in mind. For this initial implementation of the shape system, we used only the corner properties.

As you can see, the corners themselves are instances of the class MDCCornerTreatment, which encapsulates three pieces of important information:

  • The value of the corner (each specific corner type receives a value).
  • Whether the value provided is a percentage of the height of the surface or an absolute value.
  • A method that returns a path generator based on the given value and corner type. This will provide MDCRectangleShapeGenerator a way to receive the right path for the corner, which it can then append to the overall path of the shape.

To make things even simpler, we didn't want our users to have to build a custom corner by calculating the corner path themselves, so we provided three convenience subclasses of MDCCornerTreatment that generate rounded, curved, and cut corners.

As an example, our cut corner treatment receives a value called a "cut"—which defines the angle and size of the cut based on the number of UI points starting from the edge of the corner, and going an equal distance on the X axis and the Y axis. If the shape is a square with a size of 100x100, and we have all its corners set with MDCCutCornerTreatment and a cut value of 50, then the final result will be a diamond with a size of 50x50.
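
As a concrete sketch of that example, using the setCorners: convenience from the API above together with the MDCCutCornerTreatment initializer that appears later in this post:

// Cutting all four corners of a 100x100 surface by 50 points: each cut removes
// 50 points along both axes, the cuts meet, and the remaining shape is a
// 50x50 diamond.
MDCRectangleShapeGenerator *diamondGenerator = [[MDCRectangleShapeGenerator alloc] init];
[diamondGenerator setCorners:[[MDCCutCornerTreatment alloc] initWithCut:50]];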

Here's how the cut corner treatment implements the path generator:

- (MDCPathGenerator *)pathGeneratorForCornerWithAngle:(CGFloat)angle
                                               andCut:(CGFloat)cut {
  MDCPathGenerator *path =
      [MDCPathGenerator pathGeneratorWithStartPoint:CGPointMake(0, cut)];
  [path addLineToPoint:CGPointMake(MDCSin(angle) * cut, MDCCos(angle) * cut)];
  return path;
}

The cut corner's path only cares about the two points (one on each edge of the corner) that dictate the cut. The points are (0, cut) and (sin(angle) * cut, cos(angle) * cut). In our case—because we are only dealing with rectangles, whose corners are 90 degrees—the latter point is equivalent to (cut, 0), since sin(90) = 1 and cos(90) = 0.

Here's how the rounded corner treatment implements the path generator:

- (MDCPathGenerator *)pathGeneratorForCornerWithAngle:(CGFloat)angle
                                             andRadius:(CGFloat)radius {
  MDCPathGenerator *path =
      [MDCPathGenerator pathGeneratorWithStartPoint:CGPointMake(0, radius)];
  [path addArcWithTangentPoint:CGPointZero
                       toPoint:CGPointMake(MDCSin(angle) * radius, MDCCos(angle) * radius)
                        radius:radius];
  return path;
}

From the starting point of (0, radius) we draw an arc of a circle to the point (sin(angle) * radius, cos(angle) * radius) which—similarly to the cut example—translates to (radius, 0). Lastly, the radius value is the radius of the arc.

Adding shape support for components

After providing an MDCRectangleShapeGenerator with the convenient APIs for setting the corners and edges, we then needed to add a property for each of our components to receive the shape generator and apply the shape to the component.

Each supported component now has a shapeGenerator property in its API that can receive an MDCRectangleShapeGenerator—or any other shape generator that implements the pathForSize method: given the component's width and height, it returns a CGPath for the shape. We also needed to make sure the generated path is then applied to the underlying CALayer of the component's UIView so it's actually displayed.
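
For instance, a custom generator is just an object conforming to MDCShapeGenerating. Here's a rough sketch—assuming the protocol's core requirement is the pathForSize method described above; the real protocol may have additional requirements—of a generator that produces a diamond by connecting the midpoints of a component's edges:

// Simplified sketch of a custom shape generator (not part of the library).
@interface DiamondShapeGenerator : NSObject <MDCShapeGenerating>
@end

@implementation DiamondShapeGenerator

- (CGPathRef)pathForSize:(CGSize)size {
  CGMutablePathRef path = CGPathCreateMutable();
  CGPathMoveToPoint(path, NULL, size.width / 2, 0);               // top edge midpoint
  CGPathAddLineToPoint(path, NULL, size.width, size.height / 2);  // right edge midpoint
  CGPathAddLineToPoint(path, NULL, size.width / 2, size.height);  // bottom edge midpoint
  CGPathAddLineToPoint(path, NULL, 0, size.height / 2);           // left edge midpoint
  CGPathCloseSubpath(path);
  return (CGPathRef)CFAutorelease(path);
}

@end

// A component could then pick it up via its shapeGenerator property, e.g.:
// button.shapeGenerator = [[DiamondShapeGenerator alloc] init];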

When applying the shape generator's path to a component, we had to keep a couple of things in mind:

Adding proper shadow, border, and background color support

Because shadows, borders, and background colors are part of the default UIView API and don't necessarily take custom CALayer paths into account (they follow the default rectangular bounds), we needed to provide additional support. So we implemented MDCShapedShadowLayer to be the view's main CALayer. This class takes the shape generator's path and passes it along as the layer's shadow path, so the shadow follows the custom shape. It also provides different APIs for setting the background color and border color/width by setting those values explicitly on the CALayer that holds the custom path, rather than invoking the top-level UIView APIs. As an example, when setting the background color to black, we invoke CALayer's fillColor instead of UIView's backgroundColor.
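
Conceptually, the layer setup looks something like the sketch below (this is not the actual MDCShapedShadowLayer code—just the idea, with shapeGenerator standing in for any generator such as MDCRectangleShapeGenerator):

// Conceptual sketch only: the generated path drives both the visible fill and
// the shadow, bypassing UIView's rectangular backgroundColor/shadow behavior.
UIView *shapedView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
CAShapeLayer *shapeLayer = [CAShapeLayer layer];
CGPathRef shapePath = [shapeGenerator pathForSize:shapedView.bounds.size];
shapeLayer.path = shapePath;                        // the visible custom shape
shapeLayer.fillColor = UIColor.blackColor.CGColor;  // instead of backgroundColor
[shapedView.layer addSublayer:shapeLayer];
shapedView.layer.shadowPath = shapePath;            // shadow follows the custom outline
shapedView.layer.shadowOpacity = 0.4f;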

Being conscious of setting layer's properties such as shadowPath and cornerRadius

Because the shape's layer is set up differently from the view's default layer, we need to be conscious of places in our existing component code where we set the layer's properties. As an example, setting the cornerRadius of a component—the default way to set rounded corners with Apple's API—won't apply if you also set a custom shape.

Supporting touch events

Touch handling also applies only to the view's original rectangular bounds by default. With a custom shape, there can be places inside the rectangular bounds where the layer isn't drawn, or places outside the bounds where it is. So we needed touch handling that corresponds to where the shape actually is (and isn't) and acts accordingly.

To achieve this, we override the hitTest method of our UIView. The hitTest method is responsible for returning the view that should receive the touch. In our case, we implemented it to return the custom shape's view only if the touch point is contained inside the generated shape path:

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
  if (self.layer.shapeGenerator) {
    if (CGPathContainsPoint(self.layer.shapeLayer.path, nil, point, true)) {
      return self;
    } else {
      return nil;
    }
  }
  return [super hitTest:point withEvent:event];
}

Ink Ripple Support

As with the other properties, our ink ripple (which provides a ripple effect to the user as touch feedback) is also built on top of the default rectangular bounds. For ink, there are two things we update: 1) the maxRippleRadius and 2) the masking to bounds. The maxRippleRadius must be updated in cases where the shape is either smaller or bigger than the bounds. In these cases we can't rely on the bounds because for smaller shapes the ink will ripple too fast, and for bigger shapes the ripple won't cover the entire shape. The ink layer's masksToBounds also needs to be set to NO so the ink can spread outside of the bounds when the custom shape is bigger than the default bounds.

- (void)updateInkForShape {
  CGRect boundingBox = CGPathGetBoundingBox(self.layer.shapeLayer.path);
  self.inkView.maxRippleRadius =
      (CGFloat)(MDCHypot(CGRectGetHeight(boundingBox), CGRectGetWidth(boundingBox)) / 2 + 10.f);
  self.inkView.layer.masksToBounds = NO;
}

Applying a custom shape to your components

With all the implementation complete, here are per-platform examples of how to provide cut corners to a Material Button component:

Android:

Kotlin

(button.background as? MaterialShapeDrawable)?.let {
    it.shapeAppearanceModel.apply {
        cornerFamily = CutCornerTreatment(cornerSize)
    }
}

XML:

<com.google.android.material.button.MaterialButton
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    app:shapeAppearanceOverlay="@style/MyShapeAppearanceOverlay"/>

<style name="MyShapeAppearanceOverlay">
  <item name="cornerFamily">cut</item>
  <item name="cornerSize">4dp</item>
</style>

Flutter:

FlatButton(
  // Placeholder handler and label; the shape parameter is the relevant part.
  onPressed: () {},
  child: Text('BUTTON'),
  shape: BeveledRectangleBorder(
    // Despite referencing circles and radii, this means "make all corners 4.0".
    borderRadius: BorderRadius.all(Radius.circular(4.0)),
  ),
)

iOS:

MDCButton *button = [[MDCButton alloc] init];
MDCRectangleShapeGenerator *rectShape = [[MDCRectangleShapeGenerator alloc] init];
[rectShape setCorners:[[MDCCutCornerTreatment alloc] initWithCut:4]];
button.shapeGenerator = rectShape;

Web (rounded corners):

.my-button {
  @include mdc-button-shape-radius(4px);
}

Final words

I'm really excited to have tackled this problem and have it be part of the Material Design system. I'm particularly happy to have worked so collaboratively with design. As an engineer, I tend to tackle problems more or less from similar angles, and also think about problems very similarly to other engineers. But when solving problems together with designers, it feels like the challenge is actually looked at from all the right angles (pun intended), and the solution often turns out to be better and more thoughtful.

We're in good shape to continue growing the Material shape system and offering even more support for things like edge treatments and more complicated shapes. One day (when Houdini is ready) we'll even be able to support cut corners on the web.

Please check our code out on GitHub across the different platforms: Android, Flutter, iOS, Web. And check out our newly updated Material Design guidance on shape.

 

7th December 2018 |

Creating More Realistic AR experiences with updates to ARCore & Sceneform

Posted by Ashish Shah, Product Manager, Google AR & VR

The magic of augmented reality is in the way it blends the digital and the physical worlds. For AR experiences to feel truly immersive, digital objects need to look realistic -- as if they were actually there with you, in your space. This is something we continue to prioritize as we update ARCore and Sceneform, our 3D rendering library for Java developers.

Today, with the release of ARCore 1.6, we're bringing further improvements to help you build more realistic and compelling experiences, including better plane boundary tracking and several lighting improvements in Sceneform.

With 250M devices now supporting ARCore, developers can bring these experiences to an even larger and growing user base.

More Realistic Lighting in Sceneform

Previous versions of Sceneform defaulted to optimizing ambient light as yellow. Version 1.6 defaults to neutral and white. This aligns more closely to the way light appears in the real world, making digital objects look more natural. You can see the differences below.

Left: Sceneform 1.5. Right: Sceneform 1.6.

This change will also make objects rendered with Sceneform look as if they're affected more naturally by color and lighting in the surrounding environment. For example, if you're viewing an AR object at sunset, it would appear to be illuminated by the red and orange hues, just like real objects in the scene.

In addition, we've updated Sceneform's built-in environmental image to provide a more neutral scene for your app. This will be most noticeable when viewing reflections in smooth metallic surfaces.

Adding screen capture and recording to the mix

To help you further improve quality and engagement in your AR apps, we're adding screen capture and recording to Sceneform. This is something a number of developers have requested to help with demo recording and prototyping. It can also be used as an external facing feature, allowing your users to share screenshots and videos on social media more easily, which can help get the word out about your app.

You can access this functionality through the surface mirroring API for the SceneView class. The API allows you to display the Sceneform view on a device's screen at the same time it's being rendered to another surface (such as the input surface for the Android MediaRecorder).

Learn more and get started

The new updates to Sceneform and ARCore are available today. With these new versions also comes support for new devices, such as the Samsung Galaxy A3 and the Huawei P20 Lite, that will join the list of ARCore-enabled devices. More information is available on the ARCore developer website.