Google Developers Blog

 

20th December 2018 |

Google+ APIs shutting down March 7, 2019

As part of the sunset of Google+ for consumers, we will be shutting down all Google+ APIs on March 7, 2019. This will be a progressive shutdown beginning in late January, so we are advising all developers reliant on the Google+ APIs that calls to those APIs may start to intermittently fail as early as January 28, 2019.

On or around December 20, 2018, affected developers should also receive an email listing recently used Google+ API methods in their projects. Whether or not an email is received, we strongly encourage developers to search for and remove any dependencies on Google+ APIs from their applications.

The most commonly used APIs that are being shut down include:

As part of these changes:

  • The Google+ Sign-in feature has been fully deprecated and will also be shut down on March 7, 2019. Developers should migrate to the more comprehensive Google Sign-In authentication system; a minimal migration sketch follows this list.
  • Over the Air Installs is now deprecated and has been shut down.
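
If your application used Google+ Sign-in only for basic authentication, the equivalent flow with Google Sign-In on Android looks roughly like the minimal sketch below. This is an illustration rather than a complete migration: it assumes the Play services Auth library (com.google.android.gms:play-services-auth), and RC_SIGN_IN is an arbitrary request code you define yourself.

import android.app.Activity;
import android.content.Intent;
import com.google.android.gms.auth.api.signin.GoogleSignIn;
import com.google.android.gms.auth.api.signin.GoogleSignInClient;
import com.google.android.gms.auth.api.signin.GoogleSignInOptions;

public class SignInHelper {
  // Arbitrary request code used to match the result in onActivityResult().
  private static final int RC_SIGN_IN = 9001;

  // Build basic sign-in options and launch the standard Google Sign-In flow.
  public static void startSignIn(Activity activity) {
    GoogleSignInOptions gso =
        new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
            .requestEmail()
            .build();
    GoogleSignInClient client = GoogleSignIn.getClient(activity, gso);
    Intent signInIntent = client.getSignInIntent();
    activity.startActivityForResult(signInIntent, RC_SIGN_IN);
  }
}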

Google+ integrations for web or mobile apps are also being shut down. Please see this additional notice.

While we're sunsetting Google+ for consumers, we're investing in Google+ for enterprise organizations. They can expect a new look and new features -- more information is available in our blog post.

 

19th December 2018 |

Tasty: A Recipe for Success on the Google Home Hub

Posted by Julia Chen Davidson, Head of Partner Marketing, Google Home

We recently launched the Google Home Hub, the first ever Made by Google smart speaker with a screen, and we knew that a lot of you would want to put these helpful devices in the kitchen—perhaps the most productive room in the house. With the Google Assistant built into the Home Hub, you can use your voice—or your hands—to multitask during mealtime. You can manage your shopping list, map out your family calendar, create reminders for the week, and even help your kids out with their homework.

To make the Google Assistant on the Home Hub even more helpful in the kitchen, we partnered with BuzzFeed's Tasty, the largest social food network in the world, to bring 2,000 of their step-by-step tutorials to the Assistant, adding to the tens of thousands of recipes already available. With Tasty on the Home Hub, you can search for recipes based on the ingredients you have in the pantry, your dietary restrictions, cuisine preferences and more. And once you find the right recipe, Tasty will walk you through each recipe with instructional videos and step-by-step guidance.

Tasty's Action shows off how brands can combine voice with visuals to create next-generation experiences for our smart homes. We asked Sami Simon, Product Manager for BuzzFeed Media Brands, a few questions about building for the Google Assistant and we hope you'll find some inspiration for how you can combine voice and touch for the new category of devices in our homes.

What additive value do you see for your users by building an Action for the Google Assistant that's different from an app or YouTube video series, for example?

We all know that feeling when you have your hands in a bowl of ground meat and you realize you have to tap the app to go to the next step or unpause the YouTube video you were watching (I can attest to random food smudges all over my phone and computer for this very reason!).


With our Action, people can use the Google Assistant to get a helping hand while cooking, navigating a Tasty recipe just by using their voice. Without having to break the flow of rolling out dough or chopping an onion, we can now guide people on what to expect next in their cooking process. What's more, with the Google Home Hub, which has the added bonus of a display screen, home chefs can also quickly glance at the video instructions for extra guidance.

The Google Home Hub gives users all of Google, in their home, at a glance. What advantages do you see for Tasty in being a part of voice-enabled devices in the home?

The Assistant on the Google Home Hub enhances the Tasty experience in the kitchen, making it easier than ever for home chefs to cook Tasty recipes, either by utilizing voice commands or the screen display. Tasty is already the centerpiece of the kitchen, and with the Google Home Hub integration, we have the opportunity to provide additional value to our audience. For instance, we've introduced features like Clean Out My Fridge where users share their available ingredients and Tasty recommends what to cook. We're so excited that we can seamlessly provide inspiration and coaching to all home chefs and make cooking even more accessible.

How do you think these new devices will shape the future of digital assistance? How did you think through when to use voice and visual components in your Action?

In our day-to-day lives, we don't necessarily think critically about the best way to receive information in a given instance, but this project challenged us to create the optimal cooking experience. Ultimately we designed the Action to be voice-first to harness the power of the Assistant.

We then layered in the supplemental visuals to make the cooking experience even easier and make searching our recipe catalogue more fun. For instance, if you're busy stir frying, all the pertinent information would be read aloud to you, and if you wanted to quickly check what this might look like, we also provide the visual as additional guidance.

Can you elaborate on 1-3 key findings that your team discovered while testing the Action for the Home Hub?

Tasty's lens on cooking is to provide a fun and accessible experience in the kitchen, which we wanted to have come across with the Action. We developed a personality profile for Tasty with the mission of connecting with chefs of all levels, which served as a guide for making decisions about the Action. For instance, once we defined the voice of Tasty, we knew how to keep the dialogue conversational in order to better resonate with our audience.


Additionally, while most people have had some experience with digital assistants, their knowledge of how assistants work and ways that they use them vary wildly from person to person. When we did user testing, we realized that unlike designing UX for a website, there weren't as many common design patterns we could rely on. Keeping this in mind helped us to continuously ensure that our user paths were as clear as possible and that we always provided users support if they got lost or confused.

What are you most excited about for the future of digital assistance and branded experiences there? Where do you foresee this ecosystem going?

I'm really excited for people to discover more use cases we haven't even dreamed of yet. We've thoroughly explored practical applications of the Assistant, so I'm eager to see how we can develop more creative Actions and evolve how we think about digital assistants. As the Assistant will only get smarter and better at predicting people's behavior, I'm looking forward to seeing the growth of helpful and innovative Actions, and applying those to Tasty's mission to make cooking even more accessible.

What's next for Tasty and your Action? What additional opportunities do you foresee for your brand in digital assistance or conversational interfaces?

We are proud of how our Action leverages the Google Assistant to enhance the cooking experience for our audience, and excited for how we can evolve the feature set in the future. The Tasty brand has evolved its videos beyond our popular top-down recipe format. It would be an awesome opportunity to expand our Action to incorporate the full breadth of the Tasty brand, such as our creative long-form programming or extended cooking tutorials, so we can continue helping people feel more comfortable in the kitchen.

To check out Tasty's Action yourself, just say "Hey Google, ask Tasty what I should make for dinner" on your Home Hub or Smart Display. And to learn more about the solutions we have for businesses, take a look at our Assistant Business site to get started building for the Google Assistant.

If you don't have the resources to build in-house, you can also work with our talented partners that have already built Actions for all types of use cases. To make it even easier to find the perfect partner, we recently launched a new website that shows these agencies on a map with more details about how to get in touch. And if you're an agency already building Actions, we'd love to hear from you. Just reach out here and we'll see if we can offer some help along the way!

 

18th December 2018 |

Building the Shape System for Material Design

Posted by Yarden Eitan, Software Engineer

I am Yarden, an iOS engineer for Material Design—Google's open-source system for designing and building excellent user interfaces. I help build and maintain our iOS components, but I'm also the engineering lead for Material's shape system.

Shape: It's kind of a big deal

You can't have a UI without shape. Cards, buttons, sheets, text fields—and just about everything else you see on a screen—are often displayed within some kind of "surface" or "container." For most of computing's history, that's meant rectangles. Lots of rectangles.

But the Material team knew there was potential in giving designers and developers the ability to systematically apply unique shapes across all of our Material Design UI components. Rounded corners! Angular cuts! For designers, this means being able to create beautiful interfaces that are even better at directing attention, expressing brand, and supporting interactions. For developers, having consistent shape support across all major platforms means we can easily apply and customize shape across apps.

My role as engineering lead was truly exciting—I got to collaborate with our design leads to scope the project and find the best way to create this complex new system. Compared to systems for typography and color (which have clear structures and precedents like the web's H1-H6 type hierarchy, or the idea of primary/secondary colors), shape is the Wild West. It's a relatively unexplored terrain with rules and best practices still waiting to be defined. To meet this challenge, I got to work with all the different Material Design engineering platforms to identify possible blockers, scope the effort, and build it!

When building out the system, we had two high level goals:

  • Adding shape support for our components—giving developers the ability to customize the shape of buttons, cards, chips, sheets, etc.
  • Defining and developing a good way to theme our components using shape—so developers could set their product's shape story once and have it cascade through their app, instead of needing to customize each component individually.

From an engineering perspective, adding shape support held the bulk of the work and complexities, whereas theming had more design-driven challenges. In this post, I'll mostly focus on the engineering work and how we added shape support to our components.

Here's a rundown of what I'll cover here:

  • Scoping out the shape support functionality
  • Building shape support consistently across platforms is hard
  • Implementing shape support on iOS
    • Shape core implementation
    • Adding shape support for components
  • Applying a custom shape on your component
  • Final words

Scoping out the shape support functionality

Our first task was to scope out two questions: 1) What is shape support? and 2) What functionality should it provide? Initially our goals were somewhat ambitious. The original proposal suggested an API to customize components by edges and corners, with full flexibility on how these edges and corners look. We even thought about receiving a custom .png file with a path and converting it to a shaped component in each respective platform.

We soon found that having no restrictions would make it extremely hard to define such a system. More flexibility doesn't necessarily mean a better result. For example, it'd be quite a feat to define a flexible and easy API that lets you make a snake-shaped FAB and train-shaped cards. But those elements would almost certainly contradict the clear and straightforward approach championed by Material Design guidance.

This truck-shaped FAB is a definite "don't" in Material Design guidance.

We had to weigh the expense of time and resources against the added value for each functionality we could provide.

To solve these open questions we decided to conduct a full weeklong workshop including team members from design, engineering, and tooling. It proved to be extremely effective. Even though there were a lot of inputs, we were able to narrow down which features were feasible and most impactful for our users. Our final proposal was to make the initial system support three types of shapes: square, rounded, and cut. These shapes can be achieved through an API customizing a component's corners.

Building shape support consistently across platforms (it's hard)

Anyone who's built for multiple platforms knows that consistency is key. But during our workshop, we realized how difficult it would be to provide the exact same functionality for all our platforms: Android, Flutter, iOS, and the web. Our biggest blocker? Getting cut corners to work on the web.

Unlike sharp or rounded corners, cut corners do not have a built-in native solution on the web.

Our web team looked at a range of solutions—we even considered the idea of adding background-colored squares over each corner to mask it and make it appear cut. Of course, the drawbacks there are obvious: Shadows are masked and the squares themselves need to act as chameleons when the background isn't static or has more than one color.

We then investigated the Houdini (paint worklet) API along with a polyfill, which initially seemed like a viable solution. However, adding this support would require additional effort:

  • Our UI components use shadows to display elevation, and the new canvas shadows look different from the native CSS box-shadow, which would require us to reimplement shadows throughout our system.
  • Our UI components also display a visual ripple effect when being tapped—to show interactivity. For us to continue using ripple in the paint worklet, we would need to reimplement it, as there is no cross-browser masking solution that doesn't incur significant performance hits.

Even if we'd decided to add more engineering effort and go down the Houdini path, the question of value vs cost still remained, especially with Houdini still being "not ready" across the web ecosystem.

Based on our research and weighing the cost of the effort, we ultimately decided to move forward without supporting cut corners for web UIs (at least for now). But the good news was that we had specced out the requirements and could start building!

Implementing shape support on iOS

After narrowing down the feature set, it was up to the engineers of each platform to go and start building. I helped build out shape support for iOS. Here's how we did it:

Core implementation

In iOS, the basic building block of a user interface is an instance of the UIView class. Each UIView is backed by a CALayer instance that manages and displays its visual content. By modifying the CALayer's properties, you can modify various aspects of its visual appearance, such as color, border, shadow, and geometry.

When we refer to a CALayer's geometry, we always talk about it in the form of a rectangle.

Its frame is built from an (x, y) pair for position and a (width, height) pair for size. The main API for manipulating the layer's rectangular shape is its cornerRadius property, which receives a radius value and in turn rounds all four corners by that value. The notion of a rectangular backing and an easy API for rounded corners exists pretty much across the board for Android, Flutter, and the web. But things like cut corners and custom edges are usually not as straightforward. To be able to offer these features we built a shape library that provides a generator for creating CALayers with specific, well-defined shape attributes.

Thankfully, Apple provides us with the class CAShapeLayer, which subclasses CALayer and has a path property. Assigning this property to a custom CGPath allows us to create any shape we want.

With the path capabilities in mind, we then built a class that leverages the CGPath APIs and provides properties that our users will care about when shaping their components. Here is the API:

/**
An MDCShapeGenerating for creating shaped rectangular CGPaths.

By default MDCRectangleShapeGenerator creates rectangular CGPaths.
Set the corner and edge treatments to shape parts of the generated path.
*/
@interface MDCRectangleShapeGenerator : NSObject <MDCShapeGenerating>

/**
The corner treatments to apply to each corner.
*/
@property(nonatomic, strong) MDCCornerTreatment *topLeftCorner;
@property(nonatomic, strong) MDCCornerTreatment *topRightCorner;
@property(nonatomic, strong) MDCCornerTreatment *bottomLeftCorner;
@property(nonatomic, strong) MDCCornerTreatment *bottomRightCorner;

/**
The offsets to apply to each corner.
*/
@property(nonatomic, assign) CGPoint topLeftCornerOffset;
@property(nonatomic, assign) CGPoint topRightCornerOffset;
@property(nonatomic, assign) CGPoint bottomLeftCornerOffset;
@property(nonatomic, assign) CGPoint bottomRightCornerOffset;

/**
The edge treatments to apply to each edge.
*/
@property(nonatomic, strong) MDCEdgeTreatment *topEdge;
@property(nonatomic, strong) MDCEdgeTreatment *rightEdge;
@property(nonatomic, strong) MDCEdgeTreatment *bottomEdge;
@property(nonatomic, strong) MDCEdgeTreatment *leftEdge;

/**
Convenience to set all corners to the same MDCCornerTreatment instance.
*/
- (void)setCorners:(MDCCornerTreatment *)cornerShape;

/**
Convenience to set all edge treatments to the same MDCEdgeTreatment instance.
*/
- (void)setEdges:(MDCEdgeTreatment *)edgeShape;

@end

By providing such an API, a user can customize only a corner or an edge, and the MDCRectangleShapeGenerator class above will create a shape with those properties in mind. For this initial implementation of our shape system, we used only the corner properties.

As you can see, the corners themselves are made of the class MDCCornerTreatment, which encapsulates three pieces of important information:

  • The value of the corner (each specific corner type receives a value).
  • Whether the value provided is a percentage of the height of the surface or an absolute value.
  • A method that returns a path generator based on the given value and corner type. This will provide MDCRectangleShapeGenerator a way to receive the right path for the corner, which it can then append to the overall path of the shape.

To make things even simpler, we didn't want our users to have to build a custom corner by calculating the corner path themselves, so we provide three convenience subclasses of MDCCornerTreatment that generate rounded, curved, and cut corners.

As an example, our cut corner treatment receives a value called a "cut"—which defines the angle and size of the cut based on the number of UI points starting from the edge of the corner, and going an equal distance on the X axis and the Y axis. If the shape is a 100x100 square and all four of its corners are set with MDCCutCornerTreatment and a cut value of 50, the final result is a diamond whose vertices sit at the midpoints of the square's edges.

Here's how the cut corner treatment implements the path generator:

- (MDCPathGenerator *)pathGeneratorForCornerWithAngle:(CGFloat)angle
                                               andCut:(CGFloat)cut {
  MDCPathGenerator *path =
      [MDCPathGenerator pathGeneratorWithStartPoint:CGPointMake(0, cut)];
  [path addLineToPoint:CGPointMake(MDCSin(angle) * cut, MDCCos(angle) * cut)];
  return path;
}

The cut corner's path only cares about the two points (one on each edge of the corner) that dictate the cut. The points are (0, cut) and (sin(angle) * cut, cos(angle) * cut). In our case, because we are dealing only with rectangles whose corners are 90 degrees, the latter point is equivalent to (cut, 0), since sin(90) = 1 and cos(90) = 0.

Here's how the rounded corner treatment implements the path generator:

- (MDCPathGenerator *)pathGeneratorForCornerWithAngle:(CGFloat)angle
                                            andRadius:(CGFloat)radius {
  MDCPathGenerator *path =
      [MDCPathGenerator pathGeneratorWithStartPoint:CGPointMake(0, radius)];
  [path addArcWithTangentPoint:CGPointZero
                       toPoint:CGPointMake(MDCSin(angle) * radius, MDCCos(angle) * radius)
                        radius:radius];
  return path;
}

From the starting point of (0, radius) we draw an arc of a circle to the point (sin(angle) * radius, cos(angle) * radius) which—similarly to the cut example—translates to (radius, 0). Lastly, the radius value is the radius of the arc.

Adding shape support for components

After providing an MDCRectangleShapeGenerator with the convenient APIs for setting the corners and edges, we then needed to add a property for each of our components to receive the shape generator and apply the shape to the component.

Each supported component now has a shapeGenerator property in its API that can receive an MDCRectangleShapeGenerator, or any other shape generator that implements the pathForSize method: given the width and height of the component, it returns a CGPath of the shape. We also needed to make sure that the generated path is then applied to the underlying CALayer of the component's UIView so that it is displayed.

When applying the shape generator's path to the component, we had to keep a couple of things in mind:

Adding proper shadow, border, and background color support

Because the shadows, borders, and background colors are part of the default UIView API and don't necessarily take into account custom CALayer paths (they follow the default rectangular bounds), we needed to provide additional support. So we implemented MDCShapedShadowLayer to be the view's main CALayer. This class takes the shape generator's path and sets it as the layer's shadow path, so the shadow follows the custom shape. It also provides APIs for setting the background color and border color/width by setting those values explicitly on the layer that holds the custom path, rather than invoking the top-level UIView APIs. As an example, when setting the background color to black, instead of invoking UIView's backgroundColor we set fillColor on the shape layer that holds the custom path.

Being conscious when setting layer properties such as shadowPath and cornerRadius

Because the shape's layer is set up differently than the view's default layer, we need to be conscious of places where we set our layer's properties in our existing component code. As an example, setting the cornerRadius of a component—which is the default way to set rounded corners using Apple's API—will actually not be applicable if you also set a custom shape.

Supporting touch events

Touch events also apply only to the original rectangular bounds of the view. With a custom shape, there will be areas inside the rectangular bounds where the layer isn't drawn, or areas outside the bounds where it is. So we needed a way to support touch handling that corresponds to where the shape actually is and isn't, and act accordingly.

To achieve this, we override the hitTest method of our UIView. The hitTest method is responsible for returning the view that should receive the touch. In our case, we implemented it so it returns the custom shape's view if the touch event is contained inside the generated shape path:

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
  if (self.layer.shapeGenerator) {
    if (CGPathContainsPoint(self.layer.shapeLayer.path, nil, point, true)) {
      return self;
    } else {
      return nil;
    }
  }
  return [super hitTest:point withEvent:event];
}

Ink Ripple Support

As with the other properties, our ink ripple (which provides a ripple effect to the user as touch feedback) is also built on top of the default rectangular bounds. For ink, there are two things we update: 1) the maxRippleRadius and 2) the masking to bounds. The maxRippleRadius must be updated in cases where the shape is either smaller or bigger than the bounds. In these cases we can't rely on the bounds because for smaller shapes the ink will ripple too fast, and for bigger shapes the ripple won't cover the entire shape. The ink layer's masksToBounds also needs to be set to NO so the ink can spread outside of the bounds when the custom shape is bigger than the default bounds.

- (void)updateInkForShape {
  CGRect boundingBox = CGPathGetBoundingBox(self.layer.shapeLayer.path);
  self.inkView.maxRippleRadius =
      (CGFloat)(MDCHypot(CGRectGetHeight(boundingBox), CGRectGetWidth(boundingBox)) / 2 + 10.f);
  self.inkView.layer.masksToBounds = NO;
}

Applying a custom shape to your components

With all the implementation complete, here are per-platform examples of how to provide cut corners to a Material Button component:

Android:

Kotlin

(button.background as? MaterialShapeDrawable)?.let {
  it.shapeAppearanceModel.apply {
    cornerFamily = CutCornerTreatment(cornerSize)
  }
}

XML:

<com.google.android.material.button.MaterialButton
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    app:shapeAppearanceOverlay="@style/MyShapeAppearanceOverlay"/>

<style name="MyShapeAppearanceOverlay">
  <item name="cornerFamily">cut</item>
  <item name="cornerSize">4dp</item>
</style>

Flutter:

FlatButton(
  onPressed: () {},
  child: Text('Button'),
  shape: BeveledRectangleBorder(
    // Despite referencing circles and radii, this means "make all corners 4.0".
    borderRadius: BorderRadius.all(Radius.circular(4.0)),
  ),
)

iOS:

MDCButton *button = [[MDCButton alloc] init];
MDCRectangleShapeGenerator *rectShape = [[MDCRectangleShapeGenerator alloc] init];
[rectShape setCorners:[[MDCCutCornerTreatment alloc] initWithCut:4]];
button.shapeGenerator = rectShape;

Web (rounded corners):

.my-button {
  @include mdc-button-shape-radius(4px);
}

Final words

I'm really excited to have tackled this problem and have it be part of the Material Design system. I'm particularly happy to have worked so collaboratively with design. As an engineer, I tend to tackle problems more or less from similar angles, and also think about problems very similarly to other engineers. But when solving problems together with designers, it feels like the challenge is actually looked at from all the right angles (pun intended), and the solution often turns out to be better and more thoughtful.

We're in good shape to continue growing the Material shape system and offering even more support for things like edge treatments and more complicated shapes. One day (when Houdini is ready) we'll even be able to support cut corners on the web.

Please check our code out on GitHub across the different platforms: Android, Flutter, iOS, Web. And check out our newly updated Material Design guidance on shape.

 

7th December 2018 |

Creating More Realistic AR experiences with updates to ARCore & Sceneform

Posted by Ashish Shah, Product Manager, Google AR & VR

The magic of augmented reality is in the way it blends the digital and the physical worlds. For AR experiences to feel truly immersive, digital objects need to look realistic -- as if they were actually there with you, in your space. This is something we continue to prioritize as we update ARCore and Sceneform, our 3D rendering library for Java developers.

Today, with the release of ARCore 1.6, we're bringing further improvements to help you build more realistic and compelling experiences, including better plane boundary tracking and several lighting improvements in Sceneform.

With 250M devices now supporting ARCore, developers can bring these experiences to an even larger and growing user base.

More Realistic Lighting in Sceneform

Previous versions of Sceneform defaulted to optimizing ambient light as yellow. Version 1.6 defaults to neutral and white. This aligns more closely to the way light appears in the real world, making digital objects look more natural. You can see the differences below.

Left side image: Sceneform 1.5. Right side image: Sceneform 1.6.

This change will also make objects rendered with Sceneform look as if they're affected more naturally by color and lighting in the surrounding environment. For example, if you're viewing an AR object at sunset, it would appear to be illuminated by the red and orange hues, just like real objects in the scene.

In addition, we've updated Sceneform's built-in environmental image to provide a more neutral scene for your app. This will be most noticeable when viewing reflections in smooth metallic surfaces.

Adding screen capture and recording to the mix

To help you further improve quality and engagement in your AR apps, we're adding screen capture and recording to Sceneform. This is something a number of developers have requested to help with demo recording and prototyping. It can also be used as an external facing feature, allowing your users to share screenshots and videos on social media more easily, which can help get the word out about your app.

You can access this functionality through the surface mirroring API for the SceneView class. The API allows you to display the Sceneform view on a device's screen at the same time it's being rendered to another surface (such as the input surface for the Android MediaRecorder).
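
To illustrate, here is a rough Java sketch of wiring a SceneView into an Android MediaRecorder. The mirroring calls (startMirroringToSurface and stopMirroringToSurface) are our reading of the surface mirroring API described above; check the SceneView reference documentation for the exact signatures before relying on this.

import android.media.MediaRecorder;
import android.view.Surface;
import com.google.ar.sceneform.SceneView;

public class SceneRecorder {
  private final MediaRecorder recorder = new MediaRecorder();
  private Surface recordingSurface;

  // Configure the recorder, then mirror the SceneView into its input surface.
  public void startRecording(SceneView sceneView, String outputPath,
                             int width, int height) throws Exception {
    recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
    recorder.setVideoSize(width, height);
    recorder.setOutputFile(outputPath);
    recorder.prepare();
    recordingSurface = recorder.getSurface();
    recorder.start();
    // Assumed mirroring call from the Sceneform surface mirroring API.
    sceneView.startMirroringToSurface(recordingSurface, 0, 0, width, height);
  }

  // Stop mirroring first, then finalize the recording.
  public void stopRecording(SceneView sceneView) {
    sceneView.stopMirroringToSurface(recordingSurface);
    recorder.stop();
    recorder.release();
  }
}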

Learn more and get started

The new updates to Sceneform and ARCore are available today. With these new versions also comes support for new devices, such as the Samsung Galaxy A3 and the Huawei P20 Lite, which will join the list of ARCore-enabled devices. More information is available on the ARCore developer website.

 

7th December 2018 |

Sync Google Drive files to apps using the Drive REST API, bidding farewell to the Drive Android API

Posted by Remy Burger, Product Manager, Google Drive

If you're looking to make Google Drive files accessible from within your application, chances are you might use the Google Drive REST API or the Google Drive Android API to help. Both tools allow users to download files from, or upload files to, Drive from inside another application.

Starting today, we're simplifying options for developers by retiring the Drive Android API. We will focus solely on expanding functionality for the Drive REST API.

If you're new to the Drive REST API, it offers all of the same functionality as the Drive Android API, including ways to:

If you use the Google Drive Android API, you will need to migrate your existing applications to other services prior to December 6, 2019, when all calls to the API and any features in your applications that depend on it will be shut down. Note: if you've been using the Drive Android API for its offline sync capability, you can continue to provide an offline-first model by using a SyncAdapter with the Drive REST API.
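
As a rough illustration of that offline-first pattern, the sketch below shows the kind of list and download calls a SyncAdapter might make through the Drive REST API v3 Java client (google-api-services-drive). It assumes a Drive service object already built with the user's credentials; the migration guide referenced below covers that setup.

import java.io.FileOutputStream;
import java.io.OutputStream;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.model.FileList;

public class DriveSyncHelper {
  private final Drive drive;

  public DriveSyncHelper(Drive drive) {
    this.drive = drive;
  }

  // List the files visible to the app, fetching only the fields sync needs.
  public FileList listFiles() throws Exception {
    return drive.files().list()
        .setQ("trashed = false")
        .setFields("files(id, name, modifiedTime)")
        .execute();
  }

  // Download one file's content to local storage for offline access.
  public void downloadFile(String fileId, String localPath) throws Exception {
    try (OutputStream out = new FileOutputStream(localPath)) {
      drive.files().get(fileId).executeMediaAndDownloadTo(out);
    }
  }
}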

What to do if you currently use the Google Drive Android API

We want to make it easy for you to migrate your applications to use the Drive REST API. To get started, reference this migration guide which details replacements for each of the major services fulfilled by the Drive Android API. Additionally, check out this sample app, which demonstrates each of these proposed replacements. If you have any issues, check out the google-drive-sdk tag on StackOverflow.

 

5th December 2018 |

Upgrade to version 2 of the Google Drive Activity API

Posted by Jeremy S. Meredith, Google Drive Activity API Team

Today, we are announcing a new version of the Google Drive Activity API, used to access the record of user activity in Google Drive. This new API offers an expanded data model to provide meaningful representations of actions, actors, and targets of activity in Google Drive. It also offers new features for filtering the results of requests made to the API.
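
To give a concrete feel for the new data model and filtering, here is a minimal Java sketch that posts a query to the v2 endpoint over plain HTTP. It assumes you already hold an OAuth 2.0 access token with a Drive Activity scope, and the filter shown is just one example; see the reference documentation for the full filter grammar and response fields.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class DriveActivityQuery {
  // Query recent edit activity under the user's Drive root and return raw JSON.
  public static String queryRecentEdits(String accessToken) throws Exception {
    URL url = new URL("https://driveactivity.googleapis.com/v2/activity:query");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Authorization", "Bearer " + accessToken);
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);

    // Scope the query to everything under the Drive root, filtered to edits.
    String body = "{\"ancestorName\": \"items/root\", "
        + "\"filter\": \"detail.action_detail_case:EDIT\"}";
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }

    // The response lists activities, each with actions, actors, and targets.
    try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8")) {
      return scanner.useDelimiter("\\A").next();
    }
  }
}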

The version of the API released today replaces the existing Drive Activity API v1, so you should migrate your applications to the new version of the API soon. We will shut down the v1 API on December 31, 2019. At that time, any application that depends on the v1 API will no longer work.

A migration guide is available to help with this transition to the new Drive Activity API v2. You may also want to read the overview and guides for the new version, peruse the reference documentation, or jump right in and try it out in the APIs Explorer.

 

4th December 2018 |

Flutter 1.0: Google’s Portable UI Toolkit

Posted by Tim Sneath, Group Product Manager for Flutter

Today, at Flutter Live, we're announcing Flutter 1.0, the first stable release of Google's UI toolkit for creating beautiful, native experiences for iOS and Android from a single codebase.

Cross-platform mobile development today is full of compromise. Developers are forced to choose between building the same app multiple times for multiple operating systems, or accepting a lowest common denominator solution that trades native speed and accuracy for portability. With Flutter, we believe we have a solution that gives you the best of both worlds: hardware-accelerated graphics and UI, powered by native ARM code, targeting both popular mobile operating systems.

Introducing Flutter

Flutter doesn't replace the traditional Apple and Android app models for building mobile apps; instead, it's an app engine that you can either embed into an existing app or use for an entirely new app.

We think of the characteristics of Flutter along four dimensions:

  1. Flutter enables you to build beautiful apps. We want to enable designers to deliver their full creative vision without being forced to water it down due to limitations of the underlying framework. Flutter lets you control every pixel on the screen, and its powerful compositing capabilities let you overlay and animate graphics, video, text and controls without limitation. Flutter includes a full set of widgets that deliver pixel-perfect experiences on both iOS and Android. And it enables the ultimate realization of Material Design, Google's open design system for digital experiences.
  2. Flutter is fast. It's powered by the same hardware-accelerated Skia 2D graphics engine that underpins Chrome and Android. We architected Flutter to be able to support glitch-free, jank-free graphics at the native speed of your device. Flutter code is powered by the world-class Dart platform, which enables compilation to native 32-bit and 64-bit ARM code for iOS and Android.
  3. Flutter is productive. Flutter introduces stateful hot reload, a revolutionary new capability for mobile developers and designers to iterate on their apps in real time. With stateful hot reload, you can make changes to the code of your app and see the results instantly without restarting your app or losing its state. Stateful hot reload transforms the way developers build an app -- and in user surveys, developers say it makes their development cycle three times more productive.
  4. Lastly, Flutter is open. Flutter is an open source project with a BSD-style license, and includes the contributions of hundreds of developers from around the world. In addition, there's a vibrant ecosystem of thousands of plug-ins. And because every Flutter app is a native app that uses the standard Android and iOS build tools, you can access everything from the underlying operating system, including code and UI written in Kotlin or Java on Android, and Swift or Objective-C on iOS.

Put this all together, combine it with best-in-class tooling for Visual Studio Code, Android Studio, IntelliJ or the programmer's editor of your choice, and you have Flutter -- a development environment for building beautiful native experiences for iOS or Android from a single codebase.

Flutter Growth and Momentum

We announced the first beta of Flutter at Mobile World Congress ten months ago, and we've been excited to see how quickly it has been adopted by the broader community, as evidenced by the thousands of Flutter apps that are already published to the Apple and Google Play stores even before our 1.0 release. It's clear that developers are ready for a new approach to UI development.

Internally, Flutter is being used at Google for a wide array of products, with Google Ads already having switched to Flutter for their iOS and Android app. And even before 1.0, a wide range of global customers including Abbey Road Studios, Alibaba, Capital One, Groupon, Hamilton, JD.com, Philips Hue, Reflectly, and Tencent are developing or shipping apps with Flutter.

Michael Jones, Senior Director of Engineering from the Capital One team, says the following about their experiences with Flutter:

"We are excited by Flutter's unique take on high-performing cross-platform development. Our engineers have appreciated the rapid development promise and hot reload capabilities, and over the past year we have seen tremendous progress in the framework and especially the native integration story.

"Flutter can allow Capital One to think of features not in an 'iOS or Android-first' fashion, but rather in a true mobile-first model. We are excited to see Flutter 1.0 and continue to be impressed with the pace of advancement and the excitement in the engineering community."

At the Flutter Live event today, the popular payment service Square announced two new Flutter SDKs that make it easy to accept payments for goods and services with Flutter, whether in-person using a Square payment reader or by taking payments inside a mobile app. Square demonstrated an example of using their payments SDK using an app from Collins Family Orchards, a family farm that grows and sells fruit in farmers markets around the Pacific Northwest.

The developer of the Collins Family Orchards app, Dean Papastrat, had this to say about his experience:

"I was blown away by the speed of all the animations and transitions in production builds. As a web developer, it was super easy to make the transition to Flutter, and I can't believe I was able to build a fully working app that can take payments in just a week."

Also at Flutter Live, 2Dimensions announced the immediate availability of Flare, a remarkable new tool for designers to create vector animations that can be embedded directly into a Flutter app and manipulated with code. Flare eliminates the need to design in one app, animate in another, then convert all of that to device-specific assets and code.

Animations built with Flare can be embedded into an existing Flutter app as a widget, allowing them to participate in the full compositor and be overlaid with other text, graphical layers or even UI widgets. Integrating in this way frees animations from the 'black box' limitations of other architectures, and allows ongoing collaboration between designers and developers right up to the completion of the app. Such tight integration between Flutter and Flare provides a uniquely compelling offering for digital designers and animators who want to create highly-polished mobile experiences.

Another partner who has bet on Flutter is Nevercode, a fast-growing provider of continuous integration and delivery (CI/CD) tooling for mobile apps. At Flutter Live, they announced Codemagic, a new tool designed specifically for Flutter to make it easy to automate the process of building and packaging Flutter apps for both Android and iOS from a single automation. Available today in beta, Codemagic allows you to select a GitHub repo containing a Flutter project, and with just a few clicks, create continuous build flows that run tests, and generate binary app bundles that you can upload to the Apple and Google Play stores.

We put together a short video to highlight the range and variety of the apps developers have been building with Flutter since the beta:

New Features in Flutter 1.0

Since the first beta, we've been working to add features and polish to Flutter. In particular, we rounded out our support for pixel-perfect iOS apps with new widgets; added support for nearly twenty different Firebase services; and worked on improving performance and reducing the size of Flutter apps. We've also closed out thousands of issues based on feedback from the community.

Flutter also includes the latest version of the Dart platform, 2.1, an update to Dart 2 that offers smaller code size, faster type checks, and better usability for type errors. Dart 2.1 also has new language features to improve productivity when building user experiences. Developers who have already adopted Dart 2.1 tell us they're seeing significant speed improvements just by switching to the latest engine:

While the primary focus of the 1.0 release is bug fixes and stabilization, we're also introducing previews of two major new features for developers to try out in preview mode that we anticipate will ship in our next quarterly release in February 2019: Add to App and platform views.

Add to App

When we first built Flutter, we focused on productivity for the scenario where someone is building a new application from scratch. But of course, not everyone has the luxury of being able to start with a clean slate. Talking to some of our larger customers, it was clear that they wanted to use Flutter for new user journeys or features within an existing application, or to convert their existing application to Flutter in stages.

The architecture of Flutter supports this model well: after all, every Flutter app includes a host Android and iOS container. But we've been working to make it easier to incrementally adopt Flutter by updating our templates, tooling and guidance for existing apps. We've made it easier to share assets between Flutter and host code. And we've also reworked the tooling to make it easy to attach to an existing Flutter process without launching the debugger with the application.

We will continue to work to make this experience even better. Even though a number of customers are already using our guidance on Add to App successfully, we're continuing to add samples and expand support for complex scenarios. In the meantime, our instructions for adding Flutter to existing apps are on our wiki, and you can track the remaining work on the GitHub project board.

Platform Views

While Add To App is useful as a way to gradually introduce Flutter to an existing application, sometimes it's useful to go the other way round and embed an Android or iPhone platform control in a Flutter app.

So we've introduced platform view widgets (AndroidView and UiKitView) that let you embed this kind of content on each platform. We've been previewing Android support for a couple of months, but now we're expanding support to iOS, and starting to add plug-ins like Google Maps and WebView that take advantage of this.

Like other components, our platform view widgets participate in the composition model, which means that you can integrate them with other Flutter content. For example, in the screenshot above, the floating action button in the bottom right corner is a Flutter widget that has a background color with 50% alpha. This demonstrates the unique architectural advantages of Flutter well.

While this work is ready for developers to try out, we're continuing to work on improving performance and device compatibility, so we recommend caution if deploying apps that depend on PlatformViews. We're continuing to actively optimize platform views and expect them to be ready for production in time for our next quarterly update.

Flutter Beyond Mobile

The primary target for Flutter has so far been iOS and Android. Yet our ambitions for Flutter extend beyond mobile to a broader set of platforms. Indeed, from the outset Flutter was architected as a portable UI toolkit that is flexible enough to go wherever pixels are painted.

Some of this work has already been taking place in the open. Flutter Desktop Embedding is an early-stage project that brings Flutter to desktop operating systems including Windows, MacOS, and Linux. We also recently published informal details of using Flutter on Raspberry Pi, as a way to demonstrate Flutter embedding support to smaller-scale devices that may not include a full desktop environment.

This week, at Flutter Live, we gave the first sneak peek of an experimental project we're working on in the labs that significantly expands where Flutter can run.

Hummingbird is a web-based implementation of the Flutter runtime that takes advantage of the capability of the Dart platform to compile not just to native ARM code but also to JavaScript. This enables Flutter code to run on the standards-based web without change.

We have a separate blog article on Medium that describes the technical implementation details of Hummingbird. And we'll have a lot more to share on Hummingbird at Google I/O in 2019: hope to see you there!

Of course, mobile remains our immediate priority, and you can expect to see the bulk of our investment in these core mobile scenarios over the coming months.

Conclusion

With the release of Flutter 1.0, we've established a new 'stable' channel, in addition to the existing beta, dev, and master channels. The stable channel updates less often than other channels, but we have a higher confidence in its quality since builds have already been vetted through the other channels. We anticipate that we'll update our stable channel on a quarterly basis with our most battle-tested builds.

You can download Flutter 1.0 from our website at https://flutter.io, where you can also find documentation for developers transitioning from other frameworks, code labs, a cookbook of common samples, and technical videos.

We owe a particular debt to the early adopters who have joined us on the journey so far, providing feedback, identifying issues, creating content, and generally shaping the product. The Flutter community is one of our greatest assets as a project: a welcoming, diverse, helpful group of individuals who volunteer selflessly because they also care about this open source project. Thank you!

Flutter is ready for you. What will you build?

 

27th November 2018 |

Introduction to Fairness in Machine Learning

Posted by Andrew Zaldivar, Developer Advocate, Google AI

A few months ago, we announced our AI Principles, a set of commitments we are upholding to guide our work in artificial intelligence (AI) going forward. Along with our AI Principles, we shared a set of recommended practices to help the larger community design and build responsible AI systems.

In particular, one of our AI Principles speaks to the importance of recognizing that AI algorithms and datasets are the product of the environment—and, as such, we need to be conscious of any potential unfair outcomes generated by an AI system and the risks they pose across cultures and societies. A recommended practice here for practitioners is to understand the limitations of their algorithm and datasets—but this is a problem that is far from solved.

To help practitioners take on the challenge of building fairer and more inclusive AI systems, we developed a short, self-study training module on fairness in machine learning. This new module is part of our Machine Learning Crash Course, which we highly recommend taking first—unless you know machine learning really well, in which case you can jump right into the Fairness module.

The Fairness module features a hands-on technical exercise. This exercise demonstrates how you can use tools and techniques that may already exist in your development stack (such as Facets Dive, Seaborn, pandas, scikit-learn and TensorFlow Estimators to name a few) to explore and discover ways to make your machine learning system fairer and more inclusive. We created our exercise in a Colaboratory notebook, which you are more than welcome to use, modify and distribute for your own purposes.

From exploring datasets to analyzing model performance, it's really easy to forget to make time for responsible reflection when building an AI system. So rather than having you run every code cell in sequential order without pause, we added what we call FairAware tasks throughout the exercise. FairAware tasks help you zoom in and out of the problem space. That way, you can remind yourself of the big picture: finding the undesirable biases that could disproportionately affect model performance across groups. We hope a process like FairAware will become part of your workflow, helping you find opportunities for inclusion.

FairAware task guiding practitioner to compare performances across gender.

The Fairness module was created to provide you with enough of an understanding to get started in addressing fairness and inclusion in AI. Keep an eye on this space for future work as this is only the beginning.

If you wish to learn more from our other examples, check out the Fairness section of our Responsible AI Practices guide. There, you will find a full set of Google recommendations and resources. From our latest research proposal on reporting model performance with fairness and inclusion considerations, to our recently launched diagnostic tool that lets anyone investigate trained models for fairness, our resource guide highlights many areas of research and development in fairness.

Let us know what your thoughts are on our Fairness module. If you have any specific comments on the notebook exercise itself, then feel free to leave a comment on our GitHub repo.


On behalf of many contributors and supporters,

Andrew Zaldivar – Developer Advocate, Google AI

 

26th November 2018 |

Meet the Grow with Google Developer Scholarships graduates

Posted by Peter Lubbers, Senior Program Manager, Google Developer Training

In January, as a part of Grow with Google’s ongoing commitment to create economic opportunities for Americans, the Google Developer Scholarship Challenge—hosted in partnership with Udacity—awarded nearly 50,000 scholarships to aspiring developers from a wide range of backgrounds and experience levels.

In April, the 5,000 top performers in the Scholarship Challenge earned scholarships for a full Udacity Nanodegree program. These scholars come from every part of the United States, range in age from the late teens to the late sixties, and vary in experience from beginning to advanced. Despite these differences, they share a desire to strengthen their web and Android development skills, and to grow professionally.

Together, they’ve created nearly 18,000 web and Android apps, and exchanged over 2 million messages on the support channels. Students all across the country have reported new jobs, career advancement, and engagement in community programs as a result of their scholarships.

We’d share every story if we could, as they’re all remarkable. But today, we introduce you to five scholars in particular. Because of their hard work, and what they’ve made of the scholarship opportunity, their lives and careers have changed in dramatic ways. Let’s meet them now.


Tony Boswell

Kansas City, MO

From Missouri Long-Haul Trucker to Web Developer

Tony Boswell was a long-haul truck driver for 14 years. He covered over 1.5 million miles, drove through almost every state in the US, and hauled everything from fresh produce to crude oil. It was steady work, but it required being away from home 320 days out of every year. Tony told us “My wife was home alone and we were living two entirely separate lives.”

Last year, at age 48, Tony decided he had to make a change. Despite not having any transferable skills or relevant work history, he believed he could become a developer. He applied to the Grow with Google Developer program, and earned the Nanodegree scholarship. It was the right move. Tony completed his Nanodegree program in September, and recently found a full-time position focused on front-end web development. Thanks to the career lessons included in his program, he was able to confidently negotiate a $10,000+ increase in his starting salary offer.

“I am happy to say, thanks to the education, training, and coaching that I received from this program, I have finally completed my transition from the open road and a steering wheel, to accepting the title of Technical Support Specialist — Web Developer. I can truly say that my whole life has changed because of coding.”


Kimberly McCaffery

Virginia Beach, VA

From Virginia Homemaker to Technology Apprentice

Kimberly McCaffery applied for the Grow with Google scholarship to acquire new skills that would help her transition back to the workforce. She is a mother of four, and has been a military spouse and homemaker for over 20 years. She was motivated to apply because she recognized the need to contribute financially to her family:

“Since 1999, we’ve moved 10 times; in the US and overseas. When we got back to Virginia, I returned to the workforce as a substitute teacher. The W2 I received was my first one this century, but, my total pay was less than $500! As my husband approaches retirement, I knew it would help us all if I could shoulder more of the load.“

After completing her Front-End Nanodegree program earlier this fall, Kimberly got a job as a Technology Apprentice at MAXX Potential in Norfolk, Virginia. “I’m so pleased and proud! It's 10 minutes from the kid's school, very flexible, and full of challenges with IT as a service. And there is plenty of room within the company to grow as fast as I want!”



Charles Rowland

Glendive, MT

From unemployed to Software Engineer

After being laid off from a job in Pennsylvania, Charles and his family moved back to his wife’s hometown in rural Montana, where he struggled to find work as a freelancer. It was a very difficult time, and his confidence suffered.

“I fell into major depression. When my phone rang, I had panic attacks because it was people asking for money. Job-wise, there was nothing in our small town.”

Charles had applied for, and earned, a Grow with Google Scholarship, but there didn’t seem to be a single place where he could apply his skills. He was desperate, but one interview changed everything for him:

“In June I applied for a job at the local cable company to do cable installation. In August I finally got called in for an interview. Immediately the CEO asked me why I didn't apply for their programming position. I never actually saw it. Instead of an interview for an installer job, it turned into the first of two interviews for a programming job. For the second interview, I loaded up my phone with all the apps I had made during the Android Basics program. In the interview I answered all the standard questions, but it was when I pulled out my phone and showed off the applications I made in the Nanodegree program that I could tell I nailed it.”

Two days later, they called and offered Charles the job.

“I never imagined I’d end up doing a job like this. My first day was on September 24.”


Anna Scott

Tularosa, NM

Working with Students to Build an Apache Language App

Anna is a Special Education teacher and STEM program coordinator for a middle school in New Mexico. She has a passion for technology, and applied for the Google Developer Scholarship to gain new knowledge and be more helpful to her students and her community.

Anna lives and works near the Mescalero Apache Tribal lands and is now working with her students to develop an Apache language app.

“Students are collecting Apache words and phrases as raw data for the app, and have been working closely with our Apache Language teacher, who is a member of the tribe. Students are designing artwork for the app and are consulting their elders to make it meaningful for Apache people.”

Anna is also holding a school-wide drawing contest for the app's launcher icon. During the STEM meetings, students work with Android Studio—they learn how to change the look of their app with XML, and make it do things with Java. "My students are really motivated by this project!"



Lourdes Wellington

Castine, ME

Building A Website for African Widows and Orphans

Lourdes Wellington worked in the information technology field, but in the back of her mind, she harbored a desire to learn software development. She was gearing up to make that transition, when a serious health crisis put a hold on her plans—it was cancer, and survival meant having part of her right arm amputated. Despite the challenge, she was determined to move forward both physically and mentally:

“Losing my arm was a small price to pay considering I did not lose my life. My mental aptitude became stronger and I began to consider how I wanted to move forward in the future with my life.”

Lourdes successfully applied for the Grow with Google scholarship, and with the new skills she learned in her Front-End Nanodegree program, she went looking for a meaningful way to make an impact. She learned about an organization that benefits African widows and orphans, and decided to get involved. She created a website to help increase visibility for the organization, calling attention to their efforts to raise funds so a fish hatchery and fish ponds can be constructed to feed small villages.

“Taking programming classes with Udacity for website development has motivated me to create even more websites for charity.”

It has been an honor and a pleasure to play a small part in the remarkable journeys each of these scholarship students has undertaken since we first met them back in January. We look forward to seeing how each and every graduate puts their new skills to work to advance their lives, their careers, and the world around them!



 

6th November 2018 |

Recap: Build Actions For Your Community

Posted by Leon Nicholls, Developer Programs Engineer

In March, we announced the "Build Actions for Your Community" Event Series. These events are run by Google Developers Groups (GDG) and other community groups to educate developers about Actions on Google through local meetup events.

The event series has now ended, having spanned 66 countries with a total of 432 events. These events reached 19,400 developers, 21% of whom were women.

Actions on Google is of interest to developers globally, from Benin City, Nigeria, to Valparaíso, Chile; Hyderabad, India; Košice, Slovakia; and Omaha, Nebraska.

Developers in these cities experienced hands-on learning, including codelabs and activities to design and create Actions for their communities.

Developers see creating Actions for the Google Assistant as a way of applying machine learning to solve real-world problems. Here, for example, are the winners of the #IndiaBuildsActions Campaign:

You can try Meditation Daily to help you relax, English King to learn about grammar, or Voice Cricket to play a game of cricket.

We also got valuable feedback directly from developers about how to improve the Actions on Google APIs and documentation. We learned that developers want to build Actions for feature phones and want the Assistant to support more languages. Developers also asked for more codelabs, workshops, and samples (we've since added a third codelab).

It was exciting to see how many developers shared their experiences on social media.

"Event series was impressive, Awesome and amazing. Knowledge well acquired" (Nigeria)

"The experience I had with the participants was unforgettable. Thank you" (Philippines)

It was also very encouraging to see that 76% of developers are likely to build new Actions and that most developers rated the Actions on Google platform better than other platforms.

Thanks to everybody who organized, presented, and attended these events all around the world. For even more events, join a local GDG DevFest to share ideas and learn about developing with Google's technologies. We can't wait to see what kinds of Actions you create for the Google Assistant!

Want more? Head over to the Actions on Google community to discuss Actions with other developers. Join the Actions on Google developer community program and you could earn a $200 monthly Google Cloud credit and an Assistant t-shirt when you publish your first app.

 

15th November 2018 |

Registration now open for DevFest OnAir!

Posted by Erica Hanson, Program Manager in Developer Relations

We're excited to announce the first official DevFest OnAir! DevFest OnAir is an online conference taking place on December 11th and 12th featuring sessions from DevFest events around the globe. These community-led developer sessions are hosted by GDGs (Google Developer Groups) focused on community, networking and learning about Google technologies. With over 500 communities and DevFest events happening all over the world, DevFest OnAir brings this global experience online for the first time!

Exclusive content.

DevFest OnAir includes exclusive content from Google in addition to content from the DevFest community. Watch content from up to three tracks at any time:

  • Cloud
  • Mobile
  • Voice, Web & more

Sessions cover multiple products such as Android, Google Cloud Platform, Firebase, Google Assistant, Flutter, machine learning with TensorFlow, and mobile web.

Tailored to your time zone.

Anyone can join, no matter where you are. We're hosting three broadcasts around the world, around the clock, so there's a convenient time for you to tune in, whether you're at home or at work.

Ask us a question live.

Our live Q&A forum will be open throughout the online event to spark conversation and get you the answers you need.

Fun!

Take part in interactive trivia during DevFest OnAir and you could receive something special!

Every participant who tunes in live on December 11th will receive one free month of learning on Qwiklabs.

Sign up now

Registration is free. Sign up here.

Learn more about DevFest 2018 here and find a DevFest event near you here.

GDGs are local groups of developers interested in Google products and APIs. Each GDG group can host a variety of technical activities for developers - from just a few people getting together to watch the latest Google Developers videos, to large gatherings with demos, tech talks, or hackathons. Learn more about GDG here.

Follow us on Twitter and YouTube.

 

2nd November 2018 |

Migrating G Suite extensions from Chrome Web Store to G Suite Marketplace

Originally posted on the Google Cloud Blog by Greg Brosman, Product Manager, G Suite Marketplace

Starting today, we're making it possible for you to access all of your favorite G Suite extensions in one place by bringing add-ons and web apps from the Chrome Web Store into the G Suite Marketplace.

If you're not familiar with the G Suite Marketplace, it's the app store for G Suite. Whether you want to boost your productivity, take control of your calendar or do more from within your inbox, you can browse more than a thousand options to customize how you work in G Suite. IT admins also have the ability to manage access and controls of apps from within the G Suite Marketplace—like whitelisting app access for users or installing an app for an entire domain (read more about best practices here). If you're an admin, you can access the marketplace from within the Admin console (Go to Tools > G Suite Marketplace).

How to migrate existing apps if you're a developer

Going forward, new G Suite extensions will be listed only on the G Suite Marketplace to make it easier for you to manage your listings. This includes all G Suite apps with add-ons, like Docs, Sheets and Drive. If you have existing apps listed on the Chrome Web Store, you'll have 90 days to migrate them. Here are specific instructions for editor add-ons, Drive v3 apps, and Drive v2 apps to get that process started. Ratings and reviews will be included in the migration, and existing users will continue to be able to use their apps.

We look forward to seeing your apps on G Suite Marketplace!

 

31st October 2018 |

Sean Medlin: A ‘Grow with Google Developer Scholarship’ Success Story

Originally published on the Udacity blog by Stuart Frye, VP for Business Development

This deserving scholarship recipient overcame incredible odds to earn this opportunity, and he's now on the path to achieving a career dream he's harbored since childhood!

Sean Medlin is a young man, but he's already experienced a great deal of hardship in his life. He's had to overcome the kinds of obstacles that too often stop people's dreams in their tracks, but he's never given up. Sustained by a lifelong love for computers, an unshakeable vision for his future, and a fierce commitment to learning, Sean has steadfastly pursued his life and career goals. He's done so against the odds, often without knowing whether anything would pan out.

Today, Sean Medlin is a Grow with Google Developer Scholarship recipient, on active duty in the US Air Force, with a Bachelor of Computer Science degree. He's married to a woman he says is "the best in the world" and he's just become a father for the second time. It's been a long journey for a boy who lost his sister to cancer before he'd reached adulthood, and whose official education record listed him as having never made it past the eighth grade.

But Sean keeps finding a way forward.

The experience of getting to know people like Sean is almost too powerful to describe, but experiences like these are at the heart of why the Grow with Google Developer Scholarship is such an impactful initiative for us. It's one thing to read the numbers at a high level, and feel joy and amazement that literally thousands of deserving learners have been able to advance their lives and careers through the scholarship opportunities they've earned. However, it's an entirely different experience to witness the transformative power of opportunity at the individual, human level. One person. Their life. Their dreams. Their challenges, and their successes.

It's our pleasure and our honor to introduce you to Sean Medlin, and to share his story.

You've spoken about your love for computers; when did that begin?

When I was around eight or nine, I inherited a computer from my parents and just started picking it apart and putting it back together. I fell in love with it and knew it was something I wanted to pursue as a career. By the time I reached the seventh grade I decided on a computer science degree, and knew I was already on the path to it—I was top of math and science in my class at that point.

And then things changed for your family. What happened?

My sister, who was just a year old at the time, was diagnosed with cancer; stage four. For the next several years, she fought it, and at one point beat it; but unfortunately, it came back. When she relapsed, she started receiving treatments at a research hospital about eight hours from where we lived. Because of this, our family was constantly separated. My brothers and I usually stayed at family and friends' houses. Eventually, my parents pulled us out of school so we could travel with them. We stayed at hotels or the Ronald McDonald House, really wherever we could find a place to stay. We eventually moved to Memphis, where the hospital is located. During all of this, I was homeschooled, but I really didn't learn a whole lot, given the circumstances. When my sister passed away, our family went through a terrible time. I personally took it hard and became lackadaisical. Eventually, I decided that regardless of what wrenches life was throwing me, I would not give up on my dream.

So you were still determined to further your education; what did you do?

Well, in what was my senior year, I decided to start thinking about college. I started googling, and the first thing I discovered is that I needed a high school diploma. So I found my way to the education boards in Oklahoma. I learned that I was never properly registered as a homeschool student. So my record shows that I dropped out of school my eighth grade year. I was pretty devastated. My only option was to go and get my GED*, so that's what I did.

Computer science was still your passion; were you able to start pursuing it after earning your GED?

Well, I had to take a lot of prerequisites before I could even start a computer science degree. I mean, a lot! Which was frustrating, because it took more money than I had. I tried applying for financial aid, but I wasn't able to get very much. I looked like an eighth grade dropout with a GED. That's all anyone saw.

So you found another way to pay for your schooling; what was that?

I decided to join the United States Air Force. I couldn't pay for my own education anymore, and the Air Force was offering tuition assistance. That was the best option I had. I have no military history in my family, and at first my friends and family were against the idea, worried I'd be overseas too much. But I was determined I was going to finish school and get my computer science degree and work in this field.

It sounds like the work you started doing in the Air Force wasn't really related to your desired career path, but you were still able to continue your education?

That's right. The career path I joined was supposedly tech-related, but it wasn't. I enlisted as a munition systems technology troop, or in other words, an ammo troop. It wasn't really in line with my goals, but the tuition assistance made it possible for me to keep studying computer science online. There was a tuition assistance cap though, and between that, and how much my supervisors were willing to approve, I was only able to take two classes per semester. But I kept plugging away, even using my own money to pay for some of it. It took me eight years while working in the Air Force, but I completed my computer science degree last March. I finished with a 3.98 GPA and Summa Cum Laude, the highest distinction!

That's an outstanding accomplishment, congratulations! Did you feel ready to enter the field and start working at that point?

Not at all! I definitely learned that I wasn't prepared for the programming world based just off my bachelor's degree. It taught me all the fundamentals, which was great. I learned the theory, and how to program, but I didn't really learn how to apply what I'd learned to real-world situations.

You'd had a great deal of experience with online learning by that point; is that where you went looking to determine your next steps?

Yes! I tried everything. I did some free web development boot camps. I discovered Udemy, and tried a bunch of their courses, trying to learn different languages. Then I found Udacity. I started off with free courses. I really fell in love with Java, and that's what initially brought me to Udacity's Android courses. The satisfaction of making an app, it just pulled me in. It was something I could show my wife, and my friends. I knew it was what I wanted to pursue.

And then you heard about the Google Scholarship?

Well, I was actually working out how I was going to pay for a Nanodegree program myself when the scholarship opportunity emerged. I applied, and was selected for the challenge course. I knew when I got selected, that I only had three months, and that they were going to pick the top 10 percent of the students, after those three months were up, to get the full scholarship. My son was only about a year old then, and my wife became pregnant again right when I found out about the scholarship. I told her, "I'm going to knock this course out as fast as possible. But I need you to help me buckle down." She took care of my son as much as possible, and I finished the challenge course in about two weeks. I was determined. I wanted to show I could do it. Afterwards, I became one of the student mentors and leaders, and constantly stayed active in the channels and forums. I just did as much as I could to prove my worth.

Those efforts paid off, and you landed a full Google Scholarship for the Android Basics Nanodegree program. And now you have some good news to share, is that right?

Yes, I successfully completed the Android Basics Nanodegree program on July 29th!

How are you approaching your career goals differently now?

Well, completing the projects in my Nanodegree program really improved my confidence and performance in technical interviews. When I first graduated with my bachelor's degree, I applied for a few jobs and went through a couple technical interviews. I felt completely lost, and became nervous about doing them going forward. Once I completed the Nanodegree program, I went through another technical interview and felt so prepared. I knew every answer, and I knew exactly what I was talking about.

As it turns out, you've actually earned new opportunities within the Air Force. Can you tell us about that?

The base I'm at is considered an IT hub for the Air Force, and the Air Force recently decided to start building mobile apps organically, utilizing our service members. Soon after this was decided, senior leadership began searching for the best and brightest programmers to fill this team. I was not only recommended, but they looked over my projects from the Nanodegree program, and deemed I was one of the most qualified! Normally, opportunities like this are strictly prohibited to anyone outside the requested Air Force specialty code, so I wasn't getting my hopes up. That restriction didn't stop senior leadership. As of right now, I'm part of the mobile app team, and the only ammo troop developing mobile apps for the Air Force, in the entire world!

So what does the future hold for you next?

I feel like the last 15 years of my life have been leading up to where I'm at now. I want to pursue a job as a software developer—an Android developer, in Silicon Valley! Ever since I was a kid, I've had the dream of being a developer at Blizzard. I was a huge World of Warcraft nerd during my homeschooled years. However, I'm okay if I fall a little short of that. I really just want to be surrounded by other programmers. I want to learn from them. It's what I've always wanted. To become a programmer. The idea of leaving the military is really scary though. The thought of not being able to get a job … it's scary, it's a lot of different emotions. But my aspiration is to become a full-time software developer for a big tech company, in a nice big city.

How does your wife feel about all of this?

My wife is the best woman in the world. She wants to follow me wherever the wind takes us. She's very proud of me, and I'm very proud of her too. She does a lot. I wouldn't be able to do what I do without her. That's for sure.

I think I speak for everyone at Udacity when I say that no one here has any doubt you'll achieve whatever you set out to achieve!

It's often said that hindsight is 20/20, and in hindsight, it's tempting to say we helped create the Grow with Google Developer Scholarship just for people like Sean. To say that, however, would be doing a disservice to him. His journey, and his accomplishments, are unique. The truth is, we didn't know who we'd meet when we launched this initiative. Yet here we are today, celebrating all that Sean has accomplished!

To have played a role in his story is an honor we couldn't have predicted, but it's one we'll treasure always.

Sean, congratulations on your success in the scholarship program, and for everything you've achieved. Whether you elect to stay in the military, or make your way to California with your family, we know you'll continue to do great things!

Growing Careers and Skills Across the US

Grow with Google is a new initiative to help people get the skills they need to find a job. Udacity is excited to partner with Google on this powerful effort, and to offer the Developer Scholarship program.

Grow with Google Developer scholars come from different backgrounds, live in different cities, and are pursuing different goals in the midst of different circumstances, but they are united by their efforts to advance their lives and careers through hard work, and a commitment to self-empowerment through learning. We're honored to support their efforts, and to share the stories of scholars like Sean.

 

12th November 2018 |

Elevating user trust in our API ecosystem

Posted by Andy Wen, Group Product Manager

Google API platforms have a long history of enabling a vibrant and secure third-party app ecosystem for developers—from the original launch of OAuth which helped users safeguard passwords, to providing fine-grained data-sharing controls for APIs, to launching controls to help G Suite admins manage app access in the workplace.

In 2018, we launched Gmail Add-ons, a new way for developers to integrate their apps into Gmail across platforms. Gmail Add-ons also offer a stronger security model for users because email data is only shared with the developer when a user takes action.

We've continually strengthened these controls and policies over the years based on user feedback. While the controls already in place give people peace of mind and have worked well, today we're introducing even stronger controls and policies to give our users the confidence they need to keep their data safe.

To provide additional assurances for users, today we are announcing new policies, focused on Gmail APIs, which will go into effect January 15, 2019. We are publishing these changes in advance to provide time for developers who may need to adjust their apps or policies to comply.

Of course, we encourage developers to migrate to Add-ons where possible as their preferred platform for the best privacy and security for users (developers also get the added bonus of listing their apps in the G Suite Marketplace to reach five million G Suite businesses). Let's review the policy updates:

Policies

To better ensure that user expectations align with developer uses, the following policies will apply to apps accessing user data from consumer Google accounts (Note: as always, G Suite admins have the ability to control access to their users' applications. Read more.).

Appropriate Access: Only permitted Application Types may access these APIs.

Users typically interact with their email directly through email clients and productivity tools. Users who allow applications to access their email without regular, direct interaction (for example, services that provide reporting or monitoring) will see additional warnings, and we will require them to re-grant access at regular intervals.

How Data May Not Be Used: 3rd-party apps accessing these APIs must use the data to provide user-facing features and may not transfer or sell the data for other purposes such as targeting ads, market research, email campaign tracking, and other unrelated purposes. (Note: Gmail users' email content is not used for ads personalization.)

As an example, consolidating data from a user's email for their direct benefit, such as expense tracking, is a permitted use case. Consolidating the expense data for market research that benefits a third party is not permitted.

We have also clarified that human review of email data must be strictly limited.

How Data Must Be Secured: It is critical that 3rd-party apps handling Gmail data meet minimum security standards to minimize the risk of data breach. Apps will be asked to demonstrate secure data handling with assessments that include: application penetration testing, external network penetration testing, account deletion verification, reviews of incident response plans, vulnerability disclosure programs, and information security policies.

Applications that only store user data on end-user devices will not need to complete the full assessment but will need to be verified as non-malicious software. More information about the assessment will be posted here in January 2019. Existing Applications (as of this publication date) will have until the end of 2019 to complete the assessment.

Accessing Only Information You Need: During application review, we will be tightening compliance with our existing policy on limiting API access to only the information necessary to implement your application. For example, if your app does not need full or read access and only requires send capability, we require you to request narrower scopes so the app can only access data needed for its features.
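For illustration, here is a minimal sketch of requesting only a narrow scope, written in TypeScript with the googleapis Node.js client; the client ID, client secret, and redirect URI are placeholders, and the gmail.send scope simply stands in for whatever single capability your app actually needs.

```
import {google} from 'googleapis';

// Placeholders: substitute your own OAuth client credentials and redirect URI.
const oauth2Client = new google.auth.OAuth2(
  'YOUR_CLIENT_ID',
  'YOUR_CLIENT_SECRET',
  'https://example.com/oauth2callback',
);

// Ask only for the narrow "send" capability instead of full mailbox access.
const authUrl = oauth2Client.generateAuthUrl({
  access_type: 'offline',
  scope: ['https://www.googleapis.com/auth/gmail.send'],
});

console.log('Authorize this app by visiting:', authUrl);
```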

Additional developer help documentation will be posted in November 2018 so that developers can assess the impact to their app and begin planning for any necessary changes.

Application Review

All apps accessing the restricted scopes will be required to submit an application review starting on January 15, 2019. If a review is not submitted by February 15, 2019, then new grants from Google consumer accounts will be disabled after February 22, 2019 and any existing grants will be revoked after March 31, 2019.

Application reviews will be submitted from the Google API Console. To ensure related communication is received, we encourage developers to update project roles (learn more) so that the associated email addresses or email group are up to date.

For more details about the restricted scope app verification, please visit this FAQ.

 

8th October 2018 |

More granular Google Account permissions with Google OAuth and APIs

Posted by Adam Dawes, Senior Product Manager

Google offers a wide variety of APIs that third-party app developers can use to build features for Google users. Granting access to this data is an important decision. Going forward, consumers will get more fine-grained control over what account data they choose to share with each app.

Over the next few months, we'll start rolling out an improvement to our API infrastructure. We will show each permission that an app requests one at a time, within its own dialog, instead of presenting all permissions in a single dialog*. Users will have the ability to grant or deny permissions individually.

To prepare for this change, there are a number of actions you should take with your app:

  • Review the Google API Services: User Data Policy and make sure you are following them.
  • Before making an API call, check whether the user has already granted the permission to your app. This helps you avoid insufficient-permission errors, which could lead to unexpected app errors and a bad user experience (a short sketch of this check follows this list). Learn more by referring to the documentation for your platform below:
    • Documentation for Android
    • Documentation for the web
    • Documentation for iOS
  • Request permissions only when you need them. You'll be able to stage when each permission is requested, and we recommend being thoughtful about doing this in context. You should avoid asking for multiple scopes at sign-in, when users may be using your app for the first time and are unfamiliar with the app's features. Bundling together a request for several scopes makes it hard for users to understand why your app needs the permission and may alarm and deter them from further use of your app.
  • Provide justification before asking for access. Clearly explain why you need access, what you'll do with a user's data, and how they will benefit from providing access. Our research indicates that these explanations increase user trust and engagement.
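As a rough illustration of checking and requesting a permission in context, here is a minimal TypeScript sketch for a web app using the Google Sign-In JavaScript library (gapi.auth2); the Drive scope and the function name are illustrative assumptions, not part of this announcement.

```
// Assumes the Google Sign-In platform library (gapi.auth2) is already loaded
// and initialized on the page; the Drive scope below is purely illustrative.
declare const gapi: any;

const DRIVE_SCOPE = 'https://www.googleapis.com/auth/drive.file';

async function ensureDriveAccess(): Promise<boolean> {
  const user = gapi.auth2.getAuthInstance().currentUser.get();

  // Check whether the scope was already granted before calling the API.
  if (user.hasGrantedScopes(DRIVE_SCOPE)) {
    return true;
  }

  // Request the additional permission in context, right when it is needed.
  try {
    await user.grant({scope: DRIVE_SCOPE});
    return true;
  } catch (err) {
    // The user declined; degrade gracefully instead of failing with an error.
    return false;
  }
}
```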

An example of contextual permission gathering

These changes will begin to roll out to new clients starting this month and will be extended to existing clients at the beginning of 2019. Google continues to invest heavily in our developer tools and platforms. Together with the changes we made last year, we expect this improvement will help increase transparency and trust in our app ecosystem.

We look forward to working with you through this change. If you have feedback, please comment below. Or, if you have any technical questions, please post them on Stack Overflow under the google-oauth tag.

*Our different login scopes (profile, email, openid, and plus.me) are all combined into the same consent screen and don't need to be requested separately.

 

5th October 2018 |

Share your #DevFest18 story!

Posted by Erica Hanson, Developer Communities Program Manager

Over 80 countries are planning a DevFest this year!

Our GDG community is very excited as they aim to connect with 100,000 developers at 500 DevFests around the world to learn, share and build new things.

Most recently, GDG Nairobi hosted the largest developer festival in Kenya. On September 22nd, DevFest Nairobi engaged 1,200+ developers, from 26+ African countries, with 37% women in attendance! They had 44 sessions, 4 tracks and 11 codelabs facilitated by 5 GDEs (Google Developer Experts) among other notable speakers. The energy was so great, #DevFestNairobi was trending on Twitter that day!

GDG Tokyo held their third annual DevFest this year on September 1st, engaging with over 1,000 developers! GDG Tokyo hosted 42 sessions, 6 tracks and 35 codelabs by partnering with 14 communities specializing in technology including 3 women-led communities (DroidGirls, GTUG Girls, and XR Jyoshibu).

Share your story!

Our community is interested in hearing about what you learned at DevFest. Use #DevFestStories and #DevFest18 on social media. We would love to re-share some of your stories here on the Google Developers blog and Twitter! Check out a few great examples below.

Learn more about DevFest 2018 here and find a DevFest event near you here.

GDGs are local groups of developers interested in Google products and APIs. Each GDG group can host a variety of technical activities for developers - from just a few people getting together to watch the latest Google Developers videos, to large gatherings with demos, tech talks, or hackathons. Learn more about GDG here.

Follow us on Twitter and YouTube.

 

4th October 2018 |

Four tips for building great transactional experiences for the Google Assistant

Posted by Mikhail Turilin, Product Manager, Actions on Google

Building engaging Actions for the Google Assistant is just the first step in your journey for delivering a great experience for your users. We also understand how important it is for many of you to get compensated for your hard work by enabling quick, hands-free transactional experiences through the Google Assistant.

Let's take a look at some of the best practices you should consider when adding transactions to your Actions!

1. Use Google Sign-In for the Assistant

Traditional account linking requires the user to open a web browser and manually log in to a merchant's website. This can lead to higher abandonment rates for a couple of reasons:

  1. Users need to enter a username and password, which they often can't remember
  2. Even if the user started the conversation on Google Home, they have to switch to a mobile phone to log in to the merchant's website

Our new Google Sign-In for the Assistant flow solves this problem. By implementing this authentication flow, your users will only need to tap twice on the screen to link their accounts or create a new account on your website. Connecting individual user profiles to your Actions gives you an opportunity to personalize your customer experience based on your existing relationship with a user.

And if you already have a loyalty program in place, users can accrue points and access discounts by linking their accounts with OAuth and Google Sign-In.

Head over to our step-by-step guide to learn how to incorporate Google Sign-In.
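To give a flavor of what the flow looks like in fulfillment code, here is a minimal sketch using the actions-on-google Node.js client library with Dialogflow, written in TypeScript; the intent names and client ID are placeholders, and this is not a complete sample.

```
import {dialogflow, SignIn} from 'actions-on-google';

// The client ID comes from the Actions console account-linking setup.
const app = dialogflow({
  clientId: 'YOUR_CLIENT_ID.apps.googleusercontent.com',
});

// Ask the user to link their account with Google Sign-In.
app.intent('Start Sign-in', (conv) => {
  conv.ask(new SignIn('To save your order preferences'));
});

// Handle the result of the sign-in helper.
app.intent('Get Sign-in', (conv, params, signin: any) => {
  if (signin.status === 'OK') {
    conv.ask('Thanks for signing in, I found your account. What would you like to do?');
  } else {
    conv.ask('No problem, you can sign in later. What would you like to do?');
  }
});

// Expose `app` through your HTTPS webhook framework of choice
// (for example, Cloud Functions for Firebase or Express).
```

In Dialogflow, the second intent is the one mapped to the actions_intent_SIGN_IN event so that it receives the helper's result.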

2. Simplify the order process with a re-ordering flow

Most people prefer to use the Google Assistant quickly, whether they're at home or on the go. So if you're a merchant, you should look for opportunities to simplify the ordering process.

Choosing a product from a list of many dozens of items takes a really long time. That's why many consumers enjoy the ability to quickly reorder items when shopping online. Implementing reordering with Google Assistant provides an opportunity to solve both problems at the same time.

Reordering is based on the history of previous purchases. You will need to implement account linking to identify returning users. Once the account is linked, pull the order history from your backend and present the choices to the user.
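As a rough sketch of what that could look like with the actions-on-google Node.js library, the TypeScript snippet below presents recent orders as a list; getRecentOrders is a hypothetical stand-in for your own order-history service, and the account is assumed to be linked already.

```
import {dialogflow, List} from 'actions-on-google';

// Hypothetical stand-in for your own order-history service.
declare function getRecentOrders(
  email: string | undefined,
): Promise<Array<{id: string; name: string; summary: string}>>;

const app = dialogflow();

app.intent('Reorder', async (conv) => {
  // Assumes the account is already linked, so the user can be identified.
  const recent = await getRecentOrders(conv.user.email);

  const items: {[key: string]: {title: string; description: string}} = {};
  for (const order of recent) {
    items[order.id] = {title: order.name, description: order.summary};
  }

  conv.ask('Here are your recent orders. Which one would you like again?');
  conv.ask(new List({title: 'Order it again', items}));
});
```

The user's selection would then be handled by an intent mapped to the actions_intent_OPTION event.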

Just Eat, an online food ordering and delivery service in the UK, focuses on reordering as one of their core flows because they expect their customers to use the Google Assistant to reorder their favorite meals.

3. Use Google Pay for a more seamless checkout

Once a user has decided they're ready to make a purchase, it's important to provide a quick checkout experience. To help, we've expanded payment options for transactions to include Google Pay, a fast, simple way to pay online, in stores, and in the Google Assistant.

Google Pay reduces customer friction during checkout because it's already connected to users' Google accounts. Users don't need to go back and forth between the Google Assistant and your website to add a payment method. Instead, users can share the payment method that they have on file with Google Pay.

Best of all, it's simple to integrate – just follow the instructions in our transactions docs.

4. Support voice-only Actions on the Google Home

At I/O, we announced that voice-only transactions for Google Home are now supported in the US, UK, Canada, Germany, France, Australia, and Japan. A completely hands-free experience will give users more ways to complete transactions with your Actions.

Here are a few things to keep in mind when designing your transactions for voice-only surfaces:

  • Build easy-to-follow dialogue, because users won't see the written dialogue or suggestion chips that are available on phones.
  • Avoid inducing choice paralysis. Focus on a few simple choices based on customer preferences collected during their previous orders.
  • Localize your transactional experiences for new regions – learn more here.
  • Don't forget to enable your transactions to work on smart speakers in the console.

Learn more tips in our Conversation Design Guidelines.

As we expand support for transactions in new countries and on new Google Assistant surfaces, now is the perfect time to make sure your transactional experiences are designed with users in mind so you can increase conversion and minimize drop-off.

 

3rd October 2018 |

Make money from your Actions, create better user experiences

Posted by Tarun Jain, Group PM, Actions on Google

The Google Assistant helps you get things done across the devices you have at your side throughout your day--a bedside smart speaker, your mobile device while on the go, or even your kitchen Smart Display when winding down in the evening.

One of the common questions we get from developers is: how do I create a seamless path for users to complete purchases across all these types of devices? We also get asked by developers: how can I better personalize my experience for users on the Assistant with privacy in mind?

Today, we're making these easier for developers with support for digital goods and subscriptions, and Google Sign-in for the Assistant. We're also giving the Google Assistant a complete makeover on mobile phones, enabling developers to create even more visually rich integrations.

Start earning money with premium experiences for your users

While we've offered transactions for physical goods for some time, starting today, you will also be able to offer digital goods, including one-time purchases like upgrades--expansion packs or new levels, for example--and even recurring subscriptions directly within your Action.

Starting today, users can complete these transactions while in conversation with your Action through speakers, phones, and Smart Displays. This will be supported in the U.S. to start, with more locales coming soon.

Headspace, for example, now offers Android users an option to subscribe to their plans, meaning users can purchase a subscription and immediately see an upgraded experience while talking to their Action. Try it for yourself by telling your Google Assistant, "meditate with Headspace."

Volley added digital goods to their role-playing game Castle Master so users could enhance their experience by purchasing upgrades. Try it yourself by asking your Google Assistant to "play Castle Master."

You can also ensure a seamless premium experience as users move between your Android app and Action for Assistant by letting users access their digital goods across their relationship with you, regardless of where the purchase originated. You can manage your digital goods for both your app and your Action in one place, in the Play Console.

Simplified account linking and user personalization

Once your users have access to a premium experience with digital goods, you will want to make sure your Action remembers them. To help with that, we're also introducing Google Sign-In for the Assistant, a secure authentication method that simplifies account linking for your users and reduces user drop-off during login. Google Sign-In provides the most convenient way to log in, with just a few taps. With Google Sign-In, users can even use just their voice to log in and link accounts on smart speakers with the Assistant.

In the past, account linking could be a frustrating experience for your users; having to manually type a username and password--or worse, create a new account--breaks the natural conversational flow. With Google Sign-In, users can now create a new account with just a tap or confirmation through their voice. Most users can even link to their existing accounts with your service using their verified email address.

For developers, Google Sign-In also makes it easier to support login and personalize your Action for users. Previously, developers needed to build an account system and support OAuth-based account linking in order to personalize their Action. Now, you have the option to use Google Sign-In to support login for any user with a Google account.

Starbucks added Google Sign-In for the Assistant to enable users of their Action to access their Starbucks Rewards™ accounts and earn stars for their purchases. Since adding Google Sign-In for the Assistant, they've seen login conversion nearly double for their users versus their previous implementation that required manual account entry.

Check out our guide on the different authentication options available to you, to understand which best meets your needs.

A new visual experience for the phone

Today, we're launching the first major makeover for the Google Assistant on phones, bringing a richer, more interactive interface to the devices we carry with us throughout the day.

Since the Google Assistant made its debut, we've noticed that nearly half of all interactions with the Assistant today include both voice and touch. With this redesign, we're making the Assistant more visually assistive for users, combining voice with touch in a way that gives users the right controls in the right situations.

For developers, we've also made it easy to bring great multimodal experiences to life on the phone and other Assistant-enabled devices with screens, including Smart Displays. This presents a new opportunity to express your brand through richer visuals and with greater real estate in your Action.

To get started, you can now add rich responses to customize your Action for visual interfaces. With rich responses you can build visually engaging Actions for your users with a set of plug-and-play visual components for different types of content. If you've already added rich responses to your Action, these will work automatically on the new mobile redesign. Be sure to also check out our guidance on how and when to use visuals in your Action.
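For example, here is a minimal TypeScript sketch of a rich response built with the actions-on-google Node.js library; the intent name, card copy, and URLs are placeholders.

```
import {dialogflow, BasicCard, Button, Image, Suggestions} from 'actions-on-google';

const app = dialogflow();

app.intent('Show Recipe', (conv) => {
  // Always pair visual components with a spoken or written response.
  conv.ask('Here is a recipe you might like.');
  conv.ask(new BasicCard({
    title: 'Roast chicken',
    subtitle: 'Ready in 90 minutes',
    text: 'A simple weeknight roast with lemon and herbs.',
    image: new Image({
      url: 'https://example.com/roast-chicken.jpg',
      alt: 'Roast chicken on a platter',
    }),
    buttons: new Button({
      title: 'Open full recipe',
      url: 'https://example.com/recipes/roast-chicken',
    }),
  }));
  conv.ask(new Suggestions(['More recipes', 'Shopping list']));
});
```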

Below you can find some examples of the ways some partners and developers have already started to make use of rich responses to provide more visually interactive experiences for Assistant users on phones.

You can try these yourself by asking your Google Assistant to, "order my usual from Starbucks," "ask H&M Home to give inspiration for my kitchen," "ask Fitstar to workout," or "ask Food Network for chicken recipes."

Ready to get building? Check out our documentation on how to add digital goods and Google Sign-In for Assistant to create premium and personalized experiences for your users across devices.

To improve your visual experience for phone users, check out our conversation design site, our documentation on different surfaces, and our documentation and sample on how you can use rich responses to build with visual components. You can also test and debug your different types of voice, visual, and multimodal experiences in the Actions simulator.

Good luck building, and please continue to share your ideas and feedback with us. Don't forget that once you publish your first Action you can join our community program* and receive your exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit.


*Some countries are not eligible to participate in the developer community program; please review the terms and conditions.

 

10th October 2018 |

Start a new .page today

Posted by Ben Fried, VP, CIO, & Chief Domains Enthusiast

Update: .page is now open for general registration! Find participating registrar partners at get.page.

Today we're announcing .page, the newest top-level domain (TLD) from Google Registry.

A TLD is the last part of a domain name, like .com in "google.com" or .google in "blog.google". The .page TLD is a new opportunity for anyone to build an online presence. Whether you're writing a blog, getting your business online, or promoting your latest project, .page makes it simple and more secure to get the word out about the unique things you do.

Check out 10 interesting things some people and businesses are already doing on .page:

  1. Ellen.Page is the website of Academy Award®-nominated actress and producer Ellen Page that will spotlight LGBTQ culture and social causes.
  2. Home.Page is a project by the digital media artist Aaron Koblin, who is creating a living collection of hand-drawn houses from people across the world. Enjoy free art daily and help bring real people home by supporting revolving bail.
  3. ChristophNiemann.Page is the virtual exhibition space of illustrator, graphic designer, and author Christoph Niemann.
  4. Web.Page is a collaboration between a group of designers and developers who will offer a monthly online magazine with design techniques, strategies, and inspiration.
  5. CareerXO.Page by Geek Girl Careers is a personality assessment designed to help women find tech careers they love.
  6. TurnThe.Page by Insurance Lounge offers advice about the transition from career to retirement.
  7. WordAsImage.Page is a project by designer Ji Lee that explores the visualizations of words through typography.
  8. Membrane.Page by Synder Filtration is an educational website about spiral-wound nanofiltration, ultrafiltration, and microfiltration membrane elements and systems.
  9. TV.Page is a SaaS company that provides shoppable video technology for e-commerce, social media, and retail stores.
  10. Navlekha.Page was created by Navlekhā, a Google initiative that helps Indian publishers get their content online with free authoring tools, guidance, and a .page domain for the first 3 years. Since the initiative debuted at Google for India, publishers are creating articles within minutes. And Navlekhā plans to bring 135,000 publishers online over the next 5 years.

Security is a top priority for Google Registry's domains. To help keep your information safe, all .page websites require an SSL certificate, which helps keep connections to your domain secure and helps protect against things like ad malware and tracking injections. Both .page and .app, which we launched in May, will help move the web to an HTTPS-everywhere future.

.page domains are available now through the Early Access Program. For an extra fee, you'll have the chance to get the perfect .page domain name from participating registrar partners before standard registrations become available on October 9th. For more details about registering your domain, check out get.page. We're looking forward to seeing what you'll build on .page!

 

28th September 2018 |

Google Fonts launches Japanese support

Posted by the Google Fonts team

The Google Fonts catalog now includes Japanese web fonts. Since shipping Korean in February, we have been working to optimize the font slicing system and extend it to support Japanese. The optimization efforts proved fruitful—Korean users now transfer on average over 30% fewer bytes than our previous best solution. This type of on-going optimization is a major goal of Google Fonts.

Japanese presents many of the same core challenges as Korean:

  1. Very large character set
  2. Visually complex letterforms
  3. A complex writing system: Japanese uses several distinct scripts (explained well by Wikipedia)
  4. More character interactions: Line layout features (e.g. kerning, positioning, substitution) break when they involve characters that are split across different slices

The impact of the large character set made up of complex glyph contours is multiplicative, resulting in very large font files. Meanwhile, the complex writing system and character interactions forced us to refine our analysis process.

To begin supporting Japanese, we gathered character frequency data from millions of Japanese webpages and analyzed them to inform how to slice the fonts. Users download only the slices they need for a page, typically avoiding the majority of the font. Over time, as they visit more pages and cache more slices, their experience becomes ever faster. This approach is compatible with many scripts because it is based on observations of real-world usage.

Frequency of the popular Japanese and Korean characters on the web

As shown above, Korean and Japanese have a relatively small set of characters that are used extremely frequently, and a very long tail of rarely used characters. On any given page most of the characters will be from the high frequency part, often with a few rarer characters mixed in.

We tried fancier segmentation strategies, but the most performant method for Korean turned out to be simple:

  1. Put the 2,000 most popular characters in a slice
  2. Put the next 1,000 most popular characters in another slice
  3. Sort the remaining characters by Unicode codepoint number and divide them into 100 equally sized slices

A user of Google Fonts viewing a webpage will download only the slices needed for the characters on the page. This yielded great results, as clients downloaded 88% fewer bytes than a naive strategy of sending the whole font. While brainstorming how to make things even faster, we had a bit of a eureka moment, realizing that:

  1. The core features we rely on to efficiently deliver sliced fonts are unicode-range and woff2
  2. Browsers that support unicode-range and woff2 also support HTTP/2
  3. HTTP/2 enables the concurrent delivery of many small files

In combination, these features mean we no longer have to worry about queuing delays as we would have under HTTP/1.1, and therefore we can do much more fine-grained slicing.

Our analyses of the Japanese and Korean web show that most pages tend to use mostly common characters, plus a few rarer ones. To optimize for this, we tested a variety of finer-grained strategies on the common characters for both languages.

We concluded that the following is the best strategy for Korean, with clients downloading 38% fewer bytes than our previous best strategy (a sketch of this slicing logic follows the list):

  1. Take the 2,000 most popular Korean characters, sort by frequency, and put them into 20 equally sized slices
  2. Sort the remaining characters by Unicode codepoint number, and divide them into 100 equally sized slices
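For illustration only, here is a small TypeScript sketch of that slicing logic (not Google's production tooling); it assumes you already have the characters sorted by observed frequency.

```
// Splits a frequency-sorted character list into roughly equal slices:
// the most popular characters into frequency-ordered slices, the long
// tail into codepoint-ordered slices.
function sliceCharacters(
  charsByFrequency: string[], // all characters, most frequent first
  headCount = 2000,           // characters covered by the frequency slices
  headSlices = 20,
  tailSlices = 100,
): string[][] {
  const splitEvenly = (chars: string[], n: number): string[][] => {
    const size = Math.ceil(chars.length / n);
    const slices: string[][] = [];
    for (let i = 0; i < chars.length; i += size) {
      slices.push(chars.slice(i, i + size));
    }
    return slices;
  };

  const head = charsByFrequency.slice(0, headCount);
  const tail = charsByFrequency
    .slice(headCount)
    .sort((a, b) => a.codePointAt(0)! - b.codePointAt(0)!);

  return [...splitEvenly(head, headSlices), ...splitEvenly(tail, tailSlices)];
}
```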

For Japanese, we found that segmenting the first 3,000 characters into 20 slices was best, resulting in clients downloading 80% fewer bytes than they would if we just sent the whole font. Having sufficiently reduced transfer sizes, we now feel confident in offering Japanese web fonts for the first time!

Now that both Japanese and Korean are live on Google Fonts, we have even more ideas for further optimization—and we will continue to ship updates to make things faster for our users. We are also looking forward to future collaborations with the W3C to develop new web standards and go beyond what is possible with today's technologies (learn more here).

PS - Google Fonts is hiring :)