
Posted by Laurence Moroney, Developer Advocate

Google Voice Actions let your users quickly complete tasks in your app using voice commands. It’s a great way to drive usage of your app, and now users’ voice action requests can lead directly from Search to your Android app. In this episode of Coffee With a Googler, Laurence meets with Sunil Vemuri, product manager of Google Voice Actions.


Sunil tells us about how the speech field has progressed, and how the quality of speech recognition algorithms has drastically improved in a short space of time. In 2013, the average error rate for speech recognition was 23 percent -- almost a quarter of all words weren’t recognized. At Google I/O 2015, we announced that the rate was down to 8 percent, and it continues to get better.

The episode also covers how developers can get started building for voice actions using System Actions, where a voice action can be routed from Google Search directly to your app by declaring an intent to capture that action. If you need voice actions that aren’t in the system, you can also create Custom Actions: you tell Google the phrases you’d like to trigger the action (e.g. ‘Ok Google, turn on the lights on MyApp’) and the Google app can then fire off the Intent that you specify. In addition, you can build Voice Interactions, where your app can ask the user follow-up questions before performing an action. For example, when the user asks to play some music, the app could ask for the genre.

You can learn more about Voice Actions, how they work, and how to get started at the Google Developers site for Voice Actions.

If you have any questions for Laurence or Sunil, please leave them in the comments below.

If there are any guests, technologies, or anything Google that you’d like us to chat about over Coffee, please also drop us a line!


Posted by Reto Meier

Starting today, the Android Developers, Chrome Developers, and Google Developers YouTube channels will host the videos that apply to each specific topic area. By subscribing to each channel, you will only be notified about content that matches your interests.

The Google Developers YouTube channel has been bringing you content across many platforms and product offerings to help inspire, inform, and delight you. Recently, we’ve been posting a variety of recurring shows that cover many broad topics across all of our developer offerings, such as Android Performance Patterns, Polycasts and Coffee With A Googler.

As we produce more and more videos, covering an ever increasing range of topics, we want to make it easier for you to find the information you need.

This means that the Android Developers Channel will carry content that is more focused on Android, such as Android Performance Patterns. Similarly, the Chrome Developers Channel will host more web-focused content, such as Polycasts, HTTP203, Totally Tooling Tips, and New in Chrome. The Google Developers Channel will continue to broadcast broader Google developer-focused content, like our DevBytes covering Google Play services releases and our Coffee With A Googler series.

We look forward to bringing you lots more video to inspire, inform, and delight. To avoid missing any of it, subscribe to each of our YouTube channels using the links below, and be sure to turn notifications on in YouTube’s settings (more info here) so that you get updates as we post new content:

Google Developers | Android Developers | Chrome Developers


Posted by Alex Danilo, Developer Advocate

When you develop applications for Google Cast, you’re building a true multi-screen experience to ‘wow’ your users and provide a unique perspective. Part of hitting that wow factor is making the app enjoyable and easy to use.

While designing the Google Cast user experience, we performed a huge amount of user testing to refine a model that works for your users in as many scenarios as possible.

The video below gives a quick explanation of the overall user experience for Google Cast enabled applications.



We’ve also produced some targeted videos to highlight important aspects of the core Google Cast design principles.

The placement of the Cast icon is one of the most important UX guidelines, since it directly affects your users’ familiarity with the ability to Cast. Watch this explanation to understand why we designed it that way:



Another important design consideration is how the connection between your application and the Google Cast device should work, and that’s covered in this short video:



When your users are connected to a Google Cast device that’s playing sound, it’s vital that they can control the audio volume easily. Here’s another video that covers volume control in Cast enabled applications:



To get more detailed information about our UX design principles, we have great documentation and a convenient UX guidelines checklist.

By following the Google Cast UX guidelines in your app, you will give your users a great interactive experience that’ll wow them and have them coming back for more!

Join fellow developers in the Cast Developers Google+ community for more tips, tricks and pointers to all kinds of development resources.


Posted by Laurence Moroney, Developer Advocate

If you’ve worked with Web or cloud tech over the last 18 months, you’ll have heard about Containers and how they let you spend more time building software instead of managing infrastructure. In this episode of Coffee with a Googler, we chat with Brian Dorsey about the benefits of using Containers on Google Cloud Platform to simplify infrastructure management.

Important discussion topics covered in this episode include:

  • Containers improve the developer experience. Regardless of how large the final deployment is, they are there to make it easier for you to succeed.
  • Kubernetes -- an open source project to allow you to manage containers and fleets of containers.

Brian shares an example from Julia Ferraioli who used Containers (with Docker) to configure a Minecraft server, with many plugins, and Kubernetes to manage it.

You can learn more about Google Cloud Platform, including Docker and Kubernetes, at the Google Cloud Platform site.


Originally posted on the Angular blog.

Posted by Misko Hevery, Software Engineer, Angular

Have an existing Angular 1 application and wondering about upgrading to Angular 2? Well, read on to learn about our plans to support incremental upgrades.

Summary

Good news!

    • We're enabling mixing of Angular 1 and Angular 2 in the same application.
    • You can mix Angular 1 and Angular 2 components in the same view.
    • Angular 1 and Angular 2 can inject services across frameworks.
    • Data binding works across frameworks.

Why Upgrade?

Angular 2 provides many benefits over Angular 1, including dramatically better performance, more powerful templating, lazy loading, simpler APIs, easier debugging, improved testability, and much more. Here are a few of the highlights:

Better performance

We've focused on performance across many scenarios to make your apps snappy: 3x to 5x faster in initial render and re-render scenarios.

    • Faster change detection through monomorphic JS calls
    • Template precompilation and reuse
    • View caching
    • Lower memory usage / VM pressure
    • Linear (blindingly-fast) scalability with observable or immutable data structures
    • Dependency injection supports incremental loading

More powerful templating

    • Removes need for many directives
    • Statically analyzable - future tools and IDEs can discover errors at development time instead of run time
    • Allows template writers to determine binding usage rather than hard-wiring in the directive definition

Future possibilities

We've decoupled Angular 2's rendering from the DOM. We are actively working on supporting the following other capabilities that this decoupling enables:

    • Server-side rendering. Enables blindingly-fast initial render and web-crawler support.
    • Web Workers. Move your app and most of Angular to a Web Worker thread to keep the UI smooth and responsive at all times.
    • Native mobile UI. We're enthusiastic about supporting the Web Platform in mobile apps. At the same time, some teams want to deliver fully native UIs on their iOS and Android mobile apps.
    • Compile as build step. Angular apps parse and compile their HTML templates. We're working to speed up initial rendering by moving the compile step into your build process.

Angular 1 and 2 running together

Angular 2 offers dramatic advantages over Angular 1 in performance, simplicity, and flexibility. We're making it easy for you to take advantage of these benefits in your existing Angular 1 applications by letting you seamlessly mix in components and services from Angular 2 into a single app. By doing so, you'll be able to upgrade an application one service or component at a time over many small commits.

For example, you may have an app that looks something like the diagram below. To get your feet wet with Angular 2, you decide to upgrade the left nav to an Angular 2 component. Once you're more confident, you decide to take advantage of Angular 2's rendering speed for the scrolling area in your main content area.

For this to work, four things need to interoperate between Angular 1 and Angular 2:

    • Dependency injection
    • Component nesting
    • Transclusion
    • Change detection

To make all this possible, we're building a library named ng-upgrade. You'll include ng-upgrade and Angular 2 in your existing Angular 1 app, and you'll be able to mix and match at will.

You can find full details and pseudocode in the original upgrade design doc or read on for an overview of the details on how this works. In future posts, we'll walk through specific examples of upgrading Angular 1 code to Angular 2.

Dependency Injection

First, we need to solve for communication between parts of your application. In Angular, the most common pattern for calling another class or function is through dependency injection. Angular 1 has a single root injector, while Angular 2 has a hierarchical injector. Upgrading services one at a time implies that the two injectors need to be able to provide instances from each other.

The ng-upgrade library will automatically make all of the Angular 1 injectables available in Angular 2. This means that your Angular 1 application services can now be injected anywhere in Angular 2 components or services.

Exposing an Angular 2 service to an Angular 1 injector will also be supported, but will require you to provide a simple mapping configuration.

The result is that services can easily be moved, one at a time, from Angular 1 to Angular 2 over independent commits, and can communicate in a mixed environment.
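
To make the two directions of this mapping concrete, here's a minimal sketch of what it could look like. The adapter object and method names used here (UpgradeAdapter, upgradeNg1Provider, addProvider, downgradeNg2Provider) are illustrative placeholders based on the pseudocode in the design doc, not a final API, and the services are hypothetical:

// Illustrative sketch only -- adapter API names are assumptions from the design doc.
var adapter = new UpgradeAdapter();

// A plain class used as a hypothetical Angular 2 service.
function LoginService() {}
LoginService.prototype.login = function (user) { /* ... */ };

// An existing Angular 1 module with a service we want to keep using.
var ng1Module = angular.module('myApp', []);
ng1Module.service('accountService', function () {
  this.getAccount = function () { /* ... */ };
});

// Angular 1 -> Angular 2: make 'accountService' injectable into
// Angular 2 components and services.
adapter.upgradeNg1Provider('accountService');

// Angular 2 -> Angular 1: register the Angular 2 service, then expose it under
// a name Angular 1 code can inject (the simple mapping configuration above).
adapter.addProvider(LoginService);
ng1Module.factory('loginService', adapter.downgradeNg2Provider(LoginService));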

Component Nesting and Transclusion

In both versions of Angular, we define a component as a directive which has its own template. For incremental migration, you'll need to be able to migrate these components one at a time. This means that ng-upgrade needs to enable components from each framework to nest within each other.

To solve this, ng-upgrade will allow you to wrap Angular 1 components in a facade so that they can be used in an Angular 2 component. Conversely, you can wrap Angular 2 components to be used in Angular 1. This will fully work with transclusion in Angular 1 and its analog of content-projection in Angular 2.

In this nested-component world, each template is fully owned by either Angular 1 or Angular 2 and will fully follow its syntax and semantics. This is not an emulation mode that merely looks like the other framework, but actual execution in each framework, depending on the type of component. This means that components which are upgraded to Angular 2 will get all of the benefits of Angular 2, not just better syntax.

This also means that an Angular 1 component will always use Angular 1 Dependency Injection, even when used in an Angular 2 template, and an Angular 2 component will always use Angular 2 Dependency Injection, even when used in an Angular 1 template.
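
As a rough illustration of this wrapping, the sketch below exposes a hypothetical Angular 2 component to Angular 1 and wraps an Angular 1 component for use from Angular 2. Again, the adapter method names (downgradeNg2Component, upgradeNg1Component) are assumptions based on the design doc rather than a final API:

// Illustrative sketch only -- adapter API names are assumptions from the design doc.
var adapter = new UpgradeAdapter();
var ng1Module = angular.module('myApp', []);

// Assume LeftNavComponent is an Angular 2 component class defined elsewhere;
// its Angular 2 metadata is omitted to keep the sketch short.
function LeftNavComponent() {}

// Expose the Angular 2 component as an Angular 1 directive, so Angular 1
// templates can write <left-nav>...</left-nav>, with Angular 1 transclusion
// mapping to Angular 2 content projection.
ng1Module.directive('leftNav', adapter.downgradeNg2Component(LeftNavComponent));

// Conversely, wrap an existing Angular 1 component directive ('userCard') so
// the returned type can be listed as a directive of an Angular 2 component
// and used inside its template as <user-card>.
var UserCard = adapter.upgradeNg1Component('userCard');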

Change Detection

Mixing Angular 1 and Angular 2 components implies that Angular 1 scopes and Angular 2 components are interleaved. For this reason, ng-upgrade will make sure that the change detection (Scope digest in Angular 1 and Change Detectors in Angular 2) are interleaved in the same way to maintain a predictable evaluation order of expressions.

ng-upgrade takes this into account and bridges the scope digest from Angular 1 and change detection in Angular 2 in a way that creates a single cohesive digest cycle spanning both frameworks.

Typical application upgrade process

Here is an example of what an Angular 1 project upgrade to Angular 2 may look like.

  1. Include the Angular 2 and ng-upgrade libraries with your existing application
  2. Pick a component which you would like to migrate
    1. Edit an Angular 1 directive's template to conform to Angular 2 syntax
    2. Convert the directive's controller/linking function into Angular 2 syntax/semantics
    3. Use ng-upgrade to export the directive (now a Component) as an Angular 1 component (this is needed if you wish to call the new Angular 2 component from an Angular 1 template)
  3. Pick a service which you would like to migrate
    1. Most services should require minimal to no change.
    2. Configure the service in Angular 2
    3. (optionally) re-export the service into Angular 1 using ng-upgrade if it's still used by other parts of your Angular 1 code.
  4. Repeat steps #2 and #3 in an order convenient for your application
  5. Once no more services or components need to be converted, drop the top-level Angular 1 bootstrap and replace it with the Angular 2 bootstrap (see the sketch below).
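
To tie the steps together, here's a rough sketch of how bootstrapping might change during and after the migration. The adapter-based bootstrap call is an illustrative assumption based on the design doc, and myApp is a hypothetical module name:

// Illustrative sketch only -- adapter API names are assumptions from the design doc.
// During steps 1-4, the mixed application is bootstrapped through the adapter
// instead of ng-app / angular.bootstrap, so both frameworks run together:
var adapter = new UpgradeAdapter();
adapter.bootstrap(document.body, ['myApp']);

// Step 5: once nothing depends on Angular 1 any more, remove the adapter and
// the Angular 1 library, and bootstrap your root Angular 2 component directly
// with Angular 2's own bootstrap function.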

Note that each individual change can be checked in separately, and the application keeps working throughout, letting you continue to release updates as you wish.

We are not planning to add support for using non-component directives on both sides. We think most of the non-component directives are not needed in Angular 2, as they are supported directly by the new template syntax (e.g. ng-click vs. (click)).

Q&A

I heard Angular 2 doesn't support 2-way bindings. How will I replace them?

Actually, Angular 2 supports two way data binding and ng-model, though with slightly different syntax.

When we set out to build Angular 2, we wanted to fix issues with the Angular digest cycle. To solve this, we chose to create a unidirectional data flow for change detection. At first it was not clear to us how the two-way forms data binding of ng-model in Angular 1 would fit in, but we always knew that we had to make forms in Angular 2 as simple as forms in Angular 1.

After a few iterations we managed to fix what was broken with multiple digests and still retain the power and simplicity of ng-model in Angular 1.

Two-way data binding has a new syntax: [(property-name)]="expression", to make it explicit that the expression is bound in both directions. Because for most scenarios this is just a small syntactic change, we expect migration to be easy.

As an example, if in Angular 1 you have: <input type="text" ng-model="model.name" />

You would convert to this in Angular 2: <input type="text" [(ng-model)]="model.name" />

What languages can I use with Angular 2?

Angular 2 APIs fully support coding in today's JavaScript (ES5), the next version of JavaScript (ES6 or ES2015), TypeScript, and Dart.

While it's a perfectly fine choice to continue with today's JavaScript, we'd like to recommend that you explore ES6 and TypeScript (which is a superset of ES6) as they provide dramatic improvements to your productivity.

ES6 provides much-improved syntax and interoperable standards for common libraries like promises and modules. TypeScript gives you dramatically better code navigation, automated refactoring in IDEs, documentation, error finding, and more.

Both ES6 and TypeScript are easy to adopt as they are supersets of today's ES5. This means that all your existing code is valid and you can add their features a little at a time.

What should I do with $watch in our codebase?

tl;dr: $watch expressions need to be moved into declarative annotations. Those that don't fit there should take advantage of observables (reactive programming style).

In order to gain speed and predictability, in Angular 2 you specify watch expressions declaratively. The expressions are either in HTML templates and are auto-watched (no work for you), or have to be declaratively listed in the directive annotation.

One additional benefit from this is that Angular 2 applications can be safely minified/obfuscated for smaller payload.

For inter-application communication, Angular 2 offers observables (reactive programming style).

What can I do today to prepare myself for the migration?

Follow the best practices and build your application using components and services in Angular 1 as described in the AngularJS Style Guide.

Wasn't the original upgrade plan to use the new Component Router?

The upgrade plan that we announced at ng-conf 2015 was based on upgrading a whole view at a time and having the Component Router handle communication between the two versions of Angular.

The feedback we received was that, while this was indeed incremental, it wasn't incremental enough. We went back and redesigned, arriving at the plan described above.

Are there more details you can share?

Yes! In the Angular 1 to Angular 2 Upgrade Strategy design doc.

We're working on a series of upcoming posts on related topics including:

  1. Mapping your Angular 1 knowledge to Angular 2.
  2. A set of FAQs on details around Angular 2.
  3. Detailed migration guide with working code samples.

See you back here soon!


Posted by Ido Green, Developer Advocate

There is no higher form of user validation than having customers support your product with their wallets. However, the path to a profitable business is not necessarily an easy one. There are many strategies to pick from and a lot of little things that impact the bottom line. If you are starting a new business (or thinking how to improve the financial situation of your current startup), we recommend this course we've been working on with Udacity!

This course blends instruction with real-life examples to help you effectively develop, implement, and measure your monetization strategy. By the end of this course you will be able to:

  • Choose & implement a monetization strategy relevant to your service or product.
  • Set performance metrics & monitor the success of a strategy.
  • Know when it might be time to change methods.

Go try it at: udacity.com/course/app-monetization--ud518

We hope you will enjoy and earn from it!


Posted by Thomas Park, Senior Software Engineer, Google BigQuery

Many types of computations can be difficult or impossible to express in SQL. Loops, complex conditionals, and non-trivial string parsing or transformations are all common examples. What can you do when you need to perform these operations but your data lives in a SQL-based Big data tool? Is it possible to retain the convenience and speed of keeping your data in a single system, when portions of your logic are a poor fit for SQL?

Google BigQuery is a fully managed, petabyte-scale data analytics service that uses SQL as its query interface. As part of our latest BigQuery release, we are announcing support for executing user-defined functions (UDFs) over your BigQuery data. This gives you the ability to combine the convenience and accessibility of SQL with the option to use a familiar programming language, JavaScript, when SQL isn’t the right tool for the job.

How does it work?

BigQuery UDFs are similar to map functions in MapReduce. They take one row of input and produce zero or more rows of output, potentially with a different schema.

Below is a simple example that performs URL decoding. Although BigQuery provides a number of built-in functions, it does not have a built-in for decoding URL-encoded strings. However, this functionality is available in JavaScript, so we can extend BigQuery with a simple User-Defined Function to decode this type of data:


function decodeHelper(s) {
  try {
    return decodeURI(s);
  } catch (ex) {
    return s;
  }
}

// The UDF.
function urlDecode(r, emit) {
  emit({title: decodeHelper(r.title),
        requests: r.num_requests});
}

BigQuery UDFs are functions with two formal parameters. The first parameter is a variable to which each input row will be bound. The second parameter is an “emitter” function. Each time the emitter is invoked with a JavaScript object, that object will be returned as a row to the query.

In the above example, urlDecode is the UDF that will be invoked from BigQuery. It calls a helper function decodeHelper that uses JavaScript’s built-in decodeURI function to transform URL-encoded data into UTF-8.

Note the use of try / catch in decodeHelper. Data is sometimes dirty! If we encounter an error decoding a particular string for any reason, the helper returns the original, un-decoded string.

To make this function visible to BigQuery, it is necessary to include a registration call in your code that describes the function, including its input columns and output schema, and a name that you’ll use to reference the function in your SQL:


bigquery.defineFunction(
   'urlDecode',  // Name used to call the function from SQL.

   ['title', 'num_requests'],  // Input column names.

   // JSON representation of output schema.
   [{name: 'title', type: 'string'},
    {name: 'requests', type: 'integer'}],

   urlDecode  // The UDF reference.
);

The UDF can then be invoked by the name “urlDecode” in the SQL query, with a source table or subquery as an argument. The following query looks for the most-visited French Wikipedia articles from April 2015 that contain a cedilla character (ç) in the title:


SELECT requests, title
FROM
  urlDecode(
    SELECT
      title, sum(requests) AS num_requests
    FROM 
      [fh-bigquery:wikipedia.pagecounts_201504]
    WHERE language = 'fr'
    GROUP EACH BY title
  )
WHERE title LIKE '%ç%'
ORDER BY requests DESC
LIMIT 100

This query processes data from a 5.6 billion row / 380 GB dataset and generally runs in less than two minutes. The cost? About $1.37 at the time of this writing.

To use a UDF in a query, it must be described via UserDefinedFunctionResource elements in your JobConfiguration request. UserDefinedFunctionResource elements can either contain inline JavaScript code or pointers to code files stored in Google Cloud Storage.
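
For example, the query portion of a jobs.insert request body could look roughly like the sketch below. The field names follow the REST API's JSON representation, and the query string, inline code, and Cloud Storage path are illustrative:

// Sketch of the query configuration inside a jobs.insert request body
// (values are illustrative, not a complete request).
var jobConfiguration = {
  query: {
    query: 'SELECT requests, title FROM urlDecode(...) ORDER BY requests DESC',
    userDefinedFunctionResources: [
      // Inline JavaScript code containing the UDF and its registration call...
      { inlineCode: 'function urlDecode(r, emit) { /* ... */ } bigquery.defineFunction(/* ... */);' },
      // ...or a pointer to a code file stored in Google Cloud Storage.
      { resourceUri: 'gs://my-bucket/url_decode.js' }
    ]
  }
};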

Under the hood

JavaScript UDFs are executed on instances of Google V8 running on Google servers. Your code runs close to your data in order to minimize added latency.

You don’t have to worry about provisioning hardware or managing pipelines to deal with data import / export. BigQuery automatically scales with the size of the data being queried in order to provide good query performance.

In addition, you only pay for what you use - there is no need to forecast usage or pre-purchase resources.

Developing your function

Interested in developing your JavaScript UDF without running up your BigQuery bill? Here is a simple browser-based widget that allows you to test and debug UDFs.

Note that not all JavaScript functionality supported in the browser is available in BigQuery. For example, anything related to the browser DOM is unsupported, including Window and Document objects, and any functions that require them, such as atob() / btoa().

Tips and tricks

Pre-filter input

In our URL-decoding example, we are passing a subquery as the input to urlDecode rather than the full table. Why?

There are about 5.6 billion rows in [fh-bigquery:wikipedia.pagecounts_201504]. However, one of the query predicates will filter the input data down to the rows where language is “fr” (French) - this is about 262 million rows. If we ran the UDF over the entire table and did the language and cedilla filtering in a single WHERE clause, that would cause the JavaScript framework to process over 21 times more rows than it would with the filtered subquery. This equates to a lot of CPU cycles wasted doing unnecessary data conversion and marshalling.

If your input can easily be filtered down before invoking a UDF by using native SQL predicates, doing so will usually lead to a faster (and potentially cheaper) query.

Avoid persistent mutable state

You must not store or access mutable state across UDF invocations for different rows. The following contrived example illustrates this error:


// myCode.js
var numRows = 0;

function dontDoThis(r, emit) {
  emit({rowCount: ++numRows});
}

// The query.
SELECT max(rowCount) FROM dontDoThis(myTable);

This is a problem because BigQuery will shard your query across multiple nodes, each of which has independent V8 instances and will therefore accumulate separate values for numRows.

Expand SELECT *

You cannot execute SELECT * FROM urlDecode(...) at this time; you must explicitly list the columns being selected from the UDF: SELECT requests, title FROM urlDecode(...)

For more information about BigQuery User-Defined Functions, see the full feature documentation.


Posted by Laurence Moroney, Developer Advocate

In this episode of Coffee With a Googler, Laurence meets with Developer Advocate Timothy Jordan to talk about all things Ubiquitous Computing at Google. Learn about the platforms and services that help developers reach their users wherever it makes sense.

We discuss Brillo, which extends the Android Platform to 'Internet of Things' embedded devices, as well as Weave, which is a services layer that helps all those devices work together seamlessly.

We also chat about beacons and how they can give context to the world around you, making the user experience simpler. Traditionally, users need to tell you about their location and other types of context, but with beacons, the environment can speak to you. When it comes to developing for beacons, Timothy introduces us to Eddystone, a protocol specification for Bluetooth Low Energy (BLE) beacons; the Proximity Beacon API, which allows developers to register a beacon and associate data with it; and the Nearby Messages API, which helps your app 'sight' and retrieve data about nearby beacons.

Timothy and his team have produced a new Udacity series on ubiquitous computing that you can access for free! Take the course to learn more about ubiquitous computing, the design paradigms involved, and the technical specifics for extending to Android Wear, Google Cast, Android TV, and Android Auto.

Also, don't forget to join us for a ubiquitous computing summit on November 9th & 10th in San Francisco. Sign up here and we'll keep you updated.


Posted by Larry Yang, Lead Product Manager, Project Tango

At Google I/O, we showed the world many of the cool things you can do with Project Tango. Now you can experience it yourself by downloading these apps on Google Play onto your Project Tango Tablet Development Kit.

A few examples of creative experiences include:

MeasureIt is a sample application that shows how easy it is to measure general distances. Just point a Project Tango device at two or more points. No tape measures or step ladders required.

Constructor is a sample 3D content creation tool where you can scan a room and save the scan for further use.

Tangosaurs lets you walk around and dig up hidden fossils that unlock a portal into a virtual dinosaur world.

Tango Village and Multiplayer VR are simple apps that demonstrate how Project Tango’s motion tracking enables you to walk around VR worlds without requiring an input device.

Tango Blaster lets you blast swarms of robots in a virtual world, and can even work with the Tango device mounted on a toy gun.

We also showed a few partner apps that are now available on Google Play. Break A Leg is a fun VR experience where you’re a magician performing tricks on stage.

SideKick’s Castle Defender uses Project Tango’s depth perception capability to place a virtual world onto a physical playing surface.

Defective Studio’s VRMT is a world-building sandbox designed to let anyone create, collaborate on, and share their own virtual worlds and experiences. VRMT gives you libraries of props and intuitive tools, to make the virtual creation process as streamlined as possible.

We hope these applications inspire you to use Project Tango’s motion tracking, area learning and depth perception technologies to create 3D experiences. We encourage you to explore the physical space around the user, including precise navigation without GPS, windows into virtual 3D worlds, measurement of spaces, and games that know where they are in the room and what’s around them.

As we mentioned in our previous post, Project Tango Tablet Development Kits will go on sale in the Google Store in Denmark, Finland, France, Germany, Ireland, Italy, Norway, Sweden, Switzerland and the United Kingdom starting August 26.

We have a lot more to share over the coming months! Sign-up for our monthly newsletter to keep up with the latest news. Connect with the 5,000 other developers in our Google+ community. Get help from other developers by using the Project Tango tag in Stack Overflow. See what others are creating on our YouTube channel. And share your story on Twitter with #ProjectTango.

Join us on our journey.


Posted by Taylor Savage, Product Manager

We’re excited to announce that the full speaker list and talk schedule has been released for the first ever Polymer Summit! Find the latest details on our newly launched site here. Look forward to talks about topics like building full apps with Polymer, Polymer and ES6, adaptive UI with Material Design, and performance patterns in Polymer.

The Polymer Summit will start on Monday, September 14th with an evening of Code Labs, followed by a full day of talks on Tuesday, September 15th. All of this will be happening at the Muziekgebouw aan ‘t IJ, right on the IJ river in downtown Amsterdam. All tickets to the summit were claimed on the first day, but you can sign up for the waitlist to be notified, should any more tickets become available.

Can’t make it to the summit? Sign up here if you’d like to receive updates on the livestream and tune in live on September 15th on polymer-project.org/summit. We’ll also be publishing all of the talks as videos on the Google Developers YouTube Channel.