
Introduction

The Google Assistant and other Natural Language Understanding platforms are changing how users interact with their technology. Whether it's handling simple tasks such as setting a timer, or more complex tasks like adjusting your thermostat, the Google Assistant has you covered. In this post, we will step through how to build your own Assistant application to control an Android Things device that waters your plants by simply asking Google to do so.

Android Things Peripherals

The Android Things device demonstrated in this application has two purposes. The first is to accept Google Assistant commands by saying "Hey CapTech Assistant" as demonstrated in my previous blog post. The second is to turn on and off a water pump from a Google Action command received by any Google Assistant enabled device. Both features can live independently from one another, but it's important to understand that you can build custom devices to be controlled by the Assistant, make requests to the Assistant, or both. 

The base portion of this demo application is built on my previous blog post. In that post I explain how to wire the lights that indicate an Assistant request is being made by a user. For more information on lighting and GPIO peripheral usage, please also find my colleague's post on the subject.

For this post, we have an additional GPIO pin being used to control a pump. The pump is a 12V DC peristaltic pump and can be found on Amazon. Because the pump requires a minimum of 12V, we will use an external power source to power it. For my purposes, I used a power supply like the one found here. To enable the Android Things device to turn the pump on and off, we will use a 5V relay module such as the one found here.

To wire the pump to the relay, connect the neutral wire from your power supply to one end of the pump, then connect another wire from the other end of the pump into the left side of the relay as pictured below. When the Android Things device flips the relay, the circuit is completed and the pump engages, much like flipping a light switch in your home.
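The control flow here can be sketched in plain Java by modeling the relay as a simple switch, so the pump logic can be exercised off-device. The RelaySwitch and Pump names are illustrative, not taken from the demo app; on real hardware, set would write the GPIO pin state.

```java
// Illustrative sketch: the relay is modeled as a simple switch so the
// pump control flow can be exercised without Android Things hardware.
interface RelaySwitch {
    void set(boolean closed); // true closes the circuit, engaging the pump
}

class Pump {
    private final RelaySwitch relay;
    private boolean running = false;

    Pump(RelaySwitch relay) {
        this.relay = relay;
    }

    // Mirrors pump.toggleGPIO(boolean) in the demo: flip the relay on or off.
    void toggleGPIO(boolean on) {
        relay.set(on);
        running = on;
    }

    boolean isRunning() {
        return running;
    }
}
```

Keeping the hardware behind an interface like this also makes it easy to swap in a fake relay when testing the watering logic.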

RELAY IMAGE

The general wiring of the relay and Raspberry Pi for the attached demo application's use can be found below.

BOARD WIRING

Firebase Database

In the attached demo application, we will use Firebase Realtime Database to sync our pump with the Google Assistant so that our Action can turn on and off the water for a set amount of time.

To get started, set up a Firebase Database with the following JSON structure:

{
  "bonsai_status" : {
    "some_id" : {
      "status_date" : 0,
      "watering" : false,
      "watering_duration" : 0
    }
  }
}
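The demo app mirrors this structure with a plain model class that Firebase can populate. A minimal sketch is below; the field names are assumptions based on the JSON keys above and on the listener code later in this post, and Firebase requires the empty constructor for deserialization.

```java
// Sketch of a model class matching the bonsai_status JSON structure.
// Field names are assumptions based on the JSON keys; Firebase Realtime
// Database populates the object via the empty constructor and public fields.
class BonsaiStatus {
    public long statusDate;        // "status_date": epoch millis of the update
    public boolean watering;       // "watering": whether the pump should run
    public long watering_duration; // "watering_duration": run time in seconds

    public BonsaiStatus() {
        // Required empty constructor for Firebase deserialization.
    }
}
```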

Once you have created the Firebase Realtime Database, we will turn off the authentication requirements for the purposes of our application (open read/write rules are fine for a demo, but should not be used in production). To do so, select the "Rules" tab and implement the following rules:

{
  "rules": {
    ".read": "true",
    ".write": "true"
  }
}

Next, we are ready to add the Firebase reference to our Android Things application.

First we need to add the appropriate frameworks to our build.gradle file.

compile 'com.google.firebase:firebase-core:11.0.4'
compile 'com.google.firebase:firebase-database:11.0.4'

Next, in the CapTechAssistant Activity, grab a reference to the Firebase Database.

FirebaseDatabase assistantDatabase = FirebaseDatabase.getInstance();
DatabaseReference bonsaiData = assistantDatabase.getReference("bonsai_status");

Lastly, we want to listen for new children added under our bonsai_status node. Any time a new child is added, our callback will be hit and we will turn the pump on or off accordingly.

bonsaiData.addChildEventListener(new ChildEventListener() {
    @Override
    public void onChildAdded(DataSnapshot dataSnapshot, String s) {
        currentStatus = BonsaiStatus.getStatusFromSnapShot(dataSnapshot);
        pump.toggleGPIO(currentStatus.watering);
        if (wateringTimer != null) {
            wateringTimer.cancel();
            wateringTimer = null;
        }

        if (currentStatus.watering) {
            //water for a specific amount of time
            wateringTimer = new CountDownTimer(currentStatus.watering_duration * 1000, currentStatus.watering_duration) {
                @Override
                public void onTick(long millisUntilFinished) {
                    //no-op
                }

                @Override
                public void onFinish() {
                    pump.toggleGPIO(false);
                    wateringTimer = null;

                    //once we are done, let Firebase know
                    BonsaiStatus stopStatus = new BonsaiStatus();
                    stopStatus.watering = false;
                    stopStatus.statusDate = System.currentTimeMillis();
                    stopStatus.watering_duration = 0;
                    bonsaiData.push().setValue(stopStatus);
                }
            };
            wateringTimer.start();
        }
    }

    @Override
    public void onChildChanged(DataSnapshot dataSnapshot, String s) {
        //not using
    }

    @Override
    public void onChildRemoved(DataSnapshot dataSnapshot) {
        //not using
    }

    @Override
    public void onChildMoved(DataSnapshot dataSnapshot, String s) {
        //not using
    }

    @Override
    public void onCancelled(DatabaseError databaseError) {
        //not using
    }
});

API.AI

Now that our Android Things device is synced with our Firebase Realtime Database, we are ready to begin setting up our Google Assistant Action. To do so, we will leverage API.AI to process user voice commands. To get started, I highly recommend you read my colleague's blog on building your first Action with API.AI.

For our Action, we will set up three intents.

Welcome

The welcome intent is used to ask the user what they would like to do once they call up your Action. Our welcome intent will not have any Action attached. When it is hit, it will respond with one of four preconfigured responses asking the user what they would like to do next.

welcome pic

In the above picture, you will notice the "end conversation" box is unchecked, which indicates to our Assistant device that a response is expected from the user.

Watering

The watering intent is what will be hit when the user asks our Action to water their plants. The watering intent will respond to any of the many predetermined requests that the user can make. The intent also expects the user to specify a duration to water the plants. To do so, we will highlight the relevant areas of our predetermined user requests and indicate that they are a "duration" parameter.

duration request

Since our intent expects a duration, we need to prompt the user if they make a request that matches our intent but they don't indicate a duration. To do so, we will add a prompt for our duration parameter.

duration prompt

Next, we will indicate some responses that are applicable to our user's request and let the Action know that we don't expect any more requests from the user by checking the "end conversation" box.

water response

Stop Watering

The stop watering intent does just what its name suggests. It is triggered when our user indicates that they want to turn off the water pump. Again, we will enter some predefined user requests and Assistant responses, and indicate to the Action that we don't expect a follow-up response from the user.

Context

When setting up the intents, you will find boxes to enter Context for the user's requests. Context is used to carry extra data that may be helpful in future requests made by the user. For our purposes, Context is not necessary.

Testing API.AI

You can test your API.AI intents at any time by typing requests in the right pane of the API.AI console. This lets you ensure that your intents match various user requests and respond appropriately.

Cloud Functions

Next, we will enable our API.AI-processed user requests to fire a Google Cloud Function that updates our Firebase Realtime Database to turn our Android Things pump on and off. To do so, we will use Cloud Functions to host some JavaScript that will run in a Node.js environment.

To get started, download and install Node.js and the Firebase CLI if you do not already have them. Once installed, you should be able to run them from your command line like this to check their versions:

node --version
firebase --version

In general, you should always make sure to keep the Firebase CLI up to date with the following command:

npm install -g firebase-tools

Next, create and initialize your Cloud Functions workspace. Create a folder to hold your project:

mkdir firebase-assistant-codelab
cd firebase-assistant-codelab

To authenticate and get access to your existing project:

firebase login

A browser window will appear requesting permissions. Once you have authenticated, initialize your project:

firebase init

You'll now have a "functions" directory ready to hold your code. There is a default index.js, which is the entry point to your Cloud Functions code. Open index.js for editing.

'use strict';

process.env.DEBUG = 'actions-on-google:*';

const Assistant = require('actions-on-google').ApiAiAssistant;
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

const bonsaiStatus = admin.database().ref('bonsai_status');

// API.AI Intent names
const WATER_INTENT = 'water-duration';
const STOP_INTENT = 'stop-watering';

// Contexts
const WATER_CONTEXT = 'water_duration';
const STOP_CONTEXT = 'stop';

// Context Parameters
const DURATION_PARAM = 'duration';
const FULFILLMENT_PARAM = 'fulfillment';

exports.assistantcodelab = functions.https.onRequest((request, response) => {
    const assistant = new Assistant({request: request, response: response});

    let actionMap = new Map();
    actionMap.set(WATER_INTENT, water);
    actionMap.set(STOP_INTENT, stop);
    assistant.handleRequest(actionMap);

    /**
     * Code to start the watering process.
     */
    function water(assistant) {
        const now = new Date().getTime();

        const duration = assistant.getContextArgument(WATER_CONTEXT, DURATION_PARAM).value;
        const numberTime = duration.amount;
        const unitTime = duration.unit;

        let timeInSeconds;
        switch (unitTime) {
            case "h":
                timeInSeconds = numberTime * 60 * 60;
                break;
            case "s":
                timeInSeconds = numberTime;
                break;
            default:
                timeInSeconds = numberTime * 60; // default to the unit being minutes
                break;
        }

        const currentStatus = {
            'status_date': now,
            'watering': true,
            'watering_duration': timeInSeconds
        };
        bonsaiStatus.push().set(currentStatus);
        assistant.tell(request.body.result.fulfillment.speech);
    }

    /**
     * Code to stop the watering process.
     */
    function stop(assistant) {
        const currentStatus = {
            'status_date': new Date().getTime(),
            'watering': false,
            'watering_duration': 0
        };
        bonsaiStatus.push().set(currentStatus);
        assistant.tell(request.body.result.fulfillment.speech);
    }
});

In the above function, you will notice we map two methods to the intents that may come in from the Assistant. Each method is called based on its respective intent. After processing the request, we respond to the user with the response that came in from API.AI. If you choose, you can override the response with one of your own generated in the Cloud Function. To learn more about what you can do with the Cloud Function, please find more information here.
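The unit-to-seconds conversion inside the water() function is easy to sanity-check in isolation. Here is the same logic rewritten as a small standalone Java method; the class and method names are illustrative and not part of the demo app.

```java
// Converts an API.AI duration value (amount plus unit) to seconds,
// mirroring the switch statement in the water() Cloud Function:
// "h" means hours, "s" means seconds, anything else is treated as minutes.
class DurationConversion {
    static long toSeconds(long amount, String unit) {
        switch (unit) {
            case "h":
                return amount * 60 * 60;
            case "s":
                return amount;
            default:
                return amount * 60; // assume minutes
        }
    }
}
```

For example, a request to water for "2 hours" arrives as amount 2 with unit "h" and converts to 7200 seconds, which the Android Things device then multiplies by 1000 for its CountDownTimer.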

After you complete your index.js setup, you are ready to deploy. To do so, navigate to the functions directory in your terminal and run the following commands.

npm install --save actions-on-google

Next deploy the code.

firebase deploy

After the deployment is complete, a URL will be shown. Hang onto this for later.

Webhooks

We are now ready to link our Cloud Function with our API.AI intents. To do so, open the Watering and Stop Watering Intents and under "Fulfillment" check the box for "Use Webhook."

Next in the left side panel, find the "Fulfillment" tab. Enter the URL you obtained in the previous step and ensure the webhook toggle is enabled.

Upon completing the above, you can now test your API.AI requests in the same manner as you did before, except now they should fire your Cloud Function. If you are experiencing issues, you can check your Cloud Function logs for more details.

Assistant Integration

We are finally ready to begin using our functions with our Google Assistant. In the API.AI console, in the left side window select "Integrations." From here you will be able to select from a wide array of platforms. For the purpose of our demo, we will just select "Actions on Google." You will then need to Authorize and link your projects.

From your Google Actions Console, you can set up your application for distribution. Prior to doing so, you can enable test mode, which lets you test on any Assistant-enabled device signed in as the owner of the Google Action.

Demo

Start Watering Demo

welcome pic

Stop Watering Demo

welcome pic

Conclusion

I encourage you to take this demo further and add an intent of your own to check on the current status of your device or see when it last watered your plants. Please find the attached demo application for more information.

Interested in finding out more about CapTech? Connect with us or visit our Career page to learn more about becoming a part of a team that is developing world-class mobile apps for some of the largest institutions in the world.

CapTech is a thought leader in the Mobile and Devices spaces, and has deployed over 300 mobile releases for Fortune 500 companies. Additionally, CapTech was identified by Forrester as one of top Business to Consumer (B2C) Mobile Services Providers in their 2016 Wave.

About the Author

Clinton Teegarden

Clinton Teegarden is a Senior Software Engineer and Manager at CapTech based in the Washington, DC metro area. Clinton is passionate about bringing the greatest technologies in both Mobile and IoT to his clients using proven architecture and design patterns. Clinton specializes in all things Android and has led teams in delivering products for Fortune 500 clients, servicing millions of users.