
How to Build a Slack Skill for the Amazon Echo

This is part of a running commentary for a project that our Labs team is working on. We are interested in non-visual user interfaces: broadly, interfaces that consider people's other modes of interacting with the world, such as speaking, hearing, feeling, and gesturing. So we decided to start playing around with the Amazon Echo.

We began our exploration into the capabilities of the Amazon Echo a while back, when we created a skill for Alexa, the voice service that powers the Echo. The first Alexa skill we developed enabled her to tell us which food trucks are outside our DC office during lunchtime. Since that customization went well, we decided to see if we could have her interface with our beacon-enabled MD Office app. Here's a detailed post about the mobile app we created to monitor the beacons we placed throughout our offices. Once the app detected a beacon, it would post to Slack, identifying the specific location of someone in the office.

We thought it might be interesting to expand Alexa's abilities by integrating her with our MD Office app, so that when asked, she could tell us the last room someone was in at either our DC or Louisville office. Generally, the first step would be to set up the data the skill needs in DynamoDB. Fortunately, we already had a DynamoDB table from the development of our MD Office app, which was architected to leverage a number of AWS services. With that table in place, the new Alexa skill simply connects to the database and, when prompted, returns the location of the requested person. The lookup relies on a simple correlation: each office location was assigned a unique ID in the MD Office app, where a location ID of 0 corresponds to Louisville and a location ID of 1 corresponds to DC, and a similar correlation was applied to the beacon IDs.
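To make that correlation concrete, here is a minimal sketch of the mappings in Python. The location IDs 0 and 1 come from the MD Office app; the specific beacon labels shown are placeholders for illustration.

```python
# Mapping of the MD Office app's numeric IDs to human-readable names.
# Location IDs 0 (Louisville) and 1 (DC) come from the app; the beacon
# labels below are illustrative placeholders.
locationIDToName = {
    0: "Louisville",
    1: "DC",
}

beaconIDToName = {
    0: "conference room",
    1: "kitchen",
}
```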

Based on our experience from the previous Alexa project, we created an intent called TellMeLocationIntent. This intent was then paired with a handler, so that it would be initiated when someone said any of the following phrases.
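As a rough illustration, the sample utterances pair the intent name with phrasings built around the {Person} slot, along these lines:

```
TellMeLocationIntent where is {Person}
TellMeLocationIntent where {Person} is
TellMeLocationIntent tell me where {Person} is
TellMeLocationIntent what room is {Person} in
```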

{Person} is a variable that represents the name of the individual being asked about. We created a function called getUserName, which we called inside the handler, and defined the parameters used to query the database. The function pulls names from the table SlackBeaconUserInfo, which includes columns for both full names and nicknames.
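A minimal sketch of that lookup, assuming the skill runs as a Python Lambda function using boto3 and that the table exposes UserID, FullName, and Nickname attributes (the real column names may differ):

```python
import boto3

# Connect to the existing DynamoDB table from the MD Office app.
dynamodb = boto3.resource("dynamodb")
user_table = dynamodb.Table("SlackBeaconUserInfo")

def getUserName():
    """Pull every known full name and nickname from SlackBeaconUserInfo."""
    response = user_table.scan(
        ProjectionExpression="UserID, FullName, Nickname"
    )
    return response["Items"]
```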

We then queried the table again, so that if, for example, someone were to ask "where is Evan", the function getUserFromName would search the table SlackBeaconUserInfo and look for matches associated with the name Evan. If a match is found, a UserID is returned, and the function getCurrentLocationFromUser queries the table SlackBeaconUserInfo. Each row in that table has the following data: UserID, BeaconID, LocationID, and LastSeenAtTime. The function getCurrentLocationFromUser matches the UserID to the correct row and returns the corresponding information. From there, we match that data against the location and beacon names provided by the initial query and stored in the locationIDToName and beaconIDToName variables, which gives us the location of the specified user.
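Continuing the sketch, the two lookup functions might look like the following; the attribute names and the assumption that UserID is the table's partition key are ours.

```python
def getUserFromName(spoken_name, users):
    """Match the name Alexa heard against each user's full name and nickname."""
    spoken = spoken_name.lower()
    for user in users:
        if spoken in (str(user.get("FullName", "")).lower(),
                      str(user.get("Nickname", "")).lower()):
            return user["UserID"]
    return None

def getCurrentLocationFromUser(user_id):
    """Fetch the row recording where the user was last seen."""
    response = user_table.get_item(Key={"UserID": user_id})
    item = response.get("Item")
    if item is None:
        return None
    return {
        "location": locationIDToName.get(int(item["LocationID"])),
        "beacon": beaconIDToName.get(int(item["BeaconID"])),
        "lastSeen": item["LastSeenAtTime"],
    }
```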

The last step is to construct the speechOutput response that tells Alexa how to reply to our requests. If we were to ask, "Alexa, where is Carson (our head of research and development)?", then, depending on where he is, Alexa could respond with, "Carson is in the Louisville office near the conference room at 9:48 AM December 20, 2015."
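Putting it together, a sketch of how the speechOutput string could be built and wrapped in the standard JSON structure the Alexa service expects (the exact wording and helper names here are illustrative):

```python
def buildSpeechOutput(person_name, location_info):
    """Compose the sentence Alexa speaks back to the user."""
    if location_info is None:
        return "Sorry, I don't know where {} is right now.".format(person_name)
    return "{} is in the {} office near the {} at {}.".format(
        person_name,
        location_info["location"],
        location_info["beacon"],
        location_info["lastSeen"],
    )

def buildResponse(speech_output):
    """Wrap the text in the response format the Alexa service expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_output},
            "shouldEndSession": True,
        },
    }
```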

The previous work we had done building Alexa skills proved very helpful for this project. We'll continue to explore the limitations and capabilities of Alexa. One project we may try in the future is integrating our Bus Stop DC app with the Echo, so that as a preferred bus approaches the stop near our DC office, Alexa would let us know. We will also continue to explore the implications of IoT and the Echo.

Explore how our expertise can benefit your project. Contact us for a consultation.