Thursday, 15 September 2016

Testing an Alexa Skill

If you have read my previous posts (one and two) over the past few weeks, you may have taken a look at the code
of the skill I wrote: https://github.com/crazycoder1999/EuroSoccer_Alexa_Skill/
This post shows some "tricks" I used to minimize the problems related to testing an Alexa skill.

BlackBox
The black-box testing approach for an Alexa skill is really complicated: it literally means trying, vocally, all the possible inputs that your skill can (and can't) handle.
You can proceed this way, of course, but you should NOT: it is an insane amount of work.

For black-box testing you can use a compatible Echo device or an Echo simulator: https://echosim.io/



Advice #1: Separate the code
The first piece of advice I can give is to separate the business logic from the presentation: separate the code
that exposes the intents of your skill from the functions that answer those intents.

If you read the code of my skill, you will see that index.js contains just the intents and the imports of
the modules.
The main code of the skill is in EuroUtils.js.
This simplifies debugging, and it helps you write unit-test code, as you can see in test.js.

Instead of a single file like test.js, it is better to use one of the available unit-testing
frameworks for Node.js.
If you have Node.js installed, you can test EuroUtils.js by launching node test.js inside the src folder.

Advice #2: Utterances generations
Another thing I didn't like much about Alexa is the way you have to create utterances.txt, because:
- it is too confusing
- it is hard to remember the logical link between utterances and intents

The more complex your skill is, the more complicated it becomes to keep track of them.
To strengthen the logical link between the two, I invented a new file, an XML with some tags that explain which intent answers a given set of utterances.
There is an example here: utterances_groups_sample.xml

I then created a simple Python script that extracts all the utterances and prints them to the console, ready to submit for the skill.
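My script was in Python, but the idea fits in a few lines of any language. Here is a Node sketch (the XML tags, intent names and utterances below are invented for illustration; the real format is in utterances_groups_sample.xml):

```javascript
// flatten_utterances.js - turn grouped utterances back into Alexa's flat
// "IntentName utterance" format. The tags and names here are invented for
// this sketch; see utterances_groups_sample.xml for the real format.
const sample = `
<utterances>
  <group intent="GetMatchesIntent">
    <u>what matches are played today</u>
    <u>today's matches</u>
  </group>
  <group intent="GetGroupIntent">
    <u>which group is {Team} in</u>
  </group>
</utterances>`;

const lines = [];
// Quick regex-based extraction (fine for a small build script, not general XML parsing).
for (const g of sample.matchAll(/<group intent="([^"]+)">([\s\S]*?)<\/group>/g)) {
  for (const u of g[2].matchAll(/<u>([\s\S]*?)<\/u>/g)) {
    lines.push(`${g[1]} ${u[1]}`);
  }
}
console.log(lines.join('\n'));
```

The printed lines are exactly what the developer console expects, while the XML stays readable and grouped by intent.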

Have you got any other advice?
Write in the comments!

Thursday, 4 August 2016

Alexa, explain how Skills work


(Continued from the 1st post)
Amazon's voice assistant is called Alexa, and it can run on the Amazon Echo, Tap, Dot, Fire TV and Fire Stick.
Amazon allows users to improve Alexa's capabilities with the installation of 3rd-party "Skills".
Some Skills are simple enough to be developed even with a very basic programming background.

Amazon is doing a very good job of teaching developers how to build skills: code on GitHub, webinars, tutorials, podcasts, documentation and swaaaag!

Thinking about what people can ask

The developer of a Skill needs to:
- write a list of possible questions a user can ask Alexa for the skill
- develop a list of operations that answer each of those possible questions

Each operation can be connected to multiple questions; each question, instead, has only one operation that can answer it.

The operations are called intents, while the questions are called utterances.

Amazon takes care of all the rest: it recognizes the question and its parameters in the user's speech and connects them to the right operation. The operation is executed and the response is sent back to the user.
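To make the mapping concrete: with the Alexa Skills Kit of that period, a skill declared its intents in a JSON schema and listed the utterances in a flat text file, one "IntentName utterance" pair per line. The intent, slot and utterance texts below are invented for illustration:

```
{
  "intents": [
    { "intent": "GetResultsIntent",
      "slots": [ { "name": "Team", "type": "TEAM_LIST" } ] }
  ]
}

GetResultsIntent what was the result of {Team}
GetResultsIntent how did {Team} play
GetResultsIntent tell me the score of {Team}
```

Three utterances, one intent: many questions map to the same operation, never the other way around.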

Responses can be:
- voice responses
- voice and text/image: the content is available in a mobile app for Android/iOS

The role of the mobile app is to give the user a way to install, delete and search for skills, and to give some information on how to use a specific skill.
The way Alexa works sometimes reminds me of the assistant in the movie "Her".



The mobile app helps the user find the correct questions for a skill, but a very well done skill is one in which, for each intent, the developer has generated a very exhaustive list of possible questions.

Code on Amazon Lambda

Even if you can create and host your skill anywhere on the internet, in any language you want (following specific constraints and guidelines), the easy way to code a skill is to use AWS Lambda.

Most of the code examples released by Amazon on GitHub are Lambda-ready and built with JavaScript/Node.js, so with very few changes your new skill/bot is online.
If you are new to bots, I highly encourage you to start with Lambda, so you can put all your effort into the voice experience.

In the next post I will talk about testing and deployment of the EuroCup skill.
The code is available on my github here.

Monday, 1 August 2016

Attack of the Bots

Bots are a new tech trend in 2016.
Amazon, Facebook, Google, Microsoft, Apple: all the important companies have announced bot support/integration in their products.

Of all the solutions/platforms, the most interesting one for me, initially, was Amazon Alexa, because it is an expandable voice assistant.
(Hey Jeff Bezos! I'm still waiting for the possibility to buy an Echo in Europe!)
I decided to build one bot that works on 2 platforms: Facebook and Amazon.

This post and the ones that follow describe the thoughts and choices I made to build it.

I didn't understand why bots got so much attention from those tech companies until I started figuring out some advantages over a traditional app:

User advantages:
1 - Integration into known software: most of the proposed solutions are integrated into apps or services users already know: Facebook Messenger, Skype, Siri. There is no learning curve: users can rapidly start talking with a bot, with no need to learn and remember actions on a user interface.
2 - Natural interaction: everyone knows how to text and talk. This makes bots attractive also for older people: it is not important whether they can recognize if they are talking to a fake or a real person; the important thing is whether they get the answers they are looking for.
3 - No installation/space needed on the device: no need to install anything, no need to upgrade.
4 - The same uniform experience across devices.

Developer advantages:
5 - Userbase potential: deploying something that becomes part of a known app (like Facebook Messenger) improves the link between the real user and the owner of the bot. In the case of Facebook, each Facebook Page (the page of a shop, for example) can have its own bot that replies to a client. It is not just an anonymous user visiting a website, but a user with their information... very valuable information, if well used.
6 - One language for many platforms: web, mobile web, desktop, mobile. No need to bother with accounts, logins, OAuth, technology problems and so on!

Of course, bots can't replace apps that make deep use of the hardware on our devices: games, multimedia applications, photo applications, apps with complicated interactions and many other categories.
The challenge is also with web pages... but don't forget the interaction model.

Project Euro 2016
At the end of May, I decided to build a bot that gives users information about the European Football Cup 2016, such as:
- matches
- results
- teams
- groups

The bot was available for Amazon Alexa (it was published) and Facebook Messenger (it was not published).
The other important feature of the bot was a common data layer shared by the 2 platforms.



In the next posts I will explain the Amazon Alexa solution.

Thursday, 28 July 2016

Automator (OSX): 2 handy ideas


Automator is a tool shipped with OS X: it allows users to create simple applications that automate manual tasks. You can find some guides on how to use it, like this one.
Even if it is possible to record and save actions made on the computer, I recommend creating the script manually.

In the last year I used this application for 2 scenarios.

Automatic launch applications for a Demo

When you create a demo for a presentation, it is important to be prepared and make no errors: time is very important!
In November I gave a speech with a final "demo".
The demo used a series of applications launched from the terminal:


The workflow, step by step:
  • opens a terminal
  • executes a command in the terminal (using keystrokes)
  • opens a new tab with [command + t]
  • executes another command
  • opens a third tab
  • keystrokes another command
  • opens the last tab, changes the directory and launches a Node script

Automatic Screenshots

Recently I subscribed to an e-learning service with slides available only online.
I wanted to have the same slides offline.
Thanks to Automator it was possible to:
  • record the actions to select a window (the browser) and click on it; a click moves the slides to the next one
  • use screencapture to save a screenshot of the entire screen in a predefined directory (using a bash command to build a unique filename each time)
  • repeat the steps a variable number of times
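The script itself was a short loop; here is a rebuilt sketch of its logic (not the original verbatim: the browser name, slide count and delay below are examples to adapt, and it only runs on OS X):

```shell
#!/bin/bash
# Rebuilt sketch of the screenshot loop (OS X only: osascript + screencapture).
# Browser name, slide count and delay are examples to adapt.
if ! command -v screencapture >/dev/null; then
  echo "screencapture not available: this sketch is OS X only"
  exit 0
fi

SLIDES=50                      # number of slides to capture
OUTDIR="$HOME/Desktop/slides"  # where the screenshots go
mkdir -p "$OUTDIR"

for i in $(seq 1 "$SLIDES"); do
  # Bring the browser to the front and advance to the next slide.
  osascript -e 'tell application "Safari" to activate'
  osascript -e 'tell application "System Events" to key code 124'  # right arrow
  sleep 2                      # give the slide time to render
  # date builds a unique filename each iteration, as in the original bash trick.
  screencapture -x "$OUTDIR/slide_$(date +%Y%m%d_%H%M%S)_$i.png"
done
```

The -x flag keeps screencapture silent, so the only sound during the run is you sipping coffee.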



Thursday, 28 April 2016

One Year with Pebble (Time): a user and a developer perspective

The smartwatch "revolution" didn't happen as the industry expected.

The sales numbers tell us that Samsung, Google and Apple haven't convinced enough people.
Smartwatches are perceived as new toys instead of useful things, and often the disadvantages of having one outweigh the advantages:
- ridiculous battery life
- no killer apps
- ridiculous apps (sending hearts or drawings to someone...)

and they are also expensive.

(Another interesting review from Engadget about the Apple Watch one year on)

With a lot of skepticism, last year I decided to buy one, just to understand the potential of these devices in everyday life.

Since I found it frustrating to charge my watch every night, I decided to become a backer of the new Pebble Time: it guarantees (under some conditions) 7 days of battery life, an always-on display and waterproofing, and IMHO it is designed as a watch first, instead of a wrist-phone.

I'm not a typical smartwatch user... not the one a smartwatch is designed for.
A smartwatch is useful to me if it can cover some microtasks related to TIME, like:
- calendar notifications
- alarms
and it is important that it can work most of the time without depending on the smartphone.

I don't care about app notifications, nor about app integration.
I do like the possibility to develop my own apps and expand the watch's functionality.

BTW, an honorable mention goes to the stock Health app, which tracks my daily sleep and steps: it is an addictive and useful feature.

I also use the smartwatch in the sea during surf sessions, and this is a plus, because I can leave my phone inside my bag on the beach and still receive call notifications.

Watchfaces are a nice idea to change the look of the device but, if you want to preserve battery life, you have to choose wisely: some "cool" watchfaces can drain your battery faster.

Developing a WatchApp: "Work Time"
One thing I do when I arrive at work is keep track of my arrival time, in order to calculate when I can leave the office.
This is a repetitive task, and it can easily be covered by my Pebble!

I developed an app, "Work Time", that vibrates when it is time to go home.
Creating an app for a limited device is an interesting challenge: inputs, CPU, memory and screen are all limited.

Work Time is composed of these screens:
- Estimated Exit: it shows you at what time you can leave
 

The app is configured using these 3 screens in sequence:
- Check In: the time of arrival. The default is the current time.
- Work Time: how long you work
- Break: how long your break is
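The estimate behind the first screen is simple arithmetic: check-in + work time + break. The real app is written in C for the Pebble; here is the same logic sketched in JavaScript:

```javascript
// exit_time_sketch.js - the Work Time arithmetic, sketched in JavaScript
// (the real Pebble app is written in C, but the logic is the same).
function estimatedExit(checkIn, workMinutes, breakMinutes) {
  // checkIn is "HH:MM"; the exit time is check-in + work time + break.
  const [h, m] = checkIn.split(':').map(Number);
  const total = h * 60 + m + workMinutes + breakMinutes;
  const exitH = Math.floor(total / 60) % 24; // wrap past midnight, just in case
  const exitM = total % 60;
  return `${String(exitH).padStart(2, '0')}:${String(exitM).padStart(2, '0')}`;
}

// Check in at 09:12, work 8 hours, 30-minute break -> leave at 17:42.
console.log(estimatedExit('09:12', 8 * 60, 30)); // -> 17:42
```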


When your settings are done, you can close Work Time: it will notify you with a vibration when you can go home, showing a screen like this one:
 

The Pebble documentation is better than that of other platforms I have developed for... but creating a watchapp for Pebble is not that easy, due to:

- the small community around the subject
- hard, log-based debugging
- cloud.pebble.com: the online editor and SDK... sometimes it is a limit
- the C language: good for optimization, bad for complexity


Btw, if you are interested, the code for WorkTime is here: https://github.com/crazycoder1999/worktime

Saturday, 28 November 2015

When an ESP8266 encounters MQTT and an accelerometer

This is a project I built for a presentation at http://milan2015.codemotionworld.com/ on November 21st.
The idea uses an ESP8266 (ESP-04), NodeMCU, MQTT and an accelerometer (ADXL345 / SparkFun) over I2C.
It is an IoT project.
This project started as a "drone project", but the original idea changed over time... anyway, it helped me to:
  • get some experience with MQTT and I2C
  • see how far I can push the ESP8266
It was an interesting challenge!
Check the slides in my previous post to learn more about the project.
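On the consuming side, the demo boiled down to parsing the raw x,y,z payloads published by the ESP8266. A Node sketch of that (the comma-separated payload format and the topic name in the comment are assumptions from my setup; the mqtt package wiring is commented out to keep the sketch self-contained):

```javascript
// accel_consumer_sketch.js - parse "x,y,z" accelerometer payloads coming
// from the ESP8266 over MQTT. Topic name and payload format are examples.
function parseAccel(payload) {
  const parts = payload.toString().split(',').map(Number);
  if (parts.length !== 3 || parts.some(Number.isNaN)) return null; // reject malformed readings
  const [x, y, z] = parts;
  return { x, y, z };
}

// With a real broker this would be wired up via the `mqtt` npm package:
//   const client = require('mqtt').connect('mqtt://localhost');
//   client.subscribe('sensors/adxl345');
//   client.on('message', (topic, msg) => console.log(parseAccel(msg)));

console.log(parseAccel('12,-3,250'));
```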
Feel free to write me if you want to repeat the project and need help.


Tuesday, 24 November 2015

ESP8266 and IOT talk at Codemotion Milan 2015

I was a bit far from the blog... but that is because I was preparing a talk for Codemotion Milan 2015.

I built a demo using NodeMCU, an ESP8266, MQTT, Mosquitto, Node.js and an ADXL345.
In short: reading raw data (x, y, z) from the accelerometer and sending it over MQTT through WiFi.

More info coming veeeery soon... with source code!