
Thursday, 15 September 2016

Testing an Alexa Skill

If you have read my previous posts (one and two) over the past weeks, you may have taken a look at the code
of the skill I wrote: https://github.com/crazycoder1999/EuroSoccer_Alexa_Skill/
This post shows some "tricks" I used to minimize the problems related to testing an Alexa skill.

BlackBox
The black-box testing approach for an Alexa skill is really complicated: it literally means trying, vocally, all the possible inputs that your skill can (and can't) handle.
You can proceed this way, of course, but you should NOT: it is an insane amount of work.

For black-box testing you can use a compatible Echo device or an Echo simulator: https://echosim.io/



Advice #1: Separate the code
The first advice I can give is to separate the business logic from the presentation: separate the code
that exposes the Intents of your skill from the functions that answer those Intents.

If you read the code of my skill, you will see that index.js contains just the intents and the imports of
the modules.
The main code of the skill is in EuroUtils.js.
This simplifies debugging and helps with writing unit-test code, as you can see in test.js.

Instead of a single file like test.js for unit testing, it is better to use one of the available unit-testing
frameworks for Node.js.
If you have Node.js installed, you can test EuroUtils.js by launching node test.js inside the src folder.

Advice #2: Utterances generations
Another thing I didn't like much about Alexa is the way you have to create utterances.txt, because:
- it is too confusing
- it is hard to remember the logical link between utterances and Intents

The more complex your skill is, the more complicated it becomes to keep track of them.
In order to make the logical link between the two explicit, I invented a new file, an XML, with some tags that help explain which Intent answers a given set of utterances.
There is an example here: utterances_groups_sample.xml

I then created a simple Python script to extract and print to the console all the utterances to submit for the skill.
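My script is in Python, but the extraction step can be sketched in a few lines of Node.js as well; the tag names used here (group, u) are assumptions for illustration, not the actual format of utterances_groups_sample.xml:

```javascript
// Sketch of the extraction step, in Node.js rather than the original Python.
// The XML shape is an assumption based on the idea described above:
// each <group intent="..."> wraps the utterances that map to one Intent.
const sample = `
<utterances>
  <group intent="GetResultsIntent">
    <u>the results of {Team}</u>
    <u>how did {Team} play</u>
  </group>
  <group intent="GetGroupIntent">
    <u>the standings of group {Group}</u>
  </group>
</utterances>`;

// Walk each group and prefix every utterance with its intent name,
// producing the flat "IntentName utterance" lines Amazon expects.
const groupRe = /<group intent="([^"]+)">([\s\S]*?)<\/group>/g;
const uttRe = /<u>([\s\S]*?)<\/u>/g;

const lines = [];
let g;
while ((g = groupRe.exec(sample)) !== null) {
  let u;
  while ((u = uttRe.exec(g[2])) !== null) {
    lines.push(`${g[1]} ${u[1].trim()}`);
  }
}
console.log(lines.join('\n'));
```

The printed lines can be pasted directly into the utterances box of the skill configuration, which keeps the XML file as the single place where the Intent/utterance link is maintained.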

Have you got any other advice?
Write in the comments!

Thursday, 4 August 2016

Alexa, explain to me how Skills work


(Continue from the 1st Post)
Amazon's voice assistant is called Alexa, and it runs on Amazon Echo, Tap, Dot, Fire TV and Fire TV Stick.
Amazon allows users to extend Alexa's capabilities by installing third-party "Skills".
Some Skills are simple enough to be developed even with a very basic programming background.

Amazon is doing a very good job of teaching developers how to build skills: code on GitHub, webinars, tutorials, podcasts, documentation and swaaaag!

Thinking about what people can ask

The developer of a Skill needs to:
- write a list of the possible questions a user can ask Alexa for the skill
- develop a list of operations that answer each of those possible questions

Each operation can be connected to multiple questions; each question, instead, has only one operation that can answer it.

The operations are called Intents, while the questions are called utterances.

Amazon takes care of all the rest: it recognizes the question and its parameters in the user's speech and connects them to the right operation. The operation is executed and the response is sent back to the user.
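In code, this boils down to a dispatch on the intent name: Amazon has already resolved the speech to an intent plus its slot values before your code runs. The intent, slot and phrases below are hypothetical, not taken from a real skill:

```javascript
// Rough sketch of intent dispatch in a Node.js skill (names are hypothetical).
// An utterances file would map spoken questions to GetResultsIntent, e.g.:
//
//   GetResultsIntent the results of {Team}
//   GetResultsIntent how did {Team} play
//
// At runtime the skill only receives the intent name and slot values:
function handleRequest(request) {
  if (request.type === 'IntentRequest') {
    switch (request.intent.name) {
      case 'GetResultsIntent': {
        const team = request.intent.slots.Team.value;
        return { outputSpeech: `Here are the results for ${team}.` };
      }
      default:
        return { outputSpeech: "Sorry, I can't help with that." };
    }
  }
  // Anything else is treated as the opening of the skill.
  return { outputSpeech: 'Welcome to the skill.' };
}

const response = handleRequest({
  type: 'IntentRequest',
  intent: { name: 'GetResultsIntent', slots: { Team: { value: 'Italy' } } },
});
console.log(response.outputSpeech); // Here are the results for Italy.
```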

Responses can be:
- voice responses
- voice and text/images: the content is available in a mobile app for Android/iOS

The role of the mobile app is to give the user a way to install, delete and search for skills, and to give some information on how to use a specific skill.
The way Alexa works sometimes reminds me of the assistant in the movie "Her".



The mobile app helps the user find the correct questions for the skill, but a very well done skill is one in which, for each specific Intent, the developer has generated a very exhaustive list of possible questions.

Code on Amazon Lambda

Even if you can create and host your skill anywhere on the internet, in any language you want, under specific constraints and guidelines, the easy way to code a skill is to use AWS Lambda.

Most of the code examples released by Amazon on GitHub are Lambda-ready and built with JavaScript/Node.js, so with very few changes your new skill/bot is online.
If you are new to bots, I highly encourage you to start with Lambda, so you can put all your effort into the voice experience.

In the next post I will talk about testing and deployment of the EuroCup skill.
The code is available on my github here.

Monday, 1 August 2016

Attack of the Bots

Bots are a new tech trend of 2016.
Amazon, Facebook, Google, Microsoft, Apple: all the important companies have announced bot support/integration in their products.

Of all the solutions/platforms, the most interesting one for me, initially, was Amazon Alexa, because it is an expandable voice assistant.
(Hey Jeff Bezos! I'm still waiting for the possibility to buy an Echo in Europe!)
I decided to build one bot that works on two platforms: Facebook and Amazon.

This post and the ones that follow are a description of the thoughts and choices I made to build it.

I didn't understand why bots got so much attention from those tech companies until I started figuring out some advantages over a traditional app:

User advantages:
1 - Integration into known software: most of the proposed solutions are integrated into a well-known app or service: Facebook Messenger, Skype, Siri. Users already know them: no learning curve; they can rapidly start to talk with a bot, with no need to learn and remember actions in a user interface.
2 - Natural interaction: everyone knows how to text and talk. This makes bots attractive for elderly people too: it is not important whether they can recognize that they are talking with a fake or a real person; the important thing is whether they get the answers they are looking for.
3 - No installation/space needed on the device: no need to install anything on their device. No need to upgrade.
4 - The same uniform experience across devices

Developer advantages:
5 - Userbase potential: deploying something that becomes part of a known app (like Facebook Messenger) improves the link between the real user and the owner of the bot. In the case of Facebook, each Facebook Page (the page of a shop, for example) can have its own bot that replies to a client. It is not just an anonymous user who watches a website, but a user with their information... very valuable information, if well used.
6 - One language for many platforms: web, mobile web, desktop, mobile. No need to bother with accounts, logins, OAuth, technology problems and so on!

Of course, bots can't replace apps that make deep use of the hardware on our devices: games, multimedia applications, photo applications, apps with complicated interactions and many other categories.
The challenge is also with web pages... but don't forget the interaction model.

Project Euro 2016
At the end of May, I decided to build a bot that gives users information about the European Football Cup 2016, information like:
- matches
- results
- teams
- groups

The bot was available for Amazon Alexa (it was published) and Facebook Messenger (it was not published).
The other important feature of the bot was a common data layer available to both platform-specific front ends.
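The idea of the common data layer can be sketched like this; the data and function names are illustrative, not from the real bot:

```javascript
// Sketch of the common-data-layer idea: one module owns the data,
// and each platform front end (Alexa, Messenger) only formats it.
// Data and function names are illustrative, not from the real bot.
const euroData = {
  matches: [
    { home: 'Portugal', away: 'France', stage: 'Final', score: '1-0' },
  ],
  resultOf(team) {
    return this.matches.find((m) => m.home === team || m.away === team);
  },
};

// Alexa front end: turns the shared data into speech.
function alexaAnswer(team) {
  const m = euroData.resultOf(team);
  return m
    ? `${m.home} ${m.score} ${m.away}, in the ${m.stage}.`
    : `No match found for ${team}.`;
}

// Messenger front end: turns the same data into a text message.
function messengerAnswer(team) {
  const m = euroData.resultOf(team);
  return m
    ? `${m.home} vs ${m.away} (${m.stage}): ${m.score}`
    : `No match found for ${team}.`;
}

console.log(alexaAnswer('France'));     // Portugal 1-0 France, in the Final.
console.log(messengerAnswer('France')); // Portugal vs France (Final): 1-0
```

Keeping the data access in one module means a fix or a new data source benefits both platforms at once, while each front end stays free to speak its own dialect.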



In the next posts I will explain the Amazon Alexa solution.

Monday, 15 June 2015

Android + Gradle: Build different APKs with different dependencies

Introduction
Recently, I switched my projects to Android Studio.
Two years ago, when Google announced the new IDE, I was skeptical: Android Studio was buggy and Gradle a new, complicated tool to learn.
Today I'm a happy user of Android Studio.

In this tutorial, I will talk about my experience with Gradle and some useful customizations I made to the build process of my app, called Qoffee.

The problem
Qoffee is available on the Play Store and the Amazon Appstore and allows users to find the best coffee in town.
One of the last features I added was support for Android Wear: now it is possible to search for coffees from the smartwatch!

With the new features:
- the apk grew from 1.2 MB to 3.9 MB
- the new apk also contains the apk to install on the smartwatch
- there is a new service, precisely a WearableListenerService: this service is the one that talks to the watch.

The new classes/features were not useful in the apk for the other app stores (e.g. Amazon).
With the new solution, it is now possible to automatically build two separate apks:
- one for the Play Store, with all the features (size: 3.9 MB)
- one for the other stores, without Android Wear and Google Play Services support (size: 1.2 MB)

Remove unused files: Separate the Code and merge manifests
I searched for a way to create two different builds of my app, in order to create a custom manifest and to remove the WearableListenerService from the non-Play-Store apk.
In the src folder of my project in Android Studio I created two new "Java Folders":
- playstore
- genericstore

Create a new Java Folder


New project tree


The original source folder of my project was called "main".
I created two more AndroidManifest.xml files and placed them in the new playstore and genericstore folders, like this:
- the AndroidManifest.xml in the main folder contains the data common to both apks, e.g. all the common activities
- the AndroidManifest.xml in the playstore folder contains only information specific to the Play Store apk, e.g. the reference to the WearableListenerService class and the keys for Play Services
- the AndroidManifest.xml in the genericstore folder contains the information for the non-Play-Store apk: it is mostly empty

See the differences in the following image:

Differences between Manifest

After this operation, I created two product flavors in the Gradle file of my app, like this:

android {
     compileSdkVersion 20
     buildToolsVersion '20.0.0'
     productFlavors {
          playstore{
               applicationId "com.andrea.degaetano.coffelover.playstore"
          }
          genericstore{
               applicationId "com.andrea.degaetano.coffelover.genericstore"
          }
     }
… … (other unchanged configurations here..)
}

The new build.gradle file generated four new "Build Variants" in Android Studio.
This new menu allows the developer to select which kind of build to generate:

All my modules.. and the selected variant
you can change the selected variant on any module

At this point, if you try to export the apks, the genericstore build fails, because the Android Wear apk is embedded in the genericstore apk, and the package of the genericstore variant is different from the one in the Android Wear apk.
Not only that: the genericstore apk is still 3.9 MB.

In order to remove the Android Wear dependency from the generic apk, I changed the dependencies of my application from:

dependencies {
     compile project(':parseLoginUI')
     compile 'com.android.support:support-v4:20.0.0'
     compile 'com.google.android.gms:play-services-wearable:7.5.0'
     compile 'com.android.support:appcompat-v7:20.0.0'
     compile files('libs/android-async-http-1.4.6.jar')
     wearApp project(':qoffeewear')
}

to

dependencies {
     compile project(':parseLoginUI')
     compile 'com.android.support:support-v4:20.0.0'
     playstoreCompile 'com.google.android.gms:play-services-wearable:7.5.0'
     compile 'com.android.support:appcompat-v7:20.0.0'
     compile files('libs/android-async-http-1.4.6.jar')
     playstoreWearApp project(':qoffeewear')
}

Important: the package of the Android Wear app is the same as that of the playstore variant: com.andrea.degaetano.coffelover.playstore