In the beginning, a virtual assistant was created…

The glue between speech recognition and auditory responses

Everyone knows of Amazon Echo and Google Home, and there are even a few open-source virtual assistants like Mycroft and Snips.ai. In my opinion, they all suffer from the same deficiency: they aren’t very smart.

I want to be able to talk to my house, and by talk, I mean actually talk. Sure, there are plenty of skills and plug-ins for these platforms, but none of them have impressed me enough to become my primary voice interface to my house. You can hook them into Home-Assistant, and Mycroft falls back to Wolfram Alpha for unknown user intents, but can you really talk to them? If you ask Alexa “How are you doing?”, do you get some predefined response, or does it look at your home and network and respond with their status? No, it doesn’t.

Most people know I hate the cloud; putting your work on “someone else’s machine” is asking for privacy violations, platform shutdowns, and other issues. All of my projects are local first, so right away Amazon Echo, Google Home, and even Siri are off the table. Mycroft and Snips are private by design, but the skills available for each are appalling. For example, Snips has around eight different integrations with Home-Assistant, and almost every one of them is limited to lights, switches, and maybe one or two other domains – the same goes for Mycroft.

I recently installed a machine-learning-centric server in our rack with two CUDA-enabled GPUs specifically for training and inference of machine learning models, so it is only fitting that the platform for my assistant is a learning one. Enter Rasa, a machine-learning chatbot framework. It is definitely a time sink, but it does exactly what I want: no regex patterns for determining user intent (looking at you, Mycroft!), the ability to execute remote code for certain intents, and support for multiple response templates per intent so it doesn’t feel as robotic.

Natural Language Understanding

With Rasa, you define intents and actions and combine them with stories. Intents are exactly what you would expect: what the user wants the assistant to do. For example, you might have an intent named greet whose response is the text “Hello, how are you today?”. Your stories can fork the logic based on the user’s reply: “I’m doing terrible today” could have the bot fetch cute animal pictures from an API that returns random cat photos to try to cheer you up. You get to design the flow of dialog however you see fit.
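Responses live in the domain file, and you can give each one several variations so the bot doesn’t sound repetitive. A minimal sketch of what that looks like (Rasa 1.x calls this section templates:, newer versions call it responses:; the second greeting line is just an illustrative variation):

intents:
  - greet

templates:
  utter_greet:
    - text: "Hello, how are you today?"
    - text: "Hey there! How are things going?"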

How does Rasa determine the user’s intent? Through training. You provide it with as many sample inputs as you can and associate them with the appropriate intent. As you use your bot, your inputs are logged and can be annotated later – annotating, in machine-learning terms, means telling the bot whether it inferred the correct intent from the input. This right here is the time sink: it takes a lot of time to come up with sentences a user might say for every intent you define.
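To give a feel for the training-data format, here are a few made-up examples in Rasa’s Markdown format (the locate_person intent and the person entity mirror the custom action shown later; the exact names in my project may differ):

## intent:greet
- hey
- hello there
- good morning

## intent:locate_person
- where is [Alan](person)
- is [Teagan](person) home right now
- can you find [Alan](person) for me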

Stories

We use SABNZBD to download much of our media, and sometimes I’d like to know if my downloads are done. Before Rasa, I would have to navigate to the SABNZBD web front end to check the queue. With Rasa, I can ask “are my downloads done?” and it will query the SABNZBD API to see whether the queue is empty and report back! If you’re bored, you can even set up intents and responses to play a game – like guess-a-number. The possibilities are endless!

For most intents there’s a single action, but some intents can trigger an entire tree of actions and follow-up intents. For example, if the bot asks the user how they are doing, it will respond differently depending on the answer:

## greet
* greet
  - utter_greet
> check_mood

## user in good mood
> check_mood
* mood_great
  - utter_happy

## user not in good mood
> check_mood
* mood_unhappy
  - utter_cheer_up
  - utter_did_that_help
> check_better

In the example above, when the user says “Hello” or “Hi”, the bot greets them and asks how they are. If the user responds with “Good”, “Awesome”, etc., the bot replies with a positive message like “That’s awesome, is there anything I can do for you?”. However, if the user says “Terrible” or “Awful”, the bot will try to cheer them up – in my case, with cute animal pictures or funny jokes. If that doesn’t work, it keeps picking random responses to try to cheer the user up until they are happy.

Communicating With the House

In addition to the built-in actions, you can build custom actions. By default, these live in actions.py in the configuration directory. If you plan on writing custom actions, definitely spin up a custom action server: without one, the Rasa service has to be restarted every time you change a custom action; with an action server, only the action server needs to be restarted.

The easiest way to spin up a custom action server is via their Docker image. You tell Rasa to talk to the action server by editing the appropriate line in the config.yaml in the project directory. Once it’s running, you can implement actions to your heart’s content. Be warned: Rasa only loads Action subclasses defined in the actions.py file. To work around this, I place each action’s logic in its own Python file in a separate package and define the class itself inside actions.py. For example:

# actions.py
# NOTE: package must start with actions or it can't locate the eddie_actions package

from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

from actions.eddie_actions.location import who_is_home, locate_person


class ActionLocatePerson(Action):
    def name(self) -> Text:
        return "action_locate_person"

    def run(self,
            dispatcher: CollectingDispatcher,
            tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        # Delegate to the logic that lives in the eddie_actions package
        return locate_person(dispatcher,
                             tracker,
                             domain)
# eddie_actions/location.py
import logging
import os
from typing import Any, Dict, List, Text

import requests
from rasa_sdk import Tracker
from rasa_sdk.executor import CollectingDispatcher

_LOGGER = logging.getLogger(__name__)

# NOTE: the entity name and environment-variable name below are assumptions made
# to keep this snippet self-contained; the long-lived Home-Assistant token is
# passed to the action server container via an environment variable (see below)
PERSON_SLOT = "person"
HOME_ASSISTANT_TOKEN = os.environ["HOME_ASSISTANT_TOKEN"]


def locate_person(dispatcher: CollectingDispatcher,
                  tracker: Tracker,
                  domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
    # Pull the person's name out of the latest user message
    person = next(tracker.get_latest_entity_values(PERSON_SLOT), None)

    # Ask Home-Assistant's REST API for the state of the matching person entity
    response = requests.get(
        f"https://automation.prettybaked.com/api/states/person.{str(person).lower()}",
        headers={
            "Authorization": f"Bearer {HOME_ASSISTANT_TOKEN}"
        }
    )

    location = None

    try:
        response.raise_for_status()
        location = response.json().get('state', None)
    except requests.HTTPError as err:
        _LOGGER.error(str(err))

    if not location:
        dispatcher.utter_message(template="utter_locate_failed")
    elif location == "not_home":
        dispatcher.utter_message(template="utter_locate_success_away")
    else:
        dispatcher.utter_message(template="utter_locate_success", location=location)

    return []

I created a long-lived token for Rasa inside Home-Assistant and pass it to the container via an environment variable. I created similar actions for Tautulli (Plex metrics) to report recently added media and SABNZBD (Usenet download client) to answer questions about download status, and I plan to connect it to pfSense and Unifi for network status – “Hey Eddie, how are you today?” “Not so good, network traffic is awfully high right now.”

Chitchat and Other Fun

With the goal of being able to actually talk to your assistant, general chitchat is a must. When you meet someone, there are some pretty common patterns in the conversation: introductions, hobbies, jokes, etc. With Rasa’s slots, introductions are fairly easy to implement: create an introduction intent and add some examples like “Hi there, I’m Teagan” (where Teagan is annotated as the user’s name), have the bot reply with its own name, and continue from there. Eddie, my assistant, definitely has some hobbies:

Every virtual assistant out there has some fun Easter eggs. Any child of the ’80s or ’90s who played games knows some of the iconic cheat codes. Eddie is not a fan of cheating:

Eddie is modeled after the Heart of Gold’s onboard computer from The Hitchhiker’s Guide to the Galaxy. So, of course, it has to have specific knowledge:

Thoughts and Next Steps

Truthfully, it can be very tedious to train your assistant yourself. I highly recommend deploying an instance and sharing it with friends and family. You’ll see the conversations they have had, be able to annotate their intents (or add new ones), fix the actions and responses, and train a better model.

Of course, Rasa is text-based by default. Once I am happy with the defined intents, stories, responses, and flow of dialog, it will need to be integrated with speech-to-text (currently looking at DeepSpeech) and text-to-speech (eSpeak, MaryTTS, or even Mozilla TTS). Keep an eye out for a post about integrating these services with Rasa for a true voice assistant that continually learns!

The Magic Mirror

Disregard that this post is around 4 months after the build…

We had just had our bathrooms remodeled and were looking at a medicine cabinet for the upstairs bathroom. Alan and I had both been wanting to build a magic mirror but never had the motivation or a fixture that would work.

We spent weeks trying to find a medicine cabinet that we liked and that would go with the vanity and countertop, and then we saw this one.

The color and style matched the rest of the bathroom, and the second shelf was the perfect height to allow for the necessary cables and hardware. Losing only one 5-inch section of the middle shelf seemed a small price to pay to design and build something we’d been talking about for years.

The Parts

The Plan

When the medicine cabinet arrived, we evaluated our options: which side the monitor would go on, whether we would reuse the wood backing of the mirror section, how we would deliver power, etc.

We decided the right-hand mirror was a good place to mount the monitor: it was closer to the power outlets, wasn’t too much in the way, and would be visible to anyone using the sinks. As carefully as we could, we removed the mirror and its wood backing from the medicine cabinet. The question of whether we would reuse the wood backing was answered for us – the mirror was glued to the backing too well, and attempting to separate them shattered the mirror.

Alan grew up working in his dad’s framing shop and is quite skilled at it, in both the technical aspects (mat cutter, frame nailer) and the subjective ones (mat color schemes, layers, etc.). This is why we had a 5-foot mat cutter on hand, along with some black foam board that was sturdy enough to use as a backing and dark enough to let as much light as possible reflect off the mirror.

The Build

Once the acrylic arrived, Alan cut the black foam backing to the size of the wood backing originally attached to the mirror, with an exactly sized window where the monitor could sit flush against the mirror. With ATG tape along the edges holding both the acrylic and the foam board, it seemed like we were good to go. The monitor was such a tight fit in the window that we didn’t even bother taping it in for extra support.

We still needed to install some outlets inside the medicine cabinet. While we were waiting for the parts and the motivation, we noticed a couple of large scratches on the acrylic sheet! That’s what we get for trying to save $30 by buying acrylic instead of glass. So we took down the acrylic, the monitor, and the backing, placed an order for Smart Mirror Glass, and waited.

Build #2

Once the glass arrived, we realized that the reflection off the acrylic had been a little distorted compared to the glass – I guess the acrylic just had some surface imperfections. Note: the glass has a slight blue tone compared to the other mirrors, but it is hardly noticeable.

Before mounting the new glass and Pi to the medicine cabinet, I thought it would be a good idea to cut an opening in the back of the medicine cabinet and insert a 2-gang outlet box. We had a couple of 2 AC/2 USB outlets lying around, which would serve perfectly for charging razors and toothbrushes and running the Magic Mirror. Unfortunately, there was no way to get the Romex (standard in-wall electrical cable) to the available outlet without going up into the attic, where there’s barely room to move around and plenty of itchy fiberglass – not to mention Scooby-Doo has taught me there’s probably an Old Man Jenkins up there disguised as a ghoul. So I ran it as high as I could through the vanity to the wall with the outlet and fished it up and out.

Repeating our previous steps, we attached the Smart Mirror glass and foam board to the door frame with ATG tape and pressed the monitor into its little window. Since the monitor had already been inserted and removed once, it didn’t quite have the same snug fit, so for added support we secured it in place with a rubber cement that wouldn’t eat through the foam board. Our monitor has mounting points for a Raspberry Pi as well as a USB port to power it – which means that turning off the monitor turns off the Pi, and vice versa.

Software

We imaged the Raspberry Pi with Raspbian Stretch (primarily because I already had the image on my machine). Once we had set up the OS, connected to WiFi, and enabled SSH remote access, we mounted the Pi on the monitor in the medicine cabinet and closed the door.

We built a panel in Home-Assistant specifically for the MagicMirror, which hides the tabs/sidebar and other extraneous information and uses a dark theme. To display that panel, we needed to install Xorg for a graphical user interface, since up until now the system had been headless. We used the chromium-browser package because it is simple to use and lets you open a URL as an app (removing the address bar, border, etc.).

To keep user roles and purposes separate, we made a special user to run the interface, aptly named mirror. In mirror’s home folder we created .xsession (the file that defines what happens when the X server starts):

#!/bin/sh

#Turn off Power saver and Screen Blanking
xset s off -dpms

#Execute window manager for full screen
exec matchbox-window-manager  -use_titlebar no &

#Execute Browser with options
chromium-browser --disk-cache-dir=/dev/null --disk-cache-size=1 --app=http://$HA_URL:$HA_PORT/lovelace/mirror?kiosk

To make sure this all happens automatically, create a systemd service; we chose to place ours at /etc/systemd/system/information-display.service:

[Unit]
Description=Xserver and Chromium
After=network-online.target nodm.service
Requires=network-online.target nodm.service
Before=multi-user.target
DefaultDependencies=no

[Service]
User=mirror
# Yes, I want to delete the profile so that a new one gets created every time the service starts.
ExecStart=/usr/bin/startx -- -nocursor
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Don’t forget to enable your service with systemctl enable information-display.service and start it with systemctl start information-display.service.

Coming Soon

Adding voice control to the mirror, e.g. “Where is Alan?” or “Activate Night Mode”

Integrating with the National Weather Service

Tired of weather sneaking up on me

Even with the 5-day weather forecast displayed right in my face on my bathroom mirror, I am ashamed to admit how many times a friend or colleague has mentioned an upcoming storm and I was completely unaware. Most recently it happened this morning, in a meeting where the director of my team mentioned hoping his flight would beat the storm. Obviously, a simple forecast just isn’t enough to grasp the scale of how nature intends to fuck with your plans this week.

Weather forecast displayed on my bathroom mirror

Enter: National Weather Service API

The National Weather Service offers a free API for retrieving active alerts as well as alerts for specific ranges in time. Unfortunately, this isn’t integrated directly into Home-Assistant – thankfully, there’s a huge community around Home-Assistant, and I was able to locate a project by Mcaminiti that integrates with the NWS API:

His integration does exactly what it needs to do, but I felt a few aspects could be improved upon:

  • Requires a zone_id retrieved manually from a list on the NWS website
  • Only currently active alerts are consumed
  • The sensor’s state is the number of alerts, with categories broken out in the entity’s attributes

Given my tagline (“Engineering only the highest quality laziness for over 20 years”), is it any surprise that I don’t want to manually retrieve some zone_id from a giant list of states, counties, regions, etc.? Reading through the NWS API specification for the /alerts endpoint, one of the listed parameters is a property named point, which accepts latitude,longitude as a string. In Home-Assistant, many entities have a latitude and longitude: zones, device_trackers, even the core configuration. So there is absolutely no reason to go looking for a zone_id that may or may not even be correct!

Some more interesting parameters on the /alerts endpoint are start and end. The API specification doesn’t do a great job of defining their format, but after some digging it turns out they accept ISO-8601 with a timezone offset, e.g. 2019-11-26T01:08:50.890262-07:00 – don’t forget the timezone offset or you will get an HTTP 400 response! By using these parameters, you can get weather alerts for the future, not just the currently active ones.
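As a rough illustration of how point, start, and end fit together (a sketch, not my component’s code – the coordinates are placeholders, and the User-Agent header is just the identification the NWS asks API clients to provide):

# Sketch: query the NWS /alerts endpoint for alerts over the next three days
from datetime import datetime, timedelta, timezone

import requests

LATITUDE, LONGITUDE = 39.7392, -104.9903  # placeholder coordinates

# Timezone-aware "now", so isoformat() includes the required offset
now = datetime.now(timezone.utc).astimezone()
params = {
    "point": f"{LATITUDE},{LONGITUDE}",
    "start": now.isoformat(),
    "end": (now + timedelta(days=3)).isoformat(),
    "severity": "Severe,Extreme",
}

response = requests.get(
    "https://api.weather.gov/alerts",
    params=params,
    headers={"User-Agent": "magic-mirror (me@example.com)"},  # NWS asks clients to identify themselves
)
response.raise_for_status()

alerts = response.json().get("features", [])
print(alerts[0]["properties"]["headline"] if alerts else "No alerts")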

Having the number of active alerts as the sensor’s state has its use cases, but for me, displaying the most recent weather alert is far more useful. My use case: if there is active or upcoming severe weather, I want it displayed in large letters on my mirror – preventing any more instances of surprise weather.

The Improved NWS Warnings Integration

Building it as a sensor platform allows any number of NWS Warnings sensors. I set up my configuration like this:

sensor:
  - platform: nws_warnings
    name: NWS Warnings
    icon: mdi:alert
    severity:
      - extreme
      - severe
    message_type:
      - alert
      - update
    zone: zone.home
    forecast_days: 3

Here, you can specify the severity of the weather alerts, whether you wish to receive updates along with alerts, and how far into the future you wish to retrieve weather reports. If zone is provided, the request uses the zone’s latitude and longitude as the point of reference when retrieving weather reports. Alternatively, you can specify a location instead of a zone, with an explicit latitude and longitude – in case there is no zone entity for the area you want weather updates about.

Including forecast_days will retrieve any weather reports from the start of the current day to n days later (where n is the value of forecast_days); this allows for advance warning of severe/inclement weather. Omitting forecast_days will retrieve only active weather reports.

Putting it all together

Now that we have our integration with the NWS alerts API, it’s time to make my household aware of any severe/inclement weather in the coming days. I created a Conditional Card on the front end: if the NWS Warnings sensor has something to report, it shows at the top of the MagicMirror interface; otherwise, it is removed from the interface entirely:
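Roughly, the card configuration looks like this (a sketch – the entity id and the inner markdown card are assumptions based on my sensor name):

- type: conditional
  conditions:
    - entity: sensor.nws_warnings
      state_not: "unknown"
  card:
    type: markdown
    content: "{{ states('sensor.nws_warnings') }}"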

Making Home-Assistant Distributed, Modular State Machine Part 3

Minimalist Diagram of Home-Assistant Distributed

Custom Component: remote_instance

Original Component

Lukas Hetzenecker posted this component to the Home-Assistant community forums, but I felt it was lacking in a few places.

Multiple Instances with Z-Wave

I started to notice my Z-Wave network was a little slow and decided to add a slave instance of Home-Assistant with another Z-Wave controller; however, I quickly ran into node-id collisions. Attributes from Z-Wave devices that shared the same node-id were merged, and invoking services on the Z-Wave domain became problematic.

Addressing Attribute Merging

The component accepts an entity_prefix configuration value, intended to prevent entity_id collisions between instances. Expanding on this concept, I made the component prefix the node_id attribute (when present) on any state change event it forwards; this immediately rectified the attribute merging. When a service is invoked against the Z-Wave domain, instances whose prefix doesn’t match the node_id ignore the call, while the appropriate instance recognizes that it is meant to run there, strips the prefix from the node_id, and propagates the service call to its event loop.
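Conceptually, the prefixing logic boils down to something like this rough sketch (not the component’s actual code; the entity_prefix value is illustrative):

# Rough sketch of the node_id prefixing idea (not the component's actual code)
from typing import Any, Dict, Optional

ENTITY_PREFIX = "zwave2_"  # this instance's configured entity_prefix (illustrative)


def prefix_outgoing_state(attributes: Dict[str, Any]) -> Dict[str, Any]:
    """Prefix node_id before forwarding a state change event to the master."""
    if "node_id" in attributes:
        attributes = {**attributes, "node_id": f"{ENTITY_PREFIX}{attributes['node_id']}"}
    return attributes


def filter_incoming_service(service_data: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    """Handle only Z-Wave service calls addressed to this instance; strip the prefix."""
    node_id = str(service_data.get("node_id", ""))
    if not node_id.startswith(ENTITY_PREFIX):
        return None  # another instance owns this node; ignore the call
    return {**service_data, "node_id": node_id[len(ENTITY_PREFIX):]}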

Home-Assistant Distributed API Routes

A HomeAssistantView allows a platform/component to register an endpoint with the Home-Assistant REST API; this is used, for example, by components that rely on OAuth, like Spotify or Automatic. OAuth requires a redirect URI for the component to receive the access token from the service, which means every distributed Home-Assistant instance that needs this kind of configuration would have to be exposed to the web. Exposing a single instance reduces the possible attack surface on your network and simplifies DNS configuration if you use it. Unfortunately, when a HomeAssistantView is registered, no event is sent to the event bus that would allow the master instance to register a corresponding route.

Following the ideas behind Kubernetes Ingress and Redis Sentinel, when a remote instance registers its own API route, it notifies the proxy (the master instance in this case). The master registers the endpoint in its own router, and when a request hits that endpoint, it performs a fan-out search for the instance that can actually answer it. Why the fan-out? Most endpoints are variable-path, e.g. /api/camera_proxy/{entity_id} or /api/service/{domain}/{entity}, which may apply to multiple instances. If an instance responds to the proxied request with a 200, the master registers an exact-match proxy for it. For example, if both the security and appliance instances register routes for /api/camera_proxy/{entity_id} and a request comes in for /api/camera_proxy/camera.front_door, the security instance responds with HTTP 200, so from that point forward any request for /api/camera_proxy/camera.front_door is sent straight to the security instance.
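As a toy sketch of the fan-out idea (not the component’s real code – the instance names and URLs are made up, and a real proxy would also forward headers, methods, and bodies):

# Toy sketch of the fan-out proxy: try every instance until one answers,
# then remember the exact path -> instance mapping for future requests.
from typing import Dict, Optional

import requests

INSTANCES: Dict[str, str] = {  # made-up instance base URLs
    "security": "http://security.local:8123",
    "appliance": "http://appliance.local:8123",
}

EXACT_ROUTES: Dict[str, str] = {}  # exact path -> base URL learned from earlier fan-outs


def proxy_request(path: str) -> Optional[requests.Response]:
    if path in EXACT_ROUTES:
        return requests.get(f"{EXACT_ROUTES[path]}{path}")

    for base_url in INSTANCES.values():
        response = requests.get(f"{base_url}{path}")
        if response.status_code == 200:
            EXACT_ROUTES[path] = base_url  # register an exact-match proxy
            return response
    return None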

Check out the appropriate repositories below: