The Magic Mirror

Disregard that this post is around 4 months after the build…

We had just had our bathrooms remodeled and were looking for a medicine cabinet for the upstairs bathroom. Alan and I had both been wanting to build a magic mirror but never had the motivation or a fixture that would work.

We spent weeks trying to find a medicine cabinet that we liked and that would go with the vanity/countertop, and then we saw this one.

The color and style matched the rest of the bathroom, and the second shelf was at the perfect height to accommodate the necessary cables and hardware. Losing only one 5-inch section of the middle shelf seemed a small price to pay to design and build something we’ve been talking about for years.

The Parts

The Plan

When the medicine cabinet arrived, we evaluated our options: which side would the monitor go on, would we reuse the wood backing of the mirror section, how would we deliver power, etc.

We decided the right-hand mirror was a good place to mount the monitor. It was closer to the power outlets, wasn’t too much in the way and would be visible to anyone using the sinks. As carefully as we could, we removed the mirror and its wood backing from the medicine cabinet. The question of whether we would reuse the wood backing was answered for us: the mirror was glued to the backing too well, and attempting to separate them shattered the mirror.

Alan grew up working in his Dad’s framing shop and is quite skilled at it, in both the technical aspects (mat cutter, frame nailer) and the subjective ones (mat color schemes, layers, etc.). This is why we had a 5-foot mat cutter on hand, along with some black foam board that was sturdy enough to use as a backing and dark enough to let the mirror reflect as much light as possible.

The Build

Once the acrylic arrived, Alan cut the black foam backing to the size of the wood backing originally attached to the mirror, with an exactly sized window where the monitor could sit flush against the mirror. Using ATG tape along the edges to hold both the acrylic and the foam board, it seemed like we were good to go. The monitor was such a tight fit in the window that we didn’t even bother taping it in for extra support.

We needed to install some outlets inside the medicine cabinet. While we were waiting for the parts and motivation, we realized there were a couple of large scratches on the acrylic sheet! That’s what we get for trying to save $30 by buying acrylic instead of glass. So, we took the acrylic, the monitor and the backing down, placed an order for Smart Mirror Glass and waited.

Build #2

Once the glass arrived, we realized that the reflection off the acrylic had been a little distorted compared to the glass – I guess the acrylic just had some surface imperfections. Note: the glass has a slight blue tone compared to the other mirrors, but it is hardly noticeable.

Before mounting the new glass and Pi to the medicine cabinet, I thought it would be a good idea to cut in a 2-gang outlet box at the back of the medicine cabinet. We had a couple of 2 AC/2 USB outlets laying around, which would serve perfectly for charging razors and toothbrushes and running the Magic Mirror. Unfortunately, there was no way to get the Romex (standard in-wall electrical cable, typically 14/2 or 12/2) to the available outlet without going up into the attic, where there’s barely room to move around and plenty of itchy fiberglass – not to mention Scooby-Doo has taught me there’s probably an Old Man Jenkins up there disguised as a ghoul. So, I ran it as high as I could through the vanity to the wall with the outlet and fished it up and out.

Repeating our previous steps, we attached the Smart Mirror glass and foam board to the door frame using ATG tape and forced the monitor into its little window. Since the monitor had been inserted and removed, it didn’t quite have the same snug fit – for added support, we secured the monitor in place with a rubber cement that wouldn’t eat through the foam board. Our monitor had mounts for a Raspberry Pi as well as a USB power source for it – which means if you turn off the monitor, it turns off the Pi, and vice versa.

Software

We imaged the Raspberry Pi with Raspbian Stretch (primarily because I already had the image on my machine). Once we set up the OS appropriately on the Pi, connected it to WiFi and set up SSH remote access, we mounted it on the monitor in the medicine cabinet and closed the door.

We built a panel specifically for the MagicMirror in Home-Assistant, which removes the tabs/sidebar and other extraneous information, with a dark theme set. To access that panel, we needed to install Xorg to provide a graphical user interface, since up until now the system was headless. We used the chromium-browser package because it is simple to use and allows you to open a URL as an app (removing the address bar, borders, etc.).
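If you’re starting from a similar headless image, the install looks something like this sketch – the package list is my assumption for Raspbian Stretch, so adjust for your setup:

sudo apt-get update
# X server, a lightweight window manager and Chromium
sudo apt-get install --no-install-recommends xserver-xorg xinit \
    matchbox-window-manager chromium-browser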

We made a special user to run the interface, keeping user roles and purposes separate, aptly named mirror. In the home folder for mirror we created .xsession (this file defines what happens when the X session starts):

#!/bin/sh

# Turn off power saving and screen blanking
xset s off -dpms

# Execute window manager for full screen
exec matchbox-window-manager -use_titlebar no &

# Execute browser with options
chromium-browser --disk-cache-dir=/dev/null --disk-cache-size=1 --app=http://$HA_URL:$HA_PORT/lovelace/mirror?kiosk

To make sure this all happens automatically, create a systemd service; we chose to place ours at /etc/systemd/system/information-display.service:

[Unit]
Description=Xserver and Chromium
After=network-online.target nodm.service
Requires=network-online.target nodm.service
Before=multi-user.target
DefaultDependencies=no

[Service]
User=mirror
# Yes, I want to delete the profile so that a new one gets created every time the service starts.
ExecStart=/usr/bin/startx -- -nocursor
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Don’t forget to enable your service with systemctl enable information-display.service and start it with systemctl start information-display.service.

Coming Soon

Adding voice control to the mirror, e.g. “Where is Alan?” or “Activate Night Mode”

Integrating with the National Weather Service

Tired of weather sneaking up on me

Even with the 5-day weather forecast displayed right in my face on my bathroom mirror, I am ashamed to admit how many times a friend or colleague has mentioned an upcoming storm and I was completely unaware. The most recent instance was this morning, in a meeting with the director of my team, when he mentioned hoping his flight would beat the storm. Obviously, a simple forecast just isn’t enough to grasp the scale of how nature intends to fuck with your plans this week.

Weather forecast displayed on my bathroom mirror

Enter: National Weather Service API

The National Weather Service offers a free API for retrieving active alerts as well as alerts for specific time ranges. Unfortunately, this isn’t integrated directly into Home-Assistant – thankfully, there’s a huge community around Home-Assistant and I was able to locate a project by Mcaminiti that integrated with the NWS API.

His integration does exactly what it needs to do, but I felt a few aspects could be improved upon:

  • Requires the zone_id to be retrieved manually from a list on the NWS website
  • Only currently active alerts are consumed
  • The sensor’s state is the number of alerts, with categories broken out in the entity’s attributes

Given my tagline, Engineering only the highest quality laziness for over 20 years, is it any surprise I don’t want to manually retrieve some zone_id from a giant list of states, counties, regions, etc.? Reading through the NWS API Specification for the /alerts endpoint, listed in the parameters is a property named point – which accepts latitude,longitude as a string. In Home-Assistant, many entities have a latitude and longitude: zones, device_trackers, even the core configuration. So there is absolutely no reason to go looking for a zone_id that may or may not even be correct!

Some more interesting parameters on the /alerts endpoint are start and end. The API Specification doesn’t do a great job of defining their format but, after some digging, it accepts ISO-8601 with a timezone offset, i.e. 2019-11-26T01:08:50.890262-07:00 – don’t forget the timezone offset or you will get an HTTP 400 response! By utilizing these parameters, you can get weather alerts for the future, not just the currently active ones.
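To illustrate, here’s a minimal sketch of hitting /alerts with point, start and end – the coordinates are placeholders, and the User-Agent value is just an example (the NWS asks that you identify your application):

import requests
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc).astimezone()  # timezone-aware local time
response = requests.get(
    "https://api.weather.gov/alerts",
    params={
        "point": "39.7392,-104.9903",  # placeholder latitude,longitude
        "start": now.isoformat(),  # ISO-8601 with timezone offset
        "end": (now + timedelta(days=3)).isoformat(),
    },
    headers={"User-Agent": "my-magic-mirror (example@example.com)"},
)
response.raise_for_status()
alerts = response.json()["features"]  # alerts come back as GeoJSON features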

Having the number of active alerts as the sensor’s state has its use case, but for my needs, displaying the most recent weather alert is of much greater use. My use case is: if there is active or upcoming severe weather, I want it displayed in large letters on my mirror – preventing any more instances of surprise weather.

The Improved NWS Warnings Integration

Building it as a sensor platform allows any number of NWS Warnings sensors. I set up the configuration as such:

sensor:
  - platform: nws_warnings
    name: NWS Warnings
    icon: mdi:alert
    severity:
      - extreme
      - severe
    message_type:
      - alert
      - update
    zone: zone.home
    forecast_days: 3

Here, you can specify the severity of the weather alerts, whether you wish to receive updates along with alerts and how far into the future you wish to retrieve weather reports. If zone is provided, the request uses the zone’s latitude and longitude as the point of reference when retrieving weather reports. Alternatively, you can specify a location instead of zone, with latitude and longitude – in case there is no zone entity for the place you want weather updates about.
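For example, a sketch of the location alternative – the exact schema is my assumption from the description above, so double-check it against the repository:

sensor:
  - platform: nws_warnings
    name: NWS Warnings
    location:
      latitude: 39.7392     # placeholder coordinates
      longitude: -104.9903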

Including forecast_days will retrieve any weather reports from the start of the current day to n days later (where n is the value of forecast_days); this allows for advance warning of severe/inclement weather. Omitting forecast_days will only retrieve active weather reports.

Putting it all together

Now that we have our integration with the NWS alerts API, it’s time to make my household aware of any severe/inclement weather in the coming days. I created a Conditional Card on the front-end; if the NWS Warnings sensor has something to report, it shows at the top of the MagicMirror interface, otherwise it is completely removed from the MagicMirror interface:
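The card config looks something along these lines – the sensor’s entity_id and its “no alert” state are assumptions for illustration:

type: conditional
conditions:
  - entity: sensor.nws_warnings
    state_not: "none"  # assumed state when there is nothing to report
card:
  type: markdown
  content: "{{ states('sensor.nws_warnings') }}"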

Pushing Object Detection to Home-Assistant with Coral EdgeTPU

Forget the Tensorflow Component

Lots of awesome developers have added image processing components to Home-Assistant’s integration list. In fact, there are now 9 different image processors (a few that do more than just object detection) built right into Home-Assistant. I wrote the first version of the OpenCV image processing component, which was not the best given my lack of experience in Python at the time, but it obviously triggered some ideas in others – which is one of the amazing parts of open source software! However, most computers (and servers) are just not built to perform inference analysis, so the self-hosted components, like the OpenCV integration, just aren’t very efficient.

Enter Coral, the EdgeTPU from Google

Many providers offer cloud-based inference engines – Google has released some solutions for performing inference at the “edge” (i.e. locally): the Google Coral Dev Board and the USB Accelerator (see all products here). The Coral Dev Board is similar to a Raspberry Pi with an onboard TPU, and the USB Accelerator is an external USB 3 TPU. A TPU is a Tensor Processing Unit – hardware specifically designed for processing tensors, or n-dimensional matrices. Tensors are basically mathematical representations of real-world patterns; they are, in basic terms, used for pattern matching. There are some alternatives to Google’s EdgeTPU, like the Intel Neural Compute Stick, but none seem to have the community backing that the EdgeTPU does.

Pushing State to Home-Assistant

My house was built around 1970, and it’s quite obvious that any “repairs” (the term is used loosely) by the previous owners were done without the know-how. The doorbell button looks like it’s from the ’70s, and the Z-Wave Doorbell I experimented with just wasn’t loud enough – my partner works from home and his office is in the basement. We already had a Unifi Camera mounted above the door, so why not let the house tell us when someone was at the door? I implemented the OpenCV integration with Home-Assistant, and then tried the TensorFlow integration; I had to throw an extra 4 cores at the VM to get even semi-reliable results. When it worked, it took a few seconds before it would even trigger a notification – which frustrated our Doordash drivers – and I had no idea which delivery company had dropped off packages, because the driver had already left the frame of view.

Why was the reliability of the integrations such an issue? Well, for one, the processing was done on Xeon processors (not exactly top of the line) and, two, those integrations were polling – only updating when the loop requested their state. I lived with it but hated it.

When I discovered the EdgeTPU, I ordered both a Coral Dev Board and a USB Accelerator; I had plenty of Raspberry Pis laying around and was sure I could put them to use. Of course, the idea got back-logged behind all of my other projects, like implementing my Distributed, Modular State Machine. I finally got around to it this last weekend.

The 1st Pass

I wanted the Raspberry Pi to push the state to Home-Assistant in order to get more immediate results. So the application was designed to consume an RTSP video stream and perform object detection on the frames. Watching the logs, I couldn’t believe how fast it was; each loop (retrieve frame, process, and push the state to Home-Assistant) appeared to take around a second.

The logs, however, were very misleading. While I watched the logs and Home-Assistant, I stepped in front of the camera. It took a couple of seconds for it to detect that a person was in the frame; when I left the view of the camera, it kept reporting a person in the frame for five to six seconds. It was way better than using the Home-Assistant integrations, but it definitely puzzled me.

OpenCV VideoCapture Implementation

The code was written to run 1:1 – one thread per camera. The camera stream was fed to OpenCV’s VideoCapture class and continually looped over while the connection was open. I found the answer on some forums to why I was experiencing such delay: when you call the VideoCapture::read() function, it returns the next frame in the buffer, not the most recent frame. This wouldn’t be much of a problem if your processing could keep up with the frame rate of the video stream; if you can’t keep up with the frame rate, you experience lag, as I did.
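In other words, the naive loop looked something like this sketch – the stream URL and the processing step are placeholders:

import cv2

cap = cv2.VideoCapture("rtsp://camera.local/stream")  # placeholder URL
while cap.isOpened():
    ok, frame = cap.read()  # returns the next buffered frame, not the newest
    if not ok:
        break
    process(frame)  # placeholder: if this is slower than the frame rate, lag piles up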

Attempting to work around this limitation, I found you could retrieve the number of frames in the buffer and set the current frame index. Unfortunately, this led to around 4-5 seconds per frame – still better than the Home-Assistant integrations, but completely unacceptable for replacing a doorbell! The answer came from somewhere deep in Stack Overflow (I’ll link it if I can ever find it again).

Fun with Thread Synchronization

Have two threads for each video stream: the first thread continually pops the oldest frame off of the buffer, and the second processes the, hopefully, current frame. Since there’s a shared resource involved, you can’t have both threads popping the VideoCapture’s buffer queue; no, you need to synchronize access to the shared resource, otherwise you run into concurrency issues. Concurrency issues, depending on the context and implementation, could crash your application, cause a thread to grab expired data, or even grab data that mutates later!

So we have two threads per video stream: the “Grabber” thread and the “Processor” thread. Grabbing a frame from the buffer and discarding it takes essentially no time at all, while processing a frame from the buffer could take a bit (the term “bit” is used loosely here). So which thread should be the one to tell the other “Hey dude, it’s my turn!”?

Whenever a thread wants to read from the buffer, it must tell the other “Hol’ up, yo!” to prevent some of the concurrency issues mentioned above. While that thread is chatting away with the buffer, the other thread is waiting… patiently, or impatiently – kinda depends on how late they are to their next appointment. Imagine the UI thread is waiting: all of a sudden the user sees a frozen screen (and most likely bitches loudly to their cube mates). For this reason, threads should quit the chit-chat and let the next thread do what it needs to do!

To accomplish this behavior, we use a shared Lock: a local, domain-specific object that identifies who has the right to access a shared resource across separate threads. A Lock, while similar, is different from a Mutex, which usually relates to system processes – though some people (and languages) use them interchangeably (they probably mean Semaphore). When a thread wants to access a shared resource, it attempts to acquire the Lock, waiting – sometimes impatiently – until it acquires it; precisely the reason a lock should be released as soon as possible.

Back to the topic at hand: when the Processor thread has received its frame from the buffer, it immediately relinquishes the lock so it can process the frame – while the Grabber thread happily gifts the buffer’s oldest frames to the garbage collector – until the Processor needs its next fix from the stream.

What the hell did I just read?

Exactly how to handle a FIFO buffer shared between two threads…

The Grabber thread:

while self._video_stream.isOpened():
    self._lock.acquire()  # Blocking action, wait for the lock to be free
    self._video_stream.grab()  # Pop (and discard) the oldest frame in the buffer
    self._lock.release()  # Put the lock back up for grabs

The Processor thread:

while self._video_stream.isOpened():
    self._lock.acquire()  # Blocking action, wait for the lock to be free
    frame = self._retrieve_frame()
    self._lock.release()  # Put the lock back up for grabs

    if frame is None:
        time.sleep(FRAME_FAILURE_SLEEP)
        continue  # Stop at the next light

    detection_entity = self._process_frame(frame)
    self._set_state(detection_entity)
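For completeness, here’s a self-contained sketch of how the two threads could fit together – the class and method names are mine, not the actual project’s, the back-off value is assumed, and the detection/state-push step is stubbed out:

import threading
import time

import cv2

FRAME_FAILURE_SLEEP = 0.1  # assumed back-off when no frame is available


class StreamWatcher:
    def __init__(self, stream_url):
        self._video_stream = cv2.VideoCapture(stream_url)
        self._lock = threading.Lock()

    def _grab_frames(self):
        # Grabber: continually discard the oldest buffered frame so the
        # processor always decodes something close to "now".
        while self._video_stream.isOpened():
            with self._lock:
                self._video_stream.grab()

    def _process_frames(self):
        # Processor: decode the most recently grabbed frame, then do the
        # (comparatively slow) detection work outside the lock.
        while self._video_stream.isOpened():
            with self._lock:
                ok, frame = self._video_stream.retrieve()
            if not ok:
                time.sleep(FRAME_FAILURE_SLEEP)
                continue
            self._handle_detection(frame)

    def _handle_detection(self, frame):
        pass  # stub: EdgeTPU inference and the Home-Assistant state push go here

    def run(self):
        threading.Thread(target=self._grab_frames, daemon=True).start()
        self._process_frames()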

Making Home-Assistant a Distributed, Modular State Machine: Part 3

Minimalist Diagram of Home-Assistant Distributed

Custom Component: remote_instance

Original Component

Lukas Hetzenecker posted this component to the Home-Assistant community forums, but I felt it was lacking in a few places.

Multiple Instances with Z-Wave

I started to notice my Z-Wave network was a little slow, so I decided to add a slave instance of Home-Assistant with another Z-Wave controller; however, I quickly discovered node-id collisions. Attributes from Z-Wave devices that shared the same node-id would merge, which caused problems when trying to invoke services on the Z-Wave domain.

Addressing Attribute Merging

The component accepts an entity_prefix configuration value, intended to prevent entity_id collisions between instances. Expanding on this concept, I made the component prefix the node_id attribute whenever it sends a state change event in which that attribute is present; this immediately rectified the issue with attribute merging. When invoking a service against the Z-Wave domain, instances whose prefix doesn’t match the node_id ignore the service, while the appropriate instance recognizes that the call is intended to run there. The prefix is removed from the node_id attribute and the service call is propagated to the event loop as appropriate.
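Conceptually, it works something like this sketch – the function names and prefix are illustrative, not the component’s actual code:

ENTITY_PREFIX = "upstairs_"  # hypothetical instance prefix

def prefix_node_id(attributes):
    # Outgoing state change event: tag the node_id with this instance's prefix.
    if "node_id" in attributes:
        attributes = dict(attributes, node_id=f"{ENTITY_PREFIX}{attributes['node_id']}")
    return attributes

def claim_service_call(service_data):
    # Incoming Z-Wave service call: only handle it if the node_id carries
    # our prefix; strip the prefix before forwarding to the event loop.
    node_id = str(service_data.get("node_id", ""))
    if not node_id.startswith(ENTITY_PREFIX):
        return None  # not ours - another instance will claim it
    return dict(service_data, node_id=node_id[len(ENTITY_PREFIX):])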

Home-Assistant Distributed API Routes

A HomeAssistantView allows a platform/component to register an endpoint with the Home-Assistant REST API – e.g. components that use OAuth, like Spotify or Automatic. OAuth requires a redirect URI for the component to receive the access token from the service, which would mean exposing to the web every distributed instance that needs this type of configuration. Exposing a single instance reduces the possible attack surface on your network and simplifies DNS configuration if you use it. Unfortunately, when a HomeAssistantView is registered, no event is sent to the event bus – which would allow the master instance to register a corresponding route.

Following the ideas of Kubernetes Ingress and Redis Sentinel, when a remote instance registers its own API route, it notifies the proxy (the master instance in this case). The master registers the endpoint in its own router and, when there is a request to the endpoint, performs a fan-out search for the instance that can appropriately answer the request. Why the fan-out? Well, most endpoints are variable-path, i.e. /api/camera_proxy/{entity_id} or /api/service/{domain}/{entity}, which may apply to multiple instances. If an instance responds to the proxied request with a 200, the master instance registers an exact-match proxy for it. For example, if both the security and appliance instances register routes of /api/camera_proxy/{entity_id}, and a request comes in for /api/camera_proxy/camera.front_door, the security instance can respond with HTTP 200; from that point forward, when a request for /api/camera_proxy/camera.front_door comes in, the proxy server automatically sends it to the security instance.
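A rough sketch of the fan-out logic – the instance URLs, route table, and handler shape are all assumptions for illustration, not the component’s real implementation:

from aiohttp import ClientSession, web

INSTANCES = ["http://security.local:8123", "http://appliance.local:8123"]  # hypothetical
EXACT_ROUTES = {}  # learned exact-match path -> instance base URL


async def fan_out_proxy(request):
    path = request.rel_url.path_qs
    # Try the learned exact match first, otherwise fan out to every instance.
    candidates = [EXACT_ROUTES[request.path]] if request.path in EXACT_ROUTES else INSTANCES
    async with ClientSession() as session:
        for base_url in candidates:
            async with session.request(request.method, base_url + path) as upstream:
                if upstream.status == 200:
                    EXACT_ROUTES[request.path] = base_url  # remember who answered
                    return web.Response(body=await upstream.read(), status=200)
    return web.Response(status=404)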

Check out the appropriate repositories below: