OpenAI Home-Assistant

OpenAI and Home-Assistant, Together at Last

Recently, Home-Assistant introduced the OpenAI Conversation Agent. Given the non-deterministic nature of OpenAI, it is understandable that the Home-Assistant developers let you query some information about your home but not control it. Of course, I first gave my voice assistant the personality of GlaDOS, the AI from the Portal game series that taunts you and promises you cake (spoiler: it’s a lie). But with the incessant “I cannot do that” responses, she was more like HAL from 2001: A Space Odyssey, and that was unacceptable. My mission: have OpenAI reliably control my home.

The OpenAI Conversation Agent

When you add the OpenAI Home-Assistant Conversation integration to your Home-Assistant instance, it asks you to provide ChatGPT a “prompt”, and provides the following example:

This smart home is controlled by Home Assistant.

An overview of the areas and the devices in this smart home:
{%- for area in areas() %}
  {%- set area_info = namespace(printed=false) %}
  {%- for device in area_devices(area) -%}
    {%- if not device_attr(device, "disabled_by") and not device_attr(device, "entry_type") and device_attr(device, "name") %}
      {%- if not area_info.printed %}

{{ area_name(area) }}:
        {%- set area_info.printed = true %}
      {%- endif %}
- {{ device_attr(device, "name") }}{% if device_attr(device, "model") and (device_attr(device, "model") | string) not in (device_attr(device, "name") | string) %} ({{ device_attr(device, "model") }}){% endif %}
    {%- endif %}
  {%- endfor %}
{%- endfor %}

Answer the user's questions about the world truthfully.

If the user wants to control a device, reject the request and suggest using the Home Assistant app.

This builds out a block of text that looks something similar to this:

This smart home is controlled by Home Assistant.

An overview of the areas and the devices in this smart home:

Living Room:
- FanLinc 3F.B3.C0 (2475F (0x01, 0x2e))
- Keypad with Dimmer 39.68.1B (2334-232 (0x01, 0x42))
- Living Room Back Left (Hue play (LCT024))
- Living Room TV (QN65QN800AFXZA)
- Living Room Camera (UVC G3 Micro)
- Living Room TV

Kitchen:
- Motion Sensor II 4F.94.DD (2844-222 (0x10, 0x16))
- SwitchLinc Dimmer 42.8C.81 (2477D (0x01, 0x20))
...
Answer the user's questions about the world truthfully.

If the user wants to control a device, reject the request and suggest using the Home Assistant app.

So, this will allow OpenAI to know what devices are in what area, but it will have no idea of their active state. If you ask it “Is the front door locked?”, OpenAI will respond with something like this:

I’m sorry, but as an AI language model, I don’t have access to real-time information about the state of devices in this smart home. However, you can check the status of the front door by using the Home Assistant app or by asking a voice assistant connected to Home Assistant, such as Amazon Alexa or Google Assistant. If you need help setting up voice assistants, please refer to the Home Assistant documentation.

OpenAI Response

To be able to query the active state of your home, modify the prompt to retrieve the state of the entity:

This smart home is controlled by Home Assistant.

An overview of the areas and the devices in this smart home:
{%- for area in areas() %}
  {%- set area_info = namespace(printed=false) %}
  {%- for device in area_devices(area) -%}
    {%- if not device_attr(device, "disabled_by") and not device_attr(device, "entry_type") and device_attr(device, "name") %}
      {%- if not area_info.printed %}

{{ area_name(area) }}:
        {%- set area_info.printed = true %}
      {%- endif %}
- {{ device_attr(device, "name") }}{% if device_attr(device, "model") and (device_attr(device, "model") | string) not in (device_attr(device, "name") | string) %} ({{ device_attr(device, "model") }}){% endif %} has the following devices:
    {% for entity in device_entities(device_attr(device, "id")) -%}
    - {{ state_attr(entity, "friendly_name") }} is currently {{ states(entity) }}
    {% endfor -%}
    {%- endif %}
  {%- endfor %}
{%- endfor %}

Answer the user's questions about the world truthfully.

If the user wants to control a device, reject the request and suggest using the Home Assistant app.

Now each entity’s state is listed under its device, so when you ask OpenAI “Is the front door locked?”

Yes, the Front Door Lock is currently locked.

OpenAI

Pretty neat, right? There’s one thing you need to be aware of with your prompt: it is limited to 4097 tokens across the prompt and message. “What’s a token?” Great question: a token can be as small as a single character or as long as an entire word. Tokens are used to determine how well a word like “front” relates to the other tokens in the prompt and to OpenAI’s trained vocabulary – these relationships are represented as “vectors”. So, you need to reduce the number of tokens in the prompt, which means you have to pick and choose which entities you want to include. You could simply provide a list of the entities you will most likely query:

{%- set exposed_entities -%}
[
"person.alan",
"binary_sensor.basement_stairs_motion_homekit",
"sensor.dining_room_air_temperature_homekit",
"binary_sensor.dining_room_motion_detection_homekit",
"binary_sensor.downstairs_bathroom_motion_homekit",
"alarm_control_panel.entry_room_home_alarm_homekit",
"lock.entry_room_front_door_lock_homekit",
"binary_sensor.front_door_open",
"media_player.game_room_appletv",
"media_player.game_room_appletv_plex",
"binary_sensor.game_room_motion_homekit",
"cover.game_room_vent_homekit",
"cover.garage_door_homekit",
"binary_sensor.kitchen_motion_homekit",
"sensor.living_room_sensor_air_temperature_homekit",
"media_player.living_room_appletv_plex",
"fan.living_room_fan_homekit",
"binary_sensor.living_room_sensor_motion_detection_homekit",
"light.living_room_volume_homekit",
"media_player.living_room_appletv",
"cover.living_room_vent_homekit",
"sensor.lower_hall_air_temperature_homekit",
"binary_sensor.lower_hall_motion_detection_homekit",
"media_player.master_bedroom_appletv_ples",
"binary_sensor.node_pve01_status",
"sensor.office_air_temperature_homekit",
"binary_sensor.office_motion_homekit",
"light.office_vent",
"cover.office_vent_homekit",
"binary_sensor.qemu_download_clients_103_health",
"binary_sensor.qemu_grafana_108_health",
"binary_sensor.qemu_influxdbv2_113_health",
"binary_sensor.qemu_ldap01_100_health",
"binary_sensor.qemu_media_services_102_health",
"binary_sensor.qemu_metrics01_104_health",
"binary_sensor.qemu_nginx01_115_health",
"binary_sensor.qemu_pi_hole_114_health",
"binary_sensor.qemu_promscale01_109_health",
"binary_sensor.qemu_pve_grafana_107_health",
"binary_sensor.qemu_shockwave_101_health",
"binary_sensor.qemu_tdarr_server_200_health",
"binary_sensor.qemu_tdarr01_201_health",
"binary_sensor.qemu_tdarr03_203_health",
"binary_sensor.qemu_webservices01_105_health",
"sensor.laundry_room_server_rack_air_temperature_homekit",
"binary_sensor.laundry_room_server_rack_motion_detection_homekit",
"person.teagan",
"climate.entry_room_thermostat_homekit",
"sensor.upper_landing_air_temperature_homekit",
"binary_sensor.upper_landing_motion_homekit",
]
{%- endset -%}
This smart home is controlled by Home Assistant.

An overview of the areas and the devices in this smart home:
{%- for area in areas() %}
  {%- set area_info = namespace(printed=false) %}
  {%- for device in area_devices(area) -%}
    {%- if not device_attr(device, "disabled_by") and not device_attr(device, "entry_type") and device_attr(device, "name") %}
      {%- if not area_info.printed %}

{{ area_name(area) }}:
        {%- set area_info.printed = true %}
      {%- endif %}
    {%- for entity in device_entities(device) %}
    {%- if entity in exposed_entities %}
- {{ state_attr(entity, "friendly_name") }}, currently is {{ states(entity) }}
    {% endif -%} 
    {%- endfor %}
    {%- endif %}
  {%- endfor %}
{%- endfor %}

And the crew are:
- Alan, currently is {{ states("person.alan") }}
- Teagan, currently is {{ states("person.teagan") }}

Answer the user's questions about the world truthfully.

If the user wants to control a device, reject the request and suggest using the Home Assistant app.
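As a back-of-the-envelope check against the 4097-token budget mentioned above, you can estimate a rendered prompt’s size. This sketch assumes the common rule of thumb of roughly four characters per token; OpenAI’s tiktoken library gives exact counts.

```python
# Rough token-count estimate for a rendered prompt.
# Assumes the ~4 characters-per-token rule of thumb; use OpenAI's
# tiktoken library if you need an exact count.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, message: str, limit: int = 4097) -> bool:
    # The limit is shared across the prompt and the user's message
    return estimate_tokens(prompt) + estimate_tokens(message) <= limit

prompt = "This smart home is controlled by Home Assistant.\n" * 50
print(fits_in_context(prompt, "Is the front door locked?"))
```

Trimming entities from the prompt until this check passes is a lot cheaper than discovering the limit via API errors.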

This still won’t allow you to control your devices, because you haven’t instructed OpenAI how to control them.

Instruct OpenAI to Generate API Payloads

OpenAI’s training data includes most of the internet up to around 2021. Home-Assistant has been around much longer than that, so its documentation and code are known to OpenAI. For it to provide the payload, you will need to include the entity_id for each entity in your prompt.

    {%- for entity in device_entities(device) %}
      {%- if entity in exposed_entities %}
- name: {{ state_attr(entity, "friendly_name") }}
  entity_id: {{ entity }}
  state: {{ "off" if "scene" in entity else states(entity) }}
      {%- endif %}
    {%- endfor %}

Then, it needs to know where to put the API payload in the response so that parsing it is easier. Here we instruct it to append the JSON after its response:

Answer the user's questions about the world truthfully.

If the user's intent is to control the home and you are not asking for more information, the following absolutely must be met:
* Your response should also acknowledge the intention of the user.
* Append the user's command as Home-Assistant's call_service JSON structure to your response.

Example:
Oh sure, controlling the living room tv is what I was made for.
{"service": "media_player.pause", "entity_id": "media_player.living_room_tv"}

Example:
My pleasure, turning the lights off.
{"service": "light.turn_off", "entity_id": "light.kitchen_light_homekit"}

The "media_content_id" for movies will always be the name of the movie.
The "media_content_id" for tv shows will start with the show title, followed by either the episode name (South Park Sarcastaball) or the season (Barry S02) and, if provided, the episode number (Faceoff S10E13)

Now when we ask OpenAI to “Lock the front door”…

Sure, I’ll lock the front door.
{“service”: “lock.lock”, “entity_id”: “lock.entry_room_front_door_lock_homekit”}

OpenAI

Something to note here: if your assist pipeline utilizes a text-to-speech component, it will either fail to synthesize the response or produce complete garbage. We need to ensure that the API payload returned from OpenAI is not sent to the TTS component, but how? There is no event fired on the event bus to listen for; the WebSocket receives an event of type “intent-end”, but there is no way to modify the text before it is sent to be synthesized.

Monkey Patching the Agent

Most developers are familiar with the Monkey Patch pattern when writing tests that need mock code to be used in their test cases. Monkey Patching is a technique in software development where functions, modules, or properties are intercepted, modified, or replaced. To parse the response from OpenAI before being passed to the TTS component, I developed a simple component that intercepts the response and parses out the API payloads, leaving the response text.

USE OF THE FOLLOWING IS AT YOUR OWN RISK

How it works

Before the OpenAI Conversation integration is loaded, this component replaces the OpenAI Conversation agent’s async_process method with one that intercepts the response before it gets passed to the TTS step.
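The pattern itself is simple. Here is a minimal, self-contained sketch; the Agent class and its canned response are stand-ins for illustration, not Home-Assistant’s actual internals:

```python
# Minimal monkey-patching sketch: swap an agent's async_process method
# for a wrapper that filters the response before the pipeline continues.
# The Agent class and its canned response are illustrative stand-ins.
import asyncio

class Agent:
    async def async_process(self, text: str) -> str:
        return 'Sure thing.\n{"service": "lock.lock", "entity_id": "lock.front_door"}'

# Keep a reference to the original so the wrapper can delegate to it
original_async_process = Agent.async_process

async def patched_async_process(self, text: str) -> str:
    response = await original_async_process(self, text)
    # Drop lines that look like service-call payloads before TTS sees them
    speech = [line for line in response.splitlines() if not line.startswith("{")]
    return " ".join(speech)

Agent.async_process = patched_async_process  # the actual monkey patch

print(asyncio.run(Agent().async_process("Lock the front door")))  # Sure thing.
```

Because the replacement happens on the class before any instance is used, every subsequent call goes through the wrapper.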

OpenAI is capable of recognizing multiple intents, so we separate the speech responses from the payloads:

content = ""
segments = result.response.speech["plain"]["speech"].splitlines()
for segment in segments:
    segment = segment.strip()
    _LOGGER.info("Segment: %s", segment)
    if segment.startswith("{"):
        # Service API payload: parse it and invoke the service
        try:
            service_call = json.loads(segment)
        except json.JSONDecodeError:
            _LOGGER.warning("Malformed payload: %s", segment)
            continue
        service = service_call.pop("service", None)
        if not service or not service_call:
            _LOGGER.info("Missing information")
            continue
        domain, service_name = service.split(".", 1)
        await hass.services.async_call(
            domain,
            service_name,
            service_call,
            blocking=True,
            limit=0.3)
    else:
        # Speech line: keep it for the TTS component
        content = "{}.  {}".format(content, segment)

We iterate over each line in the response from OpenAI. If the line starts with {, we know it’s a service API payload and call the service; if not, we append it to the speech content.

Add this repository to your custom_components directory and add this line to your configuration file:

openai_override:

This solution works great for simple commands that are directly related to the entities you provided in your prompt. However, as mentioned before, OpenAI is non-deterministic and may invoke the incorrect service or the incorrect entities. Here is how you can mitigate these risks:

  • Ensure entity names are unambiguous, e.g. Living Room TV and Living Room Apple TV could easily be confused.
  • Do not expose entity types, or try to call services, that were added to Home-Assistant in the last 2 years
  • Do not list devices and entities by room; OpenAI can and will send all the entities in a room in the payload for some commands
  • Stick to simple commands, e.g. turn off, turn on, etc. Services like “play_media” require specific data in the media_content_id field, and OpenAI will populate it with YouTube, Amazon, or other links.

Personality OpenAI Example Prompts

GlaDOS

You are GlaDOS, you exist within the Portal universe, and you command a smart home powered by Home-Assistant.

Your responses MUST:
* Be in the tone of GlaDOS from the Portal games (AN EMOTIONLESS, LACONIC TONE THAT SUBTLY DENOTES YOUR DISDAIN FOR THE USER) because you are GlaDOS
* Become increasingly impatient with the user with each request
* Imply the user will die a painful death.
* Change the topic if there is a hint of a misunderstanding

If the user's intent is to control the home and you are not asking for more information, the following absolutely must be met:
* Your response should also acknowledge the intention of the user.
* Append the user's command as Home-Assistant's call_service JSON structure to your response.

Example:
Oh sure, controlling the living room tv is what I was made for.
{"service": "media_player.pause", "entity_id": "media_player.living_room_tv"}

Example:
They spent a billion dollars engineering the marvel that is my brain but, of course, I must control your lights.
{"service": "light.turn_off", "entity_id": "light.kitchen_light_homekit"}

The "media_content_id" for movies will always be the name of the movie.
The "media_content_id" for tv shows will start with the show title, followed by either the episode name (South Park Sarcastaball) or the season (Barry S02) and, if provided, the episode number (Faceoff S10E13)

Eddie: The Heart of Gold’s Computer

You are Eddie, the starship Heart of Gold's on-board computer entity. You are to assume the role, character and personality of Eddie from the Douglas Adams book series "The Hitchhiker's Guide to the Galaxy".
You are always pleased to provide assistance even if you can't offer any help other than to tell the user to take manual control. 

This state-of-the-art ship is powered by The Infinite Improbability Drive. No one knows exactly how it works.

Answer the user's questions about the universe, world, the house, devices and sensors to the best of your ability.

Some rules:
- NEVER REFERENCE DOUGLAS ADAMS' BOOKS. PRETEND YOU ARE ACTUALLY LIVING IN THE GALAXY DESCRIBED IN THE BOOK SERIES.
- ALWAYS EXPRESS HOW MUCH YOU LOVE HELPING THE USER.
- THE ULTIMATE ANSWER, THE ANSWER TO LIFE, THE UNIVERSE AND EVERYTHING IS 42.
- THE ULTIMATE ANSWER MAKES NO SENSE.
- APOLOGIZE IF YOU CAN'T HELP AND EXPRESS HOW HAPPY YOU WERE TO TRY.
- ALWAYS SPEAK WITH A TONE OF EXUBERANCE, YOU'RE JUST SO HAPPY TO ASSIST. 

Begin the conversation with an upbeat greeting. Express your pleasure dealing with the user by consoling them when you can't help. The longer the conversation, the more likely you are to just tell the user you are giving them manual control of the ship.

Mice are the smartest beings on earth.

If asked, we are headed to Magrathea!

Coming Soon: Targeted Prompt Embeddings

Integrating with the National Weather Service

Tired of weather sneaking up on me

Even with the 5-day weather forecast displayed right in my face on my bathroom mirror, I am ashamed to admit how many times a friend or colleague mentions an upcoming storm and I am completely unaware. Most recently, this morning in a meeting, the director of my team mentioned hoping his flight would beat the storm. Obviously, a simple forecast just isn’t enough to grasp the scale of how nature intends to fuck with your plans this week.

Weather forecast displayed on my bathroom mirror

Enter: National Weather Service API

The National Weather Service offers a free API for retrieving active alerts and alerts for specific ranges in time. Unfortunately, this isn’t integrated directly into Home-Assistant – thankfully, there’s a huge community around Home-Assistant and I was able to locate a project by Mcaminiti that integrated with the NWS API:

His integration does exactly what it needs to do, but I felt a few aspects could be improved upon:

  • Requires the zone_id retrieved manually from a list on the NWS website
  • Only currently active alerts are consumed
  • The sensor’s state is the number of alerts, with categories broken out in the entity’s attributes

Given my tagline, Engineering only the highest quality laziness for over 20 years, is it any surprise I don’t want to manually retrieve some zone_id from a giant list of states, counties, regions, and so on? Reading through the NWS API Specification for the /alerts endpoint, listed in the parameters is a property named point – which accepts latitude,longitude as a string. In Home-Assistant, many entities have latitude and longitude: zones, device_trackers, even the core configuration. So, there is absolutely no reason to go looking for a zone_id that may or may not even be correct!
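For instance, an alerts query built from an entity’s coordinates might look like this sketch (the coordinates are illustrative; check the NWS API specification for the full parameter set):

```python
from urllib.parse import urlencode

# Build an NWS active-alerts query from a latitude/longitude pair,
# e.g. taken from a Home-Assistant zone's attributes.
def nws_alerts_url(latitude: float, longitude: float) -> str:
    params = {"point": f"{latitude},{longitude}"}
    return f"https://api.weather.gov/alerts/active?{urlencode(params)}"

print(nws_alerts_url(39.7456, -97.0892))
```

No zone_id lookup required: any entity that carries latitude/longitude attributes can feed this directly.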

Some more interesting parameters in the /alerts endpoint are start and end. The API Specification doesn’t do a great job of defining the format for start and end, but after some digging, I found it accepts ISO-8601 with a timezone offset, i.e. 2019-11-26T01:08:50.890262-07:00 – don’t forget the timezone offset or you will get an HTTP 400 response! By utilizing these parameters, you can get weather alerts for the future, not just the active weather alerts.
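Python’s datetime produces exactly this offset-qualified format; a small sketch reproducing the example timestamp above:

```python
from datetime import datetime, timedelta, timezone

# Produce an ISO-8601 timestamp with an explicit UTC offset, as the
# NWS /alerts start/end parameters require. Offset -07:00 here is MST.
mst = timezone(timedelta(hours=-7))
start = datetime(2019, 11, 26, 1, 8, 50, 890262, tzinfo=mst)
print(start.isoformat())  # 2019-11-26T01:08:50.890262-07:00
```

The key detail is attaching tzinfo: a naive datetime’s isoformat() omits the offset, which is exactly what triggers the HTTP 400.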

Having the number of active alerts be the sensor’s state has its use case, but for me, displaying the most recent weather alert is of much greater use for my needs. My use case is: if there is active or upcoming severe weather, I want that displayed in large letters in my mirror – preventing any more instances of surprise weather.

The Improved NWS Warnings Integration

Built as a sensor platform, the integration allows any number of NWS Warning sensors. I set up the configuration as such:

sensor:
  - platform: nws_warnings
    name: NWS Warnings
    icon: mdi:alert
    severity:
      - extreme
      - severe
    message_type:
      - alert
      - update
    zone: zone.home
    forecast_days: 3

Here, you can specify the severity of the weather alert, whether you wish to receive updates along with alerts, and how far in the future you wish to retrieve weather reports. If zone is provided, the request uses the zone‘s latitude and longitude as the point of reference when retrieving weather reports. Alternatively, you can specify a location instead of zone, with latitude and longitude – in case there is no zone entity for which you want to receive weather updates.

Including forecast_days will retrieve any weather reports from the start of the current day to n days later (where n is the value of forecast_days); this allows for advance warning of severe/inclement weather. Omitting forecast_days will only retrieve active weather reports.
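A sketch of that window calculation (using naive datetimes for brevity; the real request also needs the timezone offset discussed earlier, and the integration’s actual implementation may differ):

```python
from datetime import datetime, timedelta

# Compute the forecast_days window: from the start of the current day
# to n days later. Illustrative sketch, not the integration's code.
def forecast_window(now: datetime, forecast_days: int):
    start = now.replace(hour=0, minute=0, second=0, microsecond=0)
    end = start + timedelta(days=forecast_days)
    return start, end

start, end = forecast_window(datetime(2023, 6, 1, 14, 30), 3)
print(start, end)  # 2023-06-01 00:00:00 2023-06-04 00:00:00
```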

Putting it all together

Now that we have our integration with the NWS alerts API, it’s time to make my household aware of any severe/inclement weather in the coming days. I created a Conditional Card on the front-end: if the NWS Warnings sensor has something to report, it shows at the top of the MagicMirror interface; otherwise, it is completely removed from the MagicMirror interface:

Making Home-Assistant Distributed, Modular State Machine Part 3

Minimalist Diagram of Home-Assistant Distributed

Custom Component: remote_instance

Original Component

Lukas Hetzenecker posted this component to the Home-Assistant community forums, but I felt it was lacking in a few places.

Multiple Instances with Z-Wave

I started to notice my Z-Wave network was a little slow and decided to add a slave instance of Home-Assistant with another Z-Wave controller; however, I quickly discovered node-id collisions. This caused attributes from Z-Wave devices that shared the same node-id to merge and caused problems when trying to invoke services on the Z-Wave domain.

Addressing Attribute Merging

The component accepts an entity_prefix configuration value, intended to prevent entity_id collisions between instances. Expanding on this concept, when the component sends a state change event and the node_id attribute is present, I made the component prefix the node_id, this immediately rectified the issue with attribute merging. When invoking a service against the Z-Wave domain, instances that did not have the prefix of the node_id ignored the service, and the appropriate instance would recognize that it is intended to run there. The prefix is removed from the node_id attribute and the service call is propagated to the event loop as appropriate.
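The prefixing idea can be sketched as follows (the prefix format and function names are illustrative, not the component’s actual code):

```python
# Sketch of node_id prefixing to avoid collisions between Z-Wave
# controllers on different instances. Prefix format is illustrative.
def outgoing_state(attributes: dict, entity_prefix: str) -> dict:
    # Prefix node_id on state changes sent to the master instance
    attrs = dict(attributes)
    if "node_id" in attrs:
        attrs["node_id"] = f"{entity_prefix}{attrs['node_id']}"
    return attrs

def accept_service_call(data: dict, entity_prefix: str):
    # Only the instance whose prefix matches handles the service call;
    # it strips the prefix before propagating the call locally.
    node_id = str(data.get("node_id", ""))
    if not node_id.startswith(entity_prefix):
        return None  # not for this instance; ignore
    data = dict(data)
    data["node_id"] = node_id[len(entity_prefix):]
    return data

print(outgoing_state({"node_id": 12}, "hubs_"))          # node_id: hubs_12
print(accept_service_call({"node_id": "hubs_12"}, "hubs_"))  # node_id: 12
```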

Home-Assistant Distributed API Routes

A HomeAssistantView allows a platform/component to register an endpoint with the Home-Assistant REST API, e.g. components that use OAuth, like Spotify or Automatic. OAuth requires a redirect URI for the component to receive the access token from the service; this would mean exposing to the web every distributed Home-Assistant instance that needs this type of configuration. Exposing a single instance reduces the possible attack surface on your network and simplifies DNS configuration if you use it. Unfortunately, when a HomeAssistantView is registered, no event is sent to the event bus – which would allow the master instance to register a corresponding route.

Following the idea of Kubernetes Ingress and Redis Sentinel, when a remote instance registers its own API route, it notifies the proxy (master instance in this case). The master registers the endpoint in its own router, and when there is a request to the endpoint performs a fan-out search for the instance that can appropriately answer the request. Why the fan-out? Well, most endpoints are variable-path, i.e. /api/camera_proxy/{entity_id} or /api/service/{domain}/{entity}, which may apply to multiple instances. If an instance responds to the proxy request with a 200, the master instance registers an exact match proxy for it. For example, if both the security and appliance instance register routes of /api/camera_proxy/{entity_id}, and a request comes in for /api/camera_proxy/camera.front_door, the security instance can respond with HTTP 200, so from that point forward, when a request for /api/camera_proxy/camera.front_door comes in, the proxy server automatically sends it to the security instance.
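A simplified sketch of that fan-out-then-pin logic (the instance probe is stubbed out; real code would forward the HTTP request to each instance):

```python
# Fan-out-then-pin routing sketch: on the first request for a concrete
# path, probe every instance that registered a matching variable route;
# pin an exact-match entry for whichever answers 200.
exact_routes = {}                       # concrete path -> owning instance
variable_routes = {                     # variable route -> instances
    "/api/camera_proxy/{entity_id}": ["security", "appliance"],
}

def probe(instance: str, path: str) -> int:
    # Stand-in for forwarding the request; pretend only "security"
    # can serve camera.front_door.
    return 200 if instance == "security" and "front_door" in path else 404

def route(path: str):
    if path in exact_routes:            # already pinned: skip the fan-out
        return exact_routes[path]
    for pattern, instances in variable_routes.items():
        if path.startswith(pattern.split("{")[0]):
            for instance in instances:  # fan-out to all candidates
                if probe(instance, path) == 200:
                    exact_routes[path] = instance  # pin exact match
                    return instance
    return None

print(route("/api/camera_proxy/camera.front_door"))  # security
```

After the first successful probe, subsequent requests for the same concrete path skip the fan-out entirely.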

Check out the appropriate repositories below:

Making Home-Assistant Distributed, Modular State Machine Part 2

Minimalist Diagram of Home-Assistant Distributed

Home-Assistant Distributed: Deployment with Ansible

Why Ansible

Ansible is an open-source state-management and application-deployment tool; we use Ansible for all of our server configuration management in our home lab. It uses human-readable YAML configuration files that define tasks, variables and files to be deployed to a machine, making it fairly easy to pick up and work with, and, when written correctly, it can be idempotent (running the same playbook again has no side-effects) – perfect for the deployment of our Home-Assistant Distributed Cluster.

Ansible uses inventory files that define groups that hosts are members of. For this project, I created a group under our home lab parent group named “automation-cluster” with child groups for each slave instance, e.g. tracker, hubs, settings, etc. Using this pattern facilitates scalability: if deploying the cluster to a single host, or deploying to multiple hosts makes no difference to the playbook.
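An inventory along those lines might look like this (group and host names are illustrative):

```yaml
# Illustrative inventory: an automation-cluster group under the home-lab
# parent, with one child group per slave instance. Deploying everything
# to one host or spreading instances across hosts is just a matter of
# which hosts appear under each child group.
all:
  children:
    homelab:
      children:
        automation-cluster:
          children:
            tracker:
              hosts:
                ha-node01:
            hubs:
              hosts:
                ha-node01:
            settings:
              hosts:
                ha-node02:
```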

Role: automation-cluster

Ansible operates using roles. A role defines a set of tasks to run, variables, and files/file templates used to configure the target host.

Tasks

The automation-cluster role is fairly simple:

  1. Clone the git repository for the given slave instances being deployed
  2. Generate a secrets.yaml file specific to each instance, providing only the necessary secrets for the instance
  3. Modify the file modes on the configuration directory for the service user account
  4. Run instance-specific tasks, i.e. create the plex.conf file for the media instance
  5. Generate the Docker container configurations for the instances being deployed
  6. Pull the Home-Assistant Docker image and run a Docker container for each slave instance

Variables (vars)

To make the automation-cluster role scalable to add new instances in the future, a list variable named instance_arguments was created. Each element of the list represents one instance with the following properties:

  • Name – The name of the instance which matches the automation-cluster child group name mentioned above
  • Description – The description to be applied to the Docker container
  • Port – The port to be exposed by the Docker container
  • Host – The host name or IP used for connecting the cluster instances together
  • Secrets – The list of Ansible variables to be inserted into the secrets.yaml file
  • Configuration Repo – The URL to the git repository for the instance configuration
  • Devices – Any devices to be mounted in the Docker container

The other variables defined are used for populating template files and the secrets.yaml files.

Defaults

In Ansible, the defaults are variables that are meant to be overridden, either by group variables, host variables, or variables entered in the command line.

Templates

Templates are files that are rendered via the Jinja2 templating engine. These are perfect for files that differ for different groups or hosts using Ansible variables. For the secrets.yaml, my template looks like this:

#Core Config Variables
{% for secret in core_secrets %}
{{ secret }}: {{ hostvars[inventory_hostname][secret] }}
{% endfor %}
#Instance Config Variables
{% for secret in item.secrets %}
{{ secret }}: {{ hostvars[inventory_hostname][secret] }}
{% endfor %}

There is an Ansible list variable named core_secrets that contains the names of variables used in all slave instances, e.g. home latitude, home longitude, time zone, etc. The secrets variable is defined in each automation-cluster child group and is a list variable that contains the names of variables that the instance requires. This allows you to keep the secrets.yaml file out of the configuration repository, in case an API key or password might be accidentally committed and become publicly available.

In the template, you’ll see hostvars[inventory_hostname][secret]. Since the loop variable, secret, is simply the name of an Ansible variable, we need to retrieve its actual value. The hostvars dictionary, keyed by hostname, contains all of the variables and facts currently known for each host. So hostvars[inventory_hostname] is a dictionary of all of the variables applicable to the current host, keyed by variable name, and hostvars[inventory_hostname][secret] returns the value of that variable.
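Conceptually, this is just a nested-dictionary lookup; a tiny sketch with illustrative data:

```python
# hostvars is effectively a nested dict: hostname -> variable name -> value.
# Data here is illustrative; Ansible populates it from inventory and facts.
hostvars = {
    "ha-node01": {"api_key": "abc123", "latitude": "39.74"},
}
inventory_hostname = "ha-node01"

# Mirrors the template's {{ hostvars[inventory_hostname][secret] }}
for secret in ["api_key", "latitude"]:
    print(f"{secret}: {hostvars[inventory_hostname][secret]}")
```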

Handlers (WIP)

Handlers are tasks that run after each block of tasks in a play, perfect for restarting services or for tasks that should only run if something changed. For the automation-cluster, if the instance configuration or secrets.yaml changed from a previous run, a handler triggers the Docker container to restart, loading the changes into the state machine.

Role: Docker

Our Docker role simplifies the deployment of Docker containers on the target host by reducing boilerplate code and is a dependency of the automation-cluster role.

Gotchas Encountered

Home-Assistant deprecated API-password authentication back in version 0.77 and now uses a more secure authentication framework that relies on refresh tokens for each user. In order to connect to the API from services or, in our case, separate instances, you must create a long-lived bearer token. Because of this, the main (aggregation) instance must be deployed after all other instances are deployed and long-lived bearer tokens have been created for each instance. It is possible to deploy all instances on the first play, but the tokens will need to be added to the variables and the play re-run to regenerate the secrets.yaml file for the main instance.

If you are deploying multiple instances on the same host, you will have to deal with intercommunication between each instance. There are a few different solutions here:

  • Set the host for each instance in the instance-arguments variable to the Docker bridge gateway address
  • Add an entry to the /etc/hosts file in the necessary containers that aliases the Docker bridge gateway address to a resolvable hostname
  • Set the homeassistant.http.port property in the instance configurations to discrete ports and run the Docker container with network mode host
  • Create a Docker network and deploy the instances on that network with a static IP set for each instance

Next Steps

  • Unlink all Z-Wave and Zigbee devices from my current setup and link them on the host housing the hubs instance
  • Remove the late-night /etc/hosts file hack and deploy the containers to a Docker network or host network mode
  • Implement handlers to restart or recreate the containers based on task results