The idea for this sub-project came to me after friends of mine started building similarly complex smart homes. Of course, everyone has their own server concept on which various services run. So what could be more obvious than trying to turn it into a generic deployment?
At that point, I refactored my smartserver deployment and extracted configuration parts into individual config folders. Part of the refactoring was also the effort to get it running under Red Hat, just to see how easy it would be and to have a choice of distributions.
The result was a completely generic setup that can be adapted to your own needs via an individual configuration part.
Why your own VLAN cloud?
Now that the deployment was running in 3 different environments, the idea came up to share even more things. To enable this in a secure way, an encrypted VLAN was set up. For this purpose, a container is started on each server; these containers are connected to each other via a WireGuard mesh. Each container has its own private network which is reachable from the other containers. If you want to share certain services or data with each other, they only need to be offered within this network. This is usually done using a container that is hooked into the network.
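To make the mesh idea concrete, here is a minimal sketch of what one node's WireGuard configuration could look like. All addresses, hostnames, and key placeholders are illustrative examples, not the actual deployment values:

```ini
# /etc/wireguard/wg0.conf on node A (addresses and names are hypothetical)
[Interface]
Address = 10.99.0.1/24               # node A's address inside the encrypted VLAN
PrivateKey = <node-a-private-key>
ListenPort = 51820

# one [Peer] section per remote node in the mesh
[Peer]
PublicKey = <node-b-public-key>
Endpoint = node-b.example.org:51820
# peer's VLAN address plus its private container network
AllowedIPs = 10.99.0.2/32, 192.168.200.0/24
PersistentKeepalive = 25
```

In a full mesh, every node carries one such `[Peer]` section for each of the other nodes, so any container network can be routed to directly without a central hub.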
Cloud Backup
This is the first use case. Of course, every smart server has a hard-drive RAID, which already offers a certain degree of protection against failure. However, since this is not sufficient for personal data collected over a lifetime, such as photos and documents, I thought about a spatially separated backup.
Initially, my project Cloudsync was used, which stores data in encrypted form on Google Drive. But at some point I reached the limits of my storage quota on Google.
After a brief consideration, the idea came up to use the individual smartserver deployments as spatially separated backups of each other. For this purpose, an NFS server is deployed as a container on each host, providing the other servers with a network drive. All data is encrypted and synchronized to this drive using rclone.
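A sketch of how such an encrypted sync could look with rclone's `crypt` backend, assuming the peer's NFS share is mounted at `/mnt/remote-backup` and a crypt remote named `backup-crypt` (all names and paths here are illustrative, not the actual setup):

```ini
# ~/.config/rclone/rclone.conf (illustrative)
[backup-crypt]
type = crypt
remote = /mnt/remote-backup/encrypted   # directory on the peer's NFS share
password = <obscured-password>          # generated with `rclone obscure`
filename_encryption = standard          # also encrypts file and directory names
```

A periodic job like `rclone sync /data/photos backup-crypt:photos` would then keep an encrypted copy on the remote server, which never sees the plaintext.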
Cloud Sensor Data
The second application is the sharing of sensor data. After I finished my weather station, there was of course interest in accessing its data, so I looked for a generic way to share it.
For this purpose, an additional MQTT broker is deployed on each server, which is used exclusively by the cloud network. These MQTT brokers are synchronized with each other, i.e. no matter where data is published, it is available everywhere.
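Assuming Mosquitto is used as the broker, such a synchronization can be sketched with its bridge feature. The connection name, VLAN address, and topic filter below are illustrative:

```
# mosquitto.conf on one cloud broker (illustrative values)
connection bridge-to-node-b
address 10.99.0.2:1883
topic # both 0
```

This mirrors all topics in both directions at QoS 0. In a full mesh of three or more brokers, some care is needed to avoid message loops, e.g. by bridging only in one direction per link or using a star topology.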
Now I can decide in my openHAB instance which data I want to share and publish every change to this data on the cloud MQTT broker. On the other end, the remote openHAB instances subscribe to the values they are interested in.
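On the subscribing side, this could look roughly like the following Things definition for the openHAB MQTT binding. The broker address, thing names, and topics are illustrative assumptions, not the actual configuration:

```
// things/cloud.things (illustrative)
Bridge mqtt:broker:cloud "Cloud MQTT" [ host="10.99.0.1", port=1883 ] {
    Thing topic weather "Remote Weather Station" {
        Channels:
            Type number : temperature "Temperature" [ stateTopic="cloud/weather/temperature" ]
    }
}
```

An Item linked to the `temperature` channel then receives every value the remote instance publishes, without the two openHAB installations knowing anything about each other beyond the shared topics.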