- Attempting to create a new terminal from the Synology Docker GUI yields no response. It’s as if you didn’t even click the button.
- Attempting to create a new terminal via the command line `docker exec -it <container-id> <command>` returns an error that the container is not running.
Restart the container and DO NOT power off any linked machines.
In my case the container had a number of linked machines that were no longer needed, but I had not cleaned them up yet. I would simply power them off after powering on the container, and typically I would be okay. This time I discovered I was unable to create a terminal on the parent container, which left me troubleshooting. Eventually I found that I could create a terminal if I left the linked machines running, but if I powered off the linked containers (after powering on the parent container), then I could not create a terminal.
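Before digging into linked machines, it can help to confirm what Docker itself thinks the container's state is. A minimal Python sketch (the container name vr-hass-01 is only an example; substitute your own):

```python
import shutil
import subprocess

def container_running(name):
    """Return True/False for the container's running state, or None if
    the docker CLI is not available on this host."""
    if shutil.which("docker") is None:
        return None
    result = subprocess.run(
        ["docker", "inspect", "-f", "{{.State.Running}}", name],
        capture_output=True, text=True,
    )
    return result.returncode == 0 and result.stdout.strip() == "true"

print(container_running("vr-hass-01"))
```

If this prints False, `docker exec` will fail with the "container is not running" error regardless of what the GUI shows.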
If the container actually does exist and has a volume attached, verify the path to the volume is valid. When the path is no longer valid, the Synology Docker GUI will not start the container and simply states “Container does not exist”.
After deleting the volume entry, saving, re-editing the container, and re-adding the volume, the container started up just fine.
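One way to catch a stale path before the GUI's misleading error is to check each mounted host path on the NAS. A minimal sketch; the paths below are placeholders for your own volume mounts:

```python
import os

# Placeholder host paths this container mounts; substitute your own.
volume_paths = [
    '/volume1/docker/hass/config',
    '/volume1/docker/hass/scripts',
]

# Any path that is no longer a directory will trip up the Docker GUI.
missing = [p for p in volume_paths if not os.path.isdir(p)]
for p in missing:
    print('MISSING: ' + p)
```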
Hardware: Synology DS716+
Software: Synology Photo Station 6
Data Files: .jpg & .arw (raw)
When using a Synology NAS to manage photos via the Photo Station 6 application, deleting the JPG leaves the RAW (ARW) file behind.
Search the photo directory for orphan .arw files (ones without a matching .jpg), then remove them. While we are at it, let’s record what we delete to a file.
Deploy an Ubuntu docker image and mount the photos directory
Use the following code:
import os

rootdir = '/mnt/photo/Dump/2016/2016-02_Muppo-playing'
for file in os.listdir(rootdir):
    filename, file_ext = os.path.splitext(file)
    # Only consider RAW files; leave everything else alone.
    if file_ext.lower() != '.arw':
        continue
    # Keep the RAW if a matching JPG exists in either case.
    has_jpg = any(os.path.isfile(os.path.join(rootdir, filename + ext))
                  for ext in ('.jpg', '.JPG'))
    if not has_jpg:
        path = os.path.join(rootdir, file)
        os.remove(path)
        print('REMOVED: ' + path)
        with open('clean-up.log', 'a') as logfile:
            logfile.write('REMOVED: ' + path + '\n')
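The script above handles a single folder. A sketch extending it to walk the whole photo tree, written as a dry run that only reports orphans (swap the print for os.remove once you trust the output):

```python
import os

def find_orphan_raws(root):
    """Report .arw files with no .jpg sibling, anywhere under root."""
    orphans = []
    for dirpath, _dirnames, filenames in os.walk(root):
        # Compare case-insensitively so photo.ARW matches photo.jpg.
        names = {f.lower() for f in filenames}
        for f in filenames:
            base, ext = os.path.splitext(f)
            if ext.lower() == '.arw' and base.lower() + '.jpg' not in names:
                orphans.append(os.path.join(dirpath, f))
    return orphans

for path in find_orphan_raws('/mnt/photo/Dump'):
    print('ORPHAN: ' + path)
```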
In this walk-through we will perform the following:
Note: The actual nginx configuration will not be covered here.
- Deploy the nginx Docker container (vr-ngx-01)
- Mount the following folders and file:
- it’s assumed your site’s .conf file is in this directory
- it’s assumed your SSL certs live here and are properly referenced in your /etc/nginx/conf.d/your.site.conf
- it’s assumed SSL is configured and includes conf.d/*.conf
- Link vr-ngx-01 to the Home-Assistant container (vr-hass-01)
- Fire up the container and verify connectivity over a secured connection
- Remove local port mapping for vr-hass-01
1. Deploy the container
2. Mount the local folders & file
3. Link vr-ngx-01 to vr-hass-01
4. Verify site loads
Browse to https://YOUR-SYNOLOGY-NAME:4443
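To verify connectivity without a browser, a quick check can be scripted. A sketch, assuming the hostname and port from this walk-through and a self-signed certificate (hence the disabled verification, for testing only):

```python
import ssl
import urllib.request

url = "https://YOUR-SYNOLOGY-NAME:4443"

# Self-signed certs are common during setup; disable verification
# here ONLY for this connectivity test.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

try:
    with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
        print("HTTP status:", resp.status)
except OSError as exc:
    print("connection failed:", exc)
```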
Note: to make this appear at https://www.virtualrick.com you can configure your router/firewall for port forwarding. Example: external TCP 443 forwards to internal TCP 4443.
5. Remove local port mapping for vr-hass-01
Now that the nginx container is linked to the home-assistant container, there is no need for the home-assistant service port (8123) to be available directly.
Make sure the home-assistant container is turned off, then edit the container and remove the local port configuration.
Update: Link to post following this one with steps for deploying nginx as a proxy for the Home-Assistant container deployed here: CLICK HERE
I recently received my Synology DS716+ and discovered it supports running Docker containers. I figured why not run Home-Assistant in a Docker container on the Synology? Doing this will free my Raspberry Pi for another project. Here is what I did to make this happen.
Store your configuration.yaml here
Store any scripts called from your configuration.yaml here. I have a number of scripts used to execute remote commands on various devices.
I mount this folder so I can store the keys that are trusted on remote devices
Step by step screenshots
Download the image
Create the container
Launch the application