Unable to open new terminal in Synology Docker container

Symptoms:

  1. Attempting to create a new terminal from the Synology Docker GUI yields no response. It’s as if you didn’t even click the button.
  2. Attempting to create a new terminal via the command line “docker exec -it <container-id> command” returns an error that the container is not running.

Solution:

Restart the container and DO NOT power off any linked machines.
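From an SSH session on the Synology, the same thing can be done with the Docker CLI (the container ID/name is a placeholder):

sudo docker ps -a
sudo docker restart <container-id>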

Background:

In my case I had a container with a number of linked machines that were no longer needed, and I hadn’t cleaned them up yet. I would simply power them off after powering on the parent container, and typically that was fine. Then I discovered I was unable to create a terminal on the parent container, which left me troubleshooting. Eventually I found that I could create a terminal if I left the linked machines running, but if I powered off the linked containers (after powering on the parent container), then I could not create a terminal.

Synology Docker “Container does not exist”

If this is a container that actually does exist and has a volume attached, verify the path to the volume is valid. When the path is no longer valid, the Synology Docker GUI will not start the container and simply states “Container does not exist”.
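One quick way to see which host paths the container expects is docker inspect from an SSH session (the container name is a placeholder):

sudo docker inspect --format '{{ json .Mounts }}' <container-name>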

After deleting the volume, saving, re-editing the container, re-adding the volume, and then starting it, the container started up just fine.

Reject Requests without a Host Name Header on NGINX

The Objective: Reject all requests that reach the NGINX server without a host name in the header.

Why it matters: When a request is made via IP address (http://your.add.rress.here), NGINX returns whatever it determines to be the “default server” for that IP address. This is often not the desired result. The result we are going for here is to close the connection with the requesting client.

The solution: 

  1. generate a bogus cert and store it in your /etc/nginx/certs/bogus/ (or whichever folder you use for your certificates); see the openssl example after this list
  2. create a “default.conf” configuration file in your /etc/nginx/conf.d/ (or whichever folder you include in your config)
  3. add the configuration to the “default.conf” file (update it if your folders are different for certs)
  4. test your configuration (/usr/sbin/nginx -t -c /etc/nginx/nginx.conf)
  5. if all is well, restart your service (sudo service nginx restart)
  6. validate it’s working as intended (see the curl example after the code sample)
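For step 1, a throwaway self-signed certificate is enough since no legitimate client should ever see it; something along these lines (the lifetime and subject are arbitrary):

openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
  -subj "/CN=invalid" \
  -keyout /etc/nginx/certs/bogus/privkey.pem \
  -out /etc/nginx/certs/bogus/cert.pem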

Code Sample:
server {
    # Plain-HTTP requests that do not match any other server_name land here
    listen 80 default_server;
    server_name "";
    # 444 is nginx-specific: close the connection without sending a response
    return 444;
}
server {
    # Same idea for HTTPS; the bogus cert only exists so the TLS handshake can complete
    listen 443 default_server ssl;
    server_name "";
    return 444;
    # "ssl on;" is redundant when "ssl" is on the listen line and is removed in newer nginx releases
    ssl on;
    ssl_certificate /etc/nginx/certs/bogus/cert.pem;
    ssl_certificate_key /etc/nginx/certs/bogus/privkey.pem;
}
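For step 6, a request made by IP address should now have the connection closed without any response; with curl that typically looks like this (substitute your server’s address):

curl -v http://your.add.rress.here/
# curl: (52) Empty reply from server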

 

References:

  • http://nginx.org/en/docs/http/request_processing.html#how_to_prevent_undefined_server_names

Atlassian Monitoring with JMX (Java Management eXtension)

Want to know some details on what’s going on with your Atlassian application? (JIRA, Confluence, any JVM application).

 

Add these lines to your Java Options:

-Dcom.sun.management.jmxremote.port=8686
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=SERVER.DOMAIN.COM
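On a Windows install these typically go into the application’s bin\setenv.bat, assuming the service actually reads it (a Windows service install may instead need the options added through the Tomcat service configuration tool). A sketch, using the same values as above:

set JVM_SUPPORT_RECOMMENDED_ARGS=%JVM_SUPPORT_RECOMMENDED_ARGS% -Dcom.sun.management.jmxremote.port=8686 -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=SERVER.DOMAIN.COM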

Create a jmxremote.password file

  1. Copy C:\Program Files\Atlassian\JIRA\jre\lib\management\jmxremote.password.template to C:\Program Files\Atlassian\JIRA\jre\lib\management\jmxremote.password
  2. Edit jmxremote.password to add your credentials
  3. Set permissions on jmxremote.password
    1. Set the owner to the user your Atlassian application runs as
    2. Remove inheriting permissions
    3. Remove all account permissions except for owner
    4. save your settings
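For step 3, a rough command-line equivalent from an elevated prompt is icacls (the service account name is a placeholder; adjust to whatever your Atlassian application runs as):

icacls "C:\Program Files\Atlassian\JIRA\jre\lib\management\jmxremote.password" /setowner "DOMAIN\svc_jira"
icacls "C:\Program Files\Atlassian\JIRA\jre\lib\management\jmxremote.password" /inheritance:r
icacls "C:\Program Files\Atlassian\JIRA\jre\lib\management\jmxremote.password" /grant:r "DOMAIN\svc_jira:(R)"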

Start your service & launch jconsole

Note: Running jconsole.exe -debug is helpful for troubleshooting
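To connect from another machine once the service is back up, point jconsole at the host and port configured above and supply the credentials from jmxremote.password:

jconsole SERVER.DOMAIN.COM:8686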

Use your favorite monitoring tool to collect the stats

SolarWinds SAM module supports JMX

Windows computers not reporting to WSUS

Verify client configuration

Local Computer Policy

Verify Resultant Policy is correct

Verify the correct GPOs are being applied

C:\>gpresult /scope computer
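If you want to double-check what the policy actually wrote, the WSUS settings live in the registry; for example:

C:\>reg query HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate /v WUServer

C:\>reg query HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU /v UseWUServer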

Update Group Policies

C:\>gpupdate /force

Verify connectivity

ping wsus-server-01.domain.com

telnet wsus-server-01.domain.com 8530

If you are using a hosts file and having troubles with resolution, check out this post

Reset the client

wuauclt.exe /resetauthorization /detectnow

Force check in

wuauclt.exe /reportnow

Check WSUS in 10-15 minutes

If you are still having issues check out the client log file:

C:\Windows\WindowsUpdate.log

Windows hosts file not being used for resolution

Windows version: Server 2003 R2 Standard x64 SP2

Verify it’s not working

ipconfig /flushdns

ipconfig /displaydns | more

Check for typos!

Start with the simple solution first

Verify hosts file location

Open Registry Editor

Verify key: My Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DataBasePath

Copy Value data and paste it into Explorer to verify you are editing the correct file

Verify file permissions (This was my issue)

If MACHINE\Users is not granted Read and Read & Execute permissions, add the account.
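The current ACL on the hosts file can be reviewed quickly from a command prompt (the path assumes the default DataBasePath; older systems without icacls can use cacls instead):

icacls "%SystemRoot%\System32\drivers\etc\hosts"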

 

Synology: Remove orphaned ARW files when JPG is deleted in Photo Station 6

Background

Hardware: Synology DS716+
Software: Synology Photo Station 6
Data Files: .jpg & .arw (raw)

The problem

When using a Synology NAS to manage photos via the Photo Station 6 application, deleting a JPG leaves the matching RAW (ARW) file behind.

The solution

Search the photo directory for orphaned .arw files (ones without a matching .jpg), then remove them. While we are at it, let’s record what we delete to a log file.

Deploy an Ubuntu docker image and mount the photos directory
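A minimal way to do that from an SSH session on the NAS; the /volume1/photo host path is an assumption based on the default photo share, and the stock ubuntu image needs Python installed before the script below will run:

# on the Synology
sudo docker run -it --rm -v /volume1/photo:/mnt/photo ubuntu /bin/bash
# inside the container
apt-get update && apt-get install -y python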


Use the code


#!/usr/bin/python
import os

# Directory to scan for orphaned RAW files
rootdir = '/mnt/photo/Dump/2016/2016-02_Muppo-playing'

files = os.listdir(rootdir)
for file in files:
    if file.endswith('.ARW'):
        # Strip the extension so we can check for a matching .JPG
        filename, file_ext = os.path.splitext(rootdir + '/' + file)
        if not os.path.isfile(filename + '.JPG'):
            # No matching JPG: delete the orphaned ARW and log it
            os.remove(rootdir + '/' + file)
            print('REMOVED:' + rootdir + '/' + file)
            # clean-up.log is written to the current working directory
            with open("clean-up.log", "a") as logfile:
                logfile.write("\n")
                logfile.write('REMOVED:' + rootdir + '/' + file)

How to add Domain Admins to sudoers

This process assumes your Linux machine has Centrify Express running on it.

Determine the group name

$adquery user rick -G

domain_admins

domain_users

jira-software-users

Add entry to sudoers file

echo "%domain_admins ALL=(ALL) NOPASSWD: ALL" | sudo tee -a /etc/sudoers

Note: piping through sudo tee is used here because with "sudo echo ... >> /etc/sudoers" the redirection runs as your own user and gets permission denied.
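It is worth validating the sudoers syntax afterwards, since a malformed entry can lock you out of sudo:

sudo visudo -c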


Run nginx in a Docker container on a Synology

In this walkthrough we will perform the following:

Note: The actual nginx configuration will not be covered here.

  1. Deploy the nginx Docker container (vr-ngx-01)
  2. Mount the following folders and file:
    1. /etc/nginx/conf.d/
      1. it’s assumed your site’s .conf file is in this directory
    2. /etc/nginx/certs/
      1. it’s assumed your SSL certs live here and are properly referenced in your /etc/nginx/conf.d/your.site.conf
    3. /etc/nginx/nginx.conf
      1. it’s assumed SSL is configured and includes conf.d/*.conf
  3. Link vr-ngx-01 to the Home-Assistant container (vr-hass-01)
  4. Fire up the container and verify connectivity over a secured connection
  5. Remove local port mapping for vr-hass-01

1. Deploy the container

2. Mount the local folders & file

3. Link vr-ngx-01 to vr-hass-01
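If you’d rather do steps 1-3 from the command line than the Synology GUI, they roughly collapse into a single docker run (the host paths, the 4443 port mapping, and the share layout are assumptions; adjust them to your setup):

sudo docker run -d --name vr-ngx-01 \
  --link vr-hass-01:vr-hass-01 \
  -p 4443:443 \
  -v /volume1/docker/nginx/conf.d:/etc/nginx/conf.d:ro \
  -v /volume1/docker/nginx/certs:/etc/nginx/certs:ro \
  -v /volume1/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx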

4. Verify site loads

Browse to https://YOUR-SYNOLOGY-NAME:4443
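The same check works from a shell if you’d rather script it (the -k flag skips certificate validation, which is handy while testing with a self-signed or mismatched cert):

curl -vk https://YOUR-SYNOLOGY-NAME:4443/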

Note: to make this appear at https://www.virtualrick.com you can configure your router/firewall for port forwarding. Example: external TCP 443 forwards to internal TCP 4443.

5. Remove local port mapping for vr-hass-01

Now that the nginx container is linked to the home-assistant container, there is no need for the home-assistant service port (8123) to be available directly.

Make sure the home-assistant container is turned off, then edit the container and remove the local port configuration.