Control your garage door from your Android watch

The need (or want I should say):

I want to simplify getting into my garage when returning home on my motorcycle. It’s not fun to have to search around for the garage door clicker, or to get off the bike and enter the code… so why not make it so I can use my voice to open it from my Android Wear watch?

What I have:

Watch: Nixon Mission – I’ve used the Moto 360 as well; any Android Wear device should work
Android Phone: Samsung S7 Edge (any Android phone should work)
Android Software: Tasker & Wear Tasker
Garage Controller: Zenotech BeagleBone cape
Command Center: Home Assistant

How it comes together:

The intention of this post is not to hold one’s hand through the process step by step, but to share the framework used to make it work. If you have questions on any specifics, please post them in the comments and I’ll share more detail in that area.

Home Assistant



Script that uses the garage door sensor:

Note: You’ll want to set up certificate-based authentication between your hass (Home Assistant) server and the garage door controller
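The script itself did not survive this post, but as a rough sketch, a command-line cover in Home Assistant that talks to the controller over HTTPS with a client certificate might look like the following. Every entity name, path, and hostname here is an assumption for illustration, not from the original setup:

```yaml
# Hypothetical sketch only -- all names, paths, and URLs are placeholders.
cover:
  - platform: command_line
    covers:
      garage_door:
        friendly_name: Garage Door
        command_open: >-
          curl --cert /etc/homeassistant/certs/client.pem
          https://garage-controller.local/relay/open
        command_close: >-
          curl --cert /etc/homeassistant/certs/client.pem
          https://garage-controller.local/relay/close
        command_state: >-
          curl --cert /etc/homeassistant/certs/client.pem
          https://garage-controller.local/sensor/state
        value_template: '{{ value }}'
```

The `--cert` flag is where the certificate-based authentication from the note above comes in.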



Make an AXL Query Against CUCM with Python

I recently had a need to interact with Cisco Unified Communications Manager (CUCM) in an automated way. Some quick searches returned code that worked a little, but not all the way. Here is the code that I ended up with that works.

#The purpose of this script is to execute a basic query against CUCM via AXL
from suds.client import Client
from suds.sax.element import Element
import base64
import ssl

# Disable certificate verification for the self-signed CUCM certificate
if hasattr(ssl, '_create_unverified_context'):
    ssl._create_default_https_context = ssl._create_unverified_context

cmserver = 'IP-ADDRESS'
cmport = '8443'
wsdl = 'file:///var/www/html/AXLAPI.wsdl'
location = 'https://' + cmserver + ':' + cmport + '/axl/'
username = 'MyAXLUser'
password = 'MyPassword'

def getUser(userName):
    base64string = base64.encodestring('%s:%s' % (username, password)).replace('\n', '')
    authenticationHeader = {
        "SOAPAction": "ActionName",
        "Authorization": "Basic %s" % base64string
    }
    client = Client(wsdl, location=location, username=username,
                    password=password, headers=authenticationHeader)
    # Strip the trailing character and lowercase (see the error notes below)
    userName = str(userName)[:-1].lower()
    agentName = Element('userid').setText(userName)
    getUser = client.service.getUser(agentName)
    try:
        return getUser[0][0][11][0][0]  # the associated device ID
    except (IndexError, TypeError):
        return "no device associated"

print getUser('Rick.Breidenstein')

GitHub Link: Here it is



Snippets of problems and errors along the way

Hopefully providing the errors that I came across will help others with the same issue find a solution faster.

Creating the unverified SSL context resolved the issue below:

Traceback (most recent call last):
File "", line 15, in
result = client.service.getOSVersion()
File "build/bdist.linux-x86_64/egg/suds/", line 535, in __call__
File "build/bdist.linux-x86_64/egg/suds/", line 595, in invoke
File "build/bdist.linux-x86_64/egg/suds/", line 620, in send
File "build/bdist.linux-x86_64/egg/suds/transport/", line 85, in send
File "build/bdist.linux-x86_64/egg/suds/transport/", line 107, in __open
File "/usr/lib/python2.7/", line 429, in open
response = self._open(req, data)
File "/usr/lib/python2.7/", line 447, in _open
'_open', req)
File "/usr/lib/python2.7/", line 407, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/", line 1241, in https_open
File "/usr/lib/python2.7/", line 1198, in do_open
raise URLError(err)

Adding the authentication header resolved the issue below:

Traceback (most recent call last):
File "", line 16, in
result = client.service.getOSVersion()
File "build/bdist.linux-x86_64/egg/suds/", line 535, in __call__
File "build/bdist.linux-x86_64/egg/suds/", line 595, in invoke
File "build/bdist.linux-x86_64/egg/suds/", line 630, in send
File "build/bdist.linux-x86_64/egg/suds/", line 687, in failed
Exception: (401, u'Unauthorized')

Introducing the Element to pass through the suds client resolved the issue below

No handlers could be found for logger "suds.client"
Traceback (most recent call last):
File "", line 29, in
result = client.service.getPhone(name = 'SEPD0C282D1ECE0')
File "build/bdist.linux-x86_64/egg/suds/", line 535, in __call__
File "build/bdist.linux-x86_64/egg/suds/", line 595, in invoke
File "build/bdist.linux-x86_64/egg/suds/", line 630, in send
File "build/bdist.linux-x86_64/egg/suds/", line 681, in failed
File "build/bdist.linux-x86_64/egg/suds/bindings/", line 235, in get_fault
suds.WebFault: Server raised fault: 'No uuid or name element found'

The error below appeared when passing the user name through as a variable. Removing the last character from the string resolved this.

No handlers could be found for logger "suds.client"
Traceback (most recent call last):
File "", line 29, in
result = client.service.getUser(user = 'Tony.Hasting')
File "build/bdist.linux-x86_64/egg/suds/", line 535, in __call__
File "build/bdist.linux-x86_64/egg/suds/", line 595, in invoke
File "build/bdist.linux-x86_64/egg/suds/", line 630, in send
File "build/bdist.linux-x86_64/egg/suds/", line 681, in failed
File "build/bdist.linux-x86_64/egg/suds/bindings/", line 235, in get_fault
suds.WebFault: Server raised fault: 'Item not valid: The specified User was not found'

The error below occurs when you do not create an element for searchCriteria and returnedTags. Reference the listUser function.

No handlers could be found for logger "suds.client"
Traceback (most recent call last):
File "./", line 46, in
File "./", line 42, in listUser
result = client.service.listUser()
File "build/bdist.linux-x86_64/egg/suds/", line 535, in __call__
File "build/bdist.linux-x86_64/egg/suds/", line 595, in invoke
File "build/bdist.linux-x86_64/egg/suds/", line 630, in send
File "build/bdist.linux-x86_64/egg/suds/", line 681, in failed
File "build/bdist.linux-x86_64/egg/suds/bindings/", line 235, in get_fault
suds.WebFault: Server raised fault: 'No Search Criteria Defined'


This is the result of pulling from many different sources. Sorry, I do not have all the links to cite.

Full NGINX Plus Logs in Sumo Logic

You enabled the additional logging per the NGINX documentation for Amplify, and now you want all the metrics to show up in Sumo Logic, right?

Here’s what you came for:

_sourceCategory="NGINX Plus"
| parse regex "^(?<client_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| parse regex "(?<method>[A-Z]+)\s(?<uri>\S+)\sHTTP/[\d\.]+\"\s(?<status>\d+)\s(?<bytes_sent>[\d-]+)\s\"(?<referer>.*?)\"\s\"(?<user_agent>.+?)\"\s\"(?<x_forwarded_for>\S+)\"\s\"(?<host>\S+)\"\ssn=\"(?<server_name>\S+)\"\srt=(?<request_time>\S+)\sua=\"(?<upstream_addr>\S+)\"\sus=\"(?<upstream_status>\S+)\"\sut=\"(?<upstream_time>\S+)\"\sul=\"(?<upstream_length>\S+)\"\scs=(?<cache_status>\S+).*"

Want to play around and learn more about regex? I recommend this site:

NGINX Log File Configuration:
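For reference, the extended log_format that the Amplify documentation has you add looks roughly like this (reproduced from memory, so double-check it against the official docs); the sn=/rt=/ua=/us=/ut=/ul=/cs= fields at the end are exactly what the query above parses:

```
log_format  main_ext  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" '
                      '"$host" sn="$server_name" '
                      'rt=$request_time '
                      'ua="$upstream_addr" us="$upstream_status" '
                      'ut="$upstream_response_time" ul="$upstream_response_length" '
                      'cs=$upstream_cache_status';
```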

NGINX Amplify Agent on Ubuntu LTS 16

mkdir ~/NGINX-Amplify
cd ~/NGINX-Amplify
curl -L -O
sudo apt-get install python-software-properties python2.7

The Output

--- This script will install the NGINX Amplify Agent ---

1. Checking admin user ... root, ok.
2. Checking API key ... using YOURAPIKEY
3. Checking python version ... found python 2.7
4. Checking OS compatibility ... ubuntu detected.
5. Adding public key ... done.
6. Adding repository ... added.
7. Updating repository ...

Get:1 xenial-security InRelease [94.5 kB]
Hit:2 xenial InRelease
Hit:3 xenial-updates InRelease
Hit:4 xenial-backports InRelease
Ign:5 xenial InRelease
Get:6 xenial Release [2,526 B]
Get:7 xenial Release.gpg [287 B]
Get:8 xenial/amplify-agent amd64 Packages [1,744 B]
Get:9 xenial/amplify-agent i386 Packages [1,741 B]
Fetched 101 kB in 0s (113 kB/s)
Reading package lists... Done

7. Updating repository ... done.
8. Installing package ...

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded.
Need to get 3,590 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 xenial/amplify-agent amd64 nginx-amplify-agent amd64 0.39-2~xenial [3,590 kB]
Fetched 3,590 kB in 3s (1,026 kB/s)
Selecting previously unselected package nginx-amplify-agent.
(Reading database ... 60211 files and directories currently installed.)
Preparing to unpack .../nginx-amplify-agent_0.39-2~xenial_amd64.deb ...
Unpacking nginx-amplify-agent (0.39-2~xenial) ...
Processing triggers for systemd (229-4ubuntu8) ...
Processing triggers for ureadahead (0.100.0-19) ...
Setting up nginx-amplify-agent (0.39-2~xenial) ...

8. Installing package ... done.
9. Building configuration file ... done.
10. Checking if sudo -u nginx can be used for tests ... done.
11. Checking if euid 111(nginx) can find root processes ... ok.
12. Checking if euid 111(nginx) can access I/O counters ... ok.
13. Checking connectivity to the receiver ... ok.
14. Checking system time with ntpdate(8) ... failed - no ntpdate installed!

A few checks have failed - please read the warnings above!

To start and stop the Amplify Agent type:

service amplify-agent { start | stop }

Amplify Agent log can be found here:

After the agent is launched, it might take up to 1 minute for this system to appear
in the Amplify user interface.


Launching amplify-agent ...
All done.

Reject Requests without a Host Name Header on NGINX

The Objective: Reject all requests that reach the NGINX server without a host name in the header.

Why it matters: When a request is made via IP address, NGINX returns what it determines to be the “default server” for that IP address. This is often not the desired result. The result we are going for here is to close the connection with the requesting client.

The solution: 

  1. generate a bogus cert and store it in your /etc/nginx/certs/bogus/ (or  whichever folder you use for your certificates)
  2. create a “default.conf” configuration file in your /etc/nginx/conf.d/ (or whichever folder you include in your config)
  3. add the configuration to the “default.conf” file (update it if your folders are different for certs)
  4. test your configuration (/usr/sbin/nginx -t -c /etc/nginx/nginx.conf)
  5. if all is well, restart your service (sudo service nginx restart)
  6. validate it’s working as intended

Code Sample:
server {
    listen 80 default_server;
    server_name "";
    return 444;
}

server {
    listen 443 default_server ssl;
    server_name "";
    ssl_certificate /etc/nginx/certs/bogus/cert.pem;
    ssl_certificate_key /etc/nginx/certs/bogus/privkey.pem;
    return 444;
}




Google OnHub Port Forwarding Not Working

Issue: Port forwarding of external ports to internal devices using the Google OnHub router stopped working

Cause: A device supporting UPnP was recently added to the network. I was able to arrive at this hypothesis because once I swapped in a Linksys router with no static port forwarding configured, the offending device was available and responding externally.

Resolution: Disable UPnP on the offending device.

Atlassian Monitoring with JMX (Java Management eXtension)

Want to know some details on what’s going on with your Atlassian application? (JIRA, Confluence, or any JVM application.)


Add these lines to your Java options:
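The exact lines were lost from this post; the standard JVM system properties for remote JMX look like the following. The port number is an assumption (pick any free port), and the password file path should match the one created in the steps below:

```
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=8099
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.password.file=C:\Program Files\Atlassian\JIRA\jre\lib\management\jmxremote.password
```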

Create a jmxremote.password file

  1. Copy C:\Program Files\Atlassian\JIRA\jre\lib\management\jmxremote.password.template to C:\Program Files\Atlassian\JIRA\jre\lib\management\jmxremote.password
  2. Edit jmxremote.password to add your credentials
  3. Set permissions on jmxremote.password
    1. Set the owner to the user your Atlassian application runs as
    2. Remove inheriting permissions
    3. Remove all account permissions except for the owner
    4. Save your settings

Start your service & launch jconsole

Note: Running jconsole.exe -debug is helpful for troubleshooting

Use your favorite monitoring tool to collect the stats

SolarWinds SAM module supports JMX

Setup GNS3 Server w/ IOU support on a remote server (CLI only)

Get a VPS to host your GNS3 Server (or use an existing server you have)

  1. Go to VPS Dime (This link gives me credit for the referral)
  2. Create an instance using Ubuntu 16.04 LTS (Xenial)

Install GNS3

  • add-apt-repository ppa:gns3/ppa
  • apt-get update
  • apt-get install python3-setuptools python3-pyqt4 python3-ws4py python3-netifaces
  • apt-get install cmake libelf-dev uuid-dev libpcap-dev software-properties-common
  • apt-get install libssl1.0.0/xenial libssl-dev/xenial openssl/xenial
  • apt-get install gns3-server
  • dpkg --add-architecture i386 (note: two dashes before “add”)
  • apt-get update
  • apt-get install gns3-iou
  • setcap cap_net_raw+ep /usr/bin/iouyap (If you want details)

Stuff you need to figure out 🙂

  • Get your IOU images and upload them
  • Install your iou license
    • vi ~/.iourc
    • echo ‘’ >> /etc/hosts
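The ~/.iourc file uses a simple INI format; a sketch with placeholder values (substitute your server’s hostname and your own 16-character license key, and keep the trailing semicolon):

```
[license]
gns3-server = 0123456789abcdef;
```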

Start using your GNS3 server from your GNS3 client

  • Make sure your client and server versions match or it will yell at you

Now to create a Docker container to run this on my Synology instead of an externally hosted VPS.



Backup remote linux machines to Synology

The Purpose

I use a number of Virtual Private Servers (VPS) and wanted to make a backup of the data and applications running on them.

The first step is to make a local copy of your data to a folder on the remote machine; then you can pull those files to the Synology NAS via a scheduled task. For my applications I simply used tar to back up all the directories I care about to a single file, and mysqldump to dump all the databases in the MySQL server to a single file.
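As a sketch of that local-backup step (the directories, backup location, and database credentials are all assumptions; override them or edit in place):

```shell
#!/bin/sh
# Sketch of the local backup step: one dated tarball for the application
# directories, one dump file for all MySQL databases.
# BACKUP_DIR and APP_DIRS are placeholders -- set them to your own paths.
backup_local() {
    BACKUP_DIR="${BACKUP_DIR:-$HOME/backups}"
    APP_DIRS="${APP_DIRS:-/etc/nginx /var/www}"
    STAMP=$(date +%Y%m%d)

    mkdir -p "$BACKUP_DIR"

    # 1. All application directories into a single dated archive
    tar -czf "$BACKUP_DIR/files-$STAMP.tar.gz" $APP_DIRS 2>/dev/null

    # 2. Every database into a single dump (skipped if mysqldump is absent)
    if command -v mysqldump >/dev/null 2>&1; then
        mysqldump --all-databases > "$BACKUP_DIR/mysql-$STAMP.sql" 2>/dev/null || true
    fi
}

backup_local
```

The dated file names mean each run leaves the previous day’s backup in place for the scheduled rsync pull described below.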

Setup Authentication

Generate your keys


  • do not configure with a password
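A typical invocation might be the following (the key file path is an assumption; -N "" produces the passphrase-less key the note above calls for):

```shell
#!/bin/sh
# Generate an RSA key pair with no passphrase (-N "").
# The file name is a placeholder -- use whatever path you prefer.
KEYFILE="${KEYFILE:-$HOME/.ssh/id_rsa_synology}"
mkdir -p "$(dirname "$KEYFILE")"
[ -f "$KEYFILE" ] || ssh-keygen -t rsa -b 4096 -N "" -f "$KEYFILE" -q
```

This leaves the private key at $KEYFILE and the public key at $KEYFILE.pub, which is what gets appended to authorized_keys in the next step.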

Verify sshd is configured to use key files

vi /etc/ssh/sshd_config

AuthorizedKeysFile      %h/.ssh/authorized_keys

Add the public key to ssh authorized_keys

cat >> ~/.ssh/authorized_keys

Copy the private key to the Synology

Use any method you like for this. I simply copied the contents of the private key, pasted it into a file on my local machine, and moved it from there to an existing share on the NAS.

Connect to source machine from Synology and trust the source machine

chmod 400 $AbsolutePathToPrivateKey
ssh -p 22 -i $PRIVATEKEY

The scheduled task

Create the scheduled task

Save the script below locally on the Synology and make it executable.


  1. You may need to enable SSH terminal access on your NAS.
  2. If you edit the script locally on a Windows machine with Notepad++, make sure you change the EOL (End of Line) setting to Unix

/usr/bin/rsync -avz --progress -e "ssh -p $PORT -i $SSHID" $USER@$SERVER:$SOURCE $TARGET >> $LOG 2>&1

Run the script and verify your data is copied