Control your garage door from your Android watch

The need (or want, I should say):

I want to simplify getting into my garage when returning home on my motorcycle. It's not fun to have to search around for the garage door clicker, or to get off the bike and go enter the code... so why not make it so I can use my voice to open it from my Android Wear watch.

What I have:

Watch: Nixon Mission (http://www.nixon.com/us/en/mens-model-mission) – I've used the Moto 360 as well; any Android Wear device should work
Android Phone: Samsung S7 Edge (any Android phone should work)
Android Software: Tasker (https://play.google.com/store/apps/details?id=net.dinglisch.android.taskerm&hl=en) & Wear Tasker (https://play.google.com/store/apps/details?id=com.cuberob.weartasker&hl=en)
Garage Controller: Zenotec BeagleBone cape (http://zenotec.net/)
Command Center: Home Assistant (https://home-assistant.io/)

How it comes together:

The intention of this post is not to hold one's hand through the process step by step, but to share the framework used to make it work. If you have questions about any of the specifics, please post them in the comments and I'll share more detail in that area.

Home Assistant

Configuration

https://github.com/RickB17/home-assistant/blob/master/configuration.yaml

Scripts

Script used to check the garage door sensor:

Note: You'll want to set up certificate-based authentication between your hass (Home Assistant) server and the garage door controller

https://github.com/RickB17/home-assistant/blob/master/Garage-State.sh
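I won't walk through the script line by line, but the general shape of a certificate-authenticated call from the hass box to the controller looks something like the sketch below. The hostname, URL path, and certificate file names are placeholders, not what the linked script actually uses:

# placeholder sketch of a client-certificate (mutual TLS) request to the controller
curl --cacert /etc/ssl/garage/ca.pem \
     --cert /etc/ssl/garage/hass-client.pem \
     --key /etc/ssl/garage/hass-client.key \
     https://garage-controller.local/door/state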

Tasker

https://github.com/RickB17/home-assistant/blob/master/tasker.txt


Make an AXL Query Against CUCM with Python

I recently had a need to interact with Cisco Unified Communications Manager (CUCM) in an automated way. Some quick searches turned up code that partially worked, but not completely. Here is the code that I ended up with that works.


#!/usr/bin/python
# The purpose of this script is to execute a basic query against CUCM via AXL
from suds.client import Client
from suds.sax.element import Element
import base64
import ssl

# CUCM typically presents a self-signed certificate, so skip certificate verification
if hasattr(ssl, '_create_unverified_context'):
    ssl._create_default_https_context = ssl._create_unverified_context

cmserver = 'IP-ADDRESS'
cmport = '8443'
wsdl = 'file:///var/www/html/AXLAPI.wsdl'   # local copy of the AXL WSDL from CUCM
location = 'https://' + cmserver + ':' + cmport + '/axl/'
username = 'MyAXLUser'
password = 'MyPassword'


def getUser(userName):
    # Build the Basic auth header by hand; without it suds returned 401 Unauthorized
    base64string = base64.encodestring('%s:%s' % (username, password)).replace('\n', '')
    authenticationHeader = {
        "SOAPAction": "ActionName",
        "Authorization": "Basic %s" % base64string
    }
    client = Client(wsdl, location=location, username=username,
                    password=password, headers=authenticationHeader)
    # Strip any trailing newline (e.g. when the name is read from a file) and lowercase it
    userName = str(userName).strip().lower()
    # Wrap the userid in an Element; passing plain keyword arguments raised
    # "No uuid or name element found" (see the errors section below)
    agentName = Element('userid').setText(userName)
    try:
        result = client.service.getUser(agentName)
        # Index into the response to pull the first associated device name;
        # the exact indexes depend on the AXL schema your CUCM version returns
        DEVICEID = result[0][0][11][0][0]
        return DEVICEID
    except:
        return "no device associated"


print getUser('Rick.Breidenstein')

GitHub Link: Here it is


Snippets of problems and errors along the way

Hopefully providing the errors that I came across will help others with the same issue find a solution faster.

Creating the unverified SSL context resolved the issue below:

Traceback (most recent call last):
File "test.py", line 15, in
result = client.service.getOSVersion()
File "build/bdist.linux-x86_64/egg/suds/client.py", line 535, in __call__
File "build/bdist.linux-x86_64/egg/suds/client.py", line 595, in invoke
File "build/bdist.linux-x86_64/egg/suds/client.py", line 620, in send
File "build/bdist.linux-x86_64/egg/suds/transport/http.py", line 85, in send
File "build/bdist.linux-x86_64/egg/suds/transport/http.py", line 107, in __open
File "/usr/lib/python2.7/urllib2.py", line 429, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 447, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1241, in https_open
context=self._context)
File "/usr/lib/python2.7/urllib2.py", line 1198, in do_open
raise URLError(err)
urllib2.URLError:

Adding the authentication header resolved the issue below:

Traceback (most recent call last):
File "test.py", line 16, in
result = client.service.getOSVersion()
File "build/bdist.linux-x86_64/egg/suds/client.py", line 535, in __call__
File "build/bdist.linux-x86_64/egg/suds/client.py", line 595, in invoke
File "build/bdist.linux-x86_64/egg/suds/client.py", line 630, in send
File "build/bdist.linux-x86_64/egg/suds/client.py", line 687, in failed
Exception: (401, u'Unauthorized')

Introducing the Element to pass through the suds client resolved the issue below

No handlers could be found for logger "suds.client"
Traceback (most recent call last):
File "test.py", line 29, in
result = client.service.getPhone(name = 'SEPD0C282D1ECE0')
File "build/bdist.linux-x86_64/egg/suds/client.py", line 535, in __call__
File "build/bdist.linux-x86_64/egg/suds/client.py", line 595, in invoke
File "build/bdist.linux-x86_64/egg/suds/client.py", line 630, in send
File "build/bdist.linux-x86_64/egg/suds/client.py", line 681, in failed
File "build/bdist.linux-x86_64/egg/suds/bindings/binding.py", line 235, in get_fault
suds.WebFault: Server raised fault: 'No uuid or name element found'

The error below appeared when passing the user name through as a variable, most likely because the string carried a trailing character such as a newline. Removing the last character from the string (handled with .strip() in the script above) resolved this.

No handlers could be found for logger "suds.client"
Traceback (most recent call last):
File "test.py", line 29, in
result = client.service.getUser(user = 'Tony.Hasting')
File "build/bdist.linux-x86_64/egg/suds/client.py", line 535, in __call__
File "build/bdist.linux-x86_64/egg/suds/client.py", line 595, in invoke
File "build/bdist.linux-x86_64/egg/suds/client.py", line 630, in send
File "build/bdist.linux-x86_64/egg/suds/client.py", line 681, in failed
File "build/bdist.linux-x86_64/egg/suds/bindings/binding.py", line 235, in get_fault
suds.WebFault: Server raised fault: 'Item not valid: The specified User was not found'

The error below occurs when you do not create Elements for searchCriteria and returnedTags when calling the listUser function; a raw SOAP version of the same call is sketched after the traceback.

No handlers could be found for logger "suds.client"
Traceback (most recent call last):
File "./cucm-axl.py", line 46, in
listUser(user[:-1])
File "./cucm-axl.py", line 42, in listUser
result = client.service.listUser()
File "build/bdist.linux-x86_64/egg/suds/client.py", line 535, in __call__
File "build/bdist.linux-x86_64/egg/suds/client.py", line 595, in invoke
File "build/bdist.linux-x86_64/egg/suds/client.py", line 630, in send
File "build/bdist.linux-x86_64/egg/suds/client.py", line 681, in failed
File "build/bdist.linux-x86_64/egg/suds/bindings/binding.py", line 235, in get_fault
suds.WebFault: Server raised fault: 'No Search Criteria Defined'
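For reference, here is roughly what CUCM expects for listUser, expressed as a raw SOAP request with curl instead of suds. The AXL namespace and SOAPAction version, the userid wildcard, and the returned tags are examples; match ver= to your CUCM release:

curl -k -u MyAXLUser:MyPassword \
  -H 'Content-Type: text/xml' \
  -H 'SOAPAction: "CUCM:DB ver=10.5 listUser"' \
  -d '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                        xmlns:ns="http://www.cisco.com/AXL/API/10.5">
        <soapenv:Body>
          <ns:listUser>
            <searchCriteria><userid>Rick%</userid></searchCriteria>
            <returnedTags><userid/><firstName/><lastName/></returnedTags>
          </ns:listUser>
        </soapenv:Body>
      </soapenv:Envelope>' \
  https://IP-ADDRESS:8443/axl/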

References:

This is the result of pulling from a collection of many different sources. Sorry, I do not have links to all of the sites.

Full NGINX Plus Logs in Sumo Logic

You enabled the additional logging per the NGINX documentation for Amplify, and now you want all of those metrics to show up in Sumo Logic, right?

Here’s what you came for:

_sourceCategory="NGINX Plus"
| parse regex "^(?<client_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| parse regex "(?<method>[A-Z]+)\s(?<uri>\S+)\sHTTP/[\d\.]+\"\s(?<status>\d+)\s(?<bytes_sent>[\d-]+)\s\"(?<referer>.*?)\"\s\"(?<user_agent>.+?)\"\s\"(?<x_forwarded_for>\S+)\"\s\"(?<host>\S+)\"\ssn=\"(?<server_name>\S+)\"\srt=(?<request_time>\S+)\sua=\"(?<upstream_addr>\S+)\"\sus=\"(?<upstream_status>\S+)\"\sut=\"(?<upstream_response_time>\S+)\"\sul=\"(?<upstream_response_length>\S+)\"\scs=(?<upstream_cache_status>\S+).*"

The capture group names simply follow the fields of the Amplify main_ext log format; rename them to whatever you want your Sumo Logic fields called.

Want to play around and learn more about RegEx? I recommend you use this site: http://regexr.com/

References:
NGINX Log File Configuration : https://github.com/nginxinc/nginx-amplify-doc/blob/master/amplify-guide.md#additional-nginx-metrics

NGINX Amplify Agent on Ubuntu 16.04 LTS


mkdir ~/NGINX-Amplify
cd ~/NGINX-Amplify
curl -L -O https://github.com/nginxinc/nginx-amplify-agent/raw/master/packages/install.sh
sudo apt-get install python-software-properties python2.7
sudo API_KEY='USEYOURKEY' sh ./install.sh

The Output

--- This script will install the NGINX Amplify Agent ---

1. Checking admin user ... root, ok.
2. Checking API key ... using YOURAPIKEY
3. Checking python version ... found python 2.7
4. Checking OS compatibility ... ubuntu detected.
5. Adding public key ... done.
6. Adding repository ... added.
7. Updating repository ...

Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease
Ign:5 https://packages.amplify.nginx.com/ubuntu xenial InRelease
Get:6 https://packages.amplify.nginx.com/ubuntu xenial Release [2,526 B]
Get:7 https://packages.amplify.nginx.com/ubuntu xenial Release.gpg [287 B]
Get:8 https://packages.amplify.nginx.com/ubuntu xenial/amplify-agent amd64 Packages [1,744 B]
Get:9 https://packages.amplify.nginx.com/ubuntu xenial/amplify-agent i386 Packages [1,741 B]
Fetched 101 kB in 0s (113 kB/s)
Reading package lists... Done

7. Updating repository ... done.
8. Installing package ...

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
nginx-amplify-agent
0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded.
Need to get 3,590 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 https://packages.amplify.nginx.com/ubuntu xenial/amplify-agent amd64 nginx-amplify-agent amd64 0.39-2~xenial [3,590 kB]
Fetched 3,590 kB in 3s (1,026 kB/s)
Selecting previously unselected package nginx-amplify-agent.
(Reading database ... 60211 files and directories currently installed.)
Preparing to unpack .../nginx-amplify-agent_0.39-2~xenial_amd64.deb ...
Unpacking nginx-amplify-agent (0.39-2~xenial) ...
Processing triggers for systemd (229-4ubuntu8) ...
Processing triggers for ureadahead (0.100.0-19) ...
Setting up nginx-amplify-agent (0.39-2~xenial) ...

8. Installing package ... done.
9. Building configuration file ... done.
10. Checking if sudo -u nginx can be used for tests ... done.
11. Checking if euid 111(nginx) can find root processes ... ok.
12. Checking if euid 111(nginx) can access I/O counters ... ok.
13. Checking connectivity to the receiver ... ok.
14. Checking system time with ntpdate(8) ... failed - no ntpdate installed!

A few checks have failed - please read the warnings above!

To start and stop the Amplify Agent type:

service amplify-agent { start | stop }

Amplify Agent log can be found here:
/var/log/amplify-agent/agent.log

After the agent is launched, it might take up to 1 minute this system to appear
in the Amplify user interface.

PLEASE CHECK THE DOCUMENTATION HERE:
https://github.com/nginxinc/nginx-amplify-doc

Launching amplify-agent ...
All done.

Reject Requests without a Host Name Header on NGINX

The Objective: Reject all requests that reach the NGINX server without a host name in the request header

Why it matters: When a request is made via IP address (http://your.add.rress.here), NGINX returns whatever it determines to be the "default server" for that IP address. This is often not the desired result. What we want instead is to close the connection with the requesting client.

The solution: 

  1. generate a bogus cert and store it in your /etc/nginx/certs/bogus/ (or whichever folder you use for your certificates); see the openssl example below
  2. create a "default.conf" configuration file in your /etc/nginx/conf.d/ (or whichever folder you include in your config)
  3. add the configuration to the "default.conf" file (update it if your cert folders are different)
  4. test your configuration (/usr/sbin/nginx -t -c /etc/nginx/nginx.conf)
  5. if all is well, restart your service (sudo service nginx restart)
  6. validate it's working as intended; see the curl check after the code sample
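
For step 1, something like this produces a throwaway self-signed certificate (the paths match the config below; adjust them to your layout):

sudo mkdir -p /etc/nginx/certs/bogus
sudo openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
  -subj "/CN=invalid" \
  -keyout /etc/nginx/certs/bogus/privkey.pem \
  -out /etc/nginx/certs/bogus/cert.pem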

Code Sample:
server {
    listen 80 default_server;
    server_name "";
    return 444;
}

server {
    listen 443 default_server ssl;
    server_name "";
    ssl_certificate /etc/nginx/certs/bogus/cert.pem;
    ssl_certificate_key /etc/nginx/certs/bogus/privkey.pem;
    return 444;
}
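
For step 6, hitting the server by IP address (or any name that does not match a configured server_name) should now just get the connection closed with no response:

# plain HTTP: curl reports "Empty reply from server"
curl -v http://your.add.rress.here/
# HTTPS: -k skips verification of the bogus certificate
curl -vk https://your.add.rress.here/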


References:

  • http://nginx.org/en/docs/http/request_processing.html#how_to_prevent_undefined_server_names

Google OnHub Port Forwarding Not Working

Issue: Port forwarding of external ports to internal devices using the Google OnHub router stopped working

Cause: A device supporting UPnP was recently added to the network. I arrived at this hypothesis because, once I swapped in a Linksys router with no static port forwarding configured, the offending device was reachable and responding externally.

Resolution: Disable UPnP on the offending device.

Atlassian Monitoring with JMX (Java Management eXtension)

Want to know some details on what’s going on with your Atlassian application? (JIRA, Confluence, any JVM application).


Add these lines to your Java options:

-Dcom.sun.management.jmxremote.port=8686
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=SERVER.DOMAIN.COM

Create a jmxremote.password file

  1. Copy C:\Program Files\Atlassian\JIRA\jre\lib\management\jmxremote.password.template to C:\Program Files\Atlassian\JIRA\jre\lib\management\jmxremote.password
  2. Edit jmxremote.password to add your credentials
  3. Set permissions on jmxremote.password
    1. Set the owner to the user your Atlassian application runs as
    2. Remove inherited permissions
    3. Remove all account permissions except for the owner
    4. Save your settings

Start your service & launch jconsole

Note: Running jconsole.exe -debug is helpful for troubleshooting
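
To connect from another machine, point jconsole at the hostname and port set in the Java options above; it will prompt for the user and password you added to jmxremote.password:

jconsole SERVER.DOMAIN.COM:8686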

Use your favorite monitoring tool to collect the stats

SolarWinds SAM module supports JMX

Setup GNS3 Server w/ IOU support on a remote server (CLI only)

Get a VPS to host your GNS3 Server (or use an existing server you have)

  1. Go to VPS Dime (This link gives me credit for the referral)
  2. Create an instance using Ubuntu 16.04 LTS (Xenial)

Install GNS3

  • add-apt-repository ppa:gns3/ppa
  • apt-get update
  • apt-get install python3-setuptools python3-pyqt4 python3-ws4py python3-netifaces
  • apt-get install cmake libelf-dev uuid-dev libpcap-dev software-properties-common
  • apt-get install libssl1.0.0/xenial libssl-dev/xenial openssl/xenial
  • apt-get install gns3-server
  • dpkg --add-architecture i386 (note: that is two dashes before add-architecture)
  • apt-get update
  • apt-get install gns3-iou
  • setcap cap_net_raw+ep /usr/bin/iouyap (If you want details)
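
Once everything is installed, start the server in the foreground to confirm it comes up. Recent releases listen on port 3080 by default; run gns3server --help to see the options for your version:

gns3server
# Ctrl+C to stop; run it in screen/tmux or under systemd once you know it works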

Stuff you need to figure out 🙂

  • Get your IOU images and upload them
  • Install your IOU license
    • vi ~/.iourc (the file layout is sketched below)
    • echo '127.0.0.127 xml.cisco.com' >> /etc/hosts
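
The ~/.iourc file itself is a small ini-style license file. A sketch of the layout; replace the left-hand name with the output of hostname on your server and the 16-character key with your actual license string:

# the name before the "=" must match the server's hostname
cat > ~/.iourc <<'EOF'
[license]
gns3server = 0123456789abcdef;
EOF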

Start using your GNS3 server from your GNS3 client

  • Make sure your client and server versions match or it will yell at you

Now to create a Docker container to run this on my Synology instead of an externally hosted VPS.


Backup remote linux machines to Synology

The Purpose

I use a number of Virtual Private Servers (VPS) and wanted to make a backup of the data and applications running on them.

The first step is to make a local copy of your data in a folder on the remote machine; then you can pull those files to the Synology NAS via a scheduled task. For my applications I simply used tar to back up all the directories I care about into a single file, and mysqldump to dump all the databases on the MySQL server to a single file.
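
Here is a rough sketch of that local-backup step on the remote machine. The directory list and MySQL credentials are examples; the /backups/synology/ target matches the SOURCE path used in the rsync script further down:

#!/bin/bash
# runs on the VPS (e.g. from cron) and drops the archives where the Synology will pull from
mkdir -p /backups/synology
tar -czf /backups/synology/files-$(date +%F).tar.gz /etc/nginx /var/www
mysqldump --all-databases -u backupuser -p'YourPassword' > /backups/synology/mysql-$(date +%F).sql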

Setup Authentication

generate your keys


#ssh-keygen

  • do not configure a passphrase on the key

Verify sshd is configured to use key files


vi /etc/ssh/sshd_config


AuthorizedKeysFile      %h/.ssh/authorized_keys

Add the public key to ssh authorized_keys


cat key.pub >> ~/.ssh/authorized_keys

Copy the private key to the Synology

Use any method you like for this. I personally simply copied the contents of the private key, then pasted it into a file on my local machine and moved it to an existing share on the NAS.

Connect to the source machine from the Synology and trust it


chmod 400 $AbsolutePathToPrivateKey
ssh -p 22 -i $PRIVATEKEY your-user-name@server.your-domain.com

The scheduled task

Create the scheduled task

Save the script below locally on the Synology and make it executable.

Notes:

  1. You may need to enable SSH terminal access on your NAS.
  2. If you edit the script locally on a Windows machine with Notepad++, make sure you change the EOL (End of Line) setting to Unix.


#!/bin/bash
USER="your-user-name"
SERVER="server.your-domain.com"
PORT="22"
SSHID="/volume1/backups/scripts/certificates/server.your-domain.com.privkey.pem"
SOURCE="/backups/synology/"
TARGET="/volume1/backups/server.your-domain.com/"
LOG="/volume1/backups/server.your-domain.com/backup.log"
/usr/bin/rsync -avz --progress -e "ssh -p $PORT -i $SSHID" $USER@$SERVER:$SOURCE $TARGET >> $LOG 2>&1

Run the script and verify your data is copied
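
From an SSH session on the NAS, a first manual run and a quick look at the log (the script path here is just an example of wherever you saved it):

bash /volume1/backups/scripts/backup.sh
tail -n 20 /volume1/backups/server.your-domain.com/backup.log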

References

http://raphael.kallensee.name/journal/how-to-backup-an-external-server-with-a-synology-nas-via-rsync/