Rev.IO and Netbox…

Rev.IO has kind of a terrible asset management interface (and they’ve killed their WWW subdomain without a redirect… but that’s another rant), but we chose it for its ability to handle MSP billing, so while it’s not ideal, it’s something my team has to work with.

So our first task was taking all of the inventory that was in Rev.IO, which has been transitioned into being our asset management platform as well; since inventory is already tracked in Rev.IO for billing purposes, it does not make sense to use another platform given our small volume. The issue lies in how Rev.IO does its asset management: you cannot tie an asset to a physical location. You can add multiple sites, but you cannot associate an inventory item with a physical location. This is where Netbox comes in for us. Since we have all of our physical locations in Netbox, we can associate an asset tag ID and asset ID from Rev.IO with a physical location or customer.

Here is what I did, with Python, to make this work:

First, we have to ensure that we import all of our customers from Rev.IO into Netbox, and we hit two issues with Rev.IO here: their documentation indicates that the ALL flag will get you customers in every status (OPEN, CLOSED, PENDING, etc.), but this is untrue; the ALL flag actually returns 0 records, so you must query each status individually to get them all.

We created a custom field in Netbox called revio that holds the customer_id from Rev.IO, which lets us pivot on that ID.
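If your Netbox is new enough to manage custom fields through the API (2.10+), you can create that field programmatically too; a minimal sketch, assuming a placeholder Netbox URL and token (on Netbox 4.x the content_types key was renamed to object_types):

import requests

netbox = "https://netbox.example.com/api/"       # placeholder: your Netbox instance
netbox_h = {"Authorization": "Token <api key>"}  # placeholder: your API token

# Create an integer custom field named 'revio' on tenants
# to hold the Rev.IO customer_id
cf = {"name": "revio",
      "type": "integer",
      "content_types": ["tenancy.tenant"]}
r = requests.post(netbox + "extras/custom-fields/", json=cf, headers=netbox_h)
print(r.status_code, r.text)

On older versions, just create it by hand in the admin UI instead.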

#!/usr/bin/python
import re
import requests

# API endpoints and auth headers: placeholders, fill in for your environment
url = "https://restapi.rev.io/v1/"               # Rev.IO REST API base
headers = {"Authorization": "Basic <user:pass>"} # Rev.IO credentials
netbox = "https://netbox.example.com/api/"       # Netbox API base
netbox_h = {"Authorization": "Token <api key>"}  # Netbox API token

# Get customer list from Rev.IO and ensure they are all in Netbox
# (change search.status to CLOSED, PENDING, etc. and re-run to get everyone)
r_parms = {"search.page_size": "100000", "search.status": "OPEN"}
response = requests.get(url + "Customers", headers=headers, params=r_parms)
netbox_r = requests.get(netbox + "tenancy/tenants/?limit=10000", headers=netbox_h)
# Read JSON in response
data = response.json()
# Read JSON in netbox
netbox_d = netbox_r.json()
# Iterate over the Rev.IO customers
customercount = 0
for i in data['records']:
        customercount += 1
        revio_id = i['customer_id']
        netbox_hasit = False
        for n in netbox_d['results']:
                if n['custom_fields']['revio'] == revio_id:
                        netbox_hasit = True
                        break
        if not netbox_hasit:
                netbox_name = i['service_address']['company_name']
                print("No Netbox entry for " + netbox_name + " - " + str(revio_id) + " - Adding it")
                # Strip characters that are not valid in a Netbox slug
                netbox_slug = re.sub(r'[!@#$\'\".,&()]', '', netbox_name)
                netbox_slug = netbox_slug.replace("/", "-")
                # Keep the original company name; the slug gets the sanitized form
                netbox_p = {'name': netbox_name,
                            'slug': netbox_slug.replace(" ", "-"),
                            'custom_fields': {'revio': revio_id}}
                sc = requests.post(netbox + "tenancy/tenants/", json=netbox_p, headers=netbox_h)
                print(sc.text)
print("Processed " + str(customercount) + " Rev.IO customers")

The above will check Netbox for existing customers (tenants) with a matching custom_field value, and add any that are missing. Again, you have to change the search.status parameter from OPEN to CLOSED, etc., to get everyone.
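Rather than editing the script for each run, you can loop over the statuses; a sketch reusing url, headers, and the tenant check/create logic from the script above (add whatever other statuses your account uses):

# Query each status individually, since search.status=ALL returns 0 records
for status in ("OPEN", "CLOSED", "PENDING"):
        r_parms = {"search.page_size": "100000", "search.status": status}
        response = requests.get(url + "Customers", headers=headers, params=r_parms)
        for i in response.json()['records']:
                pass  # same tenant check/create logic as above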

More to follow in a later post.

HashiCorp Vault

This is more of just a quick note to remember some things for the LDAP configuration when NOT using Microsoft AD.

For OpenLDAP/FreeIPA, this is what you need for correct group listing/membership:

Group Filter: (|(memberUid={{.Username}})(member={{.UserDN}})(uniqueMember={{.UserDN}}))
Group Attribute: cn
Group DN: cn=groups,cn=accounts,dc=<your domain>,dc=<your suffix>
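For reference, those values map onto Vault's auth/ldap config parameters; a sketch of setting them via the CLI (the hostname and DNs are placeholders for your environment):

vault write auth/ldap/config \
    url="ldaps://ipa.example.com" \
    userdn="cn=users,cn=accounts,dc=example,dc=com" \
    userattr="uid" \
    groupdn="cn=groups,cn=accounts,dc=example,dc=com" \
    groupattr="cn" \
    groupfilter="(|(memberUid={{.Username}})(member={{.UserDN}})(uniqueMember={{.UserDN}}))"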

Oxidized Container via Podman

I hadn’t really dived into familiarizing myself with Podman before; however, it offers some really unique advantages over, say, Docker. Firstly, Docker requires you to run a daemon to manage your containers, whilst Podman can start individual containers at boot via systemd. This is a huge benefit, so it looks like I’ll be moving most of my Docker containers over to Podman management. Podman is very easy to understand, since if you understand Docker, you understand Podman; the commands are even the same.

So on to Oxidized, which is a RANCID replacement (thank god). It has a great community around it, supports lots and lots of different device types, and works great with Gitlab.
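One note: the unit below only starts and stops an existing container, so the container has to be created once first; something like the following, with the volume path and port taken from the Oxidized Docker image docs (adjust for your setup):

podman run -d --name oxidized \
    -v /etc/oxidized:/home/oxidized/.config/oxidized \
    -p 8888:8888 \
    docker.io/oxidized/oxidized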

So the important part for me was to get the systemd unit set up for Oxidized, and here’s what that looks like:

more /etc/systemd/system/oxidized.service 
[Unit]
Description=Podman container-oxidized.service
Documentation=man:podman-generate-systemd(1)
Wants=network.target
After=network-online.target

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start oxidized
ExecStop=/usr/bin/podman stop -t 10 oxidized
ExecStopPost=/bin/rm -rf /etc/oxidized/pid
KillMode=none
Type=forking
PIDFile=/var/run/containers/storage/overlay-containers/.../userdata/conmon.pid

[Install]
WantedBy=multi-user.target

The command to initially generate this was:

podman generate systemd --name oxidized

However, we also want systemd to remove Oxidized’s pid file on stop, as it is sometimes not cleanly removed; that is why

ExecStopPost=/bin/rm -rf /etc/oxidized/pid

has been added. Finally, save this file to, say:

/etc/systemd/system/oxidized.service

And enable/start it via systemctl:

systemctl daemon-reload
systemctl enable oxidized
systemctl start oxidized

Site Back

Well, this site is back. My career took me away for a while, but I’m moving back into the FOSS space a bit, so hopefully we can get this updated.

Also, it’s now running on an RPi 4B instead of a standard server, which I guess is neat… but in reality, you just don’t need to burn the extra power these days, even if you want to host your own site.

Dovecot 2.0.X for CentOS 6 with SSLv3 Disabled [ POODLE ]


Dovecot 2.0.X, which ships with CentOS 6, does not have a flag to disable SSLv3 (which is now broken by POODLE). Attached below are recompiled versions with SSLv3 disabled, along with the SRC RPM.

dovecot-2.0.9-8.el6.x86_64.rpm
dovecot-devel-2.0.9-8.el6.x86_64.rpm
dovecot-mysql-2.0.9-8.el6.x86_64.rpm
dovecot-pgsql-2.0.9-8.el6.x86_64.rpm
dovecot-pigeonhole-2.0.9-8.el6.x86_64.rpm
dovecot-2.0.9-8.el6.src.rpm
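If you want to confirm SSLv3 is actually refused after installing these, you can test the handshake with an OpenSSL build that still supports the protocol (the stock CentOS 6 one does); against the rebuilt Dovecot it should fail (the hostname is a placeholder):

openssl s_client -connect mail.example.com:993 -ssl3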

Gluster 3.4.X is a turd


If you’re on 3.3 or older, skip 3.4.X entirely or upgrade through it quickly; it is full of memory leaks and bugs, and 3.5.1+ is _considerably_ more stable. 3.4.4-1 in particular has a massive memory leak that will eventually consume all available memory and crash. Avoid it at all costs.