OpenVAS on the Raspberry Pi 4 works really well!

I’ve been trying to set up OpenVAS on a tiny PC like the RPi for a while now, based on this post: https://dayne.broderson.org/2018/05/24/RPi_Vulnerability_Scanner.html

I wasn’t expecting much success, and that’s exactly what I found: it wasn’t really usable. The TinkerBoard, with its extra performance and RPi compatibility, looked like a good next thing to try, but I was never able to get a working mix of software on it. The repositories aren’t quite the same, and some of the necessary packages, OpenVAS in particular, aren’t maintained.

Then the RPi4 was announced. I knew this might be the ticket to making this work. 4GB of RAM!!! Unfortunately, the 4GB model isn’t available yet, as far as I can tell. I decided to wait. Then I found out my sometimes partner in crime, Steve, had ordered a pair of 2GB models. Of course, I asked if I could borrow one.

I’m happy to report that the install is simple, and the Pi was able to scan my /24, which averages about 75 IPs, in about 3 hours! I didn’t modify anything performance-related and didn’t hit any of the problems Dayne referenced.

I do need to sort through a few logistical issues to make this work the way I’m imagining. For one thing, I want to run it headless. No problem, except OpenVAS (specifically the GSA web management interface) is finicky about identifying the IP address it’s listening on. So far I have to set it manually and haven’t figured out how to make it work with 0.0.0.0. I’ll find a way. I also had a problem with the management interface failing, I think due to memory starvation. The scan continues to run, so it’s not a showstopper, and I’m hoping the 4GB model will help with that. I also think it’ll be worth throwing some heatsinks on; it seemed to get pretty hot.

Without further preamble, here are the steps I took. This is very similar to Dayne’s post with a few exceptions:

sudo apt update
sudo apt upgrade
sudo apt autoremove # habit for me
sudo apt-get install openvas
sudo openvas-setup # this took a good hour, maybe more, to run. Lots of errors, but it seems to have been ok.
sudo openvas-start

This is the part I haven’t sorted out yet. You need to update the service config files to reflect something other than 127.0.0.1. I tried 0.0.0.0 and was unsuccessful; when I changed it to the DHCP-assigned IP address, it worked. I don’t see that as a good solution since I intend to use this in different environments. Regardless, here are the commands until I sort out the right answer:

sudo nano /lib/systemd/system/greenbone-security-assistant.service
sudo systemctl daemon-reload
sudo service greenbone-security-assistant restart

sudo nano /lib/systemd/system/openvas-manager.service
sudo systemctl daemon-reload
sudo service greenbone-security-assistant restart
sudo service openvas-manager restart
sudo service openvas-scanner restart

And the GSA service line I edited via the above nano command:
ExecStart=/usr/sbin/gsad --foreground --listen=0.0.0.0 --port=9392 --mlisten=0.0.0.0 --mport=9390 --allow-header-host=192.168.169.198

The --allow-header-host value is the problem I need to fix. I’ll update as I make improvements. One of my goals is to attach a small LCD that will display the IP address.
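In the meantime, here’s an untested sketch of the direction I’m leaning: a small boot script that looks up the current address and regenerates a systemd drop-in for gsad so --allow-header-host always matches. The hostname -I trick and the drop-in path are my own assumptions, not anything from the OpenVAS docs:

#!/bin/sh
# Untested sketch: rebuild the gsad override at boot to match the current IP
IP=$(hostname -I | awk '{print $1}')
mkdir -p /etc/systemd/system/greenbone-security-assistant.service.d
cat > /etc/systemd/system/greenbone-security-assistant.service.d/override.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/sbin/gsad --foreground --listen=0.0.0.0 --port=9392 --mlisten=0.0.0.0 --mport=9390 --allow-header-host=${IP}
EOF
systemctl daemon-reload
systemctl restart greenbone-security-assistant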

Grafana and Chromecasting – Part 2

Following up on my previous post about running Grafana out to a Chromecast dongle on a TV, I’ve now tested it going directly to a Vizio 4K TV with built-in Chromecast functionality. It works fine and is actually pretty responsive. Unfortunately, it appears to be displaying at 1080p. I haven’t tested this much, so it might be possible to push it to 4K.

This is the tool I’m using on the server side: https://mrothenbuecher.github.io/Chromecast-Kiosk/

This opens up the possibility of using smaller, low-end TVs for Grafana directly, but also for other signage or metrics displays. One reason I’ve been interested in this for a while is running displays in a service desk/call center environment. Previously, you had to drive video from dedicated PCs that cost a couple hundred dollars each. With this solution you only need a TV with Chromecast built in, plus the Kiosk program running on an Ubuntu VM. You can then have multiple feeds running to different TVs.

I think the Chromecast implementation might still be a little finicky with picking a resolution, but it should be consistent across the same model of TV.

[Photo: Grafana through the built-in Chromecast; my current layout, optimized for 1080p]

OpenVAS for simple vulnerability scanning

I’ve been looking for a simple security vulnerability scanning tool for a while now. OpenVAS looked promising in the past, but I always had trouble getting it to work. I decided to work through it this weekend and figure out what I was doing wrong. In a nutshell, here it is:

GSM Community Edition and lagging OpenVAS Plugin Feed

The bottom line is that the free community version only updates the feed daily. Per the link, you can manually force it at initial setup and then wait about 30 minutes for the feed to download. That’s what I did, and now I have an excellent scanner! I also now have a list of things to fix on my home network. 🙂
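For reference, on a Debian-based install like the RPi setup above (the GSM appliance does this through its own menus), forcing a sync looks something like this:

sudo greenbone-nvt-sync        # NVT (plugin) feed
sudo greenbone-scapdata-sync   # SCAP data
sudo greenbone-certdata-sync   # CERT advisories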

Plex, Channels and trying to get the two to work together

For a while now I’ve been running Plex as my primary media server. I’ve been trying to use the Plex DVR, but found it to be very finicky. Mostly, it would fail to record shows without telling me in any way; I’d review my upcoming recordings, check back a couple of days later, and they just weren’t there. No notice, nothing. I’ve also been using the Channels app on the AppleTV for live TV. Part of my frustration with Plex DVR is that channel tuning was always slow. I think if it had worked well otherwise I probably would have stuck with it, but that’s not the case. So, I decided to try Channels DVR. What a difference. All I can say is it just seems to work: everything gets recorded, channel changes on live TV are fast, and it’s been an excellent experience overall. I had one issue to overcome, which is how to get the Channels recordings into Plex in an automated way. Channels DVR has a means of managing recordings and will tag commercials with chapter markers, but that only works inside the Channels DVR app.

I have Channels DVR running in a Docker container on Unraid. Probably not necessary, but it’s handy, and it was super simple to install. The problem is that Channels DVR wants to record everything to a “TV” folder within the directory you point it at for recordings. In my case, I have a couple of Unraid user shares related to TV: the main TV show storage at /mnt/user/TVShows and the recordings directory at /mnt/user/LiveTV. This means the Channels DVR recordings go into /mnt/user/LiveTV/TV/showname.
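For reference, the Unraid template handles all of this for you, but the equivalent docker run is roughly the following; the host paths are from my setup and the mount targets are my best guess, so treat it as a sketch. Host networking lets the DVR discover tuners; the first volume holds the config and database, the second is the recording root that the TV subfolder lands under:

docker run -d --name channels-dvr \
  --network host \
  -v /mnt/user/appdata/channels-dvr:/channels-dvr \
  -v /mnt/user/LiveTV:/shares/DVR \
  fancybits/channels-dvr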

The fix ended up being pretty simple. I run Plex on an Ubuntu server. Here’s my fstab:

//192.168.169.8/DVDs /mnt/DVDs cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/KidDVDs /mnt/KidDVDs cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/Movies /mnt/Movies cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/TVShows /mnt/TVShows cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/Pictures /mnt/Pictures cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/Music /mnt/Music cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/LiveTV/TV /mnt/DVR cifs guest,uid=1000,iocharset=utf8 0 0

Inside Plex I have the TV Shows library mapped to both /mnt/TVShows and /mnt/DVR. Plex’s autodiscovery scans both folders just fine and coalesces the shows from both locations. I still need to figure out the comskip piece, but hitting the jump button is fine for now. In retrospect, I probably could have just pointed Channels at TVShows and let it create a new directory in there, but this way keeps the folder structures a little cleaner.

Free Veeam Linux backups

I didn’t feel like fighting with Nutanix CE any more, so I copied my VMs out using qemu-img and rebuilt the entire cluster with ESXi 6.7 free. It seems to be ok so far, with a few minor issues. Now that I’m back in operation, I’m concerned about having a single copy of my VMs on standalone hosts with single-disk RAID0 configs. A bit of a roll of the dice, but it works. So, I need a backup solution. Free ESXi doesn’t support API connections for backups, so you have to use something else. I’m now running the free Veeam Agent for Linux on each of my VMs and sending the backups to an NFS share on the Unraid server. Here’s my list of commands so I can do this again later and not wonder how I did it.

Make sure we’re updated before starting:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get autoremove

SCP the Veeam download file to the server; I think it might just be a repository and user-add script.

sudo dpkg -i veeam-release-deb_1.0.5_amd64.deb
sudo apt-get update
sudo apt-get install veeam

Next we’ll take care of the NFS share mounting.

sudo apt-get install nfs-common
sudo mkdir /mnt/backup
sudo nano /etc/fstab

Here’s my one fstab line for connecting to the Unraid NFS share:

192.168.169.8:/mnt/user/vmware /mnt/backup nfs auto 0  0

Mount the new share:

sudo mount -a

You should now be able to run the Veeam agent with “sudo veeam”, which launches the ncurses GUI. It’s pretty obvious how to work through it from there: just select local storage and browse to /mnt/backup, or wherever you mounted the share.
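The UI can schedule backups itself, but the agent also ships a command-line tool if you’d rather drive it from cron. Something like this should work; “BackupJob1” is a placeholder for whatever name you gave the job:

sudo veeamconfig job list
# then, in root's crontab (sudo crontab -e), run it nightly at 01:30:
30 1 * * * /usr/bin/veeamconfig job start --name BackupJob1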

Nutanix CE, Dell H700 and disk detection

Not sure why I didn’t post this before, but here it is. When installing Nutanix CE on a Dell server (an R710 in my case) with a PERC H700 card, you can’t do pass-through on the disks; you have to configure each one as a single-disk RAID0 in the PERC config (Ctrl-R at boot for the H700). That works, but then CE can’t detect what type of drive each virtual disk is, since the controller reports everything as rotational, and the install will fail with a message about missing requirements. You have to tell the install routine which drive is actually the SSD.

Exit the install process and login as root:nutanix/4u.

dmesg | grep sda

You might need to do this for sda, sdb, sdc, and so on, until you work out which device is the SSD. The drive sizes should tell you which is which.

echo 0 > /sys/block/sd?/queue/rotational

Where sd? is replaced with the SSD’s device, so the installer stops treating it as rotational. Logout and log back in as install; it should be fine from there. This is with the img install, not the iso. I did get a similar error on the iso but didn’t try to work around it there. I’ll see if it works the same way.
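As an aside, a quick way to see what the kernel currently reports for every disk (just a convenience one-liner, nothing Nutanix-specific):

for f in /sys/block/sd*/queue/rotational; do echo "$f: $(cat $f)"; done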

Ok, not bailing on Nutanix CE just yet

Since my last post I’ve been fighting to get the cluster back to some semblance of stability. It doesn’t look like that’s going to happen. However, there isn’t anything out there that approaches the ease of configuration, fault tolerance, and hyperconverged storage I’m looking for. So… I’m making sure I have a backup copy of my VMs that would be problematic to rebuild (especially Grafana), and then I’ll blow away the entire cluster and start over from scratch.

First challenge is backups.  I found this link which describes what appears to be the simplest process:  Backup and restore VMs in Nutanix CE

I’ve had to modify this a bit for my particular circumstances, but it appears to be working well. First, I’m lucky enough to have an Unraid NAS that holds all my media: plenty of space, and it happens to support NFS. With some trial and error and the instructions above, I managed to work out the following command:

qemu-img convert -c nfs://127.0.0.1/default-container-46437110940265/.acropolis/vmdisk/be09f08c-56bf-472c-b8a6-16c022333ca5 -O qcow2 nfs://192.168.169.8/mnt/user/vmware/UnifiVideo.qcow2

Using this command and the previous instructions I was able to send the backup qcow2 image directly to Unraid for backup purposes.  It sure would be nice to have a checkbox option, but this will do for now.  Also, be sure to use the screen option in the other post because it’s likely your session will time out on modestly sized VMs.
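For example, on the CVM (assuming screen is available there):

screen -S vmbackup
# run the qemu-img convert command from above inside the session
# detach with Ctrl-a d; reattach later with:
screen -r vmbackup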

Bailing on Nutanix CE

I’ve had a number of problems with CE lately that look like they’ll be difficult to fix.  Not sure I’ll have a better experience with a different platform, but I figure it’s about time to try.  If nothing else, I’ll have some backup copies of my VMs, which is not an obvious thing in CE.

So, here’s what I’ve had to go through to export VMs:

Log into a CVM and then run “acli” to find the vmdisk_uuid. Do a “vm.get [vm name] include_vmdisk_paths=1” to see a list of parameters, copy the vmdisk_uuid from about the middle of the output, and exit the acli.
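As a rough sketch of that session (the VM name here is hypothetical):

nutanix@cvm$ acli
<acropolis> vm.get MyVM include_vmdisk_paths=1
# copy vmdisk_uuid from about the middle of the output
<acropolis> exit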

Run this:

qemu-img convert -c nfs://127.0.0.1/PrismCentral-CTR/.acropolis/vmdisk/[vmdisk_uuid from the previous step] -O qcow2 ./data/stargate-storage/disks/drive-scsi0-0-0-1/NameOfVM.qcow2

This command might be different depending on your setup. I’m running Prism Central, so I had to use this location to find the disk; the path is listed just above the uuid in the acli output. The output target will need to be adjusted depending on your space requirements. If you leave out the target destination, I believe it saves to your current directory, which might be ok or might be too small for the VM. I checked mine and decided to save it to the slow-tier disk. Depending on the disk size, it might take a very long time.

Once it completes you can use SCP (not SFTP) to copy the file off. I used WinSCP to connect to the same CVM. The path from the above command is /home/nutanix/data/stargate-storage/disks/drive-scsi0-0-0-1; the disk copy is in there, and I can SCP it somewhere else. I tried sending it directly to an NFS share I have running on the network, but it failed on permissions despite being whitelisted. This process is cumbersome, but it works. I’m sure there are better ways…
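From a Linux box, the same copy from the command line would look something like this (the CVM address is a placeholder):

scp nutanix@<cvm-ip>:/home/nutanix/data/stargate-storage/disks/drive-scsi0-0-0-1/NameOfVM.qcow2 .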

Grafana and Chromecasting to a TV

I’ve wanted to use simple Chromecast dongles for pumping a Grafana dashboard to a TV for a while now. The challenge has been how to effectively manage the casting source. Chromecasts can’t manage any of their own content; they can only be a casting target. I don’t want a mobile device sitting in the rack with its sole purpose being the casting function, since managing that would be difficult. I also want to be able to cast to multiple Chromecasts with the same content or different content.

Google makes this difficult by limiting the signing certificate in the casting protocol.  However, some people have worked around it.  I’ve tried two different casting servers and I’m having success with:

https://mrothenbuecher.github.io/Chromecast-Kiosk/

I set up a dedicated VM with pretty light resources, installed Tomcat, and then added the Kiosk server. It works really well with one caveat, which I’ll get to after the setup sketch below.
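The deployment was roughly the following on Ubuntu; the WAR filename is a placeholder since it depends on the release you grab from the project page:

sudo apt-get install tomcat8
# drop the Chromecast-Kiosk WAR from the project's releases into webapps
sudo cp chromecast-kiosk.war /var/lib/tomcat8/webapps/
sudo service tomcat8 restart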

The Chromecast dongles will arbitrarily decide if the TV is 720P or 1080P.  For most video content this doesn’t have a dramatic impact, but when you’re trying to display a dense Grafana dashboard it can make all the difference.  Unfortunately, this isn’t controllable in any way.  You have to test it against the TV and hope it works.

I now have a 32″ TV in the kitchen which is 1080P (also hard to find at 32″) and displaying a pretty dense Grafana dashboard.  I’ll try to add a picture here later.  I think this could be incredibly useful for business monitoring scenarios and is a lot less expensive than putting a PC on a TV.

Ubiquiti USG site to site VPN with a single controller

Quick note about how to make this work. If you want two Unifi Security Gateways connecting to a single controller at one location, you need to open up a couple of ports: specifically, 8080 and 8443 need to be reachable on the controller. I strongly suggest you have a fixed IP at the remote side and lock down the ACL (port forward) to only allow traffic to 8080 and 8443 from that remote public IP. Once that’s in place, the remote USG can be adopted using the controller’s public IP. Be sure to add it to a different site.
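The adoption itself is the standard UniFi set-inform dance from the remote USG (default login ubnt/ubnt), pointed at the controller’s public IP, roughly:

ssh ubnt@<remote-usg-ip>
set-inform http://<controller-public-ip>:8080/inform
# then click Adopt in the controller UI and repeat the set-inform if prompted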

After adoption is successful in the controller, turning on the site-to-site VPN is trivial. In Networks, you create a new network and select Site-to-Site VPN from the “home” site’s network configuration. You should see the new remote site listed at the bottom. Simple as that.