Plex, Channels and trying to get the two to work together

For a while now I’ve been running Plex as my primary media server. I’ve been trying to use the Plex DVR, but found it to be very finicky. Mostly, it would fail to record shows without telling me in any way. I’d review my upcoming recordings, check back a couple of days later, and they just weren’t there. No notice, nothing. I’ve also been using the Channels app on the Apple TV for live TV. Part of my frustration with Plex DVR is that channel tuning was always slow. I think if it worked well otherwise I probably would have stuck with it, but that’s not the case.

So, I decided to try Channels DVR. What a difference. All I can say is it just seems to work. Everything appears to be getting recorded, channel changes on live TV are fast, and it’s really been an excellent experience. The one issue I had to overcome was getting the Channels recordings into Plex in an automated way. Channels DVR has a means of managing recordings and will tag commercials with chapter markers, but that’s only within the Channels DVR app.

I have Channels DVR running in a Docker container on Unraid. Probably not necessary, but it’s handy, and it was super simple to install. The problem is that Channels DVR wants to record everything to a “TV” folder within whatever directory you point it at for recordings. In my case, I have a couple of Unraid user shares related to TV: the main TV show storage at /mnt/user/TVShows and the recordings directory at /mnt/user/LiveTV. This means the Channels DVR recordings go into /mnt/user/LiveTV/TV/showname.

The fix ended up being pretty simple. I run Plex on an Ubuntu server. Here’s my fstab:

//192.168.169.8/DVDs /mnt/DVDs cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/KidDVDs /mnt/KidDVDs cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/Movies /mnt/Movies cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/TVShows /mnt/TVShows cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/Pictures /mnt/Pictures cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/Music /mnt/Music cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/LiveTV/TV /mnt/DVR cifs guest,uid=1000,iocharset=utf8 0 0

Inside Plex I have the TV shows library mapped to both /mnt/TVShows and /mnt/DVR. Plex’s autodiscovery scans both folders just fine and coalesces the shows from both locations. I still need to figure out commercial skipping (comskip), but hitting the jump button is fine for now. In retrospect, I probably could have simply pointed Channels at TVShows and let it create a new directory in there, but this way keeps the folder structures a little cleaner.
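One gotcha with cifs mounts is that if the share isn’t up when Plex scans, the library just sees an empty directory. Here’s a minimal sketch of a pre-scan check; the mounts file is parameterized only so the function can be tried against a sample, and on a real box it would read /proc/mounts:

```shell
# is_mounted MOUNTPOINT [MOUNTS_FILE]
# Returns 0 if MOUNTPOINT appears as a mounted filesystem.
is_mounted() {
    mp="$1"
    mounts="${2:-/proc/mounts}"
    # Field 2 of /proc/mounts is the mount point.
    awk -v mp="$mp" '$2 == mp { found = 1 } END { exit !found }' "$mounts"
}

# Example: remount /mnt/DVR from fstab if it dropped, then let Plex scan.
# is_mounted /mnt/DVR || sudo mount /mnt/DVR
```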

Free Veeam Linux backups

I didn’t feel like fighting against Nutanix CE any more, so I copied my VMs out using qemu-img and rebuilt the entire cluster with ESXi 6.7 free. Seems to be OK so far, with a few minor issues. Now that I’m back in operation, I’m concerned about having a single copy of my VMs on standalone hosts with single-disk RAID0 configs. A bit of a roll of the dice, but it works. So, I need a backup solution. The free ESXi license doesn’t support the backup APIs, so you have to use something else. I’m using Veeam Agent for Linux Free on each of my VMs now, sending the backups to an NFS share on the Unraid server. Here’s my list of commands so I can do this again later and not wonder how I did it.

Make sure we’re updated before starting:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get autoremove

SCP the Veeam download file to the server; I believe it’s just a package that adds Veeam’s repository. Then install:

sudo dpkg -i veeam-release-deb_1.0.5_amd64.deb
sudo apt-get update
sudo apt-get install veeam

Next we’ll take care of the NFS share mounting.

sudo apt-get install nfs-common
sudo mkdir /mnt/backup
sudo nano /etc/fstab

Here’s my one fstab line for connecting to the Unraid NFS share:

192.168.169.8:/mnt/user/vmware /mnt/backup nfs auto 0 0

Mount the new share:

sudo mount -a

You should now be able to run the Veeam agent with “sudo veeam”, which will launch the ncurses GUI. It’s pretty obvious how to work through it from there; just select local storage and browse to the /mnt/backup location, if that’s what you named it.
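For unattended runs, the agent also ships a CLI (veeamconfig) alongside the TUI. Here’s a rough sketch, assuming a job named “LinuxBackup” was already created in the UI; the job name is my placeholder, not anything Veeam creates by default:

```shell
# Run the backup job only if the NFS target is actually mounted,
# trying a remount from fstab first if it dropped.
run_backup() {
    mountpoint -q /mnt/backup || mount /mnt/backup || return 1
    veeamconfig job start --name LinuxBackup
}
```

A cron entry could then call run_backup from a small script instead of relying on the agent’s own scheduler.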

Nutanix CE, Dell H700 and disk detection

Not sure why I didn’t post this before, but here it is. When installing Nutanix CE on a Dell server (an R710 in my case) with a PERC H700 card, you can’t do a pass-through on the disks. You have to configure each one as a single-disk RAID0 in the PERC config (Ctrl-R at boot for the H700). That works, but then CE can’t detect what type of drive each one is, and it will fail with a message about missing requirements. You have to tell the install routine which drive is the SSD (non-rotational) one.

Exit the install process and log in as root with the password nutanix/4u.

dmesg | grep sda

You might need to do this for sda, sdb, sdc, and so on until you find the SSD. The drive sizes should tell you which is the right one.

echo 0 > /sys/block/sd?/queue/rotational

Replace sd? with the correct device; this flips its rotational flag to 0, marking it as an SSD. Log out and log back in as install, and it should be fine from there. This was done with the img install, not the iso. I did get a similar error on the iso but didn’t try to work around it; I’ll see if it works the same way there.
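To make the dmesg hunting less tedious, the check can be rolled into a quick loop that prints each disk’s flag. The sysfs root is a parameter here only so the snippet can be tried outside the installer; normally it defaults to /sys/block:

```shell
# Print each sd* device's rotational flag: 1 = spinning HDD, 0 = SSD.
list_rotational() {
    sysblock="${1:-/sys/block}"
    for dev in "$sysblock"/sd*; do
        # Skip anything without a queue/rotational entry (or an unmatched glob).
        [ -e "$dev/queue/rotational" ] || continue
        printf '%s rotational=%s\n' "${dev##*/}" "$(cat "$dev/queue/rotational")"
    done
}
```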

Ok, not bailing on Nutanix CE just yet

Since my last post I’ve been fighting to get the cluster back to some semblance of stability. It doesn’t look like that’s going to happen. However, there isn’t anything else out there that approaches the ease of configuration, fault tolerance, and hyperconverged storage I’m looking for. So… I’m making sure I have backup copies of the VMs that would be problematic to rebuild (especially Grafana), and then I’ll blow away the entire cluster and start over from scratch.

The first challenge is backups. I found this link, which describes what appears to be the simplest process: Backup and restore VMs in Nutanix CE

I’ve had to modify this a bit for my particular circumstances, but it appears to be working well. I’m lucky enough to have an Unraid NAS that holds all my media; plenty of space, and it happens to support NFS. With some trial and error and the instructions above, I managed to work out the following command:

qemu-img convert -c nfs://127.0.0.1/default-container-46437110940265/.acropolis/vmdisk/be09f08c-56bf-472c-b8a6-16c022333ca5 -O qcow2 nfs://192.168.169.8/mnt/user/vmware/UnifiVideo.qcow2

Using this command and the previous instructions, I was able to send the backup qcow2 image directly to Unraid. It sure would be nice to have a checkbox option, but this will do for now. Also, be sure to use the screen option from the other post, because your session will likely time out on even modestly sized VMs.

Bailing on Nutanix CE

I’ve had a number of problems with CE lately that look like they’ll be difficult to fix. I’m not sure I’ll have a better experience with a different platform, but I figure it’s about time to try. If nothing else, I’ll end up with backup copies of my VMs, which is not an obvious thing to get out of CE.

So, here’s what I’ve had to go through to export VMs:

Log into a CVM and run “acli” to find the vmdisk_uuid. Do a “vm.get [vm name] include_vmdisk_paths=1” to see a list of parameters, copy the vmdisk_uuid from about the middle of the output, and exit the acli.

Run this:

qemu-img convert -c nfs://127.0.0.1/PrismCentral-CTR/.acropolis/vmdisk/[vmdisk_uuid from the previous step] -O qcow2 ./data/stargate-storage/disks/drive-scsi0-0-0-1/NameOfVM.qcow2

This command might be different depending on your setup. I’m running Prism Central, so I had to use this location to find the disk; the path is listed above the uuid in the acli output. The output target will need to be adjusted depending on your space requirements. If you leave out the target destination, I believe it will save to your current directory, which might be fine or might be too small for the VM. I checked mine and decided to save to the slow-tier disk. Depending on the disk size, it might take a very long time.
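Since the NFS paths are long and easy to mistype, here’s a tiny hypothetical helper that assembles the command from the container name and the vmdisk_uuid copied out of acli (the function name and arguments are my own, not anything Nutanix provides):

```shell
# build_export_cmd CONTAINER VMDISK_UUID DEST
# Echoes the qemu-img command; copy/paste (or pipe to sh) to run it.
build_export_cmd() {
    printf 'qemu-img convert -c nfs://127.0.0.1/%s/.acropolis/vmdisk/%s -O qcow2 %s\n' \
        "$1" "$2" "$3"
}
```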

Once it completes, you can use SCP (not SFTP) to copy the file off. I used WinSCP to connect to the same CVM. The path for the above command is /home/nutanix/data/stargate-storage/disks/drive-scsi0-0-0-1; the disk copy is in there and can be SCP’d somewhere else. I tried sending it directly to an NFS share I have running on the network, but it failed on permissions despite being whitelisted. This process is cumbersome, but it works. I’m sure there are better ways…

Grafana and Chromecasting to a TV

I’ve wanted to use simple Chromecast dongles for pumping a Grafana dashboard to a TV for a while now. The challenge has been how to effectively manage the casting source. Chromecasts can’t manage any of their own content; they can only be a casting target. I don’t want a mobile device sitting in the rack with its sole purpose being the casting function. Management of that would be difficult. I also want to be able to cast to multiple Chromecasts with the same content or different content.

Google makes this difficult by limiting the signing certificate in the casting protocol. However, some people have worked around it. I’ve tried two different casting servers, and I’m having success with:

https://mrothenbuecher.github.io/Chromecast-Kiosk/

I set up a dedicated VM with pretty light resources, installed Tomcat, and then added the Kiosk server. It works really well, with one caveat.

The Chromecast dongles will arbitrarily decide whether the TV is 720p or 1080p. For most video content this doesn’t have a dramatic impact, but when you’re trying to display a dense Grafana dashboard it can make all the difference. Unfortunately, this isn’t controllable in any way; you have to test it against the TV and hope it works.

I now have a 32″ TV in the kitchen that is 1080p (also hard to find at 32″), displaying a pretty dense Grafana dashboard. I’ll try to add a picture here later. I think this could be incredibly useful for business monitoring scenarios, and it’s a lot less expensive than putting a PC on a TV.

Ubiquiti USG site to site VPN with a single controller

Quick note about how to make this work. If you want two Unifi Security Gateways to connect to a single controller at one location, you need to open up a couple of ports. Specifically, 8080 and 8443 need to be open to the controller. I strongly suggest you make sure you have a fixed IP at the remote side and lock down the ACL (port forward) to only allow traffic to 8080 and 8443 from that remote public IP. Once you have that in place, you can have the remote USG adopted via the controller’s public IP. Be sure to add it to a different site.

After adoption succeeds in the controller, turning on the site-to-site VPN is trivial. Under Networks, create a new network and select Site-to-Site VPN in the “home” site’s network configuration. You should see the new remote site listed at the bottom. Simple as that.

Poor trunk labeling

Just a reference for anyone trying to create a trunk between Dell and Ubiquiti switches. In my case, I wanted a trunk between a Dell N2048P and a Ubiquiti EdgeSwitch, with a native VLAN in use across the trunk. They both seem to have the notion of Access, General, and Trunk modes. They also both suggest that General is essentially the same as Trunk mode, where tags are retained across the link and any untagged frames fall into the PVID, or native VLAN.

Unfortunately, this doesn’t appear to be the case. The only way I could get it to work was to set both sides to Trunk, despite some suggestions out there that untagged traffic would be ignored. It seems that as long as you don’t manually set the trunk to ignore untagged traffic, it will act like a regular old trunk port. What’s worse is that in General mode it will sort of work, which makes the misconfiguration harder to spot.
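For reference, the working combination looked roughly like this. The port numbers and VLAN IDs are made up for illustration; the Dell side uses the N-series IOS-like syntax and the EdgeSwitch side its FASTPATH-style CLI, so check your firmware’s command reference before pasting:

! Dell N2048P side (assumed port Gi1/0/48, native VLAN 1, tagged VLANs 10 and 20)
interface Gi1/0/48
 switchport mode trunk
 switchport trunk native vlan 1
 switchport trunk allowed vlan 1,10,20

! EdgeSwitch side (assumed port 0/48)
interface 0/48
 vlan pvid 1
 vlan participation include 1,10,20
 vlan tagging 10,20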

Raspberry Pi Zero W headless setup

There seems to be conflicting info out there on how to accomplish this, and given the Zero’s different micro ports, it’s easier to set it up as a headless device from the start. Unfortunately, I found that if you try to do this locally, with a monitor and keyboard, the order of operations causes the ssh keys to come out faulty. So, let’s make it easy and just do it all from the start. Download Raspbian Jessie Lite; I believe the version I got is 03.02. In Windows, I’m using Rufus to write the disk image. Select the disk image from the folder icon in the lower right; you need to search for all file types, as it’s not an ISO. Once you select it, Rufus will automatically determine that it needs to do a DD write for the file. Fire it off on your micro SD card and let it finish. It will take a few minutes.

When it’s done, you’ll have a single partition viewable in Windows for the SD card. Right-click and create an empty file called ssh.txt in the root of that partition. Just create it; don’t edit it. Create another file called wpa_supplicant.conf, open it in Notepad, and add the following:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="Your SSID Here"
    proto=RSN
    key_mgmt=WPA-PSK
    pairwise=CCMP TKIP
    group=CCMP TKIP
    psk="YourPresharedKeyHere"
}

Modify the SSID and PSK to match your WiFi settings and save it. Pop the SD card out, put it in your Zero W, and boot it up. Wait a few minutes, then find the DHCP address the Zero W received. For me, I checked the DHCP scope on my firewall and found a new lease for a device named “raspberrypi”. Open up PuTTY and ssh to that IP, and you should be connected at that point. It’s probably a good idea to run raspi-config and update the password and hostname.
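Since a typo in that file means another round of card swapping, a quick sanity check before ejecting the card doesn’t hurt. A small sketch (the placeholder strings it looks for match the template above):

```shell
# check_wpa_conf FILE -- fail if the ssid/psk lines are missing or still
# contain the placeholder text from the template.
check_wpa_conf() {
    conf="$1"
    grep -q 'ssid="' "$conf" || { echo "missing ssid" >&2; return 1; }
    grep -q 'psk="' "$conf"  || { echo "missing psk" >&2; return 1; }
    if grep -q -e 'Your SSID Here' -e 'YourPresharedKeyHere' "$conf"; then
        echo "placeholders not replaced" >&2
        return 1
    fi
    echo "wpa_supplicant.conf looks ok"
}
```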