“It doesn’t matter what we play, if we play it with intent, people will respond.” I’m paraphrasing Adam Neely. Sure does apply to a lot of things in life.
I’m starting to replace the fans in my LB4M to try to quiet it down. During this process I wanted to keep an eye on my temp and the associated fan speeds to make sure I wasn’t running into a problem with cooling. Here’s my Telegraf code with the applicable OIDs.
[[inputs.snmp]]
  agents = [ "192.168.1.1:161" ]
  timeout = "5s"
  retries = 3
  version = 2
  community = "something"
  max_repetitions = 10
  name = "QuantaSwitch"

  [[inputs.snmp.field]]
    name = "Temp"
    oid = "220.127.116.11.4.1.4418.104.22.168.22.214.171.124.0"

  [[inputs.snmp.field]]
    name = "Fan Speed 1"
    oid = "126.96.36.199.4.1.44188.8.131.52.184.108.40.206.0"

  [[inputs.snmp.field]]
    name = "Fan Speed 2"
    oid = "220.127.116.11.4.1.4418.104.22.168.22.214.171.124.1"

  [[inputs.snmp.field]]
    name = "Fan Speed 3"
    oid = "126.96.36.199.4.1.44188.8.131.52.184.108.40.206.2"
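Before wiring an OID into Telegraf, it's worth sanity-checking it with snmpget from the net-snmp tools. Same community string and agent as above, with whichever OID you're testing:

snmpget -v2c -c something 192.168.1.1 [oid]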
I started by pulling all three original fans and installing only two of the Noctua fans. The temp went from 40C to 60C before I shut it down, and the Noctuas were also starting to run pretty fast. I've reconnected one of the original fans and now the temp is around 48C. I'm not sure whether this is because the Noctuas move less air or whether mixing fan models is confusing the speed control. I'll get a third Noctua soon and report my results.
The fan tray just slides out the back, so no need to open up the case. It’s super easy. This is the fan I used, and it’s a straight fit: https://www.amazon.com/gp/product/B071W93333/ref=ppx_yo_dt_b_asin_title_o00_s01?ie=UTF8&psc=1
Following up on my previous post about running Grafana out to a Chromecast dongle on a TV, I've now tested it going directly to a Vizio 4K TV with built-in Chromecast functionality. It works fine and is actually pretty responsive. Unfortunately, it appears to be displaying at 1080p. I haven't tested this much, so it might be possible to push it to 4K.
This is the tool I’m using on the server side: https://mrothenbuecher.github.io/Chromecast-Kiosk/
This opens up the possibility of using smaller, low-end TVs for Grafana directly, but also for other signage or metrics displays. One reason I've been interested in this for a while is running displays in a service desk/call center environment. Previously, you had to drive each screen from a dedicated PC costing a couple hundred dollars. With this solution you only need a TV with Chromecast built in, plus the Kiosk program running on an Ubuntu VM. You can then have multiple different feeds running to different TVs.
I think the Chromecast implementation might still be a little finicky with picking a resolution, but it should be consistent across the same model of TV.
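To keep the kiosk server running across reboots on that Ubuntu VM, a basic systemd unit should do the trick. This is just a sketch: I'm assuming the tool runs as a Java jar, and the path, jar name, and user below are placeholders you'd adjust to match your install:

[Unit]
Description=Chromecast Kiosk server
After=network-online.target

[Service]
# Assumed launch command; replace with however you actually start Chromecast-Kiosk
ExecStart=/usr/bin/java -jar /opt/chromecast-kiosk/chromecast-kiosk.jar
Restart=on-failure
User=kiosk

[Install]
WantedBy=multi-user.target

Drop that in /etc/systemd/system/chromecast-kiosk.service and enable it with sudo systemctl enable --now chromecast-kiosk.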
For a while now I've been running Plex as my primary media server. I've been trying to use the Plex DVR, but found it very finicky. Mostly, it would fail to record shows without telling me in any way. I'd review my upcoming recordings, check back a couple of days later, and they just weren't there. No notice, nothing. I've also been using the Channels app on the Apple TV for live TV. Part of my frustration with Plex DVR is that channel tuning was always slow. If it had worked well otherwise I probably would have stuck with it, but that's not the case.

So, I decided to try Channels DVR. What a difference. All I can say is it just seems to work. Everything appears to be getting recorded, channel changes on live TV are fast, and it's really been an excellent experience. I had one issue to overcome: getting the Channels recordings into Plex in an automated way. Channels DVR has a means of managing recordings and will tag commercials with chapter markers, but that only applies within the Channels DVR app.
I have Channels DVR running in a Docker container on Unraid. Probably not necessary, but it's handy, and it was super simple to install. The problem is that Channels DVR wants to record everything to a "TV" folder within the directory you point it at for recordings. In my case, I have a couple of Unraid user shares related to TV: the main TV show storage at /mnt/user/TVShows and the recordings directory at /mnt/user/LiveTV. This means the Channels DVR recordings go into /mnt/user/LiveTV/TV/showname.
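So the layout on the array ends up looking like this (the show name is just an example):

/mnt/user/TVShows/              (main TV show library)
/mnt/user/LiveTV/TV/showname/   (Channels DVR recordings land here)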
The fix ended up being pretty simple. I run Plex on an Ubuntu server. Here's my fstab:
//192.168.169.8/DVDs /mnt/DVDs cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/KidDVDs /mnt/KidDVDs cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/Movies /mnt/Movies cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/TVShows /mnt/TVShows cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/Pictures /mnt/Pictures cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/Music /mnt/Music cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/LiveTV/TV /mnt/DVR cifs guest,uid=1000,iocharset=utf8 0 0
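After adding that last line, remount everything and make sure the DVR share actually shows up:

sudo mount -a
df -h | grep DVR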
Inside Plex I have the TV shows library mapped to both /mnt/TVShows and /mnt/DVR. Plex's autodiscovery scans both folders just fine and coalesces the shows from both locations. I still need to figure out comskip, but hitting the jump button is fine for now. In retrospect, I probably could have simply pointed Channels at TVShows and let it create a new directory in there, but this way keeps the folder structures a little cleaner.
I didn't feel like fighting with Nutanix CE any more, so I copied my VMs out using qemu-img and rebuilt the entire cluster with ESXi 6.7 free. It seems OK so far, with a few minor issues. Now that I'm back in operation, I'm concerned about having only a single copy of my VMs on standalone hosts with single-disk RAID0 configs. A bit of a roll of the dice, but it works. So, I need a backup solution. The free ESXi license doesn't allow API connections for backups, so you have to use something else. I'm now running the free Veeam Agent for Linux on each of my VMs, sending the backups to an NFS share on the Unraid server. Here's my list of commands so I can do this again later without wondering how I did it.
Make sure we’re updated before starting:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get autoremove
SCP the Veeam download file to the server (I think it's really just a package that adds the Veeam repository), then install it and the agent:
sudo dpkg -i veeam-release-deb_1.0.5_amd64.deb
sudo apt-get update
sudo apt-get install veeam
Next we’ll take care of the NFS share mounting.
sudo apt-get install nfs-common
sudo mkdir /mnt/backup
sudo nano /etc/fstab
Here’s my one fstab line for connecting to the Unraid NFS share:
192.168.169.8:/mnt/user/vmware /mnt/backup nfs auto 0 0
Mount the new share:
sudo mount -a
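If the mount fails, showmount (also part of nfs-common) can confirm the Unraid export is actually visible before you start debugging fstab:

showmount -e 192.168.169.8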
You should now be able to run the Veeam agent with "sudo veeam", which launches the ncurses UI. It's pretty obvious how to work through it from there; just select local storage and browse to the /mnt/backup location (or whatever you named it).
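If you'd rather script this than click through the UI, the agent also installs a veeamconfig CLI. As far as I can tell, something like the following lists your jobs and kicks one off (the job name is whatever you created in the UI; double-check the syntax against veeamconfig --help on your version):

veeamconfig job list
veeamconfig job start --name "BackupJob1"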
Since my last post I’ve been fighting to get the cluster back to some semblance of stability. Doesn’t look like that’s going to happen. However, there isn’t anything out there that approaches the ease of configuration, fault tolerance and hyperconverged storage I’m looking for. So…I’m making sure I have a backup copy of my VMs (especially Grafana) that would be problematic to rebuild, and then I’ll blow away the entire cluster and start over from scratch.
The first challenge is backups. I found this link, which describes what appears to be the simplest process: Backup and restore VMs in Nutanix CE
I've had to modify this a bit for my particular circumstances, but it appears to be working well. First, I'm lucky enough to have an Unraid NAS that holds all my media: plenty of space, and it happens to support NFS. With some trial and error and the instructions above, I worked out the following command:
qemu-img convert -c nfs://127.0.0.1/default-container-46437110940265/.acropolis/vmdisk/be09f08c-56bf-472c-b8a6-16c022333ca5 -O qcow2 nfs://192.168.169.8/mnt/user/vmware/UnifiVideo.qcow2
Using this command and the previous instructions, I was able to send the backup qcow2 image directly to Unraid. It sure would be nice to have a checkbox option, but this will do for now. Also, be sure to use the screen option from the other post, because your session will likely time out on even modestly sized VMs.
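If you haven't used screen before, the pattern is simple: start a named session, run the qemu-img command inside it, detach, and reattach whenever you want to check progress:

screen -S vmbackup    (start a named session, then run the qemu-img command inside it)
screen -r vmbackup    (reattach later; detach at any time with Ctrl-a d)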
I’ve had a number of problems with CE lately that look like they’ll be difficult to fix. Not sure I’ll have a better experience with a different platform, but I figure it’s about time to try. If nothing else, I’ll have some backup copies of my VMs, which is not an obvious thing in CE.
So, here’s what I’ve had to go through to export VMs:
Log into a CVM and then run "acli" to find the vmdisk_uuid. Do a "vm.get [vm name] include_vmdisk_paths=1" to see the list of parameters. Copy the vmdisk_uuid from around the middle of the output, then exit acli.
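Roughly, that session looks like this (the VM name is a placeholder, and the uuid will be buried in a much longer block of output):

nutanix@cvm$ acli
<acropolis> vm.get MyVM include_vmdisk_paths=1
...
vmdisk_uuid: "[vmdisk_uuid]"
...
<acropolis> exit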
qemu-img convert -c nfs://127.0.0.1/PrismCentral-CTR/.acropolis/vmdisk/[vmdisk_uuid from the previous step] -O qcow2 ./data/stargate-storage/disks/drive-scsi0-0-0-1/NameOfVM.qcow2
This command might differ depending on your setup. I'm running Prism Central, so I had to use this location to find the disk; the path is listed above the uuid in the acli output. The output target will need to be adjusted depending on your space requirements. If you leave out the target destination, I believe it saves to your current directory, which might be fine or might be too small for the VM. I checked mine and decided to save it to the slow-tier disk. Depending on the disk size, it might take a very long time.
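A quick df on the CVM before you start shows whether the target location actually has room for the image (path as in the next step):

df -h /home/nutanix/data/stargate-storage/disks/drive-scsi0-0-0-1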
Once it completes, you can use SCP (not SFTP) to copy the file off. I used WinSCP to connect to the same CVM. The path for the above command is /home/nutanix/data/stargate-storage/disks/drive-scsi0-0-0-1; the disk copy is in there for me, and I can SCP it somewhere else. I tried sending it directly to an NFS share I have running on the network, but it failed on permissions despite being whitelisted. This process is cumbersome, but it works. I'm sure there are better ways…
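For what it's worth, a plain command-line scp from another Linux box does the same job as WinSCP (the CVM address and destination are placeholders):

scp nutanix@[cvm-ip]:/home/nutanix/data/stargate-storage/disks/drive-scsi0-0-0-1/NameOfVM.qcow2 /path/to/local/backup/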
With all of this tutorial publishing I've been doing for Grafana, I've neglected to post what mine currently looks like. So, here it is. This is the 1080p version. I also built an iPad version that has a lot of the same info, but compressed for the smaller screen.