Setting up k3s on a single RPi4

I’m testing out running some containers on an RPi4, and I have a single borrowed device. Most of the tutorials out there describe a multi-node configuration, which made it harder for me to map the steps onto a single node. I started with this tutorial: https://github.com/mhausenblas/kube-rpi

This gets you most of the way there, but I found a few errors in the configuration. First, the port-forward command needs to include "--address 1.2.3.4", where the IP address is the external IP of your RPi.
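For reference, here’s roughly what that command ends up looking like. This is a hedged sketch: the service name and ports are assumptions based on the kdash Helm release from the tutorial, and 192.168.1.50 is a placeholder for your RPi’s external IP.

kubectl port-forward svc/kdash-kubernetes-dashboard 8443:443 --address 192.168.1.50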

Second, the token retrieval for logging into the dashboard was not correct. Here’s what I used instead:


kubectl get secrets
kubectl describe secret kdash-kubernetes-dashboard-token-mwgj5

The suffix after the token part of the name will likely be different for your instance. Copying that token into the login form got me into the dashboard.
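If you’d rather not hunt for the exact secret name, something like this one-liner should pull the token directly. A sketch, assuming the secret name still starts with kdash-kubernetes-dashboard-token:

kubectl get secret $(kubectl get secrets | grep kdash-kubernetes-dashboard-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode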

I’m now having a problem with the dashboard throwing a bunch of errors. I’ll update when I have that worked out.

The RPi Zero W is…ok…

First of all, I have not returned to using the Chromecast for my Grafana visualization. I didn’t elaborate on this, but my previous post was driven by a change to Grafana visualization that effectively broke the ChromeCast Kiosk server I was using. The recommended solution was to use a proxy for displaying the iFrame, or something. I wasn’t keen on setting up another server instance (not that big a deal) just for the purpose of running the proxy. Yes, I know I could have set that up on the same kiosk server.

I quickly implemented the RPi Zero W I referenced in my previous post, just to get something up and running. It works OK in kiosk mode, with a couple of relatively minor issues. It can only be managed remotely via SSH, whereas the previous Chromecast was managed from the server, which made adjustments very simple. I’m comfortable with SSH, but I don’t access the RPi often enough to avoid a little relearning every time.

The RPi will also not dynamically update a Grafana dashboard. I’m sure I could script it to reload on a set interval (a sketch of one approach is below). This was also a benefit of the ChromeCast server. Well, sort of a benefit: the CC server could reload the page on a refresh interval, and you could also force a refresh manually. My only way to do that with the RPi is to pull the power, or SSH in and reboot it. Or script it.
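Here’s a minimal sketch of what that scripted reload might look like, assuming the Pi runs Chromium in kiosk mode under X on display :0 and that xdotool is installed (some setups may also need XAUTHORITY set). The filename and interval are placeholders:

#!/bin/sh
# refresh-dashboard.sh: send an F5 keypress to the kiosk browser
export DISPLAY=:0
xdotool key F5

Then run it on a schedule from the pi user’s crontab (crontab -e):

*/30 * * * * /home/pi/refresh-dashboard.sh

Which brings me to the big negative of the RPi…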

This thing is slow. So….slow…. Doesn’t really matter when the dashboard is up and running. It will happily do the Grafana 1 minute refresh without issue. A reboot takes something like 10 minutes before the dashboard is fully drawn. Even the SSH session is pokey. Slow…. The Zero is still running the same CPU as the RPi 1, just a little overclocked. You’d think they could move to the RPi 2/3 CPU at this point and still keep the price down.

Which brings me to the reason I’m sticking with this as my solution. The Zero W is $10. The current-generation CC goes for at least $30 and rarely seems to be on sale for less. A third of the cost buys a lot of patience with the other issues. The Zero W works fine with USB power from the TV, and can usually be hidden behind the TV with some double-sided tape. No case needed. OK, you do need to spend a few bucks on an SD card.

Could be a nice solution for a NOC full of TVs!

Monitoring Temp and Fan Speed in a Quanta LB4M switch

I’m starting to replace the fans in my LB4M to try to quiet it down. During this process I wanted to keep an eye on the temperature and the associated fan speeds to make sure I wasn’t running into a cooling problem. Here’s my Telegraf config with the applicable OIDs.

[[inputs.snmp]]
  agents = [ "192.168.1.1:161" ]
  timeout = "5s"
  retries = 3
  version = 2
  community = "something"
  max_repetitions = 10
  name = "QuantaSwitch"

  [[inputs.snmp.field]]
    name = "Temp"
    oid = "1.3.6.1.4.1.4413.1.1.43.1.8.1.4.0"

  [[inputs.snmp.field]]
    name = "Fan Speed 1"
    oid = "1.3.6.1.4.1.4413.1.1.43.1.6.1.4.0"

  [[inputs.snmp.field]]
    name = "Fan Speed 2"
    oid = "1.3.6.1.4.1.4413.1.1.43.1.6.1.4.1"

  [[inputs.snmp.field]]
    name = "Fan Speed 3"
    oid = "1.3.6.1.4.1.4413.1.1.43.1.6.1.4.2"
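If you want to sanity-check an OID before wiring it into Telegraf, snmpget works from any box with net-snmp installed. Same community string and agent as above, querying the temperature OID:

snmpget -v2c -c something 192.168.1.1 1.3.6.1.4.1.4413.1.1.43.1.8.1.4.0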

I started by pulling all 3 original fans and installing only two of the Noctua fans. The temp went from 40°C to 60°C before I shut it down, and the Noctuas were starting to run pretty fast. I’ve reconnected one of the original fans and now the temp is around 48°C. I’m not sure if this is because the Noctuas are less efficient or if mixing fan types is confusing the speed control. I’ll get a third Noctua soon and report my results.

The fan tray just slides out the back, so no need to open up the case. It’s super easy. This is the fan I used, and it’s a straight fit: https://www.amazon.com/gp/product/B071W93333/ref=ppx_yo_dt_b_asin_title_o00_s01?ie=UTF8&psc=1

Grafana and Chromecasting – Part 2

Following up on my previous post about running Grafana out to a Chromecast dongle on a TV, I’ve now tested it going directly to a Vizio 4K TV with the built-in Chromecast functionality. It works fine and is actually pretty responsive. Unfortunately, it appears to be displaying at 1080p. I haven’t tested this much, so it might be possible to push it to 4K.

This is the tool I’m using on the server side: https://mrothenbuecher.github.io/Chromecast-Kiosk/

This opens up the possibility of using smaller, low-end TVs for Grafana directly, but also for other signage or metrics displays. One reason I’ve been interested in this for a while was to run displays in a service desk/call center environment. Previously, you had to drive each screen from a dedicated PC costing a couple hundred dollars. With this solution you only need a TV with Chromecast built in, plus the Kiosk program running on an Ubuntu VM. You can then have multiple different feeds running to different TVs.

I think the Chromecast implementation might still be a little finicky with picking a resolution, but it should be consistent across the same model of TV.

[Image: Grafana through built-in Chromecast]
[Image: My current layout, optimized for 1080p]

Plex, Channels and trying to get the two to work together

For a while now I’ve been running Plex as my primary media server. I’ve been trying to use the Plex DVR, but found it very finicky. Mostly, it would fail to record shows without telling me in any way: I’d review my upcoming recordings, check back a couple of days later, and they just weren’t there. No notice, nothing. I’ve also been using the Channels app on the Apple TV for live TV. Part of my frustration with the Plex DVR is that channel tuning was always slow. If it had worked well otherwise I probably would have stuck with it, but that wasn’t the case.

So, I decided to try Channels DVR. What a difference. All I can say is it just seems to work. Everything appears to be getting recorded, channel changes on live TV are fast, and it’s really been an excellent experience. I had one issue to overcome: how to get the Channels recordings into Plex in an automated way. Channels DVR has a means of managing recordings and will tag commercials with chapter markers, but that only applies within the Channels DVR app.

I have Channels DVR running in a Docker container on Unraid. Probably not necessary, but it’s handy, and it was super simple to install. The problem is that Channels DVR wants to record everything to a “TV” folder within the directory you point it at for recordings. In my case, I have a couple of Unraid user shares related to TV: the main TV shows storage at /mnt/user/TVShows and the recordings directory at /mnt/user/LiveTV. This means the Channels DVR recordings go into /mnt/user/LiveTV/TV/showname.

The fix ended up being pretty simple. I run Plex on an Ubuntu server. Here’s my fstab:

//192.168.169.8/DVDs /mnt/DVDs cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/KidDVDs /mnt/KidDVDs cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/Movies /mnt/Movies cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/TVShows /mnt/TVShows cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/Pictures /mnt/Pictures cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/Music /mnt/Music cifs guest,uid=1000,iocharset=utf8 0 0
//192.168.169.8/LiveTV/TV /mnt/DVR cifs guest,uid=1000,iocharset=utf8 0 0

Inside Plex I have the TV shows library mapped to both /mnt/TVShows and /mnt/DVR. Plex’s autodiscovery scans both folders just fine and coalesces the shows from the two locations. I still need to figure out commercial skipping (comskip), but hitting the skip-forward button is fine for now. In retrospect, I probably could have simply pointed Channels at TVShows and let it create a new directory in there, but this way keeps the folder structures a little cleaner.

Free Veeam Linux backups

I didn’t feel like fighting with Nutanix CE anymore, so I copied my VMs out using qemu and rebuilt the entire cluster with the free ESXi 6.7. It seems OK so far, with a few minor issues. Now that I’m back in operation I’m concerned about having a single copy of my VMs on standalone hosts with single-disk RAID0 configs. A bit of a roll of the dice, but it works. So, I need a backup solution. Free ESXi doesn’t support API connections for backups, so you have to use something else. I’m now using the free Veeam Agent for Linux on each of my VMs, sending the backups to an NFS share on the Unraid server.
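As an aside, the VM copy out of Nutanix was just qemu-img. Roughly like this, a sketch with placeholder filenames, assuming the source disks were exported as qcow2 images:

# convert a qcow2 disk image to a vmdk that ESXi can consume
qemu-img convert -f qcow2 -O vmdk myvm.qcow2 myvm.vmdk

Anyway, here’s my list of commands so I can do this again later and not wonder how I did it.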

Make sure we’re updated before starting:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get autoremove

SCP the Veeam download file to the server; I think it might just be a script that sets up the Veeam repository.
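The copy itself is plain scp (the user and hostname here are placeholders):

scp veeam-release-deb_1.0.5_amd64.deb user@myserver:~/

Then install the repo package and the agent: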

sudo dpkg -i veeam-release-deb_1.0.5_amd64.deb
sudo apt-get update
sudo apt-get install veeam

Next, we’ll take care of mounting the NFS share.

sudo apt-get install nfs-common
sudo mkdir /mnt/backup
sudo nano /etc/fstab

Here’s my one fstab line for connecting to the Unraid NFS share:

192.168.169.8:/mnt/user/vmware /mnt/backup nfs auto 0  0

Mount the new share:

sudo mount -a

You should now be able to run the Veeam agent with “sudo veeam”, which launches the ncurses GUI. It’s pretty obvious how to work through it from there: just select local storage and browse to the /mnt/backup location (or whatever you named it).
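If you’d rather script runs than use the TUI, the agent also ships a veeamconfig CLI. A hedged sketch, assuming a job was already created through the GUI; the job name is a placeholder:

# list configured backup jobs, then kick one off by name
sudo veeamconfig job list
sudo veeamconfig job start --name BackupJob1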

Nutanix CE, Dell H700 and disk detection

Not sure why I didn’t post this before, but here it is. When installing Nutanix CE on a Dell server (an R710 in my case) with a PERC H700 card, you can’t pass the disks through. You have to configure each one as a single-disk RAID0 in the PERC config (Ctrl-R at boot for the H700). That works, but then CE can’t detect what type of drive each virtual disk is, and the installer fails with a message about missing requirements. You have to tell the install routine which drive is the SSD, since the RAID0 virtual disks all report as rotational.

Exit the install process and log in as root (password nutanix/4u).

dmesg | grep sda

You might need to repeat this for sda, sdb, sdc, and so on until you figure out which device is the SSD and which is the HDD. The drive sizes should tell you which is which.
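You can also check what the kernel currently thinks of each drive: 1 means rotational (HDD), 0 means non-rotational (SSD).

cat /sys/block/sda/queue/rotational

Once you’ve identified the SSD, flip its flag: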

echo 0 > /sys/block/sd?/queue/rotational

Where sd? is replaced with the SSD’s device name: echo 0 marks the drive as non-rotational, which lets CE detect it as the SSD. Log out and log back in as install, and it should be fine from there. This was done with the img install, not the ISO. I did get a similar error with the ISO but didn’t try to work around it; I’ll see if the same fix works there.