Not sure why I didn’t post this before, but here it is. When installing Nutanix CE on a Dell server (an R710 in my case) with a PERC H700 card, you can’t pass the disks through directly. You have to configure each one as a single-disk RAID0 in the PERC config (Ctrl-R at boot for the H700). That works, but then CE can’t detect what type of drive each one is, and the install fails with a message about missing requirements. You have to tell the install routine which drive is the “rotational” drive.
Exit the install process and log in as root (password nutanix/4u).
Then run:

echo 0 > /sys/block/sd?/queue/rotational

where sd? is replaced with the correct device. You might need to do this for sda, sdb, sdc, and so on until you find the HDD (as opposed to the SSD); the drive sizes should tell you which is which. Log out and log back in as install, and it should be fine from there. Note that this was done with the img install, not the iso. I did get a similar error on the iso but didn’t try to work around it; I’ll see if the same fix works there.
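To see how the kernel currently classifies each disk before flipping the flag, something like this works (a minimal sketch; the device names on your system will vary):

```shell
# List every block device and its rotational flag
# (1 = spinning HDD, 0 = SSD) so you can pick the right sd? device.
for f in /sys/block/*/queue/rotational; do
  [ -e "$f" ] || continue
  dev=${f#/sys/block/}; dev=${dev%/queue/rotational}
  echo "$dev rotational=$(cat "$f")"
done
```

Cross-check the device names against their sizes (lsblk shows both) to figure out which RAID0 virtual disk is backed by which physical drive, then echo the flag as above.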
Since my last post I’ve been fighting to get the cluster back to some semblance of stability. Doesn’t look like that’s going to happen. However, there isn’t anything out there that approaches the ease of configuration, fault tolerance and hyperconverged storage I’m looking for. So…I’m making sure I have a backup copy of my VMs (especially Grafana) that would be problematic to rebuild, and then I’ll blow away the entire cluster and start over from scratch.
The first challenge is backups. I found this link, which describes what appears to be the simplest process: Backup and restore VMs in Nutanix CE
I’ve had to modify this a bit for my particular circumstances, but it appears to be working well. First, I’m lucky enough to have an Unraid NAS that holds all my media. It has plenty of space and happens to support NFS. With some trial and error and the instructions above, I managed to work out the following command:
qemu-img convert -c nfs://127.0.0.1/default-container-46437110940265/.acropolis/vmdisk/be09f08c-56bf-472c-b8a6-16c022333ca5 -O qcow2 nfs://192.168.169.8/mnt/user/vmware/UnifiVideo.qcow2
Using this command and the previous instructions, I was able to send the backup qcow2 image directly to Unraid for safekeeping. It sure would be nice to have a checkbox option, but this will do for now. Also, be sure to use the screen option from the other post, because your session will likely time out on even modestly sized VMs.
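On the screen point: if screen isn’t handy, nohup gives the same protection against session timeouts. A sketch of the pattern, with a placeholder standing in for the long-running qemu-img convert command:

```shell
# Detach the long-running export from the SSH session so a timeout
# doesn't kill it. The 'sleep 1 && echo ...' below is a placeholder
# for the real qemu-img convert command.
nohup bash -c 'sleep 1 && echo finished > /tmp/vm-backup.log' >/dev/null 2>&1 &
job=$!
# In real use you'd just log out here; for illustration, wait and check:
wait "$job"
cat /tmp/vm-backup.log
```

The trailing log file is only there so you can confirm the job finished after reconnecting.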
I’ve had a number of problems with CE lately that look like they’ll be difficult to fix. Not sure I’ll have a better experience with a different platform, but I figure it’s about time to try. If nothing else, I’ll have some backup copies of my VMs, which is not an obvious thing in CE.
So, here’s what I’ve had to go through to export VMs:
Log into a CVM and run “acli” to find the vmdisk_uuid. Run “vm.get [vm name] include_vmdisk_paths=1” to see the VM’s parameters, copy the vmdisk_uuid from about the middle of the output, then exit acli.
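Roughly what that session looks like (the VM name and uuid here are just the examples from earlier in this post, and the real output has many more fields):

```
nutanix@CVM$ acli
<acropolis> vm.get UnifiVideo include_vmdisk_paths=1
  ...
      vmdisk_uuid: "be09f08c-56bf-472c-b8a6-16c022333ca5"
  ...
<acropolis> exit
```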
qemu-img convert -c nfs://127.0.0.1/PrismCentral-CTR/.acropolis/vmdisk/[vmdisk_uuid from the previous step] -O qcow2 ./data/stargate-storage/disks/drive-scsi0-0-0-1/NameOfVM.qcow2
This command might be different depending on your setup. I’m running Prism Central, so I had to use this location to find the disk; the container path is listed just above the uuid in the acli output. The output target will need to be adjusted depending on your space requirements. If you leave out the target destination, I believe it saves to your current directory, which might be fine or might be too small for the VM. I checked mine and decided to save it to the slow-tier disk. Depending on the disk size, it can take a very long time.
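Before picking a target, it’s worth checking free space first. A quick sketch (the Nutanix path is the one from this post; substitute your own, and the fallback to / is just for illustration):

```shell
# Check free space on the intended output location before exporting.
# Falls back to the root filesystem if the Nutanix path doesn't exist.
df -h /home/nutanix/data/stargate-storage 2>/dev/null || df -h /
```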
Once it completes, you can use SCP (not SFTP) to copy the file off. I used WinSCP to connect to the same CVM; the output path from the command above is /home/nutanix/data/stargate-storage/disks/drive-scsi0-0-0-1, and the disk copy sits in there ready to SCP somewhere else. I tried sending it directly to an NFS share I have running on the network, but it failed with permission errors despite the share being whitelisted. This process is cumbersome, but it works. I’m sure there are better ways…