MFA in Office 365, not talkin’ bout Azure

Microsoft is frustratingly vague about support for basic MFA in all Office 365 offerings. They have these lists of feature support across different packages, which go into great detail and yet don’t include basic MFA. Maybe this post will get up in the rankings so others don’t have to spin their wheels looking for an answer.

My results for licenses that have basic MFA include:

  • F1
  • Business Premium
  • E1
  • E3
  • E5

I have not tested Business Essentials or Exchange Online licenses yet. However, I do have an old Exchange Online Kiosk account, and MFA appears to be available for it as well.

In fact, when I try to enable MFA there does not appear to be a restriction based on license type. Let me know your results!

OpenVAS on the Raspberry Pi 4 works really well!

I’ve been trying to set up OpenVAS on a tiny PC like the RPi lately, based on this post: https://dayne.broderson.org/2018/05/24/RPi_Vulnerability_Scanner.html

I wasn’t expecting much success, and that’s about what I got: it wasn’t really usable. I saw the TinkerBoard, with its extra performance and RPi compatibility, and thought it might be a good thing to try. I was never able to get a working mix of software on the Tinker. The repositories aren’t quite the same, and some of the necessary packages, OpenVAS in particular, aren’t maintained.

Then the RPi4 was announced. I knew this might be the ticket to making this work. 4GB of RAM!!! Unfortunately, the 4GB model isn’t available yet, as far as I can tell. I decided to wait. Then I found out my sometimes partner in crime, Steve, had ordered a pair of 2GB models. Of course, I asked if I could borrow one.

I’m happy to report that the install is simple and it was able to scan my /24, which averages about 75 active IPs, in about 3 hours! I didn’t modify anything performance-related and didn’t have any of the problems that Dayne referenced.

I do need to sort through a few logistical issues to make this functional in the way I’m thinking. For one thing, I want to run this headless. No problem, except OpenVAS (specifically the GSA web management) is finicky about identifying the IP address it’s listening on. So far I have to set it manually and haven’t figured out how to make it work with 0.0.0.0. I’ll find a way. I also had a problem with the management interface failing, I think due to memory starvation. The scan will continue to run, so it’s not a showstopper. I’m hoping the 4GB model will help with that. I also think it’ll be helpful to throw some heatsinks on, since it seemed to get pretty hot.

Without further preamble, here are the steps I took. This is very similar to Dayne’s post, with a few exceptions:

sudo apt update
sudo apt upgrade
sudo apt autoremove    # habit for me
sudo apt-get install openvas
sudo openvas-setup     # this took a good hour, maybe more, to run. Lots of errors, but it seems to have been ok.
sudo openvas-start
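
If you want to sanity check that everything came up before hunting for the web UI, a quick look at the listening ports (assuming the default OpenVAS 9 ports of 9390 for the manager and 9392 for GSA) and the three services does the trick:

sudo ss -tlnp | grep -E '9390|9392'    # manager and GSA should both be listening
sudo service openvas-scanner status
sudo service openvas-manager status
sudo service greenbone-security-assistant status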

This is the part I haven’t sorted out yet. You need to update the service config files to reflect something other than 127.0.0.1. I tried 0.0.0.0 and was unsuccessful. When I changed it to the DHCP IP address, it worked. I don’t see this as a good solution, as I intend to use this in different environments. Regardless, here are the commands until I can sort out the right answer:

sudo nano /lib/systemd/system/greenbone-security-assistant.service
sudo systemctl daemon-reload
sudo service greenbone-security-assistant restart

sudo nano /lib/systemd/system/openvas-manager.service
sudo systemctl daemon-reload
sudo service greenbone-security-assistant restart
sudo service openvas-manager restart
sudo service openvas-scanner restart

And the GSA ExecStart line I edited in the service file above:
ExecStart=/usr/sbin/gsad --foreground --listen=0.0.0.0 --port=9392 --mlisten=0.0.0.0 --mport=9390 --allow-header-host=192.168.169.198

The --allow-header-host value is the problem I still need to fix, since it’s tied to the current IP address. I’ll update as I make improvements. One of my goals is to attach a small LCD that will display the IP address.
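
Until I sort out the right answer, the workaround I’m leaning toward is a small script run at boot (cron @reboot or similar) that rewrites the --allow-header-host value to whatever address the Pi picked up from DHCP. This is just a sketch of the idea, not something I’ve hardened; it assumes a single network interface and the same service path edited above:

#!/bin/bash
# Hypothetical boot-time fix-up: point --allow-header-host at the current IP
SERVICE=/lib/systemd/system/greenbone-security-assistant.service
CURRENT_IP=$(hostname -I | awk '{print $1}')
sudo sed -i "s/--allow-header-host=[0-9.]*/--allow-header-host=${CURRENT_IP}/" "$SERVICE"
sudo systemctl daemon-reload
sudo service greenbone-security-assistant restart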

OpenVAS for simple vulnerability scanning

I’ve been looking for a simple security vulnerability scanning tool for a while now. OpenVAS looked promising in the past, but I always had trouble getting it to work. I decided to work through it this weekend and figure out what I was doing wrong. In a nutshell, here it is:

GSM Community Edition and lagging OpenVAS Plugin Feed

The bottom line is that the free community edition only updates the feed once daily. Per the link, you can manually force an update at initial setup and then wait about 30 minutes for the feed to download. That’s what I did, and now I have an excellent scanner! I also now have a list of things to fix on my home network. :)
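
For what it’s worth, if you’re on a package-based OpenVAS install (like the RPi build above) rather than the GSM appliance, I believe the feed sync can also be kicked off by hand with the Greenbone sync scripts; treat the exact names as an assumption about your packaging:

sudo greenbone-nvt-sync        # NVT feed
sudo greenbone-scapdata-sync   # SCAP data
sudo greenbone-certdata-sync   # CERT data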

Grafana and Chromecasting to a TV

I’ve wanted to use simple Chromecast dongles for pumping a Grafana dashboard to a TV for a while now. The challenge has been how to effectively manage the casting source. Chromecasts can’t manage any of their own content; they can only be a casting target. I don’t want a mobile device sitting in the rack with its sole purpose being the casting function. Management of that would be difficult. I also want to be able to cast to multiple Chromecasts with the same content or different content.

Google makes this difficult by limiting the signing certificate in the casting protocol. However, some people have worked around it. I’ve tried two different casting servers and I’m having success with:

https://mrothenbuecher.github.io/Chromecast-Kiosk/

I set up a dedicated VM with pretty light resources, installed Tomcat and then added the Kiosk server.  It works really well with one caveat.
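
For reference, the install amounted to little more than the following. I’m assuming here that you grab the Chromecast-Kiosk WAR from the project’s releases and that tomcat8 is what your distro packages; the WAR file name is a placeholder for whatever release you download:

sudo apt update
sudo apt install tomcat8
sudo cp chromecast-kiosk.war /var/lib/tomcat8/webapps/   # placeholder file name
sudo service tomcat8 restart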

The Chromecast dongles will arbitrarily decide if the TV is 720p or 1080p. For most video content this doesn’t have a dramatic impact, but when you’re trying to display a dense Grafana dashboard it can make all the difference. Unfortunately, this isn’t controllable in any way. You have to test it against the TV and hope it works.

I now have a 32″ TV in the kitchen that is 1080p (also hard to find at 32″) displaying a pretty dense Grafana dashboard. I’ll try to add a picture here later. I think this could be incredibly useful for business monitoring scenarios, and it’s a lot less expensive than putting a PC on a TV.

Ubiquiti USG site to site VPN with a single controller

Quick note about how to make this work. If you want two Unifi Security Gateways to connect to a single controller at one location, you need to open up a couple of ports to the controller: specifically, 8080 (inform) and 8443 (management UI). I strongly suggest you make sure you have a fixed IP at the remote side and lock down the ACL (port forward) to only allow traffic to 8080 and 8443 from that remote public IP. Once you have that in place, you can have the remote USG adopted using the controller’s public IP. Be sure to add it to a different site.
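
The adoption itself is the usual set-inform dance from the remote side, just pointed at the controller’s public IP. A sketch, assuming a factory-fresh USG still on its default address and credentials:

ssh ubnt@192.168.1.1                                    # default user/password is ubnt/ubnt
set-inform http://<controller-public-ip>:8080/inform    # then adopt it in the controller UI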

After adoption succeeds in the controller, turning on the site-to-site VPN is trivial. In Networks, create a new network and select Site-to-site VPN in the “home” site’s network configuration. You should see the new remote site listed at the bottom. Simple as that.

Poor trunk labeling

Just a reference for anyone trying to create a trunk between Dell and Ubiquiti switches. In my case, I wanted to create a trunk between a Dell N2048P and a Ubiquiti Edge Switch, with a native VLAN in use across the trunk. They both seem to have the notion of Access, General, and Trunk modes. They also both suggest that General is essentially the same as Trunk mode, where tags will be retained across the link, and any untagged frames will dump into the PVID, or Native VLAN.

Unfortunately, this doesn’t appear to be the case. The only way I could get it to work was to set both sides to Trunk, despite some suggestions out there that trunk mode ignores untagged traffic. It seems that as long as you don’t manually set the trunk to drop untagged frames, it will act like a regular old trunk port with a native VLAN. What’s worse is that General mode will sort of work, which makes the misconfiguration harder to spot.
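
For reference, here’s roughly what the working Dell side looked like; the interface and VLAN numbers are made up for illustration, and the EdgeSwitch side was set to the equivalent trunk/native VLAN settings through its UI:

interface Gi1/0/48
 switchport mode trunk
 switchport trunk native vlan 10
 switchport trunk allowed vlan 10,20,30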

Raspberry Pi Zero W headless setup

There seems to be conflicting info out there for how to accomplish this. Combined with the Zero’s different micro ports (mini-HDMI and micro-USB), it’s easier if you can set it up as a headless device. Unfortunately, I found that if you try to do this locally, with a monitor and keyboard, the order of operations causes the ssh keys to be faulty. So, let’s make it easy and just do it all from the start.

Download Raspbian Jessie Lite. I believe the version I got is 03.02. In Windows I’m using Rufus to write the disk image. Select the disk image from the folder icon in the lower right. You need to search for all file types, as it’s not an ISO. Once you select it, Rufus will automatically determine that it needs to be a DD write for the file. Fire it off on your micro SD card and let it finish. It will take a few minutes.

When it’s done you’ll have a single partition viewable in Windows for the SD card. Right click and create a Notepad file called ssh.txt in the root of that partition. Just create it. Don’t edit it. Create another Notepad file and call it wpa_supplicant.conf. Open that in Notepad and add the following:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="Your SSID Here"
    proto=RSN
    key_mgmt=WPA-PSK
    pairwise=CCMP TKIP
    group=CCMP TKIP
    psk="YourPresharedKeyHere"
}

Modify the SSID and PSK to match your WiFi settings and save it. Pop the SD card out, pop it into your Zero W, and boot it up. Wait a few minutes, and then you’ll need to find the DHCP address the Zero W received. For me, I checked the DHCP scope on my firewall and found a new lease for a device named “raspberrypi”. Open up PuTTY and ssh to that IP. You should be connected at that point. Probably a good idea to run raspi-config and update the password and hostname.
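
If you can’t easily check DHCP leases, a couple of other ways to find the Pi from another machine on the same subnet (assuming the default hostname and that mDNS and ping sweeps aren’t blocked):

ping raspberrypi.local      # mDNS usually resolves the default hostname
nmap -sn 192.168.1.0/24     # or sweep the subnet and look for a b8:27:eb (Raspberry Pi) MAC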

Unifi APs in Grafana using SNMP

This is kind of goofy with how Ubiquiti doesn’t do well at supporting SNMP. For one thing, they don’t support it through the controller, only directly to each AP. But you have to enable it at the controller to have it flip the switch on the APs so they’ll respond. They really want you to use the API, which is great if you’re a programmer. I am not. I’m a router jockey, so I like SNMP. Anyway, after finding and downloading the MIBs, I had a look through them and sorted out a couple of OIDs I was interested in: specifically, client count per radio and eth0 bits in and bits out. Here’s what I loaded into Telegraf. You need a separate inputs section for each AP you want to monitor. Nope, not really an “Enterprise” approach.

[[inputs.snmp]]
  agents = [ "192.168.x.x:161" ]   ## The IP of a single AP
  timeout = "5s"
  retries = 3
  version = 1
  community = "RO_Community"
  max_repetitions = 10
  name = "UnifiWiFiOffice"

  [[inputs.snmp.field]]
    name = "Bits.Out"
    oid = "1.3.6.1.4.1.41112.1.6.2.1.1.12.0"
  [[inputs.snmp.field]]
    name = "Bits.In"
    oid = "1.3.6.1.4.1.41112.1.6.2.1.1.6.0"
  [[inputs.snmp.field]]
    name = "2.4.Clients"
    oid = "1.3.6.1.4.1.41112.1.6.1.2.1.8.0"
  [[inputs.snmp.field]]
    name = "5.0.Clients"
    oid = "1.3.6.1.4.1.41112.1.6.1.2.1.8.3"
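
Before wiring this into Telegraf, it’s worth confirming the AP actually answers on those OIDs. A quick check with Net-SNMP’s snmpget, using the same community string and AP address, should return the 2.4GHz client count:

snmpget -v1 -c RO_Community 192.168.x.x 1.3.6.1.4.1.41112.1.6.1.2.1.8.0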

Unraid shell script for getting stats into Grafana

Continuing the documentation effort. This is a shell script you run from Unraid in a cron job to feed stats to InfluxDB. You can then present them in Grafana. A note about that: I was having a lot of trouble getting the Grafana graphs to render correctly for anything coming from this script. I had to change the Fill from “null” to “none” in the graph. Not sure why that’s happening, but “none” gets it to behave just like everything else.

#!/bin/bash
## Assembled from this post: https://lime-technology.com/forum/index.php?topic=52220.msg512346#msg512346
##
## Add to cron like:
## * * * * * sleep 10; /boot/custom/influxdb.sh > /dev/null 2>&1
## or: 0,10 * * * * /boot/custom/influxdb.sh > /dev/null 2>&1

#
# Set Vars
#
DBURL=http://192.168.x.x:8086   ## IP address of your InfluxDB server
DBNAME=dashboard                ## Easier if you pick an existing DB
DEVICE="UNRAID"
CURDATE=`date +%s`

# Current array assignment.
# I could pull this automatically from /var/local/emhttp/disks.ini
# Parsing it wouldn't be that easy though.
DISK_ARRAY=( sdn sdl sdf sdc sdj sde sdo sdh sdi sdd sdk sdm sdg sdp sdb )
DESCRIPTION=( parity disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9 disk10 disk11 disk12 disk13 cache )

#
# Added -n standby to the check so smartctl is not spinning up my drives
#
i=0
for DISK in "${DISK_ARRAY[@]}"
do
  smartctl -n standby -A /dev/$DISK | grep "Temperature_Celsius" | awk '{print $10}' | while read TEMP
  do
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "DiskTempStats,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} Temperature=${TEMP} ${CURDATE}000000000" >/dev/null 2>&1
  done
  ((i++))
done

# Had to increase to 10 samples because I was getting a spike each time I read it.
# This seems to smooth it out more.
top -b -n 10 -d.2 | grep "Cpu" | tail -n 1 | awk '{print $2,$4,$6,$8,$10,$12,$14,$16}' | while read CPUusr CPUsys CPUnic CPUidle CPUio CPUirq CPUsirq CPUst
do
  top -bn1 | head -3 | awk '/load average/ {print $12,$13,$14}' | sed 's/,//g' | while read LAVG1 LAVG5 LAVG15
  do
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "cpuStats,Device=${DEVICE} CPUusr=${CPUusr},CPUsys=${CPUsys},CPUnic=${CPUnic},CPUidle=${CPUidle},CPUio=${CPUio},CPUirq=${CPUirq},CPUsirq=${CPUsirq},CPUst=${CPUst},CPULoadAvg1m=${LAVG1},CPULoadAvg5m=${LAVG5},CPULoadAvg15m=${LAVG15} ${CURDATE}000000000" >/dev/null 2>&1
  done
done

if [[ -f byteCount.tmp ]] ; then
  # Read the last values from the tmp file - line "eth0"
  grep "eth0" byteCount.tmp | while read dev lastBytesIn lastBytesOut
  do
    cat /proc/net/dev | grep "eth0" | grep -v "veth" | awk '{print $2, $10}' | while read currentBytesIn currentBytesOut
    do
      # Write out the current stats to the temp file for the next read
      echo "eth0" ${currentBytesIn} ${currentBytesOut} > byteCount.tmp
      totalBytesIn=`expr ${currentBytesIn} - ${lastBytesIn}`
      totalBytesOut=`expr ${currentBytesOut} - ${lastBytesOut}`
      curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "interfaceStats,Interface=eth0,Device=${DEVICE} bytesIn=${totalBytesIn},bytesOut=${totalBytesOut} ${CURDATE}000000000" >/dev/null 2>&1
    done
  done
else
  # Write out a blank file
  echo "eth0 0 0" > byteCount.tmp
fi

# Gets the stats for boot, disk#, cache, user
df | grep "mnt/\|/boot\|docker" | grep -v "user0\|containers" | sed 's/\/mnt\///g' | sed 's/%//g' | sed 's/\/var\/lib\///g' | sed 's/\///g' | while read MOUNT TOTAL USED FREE UTILIZATION DISK
do
  if [ "${DISK}" = "user" ]; then
    DISK="array_total"
  fi
  curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "drive_spaceStats,Device=${DEVICE},Drive=${DISK} Free=${FREE},Used=${USED},Utilization=${UTILIZATION} ${CURDATE}000000000" >/dev/null 2>&1
done
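
Once the cron job has run a couple of times, you can sanity-check that points are landing in InfluxDB straight from the shell. A quick query against the 1.x HTTP API, using the same DBURL and DBNAME as the script:

curl -G "http://192.168.x.x:8086/query" \
  --data-urlencode "db=dashboard" \
  --data-urlencode "q=SELECT * FROM DiskTempStats ORDER BY time DESC LIMIT 5"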