Stop Recommending Home Assistant for Home Automation

Filed under: Home Automation,Technology — Shu @November 18th, 2020 8:05 pm

This post expands on and refines a comment I made in response to a blog post by Rob Dodson in which he explains why he ripped out a $9,000 automated home lighting system. His post centers on the Z-Wave device protocol, but touches upon Home Assistant, a popular open source control center for home automation devices.

About five years ago, I was looking for a home automation system to control my lights and security system. I chose Home Assistant running on a Raspberry Pi as a controller for Z-Wave devices.

  • Home Assistant is open source, aka free.
  • Raspberry Pis are cheap.
  • The Z-Wave protocol is ubiquitous across many devices like switches and sensors.

Basically, it was a cheap solution that gave me a lot of flexibility beyond just controlling lights. I regret the whole decision whenever things go wrong, and they often do.

First off is Home Assistant. There is absolutely no way a non-Linux user would be able to use this software. Despite their claims and goals, it is not designed for the average user, even one who is fairly tech savvy. They have tried to be more user-friendly in the past year or two by moving a lot of admin functions to the web interface. However, so much troubleshooting inherently involves the Linux command line that it is now in this obnoxious state where some operations need to be partially done in the web interface and then continued in the command line. For example, deleting a node: you have to initially “delete” it in the web app to clean things up, then manually remove it from the Home Assistant configuration files.

Another issue with HA is how they love to completely rewrite and break things for the sake of rewriting things. In my five years with HA, I have witnessed recommended install methods get created, rebranded, and deprecated; a complete overhaul of the UI architecture; a complete overhaul of the Z-Wave architecture; and non-functioning items inserted as placeholders into the web interface for features to be implemented later. You *have* to read the release notes before updating. I’ve run into occasions where pip upgrades will break the whole setup. Good luck googling for answers in this environment when you have a problem. If the answer is more than a year old, it’s probably not relevant anymore.

“Well, just don’t upgrade,” you may say. The interface nags you when an upgrade is available. It is much softer now, but previously, it inundated you with security warnings about running an out of date version. If you’re an average user, who just trusts developers at face value, you’re going to upgrade. Because security!

We wouldn’t have to google problems so often if not for the documentation and terminology. The documentation is not terrible, but it is not geared toward end users. Home Assistant abstracts everything it touches into entities and nodes. If you simply want to do X, you’ll have to identify what the entity is in your setup, and what the node is. When you add a Z-Wave device to your network, Home Assistant creates an entry for that one device in the user interface under Entities, Nodes, Switches, Z-Wave, and Devices. Five years later, and I still can’t tell you definitively what the difference is between the five.

It doesn’t help Home Assistant that Z-Wave and the devices themselves are so flaky. If you want Z-Wave light switches, you’ll find three brands – GE, Honeywell, and Jasco. However, they are all Jasco. The first two (and maybe also some Leviton switches) are just rebranded Jasco. The first generation of Jascos is complete garbage. I’ve had a 40% failure rate with them. They’ll randomly die or fry in a power failure. So far, knock on wood, no failures with the second generation.

It’s extremely common for switches to misreport their state to Home Assistant, and Home Assistant can’t do anything about it. A switch may tell Home Assistant, “I am off” or “I am on,” but the actual mechanism to turn the light on and off may be broken. What’s Home Assistant supposed to do? Ask the switch, “Are you lying to me?” Home Assistant is trying to herd cats. This is a problem with not just home automation, but any technology that is built on a controller/worker architecture.

Home Assistant/Z-Wave delivered on my desire for a cheap and flexible option. Is it reliable? Nope. Is it easy to maintain and fix? Ha. All too often, I see recommendations for Home Assistant when someone asks, “I’m looking for a cheap home automation system.” Unless I know that person has experience with Linux, it’s a disservice to recommend Home Assistant. On the other hand, looking at other platforms and hubs, Home Assistant still gives you the most flexibility and consolidation. That says more about the dismal state of home automation than it does praise for Home Assistant.

It’s 2020, and home automation is nowhere near “just works.”

Google and Apple Have Been Screwing Around With Email Security and it Doesn’t Actually Help Security

Filed under: Technology — Shu @September 19th, 2020 3:55 pm

TL;DR – Two-factor authentication (2FA) with Gmail, Apple’s native mail clients, and an email forwarding service is much harder to set up than it needs to be due to poor communication by both Google and Apple. The end result is that, if people can’t give up Gmail, Apple, or forwarding, it’s much easier to just turn off 2FA, which is a bad thing.

I try to do the right thing. I turn on two-factor authentication on my Gmail account, don’t reuse passwords, etc. However, getting this up and running across all my Apple devices was a PITA and took a huge amount of googling and experimenting. Apple and Google’s history on mail, like just about everything, has been a clash of tech egos and insanity. I went through periods of turning 2FA on and off to get things to work correctly, and I can only imagine a non-technical user just saying, “fuck it” and keeping it off. So despite Apple’s and especially Google’s preachings, their bickering and hair-pulling business decisions have helped no one but hackers.

In the old days, websites would just require a password to sign in. Two-factor authentication became popular as a way to secure more important sites. Two-factor authentication is the process where a site or company requires a second way to verify that it’s you signing in besides just a password. Banks typically do this. If you are signing in on a new computer or browser, they often will send you a code in a text message on your phone, and you have to enter that code in the browser before they’ll let you in. The idea is that it’s harder to compromise both your password and your physical phone than just the former.
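For the curious, the one-time codes behind most second factors aren’t magic. Here’s a minimal sketch of HOTP, the counter-based code generator from RFC 4226 that authenticator apps build on (this is illustrative only; it’s not how Google’s in-app “Yes, that’s me” prompt works):

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Generate an RFC 4226 one-time code from a shared secret and counter."""
    # HMAC-SHA1 over the big-endian 8-byte counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # "Dynamic truncation": the low nibble of the last byte picks an offset
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226's own test secret produces the published test codes
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Because both sides hold the secret, the server can compute the same code and check it, and an attacker with only your password can’t.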

Google is Mostly to Blame

Instead of a simple text message, though, Google requires you to authenticate via the Google app. You have to download the Google app on your phone, and when you sign in elsewhere, the verification is done in that app. You have to tap “Yes, I am trying to sign in on the desktop” in the Google app. There are also scenarios where they text you a code to enter somewhere in addition to the app.

This is all fine and dandy, and makes it slightly harder to do an attack called SIM swapping, but there are plenty of scenarios where you aren’t signing in via a browser. For example, an email application. I also have security cameras that are controlled by the camera maker’s app. I have to enter a username/password like the old days for them to email me when motion is detected. For those scenarios, Google introduced two methods – a setting called “Allow Less Secure Apps,” and “App Passwords.” The former allows your username/password to be used anywhere in an old-school way, and the latter is available when you have two-factor authentication enabled. App Passwords generate one random password per application and tie it to a device, basically preventing you from reusing passwords. So if someone breaks into my camera system, they can’t use that password to sign in on a Windows XP machine in Russia.
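To make the browserless scenario concrete, here’s a sketch of the kind of script a camera (or anything else) might use to email you through Gmail. The addresses and password are placeholders; the only Gmail-specific detail assumed is the standard smtp.gmail.com submission endpoint on port 587:

```python
import smtplib
from email.message import EmailMessage

def build_alert(account: str, subject: str, body: str) -> EmailMessage:
    # Placeholder address: the camera just emails its owner.
    msg = EmailMessage()
    msg["From"] = account
    msg["To"] = account
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_alert(msg: EmailMessage, app_password: str) -> None:
    # With 2FA enabled, a 16-character App Password is accepted here;
    # your normal account password is not.
    with smtplib.SMTP("smtp.gmail.com", 587) as server:
        server.starttls()
        server.login(msg["From"], app_password)
        server.send_message(msg)
```

There is no browser anywhere in this flow, which is exactly why the Google-app prompt can’t help and an App Password is the right tool.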

The problem is that Google, like usual, does a crappy job at communicating/educating the layman about these changes. Further, again, like typical Google, you can’t ever trust how long they will keep supporting a method or product.

For both “Allow Less Secure Apps” and “App Passwords,” they freakin’ warn you all over that this is insecure and you shouldn’t use it. That’s definitely true for Allow Less Secure Apps, and technically true for App Passwords, but come on. App Passwords are essential for those scenarios without a browser and are actually a pretty good security mechanism.

Now you’re in a situation where App Passwords are perceived to be dangerous, and because Google warns you that it’s a bad way of doing things, they imply they may yank them in the future. So you don’t use them. However, you still have to set up the security camera to email you when a rabbit runs across your driveway. How do you get that working? Turn off 2FA and turn on “Allow Less Secure Apps.” Congratulations, you are now no longer following best security practices. Despite what Google wants you to do, their communication compels otherwise.

Apple is Also Mostly to Blame

I fully admit a lot of Apple’s blame is because of my special snowflake setup. In the email specs and most apps, including the Gmail web interface, you can choose a specific “Reply-To” address that is entirely different from the email address on the system. I use this feature. I use Gmail, but I never give out my Gmail address. The Reply-To address is my alumni address. I send out email through Google, and recipients’ email clients are supposed to use the Reply-To address. They don’t always, and instead respond to my Gmail address. It works, but my Gmail address is not my “real” address. My school alumni address forwards to the Gmail account.

In my encounters, this is an uncommon requirement, but I’ve met many people, and organizations, where this setup is used for various reasons. It isn’t exactly a rare edge case.

Apple’s Mail applications have been inconsistent and slow at incorporating Google’s security changes on both macOS and iOS. The smoothest way for native applications to support 2FA is to launch a browser window, kick you over to Google, make you sign in with Google; Google then says yea or nay, and the browser tells the OS yea or nay. *Hand waves* This is a type of 2FA flow called OAuth.

It’s Been Inconsistent on macOS

There are two ways you can add a Gmail account to macOS. You can choose “Gmail,” which does a lot of under-the-hood magic, or you can directly enter Gmail server settings via “Add Other Account…” I hated the “Gmail” method. It never allowed you to change your Reply-To address and hid what the hell was going on. For years, up to and including Mojave, I had to use the latter to accommodate my special snowflake email-forwarding setup. It worked ok until 2FA was turned on. When you added an account via “Gmail,” it at least supported the OAuth flow. If you add it via “Add Other Account…” it doesn’t. So it looked like you were dead in the water with 2FA, Gmail, and a forwarding email.

However, you actually weren’t dead in the water, because you could set up an App Password. The problem was that Mail usually didn’t tell you what was going on. It just spun and spun until it timed out and said only that the account “is not responding.” Occasionally a dialog box pops up telling you to use App Passwords, and that’s how I discovered that was the issue. I set up an App Password, and lo and behold, it worked. Now, this specific issue could have been 100% Google’s fault. Maybe they didn’t provide a usable error for Mail to report. Who knows.

Giving Apple credit, they did indirectly fix things in Catalina. If you add an account via the “Gmail” method, it now allows you to change your Reply-To email address. So you’re kicked into the OAuth flow and can use an email forwarder. As far as I know, though, “Add Other Account…” still doesn’t support OAuth.

It’s Been a Consistent Dumpster Fire on iOS

iOS is a different story. Setting up a Gmail account as “Gmail” does not allow you to change your email address in outgoing emails at all. This is true even up to iOS 14, released a few days ago, so that is not an option. Setting up the server directly never gives you the App Password warning. It just spins, times out, and says something like the account “is not currently available.” So if you only use email on your phone, you’ll never be told to use an App Password. Again, still the same as of iOS 14.

I can’t imagine how you would figure this out if you didn’t have macOS or if your Google skills weren’t exemplary. If you use a Reply-To, it is just easier to turn off 2FA and set up Gmail as a regular IMAP server. You are led to believe that 2FA, iOS, and email forwarders are an impossible combination.

Other Apps: Yay!

All this mess made me explore other email clients. The thing they all have in common is that they do not try to be clever and hide things, and they all let you change your Reply-To address.

Edison on iOS was great until it wasn’t. Hopefully an issue like that won’t pop up again.

Thunderbird is a throwback that is usable, but the interface is woefully outdated for modern wide screens and laptops.

Yeah, I actually used Outlook for a while on my 2015 MBP (the last great Macbook Pro) on Mojave. It ain’t terrible.

In Conclusion, This is What You Need to Do…

With a lot of finagling, you can actually use two-factor authentication and special email setups with Apple software. On macOS before Catalina, use an App Password and enter the server settings directly. On Catalina and hopefully beyond, add the account as a Gmail account. On any version of iOS, use App Passwords instead of your normal Gmail password. You will never learn this strictly from using iOS.

It would be a lot easier if Google wasn’t constantly changing Gmail security up, and Apple stopped trying to be so damn clever with their email clients. Both companies need to stop trying to force people into screen time on their ecosystems at the expense of good security.

Great. As if Ketogenatics Weren’t Insufferable Enough

Filed under: Keto — Shu @September 13th, 2020 1:03 pm

Keto restrains aging-induced exacerbation of COVID-19 (maybe) in mice (not humans). The researchers looked at how ketogenesis protects mice from cytokine inflammation during infection with a coronavirus that works similarly to SARS-CoV-2.

Completely wrong statement you can now start posting on Facebook: If you’re on a ketogenic diet, you’re immune to COVID-19.

No running at all this week. 1) California fires blowing up here making it look like fucking nuclear winter. Controlled burn ban didn’t work that well, did it, Californians? Maybe one day Californians will know about “unintended consequences.” 2) Been working on staining my deck this week.

I think Mount Rainier is Going to Erupt Soon

Filed under: Running,Running Gear — Shu @September 7th, 2020 1:41 am

At 10:00 pm tonight, I was on the last mile of my run. I approached an elderly couple at a bus stop and the wife jumped out in front of me. I swear, she was not even four feet tall. She was amazed and tickled at my ankle running lights and asked me all about them.

I’m not a morning person, so morning running is out. Some sort of visibility aid is essential up here, where it gets pretty dark at 4:00 pm. These lights have gotten me not killed on more than one occasion. There are several flashing modes, including not flashing. I, of course, use the seizure-inducing mode that flashes like 20 times a second. A full charge will last me a couple of weeks, and so far, they’ve been rain-proof. Even in pouring storms, they’ve survived. Only 22 bucks, too.

She was so funny and cheerful. I was concerned that they were out so late, and whether they still had a ways to go after the bus dropped them off. I offered to buy an Uber ride for them, but they graciously declined. The bus drops them off right in front of their house, they said. I wonder if this is like the Hawaiian volcano goddess Pele vanishing hitchhiker stories, where the person who gives Pele a ride has their house spared when a volcano erupts. I think this encounter is a sign that Rainier will erupt soon.

Status-wise, things have been going well. Doing about 15 miles per week for the last couple of weeks. Full ketosis. About 10-10:30 per mile, with bursts down to 9:45/mile over one or two miles on occasion. Muscles feeling ok. Yoga’s going well, still a pain in the ass. I’ve been thinking about adding plyometrics to get my pace up and improve my stride. Pogoing is kept in control because there’s a lot less to do these days. There’s a swath of new gyms to the west of me, and I’m gold in most of them. Earlier this year, I hit gold in all of the gyms through my usual route in Mill Creek.

There is no Such Thing as a Free Lunch For Yaesu’s WIRES-X

Filed under: Amateur Radio — Shu @August 27th, 2020 1:11 am

I recently purchased parts and slapped them on a Raspberry Pi 2 I had lying around to create an MMDVM hotspot for digital ham radio. Using the popular and free software Pi-Star, this would allegedly be a cheap way to get into all the digital platforms – WIRES-X, D-Star, and DMR. However, since I own two Yaesus, I’ve concentrated on WIRES-X thus far.

To review, the traditional flow into WIRES-X (or DMR, or D-Star for that matter), goes like this:

ham -> repeater (via radio) -> internet -> WIRES-X server network (or DMR or D-Star) -> repeaters -> local hams (via radio)

As tech-savvy as Seattle is, I really only have three usable WIRES-X repeaters within range. Often, at least one of them is offline, or its connection to WIRES-X is offline. After all, it is reliant on someone’s internet connection. Repeaters are set up by some generous person or club using their own time and money. Further, since they are open to anyone, I have to share the repeater with others. If I bounce around rooms, I’ll piss off other users and get my call sign banned. The hotspot solution gives me full control of how I want to get on WIRES-X, or so I thought.

The problem is that no one has actually reverse engineered a way into the WIRES-X system. It is a proprietary network whose only entrance is Yaesu hardware. Even if the technical challenges were solved, there would probably be legal obstacles.

The most efficient way would be:

ham -> hotspot (via radio) -> internet -> WIRES-X server network -> Yaesu repeaters -> local hams (via radio)

Instead, somehow, Yaesu hardware must be inserted as the gatekeeper into WIRES-X. Again, generous people and groups have set this up. Two platforms have been created – Yaesu System Fusion (YSF) and Fusion Connect System (FCS). Each platform also has its own rooms, and they try to map to every WIRES-X room. However, the flow is now:

ham -> hotspot (via radio) -> internet -> YSF/FCS -> Yaesu repeater (via radio) -> internet -> WIRES-X server network -> Yaesu repeaters -> local hams (via radio)

First, note the Yaesu repeater -> internet hop. This connection is there in all flows. You have no control over it, and it can break. Now go back and note the YSF/FCS -> Yaesu repeater portion. That is a traditional radio broadcast between two internet servers. Think about that. It’s a server with an antenna attached that sends a radio broadcast to a Yaesu repeater in the same room. There is no Cat5 cable here. Someone somewhere else basically has two antennas attached to two servers. You have no control over it. It can also break. Worse, it can be wrong. These are two critical points that can and often do fail, and the initiating ham may not even know they failed.

Why am I belaboring this? Because every tutorial or video on the Internet about hotspots advertises them as a free way to get onto WIRES-X. This is false. Maybe it’s true for the end user, but the Yaesu repeater is purchased and maintained by someone. The YSF and FCS platforms are paid for and maintained by someone, too. No money is being made by the YSF and FCS networks, and thus there is no guarantee of accuracy or uptime.

This is not sanctimonious proselytizing that we hotspot users need to be eternally grateful to the YSF and FCS providers, although we should be thankful. The point is that shit breaks and the “it’s so easy” promises by tutorials and YouTubers is wrong.

Pi-Star lists FCS rooms and then YSF rooms. There is no guarantee that the FCS/YSF numbers will be the same the next time you connect. Servers can disappear because a guy is tired of keeping the bridge up, or his rabbit chewed on his cable. Since it’s just an ordinal list of numerical rooms, the further down a WIRES-X mapping sits on that list, the higher the chance that shifts in the list move it. You have to check occasionally.

To map an FCS room to what you enter on your ham radio, list out the FCS rooms starting with the first server, FCS01 through FCS05. The entries in your radio map starting at 00010 and increment by one. So the first room on the list is 00010; FCS00100, the Deutschland room, is 00020; FCS00290, Texas-Nexus, is 00244; and so on. YSF room numbers are just the YSF room number. Several sites describe a small algorithm you can calculate in your head to get the WIRES-X mapping. Because the bridges are so ephemeral, this algorithm often doesn’t work, especially for higher-numbered rooms.
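The ordinal mapping above can be sketched in a few lines. This is a hypothetical illustration, not Pi-Star’s actual code; it just assumes what the paragraph describes, that radio entries start at 00010 and increment by one per room in whatever order the list is in today:

```python
def radio_entry(fcs_rooms: list[str], room: str) -> str:
    """Return the 5-digit code to enter on the radio for an FCS room.

    fcs_rooms is today's ordered room list (FCS01 through FCS05
    concatenated). Because that order shifts whenever a bridge
    disappears, the same room can map to a different code tomorrow --
    which is why the do-it-in-your-head algorithm keeps failing.
    """
    index = fcs_rooms.index(room)      # position in today's list
    return f"{10 + index:05d}"         # entries start at 00010
```

The fragility is visible right in the function signature: the code you dial depends entirely on a list you don’t control.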

There is also no guarantee that the YSF and FCS rooms are correctly mapped to their advertised WIRES-X rooms. That advertisement is just that – a self-reported connection that is not verified by anything. It is common for this mapping to go down while the YSF/FCS portion remains up. FCS has a detailed status page for its servers, but nothing to indicate whether each is successfully linked to its targeted network. In that case, all the YSF/FCS users are in a room thinking they’re in a WIRES-X room, but they are not. They’re in their own purgatory. You end up with two rooms with identical names, each isolated from the other.

FCS is also limited to 100 mappings of WIRES-X rooms per server. There are five servers in total, which means there is a max of 500 WIRES-X room connections. You bet there are more than 500 WIRES-X rooms; that side is theoretically infinite. To make up for it, YSF nodes map one YSF server to one WIRES-X room. However, no one person or group maintains all the YSF nodes. Each YSF mapping is at the whim of someone else. There are plenty of WIRES-X rooms that are unreachable by hotspot users.

TL;DR – There is no such thing as a free lunch with digital ham radio. Someone is paying for something, and there is no 100% guarantee of uptime or accuracy. In fact, from a software architecture perspective, the whole thing is held together by chewed gum and rope bridges. Be aware of this when troubleshooting, and don’t for one second think a hotspot is a guaranteed way of getting into WIRES-X. Yaesu hardware, either yours or an open repeater, is the only way to fully get into WIRES-X.

I Need a Break

Filed under: Running — Shu @August 16th, 2020 2:45 pm

I’ve been running straight for the past 4-5 weeks now, doing 6 miles max on Sundays (3-4 miles per run, 2 weekday runs), under keto, and it’s going well. I’ve had a shit ton of house projects this weekend, cleaning the dining room and bird cages. I’m not running tonight, and will probably not run this week.

The good news is that the yoga has definitely, definitely helped. I still need the post-run routines, but at least they seem to be doing something. My stride is better. My ankle is 100%, my piriformis is about 95%. I only feel an occasional twinge in it.

Fully in keto, and I feel leaner and lighter. Got the Wii U back online, so maybe I’ll start weighing myself again. Dunno.

It’s Time to be Honest With Ourselves

Filed under: Keto,Running — Shu @June 28th, 2020 8:59 pm
  • Lack of a goal to work towards (i.e. official races) screws me up. I need that structure, I need a goal. I’m not training for anything right now, and “just because” isn’t enough to motivate me.
  • I’m doing 7-9 miles per run, 20+ miles per week, but there’s a lot of Pokemon Going during that time. I stop way too many times. Shadow fights, taking down decayed gyms, trying to find good quests, etc. I need to ask myself if I’m a runner who PoGos or a Pogoer who runs. It’s the former, and I need to start acting like it.
  • I’m not in keto. A few chips here and there, handfuls of peanuty crack, and Halo Top adds up. Weight has slowly crept up, which is not helping the muscle fatigue. And of course, you can’t outrun a bad diet. Running has kept the situation from exploding out of control, but it’s not enough to mitigate it.
  • I need to do yoga again. Before I started running again, I was doing P90X. The Yoga program was a pain in the ass, but I need to put that in my routine. These post-run stretches aren’t resolving the situation. My piriformis is tighter, my ankle is twingey due to the lack of leg strength.
  • There’s no races this year. Everything’s either cancelled or “virtual” which I am not into. There’s no need to rush and force myself to ramp up to 20+ miles a week. If I slowly build up to it, great, but there is no hurry. I just need to basically stay in running shape for next year to be really ready for training mode.

So, we’re going to reset things. Drop down the mileage, incorporate some yoga into the regimen. No, seriously, get back on keto. Really. I’m going to keep things at a medium level, or really slowly creep up, and get very fat adapted for next year.

Quarantines are the Best Thing Ever!!

Filed under: Running — Shu @April 22nd, 2020 5:47 pm

Oh man, the air is so. clear. I can feel the difference in my lungs! I’ve been back up to 21-22 miles a week since everything started. The roads and trails are empty! I started a new remote job back in November, so I’ve been WFH already. I’ve been taking advantage of this opportunity. Basically I shut down the laptop, then put on running shoes for an hour or an hour and a half!

Sure, the races are all cancelled because Americans are a nervous, paranoid group of hypochondriacs. However, I’m going to keep up the mileage and be in good shape to target a marathon for later this year or next year!

Raspberry Pi-Powered Time Machine Backups – No Really, Do It This Way. The Other Ways Are Wrong.

Filed under: Home Automation — Shu @April 14th, 2020 4:25 pm

TL;DR – If you want to set up a Raspberry Pi and a hard drive to act as a network Time Machine server, use Samba and ext4. If you find a post on some site that tells you to install netatalk and hfsprogs, flip off your monitor, close the tab, and Google again.

A long, long time ago, I looked into setting up a Raspberry Pi to handle network backups for all the Macs at home. There were many posts on how to set up a Pi with a USB-connected external hard drive to act as the now-discontinued AirPort Time Machine Capsule.

This was a great fit for several reasons:

  • All the machines that are relevant at home are Macs. Time Machine backups are easy to set up on the Mac side.
  • Apple discontinued the Time Machine Capsule because they want you to back up and store things in the cloud. This is stupid. Do. Not. Trust. The. Cloud. You have been warned.
  • Time Machine Capsule was a great idea in an overpriced (typical) package anyway. A Pi solution would be cheap and scale way better than TMC ever did.
  • A Time Machine Capsule would let me back up all my Macs to it on the network with little fuss on the Mac side.
  • This is not my primary backup method. My primary backup method is copying my HDs every quarter or so onto a HD and putting that HD in a safe deposit box. This primary method is for the “oh shit I’ve lost everything but at least I have my files” scenario. This backup method is just for occasionally recovering a deleted file that I needed.

In Google searches, there’s a slew of blogs and content farms that walk you through how to set this up. They all say the same thing because, well, they just copy the same crap from each other. After some trial and error, it turns out the way these sites tell you to do it is absolute garbage. They’re just there to generate clicks, and no one actually tested to see if it works.

“Using a Raspberry Pi for Time Machine” on Paul Mucur’s site is the only way to set up a reliable Pi-powered Time Machine server. Do not trust any other site.

Do not trust howtogeek, lifehacker, etc. Do not trust any other blog site that shows you how to do this. Paul Mucur’s way is the only way to do it if you actually want a functioning Time Machine backup solution.

Every other site tells you to use the Apple Filing Protocol via the unix package netatalk to communicate with the machine. They also tell you to format the drive as HFS+ and access it via the hfsprogs package. Neither technology works reliably. It’d be easy to blame the software, but it’s more likely that AFP and HFS+ are terrible and have always been terrible.

I remember messing with netatalk in the late 90’s trying to get some Macs to talk to some unix shares. Fucking thing never worked, and apparently is only marginally ok today. However, it still isn’t reliable to handle Time Machine backups.

HFS+’s woes and shortcomings have been well critiqued.

At first I had an issue where the first Time Machine backup would work, taking like 24 hours. This is expected. Incremental backups would work fine for maybe three or four runs. Then, the backup file on the drive would get corrupted. Time Machine would complain, and I’d have to start all over, running that initial 24-hour+ backup again. This was insane. I didn’t think it was necessarily the drive, because I could still connect and write to it ok. I could also transfer files, big and small, via other methods like FTP and SCP. My hunch was the networking was crap.

I found that Time Machine also supported Samba, not just AFP. I got rid of netatalk and switched to Samba. The incremental backup corruptions disappeared. Great! I thought. I could go into Time Machine and see all my backups. I thought my old nemesis netatalk was entirely to blame. Turns out, it was only partially.

After a power outage, I found the whole damn USB drive was corrupted. I could go into the Pi and mount the shares just fine, but they were always mounted as read only. No amount of fscking could fix it. I thought it was the cheap Amazon Basics USB hard drive enclosure, so I got a better rated one. No difference. I always had to format the drive and do the huge initial backup again. I went through three power failures like this. I was about to give up on the whole idea, but on a whim, I tried formatting the drive as ext4. Macs can’t read/write ext4, but Samba handled that detail for the Macs. There was no reason that couldn’t work.

I’ve now suffered through three additional power outages, and I’m able to recover flawlessly. I have to manually re-mount the shares on the Pi, but they mount as RW. This is just an automation task that I haven’t implemented yet. The clients then can reconnect, and the backup file is not corrupted. I’m able to backup from High Sierra and Catalina. Time Machine just picks up where it left off on all machines.

My hardware setup is a Raspberry Pi 4 and a Seagate 6TB NAS HD in an Inatek enclosure connected via USB 3. The Pi 4 is connected via Cat 6 cable. Due to the USB and Ethernet architecture limitations of other Pis, don’t do this on anything but a Pi 4.
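For reference, the heart of the Samba side is the fruit VFS module, which provides the macOS extensions that make a share show up as a Time Machine destination. A sketch of what the share definition might look like – the share name, path, user, and size cap are placeholders, and Paul Mucur’s post has the full walkthrough:

```ini
[timemachine]
   path = /mnt/timemachine
   valid users = pi
   read only = no
   # The fruit module supplies the Apple-specific extensions
   # Time Machine needs over SMB
   vfs objects = catia fruit streams_xattr
   fruit:time machine = yes
   fruit:time machine max size = 2T
```

Note there is nothing Mac-specific about the underlying filesystem here: the drive itself is plain ext4, and Samba handles the translation for the Macs.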

I was going to document the steps, but found Paul Mucur already did it. The only thing I deviated on was not installing Avahi initially. This means I have to manually reconnect from the Macs after re-mounting, should a power outage occur. I will try to install Avahi again in the future. The rest of the instructions on that site covering Samba and ext4 are solid.

This Should Be Fun!

Filed under: Running — Shu @February 3rd, 2020 5:01 pm

I haven’t given any races my money yet. Paying attention to the WuFlu situation. I’ve basically spent my days reading scientific papers about it. It’s gonna come here. Stockpiled food, pet food, coffee, and TP just in case. This should be fun!
