Galapagos Photography Tips

Filed under: Technology — Shu @September 4th, 2021 4:25 pm

The photos are here.

The Galapagos is often a once-in-a-lifetime trip, so I wanted to get the photography right. I did a bunch of research beforehand and found both good tips and bad ones. The bad ones still show up in Google searches, so be careful.

You have no control over time and space.

If you take a standard Galapagos cruise like most people do, you are not in control of your schedule. You do everything with your fellow passengers. On excursions, you are often hurried along. You sleep on the boat. You wake up and eat breakfast on the boat. You take a dinghy to an island. You have an activity, whether a hike or scuba, in the morning, go back to the boat, and have another hike or scuba in the afternoon. Your itinerary is heavily regimented, and you cannot go anywhere without a guide. This is the main point to remember. There are ramifications to this:

  • You cannot go anywhere off designated trails to get the perfect shot. No climbing on things, either (duh).
  • You will not be able to camp out at a spot for extended periods of time waiting for a perfect shot. Guides will tell you to keep moving with the group to stick to the schedule.
  • You will not have any control over where you are during the golden hours. You’ll pretty much be on the boat in the morning, so forget that one. Evenings depend on the time of year, but there’s a fair chance you’ll miss those, too.
  • You will not be able to return to a location. These tours have a set itinerary. When you’re done, you’re done. If the weather didn’t cooperate, that’s a pisser. You’ll have to live with full sun beating down on you or full clouds.

You will be snorkeling.

This is a good time to try underwater photography. I used my old Canon XSi because it would have been quite a bummer to screw something up, have the bag leak, and destroy my primary camera on this trip. I didn’t want to blow $1-2K on a real hard case just to goof around under water, nor did I want to trust a camera to a cheap Chinese-made bag from Amazon. The Ewa Marine bag I bought, at $300, was a good compromise, and it worked perfectly.

Prior to my purchase I asked photography forums about this bag. Almost universally they ragged on it, saying bags are untrustworthy, no matter the equipment, it will eventually leak, blah blah blah. It was all idiotic, elitist BS. I might use this bag once a year. I’m not going to be abusing it. A $300 bag for occasional use is fine. Buncha dumb gatekeepers who can’t keep things in context.

The only thing I would have changed: I should have gone ahead and gotten a red filter. The visibility is not as clear as you would expect in some areas, and the rule of thumb is that red light disappears within about five meters of water. Five meters is really not a lot; your subjects will likely be more than five meters away in some direction. Plus, that rule assumes full sun, which even in July we did not get at all during our trip.

You probably won’t need a tripod.

Maybe a monopod. But see the point above about having to move often. You often won’t have time to set everything up perfectly, take your perfect 3-second shot, and move on to another angle to repeat. I had a camera backpack with a small tripod and never used it.

Plus, the trails are *very* rugged. They are often nothing but large volcanic rocks that require climbing. It would be a pain in the ass to lug a sturdy Manfrotto tripod with you.

You will need a good telephoto lens.

On one forum I ran across, someone said you won’t need a telephoto lens because the animals are so tame. Utter BS, and the worst advice about Galapagos photography I found. First off, they’re animals; they’re not obligated to come right up to you. Second, you absolutely cannot deviate off the trails. Our guide had stories of having to boot people from tours because they kept wandering off. This rule is no joke, and you will need a good telephoto to reach animals beyond the immediate trail areas. Third, birds. You can get some magnificent pictures of birds in flight if you have a good lens.

It definitely was a trip of a lifetime, and I got the chance to take some amazing photos. Just remember to bring your best equipment, whatever it may be, and a lot of common sense, and understand that while the wildlife is tame, the restrictions pose some challenges to getting the perfect photo.

I was in the Galapagos for a week.

Filed under: Technology — Shu @August 14th, 2021 2:20 pm

And spent almost two weeks total with Kali Linux on an old MacBook Air as my only computer. People are often warned that Kali should not be used as a normal everyday OS due to hardware compatibility issues, so I was curious how true that was.

Why Linux and Why Kali?

I wanted a laptop that could do everything I needed, but I wouldn’t cry if it got lost or stolen. Everything about the Galapagos is very controlled and safe, but we were putzing around Ecuador for a few days before with long layovers in Mexico City and Houston. Linux on an ancient and clean laptop was an ideal choice. Everything would be locked down, and if it got stolen, the damage of losing accounts and personal data would be isolated.

Kali was chosen because I had already been working with it for a while in VMs. I didn’t have enough time beforehand to screw around with changing desktop environments, etc.; I needed something up and running sooner rather than later. I hate GNOME, and the lighter weight of Xfce would be ideal for the older specs of this Air (4 GB RAM, 1.8 GHz i5).

What Was I Going to Do With It

  • Standard web, email, youtubing
  • Slacking, Discording, and Signaling
  • Docker to screw around with coding if I had down time
  • Image sorting and previewing of RAW photos. Not editing.

What Went Well

Web, Email, and Youtubing

Stellar. Absolutely no problems. The included Firefox worked well, but I installed Brave and went to town. I never had any problems with any sites, including ordering stuff off Amazon. Email was done via web clients. No problems there, either, and with passwords never being saved, I didn’t have to worry.

Slacking, Discording, and Signaling

No problems here, either. The fan kicked on due to the high demands, but they all worked great. You have to add repos for all three, so it’s one more step than a plain apt-get install, whether on the command line or in the GUI. I was able to keep in touch with friends.

Docker and Coding

No problems here, either, but then again, I really didn’t have time to do much. I downloaded a simple Python editor and goofed with code here and there on the plane, but nothing too heavy.


VPN

No problems connecting to my work VPN (OpenVPN) via the command line. I didn’t need it, but I felt connected to the real world should there have been any emergencies.

What Sucked

Image Sorting

I wanted a way to review the hundreds of pictures I would take per day and quickly toss out any that obviously sucked. I didn’t want to edit, because I knew this 4GB machine wouldn’t be able to handle it.

Overall, the experience sucked. I didn’t get a chance to download a real program like RawTherapee or LightZone beforehand. Still, it wouldn’t have made things better: they’re demanding, and overkill for the task. There are two included programs in Kali that can view RAW images: Ristretto, which had a bad habit of opening *every* file in a directory, killing my workflow; and Atril, which opens each image in its own separate window, doesn’t tile the windows sequentially, and has no deletion feature.

I ended up using Brave, of all things, and then manually deleting later. Brave at least let me quickly view images in sequential order. The only thing I was happy with was the SD card slot in the Air, which allowed me to easily transfer the files to the hard drive. Kali handled this just fine.

Freakin’ NTP of All Things

Shortly after installing, the system clock would always reset itself to February 1, 2021, 8:00 am. I have no idea what is so special about this timestamp. This was a huge problem because it made SSL certificates on sites appear invalid, which nuked web browsing.
Looking online, there were a bunch of posts blaming timesyncd, the hardware clock, and NTP. I eventually found one solution that worked.

This took a long time to find, and I’m still not sure why it worked, but it was the only thing that did. Clearly, though, this is a problem that Kali should address.

Waking from Sleep/Suspension

I didn’t dive too far into this, but there is a lot of discussion across all distros about laptops not waking from sleep correctly, often forcing restarts. My experience mirrored this. If I closed the laptop or chose Suspend/Hibernate in the UI menu, sometimes I could wake it later by pounding on the keyboard, opening the lid, etc. Usually I couldn’t, and I had to force a restart. I ended up just shutting down whenever I remembered.

The most common “try this” solution was to increase the swap size. I did, but it didn’t work. It seems the system would wake, but the display wouldn’t. This is probably a deep problem tied to the hardware that is not worth exploring.

Hotel Webcaptive Screens

You know those screens that pop up in cafes and on hotel networks when you go to a website, requiring you to accept the terms and conditions or enter your room number before you can connect? Linux in general, and Firefox specifically, don’t handle these well and generally won’t show those popups.
The most reliable way around it is to connect to the network, go into the terminal, curl -LG to Google, see the URL it really connects to, copy/paste that URL into a browser, then accept the terms. Otherwise, nothing connects and you’re left wondering why Linux can’t wifi.
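Concretely, the probe amounts to something like this (the exact flags are a matter of taste; any plain-HTTP URL will do):

```shell
# Request a plain-HTTP page and let curl follow redirects (-L).
# On a captive network, the portal hijacks the request, and the final
# URL curl lands on is the portal's login page. (The || true keeps the
# line from failing outright if the network is completely blocked.)
curl -sL -o /dev/null -w '%{url_effective}\n' http://google.com || true
```

Paste whatever URL that prints into a browser, accept the terms, and the rest of the network opens up.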


Overall

Yes, a very pleasant experience if you know what you’re doing! You do need a fair amount of Linux experience to figure out things like the NTP and captive-portal issues. It met almost all my needs, and I felt safe with it. I will definitely use this setup when traveling out of the country in the future.

Books on macOS and iDevices are a Mess

Filed under: Uncategorized — Shu @June 25th, 2021 7:16 pm

What was once a simple paradigm for syncing books between desktop machines and iPhones is now a mess, thanks to iCloud, the detachment of iDevices from the desktop, but mostly just plain half-baked implementations. I simply want to take a PDF or eBook and read it on my desktop, my iPad, or my iPhone. The latter two would have to support offline reading, like on a plane.

After googling and getting nothing but old blog spam from 2013 and countless toggling of settings, I have discovered:

1) If Books on macOS or iOS/iPadOS is set to use iCloud Drive, you will not be able to transfer a book from your hard drive to an iDevice. Syncing does not work, and dragging and dropping into the Books section on macOS does nothing. The user is left with no error messages or warnings.

2) You cannot see your Books through iCloud.com or iCloud Drive on macOS. The only way to see Books in iCloud is to turn on iCloud for Books on the Mac or iDevice.

3) Dragging a book/PDF from your file system to macOS books when iCloud is turned on for Books will upload the Book/PDF to iCloud. Maybe.

4) Unless you want *all* your books and PDFs to be on iCloud, you need to download everything to your Mac, delete them from Books, and then manually sync. Otherwise, the ridiculously small 5GB iCloud space you’re given will be eaten up real quick.

5) I purchased Neuromancer. Books keeps offering the other two books in the trilogy, and I can’t remove them from macOS or my iDevices. My wife bought a series of other books; I can’t remove those, either. I can’t get rid of this Winnie the Pooh shit, either.

6) You have to go into your account settings to unhide books. There isn’t an easy way to see *all* your purchases from the store, and there is no Hide functionality in macOS Books. I’m not sure how to bring books back once deleted.

I wish someone could explain this paradigm to me. If it’s just an eBook store, I shouldn’t be able to manage my own PDFs and eBooks with it. Either way, the behavior shouldn’t be completely different when one device is using iCloud.

TL;DR – Apple’s eBook implementation is a complete clusterfuck. Turn off iCloud usage for Books on all your devices and just manage it yourself via the sync function on the device in macOS.

Adventures with Python’s Asyncio

Filed under: Technology — Shu @June 11th, 2021 6:02 pm

I finally got the chance to work with Python’s asyncio package in a semi-real-world project recently. I have a scraping program that scrapes investment fund data and inserts it into a database. As the number of funds grew, so did the number of investment vehicles to parse. In revisiting the app, I decided it was a good time to finally try rewriting it with asyncio.

Asyncio has been out for years now, but it has taken its share of criticism and knocks. Instead of trying to wrap my head around it, my strategy had been to wait things out and use Go for data engineering work instead. Go makes concurrency wonderfully easy, but I didn’t want to redo this app in Go because of its reliance on BeautifulSoup for parsing, which is pretty incredible. Over a few Python releases, asyncio has been honed, simplified, and abstracted out for mortals to use, so I thought I’d give it a shot.

There are two key steps in this app that would benefit from async: retrieving and parsing a page, and saving the data to the database. I read a few tutorials and gave it a first pass. The results were pretty surprising.
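For flavor, a sketch of how that first pass was shaped: one coroutine per investment vehicle that parses and then saves, fanned out with asyncio.gather. The function names and toy bodies here are invented stand-ins; the real steps block on HTTP with BeautifulSoup and on the database with SQLAlchemy.

```python
import asyncio

RESULTS = []

# Invented stand-ins for the app's two real steps.
async def fetch_and_parse(vehicle_id):
    return {"id": vehicle_id, "nav": 100.0}

async def save_to_db(row):
    RESULTS.append(row)

async def process_vehicle(vehicle_id):
    # Parsing has to finish before the insert, so both steps live in
    # the same coroutine, in order.
    row = await fetch_and_parse(vehicle_id)
    await save_to_db(row)

async def main(vehicle_ids):
    # Fan out one coroutine per vehicle and wait for all of them.
    await asyncio.gather(*(process_vehicle(v) for v in vehicle_ids))

asyncio.run(main(["fund-1", "fund-2", "fund-3"]))
print(len(RESULTS))  # 3
```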

Trial 1: Synchronous, Parsing and Clean Database (832 Parsings)

Trial  Milliseconds
1      366083
2      298531
3      258822
4      245793

Average: 292307.25, SD 54056.86

Trial 2: Asyncio, Parsing and Clean Database (832 Parsings)

Trial  Milliseconds
1      530063
2      456371
3      429044
4      438373

Average: 463462.75, SD 45825.87

There was a noticeable warm-up on the first runs, and subsequent runs were usually faster. The key takeaway, though, was that while asyncio was more consistent, on average it was almost 60% slower than the synchronous version!

Clearly I was doing something wrong. I suspected three things:
1) I did not fully understand the package and wrote the whole thing incorrectly. (High suspicion)
2) Database operations were not being asynced. (High suspicion)
3) BeautifulSoup and the downloading were not being asynced. (Low suspicion)

I attacked #1 and #2 simultaneously.

I started to re-read tutorials and documentation and made some changes to the code. I mainly learned that a lot of the blog posts and documentation out there are outdated, needlessly complex, and/or don’t simulate real-world situations. sleep() does not replace blocking operations IRL! Defining a simple async function that is awaited definitely fit my needs, and thinking back on work projects, maybe 80-90% of my historical data engineering needs.

After this, subsequent runs produced similar numbers. The problem had to be elsewhere.

I went down the rabbit hole of asyncing the database operations. The project uses SQLAlchemy, and the result of this dive was not good. SQLAlchemy’s async features are in beta. That’s fine. However, the drivers also need to be async, and the only reliable one right now is for Postgres. There is one for MariaDB/MySQL, aiomysql, but I ran into so many still-open bugs that I had to abandon that effort.

So, I was ready to just ditch asyncio for now and stick with synchronous code. Just to test, though, I removed the parts of the code that wrote to the database, so it would just download and parse. Guess what? Similar results: the async version was much slower. Database operations were not the blocking factor.

At this point, I felt more confident with how asyncio worked. I thought this through. The main workhorse functions did parsing and database insertions. Since they were defined as coroutines, they all had to run and complete. The parsing had to occur before the database operations. There was no way around that. Whether BeautifulSoup or SQLAlchemy was async or not didn’t matter. Both had to run, so both were in the same async’ed function.

What else could it be? Reading more and thinking about it, there’s a lot of overhead in asyncio, and others have noted it is slower in certain use cases. Maybe there was an economies-of-scale issue here.

I took this opportunity to update my dataset. I was parsing about 832 investment vehicles with each run, but those were out of date. I updated the funds I was monitoring and this jumped it to 1,049. I then re-ran both the async and sync versions.

Trial 3: Synchronous, Parsing and Clean Database (1,049 Parsings)

Trial  Milliseconds
1      338347
2      256635
3      249421
4      241273

Average: 271419, SD 45057.80

Trial 4: Asyncio, Parsing and Clean Database (1,049 Parsings)

Trial  Milliseconds
1      295613
2      267927
3      245697
4      231022

Average: 260064.75, SD 28138.98

Both methods sped up, but the async version finally dropped below the synchronous version! And it was way more consistent!

Some takeaways:

  • By increasing the amount of work from 832 to 1,049 parsings, the average async runtime went from about 7.7 minutes to about 4.3 minutes. This leads me to believe there is an overhead cost with asyncio, and you will not see the benefits on small datasets and projects.
  • Multiple blog posts say “async/await creates a coroutine.” This is not true. async def creates the coroutine; await just means run this call until it’s done.
  • That means it probably doesn’t matter very much if the libraries you use for blocking operations are not async-ready. Don’t stress it. You can place them in a coroutine (defined by async def), await the call, and it will be run until complete for you.
  • Only coroutines, again defined by async def, can be awaited. (Except generators, which are another can of worms.) Unlike Node, it’s fine to define an async function without any awaits, because the whole thing just becomes an awaitable coroutine.
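One stdlib follow-up to those points, if you eventually do want blocking calls to overlap instead of merely complete: asyncio.to_thread (Python 3.9+) runs a blocking function in a worker thread and hands you an awaitable. parse_page here is an invented stand-in for something like a BeautifulSoup parse or a database insert.

```python
import asyncio
import time

def parse_page(n):
    # Stand-in for a genuinely blocking call (parsing, a DB insert).
    time.sleep(0.1)
    return n * n

async def main():
    # Each blocking call runs in its own worker thread, so the three
    # of them overlap: total wall time is ~0.1s, not ~0.3s.
    return await asyncio.gather(
        *(asyncio.to_thread(parse_page, n) for n in range(3))
    )

print(asyncio.run(main()))  # [0, 1, 4]
```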

All in all: good fun, good practice, complicated, but a lot easier to control than JavaScript’s async operations.

Birch Bay Race

Filed under: Keto,Races — Shu @April 11th, 2021 10:14 am

15k, 1:37 time, 10:27 pace, mildly keto’ed (I don’t think I was that fat-adapted). A terrible time, but it was my first race in about a year and a half.

The route was beautiful. About three miles run along Birch Bay, then into Birch Bay State Park for a mile or two, then back up along the bay. The sun was out but did pretty much nothing for the temperature. It was quite windy; in certain parts, you were running into a strong headwind. Annoyingly, when we checked out the next day, there was almost no wind in the area.

It feels pretty good to get out, but I’m definitely not fully fat-adapted, and after two years, I’m still dealing with this piriformis issue. The two kind of feed off each other: I’m not doing 25 miles per week of training due to the butt issue, which keeps me from getting as fat-adapted as I want to be to get under a 10:00 pace.

Stop Recommending Home Assistant for Home Automation

Filed under: Home Automation,Technology — Shu @November 18th, 2020 8:05 pm

This post expands on and refines a comment I made in response to a blog post by Rob Dodson, in which he explains why he ripped out a $9,000 automated home lighting system. His post centers on the Z-Wave device protocol but touches on Home Assistant, a popular open source control center for home automation devices.

About five years ago, I was looking for a home automation system to control my lights and security system. I chose Home Assistant on a Raspberry Pi as the controller for Z-Wave devices:

  • Home Assistant is open source, aka free.
  • Raspberry Pis are cheap.
  • Z-Wave protocol is ubiquitous across many devices like switches and sensors.

Basically, it was a cheap solution that gave me a lot of flexibility beyond just controlling lights. I regret the whole decision whenever things go wrong, and they often do.

First off is Home Assistant. There is absolutely no way a non-Linux user would be able to use this software. Despite their claims and goals, it is not designed for the average user, or even one who is fairly tech savvy. They have tried to be more user-friendly in the past year or two by moving a lot of admin functions to the web interface. However, so much troubleshooting inherently involves the Linux command line that it is now in this obnoxious state where some operations need to be partially done in the web interface and then finished at the command line. For example, deleting a node: you have to initially “delete” it in the web app to clean things up, then manually remove it from the Home Assistant configuration files.

Another issue with HA is how they love to completely rewrite and break things for the sake of rewriting things. In my five years with HA, I have witnessed recommended install methods get created, rebranded, and deprecated; a complete overhaul of the UI architecture; a complete overhaul of the Z-Wave architecture; and non-functioning placeholders inserted into the web interface for features to be implemented later. You *have* to read the release notes before updating. I’ve run into occasions where pip upgrades broke the whole setup. Good luck googling for answers in this environment when you have a problem: if the answer is more than a year old, it’s probably not relevant anymore.

“Well, just don’t upgrade,” you may say. The interface nags you when an upgrade is available. It is much softer now, but previously, it inundated you with security warnings about running an out of date version. If you’re an average user, who just trusts developers at face value, you’re going to upgrade. Because security!

We wouldn’t have to google problems so often if not for the documentation and terminology. The documentation is not terrible, but it is not geared toward end users. Home Assistant abstracts everything it touches into entities and nodes. If you simply want to do X, you’ll have to identify what the entity is in your setup, and what the node is. When you add a Z-Wave device to your network, Home Assistant creates an entry for that one device in the user interface under Entities, Nodes, Switches, Z-Wave, and Devices. Five years later, I still can’t tell you definitively what the difference is between the five.

It doesn’t help Home Assistant that Z-Wave and the devices themselves are so flaky. If you want Z-Wave light switches, you’ll find three brands: GE, Honeywell, and Jasco. However, they are all Jasco; the first two (and maybe some Leviton switches) are just rebranded Jasco. The first generation of Jascos is complete garbage. I’ve had a 40% failure rate with them; they’ll randomly die or fry in a power failure. So far, knock on wood, no failures with the second generation.

It’s extremely common for switches to misreport their state to Home Assistant, and Home Assistant can’t do anything about it. A switch may tell Home Assistant, “I am off” or “I am on,” but the actual mechanism that turns the light on and off may be broken. What’s Home Assistant supposed to do? Ask the switch, “Are you lying to me?” Home Assistant is trying to herd cats. This is a problem not just with home automation, but with any technology built on a controller/worker architecture.

Home Assistant/Z-Wave delivered on my desire for a cheap and flexible option. Is it reliable? Nope. Is it easy to maintain and fix? Ha. All too often, I see Home Assistant recommended when someone asks for a cheap home automation system. Unless I know that person has Linux experience, recommending Home Assistant is a disservice. On the other hand, looking at the other platforms and hubs, Home Assistant still gives you the most flexibility and consolidation. That says more about the dismal state of home automation than it does in praise of Home Assistant.

It’s 2020, and home automation is nowhere near “just works.”

Google and Apple Have Been Screwing Around With Email Security and it Doesn’t Actually Help Security

Filed under: Technology — Shu @September 19th, 2020 3:55 pm

TL;DR – Two-factor authentication (2FA) with Gmail, Apple’s native mail clients, and an email forwarding service is much harder to set up than it needs to be, due to poor communication by both Google and Apple. The end result: for people who can’t sacrifice the Gmail/Apple/forwarding combination, it’s just much easier to turn off 2FA, which is a bad thing.

I try to do the right thing. I turn on two-factor authentication on my Gmail account, don’t reuse passwords, etc. However, getting this up and running across all my Apple devices was a PITA and took a huge amount of googling and experimenting. Apple and Google’s history on mail, like just about everything, has been a clash of tech egos and insanity. I went through periods of turning 2FA on and off to get things to work correctly, and I can only imagine a non-technical user just saying, “fuck it,” and keeping it off. So despite Apple’s and especially Google’s preachings, their bickering and hair-pulling business decisions have helped no one but hackers.

In the old days, websites would just require a password to sign in. Then two-factor authentication became popular for securing more important sites. Two-factor authentication is the process whereby a site requires a second way to verify that you are the one signing in, beyond just a password. Banks typically do this: if you are signing in on a new computer or browser, they often send a code in a text message to your phone, and you have to enter that code in the browser before they’ll let you in. The idea is that it’s harder to compromise both your password and your physical phone than just the former.

Google is Mostly to Blame

Instead of a simple text message, though, Google requires you to authenticate via the Google app. You have to download the Google app on your phone, and when you sign in elsewhere, the verification is done in that app: you have to tap “Yes, I am trying to sign in on the desktop.” There are also scenarios where they text you a code to enter somewhere in addition to the app.

This is all fine and dandy, and it makes an attack called SIM swapping slightly harder, but there are plenty of scenarios where you aren’t signing in via a browser. For example, an email application. I also have security cameras controlled by the camera maker’s app, and I have to enter a username/password, like in the old days, for it to email me when motion is detected. For those scenarios, Google introduced two methods: a setting called “Allow Less Secure Apps,” and “App Passwords.” The former allows your username/password to be used anywhere in an old-school way, and the latter is available when you have two-factor authentication enabled. App Passwords generate one random password per application and tie it to a device, basically forbidding you from reusing passwords. So if someone breaks into my camera system, they can’t use that password to sign in on a Windows XP machine in Russia.

The problem is that Google, like usual, does a crappy job at communicating/educating the layman about these changes. Further, again, like typical Google, you can’t ever trust how long they will keep supporting a method or product.

For both “Allow Less Secure Apps” and “App Passwords,” they freakin’ warn you all over that this is insecure and you shouldn’t use it. That’s definitely true for Allow Less Secure Apps, and technically true for App Passwords, but come on. App Passwords are essential for those scenarios without a browser, and they’re actually a pretty good security mechanism.

Now you’re in a situation where App Passwords are perceived to be dangerous, and because Google warns you that it’s a bad way of doing things, the implication is they may yank them in the future. So you don’t use them. However, you still have to set up the security camera to email you when a rabbit runs across your driveway. How do you get that working? Turn off 2FA and turn on “Allow Less Secure Apps.” Congratulations, you are now no longer following best security practices. Despite what Google wants you to do, their communication compels otherwise.

Apple is Also Mostly to Blame

I fully admit a lot of Apple’s blame is because of my special snowflake setup. In the email specs and most apps, including the Gmail web interface, you can choose a specific “Reply-To” address that is entirely different from the account’s email address. I use this feature. I use Gmail, but I never give out my Gmail address; the Reply-To address is my alumni address, which forwards to the Gmail account. I send out email through Google, and recipients’ email clients are supposed to use the Reply-To address. (They don’t always, and instead respond to my Gmail address.) It works, but my Gmail address is not my “real” address.

In my encounters, this is an uncommon requirement, but I’ve met many people, and organizations, where this setup is used for various reasons. It isn’t exactly a rare edge case.

Apple’s Mail applications have been inconsistent and slow at incorporating Google’s security changes on both macOS and iOS. The smoothest way for native applications to support 2FA is to launch a browser window, kick you over to Google, make you sign in with Google, have Google say yea or nay, and have the browser tell the OS yea or nay. *Hand waves* This is a sign-in flow called OAuth.

It’s Been Inconsistent on macOS

There are two ways you can add a Gmail account to macOS. You can choose “Gmail,” which does a lot of under-the-hood magic, or you can directly enter Gmail server settings via “Add Other Account…” I hated the “Gmail” method: it never allowed you to change your Reply-To address, and it hid what the hell was going on. For years, up to and including Mojave, I had to use the latter to accommodate my special snowflake email forwarding setup. It worked OK until 2FA was turned on. When you added an account via “Gmail,” it at least supported the OAuth flow. If you add it via “Add Other Account…”, it doesn’t. So it looked like you were dead in the water with 2FA, Gmail, and a forwarding email.

However, you actually weren’t dead in the water, because you could set up an App Password. The problem was that Mail usually didn’t tell you what was going on. It just spun and spun until it timed out and said “ is not responding.” Occasionally a dialog box popped up telling you to use App Passwords, and that’s how I discovered that was the issue. I set up an App Password, and lo and behold, it worked. Now, this specific issue could have been 100% Google’s fault. Maybe they didn’t provide a usable error for Mail to report. Who knows.

Giving Apple credit, they did indirectly fix things in Catalina. If you add an email account via the “Gmail” method, it now allows you to change your Reply-To address, so you’re kicked into the OAuth flow and can still use an email forwarder. As far as I know, though, “Add Other Account…” still doesn’t support OAuth.

It’s Been a Consistent Dumpster Fire on iOS

iOS is a different story. Setting up a Gmail account as “Gmail” does not allow you to change your outgoing email address at all. This is true even in iOS 14, released a few days ago, so that is not an option. Setting up the server directly never gives you the App Password warning. It just spins, times out, and says something like “ is not currently available.” So if you only use email on your phone, you’ll never be told to use an App Password. Again, still the same as of iOS 14.

I can’t imagine how you would figure this out if you didn’t have macOS or if your Google skills weren’t exemplary. If you use a Reply-To, it is just easier to turn off 2FA and set up Gmail as a regular IMAP server. You are led to believe that 2FA, iOS, and email forwarders are an impossible combination.

Other Apps: Yay!

All this mess made me explore other email clients. The thing they all have in common is that they do not try to be clever and hide things, and they let you change your Reply-To address.

Edison on iOS was great until it wasn’t. Hopefully an issue like that won’t pop up again.

Thunderbird is a throwback that is usable, but the interface is woefully outdated for modern wide screens and laptops.

Yeah, I actually used Outlook for a while on my 2015 MBP (the last great Macbook Pro) on Mojave. It ain’t terrible.

In Conclusion, This is What You Need to Do…

With a lot of finagling, you can actually use two-factor authentication and special email setups with Apple software. On macOS before Catalina, use an App Password and enter the server settings directly. On Catalina, and hopefully beyond, add the account as a Gmail account. On any version of iOS, use an App Password instead of your normal Gmail password. You will never learn this strictly from using iOS.

It would be a lot easier if Google weren’t constantly changing up Gmail security, and Apple stopped trying to be so damn clever with their email clients. Both companies need to stop trying to force people into screen time on their ecosystems at the expense of good security.

Great. As if Ketogenatics Weren’t Insufferable Enough

Filed under: Keto — Shu @September 13th, 2020 1:03 pm

Keto restrains aging-induced exacerbation of COVID-19 (maybe) in mice (not humans). They looked at how ketogenesis protects mice from cytokine inflammation in a coronavirus that works similarly to SARS-CoV-2.

Completely wrong statement you can now start posting on Facebook: If you’re on a ketogenic diet, you’re immune to COVID-19.

No running at all this week. 1) California fires are blowing up here, making it look like fucking nuclear winter. The controlled burn ban didn’t work that well, did it, Californians? Maybe one day Californians will learn about “unintended consequences.” 2) Been working on staining my deck this week.

I think Mount Rainier is Going to Erupt Soon

Filed under: Running,Running Gear — Shu @September 7th, 2020 1:41 am

At 10:00 pm tonight, I was on the last mile of my run. I approached an elderly couple at a bus stop and the wife jumped out in front of me. I swear, she was not even four feet tall. She was amazed and tickled at my ankle running lights and asked me all about them.

I’m not a morning person, so morning running is out. Some sort of visibility aid is essential up here, where it gets pretty dark at 4:00 pm. These lights have gotten me not killed on more than one occasion. There are several flashing modes, including not flashing. I, of course, use the seizure-inducing mode that flashes like 20 times a second. A full charge lasts me a couple of weeks, and so far they’ve been rain-proof. Even in pouring storms, they’ve survived. Only $22, too.

She was so funny and cheerful. I was concerned that they were out so late, and whether they still had a ways to go after the bus dropped them off. I offered to buy an Uber ride for them, but they graciously declined. The bus drops them off right in front of their house, they said. I wonder if this is like the Hawaiian volcano goddess Pele vanishing hitchhiker stories, where the person who gives Pele a ride has their house spared when a volcano erupts. I think this encounter is a sign that Rainier will erupt soon.

Status-wise, things have been going well. Doing about 15 miles per week for the last couple of weeks. Full ketosis. About 10–10:30 per mile, with bursts down to 9:45/mile over one or two miles on occasion. Muscles feeling OK. Yoga’s going well, still a pain in the ass. I’ve been thinking about adding plyometrics to get my pace up and improve my stride. Pogoing is kept in control because there’s a lot less to do these days. There’s a swath of new gyms to the west of me, and I’m in gold at most of them. Earlier this year, I hit gold in all of the gyms along my usual route in Mill Creek.

There is no Such Thing as a Free Lunch For Yaesu’s WIRES-X

Filed under: Amateur Radio — Shu @August 27th, 2020 1:11 am

I recently purchased parts and slapped them onto a Raspberry Pi 2 I had lying around to create an MMDVM hotspot for digital ham radio. Using the popular and free software, Pi-Star, this would allegedly be a cheap way to get into all the digital platforms – WIRES-X, D-Star, and DMR. However, since I own two Yaesus, I’ve concentrated on WIRES-X thus far.

To review, the traditional flow into WIRES-X (or DMR, or D-Star for that matter), goes like this:

ham -> repeater (via radio) -> internet -> WIRES-X server network (or DMR or D-Star) -> repeaters -> local hams (via radio)

As tech-savvy as Seattle is, I really only have three usable WIRES-X repeaters within range. Often, at least one of them is offline, or its connection to WIRES-X is offline. After all, it is reliant on someone’s internet connection. Repeaters are set up by some generous person or club using their own time and money. Further, since they are open to anyone, I have to share the repeater with others. If I bounce around rooms, I’ll piss off other users and get my call sign banned. The hotspot solution gives me full control of how I want to get on WIRES-X, or so I thought.

The problem is that no one has actually reverse engineered a way into the WIRES-X system. It is a proprietary network whose only entrance is Yaesu hardware. Even if the technical challenges were solved, there would probably be legal obstacles.

The most efficient way would be:

ham -> hotspot (via radio) -> internet -> WIRES-X server network -> Yaesu repeaters -> local hams (via radio)

Instead, Yaesu hardware must somehow be inserted as the gatekeeper into WIRES-X. Again, generous people and groups have set this up. Two platforms have been created – Yaesu System Fusion (YSF) and Fusion Connect System (FCS). Each platform also has its own rooms, and they try to map to every WIRES-X room. However, the flow is now:

ham -> hotspot (via radio) -> internet -> YSF/FCS -> Yaesu repeater (via radio) -> internet -> WIRES-X server network -> Yaesu repeaters -> local hams (via radio)

First, note the Yaesu repeater -> internet hop. This connection is present in all flows. You have no control over it, and it can break. Now go back and note the YSF/FCS -> Yaesu repeater portion. That is a traditional radio broadcast between two internet servers. Think about that. It’s a server with an antenna attached, sending a radio broadcast to a Yaesu repeater in the same room. There is no Cat5 cable here. Someone, somewhere else, basically has two antennas attached to two servers. You have no control over it. It can also break. Worse, it can be wrong. These are two critical points that can and often do fail, and the initiating ham may not even know they failed.

Why am I belaboring this? Because every tutorial and video on the Internet about hotspots advertises them as a free way to get onto WIRES-X. This is false. Maybe it’s free for the end user, but the Yaesu repeater was purchased by someone and is maintained by someone. The YSF and FCS platforms are paid for and maintained by someone, too. No money is being made by the YSF and FCS networks, and thus there is no guarantee of accuracy or uptime.

This is not sanctimonious proselytizing that we hotspot users need to be eternally grateful to the YSF and FCS providers, although we should be thankful. The point is that shit breaks, and the “it’s so easy” promises from tutorials and YouTubers are wrong.

Pi-Star lists FCS rooms first, then YSF rooms. There is no guarantee that the FCS/YSF numbers will be the same the next time you connect. Servers can disappear because a guy got tired of keeping the bridge up, or his rabbit chewed through his cable. Since it’s just an ordinal list of numbered rooms, the lower a WIRES-X mapping sits on that list, the higher the chance that shifts above it change its position. You have to check occasionally.

To map an FCS room to what you enter on your ham radio, list out the FCS rooms starting with the first server, FCS01 through FCS05. The entries in your radio start at 00010 and increment by one per row. So, to go to the first FCS room on the list, you enter 00010. FCS00100, the Deutschland room, is 00020. FCS00290, Texas-Nexus, is 00244, etc. YSF room numbers are just the YSF room number. Several sites describe a small algorithm you can calculate in your head to get the WIRES-X mapping. Because the bridges are so ephemeral, this algorithm often doesn’t work, especially for higher room numbers.
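To make the ordinal bookkeeping concrete, here’s a sketch of the lookup, assuming you have Pi-Star’s current FCS room list in order. The room names in the snapshot below are placeholders, not the real current list – which is the whole point, since the list can shift between connects:

```python
def radio_entry_for_fcs(fcs_rooms, room):
    """Given Pi-Star's ordered FCS room list (FCS01 servers first,
    through FCS05), return the 5-digit number to dial on the radio.
    The first row maps to 00010; each row after that adds one."""
    return f"{10 + fcs_rooms.index(room):05d}"

# Hypothetical snapshot of the list. If a bridge above your room
# disappears, every entry below it shifts -- exactly why the
# "calculate it in your head" shortcuts stop working.
snapshot = ["FCS00100", "FCS00101", "FCS00102"]
print(radio_entry_for_fcs(snapshot, "FCS00102"))  # third row -> "00012"
```

The takeaway: the radio entry encodes a position in today’s list, not a stable room identifier, so it has to be re-derived whenever the list changes.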

There is also no guarantee that the YSF and FCS rooms correctly map to their advertised WIRES-X rooms. That advertisement is just that – a self-reported connection that isn’t verified by anything. It is common for the WIRES-X mapping to go down while the YSF/FCS portion stays up. FCS has a detailed status page for its servers, but nothing to indicate whether a server is successfully linked to its targeted network. In that case, all the YSF/FCS users are sitting in a room thinking they’re in a WIRES-X room, but they are not. They’re in their own purgatory. You have two rooms with identical names, each isolated from the other.

FCS is also limited to 100 WIRES-X room mappings per server. There are five servers in total, which means a max of 500 WIRES-X room connections. You bet there are more than 500 WIRES-X rooms. YSF makes up for this: each YSF node is one YSF server mapped to one WIRES-X room, so the number of nodes is theoretically unlimited. However, no one person or group maintains all the YSF nodes. Each YSF mapping is at the whim of someone else. There are plenty of WIRES-X rooms that are unreachable by hotspot users.

TL;DR – There is no such thing as a free lunch with digital ham radio. Someone is paying for something, and there is no 100% guarantee of uptime or accuracy. In fact, from a software architecture perspective, the whole thing is held together by chewed gum and rope bridges. Be aware of this when troubleshooting, and don’t for one second think a hotspot is a guaranteed way of getting into WIRES-X. Yaesu hardware, either yours or an open repeater, is the only way to fully get into WIRES-X.
