I’m not dead; I’m hiding in the rocks with a dead alien.
I haven’t written recently because one of the impacts of Covid-era ways of working, for me, has been worsening back problems, leading to arm and wrist pain.
As a result, I am now doing almost everything by voice.
The macOS version of Dragon that I used to use, while it still sort of works on the previous macOS release, is borderline non-functional for anything beyond basic dictation. That’s compounded by the Chrome extension I used for voice-powered web browsing having died sometime in the intervening years. I’m still using it for work, but for personal things I’ve moved back to Windows.
The Windows version has much better text-editing facilities than ever actually worked in the macOS version, as well as somewhat better integration with the operating system and with applications like Microsoft Outlook, which seems the best choice for email at the moment (although it has some annoying bugs).
It also integrates with Google Chrome. This means I’m speaking link names to browse the web, which I’ve always felt is one of the most natural ways to work anyway.
It does, however, mean that I’m noticing all the appalling accessibility errors on the modern web. Expect an extended rant at some point, although probably not a complete series as I did last time.
One of the things that has changed in the last 8 years is that a lot of people are now using web technologies to write desktop applications. As far as I can tell, this means that Visual Studio Code and GitHub’s Windows application are completely inaccessible – I have to mouse my way around their windows rather than being able to speak buttons and menus.
You may remember from last time that doing everything by mouse when you’re driving by voice can be pretty frustrating. Particularly when interface elements only become visible when you mouse over them.
I’m also experimenting with driving Android by voice. They’ve done a pretty good job, albeit with some slightly strange decisions that can make it harder to scroll things around – this may be to do with how Android’s UI works, of course.
What I haven’t yet figured out is how to do it without my Google Home devices trying to interpret the same voice commands. I don’t know if this is because the Home and Assistant teams don’t really talk to each other, or if there’s someone within Google secretly trying to ensure that people don’t go all in on any one big tech vendor, even if it’s their own employer.
Last time around I offered workshops and presentations to start-ups and other teams interested in understanding how the things they build are experienced by a voice user. I can’t easily do that now with a full-time job, but if you’re interested I’m sure I could start a side gig in the evenings on Twitch running through websites and highlighting my frustrations.
So many frustrations.
Stay safe. Wear a mask. Ensure your clickable elements are either anchors, buttons, or form controls.
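In case that last line reads as cryptic: native elements like anchors, buttons, and form controls are exposed through the platform accessibility APIs with a role and a name, which is what lets voice software (and screen readers) target them by spoken label. A minimal sketch of the difference – the `save()` handler is just a placeholder:

```html
<!-- A clickable div has no role, no accessible name, and no keyboard
     focus, so voice-control software can't see it at all. -->
<div onclick="save()">Save</div>

<!-- A real button is exposed to accessibility APIs, so saying
     "click Save" actually works. -->
<button type="button" onclick="save()">Save</button>

<!-- Navigation should be a real anchor, so it can be spoken as a link. -->
<a href="/settings">Settings</a>
```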