...I'm just going to post progress here to thin things out a little from "help it's broke" and increase the chance of free help or confirmation. You have to make it exciting to have any hope of pulling people in on the weekend!
So, having given this a little more thought, my main concern is rebooting onto the disk that I failed out and still having the RAID working once it's done booting.
If/when I unplug the disk I'm running on now (sda) and reboot, md will only see sdb, will see that sdb2 is marked failed, and so md2 will probably not start... only md1 will be up. At that point, [shame]based on the times I've broken it in the past[/shame], LVM will likely start the server's /dev/mapper/main-root LV directly from /dev/sdb2 and ignore the md layer entirely. That won't cause data loss, but (I think) it does introduce a chicken-and-egg problem with getting md running again, and that would have to be sorted out offline.
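For my own reference, here's roughly what I plan to check if that's what happens (untested, typed from memory; the VG name "main" is just me reading it out of /dev/mapper/main-root):

[code]
# Did md2 come up at all, or only md1?
cat /proc/mdstat

# What does the md superblock on sdb2 say about itself?
mdadm --examine /dev/sdb2

# If pvs reports the PV as /dev/sdb2 instead of /dev/md2, LVM grabbed the
# bare partition and bypassed md entirely (the chicken-and-egg case above).
pvs -o pv_name,vg_name
lvs main
[/code]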
So I think what I'm going to do is physically pull the sdb disk from my server, plug it into another system, and boot up on a Knoppix CD. That way I can fight mdadm until it starts md2 in degraded mode on sdb2 alone, while still keeping the current crippled-but-functioning system up on /dev/sda in case I end up having to fix the failed upgrade after all.
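The rough plan on the Knoppix box looks something like this (a sketch only; I'm assuming the pulled disk shows up there as /dev/sdc, and I suspect mdadm will want --force since the member is marked failed):

[code]
# Sanity-check the superblock on the pulled disk first
mdadm --examine /dev/sdc2

# --run starts the array even though it's degraded (one member missing)
mdadm --assemble --run /dev/md2 /dev/sdc2

# If mdadm refuses because the member is flagged failed or its event count
# is behind, --force tells it to assemble anyway
mdadm --assemble --force --run /dev/md2 /dev/sdc2

cat /proc/mdstat

# Bring LVM up on top of md2 and mount the root LV read-only to have a look
vgscan
vgchange -ay main
mkdir -p /mnt/main-root
mount -o ro /dev/mapper/main-root /mnt/main-root
[/code]

If the mount looks sane, I can put the disk back in the server afterward and deal with re-adding the other half of the mirror from there.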