I dug out my old pair of Koss Pro4As.
I was able to get 2/6 right: the classical one, and the Neil Young one that had orchestral backing. The other four I guessed, and guessed wrong.
It might be that I know much better what orchestral instruments should sound like. With rock guitar I don't know whether I'm hearing deliberate distortion being faithfully reproduced, or distortion introduced later in the chain. Or I might just be giving myself too much credit for lucky guesses.
I did learn that these three formats are much closer than I would have predicted. In an acoustic environment like a car, and for popular music, there seems to be no advantage to the higher quality.
Starting at the bottom: I agree about how important the environment is. Heck, in a car with normal road noise (i.e. not a top-line Mercedes, for example) even a cassette is usually good enough. But then we get into even more psychoacoustics than the old Bell Labs studies that showed how non-linear hearing is. I hate to state the obvious, but the more you listen, the more you hear. And that's not a fixed point in development. If you practice listening very hard to music and KNOW where the artifacts of lossy compression are most likely, you'll hear them more easily.
It's like birdwatching... the little gray dot you just saw flitter out of sight showed markings I know to look for that tell ME it's a male ruby-crowned kinglet. A non-bicyclist sees a bicycle flash past and it's just a bicycle; a bike-nut sees, in that flash, vintage Campagnolo derailleurs mismatched with Stronglight 107 Campy look-alikes and first-generation Universal side-pull brakes. We don't just see what we are looking for: we see what we learn to look for as well. That's actually just basic survival: the corollary is that we do NOT see what we do NOT look for, and with so much in our environment, being able to filter things out is as important as filtering them in.
Your second point, and Geezer's notes about video, bring up another point: what we are comparing against. If everything we see on the screen is a shot of something we see every day, we will have one way of assessing how good the picture is. Those of us who are more familiar with a live, unreinforced music ensemble will have different auditory expectations than those who spend all their time with synthesized, processed, amplified sound. If you go to a recording forum you'll see plenty of discussion about the exact microphone to use to capture the "right" sound from an electric guitar speaker cabinet, with all the "correct" distortion(s) reproduced with complete fidelity. That gets really interesting when the electric guitarist has spent too many years in front of loud speaker cabinets without hearing protection: the balance that sounds right to their damaged and now frequency-challenged hearing will be shrill to less damaged ears. So what IS the sound we're listening for?
Your first point, the good ol' Koss Pro 4A headphones, brings back such fond memories! I remember sitting next to a Roberts cross-field head 15 ips reel-to-reel with a pair of those on. They were mighty good for their time, but that Roberts (Craig in the USA) had a pretty hefty headphone amp circuit. As a point of interest, maybe try the test with those hitched right to your computer, then to your stereo amp. The hierarchy in my systems, from lowest sound quality to highest, is: Android tablet stereo jack; MacBook Pro stereo jack; Onkyo receiver/HDMI switch; external headphone amp with unbalanced headphone connection; same external amp with balanced headphone connection. I'm pretty sure that if you ran the Koss right off your computer you'd get pretty poor high- and low-frequency response. Although my MacBook stereo out has a pretty decent DAC, the amp is pretty weak and has a very poor impedance match with my headphones, hence its low position in the hierarchy.
Regarding Geezer's post: his expansion/recompression example brought me back to the old reel-to-reel again.
I can almost feel the grimace on my face as each new "generation" of tape edits added more junk on top of the original signal. With tape the answer is to NOT create a new generation unless there is no other option. Noise from added generations is more noticeable than well-made splices.
With mp3 the answer is to not go there until you are done editing. So, even though it takes tons of disk space, save all the play-alongs in lossless formats. Once the playing is mixed in and the overdubs are done, save the whole mess as mp3.
That being said, though, using a HIGH-bitrate mp3 instead of WAV for the first background play-along input, recording all build layers in lossless, and then mixing back down to mp3 is unlikely to sound much different from starting with WAV. That's much like the tape approach of using splices and whatever else to avoid creating yet another tape generation. Keep track of how many generations you've made and you'll never get to the fifth photocopy of a photocopy.
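For anyone who wants to see the generation effect in numbers rather than by ear, here's a toy sketch in plain Python. It is NOT real mp3 encoding -- mp3 loss is perceptual -- I'm just using coarse quantization as a crude stand-in for a lossy pass, and small added sine waves as stand-ins for overdubs. All names are my own. It compares the two workflows above: keep everything lossless and encode once at the end, versus re-encoding after every overdub.

```python
import math

STEP = 0.05  # quantization grid; crude stand-in for lossy-encode damage
N = 1000     # number of samples in the toy clip

def lossy_pass(samples, step=STEP):
    """One 'encode/decode' generation: snap every sample to a coarse grid."""
    return [round(s / step) * step for s in samples]

def rms_error(a, b):
    """Root-mean-square difference between two equal-length signals."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# A 440 Hz "base track" plus four quieter "overdubs" at other pitches.
base = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(N)]
overdubs = [[0.1 * math.sin(2 * math.pi * f * n / 44100) for n in range(N)]
            for f in (660, 880, 1100, 1320)]

# Workflow A: mix everything lossless, encode exactly once at the end.
lossless_mix = base[:]
for dub in overdubs:
    lossless_mix = [a + b for a, b in zip(lossless_mix, dub)]
final_once = lossy_pass(lossless_mix)

# Workflow B: lossy-encode after every overdub -- a new generation each time.
chain = lossy_pass(base)
for dub in overdubs:
    chain = lossy_pass([a + b for a, b in zip(chain, dub)])

err_once = rms_error(final_once, lossless_mix)
err_chain = rms_error(chain, lossless_mix)
print(f"encode once at the end:    RMS error {err_once:.4f}")
print(f"re-encode each generation: RMS error {err_chain:.4f}")
```

Each extra generation adds its own fresh quantization error on top of what's already there, so the re-encode-every-time chain drifts further from the true mix than the encode-once version. The exact numbers don't matter; the direction does, and it matches the tape experience: keep the chain lossless and pay the lossy toll once, at the end.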