Vision Pro: What’s next?

You don’t need me to explain to you that Vision Pro is a flop. It was never expected to sell in large volumes this year, because Apple was always constrained by the number of displays that could be manufactured for it, but even so, Apple is not selling a ton of these things.

It wouldn’t be my first Apple device that didn’t sell well (I say as I think longingly about my old iPhone 13 mini and my 2019 Mac Pro), but this isn’t just selling poorly; this thing’s a dud.

I’d like to talk about why.

Just before WWDC last year I talked about why I thought the product that was later announced to be Vision Pro was a long shot for Apple. At that time, I said Apple needed two things to succeed: a killer app, and a technological leap past the headset form factor.

I got ahead of myself suggesting that we needed to move past the headset form factor already. Rather, we need to get the headset form factor right in the first place.

Vision Pro’s got the nicest, highest-resolution displays in the VR headset product category. They aren’t good enough. Stuff is blurry and disorienting, and the field of view is super narrow. To make matters worse, I’ve been buying super high-res Retina displays from Apple since 2010, so my eyes are even more discerning. The current display tech is fine to demonstrate spatial computing as a concept, but it’s not a product yet.

And that’s not even getting into the input mechanism. I’m not going to mince words here: Vision Pro’s eye and hand tracking fucking sucks, and Apple has no idea how to fix it. From the moment I put the headset on and start entering my passcode, I’ll be looking at one number and the headset will highlight an adjacent number on the keypad. Detection of the pinch-to-click gesture is unreliable; at best it works maybe 90% of the time. If Apple sold me a mouse or trackpad whose button worked 90% of the time, or even 99% of the time, or 99.9% of the time, it would be defective and I’d return it. On Vision Pro that’s just the best it can do. Even when it’s having an accurate streak, the input mechanism isn’t fast or precise enough. I spent some time trying to edit a Freeform document and it was one of the most frustrating things I’ve done with a computer in a while (and I program computers for a living, so that’s a tough contest).

And when I say Apple has no idea how to fix it, what I mean is that ~2 months ago I filed a bug report with Apple describing this issue, and although I’ve gotten a couple of responses fishing for more information (which, if you’ve ever filed a bug report with Apple, you know is a rare occurrence), they are no closer to solving this than before. It’s not as though visionOS has seen a torrent of bugfix updates addressing higher-priority bugs, either. Apple flat out doesn’t care. I’m not demanding that a team of engineers work around the clock on this; I just want them to respond to me and keep a conversation open (and it’s not like there are so many Vision Pro customers that this would be burdensome). I don’t have much hope for a platform where basic input doesn’t work.

So let’s talk killer apps, shall we?

There are features of Vision Pro that are compelling. Immersive video is neat! I’m not a huge Alicia Keys fan, but watching a video where it feels like I’m in the room with her during a rehearsal session, sitting just a few feet away, is kind of incredible.

It’s neat to watch a TV show with my Messages window floating right next to it, so I don’t have to look down at the phone in my hands to message a friend.

I love the idea of Mac Virtual Display. At best it currently offers a B- experience because of the weird resolution and blurriness issues, but a later iteration of Vision Pro with a higher-resolution display and a wider field of view would be a dream for me, allowing me to take a full Mac computing experience anywhere.

The closest thing I can see to a killer app is the idea of doing your computing, but spatially. Again, though, that needs considerably better hardware than today’s state of the art has to offer, and it requires serious evolution of the UI and app/window management experience before you’d do your general-purpose computing on visionOS. Ask iPad power users about Apple’s track record of evolving its simpler UIs to be more pro-grade.


My WWDC expectations for Vision Pro are at rock bottom at this point. It seems like most of the keynote’s content has already leaked via Bloomberg, and the rumored visionOS improvements sound modest at best. That’s not too surprising; this is a four-month-old device. But there are some serious fundamentals that need fixing, and even once those fundamentals are fixed, that only gets us from “almost impossible to use” to “immature product”.

My most generous assessment of this situation is that virtual/mixed/augmented reality computing devices are a worthwhile pursuit, but they need many more years of sustained investment before they can do the most exciting things we want to see from them, and companies like Meta and now Apple are making the best products they can right now in the hope that, collectively, the community will come together and do interesting (albeit janky) things with these devices. Technologies can’t mature from within a company’s labs; they mature through early adopters buying products and using them, which lets the companies that make them get better.

In short, by making Vision Pro, Apple has entered the market for a category of product that is more in its infancy than perhaps any other category of product Apple has ever sold.
