There are a lot of seemingly smaller tweaks and changes, but no talk of Apple Intelligence yet. Why?

Has Apple’s love affair with AR cooled? Not on your life, and while it is true that visionOS didn’t get much time during the company’s big reveals at WWDC and last week’s iPhone launch, the company has delivered a handful of valuable improvements to the OS — hinting at future product plans as it did.

Perhaps you missed the hint, but what was interesting about the update wasn’t what’s new as much as what isn’t new, particularly the lack of support for Apple Intelligence. You could argue that this is only because the AI isn’t quite ready yet, but I can’t help but see its absence as a hint of what’s to come.

It’s always useful to cast about for a morsel of what Apple has actually said to support the theory. So, what has the company said? It’s already told us it’s working on a version of Siri with more contextual intelligence it hopes to ship in 2025.

That AI will be able to make contextual generative AI (genAI)-driven decisions in reaction to what it sees you interacting with on your iPhone, iPad, or Mac. Imagine what it will do if used with Vision Pro as it also looks around where you really are. If I’m right about this direction, it’s possible we’ll see visionOS equipped with a profoundly powerful contextual intelligence, perhaps toward the end of next year.

The Gift of Sound & Vision

Did I call this profound? Think about it: contextual intelligence is essential for an effective, responsive voice-driven interface at the intersection of technology and the everyday world. What you are looking at will change depending on your context, and the data your device surfaces will have to reflect the complexity of our lives. I see this as Door Detection on steroids. I also think the gap between the idea and the reality will delay complete realization for a while, but it’s a beginning.

I also think that late 2025 time frame hints at Apple’s target release schedule for the slightly more mass market Vision 2.0 devices speculators expect. However, speculation doesn’t mean much these days until Bloomberg’s Mark Gurman’s “little birds” (Game of Thrones reference) begin to whisper. And I don’t think they’ve discussed how AI, voice, and contextual genAI will underpin dramatic new user interfaces across a multitude of consumer devices (plausibly including something designed by Jony Ive).

Let’s move away from speculation based on what didn’t happen to run through what’s actually changed.

Here’s a run-down on how Apple has expanded Spatial Reality:

The handy gesture

For me, the biggest improvement is around gesture. Apple has made it handy to access Home View and Control Center on Vision devices. To get to Home, you just need to use your hand — stare at your palm, then tap the dot that appears. If you turn your hand over, you’ll be presented with time and battery information and can tap in that view to invoke Control Center or adjust volume controls. You can also now change the icon arrangement in Home, and avatar hand movements have been made smoother with new animations.

Memories get Spatial

Apple made several tweaks to photos and videos: 

  • You can turn existing photos into spatial images, adding depth to create a stereoscopic effect.
  • The Photos app on Vision devices has been improved.
  • You can share photos, videos, and panoramas during FaceTime calls using SharePlay.
  • You get video trimming controls to use from within the headset.

(I’m quite interested to see the extent to which a future version of Apple Intelligence will be able to generate 3D environments from 2D photos you can then explore using Vision Pro. I believe that is inevitable.)

When you need a keyboard

You can use a Mac keyboard with Vision devices. In visionOS 2, the device will recognize your keyboard and display it on screen, which makes it much easier to use the input device. Apple has also introduced support for Bluetooth mice, which means you can navigate your device using a mouse and keyboard. A Messages improvement means you can now dictate a message by staring at the microphone icon.

The infinite workspace

For work, perhaps one of the better enhancements (coming later this year) is the introduction of a new panoramic screen, equivalent to two 4K displays standing alongside each other. This really is giant real estate and should make complex workflows far more practical.

On the web

The Safari browser lets Vision Pro users watch videos on a large display in any environment. Siri will read page content aloud, and Tab Group support makes it easier to handle multiple tabs. When you’re off the clock, you get emoji reactions and singalong tools in Apple Music, plus the capacity to watch up to five MLS and MLB games in Multiview mode. (The latter feature is also expected later in the year.) You can also watch video in one window while working in other applications.

Virtuality and immersive environments

Apple expanded the number of immersive environments available in visionOS. It also improved the avatar system, so it captures more accurate skin tones and clothing colors. 

And the rest

  • Guest mode now saves a guest’s eye and hand data for 30 days, so repeat visitors to your device don’t need to go through setup again.
  • Live Captions provide real‑time transcriptions of speech, audio, and video content, including FaceTime calls.
  • There’s a new travel mode for trains.

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.