Google’s vision for the post‑phone era stopped being a distant concept this week and started to look like something you can actually wear. What began as demo videos and developer previews has splintered into real hardware: reference smart glasses, Samsung’s Galaxy XR updates, and Xreal’s Project Aura — all running the same Android XR software and driven by Gemini’s multimodal smarts.
A new class of devices
The hardware on display ranges from audio‑only "AI glasses" to monocular frames with a single microLED projection and even chunky wired XR glasses that hook to a battery pack. In practice that means some models will give you spoken answers and camera‑assisted context, while others will float maps, video calls and full Android apps into your field of view. The monocular designs — a single in‑lens screen — are coming next year, with dual‑display (binocular) versions arriving later.
Project Aura, built with Xreal, sits between lightweight spectacles and a full VR headset. It’s wired, uses a battery/trackpad unit, and gives a surprisingly roomy virtual desktop (Xreal calls it a 70‑degree field of view). Samsung’s Galaxy XR is getting software updates — like PC Connect and travel mode — that will let it stream your Windows desktop into the headset and stabilize the view on a bumpy plane. Those platform updates underline Google’s strategy: make Android XR useful on many shapes and sizes of hardware rather than force a single, definitive wearable.
You can read more about Google’s mapping ambitions with Gemini in the new Maps copilot coverage here: Google Maps Gets Gemini.
What the demos actually do
The demos felt less like sci‑fi and more like small, sensible conveniences. Navigation places a subtle arrow near your sightline and a map when you tilt your head down; live translation captions appear during conversations; a point‑of‑view camera can stream to a call; and Gemini — aided by world‑facing cameras — can identify objects, suggest recipes from pantry ingredients, or even apply playful edits to photos via Nano Banana.
Apps aren’t being rebuilt from scratch for XR. Google’s approach projects existing Android app UI into a minimalist XR surface, so Uber, YouTube Music and other familiar apps showed up as widgets in the demos without bespoke rewrites. That compatibility is one reason smaller hardware makers can ship competent experiences on day one.
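For developers, the Android XR previews expose this through Jetpack's Compose for XR libraries, where an ordinary Compose UI tree can be hosted on a floating panel in 3D space. A minimal sketch of that idea, assuming the preview `androidx.xr.compose` artifacts and a hypothetical `ExistingAppScreen()` composable standing in for an unmodified app UI:

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.ui.unit.dp
import androidx.xr.compose.spatial.Subspace
import androidx.xr.compose.subspace.SpatialPanel
import androidx.xr.compose.subspace.layout.SubspaceModifier
import androidx.xr.compose.subspace.layout.height
import androidx.xr.compose.subspace.layout.movable
import androidx.xr.compose.subspace.layout.resizable
import androidx.xr.compose.subspace.layout.width

@Composable
fun SpatializedApp() {
    // Subspace opens a 3D volume; on non-XR devices the same
    // content can simply render as regular 2D Compose UI.
    Subspace {
        // The existing 2D UI is drawn onto a floating panel in space,
        // which is roughly the "projection" described above.
        SpatialPanel(
            SubspaceModifier
                .width(1280.dp)   // panel size in density-independent pixels
                .height(800.dp)
                .resizable()      // user can resize the panel
                .movable()        // and drag it around the room
        ) {
            ExistingAppScreen()   // hypothetical: your unchanged app screen
        }
    }
}
```

The point of the design is visible in the sketch: nothing inside `SpatialPanel` needs to know it is running in XR, which is why familiar apps can appear as floating widgets without bespoke rewrites.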
There’s also Watch integration: when a display‑less pair takes a photo, a higher‑resolution preview can appear on your wrist. (If you use a wearable, the pairing story suddenly matters — as does how those cross‑device notifications behave with iOS.) For people who live in both ecosystems, Google says it’s working to support iPhone users too.
That wrist‑preview flow echoes the phone‑and‑watch pairing Apple popularized: the smartwatch becomes a quick viewfinder and remote for glasses that have no display of their own.
For developers and partners: one platform to rule them all
Android XR’s Developer Preview 3 opens up tools for glasses and headsets, and Google emphasizes the ecosystem over a single device. That’s why companies like Samsung, Xreal, Warby Parker and Gentle Monster are fronting hardware while Google handles the OS and Gemini integrations. The payoff: apps built for one Android XR device should run on many others, avoiding the fragmentation that has hindered prior wearable pushes.
Samsung’s early lead with Galaxy XR, paired with Google’s platform work, sets up an interesting dynamic as the company prepares broader global distribution for XR devices; that rollout will be an important test of whether consumers will adopt a new class of personal computer in public.
Read about Samsung’s wider plans for Galaxy XR here: Samsung Prepares Global Push for Galaxy XR.
Privacy, social friction and the hard parts
Google is careful to call out the lessons of the first Google Glass: social acceptance matters. The prototypes use visible lights and clear on/off indicators when cameras are recording, and Google says it will enforce permissions, encryption and conservative third‑party camera access. Still, a blinking LED and a permissions dialog won’t erase cultural unease overnight — the “glasshole” problem was as much social as technical.
There are also hard tradeoffs around AI: Gemini’s ability to analyze your surroundings feeds useful features (instant translations, object ID, recipe suggestions) but also raises questions about where images and context data live and how long records are retained. Google’s broader Gemini integrations — like deeper search across your Drive and Gmail — have already triggered privacy conversations, and adding wearable cameras only amplifies that debate. See more on Gemini’s workspace integrations here: Gemini’s Deep Research.
Why this matters (and why it might still fail)
The ambition here is big: create a flexible platform that lets many manufacturers experiment with form factors while giving developers a single target. If Android XR works as promised, it could avoid the app drought that hurt earlier wearables and could make AR‑like features feel natural rather than gimmicky.
But success hinges on a few brittle things: convincing people to wear them socially, shipping hardware that looks and feels like normal eyewear, and proving day‑to‑day usefulness that’s better than simply pulling out a phone. Google has the software muscle and partners; what it doesn’t yet have is market certainty. There’s also Apple — quiet so far on glasses — and Meta, which already sells headsets and glasses, making this an ecosystem fight as much as a hardware one.
For now, Google’s play is smart: roll out a spectrum of devices, lean on app compatibility, and let partners iterate on style. The next year will tell whether consumers view XR as a helpful extension of their phones or another experiment relegated to early adopters and trade‑show demos.