Razer quietly recast its desktop AI from a window on your PC to a tiny, self‑contained holographic roommate. At CES 2026 the company showed Project AVA — a cylindrical puck with a 5.5‑inch animated hologram that can watch your screen, watch you, and chat like a chipper coach, stylist, or... anime friend.

Small bottle, big personality

The hardware looks familiar to anyone who buys desk gadgets: a compact tube with Chroma RGB, a down‑firing speaker, dual far‑field mics and an HD camera with ambient light sensing. Plug it into a Windows PC via USB‑C for the high‑bandwidth vision mode and the hologram springs to life. Razer demonstrated several personas: AVA (a calm guide), Kira (an e‑girl–style avatar), Zane (an edgier male persona), Sao (a salary‑worker archetype), and eventually a likeness modeled on League of Legends legend Faker.

Motion, eye‑tracking and lip‑sync are handled with animation work from Animation Inc., so the characters don't feel like static sprites. They react to on‑screen action, give loadout advice during games, translate text, help organize your day, and even offer wardrobe suggestions based on the built‑in camera.

Software, openness and the awkward bits

Under the hood Razer currently runs the experience through xAI's Grok for the demos, but the unit was built to be AI‑agnostic: the company says Project AVA will support other models and, down the line, user‑chosen LLMs or Razer’s own AI. That flexibility matters — both for capabilities and for the thorny ethics of using real people’s faces or stylistic archetypes.

Razer emphasizes that the gaming coaching is designed to comply with developer terms by focusing on strategy and lore rather than automating play. Still, turning esports stars into coached avatars raises the same questions that have already bubbled up around likeness, consent and deepfake‑style recreations in other AI products. The industry conversation around avatar rights and brand consent has been getting louder — see the recent debate over real‑world likenesses in AI apps like Sora — and products like AVA land right in the middle of it.

There are also privacy tradeoffs. Razer says the mics can be muted and a physical camera shutter is planned for retail units. But a hologram built to look at your screen and at you will still make some users uneasy — which is exactly why consent‑focused frameworks and computer‑vision benchmarks are likely to matter more at the policy and standards level.

How Razer positions AVA

Razer pitches Project AVA as a “friend for life” that’s as useful for brainstorming and translations as it is for gaming. In demos, the avatar could scan gameplay footage and suggest loadouts or explain pros and cons for equipment choices. The device is essentially an external visual and audio front end for whatever LLM you select, letting the model access visual context from your desk and your screen.

That visual context raises another privacy point: models that can index what’s on your screen and search your files will need careful boundaries. Google’s work on model‑driven search and document integration shows how powerful — and invasive — these features can be when they touch personal files, which is why the choice of which model runs a device like AVA will be meaningful for anyone concerned about how their data is handled and surfaced.

When it ships (and what it might cost)

Razer opened US reservations at CES with a $20 deposit; official shipments target the second half of 2026. The company has not published a final price, but early hands‑on reporting guesses it will sit in the same ballpark as other Razer peripherals — roughly a few hundred dollars. That price will be a key factor: is the device a neat novelty, or something people actually integrate into daily workflows and gaming rigs?

Why this matters (and why people will argue about it)

Project AVA is notable less because it is the first hologram and more because it packages a pretty sophisticated set of capabilities — high‑fidelity animation, microphone arrays, a camera that feeds context to a language model, and an open approach to model choice — into a desk gadget. It’s an explicit bet that many users will prefer an embodied, animated interface to purely voice or screen‑based assistants.

At the same time, AVA crystallizes a few trends that will push the conversation forward in 2026: the spread of generative AI into consumer hardware, the blending of visual and conversational signals, and the policy and consent questions that follow when companies offer avatars modeled after living people. Those are not Razer‑only problems; they’re the same tensions that have surfaced as avatars and deepfakes get easier to produce and distribute and as companies and regulators try to catch up.

If you like the idea of a holographic coach or a desk‑size anime companion, Razer is banking on you being willing to pay for the novelty and the utility. If you worry about the mix of always‑looking cameras and generative AI, AVA might be the moment you start asking more detailed, product‑level privacy questions — and demanding clearer answers.

Razer's push into holographic companions also fits into a broader pattern of putting AI into more consumer devices; the company showed other concept projects at CES that point toward an ecosystem of assistants in headphones and peripherals. Whether that ecosystem lands gracefully will depend on pricing, model choice, and the guardrails Razer and others build around consent and data use.

Tags: Razer, Hologram, AI, CES 2026, Gaming
