A small camera icon, invisible at first glance, just changed how NotebookLM works on the go. Google’s AI notebook — long a web-first playground for turning your documents and videos into study aids — has been quietly leveling up on phones: you can now snap sources, generate infographics and slide decks, and pick up audio overviews where you left off.
Snap, upload, synthesize
On Android and iOS the app now surfaces a floating camera button on the homepage and in the Sources tab. Tap it and your phone’s camera opens so you can capture whiteboards, textbook pages, lecture slides or handwritten notes and push them straight into NotebookLM as sources. There’s also an Image option in the “Add a source” flow to pull screenshots and photos from your gallery.
That might sound small, but it removes the old friction: no more emailing yourself a photo or juggling cloud uploads. If the feature hasn't appeared for you yet, force-stopping the app can sometimes trigger the server-side update.
From raw photos to polished assets
Once your image is in NotebookLM you can use the same study tools the web version offers: audio overviews, flashcards and quizzes that summarize and test what the source contains. Newer additions brought to mobile include Studio’s Infographic and Slide Generation: the system can condense your sources into a single illustrative infographic or a multi-page slide deck (exportable as a PDF). According to rollout notes, the image generation work here leans on Google’s Nano Banana Pro model to produce visuals tied to your material.
Those Studio outputs aren't decorative afterthoughts. Teachers, field researchers and conference-goers will notice the difference: photograph a slide, get a draft presentation and a quick quiz, handy for preparing follow-ups.
Small quality-of-life upgrades that add up
The app now remembers where you stopped in audio overviews, syncing that position across devices. Start listening on your phone during a commute and resume on the web without hunting for the timestamp. Mobile also caught up with flashcards and quizzes, plus a few interface niceties like unselecting sources from the chat screen.
Not everything is instant or uniform. Some features (shared chat histories across sessions, for example) still roll out unevenly, and Google’s server-side toggles mean availability can vary by account and region.
Why this matters beyond convenience
NotebookLM has always leaned on grounding answers in your uploaded sources rather than open-ended chat. That focus gets stronger when you can drop real-world artifacts into the system as images. It narrows the gap between capture and insight.
There's a bigger picture here too: Google is folding NotebookLM more tightly into its generative-AI ecosystem. Studio's visuals and the deeper synthesis modes draw from the same family of models that have been expanding into other Workspace and consumer products. If you follow Google's strategy, it's clear the company wants specialized AI tools — like a research notebook — to sit alongside chatbots and general-purpose copilots rather than be swallowed by them. For a taste of that wider integration, consider how Google is weaving Gemini-style research into productivity flows like Gmail and Drive in its Deep Research efforts: Gemini Deep Research plugs into Gmail and Drive. Likewise, some of the same experimentation around AI-driven assistants shows up in features such as the AI Mode agentic booking tests that Google has been piloting elsewhere in its apps: AI Mode's agentic booking experiments.
Rough edges and privacy questions
Powerful on-device capture raises obvious questions about security and privacy. NotebookLM processes user-provided content to generate outputs, and as those inputs expand to photos and slides, the stakes for sensitive information increase. Enterprises and educators will want clarity on where data is stored, how long it’s retained, and who can access generated artifacts — especially if organizations plan to use the app for confidential research.
There's also the classic rollout friction: server-side flips, staggered availability across platforms, and features that show up first on the web and then trickle down to mobile. Users who depend on a fully consistent cross-device experience may find it uneven for a short while.
Practical scenarios where this helps
- A student snaps a professor’s whiteboard and gets a short audio overview plus flashcards for exam prep.
- At a conference, a product manager photographs a competitor's slide and immediately gets a draft deck and talking points for a follow-up meeting.
- A researcher in the field captures printed source material and later generates an infographic to share with colleagues.
These are small workflow accelerations, but together they turn the phone into a true capture-and-summarize device rather than just a content conduit.
Google’s NotebookLM mobile updates are incremental but meaningful: the app removes steps, brings richer outputs to palm-sized screens, and leans on visual generation to make notes more useful. If you live in meetings, lectures or research trips, this update nudges NotebookLM from helpful to habit-forming. Update the app and give the camera a try — you might start treating your phone like a mobile research assistant.