A Google AI security engineer once asked a chatbot to help polish a work email — and later the same assistant surprised him by identifying his home address. That little moment captures why people who build, defend, or study AI are dead serious about what you type into these boxes.
Chatbots are useful. They’re also porous. Models learn from enormous datasets, companies operate on different privacy promises, and long-term-memory features can stitch conversations together in ways users don’t expect. Mix in data breaches, corporate incentives, and lax regulation and you get a messy, risky landscape.
Below is a practical, human-centered guide that draws on advice from engineers, security experts, clinicians, and journalists. Read it and ask the right questions before you paste that patient list, patent draft, or Social Security number into a chat.
Start by treating every chat like a postcard
One of the simplest mental models you’ll hear from privacy pros: treat public chatbots like a postcard. If you wouldn’t scribble it on a postcard that anyone at a coffee shop could read, don’t put it in a public chat. That covers obvious things — SSNs, credit‑card numbers, exact home addresses — and less obvious ones, like proprietary code snippets, sensitive patient details, or unreleased product plans.
Why? Public models may use your inputs to train future versions, and even enterprise products sometimes include long‑term memory features. Human error and hacks happen; there are documented cases of employees accidentally leaking corporate secrets to chatbots. So assume anything you enter could surface again.
Know the room you’re in
Not all AI experiences are equal. There’s a real difference between a consumer chatbot whose conversations may be used to improve the underlying model and a paid enterprise instance that promises not to use them that way. Use enterprise offerings for confidential work and keep public tools for low‑risk tasks like brainstorming or drafting generic copy.
Also watch integrations. Services that let an assistant search your inbox or drive change the stakes — features that pull from Gmail, Drive, or other account data can be immensely productive but raise obvious privacy flags. Google’s new integrations that let Gemini surface Gmail and Drive content are a clear example of the productivity-versus-privacy tension, and they’re worth examining before you opt in (/news/gemini-deep-research-gmail-drive-integration).
Four habits engineers swear by
1) Minimize what you share. Give the assistant only what the task requires, nothing more (a rough redaction sketch follows below).
2) Turn off model‑improvement settings. Many tools let you opt out of having your chats used for training; take that opt‑out wherever it’s offered.
3) Use temporary or incognito chat modes for one‑off questions that you don’t want stored.
4) Delete histories regularly. Accounts get compromised; call it digital hygiene.
Harsh Varshney, who works on Chrome AI security, recommends these exact moves — and keeps personal details out of public chats even when he’s testing capabilities.
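If you want to make habit #1 mechanical, a small local script can do a first pass before you paste anything. The sketch below is a minimal illustration in Python; the scrub_prompt helper and its regex patterns are assumptions made for this example, they catch only obvious formats (SSNs, card numbers, emails, phone numbers), and no pattern list will catch a patient name or an unreleased product plan.

```python
import re

# Illustrative patterns only: a regex pass catches obvious identifiers,
# not everything sensitive (names, diagnoses, unreleased product details).
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def scrub_prompt(text: str) -> str:
    """Swap obvious identifiers for placeholders before pasting into a public chat."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    draft = ("Follow up with Jane at jane.doe@example.com or 555-867-5309 "
             "about card 4111 1111 1111 1111.")
    print(scrub_prompt(draft))
    # Follow up with Jane at [EMAIL REDACTED] or [PHONE REDACTED]
    # about card [CREDIT_CARD REDACTED].
```

Treat a scrubber like this as a seatbelt, not a guarantee: the postcard rule still applies to everything the patterns miss.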
Don’t ask a chatbot to replace a professional
There’s a long list of things you shouldn’t delegate to a chatbot: anything illegal, personalized medical diagnosis, formal legal drafting without lawyer review, tax filing strategy, or life‑critical emergency guidance. Chatbots hallucinate, miss context, and sometimes mirror the worst corners of the internet. For matters that can hurt your health, finances, freedom, or important relationships, talk to a qualified human.
That said, chatbots can still accelerate research, draft non‑binding summaries, and suggest questions to ask a professional — if you treat their output as a starting point, not the final authority.
Compartmentalize and diversify your footprint
An expert trick from privacy advocates is to spread your interactions across multiple services. Use one assistant for calendar and scheduling, another for code snippets, and a third for general research. Compartmentalizing makes it harder for any single provider to build a complete profile about you.
At the organizational level, companies should resist the temptation to fling proprietary data into consumer tools. Hospitals, law firms, and product teams need clear policies and training. Clinicians and administrators have a particular obligation to educate patients and staff about AI’s limits and risks: medical records and therapy notes are not fodder for public models.
If you build or launch a chatbot, plan for liability
Legal and compliance teams have real work to do. Before shipping an assistant, require privacy audits, implement data‑retention limits, and be transparent about what data you log and why. Clear consent language and robust opt‑outs build trust. Jeff Pennington and other researchers argue for radical transparency: tell users plainly how inputs will (and won’t) be used.
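What retention limits and careful logging can look like in code is worth spelling out. The sketch below is a minimal illustration, not anyone’s production system: the ChatLogStore class, the redact pass, and the 30‑day window are assumptions chosen for the example, and a real service would enforce the same rules at the database and pipeline level.

```python
import re
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed policy window; pick yours deliberately
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Strip obvious identifiers before a prompt ever touches storage."""
    return SSN.sub("[SSN]", EMAIL.sub("[EMAIL]", text))

@dataclass
class ChatLogStore:
    """Toy in-memory store; real systems would enforce the same rules server-side."""
    entries: list = field(default_factory=list)

    def log(self, user_id: str, prompt: str) -> None:
        # Store the redacted prompt only, with a timestamp so expiry is enforceable.
        self.entries.append({
            "user": user_id,
            "prompt": redact(prompt),
            "at": datetime.now(timezone.utc),
        })

    def purge_expired(self) -> int:
        """Drop anything older than the retention window; run this on a schedule."""
        cutoff = datetime.now(timezone.utc) - RETENTION
        before = len(self.entries)
        self.entries = [e for e in self.entries if e["at"] >= cutoff]
        return before - len(self.entries)
```

The point is auditability: you can show exactly where identifiers are stripped and when stored prompts expire.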
Regulation is lagging. Until lawmakers act, reputation and user trust will be the primary levers for forcing better behavior.
Practical short checks — before you hit Send
- Is this a public or enterprise model? If public, pause.
- Does the prompt contain PHI, PII, or proprietary text? If yes, don’t send.
- Is the response going to guide a life‑or‑death action, legal decision, or financial move? Ask a human instead.
- Is there a temporary chat mode available? Use it.
- Is there a model‑improvement toggle? Turn it off.
A final note on convenience vs. control
AI is already stitching itself into browsers, email, and calendars. Features that let assistants book appointments or read your inbox make life easier, but that convenience often costs you context and control. If you want the convenience, be intentional: read the privacy settings, whitelist only the integrations you trust, and limit the assistant’s memory.
Chatbots will continue to get smarter. That’s exciting. It’s also why a little skepticism, a few simple habits, and clear organizational rules will keep the upside from becoming an avoidable exposure.
If nothing else, remember the postcard rule. It’s cheap, memorable, and it works.