AI Prompt Security: What You Should Never Share With an AI
AI assistants have become part of daily work: drafting emails, writing code, summarizing documents, generating passwords. They feel conversational and private, like thinking out loud. They aren't. What you type into an AI prompt can be stored for months or years, reviewed by employees, or used to improve future models. This article covers exactly what to keep out of your prompts and how to get the help you need without the risk.
What actually happens to your prompts
When you send a message to an AI assistant, it travels to a server operated by the AI company: OpenAI, Anthropic, Google, Meta, or whichever provider the product uses. What happens next depends on the company's data policy, which is often long, ambiguous, and subject to change.
In practice, your prompts may be:
- Stored for months or years, depending on your account tier and the product's current policy
- Reviewed by human employees for safety, quality, and policy enforcement
- Used to improve future models, unless you explicitly opt out (and opt-out coverage varies)
- Subject to legal disclosure via subpoenas, law enforcement requests, or regulatory inquiries
- At risk in a data breach. These services are high-value targets. Assume breaches are possible and plan accordingly.
Consumer vs. enterprise: Enterprise and API plans often have stronger privacy defaults than consumer chat products, but you still need to read the policy for your specific product. Don't assume enterprise protection if you're on a free or personal tier.
What you should never include in an AI prompt
These categories cover the most common and most damaging leaks.
- Passwords & credentials Never paste a password, PIN, or passphrase into an AI chat. Not to "check if it's strong", not to ask for variations, not as an example. If the conversation is stored, your credential is stored. Generate passwords in a browser-local tool instead.
- API keys & tokens API keys, OAuth tokens, private keys, and service secrets. Developers paste these accidentally all the time when sharing error messages or code snippets. Treat any key as compromised the moment it leaves your machine. Revoke and rotate immediately.
- Personal ID numbers Social security numbers, passport numbers, national ID numbers, driver's license numbers. These are the core inputs for identity theft. There is no scenario where an AI needs this information to help you.
- Financial data Credit card numbers, bank account details, IBAN codes, CVVs. Even partial numbers combined with other context can be dangerous if they appear in stored data or a breach.
- Medical records Health conditions, prescriptions, test results, therapy notes. Medical data carries legal protection in most jurisdictions for good reason. It is highly sensitive and frequently targeted.
- Confidential business data Unreleased product plans, M&A details, client lists, internal financial figures, proprietary source code. Pasting this into a third-party AI is often a direct policy violation, as several employees at major companies have discovered publicly.
- Other people's data Personal details about colleagues, clients, or patients shared without their consent. Even anonymized names can be re-identifying when combined with other context. Data protection laws in many countries apply regardless of where you process the data.
What counts as PII context?
Readers often miss subtler identifiers. Beyond names and ID numbers, avoid including: your bank name, your employer's name, your email address or username, your city or location, client names, and account numbers of any kind. These details seem harmless individually. Combined, they create a profile.
The real leak is logs, files, and pasted code
Most credential leaks in AI prompts don't happen when someone intentionally shares a secret. They happen when someone pastes a log file, a config snippet, or a chunk of code that silently contains a key or token.
Before pasting anything into an AI, do three things:
- Scan for tokens, keys, URLs with embedded credentials, and email addresses. Look for patterns like Authorization:, Bearer, PRIVATE KEY, token=, api_key=, or any long random-looking string.
- Redact first. Replace keys with [REDACTED] and emails with [email protected] before sending.
- Rotate any key that may have been exposed. If you're not sure whether a key was included, treat it as compromised and generate a new one.
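The scan step can be partly automated with a handful of regular expressions. A minimal sketch in Python — the patterns below are illustrative, not exhaustive, and a dedicated secret scanner covers far more formats:

```python
import re

# Illustrative patterns only -- a real secret scanner covers many more formats.
SUSPICIOUS_PATTERNS = [
    re.compile(r"Authorization:\s*\S+"),                 # auth headers
    re.compile(r"Bearer\s+[A-Za-z0-9._\-]+"),            # bearer tokens
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private keys
    re.compile(r"(?:token|api_key|apikey|secret)\s*=\s*\S+", re.IGNORECASE),
    re.compile(r"[A-Za-z0-9+/_\-]{32,}"),                # long random-looking strings
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),              # email addresses
    re.compile(r"https?://[^/\s:]+:[^@\s]+@"),           # URLs with embedded credentials
]

def find_suspicious(text: str) -> list:
    """Return every substring that looks like a credential or identifier."""
    hits = []
    for pattern in SUSPICIOUS_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits

# If this returns anything, redact before pasting.
snippet = 'requests.get(url, headers={"Authorization": "Bearer abc123"})'
print(find_suspicious(snippet))
```

A non-empty result means the text needs redaction before it goes anywhere near a prompt; an empty result is not proof of safety, only that none of these particular patterns matched.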
The specific risk with AI password generation
AI-powered password generators, including the AI Mode on this site, are useful for creating memorable, context-specific passwords. But they introduce a network request that local generators don't: your text description is sent to a remote server.
The rule: describe what you need the password for in generic terms. Never include the actual password, your username, your account name, or the institution's name.
Bad prompt: "Generate a new password for my Chase bank account — my current one is BlueSky2024! and my username is [email protected]"
Good prompt: "Generate a strong, memorable password for a banking site — something I can type easily but wouldn't guess"
The good prompt gets you what you need. The bad prompt hands a third-party server your existing credentials, username, and institution. If you'd rather avoid the network request entirely, use Random or Passphrase mode. Both generate passwords entirely inside your browser with no data sent anywhere.
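For readers who want to see what "generated locally, no network request" means in practice, here is a sketch using Python's standard secrets module, which draws from the operating system's CSPRNG. The tiny word list is a placeholder for illustration; a real diceware-style passphrase needs a proper list of several thousand words:

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Strong random password from a local CSPRNG -- nothing leaves the machine."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def passphrase(words: list, count: int = 5) -> str:
    """Diceware-style passphrase; pass in a real word list of ~7,776 words."""
    return "-".join(secrets.choice(words) for _ in range(count))

# Placeholder word list, for illustration only.
demo_words = ["orbit", "velvet", "canyon", "lantern", "mosaic", "thistle"]
print(random_password())       # e.g. 'q7!Sv_kX2p#mR9tB'
print(passphrase(demo_words))  # e.g. 'canyon-orbit-velvet-mosaic-lantern'
```

The point of the sketch is the design choice: the randomness comes from your own machine, so there is no prompt, no description, and no server involved at all.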
How to use AI safely
None of this means AI tools are unusable. They're genuinely powerful. It means applying the same discretion you'd use with any external service.
- Anonymize before pasting. Replace real names, company names, account numbers, and identifying details with placeholders. "Client X" and "Company A" are fine. Real names are not.
- Read the privacy policy of the specific product you're using. Consumer chatbots and enterprise API access have very different terms. Know which one you're on.
- Use local tools for sensitive operations. Password generation and any processing of credentials should happen locally, in your browser or on your machine, not via a third-party AI.
- Opt out of training data use where available. Most major AI providers offer this in account settings. It's not a guarantee, but it reduces the scope of retention.
- Treat the AI like a smart colleague in a shared office. You'd ask that colleague to help draft an email or debug code. You wouldn't read them your PIN.
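The anonymize-before-pasting step can also be partly automated. A minimal sketch, assuming you maintain your own mapping from real names to placeholders (the names and email address below are hypothetical):

```python
def anonymize(text: str, replacements: dict) -> str:
    """Swap real names for placeholders before a prompt leaves your machine.

    Longest names are replaced first, so 'Acme Corp' wins over a shorter
    overlapping key like 'Acme'.
    """
    for real, placeholder in sorted(replacements.items(), key=lambda kv: -len(kv[0])):
        text = text.replace(real, placeholder)
    return text

# Hypothetical mapping, for illustration only.
mapping = {
    "Acme Corp": "Company A",
    "Jane Doe": "Client X",
    "jane@example.com": "[email]",
}
prompt = "Draft an apology email from Jane Doe at Acme Corp to jane@example.com."
print(anonymize(prompt, mapping))
# Draft an apology email from Client X at Company A to [email].
```

Plain string replacement like this is deliberately dumb: it only catches names you listed, so it supplements the manual scan rather than replacing it.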
Quick checklist before you paste
Run through this before sending anything sensitive:
- No passwords, PINs, or passphrases in the prompt
- No API keys, tokens, or secrets (check pasted code and logs)
- No real names, email addresses, usernames, or account numbers
- No bank names, employers, or client names for context
- No medical details, ID numbers, or financial data
- Opted out of training-data use in your account settings
- On an enterprise or API plan if handling work-sensitive data
What to do instead
For anything involving real credentials or sensitive data, use tools that never leave your device.