Keep Company Data Private When Using AI Tools
Overview
AI tools are powerful because they can read large amounts of text at once. That same trait makes them risky for private data. This article gives you a clear set of habits that keep customer and company information out of the wrong places.
Before You Begin
- Know the difference between consumer AI (free ChatGPT, free Gemini, free Claude) and enterprise AI (Copilot, ChatGPT Enterprise, Gemini for Workspace). Enterprise tools have data-protection guarantees consumer tools do not.
- Confirm which tools your company has approved.
- Recognize what counts as sensitive: customer PII, financial records, health data, source code, internal strategy, employee data, and anything marked confidential.
Steps
- Default to your company's enterprise AI tool. Approved tools are configured so your prompts stay inside your tenant.
- Strip identifying details before pasting any data into a public AI tool. Replace real names, emails, and account numbers with placeholders like [customer name] or [account].
- Avoid pasting full files into a public AI tool. If you need help with a document, describe its structure in your own words, then ask the AI to work only on the parts that are not sensitive.
- Watch your prompts for accidental detail. A question like "Why is John Doe at Acme Corp upset about invoice 12345?" leaks three pieces of customer data.
- Be careful with screenshots. Screenshots of dashboards, customer portals, and reports often include data you did not mean to share.
- Read the privacy notice in any new AI tool before you use it for work. Look for whether prompts are used to train models. If they are, do not put company data in.
- Clear chat history on shared computers. A coworker reopening the browser should not see your prompts.
- Report any accidental data exposure to your security team the same day. They can often request deletion from the vendor before it spreads.
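The redaction habit above can even be partly automated. The sketch below is a minimal illustration only, not an approved tool: it swaps a known name, an email address, and an invoice/account reference for placeholders using simple patterns. Real identifiers take many more forms than a few regexes can catch, so treat this as a first pass, never as proof that text is safe to paste.

```python
import re

# Illustrative-only redaction pass before pasting text into a public AI tool.
# Patterns here cover just emails and invoice/account references; real PII
# detection needs far broader coverage.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[email]"),
    (re.compile(r"\b(?:invoice|account)\s*#?\s*\d+\b", re.IGNORECASE), "[account]"),
]

def redact(text: str, known_names: list[str]) -> str:
    """Replace known names, emails, and account references with placeholders."""
    for name in known_names:
        text = text.replace(name, "[customer name]")
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Why is John Doe (john.doe@acme.com) upset about invoice 12345?"
print(redact(prompt, ["John Doe"]))
# -> Why is [customer name] ([email]) upset about [account]?
```

Even with a script like this, reread the result before sending it: context (job titles, locations, project names) can still identify a customer after the obvious fields are gone.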
Troubleshooting
- If you already pasted sensitive data into a public AI tool: tell your security team immediately. Many vendors honor deletion requests, but only if asked quickly.
- If a browser extension promises AI features for free: be cautious. Many of these extensions transmit page content to third-party servers.
- If you need to share customer data with an AI for legitimate work: route the request through your security team. They can spin up an approved tool for it.
- If a chatbot embedded on a vendor website asks for customer details: verify whether the vendor is approved before answering.
Need More Help?
Submit a ticket at support.bostonmit.com or email support@bostonmit.com.
Related Articles
AI Acceptable Use: What's Allowed and What's Not
Use ChatGPT Enterprise Safely at Work
Get Started With Google Gemini at Work
Get Started With Microsoft Copilot