You probably know the feeling. You need help getting work done, and AI is simply the fastest way to get it: summarizing a meeting, drafting an email, analyzing a spreadsheet, rewriting text for clarity, brainstorming, extracting risks from a contract, writing a brief for the team, or just quickly understanding a complex document.

And then comes the moment: "Is it safe to put this in here?"

Good news: you don't need to be a security specialist to make a reasonable decision. You just need to stop worrying about labels and learn 3 simple questions.

AI security = 3 questions

Whenever you're sending text, files, screenshots, or data to AI, ask yourself the following:

1. WHERE does the data flow?

Simply put: where does your content go, and whose systems does it pass through?

  • Is it a web application?
  • Is it a corporate tool within your account (Workspace/M365)?
  • Is it a browser plugin/extension?
  • Is it an integration (Drive, Slack, Jira, Notion, CRM)?
  • Is it "one app" that actually sends data onward to third parties?

Why it matters: the more "stops" along the way, the more places where data can be stored, logged, or accidentally exposed.

Practical question for the vendor: "Can you describe the data flow? Where is data processed and are there any subprocessors involved?"

2. WHO can access the data?

This is where most misunderstandings happen. It's not just about hackers. It's about everyday operations.

  • Can only you (and your company) see the content?
  • Or can vendor administrators, support staff, or subcontractors also access it?
  • Most importantly: do you have auditability – who sent / downloaded / modified what?

Why it matters: if you don't know who had access, you can't justify it internally or during an audit. And that's typically when "shared accounts" and chaos emerge.

Practical question for the vendor: "Who on your side can see the content and under what conditions? Do you have roles, audit logs, and access controls?"

3. HOW long does the data stay?

Retention = how long your content is stored (history, logs, cache, backups).

  • Is it deleted immediately after processing?
  • Is it stored in your account history?
  • What about logs and backups – how long do they exist?

Why it matters: longer retention = more time and more places where something can go wrong.

Practical question for the vendor: "What is the retention period for content and logs? Can it be adjusted? What does deletion and export look like when the contract ends?"

Buzzwords: what they mean and what they DON'T

Don't be lulled by marketing phrases. Always verify where data flows, who can access it, and how long it stays.

Encrypted in transit

The transfer itself is encrypted. That says nothing about who can see the data after delivery.

No training on your data

They don't train the model on your content. That doesn't rule out logs, history, or support access.

We don't store your data

Often this just means "we don't store it permanently." Logs and backups may live on.

EU-based servers

Hosting is in the EU. Support staff and subcontractors may still be outside it.

Quick traffic light

Green

You can clearly answer all 3 questions (where / who / how long).

Orange

One answer is vague – don't send sensitive content until you clarify.

Red

Two or more answers are "I don't know / not addressed" – don't send internal or sensitive content.

Mini-checklist "Is This Safe?"

Before you put anything into AI, check:

  • I know where data is processed and whether subprocessors are involved
  • I know who can see the content (roles, admin, support)
  • I know how long data is retained (including logs and backups)
  • I have audit / traceability (who did what)
  • I know how deletion and data export work

Next up: the real difference between EU and non-EU servers, and why "EU-based" doesn't always mean safe. We'll also look at the EU AI Act in plain language.