AI · 5 min read

Your Staff Is Already Using ChatGPT. Here's Why That Should Concern You.

2026-03-10

You didn't roll out an AI policy. You didn't purchase any AI tools. You didn't sign any Business Associate Agreements with an AI vendor.

But there's a good chance your staff is using ChatGPT anyway.

They're using it to draft patient follow-up emails. To summarize visit notes. To look up billing codes faster. To write prior authorization letters. They're using it because it works, it's free, and nobody told them not to.

And they're doing it with patient names, dates of birth, diagnosis codes, and appointment details in the prompt.

This Is Already Happening in Healthcare Practices

A 2024 survey by the American Medical Association found that nearly 40% of healthcare staff reported using consumer AI tools for work tasks — with or without employer knowledge or approval. In small practices where oversight is thinner and efficiency pressure is high, that number is almost certainly higher.

This isn't about your staff doing something malicious. They're doing something helpful. They found a tool that saves them time, and they used it. The problem is that the tool was never designed to handle Protected Health Information (PHI), and using it that way is a HIPAA violation whether you knew about it or not.

Why Consumer AI Tools Aren't HIPAA-Safe

Tools like ChatGPT (OpenAI), Gemini (Google), and the standard version of Claude (Anthropic) are consumer products. When your staff pastes a patient's information into one of these tools, that data is transmitted to and processed on servers owned by a third-party company.

Under HIPAA, any vendor that receives, processes, or stores PHI on behalf of your practice must sign a Business Associate Agreement (BAA). A BAA is a legal contract that obligates the vendor to protect the data, report breaches, and comply with HIPAA requirements.

None of these consumer AI tools come with a BAA by default. Some enterprise versions do — but the free and standard paid tiers do not. And even where enterprise options exist, using the wrong tier is still a violation.

Here's what that means practically: if a staff member pastes a patient's name, DOB, and diagnosis into ChatGPT to draft a letter, your practice has transmitted PHI to an unsecured third party. That's a potential breach. The fact that you didn't know doesn't eliminate the liability.

What OCR Is Watching

The HHS Office for Civil Rights — the federal body that enforces HIPAA — has already signaled that AI-related PHI exposure is an enforcement priority. In 2024 and 2025, OCR issued guidance specifically addressing AI tool usage in healthcare settings, warning covered entities that the "workforce member acting independently" defense has limits.

In other words: if your staff uses an unauthorized AI tool and a breach occurs, "we didn't know" is not a complete defense. Covered entities are expected to implement reasonable safeguards, including policies around technology use.

The Audit That Usually Reveals This

When we do a practice audit, we ask a simple question: "Do any of your staff use AI tools for work tasks?"

The most common answer from practice owners is: "Not that I know of."

The most common answer from staff, asked separately, is: "Yeah, I use ChatGPT sometimes. It helps with emails."

That gap — between what the owner knows and what's actually happening — is where the exposure lives.

What You Can Do Right Now

You don't need to ban AI. Banning it doesn't work anyway — people will use it more quietly. What you need is a policy and a compliant alternative.

Step 1: Acknowledge that this is probably already happening. Don't wait for a complaint or a breach investigation to find out. Ask your staff directly, without judgment, whether they use any AI tools for work.

Step 2: Create a clear, written policy. Staff need to know what's permitted and what isn't. A simple one-page AI use policy — what tools are approved, what data can and cannot be used with AI — removes ambiguity and gives you documentation of your compliance effort.
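One way to make that policy concrete is a lightweight technical guardrail alongside it. Below is a minimal sketch in Python, with identifier patterns chosen purely for illustration: it flags text containing obvious formats like DOB-style dates or SSN-style numbers before anyone pastes it into a tool. Pattern matching like this is not de-identification and does not make a consumer tool HIPAA-safe; it only catches the most blatant slips.

```python
import re

# Illustrative patterns only: DOB-style dates, SSN-style numbers,
# and phone-style numbers. Real PHI takes far more forms than this.
IDENTIFIER_PATTERNS = [
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),  # dates like 03/14/1962
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-style numbers
    re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),  # phone-style numbers
]

def looks_like_phi(text: str) -> bool:
    """Return True if the text matches any obvious identifier pattern."""
    return any(p.search(text) for p in IDENTIFIER_PATTERNS)

if looks_like_phi("Follow up with J. Smith, DOB 03/14/1962"):
    print("Possible PHI detected: don't paste this into an unapproved AI tool.")
```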

Step 3: Give them a compliant alternative. If you ban AI without offering a replacement, you're just pushing usage underground. The solution is to provide a tool that does what they need — drafting, summarizing, lookup — in a way that's been properly configured with HIPAA compliance in mind.

That might be a HIPAA-eligible cloud AI platform with a signed BAA. For practices that want maximum control, it might be a locally deployed AI model that never sends data off your network. Either way, the goal is the same: let your staff get the efficiency benefits of AI without the legal exposure.
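To make the locally deployed option concrete, here's a minimal sketch assuming an Ollama server running on a machine inside the practice (Ollama exposes a local HTTP API on port 11434 by default; the model name and the helper function are our own illustration, not a product recommendation). The key property is architectural: the prompt, PHI included, never leaves your network.

```python
import requests

# Local-only endpoint: Ollama's default API address on the host machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def draft_followup(note_summary: str) -> str:
    """Ask a locally hosted model to draft a patient follow-up email.

    Because the model runs on hardware the practice controls, the
    visit summary is never transmitted to a third-party vendor.
    """
    payload = {
        "model": "llama3",  # any model already pulled onto the server
        "prompt": (
            "Draft a brief, professional patient follow-up email "
            f"based on this visit summary:\n\n{note_summary}"
        ),
        "stream": False,  # return one complete JSON response
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]
```

The staff workflow looks the same as pasting into ChatGPT; the difference is where the data goes. And local deployment is not a free pass: the usual safeguards, like access controls and audit trails, still apply.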

The Bottom Line

Your staff using AI isn't the problem. Your staff using AI with patient data in tools that were never designed for it is the problem.

The good news is that this is a solvable problem — and it's much cheaper to solve it proactively than to address it after a breach or an OCR complaint.


Wondering what AI tools your practice is actually using — and whether any of them create HIPAA exposure? We cover this as part of every free practice audit.

Ready to Find Out What's Costing Your Practice?

In 15 minutes, we'll identify your top 3 revenue and time leaks — at no cost and no obligation.

Get Your Free Practice Audit