
Claude for healthcare - making HIPAA compliance work without enterprise budgets

Mid-size healthcare organizations face an impossible choice between modern AI tools and HIPAA compliance. Claude works in healthcare, but you need a Business Associate Agreement and proper safeguards. Here is how to implement defensible controls without enterprise budgets or dedicated compliance staff.

Key takeaways

  • Business Associate Agreements are mandatory but accessible - Anthropic offers BAAs for API access, and cloud platforms like AWS Bedrock provide HIPAA-compliant Claude access without enterprise negotiations
  • De-identification is harder than it looks - Removing names and dates does not make data safe, and the Safe Harbor method requires eliminating all 18 HIPAA identifiers before data leaves your control
  • Most HIPAA violations come from process failures, not technology - Recent OCR enforcement actions show that missing BAAs, inadequate training, and poor access controls cause more problems than security breaches
  • You can implement defensible controls without dedicated compliance staff - Focus on documenting risk assessments, establishing clear PHI boundaries, and training developers on what protected health information actually looks like in your systems

Your development team wants to use Claude to improve patient documentation and automate administrative workflows. Your compliance officer asks about HIPAA. Your budget cannot support enterprise healthcare platforms.

The answer is not avoiding AI. It is understanding which HIPAA requirements actually apply and implementing controls you can sustain.

When Claude becomes a business associate

Here is what matters: HIPAA treats cloud services that handle PHI as business associates. This is not optional. When you send protected health information to Claude, Anthropic becomes your business associate under federal law.

That means you need a Business Associate Agreement before you start. No BAA, no Claude with PHI. The HHS Office for Civil Rights enforces this aggressively. In January 2025, OCR announced a three-million-dollar settlement with a medical supplier over PHI breaches. Most violations OCR finds involve missing BAAs or inadequate vendor oversight.

Anthropic offers Business Associate Agreements, but only for their API products. The Claude.ai chat interface cannot be used with PHI. If your team wants to paste patient notes into Claude.ai to generate summaries, that violates HIPAA. Full stop.

The practical path: use Claude through AWS Bedrock, Google Vertex AI, or Azure. These platforms sign BAAs and provide the technical safeguards HIPAA requires. Your alternative is contacting Anthropic sales directly for API access with a BAA, though this involves a review process and works better for organizations with clear use cases and technical capabilities.
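
To make the Bedrock path concrete, here is a minimal sketch of calling Claude through AWS Bedrock with boto3, assuming credentials in a BAA-covered account. The model ID is illustrative; confirm the exact models enabled in your region and account, and keep the network boundary around this code inside your compliance perimeter.

```python
import json

# Illustrative model ID; confirm what your AWS region and account expose.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def build_claude_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build a Bedrock request body in Anthropic's Messages format."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def invoke_claude(prompt: str) -> str:
    """Send a prompt to Claude via Bedrock. Requires AWS credentials in a
    BAA-covered account; PHI must stay inside that boundary."""
    import boto3  # imported here so the pure helper above stays testable offline
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps(build_claude_request(prompt)),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

Separating the request-building from the network call also makes the PHI-handling path easier to review and unit test.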

What protected health information actually means

Most healthcare organizations get this wrong. They think removing patient names makes data safe. It does not.

HIPAA defines 18 specific identifiers that must be removed for data to qualify as de-identified under the Safe Harbor method. Names, sure. But also all dates except year, geographic areas smaller than state level, phone numbers, email addresses, medical record numbers, device identifiers, and biometric data.

Miss one and your data is still PHI.
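
To see why this is harder than it looks, here is a deliberately minimal scrubber that covers only a handful of the 18 identifiers. The patterns and labels are illustrative; regex alone cannot prove de-identification, and anything it misses, including names, rare diagnoses, and small geographies, is still PHI.

```python
import re

# Illustrative patterns for a few of the 18 Safe Harbor identifiers.
# A real review must cover all 18; this is a sketch, not a compliance tool.
PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),  # all dates except year
    "mrn": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}

def scrub_obvious_identifiers(text: str) -> str:
    """Redact a handful of identifier patterns. Anything these patterns
    miss remains protected health information."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

Notice what this cannot do: it has no idea that "47-year-old with Gaucher disease in rural Montana" is identifying. That gap is exactly why Safe Harbor requires human review of all 18 categories, not pattern matching.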

The real problem shows up with rare conditions and small populations. A 47-year-old patient in rural Montana with a rare genetic disorder is identifiable even without a name. The combination of age, location, and diagnosis creates a unique fingerprint. HHS guidance acknowledges that properly de-identified data still carries some re-identification risk.

Your safer approach: Expert Determination. This requires a qualified statistician to assess re-identification risk and document that the risk is very small. It costs more than Safe Harbor but works when you need richer data for AI training or analysis.

For most mid-size healthcare organizations, the practical answer is simpler. Do not de-identify at all. Keep PHI as PHI, implement proper controls, get your BAA, and document everything. Trying to de-identify data so you can skip HIPAA compliance usually creates more risk than it eliminates.

The three controls that actually work

HIPAA Security Rule requirements break down into administrative, physical, and technical safeguards. For AI tools, three areas demand attention.

Access controls come first. Who can send PHI to Claude? How do you prevent developers from pasting patient data into unauthorized tools? This requires more than policy documents. You need technical enforcement. Recent HIMSS guidance on AI governance emphasizes that strong access controls and audit trails form the foundation of compliant AI usage.

Practical implementation: Create separate development environments that never touch real PHI. Use synthetic test data or properly de-identified limited datasets for development work. Require code review for any function that accesses patient information. Log every API call that touches PHI with enough detail to reconstruct what happened during an audit.
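
Technical enforcement of that boundary can be as simple as a guard function that every PHI-touching code path must call. This is a sketch under stated assumptions: the environment variable name and role list are hypothetical stand-ins for your own deployment config and identity provider groups.

```python
import os

# Hypothetical role names; map these to your identity provider's groups.
PHI_AUTHORIZED_ROLES = {"clinical-app", "records-service"}

class PHIAccessError(RuntimeError):
    """Raised when a PHI egress attempt violates policy."""

def check_phi_send_allowed(role: str) -> None:
    """Raise unless this environment and caller may send PHI to Claude.
    Development and test environments are hard-blocked regardless of role,
    which is what turns the policy document into technical enforcement."""
    env = os.environ.get("APP_ENV", "development")
    if env != "production":
        raise PHIAccessError(f"PHI egress blocked in {env} environment")
    if role not in PHI_AUTHORIZED_ROLES:
        raise PHIAccessError(f"role {role!r} is not authorized for PHI")
```

Because the dev-environment check comes first, a developer pasting real patient data into a local script fails loudly instead of silently leaking PHI.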

Training comes second. Generic HIPAA training videos do not work. Your developers need to recognize PHI in your specific systems. What does a patient identifier look like in your EHR database? Which API endpoints return protected information? When does aggregated data still count as PHI?

Build runbooks for common tasks. How do you test a new feature that processes clinical notes? What approval is required before deploying code that touches patient data? When must you document a new AI use case in your risk assessment?

Logging comes third. The HIPAA Security Rule requires audit controls to record access to electronic PHI. For Claude usage, this means logging which users sent what types of queries, when, and what data those queries contained. You do not need expensive SIEM platforms. Basic API logging with retention policies and periodic review meets the requirement.
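
A minimal version of that audit trail might look like the sketch below: one structured record per Claude call, plus a retention-policy helper. Field names are illustrative, and the retention period shown is a placeholder; HIPAA documentation requirements commonly run to six years, so confirm the window with your compliance policy before purging anything.

```python
import time

def audit_record(user: str, action: str, contains_phi: bool) -> dict:
    """One structured audit entry per Claude call: who, what, when, and
    whether the payload touched PHI. Write these to append-only storage."""
    return {
        "ts": time.time(),
        "user": user,
        "action": action,
        "contains_phi": contains_phi,
    }

def purge_expired(entries: list[dict], retention_days: int, now: float) -> list[dict]:
    """Apply a retention policy: keep entries newer than the cutoff.
    Set retention_days from your documented policy, not this example."""
    cutoff = now - retention_days * 86400
    return [e for e in entries if e["ts"] >= cutoff]
```

Writing entries as structured records rather than free-text log lines is what makes the periodic-review requirement practical: you can filter on `contains_phi` or a user ID instead of grepping prose.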

Building a defensible compliance program

HIPAA does not require perfection. It requires reasonable safeguards based on your size, complexity, and resources. Research shows that smaller healthcare organizations struggle with compliance not because they lack sophistication but because they try to implement enterprise controls they cannot sustain.

Start with a risk assessment. HHS provides guidance but the practical questions are straightforward. Where is your PHI? Who needs access? What happens if it leaks? Which controls reduce risk most effectively for your budget?

Document your decisions. When OCR audits you, they want to see thoughtful risk management. Why did you choose AWS Bedrock over direct API access? How did you determine that your logging approach provides adequate audit trails? What training did you provide and how do you verify it worked?

Your documentation proves you took HIPAA seriously. It does not need to prove you implemented every possible control. A mid-size clinic with 200 employees faces different requirements than a hospital system with 10,000 staff. OCR enforcement actions focus heavily on organizations that skipped risk assessments entirely or ignored identified risks without explanation.

Common mistakes that create real risk

The biggest mistake is treating BAAs as checkbox exercises. Signing a BAA with Anthropic or AWS does not make your Claude usage compliant. It makes their service eligible for your compliant usage. You still need your own safeguards.

Second mistake: assuming cloud means compliant. Cloud providers offer HIPAA-eligible services, but you must configure them correctly. Default settings rarely meet HIPAA requirements. Encryption, access controls, and logging all require active configuration and testing.

Third mistake: inadequate developer training. Your team needs to understand PHI boundaries in your specific systems. Training should cover real scenarios they encounter. What do you do when a user reports a bug that requires reviewing their patient record? How do you troubleshoot a failed API call that contains PHI in the error message?

Fourth mistake: not testing your controls. How do you know your logging works? Have you tried to reconstruct a patient query from your audit logs? Can you identify when someone accessed PHI inappropriately? Testing does not require penetration testing or formal audits. It requires attempting to use your own systems in ways that should be blocked or logged, then verifying your controls worked.
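
That kind of testing can itself be scripted. The sketch below is self-contained and hypothetical: `send_fn` and `read_audit_log` stand in for hooks into your own application, and the checks simply attempt what a careless developer might do, then verify the control held.

```python
def run_control_checks(send_fn, read_audit_log) -> dict:
    """Exercise your own controls the way a careless developer would,
    and record whether each one held. `send_fn` and `read_audit_log`
    are hypothetical hooks into your application."""
    results = {}
    # 1. An unauthorized caller should be rejected outright.
    try:
        send_fn(user="no-such-role", text="test probe, no real PHI")
        results["unauthorized_blocked"] = False
    except PermissionError:
        results["unauthorized_blocked"] = True
    # 2. An authorized call must leave an audit trail behind.
    send_fn(user="clinical-app", text="test probe, no real PHI")
    entries = read_audit_log()
    results["call_was_logged"] = any(e["user"] == "clinical-app" for e in entries)
    return results
```

Run something like this on a schedule with synthetic data only, and a control that silently stopped working shows up as a failed check instead of an OCR finding.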

What this means for your organization

HIPAA compliance for Claude healthcare usage is achievable without enterprise budgets or dedicated compliance teams. Focus on fundamentals: get your BAA, implement strong access controls, train your team on actual PHI scenarios, log appropriately, and document your approach.

The organizations that struggle are those that either ignore HIPAA entirely or try to implement controls designed for much larger entities. Find the middle path. Understand your actual risk, implement controls that make sense for your size and budget, and document why your approach provides reasonable safeguards.

When you start using Claude through a HIPAA-compliant platform with a proper BAA, you are not taking a regulatory risk. You are making a documented decision about how to improve patient care and operations while managing PHI responsibly. That is exactly what HIPAA requires.

About the Author

Amit Kothari is an experienced consultant, advisor, and educator specializing in AI and operations. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.

Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.