Clinical · 12 min read · Mar 25, 2026

HIPAA Compliance in the Age of AI: What Dentists Need to Know

AI tools that process patient data introduce new compliance obligations that many dental practices are not prepared for. The core question is straightforward: does protected health information touch an AI model, and if so, under what terms?

The answer determines whether your practice is operating within HIPAA guidelines or creating liability with every scan, note, and diagnostic query your team runs through an AI system.

How Does HIPAA Apply to AI in Dental Practices?

HIPAA applies to dental AI the same way it applies to any system that processes protected health information. If an AI tool receives patient X-rays, clinical notes, or demographic data, the entity operating that tool must comply with the Privacy Rule, Security Rule, and Breach Notification Rule. Third-party AI vendors are business associates and require a signed BAA.

The Privacy Rule governs how PHI can be used and disclosed. AI diagnostic tools that analyze patient X-rays are performing a treatment function, which is generally permitted without patient authorization. But the picture changes when the AI vendor retains copies of the data, uses it for model training, or shares aggregated insights with other customers.

The Security Rule requires administrative, physical, and technical safeguards for electronic PHI. For AI systems, this means encryption, access controls, audit logging, and transmission security must all be in place before a single patient image is processed.

Most general-purpose AI services, including consumer chatbots and transcription apps, explicitly state that user inputs may be used for model training. Sending PHI to those services without a Business Associate Agreement is a HIPAA violation, regardless of intent. Staff members using ChatGPT to draft a clinical note or running a patient X-ray through an unapproved tool are creating breach exposure every time.

Do You Need a BAA with Your AI Vendor?

Yes. Any AI vendor that processes, stores, or transmits protected health information on behalf of a dental practice is a business associate under HIPAA. A signed Business Associate Agreement is legally required before any PHI is shared. Operating without a BAA exposes the practice to fines of $100 to $50,000 per violation, with annual caps up to $1.5 million per category.

The BAA defines exactly what the vendor can and cannot do with patient data. It specifies permitted uses, required safeguards, breach notification timelines, and the vendor's obligations when the agreement terminates.

Not all BAAs are created equal. Many AI vendors include broad language that permits “de-identified use for research and product improvement.” Under this language, your patients' X-rays can be stripped of identifiers and fed into the vendor's training pipeline. The data is technically de-identified under HIPAA Safe Harbor, but it is still your patients' clinical images contributing to a commercial product you do not control.

Before signing any BAA with an AI vendor, demand specific answers to these questions: Does the vendor retain patient data after processing? Is any data used for model training, even in de-identified form? Can the practice audit the vendor's data handling practices? What is the data deletion timeline after contract termination?

For a detailed analysis of how third-party AI vendors handle patient X-rays, see our post on the hidden data pipeline in dental AI.

What Are the Encryption Requirements for Dental AI?

HIPAA requires encryption of electronic PHI both at rest and in transit. For dental AI, this means patient X-rays must be encrypted using AES-256 or equivalent when stored, and transmitted over TLS 1.2 or higher when sent to an AI model for analysis. Encryption keys should be managed through a dedicated key management service, not hardcoded or shared.

Encryption at rest protects data stored on servers, databases, and storage volumes. If a server is compromised or a storage device is physically stolen, encrypted data remains unreadable without the decryption key.

Encryption in transit protects data as it moves between systems. When a dental practice sends an X-ray to an AI model for analysis, that transmission must be encrypted to prevent interception. TLS 1.2 is the current minimum standard, with TLS 1.3 preferred for new implementations.
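
As a concrete illustration, the TLS 1.2 floor described above can be enforced in a few lines with Python's standard `ssl` module. This is a minimal client-side sketch, not NexV's actual implementation, and no vendor endpoint is assumed; it only shows the context configuration.

```python
import ssl

# Build a client-side TLS context that refuses anything below TLS 1.2,
# matching the HIPAA transmission-security baseline described above.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1

# TLS 1.3 is still preferred where both ends support it; the handshake
# negotiates upward from this floor automatically.
print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

Any connection opened with this context to an AI vendor's API will fail the handshake rather than silently fall back to a deprecated protocol version.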

Key management is the part most practices overlook. Encryption is only as strong as the protection of the keys used to encrypt and decrypt the data. AWS Key Management Service (KMS) provides hardware-backed key storage with automatic rotation, access policies, and usage logging. Every key operation is recorded, creating an auditable chain of custody for encrypted data.

NexV uses AWS KMS for all encryption operations. Patient data is encrypted at rest using KMS-managed keys with AES-256. All data in transit uses TLS 1.2 or higher. The practice's KMS keys are isolated within their own AWS account, meaning no other NexV customer and no NexV employee can access the decryption keys.

What Audit Trail Requirements Apply to AI Systems?

HIPAA requires covered entities to log who accessed protected health information, when, and for what purpose. For AI systems in dental practices, this extends to recording which AI model processed which patient's data, what diagnostic output was generated, and which clinician reviewed and accepted the result. Logs must be immutable and retained for a minimum of six years.

The Security Rule's audit control standard requires information systems that contain or use ePHI to record and examine activity. For AI diagnostic tools, this means every inference request must be logged with the patient identifier, the requesting provider, the timestamp, the model version, and the output.
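
One way to picture those required fields is a tamper-evident record that chains each entry to the hash of the one before it. The sketch below uses only Python's standard library; the patient ID, provider, and model names are hypothetical placeholders, and the hash chain is a simplified stand-in for a managed immutable log such as CloudTrail.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(prev_hash: str, patient_id: str, provider: str,
                  model_version: str, output_summary: str) -> dict:
    """Build one tamper-evident audit record for an AI inference.

    Each record embeds the SHA-256 hash of the previous record, so
    altering or deleting any earlier entry breaks the chain.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "patient_id": patient_id,        # whose data was processed
        "provider": provider,            # requesting clinician
        "model_version": model_version,  # which model ran the inference
        "output": output_summary,        # what the model returned
        "prev_hash": prev_hash,          # link to the prior entry
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

# Hypothetical example entry (all identifiers invented for illustration):
entry = log_inference("genesis", "PT-1042", "Dr. Rivera",
                      "caries-detect-v3", "2 lesions flagged, reviewed")
```

Verifying the chain end to end (recomputing each hash and comparing it to the next entry's `prev_hash`) detects any after-the-fact edit, which is the property the audit control standard is driving at.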

NexV uses AWS CloudTrail to maintain a comprehensive audit log of every AI interaction. CloudTrail records are immutable, meaning they cannot be altered or deleted after creation. These logs capture API calls, data access events, and configuration changes across the entire infrastructure.

The audit trail extends beyond AI interactions to cover all system activity. Every login, patient record access, clinical note creation, and billing action is logged with the user identity, timestamp, and action performed. This complete audit trail is available for compliance review at any time through NexV's security dashboard.

Does Dental AI Require Patient Consent?

HIPAA permits the use of PHI for treatment purposes without specific patient authorization. AI-assisted diagnosis of X-rays generally falls under the treatment exemption. However, if AI processing involves sending data to third parties or using data for purposes beyond direct treatment, such as model training, practices should disclose this in their Notice of Privacy Practices and consider obtaining explicit consent.

The treatment exemption covers the use of PHI by healthcare providers for diagnosis, treatment planning, and care coordination. When a dentist uses an AI tool to analyze a periapical radiograph for caries detection, that use falls squarely within the treatment purpose.

The consent question becomes more complex when AI processing involves third parties. If a patient's X-ray is sent to an external AI vendor's server for analysis, the practice is disclosing PHI to a business associate. While this is permitted under HIPAA with a proper BAA, the practice's Notice of Privacy Practices should disclose this data flow.

Best practice is transparency. Inform patients that AI diagnostic tools are used in their care, explain what data is processed, and document that disclosure in the patient record. This approach satisfies both the letter of HIPAA and the ethical obligation to keep patients informed about how their health information is used.

NexV's AI diagnostic engine processes all data within the practice's own AWS environment. No third party receives the data, which simplifies the consent analysis considerably. The AI tool is functionally equivalent to a diagnostic instrument that the practice owns and operates directly.

For a deeper look at how AI assists clinical decision-making within these compliance boundaries, see our breakdown of AI treatment planning for dental practices.

Common HIPAA Mistakes with AI Tools

The most frequent violation is staff using consumer AI tools (chatbots, transcription apps, note generators) that have no BAA and no data handling guarantees. Even well-intentioned use of these tools can create a reportable breach.

A dental assistant who types a patient's symptoms into ChatGPT to help draft a clinical note has just disclosed PHI to a third party without a BAA. A hygienist who uses a personal phone app to transcribe a patient conversation has created an unsecured copy of PHI on an unmanaged device. These scenarios happen daily in practices that have not established clear AI use policies.

The second common mistake is assuming that de-identification makes data safe. HIPAA's Safe Harbor standard requires removal of 18 specific identifiers, and most ad-hoc de-identification efforts miss several of them. A dental X-ray with the patient name removed but the practice name, date of service, and tooth numbers intact may still be identifiable.
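
A lightweight guard against this mistake is an automated check for residual identifier fields before any record is treated as de-identified. The metadata keys below are illustrative, not a standard schema, and only a handful of Safe Harbor's 18 identifier categories are represented.

```python
# Illustrative subset of Safe Harbor identifier categories, expressed as
# metadata field names (a real check would cover all 18 categories).
SAFE_HARBOR_FLAGS = {
    "patient_name", "practice_name", "date_of_service",
    "full_zip_code", "medical_record_number", "device_serial_number",
}

def residual_identifiers(metadata: dict) -> set:
    """Return identifier fields still present (non-empty) in the record."""
    return {k for k in SAFE_HARBOR_FLAGS if metadata.get(k)}

# Hypothetical X-ray metadata after an ad-hoc de-identification pass:
xray_meta = {
    "patient_name": None,              # stripped
    "practice_name": "Smile Dental",   # still present
    "date_of_service": "2026-03-01",   # dates are a Safe Harbor category
    "tooth_numbers": "14, 15",         # clinical detail, not an identifier
}
print(sorted(residual_identifiers(xray_meta)))
# ['date_of_service', 'practice_name']
```

A non-empty result means the record has not met Safe Harbor and must not be shared or used as if it were de-identified.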

The third mistake is failing to conduct a risk assessment that includes AI tools. HIPAA requires periodic risk assessments that identify threats to PHI. AI diagnostic tools, ambient scribes, and AI-powered documentation systems all process PHI and must be included in the assessment scope.

How NexV's Architecture Eliminates Third-Party Risk

NexV processes all AI diagnostics within a single AWS account controlled by the practice. Patient X-rays are analyzed by SageMaker models running inside the practice's own cloud environment. No data leaves the account, no third-party processors touch patient information, and all encryption keys are managed through AWS KMS with hardware-backed security modules.

The architecture is fundamentally different from third-party AI services. When a practice uses NexV's AI diagnostic engine, the X-ray travels from the practice's browser to NexV's API, which routes it to a SageMaker inference endpoint within the same AWS account. The model processes the image and returns detection results. At no point does the image leave the practice's cloud boundary.

There are no third-party processors in the chain. NexV provides the trained model, but the model runs within infrastructure the practice controls. This means there is one BAA (with AWS), one data residency boundary, and one set of access controls to manage.

AWS CloudTrail provides a complete record of every API call made within the account, including every AI inference request. These logs are stored in a separate S3 bucket with write-once permissions, ensuring they cannot be tampered with. The logs are retained for the duration required by the practice's compliance policy.

For practices concerned about the broader data privacy landscape, our imaging platform overview details how X-rays are stored, accessed, and protected throughout their lifecycle. And for multi-location practices evaluating compliance at scale, NexV's single-tenant architecture means each practice gets its own isolated environment with no shared infrastructure between customers.

Building a HIPAA-Compliant AI Policy for Your Practice

A HIPAA-compliant AI policy should specify which AI tools are approved for clinical use, prohibit consumer AI services for any PHI processing, require BAAs with all AI vendors, mandate staff training on approved workflows, and include AI systems in the practice's annual risk assessment. The policy should be reviewed and updated whenever new AI tools are adopted.

Every dental practice using AI tools should have a written policy that addresses these points:

  • Approved tools list. Enumerate every AI tool that staff are permitted to use with patient data. Any tool not on the list is prohibited. Update the list when new tools are adopted or retired.
  • Prohibited activities. Explicitly state that consumer AI services (ChatGPT, Google Bard, personal transcription apps) may not be used with any patient information, including de-identified data.
  • BAA verification. Maintain a current BAA for every AI vendor. Review BAAs annually to ensure terms have not changed. Flag any BAA that permits data use for model training.
  • Staff training. Train all staff on approved AI workflows during onboarding and annually thereafter. Document training completion for compliance records.
  • Risk assessment inclusion. Add all AI tools to the practice's HIPAA risk assessment. Evaluate data flows, access controls, and breach exposure for each tool.
  • Incident response. Define the process for reporting and responding to AI-related data incidents. Include scenarios such as unauthorized AI tool use, model output errors that affect patient care, and vendor data breaches.
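
The approved-tools and BAA-verification points above can be enforced in software as a deny-by-default allowlist. This is a hedged sketch: the tool names and policy flags are invented for illustration and say nothing about any real vendor's actual terms.

```python
# Hypothetical policy data. Tool names and BAA/training flags are
# illustrative, not statements about any real vendor's contract.
APPROVED_TOOLS = {
    "nexv-diagnostics":   {"baa_signed": True, "trains_on_phi": False},
    "ambient-scribe-pro": {"baa_signed": True, "trains_on_phi": False},
}

def may_process_phi(tool: str) -> bool:
    """Allow PHI only for listed tools with a signed BAA and no
    training-on-PHI clause; anything unlisted is denied by default."""
    policy = APPROVED_TOOLS.get(tool)
    return bool(policy and policy["baa_signed"]
                and not policy["trains_on_phi"])

print(may_process_phi("nexv-diagnostics"))  # True
print(may_process_phi("chatgpt"))           # False: not on the approved list
```

Deny-by-default is the important design choice: a new tool produces a `False` until someone deliberately reviews its BAA and adds it to the list, mirroring the written policy's "any tool not on the list is prohibited" rule.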

The cost of building and maintaining this policy is minimal compared to the cost of a HIPAA violation. Penalties range from $100 per violation where the practice was unaware of the breach to $50,000 per violation for willful neglect. A single breach involving AI-processed patient data could trigger penalties across multiple violation categories.
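
The arithmetic runs up against the annual cap quickly. Using the penalty figures stated above, here is a hypothetical exposure calculation (the violation counts and tier are invented for illustration):

```python
def annual_exposure(violations: int, per_violation: int,
                    annual_cap: int = 1_500_000) -> int:
    """Penalty exposure within one violation category for one year,
    capped at the annual maximum for that category."""
    return min(violations * per_violation, annual_cap)

# Hypothetical: 40 clinical notes drafted in an unapproved chatbot,
# assessed at the $50,000 willful-neglect tier. The $1.5M category cap
# is reached after just 30 violations.
print(annual_exposure(40, 50_000))  # 1500000
```

Even at the lowest $100 tier, the per-violation structure means routine daily misuse compounds across an entire year of patient interactions.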

For practices evaluating how the technology dividend affects their compliance posture, the key insight is that fewer vendors means fewer BAAs, fewer risk assessment entries, and fewer potential breach surfaces. Consolidating AI capabilities within a single platform like NexV simplifies compliance by reducing the number of data flows that need to be monitored and controlled.

Ready to see NexV in action?

View Security Details