Every home health and home care agency is being pitched an “AI documentation tool” right now. Some of those tools will save your clinicians an hour a day. Some of them will end up on the wall of shame on the HHS Office for Civil Rights breach portal. The difference isn’t the AI — it’s the architecture around it.
This article cuts through the marketing language and gives you the concrete questions to ask any AI vendor before you let them touch protected health information.
What AI is genuinely good at in care documentation
- Drafting visit notes from structured inputs. Given the tasks completed, vitals, and observations a caregiver enters, AI produces a clean, narrative visit note that the clinician reviews and signs.
- Catching documentation contradictions. “Patient ambulated independently” in the visit note vs. “requires assistance” on the OASIS — AI flags it for review before the chart locks.
- Surfacing what’s missing. Care plan goal mentions wound care; visit notes don’t reference wound status. AI flags the gap.
- Translating between languages. Caregiver dictates in Spanish, note appears in English (and the original is preserved for the audit trail).
- Summarizing across visits. Generating a caregiver-handoff summary or a family update from 30 days of notes, in seconds rather than 20 minutes.
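The contradiction-catching idea above is simpler than it sounds: compare a structured field across two documents and flag any mismatch for a human. This is a minimal sketch, not a production implementation; the document shapes and field names (`ambulation_status`, `wound_present`, etc.) are hypothetical illustrations.

```python
# Minimal sketch of a cross-document consistency check.
# Document structures and field names are hypothetical illustrations.

def find_contradictions(oasis: dict, visit_note: dict) -> list[str]:
    """Flag fields where the OASIS assessment and a visit note disagree.

    Returns human-readable flags for clinician review -- it never
    auto-corrects either document.
    """
    flags = []
    # Map: shared concept -> (OASIS field, visit-note field)
    checks = {
        "ambulation": ("ambulation_status", "ambulation_observed"),
        "wound care": ("wound_present", "wound_status_documented"),
    }
    for concept, (oasis_field, note_field) in checks.items():
        a, b = oasis.get(oasis_field), visit_note.get(note_field)
        if a is not None and b is not None and a != b:
            flags.append(f"{concept}: OASIS says {a!r}, visit note says {b!r}")
    return flags

flags = find_contradictions(
    {"ambulation_status": "requires assistance", "wound_present": True},
    {"ambulation_observed": "independent", "wound_status_documented": True},
)
# flags -> one entry: the ambulation mismatch from the example above
```

Real platforms use language models rather than field-by-field rules, but the contract is the same: the output is a flag for review, never an automatic edit to the chart.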
What AI should not do — full stop
- Sign or finalize clinical documentation. A licensed clinician must always be the signing authority.[4]
- Make clinical decisions (medication changes, plan-of-care updates) without human review.
- Send PHI to any external service without a Business Associate Agreement in place.
- Train on your patient data to improve someone else’s product.
- Operate without an audit log of what it generated, what was edited, and by whom.
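The audit log in the last bullet can be an append-only record of what the AI produced, what the clinician signed, and who signed it. Here is a hedged sketch of what one entry might capture; the field names are illustrative, not a standard.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

# Illustrative audit-log entry for AI-assisted documentation.
# Field names are hypothetical; what matters is what gets captured.
@dataclass
class AIAuditEntry:
    note_id: str
    ai_draft: str          # exactly what the model generated
    clinician_final: str   # what the clinician actually signed
    reviewer: str          # the licensed clinician who signed
    model_version: str     # which model/prompt produced the draft
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # Append-only: serialize and write; never update in place.
        return json.dumps(asdict(self))

entry = AIAuditEntry(
    note_id="visit-001",
    ai_draft="Patient ambulated with assistance today.",
    clinician_final="Patient ambulated with standby assistance today.",
    reviewer="RN J. Smith",
    model_version="vendor-model-2025-01",
)
record = json.loads(entry.to_json())
```

Storing the draft and the final side by side is what lets a surveyor, or a court, see exactly where the machine stopped and the clinician's judgment began.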
The 7-question vendor checklist
Before letting any AI documentation product near your charts, get written answers to these questions:
- Will you sign a HIPAA Business Associate Agreement? (Non-negotiable. If the answer isn’t an immediate yes, the conversation is over.)
- Where is PHI stored, and is it encrypted at rest and in transit? (AES-256 at rest, TLS 1.2+ in transit.)
- Do you train any models on our patient data? (The only acceptable answer is an unqualified "no, never.")
- Which AI providers (OpenAI, Anthropic, etc.) do you route PHI to, and do you have BAAs with them? (Most major providers offer a HIPAA-compliant tier — make sure your vendor uses it.)
- How do you log AI-generated content vs. human edits? (Required for both surveys and any future medical-record subpoena.)
- What happens when the AI is wrong? (You want to see human-in-the-loop review baked in, not as an optional setting.)
- Can we get a SOC 2 Type II report and a HIPAA risk assessment? (Mature vendors have these. Newer ones should at least be working on them with a credible timeline.)
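On the encryption-in-transit question, you don't have to take the vendor's word for it: Python's standard `ssl` module can build a client context that refuses anything below TLS 1.2. This is a minimal sketch; the hostname in the comment is a placeholder, not a real endpoint.

```python
import ssl

# Build a client context that refuses anything below TLS 1.2,
# matching the "TLS 1.2+ in transit" requirement in the checklist.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() also turns on certificate and hostname
# verification by default -- never disable either for PHI traffic.
assert context.check_hostname
assert context.verify_mode == ssl.CERT_REQUIRED

# To probe a vendor endpoint (hostname is a placeholder):
# import socket
# with socket.create_connection(("vendor.example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="vendor.example.com") as tls:
#         print(tls.version())  # e.g. "TLSv1.3"
```

If the handshake fails with this context, the vendor is negotiating an obsolete protocol and the "encrypted in transit" claim deserves a harder look.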
The NIST framework
The NIST AI Risk Management Framework provides the structure increasingly used by regulators and enterprise buyers to evaluate AI systems.[3] The core ideas — that AI must be trustworthy, explainable, and continuously monitored for harm — apply directly in a clinical context. If your vendor can’t describe how they map to NIST AI RMF, that’s a yellow flag.
The Better Place AI was designed from the first line of code around the boundaries above. Our AI assists clinicians and caregivers — it doesn’t replace their judgment, doesn’t train on your patient data, and never operates outside a signed BAA.
- BAA-backed AI by default: every AI feature operates inside our HIPAA-compliant infrastructure with signed agreements all the way down the stack.
- Zero training on customer PHI: your patient data is used to serve you, not to train models for anyone else — ever.
- Human-in-the-loop documentation: AI drafts, the clinician reviews and signs. The audit log shows exactly what was AI-generated and what the clinician changed.
- Cross-document consistency checks: the platform compares OASIS, visit notes, care plan goals, and medications, flagging contradictions for human review.
- Audit-ready by construction: every AI interaction is logged with timestamp, prompt context, output, and reviewer — the trail surveyors and OCR ask for.
- NIST AI RMF–aligned design: model selection, monitoring, and human-oversight controls map to the NIST framework regulators are coalescing around.
What to do this week
- Send the 7-question checklist to every AI vendor currently in your evaluation.
- Ask your IT or compliance lead to inventory any free or unmanaged AI tools clinicians are already using on the side.
- Pick one low-risk, high-volume documentation use case to pilot with a vendor that passes the checklist.
- Define what “success” means before the pilot starts — time saved, error rate, clinician satisfaction.
AI in care documentation is not a question of if — it’s a question of which vendors and which workflows you trust. The agencies that get this right will save real clinician time. The ones that get it wrong will end up explaining themselves to OCR.
Ask us all 7 questions
We’ll answer each of the questions above on a 20-minute call, in writing, with our BAA template attached. No marketing fluff.
References
- [1] U.S. Department of Health and Human Services. HIPAA Privacy and Security Rules, 45 CFR Parts 160 and 164.
- [2] HHS Office for Civil Rights (2024). Guidance on the use of artificial intelligence and PHI.
- [3] NIST (2024). AI Risk Management Framework: AI RMF 1.0 and Generative AI Profile.
- [4] Centers for Medicare & Medicaid Services. Conditions of Participation: Documentation requirements.
Industry statistics are drawn from publicly available reports by the organizations listed.