OASIS-E and the HOPE Assessment: What Every Home Health Agency Needs to Know in 2026

CMS continues to evolve post-acute assessments. If your team isn't fully up to speed on OASIS-E and HOPE, you're leaving money on the table and opening the door to compliance risk.
If you run a home health agency, you've spent the last several years navigating a series of significant changes to the OASIS instrument. The transition to OASIS-E introduced new items, modified existing ones, and fundamentally altered how some clinical data points are captured. The HOPE (Home Health Outcome and Process Evaluation) assessment added another layer of complexity, aligning home health documentation more closely with standardized patient assessment data across post-acute settings.
For agencies that have kept pace with these changes, the new instruments offer an opportunity to demonstrate clinical quality and capture more accurate reimbursement. For agencies that haven't, the exposure is growing — in the form of inaccurate case-mix weights, compliance gaps, and audit vulnerability.
Here's where things stand in 2026, and what your agency needs to have dialed in.
What Changed with OASIS-E — and Why It Still Matters
OASIS-E represented the most significant revision to the OASIS instrument in over a decade. While many agencies completed the initial transition, the nuances of the changes continue to cause problems in day-to-day documentation.
The most consequential changes involved the standardization of patient assessment data across post-acute care settings. CMS introduced and expanded Section GG (Functional Abilities and Goals) items to create a common language for measuring patient function across home health, skilled nursing, inpatient rehabilitation, and long-term acute care. For home health clinicians accustomed to the legacy functional items, this required a fundamental shift in how they assess and score patient abilities.
Section GG items use a different scoring methodology than the legacy OASIS functional items. They measure what the patient actually does during the assessment, using a six-point performance-based scale that ranges from "dependent" to "independent." Clinicians who were trained on the older item set sometimes default to scoring based on what the patient can do (capacity) rather than what the patient actually did during the assessment (performance). This distinction seems minor, but it has direct reimbursement implications.
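For teams building internal QA tooling, the scale itself is straightforward to encode. The sketch below is a minimal reference table, assuming the standard CMS six-point GG performance scale plus the "activity not attempted" response codes; the function name is illustrative, not part of any specification.

```python
# Illustrative sketch: the Section GG six-point performance scale and
# the "activity not attempted" codes, encoded for use in QA tooling.
# Helper name is hypothetical.

GG_SCALE = {
    "06": "Independent",
    "05": "Setup or clean-up assistance",
    "04": "Supervision or touching assistance",
    "03": "Partial/moderate assistance",
    "02": "Substantial/maximal assistance",
    "01": "Dependent",
}

NOT_ATTEMPTED = {
    "07": "Patient refused",
    "09": "Not applicable",
    "10": "Not attempted due to environmental limitations",
    "88": "Not attempted due to medical condition or safety concerns",
}

def describe_gg_score(code: str) -> str:
    """Return the human-readable label for a GG response code."""
    if code in GG_SCALE:
        return GG_SCALE[code]
    if code in NOT_ATTEMPTED:
        return f"Not attempted: {NOT_ATTEMPTED[code]}"
    raise ValueError(f"Unknown GG response code: {code}")
```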
OASIS-E also refined several items related to cognitive function, behavioral health, and social determinants of health. These additions reflect CMS's broader goal of capturing a more complete picture of the patient, but they also mean more data points for clinicians to address during every comprehensive assessment.
Understanding the HOPE Assessment
The HOPE assessment was introduced as part of CMS's effort to create a more unified approach to patient assessment in home health. While HOPE incorporates many elements that home health clinicians will recognize from the traditional OASIS, it introduces new data collection points and modified definitions for existing ones.
HOPE assessment items focus on standardized outcome measurement. The goal is to track patient progress across episodes using consistent, comparable data points. This means that accurate HOPE data collection isn't just about individual claim reimbursement — it's about the agency's quality scores, star ratings, and public reporting metrics.
For clinicians, the practical impact is that HOPE items require the same level of precision and consistency as OASIS items, but they serve a somewhat different purpose. While OASIS items primarily drive payment grouping, HOPE items primarily drive quality measurement. An agency that scores OASIS items accurately but treats HOPE items as an afterthought will see the consequences in its quality metrics — which increasingly influence referral patterns, payor contracts, and public perception.
The Five Areas Where Agencies Struggle Most
Based on patterns observed across agencies of all sizes, five specific areas generate the most errors and confusion in OASIS-E and HOPE documentation.
Section GG scoring inconsistency remains the most widespread issue. Clinicians score GG items differently depending on their training, their interpretation of the assessment criteria, and how much time they have during the visit. One clinician might score a patient as needing "supervision or touching assistance" for bed mobility, while another clinician assessing the same patient might score "partial/moderate assistance." This inconsistency creates data quality problems that affect both reimbursement and quality reporting.
The solution starts with standardized training on the GG scoring criteria, with specific emphasis on the difference between capacity and performance, and the precise definitions of each assistance level. But training alone isn't enough — ongoing calibration through inter-rater reliability checks is essential for maintaining consistency across your clinical team.
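One concrete way to run those reliability checks is to have clinicians independently score the same calibration cases and compute an agreement statistic such as Cohen's kappa. Here's a minimal, self-contained sketch; the scores are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# GG bed-mobility scores from two clinicians on ten calibration cases
clinician_1 = ["04", "03", "04", "05", "02", "04", "03", "04", "05", "03"]
clinician_2 = ["04", "03", "03", "05", "02", "04", "03", "04", "04", "03"]

print(f"Cohen's kappa: {cohens_kappa(clinician_1, clinician_2):.2f}")
```

Values near 1.0 indicate strong agreement; a kappa that drifts downward over successive calibration rounds is a signal to retrain the team on the GG definitions.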
Medication-related items continue to cause documentation gaps. OASIS-E expanded the medication management section to capture more detail about the patient's ability to self-administer medications and the complexity of the medication regimen. Clinicians who rush through these items or score them based on assumptions rather than direct assessment leave reimbursement value on the table.
Wound assessment documentation must align precisely with OASIS wound status items. The most common problem is a disconnect between the narrative wound description in the clinical note and the coded wound status in the OASIS. When a note describes a wound that's clearly worsening but the OASIS item indicates "stable," the inconsistency creates both a compliance risk and a reimbursement risk.
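Agencies can catch many of these disconnects with a simple automated cross-check before submission. The sketch below is purely illustrative: the keyword list and status labels are hypothetical stand-ins for whatever terminology your EHR actually uses.

```python
# Hypothetical consistency check: flag visits where the narrative wound
# description suggests deterioration but the coded wound status does not.
# Keyword list and status labels are illustrative, not CMS definitions.

WORSENING_TERMS = {"worsening", "deteriorating", "increased drainage",
                   "new necrosis", "expanding", "declined"}

def flag_wound_mismatch(narrative: str, coded_status: str) -> bool:
    """Return True when the note suggests worsening but the code disagrees."""
    note = narrative.lower()
    suggests_worsening = any(term in note for term in WORSENING_TERMS)
    return suggests_worsening and coded_status != "worsening"

note = "Sacral wound with increased drainage and expanding margins."
if flag_wound_mismatch(note, coded_status="stable"):
    print("QA flag: narrative suggests worsening; coded status is 'stable'.")
```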
Cognitive and behavioral health items are newer to many home health clinicians and receive less training emphasis than functional or clinical items. But these items influence clinical grouping under PDGM, and inaccurate scoring can shift an episode into a lower-paying category. Clinicians need specific guidance on how to assess and score these items based on observable patient behavior during the visit.
Assessment timing and completion requirements create procedural errors that can invalidate entire assessments. The SOC comprehensive assessment must be completed within five calendar days of the start-of-care date. Recertification assessments must fall within the last five days of the current certification period. When agencies don't have systems in place to track and enforce these deadlines, late assessments become a recurring compliance problem.
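Deadline tracking is one of the easier pieces to automate. A minimal sketch, assuming the five-calendar-day SOC rule described above; the field and function names are illustrative.

```python
from datetime import date, timedelta

# Sketch of a deadline tracker, assuming the SOC comprehensive
# assessment is due within five calendar days of the start-of-care
# date. Names are illustrative, not from any specific EHR.

SOC_WINDOW_DAYS = 5

def soc_assessment_deadline(start_of_care: date) -> date:
    """Last calendar day on which the SOC assessment may be completed."""
    return start_of_care + timedelta(days=SOC_WINDOW_DAYS)

def is_late(start_of_care: date, completed: date) -> bool:
    return completed > soc_assessment_deadline(start_of_care)

soc = date(2026, 3, 2)
print(soc_assessment_deadline(soc))    # 2026-03-07
print(is_late(soc, date(2026, 3, 9)))  # True: late, a compliance risk
```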
How Technology Is Closing the Gap
The complexity of OASIS-E and HOPE creates a documentation challenge that manual processes struggle to manage at scale. A clinician conducting a comprehensive SOC assessment needs to accurately capture and score dozens of individual items across multiple clinical domains, all while maintaining focus on the patient and providing direct care.
AI-powered documentation tools are increasingly filling this gap. When an AI scribe is trained on the OASIS-E and HOPE instruments, it can capture clinical information from a natural patient conversation and map it to the correct assessment items automatically. The clinician doesn't need to mentally track which GG item to score while talking to the patient about their mobility — the AI handles that mapping.
More importantly, AI can perform real-time consistency checks during the documentation process. If a clinician's conversation suggests one functional level but the AI-generated score doesn't match, the system can flag the discrepancy before the assessment is finalized. This catches the kind of internal inconsistencies that manual QA reviews often miss — because the AI is reviewing every item against every other item, not just spot-checking a sample.
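Under the hood, checks like these reduce to sets of cross-item rules. In the toy example below, the item IDs are modeled on actual OASIS items, but the rule, sample values, and thresholds are invented for illustration; a production system would encode many such rules derived from CMS guidance.

```python
# Toy cross-item consistency check. The rule and sample values are
# invented for illustration; item IDs are modeled on OASIS items.

assessment = {
    "GG0170C_bed_mobility": "02",  # substantial/maximal assistance
    "GG0130A_eating": "02",        # substantial/maximal assistance
    "M2020_oral_med_mgmt": "0",    # coded as fully independent
}

def check_consistency(a: dict[str, str]) -> list[str]:
    """Return review flags for internally inconsistent item combinations."""
    flags = []
    # Heavy functional dependence rarely coexists with fully independent
    # medication management; surface the combination for clinician review.
    heavily_dependent = all(
        int(a[item]) <= 2
        for item in ("GG0170C_bed_mobility", "GG0130A_eating")
    )
    if heavily_dependent and a["M2020_oral_med_mgmt"] == "0":
        flags.append("Review: GG items show heavy dependence, but "
                     "M2020 is scored fully independent.")
    return flags

for flag in check_consistency(assessment):
    print(flag)
```

The value of an AI layer is running hundreds of rules like this on every assessment rather than a handful on a sampled few.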
Training Your Team for Accuracy
Technology alone isn't enough. Agencies that achieve consistently high OASIS-E and HOPE accuracy combine technology with structured education and accountability.
Initial training should cover not just what each item measures, but why it matters. Clinicians who understand how their OASIS responses affect reimbursement, quality scores, and compliance are more likely to invest the effort in accurate scoring. Abstract training on assessment criteria is less effective than concrete examples showing the financial impact of a scoring error.
Ongoing calibration exercises maintain inter-rater reliability across your clinical team. Quarterly case studies where multiple clinicians independently score the same patient scenario — followed by group discussion of any discrepancies — keep scoring consistent as staff turns over and new clinicians join the team.
Targeted feedback from QA to clinical staff closes the loop between assessment errors and documentation practice. When a QA reviewer identifies a pattern — for example, a specific clinician consistently under-scoring Section GG items — that feedback needs to reach the clinician quickly and constructively, with specific guidance on how to improve.
What's Coming Next
CMS continues to evolve the OASIS and HOPE instruments. The trend is clear: more standardization across post-acute settings, more granular data collection, and more sophisticated use of assessment data for quality measurement and payment adjustment.
Agencies that invest in OASIS and HOPE accuracy today aren't just solving this year's compliance requirements. They're building the infrastructure — the training programs, the technology tools, the QA processes — that will keep them compliant and competitive as the assessment requirements continue to evolve.
The agencies that treat OASIS-E and HOPE as bureaucratic checkboxes will fall further behind. The agencies that treat them as core clinical and business processes will pull ahead.
Lime Health AI is optimized for OASIS and HOPE accuracy — capturing every required data point through natural conversation and flagging inconsistencies in real time. Request a demo to see how it works.