April 8, 2026 · 10 min read

MEAT Criteria Explained: How Coders Decide When a Diagnosis Is Reportable

MEAT · Risk Adjustment · HCC Coding · Documentation · ICD-10

By Daniel Plasencia — Certified Risk Coder (CRC), Certified Professional Coder (CPC)


MEAT — Monitor, Evaluate, Assess, Treat — is the documentation framework risk adjustment coders use to decide whether a diagnosis listed in a chart can actually be reported. It is not an ICD-10-CM coding rule and it does not appear in the Official Guidelines, but Medicare Advantage plans, CMS RADV auditors, and CDI programs across the country treat it as the standard for whether a diagnosis is supported well enough to bill against. The AAPC primer on including MEAT in risk-adjustment documentation is the industry reference most coders train against.

This guide walks through what each MEAT element actually means, the chart patterns that satisfy it, the patterns that fail it, and the most common edge cases coders hit during real-world chart review.

Where MEAT Came From and Why It Exists

MEAT is not a CMS regulation. It is a CDI / coding-industry framework that emerged in the early 2010s as Medicare Advantage plans tried to translate the abstract requirement of "the diagnosis must be addressed at the encounter" into something coders could check on a per-chart basis. Primary-care organizations like the AAFP echo this in their own member guidance: their HCC coding reference page walks family physicians through documenting active management in exactly these terms. The framework caught on because it gave coders a clear rubric: if you can find any of the four MEAT elements in the encounter note for a given diagnosis, the diagnosis is supported. If you cannot, it is not.

MEAT has been reinforced by a decade of major HCC chart reviews. RADV auditors do not literally write "MEAT failed" on their findings, but the patterns they fail charts for are exactly the patterns MEAT was designed to catch — diagnoses that appear in problem lists or copy-forwarded text without any active evidence of clinical management. The OIG work plan project on CMS-HCC V24 vs. V28 trends lays out the federal oversight lens on exactly this kind of weakly supported diagnosis.

The Four Elements

M — Monitor

Monitoring means the provider is watching the condition over time. The clearest evidence of monitoring is an order or a result tied to that condition: a hemoglobin A1c for diabetes, a serum creatinine and eGFR for chronic kidney disease, a blood pressure reading for hypertension, an ECG for atrial fibrillation, an INR for warfarin therapy, a TSH for hypothyroidism.

A provider note that says "Diabetes — A1c 7.4 today, will recheck in 3 months" is monitoring. A note that just lists "diabetes" with no labs, no vitals, no orders, no plan to recheck is not.

What passes:

  • Lab or imaging ordered or reviewed at this encounter that is clinically tied to the diagnosis
  • Vitals trended for the diagnosis (BP for HTN, weight for CHF, glucose for DM)
  • Symptom severity tracked over time ("dyspnea improved since last visit")
What fails:

  • Diagnosis listed in problem list with no associated labs, vitals, or orders at this visit
  • Labs done but not addressed in the assessment
  • Copy-forwarded labs from a prior visit with no current interpretation
E — Evaluate

Evaluating means the provider is forming a clinical judgment about the condition: severity, control, response to therapy, complications. Evaluation language is the most common MEAT element in well-documented charts because it requires the provider to actually think and write about the condition.

What passes:

  • "Diabetes type 2 — well controlled, A1c at goal"
  • "Heart failure — currently compensated, no JVD, no edema"
  • "Depression — stable on current sertraline dose, PHQ-9 down to 7"
  • "CKD stage 3a — eGFR stable, no progression"
What fails:

  • Listing the condition with no severity or control language
  • "Stable" by itself, with no reference to what is being evaluated
  • Boilerplate carry-forward language ("doing well") that does not reference the specific condition
A — Assess

Assessing means the provider has made an active diagnosis statement at this encounter. The cleanest evidence of assessment is the diagnosis appearing in the assessment / plan section of the note, not just in the problem list or past medical history. Coders should look for the condition to be named in the body of the assessment, not buried in a free-text history field.

What passes:

  • Diagnosis named in the assessment / plan section, with or without a plan
  • Diagnosis added to the encounter problem list (not just the global problem list) for this visit
  • Discussion of the diagnosis in the body of the note that demonstrates clinical thought
What fails:

  • Diagnosis only in the global problem list, never mentioned in the body of the note
  • Past medical history list with no reference in the encounter
  • Copy-forwarded "active diagnoses" section that contradicts the body of the note
T — Treat

Treating means the provider is doing something about the condition: a medication, a procedure, a referral, a lifestyle intervention, a continuation of an existing therapy. Treatment is the most concrete MEAT element and is the easiest for coders to find.

What passes:

  • Active medication for the condition (whether new, refilled, or continued)
  • Procedure or therapy ordered or performed at this visit
  • Referral to a specialist for the condition
  • Active monitoring with intent to treat ("if BP > 140 next visit, will start lisinopril")
What fails:

  • A medication that is on the medication list but is not associated with the diagnosis ("on metformin" with no diabetes diagnosis in the assessment)
  • A discontinued medication (the condition may still exist, but the patient is no longer being treated for it at this visit)
  • A medication for a different condition that the chart never links
How Many Elements Do You Need?

One. A single MEAT element, clearly tied to the diagnosis in the encounter note, is enough to support reporting the code. You do not need all four. A diagnosis with M (lab ordered) but no E, A, or T is supportable. A diagnosis with only T (active medication) is supportable.

The exception is when the documentation is genuinely ambiguous. If a coder can find one element but the rest of the note actively contradicts it (e.g., "diabetes" in the assessment but "no DM" in the problem list and no diabetes meds), the chart should be queried, not coded.
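For coders who also build or configure review tooling, the one-element rule is simple enough to express in a few lines. The sketch below is illustrative only (the enum and function names are hypothetical, not any plan's actual software), but it captures the rubric above: a contradiction anywhere in the note triggers a query, a single clearly tied element is enough to code, and no element at all means the diagnosis is not reportable from this encounter.

```python
from enum import Enum

class MeatElement(Enum):
    MONITOR = "M"
    EVALUATE = "E"
    ASSESS = "A"
    TREAT = "T"

def reporting_decision(elements_found: set, contradicted: bool) -> str:
    """Apply the one-element rule described above."""
    if contradicted:
        # e.g., "diabetes" in the assessment but "no DM" elsewhere in the note
        return "QUERY"
    if elements_found:
        # a single M, E, A, or T element tied to the diagnosis is enough
        return "CODE"
    # problem-list-only diagnosis with nothing else: not supported at this visit
    return "DO_NOT_CODE"

# A diagnosis supported only by an active medication (T) is still codable:
print(reporting_decision({MeatElement.TREAT}, contradicted=False))  # CODE
```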

Real Chart Examples

Example 1 — Passes (clean MEAT for diabetes with CKD)

> Assessment / Plan:
>
> 1. Type 2 diabetes mellitus with diabetic chronic kidney disease, stage 3a — A1c 7.8 today, eGFR 48, urine albumin/creatinine 320. Continue metformin 1000 mg BID and empagliflozin 10 mg daily. Recheck A1c and BMP in 3 months. ACEi already on board for proteinuria.

This has all four MEAT elements: monitoring (A1c, eGFR, ACR), evaluating (severity language, CKD stage), assessing (diagnosis statement in the assessment), and treating (active medications, recheck plan). This is the gold standard for risk adjustment documentation. E11.22 + N18.31 are both supportable.

Example 2 — Fails (problem list only)

> History: Patient is here for a routine follow-up. Doing well overall.
>
> Past Medical History (problem list): Type 2 diabetes mellitus, chronic kidney disease, hypertension, hyperlipidemia, depression
>
> Assessment / Plan:
>
> 1. Hypertension — BP 132/78, continue lisinopril
> 2. Hyperlipidemia — continue atorvastatin

The patient has diabetes, CKD, and depression in the problem list, but the encounter note only addresses hypertension and hyperlipidemia. None of the other three diagnoses are monitored, evaluated, assessed, or treated at this visit. They are not reportable from this encounter, even though they are real conditions.

This is the most common documentation gap in risk adjustment chart review. The fix is provider education: every active condition that is being managed needs to appear in the assessment with at least one MEAT element, every visit, even if it is just one line.

Example 3 — Edge case (medication only, no mention)

> Assessment / Plan:
>
> 1. Annual physical — patient feels well
>
> Medications: lisinopril 20 mg daily, metformin 1000 mg BID, atorvastatin 40 mg daily, sertraline 50 mg daily

The patient is clearly being treated for hypertension, diabetes, hyperlipidemia, and depression based on the medication list. Is that enough to support reporting all four diagnoses?

The answer depends on the plan and the auditor. The strict interpretation of MEAT is that the diagnosis must be named in the encounter, even briefly, with at least one MEAT element. A medication list without any reference to the underlying diagnosis is generally not enough — the provider could be continuing the medication for any number of reasons, and the chart does not document the active diagnosis. Most plans treat this pattern as requiring a query.

The looser interpretation, which some plans accept, is that an active medication on the current medication list is implicit treatment of the underlying diagnosis. This is risky from a RADV standpoint and most CDI programs do not allow it.

The safe answer for a coder facing this pattern: query the provider, do not infer the diagnosis from the med list alone.
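If you are building a worklist to surface this pattern, one way to handle it (a sketch, assuming a hypothetical drug-to-indication map rather than any real reference file) is to flag active medications whose usual indication is never named in the assessment and route them to a provider query, rather than inferring the diagnosis:

```python
# Hypothetical, deliberately tiny drug-to-indication map for illustration; a real
# tool would use a curated reference, and even then the output is a query list,
# never an inferred diagnosis.
MED_TO_CONDITION = {
    "metformin": "type 2 diabetes mellitus",
    "lisinopril": "hypertension",
    "atorvastatin": "hyperlipidemia",
    "sertraline": "depression",
}

def meds_needing_query(active_meds, assessed_diagnoses):
    """Return meds whose usual indication is never named in the assessment."""
    flags = []
    for med in active_meds:
        condition = MED_TO_CONDITION.get(med.lower())
        if condition and condition not in assessed_diagnoses:
            flags.append(f"{med}: query provider, {condition} not assessed this visit")
    return flags

# Example 3 above: annual physical, four active meds, nothing assessed
print(meds_needing_query(
    ["lisinopril", "metformin", "atorvastatin", "sertraline"],
    assessed_diagnoses=set(),
))
```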

Example 4 — Edge case (status condition that does not require active management)

> Assessment / Plan:
>
> 1. History of cerebrovascular accident with right hemiparesis — gait stable, continues to use cane, PT discharged 6 months ago

Stroke with residual deficit (the I69 family) is a status condition that captures an HCC under V28 (HCC 132 — Late Effects of Stroke or Anoxic Brain Damage). The deficit itself is the reportable condition (you can verify the current mapping on the CMS 2026 risk-adjustment model software page), and the documentation of the residual deficit (right hemiparesis, gait, cane use) is enough to support the code. The provider does not need to "treat" a residual deficit that is permanent — recognizing and documenting the residual is the assessment.

The same logic applies to certain ostomies, amputations, and transplant statuses. These are genuine status conditions where the existence of the condition is the documentation, not the active management.
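In tooling terms, these are the codes a reviewer might exempt from the active-management expectation. The prefix list below is a minimal sketch, not an exhaustive or authoritative list: I69- (sequelae of cerebrovascular disease) comes straight from the example above, and Z89- (acquired absence of limb), Z93- (artificial opening status), and Z94- (transplant status) are the usual categories for the amputation, ostomy, and transplant statuses mentioned here.

```python
# Illustrative only: status-condition categories where documenting the residual
# or the status itself is the assessment. Not an exhaustive mapping.
STATUS_CONDITION_PREFIXES = ("I69", "Z89", "Z93", "Z94")

def is_status_condition(icd10_code: str) -> bool:
    """True if the code documents a status whose existence is itself the finding."""
    return icd10_code.upper().replace(".", "").startswith(STATUS_CONDITION_PREFIXES)

print(is_status_condition("I69.351"))  # True: hemiplegia following cerebral infarction
print(is_status_condition("E11.22"))   # False: diabetes still needs a MEAT element
```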

When to Query Instead of Coding

The general rule: if you can find at least one clear MEAT element in the encounter note for a given diagnosis, code it, using the CMS ICD-10-CM code set at the specificity the record supports. If the documentation is ambiguous, contradictory, or has only the diagnosis name in the problem list with no other elements, query the provider before coding.

The most common query patterns:

  • The diagnosis is in the problem list but never appears in the body of the note → query for active assessment
  • The medication is being taken but the diagnosis is not in the assessment → query for the diagnosis
  • The diagnosis is named but the severity or specificity required for the HCC is missing → query for severity
  • The diagnosis appears as both active and historical in different parts of the same note → query to clarify status
Why MEAT Matters Beyond Coding

MEAT is the framework that determines RAF capture, but its real value is in driving better documentation. The plans that train providers to write notes that satisfy MEAT do not just pass RADV — they have better continuity of care, fewer medication errors, and more accurate problem lists. The discipline that protects RAF dollars is the same discipline that produces better clinical records.

For coders new to risk adjustment, the single most useful habit is to read every encounter note with MEAT in mind: for each diagnosis you see, ask whether the provider monitored, evaluated, assessed, or treated it during this visit. If the answer is yes, code it. If the answer is no, query. Apply that filter to every chart and the rest of HCC coding becomes much easier.

Try this in HCC Buddy Academy: MEAT Criteria Fundamentals, part of the MEAT/TAMPER Documentation Mastery course (free preview available).
Daniel Plasencia
Founder & Developer

Risk adjustment coding professional and software engineer who built the tool he wished existed, at a price coders can actually afford.
