The Problem
You have an agent that processes images with Claude vision. A user uploads an invoice screenshot, Claude extracts the line items as JSON, and your code runs the math — totals, tax, position sizing, whatever the domain requires.
It works great. Then one day a user uploads an image with two invoices. Or a blurry scan. Or a format Claude hasn’t seen before.
Claude returns JSON — it always returns JSON, because that’s what your prompt asked for. But the fields are empty:
{"name": "Widget", "quantity": null, "unitPrice": null, "tax": null}
Your code doesn’t check. It runs quantity * unitPrice — and in JavaScript, a null field silently coerces to 0 while a missing field is undefined, which turns any arithmetic into NaN. The garbage value passes to the next function, which passes it to the formatter, which renders this to the user:
Widget — NaN units @ $undefined = $NaN
Tax: $NaN
Total: $NaN
The AI didn’t fail. It told you it couldn’t extract the data — you just didn’t listen.
Why NaN Slips Through
This is the part that burns people. You have guards. They look correct:
const riskPerShare = Math.abs(entry - stop);
if (riskPerShare <= 0) continue; // skip bad data
let shares = Math.floor(dollarRisk / riskPerShare);
if (shares < 1) continue; // skip tiny positions
Both guards pass when the values are NaN. In JavaScript, NaN <= 0 is false. NaN < 1 is false. NaN isn’t less than anything, greater than anything, or equal to anything — including itself. Your guard says “skip if the value is bad” but NaN doesn’t register as bad. It’s not bad. It’s nothing. And nothing passes every comparison by failing it.
The fix is simple once you know the rule:
if (!riskPerShare || riskPerShare <= 0) continue;
if (!shares || shares < 1) continue;
!NaN is true — NaN is falsy, so the truthiness check catches it. (Number.isFinite(x) says the same thing more explicitly, and rejects Infinity too.)
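A minimal demonstration — plain JavaScript semantics, nothing assumed — of why the guards pass and the falsiness check doesn’t:

```javascript
// NaN arises the moment missing data hits arithmetic
const entry = undefined;  // e.g. a field Claude couldn't extract
const stop = undefined;
const riskPerShare = Math.abs(entry - stop); // NaN

console.log(riskPerShare <= 0);             // false — the "skip bad data" guard passes
console.log(riskPerShare < 1);              // false — so does the "skip tiny" guard
console.log(riskPerShare === riskPerShare); // false — NaN isn't even equal to itself
console.log(!riskPerShare);                 // true  — NaN is falsy, so !x catches it
console.log(Number.isFinite(riskPerShare)); // false — the explicit check
```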
But this is a band-aid. The real fix is upstream.
The Fix: Validate at the Boundary
The moment you get structured data back from AI, validate it before your code touches it. This is the same principle as validating user input at an API boundary — except here, the “user” is a language model that sometimes returns partial data.
Step 1: Validate required fields
function validateParsedItem(item) {
  const missing = [];

  if (!item.name || typeof item.name !== 'string') {
    missing.push('name');
  }

  for (const field of ['quantity', 'unitPrice']) {
    const val = item[field];
    // typeof covers undefined and null; Number.isNaN covers NaN
    if (typeof val !== 'number' || Number.isNaN(val)) {
      missing.push(field);
    }
  }

  return { valid: missing.length === 0, missing };
}
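A quick run shows the validator naming exactly what’s missing (the function is repeated here so the snippet runs on its own):

```javascript
// The validator from Step 1, repeated so this snippet runs standalone.
function validateParsedItem(item) {
  const missing = [];
  if (!item.name || typeof item.name !== 'string') missing.push('name');
  for (const field of ['quantity', 'unitPrice']) {
    const val = item[field];
    if (typeof val !== 'number' || Number.isNaN(val)) missing.push(field);
  }
  return { valid: missing.length === 0, missing };
}

// The null-filled extraction from the opening example
console.log(validateParsedItem({ name: 'Widget', quantity: null, unitPrice: null }));
// → { valid: false, missing: [ 'quantity', 'unitPrice' ] }

// A complete extraction passes
console.log(validateParsedItem({ name: 'Widget', quantity: 3, unitPrice: 4.5 }));
// → { valid: true, missing: [] }
```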
Step 2: Gate before business logic
const parsed = extractJSON(response);
if (!parsed) return 'Could not parse response.';

const results = [];
for (const item of items) { // items: the parsed array (see Step 3)
  const validation = validateParsedItem(item);
  if (!validation.valid) {
    results.push(
      `**${item.name || 'unknown'}** — couldn't extract complete data.\n` +
      `Missing: ${validation.missing.join(', ')}\n\n` +
      `Please re-upload a clearer image for this item.`
    );
    continue;
  }

  // Now safe to do math
  results.push(calculateTotals(item));
}
The validation catches the problem at the source. The user sees “missing: quantity, unitPrice” instead of a screen full of NaN. They know what to do — re-upload, send a better image, split the items.
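calculateTotals is a stand-in for whatever your domain computes. A hypothetical version for the invoice case, assuming a flat tax rate (both the name and the rate are this article’s, not any library’s), might look like:

```javascript
// Hypothetical calculateTotals for the invoice domain — only ever
// called on items that passed validation, so the math is safe.
const TAX_RATE = 0.08; // assumed flat rate, for illustration only

function calculateTotals(item) {
  const subtotal = item.quantity * item.unitPrice;
  const tax = subtotal * TAX_RATE;
  return `**${item.name}** — ${item.quantity} units @ $${item.unitPrice.toFixed(2)} ` +
    `= $${subtotal.toFixed(2)}\nTax: $${tax.toFixed(2)}\nTotal: $${(subtotal + tax).toFixed(2)}`;
}

console.log(calculateTotals({ name: 'Widget', quantity: 3, unitPrice: 4.5 }));
// **Widget** — 3 units @ $4.50 = $13.50
// Tax: $1.08
// Total: $14.58
```

Because the gate in Step 2 runs first, this function never sees a null or NaN — the “$NaN” screen from the intro becomes impossible by construction.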
Step 3: Design for arrays from day one
The prompt that causes the most subtle failures is the one that assumes a single item:
Return a JSON object: { "name", "quantity", "unitPrice" }
This works until someone sends two items in one image. Claude tries to pick one, or merges them, or returns partial data for both. The fix is to always ask for and handle an array:
ALWAYS return a JSON array, even for a single item:
[{ "name", "quantity", "unitPrice" }]
Important: quantity and unitPrice must be numbers, not strings.
And in your code:
const parsed = extractJSON(response);
const items = Array.isArray(parsed) ? parsed : [parsed];

for (const item of items) {
  // validate and process each item independently
}
The normalization is one extra line of code. It handles every case — one item, two items, ten items — without any special logic.
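extractJSON has been doing quiet work in every snippet. Its real implementation is yours; one possible sketch, assuming the model’s reply contains a single JSON value possibly wrapped in prose or a markdown fence:

```javascript
// Sketch of extractJSON — finds the outermost [ ... ] or { ... } in the
// reply and parses it. Returns null instead of throwing, so callers
// can gate on the result.
function extractJSON(text) {
  const start = text.search(/[\[{]/);
  const end = Math.max(text.lastIndexOf(']'), text.lastIndexOf('}'));
  if (start === -1 || end <= start) return null;
  try {
    return JSON.parse(text.slice(start, end + 1));
  } catch {
    return null; // bracket-delimited span wasn't valid JSON
  }
}

console.log(extractJSON('Here are the items:\n```json\n[{"name":"Widget","quantity":2,"unitPrice":5}]\n```'));
// → [ { name: 'Widget', quantity: 2, unitPrice: 5 } ]
console.log(extractJSON('Sorry, I could not read the image.'));
// → null
```

Returning null rather than throwing matters: it keeps the “could not parse” case on the same code path as the validation failures, so every bad outcome reaches the user as a message instead of a stack trace.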
The Boundary Principle
AI-parsed output is an untrusted boundary. Same as user input, same as a third-party API response. The model is doing its best, but “its best” includes partial data, wrong types, and missing fields. Your code needs to handle that gracefully.
The pattern is always the same:
- Parse — extract JSON from the response
- Normalize — wrap in an array if it isn’t one
- Validate — check required fields exist and are the right type
- Gate — only run business logic on items that pass validation
- Report — tell the user specifically what’s missing and what to do about it
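Stitched together, the five steps fit in one boundary function. This sketch inlines compact versions of the helpers from earlier so it runs standalone (the names are this article’s, and TAX_RATE is an assumed flat rate):

```javascript
// 1. Parse — pull a JSON value out of the model's reply, or null
function extractJSON(text) {
  const start = text.search(/[\[{]/);
  const end = Math.max(text.lastIndexOf(']'), text.lastIndexOf('}'));
  if (start === -1 || end <= start) return null;
  try { return JSON.parse(text.slice(start, end + 1)); } catch { return null; }
}

// 3. Validate — required fields present and numeric
function validateParsedItem(item) {
  const missing = [];
  if (!item.name || typeof item.name !== 'string') missing.push('name');
  for (const f of ['quantity', 'unitPrice']) {
    if (typeof item[f] !== 'number' || Number.isNaN(item[f])) missing.push(f);
  }
  return { valid: missing.length === 0, missing };
}

const TAX_RATE = 0.08; // assumed flat rate, for illustration
function calculateTotals(item) {
  const subtotal = item.quantity * item.unitPrice;
  return `**${item.name}** — total: $${(subtotal * (1 + TAX_RATE)).toFixed(2)}`;
}

function handleExtraction(response) {
  const parsed = extractJSON(response);                    // 1. Parse
  if (!parsed) return 'Could not parse response.';
  const items = Array.isArray(parsed) ? parsed : [parsed]; // 2. Normalize
  return items.map((item) => {
    const { valid, missing } = validateParsedItem(item);   // 3. Validate
    if (!valid) {                                          // 4. Gate
      // 5. Report — specific fields, specific next step
      return `**${item.name || 'unknown'}** — missing: ${missing.join(', ')}. Please re-upload.`;
    }
    return calculateTotals(item);                          // safe math only
  }).join('\n');
}

console.log(handleExtraction(
  '[{"name":"Widget","quantity":3,"unitPrice":4.5},{"name":"Gadget","quantity":null,"unitPrice":null}]'
));
// **Widget** — total: $14.58
// **Gadget** — missing: quantity, unitPrice. Please re-upload.
```

Note the shape of the result: the good item is priced, the bad item gets an actionable message, and neither outcome can contaminate the other.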
Key Takeaway
The AI will always give you something. A JSON object, a best guess, a partial extraction. The question is whether your code treats that something as truth or as a claim that needs verification. Validate at the boundary. Fail loudly. Your users will thank you when they see “missing: unitPrice” instead of “$NaN.”