## Logging & Recovery
Resilient prompts need observability. prompt-fn layers structured logging, fallbacks, and manual content parsing so you can still ship when a provider emits malformed JSON.
### pino logger
```ts
import pino from 'pino';

const logger = pino({ name: 'Adder Prompt', level: 'debug' });

const addNumbers = definePrompt({
  // ...other prompt options
  logger,
});
```
- Each invocation logs `module`, `errorName`, and `content` when something goes wrong.
- Use `logger.child({ module: name })` if you need per-prompt context (see the sketch below).
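For example, one child logger per prompt keeps log lines attributable when several prompts share a single pino instance. A minimal sketch, reusing the `addNumbers` prompt from above (options other than `logger` are elided):

```ts
import pino from 'pino';

const rootLogger = pino({ level: 'debug' });

// Every line logged for this prompt carries { module: 'addNumbers' }.
const addNumbers = definePrompt({
  // ...other prompt options
  logger: rootLogger.child({ module: 'addNumbers' }),
});
```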
### Recovery flow
- We call `generateObject`/`generateText` with `outputSchema`.
- If `generation.output` is missing, we inspect `generation.content` and attempt to parse JSON ourselves. `recoverFromContent` (see `src/utils/outputRecovery.ts`) strips fences, trims whitespace, and re-validates (sketched after this list).
- If manual parsing works, we log a warning instead of an error and return the recovered payload.
- If everything fails, the original AI SDK error bubbles up, still enriched with logger metadata.
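The parsing step amounts to roughly the following. This is a simplified sketch of the behavior described above, assuming a Zod `outputSchema`; the actual implementation and signature live in `src/utils/outputRecovery.ts`:

```ts
import { z } from 'zod';

function recoverFromContent<T>(content: string, schema: z.ZodType<T>): T {
  // Strip leading/trailing markdown code fences and trim whitespace.
  const stripped = content
    .replace(/^```(?:json)?\s*/i, '')
    .replace(/\s*```$/, '')
    .trim();

  // Re-validate: parse the JSON, then run it through the output schema.
  return schema.parse(JSON.parse(stripped));
}
```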
You can override the recovery strategy by wrapping `recoverFromContent` or by transforming `generation.content` before calling `outputSchema.parse`, as in the sketch below.
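For instance, a transform can normalize model quirks before the standard schema check. A hypothetical example (the `outputSchema` shape and the `recoverLoose` helper are illustrative, not part of prompt-fn):

```ts
import { z } from 'zod';

const outputSchema = z.object({ sum: z.number() });

// Hypothetical transform: drop trailing commas some models emit before
// } or ], then reuse the ordinary schema validation.
function recoverLoose(content: string) {
  const normalized = content.replace(/,\s*([}\]])/g, '$1');
  return outputSchema.parse(JSON.parse(normalized));
}
```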
### Tuning provider requests
- Ollama OpenAI-compatible endpoints often wrap JSON in quotes. The built-in recovery handles this scenario automatically (see the sketch after this list).
- For providers that support structured output mode (e.g., Gemini 1.5, the OpenAI Responses API), keep `outputSchema` strict; prompt-fn already passes the JSON schema to the provider.
- If you rely on `generateText`, consider supplying a lightweight `outputSchema` and falling back to `recoverFromContent` for consistency.
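To make the quoted-JSON case concrete: a double-encoded payload needs two `JSON.parse` passes before validation. A sketch of what the built-in recovery does in that situation (you should not need to write this yourself):

```ts
import { z } from 'zod';

const outputSchema = z.object({ sum: z.number() });

// A quoted payload looks like '"{\"sum\": 3}"': a JSON string whose
// value is itself JSON. Parse once, and if the result is still a
// string, parse again before validating.
function parseQuotedJson(content: string) {
  let value: unknown = JSON.parse(content);
  if (typeof value === 'string') {
    value = JSON.parse(value);
  }
  return outputSchema.parse(value);
}
```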