<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Monitoring on Dataverse AI Solutions</title><link>https://dataverse-ai.org/tags/monitoring/</link><description>Recent content in Monitoring on Dataverse AI Solutions</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sat, 09 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://dataverse-ai.org/tags/monitoring/index.xml" rel="self" type="application/rss+xml"/><item><title>RAG in Production, Part 2: The User-Facing Half - Cost, Feedback, Errors, and Test Gates</title><link>https://dataverse-ai.org/posts/rag-monitoring-production-part2/</link><pubDate>Sat, 09 May 2026 00:00:00 +0000</pubDate><guid>https://dataverse-ai.org/posts/rag-monitoring-production-part2/</guid><description>A pipeline that scores green on every metric can still be quietly failing its users. This is Part 2 of the series - covering cost-per-useful-answer, explicit and implicit user feedback, a typed error taxonomy, 10-day trend charts, and the CI gates that keep the signals honest.</description></item><item><title>RAG in Production, Part 1: Why Observability Matters Before Anything Breaks</title><link>https://dataverse-ai.org/posts/rag-monitoring-production-part1/</link><pubDate>Sat, 02 May 2026 00:00:00 +0000</pubDate><guid>https://dataverse-ai.org/posts/rag-monitoring-production-part1/</guid><description>Building a RAG pipeline is the easy part. This is Part 1 of a two-part series on how I instrumented my personal assistant&#39;s Vault for production - covering the four observability layers, span tracing, and the pipeline metrics that tell us whether our system is actually working.</description></item></channel></rss>