For this week's paper read, we actually dive into our own research.
We wanted to create a replicable, evolving dataset that keeps pace with model training, so you always know you're testing on data your model has never seen. We also saw the prohibitively high cost of running LLM evals at scale, so we used our data to fine-tune a series of SLMs that perform just as well as their base LLM counterparts at a tenth of the cost.
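As a rough illustration of the idea (not the paper's actual code), one way to sanity-check a cheaper fine-tuned SLM judge against an LLM judge is to measure how often their hallucination labels agree on a shared set of examples. All names and labels below are hypothetical:

```python
# Hypothetical sketch: compare a fine-tuned SLM judge's hallucination labels
# against an LLM judge's labels on the same examples. The data is made up.

def agreement_rate(slm_labels, llm_labels):
    """Fraction of examples on which the two judges agree."""
    assert len(slm_labels) == len(llm_labels)
    matches = sum(a == b for a, b in zip(slm_labels, llm_labels))
    return matches / len(slm_labels)

# 1 = hallucinated, 0 = factual (illustrative labels only)
llm_judge = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
slm_judge = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]

print(f"agreement: {agreement_rate(slm_judge, llm_judge):.0%}")  # agreement: 90%
```

In practice you would also want per-class metrics (precision/recall on the hallucination class), since overall agreement can hide systematic misses.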
So, over the past few weeks, the Arize team generated the largest public dataset of hallucinations, as well as a series of fine-tuned evaluation models.
We talk about what we built, the process we took, and the bottom line results.
Read the paper: https://arize.com/llm-hallucination-dataset/