Large Language Models (LLMs) have dominated the field of NLP in recent years, demonstrating the ability to solve tasks with zero- or few-shot prompting. Recent literature has focused on the concept of self-correction, i.e. having an LLM correct its own outputs. However, attempts to self-correct logical or reasoning errors often cause correct answers to become incorrect, resulting in worse performance overall. In this paper, we break down the self-correction process into two core components: mistake finding and output correction. For mistake finding, we release BIG-Bench Mistake, a dataset of logical mistakes in Chain-of-Thought reasoning traces. For output correction, we propose a backtracking method that yields large improvements when given the location of the error.