Meta Launches AI That Can Monitor Other AI As Human Involvement Diminishes
The release follows Meta's introduction of the tool in an August paper, which detailed how it relies on the same "chain of thought" technique used by OpenAI's recently released o1 models to make reliable judgments about other models' responses.
That technique involves breaking down complex problems into smaller logical steps and appears to improve the accuracy of responses on challenging problems in subjects like science, coding and math.
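To illustrate the idea, here is a minimal sketch of what a chain-of-thought judging prompt can look like. This is a hypothetical example, not Meta's actual prompt or code: the `build_judge_prompt` function and its wording are assumptions, shown only to convey how an evaluator model can be asked to reason step by step before scoring an answer rather than scoring it directly.

```python
# Hypothetical sketch of a chain-of-thought evaluation prompt.
# Not Meta's actual method: it only illustrates the general idea of
# having a judge model reason in steps before issuing a verdict.

def build_judge_prompt(question: str, answer: str) -> str:
    """Assemble a prompt asking an evaluator model to work through
    the problem step by step before judging a candidate answer."""
    return (
        "You are evaluating an answer to a question.\n"
        f"Question: {question}\n"
        f"Candidate answer: {answer}\n"
        "First, work through the problem step by step yourself.\n"
        "Then compare each of your steps against the candidate answer.\n"
        "Finally, output a single verdict: CORRECT or INCORRECT."
    )

# Example usage: the assembled prompt would be sent to the judge model.
prompt = build_judge_prompt("What is 17 * 24?", "408")
print(prompt)
```

The key design choice is that the judge is instructed to derive its own solution first, so its verdict rests on explicit intermediate reasoning instead of a one-shot guess.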
Meta's researchers used entirely AI-generated data to train the evaluator model, eliminating human input at that stage as well.
The ability to use AI to evaluate AI reliably offers a glimpse at a possible pathway toward building autonomous AI agents that can learn from their own mistakes, two of the Meta researchers behind the project told Reuters.