Microsoft is introducing a new feature called Correction that builds on the company's efforts to combat AI inaccuracies. Customers who use Microsoft Azure to power their AI systems can now use the feature to automatically detect and rewrite incorrect content in AI output.
Correction is available in preview as part of Azure AI Studio, a suite of safety tools designed to detect vulnerabilities, track down "hallucinations," and block malicious prompts. Once enabled, the system scans AI output and flags inaccuracies by comparing it against a customer's source material.
It then highlights the error, explains why it is incorrect, and rewrites the content in question, all "before the user can see the inaccuracy." While this sounds like a helpful way to catch the fabrications AI models often produce, it may not be a completely reliable solution.
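The detect-compare-rewrite loop described above can be sketched in miniature. This is purely illustrative: Microsoft's system uses language models to compare output against grounding documents, whereas this toy version stands in a simple word-overlap score, and every function name below is hypothetical rather than part of any Azure API.

```python
# Toy sketch of a groundedness check: flag sentences in AI output that are
# not supported by a source document. All names here are hypothetical; a
# real system (like Microsoft's) would use language models, not word overlap.

def token_overlap(claim: str, source: str) -> float:
    """Fraction of the claim's words that also appear in the source."""
    claim_words = set(claim.lower().split())
    source_words = set(source.lower().split())
    if not claim_words:
        return 0.0
    return len(claim_words & source_words) / len(claim_words)

def correct_output(ai_output: str, source: str, threshold: float = 0.7) -> list[str]:
    """Keep grounded sentences; mark ungrounded ones for rewriting."""
    corrected = []
    for sentence in ai_output.split(". "):
        if token_overlap(sentence, source) >= threshold:
            corrected.append(sentence)  # grounded in the source: keep as-is
        else:
            # Ungrounded: a production system would have a model rewrite the
            # claim from the source; this sketch just marks it for review.
            corrected.append(f"[UNGROUNDED: {sentence}]")
    return corrected

source_doc = "The device ships with 8 GB of RAM and a 120 Hz display"
ai_answer = "The device ships with 8 GB of RAM. It has a 240 Hz display"
print(correct_output(ai_answer, source_doc))
# → ['The device ships with 8 GB of RAM', '[UNGROUNDED: It has a 240 Hz display]']
```

The sketch also hints at why such systems are not a complete fix: the comparison step itself can misjudge whether a claim is supported, which is exactly the caveat Microsoft raises below.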
Microsoft is not alone in this approach: Vertex AI, Google's cloud platform for companies building AI systems, has a feature that "grounds" AI models by checking their output against Google Search, a company's own data, and (soon) third-party datasets.
In a statement to TechCrunch, a Microsoft spokesperson said the "correction" system uses "small and large language models to match the results to the base documents," meaning it is not immune to errors either. "It is important to note that groundedness detection does not solve 'accuracy,' but helps match the results of generative AI to the base documents," Microsoft told TechCrunch.