Microsoft is working on AI safety measures
Advanced AI systems like ChatGPT are changing how we use the internet, and not always for the better. Microsoft, a major investor in OpenAI, the company behind ChatGPT, is now fully embracing the technology.
It now offers Copilot across its products and services and lets businesses build their own large language model (LLM) applications through Azure. These systems can still behave in unexpected ways, though, most notoriously by producing confident but fabricated output, a failure known as “hallucination.” To deal with this, Microsoft is introducing new tools in its Azure AI platform to keep model behavior in check.
Even after extensive testing, developers are often surprised by what their AI systems do. Models can still produce inappropriate output despite a company’s guardrails, and users can deliberately craft inputs designed to trick a model into ignoring those guardrails, which Microsoft calls “prompt injection attacks.” To fight back, the company is adding five new features to Azure AI Studio. Some are already available, including Prompt Shields (which blocks injection attempts), Risk and Safety Monitoring, and Safety Evaluations. These tools help block harmful outputs and check content in real time.
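These features build on Azure’s AI Content Safety service, which developers can also call directly. As a rough illustration, the Python sketch below uses the public azure-ai-contentsafety package to run the kind of real-time harmful-content check described above; the endpoint, key, and severity threshold are placeholder assumptions, not values from Microsoft.

```python
# A minimal sketch of screening a model response for harmful content with
# Azure AI Content Safety (pip install azure-ai-contentsafety).
# The endpoint and key below are placeholders, and the severity threshold
# is an illustrative choice rather than a Microsoft recommendation.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    AzureKeyCredential("<your-content-safety-key>"),
)

def is_safe(model_output: str, max_severity: int = 2) -> bool:
    """Return True only if every harm category (hate, sexual, violence,
    self-harm) scores below the chosen severity threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=model_output))
    return all(
        (item.severity or 0) < max_severity
        for item in result.categories_analysis
    )

llm_response = "...text returned by the model..."
if not is_safe(llm_response):
    llm_response = "Sorry, this response was withheld by a safety check."
```

In a production pipeline, a check like this would sit between the model and the user, blocking or regenerating flagged responses.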
Soon, Azure’s AI platform will also offer safety system message templates that steer models toward safer outputs (sketched below), along with Groundedness Detection, which flags outputs that aren’t supported by the underlying source material, in other words, hallucinations. Microsoft plans to enable these safety features automatically for GPT-4 models, but users of other LLMs may need to turn them on manually. By focusing on safety, Microsoft hopes to avoid embarrassing mistakes with generative AI.
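To make the template idea concrete, here is what such a safety system message might look like in practice. The guardrail wording below is invented for illustration, not Microsoft’s actual template text, and the message structure assumes an OpenAI-style chat API:

```python
# An illustrative safety system message prepended to every request.
# The wording here is a stand-in; Azure's forthcoming templates will ship
# with Microsoft-authored text.
SAFETY_SYSTEM_MESSAGE = (
    "You are a helpful assistant. Refuse to produce hateful, violent, "
    "sexual, or self-harm content. If you are not certain a claim is "
    "supported by the provided documents, say so rather than guessing."
)

# Standard chat-message list for an OpenAI-style API; the system message
# steers the model before the user's request is considered.
messages = [
    {"role": "system", "content": SAFETY_SYSTEM_MESSAGE},
    {"role": "user", "content": "Summarize the attached incident report."},
]
```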