Site icon WinCert

Microsoft is working on AI safety measures

<p>Advanced AI systems like ChatGPT are changing how we use the internet&comma; but sometimes not for the better&period; Microsoft&comma; which invested in OpenAI&comma; the company behind ChatGPT&comma; is now fully embracing this technology&period;<&sol;p>&NewLine;<p><img class&equals;"alignnone size-full wp-image-4207" src&equals;"https&colon;&sol;&sol;www&period;wincert&period;net&sol;wp-content&sol;uploads&sol;2021&sol;05&sol;network-3537401&lowbar;640&period;jpg" alt&equals;"" width&equals;"640" height&equals;"426" &sol;><&sol;p>&NewLine;<p>They&&num;8217&semi;re offering Copilot AI in their products and services&comma; allowing businesses to create their large language models &lpar;LLMs&rpar;&period; However&comma; these AI systems can still act unexpectedly&comma; known as &&num;8220&semi;hallucinations&period;&&num;8221&semi; To deal with this&comma; Microsoft is introducing new tools in its Azure AI platform to control AI behavior&period;<&sol;p>&NewLine;<p>Even after lots of testing&comma; developers are often surprised by what their AI systems do&period; They can still make mistakes&comma; like saying inappropriate things&comma; even if companies try to stop it&period; Microsoft calls these &&num;8220&semi;prompt injection attacks&period;&&num;8221&semi; To fight back&comma; the company is adding five new features to Azure AI Studio&period; Some are already available including Prompt Shield&comma; Risk and Safety Monitoring&comma; and Safety Evaluations&period; These tools help stop bad outputs and check for harmful content in real time&period;<&sol;p>&NewLine;<p>Soon&comma; Azure&&num;8217&semi;s AI platform will offer message templates to guide developers to safer outputs&period; They&&num;8217&semi;ll also add Groundedness Detection to make sure outputs make sense&period; Microsoft plans to automatically include these safety features in GPT-4 models&period; But for other LLMs&comma; users might need to add them manually&period; By focusing on safety&comma; Microsoft hopes to avoid embarrassing mistakes with generative AI&period;<&sol;p>&NewLine;

Exit mobile version