
Microsoft brings NPU optimized DeepSeek-R1 AI to Copilot+ PCs

Microsoft is set to integrate "NPU-optimized" versions of the DeepSeek-R1 AI model into Copilot+ PCs, starting with Snapdragon X devices. Support for Intel Lunar Lake and AMD Ryzen AI 9 processors will follow. The first release, DeepSeek-R1-Distill-Qwen-1.5B, will be available to developers via the Microsoft AI Toolkit, with larger 7B and 14B models coming later.

Copilot+ PCs must meet specific hardware requirements, including at least 256GB of storage, 16GB of RAM, and an NPU capable of 40 TOPS. Older NPU-equipped devices may not be compatible.

"These optimized models enable developers to build AI-powered applications that run efficiently on-device, leveraging the powerful NPUs in Copilot+ PCs," Microsoft stated. The company highlighted its advancements in efficient inferencing through its Phi Silica work, reducing power consumption while maintaining performance. The models also use the Windows Copilot Runtime (WCR) with the ONNX QDQ format for scalability across Windows devices.

Microsoft detailed the optimizations that make DeepSeek-R1 models work seamlessly on local hardware. A sliding window design improves speed and extends context support, while the 4-bit QuaRot quantization scheme enhances low-bit processing efficiency.

The 1.5B model will soon be available via the AI Toolkit extension in VS Code, allowing developers to experiment locally. Additionally, Microsoft is making DeepSeek-R1 accessible through Azure AI Foundry, offering a secure and scalable cloud platform for enterprise use.

Meanwhile, OpenAI has accused DeepSeek of using stolen proprietary code to develop its AI, reportedly built for under $10 million, far less than the billions spent by US firms.
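As a rough illustration of the on-device workflow described above, the following minimal Python sketch shows how a quantized ONNX model might be loaded with ONNX Runtime, preferring an NPU-backed execution provider (the QNN provider used on Snapdragon hardware) and falling back to the CPU. The model filename is a placeholder and this is not Microsoft's published AI Toolkit sample; the actual DeepSeek-R1 workflow on Copilot+ PCs may differ.

```python
# Hypothetical sketch: load a quantized (ONNX QDQ) model and prefer an
# NPU-capable execution provider if one is present.
import onnxruntime as ort

MODEL_PATH = "deepseek-r1-distill-qwen-1.5b-qdq.onnx"  # placeholder filename

# On Snapdragon X devices the Qualcomm NPU is exposed via the QNN
# execution provider; otherwise fall back to the CPU provider.
available = ort.get_available_providers()
providers = (
    ["QNNExecutionProvider", "CPUExecutionProvider"]
    if "QNNExecutionProvider" in available
    else ["CPUExecutionProvider"]
)

session = ort.InferenceSession(MODEL_PATH, providers=providers)
print("Active execution providers:", session.get_providers())
```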
