Intel FPGAs Infuse The Azure Cloud With Magic Of AI | EngineersGarage


Intel FPGAs Infuse The Azure Cloud With Magic Of AI

Submitted By: Shreepanjali Mod
Microsoft introduced the Project Brainwave-powered Azure Machine Learning Hardware Accelerated Models at the Microsoft Build conference held on 7th May, 2018. The Hardware Accelerated Models ship as a preview through the Microsoft Azure Machine Learning SDK. They give customers access to some of the best artificial intelligence (AI) inferencing performance in the industry, letting them run their own models on Azure's grand-scale Intel FPGA deployments.
 
Fig.1 : Image Source : Microsoft-Azure
 
As Daniel McNamara, corporate vice president and general manager of Intel Corporation's Programmable Solutions Group, puts it, "We are an integral technology provider for enabling AI through our deep collaboration with Microsoft. AI has the potential for a wide range of user scenarios from training to inference, language recognition to image analysis, and Intel has the widest portfolio of hardware, software and tools to enable this full spectrum of workloads." In simple words, the Azure Machine Learning Hardware Accelerated Models will allow developers and data scientists to run deep neural networks for a wide range of real-time workloads in industries like healthcare, retail, and manufacturing, over the largest accelerated cloud on the planet. They will also help customers train models, deploy them on Project Brainwave, and leverage Intel's FPGAs on the edge or inside the cloud.
 
Fig.2 : Image Source : Brainwave Board
 
Project Brainwave is the key to unleashing the vast potential of AI in the future. It does so through programmable hardware, using Intel FPGAs capable of delivering real-time AI. Notably, the FPGA-based architecture is far more power-efficient and cost-effective than alternatives. It can run a DNN like ResNet-50, which needs approximately 8 billion calculations per inference, without any batching. AI customers no longer have to make a hard choice between low cost and high performance.
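A back-of-the-envelope calculation shows why batch-free, real-time inference is plausible here. The sketch below takes the article's figure of roughly 8 billion operations per ResNet-50 inference and an assumed sustained throughput of about 39.5 teraflops, the figure Microsoft publicly reported for Brainwave running on an Intel Stratix 10 FPGA; both numbers are illustrative, not measurements.

```python
# Back-of-the-envelope: why FPGA inference can be real-time without batching.
# Assumptions: ~8 billion operations per ResNet-50 inference (per the article)
# and ~39.5 teraflops sustained, the throughput Microsoft reported for
# Project Brainwave on a Stratix 10 FPGA.

OPS_PER_INFERENCE = 8e9      # ~8 billion calculations per image (ResNet-50)
PEAK_OPS_PER_SEC = 39.5e12   # assumed sustained FPGA throughput (ops/s)

# With the whole device dedicated to one request, latency is just ops/rate.
latency_s = OPS_PER_INFERENCE / PEAK_OPS_PER_SEC

# Batch size 1: each image is processed alone, so single-stream throughput
# is simply the reciprocal of the per-image latency.
images_per_sec = 1.0 / latency_s

print(f"Per-image latency: {latency_s * 1e3:.2f} ms")
print(f"Single-stream throughput: {images_per_sec:.0f} images/s")
```

Sub-millisecond latency at batch size 1 is the point: GPUs typically reach their peak throughput only by batching many requests together, which adds queuing delay, whereas the FPGA pipeline keeps latency low for each individual request.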
 