

Intel Brings Out AI Camera, Xeon Scalable Optimization And FPGA-Based Acceleration For Deep Learning At Baidu Create

Submitted By: Shreepanjali Mod

Recently, at Baidu Create held in Beijing, an Intel vice president announced a series of collaborations with Baidu on AI (Artificial Intelligence). These include Intel powering Baidu's Xeye, an advanced AI retail camera built around an Intel Movidius Vision Processing Unit (VPU), which highlights the companies' plans to accelerate AI workloads. FPGA-based workload acceleration will be offered as a service using Intel FPGAs, and PaddlePaddle, Baidu's deep learning framework, has been optimised for Intel Xeon Scalable processors.

 

Fig. 1: Baidu AI
(Image source: Intel Newsroom)

As Gadi Singer, vice president and Architecture General Manager of Intel's Artificial Intelligence Products Group, puts it, "From enabling in-device intelligence, to providing data centre scale on Intel Xeon Scalable processors, to accelerating workloads with Intel FPGAs, to making it simpler for PaddlePaddle developers to code across platforms, Baidu is taking advantage of Intel's products and expertise to bring its latest AI advancements to life."

 The Baidu’s Xeye  camera makes use of Intel Movidius Myriad to VPUs For delivering high performance and low power visual intelligence for retailers.  The  credit for this goes to  Intel’s  purpose-built VPU  solutions teamwork Baidu’s  highly developed machine learning algorithms,  this camera is capable of analysing gestures and objects while detecting individuals to offer personalised shopping experiences in retail settings.

Baidu is currently developing a fully heterogeneous computing platform built on Intel's latest Field Programmable Gate Array (FPGA) technology. Intel FPGAs can improve both performance and energy efficiency while adding extra flexibility to data centre workloads. This would also allow workload acceleration to be offered as a service over Baidu Cloud.

With PaddlePaddle now optimised for Intel Xeon Scalable processors, data scientists and developers can build their AI algorithms on the same hardware that powers data centres and clouds across the world. PaddlePaddle has been optimised for Intel technologies at several levels, including communication, architecture, memory, and computation.
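As a rough illustration of what running PaddlePaddle on Xeon-class CPUs looks like from the developer's side, the sketch below trains a tiny placeholder network on the CPU device, where PaddlePaddle's Intel-optimised (MKL-DNN/oneDNN) kernels are used when available. The network, data, and hyperparameters are assumptions chosen only for the example, not Baidu's models.

    # Minimal sketch (assumed model and data): a tiny PaddlePaddle classifier
    # trained for one step on the CPU device (e.g. a Xeon Scalable processor).
    import paddle
    import paddle.nn as nn

    paddle.set_device("cpu")  # target CPUs rather than a GPU

    class TinyClassifier(nn.Layer):
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(784, 128)
            self.act = nn.ReLU()
            self.fc2 = nn.Linear(128, 10)

        def forward(self, x):
            return self.fc2(self.act(self.fc1(x)))

    model = TinyClassifier()
    opt = paddle.optimizer.Adam(parameters=model.parameters(), learning_rate=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # One synthetic training step as a smoke test.
    x = paddle.randn([32, 784])
    y = paddle.randint(0, 10, [32])
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    opt.clear_grad()
    print("loss:", float(loss))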

The two companies are also looking into the integration of PaddlePaddle with nGraph, a framework-neutral DNN (Deep Neural Network) model compiler capable of targeting a wide range of devices.
