In real-world applications, that probably translates to a power savings of 90 to 99 percent, which could make voice control practical for relatively simple electronic devices. That includes power-constrained devices that have to harvest energy from their environments or go months between battery charges.
Such equipment forms the technological backbone of what is called the ‘Internet of Things,’ or IoT, the idea that automobiles, civil-engineering structures, appliances, livestock, and even manufacturing equipment will soon have sensors that report data directly to networked servers, aiding with maintenance and the coordination of tasks.
“Speech input will become a natural interface for many wearable applications and intelligent devices,” says Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT, whose group developed the new chip. “The miniaturization of these devices will require a different interface than keyboard or touch. It will be critical to embed the speech functionality locally to save system energy consumption compared to performing this operation in the cloud.”
“I don’t think that we really developed this technology for a particular application,” adds Michael Price, who led the design of the chip as an MIT graduate student in electrical engineering and computer science and now works for the chipmaker Analog Devices. “We have tried to put the infrastructure in place to offer better trade-offs to a system designer than they would have had with previous technology, whether it was hardware or software acceleration.”
Price, Chandrakasan, and Jim Glass, a senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, described the new chip in a paper Price presented at the International Solid-State Circuits Conference.

A node in the middle of a neural network might receive data from a dozen other nodes and transmit data to another dozen. Each of those two dozen connections has an associated ‘weight,’ a number that indicates how prominently data sent across it should factor into the receiving node’s computations.
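The node computation described above can be sketched in a few lines of Python. This is an illustrative example, not the chip's actual arithmetic; the sigmoid activation and the specific values are assumptions for the sake of the sketch.

```python
import math

def node_output(inputs, weights, bias=0.0):
    """One neural-network node: multiply each incoming value by the
    weight on its connection, sum the results, and squash the total
    with a sigmoid activation so the output falls between 0 and 1."""
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))

# A dozen incoming connections, each with its own weight
# (uniform values chosen purely for illustration).
incoming = [0.5] * 12
weights = [0.1] * 12
result = node_output(incoming, weights)
```

A real network layers thousands of such nodes, which is why fetching each connection's weight from off-chip memory dominates the energy budget.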
The first step in reducing the new chip’s memory bandwidth is to compress the weights associated with each node. The data are decompressed only after they’re brought on-chip. The chip also exploits the fact that, with speech recognition, wave upon wave of data must pass through the network. The incoming audio signal is split into 10-millisecond increments, each of which must be evaluated separately. The MIT researchers’ chip brings in a single node of the neural network at a time, but it passes the data from 32 consecutive 10-millisecond increments through it.
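The bandwidth-saving idea can be made concrete with a short sketch: fetch one node's weights once, then reuse them across 32 consecutive frames before moving on to the next node, instead of re-fetching every weight for every frame. This is a simplified model of the scheme the article describes, not the chip's implementation; the layer sizes and values are assumptions.

```python
FRAMES_PER_PASS = 32  # 32 consecutive 10-ms increments, per the article

def process_layer(frames, layer_weights):
    """frames: list of input vectors, one per 10-ms increment.
    layer_weights: one weight vector per node in the layer.
    Weights for each node are 'fetched' once (outer loop) and reused
    across all frames (inner loop), cutting memory traffic by ~32x."""
    outputs = [[0.0] * len(layer_weights) for _ in frames]
    for node_idx, weights in enumerate(layer_weights):   # one fetch per node
        for t, frame in enumerate(frames):               # reuse across frames
            outputs[t][node_idx] = sum(x * w for x, w in zip(frame, weights))
    return outputs

# 32 frames of 4 features each, and a 3-node layer: the weights are
# loaded 3 times rather than 3 * 32 times.
frames = [[1.0, 0.0, 1.0, 0.0]] * FRAMES_PER_PASS
layer = [[0.5, 0.5, 0.5, 0.5],
         [1.0, 1.0, 1.0, 1.0],
         [0.0, 0.0, 0.0, 0.0]]
out = process_layer(frames, layer)
```

The trade-off is latency: results for a frame are only available once its whole 32-frame batch has been pushed through, which is acceptable for speech because the increments arrive as a continuous stream anyway.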
The research was funded through the Qmulus Project, a joint venture between MIT and Quanta Computer, and the chip was prototyped through the Taiwan Semiconductor Manufacturing Company’s University Shuttle Program.