Amazon releases AZ1 Neural Edge Processor

On September 24th, Amazon held its annual hardware event, introducing its new generation of smart home hubs, speakers and other gadgets. Perhaps the most striking, though not unprecedented, announcement was Amazon's first Neural Edge Processor*, AZ1.

* A Neural Processor is a hardware platform specifically designed to run neural networks. An Edge Processor is a hardware platform designed to perform computations locally, where the data is gathered, rather than sending the data to a cloud server to be processed there.

It appears that Amazon, too, is moving toward keeping as much computation at the edge as possible, instead of processing every query in the cloud. The AZ1 Neural Edge Processor is designed to run deep learning inference quickly and efficiently on the device itself. Amazon has collaborated with MediaTek to integrate AZ1 at the silicon level with MediaTek's high-performance MT8512 chip, which powers the new generation of Amazon Echo smart speakers. Amazon has also developed a speech recognition neural network model that runs efficiently on the AZ1, which raises expectations even higher for the newly released gadgets that will carry this processor.

Speaking of MediaTek, they're one of the largest IC design companies in the world (right after Broadcom, Qualcomm and NVIDIA). They provide software and hardware solutions for a wide range of applications, from wearable gadgets to automotive, and they are especially well known for their chipsets used in voice assistant devices. Such devices are today often the heart, or rather the "hub", of a smart home, where "things" in the house can be controlled via voice commands. MediaTek's technology is the physical hardware brain behind Amazon's Alexa devices as well as many high-end smart TVs. Since the announcement of their collaboration with Amazon around AZ1, they seem to be heading toward deploying more edge computing within the smart home space, an area where most of the heavier data processing, e.g. speech recognition and computer vision, was traditionally done in the cloud but is now moving to the edge.

With smart home products gaining popularity, on-device computation will only continue to rise. Together with the ever-growing range of deep learning hardware for data processing, this will require smart home platforms to be capable of running neural networks while remaining low-cost and fast enough. The tricky part is customizing the product, i.e. making the hardware platform and the neural network models work well together, to meet the product requirements. This is why we have developed EmbeDL: optimizing deep learning networks, binding the model and hardware platform together, and making sure the product requirements are met. EmbeDL's technology bridges the gap between efficient edge/embedded/on-device processing and deep learning R&D teams, providing a smooth integration of the two worlds for our customers.

Stay tuned by signing up for our newsletter below!

Written by Elaheh Malekzadeh

Elaheh works as a deep learning engineer at EmbeDL. She graduated from Shiraz University in 2018 with a B.Sc. in electronics engineering, then pursued an M.Sc. in embedded systems at KTH in Sweden, with a focus on hardware and embedded platforms. She became interested in using deep learning within constrained embedded architectures and studied the fault tolerance of such systems in her master's thesis. At EmbeDL, Elaheh explores hardware options and implements solutions on processing platforms. In her spare time, she likes to go swimming, listen to a LOT of podcasts and explore the city of Gothenburg.

October 3, 2020