EmbeDL is a spin-out from Chalmers University of Technology and research continues to be at the heart of what we do. We are exploring many different areas related to our mission of bringing DL to embedded systems. Topics include deep learning compression, network architecture search, quantization strategies and hardware design.
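To give a flavour of one of these topics, the sketch below shows symmetric per-tensor int8 quantization, one of the simplest quantization strategies used to shrink deep learning models for embedded targets. This is an illustrative NumPy sketch, not EmbeDL's actual optimizer; the function names are ours.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map float weights
    onto the integer range [-127, 127] with a single scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover a float approximation of the original weights."""
    return q.astype(np.float32) * scale

# Toy weight tensor: quantize, dequantize, and check the error,
# which is bounded by half the scale step.
w = np.array([-0.5, 0.1, 0.25, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = np.max(np.abs(w - w_hat))
```

Storing `q` (one byte per weight) plus a single float scale cuts the weight memory by roughly 4x versus float32, at the cost of a rounding error of at most half the quantization step.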
A word from our Chief Scientist
We live in very exciting times! AlphaGo, Autonomous Driving and GPT-3 have grabbed the headlines in dailies around the world and captured the public’s imagination about the wonders of AI. But we are just at the beginning! When the Internet-of-Things (IoT) revolution gets fully under way, the power of AI will be unleashed in smart devices that will surround us in everyday life.
For us in the academic research world, there is a huge paradigm shift underway. Previously, machine learning, programming languages and computer architecture were separate sub-disciplines of Computer Science and Engineering that hardly interacted with one another. But these new AI systems call for a tight interconnection between these previously disparate subfields: AI algorithms are being optimized for different hardware using techniques from programming-language compilers, and new hardware is being developed to accelerate AI. All these sub-fields are co-evolving with each other at breakneck speed!
At EmbeDL we will keep abreast of the latest developments in these technologies through tight coupling to world-leading research groups in these subfields at Chalmers and other leading European research institutes, and we will bring those cutting-edge research ideas to life in practical innovations that will impact the world.
Chief Scientist @ EmbeDL &
Professor @ Chalmers University of Technology
Due to fundamental physical limits on transistor scaling at the atomic scale, coupled with the heat-density problems of packing an ever-increasing number of transistors into a unit area, Moore's Law has slowed down.
The primary ambition of the LEGaTO project is to address this challenge by starting from a mature, Made-in-Europe software stack and optimizing it to support energy-efficient computing on a commercial, cutting-edge, European-developed heterogeneous hardware substrate of CPUs, GPUs, FPGAs and FPGA-based Dataflow Engines (DFEs), targeting an order-of-magnitude increase in energy efficiency.
In this new research project we will continue to push the boundaries of making Deep Learning more efficient in embedded systems. More information will be added soon, so stay tuned!
LV-EmbeDL will extend EmbeDL's Deep Learning optimizer with BSC's aggressive low-energy technology to maximize the energy efficiency of embedded FPGA-based DL systems.
Are you currently preparing a research application?
We are always interested in joining exciting research consortia.