Call for Papers: TECS Special Issue on Accelerating AI on the Edge

ACM Transactions on Embedded Computing Systems
Special Issue on Accelerating AI on the Edge

Guest Editors
Muhammad Shafique, NYU Abu Dhabi, United Arab Emirates
Theocharis Theocharides, University of Cyprus, Cyprus
Hai (Helen) Li, Duke University, USA
Chun Jason Xue, City University of Hong Kong, Hong Kong

Important Dates
Submission deadline: June 1st, 2021
First-round review decisions: August 15th, 2021
Deadline for revision submissions: September 15th, 2021
Notification of final decisions: November 1st, 2021
Tentative publication: Spring 2022

The need for real-time intelligent data analytics and decision support near the data acquisition points calls for revolutionizing the way we design, build, test, and verify the processors, accelerators, and systems that facilitate machine learning (and deep learning in particular) in resource-constrained environments at the edge and in the fog. To facilitate AI at the edge, we need to re-focus on problems such as design, verification, architecture, scheduling and allocation policies, optimization, and many more, to determine the most efficient way to implement these novel applications within a resource-constrained system, which may or may not be connected. Acceleration of AI at the edge is therefore a fast-growing field of machine learning technologies and applications, spanning algorithms, hardware, and software capable of performing on-device analytics of sensor data (vision, audio, IMU, biomedical, etc.) at extremely low power, typically in the mW range and below, thereby enabling a variety of always-on use cases on battery-operated devices.

There is growing momentum, demonstrated by technical progress and ecosystem development: (i) intelligent edge-based systems are becoming “good enough” for many commercial applications, with new systems on the horizon; (ii) significant progress is being made on algorithms, networks, and models down to 100 kB and below; and (iii) initial low-power applications in vision and audio are becoming mainstream and commercially available. This special issue therefore targets research at the intersection of AI/machine learning applications, algorithms, software, and hardware in deeply embedded machine learning systems.