Tianlong Chen (陈天龙)

What does not kill you makes you stronger

(ECCV 2020) HALO, Hardware-Aware Learning to Optimize

[Paper] [Code]


There has been an explosive demand for bringing machine learning (ML) powered intelligence into numerous Internet-of-Things (IoT) devices. However, the effectiveness of such intelligent functionality requires in-situ continuous model adaptation to new data and environments, while on-device computing and energy resources are usually extremely constrained. Neither traditional hand-crafted optimizers (e.g., SGD, Adagrad, and Adam) nor existing meta optimizers are specifically designed to meet those challenges, as the former require tedious hyper-parameter tuning while the latter are often costly due to the meta algorithms' own overhead. To this end, we propose hardware-aware learning to optimize (HALO), a practical meta optimizer dedicated to resource-efficient on-device adaptation. Our HALO optimizer features the following highlights: (1) faster adaptation speed (i.e., taking fewer data or iterations to reach a specified accuracy) by introducing a new regularizer to promote empirical generalization; and (2) lower per-iteration complexity, thanks to a stochastic structural sparsity regularizer being enforced. Furthermore, the optimizer itself is designed as a very lightweight RNN and thus incurs negligible overhead. Ablation studies and experiments on five datasets, six optimizees, and two state-of-the-art (SOTA) edge AI devices validate that, while always achieving better accuracy (↑0.46% - ↑20.28%), HALO can greatly trim down the energy cost (up to ↓60%) in adaptation, quantified using an IoT device or SOTA simulator. Code and pre-trained models are at https://github.com/RICE-EIC/HALO.
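To give a feel for the learning-to-optimize idea behind HALO, here is a minimal NumPy sketch (not the authors' code) of a coordinatewise learned optimizer: a tiny RNN, shared across all optimizee parameters, maps each parameter's gradient to an update while carrying a per-coordinate hidden state. The names (`rnn_opt_step`, `HIDDEN`) and the randomly initialized RNN weights are illustrative placeholders; in HALO the optimizer RNN would be meta-trained, with additional regularizers promoting generalization and structural sparsity.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8  # hypothetical hidden size of the lightweight optimizer RNN

# Optimizer RNN weights, shared across all coordinates of the optimizee.
# Random placeholders here; in practice these are meta-trained.
W_in = rng.normal(scale=0.1, size=(HIDDEN, 1))
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_out = rng.normal(scale=0.1, size=(1, HIDDEN))


def rnn_opt_step(grad, hidden):
    """One optimizer step: gradient in, parameter update out.

    grad:   (n,) gradient of the optimizee loss
    hidden: (n, HIDDEN) per-coordinate recurrent state
    """
    pre = grad[:, None] @ W_in.T + hidden @ W_h.T
    hidden = np.tanh(pre)
    update = (hidden @ W_out.T).squeeze(-1)
    return update, hidden


# Toy optimizee: f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x = rng.normal(size=5)
h = np.zeros((5, HIDDEN))
for _ in range(10):
    g = x  # gradient of 0.5 * ||x||^2
    upd, h = rnn_opt_step(g, h)
    x = x + upd  # learned update replaces a hand-crafted -lr * g
```

Because the optimizer is a small shared network applied coordinatewise, its own cost stays negligible relative to the optimizee, which is the property the abstract highlights for on-device adaptation.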