FoggyCache: Cross-Device Approximate Computation Reuse

Abstract

Mobile and IoT scenarios increasingly involve sophisticated contextual sensing and recognition. These are often computation intensive and latency sensitive. While existing approaches revolve around computation offloading or on-device optimization, we pursue opportunities across devices. In this paper, we observe that the same application is often invoked on multiple devices in close proximity, such as voice-driven virtual assistants in smart homes. Moreover, the application instances often process similar contextual data that maps to the same outcome. This presents optimization opportunities based on eliminating such redundancy. Therefore, we propose approximate computation reuse across devices, which minimizes redundant computation by harnessing the equivalence between different input values and reusing previously computed outputs with high confidence. For this, we propose adaptive locality sensitive hashing (A-LSH) and homogenized $k$ nearest neighbors (H-kNN) techniques that address the practical challenges of approximate computation reuse. We further incorporate approximate computation reuse as a service, called FoggyCache, in the computation offloading runtime, with a two-level cache structure that spans the nearby server as well as the local devices. Extensive evaluation shows that FoggyCache consistently harnesses over 90% of all reuse opportunities, which reduces computation latency and energy consumption by a factor of 2 to 5 while incurring at most a 5% accuracy penalty.
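The core lookup mechanics described above can be illustrated with a deliberately simplified sketch: inputs are hashed into LSH buckets, and a cached output is reused only when the query's $k$ nearest neighbors within the bucket agree strongly enough. All names and parameters below are illustrative, not the paper's implementation; in particular, the hash granularity here is fixed rather than adaptive (unlike A-LSH), and the homogeneity check is reduced to a plain majority-fraction threshold (a stand-in for H-kNN).

```python
import math
import random


class ApproxReuseCache:
    """Illustrative sketch of approximate computation reuse:
    random-hyperplane LSH buckets + a kNN agreement check."""

    def __init__(self, dim, n_planes=8, k=3, theta=0.7, seed=0):
        rng = random.Random(seed)
        # Random hyperplanes define the locality-sensitive hash.
        self.planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)]
                       for _ in range(n_planes)]
        self.k = k          # neighbors consulted per lookup
        self.theta = theta  # min fraction that must agree on one output
        self.buckets = {}   # hash key -> list of (features, output)

    def _key(self, x):
        # Sign pattern of the projections onto each hyperplane.
        return tuple(1 if sum(p_i * x_i for p_i, x_i in zip(p, x)) > 0 else 0
                     for p in self.planes)

    def insert(self, x, output):
        """Cache a computed (input features, output) pair."""
        self.buckets.setdefault(self._key(x), []).append((list(x), output))

    def lookup(self, x):
        """Return a cached output if the k nearest neighbors in the query's
        bucket agree strongly enough; otherwise None (recompute)."""
        candidates = self.buckets.get(self._key(x), [])
        if not candidates:
            return None
        candidates = sorted(candidates, key=lambda e: math.dist(e[0], x))
        top = [out for _, out in candidates[: self.k]]
        best = max(set(top), key=top.count)
        if top.count(best) / len(top) >= self.theta:
            return best  # homogeneous neighborhood: reuse with confidence
        return None      # ambiguous neighborhood: fall back to computing
```

In this sketch, a lookup that lands in an empty bucket, or in one whose nearest neighbors disagree, returns `None` so the caller recomputes and inserts the fresh result; only a sufficiently unanimous neighborhood yields a reused output.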

Publication
In Proceedings of the 24th Annual ACM International Conference on Mobile Computing and Networking (MobiCom)
Date
2018