Jellyfish: Timely Inference Serving for Dynamic Edge Networks

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review


Abstract

While high accuracy is of paramount importance for deep learning (DL) inference, serving inference requests on time is equally critical but has not been carefully studied, especially when requests must be served over a dynamic wireless network at the edge. In this paper, we propose Jellyfish—a novel edge DL inference serving system that achieves soft guarantees on end-to-end inference latency, often specified as a service-level objective (SLO). To handle network variability, Jellyfish exploits both data and deep neural network (DNN) adaptation to trade off accuracy against latency. Jellyfish features a new design that enables collective adaptation policies, where the decisions for data and DNN adaptations are aligned and coordinated among multiple users with varying network conditions. We propose efficient algorithms to dynamically adapt DNNs and map users, so that we fulfill latency SLOs while maximizing the overall inference accuracy. Our experiments based on a prototype implementation and real-world WiFi and LTE network traces show that Jellyfish can meet latency SLOs at around the 99th percentile while maintaining high accuracy.
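The accuracy–latency tradeoff the abstract describes can be sketched as a simple per-user selection problem. The following is a minimal illustrative sketch, not Jellyfish's actual algorithm: the profile table, the SLO value, and the latency model (upload time plus inference time) are all hypothetical assumptions for exposition. Jellyfish's real policy additionally coordinates the data/DNN choices across multiple users.

```python
# Illustrative sketch (NOT the paper's algorithm): for one user, pick the most
# accurate (input resolution, DNN variant) pair whose estimated end-to-end
# latency (upload + inference) fits the latency SLO.

SLO_MS = 100.0  # hypothetical end-to-end latency objective, in milliseconds

# Hypothetical profiles: (accuracy, payload size in kilobits at that input
# resolution, inference time in ms of the matching DNN variant),
# sorted from most to least accurate.
PROFILES = [
    (0.78, 1200.0, 40.0),  # high-res input, large DNN
    (0.72,  600.0, 25.0),  # medium input, medium DNN
    (0.65,  300.0, 15.0),  # low-res input, small DNN
]

def choose_profile(bandwidth_kbps: float):
    """Return (accuracy, estimated latency in ms) of the most accurate
    profile that meets the SLO under the estimated uplink bandwidth,
    falling back to the cheapest profile when none fits."""
    for acc, payload_kbits, infer_ms in PROFILES:
        upload_ms = payload_kbits / bandwidth_kbps * 1000.0
        if upload_ms + infer_ms <= SLO_MS:
            return acc, upload_ms + infer_ms
    acc, payload_kbits, infer_ms = PROFILES[-1]
    return acc, payload_kbits / bandwidth_kbps * 1000.0 + infer_ms

print(choose_profile(20000.0))  # fast link: the large DNN fits the SLO
print(choose_profile(5000.0))   # slow link: falls back to a smaller option
```

The key point this sketch captures is that as the measured bandwidth drops, the system degrades accuracy gracefully (smaller input, smaller DNN) rather than missing the latency SLO.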
Original language: English
Title of host publication: 2022 IEEE Real-Time Systems Symposium (RTSS)
Subtitle of host publication: [Proceedings]
Publisher: IEEE
Number of pages: 15
ISBN (Electronic): 9781665453462
ISBN (Print): 9781665453479
Publication status: Published - 2022
