Abstract
Performance variability has been acknowledged as a problem for over a decade by cloud practitioners and performance engineers. Yet, our survey of top systems conferences reveals that the research community regularly disregards variability when running experiments in the cloud. Focusing on networks, we assess the impact of variability on cloud-based big-data workloads by gathering traces from mainstream commercial clouds and private research clouds. Our dataset consists of millions of datapoints gathered while transferring over 9 petabytes on cloud providers' networks. We characterize the network variability present in our data and show that, even though commercial cloud providers implement mechanisms for quality-of-service enforcement, variability still occurs, and is even exacerbated by such mechanisms and service-provider policies. We show how big-data workloads suffer from significant slowdowns and lack predictability and replicability, even when state-of-the-art experimentation techniques are used. We provide guidelines to reduce the volatility of big-data performance, making experiments more repeatable.
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 17th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2020 |
| Publisher | USENIX Association |
| Pages | 513-527 |
| Number of pages | 15 |
| ISBN (Electronic) | 9781939133137 |
| Publication status | Published - Feb 2020 |
| Event | 17th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2020 - Santa Clara, United States |
| Duration | 25 Feb 2020 → 27 Feb 2020 |
Conference

| Conference | 17th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2020 |
| --- | --- |
| Country/Territory | United States |
| City | Santa Clara |
| Period | 25/02/20 → 27/02/20 |