Keynote Speakers
Robert Ricci is a Research Associate Professor in the School of Computing at the University of Utah, leader of the CloudLab testbed, and one of the directors of the Flux Research Group. He works in computing infrastructure, including networking, distributed systems, cloud computing, and more. Infrastructure is a highly empirical field, requiring extensive implementation and experimentation, so he also works in experiment design and analysis, and in building testbeds for research; he has worked on Emulab and its successors, including CloudLab, parts of GENI, and Powder, since 2000. Because building on and comparing to existing systems is a fundamental part of the research process, he also works in research reproducibility.
Gathering Reliable Performance Measurements in the Cloud
Taking performance measurements in the cloud can be difficult: because the cloud is a shared, virtualized environment, there is necessarily interference from other tenants and there are artifacts from parts of the infrastructure that are hidden from users. Thus, it is important to design experiments well, so that they gather statistically meaningful results and minimize, or at least uncover, effects due to the cloud's implementation. This talk will cover some of the challenges associated with running experiments in the cloud, identifying specific pitfalls we have found in our own work, suggesting where others might be found, and offering recommendations for achieving reliable performance measurements.
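To illustrate one of the basic practices the abstract alludes to, the sketch below shows repeated trials summarized with a confidence interval rather than a single number, which helps expose run-to-run variability caused by other tenants or hidden infrastructure. It is a minimal, hypothetical example (the workload is simulated with random noise), not a method endorsed by the speaker.

```python
import random
import statistics

def run_benchmark() -> float:
    """Hypothetical workload under test; simulated here as a latency with noise."""
    return 1.0 + random.gauss(0, 0.05)

def summarize(trials: int = 30) -> tuple[float, float]:
    """Run repeated trials and report mean latency with a 95% CI half-width.

    Reporting an interval rather than a point estimate makes interference
    from the shared environment visible in the result itself.
    """
    samples = [run_benchmark() for _ in range(trials)]
    mean = statistics.mean(samples)
    # Normal approximation; a t-interval is safer for small sample sizes.
    half_width = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5
    return mean, half_width

if __name__ == "__main__":
    m, hw = summarize()
    print(f"mean latency: {m:.3f}s +/- {hw:.3f}s (95% CI)")
```

A wide interval relative to the effect being measured is itself a useful signal that more trials, or a different experimental design, is needed.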
Tim Brecht is an Associate Professor in the Cheriton School of Computer Science at the University of Waterloo. He has previously held positions as a Visiting Researcher at Netflix, a Visiting Scientist at IBM's Center for Advanced Studies, a Research Scientist with Hewlett Packard Labs, and a Visiting Professor at École Polytechnique Fédérale de Lausanne (EPFL). He is a past nominee for the 3M Outstanding Canadian Instructor Award and was a University of Waterloo Cheriton Faculty Fellow (2016-2019). His research interests include: empirically evaluating, understanding, and improving the performance of computer systems and networks; parallel and distributed computing; operating systems; and developing systems and devices to better support the Internet of Things.
Conducting Credible Performance Evaluations in Environments with Highly Variable Performance
High variability in the environments in which experiments are conducted presents significant challenges for performance evaluation. This talk will describe a framework for determining whether the methodologies used for performance comparisons will result in fair comparisons with credible conclusions (or not). Our framework is able to show that existing, widely used methodologies are flawed and can lead to invalid conclusions, motivating work on new methodologies. The Randomized Multiple Interleaved Trials (RMIT) methodology is designed explicitly for conducting fair comparisons in environments with high variability. RMIT passes our framework's tests for fairness and validity, and the talk concludes with an explanation of why it is critical to use RMIT when conducting empirical performance evaluations.
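The full definition of RMIT is given in the speaker's published work; the sketch below illustrates only the core idea the name suggests: running every alternative once per round, in a freshly randomized order, so that drifts or periodic interference in the environment are spread across all alternatives rather than biasing one of them. The alternatives and the run_trial function are hypothetical stand-ins.

```python
import random
import statistics

def run_trial(alternative: str) -> float:
    """Hypothetical measurement of one alternative; simulated with noise."""
    baselines = {"system_A": 1.00, "system_B": 0.95}
    return baselines[alternative] + random.gauss(0, 0.05)

def randomized_interleaved_trials(alternatives: list[str],
                                  rounds: int = 20) -> dict[str, list[float]]:
    """Sketch of randomized, multiple, interleaved trials.

    Each round runs every alternative exactly once, in a new random order,
    so environmental conditions are shared as evenly as possible.
    """
    results: dict[str, list[float]] = {a: [] for a in alternatives}
    for _ in range(rounds):
        order = alternatives[:]
        random.shuffle(order)          # fresh random order every round
        for alt in order:
            results[alt].append(run_trial(alt))
    return results

if __name__ == "__main__":
    data = randomized_interleaved_trials(["system_A", "system_B"])
    for name, samples in data.items():
        print(f"{name}: mean={statistics.mean(samples):.3f}, "
              f"stdev={statistics.stdev(samples):.3f}")
```

Interleaving within each round, rather than running all trials of one alternative back to back, is what protects the comparison from slow changes in the shared environment.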