Enabling Easy, Correct, and Reproducible Evaluation of Distributed Storage Systems With Evaluation as a Service
Conference: OSDI 2025 (USENIX Symposium on Operating Systems Design and Implementation)
My Role: Worked extensively on multiple components, focusing on the Result Producer, graph generation, workload execution, and performance evaluation.
Status: Submitted
Designed systematic methods to measure the latency and throughput of distributed storage systems, and created a comprehensive benchmarking framework for comparing different system implementations.
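To make the kind of measurement concrete, the following is a minimal sketch of a closed-loop latency/throughput harness of the sort such a framework automates. The `InMemoryStub` client and `run_closed_loop` helper are hypothetical illustrations for this page, not the system's actual API.

```python
# Minimal sketch of a closed-loop latency/throughput measurement, assuming a
# hypothetical key-value client interface (put/get). In the real framework,
# loops like this would run against deployed storage systems rather than an
# in-memory stub.
import statistics
import time


class InMemoryStub:
    """Hypothetical stand-in for a distributed storage client."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


def run_closed_loop(client, num_ops=10_000, value=b"x" * 128):
    """Issue num_ops put operations and record per-operation latency."""
    latencies = []
    start = time.perf_counter()
    for i in range(num_ops):
        op_start = time.perf_counter()
        client.put(f"key-{i}", value)
        latencies.append(time.perf_counter() - op_start)
    elapsed = time.perf_counter() - start

    latencies.sort()
    return {
        "throughput_ops_per_s": num_ops / elapsed,
        "mean_latency_ms": statistics.mean(latencies) * 1e3,
        "p50_latency_ms": latencies[int(0.50 * num_ops)] * 1e3,
        "p99_latency_ms": latencies[int(0.99 * num_ops)] * 1e3,
    }


if __name__ == "__main__":
    results = run_closed_loop(InMemoryStub())
    for metric, value in results.items():
        print(f"{metric}: {value:.3f}")
```

In the actual service, the raw samples from runs like this would feed into downstream components such as the Result Producer and graph generation.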
The automation improvements reduced experiment setup time from days to hours, eliminated manual errors in data collection, and ensured consistent results across repeated experimental runs.
This work enables researchers to evaluate and compare distributed storage systems more easily, advancing the field through reproducible and accessible evaluation methodologies.