SPECcast: A Methodology for Fast Performance Evaluation with SPEC CPU 2017 Multiprogrammed Workloads

Publication
International Conference on Parallel Processing
Date

Abstract

Performance comparison is a key task in computer architecture research. These evaluations may need to consider scenarios in which resources are shared concurrently by a wide range of application classes. In many cases, well-known benchmarking tools such as SPEC CPU do not provide evaluation metrics under such usage circumstances. Previous attempts to fill this gap with realistic workloads have relied on random combinations of applications, formulating performance comparison as a statistical task to reduce the population size. The computational cost of these approaches is substantial, given the large number of mixes required to achieve statistically meaningful results. In this paper, we present SPECcast, a methodology for the SPEC CPU2017 suite that circumvents this issue. The idea is to exploit the inner characteristics of each application to minimize the computational cost without degrading the statistical significance of the results. Using manual source-code annotation, we determine a small portion of each application, denoted the Region of Interest (ROI), that accurately resembles the characteristics of the whole program. Then, we develop synchronization mechanisms that can concurrently run any combination of applications on the cores of the system. This enables us to run multiprogrammed SPEC workloads ∼95% faster without losing statistical significance. We perform a detailed validation of the proposed methodology and describe three different use cases: fast performance evaluation of micro-architectural features (prefetching in this case) in real systems, system comparison, and application characterization using full-system simulation.
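The abstract does not specify how the ROI annotations and the synchronization mechanism are implemented. As an illustrative sketch only, the C fragment below shows one way ROI hooks could align the Regions of Interest of independently launched benchmarks, using a process-shared POSIX barrier placed in shared memory. The hook names (roi_begin, roi_end), the shared-memory object name /speccast_barrier, and the launcher-side initialization are assumptions made for illustration, not SPECcast's actual interface.

```c
/* Illustrative sketch only: hypothetical ROI hooks for aligning the Regions of
 * Interest of co-running benchmarks.  A separate launcher process is assumed to
 * have created "/speccast_barrier" and initialised the barrier for N processes
 * with pthread_barrierattr_setpshared(&attr, PTHREAD_PROCESS_SHARED). */
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

static pthread_barrier_t *speccast_barrier;

/* Inserted (via source annotation) right before each benchmark's ROI:
 * every process blocks here, so all ROIs in the mix start together. */
static void roi_begin(void)
{
    int fd = shm_open("/speccast_barrier", O_RDWR, 0600);
    speccast_barrier = mmap(NULL, sizeof(pthread_barrier_t),
                            PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    pthread_barrier_wait(speccast_barrier);
}

/* Inserted right after the ROI: waiting again keeps faster co-runners from
 * leaving the slower ones to finish their ROI on an otherwise idle machine. */
static void roi_end(void)
{
    pthread_barrier_wait(speccast_barrier);
}
```

Under these assumptions, each annotated benchmark would be built with -pthread (and -lrt on older glibc) and started by a launcher script that sizes the barrier to the number of applications in the mix.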