Many members of the computational science community recognize the need to improve our ability to assure the trustworthiness of computational results. At the same time, significant impediments make this goal difficult to realize. Through its reproducibility initiative, the SC conference series is committed to enabling better reproducibility of computational results.
SC16: The Beginning
SC16 introduced the first reproducibility efforts for the SC conference series technical papers program. This initial effort followed ACM's Result and Artifact Review and Badging strategy, asking volunteers to complete an appendix to their paper describing their software environment and computational experiments in enough detail that an independent person could replicate the results.
Thirty authors volunteered for the SC16 effort, nine submitted appendices, and from among those, three papers were chosen as finalists; one will be used in the 2017 Student Cluster Competition. Every paper that completed the appendix had it combined with the final paper in a single document and received the ACM “Artifacts Available” badge. All of these papers are available in the ACM Digital Library.
Reproducibility Initiative for SC17
The SC17 technical papers program is continuing and expanding SC16 efforts with a two-pronged approach:
- Artifact Description Appendix: We continue the SC16 effort to accept a short Artifact Description appendix. The appendix remains optional; however, it is required for consideration for Best Paper and Best Student Paper.
- Computational Results Analysis Appendix: We introduce a new voluntary process to further improve confidence in results, particularly those that cannot be replicated. This appendix discusses how the authors performed pre-, peri-, and post-execution analysis to better assure the trustworthiness of their results.
Computational Results Analysis: Addressing Boutique Environments
SC receives a significant number of submissions reporting results from leadership computing platforms or other boutique environments, where reviewers would be unable to replicate the results and even the authors themselves may be challenged to do so in the future.
In this setting, authors can improve confidence in results by other means. The computational results analysis appendix provides authors with an opportunity to discuss how confidence in their results can be increased even when the computational experiments cannot be rerun.
Approaches may include verification via the method of manufactured solutions; testing of preconditions, postconditions, and invariants related to the problem; or testing of other known properties, such as conservation principles. These steps may be executed before, during, or after the computational experiments, so that metadata about the code and results can be used in post-execution analysis to confirm that the experiment executed as expected. This provides extra assurance that the results are correct.
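To make this concrete, here is a minimal sketch in Python of one such check. It uses a toy 1D diffusion step whose periodic boundary conditions imply that the total "mass" of the field is conserved; the experiment records that invariant as metadata, and a separate post-execution step confirms it held without rerunning the computation. The solver, function names, and tolerances are illustrative assumptions, not taken from any SC submission.

```python
# Minimal sketch (illustrative only): record a conservation invariant as
# metadata during a run, then verify it in post-execution analysis.
import json
import numpy as np

def diffusion_step(u, alpha=0.25):
    """One explicit finite-difference diffusion step with periodic BCs.
    With periodic boundaries the update only redistributes values, so
    the total sum of u is conserved (up to floating-point rounding)."""
    return u + alpha * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

def run_experiment(n=256, steps=1000, seed=0):
    """Hypothetical 'experiment': evolve a random field, emit metadata."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    mass_before = float(u.sum())  # known invariant, captured pre-execution
    for _ in range(steps):
        u = diffusion_step(u)
    # Metadata travels with the results so the invariant can be checked
    # later, even if the original environment is no longer available.
    metadata = {"mass_before": mass_before, "mass_after": float(u.sum())}
    return u, metadata

def post_execution_check(metadata, rtol=1e-10):
    """Confirm from metadata alone that the conservation invariant held."""
    if not np.isclose(metadata["mass_before"], metadata["mass_after"], rtol=rtol):
        raise AssertionError("conservation invariant violated; results suspect")

u, md = run_experiment()
post_execution_check(md)
print(json.dumps(md))  # archive the metadata alongside the results
```

The same pattern extends to the other approaches named above: with a manufactured solution, for example, the known analytic answer is embedded in the problem setup and compared against the computed result in the post-execution step.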
Engaging the Student Cluster Competition
As with SC16, we will engage students in the reproducibility effort. We will continue to use Artifact Description appendices in the Student Cluster Competition, and we will add an effort related to Computational Results Analysis.