Replicability Label

This year, ACM WiSec again continues the effort pioneered in 2017 to support greater “reproducibility” in mobile and wireless security experimental research. The goal of this process is to increase the impact of mobile and wireless security research, to support the dissemination of research results, code, and experimental setups, and to enable the research community to build on prior experimental results. We recognize papers whose experimental results were replicated by an independent committee and grant a “replicability label” in accordance with the terminology defined by ACM.

Authors of accepted papers can participate in this voluntary process by submitting supporting evidence of their experiments' replicability, following the instructions below. Authors are encouraged to plan ahead when running their experiments, in order to minimize the overhead of applying for this label.

To apply for the replicability label, the authors must:

  • Prepare a VirtualBox VM with all data and tools installed. The VM is expected to contain the raw data (without any pre-processing) and all scripts used for pre-processing.

  • For each graph/table, provide a directory (Fig_XXX, Table_XXX) that contains a script enabling the committee to regenerate that object (an illustrative directory and script are sketched after this list).

  • Include a README file in the home directory, following this format template: README.txt .

  • Provide a download link for the VM (e.g. Google Drive or Dropbox), or request credentials to upload the VM to the conference storage system.

  • Submit the result on our replicability HotCRP submission system.
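For illustration only, a per-figure directory such as Fig_1 might contain a small script like the sketch below, which loads the raw data shipped in the VM, applies the pre-processing, and rewrites the figure. All file, column, and output names here (raw/measurements.csv, Fig_1.pdf, etc.) are placeholders and not part of the official instructions; any layout is acceptable as long as the committee can regenerate each object by running the script.

    #!/usr/bin/env python3
    # Illustrative sketch only: file and column names are placeholders.
    import pandas as pd
    import matplotlib.pyplot as plt

    RAW_DATA = "raw/measurements.csv"   # raw data shipped in the VM, no pre-processing applied
    OUTPUT = "Fig_1.pdf"                # object this directory is responsible for regenerating

    def preprocess(df: pd.DataFrame) -> pd.DataFrame:
        """Example pre-processing: drop incomplete rows, average per configuration."""
        df = df.dropna()
        return df.groupby("configuration", as_index=False)["throughput_mbps"].mean()

    def main() -> None:
        df = preprocess(pd.read_csv(RAW_DATA))
        fig, ax = plt.subplots()
        ax.bar(df["configuration"], df["throughput_mbps"])
        ax.set_xlabel("Configuration")
        ax.set_ylabel("Throughput (Mbps)")
        fig.savefig(OUTPUT, bbox_inches="tight")
        print(f"Regenerated {OUTPUT} from {RAW_DATA}")

    if __name__ == "__main__":
        main()

With such a layout, a reviewer could simply run the script inside the VM (for example, python3 Fig_1/regenerate.py) and compare the regenerated figure against the one in the paper.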

We encourage you to also release your code (e.g. on GitHub) and data (e.g. on Zenodo) independently of the submitted VM. If you do so, feel free to submit links to these releases together with the VM.

The deadline is May 24, 2020 (23:59 AoE).

If the committee can verify that all relevant datasets are included and that the graphs/tables can be regenerated from them, it will grant the replicability label and provide a report on the regeneration process.

Replicability committee

  • Aanjhan Ranganathan, Northeastern University, USA (co-chair)
  • Yao Zheng, University of Hawaiʻi at Mānoa, USA (co-chair)
  • Hongyu Jin, KTH Royal Institute of Technology, Sweden
  • Mohammad Khodaei, KTH Royal Institute of Technology, Sweden
  • Max Maaß, Technische Universität Darmstadt, Germany
  • Sashank Narain, UMass Lowell, USA
  • Harshad Sathaye, Northeastern University, USA

Review process

The authors upload a VM containing all the data and code required to replicate the results. For each submission, at least two reviewers are asked to replicate the results. We ensure that the VMs are self-contained to the maximum extent possible, to avoid future version-deprecation and backward-compatibility issues. The reviewers clarify any issues directly with the authors (and may request updates to the original VM and code).

Publication of data and reproducibility environments

The VMs will be made available, as always, on the ACM WiSec datasets server at wisecdata.ccs.neu.edu after the conference.

Replicable publications

The following papers are awarded the ACM badges for:

Artifacts Evaluated – Functional
The artifacts associated with the research are found to be documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.
Artifacts Available
Author-created artifacts relevant to this paper have been placed on a publicly accessible archival repository. A DOI or link to this repository along with a unique identifier for the object is provided.
Results Reproduced
The main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author.