Digital Threats: Research and Practice (DTRAP)


DTRAP Frequently Asked Questions

What is this FAQ?

This FAQ serves as a guide for DTRAP authors who are unfamiliar with the concepts implemented in the journal’s public reviewer rubric. The rubric communicates the expectations of the Editorial Board with respect to the standard of acceptability for publication in the journal. There is no reviewer score threshold above which acceptance for publication is guaranteed, nor is there a lower bound that guarantees rejection. Rather, the public reviewer rubric is in place to focus reviewer comments and to remind submitting authors that these specific elements should be satisfied in submitted manuscripts.

What is the relationship between the rubric categories?

Roughly, “relevance” and “novelty/utility” are measures of the quality of the research question. Higher-quality research questions are more relevant, and the corresponding research is more novel or useful.

“Internal validity,” “external validity,” “containment,” and “transparency” are measures of the quality of the research itself. Often, well-done research reaches an unsurprising or negative result. For the advancement of the field, these results are just as valuable as an exciting or surprising result. Accordingly, the journal does not aim to accept papers based on how surprising the result is. Rather, suitability for publication is based upon good research questions and justified, methodical searches for answers to those questions.

What is “Relevance”?

The “Relevance” category gauges the manuscript with respect to its importance to the field of digital threats. A result about an important sub-system or user population is more relevant than a result about a less important sub-system or user population. Because “importance” is subjective, it is crucial to provide supporting evidence in your paper for the importance of the impacted sub-system or population. The journal accepts submissions germane to any aspect of digital threats. See the journal’s scope statement for examples of relevant subject matter.

What is “Novelty / Utility”?

“Novelty” measures the degree to which the manuscript breaks new ground. “Utility” measures the degree to which the manuscript’s results are useful to members of the field. Typically, these qualities trade against each other: work that breaks new ground often has no immediately apparent utility, while an extremely useful discovery often emerges from viewing established results from a new perspective. Nonetheless, novel work should break genuinely new ground: substantiating that your work is novel requires explaining how prior work (in industry, government, or academia) does not answer the question at hand. Likewise, for a line of research to be useful, it should directly address how its results scale to the community in question.

What is “Internal Validity”?

A result is internally valid when the method used to answer the manuscript’s research question contains all of the elements and steps needed to achieve that result. Any confounding factors or sources of error within the research method must be addressed. Accounting for such problems may be as simple as listing the possible sources and explaining why they do not negatively impact the research method. For presently insurmountable confounds, the manuscript should contain a clear discussion of the confound and the attempts that were made to circumvent it. Explicitly addressing all confounds enables readers of the manuscript to contribute to the discussion and assist the field in overcoming the confound in future research.

What is “External Validity”?

A manuscript containing an externally valid result demonstrates how the research applies to the “real world” of digital threats outside the confines of the laboratory in which the research takes place. Manuscripts should contain explanations that detail how the experimental subject of study is relevant to the behavior of the real-world correlate. For example, claims made about the Internet at large must be justified by demonstrating that the data and research method apply to the whole Internet; if they do not, an explanation is required for any gaps. Local environments do not teach us about the Internet unless the manuscript demonstrates why the local environment is representative of the broader population.

What is “Containment”?

Containment addresses the responsible isolation of research from the “real world.” Research that involves bringing something into a controlled environment to study it needs to be accompanied by proper demonstration that the thing—and only the thing—was brought into isolation. Likewise, research that involves altering something and releasing it from the controlled environment into the “real world” needs to be accompanied by a demonstration that the research process does not damage the object of study nor the outside world. This requirement includes following the guidance of the IRB for studies with human subjects. Studies must be legal.

What is “Transparency”?

Transparency involves full disclosure of all methods that were used in the study. An effective manuscript will contain a “Methods” section that describes the research process. A reader should be able to reproduce the steps of the manuscript’s work: the reader (with suitable access to the necessary resources) should be able to repeat, reproduce, or corroborate the manuscript’s results based solely on the contents of the manuscript. In many cases involving technology, the journal recommends making available any source code used to arrive at the manuscript’s result, since the code is part of the research method.

Where can I learn more?

Regarding many of the concepts involved, as well as the importance of including them in research, see:

• Hatleback, Eric, and Jonathan M. Spring. "Exploring a mechanistic approach to experimentation in computing." Philosophy & Technology 27, no. 3 (2014): 441-459.

Regarding the importance of containment in cyber security in the vein of ethical research design and operation, see:

• “The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research.” U.S. Department of Homeland Security, August 2012.

For an extended discussion of concepts relevant to experimental transparency (such as repeat, replicate, reproduce, and corroborate), see:

• Feitelson, Dror G. "From repeatability to reproducibility and corroboration." ACM SIGOPS Operating Systems Review (2015): 3–11.

For extended discussion regarding the importance of making source code available as part of transparency, see: 
