1. INTRODUCTION
It is not uncommon to discover malware during cybercrime investigations. In R v Oliver [2016] EWCA Crim 1053, a ‘Trojan defence’ was offered to account for the presence of indecent images on a computer, while in R v Dan [2019] EWCA Crim 1985 it was argued that a “malware bug potentially affected its operation.”
In the UK, the Criminal Procedure Rules (Ministry of Justice, 2015) stipulate that digital forensic practitioners have a duty to assist the court in their understanding of the evidence tendered. Therefore, they also have a duty to identify the capabilities of any malware identified during their investigation and make a determination as to the bearing it has (if any) on the conclusions reached.
To undertake this duty (and hence form an opinion on the impact of any malware), the forensic practitioner relies on their tools, skills, and knowledge of malware to detect, identify, and study the behaviour of any malware found. However, practitioners often rely on anecdotal evidence or otherwise limited scientific principles to form their conclusions on such impact (Kennedy, 2017). This runs contrary to scientifically based decision making, which remains “an assumed trait of the practitioner, rather than a formally taught competency” (Horsman, 2019a).
This is due in part to the lack of an established methodology for malware forensics undertaken as part of a criminal investigation. Given the unpredictable nature of malware, analysis conducted without such a methodology risks breaching legislation such as the Computer Misuse Act 1990. It also risks contravening technically led best practice guidelines (Williams, 2012) and, more recently, the quality-focused Codes of Practice and Conduct (Forensic Science Regulator, 2020b), hereafter referred to as “the Codes.”
One aspect of such a methodology is the approach followed to select and evaluate the tools used to analyse the malware and the artefacts it produces. Broadly speaking, there are two approaches to studying malware: dynamic and static. The former monitors malware while it is operational; the latter examines malware in a passive state, for example by studying its underlying code. Tools will generally support one (sometimes both) of these approaches. Existing methods for evaluating tools in a conventional digital forensic examination include the ‘dual-tool verification’ promoted by the Association of Chief Police Officers (ACPO) (Clarke, 2009). Arguments that a tool has been widely accepted in case law (Guidance Software Inc., 2014) are open to challenge when examined at a statistically significant scale (Kennedy, 2017) and are limited in their utility (Horsman, 2019b).
This work provides a foundation for determining whether a systematic basis for trusted practice could be established for evaluating malware artefact detection tools used within a forensic investigation. The contributions of the work are to (a) identify the legislative, technical, and quality requirements of malware forensic practice; and (b) provide a framework to address these requirements.
The structure of this paper is as follows: Section 2 explores the background and related work, while Section 3 derives the requirements for undertaking malware forensics. Section 4 describes a framework to address these requirements. Section 5 reflects on the framework and the extent to which the requirements have been addressed, while Section 6 draws conclusions and identifies further work.