
RESULTS

After data screening and pretests, we were sufficiently confident in our analyses. Mauchly's test of sphericity was not significant (χ2 = 3.22, p = .59), indicating that the correlation matrix did not differ significantly from the identity matrix (Myers, Well, & Lorch, 2010). Combined with the relatively large sample size, this gave us confidence that the assumption of sphericity had not been violated. In further support of continuing with the remaining analyses, the assumption of homogeneity of variances was satisfied because the scatter was relatively equal across groups (Myers et al., 2010). Finally, given the correlations among pretest scores, we compared the group means using t-tests with a Bonferroni correction; none were significant, so we concluded that the groups did not differ statistically prior to training.
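The Bonferroni-corrected pairwise comparisons described above can be sketched as follows. This is an illustrative sketch only, not the authors' code; the group names and score arrays are hypothetical stand-ins for the pretest data.

```python
# Illustrative sketch (hypothetical data): Bonferroni-corrected pairwise
# t-tests across three training groups' pretest scores.
from itertools import combinations
from scipy import stats

pretest = {  # hypothetical pretest scores, for illustration only
    "classroom": [231, 228, 240, 235, 229, 238],
    "simulation": [233, 230, 237, 232, 236, 234],
    "live": [229, 235, 231, 238, 233, 230],
}

pairs = list(combinations(pretest, 2))
alpha_adj = 0.05 / len(pairs)  # Bonferroni: divide alpha by the number of comparisons

for g1, g2 in pairs:
    t, p = stats.ttest_ind(pretest[g1], pretest[g2])
    verdict = "different" if p < alpha_adj else "not different"
    print(f"{g1} vs {g2}: t = {t:.2f}, p = {p:.3f} -> {verdict} at adjusted alpha {alpha_adj:.4f}")
```

With three groups there are three pairwise tests, so each is evaluated at the adjusted threshold of .05/3 ≈ .0167 rather than .05.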

Table 1. Pretest Means and Pearson Correlations Among Groups Prior to Training

Assured of the integrity of our data, we tested our hypotheses using multivariate analysis of covariance (MANCOVA). Because there was some variance in pretest scores, we entered them as the covariate. We sought to determine whether the modes of training delivery differed significantly on the applied performance outcome. The overall MANCOVA was significant (F = 1.33, p < .001, adjusted R2 = .76), indicating that there were differences in the overall model. Because we posited applied performance differences based on training mode, the hypotheses must be evaluated against univariate results rather than the overall multivariate test, so we conducted an individual ANCOVA for each hypothesis.
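The ANCOVA logic, adjusting the group comparison for the pretest covariate, can be sketched as a partial F-test between nested linear models. This is a minimal NumPy illustration with simulated data, not the authors' analysis or values.

```python
# Illustrative ANCOVA sketch (simulated data, not the study's data):
# test the group effect on posttest scores after adjusting for the
# pretest covariate, via a partial F-test between nested models.
import numpy as np

rng = np.random.default_rng(0)
n_per_group, k = 20, 3                        # hypothetical sizes
group = np.repeat(np.arange(k), n_per_group)
pre = rng.normal(230, 10, size=k * n_per_group)              # pretest covariate
post = pre + np.array([0.0, 30.0, 70.0])[group] + rng.normal(0, 15, size=pre.size)

def rss(X, y):
    """Residual sum of squares of the least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ones = np.ones_like(pre)
dummies = np.eye(k)[group][:, 1:]             # group dummies, first level as reference
X_full = np.column_stack([ones, pre, dummies])   # covariate + group effect
X_reduced = np.column_stack([ones, pre])         # covariate only

rss_full, rss_red = rss(X_full, post), rss(X_reduced, post)
df1 = k - 1                                   # extra parameters for the group effect
df2 = post.size - X_full.shape[1]
F = ((rss_red - rss_full) / df1) / (rss_full / df2)
partial_eta_sq = (rss_red - rss_full) / rss_red  # SS_effect / (SS_effect + SS_error)

print(f"F({df1}, {df2}) = {F:.2f}, partial eta^2 = {partial_eta_sq:.2f}")
```

The reduced model carries only the covariate; the improvement in fit from adding the group dummies is what the F-ratio tests.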

Hypothesis 1 proposed that cybersecurity simulations (μ = 301.11, σ = 0.26) would improve applied learning performance compared to conventional classroom/lab study alone (μ = 233.19, σ = 0.34). This hypothesis was supported (F = 7.88, p < .001, η2 = 0.29). Hypothesis 2 proposed that live activities (μ = 256.19, σ = 0.20) would improve applied learning performance compared to conventional classroom/lab study alone (μ = 233.19, σ = 0.84). This hypothesis was not supported (F = 1.88, p = .19, η2 = 0.14).
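The η2 effect sizes reported with these tests are the share of total variance attributable to the training-mode effect. A minimal sketch of the arithmetic (the sums of squares here are hypothetical, chosen only to reproduce an η2 of 0.29):

```python
# Eta-squared effect size: eta^2 = SS_effect / SS_total,
# with SS_total = SS_effect + SS_error for a one-way design.
def eta_squared(ss_effect, ss_error):
    return ss_effect / (ss_effect + ss_error)

# e.g. 29 units of effect variance against 71 units of residual variance
print(round(eta_squared(29.0, 71.0), 2))  # -> 0.29
```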

Table 2. Posttest Means, F-Scores, and Eta Squared for Hypotheses

Finally, Hypothesis 3 proposed that live activities combined with simulations and conventional classroom/lab study (μ = 388.88, σ = 0.40) would improve applied performance compared to conventional classroom/lab study alone (μ = 233.19, σ = 0.34). This hypothesis was supported (F = 11.29, p < .001, η2 = 0.31). In summary, cybersecurity simulations improved applied performance over classroom and lab instruction alone. As the differences in posttest means and the univariate results show, adding activities such as capture-the-flag exercises and hackathons to the lecture/lab baseline appeared to add little benefit on its own, yet when those activities were combined with simulations, the combination yielded the greatest gains in applied learning performance.
