Literature Review and Methodology
The literature review process entailed a thorough search of relevant scientific databases, including, but not limited to, EBSCO. A wide array of pertinent search terms was used to identify research focusing on assessing credibility, trust, and deception, among other relevant topics. Search terms (with synonyms and closely related words) included: credibility, credibility assessment, trustworthiness, believability, digital trust, online behavior, online communication, cyber adversaries, trust, cybercrime, online deception, and fraud detection. Initially, articles published between 2005 and 2020 were reviewed; however, some additional studies relevant to the current article were published outside this date range and were also included. Further studies were identified by examining the reference lists of all included articles and searching relevant websites.
Information Processing
As humans, we are influenced by our emotions and thoughts, and the way we process information is not always considered, thoughtful, logical, or rational. The dual process theory of information processing stipulates that there are two main ways we process information: one is an automatic process, and the other is a slow, thoughtful process (Kahneman & Frederick, 2005). The former is known as System 1 thinking: fast thinking that is effortless and often driven by emotion and heuristics. The latter is known as System 2 thinking, which requires cognitive resources, uses abstract thinking, and is both critical and logical. Both System 1 and System 2 are useful in different situations (e.g., driving a car versus calculating a complex mathematical problem).
Understanding the way that humans process information helps us understand how online credibility assessments are made. Research indicates that System 1 thinking dominates our information processing, such that we prefer quick thinking to effortful consideration (Sundar, Knobloch-Westerwick, & Hastall, 2007). Malicious cyber actors rely on the vulnerabilities of System 1 thinking to manipulate people toward a desired behavior. Research has focused on identifying factors that can increase susceptibility to online influence and System 1 thinking, including familiarity (Begg, Anas, & Farinacci, 1992), emotional triggers (Langenderfer & Shimp, 2001), salience of information (Igartua & Cheng, 2009), perceived credibility (Pornpitakpan, 2004), and propensity to trust (Bond & DePaulo, 2006).
Engaging System 1 processing is both time and energy efficient but increases the risk of poor decision-making, biased thinking, and impaired judgment (Tversky & Kahneman, 1974; Vishwanath, Harrison, & Ng, 2018). In addition, when dealing with a constant stream of online information, relying on System 1 thinking can increase the likelihood of deception being effective and going unnoticed on social media (Vishwanath, 2015). While System 2 thinking can ensure that we make slow, deliberate decisions, it is not time or energy efficient. However, identifying when to engage System 2 processing, and applying scrutiny and critical thinking to online information, can increase the likelihood of identifying deception and manipulation by malicious cyber actors.
How humans process information is fundamental to assessing online credibility. Research has shown that both System 1 and System 2 thinking play roles in assessing credibility (Metzger & Flanagin, 2015; Sundar, 2008). The following models of online credibility incorporate the use of both systems in evaluating credibility. Our ability to engage System 2 thinking in online credibility assessment is limited by our experience and knowledge of contraindicators for credibility. This paper explores the scientific research on online credibility to increase our knowledge of System 2 thinking engagement and to propose an empirically based model for assessing online adversary credibility.
Credibility. In early work exploring the quality and credibility of information, Taylor (1986) suggested that people form judgments about information by assigning value to some pieces of information and not others. This helps people decide what information to use, share, and act upon. Since then, researchers have examined the underlying factors of this ‘judgment’.
In the credibility literature, two major dimensions have been found to relate to credibility: trustworthiness and expertise (Fogg & Tseng, 1999; Metzger, 2007). Trustworthiness is the perception of being truthful and honest; information is considered trustworthy when it appears reliable, unbiased, and fair (Hilligoss & Rieh, 2008). Expertise is the perception of one’s knowledge, skill, and experience, which is linked to user assessments of the validity and accuracy of information. As such, credibility is a subjective assessment of the quality of being trusted and believed.
The role of trust (as opposed to trustworthiness) has also been identified as related to credibility (Hovland, Janis, & Kelley, 1953). Trust refers to a set of beliefs, characteristics, and behaviors associated with the acceptance of risk, vulnerability, interdependence, expectations, insecurity, and action (Talboom & Pierson, 2013), while credibility refers to a perceived quality of a source, which may or may not result in trust (Rieh & Danielson, 2007).
Theoretical Frameworks for Understanding Online Credibility
Theoretical frameworks attempt to explain how people process online information to reach a credibility assessment, recognizing that most people cannot attend to and process all available online information (Lang, 2000). The following scientific literature review highlights the most relevant theoretical models of online credibility. These are summarized in Table 1.
Table 1. Summary of Existing Theoretical Frameworks for Understanding Online Credibility
Prominence-interpretation theory. This early theory (Fogg, 2003) considers two factors in the process of credibility assessment: a person notices something (prominence) and then forms a judgment (interpretation).
Fogg (2003) identified five factors that affect prominence: the involvement of the user (e.g., motivation and ability to process), online content (e.g., type of information, media source), the task of the user (e.g., seeking amusement, seeking education, making a transaction), user experience (e.g., familiarity with the subject matter), and individual differences (e.g., need for cognition, education, learning style, personality). Interpretation occurs once a person notices an online communication and then interprets and assigns value to the message to make a credibility assessment. According to Fogg (2003), interpretation is affected by user assumptions (e.g., culture, beliefs, experiences), the skills and knowledge of the user (e.g., level of competency in the subject matter), context (e.g., environment, norms, expectations), and user goals (e.g., reasons for online engagement). This process is iterative for online users, which may result in new cues being noticed and processed.
While this model is useful, it does not encapsulate all of the individual, technical, and social factors that influence interpretation. Many of these factors, such as the development of various social media platforms and technologies, have evolved since the theory was proposed. For example, the model does not consider the influence of other users, the prominence of social media as a news source, or the development of mobile internet access.
MAIN model. This model focuses on the role of cognitive heuristics in credibility assessments and is primarily concerned with the technological aspects of digital media (Sundar, 2008). Recognizing that source, message, and medium are important, the MAIN model looks at the structure of online information to inform credibility judgments. Extensive research has informed this approach, which emphasizes four “affordances” (i.e., Modality, Agency, Interactivity, and Navigability) in digital media that influence credibility assessments through heuristic processing. Sundar (2008, p. 75) defines affordances as capabilities “that can shape the nature of content in a given medium”. Modality relates to the medium of delivery (e.g., a credible website appearance that matches expectations from the real world), while Agency relates to the source of the information (e.g., is the source a reputable news company or user-generated content?). Interactivity refers to the degree of interaction and activity a user engages in: the more interaction and activity, the more likely people are to perceive the information as credible (e.g., a reviewer on Trip Advisor will perceive its reviews as more credible than someone who has never posted a review). Finally, Navigability is the ease of use and intuitiveness of interface features; good navigability can trigger heuristic cues of credibility (e.g., provision of hyperlinks, use of navigational aids).
The MAIN model provides an extensive explanation of how digital media can trigger heuristic processing, and how this influences the credibility assessments of such media. However, it focuses on the structural and technical features, rather than the content. It also does not offer explanations for how people engage in analytical thinking processes in assessing credibility.
Dual processing models. These models of credibility assessment (Metzger, 2007; Wathen & Burkell, 2002) focus on how people use credibility indicators in information processing and decision making. Early work recognized the importance of the interaction between source, receiver, and message in a credibility assessment (Wathen & Burkell, 2002). Wathen and Burkell (2002) proposed a dual process whereby surface and message factors are assessed to provide an overall credibility evaluation, shown in Table 2.
According to this model, users first assess surface credibility, whereby the user considers how the online source looks and feels before moving on to assess the source and message credibility. In the final step of this model, the user synthesizes this information with their own previous knowledge to produce an overall credibility assessment.
Table 2. Wathen and Burkell’s (2002) factors of online credibility
This theory was a positive start in exploring the complexity of influences on credibility assessment. However, research has identified that surface credibility evaluations are influenced by source credibility, content accuracy, and currency (Wierzbicki, 2018). Thus, the steps in the model may not be sequential but rather reflect a fluid, flexible processing of both sets of factors at the same time, which may rely too heavily on System 1 thinking.
Another model examined the dual roles of motivation and ability in evaluating credibility (Metzger, 2007). The impact of motivation and ability on credibility assessment is empirically supported, with one study showing that people motivated to obtain accurate information about a health issue were more likely to initially sift through information using heuristic processing (e.g., assessing surface credibility) before more critically appraising online information (i.e., beginning to engage System 2 thinking) (Sillence, Briggs, Harris, & Fishwick, 2007). In another study, Flanagin and Metzger (2000) found that internet users with more experience were more likely to verify online information than less experienced users.
This model recognizes that these processes are highly influenced by individual differences and user perceptions (e.g., demographics, experiences, user skills). If an individual is motivated to check credibility but has limited ability to do so (e.g., through unfamiliarity or poor access to the internet), this model proposes that they are more likely to rely on a heuristic evaluation (i.e., System 1 thinking). However, this model extends the MAIN model’s focus on heuristic processing by examining an individual’s motivation and ability to evaluate, which may lead to a more systematic and thorough evaluation.
Unifying framework. Derived from empirical research, this model identifies three levels of credibility judgments: construct, heuristic, and interaction (Hilligoss & Rieh, 2008). These three levels are not considered to operate independently, but rather as different judgments at different levels impacting each other.
The construct level describes the user’s personal conceptualization of credibility, which provides a particular point of view for judging credibility. The construct level includes concepts such as truthfulness, believability, trustworthiness, objectivity, and reliability. In the associated empirical study, the authors note that trustworthiness was the definition of credibility that participants mentioned most frequently (Hilligoss & Rieh, 2008). The authors also noted that participants used different terms to describe credibility depending on the situation or type of information encountered, indicating that people will adapt their expectations of credibility based on context.
The heuristics level refers to cognitive shortcuts used to estimate credibility, which is akin to using System 1 thinking. In the associated study, participants referred to “convenient” and “quick” assessments, leading to an almost instant judgment of credibility. Hilligoss and Rieh (2008) identified four types of heuristics that people rely on for credibility assessments. The first is media-related heuristics: books and scholarly journal articles were consistently perceived as more credible media than the internet. As such, media-specific heuristics can increase or decrease credibility concerns and influence the processing of information. The second is source-related heuristics: whether the source was familiar or unfamiliar (with familiar sources perceived as more credible) and whether the source was primary or secondary (with primary sources perceived as more credible than secondary sources). The third heuristic is endorsement-based, whereby participants perceived information to be credible because it was endorsed, recommended, or believed by knowledgeable and trusted individuals. The last heuristic is aesthetics, whereby participants used the aesthetic appeal of the online source (e.g., how the website looks, how easy it is to navigate) as a cue to credibility.
The interaction level relates to source or content cues that occur during a specific interaction. The study identified three types of interactions at this level: interactions with content cues, with source peripheral cues, and with information object peripheral cues. Content cues refer to interaction with the content of the message. Hilligoss and Rieh (2008) found that the primary way people interact with content from a credibility assessment perspective was through personal knowledge, followed by exploring additional sources of the information. Source peripheral cues are those surrounding the online information, such as affiliation, reputation, and type of institution, while information object peripheral cues pertain to the appearance or presentation of the information.
This theory is useful because it considers the user’s subjective credibility assessment, as well as heuristic processing, content, and source cues. Furthermore, it considers all this information in the individual context. This theory brings together several factors seen in previous models. However, it does not identify individual or personal differences that impact these assessments, such as technical expertise or the role of relationships.
Aggregated trustworthiness model. This model adds to the literature on the role of relationships and social dynamics (Jessen & Jørgensen, 2012). The aggregated trustworthiness model notes that, in the absence of an identified author, people still make credibility judgments about online information in the context of collective judgment, such as ‘likes’, ratings, or comments (Hargittai, Fullerton, Menchen-Trevino, & Thomas, 2010). The authors propose that others’ feedback plays a role in credibility assessment through three processes: social validation, profiles, and authority/trustee. Social validation means that the more people acknowledge the information, the more likely it is to be perceived as credible. Profiles are the baseline for identity (e.g., social media profiles), while authority and trustee refer to a known brand or authority. This model shifts from traditional views of expertise and trustworthiness to incorporate online social dynamics. This recognition of the online social processes relevant to credibility assessment is evident where social media tools have replaced more traditional authoritative sources.
This model has not been empirically validated, although its concepts have some empirical support (Hargittai et al., 2010; Pettingill, 2006). It remains worthy of consideration due to its introduction of the construct of collective judgment and social processes involved in making a judgment about credibility. These concepts are paramount to credibility assessment in the modern day.
3-S model. The 3-S model encompasses Metzger’s (2007) ideas on the importance of motivation and ability in credibility assessment, with a focus on trust. In this model, trust comprises four levels: individual, interpersonal, relational, and societal (Lucassen & Schraagen, 2011). Trust is defined as “the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party” (Mayer, Davis, & Schoorman, 1995, p. 712). Previous models of trust have identified factors including disposition to information, relevance, confidence, and willingness to trust (Kelton, Fleischmann, & Wallace, 2008). This adds an interesting aspect to online credibility assessment, as it pertains to an individual’s willingness to take risks regarding credibility and trust.
The 3-S model encompasses two main aspects, information characteristics and user characteristics (Figure 1). Both sets of characteristics impact trust assessments and are akin to Metzger’s (2007) ability domain. According to this model, when making a credibility judgment, a user will be influenced by the content of the information, how it is presented, and the source of the information. After receiving this information, the 3-S model proposes three different strategies users may apply when judging credibility.
Figure 1. The 3-S model of trust. Source: Lucassen & Schraagen (2011)
The user may apply their own expertise through domain expertise or information skills. Research has shown that experts approach information in their field of expertise differently than novices (Brand-Gruwel, Wopereis, & Vermetten, 2005; Chi, Feltovich, & Glaser, 1981). For example, domain experts are likely to base their judgment primarily on factual accuracy (Lucassen & Schraagen, 2011). In addition to domain expertise and information skills, the final user characteristic that influences the assessment of trust in this model is source experience. This is commensurate with the source-related heuristics of the unifying framework (Hilligoss & Rieh, 2008). The model proposes that, independent of domain expertise and information skills, source experience can diminish or override System 2 thinking, with users passively relying on their previous experiences with the source.
There is an increasing amount of online material specifically designed to distort, manipulate, or even delude the perceptions of digital consumers, yet cyber deception has received limited empirical examination (Stech, Heckman, Hilliard, & Ballo, 2011). When considering such adversarial attempts to thwart online judgment, this model of trust is useful because it highlights several areas where credibility and trust facets can be explored, including both information and user characteristics.
As discussed earlier, while trust does not equate to credibility, credibility is required for trust to be established. This model provides a comprehensive exploration of online factors that may influence a trust judgment and thus credibility. However, this model lacks an explanation of how social dynamics can impact trust judgments online.