This module is a resource for lecturers

Cybersecurity measures and usability

Ideally, responses to risks should be designed to protect the confidentiality, integrity, and availability of systems, networks, services, and data, while also ensuring the usability of these measures (NIST, 2018). The usability of digital devices (i.e., the ease with which they can be used) often takes precedence over the security of these devices and their contents (Whitten and Tygar, 1999). However, security and usability are not necessarily mutually exclusive (Sherwood, Clark, and Lynas, 2005). Cybersecurity measures can be both secure and usable.

Cybersecurity measures include those which seek to establish user identity in order to prevent unauthorized access to systems, services, and data. These authentication measures include "what you know" (e.g., passwords, passphrases, and PINs), "what you have" (e.g., smart cards and tokens), and "what you are" (e.g., biometrics, such as fingerprints) (Lehtinen, Russell, Gangemi Sr., 2006; Griffin, 2015). Multifactor authentication (MFA) requires two or more of these methods of authentication to establish user identity (Andress, 2014).
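The combination of factors described above can be illustrated with a short sketch. The example below is not taken from the module; it is a minimal, hypothetical illustration of pairing "what you know" (a password, stored as a salted hash) with "what you have" (a one-time code from a token or authenticator app, following the standard TOTP construction of RFC 6238). All function and variable names are the author's own.

```python
import hashlib
import hmac
import struct
import time


def totp(secret, at=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238, HMAC-SHA1):
    the 'what you have' factor, derived from a shared secret."""
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def authenticate(password, stored_hash, salt, submitted_code, secret):
    """Multifactor check: BOTH the password ('what you know') and the
    one-time code ('what you have') must be correct."""
    knows = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
        stored_hash)
    has = hmac.compare_digest(submitted_code, totp(secret))
    return knows and has
```

With `digits=8` and the RFC 6238 test secret, `totp(b"12345678901234567890", at=59, digits=8)` yields the specification's published value `"94287082"`, which is one way to check such a sketch against the standard.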

Another type of cybersecurity measure is access control. Access controls establish privileges, determine who is authorized to access a resource, and prevent unauthorized access; they include the authentication measures discussed above as well as other measures designed to protect passwords and logins to systems, apps, websites, social media and other online platforms, and digital devices (Lehtinen, Russell and Gangemi Sr., 2006). A case in point is limiting the number of allowed attempts to enter a passcode on a smartphone. Smartphones offer an option that erases all data on the device after a certain number of failed passcode attempts. This feature was created to enable users to protect the data on their devices in the event that the device is stolen or access is sought without the user's authorization.

Other examples of access controls include adding an incremental wait time after each incorrect password entry and/or limiting the number of failed password attempts allowed per day, locking the user out of the account for a period of time once that limit is reached (Lehtinen, Russell and Gangemi Sr., 2006). These controls on login attempts are designed to protect against attempts to gain unauthorized access to user accounts; the time delays, in particular, are designed to protect against brute force attacks.
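The two controls just described (an incremental wait and a daily cap on failed attempts) can be sketched as follows. This is a minimal, hypothetical illustration, not an implementation from the module; the class name, the doubling delay, and the limit of 10 attempts are the author's own assumptions.

```python
import time

MAX_DAILY_ATTEMPTS = 10   # assumed daily cap on failed logins
BASE_DELAY = 2            # seconds; doubles after each consecutive failure


class LoginThrottle:
    """Per-account throttle: incremental wait plus an attempt cap (illustrative)."""

    def __init__(self):
        self.failures = {}      # username -> consecutive failed attempts
        self.locked_until = {}  # username -> earliest next attempt (epoch seconds)

    def attempt_allowed(self, user, now=None):
        now = time.time() if now is None else now
        return (now >= self.locked_until.get(user, 0)
                and self.failures.get(user, 0) < MAX_DAILY_ATTEMPTS)

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        n = self.failures.get(user, 0) + 1
        self.failures[user] = n
        # incremental wait: 2, 4, 8, ... seconds before the next try,
        # which slows a brute force attack to a crawl
        self.locked_until[user] = now + BASE_DELAY * 2 ** (n - 1)

    def record_success(self, user):
        # a correct password resets the counters
        self.failures.pop(user, None)
        self.locked_until.pop(user, None)
```

Because each failure doubles the wait, an attacker who must guess thousands of passwords faces hours of enforced delay, while a legitimate user who mistypes once waits only seconds.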

Did you know?

The Completely Automated Public Turing test to tell Computers and Humans Apart (or CAPTCHA) has been used to prevent brute force attacks. Nevertheless, research has shown that CAPTCHA images can be bypassed (e.g., Bursztein et al., 2014; Sivakorn, Polakis and Keromytis, 2016; Gao et al., 2014).

A brute force attack is the use of a script (i.e., computer programme) or bot (discussed in Cybercrime Module 2 on General Types of Cybercrime) to guess user credentials (i.e., username and/or password/passcode) by trial and error (for further information on brute force attacks, see Knudsen and Robshaw, 2011, pp. 95-108). Brute force attacks utilize, among other things, lists of common passwords or breached login details. In 2018, it was revealed that the master password feature, which enabled users to encrypt the passwords stored in the Mozilla Firefox web browser, could be easily compromised using a brute force attack (Trend Micro, 2018).
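The trial-and-error guessing described above can be sketched in a few lines. This is a hypothetical, defensive illustration of why common passwords fall quickly: the attacker simply hashes each candidate from a wordlist and compares it with the stolen hash. The wordlist and function name are the author's own.

```python
import hashlib

# a tiny stand-in for the large breached-password lists real attacks use
COMMON_PASSWORDS = ["123456", "password", "qwerty", "letmein", "admin"]


def brute_force(target_hash, wordlist=COMMON_PASSWORDS):
    """Guess by trial and error: hash each candidate and compare."""
    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess  # credential recovered
    return None           # no candidate matched
```

A password on such a list is recovered in microseconds, whereas a long random password forces the attacker to search an astronomically larger space, which is exactly what the lockouts and time delays above are meant to make impractical.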

Did you know?

Brute force attacks can be conducted by skilled hackers as well as by script kiddies (i.e., individuals with little or no technical skill who rely on pre-written scripts and tools).

Want to learn more?

INFOSEC Institute. (2018). Popular Tools for Brute-force Attacks.

Passwords are either system-generated or user-generated (Adams, Sasse and Lunt, 1997). System-generated passwords (i.e., passwords created by a programme) are difficult to guess and can withstand password crackers (although this depends on their length). The problem with system-generated passwords is that they are difficult to remember, which leads individuals to record them, for example by writing them down or saving them in a browser, app, or digital device. For this reason, user-generated passwords are preferred. However, passwords generated by the user may also be difficult to remember. Systems, apps, and online platforms often impose complex password rules, requiring passwords to meet a minimum length and include combinations of upper- and lower-case letters, numbers, and symbols. Therefore, like system-generated passwords, most user-generated passwords are difficult to remember.
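A system-generated password of the kind described above can be sketched with Python's standard `secrets` module, which is designed for security-sensitive randomness. This is an illustrative example, not part of the module; the symbol set and the rule of "one character from each class" are the author's own assumptions about a typical complexity policy.

```python
import secrets
import string


def generate_password(length=16):
    """System-generated password guaranteed to satisfy a common complexity
    rule: at least one lower-case letter, upper-case letter, digit, and symbol."""
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, "!@#$%^&*"]
    if length < len(classes):
        raise ValueError("length too short to cover every character class")
    # one character from each required class...
    chars = [secrets.choice(c) for c in classes]
    # ...then fill the remainder from the combined alphabet
    alphabet = "".join(classes)
    chars += [secrets.choice(alphabet) for _ in range(length - len(classes))]
    # shuffle so the required characters are not always in the same positions
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)
```

Such a password resists guessing precisely because it is random, which is also what makes it hard to remember: the usability trade-off discussed in this section.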

Individuals are also encouraged to have a different password for each account (US Federal Trade Commission Consumer Information, 2017). The latter recommendation aims at minimizing the harm to individuals in the event that the credentials of one of their accounts are compromised. In 2017, a research company found a file online with 1.4 billion individuals' usernames and passwords for a variety of social networking, game, TV and movie streaming, and other online sites (Matthews, 2017). If any of these individuals recycle passwords, this breach puts their other online accounts (where the same username and password are used) in jeopardy. While the use of different and complex passwords for each account may provide some level of security for individuals, it ultimately adversely impacts their usability (i.e., their memorability). As Adams, Sasse and Lunt (1997) rightly point out, "more restrictions in authentication mechanisms create more usability problems" (p. 3).

Different authentication schemes have been proposed in lieu of passwords. The use of alternate authentication schemes, such as biometrics, is not without adverse social, legal, and even security consequences (Greenberg, 2017a; Greenberg, 2017b). Cases in point are Apple iPhones with Touch ID, which enables users to unlock their devices with a fingerprint, and Face ID, which enables users to unlock their phones with facial recognition technology. In the United States, an individual cannot be compelled by criminal justice agents to provide their passwords; the same protection has not been extended to fingerprints and other biometrics (see, for example, Virginia v. Baust, 2014 and State v. Diamond, 2018, where the courts held that individuals can be compelled to use their fingerprint to unlock a phone) (for more information about this practice in the United States and the practices of other countries, such as India, Australia, and New Zealand, see below box on "Biometrics and the Privilege Against Self-Incrimination").

Biometrics and the Privilege Against Self-Incrimination

According to the Fifth Amendment to the United States Constitution, "No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a grand jury, except in cases arising in the land or naval forces, or in the militia, when in actual service in time of war or public danger; nor shall any person be subject for the same offense to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation."

One of the rights provided under the Fifth Amendment is the privilege against self-incrimination (otherwise known as the right against forced self-incrimination). In Schmerber v. California (1966), the court held that "the privilege protects an accused only from being compelled to testify against himself, or otherwise provide the state with evidence of a testimonial or communicative nature" (761). By contrast, the Fifth Amendment to the US Constitution "offers no protection against compulsion to submit to fingerprinting, photography, or measurements,…to appear in court, to stand, to assume stance, to walk, or to make a particular gesture" (United States v. Wade, 1967, 223). Indeed, US courts can compel defendants to submit a blood sample, saliva sample, or voice exemplar, to name a few (Schmerber v. California, 1966; U.S. v. Dionisio, 1973; People v. Smith, 1982).

Relying on United States v. Wade (1967), the judge in Virginia v. Baust (2014) concluded that because a fingerprint does not require the communication of knowledge by the defendant, it is not afforded Fifth Amendment protection; the same holds true for keys and the DNA of a defendant (3). In view of that, while passwords are protected under the Fifth Amendment, the same protection does not extend to fingerprints (and, by extension, other biometrics). The lack of Fifth Amendment protection of biometrics was reaffirmed in State v. Diamond (2018).

Ultimately, in the United States, "what you are" can be compelled (i.e., compelling individuals to provide their fingerprints and by way of extension, having an individual use their face to unlock smartphones that use facial recognition technology), whereas "what you know" generally cannot be compelled pursuant to the Fifth Amendment of the US Constitution (as this is against the privilege against self-incrimination). Similarly, other countries (e.g., Australia, New Zealand, and India, to name a few) and human rights courts (e.g., the European Court of Human Rights) do not consider the compulsion of the showing of a face, fingerprints, or other biometrics (e.g., footprints) as a violation of the privilege against self-incrimination (see, for example, Sorby v. Commonwealth, 1983; New Zealand, King v. McLellan, 1974; India, State of U.P. v. Sunil, 2017; and Saunders v. United Kingdom, 1996).

Humans are viewed as the weakest link in the cybersecurity chain (Crossler et al., 2013; Grossklags and Johnson, 2009; Sasse, Brostoff, and Weirich, 2001; Schneier, 2000). Indeed, several studies have shown that cybersecurity incidents (i.e., breaches and/or attacks on networks, systems, services and data) are the result of human error and failure to implement security measures (Safa and Maple, 2016; Pfleeger, Sasse and Furnham, 2014; Crossler et al. 2013). While much attention is placed on the role of humans in cybersecurity incidents, cybersecurity measures in place at the time of the incident can play a role in the cybersecurity incident. The reality is that cybersecurity measures (what they can actually do) and users' expectations of the performance of these security measures (what they think they do) often do not match (Ur et al., 2016; Gunson, et al., 2011; Furnell, 2005).

The literature and research on human-computer interactions proposes that the security of digital devices, systems, programmes, apps, and online platforms, among others, should be developed with users in mind (i.e., security by design; see Eloff and Eloff, 2002; Cranor and Garfinkel, 2005; Sasse and Flechais, 2005; Karat, Karat and Brodie, 2005; Dix et al., 2004; Balfanz, et al., 2004). This, however, is not common practice. The common practice is to build the systems and then seek to modify user interactions with the system to meet security needs (Nurse et al., 2011; Yee, 2004).

Did you know?

In the European Union, the Cybersecurity Act of 2018 created a "framework for European Cybersecurity Certificates for products, processes and services" in order to promote security by design through the "incorporat[ion of] security features in the early stages of …[the] technical design and development" of digital devices (European Commission, 2018).

Want to learn more?

European Commission. (2018). Cybersecurity Act.
