IMPROVING COMPUTER-SYSTEM SECURITY WITH POLYMORPHIC WARNING DIALOGS AND SECURITY-CONDITIONING APPLICATIONS

Many computer-security decisions depend on contextual information that computer systems cannot automatically obtain or verify. Users must supply such information, e.g., through computer dialogs. Unfortunately, users often do not provide true information; instead, they (intentionally or automatically) input whatever will quickly dismiss security dialogs and let them proceed with their primary goal, which is rarely computer security. Such behavior can compromise computer systems' security. With today's generalized use of the Internet, an individual's insecure behavior can have severe negative consequences for his or her organization, including financial losses, unintended release of private information, or an inability to operate normally in everyday activities. Despite such potential consequences, users continue to behave insecurely, and industry surveys and security researchers still find users to be the weakest link in the computer-security chain.

To address these problems, we first propose a model that helps explain why users behave insecurely when operating computer systems. Based on that model, we then propose and evaluate techniques that improve users' security behaviors by automatically manipulating the antecedents and consequences of those behaviors. First, we propose warning polymorphism, which randomizes the options in security warning dialogs and delays activation of some of those options, so as to avoid cuing automatic, possibly untrue user responses. Second, we contribute the notion of security-conditioning applications (SCAs), and implement and evaluate two types of such applications: security-reinforcing applications (SRAs) and insecurity-punishing applications (IPAs). SRAs strengthen users' secure behaviors by reliably delivering reinforcing stimuli contingently upon such behaviors, according to a specific reinforcement policy and schedule. IPAs weaken users' insecure behaviors by reliably delivering aversive stimuli, pre-specified by a policy, contingently upon those behaviors. Finally, we devise vicarious security-conditioning interventions that prepare users for interaction with SCAs and accelerate the latter's security benefits and user acceptance.

Empirical evaluations show that the proposed techniques are indeed effective in improving users' security behaviors, thereby increasing computer systems' security. Moreover, with appropriate schedules and stimuli, these improvements are resistant to extinction over time.
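The warning-polymorphism idea described above, randomized option placement plus delayed activation of the risk-accepting option, can be sketched as follows. This is an illustrative sketch only; the class, method, and parameter names are hypothetical and not taken from the dissertation.

```python
import random
import time

class PolymorphicWarning:
    """Illustrative sketch of a polymorphic security warning: option
    order is re-randomized on every presentation, and the risky
    (dialog-dismissing) option stays inactive for a short delay, so a
    habitual, automatic click cannot accept the risk."""

    def __init__(self, options, risky_option, activation_delay=2.0):
        self.options = list(options)
        self.risky_option = risky_option
        self.activation_delay = activation_delay  # seconds before risky option enables
        self.shown_at = None

    def present(self):
        """Return the options in a fresh random order and record the
        time of presentation, so delayed options can be gated."""
        self.shown_at = time.monotonic()
        order = self.options[:]
        random.shuffle(order)  # no fixed click target across presentations
        return order

    def is_active(self, option):
        """Safe options are usable immediately; the risky option is
        disabled until the activation delay has elapsed."""
        if option != self.risky_option:
            return True
        return time.monotonic() - self.shown_at >= self.activation_delay

# Usage: safe options respond at once; "Always allow" only after the delay.
warning = PolymorphicWarning(
    ["Keep blocking", "Allow once", "Always allow"],
    risky_option="Always allow",
    activation_delay=2.0,
)
layout = warning.present()          # randomized order each time
print(warning.is_active("Keep blocking"))  # safe option: immediately active
```

The design choice here follows the abstract's rationale: randomization removes the stable visual cue that enables automatic responses, while the activation delay forces a pause before the insecure choice can be made.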