6,256 research outputs found

    Predictive Analytics in Information Systems Research

    Get PDF
    This research essay highlights the need to integrate predictive analytics into information systems research and shows several concrete ways in which this goal can be accomplished. Predictive analytics include empirical methods (statistical and other) that generate data predictions, as well as methods for assessing predictive power. Predictive analytics not only assist in creating practically useful models but also play an important role alongside explanatory modeling in theory building and theory testing. We describe six roles for predictive analytics: new theory generation, measurement development, comparison of competing theories, improvement of existing models, relevance assessment, and assessment of the predictability of empirical phenomena. Despite the importance of predictive analytics, we find that they are rare in the empirical IS literature. Extant IS literature relies almost exclusively on explanatory statistical modeling, where statistical inference is used to test and evaluate the explanatory power of underlying causal models, and predictive power is assumed to follow automatically from the explanatory model. However, explanatory power does not imply predictive power, and thus predictive analytics are necessary for assessing predictive power and for building empirical models that predict well. To show that predictive analytics and explanatory statistical modeling are fundamentally disparate, we demonstrate that they differ at each step of the modeling process. These differences translate into different final models, so that a pure explanatory statistical model is best tuned for testing causal hypotheses while a pure predictive model is best in terms of predictive power. We convert a well-known explanatory paper on TAM (the Technology Acceptance Model) to a predictive context to illustrate these differences and to show how predictive analytics can add theoretical and practical value to IS research.
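
    The core distinction the essay draws, between assessing explanatory power in-sample and assessing predictive power out-of-sample, can be illustrated with a short sketch. The Python example below is not taken from the paper: the data are synthetic and the variable names are hypothetical. It simply contrasts fitting one linear model on the full sample for inference with fitting the same model on a training split and scoring it on held-out data.

```python
# Minimal sketch contrasting explanatory and predictive workflows on the same
# synthetic, hypothetical data (not the paper's data or models).
import numpy as np
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                      # three hypothetical predictors
y = 1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=2.0, size=n)

# Explanatory workflow: fit on the full sample, interpret coefficients and p-values.
explanatory = sm.OLS(y, sm.add_constant(X)).fit()
print(explanatory.summary())                     # in-sample inference on causal hypotheses

# Predictive workflow: hold out data and score out-of-sample accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
predictive = sm.OLS(y_tr, sm.add_constant(X_tr)).fit()
y_hat = predictive.predict(sm.add_constant(X_te))
rmse = mean_squared_error(y_te, y_hat) ** 0.5    # predictive power, assessed on unseen data
print(f"Out-of-sample RMSE: {rmse:.3f}")
```

    In the explanatory workflow the output of interest is the coefficient table; in the predictive workflow it is the holdout error, which is the kind of quantity the essay argues is rarely reported in empirical IS studies.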

    Augmenting Authentication with Context-Specific Behavioral Biometrics

    Get PDF
    Behavioral biometrics, being non-intrusive and cost-efficient, have the potential to assist user identification and authentication. However, user behaviors can vary significantly across different hardware, software, and applications, so behavioral biometrics need to be studied in the context of a specific application. Moreover, it is hard to collect user data in real-world settings to assess how well behavioral biometrics can discriminate between users. This work aims to improve authentication using behavioral biometrics obtained for user groups. User data for a webmail application are collected in a large-scale user experiment conducted on Amazon Mechanical Turk. Off-line identity attribution and online authentication analytic schemes, used within a continuous authentication scheme based on user groups, are proposed to study the applicability of application-specific behavioral biometrics. Our results suggest that user group identity can be effectively inferred from users' operational interactions with the email application.
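
    As a rough illustration of group-based attribution (not the paper's actual features, data, or scheme), the Python sketch below simulates sessions from users who belong to a small number of hypothetical behavior groups, trains a classifier to infer the group from session-level behavioral features, and then uses the predicted group as a continuous-authentication check against a claimed group.

```python
# Minimal sketch of group-level identity attribution from behavioral features.
# Feature values, group labels, and the classifier choice are illustrative
# assumptions; the paper's actual features and scheme may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_users, sessions_per_user, n_features = 40, 25, 6
user_group = rng.integers(0, 4, size=n_users)    # each user falls into one of 4 hypothetical groups
group_centers = rng.normal(size=(4, n_features))

X, y = [], []
for u in range(n_users):
    # Session-level behavioral measurements (e.g., inter-keystroke times, mouse speed).
    X.append(group_centers[user_group[u]] + rng.normal(scale=0.8, size=(sessions_per_user, n_features)))
    y.extend([user_group[u]] * sessions_per_user)
X, y = np.vstack(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)

# Off-line attribution: how well can group identity be inferred from behavior?
print("group attribution accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Online check: flag a session whose predicted group disagrees with the claimed one.
claimed_group, session = y_te[0], X_te[0].reshape(1, -1)
print("authentication passes:", clf.predict(session)[0] == claimed_group)
```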

    The all-too-flexible abductive method: ATOM's normative status

    Get PDF
    The author discusses the abductive theory of method (ATOM) by Brian Haig from a philosophical perspective, connecting his theory with a number of issues and trends in contemporary philosophy of science. It is argued that, as it stands, the methodology presented by Haig is too permissive: both the use of analogical reasoning and the application of exploratory factor analysis leave us with too many candidate theories to choose from, and explanatory coherence cannot be expected to save the day. The author ends with some suggestions, deriving from the experimental practice within which psychological data are produced, for remedying the permissiveness and lack of normative force in ATOM.
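
    The permissiveness worry about exploratory factor analysis can be made concrete with a small sketch. The Python example below uses synthetic data and illustrative settings (none of it is from the paper): the same data admit several factor solutions, differing in the number of factors and the rotation, with similar fit but different loading patterns, and hence different candidate interpretations.

```python
# Minimal sketch of the "too many candidate theories" worry about exploratory
# factor analysis: on the same synthetic data, different factor counts and
# rotations yield different loading patterns with similar fit. The data and
# settings are illustrative assumptions, not from the paper.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
latent = rng.normal(size=(300, 2))               # two hypothetical latent traits
loadings = rng.normal(size=(2, 8))
X = latent @ loadings + rng.normal(scale=0.5, size=(300, 8))

for n_factors in (2, 3):
    for rotation in (None, "varimax", "quartimax"):
        fa = FactorAnalysis(n_components=n_factors, rotation=rotation).fit(X)
        # Comparable average log-likelihoods, but different loading patterns,
        # hence different substantive readings of the same data.
        print(n_factors, rotation, "avg log-likelihood:", round(fa.score(X), 3))
        print(np.round(fa.components_, 2))
```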

    Security Evaluation of Support Vector Machines in Adversarial Environments

    Full text link
    Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications such as malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, using this framework, we demonstrate the feasibility of evasion, poisoning, and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, in a public repository. (47 pages, 9 figures; chapter accepted into the book 'Support Vector Machine Applications'.)
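
    To make the evasion setting concrete, the Python sketch below trains a linear SVM as a stand-in detector on synthetic data and shifts a 'malicious' sample against the weight vector until the decision flips. This illustrates the general gradient-style evasion idea only; it is not the chapter's attack code, datasets, or parameter choices.

```python
# Minimal sketch of an evasion attack against a *linear* SVM on synthetic data.
# The idea: move a malicious sample in the direction that decreases the decision
# score until it is classified as benign. Illustrative assumptions throughout.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, n_informative=5, random_state=3)
clf = SVC(kernel="linear").fit(X, y)             # class 1 plays the role of "malicious"

w = clf.coef_[0]                                 # weight vector of the linear decision function
x = X[y == 1][0].copy()                          # a malicious sample to be disguised
step = 0.05 * w / np.linalg.norm(w)

# Gradient-descent-style evasion: step against w until the decision flips.
n_steps = 0
while clf.decision_function(x.reshape(1, -1))[0] > 0 and n_steps < 1000:
    x -= step
    n_steps += 1

print("evaded after", n_steps, "steps; perturbation L2 norm:",
      round(float(np.linalg.norm(x - X[y == 1][0])), 3))
```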