    False Identity Detection Using Complex Sentences

    The use of faked identities is a current issue for both physical and online security. In this paper, we test the differences between subjects who report their true identity and those who give a fake identity when responding to control, simple, and complex questions. Asking complex questions is a new procedure for increasing liars' cognitive load, presented here for the first time. The experiment consisted of an identity verification task, during which response times and errors were collected. Twenty participants were instructed to lie about their identity, whereas the other 20 were asked to respond truthfully. Different machine learning (ML) models were trained, reaching an accuracy of around 90-95% in distinguishing liars from truth tellers based on error rate and response time. Then, to evaluate the generalization and replicability of these models, a new sample of 10 participants was tested and classified, obtaining an accuracy between 80% and 90%. In short, the results indicate that liars may be efficiently distinguished from truth tellers on the basis of their response times and errors on complex questions, with adequate generalization accuracy of the classification models.
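    The classification idea above can be sketched in miniature. The following is a hypothetical illustration, not the paper's model or data: a nearest-centroid rule separates "liars" from "truth tellers" using two features, mean response time and error rate, on made-up training examples.

```python
# Minimal sketch of classifying liars vs truth tellers from response time
# and error rate, using a nearest-centroid rule. All numbers are synthetic
# illustrations, not the paper's data.

def centroid(rows):
    """Component-wise mean of a list of feature tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(sample, c_truth, c_liar):
    """Assign the label of the nearer centroid (squared Euclidean distance)."""
    d = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return "liar" if d(sample, c_liar) < d(sample, c_truth) else "truth"

# (mean response time in ms, error rate) -- synthetic training examples
truth_tellers = [(820, 0.02), (790, 0.04), (850, 0.03)]
liars = [(1150, 0.18), (1230, 0.22), (1080, 0.15)]

c_truth, c_liar = centroid(truth_tellers), centroid(liars)
print(classify((1200, 0.20), c_truth, c_liar))  # a slow, error-prone responder
```

The paper reports results for trained ML models generally; the centroid rule here only illustrates that slower, error-prone responses land on the "liar" side of such a decision boundary.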

    An Introduction to Mechanized Reasoning

    Mechanized reasoning uses computers to verify proofs and to help discover new theorems. Computer scientists have applied mechanized reasoning to economic problems, but, to date, this work has not been properly presented in economics journals. We introduce mechanized reasoning to economists in three ways. First, we introduce mechanized reasoning in general, describing both the techniques and their successful applications. Second, we explain how mechanized reasoning has been applied to economic problems, concentrating on the two domains that have attracted the most attention: social choice theory and auction theory. Finally, we present a detailed example of mechanized reasoning in practice by means of a proof of Vickrey's familiar theorem on second-price auctions.
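    Vickrey's theorem states that in a sealed-bid second-price auction, bidding one's true value weakly dominates any other bid. A mechanized-reasoning system proves this in full generality; as a hedged sketch, one can at least check it exhaustively on a small discrete grid of values and bids (the grid, payoff model, and tie-breaking rule below are simplifying assumptions, not the paper's formalization):

```python
# Brute-force check of Vickrey's theorem in miniature: truthful bidding
# weakly dominates every other bid in a second-price auction, verified over
# a discrete grid. A proof assistant establishes the general theorem; this
# only tests finitely many cases.

def utility(value, my_bid, other_bid):
    """Payoff to a bidder with the given value: the winner pays the
    opposing bid. Ties are broken in this bidder's favour for simplicity."""
    return value - other_bid if my_bid >= other_bid else 0

GRID = range(0, 11)  # values and bids 0..10

def truthful_weakly_dominates():
    for value in GRID:
        for bid in GRID:        # any deviation from truthful bidding
            for other in GRID:  # any opposing bid
                if utility(value, value, other) < utility(value, bid, other):
                    return False
    return True

print(truthful_weakly_dominates())  # True
```

The check mirrors the two cases of the textbook proof: overbidding can only win auctions that yield negative payoff, and underbidding can only lose auctions that would have yielded positive payoff.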

    Veracity Roadmap: Is Big Data Objective, Truthful and Credible?

    This paper argues that big data can possess different characteristics, which affect its quality. Depending on its origin, data processing technologies, and methodologies used for data collection and scientific discoveries, big data can have biases, ambiguities, and inaccuracies which need to be identified and accounted for to reduce inference errors and improve the accuracy of generated insights. Big data veracity is now being recognized as a necessary property for its utilization, complementing the three previously established quality dimensions (volume, variety, and velocity), but there has been little discussion of the concept of veracity thus far. This paper provides a roadmap for theoretical and empirical definitions of veracity along with its practical implications. We explore veracity across three main dimensions: 1) objectivity/subjectivity, 2) truthfulness/deception, 3) credibility/implausibility – and propose to operationalize each of these dimensions with either existing computational tools or potential ones, relevant particularly to textual data analytics. We combine the measures of the veracity dimensions into one composite index – the big data veracity index. This newly developed veracity index provides a useful way of assessing systematic variations in big data quality across datasets with textual information. The paper contributes to big data research by categorizing the range of existing tools to measure the suggested dimensions, and to Library and Information Science (LIS) by proposing to account for the heterogeneity of diverse big data and to identify the information quality dimensions important for each big data type.
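    A composite index over the three dimensions can be sketched as follows. The abstract does not fix a formula, so the weighted mean and the equal weights below are assumptions made purely for illustration:

```python
# Hypothetical sketch of a composite veracity index: three dimension scores
# in [0, 1] (objectivity, truthfulness, credibility) combined into a single
# number. The weighted mean and equal weights are assumptions, not the
# paper's definition.

def veracity_index(objectivity, truthfulness, credibility,
                   weights=(1 / 3, 1 / 3, 1 / 3)):
    scores = (objectivity, truthfulness, credibility)
    assert all(0.0 <= s <= 1.0 for s in scores), "scores must lie in [0, 1]"
    return sum(w * s for w, s in zip(weights, scores))

print(veracity_index(0.9, 0.6, 0.75))  # roughly 0.75
```

In practice, each input score would itself come from a computational tool (e.g. a subjectivity or deception classifier over the text), with weights tuned to the dataset type.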

    Designing Bureaucratic Accountability

    A central finding in earlier research is that attitudes to work are generally more positive among older workers than among younger workers. This result has been interpreted in two different ways, by the cultural and the structural hypotheses. The cultural hypothesis sees age differences as outcomes of generational differences. We would expect that different cohorts should hold different work attitudes and that the work values of an age group at an earlier point in time should differ from the work values of the same age group at a later point in time. The structural hypothesis sees age differences as expressions of labour market inequality between older and younger workers. This point of view leads us to expect that age differences in work attitudes will follow changes in the job structure and in working conditions. Drawing on data from the Swedish survey of living conditions (ULF), attitude change within the Swedish work force during the period 1979–2003 was examined. Three sub-periods, 1986/1987, 1994 through 1996, and 2001 through 2003, were compared to 1979, the year of reference. The main results showed that a consistently lower share of the work force held extrinsic work values in the subsequent periods, and this applied to all age groups. The results did not support the assumption that broader cultural differences between generations are central explanations of differences in work values. Older workers held extrinsic work values to a lesser degree than younger workers regardless of period. Most strikingly, the gap between the youngest group on the labour market (aged 16–29) and the older ones widened during the period. Furthermore, class differences in the distribution of the extrinsic attitude were intact throughout the study period; manual employees were consistently more likely to hold an extrinsic attitude than were service class employees. This implies that differences in the probability of extrinsic work attitudes have been identifiable regardless of period, but that their prevalence has decreased, since jobs involving features related to extrinsic work values have decreased since 1979. Originally included in thesis in manuscript form.

    It Depends on What the Meaning of False is: Falsity and Misleadingness in Commercial Speech Doctrine

    While scholarship regarding the Supreme Court's commercial speech doctrine has often focused on the level of protection for truthful, non-misleading commercial speech, scholars have paid little attention to the exclusion of false or misleading commercial speech from all First Amendment protection. Examining the underpinnings of the false and misleading speech exclusion illuminates the practical difficulties that abolishing the commercial speech doctrine would pose. Through a series of fact patterns in trademark and false advertising cases, this piece demonstrates that defining what is false or misleading is often debatable. If commercial speech were given First Amendment protection, consumer protection and First Amendment protection would be at odds. Rebutting the idea that constitutionally protected commercial speech could effectively address consumer abuses through fraud statutes without offending the First Amendment, the piece explains that subjecting commercial speech to First Amendment scrutiny would almost completely contract the scope of false advertising law and erode consumer protection. The piece concludes that while excluding commercial speech from constitutional protection has real costs, we are better off in a system that regulates false and misleading commercial speech without heightened First Amendment scrutiny.

    The Pareto Frontier for Random Mechanisms

    We study the trade-offs between strategyproofness and other desiderata, such as efficiency or fairness, that often arise in the design of random ordinal mechanisms. We use approximate strategyproofness to define manipulability, a measure to quantify the incentive properties of non-strategyproof mechanisms, and we introduce the deficit, a measure to quantify the performance of mechanisms with respect to another desideratum. When this desideratum is incompatible with strategyproofness, mechanisms that trade off manipulability and deficit optimally form the Pareto frontier. Our main contribution is a structural characterization of this Pareto frontier, and we present algorithms that exploit this structure to compute it. To illustrate its shape, we apply our results to two different desiderata, namely Plurality and Veto scoring, in settings with 3 alternatives and up to 18 agents.
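    The notion of a Pareto frontier over (manipulability, deficit) pairs can be illustrated with a generic filter. This is not the paper's structural algorithm, just a sketch: given candidate mechanisms scored on the two measures (both to be minimized), keep those not dominated by any other candidate; the scores below are made up.

```python
# Illustrative sketch: Pareto frontier of mechanisms scored by
# (manipulability, deficit), both to be minimized. The candidate points are
# invented; the paper computes the frontier via a structural
# characterization rather than this naive filter.

def pareto_frontier(points):
    """Keep points not dominated by a distinct point that is at least as
    good in both coordinates."""
    frontier = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            frontier.append(p)
    return sorted(frontier)

mechanisms = [(0.0, 0.40), (0.05, 0.25), (0.10, 0.30),
              (0.12, 0.10), (0.20, 0.10)]
print(pareto_frontier(mechanisms))
```

The surviving points trace the trade-off curve: moving along the frontier buys lower deficit only at the price of higher manipulability, which is exactly the shape the paper characterizes.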