
    FOUR ESSAYS ON OPTIMAL ANTITRUST ENFORCEMENT

    This thesis consists of four essays related to optimal antitrust enforcement. The first essay provides a case study of EC ringleader cartels and discusses, by means of a theoretical model, the effect of excluding ringleaders from leniency programmes on collusive prices. The second essay adds an experimental investigation of the former, and examines in particular the effects on cartel formation, prices and stability. The third essay experimentally explores the substitutability of antitrust detection rates and fines, and tests whether different fine and detection rate combinations with constant expected fines achieve an equal level of deterrence. Lastly, the final essay discusses the role of antitrust enforcement on collusion when firms can engage in avoidance activities.
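
    The third essay's design holds the expected fine constant while trading off its two components; a minimal formalization in standard deterrence notation (the symbols p, F and g are mine, not the thesis's):

        E[\text{fine}] = p \cdot F, \qquad \text{collude iff } g > p \cdot F

    where p is the detection rate, F the fine, and g the gain from collusion. Any pair (p, F) with the same product should, for a risk-neutral firm, deter equally; the experiment asks whether observed behaviour respects this equivalence.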

    Educational practices, funding, and the implementation of a reading first program

    The National Reading Panel (NICHD, 2000) found that too many children struggle with learning to read. Through a Congressional mandate, the panel identified key skills and methods central to reading achievement in kindergarten through third grade. According to the United States Department of Education (USDE) Summary of Discretionary Funds (2008b), over $6 billion was allocated to schools to implement the research-based strategies from 2002 to 2008. States throughout the country showed gains on comprehension assessments in grades 1-3 from their first year of implementation to 2008. The Reading First: Student Achievement, Teacher Empowerment, National Success (2008) archived governmental document noted that state education agencies reported improvement in third grade. Congress reduced funding in fiscal year 2008 by 61% and eventually discontinued the funding and the reading program. Now that the monies and program are gone, an investigation of Reading First schools in the Rio Grande Valley determined that student reading achievement and implementation of the Reading First practices were functions of funding and campus size in some grades. Additionally, the ANOVAs showed significant time effects in all five measurements: phonemic awareness, graphophonemics, listening comprehension, accuracy, and reading comprehension. Twelve (12) multiple linear regression analyses and fourteen (14) one-way repeated-measures analyses in kindergarten through second grade were conducted.
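
    The repeated-measures analyses mentioned here test whether scores change across time points within the same students; a minimal sketch of a one-way repeated-measures ANOVA on synthetic data (the variable names and the built-in growth across three time points are illustrative, not the study's data):

        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        rng = np.random.default_rng(0)
        rows = []
        for student in range(30):
            base = rng.normal(50, 10)               # each student's baseline
            for t, gain in enumerate([0, 5, 10]):   # scores rise over the year
                rows.append({"student": student, "time": f"t{t}",
                             "score": base + gain + rng.normal(0, 5)})

        # One-way repeated-measures ANOVA: is there a within-subject time effect?
        table = AnovaRM(pd.DataFrame(rows), depvar="score",
                        subject="student", within=["time"]).fit()
        print(table)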

    Experimental Designs, Meta-Modeling, and Meta-learning for Mixed-Factor Systems with Large Decision Spaces

    Many Air Force studies require a design and analysis process that can accommodate the computational challenges associated with complex systems, simulations, and real-world decisions. For systems with large decision spaces and a mixture of continuous, discrete, and categorical factors, nearly orthogonal-and-balanced (NOAB) designs can be used as efficient, representative subsets of all possible design points for system evaluation, where meta-models are then fitted to act as surrogates to system outputs. The mixed-integer linear programming (MILP) formulations used to construct first-order NOAB designs are extended to solve for low correlation between second-order model terms (i.e., two-way interactions and quadratics). The resulting second-order approaches are shown to improve design performance measures for second-order model parameter estimation and prediction variance as well as for protection from bias due to model misspecification with respect to second-order terms. Further extensions are developed to construct batch sequential NOAB designs, giving experimenters more flexibility by creating multiple stages of design points using different NOAB approaches, where simultaneous construction of stages is shown to outperform design augmentation overall. To reduce cost and add analytical rigor, meta-learning frameworks are developed for accurate and efficient selection of first-order NOAB designs as well as of meta-models that approximate mixed-factor systems.
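
    The second-order criterion above amounts to keeping pairwise correlations among two-way interactions and quadratics small; a minimal sketch of how a candidate design could be scored against that criterion (the random candidate matrix is illustrative, and this is an evaluation step, not the thesis's MILP construction):

        import numpy as np

        def max_abs_second_order_corr(X):
            """Largest absolute pairwise correlation among second-order
            model terms (quadratics and two-way interactions) of a design
            matrix X whose columns are factor settings scaled to [-1, 1]."""
            n, k = X.shape
            cols = []
            for i in range(k):
                cols.append(X[:, i] ** 2)               # quadratic term
                for j in range(i + 1, k):
                    cols.append(X[:, i] * X[:, j])      # two-way interaction
            C = np.corrcoef(np.column_stack(cols), rowvar=False)
            off_diagonal = C[~np.eye(C.shape[0], dtype=bool)]
            return np.max(np.abs(off_diagonal))

        # Hypothetical 12-run candidate design in three continuous factors:
        X = np.random.default_rng(1).uniform(-1.0, 1.0, size=(12, 3))
        print(max_abs_second_order_corr(X))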

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists on comparing distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular on examining their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This resembles human reasoning, which draws conclusions in the absence of complete information but allows them to be corrected once new pieces of evidence arise. Thus, this thesis focuses on a comparison of three approaches in AI for implementing non-monotonic models of inference, namely: expert systems, fuzzy reasoning and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications comes from their presumptively non-monotonic nature: they present incomplete, ambiguous and retractable pieces of evidence, so reasoning over them is likely suitable for modelling by non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently fed with real-world data. The numerical inferences produced by these models were analysed according to common metrics of evaluation for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were meticulously described and qualitatively compared. Findings suggest that the variance of the inferences produced by expert systems and fuzzy reasoning models was higher, indicating poor stability. In contrast, the variance of argument-based models was lower, showing superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while producing robust inferences. An in-depth discussion of the explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability, compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design, and it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.
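
    Defeasible argumentation, the approach this thesis quantifies, is commonly grounded in Dung-style abstract argumentation, where a conclusion stands only if every attack on it is defeated; a minimal, self-contained computation of Dung's grounded extension (the three-argument framework is hypothetical, and the thesis's models are richer than this):

        def grounded_extension(args, attacks):
            """Least fixed point of Dung's characteristic function: accept an
            argument once every one of its attackers is itself attacked by an
            argument already accepted."""
            attackers = {a: {b for (b, target) in attacks if target == a}
                         for a in args}
            accepted = set()
            while True:
                defended = {a for a in args
                            if all(any((s, b) in attacks for s in accepted)
                                   for b in attackers[a])}
                if defended == accepted:
                    return accepted
                accepted = defended

        # "a attacks b, b attacks c": a is unattacked, a defeats b, and that
        # reinstates c.  Expected grounded extension: {'a', 'c'}.
        print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))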

    Essays in economics of crime prevention and behavior under uncertainty

    This dissertation contains three chapters on topics in experimental economics and the economics of crime. The first chapter studies gender and sex differences in uncertainty attitudes by following a broader and more inclusive concept of gender instead of the conventional binary approach. The second chapter investigates the extent to which the behavior of individuals is interrelated in terms of crime prevention. Specifically, it studies whether social contagion could explain why some visible crime prevention measures are highly popular in some areas but rarely used in others. The third chapter demonstrates the crime-reducing effect of private crime preventive measures. It studies the role of the "potential" victims and how their actions can deter crime.

    Value Incommensurability

    Incommensurability is the impossibility of determining how two options relate to each other in terms of conventional comparative relations. This book features new research on incommensurability from philosophers who have shaped the field into what it is today, including John Broome, Ruth Chang and Wlodek Rabinowicz. The book covers four aspects relating to incommensurability. In the first part, the contributors synthesize research on the competing views of how to best explain incommensurability. Part II illustrates how incommensurability can help us deal with seemingly insurmountable problems in ethical theory and population ethics. The contributors address the Repugnant Conclusion, the Mere Addition Paradox and so-called Spectrum Arguments. The chapters in Part III outline and summarize problems caused by incommensurability for decision theory. Finally, Part IV tackles topics related to risk, uncertainty and incommensurability. Value Incommensurability: Ethics, Risk, and Decision-Making will be of interest to researchers and advanced students working in ethical theory, decision theory, action theory, and philosophy of economics.

    Control and Analysis for Sequential Information based on Machine Learning

    Sequential information is crucial for real-world applications that unfold over time, as with time series: sequence data in temporal order at regular intervals. In this thesis, we consider four major tasks involving sequential information: sequential trend prediction, control strategy optimisation, visual-temporal interpolation, and visual-semantic sequential alignment. We develop machine learning theories and provide state-of-the-art models for various real-world applications that involve sequential processes, including the industrial batch process, sequential video inpainting, and sequential visual-semantic image captioning. The ultimate goal is to design a hybrid framework that can unify diverse sequential information analysis and control systems.

    For industrial processes, control algorithms rely on simulations to find the optimal control strategy. However, few machine learning techniques can control the process using raw data, although some works use ML to predict trends. Most control methods rely on large amounts of previous experience and cannot exploit future information to optimise the control strategy. To improve the effectiveness of the industrial process, we propose improved reinforcement learning approaches that can modify the control strategy. We also propose a hybrid reinforcement virtual learning approach to optimise the long-term control strategy. This approach creates a virtual space that interacts with reinforcement learning to predict a virtual strategy without conducting any real experiments, thereby improving and optimising control efficiency.

    For sequential visual information analysis, we propose a dual-fusion transformer model to tackle sequential visual-temporal encoding in video inpainting tasks. Our framework includes a flow-guided transformer with dual attention fusion, and we observe that the sequential information is effectively processed, resulting in promising inpainted videos. Finally, we propose a cycle-based captioning model for the analysis of sequential visual-semantic information. This model augments data from two views to optimise caption generation from an image, addressing new few-shot and zero-shot settings. The proposed model can generate more accurate and informative captions by leveraging sequential visual-semantic information.

    Overall, the thesis contributes to analysing and manipulating sequential information in multi-modal real-world applications. Our flexible framework design provides a unified theoretical foundation for deploying sequential information systems in distinctive application domains. Given the diversity of challenges addressed in this thesis, we believe our techniques pave the way towards versatile AI in the new era.
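
    The "virtual space" idea, as described, lets a control strategy be tuned without real experiments by rolling it out in a learned surrogate of the process; a toy sketch of that pattern (the one-dimensional dynamics, the setpoint, and the proportional-gain search are my own illustrative assumptions, not the thesis's algorithm):

        import numpy as np

        class VirtualSpace:
            """Hypothetical learned surrogate of a batch process: predicts
            the next state and a reward for a given control action."""
            def __init__(self, rng):
                self.rng = rng

            def step(self, state, action):
                next_state = state + 0.1 * action + self.rng.normal(0, 0.01)
                reward = -abs(next_state - 1.0)   # track a setpoint of 1.0
                return next_state, reward

        def evaluate(policy, model, episodes=20, horizon=50):
            """Score a control policy using rollouts in the virtual space only."""
            total = 0.0
            for _ in range(episodes):
                state = 0.0
                for _ in range(horizon):
                    state, reward = model.step(state, policy(state))
                    total += reward
            return total / episodes

        model = VirtualSpace(np.random.default_rng(0))
        # Crude strategy search, conducted entirely inside the surrogate:
        gains = np.linspace(0.0, 5.0, 26)
        best = max(gains, key=lambda k: evaluate(lambda s, k=k: k * (1.0 - s), model))
        print("selected proportional gain:", best)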

    Single-System and Dual-Process Accounts of Explicit and Implicit Memory

    While some expressions of memory are accompanied by conscious awareness, others do not elicit this awareness yet still impact task performance. Theorists have long questioned whether these explicit and implicit forms of memory are governed by separate cognitive systems or a single system. There is much behavioural evidence for multiple-systems and dual-process accounts, mainly focusing on studies of recognition memory and long-term repetition priming. However, experimental results and mathematical modelling have shown the ability of a single-system account to explain the relationship between various forms of explicit and implicit memory. This thesis uses behavioural experiments and mathematical modelling to further investigate the predictions of single-system and dual-process accounts of explicit and implicit memory. Chapter 2 investigates the effect of response speeding in recognition and priming tasks, identifying model-based predictions prompted by this manipulation. Chapter 3 examines the effects of encoding variability on recognition memory, before extending this manipulation to priming to again investigate opposing predictions from single-system and dual-process models. Chapter 4 takes a different approach and investigates the relationship between cued recall and implicit memory in behavioural experiments, with and without the inclusion of a recognition task. Finally, Chapter 5 examines the relationship between free recall and implicit memory in three further experiments. The results of this thesis show that while models of recognition and priming make opposing predictions about the relationship between explicit and implicit memory, these predictions are often hard to test in practice. However, Chapter 4 confirms the relationship between cued recall and priming. Chapter 5 provides evidence for a similar relationship between free recall and priming. Both of these results align with a single-system view over a strict dual-process (and strict multiple-systems) account. With further experimentation, these results may inform future model development and the understanding of the fundamental relationships between explicit and implicit memory.
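
    The single-system view the thesis tests can be caricatured in a few lines: one latent memory-strength signal, plus task-specific noise, drives both a recognition judgment and a priming measure, which by itself produces a relationship between explicit and implicit performance (a toy simulation under my own parameter choices, not the thesis's fitted model):

        import numpy as np

        rng = np.random.default_rng(2)
        n_items = 10_000
        strength = rng.normal(0.5, 1.0, n_items)                 # one latent signal
        recognition = strength + rng.normal(0.0, 1.0, n_items)   # explicit judgment
        priming_rt = 700 - 30 * strength + rng.normal(0.0, 40.0, n_items)
        # Stronger items are identified faster (lower RT): the implicit measure.

        # A single source yields correlated explicit and implicit measures:
        print(np.corrcoef(recognition, priming_rt)[0, 1])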
