
    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics, and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    2023-2024 Graduate School Catalog

    You and your peers represent more than 67 countries, and your shared scholarship spans 140 programs, from business administration and biomedical engineering to history, horticulture, musical performance, marine science, and more. Your ideas and interests will inform public health, create opportunities for art and innovation, contribute to the greater good, and positively impact economic development in Maine and beyond.

    Rethinking the governance and delivery of the Cohesion Policy Funds : is the Recovery and Resilience Facility (RRF) a model?

    Published online: November 2023. The Cohesion Policy Funds (CPF) have faced continuous debate about their effectiveness in reaching specified performance objectives while at the same time advancing broader EU policy goals. The “performance-based financing” model of the Recovery and Resilience Facility (RRF), where payment is based on the fulfilment of milestones and targets rather than reimbursement of eligible costs, is sometimes presented as a superior alternative and a possible inspiration for the future of the CPF. The RRF model centralises authority in the hands of national governments and promises tighter integration of investment and reforms, with monitoring focused on results instead of receipts. In this context, it is crucial to understand more precisely how the RRF model differs from that of the CPF and how the RRF model has been working out in practice, in order to draw lessons for the future of the CPF; that is the goal of this paper. This expert paper has been sponsored by DG REGIO of the European Commission.

    Making Connections: A Handbook for Effective Formal Mentoring Programs in Academia

    This book, Making Connections: A Handbook for Effective Formal Mentoring Programs in Academia, makes a unique and needed contribution to the mentoring field, as it focuses solely on mentoring in academia. The handbook is a collaborative institutional effort between Utah State University’s (USU) Empowering Teaching Open Access Book Series and the Mentoring Institute at the University of New Mexico (UNM). The book is available as (a) an e-book through Pressbooks, (b) a downloadable PDF on USU’s Open Access Book Series website, and (c) a print version available for purchase on the USU Empower Teaching Open Access page and on Amazon.

    Summer/Fall 2023


    On Reducing Undesirable Behavior in Deep Reinforcement Learning Models

    Deep reinforcement learning (DRL) has proven extremely useful in a large variety of application domains. However, even successful DRL-based software can exhibit highly undesirable behavior. This is because DRL training is based on maximizing a reward function, which typically captures general trends but cannot precisely capture, or rule out, certain behaviors of the system. In this paper, we propose a novel framework aimed at drastically reducing the undesirable behavior of DRL-based software while maintaining its excellent performance. In addition, our framework can assist in providing engineers with a comprehensible characterization of such undesirable behavior. Under the hood, our approach is based on extracting decision tree classifiers from erroneous state-action pairs, and then integrating these trees into the DRL training loop, penalizing the system whenever it performs an error. We provide a proof-of-concept implementation of our approach and use it to evaluate the technique on three significant case studies. We find that our approach can extend existing frameworks in a straightforward manner and incurs only a slight overhead in training time. Further, it incurs only a very slight hit to performance, or, in some cases, even improves it, while significantly reducing the frequency of undesirable behavior.
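
    To make the mechanism concrete, here is a minimal sketch of the general idea the abstract describes: fit a shallow decision tree on logged state-action pairs labeled acceptable versus erroneous, then use it inside the training loop to penalize flagged actions. This is an illustrative reconstruction, not the authors' implementation; the penalty constant, tree depth, and helper names are assumptions.

        # Minimal sketch (not the authors' code) of tree-based reward shaping.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        UNDESIRABLE_PENALTY = 1.0  # assumed penalty magnitude, a tunable choice

        def fit_error_tree(good_pairs, bad_pairs, depth=5):
            """Train a tree separating acceptable from erroneous (state, action) pairs."""
            X = np.vstack([good_pairs, bad_pairs])
            y = np.concatenate([np.zeros(len(good_pairs)), np.ones(len(bad_pairs))])
            # a shallow tree keeps the learned error characterization readable
            return DecisionTreeClassifier(max_depth=depth).fit(X, y)

        def shaped_reward(tree, state, action, reward):
            """Subtract a penalty whenever the tree predicts the pair is erroneous."""
            pair = np.concatenate([state, action]).reshape(1, -1)
            return reward - UNDESIRABLE_PENALTY * float(tree.predict(pair)[0])

    Calling shaped_reward at each environment step would replace the raw reward in the training loop, so the agent is penalized exactly when the tree flags its behavior as erroneous.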

    Frontiers of Humanity and Beyond: Towards new critical understandings of borders. Working Papers


    Taylor University Catalog 2023-2024

    The 2023-2024 academic catalog of Taylor University in Upland, Indiana.

    Solving Continuous Control via Q-learning

    While there has been substantial success in solving continuous control with actor-critic methods, simpler critic-only methods such as Q-learning find limited application in the associated high-dimensional action spaces. However, most actor-critic methods come at the cost of added complexity: heuristics for stabilisation, compute requirements and wider hyperparameter search spaces. We show that a simple modification of deep Q-learning largely alleviates these issues. By combining bang-bang action discretization with value decomposition, thereby framing single-agent control as cooperative multi-agent reinforcement learning (MARL), this simple critic-only approach matches the performance of state-of-the-art continuous actor-critic methods when learning from features or pixels. We extend classical bandit examples from cooperative MARL to provide intuition for how decoupled critics leverage state information to coordinate joint optimization, and demonstrate surprisingly strong performance across a variety of continuous control tasks.
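
    A rough sketch of the critic structure this abstract describes, under assumed shapes and layer sizes; this is an illustration of bang-bang discretization plus value decomposition, not the authors' code. Each action dimension gets its own two-way Q-head over the bang-bang actions {-1, +1}, and the joint value is decomposed as the mean of per-dimension values.

        # Illustrative sketch only; obs_dim, act_dim, hidden are assumptions.
        import torch
        import torch.nn as nn

        class DecoupledBangBangCritic(nn.Module):
            """One two-way Q-head per action dimension; the joint value is the
            mean of per-dimension values, in the spirit of value-decomposition MARL."""
            def __init__(self, obs_dim, act_dim, hidden=256):
                super().__init__()
                self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
                self.heads = nn.Linear(hidden, act_dim * 2)  # two bang-bang actions per dim
                self.act_dim = act_dim

            def forward(self, obs):
                q = self.heads(self.trunk(obs))      # (batch, act_dim * 2)
                return q.view(-1, self.act_dim, 2)   # (batch, act_dim, 2)

            def joint_value(self, obs):
                # decomposed value: average the per-dimension greedy Q-values
                return self.forward(obs).max(dim=-1).values.mean(dim=-1)

        def greedy_action(critic, obs):
            # choose -1 or +1 independently for every action dimension
            idx = critic(obs).argmax(dim=-1)         # (batch, act_dim) in {0, 1}
            return idx.float() * 2.0 - 1.0           # map {0, 1} -> {-1, +1}

    Because each head only ranks two actions, the argmax stays tractable even when the joint discretized action space would be exponentially large.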

    TGRL: An Algorithm for Teacher Guided Reinforcement Learning

    Learning from rewards (i.e., reinforcement learning or RL) and learning to imitate a teacher (i.e., teacher-student learning) are two established approaches for solving sequential decision-making problems. To combine the benefits of these different forms of learning, it is common to train a policy to maximize a combination of reinforcement and teacher-student learning objectives. However, lacking a principled method to balance these objectives, prior work relied on heuristics and problem-specific hyperparameter searches. We present a principled approach, along with an approximate implementation, for dynamically and automatically balancing when to follow the teacher and when to use rewards. The main idea is to adjust the importance of teacher supervision by comparing the agent's performance to the counterfactual scenario of the agent learning without teacher supervision, from rewards alone. If using teacher supervision improves performance, its importance is increased; otherwise, it is decreased. Our method, Teacher Guided Reinforcement Learning (TGRL), outperforms strong baselines across diverse domains without hyperparameter tuning.
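
    A hedged sketch of the balancing idea as stated in the abstract: raise the weight on the teacher-student term when following the teacher outperforms the reward-only counterfactual, and lower it otherwise. The update rule, step size, bounds, and the two return estimates are illustrative assumptions, not TGRL's exact procedure.

        # Illustrative coefficient update; not TGRL's exact implementation.
        def update_teacher_weight(weight, return_with_teacher, return_reward_only,
                                  step=0.01, lo=0.0, hi=10.0):
            """Nudge the teacher-supervision coefficient toward whichever regime helps."""
            if return_with_teacher > return_reward_only:
                weight += step   # teacher supervision is helping: rely on it more
            else:
                weight -= step   # reward-only learning does better: rely on it less
            return min(max(weight, lo), hi)

        # per-update combined objective (sketch):
        #   loss = rl_loss + weight * teacher_student_loss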