39,449 research outputs found

    Real-Time Context-Aware Microservice Architecture for Predictive Analytics and Smart Decision-Making

    The impressive evolution of the Internet of Things and the great amount of data flowing through its systems provide an inspiring scenario for Big Data analytics and for advantageous real-time context-aware predictions and smart decision-making. However, this requires a scalable system for constant streaming processing, one also capable of making decisions and taking actions based on the resulting predictions. This paper proposes a scalable architecture that provides real-time context-aware actions based on predictive streaming processing of data, as an evolution of a previously presented event-driven service-oriented architecture which already permitted the context-aware detection and notification of relevant data. For this purpose, we have defined and implemented a microservice-based architecture which provides real-time context-aware actions based on predictive streaming processing of data. As a result, our architecture has been enhanced twofold: on the one hand, it has been supplied with reliable predictions through the use of predictive analytics and complex event processing techniques, which permit the notification of relevant context-aware information ahead of time. On the other hand, it has been refactored towards a microservice architecture pattern, greatly improving its maintainability and evolution. The architecture's performance has been evaluated with an air quality case study.
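    The abstract's core loop (score a data stream with a predictive model, then trigger a context-aware action when the forecast crosses a rule's threshold) can be sketched in miniature. Everything below is an illustrative assumption, not the paper's implementation: the window size, the toy linear-extrapolation predictor, and the PM2.5 alert level are all hypothetical.

    ```python
    # Hypothetical sketch: a streaming consumer scores each air-quality
    # reading with a simple predictor and fires a context-aware action
    # when the *forecast* (not the current value) exceeds a threshold,
    # mirroring the "notification ahead of time" idea.
    from collections import deque

    WINDOW = 3          # readings per sliding window (assumed)
    THRESHOLD = 50.0    # hypothetical PM2.5 alert level

    def predict_next(window):
        """Toy predictor: linear extrapolation over the window."""
        if len(window) < 2:
            return window[-1]
        slope = (window[-1] - window[0]) / (len(window) - 1)
        return window[-1] + slope

    def process_stream(readings, notify):
        """CEP-style rule: act before the threshold is actually crossed."""
        window = deque(maxlen=WINDOW)
        for value in readings:
            window.append(value)
            forecast = predict_next(window)
            if forecast > THRESHOLD:
                notify(f"Predicted PM2.5 {forecast:.1f} exceeds {THRESHOLD}")

    alerts = []
    process_stream([30.0, 40.0, 48.0], alerts.append)
    ```

    In a microservice decomposition along the lines the abstract describes, the predictor and the rule engine would be separate services connected by a message broker; here they are collapsed into one process for brevity.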

    Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs), focussing on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (“right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
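    The pedagogical-versus-decompositional distinction the abstract draws can be illustrated with a toy sketch: the explainer treats an opaque model purely as an oracle, queries it on labelled samples, and fits a simple human-readable surrogate to its answers, never inspecting its internals. The black-box rule, feature names, and one-threshold surrogate below are all hypothetical stand-ins, not anything from the article.

    ```python
    # Hedged sketch of a "pedagogical" explanation: learn a model from
    # outside rather than taking it apart.

    def black_box(income, debt):
        # Opaque decision rule, unknown to the explainer (hypothetical).
        return "approve" if income - 2 * debt > 10 else "deny"

    def learn_surrogate(samples):
        """Fit a one-threshold rule on income alone, as a crude but
        interpretable approximation of the oracle's behaviour.
        Returns (income threshold, number of oracle answers matched)."""
        labelled = [(inc, black_box(inc, debt)) for inc, debt in samples]
        best = None
        for threshold in sorted({inc for inc, _ in labelled}):
            correct = sum(
                (label == "approve") == (inc >= threshold)
                for inc, label in labelled
            )
            if best is None or correct > best[1]:
                best = (threshold, correct)
        return best
    ```

    Because the surrogate only ever sees inputs and outputs, it sidesteps the intellectual-property worries the abstract mentions: nothing about the black box's internal weights or logic is disclosed, only an approximation of its input-output behaviour.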

    Foreword

    This Foreword provides an overview of Criminal Behavior and the Brain: When Law and Neuroscience Collide, a symposium hosted by the Fordham Law Review and cosponsored by the Fordham Law School Neuroscience and Law Center. While the field of neuroscience is vast (generally constituting “the branch of the life sciences that studies the brain and nervous system”), this symposium focused on the cutting-edge ties between neuroscience evidence and the different facets of criminal law. Such an intersection invited commentary from an expert group on a wide span of topics, ranging from the historical underpinnings between law and neuroscience to the treatment of young adults to the different roles of neuroscience in the context of sentencing, expert testimony, defenses, prediction, punishment, and rehabilitation, as well as the civil and criminal divide. These diverse subjects have an overarching theme in common: each pertains in some way to the criminal justice system’s effort to punish or rehabilitate more fairly and effectively.

    The Supreme Court as the Major Barrier to Racial Equality

    This Article suggests that the U.S. Supreme Court, through its decisions in cases alleging race discrimination, stands as a major barrier to racial equality in the United States. Several aspects of its decisions lead to this result. Between 1868 and 1954, the Equal Protection Clause of the Fourteenth Amendment, though interpreted to strike down a few blatant forms of de jure discrimination, allowed government to separate the races based on the “separate but equal” fiction. Beginning in 1954, Brown and a series of subsequent decisions attacked this fiction, and for a period of nearly twenty years, the Court was intent on eliminating the vestiges of segregation in the schools, approving broad remedial orders. This changed drastically beginning in 1974, when the Court began limiting the available remedies and relieving school systems of the burdens imposed by court orders. Around the same time, the Court decided that equal protection plaintiffs needed to show a discriminatory governmental purpose in order to trigger meaningful constitutional protection. This meant that facially neutral laws and practices with discriminatory effects were largely constitutional. Beginning with Bakke in 1978, the Court made it difficult, and eventually nearly impossible, for government to take affirmative steps designed to promote equality. A majority of the Court determined that invidious and benign racial classifications should be treated the same under the Equal Protection Clause, with both subjected to strict scrutiny. This completed the Court’s interpretation of the Fourteenth Amendment in a manner that makes it a real barrier to racial equality: government is free to engage in invidious discrimination as long as it masks the real purpose, and affirmative steps designed by government to promote equality will be struck down as a violation of equal protection. Ironically, the constitutional amendment designed to promote freedom and equality for the newly freed slaves now stands in the way of true freedom and equality.

    Intelligent Coordination and Automation for Smart Home Accessories

    Smart home accessories are rapidly becoming more popular. Although many companies are making devices to take advantage of this market, most of the created smart devices are actually unintelligent. Currently, these smart home devices require meticulous, tedious configuration to get any sort of enhanced usability over their analog counterparts. We propose building a general model using machine learning and data science to automatically learn a user’s smart accessory usage and predict device configurations. We have identified the requirements, collected data, recognized the risks, implemented the system, and met the goals we set out to accomplish.
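    The "learn the user's usage to predict configuration" idea can be sketched with a deliberately simple frequency model: log (device, hour, setting) events and predict the most common setting for that device at that hour. The device names, settings, and model are hypothetical illustrations; a real system would use a far richer feature set and learner.

    ```python
    # Hypothetical sketch: predict a smart accessory's configuration from
    # the user's own logged usage, with no manual rule-writing.
    from collections import Counter, defaultdict

    class UsageModel:
        def __init__(self):
            # (device, hour) -> counts of observed settings
            self.history = defaultdict(Counter)

        def observe(self, device, hour, setting):
            """Record one usage event from the accessory's log."""
            self.history[(device, hour)][setting] += 1

        def predict(self, device, hour):
            """Return the most frequently observed setting, or None
            when there is no history for this device/hour."""
            counts = self.history[(device, hour)]
            return counts.most_common(1)[0][0] if counts else None

    model = UsageModel()
    for setting in ["68F", "68F", "72F"]:
        model.observe("thermostat", 7, setting)
    ```

    Returning None for unseen contexts is a design choice: the accessory falls back to its manual configuration rather than guessing, which keeps a mispredicting model from being worse than no model at all.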

    Responsibility for implicit bias

    Research programs in empirical psychology from the past two decades have revealed implicit biases. Although implicit processes are pervasive, unavoidable, and often useful aspects of our cognitions, they may also lead us into error. The most problematic forms of implicit cognition are those which target social groups, encoding stereotypes or reflecting prejudicial evaluative hierarchies. Despite intentions to the contrary, implicit biases can influence our behaviours and judgements, contributing to patterns of discriminatory behaviour. These patterns of discrimination are obviously wrong and unjust. But in remedying such wrongs, one question to be addressed concerns responsibility for implicit bias. Unlike some paradigmatic forms of wrongdoing, such discrimination is often unintentional, unendorsed, and perpetrated without awareness; and the harms are particularly damaging because they are cumulative and collectively perpetrated. So, what are we to make of questions of responsibility? In this article, we outline some of the main lines of recent philosophical thought, which address questions of responsibility for implicit bias. We focus on (a) the kind of responsibility at issue; (b) revisionist versus nonrevisionist conceptions of responsibility as applied to implicit bias; and (c) individual, institutional, and collective responsibility for implicit bias.
