
    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments---including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers---is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. Comment: Major revision, to appear in SIAM Review.

    Algorithmic Jim Crow

    This Article contends that current immigration- and security-related vetting protocols risk promulgating an algorithmically driven form of Jim Crow. Under the “separate but equal” discrimination of a historic Jim Crow regime, state laws required mandatory separation and discrimination on the front end, while purportedly establishing equality on the back end. In contrast, an Algorithmic Jim Crow regime allows for “equal but separate” discrimination. Under Algorithmic Jim Crow, equal vetting and database screening of all citizens and noncitizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting, and acting upon vetting and screening systems in ways that result in a disparate impact.
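
    The regime the article describes hinges on disparate impact: screening that treats everyone identically on the surface while selecting groups at markedly different rates. Purely as an illustrative aside, and not as a method taken from the article, the sketch below applies the standard four-fifths heuristic from discrimination analysis to hypothetical screening records; all names and data are invented for the example.

        # Illustrative sketch only: flag groups whose screening selection rate
        # falls below 80% of the best-off group's rate (the "four-fifths rule").
        # The data and function names are hypothetical, not from the article.
        from collections import defaultdict

        def selection_rates(records):
            """records: iterable of (group, passed_screening) pairs."""
            passed, total = defaultdict(int), defaultdict(int)
            for group, ok in records:
                total[group] += 1
                passed[group] += int(ok)
            return {g: passed[g] / total[g] for g in total}

        def disparate_impact(records, threshold=0.8):
            rates = selection_rates(records)
            top = max(rates.values())
            # Ratio of each group's rate to the highest group's rate; a value
            # below the threshold suggests a potentially disparate impact.
            return {g: r / top for g, r in rates.items() if r / top < threshold}

        screening = [("A", True), ("A", True), ("A", False),
                     ("B", True), ("B", False), ("B", False)]
        print(disparate_impact(screening))  # -> {'B': 0.5}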

    Towards Design Principles for Data-Driven Decision Making: An Action Design Research Project in the Maritime Industry

    Data-driven decision making (DDD) refers to organizational decision-making practices that emphasize the use of data and statistical analysis instead of relying on human judgment only. Various empirical studies provide evidence for the value of DDD, both at the individual decision-maker level and at the organizational level. Yet the path from data to value is not always an easy one, and various organizational and psychological factors mediate and moderate the translation of data-driven insights into better decisions and, subsequently, effective business actions. The current body of academic literature on DDD lacks prescriptive knowledge on how to successfully employ DDD in complex organizational settings. Against this background, this paper reports on an action design research study aimed at designing and implementing IT artifacts for DDD at one of the largest ship engine manufacturers in the world. Our main contribution is a set of design principles highlighting, besides decision quality, the importance of model comprehensibility, domain knowledge, and actionability of results.

    'Datafication': Making sense of (big) data in a complex world

    This is a pre-print of an article published in the European Journal of Information Systems. The definitive publisher-authenticated version is available at the link below. Copyright © 2013 Operational Research Society Ltd. No abstract available (Editorial).

    Towards a big data regulation based on social and ethical values. The guidelines of the Council of Europe

    This article discusses the main provisions of the Guidelines on big data and data protection recently adopted by the Consultative Committee of the Council of Europe. After an analysis of the changes in data processing caused by the use of predictive analytics, the author outlines the impact assessment model suggested by the Guidelines to tackle the potential risks of big data applications. This procedure of risk assessment represents a key element in addressing the challenges of big data, since it goes beyond the traditional data protection impact assessment by encompassing the social and ethical consequences of the use of data, which are the most important and critical aspects of the future algorithmic society.

    Deploying Machine Learning for a Sustainable Future

    To meet the environmental challenges of a warming planet and an increasingly complex, high-tech economy, government must become smarter about how it makes policies and deploys its limited resources. It specifically needs to build a robust capacity to analyze large volumes of environmental and economic data by using machine-learning algorithms to improve regulatory oversight, monitoring, and decision-making. Three challenges can be expected to drive the need for algorithmic environmental governance: more problems, less funding, and growing public demands. This paper explains why algorithmic governance will prove pivotal in meeting these challenges, but it also presents four likely obstacles that environmental agencies will need to surmount if they are to take full advantage of big data and predictive analytics. First, agencies must invest in upgrading their information technology infrastructure to take advantage of computational advances. Relatively modest technology investments, if made wisely, could support the use of algorithmic tools that yield substantial savings in other administrative costs. Second, agencies will need to confront emerging concerns about privacy, fairness, and transparency associated with their reliance on big data and algorithmic analyses. Third, government agencies will need to strengthen their human capital so that they have personnel who understand how to use machine learning responsibly. Finally, to work well, algorithms will need clearly defined objectives. Environmental officials will need to continue to engage with elected officials, members of the public, environmental groups, and industry representatives to forge clarity and consistency over how various risk and regulatory objectives should be specified in machine-learning tools. Overall, with thoughtful planning, adequate resources, and responsible management, governments should be able to overcome the obstacles that stand in the way of using artificial intelligence to improve environmental sustainability. If policy makers and the public recognize the need for smarter governance, they can start to tackle the obstacles that stand in its way and better position society for a more sustainable future.

    Data analytics and algorithms in policing in England and Wales: Towards a new policy framework

    RUSI was commissioned by the Centre for Data Ethics and Innovation (CDEI) to conduct an independent study into the use of data analytics by police forces in England and Wales, with a focus on algorithmic bias. The primary purpose of the project is to inform CDEI’s review of bias in algorithmic decision-making, which is focusing on four sectors, including policing, and working towards a draft framework for the ethical development and deployment of data analytics tools for policing. This paper focuses on advanced algorithms used by the police to derive insights, inform operational decision-making or make predictions. Biometric technology, including live facial recognition, DNA analysis and fingerprint matching, is outside the direct scope of this study, as are covert surveillance capabilities and digital forensics technology, such as mobile phone data extraction and computer forensics. However, because many of the policy issues discussed in this paper stem from general underlying data protection and human rights frameworks, these issues will also be relevant to other police technologies, and their use must be considered in parallel to the tools examined in this paper. The project involved engaging closely with senior police officers, government officials, academics, legal experts, regulatory and oversight bodies and civil society organisations. Sixty-nine participants took part in the research in the form of semi-structured interviews, focus groups and roundtable discussions. The project has revealed widespread concern across the UK law enforcement community regarding the lack of official national guidance for the use of algorithms in policing, with respondents suggesting that this gap should be addressed as a matter of urgency. Any future policy framework should be principles-based and complement existing police guidance in a ‘tech-agnostic’ way. Rather than establishing prescriptive rules and standards for different data technologies, the framework should establish standardised processes to ensure that data analytics projects follow recommended routes for the empirical evaluation of algorithms within their operational context and evaluate the project against legal requirements and ethical standards. The new guidance should focus on ensuring multi-disciplinary legal, ethical and operational input from the outset of a police technology project; a standard process for model development, testing and evaluation; a clear focus on the human–machine interaction and the ultimate interventions a data-driven process may inform; and ongoing tracking and mitigation of discrimination risk.

    A Review and Characterization of Progressive Visual Analytics

    Progressive Visual Analytics (PVA) has gained increasing attention over the past years. It brings the user into the loop during otherwise long-running and non-transparent computations by producing intermediate partial results. These partial results can be shown to the user for early and continuous interaction with the emerging end result even while it is still being computed. Yet as clear-cut as this fundamental idea seems, the existing body of literature puts forth various interpretations and instantiations that have created a research domain of competing terms, various definitions, as well as long lists of practical requirements and design guidelines spread across different scientific communities. This makes it more and more difficult to get a succinct understanding of PVA’s principal concepts, let alone an overview of this increasingly diverging field. The review and discussion of PVA presented in this paper address these issues and provide (1) a literature collection on this topic, (2) a conceptual characterization of PVA, as well as (3) a consolidated set of practical recommendations for implementing and using PVA-based visual analytics solutions.
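
    To make the core mechanism concrete: a progressive computation emits intermediate partial results rather than blocking until the final answer is ready. The minimal sketch below is an illustrative assumption, not an interface prescribed by the paper; it computes a running mean that yields a refinable estimate after each chunk, which a visualization layer could render while the computation continues.

        # Minimal sketch of progressive computation: yield early, refinable
        # partial results so the user stays in the loop during a long run.
        # Chunk size and names are illustrative choices, not from the paper.
        from typing import Iterable, Iterator, Tuple

        def progressive_mean(stream: Iterable[float], chunk: int = 1000) -> Iterator[Tuple[int, float]]:
            total, n = 0.0, 0
            for x in stream:
                total += x
                n += 1
                if n % chunk == 0:
                    yield n, total / n   # partial result, usable while computing
            if n % chunk:
                yield n, total / n       # final result once the stream ends

        for seen, estimate in progressive_mean(range(10_000), chunk=2_500):
            print(f"after {seen} items, running mean = {estimate:.1f}")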