Disrupting Illicit Supply Networks: New Applications of Operations Research and Data Analytics to End Modern Slavery
Report from a 2017 National Science Foundation workshop on promising research directions for applications of operations research and data analytics toward the disruption of illicit supply networks like human trafficking. The workshop was funded by the NSF’s Operations Engineering (ENG) and the Law & Social Sciences Program (SBE) under grant # CMMI-1726895. The report addresses the opportunity to apply advances from the fields of operations research, management science, analytics, machine learning, and data science toward the development of disruptive interventions against illicit networks. Such an extension of the current research agenda for trafficking would move understanding of such dynamic systems from descriptive characterization and predictive estimation toward improved dynamic operational control. Bureau of Business Research
Low-Cost Air Quality Monitoring Tools: From Research to Practice (A Workshop Summary).
In May 2017, a two-day workshop was held in Los Angeles (California, U.S.A.) to gather practitioners who work with low-cost sensors used to make air quality measurements. The community of practice included individuals from academia, industry, non-profit groups, community-based organizations, and regulatory agencies. The group gathered to share knowledge developed from a variety of pilot projects in hopes of advancing the collective knowledge about how best to use low-cost air quality sensors. Panel discussion topics included: (1) best practices for deployment and calibration of low-cost sensor systems, (2) data standardization efforts and database design, (3) advances in sensor calibration, data management, and data analysis and visualization, and (4) lessons learned from research/community partnerships to encourage purposeful use of sensors and create change/action. Panel discussions summarized knowledge advances and project successes while also highlighting the questions, unresolved issues, and technological limitations that still remain within the low-cost air quality sensor arena.
Seafloor characterization using airborne hyperspectral co-registration procedures independent from attitude and positioning sensors
Remote-sensing technology and data-storage capabilities have advanced over the last decade to the point of commercial multi-sensor data collection. There is a constant need to characterize, quantify and monitor coastal areas for habitat research and coastal management. In this paper, we present work on seafloor characterization that uses hyperspectral imagery (HSI). The HSI data allows the operator to extend seafloor characterization from multibeam backscatter towards land and thus creates a seamless ocean-to-land characterization of the littoral zone.
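Seafloor characterization from hyperspectral imagery usually involves matching each pixel's spectrum against reference spectra of known bottom types. A toy sketch of one standard HSI technique, the spectral angle mapper (this is a generic illustration, not the paper's specific method, and the endmember spectra are invented):

```python
import math

# Illustrative sketch: classifying a seafloor pixel spectrum by the
# spectral angle mapper (SAM), a standard hyperspectral technique.
# Endmember spectra below are invented toy values over 4 bands.

def spectral_angle(a, b):
    """Angle in radians between two spectra treated as vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

endmembers = {
    "sand":     [0.30, 0.35, 0.40, 0.45],
    "seagrass": [0.05, 0.15, 0.10, 0.08],
}

pixel = [0.28, 0.33, 0.41, 0.44]  # unknown seafloor spectrum
label = min(endmembers, key=lambda k: spectral_angle(pixel, endmembers[k]))
print(label)  # closest endmember by spectral angle
```

SAM is attractive for this task because the angle between spectra is insensitive to overall brightness, which varies with water depth and illumination.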
Physical Multi-Layer Phantoms for Intra-Body Communications
This paper presents approaches to creating tissue-mimicking materials that can be used as phantoms for evaluating the performance of Body Area Networks (BAN). The main goal of the paper is to describe a methodology to create a repeatable experimental BAN platform that can be customized depending on the BAN scenario under test. Comparisons between different material compositions and percentages are shown, along with the resulting electrical properties of each mixture over the frequency range of interest for intra-body communications: 100 kHz to 100 MHz. Test results on a composite multi-layer sample are presented, confirming the efficacy of the proposed methodology. To date, this is the first paper that provides guidance on how to decide on concentration levels of ingredients, depending on the exact frequency range of operation and the desired matched electrical characteristics (conductivity vs. permittivity), to create multi-layer phantoms for intra-body communication applications.
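Choosing an ingredient concentration to hit a target electrical property, as the methodology above describes, amounts in the simplest case to interpolating over a table of measured mixtures. A hedged sketch of that lookup (the concentration-to-conductivity table is entirely hypothetical, not the paper's data):

```python
# Hypothetical sketch: pick a salt concentration that yields a target
# conductivity at a given frequency by linear interpolation over
# measured sample mixtures. All numbers are invented placeholders.

def concentration_for(target_sigma, table):
    """Interpolate concentration (%) for a target conductivity (S/m).
    `table` maps concentration -> measured conductivity, assumed monotone."""
    points = sorted(table.items(), key=lambda kv: kv[1])
    for (c0, s0), (c1, s1) in zip(points, points[1:]):
        if s0 <= target_sigma <= s1:
            t = (target_sigma - s0) / (s1 - s0)
            return c0 + t * (c1 - c0)
    raise ValueError("target conductivity outside measured range")

# Invented: NaCl concentration (%) -> conductivity (S/m) at one frequency
measured = {0.0: 0.02, 0.5: 0.30, 1.0: 0.55, 2.0: 1.05}

needed = concentration_for(0.425, measured)
print(round(needed, 3))
```

A real multi-layer phantom would repeat this per layer (skin, fat, muscle) and per frequency, trading off conductivity against permittivity as the abstract notes.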
The Intuitive Appeal of Explainable Machines
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.
Reinventing discovery learning: a field-wide research program
© 2017, Springer Science+Business Media B.V., part of Springer Nature. Whereas some educational designers believe that students should learn new concepts through explorative problem solving within dedicated environments that constrain key parameters of their search and then support their progressive appropriation of empowering disciplinary forms, others are critical of the ultimate efficacy of this discovery-based pedagogical philosophy, citing an inherent structural challenge of students constructing historically achieved conceptual structures from their ingenuous notions. This special issue presents six educational research projects that, while adhering to principles of discovery-based learning, are motivated by complementary philosophical stances and theoretical constructs. The editorial introduction frames the set of projects as collectively exemplifying the viability and breadth of discovery-based learning, even as these projects: (a) put to work a span of design heuristics, such as productive failure, surfacing implicit know-how, playing epistemic games, problem posing, or participatory simulation activities; (b) vary in their target content and skills, including building electric circuits, solving algebra problems, driving safely in traffic jams, and performing martial-arts maneuvers; and (c) employ different media, such as interactive computer-based modules for constructing models of scientific phenomena or mathematical problem situations, networked classroom collective “video games,” and intercorporeal master–student training practices. The authors of these papers consider the potential generativity of their design heuristics across domains and contexts.
The Art of Fault Injection
Classical Greek philosophers considered the foremost virtues to be temperance, justice, courage, and prudence. In this paper we relate these cardinal virtues to the correct methodological approaches that researchers should follow when setting up a fault injection experiment. With this work we try to understand where the "straightforward pathway" lies, in order to highlight those common methodological errors that deeply influence the coherence and the meaningfulness of fault injection experiments. Fault injection is like an art, where the success of the experiments depends on a very delicate balance between modeling, creativity, statistics, and patience.
Interoperability and information sharing
Communication and information sharing are two of the most pressing issues facing the public safety community today. In previous chapters of this volume, authors have made note of the changing public safety landscape as it relates to the need for enhanced information and intelligence sharing among a broad cross-section of organizations. Public safety organizations, particularly law enforcement agencies, have been quick to adopt emerging technologies that have allowed for greater communication and information sharing capacities. While substantial improvements have been made over the decades that enhanced communication and information sharing, many challenges remain in the move to seamlessly integrated communication capacities. The key challenge in the upcoming decades relates to the technical and cultural changes necessary to achieve integrated communication systems. There is no shortage of resources given to increasing the communications capacity of the public safety community, yet serious challenges remain in the degree of interoperability within and across public safety domains. Interoperability has in many ways become the defining issue in the arenas of communications and information sharing. This chapter will provide an overview of critical historical events that placed questions of interoperability and information sharing on the national agenda. The chapter will also provide an overview of national models for information sharing.
Border Security: Understanding Threats at U.S. Borders
[Excerpt] The United States confronts a wide array of threats at U.S. borders, ranging from terrorists who may have weapons of mass destruction, to transnational criminals smuggling drugs or counterfeit goods, to unauthorized migrants intending to live and work in the United States. Given this diversity of threats, how may Congress and the Department of Homeland Security (DHS) set border security priorities and allocate scarce enforcement resources?
In general, DHS’s answer to this question is organized around risk management, a process that involves risk assessment and the allocation of resources based on a cost-benefit analysis. This report focuses on the first part of this process by identifying border threats and describing a framework for understanding risks at U.S. borders. DHS employs models to classify threats as relatively high- or low-risk for certain planning and budgeting exercises and to implement certain border security programs. Members of Congress may wish to use similar models to evaluate the costs and benefits of potential border security policies and to allocate border enforcement resources. This report discusses some of the issues involved in modeling border-related threats.
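The risk-assessment step described above is often formalized as scoring each threat by expected consequence (likelihood times impact) and ranking the results to guide resource allocation. A toy illustration of that logic (the threat categories echo the report, but every number is invented and carries no analytic weight):

```python
# Toy sketch of likelihood-x-impact risk scoring for ranking threats.
# All likelihoods and impact values are invented placeholders.

threats = {
    "drug smuggling":    {"likelihood": 0.8, "impact": 6},
    "counterfeit goods": {"likelihood": 0.6, "impact": 3},
    "WMD smuggling":     {"likelihood": 0.01, "impact": 100},
}

# Expected consequence per threat
scores = {name: t["likelihood"] * t["impact"] for name, t in threats.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
for name in ranked:
    print(f"{name}: {scores[name]:.2f}")
```

Real border-risk models are far richer (low-probability, high-consequence threats like WMD resist simple expected-value ranking, as the toy numbers here show), but the multiply-and-rank structure is the core of the cost-benefit framing.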