
    Trustworthy maps

    Maps get used for decision making about the world's most pressing problems (e.g., climate change, refugee crises, biodiversity loss, rising inequality, pandemic disease). Although maps have historically been a trusted source of information, changes in society (e.g., lower levels of trust in decision makers) and in mapmaking technologies and practices (e.g., anyone can now make their own maps) mean that we need to spend some time thinking about how, when, and why people trust maps and mapmaking processes. This is critically important if we want stakeholders to engage constructively with the information we present in maps, because they are unlikely to do so if they do not trust what they see. Here I outline three questions about trust and maps that I think need research attention. First, how can we focus map readers' attention on the trustworthiness of mapped data, especially if trustworthiness changes, as in the case of real-time data sources? Second, does presenting uncertainty information on maps affect the level of trust map readers have in the map, and if so, does trust vary depending on how the uncertainty information is presented? Finally, how does virality affect trust? Are viral maps less trusted? The time and resources required to develop a better understanding of how trust in maps might be changing will be repaid. The world needs good information to guide policy- and decision-making. Well-designed maps can help stakeholders to work together to solve problems, but only if they are trusted.

    Selection of Statistical Software for Solving Big Data Problems: A Guide for Businesses, Students, and Universities

    The need for analysts with expertise in big data software is becoming more apparent in today's society. Unfortunately, the demand for these analysts far exceeds the number available. A potential way to combat this shortage is to identify the software taught in colleges and universities. This article examines four data analysis software packages (Excel add-ins, SPSS, SAS, and R) and outlines the cost, training requirements, and statistical methods/tests/uses for each. It further explains implications for universities and future students.

    Barriers to Predictive Analytics Use for Policy Decision-Making Effectiveness in Turbulent Times: A Case Study of Fukushima Nuclear Accident

    Predictive analytics are data-driven software tools that draw on confirmed relationships between variables to predict future outcomes. Hence they may provide government with new analytical capabilities for enhancing policy decision-making effectiveness in turbulent environments. However, research on the use of predictive analytics systems is still lacking. Therefore, this study adapts an existing model of strategic decision-making effectiveness to examine government use of predictive analytics in turbulent times and to identify barriers to using information effectively to enhance policy decision-making effectiveness. We use a case study approach to address two research questions in the context of the 2011 Fukushima nuclear accident. Our study found varying levels of proactive use of the SPEEDI predictive analytics system during the escalating nuclear reactor meltdowns, both between Japan's central government agencies and between the central and the state government levels. Using the model, we argue that procedural rationality and political behavior can explain some of the observed variations.

    Towards Explainability of UAV-Based Convolutional Neural Networks for Object Classification

    […] of autonomous systems using trust and trustworthiness is the focus of Autonomy Teaming and TRAjectories for Complex Trusted Operational Reliability (ATTRACTOR), a new NASA Convergent Aeronautical Solutions (CAS) Project. One critical research element of ATTRACTOR is explainability of the decision-making across relevant subsystems of an autonomous system. The ability to explain why an autonomous system makes a decision is needed to establish a basis of trustworthiness to safely complete a mission. Convolutional Neural Networks (CNNs) are popular visual object classifiers that have achieved high levels of classification performance without clear insight into the mechanisms of their internal layers and features. To explore the explainability of the internal components of CNNs, we reviewed three feature visualization methods in a layer-by-layer approach using aviation-related images as inputs. Our approach is to analyze the key components of a classification event in order to generate component labels for features of the classified image at different layer depths. For example, an airplane has wings, engines, and landing gear. These parts could potentially be identified somewhere in the hidden layers during classification, and the resulting descriptive labels could be provided to a human or machine teammate while conducting a shared mission, to engender trust. Each descriptive feature may also be decomposed into a combination of primitives such as shapes and lines. We expect that knowing the combination of shapes and parts that create a classification will enable trust in the system and insight into creating better structures for the CNN.
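The abstract's idea that CNN features decompose into primitives such as lines and shapes can be illustrated with a minimal NumPy sketch (not from the paper; the image and kernel here are illustrative assumptions): a single hand-crafted convolution kernel responds only where a vertical edge is present, just as an early CNN layer's learned filters do.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dot product of the kernel with one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6x6 image: dark left half, bright right half (a vertical edge).
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Hand-crafted vertical-edge detector, standing in for a learned
# early-layer CNN feature (an assumption for illustration).
vertical_edge = np.array([[-1.0, 1.0],
                          [-1.0, 1.0]])

feature_map = conv2d(image, vertical_edge)
print(feature_map)  # nonzero (value 2.0) only in the column spanning the edge
```

Visualizing such feature maps layer by layer is one way to attach human-readable labels ("edge", "wing-like shape") to a classification, in the spirit of the explainability goal described above.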