
    Human-machine conversations to support multi-agency missions

    In domains such as emergency response, environmental monitoring, policing and security, sensor and information networks are deployed to assist human users across multiple agencies to conduct missions at or near the 'front line'. These domains present challenging problems in terms of human-machine collaboration: human users need to task the network to help them achieve mission objectives, while humans (sometimes the same individuals) are also sources of mission-critical information. We propose a natural language-based conversational approach to supporting human-machine working in mission-oriented sensor networks. We present a model for human-machine and machine-machine interactions in a realistic mission context, and evaluate the model using an existing surveillance mission scenario. The model supports the flow of conversations from full natural language to a form of Controlled Natural Language (CNL) amenable to machine processing and automated reasoning, including high-level information fusion tasks. We introduce a mechanism for presenting the gist of verbose CNL expressions in a more convenient form for human users. We show how the conversational interactions supported by the model include requests for expansions and explanations of machine-processed information.
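
    As a rough illustration of the gisting mechanism described above, the sketch below collapses a verbose Controlled-English-style sentence into a short human-readable gist. It is a minimal sketch only: the sentence pattern, the property names (colour, location), and the gist template are assumptions made for illustration, not the paper's actual CNL grammar or gisting rules.

```python
import re

# Hypothetical CE-like patterns; the paper's real CNL grammar is richer.
CE_PATTERN = re.compile(r"there is a (?P<concept>[\w ]+?) named (?P<name>\w+)")
PROP_PATTERN = re.compile(r"has '(?P<value>[^']+)' as (?P<prop>\w+)")

def gist(ce_sentence: str) -> str:
    """Collapse a verbose CNL expression into a short gist for human users."""
    m = CE_PATTERN.search(ce_sentence)
    if not m:
        return ce_sentence  # no recognizable structure: fall back to the input
    props = {p.group("prop"): p.group("value")
             for p in PROP_PATTERN.finditer(ce_sentence)}
    parts = [props.get("colour", ""), m.group("concept"), m.group("name")]
    text = " ".join(w for w in parts if w)
    location = props.get("location", "")
    return f"{text} at {location}" if location else text

print(gist("there is a vehicle named v1 that has 'white' as colour "
           "and has 'main gate' as location"))
# -> white vehicle v1 at main gate
```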

    Conversational Sensing

    Recent developments in sensing technologies, mobile devices and context-aware user interfaces have made it possible to represent information fusion and situational awareness as a conversational process among actors - human and machine agents - at or near the tactical edges of a network. Motivated by use cases in the domain of security, policing and emergency response, this paper presents an approach to information collection, fusion and sense-making based on the use of natural language (NL) and controlled natural language (CNL) to support richer forms of human-machine interaction. The approach uses a conversational protocol to facilitate a flow of collaborative messages from NL to CNL and back again in support of interactions such as: turning eyewitness reports from human observers into actionable information (from both trained and untrained sources); fusing information from humans and physical sensors (with associated quality metadata); and assisting human analysts to make the best use of available sensing assets in an area of interest (governed by management and security policies). CNL is used as a common formal knowledge representation for both machine and human agents to support reasoning, semantic information fusion and generation of rationale for inferences, in ways that remain transparent to human users. Examples are provided of various alternative styles for user feedback, including NL, CNL and graphical feedback. A pilot experiment with human subjects shows that a prototype conversational agent is able to gather usable CNL information from untrained human subjects.
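
    The abstract above mentions fusing reports from humans and physical sensors with associated quality metadata. One plausible reading, sketched below, is a noisy-OR combination in which each report is discounted by the reliability of its source (e.g., trained versus untrained observers). The Report fields, the example reliability values, and the noisy-OR rule are illustrative assumptions, not the paper's fusion model.

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str        # e.g. "acoustic-sensor-3" or "observer-jones"
    claim: str         # e.g. "vehicle present at main gate"
    confidence: float  # the source's own confidence, in [0, 1]
    reliability: float # quality metadata: trained observer > untrained

def fuse(reports: list[Report], claim: str) -> float:
    """Noisy-OR fusion: combine independent reports supporting one claim,
    discounting each report by its source's reliability."""
    disbelief = 1.0
    for r in reports:
        if r.claim == claim:
            disbelief *= 1.0 - r.confidence * r.reliability
    return 1.0 - disbelief

reports = [
    Report("acoustic-sensor-3", "vehicle present at main gate", 0.7, 0.9),
    Report("untrained-observer", "vehicle present at main gate", 0.9, 0.5),
]
print(f"fused belief: {fuse(reports, 'vehicle present at main gate'):.2f}")
# -> fused belief: 0.80
```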

    Workshop proceedings: Information Systems for Space Astrophysics in the 21st Century, volume 1

    The Astrophysical Information Systems Workshop was one of the three Integrated Technology Planning workshops. Its objectives were to develop an understanding of future mission requirements for information systems, the potential role of technology in meeting these requirements, and the areas in which NASA investment might have the greatest impact. Workshop participants were briefed on the astrophysical mission set with an emphasis on those missions that drive information systems technology, the existing NASA space-science operations infrastructure, and the ongoing and planned NASA information systems technology programs. Program plans and recommendations were prepared in five technical areas: Mission Planning and Operations; Space-Borne Data Processing; Space-to-Earth Communications; Science Data Systems; and Data Analysis, Integration, and Visualization.

    The Cyborg Astrobiologist: Testing a Novelty-Detection Algorithm on Two Mobile Exploration Systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah

    (ABRIDGED) In previous work, two platforms have been developed for testing computer-vision algorithms for robotic planetary exploration (McGuire et al. 2004b, 2005; Bartolo et al. 2007). The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone-camera platform has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon color, (ii) integrate a field-capable digital microscope on the wearable-computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone-camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain, consisting of various rock types and colors, to test this algorithm. The algorithm robustly recognized previously-observed units by their color, while requiring only a single image or a few images to learn colors as familiar, demonstrating its fast learning capability. (28 pages, 12 figures; accepted for publication in the International Journal of Astrobiology.)
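
    To make the color-based novelty detection concrete, here is a minimal sketch of one way a Hopfield-style associative memory can flag novel colors: familiar colors are stored as bipolar patterns by Hebbian learning, and a probe color is flagged as novel when its Hopfield energy stays high, i.e. it falls into no learned attractor. The 6-bit color code, the energy criterion, and the zero threshold are illustrative assumptions, not the encoding or decision rule used in the paper.

```python
import numpy as np

def encode_colour(rgb):
    """Quantize an (r, g, b) triple (0-255) into a bipolar pattern:
    2 bits per channel -> 6 elements of +/-1."""
    bits = []
    for c in rgb:
        level = min(c // 64, 3)             # 4 quantization levels per channel
        bits += [1.0 if level & 2 else -1.0,
                 1.0 if level & 1 else -1.0]
    return np.array(bits)

class HopfieldNoveltyDetector:
    def __init__(self, n=6):
        self.W = np.zeros((n, n))

    def learn(self, rgb):
        """Hebbian storage of a familiar colour pattern."""
        p = encode_colour(rgb)
        self.W += np.outer(p, p)
        np.fill_diagonal(self.W, 0.0)       # no self-connections

    def is_novel(self, rgb, threshold=0.0):
        """Stored patterns sit in low-energy wells; a colour whose energy
        exceeds `threshold` has not been learned as familiar."""
        s = encode_colour(rgb)
        return -0.5 * s @ self.W @ s > threshold

det = HopfieldNoveltyDetector()
det.learn((180, 120, 80))            # sandstone-like outcrop colour
print(det.is_novel((170, 110, 90)))  # similar colour  -> False (familiar)
print(det.is_novel((40, 200, 60)))   # lichen green    -> True  (novel)
```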

    Coalitions of things: supporting ISR tasks via Internet of Things approaches

    In the wake of rapid maturing of Internet of Things (IoT) approaches and technologies in the commercial sector, the IoT is increasingly seen as a key 'disruptive' technology in military environments. Future operational environments are expected to be characterized by a lower proportion of human participants and a higher proportion of autonomous and semi-autonomous devices. This view is reflected in both US 'third offset' and UK 'information age' thinking and is likely to have a profound effect on how multinational coalition operations are conducted in the future. Much of the initial consideration of IoT adoption in the military domain has rightly focused on security concerns, reflecting similar cautions in the early era of electronic commerce. As IoT approaches mature, this initial technical focus is likely to shift to considerations of interactivity and policy. In this paper, rather than considering the broader range of IoT applications in the military context, we focus on roles for IoT concepts and devices in future intelligence, surveillance and reconnaissance (ISR) tasks, drawing on experience in sensor-mission resourcing and human-computer collaboration (HCC) for ISR. We highlight the importance of low training overheads in the adoption of IoT approaches, and the need to balance proactivity and interactivity (push vs pull modes). As with sensing systems over the last decade, we emphasize that, to be valuable in ISR tasks, IoT devices will need a degree of mission-awareness in addition to an ability to self-manage their limited resources (power, memory, bandwidth, computation, etc.). In coalition operations, the management and potential sharing of IoT devices and systems among partners (e.g., in cross-coalition tactical-edge ISR teams) becomes a key issue due to heterogeneous factors such as language, policy, procedure and doctrine. Finally, we briefly outline a platform that we have developed in order to experiment with human-IoT teaming on ISR tasks, in both physical and virtual settings.
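
    The abstract's points about mission-awareness, resource self-management, and the push-vs-pull balance can be sketched as a simple device-side policy, shown below. The relevance check, the battery-scaled threshold, and the three outcomes are assumptions invented for illustration; the paper does not specify this scheme.

```python
from dataclasses import dataclass

@dataclass
class Mission:
    interests: set[str]          # e.g. {"vehicle", "person"}

@dataclass
class Device:
    battery_pct: float           # remaining battery, 0-100
    mission: Mission

    def handle_detection(self, label: str, confidence: float) -> str:
        """Decide between proactive push, store-for-pull, and drop."""
        relevant = label in self.mission.interests
        # Push only mission-relevant, confident detections, becoming more
        # conservative as the battery drains (resource self-management).
        if relevant and confidence * (self.battery_pct / 100) > 0.5:
            return "push"        # transmit immediately (proactive mode)
        if relevant:
            return "store"       # keep locally, report if queried (pull mode)
        return "drop"            # irrelevant to the mission: save bandwidth

d = Device(battery_pct=80, mission=Mission(interests={"vehicle"}))
print(d.handle_detection("vehicle", 0.9))  # -> push  (0.72 > 0.5)
print(d.handle_detection("vehicle", 0.4))  # -> store (0.32 <= 0.5)
print(d.handle_detection("animal", 0.9))   # -> drop
```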

    The Roles of Design and Cybernetics for Planetary Probe Missions

    Planetary probe missions, as part of an overall space exploration strategy, have helped us to experience and learn about the planets and moons in our solar system that have sizable atmospheres. These engineering and scientific achievements have contributed to our evolving understanding of the universe around us. While the natural phenomena of the world are independent of humanity, their scientific exploration is part of the human experience. The humanities provide reflections from an anthropocentric point of view, while design requires active participation by humans. Thus, from the three categories of Sciences, Humanities and Design, we can place scientists into the Science category, while engineers, designers, and other practitioners who create novel parts, systems, artifacts, and processes belong to the Design category. In engineering, once the initial needs (usability or desirability) are identified, technology goals and requirements (feasibility) are given, and the resources (viability) are provided, a project is developed in a mostly linear fashion. Complex multi-part systems and mission architectures require systems thinking and integrated thinking, where iterative methods are used. In a cybernetic sense, feedback is provided throughout project execution to the engineers and project managers (regulators). In a linear engineering and management framework, the gathered information allows the regulator to make the adjustments required to achieve the stated technical development goals within the available resources. At a higher strategic level within the organizational hierarchy, additional misaligned factors can contribute to a project, turning a linear engineering development into an incomplete problem with changing requirements and no clear solution; this is termed a wicked problem. In comparison, design is a non-linear discipline in which feedback broadens the regulator's understanding and knowledge (variety), allowing the designer to identify new, previously unseen options from an added, dynamic anthropocentric perspective. Design Thinking not only accounts for usability, feasibility and viability, but harmonizes them in a human-centered way. Designing items for human spaceflight, which we call humanly space objects, requires special considerations, yet some of these could be applied to planetary probe missions as well. Through multiple divergence and convergence design phases, options are created to understand the problem at hand, from which the root problem is identified. Subsequently, design trade options are created, and the perceived best approach is selected for development. In this paper we discuss the generalized category and various aspects of planetary probe missions through the lens of cybernetics and non-linear design, as applied to mission architectures, system design, operational processes, and ways of communicating findings to various stakeholders throughout all development and mission phases. We also discuss how understanding and leveraging cybernetics and human-centered design can enhance current practices and innovative space technology developments, which are still dominated by engineering, technology and management approaches.
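
    The cybernetic framing above, a regulator using feedback to steer development toward fixed goals within a resource budget, can be illustrated with a toy proportional-feedback loop. All quantities, the proportional rule, and the cycle count are invented for illustration; the paper presents no such model.

```python
def regulate(goal: float, budget: float, cycles: int = 20,
             gain: float = 0.5) -> float:
    """Each review cycle, spend effort proportional to the remaining gap,
    stopping when the goal is met or the budget is exhausted."""
    progress = spent = 0.0
    for _ in range(cycles):              # periodic project reviews
        error = goal - progress          # feedback: measured shortfall
        effort = min(gain * error, budget - spent)
        if effort <= 0:
            break                        # goal reached or budget gone
        progress += effort
        spent += effort
    return progress

print(f"achieved {regulate(goal=100, budget=120):.1f} of 100 units")
```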

    Flying Unmanned Aircraft: A Pilot's Perspective

    The National Aeronautics and Space Administration (NASA) is pioneering various Unmanned Aircraft System (UAS) technologies and procedures which may enable routine access to the National Airspace System (NAS), with an aim toward the Next Gen NAS. These tools will aid in the development of technologies and integrated capabilities that will enable high-value missions for science, security, and defense, and open the door to low-cost, extreme-duration, stratospheric flight. A century of aviation evolution has resulted in accepted standards and best practices in the design of human-machine interfaces, the displays and controls of which serve to optimize safe and efficient flight operations and situational awareness. The current proliferation of non-standard, aircraft-specific flight crew interfaces in UAS, coupled with the inherent limitations of operating UAS without in-situ sensory input and feedback (aural, visual, and vestibular cues), has increased the risk of mishaps associated with the design of the "cockpit." Examples of current non-standard or sub-standard design features range from the "annoying" and "inefficient" to those that are difficult to manipulate or interpret in a timely manner, and to those that are "burdensome" and "unsafe." A concerted effort is required to establish best practices and standards for human-machine interfaces, for the pilot as well as the air traffic controller. In addition, evolving roles, responsibilities, knowledge, and skill sets may require redefining the terms "pilot" and "air traffic controller" with respect to operating UAS, especially in the Next Gen NAS. The knowledge, skill sets, training, and qualification standards for UAS operations must be established, and must reflect the aircraft-specific human-machine interfaces and control methods. NASA's recent experience flying its MQ-9 Ikhana in the NAS for extended durations has enabled both NASA and the FAA to realize the full potential of UAS, as well as to understand the implications of current limitations. Ikhana is a Predator-B/Reaper UAS, built by General Atomics Aeronautical Systems, Inc., and modified for research. Since 2007, the aircraft has been flown seasonally with a wing-mounted pod containing an infrared scanner, used to provide real-time wildfire geo-location data to various fire-fighting agencies in the western U.S. The multi-agency effort included an extensive process to obtain flight clearance from the FAA to operate under special provisions, given that UAS in general do not fully comply with current airspace regulations (e.g., sense-and-avoid requirements).