32 research outputs found

    M. tuberculosis Reprograms Hematopoietic Stem Cells to Limit Myelopoiesis and Impair Trained Immunity

    A greater understanding of hematopoietic stem cell (HSC) regulation is required for dissecting protective versus detrimental immunity to pathogens that cause chronic infections, such as Mycobacterium tuberculosis (Mtb). We have shown that systemic administration of Bacille Calmette-Guérin (BCG) or β-glucan reprograms HSCs in the bone marrow (BM) via a type II interferon (IFN-II) or interleukin-1 (IL-1) response, respectively, which confers protective trained immunity against Mtb. Here, we demonstrate that, unlike BCG or β-glucan, Mtb reprograms HSCs via a type I interferon (IFN-I) response that suppresses myelopoiesis and impairs the development of protective trained immunity to Mtb. Mechanistically, IFN-I signaling dysregulates iron metabolism, depolarizes mitochondrial membrane potential, and induces cell death specifically in myeloid progenitors. Additionally, activation of the IFN-I/iron axis in HSCs impairs trained immunity to Mtb infection. These results identify an unanticipated immune evasion strategy of Mtb in the BM that controls the magnitude and intrinsic anti-microbial capacity of innate immunity to infection.

    Sensor Fusion for Intelligent Behavior on Small Unmanned Ground Vehicles

    Sensors commonly mounted on small unmanned ground vehicles (UGVs) include visible-light and thermal cameras, scanning LIDAR, and ranging sonar. Data from these sensors are vital to emerging autonomous robotic behaviors. However, data from any given sensor can become noisy or erroneous under a range of conditions, reducing the reliability of autonomous operations. We seek to increase this reliability through data fusion: characterizing the strengths and weaknesses of each sensor modality and combining their data such that the fused result is more accurate than that of any single sensor. We describe data fusion efforts applied to two autonomous behaviors, leader-follower and human presence detection. The behaviors are implemented and tested in a variety of realistic conditions.
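    The abstract does not specify the fusion rule used, so the following is only a minimal sketch of one common approach, inverse-variance weighting, applied to two hypothetical range readings; the sensor names and noise figures are illustrative and not taken from the paper.

        import numpy as np

        def fuse_ranges(z_lidar, var_lidar, z_sonar, var_sonar):
            """Inverse-variance weighted fusion of two range estimates.

            The sensor with lower variance (higher confidence) dominates
            the fused estimate, and the fused variance is never worse
            than the best individual sensor's.
            """
            w_lidar = 1.0 / var_lidar
            w_sonar = 1.0 / var_sonar
            z_fused = (w_lidar * z_lidar + w_sonar * z_sonar) / (w_lidar + w_sonar)
            var_fused = 1.0 / (w_lidar + w_sonar)
            return z_fused, var_fused

        # Illustrative numbers: LIDAR is precise, sonar is noisy.
        z, v = fuse_ranges(z_lidar=4.02, var_lidar=0.01, z_sonar=3.80, var_sonar=0.25)
        print(f"fused range: {z:.2f} m, variance: {v:.4f}")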

    An adaptive localization system for outdoor/indoor navigation for autonomous robots

    Many envisioned applications of mobile robotic systems require the robot to navigate in complex urban environments. This need is particularly critical if the robot is to perform as part of a synergistic team with human forces in military operations. Historically, the development of autonomous navigation for mobile robots has targeted either outdoor or indoor scenarios, but not both, which is not how humans operate. This paper describes efforts to fuse component technologies into a complete navigation system that allows a robot to seamlessly transition between outdoor and indoor environments. Under the Joint Robotics Program’s Technology Transfer project, empirical evaluations of various localization approaches were conducted to assess their maturity levels and performance in different exterior/interior settings. The methodologies compared include Markov localization, the global positioning system (GPS), Kalman filtering, and fuzzy logic. Characterization of these technologies highlighted their best features, which were then fused into an adaptive solution. The final integrated system is described, including its design, experimental results, and a formal demonstration to attendees of the Unmanned Systems Capabilities Conference II in San Diego in December 2005.
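    Kalman filtering is one of the approaches the paper compares. As a minimal sketch (not the paper's implementation), the 1-D filter below fuses odometry displacements with GPS position fixes; the motion model, measurements, and noise parameters Q and R are all invented for illustration.

        # Minimal 1-D Kalman filter: predict with odometry, correct with GPS.
        def kf_predict(x, P, u, Q):
            """Predict step: advance position by odometry displacement u."""
            x = x + u      # motion model: new position = old + displacement
            P = P + Q      # uncertainty grows by the process noise Q
            return x, P

        def kf_update(x, P, z, R):
            """Update step: correct with a GPS position measurement z."""
            K = P / (P + R)          # Kalman gain: measurement vs. prediction
            x = x + K * (z - x)
            P = (1.0 - K) * P
            return x, P

        x, P = 0.0, 1.0              # initial position estimate and variance
        for u, z in [(1.0, 1.1), (1.0, 2.3), (1.0, 2.9)]:
            x, P = kf_predict(x, P, u, Q=0.05)
            x, P = kf_update(x, P, z, R=0.5)
        print(f"position: {x:.2f} m, variance: {P:.3f}")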

    Layered Augmented Virtuality

    Advances in robotic platform functionality and autonomy make it necessary to enhance the current capabilities of the operator control unit (OCU) so that the operator can better understand the information provided by the robot. Augmented virtuality is one technique that can be used to improve the user interface, augmenting a virtual-world representation with information from onboard sensors and human input. Standard techniques for displaying information, such as embedding information icons from sensor payloads and external systems (e.g., other robots), can result in serious information overload, making it difficult to sort out the relevant aspects of the tactical picture. This paper illustrates a unique, layered approach to augmented virtuality that specifically addresses this need for optimal situational awareness. We describe our efforts to implement three display layers that sort the information based on component, platform, and mission needs.
    Keywords: robotics, unmanned systems, augmented virtuality, multirobot controller
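    The paper describes the three-layer idea but no data structures, so the sketch below is purely hypothetical: each icon is tagged with a layer (component, platform, or mission), and the operator toggles layers to declutter the virtual-world display. All names are invented for illustration.

        from dataclasses import dataclass, field

        @dataclass
        class Icon:
            label: str
            position: tuple  # (x, y) in world coordinates
            layer: str       # "component", "platform", or "mission"

        @dataclass
        class LayeredDisplay:
            icons: list = field(default_factory=list)
            visible_layers: set = field(default_factory=lambda: {"mission"})

            def toggle(self, layer: str) -> None:
                """Show or hide one layer of the tactical picture."""
                self.visible_layers ^= {layer}

            def visible_icons(self) -> list:
                """Return only the icons the operator has chosen to see."""
                return [i for i in self.icons if i.layer in self.visible_layers]

        display = LayeredDisplay(icons=[
            Icon("battery 42%", (0, 0), "component"),
            Icon("UGV-1", (3, 4), "platform"),
            Icon("waypoint A", (10, 2), "mission"),
        ])
        display.toggle("platform")
        print([i.label for i in display.visible_icons()])  # mission + platform icons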

    Towards a warfighter’s associate: eliminating the operator control unit

    In addition to the challenges of equipping a mobile robot with the appropriate sensors, actuators, and processing electronics necessary to perform some useful function, there is the equally important challenge of effectively controlling the system’s desired actions. This need is particularly critical if the intent is to operate in conjunction with human forces in a military application, as any low-level distraction can seriously reduce a warfighter’s chances of survival in hostile environments. Historically, there has been a definite trend toward making the robot smarter in order to reduce the control burden on the operator; while much progress has been made in laboratory prototypes, all equipment deployed in theater to date has been strictly teleoperated. There is a definite tradeoff between the value added by the robot, in terms of how it contributes to the performance of the mission, and the loss of effectiveness associated with the operator control unit. From a command-and-control perspective, the ultimate goal would be to eliminate the need for a separate robot controller altogether, since it represents an unwanted burden and potential liability from the operator’s perspective. This paper introduces the long-term concept of a supervised autonomous Warfighter’s Associate, which employs a natural-language interface for communication with (and oversight by) its human counterpart. More realistic near-term solutions that achieve intermediate success are then presented, along with actual results to date. The primary application discussed is military, but the concept also applies to law enforcement, space exploration, and search-and-rescue scenarios.
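    The paper proposes the natural-language interface as a concept rather than an implementation, so the following is only a toy sketch of the dispatch idea: mapping utterances to high-level behaviors so no separate control unit is needed, with a fallback to human oversight. The command grammar and behavior names are invented.

        import re

        # Invented pattern-to-behavior table; the paper specifies no grammar.
        BEHAVIORS = {
            r"\bfollow\b": "leader_follower",
            r"\b(search|clear)\b.*\broom\b": "room_search",
            r"\b(halt|stop)\b": "halt",
            r"\breturn\b": "return_to_operator",
        }

        def dispatch(utterance: str) -> str:
            """Return the first behavior whose pattern matches the utterance."""
            text = utterance.lower()
            for pattern, behavior in BEHAVIORS.items():
                if re.search(pattern, text):
                    return behavior
            return "ask_for_clarification"  # defer to the human counterpart

        print(dispatch("follow me"))             # leader_follower
        print(dispatch("search the next room"))  # room_search
        print(dispatch("do a barrel roll"))      # ask_for_clarification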

    Using Advanced Computer Vision Algorithms on Small Mobile Robots

    The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of two algorithms useful on mobile robots: 1) object classification using a boosted cascade of classifiers trained with the AdaBoost algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an AdaBoost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. While working toward a solution that increases the robustness of this system for generic object recognition, this paper demonstrates an extension of the application by detecting soda cans in a cluttered indoor environment. The human presence detection system uses a data fusion algorithm that combines results from a scanning laser and a thermal imager; it can detect humans while both the humans and the robot are moving. Both algorithms were implemented on embedded hardware and optimized for real-time use. Test results are shown for a variety of environments.
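    As a hedged sketch of running a boosted cascade detector in the spirit of the AdaBoost-trained classifier described above (not the authors' code), the snippet below uses OpenCV's stock cascade machinery; "cans.xml" and "frame.jpg" are placeholders for a cascade trained on the target object and a frame from the robot camera.

        import cv2

        cascade = cv2.CascadeClassifier("cans.xml")  # hypothetical trained model
        frame = cv2.imread("frame.jpg")              # placeholder camera frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # detectMultiScale slides the cascade over an image pyramid; a tighter
        # scaleFactor searches more scales at higher cost, and minNeighbors
        # suppresses spurious single-window hits.
        detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

        for (x, y, w, h) in detections:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("detections.jpg", frame)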