
    Artificial Intelligence and Machine Learning: A Perspective on Integrated Systems Opportunities and Challenges for Multi-Domain Operations

    This paper provides a perspective on the historical background, innovations, and applications of Artificial Intelligence (AI) and Machine Learning (ML); data successes and systems challenges; national security interests; and mission opportunities for systems problems. The terms AI and ML are today used interchangeably, or together as AI/ML, and are ubiquitous across many industries and applications. The recent explosion, driven by a confluence of new ML algorithms, large data sets, and fast, cheap computing, has produced impressive results in classification and regression, applied to prediction and decision-making. Yet AI/ML still lacks a precise definition, and as a technical discipline it has grown beyond its origins in computer science. Despite impressive feats, primarily in ML, much work remains before the systems benefits of AI, such as perception, reasoning, planning, acting, learning, communicating, and abstraction, are realized. Recent national security interest in AI/ML has focused on problems including multi-domain operations (MDO), which has renewed attention on a systems view of AI/ML. This paper argues that solutions to these systems problems should be approached from an AI/ML perspective, drawing on methods in AI and ML as well as computational methods in control, estimation, communication, and information theory, as in the early days of cybernetics. Alongside the focus on developing technology, the paper also addresses the challenges of integrating these AI/ML systems for warfare.

    Machine-Learning Space Applications on SmallSat Platforms with TensorFlow

    Due to their attractive benefits, including affordability, comparatively low development costs, shorter development cycles, and the availability of launch opportunities, SmallSats have attracted growing commercial and educational interest for space development. Despite these advantages, however, SmallSats, and especially CubeSats, suffer from high failure rates and (with few exceptions to date) have had little impact in providing entirely novel, market-redefining capabilities. To enable more complex science and defense missions in the future, small-spacecraft computing capabilities must be flexible, robust, and intelligent. To provide more intelligent computing, we propose employing machine intelligence on space development platforms, which can contribute to more efficient communications, improve spacecraft reliability, and assist in the autonomous coordination and management of one or more spacecraft. Using TensorFlow, a popular, open-source, machine-learning framework developed by Google, modern SmallSat computers can run TensorFlow graphs (the principal component of TensorFlow applications) with both TensorFlow and TensorFlow Lite. The research showcased in this paper provides a flight-demonstration example in which various Convolutional Neural Networks (CNNs) identify and characterize terrestrial-scene image products collected in flight by our STP-H5/CSP system, currently deployed on the International Space Station. This paper compares CNN architectures including MobileNetV1, MobileNetV2, Inception-ResNetV2, and NASNet Mobile.
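
    A rough illustration, in Python, of the on-board inference path described above: the sketch below runs a pre-converted TensorFlow Lite CNN on a single image. The model filename, the 224x224 input shape, and the dummy image are illustrative assumptions, not artifacts from the STP-H5/CSP experiment.

        # Minimal TensorFlow Lite inference sketch (illustrative only).
        import numpy as np
        import tensorflow as tf

        # Load a pre-converted CNN (e.g., MobileNetV1) as a TFLite model.
        interpreter = tf.lite.Interpreter(model_path="mobilenet_v1.tflite")
        interpreter.allocate_tensors()
        inp = interpreter.get_input_details()[0]
        out = interpreter.get_output_details()[0]

        # Stand-in for a newly captured terrestrial-scene image.
        image = np.random.rand(1, 224, 224, 3).astype(np.float32)

        interpreter.set_tensor(inp["index"], image)
        interpreter.invoke()
        scores = interpreter.get_tensor(out["index"])
        print("top class index:", int(np.argmax(scores)))

    The same graph can also be executed with full TensorFlow; TensorFlow Lite trades flexibility for a smaller runtime footprint, which matters on SmallSat-class processors.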

    Data Science at USNA

    Data Science is increasingly important to the Navy and Marine Corps. We survey some of the ways that civilian institutions deliver Data Science curricula, outline a vision for developing a Data Science curriculum at the United States Naval Academy (USNA), and summarize some of the accomplishments and planned activities of the Data Science group at USNA.

    Energy Efficient Neocortex-Inspired Systems with On-Device Learning

    Shifting compute workloads from the cloud toward edge devices can significantly improve overall latency for inference and learning. However, this shift exacerbates the resource constraints on the edge devices themselves. Neuromorphic computing architectures, inspired by neural processes, are natural substrates for edge devices: they offer co-located memory, in-situ training, energy efficiency, high memory density, and compute capacity in a small form factor. Owing to these features, hybrid CMOS/memristor neuromorphic computing systems have proliferated rapidly in recent years. However, most of these systems offer limited plasticity, target either spatial or temporal input streams but not both, and have not been demonstrated on large-scale heterogeneous tasks. There is a critical knowledge gap in designing scalable neuromorphic systems that support hybrid plasticity for spatio-temporal input streams on edge devices. This research proposes Pyragrid, a low-latency, energy-efficient neuromorphic computing system for processing spatio-temporal information natively on the edge. Pyragrid is a full-scale custom hybrid CMOS/memristor architecture with analog computational modules and an underlying digital communication scheme. Pyragrid is designed for hierarchical temporal memory, a biomimetic sequence-memory algorithm inspired by the neocortex. It features a novel synthetic-synapse representation that enables dynamic synaptic pathways with reduced memory usage and fewer interconnects. The dynamic growth of synaptic pathways is emulated in the physical behavior of the memristor devices, while synaptic modulation is enabled through a custom training scheme optimized for area and power. Pyragrid exploits data reuse, in-memory computing, and event-driven sparse local computing to reduce data movement by ~44x and to improve system throughput and power efficiency by ~3x and ~161x, respectively, over a custom CMOS digital design. The innate sparsity in Pyragrid yields overall robustness to noise and device failure, particularly when processing visual input and predicting time-series sequences. Porting the proposed system to edge devices can enhance their computational capability, response time, and battery life.
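
    To illustrate why the sparsity noted above confers robustness, here is a minimal Python sketch of how sparse distributed representations, the encoding underlying hierarchical temporal memory, are matched by overlap. The vector size and sparsity are illustrative assumptions, not Pyragrid parameters.

        # Overlap matching of sparse distributed representations (SDRs).
        import numpy as np

        rng = np.random.default_rng(0)
        n, active = 2048, 40                  # 2048 bits, ~2% active

        sdr = np.zeros(n, dtype=bool)
        sdr[rng.choice(n, size=active, replace=False)] = True

        # Drop 25% of the active bits to mimic noise or device failure.
        noisy = sdr.copy()
        drop = rng.choice(np.flatnonzero(sdr), size=active // 4, replace=False)
        noisy[drop] = False

        overlap = int(np.count_nonzero(sdr & noisy))
        print(f"overlap after 25% bit loss: {overlap}/{active}")

    Because a match only requires the overlap to exceed a threshold well below the number of active bits, losing a sizable fraction of active bits, whether to input noise or to failed devices, still leaves the pattern recognizable.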

    A Scalable Flash-Based Hardware Architecture for the Hierarchical Temporal Memory Spatial Pooler

    Hierarchical temporal memory (HTM) is a biomimetic machine learning algorithm focused on modeling the structural and algorithmic properties of the neocortex. It comprises two components, realizing pattern recognition of spatial and temporal data, respectively. HTM research has gained momentum in recent years, leading to both hardware and software exploration of its algorithmic formulation. Previous work on HTM has centered on addressing performance concerns; however, the memory-bound operation of HTM presents significant challenges to scalability. In this work, a scalable flash-based storage processor unit, Flash-HTM (FHTM), is presented along with a detailed analysis of its potential scalability. FHTM leverages SSD flash technology to implement the spatial pooler of the HTM cortical learning algorithm. FHTM's ability to scale with increasing model complexity is addressed with respect to design footprint, memory organization, and power efficiency. Additionally, a mathematical model of the hardware is evaluated against the MNIST dataset, yielding 91.98% classification accuracy. A fully custom layout is developed to validate the design in a TSMC 180 nm process. The area and power footprints of the spatial pooler are 30.538 mm² and 5.171 mW, respectively. Storage processor units have the potential to be viable platforms for implementations of HTM at scale.
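
    For readers unfamiliar with the algorithm FHTM accelerates, the sketch below shows one step of an HTM spatial pooler in plain Python: per-column overlap with a binary input, k-winners-take-all column activation, and a Hebbian permanence update. All dimensions and learning constants are illustrative assumptions, not values from the FHTM design.

        # One HTM spatial pooler step (illustrative parameters).
        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_cols, k = 784, 256, 10        # e.g., an MNIST-sized input
        thresh, inc, dec = 0.5, 0.05, 0.02    # connection threshold, learning rates

        perm = rng.random((n_cols, n_in))     # synaptic permanences in [0, 1)
        x = rng.random(n_in) < 0.1            # sparse binary input vector

        connected = perm >= thresh            # only strong synapses conduct
        overlap = (connected & x).sum(axis=1) # per-column overlap with input
        winners = np.argsort(overlap)[-k:]    # k-winners-take-all activation

        # Hebbian update for winning columns: strengthen synapses on active
        # input bits, weaken the rest, then clip back into [0, 1].
        perm[winners] += np.where(x, inc, -dec)
        np.clip(perm, 0.0, 1.0, out=perm)

        print("active columns:", np.sort(winners))

    Even this toy version makes the memory-bound character of the algorithm visible: the permanence matrix dominates storage and every step touches it, which is what motivates moving it into flash.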

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions at robotics conferences: Do robots need SLAM? And is SLAM solved?
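
    For context, the de facto standard formulation mentioned above is maximum a posteriori estimation over a factor graph. In commonly used notation (assumed here, not quoted from the paper), with state X, measurements z_k, measurement models h_k, and noise covariances \Sigma_k:

        X^\star = \operatorname*{arg\,max}_{X} \, p(X \mid Z)
                = \operatorname*{arg\,max}_{X} \prod_{k} p(z_k \mid X_k)
                = \operatorname*{arg\,min}_{X} \sum_{k} \lVert h_k(X_k) - z_k \rVert_{\Sigma_k}^{2}

    The last equality holds under Gaussian noise assumptions, reducing MAP inference to nonlinear least squares over the trajectory and map, typically solved with Gauss-Newton or Levenberg-Marquardt on the sparse factor graph.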