    Edge-Computing Deep Learning-Based Computer Vision Systems

    Computer vision has become ubiquitous in today's society, with applications ranging from medical imaging to visual diagnostics, aerial monitoring, self-driving vehicles, and many more. Common to many of these applications are visual perception systems consisting of classification, localization, detection, and segmentation components, to name a few. Recently, the development of deep neural networks (DNNs) has led to great advances in state-of-the-art performance in each of these areas. Unlike traditional computer vision algorithms, DNNs can learn the features that engineers previously hand-crafted for each specific application, mirroring the human visual system's ability to generalize across its surroundings. Moreover, convolutional neural networks (CNNs) have been shown to not only match but exceed the performance of traditional computer vision algorithms, as the filters of the network learn the important features present in the data. In this research we aim to develop numerous applications, including visual warehouse diagnostics and shipping yard management systems, aerial monitoring and tracking from the perspective of the drone, a perception system model for an autonomous vehicle, and vehicle re-identification for surveillance and security. The deep learning models developed for each application attempt to match or exceed state-of-the-art performance in both accuracy and inference time; however, these two goals typically trade off against each other, so a network design can usually maximize only one. We investigate numerous object-detection architectures, including Faster R-CNN, SSD, YOLO, and a few other variations, to determine the best architecture for each application. We restrict our performance metrics to inference time rather than training time, as none of the optimizations performed in this research affect training time. Further, we investigate re-identification of vehicles as a separate application built on top of the object-detection pipeline. Re-identification allows for a more robust representation of the data while leveraging techniques for security and surveillance. We also compare architectures in ways that could lead to new architectures able not only to perform inference quickly (close to real time) but also to match the state of the art in accuracy. New architecture development, however, depends on the application and its requirements; some applications need to run on edge-computing (EC) devices, while others have slightly larger inference windows that allow for cloud computing with powerful accelerators.
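
    As a concrete illustration of the accuracy/latency trade-off discussed above, the following minimal sketch (not the authors' code) measures single-image inference latency for an off-the-shelf detector; the model choice, input size, and iteration count are illustrative assumptions.

```python
# A minimal latency-measurement sketch, assuming PyTorch and torchvision
# are installed; the model, input size, and iteration count are
# illustrative choices, not the configuration used in this research.
import time

import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)  # dummy RGB frame with values in [0, 1]

with torch.no_grad():
    model([image])  # warm-up pass so one-time setup cost is excluded
    start = time.perf_counter()
    for _ in range(20):
        model([image])
    mean_s = (time.perf_counter() - start) / 20

print(f"mean inference time: {mean_s * 1000:.1f} ms")
```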

    Rapid Detection of Multi-QR Codes Based on Multistage Stepwise Discrimination and a Compressed MobileNet

    Poor real-time performance in multi-QR code detection has been a bottleneck in Internet-of-Things (IoT) systems that rely on QR code decoding. To tackle this issue, we propose a rapid detection approach that consists of Multistage Stepwise Discrimination (MSD) and a Compressed MobileNet. Inspired by object category determination analysis, the preprocessed QR codes are accurately extracted at a small scale using the MSD. Guided by the small scale of the image and an end-to-end detection model, we obtain a lightweight Compressed MobileNet through deep weight compression to realize rapid inference of multiple QR codes. Average Detection Precision (ADP), Multiple Box Rate (MBR), and running time are used for quantitative evaluation of efficacy and efficiency. Compared with several state-of-the-art methods, our approach extracts all of the QR codes more rapidly and accurately. The approach lends itself to embedded implementation on edge devices, with only a small amount of additional computation, to benefit a wide range of real-time IoT applications.
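
    For readers who want a reference point to compare such a pipeline against, the sketch below runs OpenCV's built-in multi-QR detector on a single image; it is not the MSD + Compressed MobileNet method proposed in the paper, and the image path is a hypothetical placeholder.

```python
# A baseline sketch using OpenCV's built-in multi-QR detector
# (cv2.QRCodeDetector, available in OpenCV >= 4.3); this is NOT the
# MSD + Compressed MobileNet pipeline proposed in the paper, only a
# simple baseline it could be compared against.
import cv2

img = cv2.imread("codes.png")  # hypothetical test image path
detector = cv2.QRCodeDetector()

ok, payloads, corners, _ = detector.detectAndDecodeMulti(img)
if ok:
    for text, quad in zip(payloads, corners):
        # each quad holds the four corner points of one detected code
        print(repr(text), quad.reshape(-1, 2).tolist())
else:
    print("no QR codes found")
```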

    MaestROB: A Robotics Framework for Integrated Orchestration of Low-Level Control and High-Level Reasoning

    This paper describes a framework called MaestROB. It is designed to let robots perform complex tasks with high precision from simple high-level instructions given through natural language or demonstration. To realize this, it handles a hierarchical structure of instructions, using knowledge stored in the form of ontologies and rules to bridge between the different levels. Accordingly, the framework has multiple layers of processing components: perception and actuation control at the low level; a symbolic planner and Watson APIs for cognitive capabilities and semantic understanding; and, at its core, orchestration of these components by a new open-source robot middleware called Project Intu. We show how this framework can be used in a complex scenario where multiple actors (a human, a communication robot, and an industrial robot) collaborate to perform a common industrial task. A human teaches an assembly task to Pepper (a humanoid robot from SoftBank Robotics) using natural language conversation and demonstration. Our framework helps Pepper perceive the human demonstration and generate a sequence of actions for a UR5 (a collaborative robot arm from Universal Robots), which ultimately performs the assembly (e.g. insertion) task.
    Comment: IEEE International Conference on Robotics and Automation (ICRA) 2018. Video: https://www.youtube.com/watch?v=19JsdZi0TW
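
    To make the idea of rule-based bridging between instruction levels concrete, here is an illustrative sketch; the rule table and function names are hypothetical stand-ins, not the MaestROB or Project Intu API.

```python
# An illustrative sketch of rule-based bridging from a high-level verb
# to low-level primitives; the rule table and primitive names are
# hypothetical stand-ins, not the MaestROB / Project Intu API.
RULES = {
    "insert": ["locate(part)", "grasp(part)", "align(part, hole)", "push(part)"],
    "place": ["locate(part)", "grasp(part)", "move_above(target)", "release()"],
}

def plan(instruction: str) -> list[str]:
    """Map a natural-language command to a primitive-action sequence."""
    verb = instruction.lower().split()[0]
    if verb not in RULES:
        raise ValueError(f"no rule for instruction: {instruction!r}")
    return RULES[verb]

print(plan("Insert the peg into the hole"))
# ['locate(part)', 'grasp(part)', 'align(part, hole)', 'push(part)']
```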

    Simple identification tools in FishBase

    Simple identification tools for fish species have been included in the FishBase information system from its inception. Early tools made use of the relational model and characters such as fin ray meristics. Soon pictures and drawings were added as further aids, much like a field guide. Later came the computerization of existing dichotomous keys, again in combination with pictures and other information, and the ability to restrict candidate species by country, area, or taxonomic group. Today, www.FishBase.org offers four different ways to identify species. This paper describes these tools with their advantages and disadvantages, and suggests various options for further development. It explores the possibility of a holistic and integrated computer-aided strategy.
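
    A computerized dichotomous key of the kind described above is essentially a binary decision tree over diagnostic characters. The sketch below shows the idea in miniature; the characters and taxa are invented placeholders, not actual FishBase data.

```python
# A minimal sketch of a computerized dichotomous key; the characters
# and taxa below are invented placeholders, not FishBase data.
KEY = {
    "q": "Adipose fin present?",
    "yes": {
        "q": "Barbels present?",
        "yes": "catfish-like taxon",
        "no": "salmonid-like taxon",
    },
    "no": "taxon without adipose fin",
}

def identify(node, answers):
    """Walk the key using recorded yes/no answers until a leaf is reached."""
    while isinstance(node, dict):
        node = node[answers[node["q"]]]
    return node

print(identify(KEY, {"Adipose fin present?": "yes", "Barbels present?": "no"}))
# salmonid-like taxon
```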

    Survey and Systematization of Secure Device Pairing

    Secure Device Pairing (SDP) schemes have been developed to facilitate secure communication among smart devices, both personal mobile devices and Internet of Things (IoT) devices. Comparing and assessing SDP schemes is troublesome, because each scheme makes different assumptions about out-of-band channels and adversary models and is driven by its particular use cases. A conceptual model that facilitates meaningful comparison among SDP schemes has been missing; we provide such a model. In this article, we survey and analyze a wide range of SDP schemes described in the literature, including a number that have been adopted as standards. On the foundation of this survey we build a system model and consistent terminology for SDP schemes, which we then use to classify existing SDP schemes into a taxonomy that, for the first time, enables their meaningful comparison and analysis. Analyzing the existing SDP schemes with this model reveals common systemic security weaknesses that should become priority areas for future SDP research, such as better integration of privacy requirements into the design of SDP schemes. Our results allow SDP scheme designers to create schemes that are more easily comparable with one another, and help prevent the weaknesses common to the current generation of SDP schemes from persisting.
    Comment: 34 pages, 5 figures, 3 tables, accepted at IEEE Communications Surveys & Tutorials 2017 (Volume: PP, Issue: 99)
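
    One building block shared by many SDP schemes is a short authentication string (SAS) that both devices derive from the pairing transcript and that the user compares out of band. The sketch below illustrates the generic idea only; it is not any specific standardized scheme.

```python
# A generic sketch of deriving a short authentication string (SAS) for
# out-of-band comparison, a building block many SDP schemes share; it
# illustrates the idea only and is not any specific standardized scheme.
import hashlib

def short_auth_string(transcript: bytes, digits: int = 6) -> str:
    """Hash the pairing transcript into a human-comparable numeric code."""
    digest = hashlib.sha256(transcript).digest()
    value = int.from_bytes(digest[:4], "big") % (10 ** digits)
    return f"{value:0{digits}d}"

# Both devices compute the code over the same exchanged public values;
# the user confirms out of band that the two displayed codes match.
print(short_auth_string(b"deviceA_pub || deviceB_pub || nonces"))
```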

    ARMBench: An Object-centric Benchmark Dataset for Robotic Manipulation

    This paper introduces the Amazon Robotic Manipulation Benchmark (ARMBench), a large-scale, object-centric benchmark dataset for robotic manipulation in the context of a warehouse. Automating operations in modern warehouses requires a robotic manipulator to deal with a wide variety of objects, unstructured storage, and dynamically changing inventory. Such settings pose challenges in perceiving the identity, physical characteristics, and state of objects during manipulation. Existing datasets for robotic manipulation either consider a limited set of objects or use 3D models to generate synthetic scenes, which limits their ability to capture the variety of object properties, clutter, and interactions. We present a large-scale dataset collected in an Amazon warehouse using a robotic manipulator performing object singulation from containers with heterogeneous contents. ARMBench contains images, videos, and metadata corresponding to 235K+ pick-and-place activities on 190K+ unique objects. The data is captured at different stages of manipulation, i.e., pre-pick, during transfer, and after placement. High-quality annotations enable the proposed benchmark tasks, and baseline performance evaluations are presented for three visual perception challenges, namely 1) object segmentation in clutter, 2) object identification, and 3) defect detection. ARMBench can be accessed at http://armbench.com
    Comment: To appear at the IEEE Conference on Robotics and Automation (ICRA), 202
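
    As a hedged sketch of how one might iterate over per-pick metadata in a dataset organized this way, consider the following; the directory layout and JSON field names are assumptions for illustration, not the documented ARMBench format.

```python
# A hedged sketch of iterating over per-pick metadata; the directory
# layout and JSON field names below are assumptions for illustration,
# not the documented ARMBench format (see http://armbench.com).
import json
from pathlib import Path

root = Path("armbench")  # assumed local dataset root

for meta_file in sorted(root.glob("picks/*.json")):  # assumed layout
    meta = json.loads(meta_file.read_text())
    # assumed fields: the picked object's id and the manipulation stage
    # (pre-pick / during transfer / after placement) of the capture
    print(meta.get("object_id"), meta.get("stage"))
```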

    Blur Classification Using Segmentation Based Fractal Texture Analysis

    The objective of vision-based gesture recognition is to design a system that can understand human actions and convey the acquired information with the help of captured images. Image restoration is required whenever an image becomes blurred during the acquisition process, since blurred images can severely degrade the performance of such systems. Image restoration recovers a true image from a degraded version; it is referred to as blind restoration if the blur information is unknown. Blur identification is essential before any blind restoration algorithm can be applied. This paper presents a blur identification approach that categorizes a hand gesture image as sharp, motion-blurred, defocus-blurred, or combined-blurred. A segmentation-based fractal texture analysis feature-extraction algorithm supplies the features for a neural-network-based classification system. The simulation results demonstrate the precision of the proposed method.
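
    The following condensed sketch shows the flavor of such features: threshold the grayscale image into binary channels, then describe each channel with a box-counting fractal dimension, mean gray level, and pixel count. The fixed thresholds are an assumption for brevity; the original method selects them with multi-level Otsu thresholding.

```python
# A condensed sketch of SFTA-style texture features; fixed thresholds
# are a simplifying assumption (the original method uses multi-level
# Otsu), and the feature triple per channel follows the SFTA recipe of
# fractal dimension, mean gray level, and region size.
import numpy as np

def box_counting_dimension(binary: np.ndarray) -> float:
    """Estimate fractal dimension from occupied-box counts at several scales."""
    sizes, counts = [], []
    for k in (2, 4, 8, 16):
        h, w = binary.shape
        boxes = binary[: h - h % k, : w - w % k].reshape(h // k, k, w // k, k)
        counts.append(max(int(boxes.any(axis=(1, 3)).sum()), 1))
        sizes.append(k)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope  # N(k) ~ k^(-D), so D is the negated slope

def sfta_features(gray: np.ndarray, thresholds=(64, 128, 192)) -> list:
    feats = []
    for t in thresholds:
        channel = gray > t
        feats += [
            box_counting_dimension(channel),
            float(gray[channel].mean()) if channel.any() else 0.0,
            int(channel.sum()),
        ]
    return feats  # feature vector for a downstream neural-net classifier

print(sfta_features(np.random.randint(0, 256, size=(128, 128))))
```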

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It gives a brief overview of existing uses of service robots by disabled and elderly people, surveys advances in technology that will make new uses possible, and suggests some of these new applications. The paper also considers the design and other conditions that must be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.