
    Data Management in Industry 4.0: State of the Art and Open Challenges

    Information and communication technologies are permeating all aspects of industrial and manufacturing systems, expediting the generation of large volumes of industrial data. This article surveys the recent literature on data management as it applies to networked industrial environments and identifies several open research challenges for the future. As a first step, we extract important data properties (volume, variety, traffic, criticality) and identify the corresponding data-enabling technologies of diverse fundamental industrial use cases, based on practical applications. Secondly, we provide a detailed outline of recent industrial architectural designs with respect to their data management philosophy (data presence, data coordination, data computation) and the extent of their distributiveness. Then, we conduct a holistic survey of the recent literature, from which we derive a taxonomy of the latest advances in industrial data-enabling technologies and data-centric services, spanning all the way from the field level, deep in the physical deployments, up to the cloud and applications level. Finally, motivated by the rich conclusions of this critical analysis, we identify interesting open challenges for future research. The concepts presented in this article thematically cover the largest part of the industrial automation pyramid layers. Our approach is multidisciplinary, as the selected publications were drawn from two fields: the communications, networking, and computation field as well as the industrial, manufacturing, and automation field. The article can help readers deeply understand how data management is currently applied in networked industrial environments and select interesting open research opportunities to pursue.

    Haptic Assembly and Prototyping: An Expository Review

    An important application of haptic technology to digital product development is in virtual prototyping (VP), part of which deals with interactive planning, simulation, and verification of assembly-related activities, collectively called virtual assembly (VA). In spite of numerous research and development efforts over the last two decades, the industrial adoption of haptic-assisted VP/VA has been slower than expected. Putting hardware limitations aside, the main roadblocks faced in software development can be traced to the lack of effective and efficient computational models of haptic feedback. Such models must 1) accommodate the inherent geometric complexities faced when assembling objects of arbitrary shape; and 2) conform to the computation time limitation imposed by the notorious frame rate requirements, namely 1 kHz for haptic feedback compared to the more manageable 30-60 Hz for graphic rendering. The simultaneous fulfillment of these competing objectives is far from trivial. This survey presents some of the conceptual and computational challenges and opportunities, as well as promising future directions, in haptic-assisted VP/VA, with a focus on haptic assembly from a geometric modeling and spatial reasoning perspective. The main focus is on revisiting definitions and classifications of the different methods used to handle constrained multibody simulation in real time, ranging from physics-based and geometry-based to hybrid and unified approaches using a variety of auxiliary computational devices to specify, impose, and solve assembly constraints. Particular attention is given to the newly developed 'analytic methods' inherited from motion planning and protein docking that have shown great promise as an alternative paradigm to the more popular combinatorial methods. Comment: Technical Report, University of Connecticut, 201

    Mathematical Software: Past, Present, and Future

    This paper provides some reflections on the field of mathematical software on the occasion of John Rice's 65th birthday. I describe some of the common themes of research in this field and recall some significant events in its evolution. Finally, I raise a number of issues that are of concern for future developments. Comment: To appear in the Proceedings of the International Symposium on Computational Sciences, Purdue University, May 21-22, 1999. 20 pages

    Predicting How to Distribute Work Between Algorithms and Humans to Segment an Image Batch

    Foreground object segmentation is a critical step for many image analysis tasks. While automated methods can produce high-quality results, their failures disappoint users in need of practical solutions. We propose a resource allocation framework for predicting how best to allocate a fixed budget of human annotation effort in order to collect higher-quality segmentations for a given batch of images and automated methods. The framework is based on a prediction module that estimates the quality of given algorithm-drawn segmentations. We demonstrate the value of the framework for two novel tasks related to predicting how to distribute annotation efforts between algorithms and humans. Specifically, we develop two systems that automatically decide, for a batch of images, when to recruit humans versus computers to create 1) coarse segmentations required to initialize segmentation tools and 2) final, fine-grained segmentations. Experiments demonstrate the advantage of relying on a mix of human and computer efforts over relying on either resource alone for segmenting objects in images coming from three diverse modalities (visible, phase contrast microscopy, and fluorescence microscopy).

    An Information Theoretic Measure for Robot Expressivity

    This paper presents a principled way to think about articulated movement for artificial agents and a measurement of platforms that produce such movement. In particular, in human-facing scenarios, the shape evolution of robotic platforms will become essential in creating systems that integrate and communicate with human counterparts. This paper provides a tool to measure the expressive capacity, or expressivity, of articulated platforms. To do this, it points to the synergistic relationship between computation and mechanization. Importantly, this way of thinking gives an information theoretic basis for measuring and comparing robots of increasing complexity and capability. The paper provides concrete examples of this measure applied to current robotic platforms. It also compares the computational and mechanical capabilities of robotic platforms and analyzes order-of-magnitude trends over the last 15 years. For future work, the paper provides a method by which to quantify movement imitation, outlines a way of thinking about designing expressive robotic systems, and contextualizes the capabilities of current robotic systems. Comment: Rejected from RSS 201

    Framework for Version Control & Dependency Link of Components & Products in a Software Product Line

    A software product line deals with the assembly of products from existing core assets, commonly known as components, and with continuous growth in the core assets as production proceeds. This idea has emerged as vital to software development from component-based architecture. Since in a software product line one has to deal with a number of products and components simultaneously, there is a need for a strategy that stores component and product information in such a way that it can be traced easily for further development. This storage strategy should reflect the relationship between products and components so that product history can be traced with reference to components, and vice versa. In this paper we present a tree-structure-based storage strategy for components and products in a software product line. This strategy enables us to store vital information about components and products along with the relationships of their composition and utilization. We implemented this concept and simulated the software product line environment.

    Optimization under Uncertainty in the Era of Big Data and Deep Learning: When Machine Learning Meets Mathematical Programming

    This paper reviews recent advances in the field of optimization under uncertainty through a modern data lens, highlights key research challenges and the promise of data-driven optimization that organically integrates machine learning and mathematical programming for decision-making under uncertainty, and identifies potential research opportunities. A brief review of classical mathematical programming techniques for hedging against uncertainty is first presented, along with their wide spectrum of applications in Process Systems Engineering. A comprehensive review and classification of the relevant publications on data-driven distributionally robust optimization, data-driven chance-constrained programs, data-driven robust optimization, and data-driven scenario-based optimization is then presented. This paper also identifies fertile avenues for future research that focus on a closed-loop data-driven optimization framework, which allows feedback from mathematical programming to machine learning, as well as scenario-based optimization leveraging the power of deep learning techniques. Perspectives on online learning-based data-driven multistage optimization with a learning-while-optimizing scheme are presented.

    Logic BIST: State-of-the-Art and Open Problems

    Many believe that in-field hardware faults are too rare in practice to justify the need for Logic Built-In Self-Test (LBIST) in a design. Until now, LBIST has been used primarily in safety-critical applications. However, this may change soon. First, even if costly methods like burn-in are applied, it is no longer possible to get rid of all latent defects in devices at leading-edge technology. Second, demands for high reliability are spreading to consumer electronics as smartphones replace our wallets and IDs. However, today many ASIC vendors are reluctant to use LBIST. In this paper, we describe what is needed for successful deployment of LBIST in industrial practice and discuss how these needs can be addressed. We hope our work will attract wider attention to this important research topic. Comment: 6 pages, 3 figures

    In-Band Full-Duplex Wireless: Challenges and Opportunities

    In-band full-duplex (IBFD) operation has emerged as an attractive solution for increasing the throughput of wireless communication systems and networks. With IBFD, a wireless terminal is allowed to transmit and receive simultaneously in the same frequency band. This tutorial paper reviews the main concepts of IBFD wireless. Because one of the biggest practical impediments to IBFD operation is the presence of self-interference, i.e., the interference caused by an IBFD node's own transmissions to its desired receptions, this tutorial surveys a wide range of IBFD self-interference mitigation techniques. Also discussed are numerous other research challenges and opportunities in the design and analysis of IBFD wireless systems.

    On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models

    This paper addresses the general problem of reinforcement learning (RL) in partially observable environments. In 2013, our large RL recurrent neural networks (RNNs) learned from scratch to drive simulated cars from high-dimensional video input. However, real brains are more powerful in many ways. In particular, they learn a predictive model of their initially unknown environment, and somehow use it for abstract (e.g., hierarchical) planning and reasoning. Guided by algorithmic information theory, we describe RNN-based AIs (RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending sequences of tasks, some of them provided by the user, others invented by the RNNAI itself in a curious, playful fashion, to improve its RNN-based world model. Unlike our previous model-building RNN-based RL machines dating back to 1990, the RNNAI learns to actively query its model for abstract reasoning, planning, and decision making, essentially "learning to think." The basic ideas of this report can be applied to many other cases where one RNN-like system exploits the algorithmic information content of another. They are taken from a grant proposal submitted in Fall 2014, and also explain concepts such as "mirror neurons." Experimental results will be described in separate papers. Comment: 36 pages, 1 figure. arXiv admin note: substantial text overlap with arXiv:1404.782