142 research outputs found

    A framework for context-aware driver status assessment systems

    The automotive industry is actively supporting research and innovation to meet manufacturers' requirements related to safety, performance, and the environment. The Green ITS project is among the efforts in that regard. Safety is a major customer and manufacturer concern. Therefore, much effort has been directed to developing cutting-edge technologies able to assess driver status in terms of alertness and suitability to drive. In that regard, we aim to create with this thesis a framework for a context-aware driver status assessment system. Context-aware means that the machine uses background information about the driver and environmental conditions to better ascertain and understand driver status. The system also relies on multiple sensors, mainly video and audio. Using context and multi-sensor data, we need to perform multi-modal analysis and data fusion in order to infer as much knowledge as possible about the driver. Last, the project is to be continued by other students, so the system should be modular and well documented. With this in mind, a driving simulator integrating multiple sensors was built. This simulator is a starting point for experimentation related to driver status assessment, and a prototype of software for real-time driver status assessment is integrated into the platform. To make the system context-aware, we designed a driver identification module based on audio-visual data fusion. Thus, at the beginning of a driving session, the user is identified and background knowledge about them is loaded to better understand and analyze their behavior. A driver status assessment system was then constructed from two different modules. The first is a driver fatigue detection module based on an infrared camera. Fatigue is inferred via the percentage of eye closure, which is the best indicator of fatigue for vision systems. The second is a driver distraction recognition module based on a Kinect sensor.
    Using body, head, and facial expressions, a fusion strategy is employed to deduce the type of distraction a driver is subject to. Of course, fatigue and distraction are only a fraction of all possible driver states, but these two aspects have been studied here primarily because of their dramatic impact on traffic safety. Through experimental results, we show that our system is efficient for driver identification and driver inattention detection tasks. It is also very modular and could be further complemented with additional modules for driver status analysis, context, or sensor acquisition.
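    The percentage of eye closure mentioned above is commonly known as PERCLOS: the fraction of time, over a sliding window, during which the eyes are mostly closed. The thesis's actual implementation is not given in the abstract; the following is only a minimal illustrative sketch, in which the window length and the two thresholds are assumed values, not the thesis's parameters:

    ```python
    from collections import deque

    class PerclosMonitor:
        """Sliding-window PERCLOS: fraction of recent frames in which the eye
        openness falls below a closure threshold."""

        def __init__(self, window_size=900, closed_threshold=0.2, fatigue_level=0.15):
            # window_size=900 assumes e.g. 30 s of video at 30 fps (illustrative)
            self.frames = deque(maxlen=window_size)
            self.closed_threshold = closed_threshold  # openness below this counts as "closed"
            self.fatigue_level = fatigue_level        # PERCLOS above this flags fatigue

        def update(self, eye_openness):
            """Record one frame (eye_openness in [0, 1]); return current PERCLOS."""
            self.frames.append(1 if eye_openness < self.closed_threshold else 0)
            return sum(self.frames) / len(self.frames)

        def is_fatigued(self):
            if not self.frames:
                return False
            return sum(self.frames) / len(self.frames) > self.fatigue_level
    ```

    In practice the per-frame openness value would come from the infrared camera's eyelid tracking; here it is simply an input to the monitor.
    
    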

    Using Interval Constrained Petri Nets and Fuzzy Method for Regulation of Quality: The Case of Weight in Tobacco Factory

    The existence of maximal durations drastically modifies performance evaluation in Discrete Event Systems (DES). The same particularity may be found in systems where the associated constraints do not concern time. For example, weight measures in the chemical industry are used to control the quantity of consumed raw materials. This parameter also plays a fundamental part in product quality, as the correct transformation process is based upon a given percentage of each essence. Weight regulation therefore increases the global productivity of the system by decreasing the quantity of rejected products. In this paper, we present an approach that combines two theories with different characteristics, fuzzy systems and Petri nets, to describe the behaviour. An industrial application on a tobacco manufacturing plant, where the critical parameter is the weight, is presented as an illustration.
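    The abstract does not detail the fuzzy component, so the following is only a hedged sketch of the general idea of fuzzy weight regulation: a tiny rule base maps the deviation between measured and target weight to a feed-rate correction. The membership ranges, rule consequents, and gram units are invented for illustration and are not the paper's values:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_correction(deviation):
        """Map a weight deviation (measured - target, in grams) to a normalized
        feed-rate correction using three rules and weighted-average defuzzification."""
        # Rule activations: deviation is Negative / Zero / Positive
        mu_neg = tri(deviation, -2.0, -1.0, 0.0)
        mu_zero = tri(deviation, -1.0, 0.0, 1.0)
        mu_pos = tri(deviation, 0.0, 1.0, 2.0)
        # Rule consequents: increase (+1), hold (0), decrease (-1) the feed rate
        outputs = {1.0: mu_neg, 0.0: mu_zero, -1.0: mu_pos}
        total = sum(outputs.values())
        if total == 0.0:
            # Outside the fuzzy universe of discourse: saturate the correction
            return 1.0 if deviation < 0 else -1.0
        return sum(u * mu for u, mu in outputs.items()) / total
    ```

    A product slightly over target weight (e.g. +0.5 g) thus yields a partial reduction of the feed rate rather than an all-or-nothing reaction, which is the usual motivation for fuzzy regulation of this kind.
    
    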

    Exploration Strategies for Incremental Learning of Object-Based Visual Saliency

    Searching for objects in an indoor environment can be drastically improved if a task-specific visual saliency is available. We describe a method to learn such an object-based visual saliency in an intrinsically motivated way using an environment exploration mechanism. We first define saliency in a geometrical manner and use this definition to discover salient elements through an attentive but costly observation of the environment. These elements are used to train a fast classifier that predicts salient objects from large-scale visual features. To achieve better and faster learning, we use intrinsic motivation to drive our observation selection, based on uncertainty and novelty detection. Our approach has been tested on RGB-D images, runs in real time, and outperforms several state-of-the-art methods in the case of indoor object detection.
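    The abstract combines uncertainty and novelty to choose where to look next, without giving a formula. One plausible reading, sketched below with invented names and an assumed equal weighting, is to score each candidate region by classifier uncertainty (highest when the predicted saliency probability is near 0.5) plus feature-space distance to everything already observed:

    ```python
    import math

    def select_observation(candidates, classifier_confidence, seen_features,
                           novelty_weight=0.5):
        """Pick the next region to observe: prefer regions where the saliency
        classifier is uncertain and whose features are far from anything seen.

        candidates           -- iterable of feature vectors (tuples of floats)
        classifier_confidence-- callable: feature vector -> P(salient) in [0, 1]
        seen_features        -- list of already-observed feature vectors
        """
        def uncertainty(c):
            p = classifier_confidence(c)
            return 1.0 - abs(2 * p - 1)       # maximal at p = 0.5

        def novelty(c):
            if not seen_features:
                return 1.0
            d = min(math.dist(c, f) for f in seen_features)
            return 1.0 - math.exp(-d)         # saturating distance to nearest seen

        return max(candidates,
                   key=lambda c: (1 - novelty_weight) * uncertainty(c)
                                 + novelty_weight * novelty(c))
    ```

    This is a sketch of the general active-learning idea, not the paper's exact criterion.
    
    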

    An Iterative Method for the Design Process of Mode Handling Model

    This paper focuses on formal verification and validation of a model dedicated to mode handling of flexible manufacturing systems. The model is specified using the synchronous formalism Safe State Machines. A structured framework for the design process is presented. The obtained model is characterized by strong hierarchy and concurrency; for this reason, an iterative approach to specification, verification, and validation is proposed to improve the design process. The main properties being verified are presented, and the approach is illustrated through an example of a manufacturing production cell.

    On the Use of Intrinsic Motivation for Visual Saliency Learning

    The use of intrinsic motivation for the task of learning sensorimotor properties has received a lot of attention over the last few years, but little work has been done toward using intrinsic motivation for the task of learning visual signals. In this paper, we propose to apply the main ideas of Intelligent Adaptive Curiosity (IAC) to the task of visual saliency learning. We present RL-IAC, an adapted version of IAC that uses reinforcement learning to deal with time-consuming displacements while actively learning saliency based on local learning progress. We also introduce the use of a backward evaluation to deal with a learner that is shared between several regions. We demonstrate the good performance of RL-IAC compared to other exploration techniques, and we discuss the performance of other intrinsic motivation sources instead of learning progress in our problem.
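    In IAC-style approaches, the "local learning progress" that drives exploration is typically measured as the recent decrease of a region's prediction error. The abstract does not give the exact estimator, so the tracker below is a minimal sketch: window size, the infinite bonus for under-sampled regions, and the class names are all assumptions for illustration:

    ```python
    from collections import defaultdict, deque

    class LearningProgressTracker:
        """Per-region learning progress: the drop between the older and the
        newer half of a sliding window of prediction errors."""

        def __init__(self, window=4):
            self.window = window
            self.errors = defaultdict(lambda: deque(maxlen=2 * window))

        def record(self, region, error):
            """Store one prediction error observed in `region`."""
            self.errors[region].append(error)

        def progress(self, region):
            e = list(self.errors[region])
            if len(e) < 2 * self.window:
                return float("inf")   # too few samples: treat as maximally interesting
            old = sum(e[:self.window]) / self.window
            new = sum(e[self.window:]) / self.window
            return old - new          # positive when errors are shrinking

        def best_region(self, regions):
            """Greedy choice of the region with the highest learning progress."""
            return max(regions, key=self.progress)
    ```

    RL-IAC's contribution, per the abstract, is to replace this greedy choice with a reinforcement-learning policy that also accounts for the displacement cost of reaching each region; that part is not sketched here.
    
    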

    Apprentissage incrémental de la saillance visuelle pour des applications robotique

    We propose a method for incremental learning of visual saliency through an environment exploration mechanism. Starting from a geometric definition of object saliency, our system observes its environment in an attentive, targeted way until it discovers salient elements. A classifier then learns the corresponding visual features so that object positions can subsequently be predicted quickly, without geometric analysis. Our approach has been tested on RGB-D images, runs in real time, and outperforms several state-of-the-art methods in the specific context of indoor object detection.

    RL-IAC: An Exploration Policy for Online Saliency Learning on an Autonomous Mobile Robot

    In the context of visual object search and localization, saliency maps provide an efficient way to find object candidates in images. Unlike most approaches, we propose a way to learn saliency maps directly on a robot, by exploring the environment, discovering salient objects using geometric cues, and learning their visual aspects. More importantly, we provide an autonomous exploration strategy able to drive the robot for the task of learning saliency. For that, we describe the Reinforcement Learning-Intelligent Adaptive Curiosity algorithm (RL-IAC), a mechanism based on IAC (Intelligent Adaptive Curiosity) able to guide the robot through areas of the space where learning progress is high, while minimizing the time spent moving in its environment without learning. We first demonstrate that our saliency approach is an efficient tool to generate relevant object box proposals in the input image and significantly outperforms state-of-the-art algorithms. Second, we show that RL-IAC can drastically decrease the time required for learning saliency compared to random exploration.

    Time Disturbances and Filtering of Sensors Signals in Tolerant Multi-product Job-shops with Time Constraints

    This paper deals with supervision in time-critical manufacturing job-shops without assembly tasks. Such systems have a robustness property that allows them to cope with time disturbances. A filtering mechanism for sensor signals integrating the robustness values is proposed. It avoids freezing the control when the time disturbance lies within the robustness intervals. This enhances the filtering mechanism, since it makes it possible to continue production in a degraded mode while preserving guarantees of quality and safety. When a symptom of abnormal functioning is raised by the filtering mechanism, it is imperative to localize the occurrence of the time disturbance. Based upon controlled P-time Petri nets as a modeling tool, a series of lemmas is stated in order to build a theory dealing with the localization problem.
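    The filtering idea above can be pictured as a three-way check on each sensor signal's firing time: inside the nominal P-time interval, inside the interval widened by the robustness margins, or outside both. The function below is only an illustrative sketch of that logic, with all names and the interval representation assumed rather than taken from the paper:

    ```python
    def classify_disturbance(expected, observed, robustness):
        """Classify a sensor signal's firing time against its P-time interval,
        widened by robustness margins.

        expected   -- (t_min, t_max): nominal firing interval
        observed   -- actual firing time of the sensor signal
        robustness -- (r_lo, r_hi): tolerated advance and delay margins
        """
        t_min, t_max = expected
        r_lo, r_hi = robustness
        if t_min <= observed <= t_max:
            return "nominal"      # no disturbance: normal functioning
        if t_min - r_lo <= observed <= t_max + r_hi:
            return "tolerated"    # continue production in a degraded mode
        return "symptom"          # abnormal functioning: localize the disturbance
    ```

    Only the third outcome triggers the localization machinery that the paper's lemmas address; the first two let production continue.
    
    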

    Exploring to learn visual saliency: The RL-IAC approach

    The problem of object localization and recognition on autonomous mobile robots is still an active topic. In this context, we tackle the problem of learning a model of visual saliency directly on a robot. This model, learned and improved on the fly during the robot's exploration, provides an efficient tool for localizing relevant objects within the environment. The proposed approach includes two intertwined components. On the one hand, we describe a method for learning and incrementally updating a model of visual saliency from a depth-based object detector. This model of saliency can also be exploited to produce bounding box proposals around objects of interest. On the other hand, we investigate an autonomous exploration technique to efficiently learn such a saliency model. The proposed exploration, called Reinforcement Learning-Intelligent Adaptive Curiosity (RL-IAC), is able to drive the robot's exploration so that samples selected by the robot are likely to improve the current model of saliency. We then demonstrate that such a saliency model learned directly on a robot outperforms several state-of-the-art saliency techniques, and that RL-IAC can drastically decrease the time required for learning a reliable saliency model.