
    A review on intelligent monitoring and activity interpretation

    This survey paper provides a tour of the monitoring and activity interpretation frameworks found in the literature. The needs of monitoring and interpretation systems are presented in relation to the areas where they have been developed or applied, and their evolution is studied to better understand the characteristics of current systems. After this, the main features of monitoring and activity interpretation systems are defined. This work was partially supported by the Spanish Ministerio de Economía y Competitividad / FEDER under grant DPI2016-80894-R.

    Decision making in dynamic information environments

    If there is no knowledge about the state of the world, producing the appropriate response to an event becomes impossible. Situations of uncertainty are common in the most varied environments and can impair or even halt the decision-making process. Thus, reaching an outcome in such situations requires the development of decision frameworks that account for missing, contradictory or uncertain information.

    Collective responses of a large mackerel school depend on the size and speed of a robotic fish but not on tail motion

    So far, actuated fish models have been used to study animal interactions in small-scale controlled experiments. This study, conducted in a semi-controlled setting, investigates robot interactions with a large wild-caught marine fish school (∼3000 individuals) in its natural social environment. Two towed fish robots were used to decouple size, tail motion and speed in a series of sea-cage experiments. Using high-resolution imaging sonar and sonar-video blind scoring, we monitored and classified the school's collective reaction towards the fish robots as attraction or avoidance. We found that two key releasers, the size and the speed of the robotic fish, were responsible for triggering either evasive reactions or following responses, whereas fish reactions to the tail motion were insignificant. The fish evaded a fast-moving robot even if it was small, but the mackerels' following propensity was greater towards a slow, small robot. When moving slowly, the larger robot triggered significantly more avoidance responses than the small robot. Our results suggest that the collective responses of a large school exposed to a robotic fish could be manipulated by tuning two principal releasers: size and speed. These results can help to design experimental methods for in situ observations of wild fish schools or to develop underwater robots for guiding and interacting with free-ranging aggregated aquatic organisms. This work was financed by the Norwegian Research Council (grant 204229/F20) and Estonian Government Target Financing (grant SF0140018s12). JCC was partially supported by a grant from Iceland, Liechtenstein and Norway through the EEA Financial Mechanism, operated by Universidad Complutense de Madrid. We are grateful to A. Totland for his technical help. The animal collection was approved by The Royal Norwegian Ministry of Fisheries, and the experiment was approved by the Norwegian Animal Research Authority. The Institute of Marine Research is permitted to conduct experiments at the Austevoll aquaculture facility by the Norwegian Biological Resource Committee and the Norwegian Animal Research Committee (Forsøksdyrutvalget).

    Canting heliostats with computer vision and theoretical imaging

    Solar Power Tower technology requires accurate techniques to ensure the optical performance of the heliostats in both the commissioning and operation phases. This paper presents a technique based on target reflection to detect and correct canting errors in heliostat facets. A camera mounted on the back of a target heliostat sees an object heliostat and the target facets in reflection. The pixel difference between the detected and theoretical borders determines the canting errors. Experiments in a lab-scale testbed show that canting errors can be corrected down to an average value as low as around 0.15 mrad. Experiments were also performed on a real heliostat at Plataforma Solar de Almería; as a result, canting errors (up to 5 mrad) were reduced below 0.75 mrad. Mirror slope errors, which can be noticeable in large facets, become the largest source of inaccuracy in the presented method. This work has been supported by the Madrid Government (Comunidad de Madrid) under the Multiannual Agreement with UC3M in the line of "Fostering Young Doctors Research" (VISHELIO-CM-UC3M), and in the context of the V PRICIT (Regional Programme of Research and Technological Innovation). The authors appreciate the help provided by the technical staff at PSA during the experimental campaign. Funding for APC: Universidad Carlos III de Madrid (Read & Publish Agreement CRUE-CSIC 2022).
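    As an illustration of the border-comparison step described in this abstract, the Python sketch below estimates a facet's canting error from already-extracted border points. The `mrad_per_pixel` scale factor and the point correspondences are assumptions made here for the example; in the paper they follow from the camera, target and heliostat geometry of the theoretical imaging model.

    ```python
    import numpy as np

    def canting_error_mrad(detected_border, theoretical_border, mrad_per_pixel):
        """Estimate a facet's canting error from border mismatch.

        detected_border, theoretical_border: (N, 2) arrays of corresponding
        pixel coordinates for the facet border seen in reflection on the target.
        mrad_per_pixel: angular effect of one pixel of border displacement,
        assumed known from the camera/target/heliostat geometry.
        """
        detected = np.asarray(detected_border, dtype=float)
        theoretical = np.asarray(theoretical_border, dtype=float)
        # Pixel displacement of each border point, averaged over the facet.
        offsets = np.linalg.norm(detected - theoretical, axis=1)
        return offsets.mean() * mrad_per_pixel

    # Example: a ~2-pixel average mismatch with a 0.1 mrad/pixel sensitivity
    # corresponds to roughly a 0.2 mrad canting error for that facet.
    err = canting_error_mrad([[10, 5], [30, 7]], [[8, 5], [28, 6]], 0.1)
    print(f"estimated canting error: {err:.2f} mrad")
    ```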

    Applications and Trends in Social Robotics

    The study has received funding from two projects: Development of social robots to help seniors with cognitive impairment (ROBSEN), financed by the Spanish Ministry of Economy; and RoboCity2030-III-CM, funded by the Comunidad de Madrid and co-financed by the European Union Structural Funds.

    Semantic information for robot navigation: a survey

    There is a growing trend in robotics towards implementing behavioural mechanisms based on human psychology, such as the processes associated with thinking. Semantic knowledge has opened new paths in robot navigation, allowing a higher level of abstraction in the representation of information. In contrast with the early years, when navigation relied on geometric navigators that interpreted the environment as a series of accessible areas, or later developments that led to the use of graph theory, semantic information has moved robot navigation one step further. This work presents a survey on the concepts, methodologies and techniques that allow including semantic information in robot navigation systems. The techniques involved have to deal with a range of tasks, from modelling the environment and building a semantic map to learning new concepts and representing the knowledge acquired, in many cases through interaction with users. As understanding the environment is essential to achieve high-level navigation, this paper reviews techniques for the acquisition of semantic information, paying attention to the two main groups: human-assisted and autonomous techniques. Some state-of-the-art semantic knowledge representations are also studied, including ontologies, cognitive maps and semantic maps. All of this leads to a recent concept, semantic navigation, which integrates the previous topics to generate high-level navigation systems able to deal with real-world complex situations. The research leading to these results has received funding from HEROITEA: Heterogeneous Intelligent Multi-Robot Team for Assistance of Elderly People (RTI2018-095599-B-C21), funded by the Spanish Ministerio de Economía y Competitividad. The research leading to this work was also supported by the project "Robots sociales para estimulación física, cognitiva y afectiva de mayores", funded by the Spanish State Research Agency under grant 2019/00428/001. It is also funded by WASP-AI Sweden and by the Spanish project Robotic-Based Well-Being Monitoring and Coaching for Elderly People during Daily Life Activities (RTI2018-095599-A-C22).
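    To make the idea of a semantic map more concrete, the toy Python sketch below layers semantic labels (room categories and contained objects) over a topological graph of places. The `SemanticMap` structure and its `find` query are illustrative assumptions, not a representation taken from any specific system covered by the survey.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Place:
        """One node of a topological map enriched with semantic labels."""
        name: str
        category: str                     # e.g. "kitchen", "corridor"
        objects: set = field(default_factory=set)
        neighbours: set = field(default_factory=set)

    class SemanticMap:
        """Toy semantic map: a labelled graph over topological places."""
        def __init__(self):
            self.places = {}

        def add_place(self, name, category, objects=()):
            self.places[name] = Place(name, category, set(objects), set())

        def connect(self, a, b):
            self.places[a].neighbours.add(b)
            self.places[b].neighbours.add(a)

        def find(self, category=None, containing=None):
            """Resolve a high-level goal such as 'go where the mug is'."""
            return [p.name for p in self.places.values()
                    if (category is None or p.category == category)
                    and (containing is None or containing in p.objects)]

    m = SemanticMap()
    m.add_place("room1", "kitchen", {"mug", "fridge"})
    m.add_place("room2", "corridor")
    m.connect("room1", "room2")
    print(m.find(containing="mug"))   # ['room1']
    ```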

    A proposal for local and global human activities identification

    There are a number of solutions to automate the monotonous task of watching a monitor to find suspicious behaviors in video surveillance scenarios. Detecting strange objects and intruders, or tracking people and objects, is essential for surveillance and safety in crowded environments. The present work deals with the idea of jointly modeling simple and complex behaviors to report local and global human activities in natural scenes. In order to validate our proposal we have performed tests with several CAVIAR test cases. In this paper we show relevant results for study cases related to visual surveillance, namely "speed detection", "position and direction analysis", and "possible cashpoint holdup detection".
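    As an illustration of a local activity rule such as speed detection, the Python sketch below computes per-frame speeds from a tracked trajectory and flags frames above a threshold. The `flag_fast` helper, the threshold value and the assumed 25 fps frame rate are hypothetical choices for this example, not the rules used in the paper.

    ```python
    import math

    def speeds(track, fps=25.0):
        """Per-frame speed (position units per second) from a 2D track.

        track: list of (x, y) positions of one person in consecutive frames,
        as produced by a tracker on CAVIAR-style sequences.
        fps: frame rate assumed for the sequence.
        """
        return [math.dist(track[i], track[i - 1]) * fps
                for i in range(1, len(track))]

    def flag_fast(track, threshold=50.0, fps=25.0):
        """Local activity rule: report frames where speed exceeds a threshold."""
        return [i for i, v in enumerate(speeds(track, fps), start=1)
                if v > threshold]

    trajectory = [(100, 200), (101, 200), (105, 203), (115, 210)]
    print(flag_fast(trajectory, threshold=100.0))   # frames flagged as fast
    ```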

    Detecting and Classifying Human Touches in a Social Robot Through Acoustic Sensing and Machine Learning

    An important aspect in Human-Robot Interaction is responding to different kinds of touch stimuli. To date, several technologies have been explored to determine how a touch is perceived by a social robot, usually placing a large number of sensors throughout the robot's shell. In this work, we introduce a novel approach in which the audio acquired from contact microphones located in the robot's shell is processed using machine learning techniques to distinguish between different types of touches. The system is able to determine when the robot is touched (touch detection) and to ascertain the kind of touch performed among a set of possibilities: stroke, tap, slap, and tickle (touch classification). This proposal is cost-effective, since a single contact microphone is enough to cover each solid part of the robot, so just a few microphones can cover the whole shell. It is also easy to install and configure, as it only requires a contact surface to attach each microphone to the robot's shell and plug it into the robot's computer. Results show high accuracy scores in touch gesture recognition. The testing phase revealed that Logistic Model Trees achieved the best performance, with an F-score of 0.81. The dataset was built with information from 25 participants performing a total of 1981 touch gestures. The research leading to these results has received funding from the projects: Development of social robots to help seniors with cognitive impairment (ROBSEN), funded by the Ministerio de Economía y Competitividad; and RoboCity2030-III-CM, funded by Comunidad de Madrid and co-funded by the Structural Funds of the EU.
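    The abstract describes classifying contact-microphone audio with machine learning, with Logistic Model Trees performing best. As a rough illustration of that kind of pipeline, the sketch below extracts a few simple audio features and trains a scikit-learn decision tree as a stand-in classifier; the feature choices and the toy training data are assumptions, not the authors' actual features or model.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def features(window, rate=16000):
        """Simple features from one audio window captured by a contact mic:
        log energy, zero-crossing rate and time above a noise floor."""
        window = np.asarray(window, dtype=float)
        energy = np.log1p(np.sum(window ** 2))
        zcr = np.mean(np.abs(np.diff(np.sign(window)))) / 2.0
        active = np.sum(np.abs(window) > 0.01) / rate   # seconds above threshold
        return [energy, zcr, active]

    # Hypothetical training data: gentle (low-amplitude) vs. sharp (high-amplitude)
    # touch windows, one feature row and one label per window.
    X = [features(np.random.randn(8000) * a) for a in (0.2, 0.2, 1.0, 1.0)]
    y = ["stroke", "stroke", "slap", "slap"]

    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(clf.predict([features(np.random.randn(8000) * 1.0)]))
    ```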

    Robust people segmentation by static infrared surveillance camera

    In this paper, a new approach to real-time people segmentation through processing images captured by an infrared camera is introduced. The approach starts by detecting human candidate blobs obtained through traditional image thresholding techniques. Afterwards, the blobs are refined with the objective of validating their content. The question to be solved is whether each blob contains a single human candidate or more than one. If a blob contains more than one possible human, it is divided to fit each new candidate in height and width.
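    As an illustration of the thresholding-and-splitting idea, the OpenCV sketch below binarises an infrared frame, extracts warm blobs and evenly splits blobs wider than an expected single-person width. The intensity threshold, the expected person width and the even-split rule are simplifying assumptions; the paper's refinement validates each candidate in both height and width.

    ```python
    import cv2
    import numpy as np

    # Hypothetical parameters: intensity threshold for warm bodies and the
    # expected pixel width of a single person at this camera's range.
    THRESH = 180
    PERSON_WIDTH = 40

    def segment_people(ir_frame):
        """Return one bounding box (x, y, w, h) per person candidate."""
        _, binary = cv2.threshold(ir_frame, THRESH, 255, cv2.THRESH_BINARY)
        n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
        boxes = []
        for i in range(1, n):          # label 0 is the background
            x, y, w, h, area = stats[i]
            if area < 50:              # discard small noise blobs
                continue
            # A blob much wider than one person is assumed to contain several
            # people standing side by side: split it evenly along the width.
            parts = max(1, round(w / PERSON_WIDTH))
            for k in range(parts):
                boxes.append((x + k * w // parts, y, w // parts, h))
        return boxes

    frame = np.zeros((240, 320), dtype=np.uint8)
    frame[60:180, 100:190] = 220       # synthetic warm blob of two people
    print(segment_people(frame))
    ```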