ROSLab Sharing ROS Code Interactively With Docker and JupyterLab
The success of the Robot Operating System (ROS) and the advance of open-source ideas have radically changed and improved the experience of sharing software among members of the robotics community. Yet the lack of a suitable workflow for continuous integration and verification in robotics represents a significant obstacle to developing software that can be run by independent users for testing and reuse purposes.
A Model of Artificial Genotype and Norm of Reaction in a Robotic System
The genes of living organisms serve as large stores of information for replicating their behavior and morphology over generations. The evolutionary view of genetics that has inspired artificial systems with a Mendelian approach does not take into account the interaction between species and with the environment to generate a particular phenotype. In this paper, a genotype model is suggested to shape the relationship with the phenotype and the environment in an artificial system. A method to obtain a genotype from a population of a particular robotic system is also proposed. Finally, we show that this model behaves similarly to living organisms with regard to the concept of norm of reaction. This paper describes research done at the UJI Robotic Intelligence Laboratory. Support for this laboratory is provided in part by Ministerio de Economía y Competitividad (DPI2015-69041-R), by Generalitat Valenciana (PROMETEOII/2014/028), and by Universitat Jaume I (P1-1B2014-52, PREDOC/2013/06).
Robot depth estimation inspired by fixational movements
Distance estimation is a challenge for robots, human beings, and other animals in their adaptation to changing environments. Different approaches have been proposed to tackle this problem, based on classical vision algorithms or, more recently, deep learning. We present a novel approach inspired by mechanisms involved in fixational movements to estimate a depth image with a monocular camera. An algorithm based on microsaccades and head movements during visual fixation is presented. It combines the images generated by these micro-movements with the ego-motion signal to compute the depth map. Systematic experiments using the Baxter robot in the Gazebo/ROS simulator are described to test the approach in two different scenarios and to evaluate the influence of its parameters and its robustness in the presence of noise.
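The core geometric idea behind depth from micro-movements is small-baseline motion parallax: a known tiny camera displacement (the ego-motion signal) and the pixel shift it induces determine depth through the pinhole model. The sketch below is an illustrative reconstruction under that assumption, not the paper's actual algorithm; the function name, the simple averaging over micro-movements, and the pinhole model are assumptions.

```python
import numpy as np

def depth_from_micromovements(disparities, baselines, focal_px):
    """Estimate a per-pixel depth map from several small camera displacements.

    disparities : (K, H, W) pixel shifts measured between the fixation image
                  and each of the K micro-movement images
    baselines   : (K,) known ego-motion translations in metres, one per
                  micro-movement (assumed available from proprioception)
    focal_px    : camera focal length in pixels

    Pinhole parallax gives Z = f * b / d for each micro-movement; averaging
    the K estimates attenuates measurement noise.
    """
    disparities = np.asarray(disparities, dtype=float)
    baselines = np.asarray(baselines, dtype=float)
    eps = 1e-9  # guard against division by zero where no shift was measured
    depths = focal_px * baselines[:, None, None] / (disparities + eps)
    return depths.mean(axis=0)
```

Averaging over several micro-movements is one plausible way to exploit the redundancy the abstract mentions; a robust estimator (e.g. a median) would serve the same purpose under noise.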
Predicting the internal model of a robotic system from its morphology
The internal model of a robotic system results from the interaction of its morphology, sensors, and actuators with a particular environment. Model learning techniques based on supervised machine learning are widespread for estimating the internal model. An important limitation of such approaches is that, once a model has been learnt, it no longer behaves properly when the robot morphology is changed. From this it follows that there must exist a relationship between the morphology and the internal model. We propose a model for this correlation between the morphology and the internal model parameters, so that a new internal model can be predicted when the morphological parameters are modified. Different neural network architectures are proposed to address this high-dimensional regression problem. A case study, a pan-tilt robot head executing saccadic movements, is analyzed in detail to illustrate and evaluate the performance of the approach. The best results are obtained for an architecture with parallel neural networks, due to the independence of its outputs. These results can have great significance, since the predicted parameters can dramatically speed up the adaptation process following a change in morphology.
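The "parallel neural networks" idea, training one independent regressor per internal-model parameter so the outputs do not interfere, can be sketched minimally as follows. This is an assumed toy reconstruction (tiny one-hidden-layer networks trained by gradient descent), not the architectures evaluated in the paper; all class and parameter names are hypothetical.

```python
import numpy as np

class TinyMLP:
    """One-hidden-layer regressor for a single scalar output (sketch)."""
    def __init__(self, n_in, n_hidden=16, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def _forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def predict(self, X):
        return self._forward(X).ravel()

    def fit(self, X, y, epochs=300):
        y = y.reshape(-1, 1)
        for _ in range(epochs):
            err = self._forward(X) - y            # gradient of 0.5*MSE
            gW2 = self.h.T @ err / len(X)
            gb2 = err.mean(axis=0)
            dh = (err @ self.W2.T) * (1 - self.h ** 2)
            gW1 = X.T @ dh / len(X)
            gb1 = dh.mean(axis=0)
            self.W2 -= self.lr * gW2; self.b2 -= self.lr * gb2
            self.W1 -= self.lr * gW1; self.b1 -= self.lr * gb1
        return self

class ParallelNets:
    """One independent network per internal-model parameter, so each
    output is learned without interference from the others."""
    def __init__(self, n_in, n_out):
        self.nets = [TinyMLP(n_in, seed=i) for i in range(n_out)]

    def fit(self, X, Y, epochs=300):
        for j, net in enumerate(self.nets):
            net.fit(X, Y[:, j], epochs)
        return self

    def predict(self, X):
        return np.stack([n.predict(X) for n in self.nets], axis=1)
```

Here `X` would hold morphological parameters and `Y` the corresponding internal-model parameters; the design choice is that errors on one output cannot drag the shared weights of another, which is the independence property the abstract credits for the best results.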
Robot Vision for Manipulation: A Trip to Real-World Applications
Over the last decades, robotics research has taken a major turn from laboratories to factories and ordinary real-world environments. Consequently, new issues to be overcome have arisen, especially when autonomous, dexterous robots are in place. In this paper, we present this evolution in the case of robot vision for manipulation through several robot developments, by analysing their challenges and proposed solutions. This overview highlights the need to use different techniques depending on the task at hand and the scenario to work in. This work was supported in part by the Ministerio de Economía y Competitividad under Grant DPI2015-69041-R, in part by Universitat Jaume I under Grant UJI-B2018-74, and in part by Generalitat Valenciana under Grants PROMETEO/2020/034 and GV/2020/051.
Fostering Progress in Performance Evaluation and Benchmarking of Robotic and Automation Systems
We have shared benchmarks for many engineering systems and products on the market that can be used to compare solutions and systems. We can compare cars in terms of maximum speed, acceleration, and maximum torque; computers in terms of flops, random access memory, and hard disk capacity; and smartphones in terms of battery life and screen dimensions. We also have shared usability metrics based on human factors, which are used to compare the ease of use of different software interfaces. However, when we come to evaluating and comparing how intelligent, robust, adaptive, and antifragile the behaviors of robots are in performing a given set of tasks, such as daily-life activities with everyday objects in a kitchen or a hospital room, we are in trouble.
Integrating sensor models in deep learning boosts performance: application to monocular depth estimation in warehouse automation
Deep learning is the mainstream paradigm in computer vision and machine learning, but its performance is usually not as good as expected when used for applications in robot vision. The problem is that robot sensing is inherently active and, for many application domains, relevant data is scarce. This calls for novel deep learning approaches that can offer good performance at a lower data consumption cost. We address here monocular depth estimation in warehouse automation with new methods and three different deep architectures. Our results suggest that the incorporation of sensor models and prior knowledge related to robotic active vision can consistently improve the results and the learning performance from fewer training samples than usual, compared to standard data-driven deep learning.
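One way a sensor model can reduce data needs is by acting as a fixed, physics-based output stage: instead of learning the full pixels-to-metres mapping from data, a network only has to predict an image-space quantity, and the known camera geometry does the rest. The snippet below is a minimal sketch of that idea under an assumed pinhole model; the focal length, the known box height, and the function name are assumptions, not values from the paper.

```python
import numpy as np

FOCAL_PX = 600.0     # assumed camera focal length in pixels (sensor model)
BOX_HEIGHT_M = 0.4   # assumed known height of warehouse boxes (prior knowledge)

def depth_from_pixel_height(h_px):
    """Fixed pinhole sensor model: depth = f * real_height / pixel_height.

    A learner only needs to predict h_px (the apparent box height in
    pixels); metric depth follows from geometry rather than from data.
    """
    return FOCAL_PX * BOX_HEIGHT_M / np.maximum(h_px, 1e-6)
```

Because the metric scale comes from the fixed model rather than from examples, the hypothesis space the network must cover shrinks, which is consistent with the abstract's claim of good performance from fewer training samples.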
Effects of behavioral risk factors and social-environmental factors on non-communicable diseases in South Korea: a national survey approach
Non-communicable diseases (NCDs) are among the major health threats in the world. Thus, identifying the factors that influence NCDs is crucial to monitoring and managing diseases. This study investigates the effects of social-environmental and behavioral risk factors on NCDs, as well as the effects of social-environmental factors on behavioral risk factors, using an integrated research model. This study used a dataset from the 2017 Korea National Health and Nutrition Examination Survey. After filtering incomplete responses, 5462 valid responses remained. Items covering social-environmental factors (household income, education level, and region), behavioral factors (alcohol use, tobacco use, and physical activity), and NCD histories were used for the analyses. To develop a comprehensive index of each factor that allows comparison between different concepts, the researchers assigned scores to indicators of the factors and calculated a ratio of the scores. A series of path analyses were conducted to determine the extent of the relationships among NCDs and risk factors. The results showed that social-environmental factors have notable effects on stroke, myocardial infarction, angina, diabetes, and gastric, liver, colon, lung, and thyroid cancers. The results indicate that the effects of social-environmental and behavioral risk factors on NCDs vary across the different types of diseases. Both social-environmental and behavioral risk factors significantly affected NCDs, but the effect of social-environmental factors on behavioral risk factors was not supported. Furthermore, social-environmental and behavioral risk factors affect NCDs in a similar way, although the effects of behavioral risk factors were smaller than those of social-environmental factors. The current research suggests taking a comprehensive view of risk factors to further understand the antecedents of NCDs in South Korea.
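A path analysis of the kind described amounts to a system of regressions on standardized variables: one equation for the behavioral factor as a function of the social-environmental factor, and one for the NCD outcome as a function of both. The sketch below illustrates that structure with plain least squares; the variable and edge names are hypothetical, and the actual study used composite indices and a fuller model.

```python
import numpy as np

def path_coefficients(social, behavioral, ncd):
    """Two-equation path model (a sketch of the study's approach):
         behavioral ~ social
         ncd        ~ social + behavioral
    Variables are standardized so the coefficients are path weights.
    """
    def standardize(v):
        return (v - v.mean()) / v.std()

    s, b, n = map(standardize, (social, behavioral, ncd))
    beta_sb = np.linalg.lstsq(s[:, None], b, rcond=None)[0][0]
    X = np.column_stack([s, b])
    gamma = np.linalg.lstsq(X, n, rcond=None)[0]
    return {"social->behavioral": beta_sb,
            "social->ncd": gamma[0],
            "behavioral->ncd": gamma[1]}
```

Comparing the standardized direct paths (`social->ncd` vs `behavioral->ncd`) is what lets the study conclude that behavioral effects were smaller than social-environmental ones.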
Adaptive saccade controller inspired by the primates' cerebellum
Saccades are fast eye movements that allow humans and robots to bring the visual target into the center of the visual field. Saccades are open loop with respect to the vision system, so their execution requires precise knowledge of the internal model of the oculomotor system. In this work, we modeled saccade control, taking inspiration from the recurrent loops between the cerebellum and the brainstem. In this model, the brainstem acts as a fixed inverse model of the oculomotor system, while the cerebellum acts as an adaptive element that learns the internal model of the oculomotor system. The adaptive filter is implemented using a state-of-the-art neural network called I-SSGPR. The proposed approach, namely the recurrent architecture, was validated through experiments performed both in simulation and on an anthropomorphic robotic head. Moreover, we compared the recurrent architecture with another model of the cerebellum, feedback error learning. The achieved results show that the recurrent architecture outperforms feedback error learning in terms of accuracy and insensitivity to the choice of the feedback controller.
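The recurrent architecture can be illustrated with a scalar toy model: the brainstem is a fixed inverse of an assumed plant gain, and the cerebellar element, driven by the brainstem output (the recurrent connection), learns an online correction from the post-saccade error. This sketch substitutes a simple LMS update for the I-SSGPR filter used in the paper; the gains, learning rate, and function name are all assumptions.

```python
import numpy as np

def simulate_recurrent_saccades(true_gain=0.7, assumed_gain=1.0,
                                lr=0.1, n_trials=200, seed=0):
    """Toy recurrent saccade controller.

    Brainstem: fixed inverse model, u_b = target / assumed_gain.
    Cerebellum: adaptive weight w on the brainstem drive, updated online
    from the retinal error measured after each saccade (LMS stand-in for
    the paper's I-SSGPR adaptive filter).
    Returns the absolute saccade error per trial.
    """
    rng = np.random.default_rng(seed)
    w = 0.0
    errors = []
    for _ in range(n_trials):
        target = rng.uniform(-10.0, 10.0)       # desired amplitude (deg)
        u_b = target / assumed_gain             # fixed inverse model
        u = u_b + w * u_b                       # cerebellar correction of
                                                # the brainstem drive
        amplitude = true_gain * u               # miscalibrated plant
        err = target - amplitude                # post-saccade retinal error
        # normalized LMS update; the epsilon keeps tiny targets harmless
        w += lr * err * u_b / (u_b ** 2 + 1e-9)
        errors.append(abs(err))
    return errors
```

With the plant gain 30% below what the brainstem assumes, the correction weight converges to the value that restores accurate saccades, which mirrors the adaptation the paper demonstrates; feedback error learning would instead train the adaptive element on the output of a feedback controller, making accuracy sensitive to that controller's tuning.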
An Omnidirectional Platform for Education and Research in Cooperative Robotics
In this paper, we present a new, affordable, omnidirectional robot platform which is suitable for research and education in cooperative robotics. We designed and implemented the platform for the purpose of multi-agent object manipulation and transportation. The design consists of three omnidirectional wheels with two additional traction wheels, making multirobot object manipulation possible. It is validated by performing simple experiments using a setup with one robot and one target object. The execution flow of a simple task (Approach–Press–Lift–Hold–Set) is studied. In addition, we run experiments to find the limits of the applied pressure and object orientation under certain conditions. The experiments demonstrate the significance of our inexpensive platform for research and education by proving its feasibility for use in topics such as collaborative robotics, physical interaction, and mobile manipulation.
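The standard inverse kinematics of a three-omni-wheel base maps a desired body twist (vx, vy, omega) to wheel rim speeds via each wheel's mounting angle. The sketch below covers only the three omnidirectional wheels of such a base (the paper's platform also has two traction wheels, which are not modeled here), and the mounting angles and robot radius are assumed values, not the paper's.

```python
import numpy as np

WHEEL_ANGLES = np.radians([90.0, 210.0, 330.0])  # assumed mounting angles
ROBOT_RADIUS = 0.12                              # assumed radius in metres

def wheel_speeds(vx, vy, omega):
    """Inverse kinematics of a three-omni-wheel base.

    Each wheel i, mounted at angle theta_i, rolls at
        v_i = -sin(theta_i)*vx + cos(theta_i)*vy + R*omega
    so the Jacobian below maps the body twist to the three rim speeds.
    """
    J = np.column_stack([-np.sin(WHEEL_ANGLES),
                         np.cos(WHEEL_ANGLES),
                         np.full(3, ROBOT_RADIUS)])
    return J @ np.array([vx, vy, omega])
```

For example, a pure rotation drives all three wheels at the same speed, while a pure translation produces wheel speeds that cancel out in sum for symmetric mounting angles; inverting the same Jacobian recovers odometry from measured wheel speeds.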