The passive operating mode of the linear optical gesture sensor
The study evaluates the influence of natural light conditions on the
effectiveness of the linear optical gesture sensor, working in the presence of
ambient light only (passive mode). The orientation of the device relative to
the light source was varied in order to verify the sensitivity of the sensor.
A criterion for differentiating between two states, "possible gesture" and
"no gesture", was proposed. Additionally, different light conditions were
investigated, together with candidate features relevant to the decision of
switching between the passive and active modes of the device. The criterion
was evaluated through a specificity and sensitivity analysis of the binary
ambient light condition classifier. The resulting classifier predicts ambient
light conditions with an accuracy of 85.15%. Once the light conditions are
known, the hand pose can be detected. The hand-pose classifier, trained on
data obtained in the passive mode under favorable light conditions, achieved
an accuracy of 98.76%. It was also shown that the passive operating mode of
the linear gesture sensor reduces total energy consumption by 93.34%, leaving
a current draw of 0.132 mA. It was concluded that the linear optical sensor
can be used efficiently in various lighting conditions.
Computer- and robot-assisted Medical Intervention
Medical robotics includes assistive devices used by the physician in order to
make his/her diagnostic or therapeutic practices easier and more efficient.
This chapter focuses on such systems. It introduces the general field of
Computer-Assisted Medical Interventions, its aims, its different components and
describes the place of robots in that context. The evolution of general design
and control paradigms in the development of medical robots is presented, and
issues specific to that application domain are discussed. A view of existing
systems, ongoing developments and future trends is given, and a case study is
detailed. Other types of robotic help in the medical environment (such as
assisting a handicapped person, rehabilitating a patient, or replacing damaged
or removed limbs or organs) are outside the scope of this chapter. (In:
Handbook of Automation, Shimon Nof (Ed.), 2009.)
Prefrontal cortex activation upon a demanding virtual hand-controlled task: A new frontier for neuroergonomics
Functional near-infrared spectroscopy (fNIRS) is a non-invasive vascular-based functional neuroimaging technology that can assess, simultaneously from multiple cortical areas, concentration changes of oxygenated and deoxygenated hemoglobin at the level of the cortical microcirculation blood vessels. fNIRS, with its high degree of ecological validity and the very limited physical constraints it imposes on subjects, could represent a valid tool for monitoring cortical responses in the research field of neuroergonomics. In virtual reality (VR), real situations can be replicated with greater control than is obtainable in the real world. Therefore, VR is the ideal setting where studies about neuroergonomics applications can be performed. The aim of the present study was to investigate, by a 20-channel fNIRS system, the dorsolateral/ventrolateral prefrontal cortex (DLPFC/VLPFC) in subjects while performing a demanding VR hand-controlled task (HCT). Considering the complexity of the HCT, its execution should require the allocation of attentional resources and the integration of different executive functions. The HCT simulates the interaction with a real, remotely-driven system operating in a critical environment. The hand movements were captured by a high spatial and temporal resolution 3-dimensional (3D) hand-sensing device, the LEAP motion controller, a gesture-based control interface that could be used in VR for tele-operated applications. Fifteen university students were asked to guide, with their right hand/forearm, a virtual ball (VB) over a virtual route (VROU) reproducing a 42 m narrow road including some critical points. The subjects tried to travel as far as possible without letting the VB fall. The distance traveled by the guided VB was 70.2 ± 37.2 m. The less skilled subjects failed several times in guiding the VB over the VROU. Nevertheless, a bilateral VLPFC activation in response to the HCT execution was observed in all the subjects. No correlation was found between the distance traveled by the guided VB and the corresponding cortical activation. These results confirm the suitability of fNIRS technology to objectively evaluate cortical hemodynamic changes occurring in VR environments. Future studies could contribute to a better understanding of the cognitive mechanisms underlying human performance in both expert and non-expert operators during the simulation of different demanding/fatiguing activities.
Carrieri, Marika; Petracca, Andrea; Lancia, Stefania; Basso Moro, Sara; Brigadoi, Sabrina; Spezialetti, Matteo; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina
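
To make the hemoglobin quantification behind this kind of fNIRS measurement concrete, here is a minimal sketch of the modified Beer-Lambert law step commonly used to convert optical-density changes at two wavelengths into oxy-/deoxy-hemoglobin concentration changes. The extinction coefficients, source-detector distance, pathlength factors, and OD values are illustrative assumptions, not parameters of this study.

    # Modified Beer-Lambert law: Delta_OD(lambda) =
    #   (eps_HbO2(lambda)*dHbO2 + eps_HbR(lambda)*dHbR) * d * DPF(lambda).
    # Two wavelengths give a 2x2 linear system for the two unknowns.
    import numpy as np

    # Extinction coefficients [HbO2, HbR] in 1/(mM*cm) -- rough, assumed values.
    eps = np.array([[0.69, 2.05],    # ~760 nm
                    [1.10, 0.78]])   # ~850 nm
    d = 3.0                          # source-detector separation, cm (assumed)
    dpf = np.array([6.0, 5.5])       # differential pathlength factors (assumed)

    delta_od = np.array([0.012, 0.018])           # example measured OD changes
    A = eps * (d * dpf)[:, None]                  # effective pathlength per wavelength
    d_hbo2, d_hbr = np.linalg.solve(A, delta_od)  # concentration changes in mM
    print(f"dHbO2 = {d_hbo2 * 1e3:.3f} uM, dHbR = {d_hbr * 1e3:.3f} uM")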
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
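
As a toy illustration of the working principle described above, the sketch below generates events by thresholding per-pixel log-intensity changes between two frames. The contrast threshold and input images are invented, and a real sensor operates asynchronously per pixel (updating each pixel's reference level after every event) rather than frame-to-frame.

    # Toy event generation: a pixel emits an event (x, y, t, polarity) when its
    # log-intensity change exceeds a contrast threshold C.
    import numpy as np

    C = 0.15  # contrast threshold (assumed)

    def events_between(prev_log, curr_log, t):
        """Events for pixels whose log-intensity changed by at least C."""
        diff = curr_log - prev_log
        ys, xs = np.nonzero(np.abs(diff) >= C)
        return [(x, y, t, 1 if diff[y, x] > 0 else -1) for y, x in zip(ys, xs)]

    rng = np.random.default_rng(0)
    frame0 = rng.uniform(0.1, 1.0, size=(4, 4))            # toy intensity image
    frame1 = frame0 * rng.uniform(0.7, 1.3, size=(4, 4))   # brightness changes
    print(events_between(np.log(frame0), np.log(frame1), t=1e-3))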
RGBD Datasets: Past, Present and Future
Since the launch of the Microsoft Kinect, scores of RGBD datasets have been
released. These have propelled advances in areas from reconstruction to gesture
recognition. In this paper we explore the field, reviewing datasets across
eight categories: semantics, object pose estimation, camera tracking, scene
reconstruction, object tracking, human actions, faces and identification. By
extracting relevant information in each category we help researchers to find
appropriate data for their needs, and we consider which datasets have succeeded
in driving computer vision forward and why.
Finally, we examine the future of RGBD datasets. We identify key areas which
are currently underexplored, and suggest that future directions may include
synthetic data and dense reconstructions of static and dynamic scenes.
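
As a small, hedged example of the geometry these datasets commonly exercise, the sketch below back-projects a depth map into a 3-D point cloud using pinhole intrinsics; the Kinect-like intrinsic values are illustrative defaults, not tied to any specific dataset in the survey.

    # Back-project a depth map (meters) into camera-frame 3-D points.
    import numpy as np

    fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5  # assumed pinhole intrinsics (px)

    def depth_to_points(depth):
        """depth: HxW array in meters -> Nx3 points; invalid (<=0) depths dropped."""
        h, w = depth.shape
        v, u = np.mgrid[0:h, 0:w]                # pixel coordinates
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]

    cloud = depth_to_points(np.full((480, 640), 1.5))  # flat wall 1.5 m away
    print(cloud.shape)                                  # (307200, 3)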
VANET Applications: Hot Use Cases
Current challenges of car manufacturers are to make roads safe, to achieve
free-flowing traffic with little congestion, and to reduce pollution through
effective fuel use. To reach these goals, many improvements are performed
in-car, but more and more approaches rely on connected cars with communication
capabilities between cars, with infrastructure, or with IoT devices.
Monitoring and coordinating vehicles then makes it possible to compute
intelligent ways of transportation. Connected cars have introduced a new way
of thinking about cars: not only as a means for a driver to go from A to B,
but as smart cars, an extension of the user much like the smartphone today. In
this report, we introduce concepts and specific vocabulary in order to
classify current innovations and ideas on the emerging topic of the smart car.
We present a graphical categorization showing this evolution as a function of
societal evolution. Different perspectives are adopted: a vehicle-centric
view, a vehicle-network view, and a user-centric view, each described by
simple and complex use cases and illustrated by a list of emerging and current
projects from the academic and industrial worlds. We identified a gap in
innovation between the user and the car: paradoxically, even though the two
are in constant interaction, they remain separated by distinct application
uses. A future challenge is to interweave the social concerns of the user with
intelligent and efficient driving.
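
As a sketch of the kind of periodic vehicle-to-vehicle beacon that such monitoring and coordination builds on, the snippet below defines a minimal awareness message; the field names and JSON encoding are illustrative (loosely inspired by ETSI CAM-style messages, which in practice use ASN.1 encodings and dedicated radio stacks).

    # Minimal V2V "awareness beacon" sketch; fields and encoding are assumptions.
    from dataclasses import dataclass, asdict
    import json, time

    @dataclass
    class Beacon:
        vehicle_id: str
        timestamp: float   # seconds since epoch
        lat: float         # WGS84 degrees
        lon: float
        speed_mps: float
        heading_deg: float

    def encode(b: Beacon) -> bytes:
        """Serialize for broadcast; real stacks use ASN.1, not JSON."""
        return json.dumps(asdict(b)).encode()

    b = Beacon("car-42", time.time(), 48.8566, 2.3522, 13.9, 271.0)
    print(encode(b))  # would be broadcast periodically (e.g., ~10 Hz)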
Human robot interaction in a crowded environment
Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered to be laborious, unsafe, or repetitive. Vision-based human-robot interaction is a major component of HRI, in which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3].
Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who initiated the gesture. In this thesis, we have proposed a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human-robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether the people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them; if an individual is receptive to the robot's interaction, it may approach the person.
Finally, if the user is moving in the environment, the system can analyse the motion further to determine whether any help can be offered to the user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine their potential intentions. To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
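
A minimal sketch of how such Bayesian cue fusion might look, assuming a naive-Bayes combination of independent visual cues; the cues, likelihoods, and prior below are invented for illustration and are not the thesis's actual model.

    # Naive-Bayes fusion of visual cues into a posterior probability that a
    # person in the scene is the one issuing a command. All numbers assumed.
    PRIOR = 0.2  # prior probability that a given person is commanding

    # P(cue | commanding), P(cue | not commanding) -- hypothetical likelihoods.
    LIKELIHOODS = {
        "face_visible": (0.90, 0.50),
        "hand_raised":  (0.80, 0.10),
        "facing_robot": (0.85, 0.40),
    }

    def posterior_commanding(observed_cues):
        p_c, p_nc = PRIOR, 1.0 - PRIOR
        for cue in observed_cues:
            l_c, l_nc = LIKELIHOODS[cue]
            p_c *= l_c
            p_nc *= l_nc
        return p_c / (p_c + p_nc)

    # A person with a raised hand who faces the robot:
    print(f"{posterior_commanding(['hand_raised', 'facing_robot']):.2f}")  # ~0.81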