Exploration of human search behaviour: a multidisciplinary perspective
The following work presents an exploration of human search behaviour from both biological and computational perspectives. Search behaviour is defined as the movements made by an organism while attempting to find a resource. This work describes some of the principal procedures used to record movement, methods for analysing the data, and possible ways of interpreting the data.

In order to obtain a database of searching behaviour, an experimental setup was built and tested to generate the search paths of human participants. The test arena occupied part of a football field, and the targets consisted of an array of 20 golf balls. In the first set of experiments, random and regular distributions of targets were tested. For each distribution, three distinct conspicuity levels were constructed: a cryptic level, in which targets were painted the same colour as the grass; a semi-conspicuous level, in which targets were left white; and a conspicuous level, in which the position of each target was marked by a red flag protruding one metre from the ground. The subjects tested were 9-11 year old children, and their search paths were collected using a GPS device. Subjects did not recognise spatial cues about how the targets were distributed. A minimal decision model, the bouncing search model, was built based on the characteristics of the children's search paths. The model produced an outstanding fit to the children's behavioural data.

In the second set of experiments, a new group of children were tested on two new distributions obtained by arranging the targets in patches. Again, the children appeared unable to recognise spatial information during the collection process. The children's behaviour once again produced a good match with that of the bouncing search model.

This work introduces several new methodological aspects to be explored to further understand the decision processes involved when humans search. It also illustrates that integrating biology and computational science can result in innovative research.
Body-relative navigation guidance using uncalibrated cameras
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 89-97) and index.

The ability to navigate through the world is an essential human capability. In a variety of situations, people do not have the time, the opportunity, or the capability to learn the layout of an environment before visiting it. Examples include soldiers in the field entering an unknown building, firefighters responding to an emergency, or a visually impaired person walking through the city. In the absence of an external source of localization (such as GPS), the system must rely on internal sensing to provide navigation guidance to the user. In order to address real-world situations, the method must provide spatially extended, temporally consistent navigation guidance through cluttered and dynamic environments.

While recent research has largely focused on metric methods based on calibrated cameras, the work presented in this thesis demonstrates a novel approach to navigation using uncalibrated cameras. During the first visit to the environment, the method builds a topological representation of the user's exploration path, which we refer to as the place graph. The method then provides navigation guidance from any place to any other in the explored environment. On one hand, a localization algorithm determines the location of the user in the graph. On the other hand, a rotation guidance algorithm provides a directional cue towards the next graph node in the user's body frame. Our method makes few assumptions about the environment, except that it contains descriptive visual features. It requires no intrinsic or extrinsic camera calibration, relying instead on a method that learns the correlation between user rotation and feature correspondence across cameras. We validate our approach using several ground-truth datasets.
In addition, we show that our approach is capable of guiding a robot equipped with a local obstacle avoidance capability through real, cluttered environments. Finally, we validate our system with nine untrained users through several kilometers of indoor environments.

by Olivier Koch. Ph.D.
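The place graph described above is a topological structure: nodes are visited places, edges are traversed transitions, and providing guidance "from any place to any other" reduces to graph search over that structure. A minimal sketch, assuming an adjacency-list representation and plain breadth-first search; the node names are hypothetical, and the thesis's actual localization and rotation-guidance algorithms are not reproduced here.

```python
from collections import deque

def shortest_route(place_graph, start, goal):
    """Breadth-first search over a topological place graph
    (nodes = visited places, edges = traversed transitions).
    Returns the node sequence from start to goal, or None."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        route = frontier.popleft()
        node = route[-1]
        if node == goal:
            return route
        for nxt in place_graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(route + [nxt])
    return None

# Hypothetical place graph built during a first exploration pass.
graph = {
    "lobby":  ["hall_a"],
    "hall_a": ["lobby", "hall_b", "stairs"],
    "hall_b": ["hall_a", "office"],
    "stairs": ["hall_a"],
    "office": ["hall_b"],
}

route = shortest_route(graph, "lobby", "office")
```

In the actual system, localization would pin the user to a node in this graph, and rotation guidance would point the user toward the next node along the route in their body frame.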