Monitoring wild animal communities with arrays of motion sensitive camera traps
Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, and climate and land-use change. Motion-sensitive camera traps offer a visual sensor that records the presence of a broad range of species, providing location-specific information on movement and behavior. Modern digital camera traps that record video present new analytical opportunities, but also new data-management challenges. This paper describes our experience with a terrestrial animal monitoring system at Barro Colorado Island, Panama. Our camera network captured the spatio-temporal dynamics of terrestrial bird and mammal activity at the site - data relevant to immediate science questions and long-term conservation issues. We believe that the experience gained and lessons learned during our year-long deployment and testing of the camera traps, as well as the solutions we developed, are applicable to broader sensor-network applications and are valuable for the advancement of sensor-network research. We suggest that the continued development of these hardware, software, and analytical tools, in concert, offers an exciting sensor-network solution for monitoring animal populations that could realistically scale over larger areas and time spans.
Independent Motion Detection with Event-driven Cameras
Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both on the order of microseconds). As such, they have great potential for fast and low-power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. However, cameras mounted on a moving robot are typically non-stationary, and the same tracking problem becomes confounded by background clutter events caused by the robot's ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the corner velocities predicted from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ~90% and show that the method is robust to changes in the speed of both the head and the target.
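The core decision step described above - flagging corners whose measured velocity disagrees with the ego-motion prediction - can be sketched as a simple residual test. This is an illustrative reconstruction, not the authors' implementation; the function name, the Euclidean-norm residual, and the fixed threshold are assumptions.

```python
import numpy as np

def flag_independent_motion(predicted_v, measured_v, threshold=1.0):
    """Flag corners whose measured velocity deviates from the ego-motion
    prediction by more than `threshold` (hypothetical decision rule).

    predicted_v, measured_v: (N, 2) arrays of per-corner (vx, vy).
    Returns a boolean mask: True where the corner likely belongs to an
    independently moving object.
    """
    residual = np.linalg.norm(measured_v - predicted_v, axis=1)
    return residual > threshold

# Example: three tracked corners; the third moves against the predicted flow.
pred = np.array([[1.0, 0.0], [1.0, 0.1], [1.0, 0.0]])
meas = np.array([[1.1, 0.0], [0.9, 0.1], [-2.0, 1.5]])
mask = flag_independent_motion(pred, meas)
```

In practice the threshold would be derived from the learned statistics of corner motion rather than fixed by hand.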
Vision-based analysis of pedestrian traffic data
Reducing traffic congestion has become a major issue within urban environments. Traditional approaches, such as increasing road sizes, may prove impossible in certain scenarios, such as city centres, or ineffectual if current predictions of large growth in world traffic volumes hold true. An alternative approach lies in increasing the management efficiency of pre-existing infrastructure and public transport systems through the use of Intelligent Transportation Systems (ITS). In this paper, we focus on the requirement of obtaining robust pedestrian traffic-flow data within these areas. We propose the use of a flexible and robust stereo-vision pedestrian detection and tracking approach as a basis for obtaining this information. Given this framework, we propose the use of a pedestrian indexing scheme and a suite of tools that facilitate the declaration of user-defined pedestrian events or requests for specific statistical traffic-flow data. The detection of the required events or the constant flow of statistical information can be incorporated into a variety of ITS solutions for applications in traffic management, public transport systems and urban planning.
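A user-defined pedestrian event over tracked trajectories might look like the following minimal sketch. The representation of tracks as position lists, the line-crossing predicate, and all names are assumptions for illustration only; the paper's indexing scheme and event language are not reproduced here.

```python
def crossed_line(track, x_line):
    """Return True if a pedestrian track crosses the vertical line x = x_line.

    track: list of (x, y) ground-plane positions over time (assumed format).
    """
    xs = [x for x, _ in track]
    return min(xs) < x_line <= max(xs)

# Hypothetical tracker output: two pedestrians, one crossing x = 3.0.
tracks = {
    "ped_1": [(0.0, 1.0), (2.0, 1.2), (4.0, 1.1)],
    "ped_2": [(5.0, 0.5), (5.2, 0.6)],
}
count = sum(crossed_line(t, x_line=3.0) for t in tracks.values())
```

Aggregating such per-track predicates over time is one way the statistical flow data described above could be produced.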
Interoperable services based on activity monitoring in ambient assisted living environments
Ambient Assisted Living (AAL) is considered the main technological solution that will enable the aged and people in recovery to maintain their independence, and a consequent high quality of life, for a longer period of time than would otherwise be the case. This goal is achieved by monitoring human activities and deploying the appropriate collection of services to set environmental features and satisfy user preferences in a given context. However, both human monitoring and service deployment are particularly hard to accomplish due to the uncertainty and ambiguity characterising human actions, and the heterogeneity of hardware devices composed in an AAL system. This research addresses both of the aforementioned challenges by introducing 1) an innovative system, based on a Self Organising Feature Map (SOFM), for automatically classifying the resting location of a moving object in an indoor environment and 2) a strategy able to generate context-aware Fuzzy Markup Language (FML) services in order to maximize the users' comfort and hardware interoperability level. The overall system runs on a distributed embedded platform with a specialised ceiling-mounted video sensor for intelligent activity monitoring. The system has the ability to learn resting locations, to measure overall activity levels, to detect specific events such as potential falls, and to deploy the right sequence of fuzzy services modelled through FML for supporting people in that particular context. Experimental results show less than 20% classification error in monitoring human activities and providing the right set of services, showing the robustness of our approach over others in the literature, with minimal power consumption.
State space and movement specification in open population spatial capture-recapture models.
With continued global changes, such as climate change, biodiversity loss, and habitat fragmentation, the need for assessment of long-term population dynamics and population monitoring of threatened species is growing. One powerful way to estimate population size and dynamics is through capture-recapture methods. Spatial capture-recapture (SCR) models for open populations make efficient use of capture-recapture data while being robust to design changes. Relatively few studies have implemented open SCR models, and to date, very few have explored potential issues in defining these models. We develop a series of simulation studies to examine the effects of the state-space definition and between-primary-period movement models on demographic parameter estimation. We demonstrate the implications using a 10-year camera-trap study of tigers in India. The results of our simulation study show that movement biases survival estimates in open SCR models when little is known about between-primary-period movements of animals. The size of the state-space delineation can also bias the estimates of survival in certain cases. We found that both the state-space definition and the between-primary-period movement specification affected survival estimates in the analysis of the tiger dataset (posterior mean estimates of survival ranged from 0.71 to 0.89). In general, we suggest that open SCR models can provide an efficient and flexible framework for long-term monitoring of populations; however, in many cases, realistic modeling of between-primary-period movements is crucial for unbiased estimates of survival and density.
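A common building block of SCR models is a half-normal detection function, in which capture probability at a trap decays with distance from an animal's activity centre. The sketch below shows only this generic component under assumed parameter values (g0, sigma); it is not the open-population model analysed above.

```python
import math

def detection_prob(activity_center, trap, g0=0.8, sigma=1.0):
    """Half-normal SCR detection function (generic form):
    p = g0 * exp(-d^2 / (2 * sigma^2)),
    where d is the distance from the activity centre to the trap,
    g0 is the capture probability at distance zero, and sigma
    controls the spatial scale of detection.
    """
    d = math.dist(activity_center, trap)
    return g0 * math.exp(-d * d / (2 * sigma ** 2))

p_near = detection_prob((0.0, 0.0), (0.5, 0.0))
p_far = detection_prob((0.0, 0.0), (3.0, 0.0))
```

The state-space discussed above is the region over which such activity centres are assumed to be distributed, which is why its delineation can influence the resulting estimates.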
Automatic sensor-based detection and classification of climbing activities
This article presents a method to automatically detect and classify climbing activities using inertial measurement units (IMUs) attached to the wrists, feet and pelvis of the climber. The IMUs record limb acceleration and angular velocity. Detection requires a learning phase with manual annotation to construct the statistical models used in the CUSUM algorithm. Full-body activity is then classified based on the detections of each IMU.
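Detection with the CUSUM algorithm can be illustrated on a synthetic acceleration signal. This is a textbook one-sided CUSUM sketch, not the article's annotated, model-based variant; the baseline mean, drift, and threshold values are assumptions.

```python
def cusum(samples, target_mean, drift=0.5, threshold=5.0):
    """One-sided CUSUM change detector: accumulate positive deviations
    from the baseline mean and return the first index where the statistic
    crosses `threshold`, or -1 if no change is detected.
    """
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i
    return -1

# Quiet baseline (~1 g) followed by a burst of limb acceleration (synthetic).
signal = [1.0] * 10 + [4.0] * 5
onset = cusum(signal, target_mean=1.0)
```

In the sensor-based setting described above, one such detector per IMU would produce the per-limb activity detections that the full-body classifier then combines.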
- …