Natural ultrasonic echoes from wing beating insects are encoded by collicular neurons in the CF-FM bat, Rhinolophus ferrumequinum
1. Acoustic reflections of an 80 kHz ultrasonic signal from a wing-beating moth were recorded from six different incident angles and analyzed in the spectral and time domains. The recorded echoes, as well as independent components of the amplitude and frequency modulations of the echoes, were employed as acoustic stimuli during single unit studies.
2. The responses of single inferior colliculus neurons to these stimuli were recorded from four horseshoe bats, Rhinolophus ferrumequinum, a species which uses a long constant frequency (CF) sound with a final frequency modulated (FM) sweep during echolocation. All neurons responding to wing beat echoes reliably encoded the fundamental wing beat frequency as well as the more refined frequency and amplitude modulations.
3. These neurons may provide the bat with a neural mechanism to detect periodically moving targets against a cluttered background, and also to discriminate various insect species on the basis of their wing beat patterns.
A computer vision approach to classification of birds in flight from video sequences
Bird populations are an important bio-indicator, so collecting reliable data is useful for ecologists helping to conserve and manage fragile ecosystems. However, existing manual monitoring methods are labour-intensive, time-consuming, and error-prone. The aim of our work is to develop a reliable system capable of automatically classifying individual bird species in flight from videos. This is challenging, but appropriate for use in the field, since there is often a requirement to identify birds in flight rather than when stationary. We present our work in progress, which uses combined appearance and motion features for classification, and we present experimental results across seven species using a Normal Bayes classifier with majority voting, achieving a classification rate of 86%.
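The per-clip decision stage this abstract describes (frame-level classifications pooled by majority voting) can be sketched as below. This is a minimal illustration only; the species labels and the `majority_vote` helper are hypothetical, not taken from the paper.

```python
from collections import Counter

def majority_vote(frame_predictions):
    """Return the label predicted for the most frames in a clip."""
    counts = Counter(frame_predictions)
    return counts.most_common(1)[0][0]

# Hypothetical per-frame outputs from a classifier over one video clip.
clip = ["gull", "gull", "crow", "gull", "crow", "gull"]
print(majority_vote(clip))  # -> gull
```

Pooling per-frame decisions this way smooths over frames where pose or motion blur makes a single image ambiguous, which is the motivation the abstract gives for classifying from video rather than stills.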
Improving the efficiency and accuracy of nocturnal bird surveys through equipment selection and partial automation
This thesis was submitted for the degree of Engineering Doctorate and awarded by Brunel University.

Birds are a key environmental asset, and this is recognised through comprehensive legislation and policy ensuring their protection and conservation. Many species are active at night, and surveys are required to understand the implications of proposed developments such as towers and to reduce possible conflicts with these structures. Night vision devices are commonly used in nocturnal surveys, either to scope an area for bird numbers and activity, or in remotely sensing an area to determine potential risk. This thesis explores some practical and theoretical approaches that can improve the accuracy, confidence and efficiency of nocturnal bird surveillance.

As image intensifiers and thermal imagers have operational differences, each device has associated strengths and limitations. Empirical work established that image intensifiers are best used for species identification of birds against the ground or vegetation, while thermal imagers perform best in detection tasks and in monitoring bird airspace usage.

The typical approach of viewing bird survey video from remote sensing in its entirety is slow, inaccurate and inefficient. Accuracy can be significantly improved by viewing the survey video at half the playback speed, and motion detection efficiency and accuracy can be greatly improved through the use of adaptive background subtraction and cumulative image differencing.

An experienced ornithologist uses bird flight style and wing oscillations to identify bird species. Changes in wing oscillations can be represented in a single inter-frame similarity matrix through area-based differencing. Bird species classification can then be automated using singular value decomposition to reduce the matrices to one-dimensional vectors for training a feed-forward neural network.
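The motion-detection idea mentioned above (adaptive background subtraction combined with cumulative image differencing) can be sketched roughly as follows. This is one minimal interpretation, assuming a running-average background model; the parameter values and the `cumulative_difference` helper are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def cumulative_difference(frames, alpha=0.05, threshold=25):
    """Accumulate per-pixel motion evidence over a greyscale video.

    frames    : iterable of 2-D uint8 arrays (greyscale frames)
    alpha     : background adaptation rate (running average)
    threshold : per-pixel intensity difference that counts as motion
    Returns a float array; high values mark pixels that changed often.
    """
    frames = iter(frames)
    background = next(frames).astype(np.float64)
    motion = np.zeros_like(background)
    for frame in frames:
        frame = frame.astype(np.float64)
        diff = np.abs(frame - background)
        motion += (diff > threshold)                      # accumulate change evidence
        background = (1 - alpha) * background + alpha * frame  # adapt slowly
    return motion
```

Thresholding the accumulated map then isolates regions of persistent movement (e.g. a flying bird) while the slowly adapting background absorbs gradual illumination changes.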
Classification of bird species from video using appearance and motion features
The monitoring of bird populations can provide important information on the state of sensitive ecosystems; however, the manual collection of reliable population data is labour-intensive, time-consuming, and potentially error-prone. Automated monitoring using computer vision is therefore an attractive proposition, which could facilitate the collection of detailed data on a much larger scale than is currently possible.
A number of existing algorithms are able to classify bird species from individual high-quality, detailed images, often using manual inputs (such as a priori parts labelling). However, deployment in the field necessitates fully automated in-flight classification, which remains an open challenge due to poor image quality, high and rapid variation in pose, and the similar appearance of some species. We address this as a fine-grained classification problem, and have collected a video dataset of thirteen bird classes (ten species and another with three colour variants) for training and evaluation. We present our proposed algorithm, which selects effective features from a large pool of appearance and motion features. We compare our method to others which use appearance features only, including image classification using state-of-the-art Deep Convolutional Neural Networks (CNNs). Using our algorithm we achieved a 90% correct classification rate, and we also show that using effectively selected motion and appearance features together can produce results which outperform state-of-the-art single image classifiers. We also show that the most significant motion features improve correct classification rates by 7% compared to using appearance features alone.
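Selecting "effective features from a large pool" can be done in many ways; the abstract does not specify the procedure, so the sketch below uses a simple univariate Fisher criterion as a stand-in. The `fisher_scores` and `select_top_k` helpers are hypothetical illustrations, not the paper's method.

```python
import numpy as np

def fisher_scores(X, y):
    """Score each feature by between-class variance of its class means
    divided by its mean within-class variance (higher = more discriminative).
    X: (n_samples, n_features) feature pool; y: integer class labels."""
    y = np.asarray(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)

def select_top_k(X, y, k):
    """Indices of the k highest-scoring features in the pool."""
    return np.argsort(fisher_scores(X, y))[::-1][:k]
```

In this framing, appearance and motion descriptors would first be concatenated into one pool per sample, and only the top-ranked columns kept for the final classifier.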
CC Sculptoris: A superhumping intermediate polar
We present high-speed optical, spectroscopic and Swift X-ray observations made during the dwarf nova superoutburst of CC Scl in November 2011. An orbital period of 1.383 h and a superhump period of 1.443 h were measured, but the principal new finding is that CC Scl is a previously unrecognised intermediate polar, with a white dwarf spin period of 389.49 s which is seen in both optical and Swift X-ray light curves only during the outburst. In this it closely resembles the old nova GK Per, but unlike the latter it has one of the shortest orbital periods among intermediate polars.

Comment: Accepted for publication in MNRAS; 11 pages, 19 figures
3D pose estimation of flying animals in multi-view video datasets
Flying animals such as bats, birds, and moths are actively studied by researchers wanting to better understand these animals' behavior and flight characteristics. Towards this goal, multi-view videos of flying animals have been recorded both in laboratory conditions and natural habitats. The analysis of these videos has shifted over time from manual inspection by scientists to more automated and quantitative approaches based on computer vision algorithms.
This thesis describes a study on the largely unexplored problem of 3D pose estimation of flying animals in multi-view video data. This problem has received little attention in the computer vision community where few flying animal datasets exist. Additionally, published solutions from researchers in the natural sciences have not taken full advantage of advancements in computer vision research. This thesis addresses this gap by proposing three different approaches for 3D pose estimation of flying animals in multi-view video datasets, which evolve from successful pose estimation paradigms used in computer vision. The first approach models the appearance of a flying animal with a synthetic 3D graphics model and then uses a Markov Random Field to model 3D pose estimation over time as a single optimization problem. The second approach builds on the success of Pictorial Structures models and further improves them for the case where only a sparse set of landmarks are annotated in training data. The proposed approach first discovers parts from regions of the training images that are not annotated. The discovered parts are then used to generate more accurate appearance likelihood terms which in turn produce more accurate landmark localizations. The third approach takes advantage of the success of deep learning models and adapts existing deep architectures to perform landmark localization. Both the second and third approaches perform 3D pose estimation by first obtaining accurate localization of key landmarks in individual views, and then using calibrated cameras and camera geometry to reconstruct the 3D position of key landmarks.
This thesis shows that the proposed algorithms generate first-of-their-kind and leading results on real-world datasets of bats and moths, respectively. Furthermore, a variety of resources are made freely available to the public to further strengthen the connection between research communities.
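The final reconstruction step described above (using calibrated cameras and camera geometry to recover the 3D position of a landmark localized in each view) is commonly done by linear triangulation. The sketch below shows the standard direct linear transform (DLT) formulation; the `triangulate` helper is an illustration of that general technique, not code from the thesis.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark from two calibrated views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel coordinates of the same landmark in each view
    Returns the landmark's 3D position in world coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A minimizes |A X|
    X = Vt[-1]
    return X[:3] / X[3]           # homogeneous -> Euclidean
```

With more than two cameras, each extra view simply appends two more rows to `A`, which is how multi-view rigs improve robustness to occlusion of individual landmarks.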