
    Event Detection by Feature Unpredictability in Phase-Contrast Videos of Cell Cultures

    Abstract. In this work we propose a novel framework for generic event monitoring in live cell culture videos, built on the assumption that unpredictable observations should correspond to biological events. We use a small set of event-free data to train a multi-output multi-kernel Gaussian process model that operates as an event predictor by performing autoregression on a bank of heterogeneous features extracted from consecutive frames of a video sequence. We show that the prediction error of this model can be used as a probability measure of the presence of relevant events, enabling users to perform further analysis or monitoring of large-scale non-annotated data. We validate our approach on two phase-contrast sequence datasets containing mitosis and apoptosis events: a new private dataset of human bone cancer (osteosarcoma) cells and a benchmark dataset of stem cells.
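    The core idea of this abstract, an autoregressive predictor trained on event-free data whose normalized prediction error flags events, can be sketched in a few lines. This is a minimal illustration, not the paper's model: the multi-output multi-kernel GP is replaced by a single-output scikit-learn `GaussianProcessRegressor`, the bank of heterogeneous features by a one-dimensional toy signal, and `event_score` is a name introduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy stand-in for a per-frame feature extracted from an event-free video.
normal = np.sin(np.linspace(0, 20, 200))[:, None] + 0.05 * rng.standard_normal((200, 1))

# Autoregression: predict the feature at frame t from the feature at frame t-1.
X_train, y_train = normal[:-1], normal[1:, 0]

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

def event_score(prev_feat, cur_feat):
    """Prediction error normalized by predictive std: large when the
    observation is unpredictable under the event-free model."""
    mu, std = gp.predict(prev_feat[None, :], return_std=True)
    return abs(cur_feat - mu[0]) / std[0]

# A perturbed observation (simulated event) scores higher than a normal one.
s_normal = event_score(normal[100], normal[101, 0])
s_event = event_score(normal[100], normal[101, 0] + 3.0)
```

    Thresholding this score per frame would give the event/no-event decision the abstract describes; the paper's probability measure would additionally calibrate it against the model's predictive distribution.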

    Monocular Visual Scene Understanding from Mobile Platforms

    Automatic visual scene understanding is one of the ultimate goals in computer vision and has been a focus of the field since its early days. Despite continuous effort over many years, applications such as autonomous driving and robotics remain unsolved and subject to active research. In recent years, improved probabilistic methods have become a popular tool in state-of-the-art computer vision algorithms, while high-resolution digital imaging devices and increased computational power have become available. By leveraging these methodical and technical advancements, current methods obtain encouraging results in well-defined environments for robust object class detection, tracking and pixel-wise semantic scene labeling, and give rise to renewed hope for further progress in scene understanding for real environments.

    This thesis improves state-of-the-art scene understanding with monocular cameras and aims at applications on mobile platforms such as service robots or driver assistance for automotive safety. It develops and improves approaches for object class detection and semantic scene labeling and integrates them into models for global scene reasoning which exploit context at different levels. To enhance object class detection, we perform a thorough evaluation of people and pedestrian detection with the popular sliding-window framework. In particular, we address pedestrian detection from a moving camera and provide new benchmark datasets for this task. As frequently used single-window metrics can fail to predict algorithm performance, we argue for application-driven image-based evaluation metrics, which allow a better system assessment. We propose and analyze features and their combinations based on visual and motion cues. Detection performance is evaluated systematically for different feature-classifier combinations, which is crucial to yield best results. Our results indicate that cue combination with complementary features improves performance. Despite camera ego-motion, we obtain significantly better detection results for motion-enhanced pedestrian detectors.

    Realistic onboard applications demand real-time processing with frame rates of 10 Hz and higher. In this thesis we propose to exploit parallelism in order to achieve the required runtime performance for sliding-window object detection. In a case study we employ commodity graphics hardware for the popular histograms of oriented gradients (HOG) detection approach and achieve a significant speed-up compared to a baseline CPU implementation.

    Furthermore, we propose an integrated dynamic conditional random field model for joint semantic scene labeling and object detection in highly dynamic scenes. Our model improves semantic context modeling and fuses low-level filter bank responses with more global object detections. Recognition performance is increased for object as well as scene classes. Integration over time needs to account for the different dynamics of objects and scene classes but yields more robust results.

    Finally, we propose a probabilistic 3D scene model that encompasses multi-class object detection, object tracking, scene labeling, and 3D geometric relations. This integrated 3D model is able to represent complex interactions like inter-object occlusion, physical exclusion between objects, and geometric context. Inference in this model allows us to recover 3D scene context and perform 3D multi-object tracking from a mobile observer, for objects of multiple categories, using only monocular video as input. Our results indicate that our joint scene-tracklet model, which accumulates evidence over multiple frames, substantially improves performance.

    All experiments throughout this thesis are performed on challenging real-world data. We contribute several datasets that were recorded from moving cars in urban and suburban environments. Highly dynamic scenes are captured while driving in normal traffic on rural roads. Our experiments support that joint models, which integrate semantic scene labeling, object detection and tracking, are well suited to improve the individual stand-alone tasks' performance.
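    The histograms of oriented gradients (HOG) descriptor central to the sliding-window detection work above can be sketched compactly. This is a simplified illustration, not the thesis implementation: it computes per-cell orientation histograms in NumPy and omits block normalization, the sliding window, and the SVM classifier; `hog_features` and its `cell`/`bins` parameters are names chosen here.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal HOG-style descriptor: per-cell histograms of unsigned gradient
    orientations, each bin accumulating gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))          # image gradients
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    h, w = img.shape
    ch, cw = h // cell, w // cell
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    feats = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for k in range(bins):
                feats[i, j, k] = m[b == k].sum()
    return feats.ravel()

# A 64x32 pedestrian-sized window with a vertical edge yields a
# 8 x 4 x 9 = 288-dimensional descriptor.
img = np.zeros((64, 32))
img[:, 16:] = 1.0
desc = hog_features(img)
```

    In a sliding-window detector, such a descriptor is computed for every candidate window and fed to a linear SVM; the thesis's GPU case study parallelizes exactly this per-window work.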

    Growth and dispersal patterns of Tribolium castaneum in different size habitats

    Citation: Hall, B. (2017). Growth and dispersal patterns of Tribolium castaneum in different size habitats. 1st Annual Undergraduate Research Experience in Entomology Symposium, November 16, 2016. Manhattan, KS.
    Competition for space, resources, and mates plays an important role in the survivorship of many organisms (Sbilordo et al. 2011). Understanding how competition affects a population is a crucial component in ensuring the survival of threatened and endangered species (Halliday et al. 2015). But what effect does an organism's habitat size have on its ability to grow in population? Habitat size and competition have an inverse relationship: as the habitat decreases in size, intraspecific competition increases. In this experiment, we tested this relationship. We found that Tribolium castaneum produced fewer offspring in smaller containers compared to larger ones. Individuals were also spaced farther apart in larger containers. This research supports the hypothesis that habitat destruction can negatively affect the growth of a population (Van Allen et al. 2016).

    Joint 3D estimation of objects and scene layout

    We propose a novel generative model that is able to reason jointly about the 3D scene layout as well as the 3D location and orientation of objects in the scene. In particular, we infer the scene topology, geometry and traffic activities from a short video sequence acquired with a single camera mounted on a moving car. Our generative model takes advantage of dynamic information in the form of vehicle tracklets as well as static information coming from semantic labels and geometry (i.e., vanishing points). Experiments show that our approach outperforms a discriminative baseline based on multiple kernel learning (MKL) which has access to the same image information. Furthermore, as we reason about objects in 3D, we are able to significantly increase the performance of state-of-the-art object detectors in their ability to estimate object orientation.

    Real-time Full-body Visual Traits Recognition from Image Sequences

    The automatic recognition of human visual traits from images is a challenging computer vision task. Visual traits describe, for example, gender and age, or other properties of a person that can be derived from visual appearance. Gathering anonymous knowledge about people from visual cues bears potential for many interesting applications, for example in human-machine interfacing, targeted advertisement or video surveillance. Most related work investigates visual traits recognition from facial features of a person, with good recognition performance. A few systems have recently applied recognition to low-resolution full-body images, which yields lower performance than facial approaches but can deliver classification results even when no face is visible. Full-body classification is clearly more challenging, mainly due to large variations in body pose, clothing and occlusion. In our study we present an approach to human visual traits recognition based on Histograms of Oriented Gradients (HOG), colour features and Support Vector Machines (SVM). In this experimental study we focus on gender classification. Motivated by our application of real-time adaptive advertisement on public situated displays, and unlike previous works, we perform a thorough evaluation on much more comprehensive datasets that include hard cases like side and back views. The extended annotations used in our evaluation will be published. We further show that a hierarchical classification scheme to disambiguate a person's directional orientation, together with additional colour features, can increase recognition rates. Finally, we demonstrate that temporal integration of per-frame classification scores significantly improves the overall classification performance for tracked individuals and clearly outperforms current state-of-the-art accuracy for single images.
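    The temporal integration step this abstract closes with can be illustrated very simply. This is a generic sketch, not the paper's exact scheme: `integrate_track` and the decay constant are names and choices introduced here, standing in for any running combination of per-frame SVM decision values along a person track.

```python
def integrate_track(frame_scores, decay=0.9):
    """Exponentially weighted running average of per-frame classifier scores
    for one tracked person; the sign of the result gives the final label."""
    s = 0.0
    for score in frame_scores:
        s = decay * s + (1.0 - decay) * score
    return s

# Individual frames can flip sign (per-frame misclassifications), but the
# temporally integrated score stays on the majority side of the track.
scores = [0.5, -0.2, 0.8, -0.1, 0.6]   # per-frame SVM decision values
final = integrate_track(scores)        # > 0: a stable positive decision
```

    This is why tracked individuals outperform single-image classification: integration averages out per-frame noise from pose, occlusion and motion blur.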