
    Event-Based Motion Segmentation by Motion Compensation

    In contrast to traditional cameras, whose pixels have a common exposure time, event-based cameras are novel bio-inspired sensors whose pixels work independently and asynchronously output intensity changes (called "events") with microsecond resolution. Since events are caused by the apparent motion of objects, event-based cameras sample visual information based on the scene dynamics and are therefore a more natural fit than traditional cameras to acquire motion, especially at high speeds, where traditional cameras suffer from motion blur. However, distinguishing between events caused by different moving objects and by the camera's ego-motion is a challenging task. We present the first per-event segmentation method for splitting a scene into independently moving objects. Our method jointly estimates the event-object associations (i.e., segmentation) and the motion parameters of the objects (or the background) by maximizing an objective function that builds on recent results on event-based motion compensation. We provide a thorough evaluation of our method on a public dataset, outperforming the state of the art by as much as 10%. We also show the first quantitative evaluation of a segmentation algorithm for event cameras, yielding around 90% accuracy at 4 pixels relative displacement. Comment: When viewed in Acrobat Reader, several of the figures animate. Video: https://youtu.be/0q6ap_OSBA
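    The motion-compensation idea the abstract builds on can be illustrated with a minimal sketch: warp events by a candidate motion and score how sharp the resulting image of warped events is. This is a hedged, illustrative sketch of that general contrast-maximisation idea, not the authors' implementation; all function and variable names are made up.

```python
import numpy as np

def warped_event_image(xs, ys, ts, flow, shape):
    """Accumulate events warped back to the time of the first event."""
    dt = ts - ts[0]
    xw = np.round(xs - flow[0] * dt).astype(int)
    yw = np.round(ys - flow[1] * dt).astype(int)
    img = np.zeros(shape)
    valid = (xw >= 0) & (xw < shape[1]) & (yw >= 0) & (yw < shape[0])
    np.add.at(img, (yw[valid], xw[valid]), 1.0)
    return img

def contrast(img):
    # well-compensated (sharp) event images have high variance
    return img.var()

def best_flow(xs, ys, ts, candidates, shape):
    # brute-force search over candidate flows; the paper instead jointly
    # optimises per-event object associations and per-object motions
    scores = [contrast(warped_event_image(xs, ys, ts, f, shape)) for f in candidates]
    return candidates[int(np.argmax(scores))]
```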

    Attentive monitoring of multiple video streams driven by a Bayesian foraging strategy

    In this paper, we consider the problem of deploying attention to subsets of the video streams for collating the most relevant data and information of interest related to a given task. We formalize this monitoring problem as a foraging problem and propose a probabilistic framework that models the observer's attentive behavior as that of a forager. The forager, moment to moment, focuses its attention on the most informative stream/camera, detects interesting objects or activities, or switches to a more profitable stream. The proposed approach is suitable for multi-stream video summarization and can also serve as a preliminary step for more sophisticated video surveillance, e.g., activity and behavior analysis. Experimental results on the UCR Videoweb Activities Dataset, a publicly available dataset, illustrate the utility of the proposed technique. Comment: Accepted to IEEE Transactions on Image Processing
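    A forager-style stay-or-switch policy of the kind described can be sketched as follows. This is an illustrative toy (a marginal-value-theorem heuristic), not the authors' Bayesian model; all names and the gamma-distributed stand-in evidence are assumptions.

```python
import numpy as np

def choose_stream(gains, current, avg_gain):
    """gains: latest per-stream information estimates (e.g., motion or saliency)."""
    if gains[current] >= avg_gain:
        return current                   # keep foraging the current patch
    return int(np.argmax(gains))         # otherwise move to the richest patch

rng = np.random.default_rng(0)
current, avg = 0, 1.0
for t in range(5):
    gains = rng.gamma(2.0, 1.0, size=4)  # stand-in for per-stream evidence
    current = choose_stream(gains, current, avg)
    avg = 0.9 * avg + 0.1 * gains.mean() # running average payoff across streams
    print(t, current, np.round(gains, 2))
```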

    Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences

    Results: We present an application that enables the quantitative analysis of multichannel 5-D (x, y, z, t, channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of the automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach: an inference-based approach that uses user-provided edits to automatically correct related mistakes executes interactively on the system CPU while the GPU handles 3-D visualization tasks. Conclusions: By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. There is a pressing need for visualization and analysis tools for 5-D live cell image data. We combine accurate unsupervised processes with an intuitive visualization of the results. Our validation interface allows each data set to be corrected to 100% accuracy, ensuring that downstream data analysis is accurate and verifiable. Our tool is the first to combine all of these aspects, leveraging the synergies obtained by utilizing validation information from stereo visualization to improve the low-level image processing tasks. Comment: BioVis 2014 conference
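    The lineage-tree bookkeeping behind such edit propagation might look roughly like the sketch below: a user correction re-parents a track, and the set of affected descendants defines where related mistakes can be fixed automatically. The data model and names are illustrative assumptions, not the application's actual structures.

```python
from dataclasses import dataclass, field
from typing import Optional, List, Dict

@dataclass
class Track:
    tid: int
    parent: Optional[int] = None
    children: List[int] = field(default_factory=list)

def reassign_parent(tracks: Dict[int, Track], tid: int, new_parent: int) -> None:
    """Apply a user correction: move track `tid` under `new_parent`."""
    t = tracks[tid]
    if t.parent is not None:
        tracks[t.parent].children.remove(tid)
    t.parent = new_parent
    tracks[new_parent].children.append(tid)

def descendants(tracks: Dict[int, Track], tid: int) -> List[int]:
    """All tracks downstream of `tid`: the part of the lineage a correction can affect."""
    out, stack = [], list(tracks[tid].children)
    while stack:
        c = stack.pop()
        out.append(c)
        stack.extend(tracks[c].children)
    return out
```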

    Nonbreeding Eastern Curlews Numenius madagascariensis Do Not Increase the Rate of Intake or Digestive Efficiency before Long-Distance Migration because of an Apparent Digestive Constraint

    The possibility of premigratory modulation in gastric digestive performance was investigated in a long-distance migrant, the eastern curlew (Numenius madagascariensis), in eastern Australia. The rate of intake in the curlews was limited by the rate of digestion but not by food availability. It was hypothesized that before migration, eastern curlews would meet the increased energy demand by increasing energy consumption. It was predicted that (1) an increase in the rate of intake and the corresponding rate of gastric throughput would occur or (2) the gastric digestive efficiency would increase between the mid-nonbreeding and premigratory periods. Neither crude intake rate (the rate of intake calculated including inactive pauses; 0.22 g DM [grams dry mass] or 3.09 kJ min⁻¹) nor the rate of gastric throughput (0.15 g DM or 2.85 kJ min⁻¹) changed over time. Gastric digestive efficiency did not improve between the periods (91%), nor did the estimated overall energy assimilation efficiency (63% and 58%, respectively). It was concluded that the crustacean-dominated diet of the birds is processed at its highest rate and efficiency throughout a season. It appears that without a qualitative shift in diet, no increase in intake rate is possible. Taken at face value, these findings pose the question of how and over what time period the eastern curlews store the nutrients necessary for the ensuing long, northward nonstop flight.
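    For readers unfamiliar with the quantities reported above, the textbook definitions are roughly as follows; these are assumed standard formulations, not taken verbatim from the paper.

```latex
\[
\text{crude intake rate} \;=\; \frac{\text{dry mass ingested}}{\text{total foraging time, including pauses}},
\qquad
\text{assimilation efficiency} \;=\; \frac{E_{\text{ingested}} - E_{\text{excreted}}}{E_{\text{ingested}}}.
\]
```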

    Perceptual Scale Expansion: An Efficient Angular Coding Strategy For Locomotor Space

    Whereas most sensory information is coded on a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for the angular variables important to precise motor control. In four experiments, we show that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and nonverbal measures (Experiments 1 and 2), as well as in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching, while allowing for accurate spatial action to be understood as the result of calibration.
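    How dual angular expansions can compress explicit distance reports while leaving action accurate can be sketched with simple eye-height geometry. The factor of 1.5 is taken from the abstract; the geometry below is an assumed illustration, not necessarily the authors' derivation.

```latex
\[
\hat{\gamma} \approx 1.5\,\gamma, \qquad
\hat{\beta} \approx 1.5\,\beta, \qquad
\hat{d} \;=\; \frac{h}{\tan\hat{\gamma}} \;<\; d \;=\; \frac{h}{\tan\gamma},
\]
% where \gamma is gaze declination below horizontal, \beta is surface slant,
% and h is eye height: reported distances based on the expanded \hat{\gamma}
% are compressed, while judgments comparing \hat{\gamma} against the equally
% expanded \hat{\beta} remain mutually consistent and can support calibrated action.
```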

    "Influence Sketching": Finding Influential Samples In Large-Scale Regressions

    There is an especially strong need in modern large-scale data analysis to prioritize samples for manual inspection. For example, the inspection could target important mislabeled samples or key vulnerabilities exploitable by an adversarial attack. To solve the "needle in the haystack" problem of which samples to inspect, we develop a new scalable version of Cook's distance, a classical statistical technique for identifying samples which unusually strongly impact the fit of a regression model (and its downstream predictions). To scale this technique up to very large and high-dimensional datasets, we introduce a new algorithm which we call "influence sketching." Influence sketching embeds random projections within the influence computation; in particular, the influence score is calculated using the randomly projected pseudo-dataset from the post-convergence Generalized Linear Model (GLM). We validate that influence sketching can reliably and successfully discover influential samples by applying the technique to a malware detection dataset of over 2 million executable files, each represented with almost 100,000 features. For example, we find that randomly deleting approximately 10% of training samples reduces predictive accuracy only slightly, from 99.47% to 99.45%, whereas deleting the same number of samples with high influence sketch scores reduces predictive accuracy all the way down to 90.24%. Moreover, we find that influential samples are especially likely to be mislabeled. In the case study, we manually inspect the most influential samples, and find that influence sketching pointed us to new, previously unidentified pieces of malware. Comment: fixed additional typo
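    A rough sketch of the described recipe is shown below: fit a GLM, randomly project the design matrix, and compute Cook's-distance-style scores from the projected design. The projection dimension, the use of scikit-learn, and the omission of the usual 1/p Cook's-distance constant are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.random_projection import GaussianRandomProjection

def influence_sketch_scores(X, y, k=256, seed=0):
    model = LogisticRegression(max_iter=1000).fit(X, y)
    mu = model.predict_proba(X)[:, 1]                  # fitted probabilities
    w = mu * (1.0 - mu)                                # GLM variance weights
    Xp = GaussianRandomProjection(n_components=k, random_state=seed).fit_transform(X)
    # leverage of the weighted, projected design: h_i = w_i x_i (Xp' W Xp)^(-1) x_i'
    G = Xp.T @ (Xp * w[:, None])
    h = np.einsum("ij,ji->i", Xp @ np.linalg.inv(G), (Xp * w[:, None]).T)
    r2 = (y - mu) ** 2 / np.maximum(w, 1e-12)          # squared Pearson residuals
    return r2 * h / np.maximum((1.0 - h) ** 2, 1e-12)  # Cook's-distance-style score

# Usage: rank samples and inspect the highest-scoring ones first, e.g.
# scores = influence_sketch_scores(X_train, y_train); idx = np.argsort(-scores)
```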

    Orbital debris research at NASA Johnson Space Center, 1986-1988

    Research on orbital debris has intensified in recent years as the number of debris objects in orbit has grown. The population of small debris has now reached the level that orbital debris has become an important design factor for the Space Station. The most active center of research in this field has been the NASA Lyndon B. Johnson Space Center. Work is being done on the measurement of orbital debris, development of models of the debris population, and development of improved shielding against hypervelocity impacts. Significant advances have been made in these areas. The purpose of this document is to summarize these results and provide references for further study.

    Localisation and tracking of people using distributed UWB sensors

    Indoor localisation and tracking of people in a non-cooperative manner is important in many surveillance and rescue applications. Ultra wideband (UWB) radar technology is promising for through-wall detection of objects at short to medium distances due to its high temporal resolution and penetration capability. This thesis tackles the problem of localisation of people in indoor scenarios using UWB sensors. It follows the process from measurement acquisition, multiple target detection, and range estimation to multiple target localisation and tracking. Because people reflect only weakly compared to the rest of the environment, a background subtraction method is first used for the detection of people. Subsequently, a constant false alarm rate method is applied for detection and range estimation of multiple persons. For multiple target localisation using a single UWB sensor, an association method is developed to assign target range estimates to the correct targets. In the presence of multiple targets, a target closer to the sensor can shadow parts of the environment and hinder the detection of other targets. A concept for a distributed UWB sensor network is therefore presented, extending the field of view of the system by using several sensors with different fields of view. A real-time operational prototype has been developed that takes into account sensor cooperation and synchronisation aspects as well as fusion of the information provided by all sensors. Sensor data may be erroneous due to sensor bias and time offset, and incorrect measurements and measurement noise influence the accuracy of the estimation results. Additional insight into the target states can be gained by exploiting temporal information. A multiple-person tracking framework is developed based on the probability hypothesis density filter, and the differences in system performance are examined with respect to the information provided by the sensors, i.e., location information fusion versus range information fusion. The information that a target should have been detected when it was not, owing to shadowing induced by other targets, is described as a dynamic occlusion probability. The dynamic occlusion probability is incorporated into the tracking framework, allowing fewer sensors to be used while improving tracker performance in that scenario. Method selection and development have taken real-time application requirements for unknown scenarios into consideration at every step. Each investigated aspect of multiple-person localisation within the scope of this thesis has been verified using simulations and measurements in a realistic environment with M-sequence UWB sensors.
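    The first two processing steps described above, background subtraction on successive range profiles followed by constant-false-alarm-rate detection, might look roughly like the sketch below. This is an illustrative sketch with assumed parameter values and names, not the thesis code.

```python
import numpy as np

def background_subtract(scans, alpha=0.05):
    """scans: (n_frames, n_range_bins) array of UWB impulse responses."""
    bg = scans[0].astype(float).copy()
    out = np.zeros_like(scans, dtype=float)
    for i, s in enumerate(scans):
        out[i] = s - bg
        bg = (1 - alpha) * bg + alpha * s      # slowly adapt to the static scene
    return out

def ca_cfar(profile, guard=2, train=8, scale=3.0):
    """Return range-bin indices whose power exceeds the local noise estimate."""
    p = np.abs(profile) ** 2
    hits = []
    for i in range(train + guard, len(p) - train - guard):
        left = p[i - guard - train:i - guard]
        right = p[i + guard + 1:i + guard + 1 + train]
        noise = np.mean(np.concatenate([left, right]))
        if p[i] > scale * noise:
            hits.append(i)
    return hits
```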