
    Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate-Coding and Coincidence Processing. Application to Feed-Forward ConvNets

    Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems, which consist of sequences of still images rendered at a given “frame rate”. Event-driven vision sensors take inspiration from biology: each pixel sends out an event (spike) when it senses that something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called Dynamic Vision Sensor (DVS), in which each pixel computes relative changes of light, or “temporal contrast”. The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to “reality”. These events can be processed “as they flow” by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven Convolutional Neural Networks (ConvNets) trained to recognize rotating human silhouettes or high-speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. It is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules.
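The frame-to-event mapping described above can be sketched as a low-rate rate-coding step: each normalized pixel activation becomes a sparse stream of timestamped spikes. This is an illustrative simplification, not the paper's method; the function name, rates and Poisson-style sampling are all assumptions.

```python
import random

def frame_to_events(frame, duration_ms, max_rate_hz, seed=0):
    """Convert a 2D list of normalized activations (0..1) into a list of
    (t_ms, x, y) events via Poisson-style low-rate rate coding.
    Illustrative sketch only; names and rates are assumed."""
    rng = random.Random(seed)
    events = []
    for y, row in enumerate(frame):
        for x, a in enumerate(row):
            # per-millisecond spike probability proportional to activation
            p_per_ms = a * max_rate_hz / 1000.0
            for t in range(duration_ms):
                if rng.random() < p_per_ms:
                    events.append((t, x, y))
    events.sort()  # deliver events in temporal order, "as they flow"
    return events
```

A fully active pixel then emits roughly `max_rate_hz` spikes per second, while a silent pixel emits none.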

    Markerless View Independent Gait Analysis with Self-camera Calibration

    We present a new method for viewpoint-independent markerless gait analysis. The system uses a single camera, does not require camera calibration and works with a wide range of walking directions. These properties make the proposed method particularly suitable for identification by gait, where the advantages of complete unobtrusiveness, remoteness and covertness of the biometric system preclude the availability of camera information and the use of marker-based technology. Tests on more than 200 video sequences with subjects walking freely along different directions have been performed. The results show that markerless gait analysis can be achieved without any knowledge of internal or external camera parameters, and that the obtained data can be used for gait biometrics purposes. The performance of the proposed method is particularly encouraging for its application in surveillance scenarios.

    Temporally coherent 4D reconstruction of complex dynamic scenes

    This paper presents an approach for the reconstruction of 4D temporally coherent models of complex dynamic scenes. No prior knowledge of scene structure or camera calibration is required, allowing reconstruction from multiple moving cameras. Sparse-to-dense temporal correspondence is integrated with joint multi-view segmentation and reconstruction to obtain a complete 4D representation of static and dynamic objects. Temporal coherence is exploited to overcome visual ambiguities, resulting in improved reconstruction of complex scenes. Robust joint segmentation and reconstruction of dynamic objects is achieved by introducing a geodesic star convexity constraint. Comparative evaluation is performed on a variety of unstructured indoor and outdoor dynamic scenes with hand-held cameras and multiple people. This demonstrates reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction. To appear in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016. Video available at: https://www.youtube.com/watch?v=bm_P13_-Ds
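The star-convexity constraint mentioned above can be approximated, for illustration, by a plain Euclidean star-convexity test on a binary segmentation mask: every foreground pixel must reach a star center along a straight line that never leaves the foreground. The actual constraint in the paper uses geodesic paths; this simplified check is only a sketch.

```python
def is_star_convex(mask, center):
    """Check Euclidean (not geodesic) star convexity of a binary mask:
    every foreground pixel must see the center along a straight segment
    that stays inside the foreground. A simplification of the geodesic
    star-convexity constraint described above."""
    cy, cx = center
    h, w = len(mask), len(mask[0])
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            # sample intermediate points on the segment to the center
            steps = max(abs(y - cy), abs(x - cx))
            for s in range(1, steps):
                iy = round(y + (cy - y) * s / steps)
                ix = round(x + (cx - x) * s / steps)
                if not mask[iy][ix]:
                    return False
    return True
```

A mask with a hole between a foreground pixel and the center fails the test, which is exactly the kind of fragmented segmentation the constraint penalizes.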

    Novel vision based estimation techniques for the analysis of cavitation bubbles

    Visualization and analysis of micro/nano structures throughout multiphase flow have received significant attention in recent years due to remarkable advances in micro-imaging technologies. In this context, monitoring bubbles and describing their structural and motion characteristics are crucial for hydrodynamic cavitation in biomedical applications. In this thesis, novel vision-based estimation techniques are developed for the analysis of cavitation bubbles. The cone angle of multiphase bubbly flow and the distributions of scattered bubbles around the main flow are important quantities for positioning the orifice of the cavitation generator towards the target and for controlling the destructive cavitation effect. To estimate the cone angle of the flow, a Kalman filter which utilizes 3D Gaussian modeling of the multiphase flow and the edge pixels of the cross-section is implemented. Scattered bubble swarm distributions around the main flow are assumed to be Gaussian, and geometric properties of the covariance matrix of the bubble position data are exploited. Moreover, a new method is developed to track the evolution of single, double and triple rising bubbles during hydrodynamic cavitation. The proposed tracker fuses shape and motion features of the individually detected bubbles and employs the well-known Bhattacharyya distance. Furthermore, contours of the tracked bubbles are modeled using elliptic Fourier descriptors (EFD) to extract invariant properties of single rising bubbles throughout the motion. To verify the proposed techniques, hydrodynamic cavitating bubbles are generated under inlet pressures of 10 to 120 bars and monitored via the Particle Shadow Sizing (PSS) technique. Experimental results are quite promising.
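The tracker's matching step relies on the well-known Bhattacharyya distance; a minimal sketch of that measure for two normalized histograms (e.g. appearance histograms of two detected bubbles) is:

```python
import math

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions:
    BC = sum_i sqrt(p_i * q_i),  D = -ln(BC).
    Identical distributions give D = 0; more dissimilar ones give
    larger D. How the thesis fuses this with motion features is not
    reproduced here."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))
    return -math.log(bc)
```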

    Analysis of Bas-Relief Generation Techniques

    Simplifying the process of generating relief sculptures has been an interesting topic of research in the past decade. A relief is a type of sculpture that does not entirely extend into three-dimensional space. Instead, it has details that are carved into a flat surface, like wood or stone, such that there are slight elevations from the flat plane that define the subject of the sculpture. When viewed orthogonally straight on, a relief can look like a full sculpture or statue, in that a full sense of depth of the subject can be perceived. Creating such a model manually is a tedious and difficult process, akin to the challenges a painter may face when designing a convincing painting. As with painting, certain digital tools (most commonly 3D modeling programs) can make the process a little easier, but obtaining sufficient detail can still take a lot of time. To further simplify the process of relief generation, a sizable amount of research has gone into developing semi-automated processes for creating reliefs from different types of models. These methods can vary in many ways, including the type of input used, the computational time required, and the quality of the resulting model. The performance typically depends on the type of operations applied to the input model, and usually on user-specified parameters that modify its appearance. In this thesis, we address a few related topics. First, we analyze previous work in the field and briefly summarize the procedures to highlight the variety of ways to solve the problem. We then look at specific algorithms for generating reliefs from 2D and 3D models. After explaining two of each type, a “basic” approach and a more sophisticated one, we compare the algorithms based on their difficulty of implementation, the quality of their results, and their processing time. The final section includes additional sample results of the previous algorithms and suggests possible ideas to enhance their results, which could be applied in continuing research on the topic.
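As a toy illustration of what relief generation must do, the sketch below compresses a set of depth values nonlinearly into a shallow output range, preserving relative detail near the surface. Published algorithms typically operate on depth gradients rather than raw depths; the `alpha` parameter here is an assumed, illustrative compression strength.

```python
import math

def compress_heights(depths, output_range=1.0):
    """Naive bas-relief-style compression of depth values: normalize,
    then apply a logarithmic remapping so the overall range shrinks
    while near-surface contrast is emphasized. Illustrative only."""
    lo, hi = min(depths), max(depths)
    span = (hi - lo) or 1.0  # guard against a flat input
    alpha = 5.0  # assumed compression strength
    return [output_range * math.log(1 + alpha * (d - lo) / span)
            / math.log(1 + alpha)
            for d in depths]
```

The logarithm maps the full input range onto `[0, output_range]` while boosting mid-range values, a crude stand-in for the detail-preserving compression the surveyed algorithms perform.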

    Efficient algorithms for occlusion culling and shadows

    The goal of this research is to develop more efficient techniques for computing the visibility and shadows in real-time rendering of three-dimensional scenes. Visibility algorithms determine what is visible from a camera, whereas shadow algorithms solve the same problem from the viewpoint of a light source. In rendering, a lot of computational resources are often spent on primitives that are not visible in the final image. One visibility algorithm for reducing the overhead is occlusion culling, which quickly discards the objects or primitives that are obstructed from the view by other primitives. A new method is presented for performing occlusion culling using silhouettes of meshes instead of triangles. Additionally, modifications are suggested to occlusion queries in order to reduce their computational overhead. The performance of currently available graphics hardware depends on the ordering of input primitives. A new technique, called delay streams, is proposed as a generic solution to order-dependent problems. The technique significantly reduces the pixel processing requirements by improving the efficiency of occlusion culling inside graphics hardware. Additionally, the memory requirements of order-independent transparency algorithms are reduced. A shadow map is a discretized representation of the scene geometry as seen by a light source. Typically the discretization causes difficult aliasing issues, such as jagged shadow boundaries and incorrect self-shadowing. A novel solution is presented for suppressing all types of aliasing artifacts by providing the correct sampling points for shadow maps, thus fully abandoning the previously used regular structures. Also, a simple technique is introduced for limiting the shadow map lookups to the pixels that get projected inside the shadow map. The fillrate problem of hardware-accelerated shadow volumes is greatly reduced with a new hierarchical rendering technique. 
    The algorithm performs per-pixel shadow computations only at visible shadow boundaries, and uses lower-resolution shadows for the parts of the screen that are guaranteed to be either fully lit or fully in shadow. The proposed techniques are expected to improve rendering performance in most real-time applications that use 3D graphics, especially computer games. More efficient algorithms for occlusion culling and shadows are important steps towards larger, more realistic virtual environments.
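The basic shadow-map test that these techniques build on can be sketched in a few lines: a point is in shadow when its depth from the light exceeds the depth stored at its projected texel, with a small bias to suppress the incorrect self-shadowing mentioned above. This is the textbook test, not the thesis's hierarchical algorithm.

```python
def in_shadow(depth_from_light, shadow_map, u, v, bias=1e-3):
    """Basic shadow-map comparison. shadow_map[v][u] holds the nearest
    depth the light sees through that texel; the bias term suppresses
    incorrect self-shadowing caused by discretization."""
    stored = shadow_map[v][u]
    return depth_from_light > stored + bias
```

The aliasing problems discussed above arise precisely because `shadow_map` is a regular discretization; the thesis replaces it with correctly placed sampling points.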

    The University of Southampton Multi-Biometric Tunnel and introducing a novel 3D gait dataset


    The feasibility of high resolution, three-dimensional reconstruction of metal-coated surfaces in structural biology

    Magister Scientiae - MSc
    Life is an emergent property of a complex network of interacting cellular machines. Three-dimensional (3D) cellular structure captured at supra-atomic resolution has the potential to revolutionise our understanding of the interactions, dynamics and structure of these machines: proteins, organelles and other cellular constituents, in their normal functional states. Techniques capable of acquiring 3D cellular structure at sufficient resolution to enable identification and interpretation of individual macromolecules in the cellular milieu have the potential to provide these data. Advances in cryo-preservation, preparation and metal-coating techniques allow images of the surfaces of in situ macromolecules to be obtained in a life-like state by field emission scanning and transmission electron microscopy (FE/SEM, FE/TEM) at a resolution of 2-4 nm. A large body of macromolecular structural information has been obtained using these techniques, but while the images produced provide a qualitative impression of three-dimensionality, computational methods are required to extract quantitative 3D structure. To test the feasibility of applying various photogrammetric and tomographic algorithms to micrographs of well-preserved metal-coated biological surfaces, several algorithms were applied to a variety of FE/SEM and TEM micrographs. A stereoscopic algorithm was implemented and applied to FE/SEM stereo images of the nuclear pore basket, resulting in a high-quality digital elevation map. An SEM rotation series of an object of complicated topology (an ant) was reconstructed volumetrically by silhouette intersection. Finally, the iterative helical real-space reconstruction technique was applied to cryo-TEM micrographs of unidirectionally heavy-metal-shadowed samples. These preliminary results confirm that 3D information obtained from multiple TEM or SEM surface images could be applied to the problem of 3D macromolecular imaging in the cellular context. 
    However, each of the various methods described here comes with its own topological, resolution and geometrical limitations, some of which are inherent shortcomings of the methodologies described; others might be overcome with improved algorithms. Combined with carefully designed surface experiments, some of the methods investigated here could provide novel insights and extend current surface-imaging studies. Docking of atomic-resolution structures into low-resolution maps derived from surface-imaging experiments is a particularly exciting prospect.
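The stereoscopic elevation-map step can be illustrated with the standard first-order photogrammetric relation for a eucentric-tilt stereo pair, z = p / (2 sin(α/2)), where p is the measured parallax of a feature and α is the tilt difference between the two micrographs. Calibration and distortion terms are omitted; this is a sketch, not the algorithm implemented in the thesis.

```python
import math

def height_from_parallax(parallax, tilt_deg):
    """Relative height of a feature from its parallax in a stereo pair
    acquired with a eucentric tilt of tilt_deg degrees between views:
    z = p / (2 * sin(alpha / 2)). First-order relation only."""
    alpha = math.radians(tilt_deg)
    return parallax / (2.0 * math.sin(alpha / 2.0))

def elevation_map(parallax_map, tilt_deg):
    """Apply the relation per pixel to build a digital elevation map."""
    return [[height_from_parallax(p, tilt_deg) for p in row]
            for row in parallax_map]
```

With a 60-degree tilt difference, 2·sin(30°) = 1, so height equals measured parallax, which makes the units easy to sanity-check.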