9 research outputs found

    Shape and motion under varying illumination: unifying structure from motion, photometric stereo, and multiview stereo

    Dynamic 3D surface reconstruction using a hand-held camera

    Automatic Camera Model Selection for Multibody Motion Segmentation

    We study the problem of segmenting independently moving objects in a video sequence. Several algorithms exist for classifying the trajectories of the feature points into independent motions, but their performance depends on the validity of the underlying camera imaging model. In this paper, we present a scheme for automatically selecting the best model using the geometric AIC before the segmentation stage. Using real video sequences, we confirm that segmentation accuracy indeed improves when the segmentation is based on the selected model. We also show that the trajectory data can be compressed into low-dimensional vectors using the selected model. This is very effective in reducing the computation time for a long video sequence.
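    The compression step mentioned in this abstract can be illustrated with a simple sketch. Under an affine camera model, the stacked 2F-dimensional trajectory vectors of feature points on one rigid body lie in a subspace of dimension at most four, so each trajectory can be replaced by a short coefficient vector. The fixed rank and the synthetic data below are assumptions for illustration; the paper itself selects the model (and hence the subspace dimension) with the geometric AIC.

    ```python
    import numpy as np

    # Hypothetical sketch: synthesize N feature trajectories over F frames that
    # (up to noise) lie in a 4-dimensional subspace, as predicted by an affine
    # camera model, then compress them by projecting onto the leading singular
    # vectors of the trajectory matrix.
    rng = np.random.default_rng(0)
    F, N = 30, 50                                  # frames, feature points
    basis = rng.standard_normal((2 * F, 4))        # subspace basis (assumed)
    coeffs = rng.standard_normal((4, N))
    W = basis @ coeffs + 0.01 * rng.standard_normal((2 * F, N))  # noisy 2F x N matrix

    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = 4                                          # assumed subspace dimension
    compressed = U[:, :r].T @ W                    # each trajectory is now an r-vector

    # Reconstruction error confirms little information is lost.
    err = np.linalg.norm(W - U[:, :r] @ compressed) / np.linalg.norm(W)
    print(compressed.shape, round(err, 4))
    ```

    Segmentation can then operate on the r-dimensional vectors instead of the full 2F-dimensional trajectories, which is where the reported speed-up for long sequences comes from.
    
    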

    Estimating particle velocity from dual-camera mixed reality video images using 3D particle tracking velocimetry

    Mixed reality (MR) systems integrate diverse sensors, allowing users to better visualize and quantify surrounding environmental processes. Some existing mixed reality headsets include synchronized front-facing cameras that, among other things, can be used to track naturally occurring tracer particles (such as dust or snowflakes) to estimate the particle velocity field in real time. The current work presents a 3D particle tracking velocimetry (PTV) method for use with MR systems, which combines various monocular cues to match particles between corresponding stereo images. Binocular disparity is used to estimate particle distance from the observer. Individual particles are tracked through time and used to construct the vector field of a scene. A digital display of velocity vectors can be broadcast into the user’s surrounding environment with the MR headset to serve as a flow visualization tool. The mixed reality particle tracking velocimetry (MR-PTV) approach was optimized to perform in natural conditions where particle size, particle color, and lighting are non-uniform. The approach was first tested on synthetic particle image data obtained from discrete element method simulations, then experimentally validated for particles transported by a flume flow using the Microsoft HoloLens 2 MR headset. Uniform flow and flow around a body were considered experimentally, and the measured velocities were compared to computational fluid dynamics results. The resulting MR-PTV system can be used for a variety of industrial, scientific, and recreational purposes for field-based measurement of particle velocities in real time.
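    The two geometric ingredients named in this abstract, depth from binocular disparity and frame-to-frame particle tracking, can be sketched in a few lines. The camera parameters and the greedy nearest-neighbour matcher below are illustrative assumptions; the paper's actual matcher combines several monocular cues and is not reproduced here.

    ```python
    import numpy as np

    def particle_depth(focal_px, baseline_m, disparity_px):
        # Pinhole stereo model: depth Z = f * B / d, with focal length f in
        # pixels, camera baseline B in metres, and disparity d in pixels.
        return focal_px * baseline_m / disparity_px

    def velocities(p0, p1, dt):
        # Greedy nearest-neighbour matching of 3D particle positions between
        # two consecutive frames, a simplified stand-in for the paper's
        # multi-cue matcher; returns one velocity vector per particle in p0.
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        dists = np.linalg.norm(p0[:, None, :] - p1[None, :, :], axis=2)
        match = dists.argmin(axis=1)
        return (p1[match] - p0) / dt

    # Illustrative numbers only: a 70-pixel disparity with a 1400-pixel focal
    # length and 10 cm baseline places a particle 2 m from the observer.
    z = particle_depth(focal_px=1400.0, baseline_m=0.1, disparity_px=70.0)
    v = velocities([[0.0, 0.0, z], [1.0, 0.0, z]],
                   [[0.1, 0.0, z], [1.1, 0.0, z]], dt=0.1)
    print(z, v)
    ```

    Accumulating such per-particle velocity vectors over time yields the scene's vector field that the headset overlays on the user's view.
    
    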

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing a visual representation of locations to be used in VEs is usually a tedious process that requires either manual modelling of environments or the use of specific hardware. Capturing environment dynamics is not straightforward either, and it is usually performed with dedicated tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the same accessibility as 2D images while preserving the surround representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render, and stream data coming from heterogeneous cameras with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can convey spatial and temporal information about a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type affects reasoning about events within videos in panoramic context.
These research questions were investigated over three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first telecommunication experiment compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localization tasks. To support this experiment, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events, exploring three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution for spatio-temporal exploration of remote locations. Our approach presents a richer visual representation in terms of space and time than standard tools, showing that providing panoramic context to video collections makes spatio-temporal tasks easier. To this end, videos in context are a suitable alternative to more complex, and often expensive, solutions. These findings are beneficial to many applications, including teleconferencing, virtual tourism, and remote assistance.