6,076 research outputs found

    Multiple View Geometry For Video Analysis And Post-production

    Multiple view geometry is the foundation of an important class of computer vision techniques for the simultaneous recovery of camera motion and scene structure from a set of images. There are numerous important applications in this area, including video post-production, scene reconstruction, registration, surveillance, tracking, and segmentation. In video post-production, the topic addressed in this dissertation, computer analysis of the camera motion can replace the manual methods currently used to correctly align an artificially inserted object in a scene. However, existing single-view methods typically require multiple vanishing points, and therefore fail when only one vanishing point is available. In addition, current multiple-view techniques, which make use of either epipolar geometry or the trifocal tensor, do not fully exploit the properties of constant or known camera motion. Finally, there is no general solution to the problem of synchronizing N video sequences of distinct general scenes captured by cameras undergoing similar ego-motions, which is a necessary step for video post-production across different input videos. This dissertation proposes several advancements that overcome these limitations, and uses them to develop an efficient framework for video analysis and post-production with multiple cameras. In the first part of the dissertation, novel inter-image constraints are introduced that are particularly useful for scenes where minimal information is available. This result extends the current state of the art in single-view geometry to situations where only one vanishing point is available. The property of constant or known camera motion is also exploited for applications such as calibration of a network of cameras in video surveillance systems, and Euclidean reconstruction from turn-table image sequences in the presence of zoom and focus. We then propose a new framework for the estimation and alignment of camera motions, covering both simple (panning, tracking, and zooming) and complex (e.g., hand-held) camera motions. The accuracy of these results is demonstrated by applying our approach to video post-production applications such as video cut-and-paste and shadow synthesis. As realistic image-based rendering problems, these applications require extreme accuracy in the estimation of the camera geometry, the position and orientation of the light source, and the photometric properties of the resulting cast shadows. In each case, the theoretical results are fully supported and illustrated by both numerical simulations and thorough experimentation on real data.
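    To make the epipolar-geometry machinery referred to above concrete, here is a minimal sketch (not the dissertation's own method) of recovering relative camera motion between two views with OpenCV; the intrinsic matrix, synthetic scene points, and camera motion below are placeholder assumptions chosen only for illustration.

```python
# Minimal sketch: recover relative camera motion (R, t) between two views
# via the essential matrix. Illustrates the standard epipolar-geometry
# machinery the abstract builds on, not the dissertation's algorithms.
import numpy as np
import cv2

# Assumed pinhole intrinsics (focal length and principal point are placeholders).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points_3d, R, t):
    """Project world points with extrinsics (R, t) and intrinsics K."""
    cam = (R @ points_3d.T + t).T
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

# Synthetic scene points and a simple tracking (sideways translation) motion.
rng = np.random.default_rng(0)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(100, 3))
R_true = np.eye(3)
t_true = np.array([[0.5], [0.0], [0.0]])

pts1 = project(X, np.eye(3), np.zeros((3, 1)))
pts2 = project(X, R_true, t_true)

# Essential matrix from the correspondences, then decomposition into (R, t).
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("Recovered rotation:\n", R.round(3))
print("Recovered translation (up to scale):\n", t.ravel().round(3))
```

    Because the essential matrix only constrains the motion up to scale, the recovered translation is a unit vector; real pipelines fix the scale from additional scene knowledge or from constant-motion assumptions of the kind the dissertation exploits.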

    How to Track Protists in Three Dimensions

    We present an apparatus optimized for tracking swimming microorganisms in the size range 10-1000 microns, in three dimensions (3D), far from surfaces, and with negligible background convective fluid motion. CCD cameras attached to two long-working-distance microscopes synchronously image the sample from two perpendicular directions, with narrowband dark-field or bright-field illumination chosen to avoid triggering a phototactic response. The images from the two cameras can be combined to yield 3D tracks of the organism. Using additional, highly directional broad-spectrum illumination with millisecond timing control, the phototactic trajectories in 3D of organisms ranging from Chlamydomonas to Volvox can be studied in detail. Surface-mediated hydrodynamic interactions can also be investigated without convective interference. Minimal modifications to the apparatus allow for studies of chemotaxis and other taxes. Comment: 8 pages, 7 figures.
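    The following is a minimal sketch of how two synchronized, perpendicular 2D tracks might be merged into a 3D trajectory as described above; the axis conventions (side camera sees x-z, top camera sees y-z), the pixel-to-micron scale, and the helical test data are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of merging two synchronized, perpendicular camera views into a
# 3D track. Assumes the side camera images the (x, z) plane and the top camera
# the (y, z) plane; axis conventions and the pixel-to-micron scale are assumptions.
import numpy as np

def merge_tracks(side_xy_px, top_xy_px, microns_per_px=1.5):
    """side_xy_px, top_xy_px: (T, 2) pixel coordinates from the two cameras,
    already synchronized frame-by-frame. Returns a (T, 3) track in microns."""
    side = np.asarray(side_xy_px, dtype=float) * microns_per_px  # columns: x, z
    top = np.asarray(top_xy_px, dtype=float) * microns_per_px    # columns: y, z
    x = side[:, 0]
    y = top[:, 0]
    # The z coordinate is seen by both cameras; average the two measurements.
    z = 0.5 * (side[:, 1] + top[:, 1])
    return np.column_stack([x, y, z])

# Illustrative usage with fabricated helical swimming data.
t = np.linspace(0.0, 10.0, 200)
side_track = np.column_stack([50 * np.cos(t), 20 * t])  # (x, z) in pixels
top_track = np.column_stack([50 * np.sin(t), 20 * t])   # (y, z) in pixels
track_3d = merge_tracks(side_track, top_track)
print(track_3d.shape)  # (200, 3)
```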

    A Model-Driven Method for the Systematic Literature Review of Qualitative Empirical Research

    This paper explores a new model-driven method for systematic literature reviews (SLRs), for use where the empirical studies found in the literature search are based on qualitative research. SLRs are an important component of the evidence-based practice (EBP) paradigm, which is receiving increasing attention in information systems (IS) but has not yet been widely adopted. We illustrate the model-driven approach to SLRs via an example focused on the use of BPMN (Business Process Modelling Notation) in organizations. We discuss in detail the process followed in using the model-driven SLR method, and show how it is based on a hermeneutic cycle of reading and interpretation to develop and refine a model that synthesizes the research findings of qualitative studies. This study can serve as an exemplar for other researchers wishing to carry out model-driven SLRs. We conclude with our reflections on the method and some suggestions for further research.

    Static scene illumination estimation from video with applications

    We present a system that automatically recovers scene geometry and illumination from a video, providing a basis for various applications. Previous image-based illumination estimation methods require either user interaction or external information in the form of a database. We adopt structure-from-motion and multi-view stereo for initial scene reconstruction, and then estimate an environment map represented by spherical harmonics (as these perform better than other bases). We also demonstrate several video editing applications that exploit the recovered geometry and illumination, including object insertion (e.g., for augmented reality), shadow detection, and video relighting.
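    To illustrate the spherical-harmonic lighting representation mentioned above, here is a minimal sketch of fitting a second-order (9-coefficient) SH environment from per-pixel surface normals and observed intensities by linear least squares; the synthetic Lambertian observations and the single directional light are assumptions, and the paper's full SfM plus multi-view stereo pipeline is not reproduced.

```python
# Minimal sketch of estimating a 2nd-order (9-coefficient) spherical-harmonic
# lighting environment from surface normals and observed intensities via
# linear least squares. Inputs below are synthetic placeholders.
import numpy as np

def sh_basis(normals):
    """Real SH basis (l <= 2) evaluated at unit normals; returns an (N, 9) matrix."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y,
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,
        1.092548 * y * z,
        0.315392 * (3.0 * z ** 2 - 1.0),
        1.092548 * x * z,
        0.546274 * (x ** 2 - y ** 2),
    ], axis=1)

def estimate_sh_lighting(normals, intensities):
    """Least-squares fit of 9 SH coefficients so that sh_basis(normals) @ coeffs ~ intensities."""
    B = sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return coeffs

# Illustrative usage: synthetic Lambertian shading under one directional light.
rng = np.random.default_rng(1)
n = rng.normal(size=(5000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
light_dir = np.array([0.3, 0.5, 0.81])
light_dir /= np.linalg.norm(light_dir)
intensity = np.clip(n @ light_dir, 0.0, None)  # placeholder observations
coeffs = estimate_sh_lighting(n, intensity)
print(coeffs.round(3))
```

    A low-order SH expansion is a common choice for this task because diffuse shading is insensitive to high-frequency lighting detail, so nine coefficients already capture most of the environment's effect on Lambertian surfaces.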