278 research outputs found

    Towards dynamic camera calibration for constrained flexible mirror imaging

    Flexible mirror imaging systems, in which a perspective camera views a scene reflected in a flexible mirror, provide direct control over image field-of-view and resolution. However, calibrating such systems is difficult because of the vast range of possible mirror shapes and the flexible nature of the system. This paper proposes the fundamentals of a dynamic calibration approach for flexible mirror imaging systems by examining the constrained case of single-dimensional flexing. The calibration process consists of an initial primary calibration stage followed by in-service dynamic calibration. Dynamic calibration uses a linear approximation to initialise a non-linear minimisation step, which yields an estimate of the mirror surface shape. The method is easier to implement than existing calibration methods for flexible mirror imagers, requiring only two images of a calibration grid for each dynamic calibration update. Experimental results with both simulated and real data demonstrate the capabilities of the proposed approach.
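    The two-stage scheme described above (a linear approximation used to initialise a non-linear minimisation of the mirror shape) can be sketched on a toy 1D profile. The quadratic mirror model, the synthetic observations, and the Gauss-Newton refinement below are illustrative assumptions, not the paper's actual formulation:

    ```python
    import numpy as np

    # Hypothetical 1D mirror profile (the "single dimensional flexing" case):
    # the true shape is a shallow quadratic bend, observed with small noise.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    true_coeffs = np.array([0.05, -0.02, 0.01])        # [c2, c1, c0]
    z_obs = np.polyval(true_coeffs, x) + rng.normal(0, 1e-5, x.size)

    # Stage 1: linear approximation (a straight-line fit) to initialise.
    lin = np.polyfit(x, z_obs, 1)                      # [slope, intercept]

    # Stage 2: non-linear refinement via Gauss-Newton, started from the
    # linear estimate (curvature term initialised to zero). For this
    # polynomial model each Gauss-Newton step is a linear least-squares solve.
    c = np.array([0.0, lin[0], lin[1]])
    for _ in range(10):
        J = np.vstack([x**2, x, np.ones_like(x)]).T    # Jacobian of the model
        r = z_obs - np.polyval(c, x)                   # current residuals
        c += np.linalg.lstsq(J, r, rcond=None)[0]      # Gauss-Newton update
    ```

    In the real system the residual would be a reprojection error of calibration-grid points through the reflected camera model rather than a direct surface fit, but the initialise-then-refine pattern is the same.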

    The Secret Lives of Ebooks: A Paratextual Analysis Illuminates a Veil of Usage Statistics

    This study applies the method of paratextual analysis to six electronic books, or ebooks, in an academic library collection at a small liberal arts college. Two books are selected from each of three platforms: ebrary, EBSCO, and SpringerLink. The characteristics of each book are described, including design and readership, along with two years of usage statistics from the specific library and altmetrics where available. The paratextual study leads to a closer investigation of the usage statistics themselves and concludes that, despite industry standards, they are not calculated consistently across vendor platforms. Although these data are invisible to researchers outside the library, they contain essential elements that librarians mistakenly take at face value when comparing ebook usage from multiple vendors.

    Three dimensional information estimation and tracking for moving objects detection using two cameras framework

    Calibration, matching and tracking are the major concerns in obtaining 3D information consisting of depth, direction and velocity. In finding depth, camera parameters and matched points are the two necessary inputs. Depth, direction and matched points can be determined accurately if the cameras are well calibrated using traditional manual calibration. However, most traditional manual calibration methods are inconvenient to use because markers or the real size of an object in the real world must be provided or known. Self-calibration removes this limitation, but does not by itself provide depth or matched points. Other approaches attempt to match corresponding objects using 2D visual information without calibration, but they suffer from low matching accuracy under large perspective distortion. This research focuses on obtaining 3D information using a self-calibrated tracking system, in which matching and tracking are performed under self-calibrated conditions. Three contributions are introduced to achieve the objectives. Firstly, orientation correction is introduced to obtain better relationship matrices for matching during tracking. Secondly, once the relationship matrices are available, a post-processing method, status-based matching, is introduced to improve the object matching result; the proposed matching algorithm achieves a matching rate of almost 90%. Depth is estimated after the status-based matching. Thirdly, tracking is performed based on x-y coordinates and the estimated depth under self-calibrated conditions. Results show that the proposed self-calibrated tracking system successfully differentiates the locations of objects even under occlusion in the field of view, and is able to determine the direction and velocity of multiple moving objects.
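    The depth-estimation step described above reduces, for a rectified two-camera rig, to triangulation from disparity. A minimal sketch, with assumed focal length and baseline values rather than the thesis's actual rig parameters:

    ```python
    import numpy as np

    def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
        """Depth of one matched point pair in a rectified stereo rig.

        x_left, x_right : horizontal pixel coordinates of the same point
        focal_px        : focal length in pixels (from calibration)
        baseline_m      : distance between camera centres in metres
        """
        disparity = x_left - x_right
        if disparity <= 0:
            raise ValueError("matched point must have positive disparity")
        return focal_px * baseline_m / disparity

    # A point 4 px apart in the two views, 700 px focal length, 10 cm baseline:
    z = depth_from_disparity(352.0, 348.0, focal_px=700.0, baseline_m=0.10)
    ```

    The inverse relationship between disparity and depth is why the matching accuracy emphasised in the abstract matters: a one-pixel matching error on a small disparity produces a large depth error.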

    Center Symmetric Local Multilevel Pattern Based Descriptor and Its Application in Image Matching

    This paper presents an effective local image region description method, the CS-LMP (Center Symmetric Local Multilevel Pattern) descriptor, and its application in image matching. The CS-LMP operator requires no exponential computations, so the CS-LMP descriptor can encode the differences of the local intensity values using multiple quantization levels without increasing the dimension of the descriptor. Compared with binary/ternary pattern based descriptors, the CS-LMP descriptor has better descriptive ability and computational efficiency. Extensive image matching experiments confirm the effectiveness of the proposed CS-LMP descriptor compared with existing state-of-the-art descriptors.
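    A center-symmetric multilevel pattern in the spirit of CS-LMP can be sketched as follows. The 8-neighbourhood, the threshold values, and the base-L code packing are assumptions for illustration, not the paper's exact definition:

    ```python
    import numpy as np

    def cs_lmp_codes(img, thresholds=(2.0, 8.0)):
        """Center-symmetric multilevel pattern codes for interior pixels.

        Each of the 4 center-symmetric pixel pairs in the 8-neighbourhood is
        quantised by the sign and magnitude of its intensity difference into
        2*len(thresholds)+1 levels; the 4 levels are packed into one base-L
        code. Thresholds here are illustrative, not the paper's values.
        """
        img = np.asarray(img, dtype=np.float64)
        h, w = img.shape
        # the 4 center-symmetric neighbour pairs (offsets from the centre)
        pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
                 ((-1, 1), (1, -1)), ((0, -1), (0, 1))]
        L = 2 * len(thresholds) + 1          # number of quantisation levels
        codes = np.zeros((h - 2, w - 2), dtype=np.int64)
        for (dy1, dx1), (dy2, dx2) in pairs:
            a = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
            b = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
            d = a - b
            # quantise the signed difference with comparisons only --
            # no exponentials, matching the abstract's efficiency claim
            level = np.full(d.shape, len(thresholds), dtype=np.int64)
            for t in thresholds:
                level += (d > t).astype(np.int64)
                level -= (d < -t).astype(np.int64)
            codes = codes * L + level
        return codes
    ```

    A descriptor would then histogram these codes over a region; with two thresholds each pair contributes one of five levels, so the code alphabet is 5^4 = 625, which is what keeps the descriptor dimension fixed as levels are added per pair.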

    Self-supervised Interest Point Detection and Description for Fisheye and Perspective Images

    Keypoint detection and matching is a fundamental task in many computer vision problems, from shape reconstruction, to structure from motion, to AR/VR applications and robotics. It is a well-studied problem with remarkable successes such as SIFT, and more recent deep learning approaches. While these techniques exhibit great robustness to noise, illumination variation, and rigid motion transformations, less attention has been paid to image distortion sensitivity. In this work, we focus on the case where the distortion is caused by the geometry of the cameras used for image acquisition, and consider the keypoint detection and matching problem in the hybrid scenario of a fisheye and a projective image. We build on a state-of-the-art approach and derive a self-supervised procedure that enables training an interest point detector and descriptor network. We also collected two new datasets for additional training and testing in this unexplored scenario, and we demonstrate that current approaches are suboptimal because they are designed to work in traditional projective conditions, while the proposed approach turns out to be the most effective. Comment: CVPR Workshop on Omnidirectional Computer Vision, 202
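    The descriptor-matching stage shared by SIFT-style and learned pipelines can be sketched with a nearest-neighbour ratio test (Lowe's criterion). The descriptors below are synthetic stand-ins, not output from any of the networks discussed:

    ```python
    import numpy as np

    def match_ratio_test(desc_a, desc_b, ratio=0.8):
        """Match each descriptor in desc_a to its nearest neighbour in
        desc_b, keeping only matches whose nearest distance is clearly
        smaller than the second-nearest (Lowe's ratio test)."""
        matches = []
        for i, d in enumerate(desc_a):
            dists = np.linalg.norm(desc_b - d, axis=1)
            j, k = np.argsort(dists)[:2]
            if dists[j] < ratio * dists[k]:
                matches.append((i, int(j)))
        return matches

    # One query descriptor with an unambiguous nearest neighbour:
    m = match_ratio_test(np.array([[1.0, 0.0]]),
                         np.array([[1.0, 0.01], [0.0, 1.0], [5.0, 5.0]]))
    ```

    Distortion sensitivity enters exactly here: if a fisheye image warps the local patch, the descriptor distances shift and the ratio test starts rejecting true correspondences, which is the failure mode the hybrid fisheye/projective training above addresses.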