
    Selective visual odometry for accurate AUV localization

    In this paper we present a stereo visual odometry system developed for autonomous underwater vehicle (AUV) localization tasks. The main idea is to use only highly reliable data in the estimation process, employing a robust keypoint tracking approach and an effective keyframe selection strategy, so that camera movements are estimated with high accuracy even over long paths. Furthermore, in order to limit drift error, camera pose estimation is referred to the last keyframe, selected by analyzing the temporal flow of features. The proposed system was tested on the KITTI evaluation framework and on the New Tsukuba stereo dataset to assess its effectiveness on long tracks and under different illumination conditions. Results of a live archaeological campaign in the Mediterranean Sea, on an AUV equipped with a stereo camera pair, show that our solution can work effectively in underwater environments.
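    The drift-limiting idea of referring each pose to the last keyframe can be sketched as follows. Class and variable names are hypothetical; the paper's actual estimator, robust tracking and keyframe test are far more involved.

```python
import numpy as np

class KeyframeOdometry:
    """Toy sketch: the global pose is the last keyframe's pose composed
    with the pose estimated relative to that keyframe, so per-frame
    integration error accumulates only at keyframe switches."""

    def __init__(self):
        self.T_keyframe = np.eye(4)  # global 4x4 pose of the last keyframe
        self.T_rel = np.eye(4)       # current pose relative to that keyframe

    def update(self, T_frame_rel_keyframe, is_keyframe):
        """T_frame_rel_keyframe: pose of the current frame w.r.t. the
        last keyframe (assumed given by the stereo estimator)."""
        self.T_rel = T_frame_rel_keyframe
        if is_keyframe:
            # Promote the current frame to keyframe: fold the relative
            # motion into the keyframe pose and reset the relative part.
            self.T_keyframe = self.T_keyframe @ self.T_rel
            self.T_rel = np.eye(4)
        return self.T_keyframe @ self.T_rel  # global pose of current frame
```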

    Fast adaptive frame preprocessing for 3D reconstruction

    This paper presents a new online preprocessing strategy to detect and discard bad frames in video sequences on the fly. These include frames where accurate localization of corresponding points is difficult, such as blurred frames, or frames that do not provide relevant information with respect to the previous ones in terms of texture, image contrast and non-flat areas. Unlike keyframe selectors and deblurring methods, the proposed approach is a fast preprocessing step working on a simple gradient statistic; it does not require complex, time-consuming image processing, such as the computation of image feature keypoints, previous poses or 3D structure, nor a priori knowledge of the input sequence. The method provides a fast and useful frame pre-analysis that can be used to improve further image analysis tasks, including keyframe selection and blur detection, or to directly filter the video sequence as shown in the paper, improving the final 3D reconstruction by discarding noisy frames and decreasing the final computation time by removing redundant frames. This scheme is adaptive, fast, and works at runtime by exploiting the image gradient statistic of the last few frames of the video sequence. Experimental results show that the proposed frame selection strategy is robust and improves the final 3D reconstruction both in terms of the number of obtained 3D points and reprojection error, while also reducing computation time.
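    A minimal sketch of such a gradient-statistic filter, assuming a mean-gradient-magnitude statistic and an illustrative running-average threshold (the paper's actual statistic and adaptation rule may differ):

```python
import numpy as np
from collections import deque

def sharpness(frame):
    """Mean gradient magnitude as a cheap sharpness/texture statistic."""
    gy, gx = np.gradient(frame.astype(float))
    return np.mean(np.hypot(gx, gy))

class AdaptiveFrameFilter:
    """Toy sketch: keep a short history of the statistic over the last
    few frames and discard frames that fall well below the running
    level (e.g. blurred or flat ones). Window size and ratio are
    illustrative values, not taken from the paper."""

    def __init__(self, window=10, ratio=0.6):
        self.history = deque(maxlen=window)
        self.ratio = ratio

    def accept(self, frame):
        s = sharpness(frame)
        keep = (not self.history) or s >= self.ratio * np.mean(self.history)
        if keep:
            self.history.append(s)  # only good frames update the baseline
        return keep
```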

    Restoration and enhancement of historical stereo photos

    Restoration of digital visual media acquired from repositories of historical photographic and cinematographic material is of key importance for the preservation, study and transmission of the legacy of past cultures to coming generations. In this paper, a fully automatic approach to the digital restoration of historical stereo photographs is proposed, referred to as Stacked Median Restoration plus (SMR+). The approach exploits the content redundancy in stereo pairs for detecting and fixing scratches, dust, dirt spots and many other defects in the original images, as well as improving contrast and illumination. This is done by estimating the optical flow between the images and using it to register one view onto the other both geometrically and photometrically. Restoration is then accomplished in three steps: (1) image fusion according to the stacked median operator, (2) low-resolution detail enhancement by guided supersampling, and (3) iterative visual consistency checking and refinement. Each step implements an original algorithm specifically designed for this work. The restored image is fully consistent with the original content, thus improving over methods based on image hallucination. Comparative results on three different datasets of historical stereograms show the effectiveness of the proposed approach and its superiority over single-image denoising and super-resolution methods. Results also show that the performance of the state-of-the-art single-image deep restoration network Bringing Old Photos Back to Life (BOPBtL) can be strongly improved when the input image is pre-processed by SMR+.
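    The stacked-median fusion of step (1) can be illustrated as follows; geometric and photometric registration, guided supersampling and consistency checking are omitted, so this is only a sketch of the core idea:

```python
import numpy as np

def stacked_median(reference, registered_views):
    """Toy sketch of the stacked-median idea: defects such as scratches
    and dust typically appear in only one view, so a pixel-wise median
    across the reference and the view(s) registered onto it suppresses
    them while preserving content shared by the views."""
    stack = np.stack([reference] + list(registered_views), axis=0)
    return np.median(stack, axis=0)
```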

    Accurate keyframe selection and keypoint tracking for robust visual odometry

    This paper presents a novel stereo visual odometry (VO) framework based on structure from motion, where robust keypoint tracking and matching are combined with an effective keyframe selection strategy. In order to track and find correct feature correspondences, a robust loop chain matching scheme on two consecutive stereo pairs is introduced. Keyframe selection is based on the proportion of features with high temporal disparity. This criterion relies on the observation that the error in pose estimation propagates from the uncertainty of the 3D points, which is higher for distant points having low 2D motion. Comparative results on three VO datasets show that the proposed solution is remarkably effective and robust even for very long path lengths.
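    The selection criterion can be sketched as below; the thresholds are illustrative placeholders, not the paper's values:

```python
import numpy as np

def is_keyframe(prev_pts, curr_pts, disp_thresh=5.0, ratio_thresh=0.5):
    """Toy sketch: declare a new keyframe when the fraction of tracked
    features with large temporal (2D) disparity exceeds a threshold.
    prev_pts / curr_pts: (N, 2) arrays of matched keypoint positions."""
    disp = np.linalg.norm(np.asarray(curr_pts, float) -
                          np.asarray(prev_pts, float), axis=1)
    return np.mean(disp > disp_thresh) > ratio_thresh
```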

    A vision-based fully automated approach to robust image cropping detection

    The definition of valid and robust methodologies for assessing the authenticity of digital information is nowadays critical to counter social manipulation through the media. A key research topic in multimedia forensics is the development of methods for detecting tampered content in large image collections without any human intervention. This paper introduces AMARCORD (Automatic Manhattan-scene AsymmetRically CrOpped imageRy Detector), a fully automated detector for exposing evidence of asymmetrical image cropping in Manhattan-World scenes. The proposed solution estimates and exploits the camera principal point, i.e., a physical feature extracted directly from the image content that is quite insensitive to image processing operations, such as compression and resizing, typical of social media platforms. Robust computer vision techniques are employed throughout, so as to cope with large sources of noise in the data and improve detection performance. The method leverages a novel metric based on robust statistics, and is also capable of deciding autonomously whether the image at hand is tractable or not. The results of an extensive experimental evaluation covering several cropping scenarios demonstrate the effectiveness and robustness of our approach.
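    A standard geometric fact often used for principal-point estimation in Manhattan scenes (with square pixels and zero skew) is that the principal point is the orthocenter of the triangle formed by the three orthogonal vanishing points. A minimal sketch of that computation follows; the paper's robust estimation pipeline and decision metric are not reproduced here:

```python
import numpy as np

def orthocenter(a, b, c):
    """Orthocenter of the 2D triangle (a, b, c). For the three
    orthogonal vanishing points of a Manhattan scene, this point is
    the camera principal point. Solves the two altitude constraints
    (h - a).(b - c) = 0 and (h - b).(c - a) = 0 for h."""
    a, b, c = (np.asarray(p, float) for p in (a, b, c))
    A = np.array([b - c, c - a])
    rhs = np.array([np.dot(a, b - c), np.dot(b, c - a)])
    return np.linalg.solve(A, rhs)
```

    A cropped image shifts the apparent image center away from the estimated principal point, which is the kind of asymmetry such a detector can test for.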

    The Small Sized Galericini from F32 “Terre Rosse” fissure filling (Gargano, Southeastern Italy) and its biochronological implications.

    Deinogalerix is by far the best known galericine (Galericini according to Van den Hoek Ostende, 2001) from the Gargano fissure fillings, thanks to the very careful description by Butler (1980). However, another, very small sized moon rat belonging to the Galericini tribe is present in virtually all the assemblages from the “terre rosse” fissure fillings of the Gargano. It was first mentioned in the pioneering report of Freudenthal (1971) and by Butler (1980) in his study of the “gigantic” Deinogalerix, and it has been quoted in several faunal lists. It has been ascribed to Parasorex by Van den Hoek Ostende (2001) and to Galerix (Apulogalerix) by Fanfani (1999). Van den Hoek Ostende (2001) considered it and the large Deinogalerix as derived from a common ancestor. To date, however, a detailed description of the characters of this gymnure has never been presented to the scientific community. Indeed, only De Giuli et al. (1987) used the size of the mandible in six selected samples to describe its variation along their proposed biochronology of the terre rosse. We present here the morphological description of a sample from fissure filling F32, which is considered to represent the youngest phase of population of the Gargano Paleoarchipelago. This sample was chosen because it is very rich and not affected by taphonomic biases. The phylogenetic relationships of the Gargano Galericini are discussed. The description of a new, very primitive species of Deinogalerix from the Pirro 12 fissure filling (Villier, 2011) opens new perspectives for investigating the relationships between the small and the gigantic Galericini of the Gargano.

    Collective Bargaining and the Evolution of Wage Inequality in Italy

    Italian male wage inequality increased at a relatively fast pace from the mid-1980s until the early 2000s, and has been persistently flat since then. We analyse this trend, focusing on the period of most rapid growth in pay dispersion. By accounting for worker and firm fixed effects, it is shown that workers' heterogeneity has been a major determinant of increased wage inequality, while variability in firm wage policies has declined over time. We also show that the growth in pay dispersion has occurred entirely between livelli di inquadramento, that is, job titles defined by national industry-wide collective bargaining institutions, for which specific minimum wages apply. We conclude that the underlying market forces determining wage inequality have been largely channelled into the tight tracks set by the centralized system of industrial relations.
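    The worker/firm fixed-effects decomposition can be sketched as a toy two-way regression. The real AKM-style estimation must handle identification within connected sets, normalization and limited-mobility bias, all of which are ignored in this illustrative sketch:

```python
import numpy as np

def two_way_fe(worker_ids, firm_ids, log_wages):
    """Toy sketch: regress log wages on worker and firm dummies via
    least squares, returning the estimated worker and firm effects.
    The variance of each set of effects then indicates how much each
    side contributes to overall wage dispersion."""
    workers = sorted(set(worker_ids))
    firms = sorted(set(firm_ids))
    n = len(log_wages)
    X = np.zeros((n, len(workers) + len(firms)))
    for i, (w, f) in enumerate(zip(worker_ids, firm_ids)):
        X[i, workers.index(w)] = 1.0                 # worker dummy
        X[i, len(workers) + firms.index(f)] = 1.0    # firm dummy
    # Minimum-norm least-squares solution (the design is rank deficient,
    # as in any two-way fixed-effects model without a normalization).
    beta, *_ = np.linalg.lstsq(X, np.asarray(log_wages, float), rcond=None)
    return beta[:len(workers)], beta[len(workers):]
```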

    ATLAS and CMS applications on the WorldGrid testbed

    WorldGrid is an intercontinental testbed spanning Europe and the US that integrates architecturally different Grid implementations based on the Globus toolkit. It was developed in the context of the DataTAG and iVDGL projects, and successfully demonstrated during the WorldGrid demos at IST2002 (Copenhagen) and SC2002 (Baltimore). Two HEP experiments, ATLAS and CMS, successfully exploited the WorldGrid testbed for executing jobs simulating the response of their detectors to physics events produced by the real collisions expected at the LHC accelerator starting from 2007. This data intensive activity has been run for many years on local dedicated computing farms consisting of hundreds of nodes and Terabytes of disk and tape storage. Within the WorldGrid testbed, for the first time HEP simulation jobs were submitted and run indifferently on US and European resources, despite their different underlying Grid implementations, and produced data that could be retrieved and further analysed on the submitting machine, or simply stored on the remote resources and registered in a Replica Catalogue, which made them available to the Grid for further processing. In this contribution we describe the job submission from Europe for both ATLAS and CMS applications, performed through the GENIUS portal operating on top of an EDG User Interface submitting to an EDG Resource Broker, pointing out the chosen interoperability solutions which made US and European resources equivalent from the applications' point of view, the data management in the WorldGrid environment, and the CMS-specific production tools which were interfaced to the GENIUS portal. Comment: Poster paper from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003, 10 pages, PDF. PSN TUCP004; added credit to funding agency.

    The Grid-distributed data analysis in CMS

    The CMS experiment will soon produce a huge amount of data (a few PBytes per year) that will be distributed and stored in many computing centres spread across the countries participating in the collaboration. Data will be available to all CMS physicists: this will be possible thanks to the services provided by the supported Grids. CRAB is the CMS collaboration tool developed to allow physicists to access and analyze data stored at sites worldwide. It aims to simplify the data discovery process and the job creation, execution and monitoring tasks, hiding the details related both to the Grid infrastructures and to the CMS analysis framework. We discuss the recent evolution of this tool from its standalone version to the client-server architecture adopted for particularly challenging workload volumes, and report usage statistics collected from the CRAB community, involving so far almost 600 distinct users.