
    Development of a laser scanning thickness measurement inspection system

    The quality specifications for products and the materials used in them are becoming ever more demanding. The solution to many visual inspection quality assurance (QA) problems is the use of automatic in-line surface inspection systems, which need to achieve uniform product quality at high throughput speeds. As a result, there is a need for systems that allow 100% in-line testing of materials and surfaces. To reach this goal, laser technology integrated with computer control technology provides a useful solution. In this work, a high-speed, low-cost, and high-accuracy non-contact laser scanning inspection system is developed. The system is capable of measuring the thickness of solid, non-transparent objects using the principle of laser-optical triangulation. Measurement accuracies and repeatabilities at the micrometer level are achieved with the developed system.
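
    As an aside, the laser-optical triangulation principle cited in this abstract can be summarized with a first-order geometric model: a change in surface height shifts the imaged laser spot on the detector in proportion to the imaging magnification and the sine of the triangulation angle. The Python sketch below only illustrates that relation; it is not the system described in the work, and the magnification, angle, and function names are assumed values for the example.

        import math

        def height_from_spot_shift(dx_detector_mm, magnification, tri_angle_deg):
            """First-order triangulation model: a surface height change dz moves the
            imaged laser spot by dx = m * dz * sin(theta) on the detector."""
            return dx_detector_mm / (magnification * math.sin(math.radians(tri_angle_deg)))

        def thickness(spot_shift_reference_mm, spot_shift_top_mm,
                      magnification=2.0, tri_angle_deg=30.0):
            """Thickness of an opaque object = height of its top surface minus the
            height of the reference (support) plane, both from the same sensor."""
            z_ref = height_from_spot_shift(spot_shift_reference_mm, magnification, tri_angle_deg)
            z_top = height_from_spot_shift(spot_shift_top_mm, magnification, tri_angle_deg)
            return z_top - z_ref

        # Example: a 10 um detector spot shift at m=2, theta=30 deg corresponds
        # to a 10 um height step above the reference plane.
        print(thickness(0.000, 0.010))  # result in mm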

    LOFAR Sparse Image Reconstruction

    Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased array interferometer with multiple antennas distributed in Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims. Recent papers have established a clear link between the discrete nature of radio interferometry measurement and the "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Combined with instrumental direction-dependent effects (DDE) in the scope of a real instrument, we developed and validated a new method based on this framework. Methods. We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) on simulated and real LOFAR data. Results. We show that i) sparse reconstruction performs as well as CLEAN in recovering the flux of point sources; ii) it performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) it provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions. Sparse recovery gives a correct photometry on high-dynamic-range and wide-field images and improved, realistic structures of extended sources (on simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and SKA. Comment: Published in A&A, 19 pages, 9 figures
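
    The compressed-sensing framework referenced in this abstract is typically realized by minimizing a data-fidelity term plus a sparsity-promoting l1 penalty with a proximal algorithm. The sketch below is a generic one-dimensional illustration of that idea (iterative shrinkage-thresholding applied to a toy undersampled measurement of a sparse "sky"), not the LOFAR imager itself; the measurement operator, noise level, and regularization weight are arbitrary assumptions.

        import numpy as np

        def soft_threshold(x, t):
            # Proximal operator of the l1 norm (promotes sparsity).
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ista(A, y, lam, n_iter=200):
            """Iterative shrinkage-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
                x = soft_threshold(x - grad / L, lam / L)
            return x

        # Toy "visibility" measurement: an undersampled random operator acting on a
        # sparse sky made of a few point sources (stand-in for a partial Fourier matrix).
        rng = np.random.default_rng(0)
        n_pix, n_vis = 256, 64
        sky = np.zeros(n_pix)
        sky[rng.choice(n_pix, 5, replace=False)] = rng.uniform(1, 3, 5)
        A = rng.standard_normal((n_vis, n_pix)) / np.sqrt(n_vis)
        y = A @ sky + 0.01 * rng.standard_normal(n_vis)
        recovered = ista(A, y, lam=0.05)
        print("max reconstruction error:", np.abs(recovered - sky).max())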

    Robust convex optimisation techniques for autonomous vehicle vision-based navigation

    This thesis investigates new convex optimisation techniques for motion and pose estimation. Numerous computer vision problems can be formulated as optimisation problems. These optimisation problems are generally solved via linear techniques using the singular value decomposition, or via iterative methods under an L2 norm minimisation. Linear techniques have the advantage of offering a closed-form solution that is simple to implement. The quantity being minimised is, however, not geometrically or statistically meaningful. Conversely, L2 algorithms rely on iterative estimation, where a cost function is minimised using algorithms such as Levenberg-Marquardt, Gauss-Newton, gradient descent or conjugate gradient. The cost functions involved are geometrically interpretable and can be statistically optimal under an assumption of Gaussian noise. However, in addition to their sensitivity to initial conditions, these algorithms are often slow and bear a high probability of getting trapped in a local minimum or producing infeasible solutions, even for small noise levels. In light of the above, this thesis focuses on developing new techniques for finding globally optimal solutions within a convex optimisation framework. Convex optimisation techniques for motion estimation have recently shown substantial advantages: convex optimisation guarantees a global minimum, and the cost function is geometrically meaningful. Moreover, robust optimisation is a recent approach to optimisation under uncertain data. In recent years the need to cope with uncertain data has become especially acute, particularly where real-world applications are concerned. In such circumstances, robust optimisation aims to recover an optimal solution whose feasibility is guaranteed for any realisation of the uncertain data. Although many researchers avoid modelling uncertainty, owing to the added complexity of constructing a robust optimisation model and to limited knowledge of the nature of these uncertainties and, especially, of their propagation, this thesis investigates robust convex optimisation, with the uncertainties estimated at every step, for the motion estimation problem. First, a solution using convex optimisation coupled with the recursive least squares (RLS) algorithm and the robust H∞ filter is developed for motion estimation. In another solution, uncertainties and their propagation are incorporated into a robust L∞ convex optimisation framework for monocular visual motion estimation. In this solution, robust least squares is combined with a second-order cone program (SOCP). A technique to improve the accuracy and robustness of the fundamental matrix is also investigated in this thesis. This technique uses the covariance intersection approach to fuse feature-location uncertainties, which leads to more consistent motion estimates (a minimal sketch of this fusion rule follows this abstract). Loop-closure detection is crucial in improving the robustness of navigation algorithms. In practice, after long navigation in an unknown environment, detecting that a vehicle is in a location it has previously visited gives the opportunity to increase the accuracy and consistency of the estimate. In this context, we have developed an efficient appearance-based method for visual loop-closure detection based on the combination of a Gaussian mixture model with the KD-tree data structure. Deploying this technique for loop-closure detection, a robust L∞ convex pose-graph optimisation solution for unmanned aerial vehicle (UAV) monocular motion estimation is introduced as well.
In the literature, most proposed solutions formulate pose-graph optimisation as a least-squares problem, minimising a cost function with iterative methods. In this work, robust convex optimisation under the L∞ norm is adopted, which efficiently corrects the UAV’s pose after loop-closure detection. To round out the work in this thesis, a system for cooperative monocular visual motion estimation with multiple aerial vehicles is proposed. The cooperative motion estimation employs state-of-the-art approaches for optimisation, individual motion estimation and registration. Three-view geometry algorithms in a convex optimisation framework are deployed on board the monocular vision system of each vehicle. In addition, vehicle-to-vehicle relative pose estimation is performed with a novel robust registration solution in a global optimisation framework. In parallel, and as a complementary solution for the relative pose, a robust non-linear H∞ solution is designed to fuse measurements from the UAVs’ on-board inertial sensors with the visual estimates. The suggested contributions have been exhaustively evaluated over a number of real-image data experiments in the laboratory using monocular vision systems and range imaging devices. In this thesis, we propose several solutions towards the goal of robust visual motion estimation using convex optimisation. We show that the convex optimisation framework may be extended to include uncertainty information to achieve robust and optimal solutions, and we find that convex optimisation is a practical and very appealing alternative to linear techniques and iterative methods.
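
    The covariance intersection rule mentioned in this abstract has a compact closed form: the fused information matrix is a convex combination of the two input information matrices, with the weight chosen to optimize a criterion such as the trace of the fused covariance. The sketch below is a minimal, generic illustration of that rule (with a simple grid search over the weight), not the thesis implementation; all variable names and numbers are assumptions.

        import numpy as np

        def covariance_intersection(x1, P1, x2, P2, n_grid=101):
            """Fuse two estimates with unknown cross-correlation:
            P^-1 = w*P1^-1 + (1-w)*P2^-1,  x = P*(w*P1^-1*x1 + (1-w)*P2^-1*x2),
            with w chosen here to minimise trace(P) over a simple grid."""
            I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
            best = None
            for w in np.linspace(0.0, 1.0, n_grid):
                P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
                if best is None or np.trace(P) < np.trace(best[1]):
                    x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
                    best = (x, P)
            return best

        # Two noisy measurements of the same 2-D feature location, each more
        # accurate along a different axis.
        x_fused, P_fused = covariance_intersection(
            np.array([1.0, 2.0]), np.diag([0.5, 2.0]),
            np.array([1.2, 1.8]), np.diag([2.0, 0.5]))
        print(x_fused, np.trace(P_fused))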

    Final results of Borexino Phase-I on low energy solar neutrino spectroscopy

    Borexino has been running since May 2007 at the LNGS with the primary goal of detecting solar neutrinos. The detector, a large, unsegmented liquid scintillator calorimeter characterized by unprecedentedly low levels of intrinsic radioactivity, is optimized for the study of the lower-energy part of the spectrum. During Phase-I (2007-2010) Borexino first detected and then precisely measured the flux of the 7Be solar neutrinos, ruled out any significant day-night asymmetry of their interaction rate, made the first direct observation of the pep neutrinos, and set the tightest upper limit on the flux of CNO neutrinos. In this paper we discuss the signal signature, provide a comprehensive description of the backgrounds, quantify their event rates, describe the methods for their identification, selection or subtraction, and describe the data analysis. Key features are an extensive in situ calibration program using radioactive sources, the detailed modeling of the detector response, the ability to define an innermost fiducial volume with extremely low background via software cuts, and the excellent pulse-shape discrimination capability of the scintillator, which allows particle identification. We report a measurement of the annual modulation of the 7Be neutrino interaction rate. The period, the amplitude, and the phase of the observed modulation are consistent with the solar origin of these events, and the absence of an annual modulation is rejected at higher than 99% C.L. The physics implications of the Phase-I results in the context of neutrino oscillation physics and solar models are presented.
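
    For context on the annual modulation measurement, the expected effect follows from the 1/d^2 dependence of the solar neutrino flux on the Earth-Sun distance: to first order in the orbital eccentricity, the interaction rate is modulated with a one-year period and an amplitude of roughly twice the eccentricity, i.e. a few percent. The snippet below only evaluates this generic expectation; it is not taken from the Borexino analysis, and the eccentricity and perihelion date are standard orbital values used for illustration.

        import numpy as np

        ECC = 0.0167          # Earth's orbital eccentricity
        T = 365.25            # orbital period in days

        def relative_rate(t_days, t_perihelion=3.0):
            """R(t)/<R> ~ 1 + 2*ecc*cos(2*pi*(t - t_perihelion)/T), from flux ~ 1/d^2
            with d(t) ~ a*(1 - ecc*cos(2*pi*(t - t_perihelion)/T))."""
            return 1.0 + 2.0 * ECC * np.cos(2.0 * np.pi * (t_days - t_perihelion) / T)

        t = np.arange(0, 365)
        span = relative_rate(t).max() - relative_rate(t).min()
        print("peak-to-peak modulation: %.1f%%" % (100 * span))
        # ~6.7% peak-to-peak, i.e. an amplitude of about 3.3% around the mean rate.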

    Discerning Aggregation in Homogeneous Ensembles: A General Description of Photon Counting Spectroscopy in Diffusing Systems

    In order to discern aggregation in solutions, we present a quantum mechanical analog of the photon statistics from fluorescent molecules diffusing through a focused beam. A generating functional is developed to fully describe the experimental physical system as well as the statistics. Histograms of the measured time delay between photon counts are fit by an analytical solution describing the static as well as the diffusing regimes. To determine empirical fitting parameters, fluorescence correlation spectroscopy is used in parallel with the photon counting. For expedient analysis, we find that the distribution's deviation from a single Poisson distinguishes between two single-fluor monomers and a double-fluor aggregate of the same total intensity. Initial studies were performed on fixed-state aggregates limited to dimerization. However, preliminary results on reactive species suggest that the method can be used to characterize any aggregating system. Comment: 30 pages, 5 figures
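
    The monomer-versus-dimer discrimination described above can be illustrated with a deliberately simplified Monte Carlo: for the same mean count rate, brighter-but-fewer emitters (dimers) produce larger occupancy-driven intensity fluctuations, and hence a larger super-Poissonian deviation of the photon-count histogram, than dimmer-but-more-numerous monomers. The sketch below ignores the beam profile and diffusion dynamics treated by the paper's generating functional, and all parameter values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def photon_counts(mean_occupancy, brightness, n_bins=200_000, dt=1.0):
            """Counts per bin for molecules entering and leaving an open focal volume:
            the occupancy is Poisson (ideal dilute solution) and the emission is
            Poisson given the instantaneous occupancy."""
            n = rng.poisson(mean_occupancy, n_bins)        # molecules in the volume per bin
            return rng.poisson(n * brightness * dt)        # detected photons per bin

        def mandel_q(k):
            # Q = 0 for a pure Poisson process; Q > 0 signals excess fluctuations.
            return k.var() / k.mean() - 1.0

        # Same mean count rate: two monomers (brightness q) vs one dimer (brightness 2q).
        q = 0.05
        monomers = photon_counts(mean_occupancy=2.0, brightness=q)
        dimers = photon_counts(mean_occupancy=1.0, brightness=2 * q)
        print("mean counts:", monomers.mean(), dimers.mean())        # approximately equal
        print("Mandel Q   :", mandel_q(monomers), mandel_q(dimers))  # dimers ~2x larger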

    Detector-Agnostic Phase-Space Distributions

    The representation of quantum states via phase-space functions constitutes an intuitive technique to characterize light. However, the reconstruction of such distributions is challenging, as it demands specific types of detectors and detailed models thereof to account for their particular properties and imperfections. To overcome these obstacles, we derive and implement a measurement scheme that enables the reconstruction of phase-space distributions for arbitrary states and whose functionality does not depend on knowledge of the detectors, thus defining the notion of detector-agnostic phase-space distributions. Our theory presents a generalization of well-known phase-space quasiprobability distributions, such as the Wigner function. We implement our measurement protocol using state-of-the-art transition-edge sensors, without performing a detector characterization. Based on our approach, we reveal the characteristic features of heralded single- and two-photon states in phase space and certify their nonclassicality with high statistical significance.
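
    As a point of reference for the well-known quasiprobabilities that this work generalizes, the snippet below evaluates the textbook Wigner function of a single-photon Fock state and checks its negativity at the origin of phase space, the standard signature of nonclassicality. It is a generic illustration only, not the detector-agnostic reconstruction of the paper.

        import numpy as np

        def wigner_fock1(alpha):
            """Textbook Wigner function of the single-photon Fock state |1>:
            W(alpha) = (2/pi) * (4|alpha|^2 - 1) * exp(-2|alpha|^2)."""
            a2 = np.abs(alpha) ** 2
            return (2.0 / np.pi) * (4.0 * a2 - 1.0) * np.exp(-2.0 * a2)

        # Negativity at the origin of phase space certifies nonclassicality.
        print(wigner_fock1(0.0))      # -2/pi ~ -0.637

        # Sanity check: the Wigner function integrates to 1 over phase space.
        x = np.linspace(-5, 5, 801)
        dx = x[1] - x[0]
        X, Y = np.meshgrid(x, x)
        W = wigner_fock1(X + 1j * Y)
        print(W.sum() * dx * dx)      # ~1.0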