
    The Solution of a Problem of Searching for Three Numbers, of Which the Sum, Product, and the Sum of Their Products Taken Two at a Time, Are Square Numbers

    This paper first appeared in Novi Commentarii academiae scientiarum Petropolitanae, Volume 8, pp. 64-73, and is reprinted in Opera Omnia: Series 1, Volume 2, pp. 519-530. Its Eneström number is E270. Euler improves these results significantly in On Three Square Numbers, of Which the Sum and the Sum of Products Two Apiece will be a Square (E523).
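
    The problem asks for three numbers x, y, z such that x + y + z, xyz, and xy + yz + zx are all square numbers. Euler works over the rationals; as a purely illustrative sketch (not Euler's method), the integer version of the condition can be checked by brute force over a small range, which may well find no hits:

```python
import math

def is_square(n):
    """True if n is a perfect square (n >= 0)."""
    if n < 0:
        return False
    r = math.isqrt(n)
    return r * r == n

def search(limit):
    """Brute-force hunt for triples (x, y, z), x <= y <= z <= limit, whose
    sum, product, and sum of pairwise products are all perfect squares."""
    hits = []
    for x in range(1, limit + 1):
        for y in range(x, limit + 1):
            for z in range(y, limit + 1):
                if (is_square(x + y + z)
                        and is_square(x * y * z)
                        and is_square(x * y + y * z + z * x)):
                    hits.append((x, y, z))
    return hits
```

    The rational setting Euler actually treats is much richer: any integer solution can be scaled by a square factor, and Euler's constructions produce parametric families rather than isolated triples.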

    Study of wavelength-shifting chemicals for use in large-scale water Cherenkov detectors

    Cherenkov detectors employ various methods to maximize light collection at the photomultiplier tubes (PMTs). These generally involve the use of highly reflective materials lining the interior of the detector, reflective materials around the PMTs, or wavelength-shifting sheets around the PMTs. Recently, the use of water-soluble wavelength shifters has been explored to increase the measurable light yield of Cherenkov radiation in water. These wavelength-shifting chemicals absorb light in the ultraviolet and re-emit it in a range detectable by PMTs. Using a 250 L water Cherenkov detector, we have characterized the increase in light yield from three compounds in water: 4-Methylumbelliferone, Carbostyril-124, and Amino-G Salt. We report the gain in PMT response at a concentration of 1 ppm as: 1.88 ± 0.02 for 4-Methylumbelliferone, stable to within 0.5% over 50 days, 1.37 ± 0.03 for Carbostyril-124, and 1.20 ± 0.02 for Amino-G Salt. The response of 4-Methylumbelliferone was modeled, resulting in a simulated gain within 9% of the experimental gain at a 1 ppm concentration. Finally, we report an increase in the neutron detection performance of a large-scale (3.5 kL) gadolinium-doped water Cherenkov detector at a 4-Methylumbelliferone concentration of 1 ppm. Comment: 7 pages, 9 figures, submitted to Nuclear Instruments and Methods
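
    The quoted 9% agreement between the modeled and measured 4-Methylumbelliferone response can be read as a relative discrepancy between the two gain values. A minimal sketch, with the reported numbers from the abstract (the helper name is ours, not the paper's):

```python
# Reported gains in PMT response at 1 ppm, as (value, uncertainty).
gains = {
    "4-Methylumbelliferone": (1.88, 0.02),
    "Carbostyril-124": (1.37, 0.03),
    "Amino-G Salt": (1.20, 0.02),
}

def relative_discrepancy(simulated, measured):
    """Fractional difference between a simulated and a measured gain."""
    return abs(simulated - measured) / measured
```

    A simulated gain within 9% of the experimental 1.88 thus corresponds to a simulated value roughly in the interval 1.71 to 2.05.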

    Scene Coordinate Regression with Angle-Based Reprojection Loss for Camera Relocalization

    Image-based camera relocalization is an important problem in computer vision and robotics. Recent works use convolutional neural networks (CNNs) to regress, for each pixel in a query image, the corresponding 3D world coordinate in the scene. The final pose is then solved via a RANSAC-based optimization scheme using the predicted coordinates. Usually, the CNN is trained with ground-truth scene coordinates, but it has also been shown that the network can discover 3D scene geometry automatically by minimizing a single-view reprojection loss. However, due to deficiencies of the reprojection loss, the network needs to be carefully initialized. In this paper, we present a new angle-based reprojection loss, which resolves the issues of the original reprojection loss. With this new loss function, the network can be trained without careful initialization, and the system achieves more accurate results. The new loss also enables us to exploit available multi-view constraints, which further improve performance. Comment: ECCV 2018 Workshop (Geometry Meets Deep Learning)
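
    A minimal sketch of what an angle-based reprojection loss can look like, assuming predicted scene coordinates have already been transformed into the camera frame and are compared against the per-pixel bearing rays; the function name and exact formulation here are our illustration, not the paper's definition:

```python
import numpy as np

def angle_reprojection_loss(points_cam, rays):
    """Mean angle (radians) between each predicted camera-frame point and
    the bearing ray of its pixel; zero when every point lies on its ray.
    Unlike a pixel-space reprojection error, the angle stays bounded even
    for points predicted behind or very close to the camera."""
    p = points_cam / np.linalg.norm(points_cam, axis=1, keepdims=True)
    r = rays / np.linalg.norm(rays, axis=1, keepdims=True)
    cos_theta = np.clip(np.sum(p * r, axis=1), -1.0, 1.0)
    return float(np.mean(np.arccos(cos_theta)))
```

    The boundedness is the intuition behind why such a loss tolerates poor initialization better than the standard pixel-space reprojection error.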

    Fast and Accurate Camera Covariance Computation for Large 3D Reconstruction

    Estimating the uncertainty of camera parameters computed in Structure from Motion (SfM) is an important tool for evaluating the quality of a reconstruction and guiding the reconstruction process. Yet, the quality of the estimated parameters of large reconstructions has rarely been evaluated due to the computational challenges involved. We present a new algorithm which exploits the sparsity of the uncertainty propagation and speeds the computation up about ten times with respect to previous approaches. Our computation is accurate and does not use any approximations. We can compute the uncertainties of thousands of cameras in tens of seconds on a standard PC. We also demonstrate that our approach can be effectively used for reconstructions of any size by applying it to smaller sub-reconstructions. Comment: ECCV 201
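
    The standard first-order uncertainty propagation behind such covariances takes a (pseudo-)inverse of JᵀJ, where J is the Jacobian of the reprojection residuals. A dense toy sketch of that baseline is below; the paper's actual contribution is making this step fast by exploiting sparsity, which this illustration does not attempt:

```python
import numpy as np

def parameter_covariance(J, sigma2=1.0):
    """Covariance of estimated parameters from the residual Jacobian J,
    assuming i.i.d. image noise with variance sigma2. The pseudo-inverse
    handles the gauge freedoms that make J^T J rank-deficient in SfM."""
    return sigma2 * np.linalg.pinv(J.T @ J)
```

    For real reconstructions J has thousands of columns, so the cost of a dense pseudo-inverse is exactly what motivates sparsity-aware algorithms like the one in the paper.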

    A review of astrophysics experiments on intense lasers

    Astrophysics has traditionally been pursued at astronomical observatories and on theorists’ computers. Observations record images from space, and theoretical models are developed to explain the observations. A component often missing has been the ability to test theories and models in an experimental setting where the initial and final states are well characterized. Intense lasers are now being used to recreate aspects of astrophysical phenomena in the laboratory, allowing the creation of experimental testbeds where theory and modeling can be quantitatively tested against data. We describe here several areas of astrophysics—supernovae, supernova remnants, gamma-ray bursts, and giant planets—where laser experiments are under development to test our understanding of these phenomena. © 2000 American Institute of Physics. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/71013/2/PHPAEN-7-5-1641-1.pd

    Learning and Matching Multi-View Descriptors for Registration of Point Clouds

    Critical to the registration of point clouds is the establishment of a set of accurate correspondences between points in 3D space. The correspondence problem is generally addressed by the design of discriminative 3D local descriptors on the one hand, and the development of robust matching strategies on the other. In this work, we first propose a multi-view local descriptor, learned from images of multiple views, for the description of 3D keypoints. We then develop a robust matching approach that rejects outlier matches via efficient belief-propagation inference on a defined graphical model. We demonstrate the boost our approaches bring to registration on public scanning and multi-view stereo datasets. The superior performance has been verified by intensive comparisons against a variety of descriptors and matching methods.
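
    The paper filters outliers with belief propagation on a graphical model. As a much simpler baseline for the same correspondence-filtering step, a mutual-nearest-neighbour check over descriptor distances is a common generic technique (not the paper's method):

```python
import numpy as np

def mutual_nearest_matches(desc_a, desc_b):
    """Keep a match (i, j) only if j is i's nearest neighbour in desc_b
    AND i is j's nearest neighbour in desc_a."""
    # Pairwise Euclidean distance matrix between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_ab = d.argmin(axis=1)  # best j for each i
    nn_ba = d.argmin(axis=0)  # best i for each j
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

    Graphical-model inference goes further than this per-match check by also enforcing geometric consistency among neighbouring correspondences.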

    Progressive Structure from Motion

    Structure from Motion, the sparse 3D reconstruction of a scene from individual photos, is a long-studied topic in computer vision. Yet none of the existing reconstruction pipelines fully addresses a progressive scenario, in which images only become available during the reconstruction process and intermediate results are delivered to the user. Incremental pipelines are capable of growing a 3D model but often get stuck in local minima due to wrong (binding) decisions taken on the basis of incomplete information. Global pipelines, on the other hand, need access to the complete view graph and cannot deliver intermediate results. In this paper we propose a new reconstruction pipeline that works in a progressive manner rather than in a batch-processing scheme. The pipeline is able to recover from failed reconstructions in early stages, avoids taking binding decisions, delivers progressive output, and yet maintains the capabilities of existing pipelines. We demonstrate and evaluate our method on diverse challenging public and dedicated datasets, including those with highly symmetric structures, and compare to the state of the art. Comment: Accepted to ECCV 201

    Speeding up structure from motion on large scenes using parallelizable partitions

    Structure-from-motion-based 3D reconstruction takes a long time for large scenes that consist of thousands of input images. We propose a method that speeds up the reconstruction of large scenes by partitioning them into smaller subscenes and then recombining those. The main benefit is that each subscene can be optimized in parallel. We present a widely usable subdivision method and show that the difference between the result after partitioning and recombination and the state-of-the-art structure-from-motion reconstruction on the entire scene is negligible.
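
    A minimal sketch of the parallel-partition idea, assuming an ordered image list is split into consecutive chunks that share a few overlapping views so the subscene reconstructions can be aligned afterwards; the chunking scheme and overlap size here are our illustration, not the paper's subdivision method:

```python
def partition(images, n_parts, overlap=1):
    """Split `images` into roughly n_parts consecutive chunks; adjacent
    chunks share `overlap` images so that the independently reconstructed
    subscenes have common views for later recombination."""
    size = -(-len(images) // n_parts)  # ceiling division
    return [images[max(0, i - overlap):i + size]
            for i in range(0, len(images), size)]

# Each chunk can then be handed to an independent SfM run, e.g. mapped
# over the chunks with concurrent.futures.ProcessPoolExecutor.
```

    The shared views are what let the partial reconstructions be stitched back together, since a similarity transform between two subscenes can be estimated from their common cameras or points.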