
    LEVEL-BASED CORRESPONDENCE APPROACH TO COMPUTATIONAL STEREO

    Get PDF
    One fundamental problem in computational stereo reconstruction is correspondence: the task of detecting the projections of the same real-world object in two camera views. This research focuses on correspondence, proposing an algorithm that improves such detection for low-quality cameras (webcams) while aiming at real-time image processing. Correspondence plays an important role in computational stereo reconstruction and has a broad range of applications; it is also useful in areas such as structure-from-motion reconstruction, object detection, tracking in robot vision, and virtual reality. Because of this importance, a correspondence method needs to be accurate enough to meet the requirements of these fields, yet inexpensive and easy to use and configure, so that it is accessible to everyone. By comparing current local correspondence methods and discussing their strengths and weaknesses, this research develops an algorithm that improves on previous work to achieve fast detection, low cost, and accuracy acceptable for reconstruction. The correspondence process is divided into four stages. The two preprocessing stages, noise reduction and edge detection, are compared across the different methods available. In the next stage, the feature detection process is introduced and discussed, focusing on possible solutions for reducing errors introduced by the system or by problems occurring in the scene, such as occlusion. The final stage covers different methods of displaying the reconstructed result. Several data sets are processed through the correspondence pipeline, and the results are discussed and compared in detail. The findings show how the system achieves high speed and acceptable output despite poor-quality input. In conclusion, some possible improvements are proposed based on the final outcome.
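
    As a concrete baseline for the four-stage pipeline the abstract describes, the sketch below runs a rectified webcam pair through noise reduction and local block matching with OpenCV. The file names and parameter values are placeholders, and block matching is a generic local method standing in for the level-based algorithm this work proposes.

        import cv2
        import numpy as np

        # Load a rectified stereo pair (file names are placeholders).
        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        # Stage 1: noise reduction (Gaussian smoothing is one common choice).
        left = cv2.GaussianBlur(left, (5, 5), 0)
        right = cv2.GaussianBlur(right, (5, 5), 0)

        # Stages 2-3: local block matching finds correspondences along
        # epipolar lines; it stands in for the edge- and feature-detection
        # stages of the pipeline described in the abstract.
        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = stereo.compute(left, right).astype(np.float32) / 16.0

        # Stage 4: write out the disparity map as a proxy for the
        # reconstructed result.
        out = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
        cv2.imwrite("disparity.png", out.astype(np.uint8))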

    Towards Tight Bounds for the Streaming Set Cover Problem

    Full text link
    We consider the classic Set Cover problem in the data stream model. For $n$ elements and $m$ sets ($m \geq n$) we give an $O(1/\delta)$-pass algorithm with strongly sub-linear $\tilde{O}(mn^{\delta})$ space and a logarithmic approximation factor. This yields a significant improvement over the earlier algorithm of Demaine et al. [DIMV14], which uses an exponentially larger number of passes. We complement this result by showing that the tradeoff between the number of passes and space exhibited by our algorithm is tight, at least when the approximation factor is equal to $1$. Specifically, we show that any algorithm that computes set cover exactly using $(\frac{1}{2\delta}-1)$ passes must use $\tilde{\Omega}(mn^{\delta})$ space in the regime of $m=O(n)$. Furthermore, we consider the problem in the geometric setting where the elements are points in $\mathbb{R}^2$ and sets are either discs, axis-parallel rectangles, or fat triangles in the plane, and show that our algorithm (with a slight modification) uses the optimal $\tilde{O}(n)$ space to find a logarithmic approximation in $O(1/\delta)$ passes. Finally, we show that any randomized one-pass algorithm that distinguishes between covers of size 2 and 3 must use a linear (i.e., $\Omega(mn)$) amount of space. This is the first result showing that a randomized, approximate algorithm cannot achieve a space bound that is sublinear in the input size. It indicates that multiple passes might be necessary in order to achieve sub-linear space bounds for this problem while guaranteeing small approximation factors.
    Comment: A preliminary version of this paper is to appear in PODS 201
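
    The pass/space trade-off can be illustrated with the classic thresholded-greedy idea: each pass admits only sets whose marginal coverage clears a bar that drops from pass to pass. The sketch below is a plain in-memory rendition of that idea; it keeps the whole uncovered set, so it does not match the paper's $\tilde{O}(mn^{\delta})$ space bound, and all names are illustrative.

        def multipass_greedy_set_cover(sets, universe, passes):
            """Thresholded greedy: pass i admits any set covering at least
            n**(1 - i/passes) still-uncovered elements, so the admission
            bar drops geometrically and the last pass takes any set with
            positive gain."""
            n = len(universe)
            uncovered = set(universe)
            cover = []
            for i in range(1, passes + 1):
                threshold = max(1.0, n ** (1.0 - i / passes))
                for idx, s in enumerate(sets):  # one scan over the stream
                    gain = len(uncovered & s)
                    if gain >= threshold:
                        cover.append(idx)
                        uncovered -= s
                if not uncovered:
                    break
            return cover

        # Toy usage: three sets covering {0,...,5} in two passes.
        sets = [{0, 1, 2, 3}, {3, 4}, {4, 5}]
        print(multipass_greedy_set_cover(sets, range(6), passes=2))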

    Scalable sparse covariance estimation via self-concordance

    Get PDF
    We consider the class of convex minimization problems composed of a self-concordant function, such as the $\log\det$ metric, a convex data fidelity term $h(\cdot)$, and a regularizing -- possibly non-smooth -- function $g(\cdot)$. This type of problem has recently attracted a great deal of interest, mainly due to its omnipresence in top-notch applications. Under this \emph{locally} Lipschitz continuous gradient setting, we analyze the convergence behavior of proximal Newton schemes with the added twist of a probable presence of inexact evaluations. We prove attractive convergence rate guarantees and enhance state-of-the-art optimization schemes to accommodate such developments. Experimental results on sparse covariance estimation show the merits of our algorithm, both in terms of recovery efficiency and complexity.
    Comment: 7 pages, 1 figure. Accepted at AAAI-1
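
    The composite structure (self-concordant smooth term plus non-smooth regularizer) appears in sparse inverse-covariance estimation. The sketch below minimizes the standard $-\log\det(\Theta) + \langle S, \Theta\rangle + \lambda\|\Theta\|_1$ objective with plain proximal gradient steps; the paper's method is an inexact proximal Newton scheme, so this only illustrates the objective and the prox step, and the parameters are illustrative.

        import numpy as np

        def soft_threshold(X, t):
            """Entrywise soft-thresholding: the prox operator of t*||.||_1."""
            return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

        def sparse_precision_ista(S, lam, steps=500, eta=0.1):
            """Proximal gradient for
                min_Theta  -log det(Theta) + <S, Theta> + lam * ||Theta||_1.
            The smooth part is the self-concordant piece; the l1 term is
            the non-smooth regularizer g(.)."""
            Theta = np.eye(S.shape[0])
            for _ in range(steps):
                grad = S - np.linalg.inv(Theta)   # gradient of smooth part
                step = eta
                while True:                       # backtrack to stay PD
                    cand = soft_threshold(Theta - step * grad, step * lam)
                    if np.all(np.linalg.eigvalsh(cand) > 1e-8):
                        break
                    step *= 0.5
                Theta = cand
            return Theta

        # Toy usage: estimate a sparse precision matrix from samples.
        X = np.random.default_rng(0).standard_normal((200, 5))
        Theta_hat = sparse_precision_ista(np.cov(X, rowvar=False), lam=0.2)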

    Evaluating the interaction of 308-nm xenon chloride excimer laser with human dentin and enamel hard tissues

    Get PDF
    Background: The pulsed output of the 308 nm XeCl laser, with its photoablative rather than photothermal action, offers the ability to remove dental hard tissues with minimal heat generation in the tissue. Materials and Methods: A total of 20 human molar teeth (ten used as enamel samples and ten used as dentin samples after the enamel was removed from their crowns) were irradiated by the laser. The crown of each sample was treated as a cube whose lateral sides were exposed at a 2 Hz repetition rate without water cooling. In total, 18 holes were produced across the enamel samples and 18 across the dentin samples. Three energy levels were selected as the variable factor, with six different pulse counts at each energy level. Images of the holes were acquired by combined optical and computer imaging, and the ablation depth and effective ablation area were calculated using MATLAB. Results: Ablation depth increased with the number of pulses for both enamel and dentin, and it also increased with the energy level for both tissues. Greater ablation depth and effective ablation area were observed in dentin than in enamel. The borders of the created holes were sharp and well defined. Conclusion: The application of the XeCl laser for hard tissue removal and cavity preparation may become feasible after certain modifications.

    Improved Diversity Maximization Algorithms for Matching and Pseudoforest

    Full text link
    In this work we consider the diversity maximization problem: given a data set $X$ of $n$ elements and a parameter $k$, the goal is to pick a subset of $X$ of size $k$ maximizing a certain diversity measure. [CH01] defined a variety of diversity measures based on pairwise distances between the points. A constant factor approximation algorithm was known for all those diversity measures except ``remote-matching'', where only an $O(\log k)$ approximation was known. In this work we present an $O(1)$ approximation for this remaining notion. Further, we consider these notions from the perspective of composable coresets. [IMMM14] provided composable coresets with a constant factor approximation for all but ``remote-pseudoforest'' and ``remote-matching'', for which, again, they obtained only an $O(\log k)$ approximation. Here we also close the gap up to constants and present a constant factor composable coreset algorithm for these two notions. For remote-matching, our coreset has size only $O(k)$, and for remote-pseudoforest, our coreset has size $O(k^{1+\varepsilon})$ for any $\varepsilon > 0$, for an $O(1/\varepsilon)$-approximate coreset.
    Comment: 27 pages, 1 table. Accepted to APPROX, 202
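
    For intuition about pairwise-distance diversity measures, the sketch below implements the greedy farthest-point (Gonzalez) heuristic, a common building block and baseline for such objectives; it is not the paper's remote-matching algorithm, and all names are illustrative.

        import numpy as np

        def farthest_point_subset(points, k, seed=0):
            """Greedy farthest-point (Gonzalez) heuristic: repeatedly add
            the point farthest from everything chosen so far."""
            pts = np.asarray(points, dtype=float)
            rng = np.random.default_rng(seed)
            chosen = [int(rng.integers(len(pts)))]
            # dist[j] = distance from point j to its nearest chosen point
            dist = np.linalg.norm(pts - pts[chosen[0]], axis=1)
            for _ in range(k - 1):
                nxt = int(np.argmax(dist))
                chosen.append(nxt)
                dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
            return chosen

        # Toy usage: pick 5 mutually spread-out points from 100 random ones.
        pts = np.random.default_rng(1).standard_normal((100, 2))
        print(farthest_point_subset(pts, k=5))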
