    Improving 6D Pose Estimation of Objects in Clutter via Physics-aware Monte Carlo Tree Search

    This work proposes a process for efficiently searching over combinations of individual object 6D pose hypotheses in cluttered scenes, especially in cases involving occlusions and objects resting on each other. The initial set of candidate object poses is generated with state-of-the-art object detection and global point cloud registration techniques. The best-scored pose per object according to these techniques may not be accurate due to overlaps and occlusions. Nevertheless, experimental indications provided in this work show that poses ranked lower by the registration techniques may be closer to the true poses than higher-ranked ones. This motivates a global optimization process that improves these poses by taking into account scene-level physical interactions between objects. It also implies that the Cartesian product of candidate poses for interacting objects must be searched to identify the best scene-level hypothesis. To perform the search efficiently, the candidate poses for each object are clustered so as to reduce their number while still keeping sufficient diversity. Searching over the combinations of candidate object poses is then performed through a Monte Carlo Tree Search (MCTS) process that uses the similarity between the observed depth image of the scene and a rendering of the scene under the hypothesized poses as the score guiding the search. MCTS handles in a principled way the tradeoff between fine-tuning the most promising poses and exploring new ones by using the Upper Confidence Bound (UCB) technique. Experimental results indicate that this process quickly identifies physically consistent object poses in cluttered scenes that are significantly closer to ground truth than the poses found by point cloud registration methods. Comment: 8 pages, 4 figures
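
    The search described in the abstract combines per-object candidate selection, a rendering-based score, and UCB. The following is a minimal, simplified sketch of that idea: a flat, bandit-style selection over clustered candidates rather than the paper's full tree search, where render_depth and depth_similarity are hypothetical helpers standing in for the actual renderer and scoring function.

```python
import math

def ucb(mean_score, visits, parent_visits, c=1.4):
    # Upper Confidence Bound: trade off refining promising poses vs. exploring new ones.
    if visits == 0:
        return float("inf")
    return mean_score + c * math.sqrt(math.log(parent_visits) / visits)

def search_scene_poses(candidate_poses, observed_depth, render_depth,
                       depth_similarity, iterations=500):
    """candidate_poses: one list of clustered pose hypotheses per object.
    render_depth(scene) and depth_similarity(a, b) are assumed helpers that render
    a depth image for a full scene hypothesis and score it against the observation."""
    # Per-object, per-candidate statistics: [sum of scores, visit count].
    stats = [[[0.0, 0] for _ in poses] for poses in candidate_poses]
    total_visits = 1
    best_scene, best_score = None, -math.inf

    for _ in range(iterations):
        # Selection: pick one candidate per object by UCB (a flat simplification
        # of the scene-level tree search described in the abstract).
        scene, picks = [], []
        for obj, poses in enumerate(candidate_poses):
            k = max(range(len(poses)),
                    key=lambda i: ucb(stats[obj][i][0] / stats[obj][i][1]
                                      if stats[obj][i][1] else 0.0,
                                      stats[obj][i][1], total_visits))
            picks.append(k)
            scene.append(poses[k])

        # Evaluation: rendered-vs-observed depth similarity guides the search.
        score = depth_similarity(render_depth(scene), observed_depth)

        # Backpropagation: credit the score to the chosen candidates.
        for obj, k in enumerate(picks):
            stats[obj][k][0] += score
            stats[obj][k][1] += 1
        total_visits += 1

        if score > best_score:
            best_scene, best_score = scene, score

    return best_scene, best_score
```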

    Fault Localization Models in Debugging

    Debugging is considered a rigorous but important part of the software engineering process. For more than a decade, the software engineering research community has been exploring different techniques for removing faults from programs, yet it is quite difficult to eliminate all faults of software programs, so this still remains a real challenge for the software debugging and maintenance community. In this paper, we briefly introduce software anomalies and fault classification and then explain different fault localization models using the theory of diagnosis. Furthermore, we compare and contrast value-based and dependency-based models with respect to different real misbehaviours and present some insights into the debugging process. Moreover, we discuss the results of both models and point out the shortcomings as well as the advantages of these models in terms of debugging and maintenance. Comment: 58-6
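
    As a rough illustration of the dependency-based style of model discussed in the abstract, the sketch below blames every statement that a wrongly valued output variable transitively depends on; the toy program encoding and helper are illustrative assumptions, not the paper's formalism.

```python
# Toy straight-line program: statement id -> (variable it defines, variables it reads).
statements = {
    1: ("a", []),          # a = input()
    2: ("b", []),          # b = input()
    3: ("c", ["a", "b"]),  # c = a + b   (suppose the defect is here)
    4: ("d", ["c"]),       # d = c + 1
}

def dependency_based_suspects(statements, wrong_variables):
    """Dependency-based model: a statement is a suspect if a variable observed to
    have a wrong value depends on it (a backward slice over data dependencies)."""
    defines = {var: sid for sid, (var, _) in statements.items()}
    suspects, work = set(), list(wrong_variables)
    while work:
        sid = defines.get(work.pop())
        if sid is None or sid in suspects:
            continue
        suspects.add(sid)
        work.extend(statements[sid][1])  # follow data dependencies backwards
    return suspects

# With output d observed to be wrong, all four statements remain suspects here;
# a value-based model would additionally use the concrete values to narrow them down.
print(sorted(dependency_based_suspects(statements, ["d"])))  # -> [1, 2, 3, 4]
```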

    CMS software deployment on OSG

    A set of software deployment tools has been developed for the installation, verification, and removal of a CMS software release. The tools, which mainly target deployment on the OSG, provide instant release deployment, corrective resubmission of the initial installation job, and an independent web-based deployment portal with a Grid Security Infrastructure login mechanism. We have performed over 500 installations and found the tools to be reliable and adaptable in coping with problems caused by changes in the Grid computing environment and in the software releases. We present the design of the tools, statistics gathered during their operation, and our experience with CMS software deployment on the OSG Grid computing environment.
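
    A minimal sketch of the corrective-resubmission behaviour mentioned above: submit_install_job and verify_release are hypothetical stand-ins for the actual Grid submission and verification steps, not the tools' real interface.

```python
import time

def deploy_release(site, release, submit_install_job, verify_release,
                   max_attempts=3, wait_seconds=600):
    """Submit the installation job for `release` at `site` and, if verification
    fails, resubmit up to max_attempts times (the "corrective resubmission" idea)."""
    for attempt in range(1, max_attempts + 1):
        job_id = submit_install_job(site, release)
        print(f"attempt {attempt}: submitted job {job_id} for {release} at {site}")
        time.sleep(wait_seconds)           # wait for the Grid job to finish (simplified)
        if verify_release(site, release):  # e.g. check that the release is published
            print(f"{release} verified at {site}")
            return True
    print(f"{release} not installed at {site} after {max_attempts} attempts")
    return False
```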

    Transparent code authentication at the processor level

    The authors present a lightweight authentication mechanism that verifies the authenticity of code and thereby addresses the virus and malicious code problems at the hardware level, eliminating the need for trusted extensions in the operating system. The proposed technique tightly integrates the authentication mechanism into the processor core. The authentication latency is hidden behind the memory access latency, thereby allowing seamless on-the-fly authentication of instructions. In addition, the proposed authentication method supports seamless encryption of code (and static data). Consequently, while providing software users with assurance of the authenticity of programs executing on their hardware, the proposed technique also protects the software manufacturers’ intellectual property through encryption. The performance analysis shows that, under mild assumptions, the presented technique introduces negligible overhead for even moderate cache sizes.
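
    The following is an illustrative sketch of the general idea of authenticating code at cache-line granularity with a keyed MAC checked on every instruction-cache fill; the line size, the MAC construction, and all names are assumptions for illustration, not the exact scheme proposed in the paper.

```python
import hmac
import hashlib

LINE_SIZE = 32  # bytes per instruction-cache line (assumed)

def sign_code(key, code, base_addr=0):
    """Producer side: compute a MAC over each cache line of the program image,
    binding the line contents to its address."""
    macs = {}
    for offset in range(0, len(code), LINE_SIZE):
        line = code[offset:offset + LINE_SIZE]
        addr = base_addr + offset
        macs[addr] = hmac.new(key, addr.to_bytes(8, "little") + line,
                              hashlib.sha256).digest()
    return macs

def authenticate_line(key, addr, line, macs):
    """Processor side: on a cache-line fill, recompute the MAC (conceptually in
    parallel with the memory fetch) and compare; a mismatch means the code was
    modified or is not authorized to run."""
    expected = macs.get(addr)
    computed = hmac.new(key, addr.to_bytes(8, "little") + line,
                        hashlib.sha256).digest()
    return expected is not None and hmac.compare_digest(expected, computed)
```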

    Premise Selection for Mathematics by Corpus Analysis and Kernel Methods

    Smart premise selection is essential when using automated reasoning as a tool for large-theory formal proof development. A good method for premise selection in complex mathematical libraries is the application of machine learning to large corpora of proofs. This work develops learning-based premise selection in two ways. First, a newly available minimal dependency analysis of existing high-level formal mathematical proofs is used to build a large knowledge base of proof dependencies, providing precise data for ATP-based re-verification and for training premise selection algorithms. Second, a new machine learning algorithm for premise selection based on kernel methods is proposed and implemented. To evaluate the impact of both techniques, a benchmark consisting of 2078 large-theory mathematical problems is constructed, extending the older MPTP Challenge benchmark. The combined effect of the techniques results in a 50% improvement on the benchmark over the Vampire/SInE state-of-the-art system for automated reasoning in large theories. Comment: 26 pages
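
    A simplified, corpus-based premise ranking in the spirit of the learning approach described above; it is closer to a similarity-weighted nearest-neighbour scheme than to the paper's kernel algorithm, and the symbol-set features and toy data are assumptions for illustration.

```python
def linear_kernel(u, v):
    # Number of symbols shared by the two feature sets; any similarity kernel works here.
    return len(u & v)

def rank_premises(conjecture_feats, proofs, kernel=linear_kernel):
    """proofs: list of (fact_features, premises_used_in_its_proof) pairs from the
    proof-dependency corpus. Returns premises sorted by relevance to the conjecture."""
    scores = {}
    for fact_feats, premises in proofs:
        w = kernel(conjecture_feats, fact_feats)  # similarity to an already-proved fact
        for p in premises:
            scores[p] = scores.get(p, 0.0) + w    # credit the premises that proved it
    return sorted(scores, key=scores.get, reverse=True)

# Toy usage with symbol-set features (purely illustrative):
proofs = [
    ({"subset", "union"}, ["union_subset_lemma"]),
    ({"subset", "inter"}, ["inter_subset_lemma"]),
]
print(rank_premises({"subset", "union", "empty"}, proofs))
```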

    Validation of the new Hipparcos reduction

    Context. A new reduction of the astrometric data as produced by the Hipparcos mission has been published, claiming accuracies for nearly all stars brighter than magnitude Hp = 8 to be better, by up to a factor 4, than in the original catalogue. Aims. The new Hipparcos astrometric catalogue is checked for the quality of the data and the consistency of the formal errors, as well as the possible presence of error correlations. The differences with the earlier publication are explained. Methods. The internal errors are followed through the reduction process, and the external errors are investigated on the basis of a comparison with radio observations of a small selection of stars, and the distribution of negative parallaxes. Error correlation levels are investigated and the reduction by more than a factor 10 as obtained in the new catalogue is explained. Results. The formal errors on the parallaxes for the new catalogue are confirmed. The presence of a small amount of additional noise, though unlikely, cannot be ruled out. Conclusions. The new reduction of the Hipparcos astrometric data provides an improvement by a factor 2.2 in the total weight compared to the catalogue published in 1997, and provides much improved data for a wide range of studies on stellar luminosities and local galactic kinematics. Comment: 12 pages, 19 figures, accepted for publication by Astronomy and Astrophysics
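
    One of the external checks mentioned above uses the distribution of negative parallaxes. The sketch below illustrates the basic idea for a sample of effectively zero-parallax (very distant) stars, comparing the observed negative tail with what purely Gaussian formal errors would predict; the function names and the zero-parallax assumption are illustrative, not the paper's actual procedure.

```python
import math

def normal_cdf(x):
    # Standard normal cumulative distribution via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def negative_tail_check(parallaxes_mas, errors_mas, k=2.0):
    """For stars whose true parallax is effectively zero, compare the observed
    fraction of parallaxes below -k*sigma with the fraction expected if the
    quoted formal errors were purely Gaussian with no additional noise."""
    n = len(parallaxes_mas)
    observed = sum(p < -k * e for p, e in zip(parallaxes_mas, errors_mas)) / n
    expected = normal_cdf(-k)  # about 0.023 for k = 2
    return observed, expected
```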