196 research outputs found

    3D surface reconstruction for lower limb prosthetic model using modified Radon transform

    Computer vision has received increased attention for research and innovation in three-dimensional surface reconstruction, with the aim of obtaining accurate results. Although many researchers have proposed various novel solutions and demonstrated the feasibility of their findings, most require sophisticated devices and are computationally expensive. A proper countermeasure is therefore needed to resolve these reconstruction constraints: an algorithm that performs considerably fast reconstruction using devices with appropriate specifications, performance, and practical affordability. This thesis describes an approach to reconstructing the three-dimensional surface of residual limb models by adopting the technique of tomographic imaging coupled with a multiple-view strategy based on a digital camera and a turntable. The surface of an object is reconstructed from uncalibrated two-dimensional image sequences of thirty-six different projections with the aid of the Radon transform and shape-from-silhouette. The results show that the main objective, reconstructing the three-dimensional surface of a lower limb model, has been achieved with reasonable accuracy as a starting point for reconstructing the three-dimensional surface and extracting digital measurements of an amputated lower limb model: the maximum percent error obtained from the computation is approximately 3.3% for the height, and 7.4%, 7.9%, and 8.1% for the diameters at three specific heights of the objects. It can be concluded that the reconstruction accuracy of the developed method depends particularly on the quality of the generated silhouettes, where high-contrast two-dimensional images contribute to more accurate silhouette extraction. The advantage of the concept presented in this thesis is that it requires only a simple experimental setup; the reconstruction of the three-dimensional model neither involves expensive equipment nor requires an expert to operate a sophisticated mechanical scanning system.
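    The abstract names shape-from-silhouette as one of the two reconstruction ingredients; as a rough illustration of that step only (the coupling with the Radon transform is not shown), the following is a minimal NumPy sketch of orthographic visual-hull carving over a voxel grid. The grid resolution, the orthographic-camera assumption, and the convention of thirty-six equally spaced turntable angles are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def carve_visual_hull(silhouettes, angles_deg, grid_n=128, half_extent=1.0):
    """Carve a voxel visual hull from binary turntable silhouettes.

    silhouettes: list of 2D boolean arrays (H x W), object pixels = True.
    angles_deg:  turntable angle for each view (e.g. 0, 10, ..., 350).
    Assumes an orthographic camera; depth along the viewing axis is
    irrelevant to the silhouette test, so only in-plane rotation matters.
    """
    lin = np.linspace(-half_extent, half_extent, grid_n)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
    occupied = np.ones(X.shape, dtype=bool)

    for sil, ang in zip(silhouettes, angles_deg):
        h, w = sil.shape
        t = np.deg2rad(ang)
        # Horizontal image coordinate of each voxel after rotating the
        # object by the turntable angle about the vertical z axis.
        yr = np.sin(t) * X + np.cos(t) * Y
        u = np.clip(((yr + half_extent) / (2 * half_extent) * (w - 1)).astype(int), 0, w - 1)
        v = np.clip(((half_extent - Z) / (2 * half_extent) * (h - 1)).astype(int), 0, h - 1)
        # A voxel survives only if it projects inside every silhouette.
        occupied &= sil[v, u]
    return occupied
```

    The carving also explains the sensitivity reported above: a single low-contrast silhouette that misclassifies object pixels as background permanently removes valid voxels from the hull.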

    Fuzzy kernel regression for registration and other image warping applications

    In this dissertation a new approach to non-rigid medical image registration is presented. It relies on a probabilistic framework based on the novel concept of Fuzzy Kernel Regression. After a formal introduction, the theoretical framework is applied to develop several complete registration systems: two of them are interactive and one is fully automatic. They all use the composition of local deformations to achieve the final alignment. The automatic system is based on the maximization of mutual information to produce local affine alignments, which are merged into the global transformation. The mutual information maximization procedure uses the gradient descent method. Due to the huge amount of data associated with medical images, a multi-resolution topology is embodied, reducing processing time. The distance-based interpolation scheme facilitates the similarity measure optimization by attenuating the presence of local maxima in the functional. System blocks are implemented on GPGPUs, allowing efficient parallel computation on large 3D datasets using SIMT execution. Due to the flexibility of mutual information, the method can be applied to multi-modality image scans (MRI, CT, PET, etc.). Both quantitative and qualitative experiments show promising results and great potential for future extension. Finally, the flexibility of the framework is demonstrated through its successful application to the image retargeting problem; methods and results are presented.
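    To make the mutual-information criterion concrete, here is a minimal NumPy sketch (not the dissertation's GPGPU implementation) that estimates MI between a fixed and a moving image from their joint intensity histogram; the bin count is an arbitrary illustrative choice.

```python
import numpy as np

def mutual_information(fixed, moving, bins=64):
    """Estimate mutual information between two equally sized images.

    A registration loop would maximize this score over the parameters of
    the local affine transforms mentioned in the abstract.
    """
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()               # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)     # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)     # marginal p(y)
    nz = pxy > 0                            # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

    A histogram estimate like this is not differentiable, which is one reason practical systems use smoothed (e.g. Parzen-window) density estimates when driving the gradient-descent optimization mentioned above.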

    Ontology specific visual canvas generation to facilitate sense-making-an algorithmic approach

    Ontologies are domain-specific conceptualizations that are both human- and machine-readable. Due to this remarkable attribute, the applications of ontologies are not limited to computing domains. Banking, medicine, agriculture, and law are a few of the non-computing domains where ontologies are being used very effectively. When creating ontologies for non-computing domains, the involvement of non-computing domain specialists such as bankers, lawyers, and farmers becomes vital. Since they are not semantics specialists, specially designed visualization assistance is required for ontology schema verification and sense-making. Existing visualization methods are not fine-tuned for non-technical domain specialists and involve many complexities. In this research, a novel algorithm capable of generating a visualization canvas friendlier to domain specialists has been explored. The proposed algorithm and visualization canvas have been tested in three different domains, yielding an overall success rate of 85%.
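    The abstract does not spell out the algorithm, but a minimal sketch of the general idea, turning an ontology schema into a plain-labelled node-link canvas for non-technical specialists, could look like the following. The use of rdflib and networkx, the file name, and the restriction to rdfs:subClassOf edges are all illustrative assumptions.

```python
import networkx as nx
from rdflib import Graph, RDFS
from rdflib.namespace import split_uri

def build_canvas_graph(ontology_path):
    """Load an ontology and return a directed graph of its class hierarchy,
    labelled with short local names rather than full URIs."""
    g = Graph()
    g.parse(ontology_path)  # RDF format inferred from the file extension
    canvas = nx.DiGraph()
    for child, parent in g.subject_objects(RDFS.subClassOf):
        try:
            canvas.add_edge(split_uri(parent)[1], split_uri(child)[1])
        except ValueError:
            continue  # skip blank nodes and URIs without a local name
    return canvas

# Hypothetical usage: lay the hierarchy out for drawing on a canvas.
# pos = nx.spring_layout(build_canvas_graph("banking.ttl"))
```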

    Novel graph analytics for enhancing data insight

    Graph analytics is a fast-growing and significant field in the visualization and data mining community, applied to numerous high-impact applications such as network security, finance, and health care, providing users with adequate knowledge about the various patterns within a given system. Although a series of methods have been developed over the past years for the analysis of unstructured collections of multi-dimensional points, graph analytics has only recently been explored. Despite the significant progress achieved recently, there are still many open issues in the area, concerning not only the performance of graph mining algorithms but also the production of effective graph visualizations that enhance human perception. The current thesis investigates novel methods for graph analytics in order to enhance data insight, and proposes two methods for graph mining and visualization. Building on previous work on graph mining, the thesis suggests a set of novel graph features that are particularly effective in identifying the behavioral patterns of the nodes of a graph. The proposed features capture the interaction of node neighborhoods with other nodes of the graph. Moreover, unlike previous approaches, the graph features introduced herein include information from multiple neighborhood sizes; they thus capture long-range correlations between nodes and depict the behavioral aspects of each node with high accuracy. Experimental evaluation on multiple datasets shows that using the proposed graph features for graph mining provides better results than other state-of-the-art graph features. Thereafter, the focus is placed on improving graph visualization methods for enhanced human insight. To achieve this, the thesis uses non-linear deformations to reduce visual clutter. Non-linear deformations have previously been used to magnify significant or cluttered regions in data or images, reducing clutter and enhancing the perception of patterns. Extending previous approaches, this work introduces a hierarchical approach to non-linear deformation that reduces visual clutter by magnifying significant regions, leading to enhanced visualizations of one-, two-, and three-dimensional datasets. In this context, an energy function is utilized to determine the optimal deformation for every local region of the data, taking into account information from multiple single-layer significance maps. The problem is subsequently transformed into an optimization problem that minimizes the energy function under specific spatial constraints. Extended experimental evaluation provides evidence that the proposed hierarchical approach for generating the significance map surpasses current methods and effectively identifies significant regions, delivering better results. The thesis concludes with a discussion outlining the major achievements of the work, as well as possible drawbacks and other open issues of the proposed approaches that could be addressed in future work.
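    As one concrete illustration of the multi-neighborhood features described above (the thesis's exact feature set is not reproduced here), a minimal NetworkX sketch could count, for each node, how many nodes are reachable within several hop radii; the choice of radii and the use of raw neighborhood sizes are illustrative assumptions.

```python
import networkx as nx

def multi_hop_features(graph, radii=(1, 2, 3)):
    """For every node, count the nodes reachable within each hop radius.

    The growth of these counts across radii gives a coarse multi-scale
    signature of a node's role in the graph (hub, periphery, bridge, ...).
    """
    features = {}
    for node in graph.nodes:
        # Shortest-path distances to all nodes within the largest radius.
        dist = nx.single_source_shortest_path_length(graph, node, cutoff=max(radii))
        features[node] = [sum(1 for d in dist.values() if 0 < d <= r) for r in radii]
    return features

# Example on a small built-in graph: three growing counts per node.
print(multi_hop_features(nx.karate_club_graph())[0])
```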

    Structure-aware shape processing

    Depth Estimation Using 2D RGB Images

    Single-image depth estimation is an ill-posed problem: it is not mathematically possible to uniquely estimate the third dimension (depth) from a single 2D image. Hence, additional constraints need to be incorporated in order to regularize the solution space. In the first part of this dissertation, the idea of constraining the model for more accurate depth estimation is explored by taking advantage of the similarity between the RGB image and the corresponding depth map at the geometric edges of the 3D scene. Although deep-learning-based methods are very successful in computer vision and handle noise very well, they suffer from poor generalization when the test and train distributions are not close. Geometric methods do not have this generalization problem, since they benefit from temporal information in an unsupervised manner; they are sensitive to noise, though. At the same time, explicitly modeling dynamic scenes and flexible objects is a big challenge for traditional computer vision methods. Considering the advantages and disadvantages of each approach, a hybrid method that benefits from both is proposed here, extending traditional geometric models to handle flexible and dynamic objects in the scene. This is made possible by relaxing the geometric computer vision rules from one motion model for some areas of the scene to one for every pixel in the scene. This enables the model to detect even small, flexible, floating debris in a dynamic scene. However, it makes the optimization under-constrained. To turn the optimization from under-constrained to over-constrained while maintaining the model's flexibility, a "moving object detection loss" and a "synchrony loss" are designed. The algorithm is trained in an unsupervised fashion. The preliminary results are not yet comparable to the current state of the art: the training process is slow, the algorithm lacks stability, and the optical flow model is extremely noisy and naive. Finally, some solutions are suggested to address these issues.
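    To illustrate the per-pixel motion idea sketched above (the dissertation's actual "moving object detection loss" and "synchrony loss" are only named, not specified), a minimal PyTorch sketch of an unsupervised photometric warping loss with a dense per-pixel flow field might look like this; the tensor shapes, the L1 photometric term, and the validity mask are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def photometric_loss(frame_t, frame_t1, flow, valid_mask):
    """Warp frame_t1 back to frame_t using per-pixel flow and compare.

    frame_t, frame_t1: (B, 3, H, W) images; flow: (B, 2, H, W) in pixels;
    valid_mask: (B, 1, H, W), e.g. down-weighting detected moving objects.
    Every pixel carries its own motion, instead of one model per region.
    """
    b, _, h, w = frame_t.shape
    # Base sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=frame_t.device),
        torch.linspace(-1, 1, w, device=frame_t.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
    # Convert pixel-space flow to normalized offsets and displace the grid.
    offset = torch.stack(
        (flow[:, 0] * 2.0 / (w - 1), flow[:, 1] * 2.0 / (h - 1)), dim=-1
    )
    warped = F.grid_sample(frame_t1, base + offset, align_corners=True)
    # Masked L1 photometric error; minimized during unsupervised training.
    return ((warped - frame_t).abs() * valid_mask).mean()
```

    Allowing every pixel its own flow is exactly what makes the optimization under-constrained, which is why additional loss terms such as the two named above are needed to make it over-constrained again.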