
    Feature Match for Medical Images

    This paper presents Feature Match, a generalized approximate nearest neighbor field (ANNF) computation framework between a source and a target image. The proposed algorithm can estimate ANNF maps between any pair of images, not necessarily related. This generalization is achieved through appropriate spatial-range transforms. To compute ANNF maps, global color adaptation is applied as a range transform on the source image. Image patches from the image pair are approximated using low-dimensional features, which are used along with a KD-tree to estimate the ANNF map. This ANNF map is further refined based on image coherency and spatial transforms. The proposed generalization makes it possible to handle a wider range of vision applications that have not previously been tackled within the ANNF framework. One such application is demonstrated here: optic disk detection. This application concerns medical imaging, where optic disks are located in retinal images using a healthy optic disk image as the common target image. ANNF mapping is used for this application, and it is shown experimentally that the proposed approach is faster and more accurate than state-of-the-art techniques.
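    The core pipeline above (low-dimensional patch features indexed with a KD-tree) can be sketched as follows. This is a minimal illustration, not the paper's exact method: the patch descriptor (per-channel means) and patch size are assumptions made here for brevity.

```python
# Hedged sketch of ANNF estimation: project patches to low-dimensional
# features, index target features in a KD-tree, query with source features.
import numpy as np
from scipy.spatial import cKDTree

def patch_features(img, k=8):
    """Collect k x k patches; reduce each to a low-dimensional feature."""
    H, W, C = img.shape
    coords, feats = [], []
    for y in range(H - k + 1):
        for x in range(W - k + 1):
            patch = img[y:y + k, x:x + k]
            # Illustrative low-dimensional surrogate: per-channel means.
            feats.append(patch.reshape(-1, C).mean(axis=0))
            coords.append((y, x))
    return np.array(coords), np.array(feats)

def annf_map(source, target, k=8):
    """For each source patch, find an approximately nearest target patch."""
    src_xy, src_f = patch_features(source, k)
    tgt_xy, tgt_f = patch_features(target, k)
    tree = cKDTree(tgt_f)          # index target features once
    _, idx = tree.query(src_f)     # one approximate match per source patch
    return src_xy, tgt_xy[idx]     # source coordinate -> matched target coordinate

src = np.random.default_rng(0).random((32, 32, 3))
tgt = np.random.default_rng(1).random((32, 32, 3))
src_xy, match_xy = annf_map(src, tgt)
```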

    Query processing of spatial objects: Complexity versus Redundancy

    The management of complex spatial objects in applications such as geography and cartography imposes stringent new requirements on spatial database systems, in particular on efficient query processing. As previous work has shown, the performance of spatial query processing can be improved by decomposing complex spatial objects into simple components. Up to now, only decomposition techniques generating a linear number of very simple components, e.g. triangles or trapezoids, have been considered. In this paper, we investigate the natural trade-off between the complexity of the components and the redundancy, i.e. the number of components, with respect to their effect on efficient query processing. In particular, we present two new decomposition methods that strike a better balance between the complexity and the number of components than previously known techniques. We compare these new decomposition methods to the traditional undecomposed representation as well as to the well-known decomposition into convex polygons with respect to their performance in spatial query processing. This comparison shows that, over a wide range of query selectivities, the new decomposition techniques clearly outperform both the undecomposed representation and the convex decomposition method. More important than the absolute performance gain of up to an order of magnitude is the robust performance of the new decomposition techniques over the whole range of query selectivities.
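    To make the trade-off concrete, here is a hedged sketch: a region stored as convex components answers a point query by testing each cheap component in turn, so more components mean more redundancy but simpler per-component tests. The L-shaped example and counter-clockwise vertex convention are illustrative assumptions, not the paper's decomposition methods.

```python
# Point-in-region query against a convex decomposition.
def point_in_convex(pt, poly):
    """True if pt lies inside the convex polygon (CCW vertex list)."""
    px, py = pt
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Cross product: pt must lie on the left of every directed edge.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

def point_in_decomposed(pt, components):
    """Inside the region iff any convex component contains the point."""
    return any(point_in_convex(pt, c) for c in components)

# An L-shaped region decomposed into two rectangles (convex components).
L_shape = [
    [(0, 0), (2, 0), (2, 1), (0, 1)],   # lower bar
    [(0, 1), (1, 1), (1, 3), (0, 3)],   # upper bar
]
print(point_in_decomposed((0.5, 2.0), L_shape))  # True
print(point_in_decomposed((1.5, 2.0), L_shape))  # False
```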

    Application of Principal Component Analysis to Decision Support System

    Decision support systems are software tools used to develop insight into system behavior and to help managers make effective plans and decisions. Simulation and modeling are the basic tools used to simplify the problem, abstract system behavior, state and explore the relationships among the components of the system, understand the system's essence and behavior, predict results, and apply knowledge to help the decision maker make high-quality decisions. One type of decision support system addresses the problem of selecting a choice from many alternatives [George, 1996]. In other words, the problem is to evaluate and rank a finite number of alternatives with respect to a finite number of criteria. The computed ranking depends on the values of the criteria variables and on their weights, which directly determine the influence of the variables. How to weight each criterion, and how the weights influence the preference among the alternatives, is a very important question in decision research. Much research has been done in this area, but most of it is subjective. The best weight values should reflect the information in the data set and the system's behavior. Principal component analysis (PCA) can reduce the dimensionality of the data set and simplify the interrelated variables while retaining most of the information present in the data set. Much research has indicated that principal component analysis has an intuitively satisfying interpretation and has illustrated its application in areas where judgments are not easy to come by [Ahamad, 1967; Bailey, 1956; Cahalan, 1983; Chang, 1988; Cochran and Home, 1977; Dawkins, 1989; Jolicoeur, 1959; Jolicoeur and Mosimann, 1960; Kloek and Mennes, 1960; Lee and Chang, 1976; Rao, 1964; Sloan, 1983; Wold, 1976]. Dawkins [Dawkins, 1989], using the first principal component of national track records, ranked world track performance. But principal components are influenced by round-off error, sample data variation, and sampling error. How does the ranking change when the weights are changed, and what are the intervals of the weights within which the final ranking of the alternatives does not change? The objective of this research is to explore the application of PCA in decision support systems; to investigate the model's behavior under small changes in its assumptions and parameters; to understand the key variables, and their relationships, that most affect the model's solutions and the corresponding decisions; to validate the model; and to find better and more robust solutions for some particular problems. A decision support system is implemented as part of this research, using the MS Visual C++ programming language under the MS Windows 95 environment. The system provides a graphical user interface (GUI) to view results. The remainder of this thesis is organized as follows. The computation of PCA, its application, and sensitivity analysis of decision systems are studied in Chapter 2. The design and implementation of the system are explained in Chapter 3, which also briefly describes the process, class architecture, and key algorithms. The results and the interface are shown in Chapter 4. Chapter 5 gives conclusions and directions for future work.
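    The weighting scheme described above can be sketched in a few lines: standardize the criteria matrix, take the first principal component as the weight vector, and rank alternatives by their scores along it. The data matrix below is illustrative and not taken from the thesis.

```python
# Minimal sketch of PCA-based ranking of alternatives.
import numpy as np

def pca_rank(X):
    """Rank alternatives (rows) by their first-principal-component score."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize each criterion
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    w = Vt[0]                                      # first PC = criterion weights
    if w.sum() < 0:                                # resolve the arbitrary sign
        w = -w
    scores = Z @ w
    return np.argsort(-scores), w                  # best alternative first

# Five alternatives evaluated on three criteria (illustrative data).
X = np.array([[7.0, 3.2, 9.1],
              [6.1, 4.0, 8.3],
              [8.2, 2.9, 9.5],
              [5.5, 4.4, 7.8],
              [7.7, 3.5, 9.0]])
order, weights = pca_rank(X)
print("ranking:", order, "weights:", weights)
```

    The sensitivity question raised in the abstract then becomes: how much can `weights` be perturbed before `order` changes?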

    Efficient similarity search in high-dimensional data spaces

    Similarity search in high-dimensional data spaces is a popular paradigm for many modern database applications, such as content-based image retrieval, time series analysis in financial and marketing databases, and data mining. Objects are represented as high-dimensional points or vectors based on their important features. Object similarity is then measured by the distance between feature vectors, and similarity search is implemented via range queries or k-Nearest Neighbor (k-NN) queries. Implementing k-NN queries via a sequential scan of large tables of feature vectors is computationally expensive. Building multi-dimensional indexes on the feature vectors for k-NN search also tends to be unsatisfactory when the dimensionality is high, due to the poor index performance caused by the curse of dimensionality. Dimensionality reduction using the Singular Value Decomposition is the approach adopted in this study to deal with high-dimensional data. Since the data distribution of many real-world datasets tends to be heterogeneous, dimensionality reduction on the entire dataset may cause a significant loss of information. A more efficient representation is sought by clustering the data into homogeneous subsets of points and applying dimensionality reduction to each cluster separately, i.e., using local rather than global dimensionality reduction. The thesis deals with improving the efficiency of query processing associated with local dimensionality reduction methods, such as Clustering and Singular Value Decomposition (CSVD) and Local Dimensionality Reduction (LDR). Variations in the implementation of CSVD are considered, and the two methods are compared in terms of compression ratio, CPU time, and retrieval efficiency. An exact k-NN algorithm is presented for local dimensionality reduction methods by extending an existing multi-step k-NN search algorithm designed for global dimensionality reduction. Experimental results show that the new method requires less CPU time than the approximate method originally proposed for CSVD, at a comparable level of accuracy. Optimal subspace dimensionality selection aims to minimize the total query cost; the problem is complicated by the fact that each cluster can retain a different number of dimensions. A hybrid method is presented, combining the best features of the CSVD and LDR methods, to find optimal subspace dimensionalities for clusters generated by local dimensionality reduction methods. The experiments show that the proposed method works well for both real-world and synthetic datasets.
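    A hedged sketch of the local-reduction step in the spirit of CSVD/LDR: cluster the points, then project each cluster onto its own top singular vectors, so heterogeneous data loses less information than a single global projection would. The cluster count and retained dimensionality are illustrative parameters, not values from the thesis.

```python
# Local dimensionality reduction: per-cluster SVD bases.
import numpy as np
from sklearn.cluster import KMeans

def local_svd_reduce(X, n_clusters=4, dims=2):
    """Cluster, then project each cluster onto its own top singular vectors."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    reduced, bases, centroids = {}, {}, {}
    for c in range(n_clusters):
        Xc = X[labels == c]
        mu = Xc.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
        B = Vt[:dims].T                  # per-cluster subspace basis
        reduced[c] = (Xc - mu) @ B       # low-dimensional representation
        bases[c], centroids[c] = B, mu   # kept for query-time projection
    return labels, reduced, bases, centroids

X = np.random.default_rng(0).normal(size=(500, 32))
labels, reduced, bases, centroids = local_svd_reduce(X)
print({c: r.shape for c, r in reduced.items()})
```

    At query time, a k-NN search would project the query into each cluster's subspace and refine candidates with exact distances, in the multi-step style the abstract describes.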

    A systematic review on machine learning models for online learning and examination systems

    Examinations and assessments play a vital role in every student's life; they determine their future and career paths. The COVID-19 pandemic had adverse impacts on all areas, including education. Regular classroom learning and face-to-face examinations were not feasible if widespread infection was to be avoided and safety ensured. During these difficult times, technological advances allowed students to continue their education without academic breaks. Machine learning has been key to this digital transformation of schools and colleges from face-to-face to online mode: online learning and examination during lockdown were made possible by machine learning methods. In this article, a systematic review of the role of machine learning in lockdown exam management systems was conducted by evaluating 135 studies from the last five years. The significance of machine learning across the entire exam cycle, from pre-exam preparation through the conduct of the examination to evaluation, is studied and discussed. The supervised and unsupervised machine learning algorithms used in each process are identified and categorized. The primary aspects of examinations, such as authentication, scheduling, proctoring, and cheating or fraud detection, are investigated in detail from a machine learning perspective. Key capabilities, such as prediction of at-risk students, adaptive learning, and monitoring of students, are examined to deepen the understanding of the role of machine learning in exam preparation, followed by its role in the post-examination process. Finally, the review concludes with the issues and challenges that machine learning raises for examination systems, and these issues are discussed together with possible solutions.

    Concepts for the Representation, Storage, and Retrieval of Spatio-Temporal Objects in 3D/4D Geo-Information Systems

    The quickly increasing number of spatio-temporal applications in fields like environmental management and geology poses a new challenge for the development of database systems. This thesis addresses three areas of the problem of integrating spatio-temporal objects into databases. First, a new representational model for continuously changing spatial 3D objects is introduced and transferred into a small system of classes within an object-oriented database framework. The model extends simplicial cell complexes to the spatio-temporal setting. The problem of closure under certain operations is investigated. Second, internal data structures are introduced that represent instances of the (user-level) spatio-temporal classes. A new technique provides a compromise between compact storage and efficient retrieval of spatio-temporal objects. These structures correspond to temporal graphs and support updates as well as the maintenance of connected components over time. Third, it is shown how to realize further operations on the new type of objects, among them range queries, intersection tests, and the Euclidean distance function.
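    A minimal sketch of the "continuously changing object" idea, assuming linear interpolation between stored time snapshots; the class name and interface are illustrative and much simpler than the thesis's simplicial-complex model.

```python
# A 3D vertex whose position changes continuously over time,
# stored as discrete snapshots and interpolated in between.
from bisect import bisect_right

class STVertex:
    def __init__(self, snapshots):
        # snapshots: time-sorted list of (t, (x, y, z)) pairs
        self.times = [t for t, _ in snapshots]
        self.pos = [p for _, p in snapshots]

    def at(self, t):
        """Position at time t, linearly interpolated between snapshots."""
        if t == self.times[-1]:
            return self.pos[-1]
        i = bisect_right(self.times, t) - 1
        if i < 0 or i >= len(self.times) - 1:
            raise ValueError("t lies outside the vertex's lifetime")
        t0, t1 = self.times[i], self.times[i + 1]
        a = (t - t0) / (t1 - t0)
        return tuple((1 - a) * p0 + a * p1
                     for p0, p1 in zip(self.pos[i], self.pos[i + 1]))

v = STVertex([(0.0, (0.0, 0.0, 0.0)), (1.0, (2.0, 0.0, 1.0))])
print(v.at(0.5))  # (1.0, 0.0, 0.5)
```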

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. They fall into three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Retrieval of Spatially Similar Images using Quadtree-based Indexing

    Multimedia applications involving image retrieval demand fast response, which requires efficient database indexing. Generally, a two-level indexing scheme in an image database can help to reduce the search space for a given query image. The first level must significantly reduce the search space for the second stage of comparisons and must be computationally efficient; it is also required to guarantee that no new false negatives result. In this thesis, we propose a new image signature representation for the first level of a two-level image indexing scheme, based on a hierarchical decomposition of the image space into a spatial arrangement of image features (quadtrees). We also formally prove that the proposed signature representation not only results in fewer matching signatures but also introduces no new false negatives. Further, the performance of the retrieval scheme with the proposed signature representation is evaluated for various feature point detection algorithms.
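    A hedged sketch of the signature idea: decompose the image space with a fixed-depth quadtree and record one occupancy bit per node, giving a bit string usable for first-level filtering. The depth, encoding, and traversal order are illustrative choices rather than the thesis's exact scheme.

```python
# Quadtree occupancy signature over detected feature points.
def quadtree_signature(points, x0, y0, x1, y1, depth):
    """Bit list: one occupancy bit per quadtree node, depth-first preorder."""
    inside = [(x, y) for x, y in points if x0 <= x < x1 and y0 <= y < y1]
    bits = [1 if inside else 0]
    if depth > 0:
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        for qx0, qy0, qx1, qy1 in ((x0, y0, mx, my), (mx, y0, x1, my),
                                   (x0, my, mx, y1), (mx, my, x1, y1)):
            bits += quadtree_signature(inside, qx0, qy0, qx1, qy1, depth - 1)
    return bits

# Feature points detected in a 100 x 100 image (illustrative).
pts = [(10, 12), (80, 15), (55, 60)]
sig = quadtree_signature(pts, 0, 0, 100, 100, depth=2)
print(sig)  # 21 bits: root, 4 quadrants, 16 sub-quadrants
```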