    Visualization of multidimensional data with collocated paired coordinates and general line coordinates

    Multidimensional data are often visualized by splitting the n-D data into a set of low-dimensional views. While useful, this destroys the integrity of the n-D data and leads to a shallow understanding of complex n-D data; to overcome this, the viewer must solve the difficult perceptual task of assembling the low-dimensional pieces back into whole n-D vectors. Another approach is lossy dimension reduction that maps n-D vectors to 2-D vectors (e.g., Principal Component Analysis). Such 2-D vectors carry only part of the information in the n-D vectors, with no way to restore the n-D vectors exactly from them. An alternative route to a deeper understanding of n-D data is 2-D visual representations that fully preserve the n-D data, as Parallel and Radial coordinates do. Developing new dimension-preserving methods is a long-standing and challenging task, which we address by proposing Paired Coordinates, a new type of n-D data visual representation, and by generalizing Parallel and Radial coordinates into General Line Coordinates. The key novelty of Paired Coordinates is that a single 2-D plot represents an n-D point as an oriented graph built by collocating pairs of attributes. The advantage of General Line Coordinates and Paired Coordinates is that they provide a common framework that includes Parallel and Radial coordinates and generates a large number of new visual representations of multidimensional data without lossy dimension reduction.
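    To make the collocation idea concrete, the sketch below (a minimal, hypothetical illustration in Python/matplotlib, not the authors' implementation) draws one 6-D point as an oriented chain of the attribute pairs (x1,x2) -> (x3,x4) -> (x5,x6) in a single 2-D plot; since every coordinate appears in the plot, the original 6-D point can be read back exactly.

```python
import matplotlib.pyplot as plt

def collocated_paired_coordinates(point, ax):
    """Draw one even-dimensional n-D point as an oriented 2-D chain.

    Consecutive attributes are paired into 2-D points (x1, x2),
    (x3, x4), ... that are collocated in the same plane and joined
    by arrows, so the n-D point is shown losslessly in one 2-D plot.
    """
    assert len(point) % 2 == 0, "Paired Coordinates need an even dimension"
    pairs = [(point[i], point[i + 1]) for i in range(0, len(point), 2)]
    xs, ys = zip(*pairs)
    ax.plot(xs, ys, "o")  # the collocated pair-points
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        ax.annotate("", xy=(x1, y1), xytext=(x0, y0),
                    arrowprops=dict(arrowstyle="->"))  # edge orientation

fig, ax = plt.subplots()
collocated_paired_coordinates([0.2, 0.4, 0.5, 0.9, 0.8, 0.3], ax)  # a 6-D point
plt.show()
```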

    Image Information Mining Systems

    Tools Used in Big Data Analytics

    Big data is a prominent current topic in both research and industry, studied in depth to obtain the valuable results needed to meet future data mining and analysis needs. Big data refers to enormous amounts of unstructured data created by high-performance applications ranging from scientific computing to social networks, and from e-government to medical information systems. There is therefore a corresponding need to analyze these data and extract valuable results from them. This paper focuses on big data analytics and the different tools used for big data analysis; its sections give an overview of different aspects of big data, such as big data analysis, big data storage techniques, and the tools used for big data analysis.

    Edge-based mining of frequent subgraphs from graph streams

    In the current era of Big data, high volumes of valuable data can be generated at high velocity from a high variety of data sources in various real-life applications, ranging from sensor networks to social networks and from bio-informatics to chemical informatics. Big data are also available in the business, education, engineering, finance, healthcare, scientific, telecommunication, and transportation domains. A collection of these data can be viewed as a big dynamic graph structure, and embedded in it is implicit, previously unknown, and potentially useful knowledge. Consequently, efficient knowledge discovery algorithms for mining frequent subgraphs from these dynamic, streaming, graph-structured data are in demand. On the one hand, some existing algorithms discover collections of frequently co-occurring edges, which may be disjoint. On the other hand, other existing algorithms discover frequent subgraphs but require very large memory space, and with high volumes of Big data, available memory may be limited. To discover collections of frequently co-occurring connected edges, we present in this paper two efficient algorithms that require little memory. Evaluation results show the efficiency of our edge-based algorithms in mining frequent subgraphs from graph streams.
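    The abstract does not detail the two algorithms, so the following Python sketch is only an assumption-laden illustration of the edge-based idea: keep per-edge counts over the stream (small memory, no candidate-subgraph enumeration) and, after each batch, group the frequent edges into connected edge sets.

```python
from collections import Counter, defaultdict

def frequent_connected_edges(batches, minsup):
    """Sketch of edge-based mining over a graph stream.

    `batches` is an iterable of edge lists (one per stream batch);
    edges are (u, v) tuples. Only per-edge counts are kept in memory;
    after each batch the frequent edges are grouped into connected
    edge sets (the mined subgraphs).
    """
    counts = Counter()
    for batch in batches:
        counts.update(batch)
        frequent = {e for e, c in counts.items() if c >= minsup}
        yield connected_edge_sets(frequent)

def connected_edge_sets(edges):
    """Group a set of edges into connected components of edges."""
    incident = defaultdict(set)  # vertex -> edges touching it
    for u, v in edges:
        incident[u].add((u, v))
        incident[v].add((u, v))
    seen, components = set(), []
    for edge in edges:
        if edge in seen:
            continue
        stack, component = [edge], set()
        while stack:
            e = stack.pop()
            if e in seen:
                continue
            seen.add(e)
            component.add(e)
            stack.extend(incident[e[0]] | incident[e[1]])
        components.append(component)
    return components

# Example: after two batches, the edges seen twice form one connected subgraph.
stream = [[("a", "b"), ("b", "c"), ("x", "y")],
          [("a", "b"), ("b", "c")]]
for result in frequent_connected_edges(stream, minsup=2):
    print(result)  # [] after batch 1, then [{('a','b'), ('b','c')}]
```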

    Outlier Detection in Heterogeneous Datasets using Automatic Tuple Expansion

    Rapidly developing areas of information technology are generating massive amounts of data. Human errors, sensor failures, and other unforeseen circumstances unfortunately tend to undermine the quality and consistency of these datasets by introducing outliers: data points that exhibit surprising behavior when compared to the rest of the data. Characterizing, locating, and in some cases eliminating these outliers offers interesting insight about the data under scrutiny and reinforces the confidence that one may have in conclusions drawn from otherwise noisy datasets. In this paper, we describe a tuple expansion procedure which reconstructs rich information from semantically poor SQL data types such as strings, integers, and floating point numbers. We then use this procedure as the foundation of a new user-guided outlier detection framework, dBoost, which relies on inference and statistical modeling of heterogeneous data to flag suspicious fields in database tuples. We show that this novel approach achieves good classification performance, both in traditional numerical datasets and in highly non-numerical contexts such as mostly textual datasets. Our implementation is publicly available under version 3 of the GNU General Public License.
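    As a rough, hypothetical illustration of the idea (the feature choices and the per-feature Gaussian model below are assumptions made for this sketch, not dBoost's actual expansion rules), one can expand each semantically poor value into a richer feature vector and flag fields whose features deviate strongly from the per-column statistics:

```python
import math
import statistics

def expand(value):
    """Expand a semantically poor value into richer features (a sketch).

    Strings yield length and character-class counts; numbers yield
    magnitude and fractional part. dBoost's real expansion is richer;
    these particular features are illustrative assumptions.
    """
    if isinstance(value, str):
        return {"len": len(value),
                "digits": sum(c.isdigit() for c in value),
                "upper": sum(c.isupper() for c in value)}
    return {"magnitude": math.log10(abs(value) + 1),
            "frac": abs(value) % 1}

def flag_outliers(column, threshold=2.0):
    """Flag indices whose expanded features lie more than `threshold`
    standard deviations from the column mean on any feature (a simple
    per-feature Gaussian model standing in for dBoost's inference)."""
    expanded = [expand(v) for v in column]
    flagged = set()
    for feature in expanded[0]:
        xs = [e[feature] for e in expanded]
        mu, sigma = statistics.mean(xs), statistics.pstdev(xs)
        if sigma == 0:
            continue
        flagged |= {i for i, x in enumerate(xs)
                    if abs(x - mu) / sigma > threshold}
    return flagged

print(flag_outliers([3.5, 3.6, 3.4, 3.5, 3.7, 350.0]))  # -> {5}
```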

    Ontology based warehouse modeling of fractured reservoir ecosystems - for an effective borehole and petroleum production management
