
    A Survey of Parallel Data Mining

    With the fast, continuous increase in the number and size of databases, parallel data mining is a natural and cost-effective approach to tackling the problem of scalability in data mining. Recently there has been considerable research on parallel data mining. However, most projects focus on the parallelization of a single kind of data mining algorithm/paradigm. This paper surveys parallel data mining from a broader perspective. More precisely, we discuss the parallelization of data mining algorithms of four knowledge discovery paradigms, namely rule induction, instance-based learning, genetic algorithms and neural networks. Using the lessons learned from this discussion, we also derive a set of heuristic principles for designing efficient parallel data mining algorithms.
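    The survey's subject lends itself to a small illustration: the sketch below (not taken from the paper) shows the generic count-then-merge pattern that data-parallel versions of rule induction typically rely on, using Python's multiprocessing; the toy records and function names are assumptions made for the example.

```python
# Illustrative sketch only: data-parallel counting, a building block used by
# many parallel rule-induction algorithms. Names and data are hypothetical.
from multiprocessing import Pool
from collections import Counter

def count_partition(records):
    """Count (attribute, value, class) co-occurrences in one horizontal partition."""
    counts = Counter()
    for attributes, label in records:
        for attr, value in attributes.items():
            counts[(attr, value, label)] += 1
    return counts

def parallel_counts(partitions, workers=4):
    """Map the counting step over partitions, then reduce by summing counters."""
    with Pool(workers) as pool:
        partial = pool.map(count_partition, partitions)
    total = Counter()
    for c in partial:
        total.update(c)
    return total

if __name__ == "__main__":
    # Two toy horizontal partitions of a labelled data set.
    p1 = [({"outlook": "sunny", "windy": "no"}, "play"),
          ({"outlook": "rain", "windy": "yes"}, "stay")]
    p2 = [({"outlook": "sunny", "windy": "yes"}, "stay")]
    print(parallel_counts([p1, p2], workers=2))
```

    Each worker scans only its own horizontal partition, and only the compact count tables are merged afterwards, which keeps communication between workers low.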

    A Review on: Association Rule Mining Using Privacy for Partitioned Database

    Association rule mining and frequent itemset mining are two prominent and widely used data analysis techniques. Traditional approaches treat vertically partitioned and horizontally partitioned databases separately; on that basis, we present a framework that handles both horizontally and vertically partitioned databases collaboratively, with a privacy-preserving mechanism. Data owners want to learn the frequent itemsets or association rules from an aggregate data set while disclosing as little about their raw data as possible to other data owners and third parties. To guarantee data privacy, a symmetric encryption technique is used to obtain better results. A cloud-aided frequent itemset mining solution is used to build an association rule mining solution. The resulting solutions are designed for outsourced databases and allow multiple data owners to share their data securely without compromising data privacy. Data security is one of the key concerns when outsourcing data to external users. Traditionally, the Fast Distributed Mining algorithm was proposed for securing distributed data. This work addresses the problem of securely mining association rules over data partitioned both horizontally and vertically. A frequent itemset algorithm and a distributed association rule mining algorithm are used to carry out this process effectively on partitioned data, including management of the data in the outsourcing process for distributed databases. The approach maintains efficient privacy over both the vertical and horizontal views of the data in secure mining applications.
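    As a rough illustration (not the protocol proposed above), the sketch below shows the basic count-distribution idea behind mining frequent itemsets over horizontally partitioned data: each site computes local support counts and a coordinator merges them against a global threshold. The privacy layer, e.g. encrypting the exchanged counts with a symmetric cipher, is deliberately omitted, and all item names are invented.

```python
# Minimal sketch of count distribution over horizontally partitioned data:
# each site reports local support counts and the coordinator keeps itemsets
# whose global support clears the threshold. The privacy-preserving layer
# described in the abstract (e.g. symmetric encryption of the exchanged
# counts) is omitted here.
from itertools import combinations
from collections import Counter

def local_supports(transactions, size):
    """Support counts of all itemsets of a given size in one site's data."""
    counts = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), size):
            counts[itemset] += 1
    return counts

def globally_frequent(sites, size, min_support):
    """Merge per-site counts and filter by the global minimum support."""
    total = Counter()
    for site in sites:
        total.update(local_supports(site, size))
    return {s: c for s, c in total.items() if c >= min_support}

site_a = [{"bread", "milk"}, {"bread", "butter"}]
site_b = [{"bread", "milk", "butter"}, {"milk"}]
print(globally_frequent([site_a, site_b], size=2, min_support=2))
```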

    Risk Assessment of Yellow Pine Mining Area

    The risk assessment process involves identifying and characterizing hazards, determining dose-response relationships, and assessing possible exposures to toxins in order to inform risk management. The goal of this project was to complete the steps of a risk assessment and develop a perspective on risk management. Data collected from publicly available databases, scholarly articles, and targeted sources were used to perform a risk assessment for the Yellow Pine Mining Area. Social, economic, political, and/or legal perspectives were considered when determining the best risk management approaches.
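    To make the dose-response and exposure steps concrete, here is a minimal, purely illustrative calculation of an ingestion-pathway average daily dose and hazard quotient; the parameter values are placeholders rather than data from the Yellow Pine assessment.

```python
# Back-of-the-envelope sketch of the exposure and hazard-quotient arithmetic
# that a dose-response / exposure assessment step typically involves.
# All parameter values below are placeholders, not data from the Yellow Pine study.
def average_daily_dose(conc_mg_per_l, intake_l_per_day, exposure_freq_days_per_yr,
                       exposure_duration_yr, body_weight_kg, averaging_time_days):
    """Average daily dose (mg/kg-day) for an ingestion pathway."""
    return (conc_mg_per_l * intake_l_per_day * exposure_freq_days_per_yr *
            exposure_duration_yr) / (body_weight_kg * averaging_time_days)

def hazard_quotient(add_mg_per_kg_day, reference_dose_mg_per_kg_day):
    """HQ > 1 flags a potential non-cancer concern; HQ <= 1 usually does not."""
    return add_mg_per_kg_day / reference_dose_mg_per_kg_day

add = average_daily_dose(0.05, 2.0, 350, 26, 70, 26 * 365)
print(round(hazard_quotient(add, reference_dose_mg_per_kg_day=0.0003), 2))
```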

    Knowledge construction: the role of data mining tools

    This paper seeks to integrate the process of knowledge discovery in databases into the wider context of the creation and sharing of organisational knowledge. The focus on the process of knowledge discovery has been mainly technological. The paper attempts to enrich that perspective by stressing the insights gained by integrating the knowledge discovery process into the social process of knowledge construction that makes KDD meaningful. To achieve this goal, a test case is presented. A component of the database of the Portuguese Army was used to test the PADRÃO system. This system integrates a set of databases and principles of qualitative spatial reasoning, which are implemented in the Clementine Data Mining system. The process and the results obtained are then discussed in order to stress the insights that emerge when the focus changes from technology to the social construction of knowledge.

    Monitoring land use changes using geo-information : possibilities, methods and adapted techniques

    Monitoring land use with geographical databases is widely used in decision-making. This report presents the possibilities, methods and adapted techniques for using geo-information in monitoring land use changes. The municipality of Soest was chosen as the study area, and three national land use databases, viz. Top10Vector, CBS land use statistics and LGN, were used. The restrictions of geo-information for monitoring land use changes are indicated. New methods and adapted techniques improve the monitoring result considerably. Providers of geo-information, however, should coordinate on update frequencies, semantic content and spatial resolution to enable better monitoring of land use by combining data sets.
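    One simple technique behind such monitoring is post-classification comparison: cross-tabulating the land-use class of every cell at two dates yields a change matrix. The sketch below is a toy version of that idea; the grids and class labels are invented, not taken from the Soest study.

```python
# Toy sketch of post-classification change detection: cross-tabulating the
# land-use class of each cell at two dates gives a change matrix. The class
# labels and grids are made up; real inputs would come from databases such as
# Top10Vector, the CBS land use statistics or LGN after harmonising their legends.
from collections import Counter

def change_matrix(grid_t1, grid_t2):
    """Count transitions (class_at_t1, class_at_t2) over matching cells."""
    transitions = Counter()
    for row1, row2 in zip(grid_t1, grid_t2):
        for c1, c2 in zip(row1, row2):
            transitions[(c1, c2)] += 1
    return transitions

t1 = [["grass", "grass", "built"],
      ["forest", "grass", "built"]]
t2 = [["grass", "built", "built"],
      ["forest", "built", "built"]]

for (before, after), n in sorted(change_matrix(t1, t2).items()):
    print(f"{before} -> {after}: {n} cells")
```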

    An information-driven framework for image mining

    Image mining systems that can automatically extract semantically meaningful information (knowledge) from image data are increasingly in demand. The fundamental challenge in image mining is to determine how the low-level pixel representation contained in a raw image or image sequence can be processed to identify high-level spatial objects and relationships. To meet this challenge, we propose an efficient information-driven framework for image mining. We distinguish four levels of information: the Pixel Level, the Object Level, the Semantic Concept Level, and the Pattern and Knowledge Level. High-dimensional indexing schemes and retrieval techniques are also included in the framework to support the flow of information among the levels. We believe this framework represents the first step towards capturing the different levels of information present in image data and addressing the issues and challenges of discovering useful patterns/knowledge from each level.
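    To illustrate the level structure (this is not the authors' implementation), the sketch below pushes a tiny raster from the Pixel Level to the Object Level with thresholding and connected components, then attaches a crude Semantic Concept label; the threshold, labels and toy image are assumptions made for the example.

```python
# Illustrative sketch (not the framework's implementation) of how information
# can flow upward through the levels named in the abstract: raw pixels ->
# objects (via thresholding and connected components) -> a crude semantic label.
from collections import deque

def extract_objects(image, threshold):
    """Pixel Level -> Object Level: group above-threshold pixels into regions."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    objects = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                region, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols and
                                image[ny][nx] >= threshold and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                objects.append(region)
    return objects

def label_object(region):
    """Object Level -> Semantic Concept Level: a toy rule based on region size."""
    return "large structure" if len(region) >= 4 else "small structure"

image = [[0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 7]]
for obj in extract_objects(image, threshold=5):
    print(len(obj), "pixels ->", label_object(obj))
```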

    The Hidden Web, XML and Semantic Web: A Scientific Data Management Perspective

    The World Wide Web no longer consists just of HTML pages. Our work sheds light on a number of trends on the Internet that go beyond simple Web pages. The hidden Web provides a wealth of data in semi-structured form, accessible through Web forms and Web services. These services, as well as numerous other applications on the Web, commonly use XML, the eXtensible Markup Language. XML has become the lingua franca of the Internet, allowing customized markups to be defined for specific domains. On top of XML, the Semantic Web grows as a common structured data source. In this work, we first explain each of these developments in detail. Using real-world examples from scientific domains of great interest today, we then demonstrate how these new developments can assist the management, harvesting, and organization of data on the Web. Along the way, we also illustrate the current research avenues in these domains. We believe that this effort will help bridge multiple database tracks, thereby attracting researchers with a view to extending database technology. (Comment: EDBT Tutorial, 2011)
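    As a small, self-contained illustration of the custom markup idea (the element and attribute names below are invented, not taken from the tutorial), the sketch parses a toy scientific XML record with Python's standard library and extracts its fields.

```python
# Small sketch of the kind of XML handling the tutorial motivates: a custom
# markup for a scientific record is parsed and queried with the standard
# library. The element and attribute names here are invented for illustration.
import xml.etree.ElementTree as ET

document = """
<observations domain="astronomy">
  <observation id="obs-1">
    <target>M31</target>
    <magnitude>3.4</magnitude>
  </observation>
  <observation id="obs-2">
    <target>M42</target>
    <magnitude>4.0</magnitude>
  </observation>
</observations>
"""

root = ET.fromstring(document)
for obs in root.findall("observation"):
    target = obs.findtext("target")
    magnitude = float(obs.findtext("magnitude"))
    print(obs.get("id"), target, magnitude)
```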
