55 research outputs found

    A Analysis of Different Type of Advance database System For Data Mining Based on Basic Factor

    Conventional databases cannot handle data of such large scale and variety; a database is therefore needed that supports the creation, storage, indexing, and retrieval of large and widely varied data for mining. This research paper presents data mining approaches for advanced data types (multimedia, spatial, time-series, and heterogeneous data) together with the database management techniques that support their creation, storage, indexing, and retrieval, including advanced data structures and the use of metadata to store such data. The paper argues that database management systems should be extended to accommodate these new data types and to enable search based on their contents. Media, geometry, time, and calendar objects are modeled as attributes of abstract data types. Multimedia, spatial, time-series, and heterogeneous databases are described from the viewpoint of data mining methods, database management techniques, data types, and applications. DOI: 10.17762/ijritcc2321-8169.15020

    Environmental Decision-making utilizing a Web GIS to Monitor Hazardous Industrial Emissions in the Valencian community of Spain

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies. Air pollution is a critical issue in contemporary times. For this reason, officials and environmental managers need suitable tools for the visualization, manipulation and analysis of environmental data. Environmental concerns in Europe have encouraged the European Environment Agency (EEA) to create the European Pollutant Release and Transfer Register (E-PRTR). The E-PRTR is vital and valuable because society will benefit if the data are used to improve monitoring and consequently advance environmental management. However, the data are not accessible in an interoperable way, which complicates their use and limits their contribution to environmental monitoring. This paper describes a Web GIS system developed for monitoring industrial emissions using environmental data released by the EEA. Four research objectives are addressed: (1) design and create an interoperable spatial database to store environmental data, (2) develop a Web GIS to manipulate the spatial database, facilitate air pollution monitoring and enhance risk assessment, (3) implement OGC standards to provide data interoperability and integration into a Web GIS, (4) create a model to simulate the distribution of air pollutants and assess a population’s exposure to industrial emissions. The proposed approach towards interoperability is the adoption of a service-based architecture for the implementation of a three-tier Web GIS application. The system prototype is developed using open source tools for the Valencian Community of Spain.
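As a concrete aside on objective (3), the following sketch shows the kind of OGC-standard request such a Web GIS tier issues to a map server. This is an illustration, not code from the thesis: the endpoint and layer name are hypothetical placeholders, and the parameters follow the standard WMS 1.3.0 GetMap operation.

```python
# Sketch: composing an OGC WMS 1.3.0 GetMap request URL.
# Endpoint and layer name are hypothetical placeholders.
from urllib.parse import urlencode

WMS_ENDPOINT = "https://example.org/geoserver/wms"  # hypothetical server

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "eprtr:emissions",   # hypothetical layer name
    "CRS": "EPSG:4326",
    "BBOX": "38.0,-1.5,40.8,0.7",  # roughly the Valencian Community (lat/lon order in 1.3.0)
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",
}

getmap_url = f"{WMS_ENDPOINT}?{urlencode(params)}"
print(getmap_url)
```

Serving layers through standard operations like this, rather than proprietary endpoints, is what makes the spatial database consumable by any OGC-compliant client.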

    The United States Marine Corps Data Collaboration Requirements: Retrieving and Integrating Data From Multiple Databases

    The goal of this research is to develop an information sharing and database integration model and to suggest a framework that fully satisfies the United States Marine Corps collaboration requirements as well as its information sharing and database integration needs. This research is exploratory; it focuses on a single initiative: IT-21. The IT-21 initiative is set out in Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st Century Force, and states that the Navy and Marine Corps information infrastructure will be based largely on commercial systems and services, and that the Department of the Navy must ensure that these systems are seamlessly integrated and that information transported over the infrastructure is protected and secure. The Delphi technique, a qualitative research method, was used to develop a Holistic Model and to suggest a framework for information sharing and database integration. Data were collected primarily from mid-level to senior information officers, with a focus on Chief Information Officers. In addition, an extensive literature review was conducted to gain insight into known similarities and differences in Strategic Information Management, information sharing strategies, and database integration strategies. It is hoped that the Armed Forces and the Department of Defense will benefit from future development of the information sharing and database integration Holistic Model.

    Achieving Class-Based QoS for Transactional Workloads


    Comparison of Graph Databases and Relational Databases When Handling Large-Scale Social Data

    Over the past few years, with the rapid development of mobile technology, more people use mobile social applications such as Facebook, Twitter and Weibo in their daily lives, and the amount of social data keeps growing. Finding a suitable approach to store and process this social data, especially at large scale, is therefore important for social network companies. Traditionally, a relational database, which represents data in terms of tables, has been widely used in legacy applications. Meanwhile, a graph database, a kind of NoSQL database, is developing rapidly to handle the growing amount of unstructured or semi-structured data. Each storage approach has its own advantages: a relational database is the more mature technology, while a graph database handles graph-like data more naturally. In this research, relational and graph databases are compared on their capability to store and process large-scale social data. Two kinds of analysis inform the comparison: a quantitative analysis of storage cost and execution time, and a qualitative analysis of five criteria, namely maturity, ease of programming, flexibility, security and data visualization. A simple mobile social application is also developed for the experiments. The comparison is used to determine which kind of database is more suitable for handling large-scale social data; future research can extend it to more graph database models and real-world social data sets.
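To make the relational-versus-graph contrast concrete, here is a small sketch (an illustration, not code from the study): a two-hop "friend of friend" query over an edge table in SQLite. Each additional hop costs another self-join on the edge table, which is exactly the kind of traversal a graph database expresses as a single path pattern (e.g. one MATCH clause in Cypher).

```python
# Sketch: multi-hop traversal on a relational edge table.
# Table names and sample data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE follows (src INTEGER, dst INTEGER);
    INSERT INTO users VALUES (1,'alice'), (2,'bob'), (3,'carol'), (4,'dave');
    INSERT INTO follows VALUES (1,2), (2,3), (3,4);
""")

# Friends-of-friends of alice (id 1): two self-joins on the edge table.
rows = conn.execute("""
    SELECT DISTINCT u.name
    FROM follows f1
    JOIN follows f2 ON f2.src = f1.dst
    JOIN users u ON u.id = f2.dst
    WHERE f1.src = 1
""").fetchall()

foaf = [r[0] for r in rows]
print(foaf)  # ['carol']
```

A three-hop query needs a third join (or a recursive CTE), and the SQL grows with the path length, while the underlying data is naturally a graph.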

    Dynacore Final Report, Plasma Physics prototype

    The generation and behaviour of plasma in a fusion device, and its interaction with surrounding materials, are studied by observing several phenomena that accompany a plasma discharge. These phenomena are recorded by so-called diagnostics: instruments comprising complex electronic equipment coupled to various sensors. The generation of the plasma is likewise governed by electronic systems that control different parameters of the fusion device, the Tokamak, and of auxiliary equipment.

    High Energy Astrophysics Program

    This report reviews activities performed by members of the USRA contract team during the six months of the reporting period and projects activities for the coming six months. Activities took place at the Goddard Space Flight Center, in the Laboratory for High Energy Astrophysics. Developments concern instrumentation, observation, data analysis, and theoretical work in astrophysics. Missions supported include the Advanced Satellite for Cosmology and Astrophysics (ASCA), the X-ray Timing Experiment (XTE), the X-ray Spectrometer (XRS), Astro-E, the High Energy Astrophysics Science Archive Research Center (HEASARC), and others.

    Designing and developing a prototype indigenous knowledge database and devising a knowledge management framework

    Thesis (M. Tech.) - Central University of Technology, Free State, 2009. The purpose of the study was to design and develop a prototype Indigenous Knowledge (IK) database that will be productive within a Knowledge Management (KM) framework specifically focused on IK. The need to develop a prototype IK database that can help standardise the work being done in the field of IK within South Africa has been established in the Indigenous Knowledge Systems (IKS) policy, which stated that “common standards would enable the integration of widely scattered and distributed references on IKS in a retrievable form. This would act as a bridge between indigenous and other knowledge systems” (IKS policy, 2004:33). In particular, within indigenous people’s organizations, holders of IK, whether individually or collectively, have a claim that their knowledge should not be exploited for elitist purposes without direct benefit to their empowerment and the improvement of their livelihoods. Establishing guidelines and a modus operandi (a KM framework) is important, especially when working with communities. Researchers go into communities to gather their knowledge and never return to the communities with their results; the communities feel enraged and wronged. Creating an IK network can curb such behaviour, or at least inform researchers and organisations that this behaviour is damaging. The importance of IK is that it provides the basis for problem-solving strategies for local communities, especially the poor, which can help reduce poverty. IK is a key element of the “social capital” of the poor: their main asset to invest in the struggle for survival, to produce food, to provide shelter, or to achieve control of their own lives. It is closely intertwined with their livelihoods. Many aspects of KM and IK were discussed, and a feasibility study for a KM framework was conducted to determine whether any existing KM frameworks can work in an organisation that works with IK.
Other factors that can influence IK are: guidelines for implementing a KM framework, information management, quality management, human factors/capital movement, leading role players in the field of IK, Intellectual Property Rights (IPR), ethics, guidelines for doing fieldwork, and a best plan for implementation. At this point, the focus shifts from KM and IK to the prototype IK database and its technical design, moving to a more hands-on development by looking at the different data models and their underlying structures. A well-designed database facilitates data management and becomes a valuable generator of information; a poorly designed database is likely to become a breeding ground for redundant data. The conceptual design stage used data modelling to create an abstract database structure that represents real-world objects in the most authentic way possible. The tools used to design the database are platform-independent software; therefore the design can be implemented on many different platforms. An elementary prototype graphical user interface was designed in order to illustrate the database’s three main functions: adding new members, adding new IK records, and searching the IK database. The IK database design took cognisance of what is currently prevailing in South Africa and the rest of the world with respect to IK and database development. The development of the database was done in such a way as to establish a standard database design for IK systems in South Africa. The goal was to design and develop a database that can be disseminated to researchers and organisations working in the field of IK so that the use of a template database can assist work in the field. Consequently, work in the field will be collected in the same way and based on the same model. At a later stage, the databases could be interlinked, giving South Africa one large knowledge repository for IK.
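The three main functions the abstract names (adding members, adding IK records, searching) can be sketched against a minimal relational schema. This is an illustration of the idea, not the thesis's actual design; all table, column and function names here are hypothetical.

```python
# Sketch: a minimal prototype IK database with the three functions
# named in the abstract. Schema and sample data are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE members (id INTEGER PRIMARY KEY, name TEXT, community TEXT);
    CREATE TABLE ik_records (
        id INTEGER PRIMARY KEY,
        member_id INTEGER REFERENCES members(id),
        title TEXT,
        description TEXT
    );
""")

def add_member(name, community):
    cur = db.execute("INSERT INTO members (name, community) VALUES (?, ?)",
                     (name, community))
    return cur.lastrowid

def add_record(member_id, title, description):
    cur = db.execute(
        "INSERT INTO ik_records (member_id, title, description) VALUES (?, ?, ?)",
        (member_id, title, description))
    return cur.lastrowid

def search(term):
    like = f"%{term}%"
    return [row[0] for row in db.execute(
        "SELECT title FROM ik_records WHERE title LIKE ? OR description LIKE ?",
        (like, like))]

member_id = add_member("N. Mokoena", "Free State")
add_record(member_id, "Medicinal use of aloe", "Traditional remedy for burns")
print(search("aloe"))  # ['Medicinal use of aloe']
```

A shared template schema along these lines is what would let separately collected IK databases be interlinked into one repository later, as the abstract envisages.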

    Semantic validation in spatio-temporal schema integration

    This thesis proposes to address the well-known database integration problem with a new method that combines functionality from database conceptual modeling techniques with functionality from logic-based reasoners. We elaborate a hybrid (modeling + validation) integration approach for spatio-temporal information integration at the schema level. The modeling part of our methodology is supported by the spatio-temporal conceptual model MADS, whereas the validation part of the integration process is delegated to the validation services of description logics. We thereby adhere to the principle that, rather than extending either formalism to cover all desirable functionality, a hybrid system in which the database component and the logic component cooperate, each performing the tasks for which it is best suited, is a viable solution for semantically rich information management. First, we develop a MADS-based flexible integration approach in which the integrated schema designer has several viable ways to construct a final integrated schema. For different related schema elements we provide the designer with four general policies and with a set of structural solutions, or structural patterns, within each policy. To always guarantee an integrated solution, we provide a preservation policy with a multi-representation structural pattern. To state the inter-schema mappings, we elaborate a correspondence language with explicit spatial and temporal operators. Our correspondence language thus has three facets: structural, spatial, and temporal, allowing the thematic representation as well as the spatial and temporal features to be related. With the inter-schema mappings, the designer can state correspondences between related populations and define the conditions that rule the matching at the instance level. These matching rules can then be used in query rewriting procedures or to match instances within the data integration process.
We associate a set of putative structural patterns with each type of population correspondence, providing the designer with a selection of patterns for flexible integrated schema construction. Second, we enhance our integration method by employing the validation services of the description logic formalism. It is not guaranteed that the designer can state all the inter-schema mappings manually, nor that they are all correct. We add the validation phase to ensure the validity and completeness of the set of inter-schema mappings. Inter-schema mappings cannot be validated autonomously; they are validated against the data model and the schemas they link. To implement our validation approach, we therefore translate the data model, the source schemas and the inter-schema mappings into a description logic formalism, preserving the spatial and temporal semantics of the MADS data model. Our modeling approach in description logic thus ensures that the model designer correctly defines spatial and temporal schema elements and inter-schema mappings. The added value of the complete translation (i.e., including the data model and the source schemas) is that we validate not only the inter-schema mappings but also the compliance of the source schemas with the data model, and infer implicit relationships within them. As a result of the validation procedure, the schema designer obtains a complete and valid set of inter-schema mappings and a set of valid (flexible) schematic patterns to apply when constructing an integrated schema that meets application requirements. To further our work, we model a framework in which a schema designer can follow our integration method and realize the schema integration task in an assisted way. We design two models, a UML model and a SEAM model, of a system that provides the integration functionalities. The models describe a framework in which several tools are employed together, each involved in the service it is best suited for.
We define the functionalities and the cooperation between the composing elements of the framework and detail the logic of the integration process in a UML activity diagram and in a SEAM operation model.