BioWarehouse: a bioinformatics database warehouse toolkit
BACKGROUND: This article addresses the problem of interoperation of heterogeneous bioinformatics databases. RESULTS: We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) and also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research.
CONCLUSION: BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
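Queries like the enzyme-gap analysis above become ordinary SQL once the sources share one schema. The following sketch imitates that style against a toy in-memory warehouse; the table and column names are invented for illustration and do not reflect the actual BioWarehouse schema:

```python
import sqlite3

# Build a tiny in-memory "warehouse" with hypothetical tables; the real
# BioWarehouse schema differs -- this only illustrates the style of query.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY);
CREATE TABLE protein_sequence (id INTEGER PRIMARY KEY, ec_number TEXT);
""")
db.executemany("INSERT INTO enzyme_activity VALUES (?)",
               [("1.1.1.1",), ("2.7.7.7",), ("4.2.1.20",)])
db.execute("INSERT INTO protein_sequence VALUES (1, '1.1.1.1')")

# EC numbers for which no sequence exists in any loaded source database.
rows = db.execute("""
    SELECT a.ec_number
    FROM enzyme_activity a
    LEFT JOIN protein_sequence s ON s.ec_number = a.ec_number
    WHERE s.id IS NULL
""").fetchall()
orphans = [r[0] for r in rows]
print(orphans)  # EC numbers lacking any sequence
```

Once every source is loaded into one schema, the "fraction of enzyme activities with no sequence" question reduces to a single anti-join like this one.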
A cooperative framework for molecular biology database integration using image object selection.
The theme and concept of 'Molecular Biology Database Integration' and the problems associated with it initiated the idea for this Ph.D research. Available technologies facilitate analysing the data independently and discretely, but they fail to integrate the data resources into more meaningful information. This, along with the integration issues, created the scope for this Ph.D research.
The research has reviewed the 'database interoperability' problems and has suggested a framework for integrating the molecular biology databases. The framework proposes to develop a cooperative environment for sharing information on the basis of common purpose among the molecular biology databases. The research has also reviewed other implementation and interoperability issues for laboratory-based, dedicated and target-specific databases. The research has addressed the following issues:
- diversity of molecular biology database schemas, schema constructs and schema implementation
- multi-database query using image object keying
- database integration technologies using context graphs
- automated navigation among these databases
This thesis has introduced a new approach for database implementation. It has introduced an interoperable component database concept to initiate multidatabase queries on gene mutation data. A number of data models have been proposed for gene mutation data, which form the basis for integrating the target-specific component database with the federated information system. The proposed data models are: data models for genetic trait analysis, classification of gene mutation data, pathological lesion data and laboratory data. The main feature of this component database is non-overlapping attributes, and it follows the non-redundant integration approach explained in the thesis. This is achieved by storing attributes that are neither in the union nor the intersection of the attributes that exist in public-domain molecular biology databases. Unlike the data warehousing technique, this feature is quite unique and novel. The component database will be integrated with other biological data sources for sharing information in a cooperative environment. This involves developing new tools. The thesis explains the role of these new tools, which are: metadata extractor, mapping linker, query generator and result interpreter. These tools are used for a transparent integration without creating any global schema of the participating databases.
The thesis has also established the concept of image object keying for multidatabase query and has proposed an algorithm for matching protein spots in gel electrophoresis images. A spot in a gel electrophoresis image initiates the query when it is selected by the user; the system matches the selected spot with similar spots in other resource databases. This image object keying method is an alternative to conventional multidatabase query, which requires writing complex SQL scripts. The method also resolves the semantic conflicts that exist among molecular biology databases.
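The matching step can be pictured as a nearest-neighbour search over spot coordinates. The sketch below is a minimal stand-in for the thesis's algorithm, assuming spots are already in a shared, pre-registered coordinate space (real gels would need warping/registration first):

```python
import math

def match_spot(selected, candidates, max_dist=5.0):
    """Return the candidate spot closest to the selected one, or None.

    Spots are (x, y, intensity) tuples. The shared coordinate space is
    an assumption made for illustration, not part of the thesis.
    """
    best, best_d = None, max_dist
    for c in candidates:
        d = math.hypot(c[0] - selected[0], c[1] - selected[1])
        if d < best_d:
            best, best_d = c, d
    return best

# User clicks a spot in the local gel image; we look it up remotely.
query = (10.0, 20.0, 0.8)
remote = [(50.0, 50.0, 0.3), (11.0, 21.0, 0.7), (10.5, 40.0, 0.9)]
print(match_spot(query, remote))  # -> (11.0, 21.0, 0.7)
```

The selected spot thus acts as the "key" of a multidatabase query, replacing a hand-written SQL join.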
The research has proposed a new framework based on the context of web data for interactions with different biological data resources. A formal description of the resource context is given in the thesis. Implementing the context in the Resource Description Framework (RDF) increases interoperability by providing descriptions of the resources and a navigation plan for accessing the web-based databases. A higher-level construct (has, provide and access) is developed to implement the context in RDF for web interactions. Interactions within the resources are achieved by utilising an integration domain to extract the required information in a single instance and without writing any query scripts. The integration domain allows the user to navigate and to execute the query plan within the resource databases. An extractor module collects elements from different target web sources and unifies them into a whole object on a single page. The proposed framework is tested on finding specific information, e.g., information on Alzheimer's disease, from public-domain biology resources such as the Protein Data Bank, the Genome Data Bank, Online Mendelian Inheritance in Man and a local database. Finally, the thesis offers further propositions and plans for future work.
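To illustrate the has/provide/access constructs, the sketch below encodes a resource context as plain triples and derives a navigation plan from them. The resource properties and access methods are invented for illustration and are not the thesis's actual model:

```python
# Context triples using the has / provide / access constructs.
# Predicates come from the thesis; objects here are hypothetical.
triples = [
    ("PDB",  "has",     "protein_structure"),
    ("OMIM", "has",     "disease_phenotype"),
    ("PDB",  "provide", "http://www.rcsb.org/"),
    ("OMIM", "provide", "https://omim.org/"),
    ("PDB",  "access",  "keyword_search"),
    ("OMIM", "access",  "gene_symbol_lookup"),
]

def navigation_plan(topic):
    """Resources that 'has' the topic, with their entry URL and access method."""
    plan = []
    for res, pred, obj in triples:
        if pred == "has" and obj == topic:
            url = next(o for r, p, o in triples if r == res and p == "provide")
            how = next(o for r, p, o in triples if r == res and p == "access")
            plan.append((res, url, how))
    return plan

print(navigation_plan("disease_phenotype"))
```

The plan tells the extractor which resources to visit and how, without the user writing any query scripts.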
Database Integration: the Key to Data Interoperability
Most new databases are no longer built from scratch; instead they re-use existing data from several autonomous data stores. To facilitate application development, the data to be re-used should preferably be redefined as a virtual database, providing for the logical unification of the underlying data sets. This unification process is called database integration. This chapter provides a global picture of the issues raised and the approaches that have been proposed to tackle the problem.
Introducing Dynamic Behavior in Amalgamated Knowledge Bases
The problem of integrating knowledge from multiple and heterogeneous sources
is a fundamental issue in current information systems. In order to cope with
this problem, the concept of mediator has been introduced as a software
component providing intermediate services, linking data resources and
application programs, and making transparent the heterogeneity of the
underlying systems. In designing a mediator architecture, we believe that an
important aspect is the definition of a formal framework by which one is able
to model integration according to a declarative style. To this purpose, the use
of a logical approach seems very promising. Another important aspect is the
ability to model both static integration aspects, concerning query execution,
and dynamic ones, concerning data updates and their propagation among the
various data sources. Unfortunately, as far as we know, no formal proposals for
logically modeling mediator architectures both from a static and dynamic point
of view have already been developed. In this paper, we extend the framework for
amalgamated knowledge bases, presented by Subrahmanian, to deal with dynamic
aspects. The language we propose is based on the Active U-Datalog language, and
extends it with annotated logic and amalgamation concepts. We model the sources
of information and the mediator (also called supervisor) as Active U-Datalog
deductive databases, thus modeling queries, transactions, and active rules,
interpreted according to the PARK semantics. By using active rules, the system
can efficiently perform update propagation among different databases. The
result is a logical environment, integrating active and deductive rules, to
perform queries and update propagation in a heterogeneous mediated framework.
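The flavour of update propagation through active rules can be sketched as a toy event-condition-action loop. This is only an illustration of the general idea, not Active U-Datalog or the PARK semantics:

```python
# A minimal event-condition-action sketch of update propagation through
# a mediator. Database names, facts, and the rule are all invented.
sources = {"db1": {("employee", "ann")}, "db2": set()}

# Active rule: when an employee fact is inserted into db1, mirror it to db2.
rules = [
    {"event": ("insert", "db1"),
     "condition": lambda fact: fact[0] == "employee",
     "action": lambda fact: sources["db2"].add(fact)},
]

def insert(db, fact):
    sources[db].add(fact)
    for rule in rules:  # fire every active rule matching this event
        if rule["event"] == ("insert", db) and rule["condition"](fact):
            rule["action"](fact)

insert("db1", ("employee", "bob"))
print(sorted(sources["db2"]))  # -> [('employee', 'bob')]
```

In the paper's setting the rule bodies would be logical (annotated) rules rather than Python closures, but the propagation pattern is the same: an update event at one source triggers condition checks and compensating updates at the others.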
A Scalable Blocking Framework for Multidatabase Privacy-preserving Record Linkage
Today many application domains, such as national statistics,
healthcare, business analytics, fraud detection, and national
security, require data to be integrated from multiple databases.
Record linkage (RL) is a process used in data integration which
links multiple databases to identify matching records that belong
to the same entity. RL enriches the usefulness of data by removing duplicates, errors, and inconsistencies, which improves the effectiveness of decision making in data analytics applications.
Often, organisations are not willing or authorised to share the
sensitive information in their databases with any other party due
to privacy and confidentiality regulations. The linkage of
databases of different organisations is an emerging research area
known as privacy-preserving record linkage (PPRL). PPRL
facilitates the linkage of databases by ensuring the privacy of
the entities in these databases.
In the multidatabase (MD) context, PPRL is significantly challenged
by the intrinsic exponential growth in the number of potential
record pair comparisons. Such linkage often requires significant
time and computational resources to produce the resulting
matching sets of records. Due to the increased risk of collusion, preserving the privacy of the data becomes more problematic as the number of parties involved in the linkage process grows.
Blocking is commonly used to scale the linkage of large
databases. The aim of blocking is to remove those record pairs
that correspond to non-matches (refer to different entities).
Many techniques have been proposed for RL and PPRL for blocking
two databases. However, many of these techniques are not suitable
for blocking multiple databases. This creates a need to develop blocking techniques for the multidatabase linkage context, as real-world applications increasingly require more than two databases.
This thesis is the first to conduct extensive research on blocking for multidatabase privacy-preserving record linkage (MD-PPRL). We consider several research problems in blocking for MD-PPRL. First, we start with a broad review of the background literature on PPRL. This allows us to identify the main research gaps that need to be investigated in MD-PPRL. Second, we introduce a blocking
framework for MD-PPRL which provides more flexibility and control
to database owners in the block generation process. Third, we
propose different techniques that are used in our framework for
(1) blocking of multiple databases, (2) identifying blocks that
need to be compared across subgroups of these databases, and (3)
filtering redundant record pair comparisons by the efficient
scheduling of block comparisons to improve the scalability of
MD-PPRL. Each of these techniques covers an important aspect of
blocking in real-world MD-PPRL applications. Finally, this thesis
reports on an extensive evaluation of the combined application of
these methods with real datasets, which illustrates that they
outperform existing approaches in terms of scalability, accuracy, and privacy.
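The core idea of blocking, grouping records under an opaque shared key so that only records in matching blocks are ever compared, can be sketched as follows. The key definition and the keyed hash below are simplifications invented for illustration, not the thesis's techniques:

```python
import hashlib
from collections import defaultdict

def block_key(record, salt="shared-secret"):
    """Hashed blocking key: first letter of surname + first digit of
    postcode, hashed with a salt agreed among the parties so that only
    opaque block ids are exchanged. (An illustrative simplification;
    a real PPRL protocol would use stronger keyed hashing.)"""
    surname, postcode = record
    raw = (surname[0] + postcode[0] + salt).lower()
    return hashlib.sha256(raw.encode()).hexdigest()[:12]

def build_blocks(records):
    blocks = defaultdict(list)
    for rec in records:
        blocks[block_key(rec)].append(rec)
    return blocks

db_a = build_blocks([("smith", "2600"), ("smyth", "2601"), ("jones", "3000")])
db_b = build_blocks([("smith", "2602"), ("brown", "4000")])

# Only record pairs sharing a block id are ever compared, pruning the
# otherwise exponential number of cross-database comparisons.
candidates = [(a, b) for k in db_a.keys() & db_b.keys()
              for a in db_a[k] for b in db_b[k]]
print(candidates)
```

Here five records across two databases yield only two candidate pairs instead of the six of a full cross-product, and the parties never exchange raw attribute values, only hashed block ids.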
Improving National and Homeland Security through a proposed Laboratory for Information Globalization and Harmonization Technologies (LIGHT)
A recent National Research Council study found that: "Although there are many private and public databases that
contain information potentially relevant to counter terrorism programs, they lack the necessary context definitions
(i.e., metadata) and access tools to enable interoperation with other databases and the extraction of meaningful and
timely information" [NRC02, p.304, emphasis added]. That sentence succinctly describes the objectives of this
project. Improved access and use of information are essential to better identify and anticipate threats, protect
against and respond to threats, and enhance national and homeland security (NHS), as well as other national
priority areas, such as Economic Prosperity and a Vibrant Civil Society (ECS) and Advances in Science and
Engineering (ASE). This project focuses on the creation and contributions of a Laboratory for Information
Globalization and Harmonization Technologies (LIGHT) with two interrelated goals:
(1) Theory and Technologies: To research, design, develop, test, and implement theory and technologies for
improving the reliability, quality, and responsiveness of automated mechanisms for reasoning and resolving semantic
differences that hinder the rapid and effective integration (int) of systems and data (dmc) across multiple
autonomous sources, and the use of that information by public and private agencies involved in national and
homeland security and the other national priority areas involving complex and interdependent social systems (soc).
This work builds on our research on the COntext INterchange (COIN) project, which focused on the integration of
diverse distributed heterogeneous information sources using ontologies, databases, context mediation algorithms,
and wrapper technologies to overcome information representational conflicts. The COIN approach makes it
substantially easier and more transparent for individual receivers (e.g., applications, users) to access and exploit
distributed sources. Receivers specify their desired context to reduce ambiguities in the interpretation of information
coming from heterogeneous sources. This approach significantly reduces the overhead involved in the integration of
multiple sources, improves data quality, increases the speed of integration, and simplifies maintenance in an
environment of changing source and receiver context - which will lead to an effective and novel distributed
information grid infrastructure. This research also builds on our Global System for Sustainable Development
(GSSD), an Internet platform for information generation, provision, and integration of multiple domains, regions,
languages, and epistemologies relevant to international relations and national security.
(2) National Priority Studies: To experiment with and test the developed theory and technologies on practical
problems of data integration in national priority areas. Particular focus will be on national and homeland security,
including data sources about conflict and war, modes of instability and threat, international and regional
demographic, economic, and military statistics, money flows, and contextualizing terrorism defense and response.
Although LIGHT will leverage the results of our successful prior research projects, this will be the first research
effort to simultaneously and effectively address ontological and temporal information conflicts as well as
dramatically enhance information quality. Addressing problems of national priorities in such rapidly changing
complex environments requires extraction of observations from disparate sources, using different interpretations, at
different points in time, for different purposes, with different biases, and for a wide range of different uses and
users. This research will focus on integrating information both over individual domains and across multiple domains.
Another innovation is the concept and implementation of Collaborative Domain Spaces (CDS), within which
applications in a common domain can share, analyze, modify, and develop information. Applications also can span
multiple domains via Linked CDSs. The PIs have considerable experience with these research areas and the
organization and management of such large scale international and diverse research projects.
The PIs come from three different Schools at MIT: Management, Engineering, and Humanities, Arts & Social
Sciences. The faculty and graduate students come from about a dozen nationalities and diverse ethnic, racial, and
religious backgrounds. The currently identified external collaborators come from over 20 different organizations and
many different countries, industrial as well as developing. Specific efforts are proposed to engage even more
women, underrepresented minorities, and persons with disabilities.
The anticipated results apply to any complex domain that relies on heterogeneous distributed data to address and
resolve compelling problems. This initiative is supported by international collaborators from (a) scientific and
research institutions, (b) business and industry, and (c) national and international agencies. Research products
include: a System for Harmonized Information Processing (SHIP), a software platform, and diverse applications in
research and education which are anticipated to significantly impact the way complex organizations, and society in
general, understand and manage critical challenges in NHS, ECS, and ASE
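The COIN-style context mediation described above can be caricatured in a few lines: each source tags its values with a context, and the mediator converts to the receiver's context on the fly. The contexts, scale factor, and exchange rate below are invented for illustration:

```python
# Context mediation in the COIN spirit: values carry their source's
# context; the mediator resolves semantic differences for the receiver.
# All names and rates here are hypothetical.
contexts = {
    "src_europe": {"currency": "EUR", "scale": 1_000},  # reports in kEUR
    "receiver":   {"currency": "USD", "scale": 1},
}
FX = {("EUR", "USD"): 1.1}  # hypothetical static rate

def mediate(value, src, dst):
    s, d = contexts[src], contexts[dst]
    value = value * s["scale"] / d["scale"]    # resolve scale-factor conflict
    if s["currency"] != d["currency"]:         # resolve unit/currency conflict
        value *= FX[(s["currency"], d["currency"])]
    return value

print(mediate(250, "src_europe", "receiver"))  # 250 kEUR -> approx. 275000 USD
```

The receiver never sees the semantic conflict: it simply declares its context once, and every source value arrives already converted, which is what keeps maintenance cheap as sources and receivers change.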
Segmenting and labeling query sequences in a multidatabase environment
When gathering information from multiple independent data sources, users will generally pose a sequence of queries to each source and combine (union) or cross-reference (join) the results in order to obtain the information they need. Furthermore, when gathering information there is a fair bit of trial and error involved, where queries are recursively refined according to the results of a previous query in the sequence. From the point of view of an outside observer, the aim of such a sequence of queries may not be immediately obvious. We investigate the problem of isolating and characterizing subsequences representing coherent information retrieval goals out of a sequence of queries sent by a user to different data sources over a period of time. The problem has two sub-problems: segmenting the sequence into subsequences, each representing a discrete goal; and labeling each query in these subsequences according to how it contributes to the goal. We propose a method in which a discriminative probabilistic model (a Conditional Random Field) is trained with pre-labeled sequences. We have tested the accuracy with which such a model can infer labels and segmentation on novel sequences. Results show that the approach is very accurate (>95% accuracy) when there are no spurious queries in the sequence, and moderately accurate even in the presence of substantial noise (~70% accuracy when 15% of queries in the sequence are spurious).
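The shape of the task, per-query features feeding a segmentation into goals, can be sketched as below. The feature set and the heuristic segmenter are invented stand-ins for the trained CRF:

```python
# The paper trains a Conditional Random Field; this sketch only shows
# the task's structure: features over adjacent queries, and a split of
# the sequence into goal-coherent segments.
def features(query, prev_query):
    ref = prev_query or query
    return {
        "same_source": query["source"] == ref["source"],
        "shares_terms": bool(set(query["terms"]) & set(ref["terms"])),
    }

def segment(queries):
    """Start a new segment whenever a query shares neither source nor
    terms with its predecessor (a crude proxy for what the CRF learns)."""
    segments, current, prev = [], [], None
    for q in queries:
        f = features(q, prev)
        if current and not (f["same_source"] or f["shares_terms"]):
            segments.append(current)
            current = []
        current.append(q)
        prev = q
    if current:
        segments.append(current)
    return segments

log = [
    {"source": "pubmed", "terms": ["brca1", "mutation"]},
    {"source": "pdb",    "terms": ["brca1", "structure"]},
    {"source": "omim",   "terms": ["cftr"]},
]
print(len(segment(log)))  # -> 2 goals
```

The real model replaces the hand-written split rule with learned transition weights, which is what lets it also assign a role label to each query within its segment.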