
    Discovering lesser known molecular players and mechanistic patterns in Alzheimer's disease using an integrative disease modelling approach

    Convergence of exponentially advancing technologies is driving medical research with life-changing discoveries. In stark contrast, the repeated failure of high-profile drugs against Alzheimer's disease (AD) has made it one of the least successful therapeutic areas. This failure pattern has provoked researchers to grapple with their beliefs about Alzheimer's aetiology. The growing realisation that amyloid-β and tau are not 'the' factors but rather 'some of the' factors necessitates the reassessment of pre-existing data to add new perspectives. To enable a holistic view of the disease, integrative modelling approaches are emerging as a powerful technique. Combining data at different scales and modes could considerably increase the predictive power of an integrative model by filling biological knowledge gaps. However, the reliability of the derived hypotheses largely depends on the completeness, quality, consistency, and context-specificity of the data. There is therefore a need for agile methods and approaches that efficiently interrogate and utilise existing public data. This thesis presents the development of novel approaches and methods that address intrinsic issues of data integration and analysis in AD research. It aims to prioritise lesser-known AD candidates using highly curated and precise knowledge derived from integrated data, with much of the emphasis placed on quality, reliability, and context-specificity. The work showcases the benefit of integrating well-curated and disease-specific heterogeneous data in a semantic-web-based framework for mining actionable knowledge. Furthermore, it introduces the challenges encountered while harvesting information from literature and transcriptomic resources. A state-of-the-art text-mining methodology is developed to extract miRNAs, and their regulatory roles in diseases and genes, from the biomedical literature. To enable meta-analysis of biologically related transcriptomic data, a highly curated metadata database has been developed, which explicates annotations specific to human and animal models. Finally, to corroborate common mechanistic patterns, embedded with novel candidates, across large-scale AD transcriptomic data, a new approach to generating gene regulatory networks has been developed. The work presented here has demonstrated its capability in identifying testable mechanistic hypotheses containing previously unknown or emerging knowledge from public data in two major publicly funded projects on Alzheimer's disease, Parkinson's disease, and epilepsy.
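
    As a rough illustration of the text-mining step, the following is a minimal Python sketch of a rule-based miRNA tagger of the kind such a pipeline could start from. The regular expression follows standard miRNA nomenclature (optional species prefix, miR/let stem, number, arm suffix), but the relation cue list and function names are illustrative assumptions, not the thesis's actual method.

```python
import re

# Canonical miRNA nomenclature: optional species prefix (e.g. "hsa-"),
# a "miR"/"mir"/"let" stem, a number, an optional letter variant, and
# an optional arm suffix ("-5p"/"-3p").
MIRNA_PATTERN = re.compile(
    r"\b(?:[a-z]{3}-)?(?:miR|mir|let)-\d+[a-z]?(?:-[35]p)?\b"
)

# Illustrative trigger words for regulatory relations (an assumption,
# not the vocabulary used in the thesis).
REGULATION_CUES = {"upregulated", "downregulated", "represses", "targets"}

def extract_mirna_mentions(sentence: str) -> dict:
    """Return miRNA mentions plus any regulatory cue words in the sentence."""
    mirnas = MIRNA_PATTERN.findall(sentence)
    cues = [w.strip(".,;") for w in sentence.lower().split()
            if w.strip(".,;") in REGULATION_CUES]
    return {"mirnas": mirnas, "cues": cues}

if __name__ == "__main__":
    s = "hsa-miR-146a-5p is upregulated in Alzheimer's disease cortex."
    print(extract_mirna_mentions(s))
    # {'mirnas': ['hsa-miR-146a-5p'], 'cues': ['upregulated']}
```

    A real system would add named-entity normalisation and relation classification on top of such pattern matching; the sketch only shows the shape of the extraction problem.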

    Simple identification tools in FishBase

    Simple identification tools for fish species were included in the FishBase information system from its inception. Early tools made use of the relational model and characters like fin ray meristics. Soon pictures and drawings were added as a further help, similar to a field guide. Later came the computerization of existing dichotomous keys, again in combination with pictures and other information, and the ability to restrict possible species by country, area, or taxonomic group. Today, www.FishBase.org offers four different ways to identify species. This paper describes these tools with their advantages and disadvantages, and suggests various options for further development. It explores the possibility of a holistic and integrated computer-aided strategy.
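
    To make the relational character filter concrete, here is a minimal Python sketch in the spirit of the early FishBase tools described above. The species records, character names, and values are invented for illustration and are not taken from FishBase itself.

```python
# Minimal sketch of a relational character filter: each record is a row,
# each identification character is a column, and identification proceeds
# by intersecting simple equality constraints.
SPECIES = [
    {"name": "Species A", "dorsal_spines": 9,  "country": "Philippines"},
    {"name": "Species B", "dorsal_spines": 11, "country": "Philippines"},
    {"name": "Species C", "dorsal_spines": 9,  "country": "Brazil"},
]

def identify(records, **criteria):
    """Keep only records whose attributes match every given criterion."""
    return [
        r for r in records
        if all(r.get(field) == value for field, value in criteria.items())
    ]

# Narrowing by a meristic character and by country, as the paper describes:
matches = identify(SPECIES, dorsal_spines=9, country="Philippines")
print([r["name"] for r in matches])  # ['Species A']
```

    A dichotomous key can be seen as the same filtering applied one character at a time, which is why the relational model and computerized keys coexist so naturally in one system.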

    OGRS2012 Symposium Proceedings

    Do you remember the Open Source Geospatial Research and Education Symposium (OGRS) in Nantes? "Les Machines de l'Île", the Big Elephant, the "Storm Boat" with Claramunt, Petit et al. (2009), and "le Biniou et la Bombarde"? A second edition of OGRS was promised, and that promise is now fulfilled in OGRS 2012, Yverdon-les-Bains, Switzerland, October 24-26, 2012. OGRS is a meeting dedicated to sharing knowledge, new solutions, methods, practices, ideas and trends in the field of geospatial information through the development and use of free and open source software in both research and education. In recent years, the development of geospatial free and open source software (GFOSS) has breathed new life into the geospatial domain. GFOSS has been extensively promoted by FOSS4G events, which evolved from meetings that gathered interested GFOSS development communities into a standard business conference. More in line with the academic side of the FOSS4G conferences, OGRS is a neutral forum that aims to assemble a community whose main concern is finding new solutions by sharing knowledge and methods free of software license limits. This is why OGRS is primarily concerned with the academic world, though it also involves public institutions, organizations and companies interested in geospatial innovation. This symposium is therefore not an exhibition for presenting existing industrial software solutions, but an event we hope will act as a catalyst for research, innovation, and new collaborations between research teams, public agencies and industry. An educational aspect has recently been added to the content of the symposium. This important addition examines the knowledge triangle (research, education, and innovation) through the lens of how open source methods can improve educational efficiency. Based on their experience, OGRS contributors bring to the table ideas on how open source training is likely to offer pedagogical advantages that equip students with the skills and knowledge necessary to succeed in tomorrow's geospatial labor market. OGRS brings together a large collection of current innovative research projects from around the world, with the goal of examining how research uses and contributes to open source initiatives. By presenting their research, OGRS contributors shed light on how the open-source approach impacts research, and vice versa. The organizers of the symposium wish to demonstrate how the use and development of open source software strengthen education, research and innovation in geospatial fields. To support this approach, the present proceedings propose thirty short papers grouped under the following thematic headings: Education, Earth Science & Landscape, Data, Remote Sensing, Spatial Analysis, Urban Simulation and Tools. These papers are preceded by the contributions of the four keynote speakers: Prof Helena Mitasova, Dr Gérard Hégron, Prof Sergio Rey and Prof Robert Weibel, who share their expertise in research and education in order to highlight the decisive advantages of openness over the limits imposed by the closed-source license system.

    Data quality issues in electronic health records for large-scale databases

    Data Quality (DQ) in Electronic Health Records (EHRs) is one of the core functions that play a decisive role in improving the quality of healthcare services. DQ issues in EHRs motivate the introduction of an adaptive framework for interoperability and standards in Large-Scale Database (LSDB) management systems. Large-scale data communication is challenging for traditional approaches to satisfy consumers' needs, as data is often not captured directly into the Database Management System (DBMS) in a timely enough fashion to enable its subsequent uses. At the same time, large datasets hold considerable value for every field the DBMS serves. EHR technology provides portfolio management systems that allow HealthCare Organisations (HCOs) to deliver a higher quality of care to their patients than is possible with paper-based records. EHRs are in high demand as HCOs run their daily services on ever-growing datasets. Efficient EHR systems reduce data redundancy and system application failures, and increase the possibility of drawing all necessary reports. However, one of the main challenges in developing efficient EHR systems is the inherent difficulty of coherently managing data from diverse heterogeneous sources. It is practically challenging to integrate diverse data into a global schema that satisfies the needs of users. Managing EHR systems efficiently with an existing DBMS presents challenges because of the incompatibility, and sometimes inconsistency, of data structures. As a result, no common methodological approach currently exists that effectively solves every data integration problem. These DQ challenges raise the need for an efficient way to integrate large EHRs from diverse heterogeneous sources. To handle and align large datasets efficiently, a hybrid method logically combining fuzzy logic with ontologies, deployed on a large-scale EHR analysis platform, has shown improved accuracy. This study investigated and addressed the raised DQ issues and the interventions needed to overcome these barriers and challenges in the provision of EHRs, combining features to search, extract, filter, clean and integrate data so that users can coherently create new, consistent datasets. The study designed a hybrid Fuzzy-Ontology method, performed mathematical simulations based on a Markov Chain Probability Model, and implemented a similarity measurement based on the dynamic Hungarian algorithm, all following the Design Science Research (DSR) methodology, in order to increase the quality of service across HCOs within an adaptive framework.
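
    As an illustration of the similarity-based alignment step, here is a minimal Python sketch that matches fields from two schemas with the Hungarian algorithm, via scipy.optimize.linear_sum_assignment. The field names are hypothetical, and the plain string similarity is only a stand-in for the study's Fuzzy-Ontology score; this is a sketch of the matching idea, not the study's implementation.

```python
from difflib import SequenceMatcher
from scipy.optimize import linear_sum_assignment

# Illustrative field names from two hypothetical EHR schemas (assumptions).
schema_a = ["patient_id", "birth_date", "blood_pressure"]
schema_b = ["bp_reading", "pat_identifier", "date_of_birth"]

def similarity(a: str, b: str) -> float:
    """String similarity in [0, 1]; a stand-in for a fuzzy/ontology score."""
    return SequenceMatcher(None, a, b).ratio()

# The Hungarian algorithm minimises total cost, so use (1 - similarity).
cost = [[1.0 - similarity(a, b) for b in schema_b] for a in schema_a]
rows, cols = linear_sum_assignment(cost)

for i, j in zip(rows, cols):
    print(f"{schema_a[i]} -> {schema_b[j]} (similarity {1.0 - cost[i][j]:.2f})")
```

    The assignment pairs each field with a counterpart while keeping the overall pairing globally optimal, which is what makes Hungarian-style matching a natural fit for aligning heterogeneous schemas.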

    Tools for identifying biodiversity: progress and problems

    The correct identification of organisms is fundamental not only for the assessment and conservation of biodiversity, but also in agriculture, forestry, the food and pharmaceutical industries, forensic biology, and in the broad field of formal and informal education at all levels. In this book, the reader will find short presentations of current and upcoming projects (EDIT, KeyToNature, STERNA, Species 2000, Fishbase, BHL, ViBRANT, etc.), plus a large panel of short articles on software, taxonomic applications, the use of e-keys in education, and practical applications. Single-access keys are now available on most recent electronic devices; the collaborative and semantic web opens new ways to develop and share applications; the automatic processing of molecular data and images is now based on validated systems; identification tools appear as an efficient support for environmental education and training; and the monitoring of invasive and protected species and the study of climate change require intensive identification of specimens, which opens new markets for identification research.

    Reinventing the Social Scientist and Humanist in the Era of Big Data

    This book explores the big data evolution by interrogating the notion that big data is a disruptive innovation that appears to be challenging existing epistemologies in the humanities and social sciences. Exploring various (controversial) facets of big data such as ethics, data power, and data justice, the book attempts to clarify the trajectory of the epistemology of (big) data-driven science in the humanities and social sciences.