4 research outputs found

    An approach to build JSON-based Domain Specific Languages solutions for web applications

    Because of their level of abstraction, Domain-Specific Languages (DSLs) enable building applications that ease software implementation. In the context of web applications, many technologies and programming languages for server-side applications provide fast, robust, and flexible solutions, whereas those for the client side are limited, mostly restricted to the direct use of JavaScript, HTML5, CSS3, JSON and XML. This article presents a novel approach to creating DSL-based web applications using a JSON grammar (JSON-DSL) for both the server and the client side. The approach includes an evaluation engine, a programming model and an integrated web development environment that support it. The evaluation engine executes the elements created with the programming model. The programming model, in turn, allows the definition and specification of JSON-DSLs, the implementation of JavaScript components, the use of JavaScript templates provided by the engine, the use of link connectors to heterogeneous information sources, and integration with other widgets, web components and JavaScript frameworks. To validate the strength and capacity of our approach, we developed four case studies that use the integrated web development environment to apply the programming model and check the results within the evaluation engine.
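    To make the general idea concrete, here is a minimal, hypothetical sketch of a JSON-described widget tree and a tiny interpreter that walks it. The element names ("widget", "bind", "children") are invented for illustration and are not the programming model defined in the article, which targets JavaScript rather than Python.

```python
# Hypothetical sketch of a JSON-based DSL: the UI and its data bindings are
# declared as plain JSON, and a small engine interprets the declaration.
import json

dsl_source = """
{
  "widget": "panel",
  "title": "Sensor dashboard",
  "children": [
    {"widget": "label", "bind": "sensor.name"},
    {"widget": "chart", "bind": "sensor.readings"}
  ]
}
"""

def evaluate(node, data, depth=0):
    """Walk the JSON tree and 'render' each widget as indented text."""
    pad = "  " * depth
    value = data.get(node.get("bind", ""), "")
    print(f"{pad}<{node['widget']}> {node.get('title', value)}")
    for child in node.get("children", []):
        evaluate(child, data, depth + 1)

data = {"sensor.name": "Greenhouse 3", "sensor.readings": [21.5, 22.0, 21.8]}
evaluate(json.loads(dsl_source), data)
```

    The appeal of this pattern is that the DSL document is ordinary JSON, so the same declaration can be validated, stored, and evaluated on either the server or the client.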

    BrAPI-an application programming interface for plant breeding applications

    Motivation: Modern genomic breeding methods rely heavily on very large amounts of phenotyping and genotyping data, presenting new challenges in effective data management and integration. Recently, the size and complexity of datasets have increased significantly, with the result that data are often stored on multiple systems. As analyses of interest increasingly require the aggregation of datasets from diverse sources, data exchange between disparate systems becomes a challenge. Results: To facilitate interoperability among breeding applications, we present the public plant Breeding Application Programming Interface (BrAPI). BrAPI is a standardized web service API specification. Its development is a collaborative, community-based initiative involving a growing global community of over a hundred participants representing several dozen institutions and companies. Such a standard is recognized as a foundational technology critical to a number of important large breeding system initiatives. The first version of the API focuses on services for connecting systems and retrieving basic breeding data, including germplasm, study, observation, and marker data. A number of BrAPI-enabled applications, termed BrAPPs, have been written that take advantage of the emerging support for BrAPI by many databases.
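    As an illustration of the interoperability BrAPI targets, the sketch below queries a germplasm endpoint with Python's requests library. It assumes the public BrAPI test server and the standard v1 response envelope (metadata / result / data); treat the exact field names as assumptions and consult the specification for the server you target.

```python
# Minimal sketch of consuming a BrAPI v1 endpoint over plain HTTP.
import requests

BASE = "https://test-server.brapi.org/brapi/v1"  # assumed public test server

resp = requests.get(f"{BASE}/germplasm", params={"pageSize": 5}, timeout=30)
resp.raise_for_status()
payload = resp.json()

# BrAPI wraps results in a standard envelope: metadata + result.data
for germplasm in payload["result"]["data"]:
    print(germplasm.get("germplasmName"), germplasm.get("accessionNumber"))
```

    Because every BrAPI-compliant server exposes the same endpoints and envelope, the same client code can aggregate germplasm records from multiple databases without per-system adapters.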

    Development of biological data integration methods using Elasticsearch

    In biology, data appear at all stages of a project, from study preparation to publication of results. However, many aspects limit their use. The volume, velocity and variety of the data produced have brought biology into an era dominated by Big Data. Since 1980, in order to organize the data generated, the scientific community has produced numerous data repositories. These repositories can contain data on various biological elements such as genes, transcripts, proteins and metabolites, but also on other concepts such as toxins, biological vocabularies and scientific publications. Storing all of these data requires robust and durable hardware and software infrastructures. To date, owing to the diversity of biology and of the computer architectures in use, there is still no centralized repository containing all public databases in biology. The many existing repositories are scattered and generally self-managed by the research teams that published them. With the rapid evolution of information technology, data sharing interfaces have also evolved, from file transfer protocols to data query interfaces. As a result, access to the datasets dispersed across the many repositories is disparate, and this diversity of access requires automation tools for data retrieval. When several data sources are required in a study, the data flow follows several steps. The first is data integration, combining multiple data sources under a unified access interface. It is followed by various exploitations such as exploration through scripts or visualizations, transformations and analyses. The literature shows numerous initiatives for computerized systems to share and standardize data. However, the complexity induced by these multiple systems continues to constrain the dissemination of biological data: the ever-increasing production of data, its management and numerous technical aspects hinder researchers who want to exploit these data and make them available. The hypothesis tested in this thesis is that the broad exploitation of data can be modernized with recent tools and methods, in particular a tool named Elasticsearch. This tool should fill the needs already identified in the literature, while also addressing more recent concerns such as easier data sharing. An architecture based on this data management tool allows data to be shared according to interoperability standards, and data disseminated according to these standards can feed biological data mining as well as data transformation and analysis. The results presented in this thesis are based on tools usable by all researchers, in biology but also in other fields. Applying and testing them in those other fields remains to be done in order to identify their limits precisely.
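    A minimal sketch of the integration pattern described above: heterogeneous biological records are flattened into JSON documents and exposed through a single Elasticsearch index, so one query interface spans all the sources. It assumes a local Elasticsearch node and the official Python client (8.x); the index and field names are illustrative only, not those used in the thesis.

```python
# Sketch: index records from several biological sources, then run one
# unified full-text query across all of them.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

docs = [
    {"type": "gene", "symbol": "TP53", "organism": "Homo sapiens"},
    {"type": "protein", "symbol": "P04637", "organism": "Homo sapiens"},
    {"type": "publication", "title": "TP53 and cancer", "year": 2020},
]
for i, doc in enumerate(docs):
    es.index(index="bio-integration", id=str(i), document=doc)

es.indices.refresh(index="bio-integration")

# One query interface covering every integrated source
hits = es.search(index="bio-integration",
                 query={"multi_match": {"query": "TP53", "fields": ["*"]}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"])
```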

    Generation and Applications of Knowledge Graphs in Systems and Networks Biology

    The acceleration in the generation of data in the biomedical domain has necessitated the use of computational approaches to assist in its interpretation. However, these approaches rely on the availability of high-quality, structured, formalized biomedical knowledge. This thesis has two goals: to improve methods for curation and semantic data integration in order to generate high-granularity biological knowledge graphs, and to develop novel methods for using prior biological knowledge to propose new biological hypotheses. The first two publications describe an ecosystem for handling biological knowledge graphs encoded in the Biological Expression Language throughout the stages of curation, visualization, and analysis. The next two publications describe the reproducible acquisition and integration of high-granularity knowledge with low contextual specificity from structured biological data sources on a massive scale, and support the semi-automated curation of new content at high speed and precision. After building the ecosystem and acquiring content, the last three publications demonstrate three different applications of biological knowledge graphs in modeling and simulation. The first demonstrates agent-based modeling for simulating neurodegenerative disease biomarker trajectories using biological knowledge graphs as priors. The second applies network representation learning to prioritize nodes in biological knowledge graphs based on corresponding experimental measurements in order to identify novel targets. The third uses biological knowledge graphs and develops algorithms to deconvolute the mechanism of action of drugs, which could also serve to identify drug repositioning candidates. Ultimately, this thesis lays the groundwork for production-level applications of drug repositioning algorithms and other knowledge-driven approaches to analyzing biomedical experiments.
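    As a loose illustration of the node-prioritization idea, the sketch below combines a simple network score with experimental measurements over a toy knowledge graph. The thesis applies network representation learning; PageRank here is only a stand-in for that step, and the graph edges and measurement values are invented for the example.

```python
# Toy sketch: rank knowledge-graph nodes by weighting a network centrality
# score with hypothetical differential-expression measurements.
import networkx as nx

kg = nx.DiGraph()
kg.add_edges_from([
    ("TNF", "NFKB1"), ("NFKB1", "IL6"), ("IL6", "STAT3"),
    ("STAT3", "MYC"), ("TNF", "IL6"),
])

# Invented measurement scores standing in for real experimental data
measurements = {"TNF": 2.1, "NFKB1": 0.4, "IL6": 3.0, "STAT3": 1.2, "MYC": 0.1}

pagerank = nx.pagerank(kg)
priority = {n: pagerank[n] * measurements.get(n, 0.0) for n in kg.nodes}

for node, score in sorted(priority.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.4f}")
```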