
    Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"

    According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient. The dominant concerns are not those of representing and manipulating data efficiently, but rather those of handling the coordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system, and the overall design, description and performance of the system itself. Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
    • The systems are composed of autonomous computational entities whose activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
    • The computational entities are mobile, due to the movement of the physical platforms or the movement of an entity from one platform to another.
    • The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise to their deletion. The behaviour of the entities may vary over time.
    • The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date, and mobility requires information about the environment to be discovered.
    The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing them. This workshop covers the aspects related to languages and programming environments, as well as the analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. A year after the start of the projects, the goal of the workshop is to take stock of the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, and to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative. We acknowledge the Dipartimento di Informatica e Telecomunicazioni of the University of Trento, the Comune di Rovereto, and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for their valuable collaboration.

    Using Neural Networks for Relation Extraction from Biomedical Literature

    Using different sources of information to support the automated extraction of relations between biomedical concepts contributes to the development of our understanding of biological systems. The primary comprehensive source of these relations is the biomedical literature. Several relation extraction approaches have been proposed to identify relations between concepts in biomedical literature, namely using neural network algorithms. The use of multichannel architectures composed of multiple data representations, as in deep neural networks, is leading to state-of-the-art results. The right combination of data representations can eventually lead us to even higher evaluation scores in relation extraction tasks. Here, biomedical ontologies play a fundamental role by providing semantic and ancestry information about an entity. The incorporation of biomedical ontologies has already been shown to enhance previous state-of-the-art results. Comment: Artificial Neural Networks book (Springer), Chapter 1
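
    To make the multichannel idea concrete, here is a minimal sketch in PyTorch, assuming (not taken from the chapter) a sentence channel encoded by a BiLSTM and an ontology channel built from averaged embeddings of each entity's ancestor concepts; all layer sizes, vocabulary sizes, and names are illustrative.

```python
# Minimal sketch of a multichannel relation-extraction model (illustrative
# assumptions only, not the chapter's implementation): a sentence channel
# and an ontology-ancestry channel are concatenated before classification.
import torch
import torch.nn as nn

class MultiChannelRE(nn.Module):
    def __init__(self, vocab_size=10000, onto_size=5000,
                 emb_dim=100, hidden_dim=128, n_relations=2):
        super().__init__()
        # Channel 1: word embeddings -> BiLSTM over the sentence tokens
        self.word_emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Channel 2: mean-pooled embeddings of each entity's ontology ancestors
        self.onto_emb = nn.EmbeddingBag(onto_size, emb_dim, mode="mean")
        # Classifier over the concatenated channels
        self.classifier = nn.Linear(2 * hidden_dim + 2 * emb_dim, n_relations)

    def forward(self, token_ids, e1_ancestors, e2_ancestors):
        # token_ids: (batch, seq_len); *_ancestors: (batch, n_ancestors)
        _, (h, _) = self.lstm(self.word_emb(token_ids))
        sentence = torch.cat([h[0], h[1]], dim=1)            # (batch, 2*hidden)
        ontology = torch.cat([self.onto_emb(e1_ancestors),
                              self.onto_emb(e2_ancestors)], dim=1)
        return self.classifier(torch.cat([sentence, ontology], dim=1))

# Toy forward pass with random ids, just to show how the channels combine.
model = MultiChannelRE()
tokens = torch.randint(1, 10000, (4, 30))
anc1 = torch.randint(0, 5000, (4, 8))
anc2 = torch.randint(0, 5000, (4, 8))
logits = model(tokens, anc1, anc2)                           # (4, n_relations)
```

    The point the abstract makes is visible in the final concatenation: each data representation keeps its own channel, and the ontology channel is what carries the semantic and ancestry information.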

    Are We Legislating Away Our Scientific Future? The Database Debate

    The ambiguity of the present copyright laws governing the protection of databases creates a situation where database owners, unsure of how IP laws safeguard their information, overprotect their data with oppressive licenses and technological mechanisms (condoned by the DMCA) that impede interoperation. Databases are fundamental to scientific research, yet the lack of interoperability between databases and limited access inhibit this research. The US Congress, spurred by the European Database Directive and heavily lobbied by the commercial database industry, is presently considering ways to legislate database protections; most of the present suggestions for legislation will be detrimental to scientific progress. The author agrees that new legislation is necessary, not to provide extra-copyright protections, as database owners would like, but to create an environment in which data is easily accessible to academic research and interoperability is encouraged, while still providing database owners with incentives to produce new databases. One possibility would be to introduce standardized compulsory licensing of databases to academics following an embargo period during which databases could be sold at free-market prices (to recoup costs). Databases would be given some form of intellectual property protection both during and after this embargo, in return for limiting technical safeguards and conforming to interoperability standards.

    Establishment of computational biology in Greece and Cyprus: Past, present, and future.

    We review the establishment of computational biology in Greece and Cyprus from its inception to date and issue recommendations for future development. We compare output to other countries of similar geography, economy, and size, based on publication counts recorded in the literature, and predict future growth based on those counts as well as national priority areas. Our analysis may be pertinent to wider national or regional communities facing challenges and opportunities emerging from the rapid expansion of the field and related industries. Our recommendations suggest a 2-fold growth margin for the 2 countries as a realistic expectation for further expansion of the field, alongside the development of a credible roadmap of national priorities in terms of both research and infrastructure funding.

    The myths and realities of Bayesian chronological modeling revealed

    We review the history of Bayesian chronological modeling in archaeology and demonstrate that there has been a surge in American archaeological applications over the past several years. Most of these applications have been performed by archaeologists who are self-taught in this method, because formal training opportunities in Bayesian chronological modeling are infrequently provided. We define and address misconceptions about Bayesian chronological modeling that we have encountered in conversations with colleagues and in anonymous reviews, some of which have been expressed in the published literature. Objectivity and scientific rigor are inherent in the Bayesian chronological modeling process. Each stage of this process is described in detail, and we present examples of this process in practice. Our concluding discussion focuses on the potential that Bayesian chronological modeling has for enhancing understanding of important topics.
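
    As a purely illustrative sketch of the Bayesian step behind such models (not taken from the paper; the calibration curve, measurement, and stratigraphic constraint below are synthetic assumptions), the following Python code combines a radiocarbon likelihood with a simple prior on a grid of calendar dates and normalises the product into a posterior.

```python
# Illustrative sketch only: posterior over calendar dates for one radiocarbon
# measurement, using a synthetic calibration curve and a simple "no older
# than" stratigraphic prior. Real applications use IntCal curves and
# dedicated software (e.g. OxCal, BCal, ChronoModel).
import numpy as np

cal_years = np.arange(3000, 3501)                              # candidate ages (cal BP)
curve_mean = cal_years - 50 + 20 * np.sin(cal_years / 40.0)    # synthetic calibration curve
curve_sd = np.full(cal_years.shape, 15.0)                      # curve uncertainty (14C yr)

measured_age, measured_sd = 3150.0, 30.0                       # lab measurement (14C yr BP)

# Likelihood: Gaussian error model combining lab and curve uncertainty
sigma = np.sqrt(measured_sd**2 + curve_sd**2)
likelihood = np.exp(-0.5 * ((measured_age - curve_mean) / sigma) ** 2)

# Prior: uniform, except the context is known to be no older than 3300 cal BP
prior = np.where(cal_years <= 3300, 1.0, 0.0)

# Bayes' theorem on the grid: posterior proportional to likelihood * prior
posterior = likelihood * prior
posterior /= posterior.sum()

# Approximate 95% highest-density summary of the posterior
order = np.argsort(posterior)[::-1]
keep = order[np.cumsum(posterior[order]) <= 0.95]
print("95% HDR:", cal_years[keep].min(), "-", cal_years[keep].max(), "cal BP")
```

    The sketch only illustrates the mechanics: the prior and the likelihood are both stated explicitly before the posterior is computed, which is the step each stage of a chronological model builds on.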

    Of dups and dinos: evolution at the K/Pg boundary

    Fifteen years into sequencing entire plant genomes, more than 30 paleopolyploidy events could be mapped on the tree of flowering plants (and many more when transcriptome data sets are also considered). While some genome duplications are very old and occurred early in the evolution of dicots and monocots, or even before, others are more recent and seem to have occurred independently in many different plant lineages. Strikingly, a majority of these duplications date to somewhere between 55 and 75 million years ago (mya) and thus likely correlate with the K/Pg boundary. If true, this would suggest that plants whose genomes were duplicated at that time had an increased chance of surviving the most recent mass extinction event, at 66 mya, which wiped out a majority of plant and animal life, including all non-avian dinosaurs. Here, we review several processes, both neutral and adaptive, that might explain the establishment of polyploid plants following the K/Pg mass extinction.

    Does hybridization between divergent progenitors drive whole-genome duplication?

    This is the peer-reviewed version of the following article: Buggs, R. J. A., Soltis, P. S. and Soltis, D. E. (2009), Does hybridization between divergent progenitors drive whole-genome duplication? Molecular Ecology, 18: 3334–3339, which has been published in final form at http://dx.doi.org/10.1111/j.1365-294X.2009.04285.x. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for self-archiving.