
    Interoperability in IoT through the semantic profiling of objects

    The emergence of smarter and broader people-oriented IoT applications and services requires interoperability at both the data and knowledge levels. Although some semantic IoT architectures have been proposed, achieving a high degree of interoperability still means dealing with a sea of non-integrated data scattered across vertical silos. These architectures also fall short of machine-to-machine requirements, since data annotation carries no knowledge of the object interactions behind the arriving data. This paper presents a vision of how to overcome these issues. More specifically, the semantic profiling of objects through CoRE-related standards is envisaged as the key to data integration, allowing more powerful data annotation, validation, and reasoning; these are the key building blocks for the development of intelligent applications. Funded by the Portuguese Science and Technology Foundation (FCT) [UID/MULTI/00631/2013].
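
    To illustrate what such a profile might look like, the sketch below serializes a hypothetical object's resources as a CoRE Link Format (RFC 6690) description, with semantic types attached via the rt attribute. The resource paths, rt/if values, and ontology prefix are illustrative assumptions, not the authors' encoding.

```python
# Minimal sketch: a hypothetical object's semantic profile rendered as a
# CoRE Link Format string (RFC 6690). Paths, rt/if values, and the "ex:"
# prefix are assumptions for illustration, not taken from the paper.

def to_link_format(resources):
    """Render a list of resource descriptors as a CoRE Link Format string."""
    links = []
    for res in resources:
        attrs = ";".join(f'{k}="{v}"' for k, v in res["attrs"].items())
        links.append(f'<{res["path"]}>;{attrs}')
    return ",".join(links)

# A hypothetical temperature-sensing object, annotated with semantic types
# so consumers can validate and reason over the data it produces.
profile = [
    {"path": "/sensors/temp", "attrs": {"rt": "ex:Temperature", "if": "core.s"}},
    {"path": "/actuators/fan", "attrs": {"rt": "ex:Fan", "if": "core.a"}},
]

print(to_link_format(profile))
# </sensors/temp>;rt="ex:Temperature";if="core.s",</actuators/fan>;rt="ex:Fan";if="core.a"
```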

    Distributed data cache designs for clustered VLIW processors

    Wire delays are a major concern for current and forthcoming processors. One approach to dealing with this problem is to divide the processor into semi-independent units referred to as clusters. A cluster usually consists of a local register file and a subset of the functional units, while the L1 data cache typically remains centralized, in what we call partially distributed architectures. However, as technology evolves, the relative latency of such a centralized cache will increase, with an important impact on performance. In this paper, we propose partitioning the L1 data cache among clusters for clustered VLIW processors; we refer to this kind of design as fully distributed processors. In particular, we propose and evaluate three different configurations: a snoop-based cache coherence scheme, a word-interleaved cache, and flexible L0 buffers managed by the compiler. For each alternative, instruction scheduling techniques targeted at cyclic code are developed. Results for the Mediabench suite show that the performance of such fully distributed architectures is always better than that of a partially distributed one with the same amount of resources. In addition, the key aspects of each fully distributed configuration are explored.
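
    To make the word-interleaved option concrete, the sketch below shows how a byte address could map to the cluster that owns the corresponding cache word. The word size, cluster count, and function names are assumptions for illustration; the paper's actual configurations may differ.

```python
# Sketch of word-interleaved L1 data cache mapping for a clustered VLIW
# design: consecutive words rotate across clusters, so each cluster owns
# every Nth word. Parameters below are assumptions, not the paper's values.

WORD_SIZE = 4      # bytes per word (assumed)
NUM_CLUSTERS = 4   # clusters sharing the distributed L1 (assumed)

def owning_cluster(addr: int) -> int:
    """Cluster whose local L1 partition holds the word at byte address addr."""
    return (addr // WORD_SIZE) % NUM_CLUSTERS

# A strided access pattern touches every cluster in turn, which is why the
# compiler's scheduler must place each load on (or route it to) the owner.
for addr in range(0, 32, WORD_SIZE):
    print(f"addr {addr:2d} -> cluster {owning_cluster(addr)}")
```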

    The students’ acceptance and use of their university’s virtual learning environment

    The proliferation of digital and mobile devices, including smartphones and tablets, has led policy makers and practitioners to include these ubiquitous technologies in education. A thorough review of the relevant literature suggests that both students and their course instructors are becoming increasingly acquainted with the adoption of educational technologies in the higher education context. Hence, this study explores university students’ readiness to engage with their university's virtual learning environment (VLE). The methodology integrated measuring items drawn from the educational technology literature, including the unified theory of acceptance and use of technology (UTAUT), to better understand the students’ perceptions of the VLE, and investigated whether they were influenced by their instructors or by fellow students to use it. The results suggest that most of the research participants were using this technology because they believed it supported their learning outcomes. The findings also revealed that the students were not coerced by their course instructors or by other individuals to engage with the VLE. Moreover, the university’s facilitating conditions had a significant effect on the participants’ usage of the VLE. In conclusion, this contribution puts forward key implications for practitioners, clarifies the limitations of this study, and proposes future research directions.
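
    One common way to operationalize a UTAUT-style analysis is to regress reported use on the construct scores. The sketch below does this with entirely synthetic data and assumed construct names; the study's dataset and exact model are not reproduced here.

```python
# Illustrative UTAUT-style analysis: regress VLE usage on construct scores.
# Data and construct names are synthetic assumptions for demonstration only.

import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic 1-5 Likert-scale construct scores per respondent.
performance_expectancy = rng.integers(1, 6, n)
social_influence = rng.integers(1, 6, n)
facilitating_conditions = rng.integers(1, 6, n)

# Synthetic usage driven mostly by facilitating conditions, echoing the
# paper's finding that this construct had a significant effect.
usage = (0.2 * performance_expectancy + 0.05 * social_influence
         + 0.6 * facilitating_conditions + rng.normal(0, 0.5, n))

X = np.column_stack([np.ones(n), performance_expectancy,
                     social_influence, facilitating_conditions])
coef, *_ = np.linalg.lstsq(X, usage, rcond=None)
for name, b in zip(["intercept", "perf. expectancy", "social influence",
                    "facilitating cond."], coef):
    print(f"{name:20s} {b:+.3f}")
```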

    Simultaneous Inference in General Parametric Models

    Simultaneous inference is a common problem in many areas of application. If multiple null hypotheses are tested simultaneously, the probability of erroneously rejecting at least one of them increases beyond the pre-specified significance level. Simultaneous inference procedures that adjust for multiplicity, and thus control the overall type I error rate, must therefore be used. In this paper we describe simultaneous inference procedures in general parametric models, where the experimental questions are specified through linear combinations of elemental model parameters. The framework described here is quite general and extends the canonical theory of multiple comparison procedures in ANOVA models to linear regression problems, generalized linear models, linear mixed effects models, the Cox model, robust linear models, etc. Several examples using a variety of different statistical models illustrate the breadth of the results. For the analyses we use the R add-on package multcomp, which provides a convenient interface to the general approach adopted here.
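
    The paper's examples use multcomp in R directly. As a language-agnostic illustration of the underlying single-step max-|t| idea, the Python sketch below adjusts p-values by Monte Carlo, exploiting the joint (asymptotic) normality of the estimated linear combinations; the statistics and correlation matrix are made up for the example, and this is not the multcomp interface.

```python
# Sketch of single-step adjusted p-values for simultaneous inference:
# given test statistics that are jointly (asymptotically) normal with a
# known correlation matrix, adjust each p-value by the null distribution
# of max |Z|. Statistics and correlations are illustrative assumptions.

import numpy as np

def single_step_adjusted_p(t_stats, corr, n_sim=200_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(t_stats)), corr, size=n_sim)
    max_abs = np.abs(z).max(axis=1)
    # P(max |Z| >= |t_j|) under the joint null, estimated by simulation.
    return np.array([(max_abs >= abs(t)).mean() for t in t_stats])

t_stats = [2.3, -1.1, 2.9]             # assumed observed statistics
corr = np.array([[1.0, 0.5, 0.5],      # assumed correlation of estimates
                 [0.5, 1.0, 0.5],
                 [0.5, 0.5, 1.0]])

print(single_step_adjusted_p(t_stats, corr))
# Less conservative than Bonferroni because it exploits the correlation.
```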

    Profiling a decade of information systems frontiers’ research

    This article analyses the first ten years of research published in Information Systems Frontiers (ISF), from 1999 to 2008. The analysis of the published material examines variables such as the most productive authors, citation counts, the universities associated with the most publications, geographic diversity, authors’ backgrounds, and research methods. The keyword analysis suggests that ISF research has evolved from establishing the concepts and domain of information systems (IS), technology, and management to contemporary issues such as outsourcing, web services, and security. The analysis presented in this paper identifies intellectually significant studies that have contributed to the development and accumulation of ISF's intellectual wealth, as well as authors published in other journals whose work largely shaped and guided the research published in ISF. This research has implications for researchers, journal editors, and research institutions.
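
    At its core, a keyword analysis of the kind described reduces to tallying keyword frequencies per period and comparing the rankings. The sketch below uses made-up records (the article's dataset is not reproduced here) and collections.Counter.

```python
# Sketch of a keyword trend analysis over a journal's history: count how
# often each author-supplied keyword appears per period. The records are
# made-up examples, not data from the article.

from collections import Counter, defaultdict

records = [
    (1999, ["information systems", "management"]),
    (2003, ["web services", "outsourcing"]),
    (2007, ["security", "web services"]),
    (2008, ["security", "outsourcing"]),
]

by_period = defaultdict(Counter)
for year, keywords in records:
    period = "1999-2003" if year <= 2003 else "2004-2008"
    by_period[period].update(keywords)

for period, counts in sorted(by_period.items()):
    print(period, counts.most_common(3))
```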

    National Mesothelioma Virtual Bank: A standard based biospecimen and clinical data resource to enhance translational research

    Background: Advances in translational research have created a need for well-characterized biospecimens. The National Mesothelioma Virtual Bank is an initiative that collects annotated datasets relevant to human mesothelioma in order to develop a biospecimen resource that fulfills researchers' needs. Methods: The National Mesothelioma Virtual Bank architecture is based on three major components: (a) common data elements (based on the College of American Pathologists protocol and North American Association of Central Cancer Registries standards), (b) clinical and epidemiologic data annotation, and (c) data query tools. These tools work interoperably to standardize the entire annotation process. The National Mesothelioma Virtual Bank tool is based upon the caTISSUE Clinical Annotation Engine, developed by the University of Pittsburgh in cooperation with the Cancer Biomedical Informatics Grid™ (caBIG™, see http://cabig.nci.nih.gov). This application provides a web-based system for annotating, importing, and searching mesothelioma cases. The underlying information model is constructed using Unified Modeling Language class diagrams, hierarchical relationships, and Enterprise Architect software. Results: The database gives researchers real-time access to richly annotated specimens and integral information related to mesothelioma. Disclosure of data is tightly regulated according to each user's authorization and the policies of the participating institution, subject to local Institutional Review Board and regulatory committee reviews. Conclusion: The National Mesothelioma Virtual Bank currently has over 600 annotated cases available to researchers, including paraffin-embedded tissues, tissue microarrays, serum, and genomic DNA. It is a virtual biospecimen registry with robust translational biomedical informatics support to facilitate basic science, clinical, and translational research. Furthermore, it protects patient privacy by disclosing only de-identified datasets, so that biospecimens can be made accessible to researchers. © 2008 Amin et al; licensee BioMed Central Ltd.
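
    As a toy illustration of the de-identification step described in the conclusion, the sketch below strips direct identifiers from a case record before disclosure. The field names and identifier list are assumptions; the actual system's common data elements and policy logic are far richer.

```python
# Toy sketch of de-identified disclosure: release only fields that are not
# direct identifiers. Field names and the identifier set are assumptions,
# not the virtual bank's actual common data elements.

DIRECT_IDENTIFIERS = {"name", "mrn", "date_of_birth", "address"}

def deidentify(case: dict) -> dict:
    """Return a copy of the case record with direct identifiers removed."""
    return {k: v for k, v in case.items() if k not in DIRECT_IDENTIFIERS}

case = {
    "name": "Jane Doe",            # identifier: withheld
    "mrn": "123-45-678",           # identifier: withheld
    "diagnosis": "epithelioid mesothelioma",
    "specimen_types": ["paraffin block", "serum", "genomic DNA"],
    "age_at_diagnosis": 67,
}

print(deidentify(case))
```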