
    Data DNA: The Next Generation of Statistical Metadata

    Describes the components of a complete statistical metadata system and suggests ways to create and structure metadata for better access and understanding of data sets by diverse users.

    Internet skills performance tests: are people ready for eHealth?

    Background: Despite the amount of online health information, several barriers limit the Internet's adoption as a source of health information. One of these barriers is highlighted in conceptualizations of the digital divide that include the differential possession of Internet skills, or "eHealth literacy". Most measures of Internet skills among populations at large use self-assessments. The research discussed here applies a multifaceted definition of Internet skills and uses actual performance tests.

    Objective: The purpose of this study was to assess how ready a sample of the general population is for eHealth. More specifically, four types of Internet skills were measured in a performance test in which subjects had to complete health-related assignments on the Internet.

    Methods: From November 1, 2009, through February 28, 2010, 88 subjects participated in the study. Subjects were randomly selected from a telephone directory. A selective quota sample was used, divided into equal subsamples by gender, age, and education. Each subject had to accomplish assignments on the Internet. The Internet skills accounted for were categorized as operational (basic skills to use the Internet), formal (navigation and orientation), information (finding information), and strategic (using the information for personal benefit). The tests took approximately 1.5 hours and were conducted in a university office, making the setting equally new for all. Successful completion and time spent on the assignments, the two main outcomes, were directly measured by the test leader.

    Results: The subjects successfully completed an average of 73% (5.8/8) of the operational Internet skills tasks and an average of 73% (2.9/4) of the formal Internet skills tasks. Of the information Internet skills tasks, an average of 50% (1.5/3) was completed successfully and, of the strategic Internet skills tasks, 35% (0.7/2). Only 28% (25/88) of the subjects were able to successfully complete all operational skills tasks, 39% (34/88) all formal skills tasks, 13% (11/88) all information skills tasks, and 20% (18/88) both strategic skills tasks. The time spent on the assignments varied substantially. Age and education were the most important contributors to the operational and formal Internet skills. Regarding the formal Internet skills, years of Internet experience also had some influence. Level of educational attainment was the most important contributor to the information and strategic Internet skills.

    Conclusions: Although the amount of online health-related information and services is consistently growing, it appears that the general population lacks the skills to keep up. Most problematic appears to be the lack of information and strategic Internet skills, which, in the context of health, are very important. The lack of these skills is also problematic for members of younger generations, who are often considered skilled Internet users; this mainly applies to the operational and formal Internet skills. The results of the study strongly call for policies to increase the level of Internet skills.

    Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality

    We survey diverse approaches to the notion of information, from Shannon entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov complexity are presented: randomness and classification. The survey is divided into two parts, published in the same volume. Part II is dedicated to the relation between logic and information systems, within the scope of Kolmogorov's algorithmic information theory. We present a recent application of Kolmogorov complexity: classification using compression, an idea with a provocative implementation by authors such as Bennett, Vitanyi and Cilibrasi. This stresses how Kolmogorov complexity, besides being a foundation of randomness, is also related to classification. Another approach to classification is also considered: the so-called "Google classification". It uses another original and attractive idea, which is connected to classification using compression and to Kolmogorov complexity from a conceptual point of view. We present and unify these different approaches to classification in terms of Bottom-Up versus Top-Down operational modes, pointing out their fundamental principles and underlying duality. We look at the way these two dual modes are used in different approaches to information systems, in particular the relational model for databases introduced by Codd in the 1970s. This allows us to point out diverse forms of a fundamental duality. These operational modes are also reinterpreted in the context of the comprehension schema of the axiomatic set theory ZF. This leads us to develop how Kolmogorov complexity is linked to intensionality, abstraction, classification and information systems.
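    The classification-by-compression idea mentioned in this abstract is usually made concrete through the Normalized Compression Distance, which replaces the uncomputable Kolmogorov complexity with the output length of a real compressor. The following Python sketch is only illustrative: the choice of zlib, the toy corpus and the labels are assumptions for the example, not material from the survey.

```python
# Minimal sketch of classification by compression (Normalized Compression Distance).
# zlib stands in for an ideal compressor; corpus and labels are made-up examples.
import zlib

def c(x: bytes) -> int:
    """Length of the compressed string, a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance between two byte strings."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(sample: str, labelled: dict[str, list[str]]) -> str:
    """Assign the label whose examples are, on average, closest to the sample under NCD."""
    s = sample.encode()
    return min(
        labelled,
        key=lambda label: sum(ncd(s, ex.encode()) for ex in labelled[label]) / len(labelled[label]),
    )

corpus = {
    "french": ["le chat dort sur le canape", "la maison est pres de la riviere"],
    "english": ["the cat sleeps on the sofa", "the house is near the river"],
}
print(classify("le chien dort pres de la maison", corpus))  # intended label: "french"
```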

    Statically Checking Web API Requests in JavaScript

    Many JavaScript applications perform HTTP requests to web APIs, relying on the request URL, HTTP method, and request data to be constructed correctly by string operations. Traditional compile-time error checking, such as detecting a call to a non-existent method in Java, is not available for checking whether such requests comply with the requirements of a web API. In this paper, we propose an approach to statically check web API requests in JavaScript. Our approach first extracts a request's URL string, HTTP method, and the corresponding request data using an inter-procedural string analysis, and then checks whether the request conforms to given web API specifications. We evaluated our approach by checking whether web API requests in JavaScript files mined from GitHub are consistent or inconsistent with publicly available API specifications. For the 6575 requests in scope, our approach determined whether the request's URL and HTTP method were consistent or inconsistent with web API specifications with a precision of 96.0%. Our approach also correctly determined whether extracted request data was consistent or inconsistent with the data requirements, with a precision of 87.9% for payload data and 99.9% for query data. In a systematic analysis of the inconsistent cases, we found that many of them were due to errors in the client code. The checker proposed here can be integrated with code editors or with continuous integration tools to warn programmers about code containing potentially erroneous requests.
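    To make the checking step tangible, the sketch below compares an already extracted (method, URL, data) triple against a toy specification. It is a simplified stand-in: the paper's inter-procedural string analysis is not reproduced, and the spec format, endpoints and field names are hypothetical rather than taken from any real web API.

```python
# Minimal sketch of the consistency check only: extracted request facts vs. a toy spec.
# SPEC, the endpoints and the field names are hypothetical examples.
import re

SPEC = {
    ("GET", r"/users/\d+"): {"query": set()},
    ("POST", r"/users"): {"payload": {"name", "email"}},
}

def check_request(method: str, path: str, query=frozenset(), payload=frozenset()) -> list[str]:
    """Return a list of inconsistencies between an extracted request and the spec."""
    for (m, pattern), reqs in SPEC.items():
        if m == method and re.fullmatch(pattern, path):
            problems = []
            missing = reqs.get("payload", set()) - set(payload)   # required payload fields not supplied
            if missing:
                problems.append(f"missing payload fields: {sorted(missing)}")
            unknown = set(query) - reqs.get("query", set())        # query parameters the spec does not know
            if unknown:
                problems.append(f"unknown query parameters: {sorted(unknown)}")
            return problems
    return [f"no endpoint in the spec matches {method} {path}"]

print(check_request("POST", "/users", payload={"name"}))  # reports the missing 'email' field
print(check_request("GET", "/users/42"))                   # [] (consistent)
```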

    Towards automated knowledge-based mapping between individual conceptualisations to empower personalisation of Geospatial Semantic Web

    The geospatial domain is characterised by vagueness, especially in the semantic disambiguation of its concepts, which makes defining a universally accepted geo-ontology an onerous task. This is compounded by the lack of appropriate methods and techniques by which individual semantic conceptualisations can be captured and compared with each other. With multiple user conceptualisations, efforts towards a reliable Geospatial Semantic Web therefore require personalisation, in which user diversity can be incorporated. The work presented in this paper is part of our ongoing research on applying commonsense reasoning to elicit and maintain models that represent users' conceptualisations. Such user models will enable taking the users' perspective of the real world into account and will empower personalisation algorithms for the Semantic Web. Intelligent information processing over the Semantic Web can be achieved if different conceptualisations can be integrated in a semantic environment and mismatches between them can be outlined. In this paper, a formal approach for detecting mismatches between a user's and an expert's conceptual model is outlined. The formalisation is used as the basis to develop algorithms that compare models defined in OWL. The algorithms are illustrated in a geographical domain using concepts from the SPACE ontology, developed as part of the SWEET suite of ontologies for the Semantic Web by NASA, and are evaluated by comparing test cases of possible user misconceptions.
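    As a rough illustration of the mismatch-detection idea, the sketch below reduces each conceptualisation to a set of is-a assertions and reports entailments present in one model but not the other, plus reversed relations. It does not use OWL or a DL reasoner, and the concept names are invented for the example rather than taken from the SPACE ontology.

```python
# Illustrative sketch only: each conceptualisation is a set of (subconcept, superconcept)
# assertions; mismatches are computed on their transitive closures.
def transitive_closure(edges: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """All (a, b) pairs such that 'a is-a b' follows from the asserted subclass edges."""
    closure = set(edges)
    changed = True
    while changed:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        changed = not new <= closure
        closure |= new
    return closure

def mismatches(user: set[tuple[str, str]], expert: set[tuple[str, str]]) -> dict[str, set]:
    u, e = transitive_closure(user), transitive_closure(expert)
    return {
        "missing_in_user_model": e - u,                         # expert entailments the user lacks
        "extra_in_user_model": u - e,                           # user entailments the expert does not share
        "conflicts": {(a, b) for (a, b) in u if (b, a) in e},   # reversed is-a relations
    }

expert_model = {("River", "WaterBody"), ("Canal", "WaterBody"), ("WaterBody", "GeographicFeature")}
user_model = {("River", "WaterBody"), ("WaterBody", "Canal")}
print(mismatches(user_model, expert_model))
```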

    Supporting Semantically Enhanced Web Service Discovery for Enterprise Application Integration

    The availability of sophisticated Web service discovery mechanisms is an essential prerequisite for increasing the levels of efficiency and automation in enterprise application integration (EAI). In this chapter, we present an approach for developing service registries that builds on the UDDI standard and offers semantically enhanced publication and discovery capabilities in order to overcome some of the known limitations of conventional service registries. The approach aspires to promote efficiency in EAI in a number of ways, but primarily by automating the task of evaluating service integrability on the basis of the input and output messages defined in a Web service's interface. The presented solution combines three technology standards to meet its objectives: OWL-DL, for modelling service characteristics and performing fine-grained service matchmaking via DL reasoning; SAWSDL, for creating semantically annotated descriptions of service interfaces; and UDDI, for storing and retrieving syntactic and semantic information about services and service providers.
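    The matchmaking step can be illustrated with the customary degrees of match (exact, plug-in, subsumes, fail) between a requested and an advertised concept. The Python sketch below substitutes a toy is-a hierarchy for the OWL-DL reasoner used in the chapter; the concept names and the exact degree rules are assumptions drawn from the general matchmaking literature, not the chapter's implementation.

```python
# Toy matchmaking sketch: a hand-written is-a hierarchy plays the role of a DL reasoner.
HIERARCHY = {"Sedan": "Car", "Car": "Vehicle", "Vehicle": None}

def ancestors(concept: str) -> set[str]:
    """All superconcepts of a concept in the toy hierarchy."""
    result = set()
    while (concept := HIERARCHY.get(concept)) is not None:
        result.add(concept)
    return result

def subsumes(general: str, specific: str) -> bool:
    return general == specific or general in ancestors(specific)

def match_degree(requested: str, advertised: str) -> str:
    if requested == advertised:
        return "exact"
    if subsumes(advertised, requested):   # advertised concept is more general than requested
        return "plug-in"
    if subsumes(requested, advertised):   # requested concept is more general than advertised
        return "subsumes"
    return "fail"

print(match_degree("Sedan", "Car"))   # plug-in
print(match_degree("Car", "Sedan"))   # subsumes
```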