270 research outputs found

    Data informed learning: A next phase data literacy framework for higher education

    This poster was presented at the Association for Information Science and Technology (ASIS&T) Annual Meeting in St. Louis, MO, on November 9, 2015. Accessing, using, and managing data is increasingly recognized as an important learning outcome in higher education. Approaches to data literacy have typically been informed by information literacy. New approaches to information literacy have emerged that address how information is used in the different disciplinary contexts in which people learn and work. Successful approaches to data literacy will also need to address such contextual concerns. Informed learning is an approach to information literacy that purposefully addresses contextual concerns by suggesting pedagogic strategies for enabling students to use information in ways that support discipline-focused learning outcomes. As part of an ongoing investigation, we advance data informed learning as a framework for data literacy in higher education that emphasizes how data are used to learn and communicate within disciplinary learning contexts. Drawing on informed learning, we outline the principles and characteristics of data informed learning and suggest future directions for investigating how data are used in real-world environments.

    An evaluation of Bradfordizing effects

    The purpose of this paper is to apply and evaluate the bibliometric method Bradfordizing in information retrieval (IR) experiments. Bradfordizing is used to generate core document sets for subject-specific questions and to reorder result sets from distributed searches. The method is applied and tested in a controlled scenario of scientific literature databases from the social and political sciences, economics, psychology, and medical science (SOLIS, SoLit, USB Köln Opac, CSA Sociological Abstracts, World Affairs Online, Psyndex and Medline) and 164 standardized topics. An evaluation of the method and its effects is carried out in two laboratory-based information retrieval experiments (CLEF and KoMoHe) using a controlled document corpus and human relevance assessments. The results show that Bradfordizing is a very robust method for re-ranking the main document types (journal articles and monographs) in today's digital libraries (DL). The IR tests show that relevance distributions after re-ranking improve significantly when articles in the core are compared with articles in the succeeding zones. Items in the core are significantly more often assessed as relevant than items in zone 2 (z2) or zone 3 (z3). The improvements between the zones are statistically significant according to the Wilcoxon signed-rank test and the paired t-test.
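
    The abstract does not spell out the re-ranking procedure itself. The following is a minimal sketch of Bradford-style zone re-ranking, assuming a result set given as (document, journal) pairs; the function names and the three-zone split are illustrative rather than taken from the paper.

```python
from collections import Counter

def bradford_zones(journals, n_zones=3):
    """Assign each journal to a Bradford zone (1 = core) so that the zones
    hold roughly equal numbers of retrieved articles."""
    counts = Counter(journals).most_common()          # journals by productivity
    total = sum(count for _, count in counts)
    per_zone = total / n_zones
    zones, cumulative, zone = {}, 0, 1
    for journal, count in counts:
        zones[journal] = zone
        cumulative += count
        if cumulative >= per_zone * zone and zone < n_zones:
            zone += 1
    return zones

def bradfordize(results):
    """Re-rank (doc_id, journal) pairs so that core-zone documents come first."""
    zones = bradford_zones([journal for _, journal in results])
    return sorted(results, key=lambda result: zones[result[1]])

# Illustrative use: documents from the most productive journal move to the top.
results = [("d1", "J. Obscure"), ("d2", "Soc. Rev."), ("d3", "Soc. Rev."),
           ("d4", "Pol. Stud."), ("d5", "Soc. Rev."), ("d6", "Pol. Stud.")]
print(bradfordize(results))
```

    On a real test collection, the relevance assessments of core-zone and outer-zone documents could then be compared with scipy.stats.wilcoxon, mirroring the Wilcoxon signed-rank test reported in the abstract.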

    Design implications for task-specific search utilities for retrieval and re-engineering of code

    The importance of information retrieval systems is unquestionable in modern society, and individuals as well as enterprises recognise the benefits of being able to find information effectively. Current code-focused information retrieval systems such as Google Code Search, Codeplex or Koders produce results based on specific keywords. However, these systems do not take into account developers' context, such as the development language, technology framework, goal of the project, project complexity, and the developer's domain expertise. They also impose an additional cognitive burden on users, who must switch between different interfaces and click through to find the relevant code. Hence, they are not used by software developers. In this paper, we discuss how software engineers interact with information and with general-purpose information retrieval systems (e.g. Google, Yahoo!) and investigate to what extent domain-specific search and recommendation utilities can be developed to support their work-related activities. To investigate this, we conducted a user study and found that software engineers followed many identifiable and repeatable work tasks and behaviours. These behaviours can be used to develop implicit relevance feedback systems driven by the observed retention actions. Moreover, we discuss the implications for the development of task-specific search and collaborative recommendation utilities embedded within the standard Google search engine and Microsoft IntelliSense for the retrieval and re-engineering of code. Based on implicit relevance feedback, we implemented a prototype of the proposed collaborative recommendation system, which was evaluated in a controlled environment simulating the real-world situation of professional software engineers. The evaluation achieved promising initial results for the precision and recall performance of the system.
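
    The prototype itself is not described in code in the abstract. The sketch below only illustrates the general idea of turning retention actions into implicit relevance feedback for a collaborative recommender; the class, action types, and weights are invented for illustration and are not the authors' system.

```python
from collections import defaultdict

class ImplicitFeedbackRecommender:
    """Toy collaborative recommender driven by retention actions.

    Retention actions (copying, bookmarking, reusing a snippet) are logged per
    query; snippets retained most often by earlier developers for similar
    queries are ranked first for later users.
    """
    # Weights for different retention actions (illustrative values only).
    ACTION_WEIGHTS = {"copy": 1.0, "bookmark": 2.0, "reuse": 3.0}

    def __init__(self):
        # query term -> snippet id -> accumulated implicit relevance score
        self.scores = defaultdict(lambda: defaultdict(float))

    def log_retention(self, query, snippet_id, action):
        weight = self.ACTION_WEIGHTS.get(action, 0.0)
        for term in query.lower().split():
            self.scores[term][snippet_id] += weight

    def recommend(self, query, top_n=5):
        ranked = defaultdict(float)
        for term in query.lower().split():
            for snippet_id, score in self.scores[term].items():
                ranked[snippet_id] += score
        return sorted(ranked, key=ranked.get, reverse=True)[:top_n]

# One developer's retention actions inform recommendations for the next.
recommender = ImplicitFeedbackRecommender()
recommender.log_retention("parse xml java", "snippet-42", "reuse")
recommender.log_retention("parse xml java", "snippet-17", "copy")
print(recommender.recommend("java xml parser"))  # snippet-42 ranks above snippet-17
```

    Weighting stronger commitments (reusing a snippet) above weaker signals (copying) is one plausible design choice; the actual signals and weights would need to be derived from the observed work tasks and behaviours.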

    Who shares health and medical scholarly articles on Facebook?

    Over a million journal articles had been shared on public Facebook pages by 2017, but little is known about who is sharing (posting links to) these papers and whether mention counts could serve as an impact indicator. This study classified users who had posted about 749 links on Facebook before October 2017 mentioning 500 medical and health-related research articles, identified using altmetric.com data. Most accounts (68%) belonged to groups, including online communities, journals, academic organizations, and societies. Among individual profiles, academics accounted for only 4%, while the largest group was health care professionals (16%). More than half (58%) of all Facebook accounts examined were not academic. This non-academic dominance suggests that public Facebook posts linking to health-related articles mostly serve to facilitate the flow of scientific knowledge between non-academic professionals and the public. Therefore, Facebook mention counts may be a combined academic and non-academic attention indicator in the health and medical domains.

    Spatially Explicit Data: Stewardship and Ethical Challenges in Science

    Scholarly communication is at an unprecedented turning point, created in part by the increasing salience of data stewardship and data sharing. Formal data management plans represent a new emphasis in research, enabling access to data at higher volumes and more quickly, and creating the potential for replication and augmentation of existing research. Data sharing has recently transformed the practice, scope, content, and applicability of research in several disciplines, particularly in relation to spatially specific data. This offers exciting potential, but the most effective ways to implement such changes, particularly for disciplines involving human subjects and other sensitive information, demand careful consideration. Data management plans, stewardship, and sharing pose distinctive technical, sociological, and ethical challenges that remain to be adequately identified and remedied. Here, we consider these challenges and propose potential solutions for their amelioration.

    “It’s just a theory”: trainee science teachers’ misunderstandings of key scientific terminology

    Background: This article presents the findings from a survey of 189 pre-service science teachers who were asked to provide definitions of key scientific terms ('theory'; 'fact'; 'law'; 'hypothesis'). The survey was a scoping and mapping exercise to establish the range and variety of definitions. Methods: Graduates on a pre-service science teacher training course were asked to complete a short, free-response survey and to define key science terminology. A response rate of over 95% was achieved, and respondents' definitions were categorised according to a best-fit model. Results: In some cases, definitions contrary to accepted scientific meanings were given. In other cases, terminology was defined in a wholly non-scientific way; for example, one-fifth of respondents defined a 'law' in the context of rules that govern society rather than in a scientific context. Science graduates' definitions and their understanding of key terminology are poor despite their study of science in formal university settings (with many respondents being recent science graduates). Conclusions: Key terminology in science, such as 'theory', 'law', 'fact' and 'hypothesis', tends not to be taught and defined with consideration for the differences in meaning that different audiences and users attach to these terms. This article calls for better instruction for pre-service science teachers in the importance of accurate and precise definitions of key science terminology, in order to better differentiate between scientific and colloquial usage of key terms.

    Research data management and libraries: Relationships, activities, drivers and influences

    The management of research data is now a major challenge for research organisations. Vast quantities of born-digital data are being produced in a wide variety of forms at a rapid rate in universities. This paper analyses the contribution of academic libraries to research data management (RDM) in the wider institutional context. In particular, it examines the roles and relationships involved in RDM, identifies the main components of an RDM programme, evaluates the major drivers for RDM activities, and analyses the key factors influencing the shape of RDM developments. The study is written from the perspective of library professionals, analysing data from 26 semi-structured interviews with library staff from different UK institutions. It is an early qualitative contribution to the topic, complementing existing quantitative and case-study approaches. Results show that although libraries are playing a significant role in RDM, there is uncertainty and variation in their relationships with other stakeholders such as IT services and research support offices. Current emphases in RDM programmes are on the development of policies and guidelines, with some early work on technology infrastructures and support services. Drivers for development include storage, security, quality, compliance, preservation, and sharing, with libraries associated most closely with the last three. The paper also highlights a 'jurisdictional' driver, in which libraries are claiming a role in this space. A wide range of factors, including governance, resourcing and skills, are identified as influencing ongoing developments. From the analysis, a model is constructed to capture the main aspects of an institutional RDM programme. This model helps to clarify the different issues involved in RDM, identifying layers of activity, multiple stakeholders and drivers, and a large number of factors influencing the implementation of any initiative. Institutions may usefully benchmark their activities against the data and the model in order to inform their ongoing RDM activity.

    What do evidence-based secondary journals tell us about the publication of clinically important articles in primary healthcare journals?

    BACKGROUND: We conducted this analysis to determine i) which journals publish high-quality, clinically relevant studies in internal medicine, general/family practice, general practice nursing, and mental health; and ii) the proportion of clinically relevant articles in each journal. METHODS: We performed an analytic survey of a hand search of 170 general medicine, general healthcare, and specialty journals for the year 2000. Research staff assessed individual articles using explicit criteria for scientific merit for healthcare application. Practitioners assessed the clinical importance of these articles. Outcome measures were the number of high-quality, clinically relevant studies published in the 170 journal titles and how many of these were published in each of four discipline-specific, secondary "evidence-based" journals (ACP Journal Club for internal medicine and its subspecialties; Evidence-Based Medicine for general/family practice; Evidence-Based Nursing for general practice nursing; and Evidence-Based Mental Health for all aspects of mental health). Original studies and review articles were classified by purpose: therapy and prevention, screening and diagnosis, prognosis, etiology and harm, economics and cost, clinical prediction guides, and qualitative studies. RESULTS: We evaluated 60,352 articles from 170 journal titles. The pass criteria of high-quality methods and clinically relevant material were met by 3059 original articles and 1073 review articles. For ACP Journal Club (internal medicine), four titles supplied 56.5% of the articles and 27 titles supplied the other 43.5%. For Evidence-Based Medicine (general/family practice), five titles supplied 50.7% of the articles and 40 titles supplied the remaining 49.3%. For Evidence-Based Nursing (general practice nursing), seven titles supplied 51.0% of the articles and 34 additional titles supplied 49.0%. For Evidence-Based Mental Health (mental health), nine titles supplied 53.2% of the articles and 34 additional titles supplied 46.8%. For the disciplines of internal medicine, general/family practice, and mental health (but not general practice nursing), the number of clinically important articles was correlated with Science Citation Index (SCI) Impact Factors. CONCLUSIONS: Although many clinical journals publish high-quality, clinically relevant and important original studies and systematic reviews, the articles for each discipline studied were concentrated in a small subset of journals. This subset varied according to healthcare discipline; however, many of the important articles for all disciplines in this study were published in broad-based healthcare journals rather than subspecialty or discipline-specific journals.

    Scatter networks: a new approach for analysing information scatter

    Information on any given topic is often scattered across the Web. Previously, this scatter has been characterized through the inequality of the distribution of facts (i.e. pieces of information) across webpages. Such an approach conceals how specific facts (e.g. rare facts) occur in specific types of pages (e.g. fact-rich pages). To reveal such regularities, we construct bipartite networks consisting of two types of vertices: the facts contained in webpages and the webpages themselves. Such a representation enables the application of a series of network analysis techniques, revealing structural features such as connectivity, robustness and clustering. Not only does network analysis yield new insights into information scatter, but we also illustrate the benefit of applying new and existing analysis techniques directly to a bipartite network as opposed to its one-mode projection. We discuss the implications of each network feature for users' ability to find comprehensive information online. Finally, we compare the bipartite graph structure of webpages and facts with the hyperlink structure between the webpages.
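
    No code accompanies the abstract; as a rough illustration of the bipartite fact-webpage representation and of what a one-mode projection discards, here is a small sketch using the networkx library with invented pages and facts.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Bipartite network: one vertex set for webpages, one for facts;
# an edge means the page contains the fact.
page_facts = {
    "page_A": {"fact_1", "fact_2", "fact_3"},
    "page_B": {"fact_2"},
    "page_C": {"fact_2", "fact_3", "fact_4"},
}

B = nx.Graph()
pages = list(page_facts)
facts = sorted({fact for contents in page_facts.values() for fact in contents})
B.add_nodes_from(pages, bipartite=0)
B.add_nodes_from(facts, bipartite=1)
B.add_edges_from((page, fact) for page, contents in page_facts.items() for fact in contents)

# Structural features computed directly on the bipartite graph:
# degrees expose fact-rich pages and rare facts.
print("degrees:", dict(B.degree()))
print("density:", bipartite.density(B, pages))
print("clustering:", bipartite.clustering(B))

# The one-mode projection onto webpages links pages that share at least one
# fact, but no longer records which facts they share.
P = bipartite.projected_graph(B, pages)
print("projection edges:", list(P.edges()))
```

    The projection shows only that two pages are connected through some shared fact, which is the kind of information loss the abstract contrasts with analysing the bipartite network directly.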