
    Using Google Analytics, Voyant and Other Tools to Better Understand Use of Manuscript Collections at L. Tom Perry Special Collections

    [Excerpt] Developing strategies for making data-driven, objective decisions about digitization and value-added processing based on patron usage has been an important effort in the L. Tom Perry Special Collections (hereafter Perry Special Collections). In a previous study, the authors looked at how creating a matrix from both Web analytics and in-house use statistics could provide a solid basis for deciding which collections to digitize and which merited deeper description. Along with providing this basis for decision making, the study revealed some intriguing insights into how our collections were being used and raised important questions about the impact of description on both digital and physical usage. We have continued analyzing the data from our first study, and that data forms the basis of the current study. It is helpful to review the major outcomes of our previous study before looking at what we have learned in this deeper analysis. In the first study, we used three sources of statistical data to compare two distinct data points (in-house use and online finding aid use) and determine whether there were patterns or other information that would help curators in the department make better decisions about the items or collections selected for digitization or value-added processing. To obtain our data points, we combined two data sources related to the in-person use of manuscript collections in the Perry Special Collections reading room and one related to the use of finding aids for manuscript collections made available online through the department’s Finding Aid database (http://findingaid.lib.byu.edu/). We mapped the resulting data points onto a four-quadrant graph (see figure 1).
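
    To make the quadrant mapping concrete, here is a minimal Python sketch that classifies collections by whether their in-house use and online finding-aid views fall above or below the median of each measure. The collection names, counts, and median thresholds are hypothetical illustrations, not the study's actual data or cut-offs.

```python
from statistics import median

# Hypothetical usage data: (in-house reading-room uses, online finding-aid views).
# These numbers are illustrative only, not drawn from the study.
collections = {
    "MSS 101": (42, 310),
    "MSS 205": (3, 450),
    "MSS 318": (57, 12),
    "MSS 402": (1, 8),
}

in_house_median = median(u for u, _ in collections.values())
online_median = median(v for _, v in collections.values())

def quadrant(in_house: int, online: int) -> str:
    """Place a collection in one of the four quadrants of the usage graph."""
    if in_house >= in_house_median and online >= online_median:
        return "high in-house / high online"   # strong digitization candidate
    if in_house >= in_house_median:
        return "high in-house / low online"    # may merit richer description
    if online >= online_median:
        return "low in-house / high online"
    return "low in-house / low online"

for name, (u, v) in collections.items():
    print(f"{name}: {quadrant(u, v)}")
```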

    Representation Independent Analytics Over Structured Data

    Database analytics algorithms leverage quantifiable structural properties of the data to predict interesting concepts and relationships. The same information, however, can be represented using many different structures, and the structural properties observed over particular representations do not necessarily hold for alternative structures. Thus, there is no guarantee that current database analytics algorithms will still provide the correct insights, no matter what structures are chosen to organize the database. Because these algorithms tend to be highly effective over some choices of structure, such as that of the databases used to validate them, but not so effective over others, database analytics has largely remained the province of experts who can find the desired forms for these algorithms. We argue that to make database analytics usable, we should use or develop algorithms that are effective over a wide range of structural organizations. We introduce the notion of representation independence, study its fundamental properties for a wide range of data analytics algorithms, and empirically analyze the degree of representation independence of some popular database analytics algorithms. Our results indicate that most algorithms are not generally representation independent, and we identify the characteristics of heuristics that remain more representation independent under certain representational shifts.
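
    The representation dependence the abstract describes can be illustrated with a toy sketch (not taken from the paper): the same categorical information is encoded two structurally different ways, and the same clustering algorithm is run over both. The data, encodings, and choice of k-means are hypothetical assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# The same categorical attribute for six records...
labels = ["red", "red", "blue", "blue", "green", "green"]

# ...encoded as arbitrary integer codes in one representation
codes = {"red": 0, "blue": 1, "green": 5}
rep_a = np.array([[codes[l]] for l in labels])

# ...and as one-hot vectors in another
onehot = {"red": [1, 0, 0], "blue": [0, 1, 0], "green": [0, 0, 1]}
rep_b = np.array([onehot[l] for l in labels])

km_a = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(rep_a)
km_b = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(rep_b)

print("clusters over integer codes:", km_a)
print("clusters over one-hot codes:", km_b)
# A representation independent algorithm would produce the same partition
# (up to relabeling) over both encodings; here the arbitrary distances
# implied by the integer codes can drive the grouping instead.
```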

    The lifecycle of provenance metadata and its associated challenges and opportunities

    This chapter outlines some of the challenges and opportunities associated with adopting provenance principles and standards across a variety of disciplines, including data publication and reuse and the information sciences.
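
    As a concrete illustration of the kind of provenance metadata the chapter concerns, here is a minimal sketch loosely modeled on W3C PROV concepts (entity, activity, agent) across one lifecycle step. The field names, identifiers, and lifecycle stages are illustrative assumptions, not a formal PROV serialization.

```python
from datetime import datetime, timezone

# Hypothetical provenance record for one step in a dataset's lifecycle:
# a cleaned dataset derived from a raw one and then published.
provenance = {
    "entity": "dataset:survey-2023-cleaned",
    "wasDerivedFrom": "dataset:survey-2023-raw",
    "wasGeneratedBy": {
        "activity": "cleaning-and-deduplication",
        "endedAt": datetime(2023, 6, 1, tzinfo=timezone.utc).isoformat(),
    },
    "wasAttributedTo": "agent:data-curation-team",
    "publishedAs": "doi:10.xxxx/example",  # placeholder identifier
}

def lineage(record: dict) -> str:
    """Summarize one step in the provenance chain for human readers."""
    return (f"{record['entity']} derived from {record['wasDerivedFrom']} "
            f"via {record['wasGeneratedBy']['activity']}")

print(lineage(provenance))
```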

    How can SMEs benefit from big data? Challenges and a path forward

    Big data is big news, and large companies in all sectors are making significant advances in their customer relations, product selection and development, and consequent profitability through using this valuable commodity. Small and medium enterprises (SMEs) have proved to be slow adopters of big data analytics and are in danger of being left behind. In Europe, SMEs are a vital part of the economy, and the challenges they encounter need to be addressed as a matter of urgency. This paper identifies barriers to SME uptake of big data analytics and recognises the complex challenge they pose to all stakeholders, including national and international policy makers, IT, business management, and data science communities. The paper proposes a big data maturity model for SMEs as a first step towards an SME roadmap to data analytics. It considers the state of the art of IT with respect to usability and usefulness for SMEs and discusses how SMEs can overcome the barriers preventing them from adopting existing solutions. The paper then considers management perspectives and the role of maturity models in enhancing and structuring the adoption of data analytics in an organisation. The history of total quality management is reviewed to inform the core aspects of embedding a new paradigm. The paper concludes with recommendations to help SMEs develop their big data capability and continue as the engines of European industrial and business success. Copyright © 2016 John Wiley & Sons, Ltd.
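
    To suggest how a staged maturity model of this kind might be operationalized as a self-assessment, here is a minimal sketch. The level names and criteria below are hypothetical illustrations and are not the model the paper proposes.

```python
# Hypothetical maturity levels, ordered from least to most mature,
# each gated by an illustrative set of capability flags.
REQUIRED = {
    "unaware": set(),
    "reactive": {"collects_data"},
    "managed": {"collects_data", "central_store"},
    "analytical": {"collects_data", "central_store", "analytics_team"},
    "data-driven": {"collects_data", "central_store", "analytics_team",
                    "automated_decisions"},
}
ORDER = ["unaware", "reactive", "managed", "analytical", "data-driven"]

def assess(capabilities: set[str]) -> str:
    """Return the highest level whose (hypothetical) criteria are all met."""
    current = "unaware"
    for level in ORDER:
        if REQUIRED[level] <= capabilities:  # subset check
            current = level
    return current

print(assess({"collects_data", "central_store"}))  # -> "managed"
```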

    Data solutions to transform agriculture: progress and experiences

    • …