
    Discovery Tools and Local Metadata Requirements in Academic Libraries

    As the second decade of the twenty-first century commences, academic librarians who work to promote collection access must not only contend with a vast array of content available in a wide range of formats, but must also ensure that new technologies developed to accommodate user search behaviors yield satisfactory outcomes. Next-generation discovery tools are designed to streamline the search process and facilitate better search results by incorporating metadata from proprietary and local collections and then providing relevancy-ranked results. This paper investigates the implications of discovery tool use for accessing materials housed in institutional repositories and special collections, in particular how the discovery of these materials depends on local metadata creation practices. It surveys current research on metadata quality issues that may put unique local collections at risk of being overlooked in meta-search relevancy rankings, and considers ways in which academic libraries can address this issue, as well as areas for future research.
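    The risk described above, thin local records sinking in meta-search relevancy rankings, can be made concrete with a small audit script. The following is a minimal sketch in Python; the required-field list, threshold, and sample records are illustrative assumptions, not standards drawn from the paper.

```python
# Minimal sketch of a local metadata completeness audit. The required-field
# list is an illustrative assumption, not a standard cited by the paper.
REQUIRED = ["title", "creator", "date", "subject", "abstract"]

def completeness(record: dict) -> float:
    """Fraction of required fields that are present and non-empty."""
    return sum(1 for f in REQUIRED if record.get(f)) / len(REQUIRED)

records = [  # hypothetical special-collections records
    {"title": "Oral History Tape 14", "creator": "Unknown", "date": "1974"},
    {"title": "Campus Map, 1921", "creator": "J. Doe", "date": "1921",
     "subject": "cartography", "abstract": "Hand-drawn map of the quad."},
]

for r in records:
    score = completeness(r)
    if score < 1.0:  # thin records risk being buried by relevancy ranking
        print(f"{r['title']!r}: {score:.0%} complete - enrich before harvesting")
```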

    Alternate Means of Digital Design Communication

    This thesis reconceptualises communication in digital design as an integrated social and technical process. The friction in the communicative processes of digital design can be traced to the fact that current research and practice emphasise technical concerns at the expense of the social aspects of design communication. With the advent of BIM (Building Information Modelling), a code model of communication (machine-to-machine) is inadequately applied to design communication. This thesis addresses the imbalance by using inferential models of communication to capture and frame the psychological and social aspects behind the communicative contracts between people. Three critical aspects of the communicative act are analysed, namely (1) data representation, (2) data classification and (3) data transaction, with the help of a new digital design communication platform, Speckle, which was developed during this research project for this purpose. By virtue of an applied living-laboratory context, Speckle facilitated both qualitative and quantitative comparisons against existing methodologies with data from real-world settings. Regarding data representation (1), this research finds that the communicative performance of a low-level composable object model is better than that of a complete and universal one, as it enables a more dynamic process of ontological revision; this implies that current practice and research operate at an inappropriate level of abstraction. On data classification (2), the thesis shows that a curatorial, object-based data sharing methodology, as opposed to current file-based approaches, leads to increased relevancy and a reduction in noise (information without intent or meaning). Finally, on data transaction (3), the analysis shows that an object-based data sharing methodology is technically better suited to enabling communicative contracts between stakeholders: it allows for faster and more meaningful change-dependent transactions, as well as for the emergence of traceable communicative networks outside the predefined exchanges of current practice.
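    To illustrate what a low-level composable object model means in contrast to a complete, universal schema, the following sketch models design objects as hash-addressed property bags that reference one another. This is plain Python written for illustration under stated assumptions, not the actual Speckle SDK or its schema.

```python
# Illustrative sketch of a low-level composable object model, in the spirit
# of the thesis's argument; not the actual Speckle SDK or object schema.
import hashlib
import json

def make_object(**props) -> dict:
    """A design object is just a bag of properties plus a content hash,
    so any stakeholder can extend the ontology without a central schema."""
    obj = dict(props)
    obj["id"] = hashlib.sha1(
        json.dumps(props, sort_keys=True).encode()).hexdigest()[:8]
    return obj

# Objects compose by reference rather than living inside one monolithic file.
wall = make_object(type="Wall", height=3.0, material="concrete")
room = make_object(type="Room", name="Lobby", walls=[wall["id"]])

print(room)
```

    Because objects are addressed by content hash, a transaction only needs to carry the objects whose hashes changed, which is what makes change-dependent exchanges cheaper than re-sending a whole file.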

    BIBFRAME Linked Data: A Conceptual Study on the Prevailing Content Standards and Data Model

    The BIBFRAME model is designed with a high degree of flexibility: it can accommodate any number of existing models, as well as models yet to be developed within the Web environment. This flexibility is intended to foster extensibility. This study discusses the relationship of BIBFRAME to the prevailing content standards and models employed, or in the process of being adopted, by cultural heritage institutions across museums, archives, libraries, historical societies, and community centers. The aim is to determine the degree to which BIBFRAME, as it is currently understood, can be a viable and extensible framework for bibliographic description and exchange in the Web environment. We highlight areas of compatibility as well as areas of incompatibility. BIBFRAME holds the promise of freeing library data from the silos of online catalogs, permitting library data to interact with data both within and outside the library community. We discuss some of the challenges that need to be addressed in order to realize the potential capabilities of the BIBFRAME model.
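    To ground the discussion, here is a minimal sketch of what a BIBFRAME description looks like as linked data, built with the Python rdflib library. The resource URIs are hypothetical; the vocabulary terms (bf:Work, bf:Instance, bf:instanceOf, bf:Title, bf:mainTitle) are from BIBFRAME 2.0.

```python
# Minimal sketch of a BIBFRAME description as linked data, using rdflib.
# Example URIs are hypothetical; the bf: terms are BIBFRAME 2.0 vocabulary.
from rdflib import BNode, Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
EX = Namespace("http://example.org/")  # hypothetical local namespace

g = Graph()
g.bind("bf", BF)

work = URIRef(EX["work/moby-dick"])
instance = URIRef(EX["instance/moby-dick-1851"])
title = BNode()

g.add((work, RDF.type, BF.Work))
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))  # the link that frees data from silos
g.add((title, RDF.type, BF.Title))
g.add((title, BF.mainTitle, Literal("Moby-Dick; or, The Whale")))
g.add((instance, BF.title, title))

print(g.serialize(format="turtle"))
```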

    Technical Challenges of Microservices Migration

    The microservices architecture is a recent trend in the software engineering community, with the number of research articles in the field increasing and more companies adopting the architectural style every year. However, migrating a monolith to the microservices architecture is an error-prone process, and guidelines for its execution are lacking. Microservices also introduce many challenges that are not faced with a monolithic architecture. This work aims to fill some of these gaps in current microservices research by providing a catalogue of the most common challenges of adopting this architectural style, together with possible solutions. To this end, a systematic mapping study was executed, analysing 54 articles; 30 industry professionals answered a questionnaire on the topic; and a participant-observation experiment was performed to gather additional industry data. Moreover, one of the identified challenges, distributed transactions management, was further detailed and a solution was implemented using the choreographed saga pattern. The solution is publicly available as an open-source project. Finally, multiple experts in the microservices field validated the results of the research and the distributed transactions solution, and provided insights regarding the value of this work.
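    The choreographed saga pattern mentioned above coordinates a distributed transaction without a central orchestrator: each service reacts to events, publishes its own, and failures trigger compensating events. The sketch below is a minimal in-process illustration in Python; the service names and event bus are hypothetical stand-ins, not the thesis's open-source implementation, which in production would sit on a message broker.

```python
# Minimal sketch of a choreographed saga on an in-process event bus.
# Service names and the failure rule are invented for illustration only.
from collections import defaultdict

handlers = defaultdict(list)

def on(event):
    """Register a service's reaction to an event (the choreography)."""
    def register(fn):
        handlers[event].append(fn)
        return fn
    return register

def publish(event, data):
    print(f"event: {event} {data}")
    for fn in handlers[event]:
        fn(data)

@on("OrderPlaced")
def reserve_payment(data):
    if data["amount"] > 100:            # simulate a failing local transaction
        publish("PaymentFailed", data)
    else:
        publish("PaymentReserved", data)

@on("PaymentReserved")
def ship_order(data):
    publish("OrderShipped", data)

@on("PaymentFailed")
def cancel_order(data):
    # Compensating action: undo the first step's local transaction.
    publish("OrderCancelled", data)

publish("OrderPlaced", {"order_id": 1, "amount": 250})
```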

    SLIS Student Research Journal, Vol. 1, Iss. 1


    Helping crisis responders find the informative needle in the tweet haystack

    Crisis responders increasingly use social media and other digital sources of information to build a situational understanding of a crisis in order to design an effective response. However, with the increased availability of such data, the challenge of identifying relevant information within it also grows. This paper presents a successful automatic approach to this problem. Messages are filtered for informativeness based on a definition of the concept drawn from prior research and crisis response experts. Informative messages are then tagged for actionable data: for example, people in need, threats to rescue efforts, or changes in the environment. In all, eight categories of actionability are identified. The two components, informativeness and actionability classification, are packaged together as an openly available tool called Emina (Emergent Informativeness and Actionability).
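    The filter-then-tag design described here can be sketched as two chained text classifiers. The example below uses scikit-learn as an assumed stand-in; it is not the Emina codebase, and the tiny training sets and category names are invented purely to make the sketch run.

```python
# Sketch of the two-stage filter-then-tag design, assuming scikit-learn.
# Training data and category labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1: binary informativeness filter.
informative = make_pipeline(TfidfVectorizer(), LogisticRegression())
informative.fit(
    ["bridge collapsed need rescue", "people trapped send help",
     "thoughts and prayers", "what a day"],
    [1, 1, 0, 0])  # 1 = informative

# Stage 2: actionability tagger (two of the eight categories shown).
actionability = make_pipeline(TfidfVectorizer(), LogisticRegression())
actionability.fit(
    ["people trapped send help", "road flooded near school"],
    ["people_in_need", "environment_change"])

tweet = "family trapped on roof please send boats"
if informative.predict([tweet])[0] == 1:      # stage 1: filter
    print(actionability.predict([tweet])[0])  # stage 2: tag
```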

    Problem-Resolution Dissemination

    The current problem-solving paradigm for software developers revolves around using a search engine to find knowledge about a problem and its solutions. This approach yields search results restricted only by the context of the keywords used in the search. Problem-Resolution Dissemination (PRD) is a system and method for collecting, filtering, storing and distributing the knowledge that users discover and access when solving a problem. The method involves an agent running on a user's (Alice's) browsing client, which is enabled while Alice is solving a problem. After Alice indicates that she has solved the problem, the agent collects all web pages visited during the process and filters out the pages that are not relevant. Pointers to the remaining pages (URIs) are tagged with Alice's identity and stored in a central repository. When another user (Bob) attempts to solve the same problem, the repository is queried based on Bob's social context. This social context is defined by Bob as a group of other users, each of whom has one of three trust levels: team, peer or community. The results are displayed by ranking them within each of these contexts. If no results are relevant to Bob, he can fall back on traditional problem-solving approaches. When Bob has solved his problem, the web pages he visited are added to the repository and made available to future users. In this manner, PRD incorporates relationships and previous experiences to improve the relevancy of results, and thus the efficiency of problem solving.
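    The ranking-within-trust-contexts step can be sketched with a simple in-memory repository. The record layout, example URIs, and ordering rule below are illustrative assumptions, not the system's specification.

```python
# Sketch of PRD-style social-context ranking over an in-memory store.
# Record layout, URIs, and users are hypothetical illustrations.
TRUST_ORDER = {"team": 0, "peer": 1, "community": 2}

repository = [
    {"uri": "https://example.org/fix-npe", "user": "alice", "problem": "npe"},
    {"uri": "https://example.org/npe-faq", "user": "carol", "problem": "npe"},
]

# Bob defines his social context: the trust level of each known user.
bob_context = {"alice": "team", "carol": "community"}

def query(problem: str, context: dict) -> list:
    hits = [r for r in repository
            if r["problem"] == problem and r["user"] in context]
    # Rank within contexts: team results first, then peer, then community.
    return sorted(hits, key=lambda r: TRUST_ORDER[context[r["user"]]])

for hit in query("npe", bob_context):
    print(hit["uri"], "via", bob_context[hit["user"]])
```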

    DQSOps: Data Quality Scoring Operations Framework for Data-Driven Applications

    Data quality assessment has become a prominent component in the successful execution of complex data-driven artificial intelligence (AI) software systems. In practice, real-world applications generate huge volumes of data at high velocity. These data streams require analysis and preprocessing before being permanently stored or used in a learning task. Therefore, significant attention has been paid to the systematic management and construction of high-quality datasets. Nevertheless, managing voluminous, high-velocity data streams is usually performed manually (i.e., offline), making it an impractical strategy in production environments. To address this challenge, DataOps has emerged to achieve life-cycle automation of data processes using DevOps principles. However, determining data quality on a fitness scale remains a complex task within the DataOps framework. This paper presents a novel Data Quality Scoring Operations (DQSOps) framework that yields a quality score for production data in DataOps workflows. The framework incorporates two scoring approaches: an ML prediction-based approach that predicts the data quality score, and a standard-based approach that periodically produces ground-truth scores by assessing several data quality dimensions. We deploy the DQSOps framework in a real-world industrial use case. The results show that DQSOps achieves significant computational speedups over the conventional approach to data quality scoring while maintaining high prediction performance.
    Comment: 10 pages, The International Conference on Evaluation and Assessment in Software Engineering (EASE) conference.
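    The two scoring paths can be sketched as a slow, standard-based scorer that computes ground truth from explicit quality dimensions, and a fast regressor trained to predict that score from cheap profile features. The example below assumes scikit-learn and NumPy; the dimensions and features are illustrative, not the DQSOps specification.

```python
# Sketch of dual data quality scoring: a slow standard-based ground truth
# and a fast ML predictor. Dimensions and features are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def standard_score(batch: np.ndarray) -> float:
    """Slow, standard-based ground truth over two example dimensions."""
    completeness = 1.0 - np.isnan(batch).mean()      # missing-value dimension
    values = batch[~np.isnan(batch)]
    validity = float((np.abs(values) < 4.0).mean())  # range-check dimension
    return (completeness + validity) / 2

rng = np.random.default_rng(0)
batches = [rng.normal(size=(50, 4)) for _ in range(20)]
for b in batches:                        # inject missing values into streams
    b[rng.random(b.shape) < 0.1] = np.nan

# Cheap profile features stand in for the full standard-based assessment.
X = [[np.isnan(b).mean(), np.nanmean(b), np.nanstd(b)] for b in batches]
y = [standard_score(b) for b in batches]

model = RandomForestRegressor(random_state=0).fit(X, y)
print("predicted:", model.predict(X[:1])[0], "ground truth:", y[0])
```

    In a DataOps workflow the standard-based scorer would run periodically to refresh the training labels, while the regressor scores every incoming batch, which is where the reported speedup comes from.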