12,985 research outputs found

    Multimodal music information processing and retrieval: survey and future challenges

    Towards improving performance in various music information processing tasks, recent studies exploit different modalities able to capture diverse aspects of music. Such modalities include audio recordings, symbolic music scores, mid-level representations, motion and gestural data, video recordings, editorial or cultural tags, lyrics, and album cover art. This paper critically reviews the various approaches adopted in Music Information Processing and Retrieval and highlights how multimodal algorithms can help Music Computing applications. First, we categorize the related literature based on the application it addresses. Subsequently, we analyze existing information fusion approaches, and we conclude with the set of challenges that the Music Information Retrieval and Sound and Music Computing research communities should focus on in the coming years.
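    The survey's "information fusion" theme can be illustrated with a minimal sketch of weighted late fusion, where per-class probabilities from two unimodal classifiers (say, one trained on audio and one on lyrics) are combined; the genre labels, scores, and weight here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def late_fusion(audio_scores, lyrics_scores, weight=0.6):
    """Weighted late fusion: combine per-class probabilities from two
    unimodal classifiers (audio, lyrics) into one fused prediction."""
    audio = np.asarray(audio_scores, dtype=float)
    lyrics = np.asarray(lyrics_scores, dtype=float)
    fused = weight * audio + (1.0 - weight) * lyrics
    return fused / fused.sum()  # renormalise to a probability vector

# Hypothetical per-genre probabilities (e.g. rock, jazz, classical)
audio_scores = [0.7, 0.2, 0.1]
lyrics_scores = [0.4, 0.5, 0.1]
print(late_fusion(audio_scores, lyrics_scores))  # fused distribution
```

    Early fusion (concatenating feature vectors before classification) is the usual alternative; late fusion, as above, keeps the unimodal models independent and only merges their outputs.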

    Graph Data-Models and Semantic Web Technologies in Scholarly Digital Editing

    This volume is based on the selected papers presented at the Workshop on Scholarly Digital Editions, Graph Data-Models and Semantic Web Technologies, held at the University of Lausanne in June 2019. The Workshop was organized by Elena Spadini (University of Lausanne) and Francesca Tomasi (University of Bologna), and sponsored by the Swiss National Science Foundation through a Scientific Exchange grant, and by the Centre de recherche sur les lettres romandes of the University of Lausanne. The Workshop comprised two full days of vibrant discussions among the invited speakers, the authors of the selected papers, and other participants. The acceptance rate following the open call for papers was around 60%. All authors – both selected and invited speakers – were asked to provide a short paper two months before the Workshop. The authors were then paired up, and each pair exchanged papers. Paired authors prepared questions for one another, which were to be addressed during the talks at the Workshop; in this way, conversations started well before the Workshop itself. After the Workshop, the papers underwent a second round of peer review before inclusion in this volume. This time, the relevance of the papers was not under discussion, but reviewers were asked to appraise specific aspects of each contribution, such as its originality or level of innovation, its methodological accuracy and knowledge of the literature, as well as more formal parameters such as completeness, clarity, and coherence. The bibliography of all of the papers is collected in the public Zotero group library GraphSDE2019, which has been used to generate the reference list for each contribution in this volume.
    The invited speakers came from a wide range of backgrounds (academic, commercial, and research institutions) and represented the different actors involved in the remediation of our cultural heritage in the form of graphs and/or in a semantic web environment. Georg Vogeler (University of Graz) and Ronald Haentjens Dekker (Royal Dutch Academy of Sciences, Humanities Cluster) brought the Digital Humanities research perspective; the work of Hans Cools and Roberta Laura Padlina (University of Basel, National Infrastructure for Editions), as well as of Tobias Schweizer and Sepideh Alassi (University of Basel, Digital Humanities Lab), focused on infrastructural challenges and the development of conceptual and software frameworks to support researchers' needs; Michele Pasin's contribution (Digital Science, Springer Nature) was informed by his experiences in both academic research and in commercial technology companies that provide services for the scientific community.
    The Workshop featured not only the papers of the selected authors and of the invited speakers, but also moments of discussion between interested participants. In addition to the common Q&A time, during the second day one entire session was allocated to working groups delving into topics that had emerged during the Workshop. Four working groups were created, with four to seven participants each, and each group presented a short report at the end of the session. Four themes were discussed: enhancing TEI from documents to data; ontologies for the Humanities; tools and infrastructures; and textual criticism. All of these themes are represented in this volume.
    The Workshop would not have been of such high quality without the support of the members of its scientific committee: Gioele Barabucci, Fabio Ciotti, Claire Clivaz, Marion Rivoal, Greta Franzini, Simon Gabay, Daniel Maggetti, Frederike Neuber, Elena Pierazzo, Davide Picca, Michael Piotrowski, Matteo Romanello, Maïeul Rouquette, Elena Spadini, Francesca Tomasi, Aris Xanthos – and, of course, the support of all the colleagues and administrative staff in Lausanne, who helped the Workshop to become a reality. The final versions of these papers underwent a single-blind peer review process.
    We want to thank the reviewers: Helena Bermudez Sabel, Arianna Ciula, Marilena Daquino, Richard Hadden, Daniel Jeller, Tiziana Mancinelli, Davide Picca, Michael Piotrowski, Patrick Sahle, Raffaele Viglianti, Joris van Zundert, and others who preferred not to be named personally. Your input enhanced the quality of the volume significantly! It is sad news that Hans Cools passed away during the production of the volume. We are proud to document a recent state of his work and will miss him and his ability to implement the vision of a digital scholarly edition based on graph data-models and semantic web technologies. The production of the volume would not have been possible without the thorough copy-editing and proofreading by Lucy Emmerson and the support of the IDE team, in particular Bernhard Assmann, the TeX-master himself. This volume is sponsored by the University of Bologna and by the University of Lausanne. Bologna, Lausanne, Graz, July 2021 – Francesca Tomasi, Elena Spadini, Georg Vogeler

    Enhancing Access to Contextual Information on Individuals, Families, and Corporate Bodies for Archival Collections

    We will address the ongoing challenge of transforming description of, and improving access to, primary humanities resources via advanced technologies. The project will test the feasibility of using existing archival descriptions in new ways, in order to enhance access to and understanding of cultural resources in archives, libraries, and museums. We will derive Encoded Archival Context-Corporate Bodies, Persons, and Families (EAC-CPF) records from existing archival finding aids from the Library of Congress (LoC) and three consortia, and from name authority files from the LoC and the Getty Vocabulary Program. We will produce open-source software used in the derivation and creation of the EAC-CPF records and a prototype access system demonstrating their value to the archival community and to the use of primary humanities resources. The Institute for Advanced Technology in the Humanities, Univ. of Virginia, will partner with the California Digital Library and the School of Information, UC Berkeley.
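    The derivation step the abstract describes — pulling names out of EAD finding aids and re-expressing them as EAC-CPF records — can be sketched roughly as below. This is a minimal illustration using Python's standard XML library; the sample snippet, the restriction to `persname`, and the skeletal output (no control section, no identifiers) are all simplifying assumptions, not the project's actual software.

```python
import xml.etree.ElementTree as ET

# Illustrative EAD fragment: the creator of a collection is recorded
# inside <origination> within the descriptive identification (<did>).
EAD_SNIPPET = """
<ead>
  <archdesc>
    <did>
      <origination><persname>Whitman, Walt, 1819-1892</persname></origination>
    </did>
  </archdesc>
</ead>
"""

def derive_eac_cpf(ead_xml):
    """Extract personal names from an EAD finding aid and emit minimal
    EAC-CPF skeletons (real records also need a <control> section)."""
    root = ET.fromstring(ead_xml)
    records = []
    for name in root.iter("persname"):
        cpf = ET.Element("eac-cpf")
        identity = ET.SubElement(
            ET.SubElement(cpf, "cpfDescription"), "identity")
        part = ET.SubElement(
            ET.SubElement(identity, "nameEntry"), "part")
        part.text = name.text
        records.append(ET.tostring(cpf, encoding="unicode"))
    return records

print(derive_eac_cpf(EAD_SNIPPET)[0])
```

    A production pipeline would additionally match the extracted names against authority files (as the project proposes with LoC and Getty data) rather than emitting one record per raw name string.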

    UTILIZING SEMIOTIC PERSPECTIVE TO INVESTIGATE ALGEBRA II STUDENTS’ EXPOSURE TO AND USE OF MULTIPLE REPRESENTATIONS IN UNDERSTANDING ALGEBRAIC CONCEPTS

    The study employed Ernest's (2006) Theory of Semiotic Systems to investigate the use of and exposure to multiple representations in a 10th grade algebra II suburban high school class located in the southeastern region of the United States. The purpose of this exploratory case study (Yin, 2014) was to investigate the role of multiple representations in influencing and facilitating algebra II students' conceptual understanding of piecewise functions, absolute-value functions, and quadratic functions. This study attempted to answer the following question: How does the use of and exposure to multiple representations influence algebra II students' understanding and transfer of algebraic concepts? Furthermore, the following sub-questions assisted in developing a deeper understanding of the question: a) how does exposure to and use of multiple representations influence students' identification of their pseudo-conceptual understanding of algebraic concepts?; b) how does exposure to and use of multiple representations influence students' transition from pseudo-conceptual to conceptual understanding?; c) how does exposure to and use of multiple representations influence students' transfer of their conceptual understanding to other related concepts? Understanding the notion of pseudo-conceptual understanding in algebra is significant in providing a tool for examining the veracity of algebra students' conceptual understanding, where teachers have to consistently examine whether students accurately understand the meanings of the mathematical signs that they are constantly using. The following data collection techniques were utilized: a) classroom observation, b) task-based interviews, and c) study of documents. The unit of analysis was students' verbal and written responses to task questions.
    Three themes emerged from the analysis in this study: (a) re-imaging of conceptual understanding; (b) reflective approach to understanding and using mathematical signs; and (c) representational versatility in the use of mathematical signs. Findings from this study will contribute to the body of knowledge needed in research on understanding and assessing algebra students' conceptual understanding of mathematics. In particular, the findings will contribute to the literature on understanding the process of algebraic concept knowledge acquisition, and the challenges that algebra students have with comprehension of algebraic concepts (Knuth, 2000; Zaslavsky et al., 2002).

    Opening Archives to General Public, a data modelling approach

    By placing their descriptions on-line, archives have reached a broader public. This new public consists mainly of novice users who are not familiar with archival research. Archival research is conducted through Finding Aids, which serve users as a guide to the discovery of archival holdings. However, Finding Aids were originally used by archivists for records management and for interpreting users' requests by deriving answers from provenance- and context-driven descriptions. In the on-line environment, Finding Aids are usually made accessible through the Encoded Archival Description (EAD) standard. The EAD was developed with the purpose of encoding and capturing many different archival descriptive practices. The problem is that Finding Aids in the on-line environment have the exact same form as before, just without the archivist as a mediating factor. This causes many problems for the general user public, which is not familiar with the archival research process. This thesis explores one possible approach to facilitating access to archival holdings in the on-line environment on behalf of the general user public: transforming the data encoded in the EAD standard into another, more general model. The goal model in question is the Europeana Data Model (EDM), developed for the Europeana v1.0 project. The objective of this thesis is to investigate whether EDM would bring the desired changes to the accessibility of archival data. To achieve this, a general method for mapping the EAD standard to EDM was developed. Furthermore, the method was applied to two fonds originating from the archive of the Accademia Nazionale di Santa Cecilia, a musical academy in Rome, in order to validate the method and analyze the results of the mapping.
    The results of this study show that transforming archival descriptions into EDM would bring certain improvements for non-expert users accessing them on-line. The main improvements concern terminology, facilitated access to the different levels of archival description, improved search functionality, and better visibility of archival holdings. (Joint Master Degree in Digital Library Learning, DILL)
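    The core of such a mapping is turning hierarchical EAD components into EDM provided cultural heritage objects linked by part-of relations. The sketch below shows the idea with plain dictionaries; the property names follow the EDM/Dublin Core namespaces, but the field selection, the `example.org` URIs, and the input shape are illustrative assumptions, not the thesis's actual mapping rules.

```python
def ead_component_to_edm(component, parent_uri=None):
    """Map one EAD component (given here as a dict of descriptive
    fields) to a simplified edm:ProvidedCHO representation."""
    cho = {
        "@type": "edm:ProvidedCHO",
        "@id": f"http://example.org/cho/{component['id']}",  # placeholder URI
        "dc:title": component.get("unittitle", ""),
        "dc:date": component.get("unitdate", ""),
    }
    if parent_uri:
        # Preserve the archival hierarchy (fonds -> series -> item)
        # as dcterms:isPartOf links between the resulting objects.
        cho["dcterms:isPartOf"] = parent_uri
    return cho

fonds = ead_component_to_edm({"id": "fonds1", "unittitle": "Concert programmes"})
item = ead_component_to_edm(
    {"id": "item42", "unittitle": "Programme, 1921", "unitdate": "1921"},
    parent_uri=fonds["@id"])
```

    Keeping the hierarchy as explicit links, rather than as nesting, is what lets a portal offer direct entry at any descriptive level — one of the access improvements the thesis reports.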

    Applying blended conceptual spaces to variable choice and aesthetics in data visualisation

    Computational creativity is an active area of research within the artificial intelligence domain that investigates what aspects of computing can be considered an analogue to the human creative process. Computers can be programmed to emulate the type of things that the human mind can do. Artificial creativity is worthy of study for two reasons. Firstly, it can help in understanding human creativity, and secondly, it can help with the design of computer programs that appear to be creative. Although the implementation of creativity in computer algorithms is an active field, much of the research fails to specify which of the known theories of creativity it is aligning with. The combination of computational creativity with computer-generated visualisations has the potential to produce visualisations that are context-sensitive with respect to the data, and could solve some of the current automation problems that computers experience. In addition, theories of creativity could theoretically compute unusual data combinations, or introduce graphical elements that draw attention to the patterns in the data. More could be learned about the creativity involved as humans go about the task of generating a visualisation. The purpose of this dissertation was to develop a computer program that can automate the generation of a visualisation, for a suitably chosen visualisation type over a small domain of knowledge, using a subset of the computational creativity criteria, in order to explore the effects of introducing conceptual blending techniques. The problem is that existing computer programs that generate visualisations lack the creativity, intuition, background information, and visual perception that enable a human to decide which aspects of the visualisation will expose patterns that are useful to the consumer of the visualisation.
    The main research question that guided this dissertation was: "How can criteria derived from theories of creativity be used in the generation of visualisations?". In order to answer this question, an analysis was done to determine which creativity theories and artificial intelligence techniques could potentially be used to implement the theories in the context of computer-generated visualisations. Measurable attributes and criteria that are sufficient for an algorithm that claims to model creativity were explored. The parts of the visualisation pipeline were identified, and the aspects of visualisation generation that humans are better at than computers were explored. Themes that emerged in both the computational creativity and the visualisation literature were highlighted. Finally, a prototype was built that began to investigate the use of computational creativity methods in the 'variable choice' and 'aesthetics' stages of the data visualisation pipeline. (School of Computing, M.Sc. (Computing))

    ARIADNE: A Research Infrastructure for Archaeology

    Research e-infrastructures, digital archives, and data services have become important pillars of a scientific enterprise that in recent decades has become ever more collaborative, distributed, and data-intensive. The archaeological research community has been an early adopter of digital tools for data acquisition, organization, analysis, and presentation of the research results of individual projects. However, the provision of e-infrastructure and services for data sharing, discovery, access, and (re)use has lagged behind. This situation is being addressed by ARIADNE, the Advanced Research Infrastructure for Archaeological Dataset Networking in Europe. This EU-funded network has developed an e-infrastructure that enables data providers to register and provide access to their resources (datasets, collections) through the ARIADNE data portal, facilitating discovery, access, and other services across the integrated resources. This article describes the current landscape of data repositories and services for archaeologists in Europe, and the issues that make interoperability between them difficult to realize. The results of the ARIADNE surveys on users' expectations and requirements are also presented. The main section of the article describes the architecture of the e-infrastructure, core services (data registration, discovery, and access), and various other extant or experimental services. The ongoing evaluation of the data integration and services is also discussed. Finally, the article summarizes lessons learned and outlines the prospects for the wider engagement of the archaeological research community in the sharing of data through ARIADNE.
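    The register-then-discover pattern at the heart of such a portal can be sketched as a toy registry; the field names and example records below are illustrative assumptions, and the real ARIADNE catalogue uses a much richer metadata model (e.g. subject, spatial, and temporal coverage).

```python
class ResourceRegistry:
    """Toy registration/discovery service: providers register dataset
    metadata, users search it by keyword. A deliberately minimal
    sketch of the pattern, not ARIADNE's actual data model or API."""

    def __init__(self):
        self._records = []

    def register(self, title, provider, keywords):
        # Store keywords lower-cased so discovery is case-insensitive.
        record = {"title": title, "provider": provider,
                  "keywords": [k.lower() for k in keywords]}
        self._records.append(record)
        return record

    def discover(self, keyword):
        kw = keyword.lower()
        return [r for r in self._records if kw in r["keywords"]]

registry = ResourceRegistry()
registry.register("Roman coin finds", "Example Archive",
                  ["numismatics", "Roman"])
hits = registry.discover("roman")
```

    The interoperability problem the article discusses arises precisely because each provider's records look different: integration requires mapping provider metadata into one shared schema before registration, which is what the ARIADNE catalogue does across its partners.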

    A knowledge management architecture for information technology services delivery

    Knowledge Management is a scientific area concerned with the organizational value of knowledge and is understood as a multidisciplinary field of research. Its notions and practices are emerging and being incorporated in organizations in different areas, as is the case of IT Service Management. Today's business environment is increasingly unstable, characterized by uncertainties and changes, where technology changes rapidly, competitors multiply, and products and services quickly become obsolete. In this context, management is increasingly focused not only on managing people, but on the knowledge they hold and how to capture it. An Information System aligned with Knowledge Management and Intellectual Capital aims to represent and manage explicitly the different dimensions associated with an organizational competence. If organizations integrate Knowledge Competencies, Knowledge Engineering, Information Systems, and Organizational Memories, they will improve the organization's knowledge and subsequently the quality of the service provided to users and customers. This research will use the Design Science Research methodology to create an artifact to be applied in a case study of an organization aligned with ITIL best practices. This organization is supported by an Intranet and an ERP for its laptop-repair process. The outcome of this dissertation aims to demonstrate whether Knowledge Management improves IT services delivery.
