2 research outputs found

    Human-Like Neural-Symbolic Computing (Dagstuhl Seminar 17192)

    No full text
    This report documents the program and the outcomes of Dagstuhl Seminar 17192 "Human-Like Neural-Symbolic Computing", held from May 7th to 12th, 2017. The underlying idea of Human-Like Computing is to incorporate into Computer Science aspects of how humans learn, reason and compute. Whilst the scientific trends in big data and deep learning are capable of achieving state-of-the-art performance in speech recognition and computer vision tasks, limited progress has been made towards understanding the principles underlying language and vision understanding. Under the assumption that neural-symbolic computing - the study of logic and connectionism as well as statistical approaches - can offer new insight into this problem, the seminar brought together not only computer scientists but also specialists in artificial intelligence, cognitive science, machine learning, knowledge representation and reasoning, computer vision, neural computation, and natural language processing. The seminar consisted of contributed and invited talks, breakout and joint group discussion sessions, and a hackathon. It was built upon previous seminars and workshops on the integration of computational learning and symbolic reasoning, such as the Neural-Symbolic Learning and Reasoning (NeSy) workshop series and the previous Dagstuhl Seminar 14381: Neural-Symbolic Learning and Reasoning.

    Learning, Probability and Logic: Toward a Unified Approach for Content-Based Music Information Retrieval

    Get PDF
    Within the last 15 years, the field of Music Information Retrieval (MIR) has made tremendous progress in the development of algorithms for organizing and analyzing the ever-increasing, large and varied amount of music and music-related data available digitally. However, the development of content-based methods to enable or ameliorate multimedia retrieval remains a central challenge. In this perspective paper, we critically examine the problem of automatic chord estimation from audio recordings as a case study of content-based algorithms, and point out several bottlenecks in current approaches: expressiveness and flexibility are obtained at the expense of robustness and vice versa; available multimodal sources of information are little exploited; current architectures offer limited means of modeling multi-faceted and strongly interrelated musical information; and models are typically restricted to short-term analysis that does not account for the hierarchical temporal structure of musical signals. Dealing with music data requires the ability to handle both uncertainty and complex relational structure at multiple levels of representation. Traditional approaches have generally treated these two aspects separately: probability and learning are the usual way to represent uncertainty in knowledge, while logical representation is the usual way to represent knowledge and complex relational information. We advocate that the identified hurdles of current approaches could be overcome by recent developments in the area of Statistical Relational Artificial Intelligence (StarAI), which unifies probability, logic and (deep) learning. We show that existing approaches used in MIR find powerful extensions and unifications in StarAI, and we explain why we think it is time to consider the new perspectives offered by this promising research field.
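    To make the advocated unification of probability, logic and learning more tangible, the following is a minimal, purely illustrative sketch and not the paper's model: per-frame acoustic chord probabilities, such as a statistical classifier might output, are combined with weighted relational rules over chord transitions in a log-linear score, in the spirit of Markov Logic style StarAI systems. All chord labels, probabilities, rules and weights below are made-up assumptions for illustration.

```python
# Illustrative only: combine a probabilistic acoustic model with weighted
# symbolic rules in a single log-linear score (Markov Logic flavour).
# Every value here is an assumption, not taken from the paper.
import math
from itertools import product

CHORDS = ["C:maj", "G:maj", "A:min", "F:maj"]

# Acoustic model output: P(chord | audio frame), e.g. from a learned classifier.
frame_probs = [
    {"C:maj": 0.5, "G:maj": 0.2, "A:min": 0.2, "F:maj": 0.1},
    {"C:maj": 0.2, "G:maj": 0.5, "A:min": 0.2, "F:maj": 0.1},
    {"C:maj": 0.3, "G:maj": 0.3, "A:min": 0.2, "F:maj": 0.2},
]

def rule_score(prev, curr):
    """Weighted relational 'rules' over consecutive chords.
    A StarAI system would express these as weighted first-order formulas."""
    score = 0.0
    if prev == curr:
        score += 0.5   # rule: chords tend to persist across frames
    if (prev, curr) == ("G:maj", "C:maj"):
        score += 1.0   # rule: dominant-to-tonic resolution is likely
    return score

def sequence_log_score(seq):
    """Log-linear score: acoustic log-likelihood plus weighted rule scores."""
    score = sum(math.log(frame_probs[t][c]) for t, c in enumerate(seq))
    score += sum(rule_score(a, b) for a, b in zip(seq, seq[1:]))
    return score

# Brute-force MAP inference over all chord sequences (fine for this toy size;
# a real system would learn the rule weights and use dynamic programming or
# lifted inference instead of enumeration).
best = max(product(CHORDS, repeat=len(frame_probs)), key=sequence_log_score)
print("MAP chord sequence:", best)
```

    The log-linear combination is what lets the symbolic rules and the probabilistic observations trade off against each other, which is the kind of unified treatment of uncertainty and relational structure the abstract argues current chord-estimation pipelines lack.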