4 research outputs found

    Graphonomics and your Brain on Art, Creativity and Innovation : Proceedings of the 19th International Graphonomics Conference (IGS 2019 – Your Brain on Art)

    “Graphonomics and your brain on art, creativity and innovation”. A single-track, international forum for discussing recent advances at the intersection of the creative arts, neuroscience, engineering, media, technology, industry, education, design, forensics, and medicine. The contributions reviewed the state of the art, identified challenges and opportunities, and created a roadmap for the field of graphonomics and your brain on art. The topics addressed include: integrative strategies for understanding neural, affective, and cognitive systems in realistic, complex environments; neural and behavioral individuality and variation; neuroaesthetics (the use of neuroscience to explain and understand aesthetic experiences at the neurological level); creativity and innovation; neuroengineering and brain-inspired art, creative concepts, and wearable mobile brain-body imaging (MoBI) designs; creative art therapy; informal learning; education; and forensics.

    Scalable and Quality-Aware Training Data Acquisition for Conversational Cognitive Services

    Dialog systems (or simply bots) have recently become a popular human-computer interface for performing users' tasks by invoking the appropriate back-end APIs (Application Programming Interfaces) based on the user's request in natural language. Building task-oriented bots, which aim to perform real-world tasks (e.g., booking flights), has become feasible thanks to continuous advances in Natural Language Processing (NLP) and Artificial Intelligence (AI), and to the countless devices that allow third-party software systems to invoke their back-end APIs. Nonetheless, bot development technologies are still in their preliminary stages, with several unsolved theoretical and technical challenges stemming from the ambiguous nature of human language. Given the richness of natural language, supervised models require a large number of user utterances paired with their corresponding tasks -- called intents. To build a bot, developers need to manually translate APIs into utterances (called canonical utterances) and paraphrase them to obtain a diverse set of utterances. Crowdsourcing has been widely used to obtain such datasets by asking crowd workers to paraphrase the initial utterances generated by the bot developers for each task. However, several issues remain unsolved. First, generating canonical utterances requires manual effort, making bot development both expensive and hard to scale. Second, since crowd workers may be anonymous and are asked to provide open-ended text (paraphrases), crowdsourced paraphrases may be noisy and incorrect (not conveying the same intent as the given task). This thesis first surveys the state-of-the-art approaches for collecting large sets of training utterances for task-oriented bots. Next, we conduct an empirical study to identify quality issues in crowdsourced utterances (e.g., grammatical errors, semantic completeness). We then propose novel approaches for identifying unqualified crowd workers and eliminating malicious workers from crowdsourcing tasks. In particular, we propose a technique to promote the diversity of crowdsourced paraphrases by dynamically generating word suggestions while crowd workers paraphrase a particular utterance, and a technique to automatically translate APIs into canonical utterances. Finally, we present our platform for automatically generating bots from API specifications, and we conduct thorough experiments to validate the proposed techniques and models.
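    The pipeline the abstract describes starts by turning an API operation into a canonical utterance that crowd workers then paraphrase. A minimal sketch of that first step is shown below; the spec format, field names, and templating rule are illustrative assumptions, not the thesis's actual implementation.

    ```python
    # Hypothetical sketch: derive a canonical utterance from a minimal
    # API specification. Parameter names become bracketed slots that
    # crowd workers would later paraphrase around.

    def canonical_utterance(api_spec):
        """Turn an API operation into a simple natural-language utterance."""
        verb = api_spec["verb"]            # e.g. "book"
        obj = api_spec["object"]           # e.g. "a flight"
        params = api_spec.get("parameters", [])
        utterance = f"{verb} {obj}"
        if params:
            slots = " and ".join(f"{p} [{p}]" for p in params)
            utterance += f" with {slots}"
        return utterance.capitalize()

    spec = {
        "verb": "book",
        "object": "a flight",
        "parameters": ["origin", "destination", "date"],
    }
    print(canonical_utterance(spec))
    # -> Book a flight with origin [origin] and destination [destination] and date [date]
    ```

    In a real system the spec would come from an API description (e.g., an OpenAPI document) rather than a hand-written dictionary, which is exactly the manual step the thesis proposes to automate.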

    Inferring implicit relevance from physiological signals

    Ongoing growth in data availability and consumption means users are increasingly faced with the challenge of distilling relevant information from an abundance of noise. Overcoming this information overload can be particularly difficult in situations, such as intelligence analysis, that involve subjectivity, ambiguity, or risky social implications. Highly automated solutions are often inadequate, so new methods are needed for augmenting existing analysis techniques to support user decision making. This project investigated the potential for deep learning to infer the occurrence of implicit relevance assessments from users' biometrics. Internal cognitive processes manifest involuntarily within physiological signals, and are often accompanied by 'gut feelings' of intuition. Quantifying unconscious mental processes during relevance appraisal may be a useful tool during decision making by offering an element of objectivity to an inherently subjective situation. Advances in wearable and non-contact sensors have made recording these signals more accessible, whilst advances in artificial intelligence and deep learning have enhanced the discovery of latent patterns within complex data. Together, these techniques might make it possible to transform tacit knowledge into codified knowledge which can be shared. A series of user studies recorded eye gaze movements, pupillary responses, electrodermal activity, heart rate variability, and skin temperature data from participants as they completed a binary relevance assessment task. Participants were asked to explicitly identify which of 40 short-text documents were relevant to an assigned topic. Investigations found this physiological data to contain detectable cues corresponding with relevance judgements. Random forests and artificial neural networks trained on features derived from the signals were able to produce inferences moderately correlated with the participants' explicit relevance decisions. Several deep learning algorithms trained on the entire physiological time series were generally unable to surpass the performance of feature-based methods, instead producing inferences only weakly correlated with participants' explicit judgements. Overall, pupillary responses, eye gaze movements, and electrodermal activity offered the most discriminative power, with additional physiological data providing diminishing or adverse returns. Finally, a conceptual design for a decision support system is used to discuss the social implications and practicalities of quantifying implicit relevance using deep learning techniques. Potential benefits included assisting with introspection and collaborative assessment; however, quantifying intrinsically unknowable concepts using personal data and abstruse artificial intelligence techniques was argued to pose incommensurate risks and challenges. Deep learning techniques therefore have the potential for inferring implicit relevance in information-rich environments, but are not yet fit for purpose. Several avenues worthy of further research are outlined.
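    The feature-based approach the abstract favours reduces each physiological recording to a handful of summary statistics before classification. A minimal sketch of two such features is below; mean pupil dilation relative to baseline and RMSSD for heart rate variability are common choices in this literature, but the thesis's exact feature set is an assumption here.

    ```python
    # Illustrative feature extraction from physiological signals,
    # using only the Python standard library.
    import math
    import statistics

    def pupil_dilation_feature(baseline_mm, trial_mm):
        """Mean pupil diameter during a trial, relative to a pre-stimulus baseline."""
        return statistics.mean(trial_mm) - statistics.mean(baseline_mm)

    def rmssd(ibi_ms):
        """Root mean square of successive differences of inter-beat intervals,
        a standard time-domain measure of heart rate variability."""
        diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
        return math.sqrt(statistics.mean(d * d for d in diffs))

    baseline = [3.1, 3.0, 3.2, 3.1]   # pupil diameter (mm) before the document appears
    trial = [3.4, 3.6, 3.5, 3.5]      # pupil diameter (mm) during relevance assessment
    ibis = [820, 810, 845, 830, 815]  # inter-beat intervals (ms) from the heart rate sensor

    features = [pupil_dilation_feature(baseline, trial), rmssd(ibis)]
    ```

    Feature vectors like `features` would then be fed to a classifier such as a random forest, trained against each participant's explicit relevance labels.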

    Digital Interaction and Machine Intelligence

    This book is open access, which means that you have free and unlimited access. It presents the proceedings of the 9th Machine Intelligence and Digital Interaction Conference. Significant progress in the development of artificial intelligence (AI), and its wider use in many interactive products, is quickly transforming further areas of our lives, resulting in the emergence of various new social phenomena. Many countries have been making efforts to understand these phenomena and to determine how the development of artificial intelligence can be put on the right track to support the common good of people and societies. These attempts require interdisciplinary action, covering not only the scientific disciplines involved in the development of artificial intelligence and human-computer interaction but also close cooperation between researchers and practitioners. For this reason, the main goal of the MIDI conference, held on 9-10 December 2021 as a virtual event, is to integrate two until recently independent fields of research in computer science: broadly understood artificial intelligence and human-technology interaction.