
    Impressionistic techniques applied in sound art & design

    Sound art and design collectively refer to the process of specifying, acquiring, manipulating, or generating sonic elements to evoke emotion and environment. Sound is used to convey the intentions, emotions, spirit, or aura of a story, performance, or sonic installation. Sound connects unique aural environments, creating an immersive experience via mood and atmosphere. Impressionistic techniques such as Impasto, Pointillism, Sgraffito, and Stippling, introduced by 19th-century painters, captured the essence of their subjects in vivid compositions, exuding authentic movement and atmosphere. This thesis applied Impressionistic techniques in sound art and design to elicit specific mood and atmosphere responses among listeners. Four unique sound textures, each representing one technique from Impressionism, and a fifth composite sound texture were created for this project. All five sound textures were validated as representative of their respective Impressionistic techniques, but only sonic Pointillism matched its emotive intent. This outcome supports the premise that sound art and design can be used to direct listeners’ mood and atmosphere responses. Partnering Impressionistic principles with sound art and design offers a deeper palette for sonically delivering more robust, holistic soundscapes that amplify an audience’s listening experience. This project provides a foundation for future explorations and studies applying cross-disciplinary artistic techniques to sound art and design or other artistic endeavors.

    Multimodal Content Analysis for Effective Advertisements on YouTube

    The rapid advances in e-commerce and Web 2.0 technologies have greatly increased the impact of commercial advertisements on the general public. As a key enabling technology, a multitude of recommender systems exist that analyze user features and browsing patterns to recommend appealing advertisements to users. In this work, we study the attributes that characterize an effective advertisement and recommend a useful set of features to aid the design and production of commercial advertisements. We analyze the temporal patterns in the multimedia content of advertisement videos, including auditory, visual, and textual components, and study their individual roles and synergies in the success of an advertisement. The objective of this work is then to measure the effectiveness of an advertisement and to recommend a useful set of features that help advertisement designers make it more successful and approachable to users. Our proposed framework employs the signal-processing technique of cross-modality feature learning, in which data streams from the different components are used to train separate neural network models that are then fused to learn a shared representation. Subsequently, a neural network model trained on this joint feature embedding is used as a classifier to predict advertisement effectiveness. We validate our approach using subjective ratings from a dedicated user study, the sentiment strength of online viewer comments, and a viewer opinion metric: the ratio of the Likes and Views received by each advertisement on an online platform. Comment: 11 pages, 5 figures, ICDM 201
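The late-fusion pipeline this abstract describes (per-modality encoders whose outputs are fused into a joint embedding, then classified) can be sketched roughly as follows. This is a minimal stand-in, not the paper's implementation: the feature dimensions, the single random linear layer per modality, and the logistic classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, out_dim):
    """Project one modality's features into a fixed-size embedding with a
    single random linear layer plus tanh (a stand-in for a trained
    per-modality neural network)."""
    w = rng.standard_normal((x.shape[-1], out_dim)) * 0.1
    return np.tanh(x @ w)

# Toy feature vectors for one advertisement's three modalities.
audio  = rng.standard_normal(20)   # e.g. summary statistics of audio features
visual = rng.standard_normal(50)   # e.g. frame-level visual features
text   = rng.standard_normal(30)   # e.g. transcript/text embeddings

# Separate encoders per modality, fused by concatenation into a
# shared joint representation.
joint = np.concatenate([encoder(audio, 8),
                        encoder(visual, 8),
                        encoder(text, 8)])

# A classifier on the joint embedding predicts an effectiveness score.
w_clf = rng.standard_normal(joint.shape[0]) * 0.1
score = 1.0 / (1.0 + np.exp(-(joint @ w_clf)))   # sigmoid in (0, 1)
```

In the paper's setting, the encoders and the classifier would be trained jointly on labels derived from user-study ratings, comment sentiment, or the Likes-to-Views ratio.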

    Automated annotation of multimedia audio data with affective labels for information management

    The emergence of digital multimedia systems is creating many new opportunities for rapid access to huge content archives. In order to fully exploit these information sources, the content must be annotated with significant features. An important and often overlooked aspect of human interpretation of multimedia data is the affective dimension. Such information is a potentially useful component for content-based classification and retrieval. Much of the affective information of multimedia content is contained within the audio data stream. Emotional features can be defined in terms of arousal and valence levels. In this study, low-level audio features are extracted to calculate arousal and valence levels of multimedia audio streams. These are then mapped onto a set of keywords with predetermined emotional interpretations. Experimental results illustrate the use of this system to assign affective annotations to multimedia data.
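The pipeline described here (low-level audio features, then arousal/valence estimates, then keyword labels) can be sketched as below. The specific feature proxies are assumptions for illustration: RMS energy as an arousal proxy, spectral centroid as a valence proxy, and a simple quadrant-to-keyword mapping, none of which are claimed to match the paper's actual feature set.

```python
import numpy as np

def arousal_valence(samples, rate):
    """Rough proxies: arousal from RMS energy, valence from spectral
    centroid (both hypothetical stand-ins for the study's low-level
    feature set), each clipped to [0, 1]."""
    rms = np.sqrt(np.mean(samples ** 2))
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1 / rate)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    arousal = float(np.clip(rms * 4.0, 0, 1))
    valence = float(np.clip(centroid / (rate / 4), 0, 1))
    return arousal, valence

def keyword(arousal, valence):
    """Map quadrants of the arousal-valence plane to emotion keywords
    with predetermined interpretations (illustrative labels)."""
    if arousal >= 0.5:
        return "excited" if valence >= 0.5 else "angry"
    return "content" if valence >= 0.5 else "sad"

rate = 8000
t = np.linspace(0, 1, rate, endpoint=False)
quiet_low = 0.1 * np.sin(2 * np.pi * 110 * t)  # quiet, low-pitched tone
a, v = arousal_valence(quiet_low, rate)
print(keyword(a, v))  # prints "sad": low energy, low spectral centroid
```

A quiet, low-frequency tone lands in the low-arousal, low-valence quadrant, so the system annotates it with the corresponding keyword.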

    Plug-in to fear: game biosensors and negative physiological responses to music

    The games industry is beginning to embark on an ambitious journey into the world of biometric gaming in search of more exciting and immersive gaming experiences. Whether biometric game technologies hold the key to unlocking the “ultimate gaming experience” hinges not on technological advancement alone but also on the game industry’s understanding of physiological responses to stimuli of different kinds, and on its ability to interpret physiological data in terms of indicative meaning. With reference to horror-genre games and their music in particular, this article reviews some of the scientific literature on specific physiological responses induced by “fearful” or “unpleasant” musical stimuli, and considers some of the challenges facing the games industry in its quest for the ultimate “plugged-in” experience.

    iCanLearn: A Mobile Application for Creating Flashcards and Social Stories™ for Children with Autism

    The number of children being diagnosed with Autism Spectrum Disorder (ASD) is on the rise, presenting new challenges for their parents and teachers to overcome. At the same time, mobile computing has been seeping its way into every aspect of our lives in the form of smartphones and tablet computers. It seems only natural to harness the unique medium these devices provide and use it in treatment and intervention for children with autism. This thesis discusses and evaluates iCanLearn, an iOS flashcard app with enough versatility to construct Social Stories™. iCanLearn provides an engaging, individualized learning experience to children with autism on a single device, but the most powerful way to use iCanLearn is to connect two or more devices in a teacher-learner relationship. The evaluation results are presented at the end of the thesis.

    Multimodal music information processing and retrieval: survey and future challenges

    Toward improving performance in various music information processing tasks, recent studies exploit different modalities able to capture diverse aspects of music. Such modalities include audio recordings, symbolic music scores, mid-level representations, motion and gestural data, video recordings, editorial or cultural tags, lyrics, and album cover art. This paper critically reviews the various approaches adopted in Music Information Processing and Retrieval and highlights how multimodal algorithms can help Music Computing applications. First, we categorize the related literature based on the applications addressed. Subsequently, we analyze existing information fusion approaches, and we conclude with the set of challenges that the Music Information Retrieval and Sound and Music Computing research communities should focus on in the coming years.

    Is Vivaldi smooth and takete? Non-verbal sensory scales for describing music qualities

    Studies on the perception of music qualities (such as induced or perceived emotions, performance styles, or timbre nuances) make extensive use of verbal descriptors. Although many authors have noted that particular music qualities can hardly be described by means of verbal labels, few studies have tried alternatives. This paper explores the use of non-verbal sensory scales to represent different perceived qualities in Western classical music. Musically trained and untrained listeners were required to listen to six musical excerpts in major keys and to evaluate them from a sensorial and semantic point of view (Experiment 1). The same design (Experiment 2) was used with musically trained and untrained listeners who listened to six musical excerpts in minor keys. The overall findings indicate that subjects’ ratings on non-verbal sensory scales are consistent throughout, and the results support the hypothesis that sensory scales can convey some specific sensations that cannot be described verbally, offering interesting insights that deepen our knowledge of the relationship between music and other sensorial experiences. Such research can foster interesting applications in music information retrieval and the exploration of timbre spaces, together with experiments applied to different musical cultures and contexts.

    Somaesthetics and Dance

    Dance is proposed as the most representative of somaesthetic arts in Thinking Through the Body: Essays in Somaesthetics and other writings of Richard Shusterman. Shusterman offers a useful but incomplete approach to the somaesthetics of dance. In the examples provided, dance appears either as subordinate to another art form (theater or photography) or as a means of achieving bodily excellence. Missing, for example, are accounts of the role of dance as an independent art form, of how somaesthetics would address differences among varying approaches to dance, and of the viewer’s somaesthetic dance experience. Three strategies for developing new directions for dance somaesthetics are offered here: identify a fuller range of applications of somaesthetics to dance as an independent art form (e.g. Martha Graham); develop somaesthetics for a wider range of theatre dance (e.g. ballet, modern, and experimental dance); and relate somaesthetics to more general features of dance (content, form, expression, style, kinesthetics) necessary for understanding the roles of the choreographer/dancer and the viewer.

    Generating Music from Literature

    We present a system, TransProse, that automatically generates musical pieces from text. TransProse uses known relations between elements of music, such as tempo and scale, and the emotions they evoke. Further, it uses a novel mechanism to determine sequences of notes that capture the emotional activity in the text. The work has applications in information visualization, in creating audio-visual e-books, and in developing music apps.
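The core idea, mapping the density of emotion words in a text to musical parameters such as tempo and scale, can be sketched as below. The tiny lexicon, the specific tempo formula, and the major/minor rule are all illustrative assumptions; they stand in for the much larger emotion lexicon and richer note-sequence mechanism that a system like TransProse relies on.

```python
# Hypothetical mini emotion lexicon (illustrative entries only; a real
# system would use a large word-emotion association lexicon).
LEXICON = {
    "joy": {"bright", "laugh", "dance", "sun"},
    "sadness": {"grave", "alone", "grief", "cold"},
}

def music_parameters(text):
    """Map emotion-word densities to tempo and scale: more joy words
    push toward a faster tempo and a major scale; more sadness words
    push toward a slower tempo and a minor scale."""
    words = text.lower().split()
    joy = sum(w in LEXICON["joy"] for w in words) / len(words)
    sad = sum(w in LEXICON["sadness"] for w in words) / len(words)
    tempo = 90 + int(120 * (joy - sad))  # beats per minute, around a 90 bpm base
    scale = "major" if joy >= sad else "minor"
    return tempo, scale

print(music_parameters("alone in the cold and grief she walked"))
# -> (45, 'minor'): sadness-heavy text yields a slow tempo in a minor scale
```

Generating the actual note sequence would then sample pitches from the chosen scale, with note density and register modulated by the emotional activity across the text.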