    Learning and Visualizing Music Specifications Using Pattern Graphs

    We describe a system to learn and visualize specifications from songs in symbolic and audio formats. The core of our approach is based on a software engineering procedure called specification mining. Our procedure extracts patterns from feature vectors and uses them to build pattern graphs. The feature vectors are created by segmenting songs and extracting time- and frequency-domain features from them, such as chromagrams, chord degree, and interval classification. The pattern graphs built on these feature vectors provide the likelihood of a pattern occurring between nodes, as well as start and end nodes. The pattern graphs learned from songs describe formal specifications that can be used for human-interpretable quantitative and qualitative song comparison, or to perform supervisory control in machine improvisation. We present results in song summarization, song and style validation, and machine improvisation with formal specifications.
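
    The pattern-graph construction lends itself to a compact sketch. The Python below is a minimal illustration of the idea as described above, assuming nodes are discretized feature symbols (chord degrees, say), edge weights are observed transition counts normalized to likelihoods, and start/end nodes are tallied per song; all names are our own and not the paper's implementation.

    from collections import defaultdict

    class PatternGraph:
        def __init__(self):
            # node -> successor node -> observed transition count
            self.counts = defaultdict(lambda: defaultdict(int))
            self.starts = defaultdict(int)  # how often each symbol opens a song
            self.ends = defaultdict(int)    # how often each symbol closes a song

        def add_sequence(self, symbols):
            """Update the graph with one segmented song (a list of feature symbols)."""
            if not symbols:
                return
            self.starts[symbols[0]] += 1
            self.ends[symbols[-1]] += 1
            for a, b in zip(symbols, symbols[1:]):
                self.counts[a][b] += 1

        def likelihood(self, a, b):
            """Estimated probability of moving from symbol a to symbol b."""
            total = sum(self.counts[a].values())
            return self.counts[a][b] / total if total else 0.0

    # Two toy "songs" expressed as chord-degree sequences.
    g = PatternGraph()
    g.add_sequence(["I", "IV", "V", "I"])
    g.add_sequence(["I", "V", "I"])
    print(g.likelihood("I", "IV"))  # 0.5: "I" is followed once by "IV", once by "V"

    A graph built this way could then back the uses listed above: song comparison via edge likelihoods, or supervisory control by constraining an improviser to transitions the graph permits.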

    Dynamic Interactions for Network Visualization and Simulation

    Most network visualization suites do not interact with a simulator as it executes, nor do they provide an effective user interface that includes multiple visualization functions. The subject of this research is to improve the network visualization presented in previous research [5] by adding these capabilities to the framework. The previous network visualization could not alter specific visualization characteristics, especially when detailed observations needed to be made on a small part of a large network. Searching for a network event in such a topology could cause large delays, degrading the user interface. In addition to its shortfalls in handling complex network events, [5] did not provide dynamic user interactions, since it had no real-time interaction with a simulator. These shortfalls motivate the development of a new network visualization framework that provides a more robust user interface, network observation tools, and interaction with the simulator. Our research presents the design, development, and implementation of this new network visualization framework to enhance network scenarios and provide interaction with NS-2 as it executes. From the interface design perspective, this research presents a prototype design to ease the implementation of the framework. Visualization functions such as clustering, filtering, labeling, and color coding help in accessing network objects and events, supported by four tabs consisting of buttons, menus, and sliders. The new framework design gives the ability to handle the inherent complexity of large networks, allowing the user to interact with the current display, alter visualization parameters, and control the network through the visualization. In our application, multiple visualizations are linked to NS-2 to build execution scenarios that let us test the clustering, filtering, and labeling functionalities on separate visualization screens as NS-2 progresses.
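
    To make the simulator/visualization linkage concrete, here is a minimal Python sketch of the underlying pattern: several views subscribed to one live event stream, each applying its own filter or labeling. NS-2 itself is driven through OTcl scripts and trace output, so a toy Simulator class stands in for it here, and every class, method, and event field is a hypothetical illustration rather than the framework's actual API.

    class Simulator:
        """Stand-in for a running simulator that emits network events."""
        def __init__(self):
            self.listeners = []

        def subscribe(self, callback):
            self.listeners.append(callback)

        def run(self, events):
            for ev in events:  # with NS-2 these would arrive as the run progresses
                for cb in self.listeners:
                    cb(ev)

    class FilterView:
        """One visualization screen: renders only events matching its predicate."""
        def __init__(self, name, predicate):
            self.name, self.predicate = name, predicate

        def on_event(self, ev):
            if self.predicate(ev):
                print(f"[{self.name}] {ev}")

    sim = Simulator()
    # Two linked views over the same live stream: one filtered to packet drops,
    # one labeling all activity at node n3.
    sim.subscribe(FilterView("drops", lambda e: e["type"] == "drop").on_event)
    sim.subscribe(FilterView("node-n3", lambda e: e["node"] == "n3").on_event)
    sim.run([
        {"type": "enqueue", "node": "n1"},
        {"type": "drop", "node": "n3"},
    ])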

    Establishing a Framework for the development of Multimodal Virtual Reality Interfaces with Applicability in Education and Clinical Practice

    The development of Virtual Reality (VR) and Augmented Reality (AR) content with multiple sources of both input and output has led to countless contributions in a great many fields, among them medicine and education. Nevertheless, the actual process of integrating existing VR/AR media and setting it to purpose remains a highly scattered and esoteric undertaking. Moreover, the architectures that derive from such ventures seldom comprise haptic feedback in their implementation, which in turn deprives users of one of the paramount aspects of human interaction, their sense of touch. Determined to circumvent these issues, the present dissertation proposes a centralized albeit modularized framework that enables the conception of multimodal VR/AR applications in a novel and straightforward manner. To accomplish this, the framework makes use of a stereoscopic VR Head Mounted Display (HMD) from Oculus Rift©, a hand tracking controller from Leap Motion©, a custom-made VR mount that allows the assemblage of the two preceding peripherals, and a wearable device of our own design. The latter is a glove that encompasses two core modules: one that conveys haptic feedback to its wearer, and another that handles the non-intrusive acquisition, processing, and registering of the wearer's Electrocardiogram (ECG), Electromyogram (EMG), and Electrodermal Activity (EDA). The software elements of the aforementioned features were all interfaced through Unity3D©, a powerful game engine whose popularity in academic and scientific endeavors is ever increasing. Upon completion of our system, we substantiated our initial claim with thoroughly developed experiences that attest to its worth. With this premise in mind, we devised a comprehensive repository of interfaces, among which three merit special consideration: Brain Connectivity Leap (BCL), Ode to Passive Haptic Learning (PHL), and a Surgical Simulator.
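
    The glove's two-module split can be illustrated with a short sketch. The Python below shows one way a haptic-output module and a biosignal-acquisition module might be composed behind a single facade; the real device talks to Unity3D© over its own protocol, and every name, signature, and the toy EMG-to-haptics coupling here is a hypothetical stand-in.

    import random

    class HapticModule:
        def pulse(self, finger: int, intensity: float) -> None:
            # On hardware this would drive a vibrotactile actuator.
            print(f"haptic pulse: finger={finger} intensity={intensity:.2f}")

    class BiosignalModule:
        CHANNELS = ("ECG", "EMG", "EDA")

        def sample(self) -> dict:
            # On hardware this would read the acquisition front end;
            # random values stand in for real samples.
            return {ch: random.random() for ch in self.CHANNELS}

    class Glove:
        """Facade exposing both modules to the engine side as one device."""
        def __init__(self):
            self.haptics = HapticModule()
            self.signals = BiosignalModule()

        def tick(self):
            reading = self.signals.sample()
            # Toy coupling: stronger EMG activity yields stronger feedback.
            self.haptics.pulse(finger=1, intensity=reading["EMG"])
            return reading

    glove = Glove()
    for _ in range(3):
        print(glove.tick())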