129 research outputs found

    INTERACTIVE PHYSICAL DESIGN AND HAPTIC PLAYING OF VIRTUAL MUSICAL INSTRUMENTS

    In computer music, a common approach to Digital Musical Instruments (DMIs) is to separate the gestural input stage from the sound synthesis stage. While such instruments offer many creative possibilities, they mark a strong rupture with traditional acoustic instruments, as the physical coupling between human and sound is broken. This coupling plays a crucial role in the expressive playing of acoustic instruments; we believe restoring it in a digital context is equally important for revealing the full expressive potential of digital instruments. This paper first presents haptic and physical modelling technologies for representing the mechano-acoustical instrumental situation in the context of DMIs. From these technologies, a prototype environment has been implemented both for designing virtual musical instruments and for interacting with them via a force-feedback device, preserving the energetic coherency of the musician-sound chain.
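One common way to realise such energetically coherent physical models is to simulate the instrument as a network of masses and visco-elastic links, with the haptic device injecting and receiving force at one of the masses. A minimal sketch, assuming a single mass coupled to a fixed point by a damped spring and simple explicit-Euler integration (all parameter values are illustrative, not those of the paper's environment):

```python
# Minimal mass-interaction sketch (hypothetical parameters): one mass tied
# to a fixed point by a damped spring, driven by an external "instrumental"
# force, integrated with an explicit Euler scheme at audio rate.
def simulate(n_steps, dt=1.0 / 44100, m=0.01, k=2000.0, z=0.05, f_ext=0.0):
    x, v = 1e-3, 0.0                     # initial displacement (m), velocity (m/s)
    out = []
    for _ in range(n_steps):
        f = -k * x - z * v + f_ext       # spring + damper + external force
        v += (f / m) * dt                # update velocity from acceleration
        x += v * dt                      # update position from velocity
        out.append(x)
    return out

wave = simulate(2000)                    # mass displacement, usable as a signal
```

In a full environment the `f_ext` term would come from the force-feedback device each step, and the same mass positions would both drive the haptic display and be listened to, which is what keeps the musician-sound energy chain unbroken.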

    A LPDDR4 MEMORY CONTROLLER DESIGN WITH EYE CENTER DETECTION ALGORITHM

    Thesis (Ph.D.), Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2016 (advisor: Suhwan Kim). The demand for higher bandwidth with reduced power consumption in mobile memory is increasing. In this thesis, the architecture of an LPDDR4 memory controller, operating with LPDDR4 memory, is proposed and designed, and an efficient training algorithm appropriate for this architecture is proposed for memory training and verification. The LPDDR4 memory specification covers operation speeds from 533 Mbps to 4266 Mbps, and the controller is designed to support that full range. The phase-locked loop in the controller is designed to operate between 1333 MHz and 2133 MHz; to cover the memory's range, a selectable frequency divider provides the operation clock, giving an output frequency from 266 MHz to 2133 MHz. The delay-locked loop is designed to operate between 266 MHz and 2133 MHz with 180° phase locking, and is used in each training operation: command training and data read and write training. To complete each training stage, an eye center detection algorithm is used. The circuits for the proposed algorithm, such as the delay line, phase interpolator, and reference generator, are designed and validated. The proposed 1x2y3x eye center detection algorithm is 23 times faster than a conventional two-dimensional eye center detection algorithm and can be implemented simply. In a 65 nm CMOS process, the proposed controller occupies 12 mm². Verification is performed with commodity LPDDR4 memory, covering the entire training sequence: power on, initialization, boot up, command training, write leveling, read training, and write training.
The low voltage swing terminated logic driver and several other functions, including write leveling and data transmission, are verified at 4266 Mbps; the entire LPDDR4 memory controller operation is verified from 566 Mbps to 1600 Mbps, and the proposed eye center detection algorithm from 566 Mbps to 2843 Mbps.
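The thesis's 1x2y3x algorithm is not detailed in the abstract, but the basic building block of eye-center detection can be sketched: sweep a delay tap across the data eye, record pass/fail at each setting, and return the midpoint of the widest passing window. A hedged sketch, where `pass_at` is a hypothetical predicate standing in for a hardware read/write loopback test at a given tap:

```python
# Hedged sketch of one building block of eye-center detection: sweep a
# delay tap across the data eye, record pass/fail at each setting, and
# return the midpoint of the widest contiguous passing window.
def eye_center_1d(pass_at, taps):
    windows = []                 # (start, end) of each contiguous passing run
    start = None
    last = None
    for t in taps:
        if pass_at(t):
            if start is None:
                start = t        # a new passing window opens here
            last = t
        elif start is not None:
            windows.append((start, last))
            start = None
    if start is not None:
        windows.append((start, last))
    if not windows:
        return None              # no passing region: this training step fails
    s, e = max(windows, key=lambda w: w[1] - w[0])
    return (s + e) // 2          # midpoint of the widest eye opening

# Example: if taps 10..30 pass, the detected center is tap 20.
center = eye_center_1d(lambda t: 10 <= t <= 30, range(64))
```

A conventional two-dimensional search repeats such a scan at every reference-voltage step; replacing the exhaustive scan with a short sequence of one-dimensional sweeps of this kind is presumably what makes the proposed algorithm about 23 times faster.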

    Multisensory instrumental dynamics as an emergent paradigm for digital musical creation

    The nature of human/instrument interaction is a long-standing area of study, drawing interest from fields as diverse as philosophy, cognitive science, anthropology, human-computer interaction, and artistic creation. In particular, the interaction between performer and musical instrument provides an enticing framework for studying the instrumental dynamics that allow for embodiment, skill acquisition and virtuosity with (electro-)acoustical instruments, and for questioning how such notions may be transferred into the realm of digital music technologies and virtual instruments. This paper offers a study of concepts and technologies allowing for instrumental dynamics with Digital Musical Instruments, through an analysis of haptic-audio creation centred on (a) theoretical and conceptual frameworks, (b) technological components, namely physical modelling techniques for the design of virtual mechanical systems and force-feedback technologies allowing mechanical coupling with them, and (c) a corpus of artistic works based on this approach. Through this retrospective, we argue that artistic works created in this field over the last 20 years, and those yet to come, may be of significant importance to the haptics community as new objects that question physicality, tangibility, and creativity from a fresh and rather singular angle. We then discuss the convergence of efforts in this field, the challenges still ahead, and the possible emergence of a new transdisciplinary community focused on multisensory digital art forms.

    Can Language Models Learn to Listen?

    We present a framework for generating appropriate facial responses from a listener in dyadic social interactions based on the speaker's words. Given an input transcription of the speaker's words with their timestamps, our approach autoregressively predicts a listener's response: a sequence of listener facial gestures, quantized using a VQ-VAE. Since gesture is a language component, we propose treating the quantized atomic motion elements as additional language-token inputs to a transformer-based large language model. Initializing our transformer with the weights of a language model pre-trained only on text results in significantly higher-quality listener responses than training a transformer from scratch. We show that our generated listener motion is fluent and reflective of language semantics through quantitative metrics and a qualitative user study. In our evaluation, we analyze the model's ability to utilize temporal and semantic aspects of spoken text. Project page: https://people.eecs.berkeley.edu/~evonne_ng/projects/text2listen/ (ICCV 2023).
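The token-sharing idea can be sketched independently of any particular model: quantized motion codes from the VQ-VAE codebook are offset past the text vocabulary, so a single embedding table (and a single transformer) can consume both modalities. All sizes below are illustrative assumptions, not the paper's:

```python
import numpy as np

# Hedged sketch: motion codes become extra vocabulary entries alongside
# text tokens. Vocabulary sizes and dimensions are illustrative only.
rng = np.random.default_rng(0)

TEXT_VOCAB = 32000           # pretrained language-model vocabulary size
MOTION_CODES = 256           # VQ-VAE codebook size
D_MODEL = 64                 # embedding width

# extend the embedding table with one row per motion code
embed = rng.normal(size=(TEXT_VOCAB + MOTION_CODES, D_MODEL))

def motion_token(code):
    # motion code k maps to token id TEXT_VOCAB + k
    return TEXT_VOCAB + code

# a mixed sequence: two text token ids followed by two motion codes
sequence = [17, 493, motion_token(5), motion_token(200)]
vectors = embed[sequence]    # (4, D_MODEL) rows, ready for the transformer
```

The payoff of this arrangement is that the pretrained text rows keep their learned weights, while only the appended motion rows start from scratch.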

    Musical Haptics

    Haptic Musical Instruments; Haptic Psychophysics; Interface Design and Evaluation; User Experience; Musical Performance

    Ubiquitous Multimodality as a Tool in Violin Performance Classification

    Through integrated sensors, wearable devices such as fitness trackers and smartwatches provide convenient interfaces through which multimodal time-series data may be recorded. Collecting data multimodally allows recorded actions, exercises or performances to be observed from several concurrent perspectives. This paper details an exploration of machine-learning-based classification of a dataset of audio-gestural violin recordings, collated using a purpose-built smartwatch application. This interface allowed synchronous gestural and audio data to be recorded, which proved well suited to classification by deep neural networks (DNNs). Recordings were segmented into individual bow strokes, which were classified in three tasks: Participant Recognition, Articulation Recognition, and Scale Recognition. Higher participant-classification accuracies were observed using gestural data alone, while multi-input deep neural networks (MI-DNNs), which concatenate separate audio and gestural subnetworks, achieved varying increases in accuracy on the latter two tasks. Across tasks and network architectures, test-classification accuracies ranged between 63.83% and 99.67%; Articulation Recognition accuracies were consistently high, averaging 99.37%.
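The multi-input architecture can be sketched in a few lines: each modality passes through its own subnetwork, the branch outputs are concatenated, and a shared head scores the classes. The layer sizes, random weights, and three-class head below are illustrative assumptions, not the networks used in the paper:

```python
import numpy as np

# Hedged sketch of a multi-input DNN forward pass: separate audio and
# gesture branches, concatenated before a shared classification head.
# Weights are random and untrained; shapes are illustrative only.
rng = np.random.default_rng(0)

def branch(x, w, b):
    return np.maximum(x @ w + b, 0.0)        # one dense layer + ReLU

audio = rng.normal(size=(4, 40))             # 4 bow strokes, 40 audio features
gesture = rng.normal(size=(4, 12))           # 4 bow strokes, 12 gesture features

h_audio = branch(audio, rng.normal(size=(40, 16)), np.zeros(16))
h_gesture = branch(gesture, rng.normal(size=(12, 16)), np.zeros(16))

# fuse the two modalities by concatenating branch outputs
fused = np.concatenate([h_audio, h_gesture], axis=1)   # shape (4, 32)
logits = fused @ rng.normal(size=(32, 3))              # 3 hypothetical classes
pred = logits.argmax(axis=1)                           # one label per stroke
```

Late fusion of this kind lets each branch specialise in its own modality before the head sees both, which matches the reported gains of the MI-DNNs over single-input networks on the audio-plus-gesture tasks.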

    Interoperability framework of virtual factory and business innovation

    Task T5.1: design a common schema and schema-evolution framework for supporting interoperability. Task T5.2: design an interoperability framework supporting data/information transformation, service composition, and business-process cooperation among partners. A draft version is envisioned for month 44, and will be updated to reflect incremental changes driven by the other work packages for the month 72 deliverable.

    Intonaspacio : comprehensive study on the conception and design of digital musical instruments : interaction between space and musical gesture

    Site-specific art understands that the place where an artwork is presented cannot be excluded from the artwork itself: the work is only complete when artwork and place intersect. Acoustically, sound has a natural relation with place. The perception of a sound results from the modulation of its spectral content by the place, and likewise the perception of a place depends on the sounds heard within it. Even so, the number of sound artworks in which place plays a primary role is still very small. We therefore propose to create a tool for composing inherently place-specific sounds: inherently, because the sound results from the interaction between place and performer; place, because it is the concept closest to human perception and to the idea of intimacy. In this thesis we suggest that this interaction can be mediated by a digital musical instrument, Intonaspacio, which allows the performer to compose and control place-specific sounds. The first part describes the construction and design of Intonaspacio: how to access the sound present in the place, which gestures to measure, which sensors to use and where to place them, and which mappings to design in order to compose place-specific sound. We begin by suggesting two different mappings that combine place and sound, looking at different approaches to exciting the structural sound of the place, i.e., its resonant frequencies. In the first, the performer records a sample of the ambient sound and reproduces it, creating a feedback loop that excites the resonances of the room at each iteration. In the second, the input sound is analysed and the set of frequencies of the place with the highest amplitudes is extracted; these are mapped to control several parameters of sound effects. To evaluate Intonaspacio we conducted an experiment in which participants played the instrument over several trial sessions. The analysis of this experiment led us to propose a third mapping that combines the previous two. The second part of the thesis aims to create the conditions for Intonaspacio's longevity, starting from the premise that a musical instrument needs a dedicated instrumental technique and repertoire to be classified as such. These conditions were addressed, first, by proposing a gestural vocabulary of Intonaspacio's idiomatic gestures, based on direct observation of the gestures most repeated by the participants in our experiment, and second, by collaborating with two composers who wrote two pieces for Intonaspacio.
Site-specific art is an artistic discipline, traditionally linked to installation, that seeks to create works maintaining a direct relation with the space where they are presented; the artwork cannot be separated from that space without losing its original meaning. By its physical characteristics, sound naturally reflects the space where it is emitted: the perception we have of a sound results from the combination of the direct sound with its reflections in the space, whose timing and amplitude are directly related to the space's architecture. By this logic, sound art would be the form most directly suited to composing situated sound; nevertheless, space is rarely used as an intentional creative phenomenon. The work presented here therefore investigates the possibility of creating situated sounds. The term "space" is often associated with something of vast, unlimited dimensions; from the perspective of site-specific art, where a relation must be established, "place" seems a more adequate term for framing our research. Beyond representing a space where relations of intimacy (proximity) can be established, place has dimensions that are shaped by human perception and the human body: in moving through a place, a person simultaneously defines its boundaries. This view of place appears at the end of the nineteenth century, when philosophy begins to orient its thinking towards the human being and human perception. Place then comes to represent something established in action and through human perception, where relations of intimacy can be formed, in contrast with non-places, more or less characterless sites where people are only passing through. We therefore reformulated our initial question, both to emphasise this idea of place and to reflect a perceptive bi-directionality that is central to site-specific art: how to create and control inherently situated sounds? Inherently, because for a real interaction between place and sound artwork two conditions are necessary: the sound must be able to provoke a response from the place, and the place must be able to modify our perception of it. This interactive relation raises a point we had not considered before, which we added to our question: control. As a possible answer we propose the construction of a digital musical instrument, Intonaspacio, to mediate this interaction and enable the performer to create and control situated sounds. First, because a musical instrument augments human capacities by extending the human body, just as a fork extends our hand. Second, because the digital musical instrument, through its characteristics, notably the separation between the control system and the sound-generation system, opens new sonic possibilities previously excluded by mechanical or human limitations; we can therefore envisage broader access to new spatial and temporal dimensions.
This thesis is divided into two parts: the first describes the construction of Intonaspacio, and the second establishes the bases for its longevity. The first part begins by investigating ways of accessing the sound of a place, composed of its ambient sounds and its structural sounds (the resonances resulting from its architecture). We believe one possible way of composing situated sounds is precisely to have the ambient sounds generate and amplify the structural sounds. Two technical questions then arise: how to integrate ambient sound into the sound work in real time, and how to let it excite the response of the space? To answer them we designed two different mappings. In the first, the performer records short excerpts of ambient sound, which are emitted and re-recorded, creating a feedback loop that excites the resonances of the place. In the second, a spectral analysis of the captured sound extracts the set of frequencies with the highest amplitudes, which are then used to control parameters of several sound effects. We also fitted the instrument with a set of different sensors to capture the performer's gestures, located in different areas of the instrument's skeleton so as to provide larger sensitive areas and, consequently, more degrees of freedom. At present Intonaspacio can extract around 17 different features, grouped into three sections (orientation, impact and distance), which can be used to shape the sound generated by the instrument through the different mappings. Both mapping proposals were evaluated by a group of participants during a user test of Intonaspacio, whose results led us to a third mapping combining characteristics of the previous two: the analysis of the captured sound is retained, but the extracted information is used as sonic material for an additive-synthesis algorithm. The second part of the thesis starts from a premise established during this work: to be considered as such, a musical instrument must possess its own instrumental technique and a dedicated repertoire. Accordingly, based on direct observation of the gestures most common among the participants in our study, we proposed a gestural vocabulary of Intonaspacio's idiomatic gestures, that is, the gestures that depend exclusively on the shape of the instrument and the location of the sensors on its structure (sensitive zones), independently of the mapping. We also collaborated with two composers, who wrote two musical pieces for Intonaspacio. Intonaspacio proved to be a complex and expressive instrument that allows performers to include place as a creative parameter, yet it still presents some control problems. With the first mapping, although the integration of place feels more direct and yields more interesting sonic results according to the study participants, the sensation of control is very low; with the second, although control is easier, the presence of the place is very subtle and barely perceptible. We hope the third mapping will help solve this problem and increase interest in the instrument, especially among the composers with whom we have collaborated and will collaborate in the future.
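The second mapping described above, extracting the highest-amplitude frequencies from the captured sound, can be sketched as a windowed FFT followed by peak picking. The frame size, peak count, and test signal below are illustrative assumptions, not Intonaspacio's actual analysis parameters:

```python
import numpy as np

# Hedged sketch: analyse a captured frame, keep the spectral peaks with
# the highest amplitudes, and return their frequencies as control values.
def loudest_partials(frame, sample_rate, n_peaks=5):
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # keep only local maxima so each partial contributes a single peak
    is_peak = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:])
    idx = np.where(is_peak)[0] + 1
    strongest = idx[np.argsort(spectrum[idx])[::-1][:n_peaks]]
    return sorted(float(freqs[i]) for i in strongest)

# Synthetic test frame: two sine partials at 220 Hz and 440 Hz.
sr = 8000
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
peaks = loudest_partials(frame, sr, n_peaks=2)
```

On this synthetic two-partial signal the function recovers peaks near 220 Hz and 440 Hz; in the instrument, such values would then be routed to effect parameters or, in the third mapping, used as partials of an additive-synthesis algorithm.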