    O clítico se aspectual e causa

    Advisor: Maria Filomena Spatti Sandalo. Master's dissertation, Universidade Estadual de Campinas, Instituto de Estudos da Linguagem. Abstract: The main aim of this thesis is to account for the so-called aspectual Se in Spanish, specifically in the dialect spoken in the city of Lima. There have traditionally been two approaches to explaining the clitic: semantic-aspectual and syntactic. This work combines the two perspectives through the hypothesis that constructions with the aspectual clitic contain a Cause node. Following Pylkkänen (2002, 2008), I posit that Spanish is a voice-bundling and root-selecting language, that is, the Cause and Voice nodes appear together, merged, and Cause directly selects a root that is then verbalized. Constructions with the aspectual Se and verbs such as Morir(se) ('die') or Beber(se) ('drink') would thus be cases of an optional causativization of the verb. For the aspectual part, I build on de Miguel and Fernández (2000), who argue that constructions with the aspectual Se have two phases: the first is the process (or its equivalent) expressed by the verb; the second, which includes the culmination of the event and the change of state, is the one focalized by the clitic Se. The thesis equates these two phases with the causing and caused events, respectively. The analysis also helps to elucidate the function of the clitic Se and the position it would occupy in the syntax; specifically, I posit that the clitic is a reflexive base-generated in vP, adopting the base-generation hypothesis for clitics. Finally, the thesis aims to account exhaustively for all the verbal contexts in which the clitic appears: with unaccusative, unergative, and transitive verbs. (Master's in Linguistics)

    Air violin: a machine learning approach to fingering gesture recognition

    Paper presented at the 1st ACM SIGCHI International Workshop on Multimodal Interaction for Education, held on 13 November 2017 in Glasgow, United Kingdom. We train and evaluate two machine learning models for predicting fingering in violin performances using the motion and EMG sensors integrated in the Myo device. Our aim is twofold: first, to provide a fingering recognition model in the context of a gamified virtual violin application in which we measure both right-hand (i.e. bow) and left-hand (i.e. fingering) gestures, and second, to implement a tracking system for a computer-assisted pedagogical tool for self-regulated learners in high-level music education. Our approach is based on the principle of mapping-by-demonstration, in which the model is trained by the performer. We evaluated a model based on Decision Trees and compared it with a Hidden Markov Model. This work has been partly sponsored by the Spanish TIN project TIMUL (TIN 2013-48152-C2-2-R), the European Union Horizon 2020 research and innovation programme under grant agreement No. 688269 (TELMI project), and the Spanish Ministry of Economy and Competitiveness under the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502).
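
    As an illustration only (not the authors' implementation), the sketch below trains a decision-tree fingering classifier in the style described above. The feature layout, window size, and labels are assumptions; in the real setup the features would come from the Myo's EMG and inertial sensors.

```python
# Hypothetical sketch: fingering classification from windowed Myo features.
# X's layout (EMG/IMU summary statistics per window) is an assumption.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))    # placeholder: 12 features per window
y = rng.integers(0, 5, size=1000)  # placeholder: finger 0 (open) to 4

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=8, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

    In the mapping-by-demonstration setting, the training rows would simply be the performer's own labelled demonstrations rather than a fixed corpus.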

    A Machine learning approach to ornamentation modeling and synthesis in jazz guitar

    We present a machine learning approach to automatically generating expressive (ornamented) jazz performances from inexpressive music scores. Features extracted from the scores and from the corresponding audio recordings performed by a professional guitarist were used to train computational models for predicting melody ornamentation. As a first step, several machine learning techniques were explored to induce regression models for timing, onset, and dynamics (i.e. note duration and energy) transformations, and an ornamentation model for classifying notes as ornamented or non-ornamented. In a second step, the most suitable ornament for each predicted ornamented note was selected based on note-context similarity. Finally, concatenative synthesis was used to automatically synthesize expressive performances of new pieces using the induced models. Supplemental online material for this article, containing musical examples of the automatically generated ornamented pieces, can be accessed at doi: 10.1080/17459737.2016.1207814 and https://soundcloud.com/machine-learning-and-jazz. In the Online Supplement we present an example of the musical piece Yesterdays by Jerome Kern, modeled using our methodology for expressive music performance in jazz guitar. This project has received funding from the European Union Horizon 2020 research and innovation programme [grant agreement No 688269] and the Spanish TIN project TIMUL [grant agreement TIN2013-48152-C2-2-R].
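
    The two-step pipeline described above can be sketched as follows; all feature names, models, and data are hypothetical placeholders, not the trained models from the paper.

```python
# Hypothetical sketch of the two-step pipeline: (1) classify notes as
# ornamented or not, (2) pick the stored ornament with the most similar
# note context. Features, labels, and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 10))       # note-context features (assumed)
ornamented = rng.integers(0, 2, size=500)  # 1 if the performed note was ornamented

clf = RandomForestClassifier(random_state=0).fit(X_train, ornamented)

# Library of observed ornaments, indexed by the context they occurred in.
orn_contexts = X_train[ornamented == 1]
nn = NearestNeighbors(n_neighbors=1).fit(orn_contexts)

X_new = rng.normal(size=(20, 10))          # notes of a new, unseen score
for i, x in enumerate(X_new):
    if clf.predict(x[None, :])[0] == 1:
        _, idx = nn.kneighbors(x[None, :])
        print(f"note {i}: apply stored ornament #{idx[0, 0]}")
```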

    Mixed reality or LEGO game play? Fostering social interaction in children with Autism

    This study extends previous research showing that a mixed reality (MR) system fosters social interaction behaviours (SIBs) in children with Autism Spectrum Condition (ASC). When comparing this system to a LEGO-based non-digital intervention, it has been observed that an MR system effectively mediates a face-to-face play session between a child with ASC and a child without ASC, providing new, specific advantageous properties (e.g. not being a passive tool, not needing to be guided by the therapist). Considering the newly collected multimodal data totaling 72 children (36 trials of dyads, child with ASC/child without ASC), a first goal of the present study is to apply detailed statistical inference and machine learning techniques to extensively evaluate the overall effect of this MR system compared to the LEGO condition. This goal also includes the analysis of psychophysiological data and allows context-driven triangulation of the multimodal data, operationalized by (i) video coding of SIBs, (ii) psychophysiological data, and (iii) system logs of user-system events. A second goal is to show how SIBs taking place in these experiences are influenced by the internal states of the users and the system. SIBs were measured by video coding of overt behaviours (Initiation, Response, and Externalization) and with self-reports. Internal states were measured using a wearable device designed by the FuBIntLab (Full-Body Interaction Lab) to acquire electrocardiogram (ECG) and electrodermal activity (EDA) data. Affective sliders and State Trait Anxiety Scale questionnaires were used as self-reports. A repeated-measures design was chosen with two conditions: the MR environment and the traditional LEGO therapy. The results show that the MR system has a positive effect on SIBs when compared to the LEGO condition, with the added advantage of being more flexible.
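
    As a minimal illustration of the repeated-measures comparison (not the study's actual analysis, which also covers psychophysiology and machine learning), a paired non-parametric test over per-dyad SIB counts might look like this; the data below are synthetic placeholders.

```python
# Hypothetical sketch: paired comparison of SIB counts per dyad under the
# MR and LEGO conditions. The counts below are synthetic, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sib_mr = rng.poisson(12, size=36)    # SIB count per dyad, MR condition
sib_lego = rng.poisson(9, size=36)   # SIB count per dyad, LEGO condition

# Same dyads measured in both conditions -> a paired, rank-based test.
stat, p = stats.wilcoxon(sib_mr, sib_lego)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```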

    Neural correlates of bow learning technique

    Paper presented at the 10th International Workshop on Machine Learning and Music (MML), held in Barcelona, Spain, on 6 October 2017. In this work we study the process of learning a musical instrument through the use of audio descriptors and EEG. Twelve subjects participated in our experiment, divided into two groups: a group who had never played the violin before (six subjects) and a group of experts (more than six years playing the violin). Participants were asked to perform a violin exercise over eighteen trials while the audio corresponding to each trial was recorded together with their EEG activity. Beginners showed significant differences between the beginning and the end of the session, corresponding to an improvement in the quality of the recorded sound, while experts maintained their results. On the other hand, beginners showed more power in the High Beta frequency band (21-35 Hz) than experts, although these power values decreased during the session, correlating with an improvement in the exercise scores.
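
    The High Beta comparison rests on band power estimation. A minimal sketch, assuming a Welch periodogram over a single EEG channel (the sampling rate and signal below are placeholders):

```python
# Hypothetical sketch: High Beta (21-35 Hz) band power of one EEG channel
# via Welch's method. The sampling rate and signal are placeholders.
import numpy as np
from scipy.signal import welch

fs = 250                                  # assumed sampling rate in Hz
rng = np.random.default_rng(3)
eeg = rng.normal(size=fs * 60)            # placeholder: 60 s of one channel

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
band = (freqs >= 21) & (freqs <= 35)
high_beta_power = psd[band].sum() * (freqs[1] - freqs[0])
print(f"High Beta band power: {high_beta_power:.4f}")
```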

    The EyeHarp: A gaze-controlled digital musical instrument

    We present and evaluate the EyeHarp, a new gaze-controlled Digital Musical Instrument, which aims to enable people with severe motor disabilities to learn, perform, and compose music using only their gaze as the control mechanism. It consists of (1) a step-sequencer layer, which serves for constructing chords/arpeggios, and (2) a melody layer, for playing melodies and changing the chords/arpeggios. We have conducted a pilot evaluation of the EyeHarp involving 39 participants with no disabilities, from both a performer and an audience perspective. In the first case, eight people with normal vision and no motor disability participated in a music-playing session in which both quantitative and qualitative data were collected. In the second case, 31 people qualitatively evaluated the EyeHarp in a concert setting consisting of two parts: a solo performance part and an ensemble (EyeHarp, two guitars, and flute) performance part. The obtained results indicate that, similarly to traditional music instruments, the proposed digital musical instrument has a steep learning curve and makes it possible to produce expressive performances, from both the performer and the audience perspective. This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 688269, as well as from the Spanish TIN project TIMUL under grant agreement TIN2013-48152-C2-2-R.
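
    The abstract does not detail the selection mechanism, but gaze-controlled interfaces of this kind commonly rely on dwell-time triggers; the sketch below illustrates that general idea and is not the EyeHarp's actual code.

```python
# Hypothetical sketch of a dwell-time trigger, a common selection mechanism
# in gaze-controlled interfaces; not the EyeHarp's actual implementation.
from dataclasses import dataclass
from typing import Optional

DWELL_S = 0.4  # assumed dwell threshold in seconds

@dataclass
class DwellSelector:
    target: Optional[int] = None  # region the gaze currently rests on
    since: float = 0.0            # time the gaze entered that region

    def update(self, region: Optional[int], t: float) -> Optional[int]:
        """Feed one gaze sample; return a region id once the dwell elapses."""
        if region != self.target:
            self.target, self.since = region, t
            return None
        if region is not None and t - self.since >= DWELL_S:
            self.since = t        # re-arm so the selection can repeat
            return region
        return None

sel = DwellSelector()
for t, region in [(0.0, 3), (0.2, 3), (0.5, 3), (0.7, None)]:
    fired = sel.update(region, t)
    if fired is not None:
        print(f"trigger note for region {fired} at t = {t}")
```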

    Bowing gestures classification in violin performance: a machine learning approach

    Gestures in music are of paramount importance, partly because they are directly linked to musicians' sound and expressiveness. At the same time, current motion capture technologies are capable of detecting body motion/gesture details very accurately. We present a machine learning approach to automatic violin bow gesture classification based on Hierarchical Hidden Markov Models (HHMM) and motion data. We recorded motion and audio data corresponding to seven representative bow techniques (Détaché, Martelé, Spiccato, Ricochet, Sautillé, Staccato, and Bariolage) performed by a professional violin player. We used the commercial Myo device to record inertial motion information from the right forearm and synchronized it with the audio recordings. The data was uploaded to an online public repository. After extracting features from both the motion and audio data, we trained an HHMM to identify the different bowing techniques automatically. Our model can determine the studied bowing techniques with over 94% accuracy. The results make the application of this work feasible in a practical learning scenario, where violin students can benefit from the real-time feedback provided by the system. This work has been partly sponsored by the Spanish TIN project TIMUL (TIN 2013-48152-C2-2-R), the European Union Horizon 2020 research and innovation programme under grant agreement No. 688269 (TELMI project), and the Spanish Ministry of Economy and Competitiveness under the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502).
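
    As a simplified stand-in for the paper's Hierarchical HMM (hmmlearn offers only flat HMMs), one can train a Gaussian HMM per bowing technique and label a stroke by maximum log-likelihood; all data below is synthetic.

```python
# Simplified sketch: one flat Gaussian HMM per bowing technique (the paper
# uses Hierarchical HMMs); a stroke is assigned to the best-scoring model.
# Feature dimensions and all data are synthetic placeholders.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(4)
techniques = ["detache", "martele", "spiccato"]

models = {}
for i, name in enumerate(techniques):
    # Placeholder training set: 20 strokes x 50 frames of 6 IMU features.
    X = rng.normal(loc=i, size=(20 * 50, 6))
    models[name] = GaussianHMM(n_components=3, n_iter=20).fit(X, [50] * 20)

stroke = rng.normal(loc=1, size=(50, 6))   # one unlabelled stroke
scores = {name: m.score(stroke) for name, m in models.items()}
print("predicted technique:", max(scores, key=scores.get))
```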

    A Machine learning approach to discover rules for expressive performance actions in jazz guitar music

    Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data-driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the extracted musical features contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and the tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found; differences may be due to the fact that most previously studied performance data consists of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces by two other professional jazz guitar players. The results show consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator of the generality of the ornamentation rules. This work has been partly sponsored by the Spanish TIN project TIMUL (TIN2013-48152-C2-2-R) and the European Union Horizon 2020 research and innovation programme under grant agreement No. 688269 (TELMI project).
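
    The abstract does not name the rule learner; as one common way to obtain human-readable rules, the sketch below fits a shallow decision tree over hypothetical note-level features and prints its branches as if-then rules.

```python
# Hypothetical illustration of rule induction: a shallow decision tree over
# note-level features, read off as if-then rules. Features, labels, and
# data are placeholders, not the Grant Green corpus.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
X = pd.DataFrame({
    "duration_beats": rng.uniform(0.25, 2.0, 300),
    "pitch_midi": rng.integers(48, 84, 300),
    "metrical_strength": rng.integers(0, 4, 300),
    "phrase_position": rng.uniform(0.0, 1.0, 300),
})
y = (X["duration_beats"] > 1.0).astype(int)  # toy 'ornamented' label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```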