54 research outputs found
Application of Intermediate Multi-Agent Systems to Integrated Algorithmic Composition and Expressive Performance of Music
We investigate the properties of a new Multi-Agent System (MAS) for computer-aided composition called IPCS (pronounced "ipp-siss"), the Intermediate Performance Composition System, which generates expressive performance as part of its compositional process and produces emergent melodic structures by a novel multi-agent process. IPCS consists of a small-to-medium-sized collection (2 to 16) of agents in which each agent can perform monophonic tunes and learn monophonic tunes from other agents. Each agent has an affective state (an "artificial emotional state") which affects how it performs the music to other agents; e.g. a "happy" agent will perform "happier" music. The agent performance not only involves compositional changes to the music, but also adds smaller changes based on expressive music performance algorithms for humanization. Every agent is initialized with a tune containing the same single note, and over the interaction period longer tunes are built through agent interaction. Agents will only learn tunes performed to them by other agents if the affective content of the tune is similar to their current affective state; learned tunes are concatenated to the end of their current tune. Each agent in the society learns its own growing tune during the interaction process. Agents develop "opinions" of other agents that perform to them, depending on how much the performing agent can help their tunes grow. These opinions affect who they interact with in the future. IPCS is not a mapping from multi-agent interaction onto musical features, but actually utilizes music for the agents to communicate emotions. In spite of the lack of explicit melodic intelligence in IPCS, the system is shown to generate non-trivial melody pitch sequences as a result of emotional communication between agents.
The melodies also have a hierarchical structure, which is a direct result of the emergent social interaction structure of the multi-agent system. The interactive humanizations produce micro-timing and loudness deviations in the melody which are shown to express its hierarchical generative structure, without the need for the structural-analysis software frequently used in computer music humanization.
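The interaction cycle described above (perform, conditionally learn, update opinions) can be sketched as follows. This is a minimal illustration under assumed simplifications (a one-dimensional affective state in [0, 1], random pairing, pitch shift as a stand-in for expressive performance); it is not the published IPCS implementation.

```python
import random

class Agent:
    """Toy IPCS-style agent: holds a growing tune and a scalar affective state."""

    def __init__(self, name, affect):
        self.name = name
        self.affect = affect   # assumed 1-D "emotion", 0..1 (illustrative)
        self.tune = [60]       # every agent starts with the same single note
        self.opinions = {}     # performer name -> accumulated opinion score

    def perform(self):
        # Performance shifts pitches according to the performer's affect,
        # a stand-in for the compositional/expressive changes in the paper.
        shift = round((self.affect - 0.5) * 4)
        return [p + shift for p in self.tune], self.affect

    def listen(self, performer):
        tune, affect = performer.perform()
        # Learn only if the tune's affective content is close to our own state.
        if abs(affect - self.affect) < 0.2:
            self.tune.extend(tune)  # concatenate the learned tune
            self.opinions[performer.name] = self.opinions.get(performer.name, 0) + 1

def run(agents, steps, rng):
    for _ in range(steps):
        performer, listener = rng.sample(agents, 2)
        listener.listen(performer)

rng = random.Random(1)
agents = [Agent(f"a{i}", rng.random()) for i in range(8)]
run(agents, 200, rng)
```

With this rule, tunes grow only through affectively matched encounters, so tune lengths and opinion scores reflect the emerging social structure rather than any built-in melodic intelligence.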
Programming Gate-Based Quantum Computers for Working with Music
There have been significant attempts previously to use the equations of quantum
mechanics for generating sound, and to sonify simulated quantum processes. For
new forms of computation to be utilized in computer music, eventually hardware
must be utilized. This has rarely happened with quantum computer music. One
reason for this is that it is currently not easy to get access to such hardware. A second
is that the hardware available requires some understanding of quantum computing
theory. This paper moves the process forward by utilizing two hardware quantum
computation systems: IBMQASM v1.1 and a D-Wave 2X. It also introduces the ideas
behind the gate-based IBM system, in a way hopefully more accessible to
computer-literate readers. This is a presentation of the first hybrid quantum
computer algorithm involving two hardware machines. Although neither of these
algorithms explicitly utilizes the promised quantum speed-ups, they are a vital
first step in introducing quantum computing to the musical field. The article also
introduces some key quantum computer algorithms and discusses their possible
future contribution to computer music.
The article begins with a brief overview of quantum computing and of how it can be
applied in the arts, followed by a survey of previous projects in which real or
simulated quantum processes were used in musical works or performances. The next
section discusses the best-known kind of quantum computer, based on logic gates,
and describes the hardware of one of IBM's smaller quantum computers. A short
introduction to the theory of quantum computation follows; these ideas are then
projected onto the language used by IBM's computers: IBMQASM. The following
section gives a brief overview of the other kind of quantum computer used, the
D-Wave; more detailed descriptions of its algorithm are available in other articles
referenced here. Finally, qGen is described: the IBM machine generates a melody
and the D-Wave harmonizes it. The focus is on the melody algorithm, since the
D-Wave algorithm is described in a referenced book chapter. The "simplest
possible" melody algorithm is developed, accompanied by a corresponding example.
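A "simplest possible" gate-based melody generator of the kind described above can be sketched as follows. A Hadamard gate on each of three qubits yields a uniform superposition, so repeated measurement returns uniformly random 3-bit strings, which are mapped to scale degrees. Here the quantum measurement is simulated with a seeded RNG rather than real hardware, and the function names and pitch mapping are illustrative assumptions, not the qGen algorithm itself.

```python
import random

# MIDI pitches of a C major scale: one pitch per possible 3-bit measurement.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def measure_register(rng, n_qubits=3):
    """Simulate measuring n qubits prepared in uniform superposition
    (H on every qubit): each bit is 0 or 1 with equal probability."""
    return "".join(rng.choice("01") for _ in range(n_qubits))

def quantum_melody(length, seed=7):
    """Draw `length` measurements and map each bitstring to a scale degree."""
    rng = random.Random(seed)
    melody = []
    for _ in range(length):
        bits = measure_register(rng)
        melody.append(C_MAJOR[int(bits, 2)])  # bitstring -> pitch
    return melody

melody = quantum_melody(8)
```

On real gate-based hardware, the simulated `measure_register` would be replaced by submitting the H-and-measure circuit and reading back the returned bitstrings; the mapping step is unchanged.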
Learning and Co-operation in Mobile Multi-Robot Systems
This thesis addresses the problem of setting the balance between exploration and
exploitation in teams of learning robots who exchange information. Specifically it looks at
groups of robots whose tasks include moving between salient points in the environment.
To deal with unknown and dynamic environments, such robots need to be able to discover
and learn the routes between these points themselves. A natural extension of this scenario
is to allow the robots to exchange learned routes so that only one robot needs to learn a
route for the whole team to use that route. One contribution of this thesis is to identify a
dilemma created by this extension: that once one robot has learned a route between two
points, all other robots will follow that route without looking for shorter versions. This
trade-off will be labeled the Distributed Exploration vs. Exploitation Dilemma, since
increasing distributed exploitation (allowing robots to exchange more routes) means
decreasing distributed exploration (reducing robots' ability to learn new versions of routes),
and vice-versa. At different times, teams may be required with different balances of
exploitation and exploration. The main contribution of this thesis is to present a system for
setting the balance between exploration and exploitation in a group of robots. This system
is demonstrated through experiments involving simulated robot teams. The experiments
show that increasing and decreasing the value of a parameter of the novel system will lead
to a significant increase and decrease respectively in average exploitation (and an
equivalent decrease and increase in average exploration) over a series of team missions. A
further set of experiments show that this holds true for a range of team sizes and numbers
of goals.
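The trade-off described above can be illustrated with a single tunable parameter that sets how often a robot ignores a route learned from a teammate and searches for a new one. This is a toy sketch under assumed simplifications (one shared route, a probabilistic re-exploration rule), not the thesis's actual system.

```python
import random

def choose_action(known_route, explore_prob, rng):
    """With probability explore_prob, ignore the shared route and explore;
    otherwise exploit the route learned from (or shared by) a teammate."""
    if known_route is None or rng.random() < explore_prob:
        return "explore"
    return "exploit"

def run_missions(n, explore_prob, seed=0):
    """Return the fraction of missions spent exploiting over n missions."""
    rng = random.Random(seed)
    exploit_count = 0
    known_route = None
    for _ in range(n):
        if choose_action(known_route, explore_prob, rng) == "explore":
            known_route = "route"  # a discovered route becomes shareable
        else:
            exploit_count += 1
    return exploit_count / n
```

Raising `explore_prob` shifts the team toward distributed exploration (more chances to find shorter routes); lowering it shifts the team toward distributed exploitation, mirroring the parameter behaviour reported in the experiments.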
Wireless Interactive Sonification of Large Water Waves to Demonstrate the Facilities of a Large-Scale Research Wave Tank
Interactive sonification can provide a platform for demonstration and education as well as for monitoring and investigation. We present a system designed to demonstrate the facilities of the UK's most advanced large-scale research wave tank. The interactive sonification of water waves in the "ocean basin" wave tank at Plymouth University consisted of a number of elements: generation of ocean waves, acquisition and sonification of ocean-wave measurement data, and gesture-controlled pitch and amplitude of sonifications. The generated water waves were linked in real time to sonic features via depth monitors and motion tracking of a floating buoy. Types of water-wave patterns, varying in shape and size, were selected and triggered using wireless motion detectors attached to the demonstrator's arms. The system was implemented on a network of five computers utilizing Max/MSP alongside specialist marine research software, and was demonstrated live in a public performance for the formal opening of the Marine Institute building.
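The two mappings described above (measured wave data to pitch, arm gesture to amplitude) can be sketched as simple clamped linear maps. The ranges and units here are illustrative assumptions; the actual system was a Max/MSP patch, not this code.

```python
def linmap(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly map x from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    x = max(in_lo, min(in_hi, x))
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def sonify(wave_height_m, arm_angle_deg):
    """Map buoy/depth-monitor wave elevation to oscillator frequency (Hz)
    and a gesture angle to output amplitude (0..1)."""
    freq = linmap(wave_height_m, -1.0, 1.0, 110.0, 880.0)
    amp = linmap(arm_angle_deg, 0.0, 90.0, 0.0, 1.0)
    return freq, amp
```

In the installation these two control streams arrived over the wireless network in real time; the mapping stage itself is just this kind of range conversion feeding the synthesis engine.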
Electroencephalography reflects the activity of sub-cortical brain regions during approach-withdrawal behaviour while listening to music
The ability of music to evoke activity changes in the core brain structures that underlie the experience of emotion suggests that it has the potential to be used in therapies for emotion disorders. A large volume of research has identified a network of sub-cortical brain regions underlying music-induced emotions. Additionally, separate evidence from electroencephalography (EEG) studies suggests that prefrontal asymmetry in the EEG reflects the approach-withdrawal response to music-induced emotion. However, fMRI and EEG measure quite different brain processes and we do not have a detailed understanding of the functional relationships between them in relation to music-induced emotion. We employ a joint EEG–fMRI paradigm to explore how EEG-based neural correlates of the approach-withdrawal response to music reflect activity changes in the sub-cortical emotional response network. The neural correlates examined are asymmetry in the prefrontal EEG, and the degree of disorder in that asymmetry over time, as measured by entropy. Participants' EEG and fMRI were recorded simultaneously while the participants listened to music that had been specifically generated to target the elicitation of a wide range of affective states. While listening to this music, participants also continuously reported their felt affective states. Here we report on co-variations in the dynamics of these self-reports, the EEG, and the sub-cortical brain activity. We find that a set of sub-cortical brain regions in the emotional response network exhibits activity that significantly relates to prefrontal EEG asymmetry. Specifically, EEG in the prefrontal cortex reflects not only cortical activity, but also changes in activity in the amygdala, posterior temporal cortex, and cerebellum.
We also find that, while the magnitude of the asymmetry reflects activity in parts of the limbic and paralimbic systems, the entropy of that asymmetry reflects activity in parts of the autonomic response network such as the auditory cortex. This suggests that asymmetry magnitude reflects affective responses to music, while asymmetry entropy reflects autonomic responses to music. Thus, we demonstrate that it is possible to infer activity in the limbic and paralimbic systems from prefrontal EEG asymmetry. These results show how EEG can be used to measure and monitor changes in the limbic and paralimbic systems. Specifically, they suggest that EEG asymmetry acts as an indicator of sub-cortical changes in activity induced by music. This shows that EEG may be used as a measure of the effectiveness of music therapy to evoke changes in activity in the sub-cortical emotion response network. This is also the first time that the activity of sub-cortical regions, normally considered "invisible" to EEG, has been shown to be characterisable directly from EEG dynamics measured during music listening.
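The two EEG correlates used above, prefrontal asymmetry and the entropy of its time course, can be sketched as follows. This is a stdlib-only illustration with assumed simplifications: band power is estimated naively as the variance of an (assumed already band-passed) window, and the left/right windows stand in for electrode pairs such as F3/F4.

```python
import math

def band_power(window):
    """Naive power estimate: variance of the (assumed band-passed) samples."""
    mean = sum(window) / len(window)
    return sum((s - mean) ** 2 for s in window) / len(window)

def asymmetry(left_window, right_window):
    """Prefrontal asymmetry index: ln(right power) - ln(left power)."""
    return math.log(band_power(right_window)) - math.log(band_power(left_window))

def shannon_entropy(series, n_bins=8):
    """Shannon entropy (bits) of an asymmetry time course, via a histogram."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0   # avoid zero width for flat series
    counts = [0] * n_bins
    for x in series:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    probs = [c / len(series) for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)
```

A balanced pair of windows gives an asymmetry of zero, and a flat asymmetry time course gives zero entropy, so the two measures separate *how lateralised* the response is from *how disordered* that lateralisation is over time.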
Affective calibration of musical feature sets in an emotionally intelligent music composition system
Affectively driven algorithmic composition (AAC) is a rapidly growing field that exploits computer-aided composition in order to generate new music with particular emotional qualities or affective intentions. An AAC system was devised to generate a stimulus set covering nine discrete sectors of a two-dimensional emotion space by means of a 16-channel feed-forward artificial neural network. This system was used to generate a stimulus set of short pieces of music, which were rendered using a sampled piano timbre and evaluated by a group of experienced listeners, who ascribed a two-dimensional valence-arousal coordinate to each stimulus. The underlying musical feature set, initially drawn from the literature, was subsequently adjusted by amplifying or attenuating the quantity of each feature in order to maximize the spread of stimuli in the valence-arousal space before a second listener evaluation was conducted. This process was repeated a third time in order to maximize the spread of valence-arousal coordinates ascribed to the generated stimulus set in comparison to a spread taken from an existing pre-rated database of stimuli. The results demonstrate that this prototype AAC system is capable of creating short sequences of music with a slight improvement on the range of emotion found in a stimulus set comprising real-world, traditionally composed musical excerpts.
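The amplify-or-attenuate calibration step described above can be sketched as a search over candidate gains for a feature, keeping the gain whose resulting ratings are most spread out in the valence-arousal plane. This is an illustrative sketch: the `rate` callback stands in for a round of listener evaluation, and the candidate factors are assumptions.

```python
def spread(points):
    """Total variance of 2-D valence-arousal points: the quantity to maximize."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in points) / n

def calibrate(feature_levels, rate, factors=(0.5, 1.0, 2.0)):
    """For each candidate gain, rescale the feature quantities, collect
    (stand-in) listener ratings, and keep the gain with the widest spread."""
    return max(factors,
               key=lambda f: spread([rate(x * f) for x in feature_levels]))
```

In the study this loop ran over real listener evaluations rather than a callback, and was iterated across the whole feature set over three evaluation rounds.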
Personalised, multi-modal, affective state detection for hybrid brain-computer music interfacing
Brain-computer music interfaces (BCMIs) may be used to modulate affective states, with applications in music therapy, composition, and entertainment. However, for such systems to work they need to be able to reliably detect their user's current affective state. We present a method for personalised affective state detection for use in BCMI. We compare it to a population-based detection method trained on 17 users and demonstrate that personalised affective state detection is significantly (p < 0.01) more accurate, with average improvements in accuracy of 10.2 percent for valence and 9.3 percent for arousal. We also compare a hybrid BCMI (a BCMI that combines physiological signals with neurological signals) to a conventional BCMI design (one based upon the use of only EEG features) and demonstrate that the hybrid design results in a significant (p < 0.01) 6.2 percent improvement in performance for arousal classification and a significant (p < 0.01) 5.9 percent improvement for valence classification.
Affective brain–computer music interfacing
We aim to develop and evaluate an affective brain–computer music interface
(aBCMI) for modulating the affective states of its users. Approach. An aBCMI is constructed to
detect a user's current affective state and attempt to modulate it in order to achieve specific
objectives (for example, making the user calmer or happier) by playing music which is generated
according to a specific affective target by an algorithmic music composition system and a
case-based reasoning system. The system is trained and tested in a longitudinal study on a
population of eight healthy participants, with each participant returning for multiple sessions.
Main results. The final online aBCMI is able to detect its user's current affective state with
classification accuracies of up to 65% (3-class, p < 0.01) and modulate its user's affective
state significantly above chance level (p < 0.05). Significance. Our system represents one of
the first demonstrations of an online aBCMI that is able to accurately detect and respond to
users' affective states. Possible applications include use in music therapy and entertainment.
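The closed loop described above (detect the current state, then pick music aimed at a target state via case-based reasoning) can be sketched as follows. All names, the toy case base, and the distance rule are illustrative assumptions; the real system used trained classifiers over EEG and physiological features.

```python
def detect_state(features):
    """Stand-in classifier: collapse (assumed) EEG/physiological features
    into a (valence, arousal) estimate. A trained model would go here."""
    return features["valence_score"], features["arousal_score"]

def choose_music(current, target, cases):
    """Case-based-style lookup: reuse the stored music parameters whose
    recorded (from, to) transition is closest to (current, target)."""
    def dist(case):
        (frm, to), _params = case
        return (abs(frm[0] - current[0]) + abs(frm[1] - current[1])
                + abs(to[0] - target[0]) + abs(to[1] - target[1]))
    return min(cases, key=dist)[1]

# Toy case base: ((from_state, to_state), music generation parameters).
cases = [
    (((0.2, 0.8), (0.8, 0.2)), {"tempo": 60, "mode": "major"}),   # calm down
    (((0.5, 0.2), (0.8, 0.8)), {"tempo": 140, "mode": "major"}),  # energise
]
state = detect_state({"valence_score": 0.3, "arousal_score": 0.7})
params = choose_music(state, target=(0.8, 0.2), cases=cases)
```

Run online, this loop repeats each epoch: the detector re-estimates the state, and the chosen parameters drive the algorithmic composition system toward the affective target.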
RadioMe: Adaptive Radio to Support People with Mild Dementia in Their Own Home
People with dementia and their carers experience a complicated and highly personal health journey. The RadioMe system, an adaptive live radio system enriched with reminder possibilities, agitation detection, and intervention with personalised calming music, is being developed to support people with mild dementia in their own homes. RadioMe is an ongoing, interdisciplinary project combining expertise in dementia, music therapy, music computation, and human-computer interaction.
Directed motor-auditory EEG connectivity is modulated by music tempo
Beat perception is fundamental to how we experience music, and yet the mechanism behind this spontaneous building of the internal beat representation is largely unknown. Existing findings support links between the tempo (speed) of the beat and enhancement of electroencephalogram (EEG) activity at tempo-related frequencies, but there are no studies looking at how tempo may affect the underlying long-range interactions between EEG activity at different electrodes. The present study investigates these long-range interactions using EEG activity recorded from 21 volunteers listening to music stimuli played at 4 different tempi (50, 100, 150 and 200 beats per minute). The music stimuli consisted of piano excerpts designed to convey the emotion of "peacefulness". Noise stimuli with an identical acoustic content to the music excerpts were also presented for comparison purposes. The brain activity interactions were characterized with the imaginary part of coherence (iCOH) in the frequency range 1.5–18 Hz (δ, θ, α and lower β) between all pairs of EEG electrodes for the four tempi and the music/noise conditions, as well as a baseline resting state (RS) condition obtained at the start of the experimental task. Our findings can be summarized as follows: (a) there was an ongoing long-range interaction in the RS engaging fronto-posterior areas; (b) this interaction was maintained in both music and noise, but its strength and directionality were modulated as a result of acoustic stimulation; (c) the topological patterns of iCOH were similar for music, noise and RS, however statistically significant differences in strength and direction of iCOH were identified; and (d) tempo had an effect on the direction and strength of motor-auditory interactions. Our findings are in line with existing literature and illustrate a part of the mechanism by which musical stimuli with different tempi can entrain changes in cortical activity.
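The iCOH measure used above, the imaginary part of coherency between two channels, can be sketched as follows. This is a stdlib-only illustration (naive DFT, cross- and auto-spectra averaged over epochs); a practical pipeline would use an FFT and Welch-style windowing.

```python
import cmath
import math

def dft(x):
    """Naive DFT (stdlib-only; an FFT would replace this in practice)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def icoh(epochs_x, epochs_y, k):
    """Imaginary part of coherency between channels x and y at DFT bin k,
    with cross- and auto-spectra averaged over epochs. Unlike plain
    coherence, a purely zero-lag (volume-conducted) coupling yields 0."""
    sxy = 0.0 + 0.0j
    sxx = syy = 0.0
    for ex, ey in zip(epochs_x, epochs_y):
        fx, fy = dft(ex)[k], dft(ey)[k]
        sxy += fx * fy.conjugate()
        sxx += abs(fx) ** 2
        syy += abs(fy) ** 2
    return (sxy / math.sqrt(sxx * syy)).imag
```

A 90-degree phase lag between two channels drives iCOH to its extreme value, while identical signals give exactly zero, which is why the measure is favoured for long-range EEG interactions where zero-lag leakage between electrodes must be discounted.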