
    Procedural generation of music-guided weapons

    Beyond the standard use of music as a passive and sometimes optional component of player experience, the impact of music as a guide for the procedural generation of game content has not yet been explored. As a core elicitor of player experience, music can be used to drive the generation of personalized game content for a particular musical theme, song or sound effect being played during the game. In this paper we introduce a proof-of-concept game demonstrator exploring the relationship between music and visual game content across different playing behaviors and styles. For that purpose, we created a side-scroller shooter game where players can affect the relationship between projectiles’ trajectories and the background music through interactive evolution. By coupling NeuroEvolution of Augmenting Topologies (NEAT) with interactive evolution, we are able to create an initial arsenal of innovative weapons that are both interesting to play with and create novel fusions of visual and musical aesthetics. Thanks to Ryan Abela for his input on designing the sound extraction methods. The research was supported, in part, by the FP7 Marie Curie CIG project AutoGameDesign (project no. 630665).
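The interactive-evolution loop described above can be sketched in miniature. The genome encoding, mutation operator, and player ratings below are illustrative stand-ins (a plain evolutionary loop over weapon parameters, not the paper's NEAT implementation):

```python
import random

def mutate(genome, rng, sigma=0.1):
    """Gaussian perturbation of a weapon's trajectory parameters."""
    return [g + rng.gauss(0, sigma) for g in genome]

def evolve(population, ratings, rng):
    """One generation of interactive evolution: player ratings act as
    fitness, and offspring are mutated copies of the best genomes."""
    ranked = [g for _, g in sorted(zip(ratings, population),
                                   key=lambda p: -p[0])]
    parents = ranked[: len(ranked) // 2]     # keep the top-rated half
    return [mutate(rng.choice(parents), rng) for _ in population]

rng = random.Random(1)
population = [[rng.random() for _ in range(4)] for _ in range(6)]
ratings = [rng.random() for _ in population]  # stand-in for player feedback
next_gen = evolve(population, ratings, rng)
print(len(next_gen), len(next_gen[0]))
```

In the game itself the ratings would come from which weapons the player chooses to keep firing, closing the interactive loop.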

    The association between work-related rumination and heart rate variability: A field study

    This is the final version, available on open access from Frontiers Media via the DOI in this record. The objective of this study was to examine the association between perseverative cognition, in the form of work-related rumination, and heart rate variability (HRV). We tested the hypothesis that high ruminators would show lower vagally mediated HRV than low ruminators during their leisure time. Individuals were classified as low (n = 17) or high ruminators (n = 19) using the affective scale of the work-related rumination measure. HRV was assessed using a wrist sensor band (Microsoft Band 2) and sampled between 8 pm and 10 pm over three workday evenings (Monday to Wednesday) while individuals carried out their normal evening routines. High affective ruminators demonstrated lower HRV, in the form of the root mean square of successive differences (RMSSD), than low ruminators, indicating lower parasympathetic activity. There was no significant difference in heart rate or activity levels between the two groups during the recording periods. The findings of this study may have implications for the design and delivery of interventions to help individuals unwind after work and manage stress more effectively. Limitations and implications for future research are discussed.
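RMSSD, the HRV index used in the study, has a direct computation from successive inter-beat (RR) intervals; a minimal sketch with made-up interval values:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeat
    (RR) intervals, a time-domain index of vagally mediated HRV."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Example: five RR intervals in milliseconds (illustrative values)
print(rmssd([800, 810, 790, 805, 795]))
```

Higher RMSSD reflects greater beat-to-beat variability and thus stronger parasympathetic influence, which is why the high ruminators' lower values indicate reduced vagal activity.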

    The use of artificial intelligence in video-game NPCs and its effect on player immersion

    Abstract. As computers and video games develop, players continually demand more from games and from the gaming experience. In addition to impressive graphics, players also expect enjoyable experiences and situations. Computer-controlled characters, i.e. non-player characters (NPCs), play a major role in creating an enjoyable gaming experience and immersion. Advances in artificial intelligence have brought new dimensions and possibilities to NPC behaviour. The topic of this thesis was the use of artificial intelligence in video-game NPCs and its effect on player immersion. The scope was limited to NPCs because previous studies have often examined the topic from, for example, the perspective of learning agents. The thesis was conducted as a literature review. It was found that the use of AI in NPCs has mostly focused on the believability of NPC behaviour. AI has been used to make NPCs human-like, and it has been observed that AI can give them distinct personalities. Personality-rich NPCs are important for player immersion, and when implemented well they can influence several factors that affect immersion. Research on combining NPC voices with AI was still rather limited, but the effect of voices on player immersion has been noted in studies, so further research on generating NPC voices with AI would be useful. The thesis also found that improving player immersion through NPC AI requires a high-quality implementation: done well, AI supports player immersion, but done poorly it affects immersion negatively. Future research should investigate how much learning by an AI-driven NPC is appropriate from the standpoint of player immersion.

    Computer music and digital games: the influence of algorithmic composition on the player's feeling of immersion

    Advisor: Tiago Fernandes Tavares. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. The growing interest in developing digital games brings a greater demand for creating soundtracks. The effort required to create such resources stimulates the adoption of techniques for generating content through algorithms. Given the current progress in studies of algorithms for generating music capable of influencing a variety of feelings in listeners, the present work studies the impact of using such music, generated algorithmically and in real time, on the player's feeling of immersion. The hypothesis is that music generated in real time by algorithms can provide a greater feeling of immersion in the player than pre-composed music. The objectives are the study of the concepts of immersion and engagement, as well as their relationship with digital games; the study of methods for algorithmic music composition; the development of a game with music generated in real time for the test procedure; and the collection of data from players by logging their actions during gameplay sessions and administering questionnaires. For the control and test groups we performed an A/B test with two versions of the same game: one with music written by a human being, and the other with melodies produced in real time by Markov chains. The results of this data collection were analyzed to determine whether there is a relationship between the type of music (pre-composed or algorithmic) and parameters that could suggest greater player immersion, such as the answers to an immersion questionnaire and performance in solving puzzles. The control group presented a higher average immersion score (50%, against 40% in the test group). It was also observed that a higher immersion score was not related to a smaller gain in performance when solving tangram-type puzzles. However, no significant differences were found in the immersion score, evaluated by questionnaires, between the two groups. This indicates that algorithmically generated music can be a viable alternative for producing musical content in digital games.
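The real-time melody generation by Markov chains mentioned above can be illustrated with a first-order chain trained on a toy note sequence; the corpus and note names are hypothetical, not the dissertation's material:

```python
import random

def train_markov(melody):
    """Build a first-order Markov transition table from a note sequence."""
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, rng):
    """Random-walk the transition table to produce a new melody."""
    notes = [start]
    for _ in range(length - 1):
        choices = table.get(notes[-1])
        if not choices:          # dead end: no observed successor
            break
        notes.append(rng.choice(choices))
    return notes

corpus = ["C4", "D4", "E4", "D4", "C4", "E4", "G4", "E4", "C4"]
table = train_markov(corpus)
print(generate(table, "C4", 8, random.Random(0)))
```

Because transition probabilities are implicit in the repetition of successors, the generated melodies statistically resemble the training corpus while varying on each run.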

    Interactive Sonic Environments: Sonic artwork via gameplay experience

    The purpose of this study is to investigate the use of video-game technology in the design and implementation of interactive, sound-centric artworks, and to contribute to the discourse on and understanding of its effectiveness in electro-acoustic composition, highlighting the creative process. Key research questions include: How can the language of electro-acoustic music be placed in a new framework derived from video-game aesthetics and technology? What new creative processes need to be considered when using this medium? Moreover, what aspects of 'play' should be considered when designing the systems? The findings of this study assert that composers and sonic-art practitioners need little or no coding knowledge to create exciting applications, and that the myriad options available to the composer when using video-game technology are limited only by imagination. Through a cyclic process of planning, building, testing and playing these applications, the project revealed advantages and unique sonic opportunities in comparison to other sonic-art installations. A portfolio of selected original compositions, both fixed and open, is presented by the author to complement this study. The commentary serves to place the work in context with other practitioners in the field and to describe the compositional approaches that have been taken.

    Automated manipulation of musical grammars to support episodic interactive experiences

    Music is used to enhance the experience of participants and visitors in a range of settings including theatre, film, video games, installations and theme parks. These experiences may be interactive, contrastingly episodic and of variable duration. Hence, the musical accompaniment needs to be dynamic and to transition between contrasting music passages. In these contexts, computer generation of music may be necessary for practical reasons, including distribution and cost. Automated and dynamic composition algorithms exist but are not well suited to a highly interactive, episodic context owing to transition-related problems including discontinuity, abruptness, extended repetitiveness and lack of musical granularity and musical form. Addressing these problems requires algorithms capable of reacting to participant behaviour and episodic change in order to generate form-aware music that is continuous and coherent during transitions. This thesis presents the Form-Aware Transitioning and Recovering Algorithm (FATRA) for real-time, adaptive, form-aware music generation that provides continuous musical accompaniment in episodic contexts. FATRA combines stochastic grammar adaptation and grammar merging in real time. The Form-Aware Transition Engine (FATE), an implementation of FATRA, estimates the time of occurrence of upcoming narrative transitions and generates a harmonic sequence as narrative accompaniment, with a focus on coherent, form-aware transitions between music passages of contrasting character. Using FATE, FATRA has been evaluated in three perceptual user studies: an audio-augmented real museum experience, a computer-simulated museum experience and a music-focused online study detached from narrative. Music transitions of FATRA were benchmarked against common approaches of the video-game industry, i.e. crossfading and direct transitions. Participants were overall content with the music of FATE during their experience. Transitions of FATE were significantly favoured over the crossfading benchmark and competitive against the direct-transitions benchmark, without statistical significance for the latter comparison. In addition, technical evaluation demonstrated capabilities of FATRA including form generation, repetitiveness avoidance and style/form recovery in the case of falsely predicted narrative transitions. The technical results, along with perceptual preference and competitiveness against the benchmark approaches, are deemed positive, and the structural advantages of FATRA, including form-aware transitioning, carry considerable potential for future research.
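The stochastic-grammar generation that such systems build on can be illustrated with a toy weighted grammar over chord symbols; the rules and weights below are invented for illustration and are not FATRA's:

```python
import random

# Toy stochastic grammar: each nonterminal expands to one of several
# weighted alternatives; terminals are Roman-numeral chord symbols.
RULES = {
    "PHRASE": [(["I", "CADENCE"], 0.5), (["I", "IV", "CADENCE"], 0.5)],
    "CADENCE": [(["V", "I"], 0.7), (["IV", "I"], 0.3)],
}

def expand(symbol, rng):
    """Recursively expand a symbol; terminals are returned as-is."""
    if symbol not in RULES:
        return [symbol]
    alternatives, weights = zip(*RULES[symbol])
    chosen = rng.choices(alternatives, weights=weights)[0]
    out = []
    for s in chosen:
        out.extend(expand(s, rng))
    return out

print(expand("PHRASE", random.Random(42)))
```

Adapting the weights at run time (as opposed to keeping them fixed, as here) is what lets a grammar-based generator react to participant behaviour while preserving musical form.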

    Algorithmic composition of music in real-time with soft constraints

    Music has been the subject of formal approaches for a long time, ranging from Pythagoras’ elementary research on tonal systems to J. S. Bach’s elaborate formal composition techniques. Especially in the 20th century, much music was composed using formal techniques: algorithmic approaches for composing music were developed by composers like A. Schoenberg as well as in the scientific community. A variety of mathematical techniques have been employed for composing music, e.g. probability models, artificial neural networks or constraint-based reasoning. Recently, interactive music systems have become popular: existing songs can be replayed with musical video games, and original music can be composed interactively with easy-to-use applications running, for example, on mobile devices. However, applications which algorithmically generate music in real time based on user interaction are mostly experimental and limited in either interactivity or musicality. There are many enjoyable applications, but there are also many opportunities for improvement and novel approaches. The goal of this work is to provide a general and systematic approach for specifying and implementing interactive music systems. We introduce an algebraic framework for interactively composing music in real time with a reasoning technique called ‘soft constraints’: this technique allows modeling and solving a large range of problems and is particularly well suited for problems with soft and concurrent optimization goals. Our framework is based on well-known theories of music and soft constraints and allows specifying interactive music systems by declaratively defining ‘how the music should sound’ with respect to both user interaction and musical rules. Based on this core framework, we introduce an approach for interactively generating music similar to existing melodic material.
With this approach, musical rules can be defined by playing notes (instead of writing code) in order to make interactively generated melodies comply with a certain musical style. We introduce an implementation of the algebraic framework in .NET and present several concrete applications: ‘The Planets’ is an application controlled by a table-based tangible interface where music can be composed interactively by arranging planet constellations. ‘Fluxus’ is an application geared towards musicians which allows training melodic material that can be used to define musical styles for applications geared towards non-musicians. Based on musical styles trained with the Fluxus sequencer, we introduce a general approach for transforming spatial movements into music and present two concrete applications: the first controlled by a touch display, the second by a motion-tracking system. Finally, we investigate how interactive music systems can be used in the area of pervasive advertising in general, and how our approach can be used to realize ‘interactive advertising jingles’.
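The soft-constraint idea, scoring candidates against several weighted and possibly conflicting goals and taking the best, can be sketched for a single melody step. The constraints, weights, and candidate range below are hypothetical examples, not the framework's actual rules:

```python
# Each soft constraint scores a candidate next note in [0, 1]; the
# weighted sum is maximized across concurrent optimization goals.
SCALE = {0, 2, 4, 5, 7, 9, 11}   # C-major pitch classes

def in_scale(prev, note):
    return 1.0 if note % 12 in SCALE else 0.0

def small_step(prev, note):      # prefer small melodic intervals
    return max(0.0, 1.0 - abs(note - prev) / 12.0)

def upward(prev, note):          # stand-in for a user-interaction goal
    return 1.0 if note > prev else 0.0

def next_note(prev, candidates, constraints):
    """Choose the candidate with the best weighted constraint score."""
    def score(n):
        return sum(w * c(prev, n) for c, w in constraints)
    return max(candidates, key=score)

constraints = [(in_scale, 2.0), (small_step, 1.0), (upward, 0.5)]
print(next_note(60, range(48, 73), constraints))  # MIDI notes; prints 62
```

From C4 (MIDI 60), the weighted goals agree on D4 (62): in scale, a small step, and upward. No single constraint is hard; raising the weight of `upward` would trade stepwise motion for ascent, which is the essence of soft, concurrent optimization.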

    Automatic camera placement for a virtual director in machinima based on fuzzy logic

    Machinima is a computer imaging technology typically used in games and animation. It places all film properties and cast into a virtual environment by means of camera positioning. Since cinematography complements machinima, it is possible to simulate a director's style via camera placement in this environment. In a gaming application, the director's style is one of the most influential cinematic factors: a whole different gaming experience can be obtained by applying different styles to the same scene. This research describes the Automatically Cinematography Engine (ACE), an engine for positioning a virtual camera and profiling a director's style using a fuzzy-logic approach. The first component is a system capable of automatically positioning a virtual camera in a virtual environment according to a director's style using fuzzy logic. The second is a system capable of automatically profiling a director's style using fuzzy logic. The research employed 19 output variables and 15 other calculated variables, extracted from animations of five scenes, to profile two different directors' styles. Area plots and histograms were generated and, by analyzing the histograms, the different directors' styles could be classified.
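Fuzzy-logic camera placement of this kind can be illustrated with a Mamdani-style sketch: fuzzify a scene property, apply weighted rules, and defuzzify by centroid. The membership functions, rules, and the "intensity" input below are invented for illustration, not ACE's actual variables:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def camera_distance(intensity):
    """Map scene intensity (0-10) to a camera distance via two toy
    rules and centroid defuzzification over sampled distances."""
    mu_calm = tri(intensity, -1, 0, 6)    # rule 1: calm -> far shot
    mu_tense = tri(intensity, 4, 10, 11)  # rule 2: tense -> close-up
    num = den = 0.0
    for d in range(1, 21):                # candidate distances in metres
        mu_close = tri(d, 0, 2, 8)
        mu_far = tri(d, 6, 15, 20)
        mu = max(min(mu_tense, mu_close), min(mu_calm, mu_far))
        num += d * mu
        den += mu
    return num / den if den else 10.0

print(camera_distance(2.0))   # calm scene -> farther camera
print(camera_distance(9.0))   # tense scene -> closer camera
```

A director profile would then amount to a particular choice of rule set and membership shapes, which is what makes two styles distinguishable from the same scene.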