14 research outputs found

    A network of noise: designing with a decade of data to sonify JANET

    The existing sonification of networks focuses mainly on security. Our novel approach is framed by the ways in which network traffic changes over the national JANET network. Using a variety of sonification techniques, we examine the user context, how this sonification leads to system design considerations, and how these considerations feed back into the user experience.
    • Human Centred Computing → Interaction Techniques → Auditory Feedback; Network

    Sonification Aesthetics and Listening for Network Situational Awareness

    This paper looks at the problem of using sonification to enable network administrators to maintain situational awareness of their network environment. Network environments generate large volumes of data, and the need for continuous monitoring means that sonification systems must be designed to maximise acceptance while minimising annoyance and listener fatigue. It will be argued that solutions based on the concept of the soundscape offer an ecological advantage over other sonification designs.
    Comment: Workshop paper presented at SoniHED (Conference on Sonification of Health and Environmental Data), York, UK, 12 September 2014

    Augmenting Cyber Defender Performance and Workload through Sonified Displays

    Military cyber operations occur in a cognitively intense and stressful environment, and consequently, operator burnout is relatively high when compared to other operational environments. There is a distinct need for new and innovative ways to augment operator capabilities, increase performance, manage workload, and decrease stress in cyber operations. In this study, we assessed how a sonified display could address these requirements. Sonification has been demonstrated to be a useful method for presenting temporal data in multiple domains. Participants in the experiment were tasked with detecting evidence of a cyber attack in a simulated task environment modeled after Wireshark, a popular packet analyzer program. As they completed the task, participants either did or did not have access to a redundant sonified display that provided an auditory representation of the textual data presented in Wireshark. We expected that the sonified display would improve operator performance and reduce workload and stress. However, our results did not support those expectations: access to the sonification did not affect performance, workload, or stress. Our findings highlight the need for continued research into effective methods for augmenting cyber operator capabilities.
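
    Though the display itself is not specified here, the idea of a redundant auditory channel for textual packet data can be pictured as a simple parameter mapping. The sketch below is a hypothetical illustration only, not the study's design: the field choices and ranges are assumptions, with protocol selecting a base pitch, packet length setting loudness, and a suspicion flag controlling stereo pan.

        /** Hypothetical mapping from packet attributes to sound parameters.
            The fields, pitches and ranges are illustrative assumptions. */
        record PacketSound(double pitchHz, double loudness, double pan) {

            static PacketSound fromPacket(String protocol, int lengthBytes, boolean flaggedSuspicious) {
                // Protocol picks a base pitch so different traffic types are distinguishable by ear.
                double basePitch = switch (protocol) {
                    case "TCP"  -> 261.6;   // C4
                    case "UDP"  -> 329.6;   // E4
                    case "ICMP" -> 392.0;   // G4
                    default     -> 523.3;   // rarer protocols jump up an octave
                };
                // Larger packets sound louder (Ethernet MTU used as a rough ceiling).
                double loudness = Math.min(1.0, lengthBytes / 1500.0);
                // Packets flagged as suspicious are panned hard right so they stand out spatially.
                double pan = flaggedSuspicious ? 1.0 : 0.0;
                return new PacketSound(basePitch, loudness, pan);
            }
        }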

    MAlSim Deployment

    This report describes the deployment issues related to MAlSim (Mobile Agent Malware Simulator), a mobile agent framework that aims at simulating malware: malicious software that runs on a computer and makes the system behave in a way desired by an attacker. MAlSim was introduced in our previous report, where we described its composition and functions and provided details of the simulation environment in which MAlSim is deployed, along with the auxiliary components that support the experiments performed with MAlSim. In this report we provide more technical details on the installation and use of the framework.
    JRC.G.6 - Sensors, radar technologies and cybersecurity

    Malware Templates for MAlSim

    This report describes the methodology of malware templates for MAlSim (Mobile Agent Malware Simulator), a mobile agent framework that aims at simulating diverse malicious software in the computer network of an arbitrary information system. A malware template is a pattern (a 'guide') for implementing a MAlSim agent that simulates a specific piece of malware. It indicates the selection and configuration of Java classes (a MAlSim agent, one or more behavioural patterns, and one or more migration/replication patterns) chosen from the MAlSim Toolkit.
    JRC.G.6 - Sensors, radar technologies and cybersecurity
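
    Since the template is described as a selection and configuration of Java classes (an agent class plus behavioural and migration/replication patterns), that structure can be sketched in a few lines. The names below are assumptions for illustration, not the actual MAlSim Toolkit API.

        import java.util.ArrayList;
        import java.util.List;

        /** Hypothetical stand-ins for toolkit pattern types (not MAlSim's real interfaces). */
        interface BehaviouralPattern { void act(); }
        interface MigrationPattern { void spread(); }

        /** Illustrative template: it only bundles the class choices a template is said to select. */
        record MalwareTemplate(
                String simulatedMalware,
                Class<?> agentClass,
                List<Class<? extends BehaviouralPattern>> behaviourClasses,
                List<Class<? extends MigrationPattern>> migrationClasses) {

            /** Instantiate the selected behavioural patterns (assumes no-arg constructors). */
            List<BehaviouralPattern> instantiateBehaviours() {
                List<BehaviouralPattern> result = new ArrayList<>();
                for (Class<? extends BehaviouralPattern> cls : behaviourClasses) {
                    try {
                        result.add(cls.getDeclaredConstructor().newInstance());
                    } catch (ReflectiveOperationException e) {
                        throw new IllegalStateException("Cannot instantiate " + cls, e);
                    }
                }
                return result;
            }
        }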

    NeMoS: Network monitoring with sound

    Proceedings of the 9th International Conference on Auditory Display (ICAD), Boston, MA, July 7-9, 2003.
    In this paper we present NeMoS, a program written in Java that allows monitoring of a distributed system with sound. The architecture is client/server: the server collects data from the monitored network components by polling them via SNMP [16], and the client plays sound accordingly. The sonification technique associates user-defined events with MIDI tracks. Our system is versatile (several channels of events can be created and used), easily configurable (users can personalise events and tracks), standards-based (it fits within the framework described in RFC 2570 [5]), distributed (multiple clients can be located anywhere in the system) and portable (it is written in Java).
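
    The architecture summarised above (server-side SNMP polling, user-defined events bound to MIDI output on the client) can be sketched on the client side. The class below is illustrative only: the event names and channel bindings are assumptions, MIDI channels stand in for NeMoS's tracks, and only the standard javax.sound.midi API is used.

        import java.util.Map;
        import javax.sound.midi.MidiChannel;
        import javax.sound.midi.MidiSystem;
        import javax.sound.midi.MidiUnavailableException;
        import javax.sound.midi.Synthesizer;

        /** Hypothetical client-side sketch: each user-defined event name is bound to a MIDI channel. */
        public class EventPlayer {
            // Illustrative bindings; in NeMoS the user configures events and tracks.
            private static final Map<String, Integer> EVENT_CHANNEL = Map.of(
                    "link-down", 0,
                    "high-cpu", 1,
                    "disk-full", 2);

            private final MidiChannel[] channels;

            public EventPlayer() throws MidiUnavailableException {
                Synthesizer synth = MidiSystem.getSynthesizer();
                synth.open();
                channels = synth.getChannels();
            }

            /** Play a short note on the channel associated with the reported event. */
            public void onEvent(String eventName) throws InterruptedException {
                Integer channel = EVENT_CHANNEL.get(eventName);
                if (channel == null) return;        // unmapped events stay silent
                channels[channel].noteOn(60, 96);   // middle C at moderate velocity
                Thread.sleep(200);
                channels[channel].noteOff(60);
            }
        }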

    Server Sounds and Network Noises

    For server and network administrators, it is a challenge to keep an overview of their systems in order to detect potential intrusions and security risks both in real time and in retrospect. Most security tools leverage our inherent ability for pattern detection by visualizing different types of security data. Several studies suggest that complementing visualization with sonification (the presentation of data using sound) can alleviate some of the challenges of visual monitoring, such as the need for constant visual focus. This paper therefore provides an overview of the current state of research on auditory and multimodal tools in computer security. Most existing research in this area is geared towards supporting users in real-time network and server monitoring, while only a few approaches are designed for retrospective data analysis. Several sonification-based tools exist in a mature state, but their effectiveness has hardly been tested in formal user and usability studies. Such studies are, however, needed to provide a solid basis for deciding which type of sonification is most suitable for which kind of scenario and how best to combine the two modalities, visualization and sonification, to support users in their daily routines.

    Towards harmonic extensions of pulsed melodic affective processing - further musical structures for increasing transparency in emotional computation

    Pulsed Melodic Affective Processing (PMAP) is a method for the processing of artificial emotions in affective computing. PMAP is a data stream which can be listened to, as well as computed with. The affective state is represented by numbers which are analogues of musical features, rather than by a binary stream. Previous affective computation has been done with emotion category indices, or real numbers representing positivity of emotion, etc. PMAP data can be generated directly by sound and rhythms (e.g. heart rates or key-press speeds) and turned directly into music with minimal transformation. This is because PMAP data is music, and computations done with PMAP data are computations done with music. Why is this important? Because PMAP is constructed so that the emotion which its data represents at the computational level will be similar to the emotion which a person "listening" to the PMAP melody hears. So PMAP can be used to calculate "feelings", and the resulting data will "sound like" the feelings calculated. Harmonic PMAP (PMAPh) is an extension of PMAP that allows harmonies to be used in calculations. © 2014 Old City Publishing, Inc.

    Pulsed Melodic Affective Processing: Musical structures for increasing transparency in emotional computation

    Pulsed Melodic Affective Processing (PMAP) is a method for the processing of artificial emotions in affective computing. PMAP is a data stream designed to be listened to, as well as computed with. The affective state is represented by numbers that are analogues of musical features, rather than by a binary stream. Previous affective computation has been done with emotion category indices, or real numbers representing various emotional dimensions. PMAP data can be generated directly by sound (e.g. heart rates or key-press speeds) and turned directly into music with minimal transformation. This is because PMAP data is music and computations done with PMAP data are computations done with music. This is important because PMAP is constructed so that the emotion that its data represents at the computational level will be similar to the emotion that a person “listening” to the PMAP melody hears. Thus, PMAP can be used to calculate “feelings”, and the resulting data will “sound like” the feelings calculated. PMAP can be compared to neural spike streams, but ones in which pulse heights and rates encode affective information. This paper illustrates PMAP in a range of simulations. In a multi-agent simulation, initial results suggest that an affective multi-robot security system could use PMAP to provide a basic control mechanism for “search-and-destroy”. Results of fitting a musical neural network by gradient descent to help solve a text-based emotion detection problem are also presented. The paper concludes by discussing how PMAP may be applicable to the stock markets, using a simplified order book simulation. © 2014, The Society for Modeling and Simulation International. All rights reserved.
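
    The comparison with neural spike streams (pulse heights and rates encoding affective information) can be illustrated with a toy structure. The sketch below is an assumption-laden stand-in rather than the PMAP encoding defined in the paper: pulse rate is treated as an arousal analogue, pulse height as a valence analogue, and two streams are combined with an AND-like operation that keeps the lower of each.

        /** Toy PMAP-like stream; the encoding and thresholds are assumptions, not PMAP's. */
        record AffectivePulseStream(double pulseRateHz, double pulseHeight) {

            /** AND-like combination: the result is no more aroused or positive than either input. */
            AffectivePulseStream and(AffectivePulseStream other) {
                return new AffectivePulseStream(
                        Math.min(pulseRateHz, other.pulseRateHz),
                        Math.min(pulseHeight, other.pulseHeight));
            }

            /** Crude description of the affect such a stream would "sound like". */
            String describe() {
                String arousal = pulseRateHz > 2.0 ? "energetic" : "calm";
                String valence = pulseHeight > 0.5 ? "positive" : "negative";
                return arousal + "/" + valence;
            }
        }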