
    Impact of constrained rewiring on network structure and node dynamics

    In this paper, we study an adaptive spatial network. We consider a susceptible-infected-susceptible (SIS) epidemic on the network, with a link or contact rewiring process constrained by spatial proximity. In particular, we assume that susceptible nodes break links with infected nodes independently of distance and reconnect at random to susceptible nodes available within a given radius. By systematically manipulating this radius we investigate the impact of rewiring on the structure of the network and the characteristics of the epidemic. We adopt a step-by-step approach whereby we first study the impact of rewiring on the network structure in the absence of an epidemic, then with nodes assigned a disease status but without disease dynamics, and finally with network and epidemic dynamics running simultaneously. In the case of no labeling and no epidemic dynamics, we provide both analytic and semianalytic formulas for the value of clustering achieved in the network. Our results also show that the rewiring radius and the network's initial structure have a pronounced effect on the endemic equilibrium, with increasingly large rewiring radii yielding smaller disease prevalence.
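    The rewiring rule lends itself to a short simulation sketch. The Python snippet below is a minimal illustration only: the parameter values, the random geometric initial network, and the synchronous discrete-time update scheme are assumptions, not the authors' implementation. At each step the susceptible end of an S-I link may cut the link (regardless of its length) and reconnect to a random susceptible node within the rewiring radius, after which standard SIS infection and recovery are applied; sweeping RADIUS reproduces the kind of experiment described above.

```python
# Minimal discrete-time sketch of an SIS epidemic with distance-constrained
# rewiring. All parameter values and the initial network are assumed.
import random
import numpy as np
import networkx as nx

N = 500          # number of nodes (assumed)
BETA = 0.05      # per-link infection probability per step (assumed)
GAMMA = 0.02     # recovery probability per step (assumed)
OMEGA = 0.1      # rewiring probability per S-I link per step (assumed)
RADIUS = 0.2     # the manipulated rewiring radius
STEPS = 200

rng = np.random.default_rng(0)
pos = {i: rng.random(2) for i in range(N)}           # nodes scattered in the unit square
G = nx.random_geometric_graph(N, 0.08, pos=pos)      # initial spatial network (assumed)
status = {i: "S" for i in G}
for seed in random.sample(list(G), 25):              # seed a few infected nodes
    status[seed] = "I"

def susceptible_within_radius(u):
    """Susceptible non-neighbours of u no farther than RADIUS away."""
    return [v for v in G
            if v != u and status[v] == "S" and not G.has_edge(u, v)
            and np.linalg.norm(pos[u] - pos[v]) <= RADIUS]

for _ in range(STEPS):
    # Rewiring: the susceptible end of an S-I link cuts it (independently of
    # distance) and reconnects to a random susceptible node within RADIUS.
    for u, v in list(G.edges()):
        if {status[u], status[v]} != {"S", "I"}:
            continue
        s = u if status[u] == "S" else v
        i = v if s == u else u
        if random.random() < OMEGA:
            targets = susceptible_within_radius(s)
            if targets:
                G.remove_edge(s, i)
                G.add_edge(s, random.choice(targets))

    # Infection and recovery (synchronous update).
    new_status = dict(status)
    for n in G:
        if status[n] == "S":
            k_inf = sum(status[m] == "I" for m in G[n])
            if random.random() < 1 - (1 - BETA) ** k_inf:
                new_status[n] = "I"
        elif random.random() < GAMMA:
            new_status[n] = "S"
    status = new_status

prevalence = sum(s == "I" for s in status.values()) / N
print(f"prevalence after {STEPS} steps: {prevalence:.3f}")
```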

    Head movement differs for positive and negative emotions in video recordings of sitting individuals

    Individuals tend to approach positive stimuli and avoid negative stimuli. Furthermore, emotions influence whether individuals freeze or move more. These two kinds of motivated behavior are referred to as approach/avoidance behavior and behavioral freezing/activation. Previous studies examined (e.g., using force platforms) whether individuals' behavior depends on stimulus valence; however, the results were mixed. Thus, we aimed to test whether the effects of emotions on the spontaneous whole-body behavior of standing individuals also occur in a seated position. We measured head sway in video recordings using a computer vision method that offers ease of use, replicability, and unobtrusiveness for seated research participants. We analyzed behavior recorded in the laboratory during emotion manipulations across five studies totaling 932 participants. We observed that individuals leaned further forward and moved more when watching positive stimuli than when watching negative stimuli. However, individuals did not behave differently when watching positive or negative stimuli than in the neutral condition. Our results indicate that head movements extracted from video recordings of seated individuals can be useful in detecting robust differences in emotional behavior (positive vs. negative emotions).
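    As an illustration of how head sway can be extracted from video, the sketch below tracks the face centre frame by frame with OpenCV's stock Haar-cascade face detector and summarises total head movement. The input file name, the choice of detector, and the thresholds are assumptions for illustration, not the method actually used in the studies.

```python
# Minimal sketch: track the face centre across frames and summarise head movement.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture("participant.mp4")            # hypothetical input recording

centres, sizes = [], []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        continue                                     # skip frames with no detection
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest detection
    centres.append((x + w / 2.0, y + h / 2.0))           # head-centre estimate (pixels)
    sizes.append(w * h)                                  # face area (pixels^2)
cap.release()

centres = np.asarray(centres, dtype=float)
if len(centres) > 1:
    steps = np.diff(centres, axis=0)
    total_movement = np.linalg.norm(steps, axis=1).sum()   # total head path length
    lean_proxy = (sizes[-1] - sizes[0]) / sizes[0]         # crude lean-toward-camera proxy
    print(f"movement: {total_movement:.1f} px, relative face-size change: {lean_proxy:.2f}")
```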

    Understanding Experiences of Blind Individuals in Outdoor Nature

    Research shows that exposure to nature has benefits for people's mental and physical health and that ubiquitous and mobile technologies encourage engagement with nature. However, existing research in this area is primarily focused on people without visual impairments and is not inclusive of blind and partially sighted individuals. To address this gap, we interviewed seven blind people (with no remaining vision) about their experiences when exploring and experiencing the outdoor natural environment, to gain an understanding of their needs and barriers and how these needs can be addressed by technology. In this paper, we present the three themes identified from the interview data: independence, knowledge of the environment, and sensory experiences.

    Embodiment in a Child-Like Talking Virtual Body Influences Object Size Perception, Self-Identification, and Subsequent Real Speaking

    People’s mental representations of their own body are malleable and continuously updated through sensory cues. Altering one’s body representation can lead to changes in object perception and implicit attitudes. Virtual reality has been used to embody adults in the body of a 4-year-old child or a scaled-down adult body. Child embodiment was found to cause an overestimation of object sizes, approximately double that during adult embodiment, and identification of the self with child-like attributes. Here we tested the contribution of auditory cues related to one’s own voice to these visually driven effects. In a 2 × 2 factorial design, visual and auditory feedback on one’s own body were varied across conditions, which included embodiment in a child or scaled-down adult body, and real (undistorted) or child-like voice feedback. The results replicated, in an older population, previous findings regarding size estimations and implicit attitudes. Further, although auditory cues were not found to enhance these effects, we show that the strength of the embodiment illusion depends on whether the child-like voice feedback is congruent or incongruent with the age of the virtual body. Results also showed the positive emotional impact of the illusion of owning a child’s body, opening up possibilities for health applications.

    Author Correction: Embodiment in a Child-Like Talking Virtual Body Influences Object Size Perception, Self-Identification, and Subsequent Real Speaking

    Correction to: Scientific Reports https://doi.org/10.1038/s41598-017-09497-3, published online 29 August 2017.

    Multiple Instance Learning for Emotion Recognition using Physiological Signals

    The problem of continuous emotion recognition has been the subject of several studies. The proposed affective computing approaches employ sequential machine learning algorithms to improve the classification stage, accounting for the temporal ambiguity of emotional responses. Modeling and predicting the affective state over time is not a trivial problem, because continuous data labeling is costly and not always feasible. This is a crucial issue in real-life applications, where data labeling is sparse and possibly captures only the most important events rather than the typical continuous, subtle affective changes that occur. In this work, we introduce a framework from the machine learning literature called Multiple Instance Learning, which is able to model time intervals by capturing the presence or absence of relevant states, without the need to label the affective responses continuously (as required by standard sequential learning approaches). This choice offers a viable and natural solution for learning in a weakly supervised setting, taking into account the ambiguity of affective responses. We demonstrate the reliability of the proposed approach in a gold-standard scenario and towards real-world usage by employing an existing dataset (DEAP) and a purposely built one (Consumer). We also outline the advantages of this method with respect to standard supervised machine learning algorithms.
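    The bag-level idea behind Multiple Instance Learning can be sketched as follows. In this minimal Python example, each trial is a "bag" of window-level physiological feature vectors and only the bag carries an affect label; a simple max-pooling bag embedding followed by a standard classifier stands in for the specific MIL algorithm used in the paper, and the data are synthetic.

```python
# Minimal sketch of multiple-instance (bag-level) learning on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_bag(label, n_windows=20, n_features=8):
    """Window-level features; in positive bags only a few windows carry the signal."""
    X = rng.normal(size=(n_windows, n_features))
    if label == 1:
        relevant = rng.choice(n_windows, size=3, replace=False)
        X[relevant] += 2.0
    return X

labels = rng.integers(0, 2, size=200)                # one label per bag, not per window
bags = [make_bag(y) for y in labels]

# Embed each bag by instance-wise max pooling: "is the relevant state present at all?"
Z = np.stack([bag.max(axis=0) for bag in bags])

Z_tr, Z_te, y_tr, y_te = train_test_split(Z, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("bag-level accuracy:", accuracy_score(y_te, clf.predict(Z_te)))
```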

    Opportunities for Supporting Self-efficacy through Orientation & Mobility Training Technologies for Blind and Partially Sighted People

    Orientation and mobility (O&M) training provides essential skills and techniques for safe and independent mobility for blind and partially sighted (BPS) people. The demand for O&M training is increasing as the number of people living with vision impairment grows. Despite the growing portfolio of HCI research on assistive technologies (AT), few studies have examined the experiences of BPS people during O&M training, including the use of technology to aid O&M training. To address this gap, we conducted semi-structured interviews with 20 BPS people and 8 Mobility and Orientation Trainers (MOTs). The interviews were thematically analysed and organised into four overarching themes discussing factors that influence the self-efficacy beliefs of BPS people: Tools and Strategies for O&M Training, Technology Use in O&M Training, Changing Personal and Social Circumstances, and Social Influences. We further highlight opportunities for combinations of multimodal technologies to increase access to and effectiveness of O&M training.

    As light as your footsteps: altering walking sounds to change perceived body weight, emotional state and gait

    An ever more sedentary lifestyle is a serious problem in our society. Enhancing people’s exercise adherence through technology remains an important research challenge. We propose a novel approach for a system supporting walking that draws on basic findings from neuroscience research. Our shoe-based prototype senses a person’s footsteps and alters, in real time, the frequency spectrum of the sound they produce while walking. The resulting sounds are consistent with those produced by either a lighter or a heavier body. Our user study showed that modified walking sounds change one’s own perceived body weight and lead to a related gait pattern. In particular, augmenting the high frequencies of the sound leads to the perception of having a thinner body and enhances the motivation for physical activity, inducing a more dynamic swing and a shorter heel strike. Here we discuss the opportunities and the questions our findings open.
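    The kind of spectral manipulation described can be sketched as a simple high-frequency boost applied offline to a recorded footstep, as below; the cutoff frequency, gain, and file names are assumptions for illustration, not the parameters of the authors' real-time shoe-based prototype.

```python
# Minimal sketch: boost the high-frequency content of a footstep recording
# so it sounds "lighter". Cutoff, gain, and file names are assumed.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

CUTOFF_HZ = 1000.0    # assumed split between "heavy" lows and "light" highs
HIGH_GAIN_DB = 6.0    # assumed boost applied to the high band

rate, audio = wavfile.read("footstep.wav")           # hypothetical input recording
audio = audio.astype(np.float64)
if audio.ndim > 1:
    audio = audio.mean(axis=1)                       # mix down to mono

sos = butter(4, CUTOFF_HZ, btype="highpass", fs=rate, output="sos")
highs = sosfilt(sos, audio)
gain = 10 ** (HIGH_GAIN_DB / 20)
boosted = audio + (gain - 1.0) * highs               # simple high-shelf-style boost

peak = np.max(np.abs(boosted))
if peak > 0:
    boosted /= peak                                  # normalise to avoid clipping
wavfile.write("footstep_light.wav", rate, (boosted * 32767).astype(np.int16))
```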