1,217 research outputs found
The heartbreak of depression: 'Psycho-cardiac' coupling in myocardial infarction
Ample evidence identifies strong links between major depressive disorder (MDD) and both the risk of ischemic or coronary heart disease (CHD) and the resultant morbidity and mortality. The molecular mechanistic bases of these linkages are poorly defined. Systemic factors linked to MDD, including vascular dysfunction, atherosclerosis, obesity and diabetes, together with associated behavioral changes, all elevate CHD risk. Nonetheless, experimental evidence indicates the myocardium is also directly modified in depression, independently of these factors, impairing infarct tolerance and cardioprotection. It may be that MDD effectively breaks the heart's intrinsic defense mechanisms. Four extrinsic processes are implicated in this psycho-cardiac coupling, presenting potential targets for therapeutic intervention if causally involved: sympathetic over-activity vs. vagal under-activity, together with hypothalamic-pituitary-adrenal (HPA) axis and immuno-inflammatory dysfunctions. However, direct evidence of their involvement remains limited, and whether targeting these upstream mediators is effective (or practical) in limiting the cardiac consequences of MDD is unknown. Detailing the myocardial phenotype in MDD can also inform approaches to cardioprotection, yet cardiac molecular changes are similarly ill defined. Studies support myocardial sensitization to ischemic insult in models of MDD, including worsened oxidative and nitrosative damage, apoptosis (with altered Bcl-2 family expression) and infarction. Moreover, depression may de-sensitize hearts to protective conditioning stimuli. The mechanistic underpinnings of these changes await delineation. Such information not only advances our fundamental understanding of psychological determinants of health, but also better informs management of the cardiac consequences of MDD and the implementation of cardioprotection in this cohort. Griffith Health, School of Medical Science
Arbitrary view action recognition via transfer dictionary learning on synthetic training data
Human action recognition is an important problem in robotic vision. Traditional recognition algorithms usually require knowledge of the view angle, which is not always available in robotic applications such as active vision. In this paper, we propose a new framework to recognize actions from arbitrary views. A main feature of our algorithm is that view-invariance is learned from synthetic 2D and 3D training data using transfer dictionary learning. This guarantees the availability of training data and removes the hassle of obtaining real-world video at specific viewing angles. The result of the process is a dictionary that can project real-world 2D video into a view-invariant sparse representation, which facilitates the training of a view-invariant classifier. Experimental results on the IXMAS and N-UCLA datasets show significant improvements over existing algorithms.
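The core dictionary-learning step the abstract describes can be sketched as alternating sparse coding and dictionary updates. The snippet below is a minimal numpy sketch on random stand-in features; the paper's actual pipeline extracts descriptors from rendered synthetic 2D/3D action sequences and would use a stronger solver (e.g. K-SVD), so every array and parameter here is an assumption for illustration:

```python
import numpy as np

def ista_code(D, x, lam=0.1, steps=100):
    """Sparse-code x over dictionary D via ISTA (soft-thresholded gradient steps)."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-8   # Lipschitz constant of the quadratic term
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = D.T @ (D @ a - x)              # gradient of 0.5 * ||D a - x||^2
        a = np.sign(a - g / L) * np.maximum(np.abs(a - g / L) - lam / L, 0.0)
    return a

def learn_dictionary(X, n_atoms=8, iters=20, lam=0.1):
    """Alternate sparse coding and least-squares dictionary updates on columns of X."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        A = np.stack([ista_code(D, x, lam) for x in X.T], axis=1)   # codes
        D = X @ A.T @ np.linalg.pinv(A @ A.T + 1e-6 * np.eye(n_atoms))
        D /= np.linalg.norm(D, axis=0) + 1e-12                       # renormalize atoms
    return D

# Hypothetical stand-in for synthetic multi-view training features.
X_synth = np.random.default_rng(1).standard_normal((16, 200))
D = learn_dictionary(X_synth)
code = ista_code(D, X_synth[:, 0])   # view-invariant sparse code for one sample
print(D.shape, code.shape)
```

Once learned, the dictionary is fixed, and real 2D video features are encoded the same way before being fed to an ordinary classifier.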
Intelligent Home Heating Controller Using Fuzzy Rule Interpolation
The reduction of domestic energy waste helps in achieving the legally binding target in the UK that CO2 emissions be reduced by at least 34% below base-year (1990) levels by 2020. Space heating accounts for about 60% of household energy consumption, and the Household Electricity Survey from GOV.UK reports that 23% of residents leave the heating on while going out. To minimise the waste of heating unoccupied homes, a number of sensor-based and programmable controllers for central heating systems have been developed, which can successfully switch off the heating when a property is unoccupied. However, these systems cannot automatically preheat the home before occupants return without manual input, or they leave the heating on unnecessarily long, which has limited their wide adoption. To address this limitation, this paper proposes a smart home heating controller that enables a heating system to efficiently preheat the home by predicting the users' arrival time. In particular, residents' arrival time is calculated by employing fuzzy rule interpolation, supported by users' historic and current location data from portable devices (commonly smartphones). The proposed system has been applied to a real-world case with promising results.
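Fuzzy rule interpolation derives a conclusion even when an observation falls between sparse rules rather than matching one. Below is a minimal Koczy-Hirota-style sketch with two hypothetical triangular rules relating distance from home (km) to minutes until arrival; the paper's actual rule base, features, and interpolation variant are not given here, so these numbers are assumptions for illustration:

```python
import numpy as np

def kh_interpolate(a_star, rule1, rule2):
    """Minimal Koczy-Hirota-style interpolation between two fuzzy rules.

    Each rule maps a triangular antecedent (lower, peak, upper) to a triangular
    consequent. The conclusion preserves, per breakpoint, the relative position
    of the observation between the two rule antecedents.
    """
    a1, b1 = rule1
    a2, b2 = rule2
    a_star, a1, b1, a2, b2 = map(np.asarray, (a_star, a1, b1, a2, b2))
    lam = (a_star - a1) / (a2 - a1)   # per-breakpoint interpolation ratio
    return b1 + lam * (b2 - b1)

# Hypothetical rules: distance-from-home (km) -> minutes-until-arrival.
rule_near = ((0.0, 1.0, 2.0), (0.0, 5.0, 10.0))     # "near" -> "arrives soon"
rule_far  = ((8.0, 10.0, 12.0), (40.0, 55.0, 70.0)) # "far"  -> "arrives later"
obs = (4.0, 5.5, 7.0)   # observed location, between the two antecedents
concl = kh_interpolate(obs, rule_near, rule_far)
print(concl)  # → [20. 30. 40.], a triangular estimate of minutes until home
```

The interpolated consequent can then drive the preheat decision, e.g. start heating when the estimated arrival falls inside the preheat lead time.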
Automatic Dance Generation System Considering Sign Language Information
In recent years, thanks to the development of 3DCG animation editing tools (e.g. MikuMikuDance), many 3D character dance animations have been created by amateur users. However, it is very difficult to create choreography from scratch without technical knowledge. Shiratori et al. [2006] produced an automatic dance generation system considering the rhythm and intensity of dance motions. However, each segment is selected randomly from a database, so the generated dance motion has no linguistic or emotional meaning. Takano et al. [2010] produced a human motion generation system considering motion labels. However, they use simple motion labels like "running" or "jump", so they cannot generate motions that express emotions. In reality, professional dancers create choreography based on musical features or lyrics, and express emotion or how they feel about the music. In our work, we aim to generate more emotional dance motion easily. Therefore, we use the linguistic information in lyrics to generate dance motion.
In this paper, we propose a system to generate sign dance motion from continuous sign language motion based on the lyrics of music. This system could help deaf users experience music as a visualized music application.
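The lyric-driven selection step can be illustrated with a toy lookup: map each lyric word to a sequence of sign-motion clips and concatenate them. This is only a sketch with hypothetical clip names and a made-up fallback; the proposed system instead segments continuous sign-language motion capture rather than using a static word-to-clip table:

```python
from typing import Dict, List

def lyrics_to_dance(lyrics: str, clip_db: Dict[str, List[str]]) -> List[str]:
    """Concatenate sign-motion clips for each lyric word found in the database,
    falling back to a neutral idle clip for unknown words."""
    motion: List[str] = []
    for word in lyrics.lower().split():
        motion.extend(clip_db.get(word, ["idle"]))
    return motion

# Hypothetical clip database mapping words to per-frame motion clip IDs.
clip_db = {"love": ["love_sign_f0", "love_sign_f1"], "you": ["you_sign_f0"]}
print(lyrics_to_dance("Love you forever", clip_db))
# → ['love_sign_f0', 'love_sign_f1', 'you_sign_f0', 'idle']
```

A real system would also time-warp each clip to the beat and blend clip boundaries for continuity.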
From A to Z: Wearable technology explained
Wearable technology (WT) has become a viable means of providing low-cost, clinically sensitive data for more informed patient assessment. The benefit of WT seems obvious: devices are small, worn discreetly in any environment, provide personalised data, and can integrate into communication networks, facilitating remote monitoring. Yet WT remains poorly understood, and technological innovation often exceeds pragmatic clinical demand and use. Here, we provide an overview of the common challenges facing WT if it is to transition from novel gadget to an efficient, valid and reliable clinical tool for modern medicine. For simplicity, an A–Z guide is presented, focusing on key terms, aiming to provide a grounded and broad understanding of current WT developments in healthcare.
Pro-active Meeting Assistants: Attention Please!
This paper gives an overview of pro-active meeting assistants, what they are and when they can be useful. We explain how to develop such assistants with respect to requirement definitions and elaborate on a set of Wizard of Oz experiments, aiming to find out in which form a meeting assistant should operate to be accepted by participants and whether the meeting effectiveness and efficiency can be improved by an assistant at all.
A multilevel model for movement rehabilitation in Traumatic Brain Injury (TBI) using virtual environments
This paper presents a conceptual model for movement rehabilitation in traumatic brain injury (TBI) using virtual environments. This hybrid model integrates principles from ecological systems theory with recent advances in cognitive neuroscience, and supports a multilevel approach to both assessment and treatment. Performance outcomes at any stage of recovery are determined by the interplay of task, individual, and environmental/contextual factors. We argue that any system of rehabilitation should provide enough flexibility for task and context factors to be varied systematically, based on the current neuromotor and biomechanical capabilities of the performer or patient. Thus, in order to understand how treatment modalities are to be designed and implemented, there is a need to understand the function of the brain systems that support learning at a given stage of recovery, and the inherent plasticity of the system. We know that virtual reality (VR) systems allow training environments to be presented in a highly automated, reliable, and scalable way. Presentation of these virtual environments (VEs) should permit movement analysis at three fundamental levels of behaviour: (i) neurocognitive bases of performance (we focus in particular on the development and use of internal models for action, which support adaptive, on-line control); (ii) movement forms and patterns that describe the patient's movement signature at a given stage of recovery (i.e., kinetic and kinematic markers of movement proficiency); and (iii) functional outcomes of the movement. Each level of analysis can also map quite seamlessly to different modes of treatment. At the neurocognitive level, for example, semi-immersive VEs can help retrain internal modeling processes by reinforcing the patient's sense of multimodal space (via augmented feedback), their position within it, and the ability to predict and control actions flexibly (via movement simulation and imagery training).
More specifically, we derive four key therapeutic environment concepts (or Elements) presented using VR technologies: Embodiment (simulation and imagery), Spatial Sense (augmenting position sense), Procedural (automaticity and dual-task control), and Participatory (self-initiated action). The use of tangible media/objects, force transduction, and vision-based tracking systems for the augmentation of gestures and physical presence is discussed in this context.
