    Real-time functional magnetic resonance imaging in obsessive-compulsive disorder

    The current literature provides substantial evidence of brain alterations associated with obsessive-compulsive disorder (OCD) symptoms (e.g., checking, cleaning/decontamination, and counting compulsions; harm, sexual, and symmetry/exactness obsessions), emotional problems (e.g., defensive/appetitive emotional imbalance, disgust, guilt, shame, and fear learning/extinction), and cognitive impairments associated with this disorder (e.g., inhibitory control, working memory, cognitive flexibility). Building on this evidence, new clinical trials can now target specific brain regions and networks. Real-time functional magnetic resonance imaging (rtfMRI) was introduced as a new therapeutic tool for the self-regulation of brain and mind. In this review, we describe initial trials testing the use of rtfMRI to target brain regions associated with specific OCD symptoms (e.g., contamination) and with other mind-brain processes found impaired in OCD patients (cognitive: working memory, inhibitory control; emotional: defensive and appetitive systems, fear reduction through counter-conditioning). While this is a novel topic of research, initial evidence shows the promise of rtfMRI for training the self-regulation of brain regions and mental processes associated with OCD. Additionally, studies with healthy populations have shown that individuals can regulate brain regions associated with cognitive and emotional processes found impaired in OCD. After this initial "proof-of-concept" stage, there is a need to follow up with controlled clinical trials testing innovative rtfMRI treatments that target the brain regions and networks associated with different OCD symptoms and cognitive-emotional impairments.
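
    As a rough illustration of the core computation behind rtfMRI neurofeedback, the Python sketch below derives a feedback value as the percent signal change of a target region of interest (ROI) relative to a baseline block. The ROI mask, baseline length, and all data are illustrative assumptions, not details taken from the trials reviewed above.

    import numpy as np

    def roi_mean(volume, mask):
        """Average BOLD intensity over the ROI voxels of one volume."""
        return float(volume[mask].mean())

    def feedback_value(current, baseline):
        """Percent signal change of the current ROI mean versus the baseline block."""
        b = baseline.mean()
        return 100.0 * (current - b) / b

    # Toy usage: a 10-volume rest baseline followed by one regulation volume.
    rng = np.random.default_rng(0)
    mask = np.zeros((4, 4, 4), dtype=bool)
    mask[1:3, 1:3, 1:3] = True
    baseline = np.array([roi_mean(rng.normal(100, 1, (4, 4, 4)), mask) for _ in range(10)])
    regulation = rng.normal(101, 1, (4, 4, 4))  # simulated up-regulation of the ROI
    print(f"feedback: {feedback_value(roi_mean(regulation, mask), baseline):+.2f}%")

    In a real protocol this value would be presented back to the participant (e.g., as a thermometer display) once per repetition time, after online motion correction.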

    Communion: a new strategy for memory management in high-performance computer systems

    Modern computers present a large gap between peak performance and sustained performance. There are many reasons for this situation, but it mainly involves an inefficient use of computational resources. Nowadays the memory system is the most critical component because of its growing inability to keep up with processor requests: technological trends have produced a large and growing gap between CPU speeds and DRAM speeds. Much research has focused on this memory-system problem, including program-optimization techniques, data-locality enhancement, hardware and software prefetching, decoupled architectures, multithreading, and speculative loads and execution. These techniques have achieved relative success, but each focuses on only one component of the hardware or software system. We present here a new strategy for memory management in high-performance computer systems, named COMMUNION. The basic idea behind this strategy is "cooperation": we introduce interaction possibilities among the system programs that are responsible for generating and executing application programs. We investigate two specific interactions: between the compiler and the operating system, and among the components of the compiling system. The experimental results show that enhancing the interaction among the compiling-system components yields improvements of about 10 times in execution time and about 5 times in memory demand. In the interaction between the compiler and the operating system, named Compiler-Aided Page Replacement (CAPR), we achieved a reduction of about 10% in the space-time product, with an increase of only 0.5% in total execution time. All these results show that it is possible to manage main memory more efficiently than current systems do.
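
    For concreteness, the "space-time product" metric cited above can be read as the integral of a program's resident memory over its execution time. The Python sketch below approximates it from a memory-usage trace; the trace values and policy names are invented for illustration, not taken from the paper's experiments.

    import numpy as np

    def space_time_product(times_s, resident_pages):
        """Trapezoidal approximation of the integral of resident-set size over time (page-seconds)."""
        dt = np.diff(times_s)
        avg = (resident_pages[:-1] + resident_pages[1:]) / 2.0
        return float(np.sum(dt * avg))

    # Hypothetical traces of the same program: a default policy versus a
    # compiler-advised policy that releases pages earlier.
    t = np.linspace(0.0, 100.0, 11)  # seconds
    default_policy = np.array([50, 60, 80, 80, 80, 80, 80, 80, 70, 60, 50])
    advised_policy = np.array([50, 60, 75, 75, 70, 70, 70, 70, 65, 60, 50])
    print(space_time_product(t, default_policy), space_time_product(t, advised_policy))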

    Evaluation of a local strategy for high performance memory management

    Conventional operating systems, like Silicon Graphics' IRIX and IBM's AIX, adopt a single memory-management algorithm, usually chosen for its good performance over the set of programs executed on the computer; some approximation of LRU (least recently used) is typically adopted. This choice can lead to situations in which the computer performs badly because the algorithm behaves badly for certain programs. A possible solution for such cases is to let each program have a specific management algorithm (a local strategy) adapted to its memory-access pattern. For example, programs with a sequential access pattern, such as SOR, should be managed by the MRU (most recently used) algorithm because of their bad performance when managed by LRU. In this strategy it is very important to decide how memory is partitioned among the programs executing in a multiprogramming environment. Our strategy, named CAPR (Compiler-Aided Page Replacement), analyzes the pattern of memory references in an application's source program and communicates these characteristics to the operating system, which then chooses the best management algorithm and memory-partitioning strategy. This paper evaluates the influence of the management algorithms and the memory-partitioning strategy on global system performance and on the individual performance of each program. A comparison of this local strategy with the classic global strategy is also presented, and the viability of the strategy is analyzed. The results showed a difference of at least an order of magnitude in the number of page faults between the LRU and MRU algorithms under the global strategy. Then, starting from the analysis of each application's intrinsic behavior with respect to its memory-access pattern and its number of page faults, we developed a procedure for optimizing memory-system performance in multiprogramming environments. This procedure makes it possible to set system performance parameters, such as the memory-partitioning strategy among programs and the appropriate management algorithm for each program. The results showed that the local management strategy obtained a reduction of at least an order of magnitude in the number of page faults and a reduction in mean memory usage of about 3 to 4 times relative to the global strategy. This performance improvement shows the viability of our strategy. Some aspects of implementing this strategy in traditional operating systems are also presented.
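
    The claim that sequential programs such as SOR are better served by MRU than by LRU can be illustrated with a small simulation. The Python sketch below counts page faults for both policies on a cyclic sequential reference string; the frame count and trace length are toy values, not the paper's experiments.

    def page_faults(trace, frames, policy):
        """Count page faults; evict the least (LRU) or most (MRU) recently used page."""
        resident, last_use, faults = set(), {}, 0
        for t, page in enumerate(trace):
            if page not in resident:
                faults += 1
                if len(resident) == frames:
                    pick = min if policy == "LRU" else max
                    victim = pick(resident, key=lambda p: last_use[p])
                    resident.discard(victim)
                resident.add(page)
            last_use[page] = t
        return faults

    trace = list(range(5)) * 20  # pages 0..4 touched sequentially, 20 passes
    for policy in ("LRU", "MRU"):
        print(policy, page_faults(trace, frames=4, policy=policy))

    On this trace LRU faults on every reference once memory fills, while MRU settles at roughly one fault per pass, consistent with the order-of-magnitude gap reported above.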

    Long-run communication support based on diagnostic symptoms of eruption as a key role of volcanologists toward sleeping giants - Case studies from Bandai, Usu and the Azores

    Volcanic eruptions can occur after decades- to centuries-long dormancy, as seen in recent examples: Mount St. Helens 1980, Pinatubo 1991, Unzen 1991, Soufrière Hills 1995, Chaitén 2008, and Eyjafjallajökull 2010. Bandai volcano, NE Japan, experienced a large-scale sector collapse in 1888 that killed 477 people. We study how this catastrophic event is viewed in retrospect by scientists, the government, and the local people. In recent decades, the cultural and educational activities led by the Bandai Volcano Eruption Memorial Museum have played an important role in disseminating hazard knowledge to the local people; many outreach activities have been carried out at schools and on the volcano itself, along with the delivery of the volcanic hazard map. The regional headquarters of JMA in Sendai is in charge of monitoring 18 active volcanoes in the Tohoku district. The area has not experienced major eruptions for a long time; however, the high potential for large-scale eruptions above VEI 4-5 is noted, as for Chokai (466 B.C.) and Towada (915). During the last four eruption crises of Usu volcano, northern Japan, advice was always provided by a couple of geophysicists and volcanologists through face-to-face communication with local town officers and residents. Fogo volcano, Azores, shows 452 years of eruption dormancy, whereas recent geophysical studies have revealed repeated intrusion episodes during the last decades. We study how information flows from the scientific community to the public in case of a volcanic crisis. Dealing with sleeping giants is a very challenging task. Facilitating awareness of volcanic risks by maintaining long-run communication among scientists, local authorities, and residents/tourists is the key to mitigating large volcanic hazards with low probabilities. Scientific support should be aimed at building communities where "the local residents could make their own contingency and evacuation plans" (Surono, 2013).

    Prevalence of anemia in pregnant women and iron fortification of flours

    This study evaluated the impact of iron-fortified flours on the prevalence of anemia and on the hemoglobin levels of pregnant women. This retrospective cross-sectional study was carried out at a Health Center School in the city of São Paulo, SP, Brazil. Data, collected from September to December 2006, were obtained from the medical records of 750 pregnant women distributed into two groups, before and after fortification: non-fortified and fortified. Pregnant women with hemoglobin levels below 11 g/dl were considered anemic. Data were submitted to multiple linear regression analysis. Anemia affected 9.2% and 8.6% of the pregnant women before and after the implementation of the program, respectively (p>0.05). The multiple regression analysis showed no statistically significant difference in mean hemoglobin levels between the groups (p=0.117). The results indicated a low prevalence of anemia and similar mean hemoglobin levels between the groups, which probably prevented the effect of flour fortification from being demonstrated.
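
    As a sketch of the kind of analysis described above, hemoglobin can be regressed on a fortification-period indicator plus covariates. The Python example below uses statsmodels with synthetic data; the covariate (maternal age) and all numbers are assumptions for illustration, not the study's data.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 750
    fortified = rng.integers(0, 2, n)  # 0 = before fortification, 1 = after
    age = rng.normal(26, 6, n)         # hypothetical covariate
    hb = 12.0 + 0.05 * fortified + 0.01 * age + rng.normal(0, 1.0, n)  # g/dl

    X = sm.add_constant(np.column_stack([fortified, age]))
    model = sm.OLS(hb, X).fit()
    print(model.params)   # coefficient on the fortification indicator
    print(model.pvalues)  # its p-value tests the between-group difference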

    Abnormal brain connectivity patterns in adults with ADHD : a coherence study

    Studies based on functional magnetic resonance imaging (fMRI) during the resting state have shown decreased functional connectivity between the dorsal anterior cingulate cortex (dACC) and regions of the Default Mode Network (DMN) in adult patients with Attention-Deficit/Hyperactivity Disorder (ADHD) relative to subjects with typical development (TD). Most studies used Pearson correlation coefficients among the BOLD signals from different brain regions to quantify functional connectivity. Since Pearson correlation provides only a limited description of functional connectivity, we investigated connectivity between the dACC and the posterior cingulate cortex (PCC) in three groups (adult patients with ADHD, n = 21; TD age-matched subjects, n = 21; young TD subjects, n = 21) using a more comprehensive analytical approach: unsupervised machine learning with a one-class support vector machine (OC-SVM) that quantifies an abnormality index for each individual. The median abnormality index for patients with ADHD was greater than for TD age-matched subjects (p = 0.014); the ADHD and young TD indices did not differ significantly (p = 0.480); and the median abnormality index of young TD subjects was greater than that of TD age-matched subjects (p = 0.016). Frequencies below 0.05 Hz and around 0.20 Hz were the most relevant for discriminating between ADHD patients and TD age-matched controls, and between the older and younger TD subjects. In addition, we validated our approach using the children's fMRI data publicly released by the ADHD-200 Competition, obtaining similar results. Our findings suggest that the abnormal coherence patterns observed here in patients with ADHD resemble those observed in young typically developing subjects, which reinforces the hypothesis that ADHD is associated with deficits in brain maturation.
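
    The abnormality-index idea can be sketched as follows: compute dACC-PCC coherence features per subject, fit a one-class SVM on typically developing controls, and score each individual by the signed distance to the learned boundary. The Python sketch below uses scipy and scikit-learn with synthetic signals; the sampling rate, window length, and SVM parameters are illustrative assumptions, not the paper's settings.

    import numpy as np
    from scipy.signal import coherence
    from sklearn.svm import OneClassSVM

    def coherence_features(x, y, fs=0.5):
        """Magnitude-squared coherence between two BOLD time series (fs in Hz)."""
        f, cxy = coherence(x, y, fs=fs, nperseg=64)
        return cxy  # one feature per frequency bin

    rng = np.random.default_rng(1)
    controls = np.array([coherence_features(rng.standard_normal(200),
                                            rng.standard_normal(200)) for _ in range(21)])
    patient = coherence_features(rng.standard_normal(200), rng.standard_normal(200))

    ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(controls)
    abnormality_index = -ocsvm.decision_function(patient.reshape(1, -1))[0]
    print(f"abnormality index: {abnormality_index:.3f}")  # larger = farther from the control boundary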