339 research outputs found

    Bringing the pieces together: Integrating cardiac and geriatric care in older patients with heart disease

    Get PDF
    Due to the aging population, the number of older cardiac patients is expected to rise in the coming decades. The treatment of older cardiac patients is complex due to the simultaneous presence of comorbidities, polypharmacy, and geriatric conditions such as functional impairment, fall risk, and malnutrition. However, the assessment of geriatric conditions is not part of the medical routine in cardiology, and these conditions therefore frequently go unrecognized even though they have a significant impact on treatment and outcomes. In addition, treatments are mostly based on single-disease-oriented guidelines and inadequately take other conditions into account. This may lead to conflicting recommendations and to treatments that do not address outcomes important to older patients, such as daily functioning, symptom relief, and quality of life. Thus, the care of older cardiac patients is currently suboptimal, which increases the risk of functional loss, readmission, and mortality. The overall aim of the work described in this thesis is to explore the integration of cardiac and geriatric care for older patients with heart disease: first, by examining how hospitalized older cardiac patients at high risk of adverse events could be identified; second, by investigating lifestyle-related secondary prevention of cardiovascular complications in older cardiac patients; and third, by developing a transitional care intervention for older cardiac patients and evaluating its effect on unplanned hospital readmission and mortality.

    Bridging the gap: Adapting transitional care to older cardiac patients' needs

    Get PDF
    Hospital readmission and mortality rates of older cardiac patients are high. Multimorbidity and geriatric conditions are common in this population and increase this risk; in frail patients with cardiovascular disease, the risk of readmission and mortality is two to three times higher. Identifying patients at risk is important to enable adequate treatment based on patients' individual risk factors and needs. Patients who are transferred between care settings or discharged from hospital to home are at especially high risk of adverse events. The aims of this thesis are 1) to evaluate strategies to identify patients at high risk of readmission and mortality, 2) to evaluate a transitional care intervention in frail older cardiac patients, and 3) to evaluate new approaches in cardiac rehabilitation. Bridging the gap between hospital and home by combining disease management, case management, and home-based cardiac rehabilitation did not reduce readmission and mortality in frail cardiac patients. The Cardiac Care Bridge (CCB) intervention is, in its current form, not recommended for implementation in clinical practice. If adequate risk assessment can identify high-risk patients eligible for high-intensity preventive interventions, the CCB intervention may be reconsidered. In the future, interventions should be integrated within existing care systems as much as possible and should focus on patients' own needs and preferences to achieve goals. Educational strategies focusing on interdisciplinary collaboration, system empowerment, and identifying patients' own drivers could improve intervention quality and bridge the gap between current practice and older cardiac patients' needs.

    Data semantic enrichment for complex event processing over IoT Data Streams

    Get PDF
    This thesis generalizes techniques for processing IoT data streams, semantically enriching data with contextual information, and performing complex event processing in IoT applications. A case study on ECG anomaly detection and signal classification was conducted to validate the knowledge foundation.
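    As a rough illustration of the enrichment step described above, the following Python sketch attaches contextual metadata to a raw sensor event before a CEP rule evaluates it. The event schema, device registry, and threshold are hypothetical assumptions, not the thesis's actual data model.

```python
# Minimal sketch: semantic enrichment of raw IoT events before CEP.
# The event fields and context registry below are illustrative only.

from datetime import datetime, timezone

# Static contextual knowledge keyed by device id (in practice this
# could come from an ontology or a device registry).
DEVICE_CONTEXT = {
    "ecg-42": {"patient_id": "p-007", "ward": "cardiology", "unit": "mV"},
}

def enrich(raw_event: dict) -> dict:
    """Attach contextual metadata so downstream CEP rules can reason
    about what the reading means, not just its raw value."""
    ctx = DEVICE_CONTEXT.get(raw_event["device_id"], {})
    return {
        **raw_event,
        **ctx,
        "received_at": datetime.now(timezone.utc).isoformat(),
    }

def anomaly_rule(event: dict) -> bool:
    """Example CEP predicate: flag implausible ECG amplitudes."""
    return event.get("unit") == "mV" and abs(event["value"]) > 5.0

for raw in [{"device_id": "ecg-42", "value": 6.3}]:
    event = enrich(raw)
    if anomaly_rule(event):
        print("anomaly:", event)
```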

    Multigrain shared memory

    Get PDF
    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 197-203). By Donald Yeung.

    Efficient openMP over sequentially consistent distributed shared memory systems

    Get PDF
    Nowadays, clusters are one of the most widely used platforms in High Performance Computing, and most programmers use the Message Passing Interface (MPI) library to program applications on these distributed platforms for maximum performance, although it is a complex task. On the other side, OpenMP has become the de facto standard for programming applications on shared memory platforms because it is easy to use and obtains good performance without much effort. So, could both worlds be joined? Could programmers use the ease of OpenMP on distributed platforms? Many researchers think so, and one of the ideas developed to this end is distributed shared memory (DSM): a software layer on top of a distributed platform that gives applications an abstract shared-memory view. Although it seems a good solution, it also has drawbacks: memory coherence between the nodes of the platform is difficult to maintain (complex management, scalability issues, high overhead, among others), and the latency of remote-memory accesses can be orders of magnitude greater than on a shared bus due to the interconnection network. This research therefore improves the performance of OpenMP applications executed on distributed memory platforms using a DSM with sequential consistency, evaluating the results thoroughly on the NAS Parallel Benchmarks. The vast majority of existing DSMs use a relaxed consistency model because it avoids some major problems in the area. In contrast, we use a sequential consistency model because exposing the potential problems that would otherwise stay hidden may allow solutions to be found and applied to both models. The main idea behind this work is that both runtimes, OpenMP and the DSM layer, should cooperate to achieve good performance; otherwise they interfere with each other, degrading the final performance of applications. We develop three contributions to improve the performance of these applications: (a) a technique to avoid false sharing at runtime, (b) a technique to mimic the MPI behaviour, where produced data is forwarded to its consumers, and (c) a mechanism to avoid network congestion caused by DSM coherence messages. The NAS Parallel Benchmarks are used to test the contributions. The results of this work show that false sharing is a relative problem that depends on each application. Another result is the importance of moving the data flow out of the critical path: techniques that forward data as early as possible, similarly to MPI, benefit the final application performance. Additionally, this data movement is usually concentrated at single points and affects application performance due to the limited bandwidth of the network, so it is necessary to provide mechanisms that distribute this data over the computation time, using an otherwise idle network. Finally, the results show that the proposed contributions improve the performance of OpenMP applications in this kind of environment.
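    The forwarding idea in contribution (b) can be sketched abstractly: instead of consumers fetching shared data on demand (paying remote-access latency on the critical path), the producer pushes data to its consumers as soon as it is produced, MPI-style. The Python sketch below is purely conceptual; the thesis itself targets OpenMP runtimes over a software DSM, and the queue here merely stands in for the interconnect.

```python
# Conceptual sketch of eager data forwarding: the producer pushes
# chunks to the consumer as soon as they are ready, so the consumer
# rarely waits, unlike demand-driven DSM fetches that stall on the
# critical path. All names here are illustrative.

import threading
import queue

inbox = queue.Queue()  # stands in for the consumer's local memory

def producer():
    for i in range(5):
        chunk = [i] * 4    # "produced" data
        inbox.put(chunk)   # forward eagerly, off the critical path

def consumer():
    for _ in range(5):
        chunk = inbox.get()  # data is usually already local on arrival
        print("consumed", chunk)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```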

    TESTING A CONCEPTUAL FRAMEWORK FOR SELF-CARE IN PERSONS WITH DIABETES: THE EFFECT OF DEPRESSION

    Get PDF
    Diabetes is a major source of morbidity, mortality, and economic expense. People with diabetes not only have a higher risk of developing depression; the rate of depression among them is much higher than in the general population (ADA, 2010). Depression is believed to influence Diabetes Self Care Management (DSCM), self-efficacy, and self-care agency. Therefore, the main study aim was to examine the relationships among these factors using a cross-sectional model-testing design. The secondary aim was to examine the item characteristics and reliability of the Diabetes Self Management Scale (DSMS). A convenience sample of 78 individuals with type 1 or type 2 diabetes mellitus who were taking insulin was recruited. Participants completed five psychometric questionnaires. Path analysis techniques were used to examine relationships among the variables. For the DSMS, item and reliability analyses resulted in a reduced 40-item scale with an alpha of 0.947. The new scale had a strong correlation with self-efficacy (r=0.80), which supports the validity of the scale. The path analysis showed that depression negatively affected self-efficacy (B=-1.43; p<.01; r2=.18) and self-care agency (B=0.53; p<.01; r2=.23). The effect of depression on DSCM was completely mediated by self-efficacy and self-care agency. The findings may indicate that enhancing self-efficacy and self-care agency might mitigate the negative impact of depression on DSCM.
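    For readers unfamiliar with this kind of model testing, the sketch below shows a Baron-and-Kenny-style mediation check in Python on simulated data: the mediators are regressed on depression, then the outcome on the mediators plus depression, where full mediation corresponds to a near-zero direct effect. The variable names and coefficients are illustrative assumptions and will not reproduce the study's estimates.

```python
# Hedged sketch of the path/mediation structure tested above:
# depression -> self-efficacy / self-care agency -> DSCM.
# Data are simulated; results will not match the thesis.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 78  # same sample size as the study, but purely synthetic data
depression = rng.normal(size=n)
self_efficacy = -0.6 * depression + rng.normal(size=n)
self_care_agency = -0.5 * depression + rng.normal(size=n)
dscm = 0.7 * self_efficacy + 0.5 * self_care_agency + rng.normal(size=n)

def ols(y, predictors):
    """Fit an OLS regression of y on the given predictor columns."""
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(y, X).fit()

# Path a: depression -> mediators
print(ols(self_efficacy, [depression]).params)
print(ols(self_care_agency, [depression]).params)

# Paths b and c': DSCM on mediators plus depression; full mediation
# shows a near-zero direct coefficient on depression here.
print(ols(dscm, [self_efficacy, self_care_agency, depression]).params)
```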

    Compiler and Runtime Optimizations for Fine-Grained Distributed Shared Memory Systems

    Get PDF
    Bal, H.E. [Promotor]

    Complex Event Processing for integration of Internet of Things devices

    Full text link
    As a relatively new technology, the Internet of Things (IoT) faces many challenges. IoT networks are characterized by a large number of devices, each of which produces a huge number of events, so scalability is one of the key requirements of IoT applications. Cloud computing can help achieve scalability by providing virtually unlimited resources, and the microservices architecture is becoming increasingly popular for cloud deployments of applications. We often want to extract real-time information from the events coming from IoT devices; it would be harder to infer useful information from the enormous volume of raw events if we first stored them in a database. Complex event processing enables us to analyze events as the stream flows and to infer meaningful information from them in real time. To demonstrate all of this in practice, we developed an IoT application that follows the principles of the microservices architecture. It can simulate events, consume them, perform complex event processing, and display visualizations. To balance the load between instances and achieve scalability and elasticity, the microservice consuming the events can be scaled up and down.
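    A minimal Python sketch of the kind of rule such a consuming microservice might evaluate, assuming a sliding-window aggregate over numeric readings; the window size, threshold, and simulated stream are illustrative assumptions, not the application's actual rules.

```python
# Minimal CEP sketch: detect a complex event when the average of the
# last WINDOW readings crosses a threshold. Values are illustrative.

from collections import deque

WINDOW = 10        # number of recent readings to aggregate
THRESHOLD = 30.0   # hypothetical alert level

window = deque(maxlen=WINDOW)

def on_event(value: float) -> None:
    """Called once per incoming IoT event (e.g. a temperature reading)."""
    window.append(value)
    if len(window) == WINDOW:
        avg = sum(window) / WINDOW
        if avg > THRESHOLD:
            print(f"complex event: {WINDOW}-reading average "
                  f"{avg:.1f} exceeds {THRESHOLD}")

# Simulated stream, mirroring the "simulate events" microservice.
for v in [28, 29, 31, 30, 32, 33, 29, 31, 34, 35, 36, 37]:
    on_event(v)
```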

    Real time stream processing for internet of things

    Get PDF
    With the increasing popularity of the IoT among enterprises, research and development on monitoring and analyzing IoT data has grown. The IoT, being one of the major sources of big data, is getting attention from data engineers. The main challenge is real-time stream processing of a large volume of IoT events, which involves transferring, storing, processing, and analyzing large-scale data in real time. Billions of IoT devices generate huge amounts of data that should be analyzed to derive intelligence in real time. In this thesis, a unified solution for real-time stream processing for the IoT is proposed. In the proposed method, sample IoT events of weather station data are generated using Apache Kafka and published to a topic. This data is consumed by an Apache Spark consumer, which converts it into RDDs. Using Spark SQL, data frames are generated, on which different queries are applied to analyze the data. The data is saved to Cassandra, and a Zeppelin notebook is used to visualize it. A logistic regression algorithm is applied to a data set to make predictions in real time using the machine learning library in Spark. Finally, the whole method is sped up by tuning different metrics and reducing delay. Results show that this method provides a complete solution for processing large IoT data sets in real time.
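    A hedged sketch of the pipeline described above: a Kafka producer publishes a weather-station event to a topic, and Spark consumes the topic into queryable DataFrames. It uses Spark Structured Streaming rather than the RDD/DStream API the thesis describes; the topic name, fields, and broker address are assumptions, and running it also requires the Spark-Kafka connector package.

```python
# Sketch of a Kafka -> Spark streaming pipeline. Topic, schema, and
# broker address are illustrative assumptions, not the thesis setup.

import json
from kafka import KafkaProducer  # kafka-python package
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import (StructType, StructField,
                               DoubleType, StringType)

# --- producer side: publish a sample weather-station event ---
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("weather", {"station": "s1", "temp_c": 21.5,
                          "humidity": 0.63})
producer.flush()

# --- consumer side: stream the topic into queryable DataFrames ---
spark = SparkSession.builder.appName("iot-stream-sketch").getOrCreate()
schema = StructType([
    StructField("station", StringType()),
    StructField("temp_c", DoubleType()),
    StructField("humidity", DoubleType()),
])
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "weather")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)
# Spark SQL queries can now run against `events`; writing to Cassandra
# and visualizing in Zeppelin would follow as in the thesis.
query = events.writeStream.format("console").start()
query.awaitTermination()
```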