
    Physical Activity and Exercise with Blood Flow Restriction as Medicine During the COVID-19 Pandemic and Beyond

    During the COVID-19 pandemic, physical activity levels have decreased and sitting time has increased. This is a major concern, as physical inactivity increases the risk for severe COVID-19 outcomes. Evidence also indicates that COVID-19 survivors can experience reduced physical function (i.e., ability to complete daily living activities) long after acute illness. Currently, there are no evidence-based guidelines for recovering physical function following COVID-19 infection. Exercise with blood flow restriction (BFR) presents a promising rehabilitation strategy, as the benefits of traditional exercise can be achieved using lower intensities. However, several barriers, such as cost, access to equipment, and the lack of standardized methods, limit its use. The goal of this research was to promote and facilitate the use of physical activity as a critical form of medicine during the COVID-19 pandemic and beyond. With study 1, I implemented a community-based program to provide free physical activity resources to the rural Upper Peninsula during the pandemic. Physical activity was promoted through a widespread media campaign, and over 260 virtual home-based workouts were delivered to community members using several platforms (i.e., Zoom, Facebook Live, YouTube, TV, DVD). With study 2, I developed a working hypothesis and theoretical framework for using BFR to help restore physical function in individuals infected with COVID-19. Specifically, I hypothesized that 1) passive BFR modalities can mitigate the losses of muscle mass and muscle strength that occur during acute infection, and 2) exercise with BFR can serve as an effective alternative to traditional higher-intensity exercise for regaining muscle mass, muscle strength, and aerobic capacity during convalescence. With study 3, I collected laboratory-based measures using Doppler ultrasound and anthropometric techniques in healthy adults (n=143) and applied linear regression methods to develop and validate a prediction equation for performing BFR without the need for specialized equipment. Finally, with study 4, I developed and usability-tested a web-based application designed to serve as a user-support tool that aids physical therapists in implementing BFR. Collectively, my research addressed two major public health problems (COVID-19 and physical inactivity) and sought to enhance the accessibility of physical activity and exercise with BFR during the pandemic and beyond.
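    As a rough illustration of the regression workflow described for study 3, the sketch below fits and validates a linear prediction equation on synthetic data. The predictors and all numeric values are placeholders, not the anthropometric and Doppler ultrasound measures actually used in the study.

```python
# Minimal sketch: develop a linear prediction equation on a training split
# and validate it on held-out data. All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 143                                    # sample size reported in the abstract
X = rng.normal(size=(n, 3))                # placeholder predictor variables
y = 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)

# Hold out part of the sample to validate the fitted equation.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("validation R^2:", r2_score(y_te, model.predict(X_te)))
```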

    A Statistical Approach to Detecting Memory Leaks in Java Applications

    Modern managed runtime environments and programming languages greatly simplify the creation and maintenance of applications. One of the best-known examples of such a managed runtime environment and language is the Java Virtual Machine and the Java programming language. Despite the built-in garbage collector, the memory leak problem remains relevant in Java: memory is wasted because unused objects are prevented from being reclaimed. The problem is especially critical for applications that are expected to run uninterrupted around the clock, as running out of memory is one of the few programming errors that can terminate the whole Java application. The best indicator of whether an object is still in use is the time of its last access; the main disadvantage of this metric, however, is the performance overhead incurred in recording it. This thesis investigates the memory leak problem in Java and proposes a novel approach for memory leak detection and diagnosis. It proposes an alternative way to estimate the 'unusedness' of objects. The main hypothesis is that leaked objects can be distinguished from non-leaked ones by applying statistical methods to object lifetimes, observing the age distribution of the population of objects grouped by their allocation points. The proposed approach is far cheaper in terms of performance, since information about each object needs to be recorded only once, at the time of its creation.
The research conducted for the thesis is applied in the memory leak detection tool Plumbr, which is successfully used in various production environments. After an introduction and an overview of the state of the art, the thesis reviews existing solutions and proposes a classification of memory leak detection approaches. Next, the statistical approach for memory leak detection is described, along with the main metric used to distinguish leaking objects from non-leaking ones, and the shortcomings of this single metric are analyzed. Based on this analysis, additional metrics are designed, and machine learning algorithms are applied to statistical data acquired via Plumbr from real applications and production environments. Finally, case studies of real applications and a comparison with one existing memory leak detection solution are presented to evaluate the approach and the performance overhead of the tool.
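    To make the central hypothesis concrete, here is a toy sketch of flagging allocation sites whose live objects are suspiciously old. It illustrates the statistical idea only, not Plumbr's actual algorithm; the grouping key, age metric, and thresholds are all assumptions.

```python
# Toy illustration: group live objects by allocation site and inspect the
# age distribution of each group. A site whose live objects are mostly old
# is a leak suspect. Thresholds below are made up for the example.
from collections import defaultdict

def leak_suspects(live_objects, now, min_objects=100, old_fraction=0.9):
    """live_objects: iterable of (allocation_site, creation_time) pairs for
    objects still alive at time `now` (seconds since process start)."""
    ages_by_site = defaultdict(list)
    for site, created in live_objects:
        ages_by_site[site].append(now - created)

    suspects = []
    for site, ages in ages_by_site.items():
        if len(ages) < min_objects:
            continue                                 # too few samples to judge
        old = sum(1 for a in ages if a > now / 2)    # older than half the uptime
        if old / len(ages) >= old_fraction:
            suspects.append((site, len(ages)))
    return suspects

# Example: leak_suspects([("com.app.Cache.put", 12.0), ...], now=3600.0)
```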

    Metrics for Aspect Mining Visualization

    Aspect-oriented programming has over the last decade become the subject of intense research within the domain of software engineering. Aspect mining, which is concerned with the identification of crosscutting concerns in legacy software, is an important part of this domain. Aspect refactoring takes the identified crosscutting concerns and converts them into new software constructs called aspects. Software that has been transformed using this process becomes more modular and easier to comprehend and maintain. The first attempts at mining for aspects were dominated by manual searching and parsing through source code using simple tools. More sophisticated techniques have since emerged, including evaluation of execution traces, code clone detection, program slicing, dynamic analysis, and the use of various clustering techniques. The focus of most studies has been to maximize aspect mining performance as measured by various metrics, including aspect mining precision and recall. Other metrics have been developed and used to compare the various aspect mining techniques with each other. Aspect mining automation and the presentation of aspect mining results have received less attention. Automation of aspect mining and presentation of results conducive to aspect refactoring are important if this research is going to be helpful to software developers. This research showed that aspect mining can be automated. A tool was developed which performed automated aspect mining and visualization of identified crosscutting concerns. This research took a different approach to aspect mining than most aspect mining research by recognizing that many different categories of crosscutting concerns exist and by taking this into account in the mining process. Many different aspect mining techniques have been developed over time, some of which are complementary. This study differed from most aspect mining research in that multiple complementary aspect mining algorithms were used in the mining and visualization process.
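    As one concrete example of the kind of complementary technique such a tool can combine, the sketch below implements fan-in analysis, a published aspect-mining heuristic: methods invoked from many distinct callers are candidate crosscutting concerns. It is a generic illustration, not the specific set of algorithms used by the tool described above.

```python
# Fan-in analysis sketch: rank methods by the number of distinct callers;
# high fan-in often marks crosscutting behavior (logging, tracing, auth).

def fan_in_candidates(call_graph, threshold=3):
    """call_graph: dict mapping a method name to the set of methods that
    call it. Returns (method, fan-in) pairs sorted by descending fan-in."""
    candidates = [(callee, len(callers))
                  for callee, callers in call_graph.items()
                  if len(callers) >= threshold]
    return sorted(candidates, key=lambda pair: -pair[1])

# Example: a logger invoked from many unrelated modules scores high.
graph = {
    "Logger.log": {"A.run", "B.save", "C.load", "D.close"},
    "B.save":     {"A.run"},
}
print(fan_in_candidates(graph))  # [('Logger.log', 4)]
```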

    Software fault injection and localization in embedded systems

    Injection and localization of software faults have been extensively researched, but the results are not directly transferable to embedded systems. The domain-specific constraints applying to these systems, such as limited resources and the predominant C/C++ programming languages, require a specific set of injection and localization techniques. In this thesis, we have assessed existing approaches and have contributed a set of novel methods for software fault injection and localization in embedded systems. We have developed a method based on AspectC++ for the injection of errors at interfaces and a method based on Clang for the accurate injection of software faults directly into source code. Both approaches work particularly well in the context of embedded systems because they do not require runtime support and modify binaries only when necessary; nevertheless, they are also suitable for injecting software faults and errors into the software of other domains. These contributions required a thorough assessment of the fault injection techniques and fault models presented in the literature over the years, which raised multiple questions regarding their validity in the context of C/C++. We found that macros (particularly those in header files), compile-time language constructs, and the commonly used optimization levels introduce a non-negligible bias into experimental results achieved by injection methods operating on any layer other than the source code. Additionally, we found that the textual specification of fault models is prone to ambiguities and misunderstandings; we have conceived an automatic fault classifier to solve this problem in a field study. Regarding software fault localization, we have combined existing methods that make use of program spectra and assertions, and have contributed a new oracle type for the autonomous localization of software faults in the field. Our evaluation shows that this approach works particularly well in the context of embedded systems because the generated information can be processed in real time and the approach can therefore run unsupervised. In conclusion, we assessed a variety of injection and localization approaches in the context of embedded systems and contributed novel methods where applicable, improving the current state of the art. Our results also point out weaknesses regarding the general validity of the majority of previous injection experiments in C/C++.
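    For readers unfamiliar with spectrum-based fault localization, the sketch below computes the standard Ochiai suspiciousness score from per-test coverage and pass/fail outcomes. It illustrates the general class of methods the thesis builds on, not the thesis's specific oracle or metrics.

```python
# Ochiai suspiciousness: statements covered mostly by failing tests rank high.
import math

def ochiai(spectra, failed):
    """spectra: list of per-test coverage, each a set of covered statements.
    failed: list of booleans, True where the corresponding test failed.
    Returns a dict statement -> suspiciousness (higher = more suspect)."""
    total_failed = sum(failed)
    statements = set().union(*spectra) if spectra else set()
    scores = {}
    for s in statements:
        ef = sum(1 for cov, f in zip(spectra, failed) if f and s in cov)
        ep = sum(1 for cov, f in zip(spectra, failed) if not f and s in cov)
        denom = math.sqrt(total_failed * (ef + ep))
        scores[s] = ef / denom if denom else 0.0
    return scores

# Example: statement 7 is only covered by the failing test, so it ranks first.
print(ochiai([{1, 2, 7}, {1, 2}], [True, False]))
```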

    Exploring Potentials in Mobile Phone GPS Data Collection and Analysis

    In order to support efficient transportation planning decisions, household travel survey data with high levels of accuracy are essential. Due to a number of issues associated with conventional household travel surveys, including high cost, low response rates, trip misreporting, and respondents’ self-reporting bias, government and private agencies are actively searching for alternative data collection methods. Recent advancements in smartphones and Global Positioning System (GPS) technologies present new opportunities to track travelers’ trips. Considering the high penetration rate of smartphones, it seems reasonable to use smartphone data as a reliable source of individual travel diaries. Many studies have applied GPS-based data in planning and demand analysis, but mobile phone GPS data have not received much attention. Google Location History (GLH) data provide an opportunity to explore the potential of this source. This research presents a study using GLH data, including the data processing algorithm for deriving travel information and the potential applications for understanding travel patterns. The main goal of this study is to explore the potential of using cell phone GPS data to advance the understanding of mobility and travel behavior. The objectives of the study are: a) assessing the technical feasibility of using smartphones in transportation planning as a substitute for traditional household surveys; b) developing algorithms and procedures to derive travel information from smartphones; and c) identifying applications in mobility and travel behavior studies that could take advantage of smartphone GPS data in ways that would not have been possible with conventional data collection methods. This research aims to demonstrate how accurate travel information can be collected and analyzed at lower cost using smartphone GPS data and what analysis applications are made possible by this new data source. Moreover, the framework developed in this study can provide valuable insights for others interested in using cell phone data. GLH data were obtained from 45 participants over a two-month period. The results show great promise for using GLH data as a supplement or complement to conventional travel diary data: GLH provides sufficiently high-resolution data to study people’s movement without respondent burden, and it can potentially be applied to large-scale studies with ease. The algorithms developed in this study work well with the data. This study supports the conclusion that transportation data can be collected with smartphones less expensively and more accurately than by traditional household travel surveys. These data provide the opportunity to investigate various issues, such as less frequent long-distance travel, hourly variations in travel behavior, and daily variations in travel behavior.
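    A common building block for deriving trips from raw GPS traces is dwell-time segmentation: a trip ends when the device lingers near one location for long enough. The sketch below shows this heuristic with made-up thresholds; the thesis's actual GLH processing algorithm may differ.

```python
# Dwell-time trip segmentation sketch: a trip ends when the device stays
# within `radius_m` of an anchor point for at least `dwell_s` seconds.
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def split_trips(points, radius_m=100, dwell_s=300):
    """points: time-ordered list of (timestamp_s, lat, lon) fixes."""
    trips, current, anchor = [], [], None
    for t, lat, lon in points:
        if anchor and haversine_m(anchor[1:], (lat, lon)) <= radius_m:
            if t - anchor[0] >= dwell_s:       # dwelling long enough: trip ended
                if current:
                    trips.append(current)
                current, anchor = [], (t, lat, lon)
                continue
        else:
            anchor = (t, lat, lon)             # still moving: reset dwell anchor
        current.append((t, lat, lon))
    if current:
        trips.append(current)
    return trips
```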

    New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs

    The relevance of electronics to the safety of everyday devices has only grown, as an ever greater share of their functionality is assigned to electronic components. This, of course, comes along with a constant need for higher performance to fulfill such functionality requirements while keeping power consumption and cost low. In this scenario, industry is struggling to provide technology that meets all the performance, power, and price specifications, at the cost of increased vulnerability to several types of known faults or the appearance of new ones. To provide a solution for the new and growing faults in these systems, designers have been using traditional techniques from safety-critical applications, which generally offer suboptimal results. In fact, modern embedded architectures offer the possibility of optimizing dependability properties by enabling the interaction of the hardware, firmware, and software levels in the process; however, this potential has not yet been fully realized. Advances at every level in that direction are much needed if flexible, robust, resilient, and cost-effective fault tolerance is desired. The work presented here focuses on the hardware level, with the background consideration of potential integration into a holistic approach. The efforts in this thesis have focused on several issues: (i) introducing additional fault models, as required for adequate representativity of the physical effects emerging in modern manufacturing technologies; (ii) providing tools and methods to efficiently inject both the proposed models and classical ones; (iii) analyzing the optimum method for assessing the robustness of systems through extensive fault injection and later correlation with higher-level layers, in an effort to cut development time and cost; (iv) providing new detection methodologies to cope with the challenges modeled by the proposed fault models; (v) proposing mitigation strategies focused on tackling such new threat scenarios; and (vi) devising an automated methodology for deploying many fault tolerance mechanisms in a systematic, robust way. The outcomes of the thesis constitute a suite of tools and methods to help the designer of critical systems develop robust, validated, on-time designs tailored to the application.
Espinosa García, J. (2016). New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73146
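    As a generic illustration of the classic transient-fault model that such injection tools extend (not the thesis's own tools or proposed fault models), the sketch below injects a single-event upset by flipping one randomly chosen bit of a word.

```python
# Single-bit-flip injection sketch: the classic transient-fault (SEU) model.
import random

def flip_bit(word, width=32, bit=None):
    """Flip one bit of a `width`-bit word; the position is chosen
    uniformly at random when `bit` is None."""
    if bit is None:
        bit = random.randrange(width)
    return (word ^ (1 << bit)) & ((1 << width) - 1)

# Injecting into a simulated register file during a fault campaign:
regs = [0x0000_00FF, 0xDEAD_BEEF]
target = random.randrange(len(regs))
regs[target] = flip_bit(regs[target])
```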

    Aspect-oriented technology for dependable operating systems

    Modern computer devices exhibit transient hardware faults that disturb the electrical behavior but do not cause permanent physical damage to the devices. Transient faults are caused by a multitude of sources, such as fluctuation of the supply voltage, electromagnetic interference, and radiation from the natural environment. Therefore, dependable computer systems must incorporate methods of fault tolerance to cope with transient faults. Software-implemented fault tolerance represents a promising approach that does not need expensive hardware redundancy to reduce the probability of failure to an acceptable level. This thesis focuses on software-implemented fault tolerance for operating systems because they are the most critical pieces of software in a computer system: all computer programs depend on the integrity of the operating system. However, the C/C++ source code of common operating systems tends to be exceedingly complex already, so that manually extending it with fault tolerance is not a viable solution. Thus, this thesis proposes a generic solution based on Aspect-Oriented Programming (AOP). To evaluate AOP as a means to improve the dependability of operating systems, this thesis presents the design and implementation of a library of aspect-oriented fault-tolerance mechanisms. These mechanisms constitute separate program modules that can be integrated automatically into common off-the-shelf operating systems using a compiler for the AOP language. Thus, the aspect-oriented approach facilitates improving the dependability of large-scale software systems without affecting the maintainability of the source code. The library allows choosing between several error-detection and error-correction schemes, and provides wait-free synchronization for handling asynchronous and multi-threaded operating-system code. This thesis evaluates the aspect-oriented approach to fault tolerance on the basis of two off-the-shelf operating systems. Furthermore, the evaluation also considers one user-level program for protection, as the library of fault-tolerance mechanisms is highly generic and transparent and, thus, not limited to operating systems. Exhaustive fault-injection experiments show an excellent trade-off between runtime overhead and fault tolerance, which can be adjusted and optimized by fine-grained selective placement of the fault-tolerance mechanisms. Finally, this thesis provides evidence for the effectiveness of the approach in detecting and correcting radiation-induced hardware faults: high-energy particle radiation experiments confirm improvements in fault tolerance of almost 80 percent.
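    As a generic illustration of one classic error-correction scheme of the kind such a library can provide (not necessarily its actual implementation), the sketch below shows triple modular redundancy with majority voting. The thesis's aspect library weaves comparable mechanisms into existing code with an AOP compiler rather than requiring explicit calls like this.

```python
# Triple modular redundancy (TMR) sketch: run a computation three times
# and vote. A transient fault in one replica is masked; disagreement of
# all three replicas is detected and reported.

def tmr(compute, *args):
    a, b, c = compute(*args), compute(*args), compute(*args)
    if a == b or a == c:
        return a                      # majority (or unanimity) wins
    if b == c:
        return b
    raise RuntimeError("TMR: no majority, uncorrectable fault detected")

# Example: result = tmr(checksum, buffer)
```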

    Feasibility study of intelligent LVAD control for optimal heart failure therapy.

    Background: Left ventricular assist devices (LVADs) are operated at constant speeds (rpm); consequently, pump flow is passively determined by the pressure difference between the LV and the aorta. Since the diastolic pressure gradient (~70 mmHg) is much larger than the systolic gradient (~10 mmHg), the majority of pump flow occurs during systole. This limitation results in sub-optimal LV volume unloading, sub-optimal LV washing, and diminished vascular pulsatility that may be associated with increased risk for clinically significant adverse events, including stroke, bleeding, arteriovenous malformations, and aortic insufficiency. To address these clinical adverse events, an intelligent control strategy using pump speed modulation was developed to provide dynamic LV unloading during the cardiac cycle and produce near-physiologic pulsatile flow delivery similar to that of the native heart. Materials and Methods: The objective of this study was to integrate a novel algorithm to dynamically control Medtronic HVAD pump speed and demonstrate proof-of-concept by characterizing hemodynamic performance in a mock flow loop primed with a blood analog solution (glycerol-saline, 3 cP) and tuned to simulate class IV heart failure (HF). The intelligent LVAD control was operated at varying pump speed differentials (Δspeed = 0, 1000, 1500, 2000, 2500 rpm) and systolic durations (30%, 35%, and 40%); the systolic duration corresponds to the time spent at either the high or the low pump speed setting. The intelligent LVAD control strategy modulates pump speed within a cardiac cycle, triggered by the R-wave of an EKG waveform set to 80 BPM. This pump speed modulation control strategy allows pulsatile operation of a continuous-flow LVAD within a single cardiac cycle. Hemodynamic waveforms (LV pressure-volume, aortic pressure-flow, and pump flow) and intrinsic pump parameters (speed and current) were recorded and analyzed for each test condition. We hypothesize that pump speed modulation may be configured for optimal volume unloading (rest), vascular pulsatility (reloading), and/or washing. Results and Discussion: The intelligent LVAD control system successfully demonstrated the ability to rapidly increase and decrease HVAD pump speed within a single cardiac cycle to provide asynchronous, synchronous co-pulsation, and synchronous counter-pulsation profiles for all systolic durations (30, 35, 40%) and Δrpm tested (Δ1000, Δ1500, Δ2000, Δ2500). Asynchronous support was achieved when the pump speed increase (or decrease) was independent of the cardiac cycle; co-pulsation support was achieved when the increase in pump speed was timed with the beginning of systole, corresponding with ventricular contraction (systole); and counter-pulsation support was achieved when the increase in pump speed was timed with the end of systole, corresponding with ventricular filling (diastole). Ideally, the intelligent control would increase (or decrease) the HVAD pump speed instantaneously upon R-wave detection; however, two distinct time delays were observed: (1) a delay between detection of the R-wave trigger and the increase (or decrease) of pump speed for systolic durations of 35% and 40% (45 ± 3.0 ms and 82 ± 3.0 ms, respectively), and (2) a delay in LVAD flow when pump speed was increased, which is hypothesized to result from the fluid inertia of the blood analog solution.
Left ventricular stroke volume decreased for all LVAD pump speed modulation operating conditions compared to baseline (HF with the LVAD off), indicating that the intelligent control strategy was able to reduce LV volume with increasing HVAD support. The highest flow was achieved with the HVAD operated at a fixed speed of 4,000 rpm; however, co-pulsation pump speed modulation at the largest pump speed differential (low = 1,500 rpm, high = 4,000 rpm, Δrpm = 2500, systolic duration 30%) resulted in a mean pump speed of 3,300 ± 1,200 rpm. By comparison, the forward flow at a fixed pump speed of 4,000 rpm was 4.8 L/min, versus 4.5 L/min for co-pulsation at the lower mean pump speed. Additionally, all operating settings for the intelligent control during pulsatile function produced an average forward flow through the aortic valve, while in contrast, at higher fixed speeds (3,500 and 4,000 rpm), the mean aortic flow was negative. Pulse pressure (ΔP) decreased with increasing mean pump speed (rpm) for all operating modes (fixed, asynchronous, co-pulsation, counter-pulsation). When operating at the same mean pump speed (rpm), co-pulsation provided an increased hemodynamic benefit for pulsatility compared to counter-pulsation and fixed-speed operation. Conclusion: The results of this study show the ability of the intelligent HVAD control strategy to increase and decrease pump speed within a single cardiac cycle. This study showed that asynchronous modulation with phases of co-pulsation can generate near-physiologic pulse pressure and vascular pulsatility when compared to counter-pulsation support, while counter-pulsation can generate greater ventricular volume unloading and diastolic augmentation when compared to co-pulsation. Furthermore, the clinical impact of this study is that, through speed modulation, adverse events of continuous-flow LVADs may be reduced, such as incidences of bleeding associated with decreased pulsatility and the risk of thrombus formation from poor washing around the aortic valve.
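    The sketch below illustrates the basic shape of R-wave-triggered speed modulation using numbers from the abstract (80 BPM, 1,500/4,000 rpm, 35% systolic duration). It is a simplified model only: the observed trigger delays, fluid inertia, and the asynchronous mode (modulation independent of the R-wave) are not represented.

```python
# Simplified pump-speed schedule within one cardiac cycle, keyed to the
# time elapsed since the last R-wave trigger. Illustrative values only.

def commanded_speed(t_ms, low_rpm=1500, high_rpm=4000, bpm=80,
                    systolic_fraction=0.35, mode="co"):
    """Pump speed commanded t_ms after the last R-wave trigger."""
    cycle_ms = 60_000 / bpm                          # 750 ms per beat at 80 BPM
    in_systole = (t_ms % cycle_ms) / cycle_ms < systolic_fraction
    if mode == "co":                                 # high speed timed with systole
        return high_rpm if in_systole else low_rpm
    return low_rpm if in_systole else high_rpm       # counter-pulsation

# One cardiac cycle sampled every 50 ms:
profile = [commanded_speed(t) for t in range(0, 750, 50)]
```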

    The 4th Conference of PhD Students in Computer Science
