    A Real-Time Automated Point-Process Method for the Detection and Correction of Erroneous and Ectopic Heartbeats

    The presence of recurring arrhythmic events (also known as cardiac dysrhythmia or irregular heartbeats), as well as erroneous beat detection due to low signal quality, significantly affects the estimation of both time- and frequency-domain indices of heart rate variability (HRV). Reliable, real-time classification and correction of ECG-derived heartbeats is a necessary prerequisite for accurate online monitoring of HRV and cardiovascular control. We have developed a novel point-process-based method for real-time R-R interval error detection and correction. Given an R-wave event, we assume that the length of the next R-R interval follows a physiologically motivated, time-varying inverse Gaussian probability distribution. We then devise an instantaneous automated detection and correction procedure for erroneous and arrhythmic beats, using the probability of occurrence of the observed beat under the model. We test our algorithm on two datasets from the PhysioNet archive. The Fantasia normal rhythm database is artificially corrupted with known erroneous beats to test both the detection and correction procedures. The benchmark MIT-BIH Arrhythmia database is further considered to test the detection of real arrhythmic events and to compare the results with previously published algorithms. Our automated algorithm improves on previous procedures, with the best specificity for the detection of correct beats and the highest sensitivity to missed, extra, and artificially misplaced beats, as well as to real arrhythmic events. Near-optimal heartbeat classification and correction, together with the ability to adapt to time-varying changes of heartbeat dynamics in an online fashion, may provide a solid base for building a more reliable real-time HRV monitoring device.
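    The detection rule lends itself to a compact sketch. The fragment below fits an inverse Gaussian to a moving window of accepted R-R intervals and flags beats that fall in the tails of the fitted distribution; the window length, the threshold alpha, and the windowed maximum-likelihood fit are illustrative simplifications, not the authors' actual time-varying point-process model.

```python
# A minimal sketch of probability-based R-R screening under the assumptions above.
import numpy as np
from scipy.stats import invgauss

def fit_inverse_gaussian(rr):
    """Closed-form MLE for an inverse Gaussian: mean mu, shape lam."""
    mu = rr.mean()
    lam = len(rr) / np.sum(1.0 / rr - 1.0 / mu)
    return mu, lam

def screen_beats(rr_intervals, window=50, alpha=0.01):
    """Flag intervals in the tails of the fitted distribution and
    replace them with the current model mean."""
    rr = np.asarray(rr_intervals, dtype=float)
    accepted = list(rr[:window])                 # burn-in: trust the first window
    corrected, flags = list(rr[:window]), [False] * window
    for x in rr[window:]:
        mu, lam = fit_inverse_gaussian(np.array(accepted[-window:]))
        # scipy parametrization: invgauss(mu/lam, scale=lam) has mean mu, shape lam
        p = invgauss.cdf(x, mu / lam, scale=lam)
        bad = (p < alpha / 2) or (p > 1 - alpha / 2)   # two-sided tail test
        flags.append(bad)
        corrected.append(mu if bad else x)
        accepted.append(mu if bad else x)
    return np.array(corrected), np.array(flags)
```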

    Longer-term Baerveldt to Trabectome glaucoma surgery comparison using propensity score matching

    Purpose: To apply propensity score matching to compare the Baerveldt glaucoma drainage implant (BGI) with Trabectome-mediated ab interno trabeculectomy (AIT). Recent data suggest that AIT can produce results similar to BGI, which is traditionally reserved for more severe glaucoma. Methods: BGI and AIT patients with at least 1 year of follow-up were included. The primary outcome measures were intraocular pressure (IOP), number of glaucoma medications, and a Glaucoma Index (GI) score. GI reflected glaucoma severity based on visual field (VF), the number of preoperative medications, and preoperative IOP. Score matching used a genetic algorithm over age, gender, type of glaucoma, concurrent phacoemulsification, baseline number of medications, and baseline IOP. Patients with neovascular glaucoma, with prior glaucoma surgery, or without a close match were excluded. Results: Of 353 patients, 30 AIT patients were matched to 29 BGI patients. Baseline characteristics, including IOP, number of glaucoma medications, type of glaucoma, degree of VF loss, and GI, were not significantly different between AIT and BGI. Preoperative IOP was 21.6 ± 6.3 mmHg on 2.8 ± 1.1 medications for BGI, compared with 21.5 ± 7.4 mmHg on 2.5 ± 2.3 medications for AIT. At 30 months, the mean IOP was 15.0 ± 3.9 mmHg for AIT versus 15.0 ± 5.7 mmHg for BGI (p > 0.05), while the number of drops was 1.5 ± 1.3 for AIT (change: p = 0.001) versus 2.4 ± 1.2 for BGI (change: p = 0.17; AIT vs BGI: p = 0.007). Success rates, defined by IOP criteria, were similar (p > 0.05), at 50% versus 52% at 2.5 years. Conclusions: A propensity score matched comparison of AIT and BGI demonstrated a similar IOP reduction through 1 year. AIT required fewer medications.
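    As a rough illustration of the matching step, the sketch below estimates propensity scores with a logistic regression and pairs patients by 1:1 nearest-neighbor matching within a caliper. The column names and the caliper value are hypothetical, and the study itself used a genetic matching algorithm rather than this simpler scheme.

```python
# A minimal propensity-score matching sketch; `df` is an assumed pandas
# DataFrame with one row per patient and hypothetical column names.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_patients(df, treat_col="is_bgi", caliper=0.1,
                   covariates=("age", "male", "phaco",
                               "baseline_meds", "baseline_iop")):
    X = df[list(covariates)].values
    # propensity to receive BGI, estimated by logistic regression
    ps = LogisticRegression(max_iter=1000).fit(X, df[treat_col]).predict_proba(X)[:, 1]
    df = df.assign(pscore=ps)
    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        d = (controls["pscore"] - row["pscore"]).abs()
        if len(d) and d.min() <= caliper:   # drop patients without a close match
            j = d.idxmin()
            pairs.append((idx, j))
            controls = controls.drop(j)     # match without replacement
    return pairs
```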

    Evaluating the impact of public subsidies on a firm's performance : a quasi-experimental approach

    Many regional governments in developed countries design programs to improve the competitiveness of local firms. In this paper, we evaluate the effectiveness of public programs aimed at enhancing the performance of firms located in Catalonia (Spain). We compare the performance of publicly subsidised companies (treated) with that of similar but unsubsidised companies (non-treated). We use the Propensity Score Matching (PSM) methodology to construct a control group that, in terms of observable characteristics, is as similar as possible to the treated group, which allows us to identify unsubsidised firms with the same propensity to receive public subsidies. Once a valid comparison group has been established, we compare the respective performance of each firm. We find that recipient firms, on average, change their business practices, improve their performance, and increase their value added as a direct result of public subsidy programs.
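    Once matched pairs are in hand, the evaluation reduces to comparing outcomes within pairs. The sketch below computes the average treatment effect on the treated (ATT) and its standard error, reusing the `pairs` structure from the matching sketch above; the outcome column name is hypothetical.

```python
# A minimal sketch of the post-matching evaluation step: the ATT is the
# mean outcome gap between each subsidised firm and its matched twin.
import numpy as np

def att(df, pairs, outcome="value_added_growth"):
    diffs = [df.loc[t, outcome] - df.loc[c, outcome] for t, c in pairs]
    return np.mean(diffs), np.std(diffs, ddof=1) / np.sqrt(len(diffs))

# effect, se = att(firms, matched_pairs)  # point estimate and its standard error
```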

    Productive efficiency and regulatory reform : the case of vehicle inspection services

    Measuring productive efficiency provides information on the likely effects of regulatory reform. We present a Data Envelopment Analysis (DEA) of a sample of 38 vehicle inspection units operating under a concession regime between 2000 and 2004. The differences in efficiency scores show the potential technical-efficiency benefit of introducing some form of incentive regulation or of progressing towards liberalization. We also compute scale-efficiency scores, showing that only units in territories with very low population density operate at a sub-optimal scale. Among those operating at an optimal scale there are significant differences in size; the largest units operate in the territories with the highest population density. This suggests that introducing new units in the most densely populated territories (a likely effect of some form of liberalization) would not be detrimental in terms of scale efficiency. We also find that inspection units belonging to a large, diversified firm show higher technical efficiency, reflecting economies of scale or scope at the firm level. Finally, we show that between 2002 and 2004, a period of high regulatory uncertainty in the sample's region, technical change was almost zero. Regulatory reform should take due account of scale and diversification effects, while avoiding regulatory uncertainty.
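    For readers unfamiliar with DEA, the sketch below solves the standard input-oriented, constant-returns-to-scale (CCR) envelopment problem one unit at a time with scipy. The paper's exact model specification is not reproduced in the abstract, so treat this as a generic formulation rather than the authors' model.

```python
# A minimal input-oriented CCR DEA sketch. X is (n_units, n_inputs) and
# Y is (n_units, n_outputs); a score of 1.0 means the unit is on the frontier.
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    n = X.shape[0]
    scores = np.empty(n)
    for k in range(n):
        # variables: [theta, lambda_1 .. lambda_n]; minimise theta
        c = np.r_[1.0, np.zeros(n)]
        # inputs:  sum_j lambda_j x_ij - theta x_ik <= 0
        A_in = np.c_[-X[k][:, None], X.T]
        # outputs: -sum_j lambda_j y_rj <= -y_rk
        A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[k]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores[k] = res.fun
    return scores
```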

    Assessing the assignation of public subsidies : do the experts choose the most efficient R&D projects?

    The implementation of public programs to support business R&D projects requires the establishment of a selection process. This selection process faces various difficulties, including the measurement of the impact of R&D projects and the optimization of selection among projects with multiple, and sometimes incomparable, performance indicators. To this end, public agencies generally use the peer review method, which, while presenting some advantages, also has significant drawbacks. Private firms, on the other hand, tend toward more quantitative methods, such as Data Envelopment Analysis (DEA), in their pursuit of R&D investment optimization. In this paper, the performance of a public agency's peer review method of project selection is compared with an alternative DEA method.
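    The comparison itself can be reduced to a rank-agreement check: score every project with DEA (for instance with the `dea_ccr_input` sketch above, using project resources as inputs and performance indicators as outputs), then correlate that ranking with the experts' peer-review ranking. The variable names below are hypothetical.

```python
# A minimal sketch of comparing peer-review and DEA project rankings.
from scipy.stats import spearmanr

def rank_agreement(peer_scores, dea_scores):
    """Spearman rank correlation between the two project rankings."""
    rho, pval = spearmanr(peer_scores, dea_scores)
    return rho, pval

# rho, pval = rank_agreement(peer_scores, dea_ccr_input(X, Y))
```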

    Goodness-of-fit tests for neural population models: the multivariate time-rescaling theorem

    Poster Presentation from the Nineteenth Annual Computational Neuroscience Meeting: CNS*2010, San Antonio, TX, USA, 24-30 July 2010. Statistical models of neural activity are at the core of modern computational neuroscience. The activity of single neurons has been modeled to successfully explain the dependence of neural dynamics on the neuron's own spiking history, on external stimuli, or on other covariates [1]. Recently, there has been growing interest in modeling the spiking activity of a population of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing (existing models include generalized linear models [2,3] and maximum-entropy approaches [4]). For point-process-based models of single neurons, the time-rescaling theorem has proven to be a useful tool for assessing goodness-of-fit. In its univariate form, the time-rescaling theorem states that if the conditional intensity function of a point process is known, then its inter-spike intervals can be transformed, or "rescaled", so that they are independent and exponentially distributed [5]. However, the theorem in its original form lacks sensitivity to detect even strong dependencies between neurons. Here, we present how the theorem can be extended to apply to neural population models, and we provide a step-by-step procedure for performing the statistical tests. We then apply both the univariate and multivariate tests to simplified toy models, as well as to more complicated many-neuron models and to neuronal populations recorded in V1 of an awake monkey during natural-scene stimulation. We demonstrate that important features of the population activity can only be detected using the multivariate extension of the test.
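    A minimal sketch of the univariate test described above: integrate the model's conditional intensity between successive spikes, map the rescaled intervals to (0, 1), and run a Kolmogorov-Smirnov test against the uniform distribution. Representing the intensity on a fine time grid is an assumption of this sketch; the multivariate extension that is the subject of the poster adds cross-neuron conditioning that this univariate version cannot capture.

```python
# Univariate time-rescaling goodness-of-fit test for a point-process model.
import numpy as np
from scipy.stats import kstest

def time_rescaling_test(spike_times, t_grid, intensity):
    # cumulative integral Lambda(t) = int_0^t lambda(u) du on the grid
    Lambda = np.concatenate([[0.0],
                             np.cumsum(np.diff(t_grid) * intensity[:-1])])
    L_at_spikes = np.interp(spike_times, t_grid, Lambda)
    taus = np.diff(L_at_spikes)      # iid Exp(1) if the model is correct
    z = 1.0 - np.exp(-taus)          # iid Uniform(0, 1) under the model
    return kstest(z, "uniform")
```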

    Instantaneous monitoring of heart beat dynamics during anesthesia and sedation

    Anesthesia-induced altered arousal depends on drugs having their effect in specific brain regions. These effects are also reflected in autonomic nervous system (ANS) outflow dynamics. Instantaneous monitoring of ANS outflow, based on neurophysiological and computational modeling, may therefore provide a more accurate assessment of the action of anesthetic agents on the cardiovascular system. This will aid anesthesia care providers in maintaining homeostatic equilibrium and help to minimize drug administration while maintaining antinociceptive effects. In previous studies, we established a point-process paradigm for analyzing heartbeat dynamics and successfully applied these methods to a wide range of cardiovascular data and protocols. We recently devised a novel instantaneous nonlinear assessment of ANS outflow, suitable and effective for real-time monitoring of the fast hemodynamic and autonomic effects during induction of and emergence from anesthesia. Our goal is to demonstrate that our framework is suitable for instantaneous monitoring of the ANS response during administration of a broad range of anesthetic drugs. Specifically, we compare the hemodynamic and autonomic effects in study participants undergoing propofol (PROP) and dexmedetomidine (DMED) administration. Our methods provide an instantaneous characterization of autonomic state at different stages of sedation and anesthesia by tracking autonomic dynamics at very high time resolution. Our results suggest that refined methods for analyzing linear and nonlinear heartbeat dynamics during administration of specific anesthetic drugs are able to overcome nonstationarity limitations and reduce inter-subject variability, providing a potential real-time monitoring approach for patients receiving anesthesia.
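    As a coarse stand-in for the instantaneous point-process estimates described above, the sketch below tracks windowed heartbeat statistics over time, reusing the `fit_inverse_gaussian` helper from the first sketch. True point-process methods update the model at every instant via local maximum likelihood; a sliding window over R-R intervals is a simplification, and the window and step sizes are illustrative.

```python
# A minimal sketch of windowed heartbeat tracking during drug administration.
# Assumes rr_times[i] is the time (in seconds) at which rr_intervals[i] ends.
import numpy as np

def track_dynamics(rr_intervals, rr_times, window=60.0, step=5.0):
    """Return rows of (time, mean HR in bpm, R-R standard deviation)."""
    rr = np.asarray(rr_intervals, float)
    t = np.asarray(rr_times, float)
    out = []
    for t0 in np.arange(t[0] + window, t[-1], step):
        w = rr[(t > t0 - window) & (t <= t0)]
        if len(w) >= 10:
            mu, lam = fit_inverse_gaussian(w)   # model-based mean interval
            out.append((t0, 60.0 / mu, np.sqrt(mu ** 3 / lam)))  # IG std dev
    return np.array(out)
```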

    IEA EBC Annex 57 ‘Evaluation of Embodied Energy and CO₂eq for Building Construction'

    Current regulations to reduce energy consumption and greenhouse gas (GHG) emissions from buildings have focused on operational energy consumption; legislation thus excludes the measurement and reduction of the embodied energy and embodied GHG emissions incurred over the building life cycle. Embodied impacts are a significant and growing proportion of a building's total life-cycle impacts, and it is increasingly recognized that the focus on reducing operational energy consumption needs to be accompanied by a parallel focus on reducing embodied impacts. Over the last six years, Annex 57 has addressed this issue, with researchers from 15 countries working together to develop a detailed understanding of the multiple calculation methods and the interpretation of their results. Based on an analysis of 80 case studies, Annex 57 identified various inconsistencies in current methodological approaches, which inhibit comparison of results and hinder the development of robust reduction strategies. Reinterpreting the studies through an understanding of these methodological differences enabled the cases to be used to demonstrate a number of important strategies for the reduction of embodied impacts. Annex 57 has also produced clear recommendations for uniform definitions and templates which improve the description of system boundaries, the completeness of inventories, and the quality of data, and consequently the transparency of embodied impact assessments.