    A Real-Time Automated Point-Process Method for the Detection and Correction of Erroneous and Ectopic Heartbeats

    The presence of recurring arrhythmic events (also known as cardiac dysrhythmia or irregular heartbeats), as well as erroneous beat detection due to low signal quality, significantly affects the estimation of both time- and frequency-domain indices of heart rate variability (HRV). Reliable, real-time classification and correction of ECG-derived heartbeats is a necessary prerequisite for accurate online monitoring of HRV and cardiovascular control. We have developed a novel point-process-based method for real-time R-R interval error detection and correction. Given an R-wave event, we assume that the length of the next R-R interval follows a physiologically motivated, time-varying inverse Gaussian probability distribution. We then devise an instantaneous automated detection and correction procedure for erroneous and arrhythmic beats, using the probability of occurrence of the observed beat under the model. We test our algorithm on two datasets from the PhysioNet archive. The Fantasia normal-rhythm database is artificially corrupted with known erroneous beats to test both the detection and the correction procedures. The benchmark MIT-BIH Arrhythmia database is further considered to test the detection of real arrhythmic events and compare it with results from previously published algorithms. Our automated algorithm improves on previous procedures, with the best specificity for the detection of correct beats and the highest sensitivity to missed, extra, and artificially misplaced beats, as well as to real arrhythmic events. Near-optimal heartbeat classification and correction, together with the ability to adapt online to time-varying changes in heartbeat dynamics, may provide a solid base for building a more reliable real-time HRV monitoring device.
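As a rough sketch of the detection idea described above (not the authors' published algorithm; the parameter values, the tail threshold, and the function names are illustrative assumptions), one can flag a beat whose observed R-R interval falls in either tail of the predicted inverse Gaussian distribution, using the distribution's closed-form CDF in place of the full time-varying point-process model:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def inv_gauss_cdf(w, mu, lam):
    """Closed-form CDF of an inverse Gaussian with mean mu and shape lam,
    evaluated at an R-R interval of length w (seconds). For very large
    lam/mu the exp term can overflow; a log-space version is safer."""
    a = math.sqrt(lam / w)
    return (normal_cdf(a * (w / mu - 1.0))
            + math.exp(2.0 * lam / mu) * normal_cdf(-a * (w / mu + 1.0)))

def is_suspect_beat(rr, mu, lam, alpha=0.025):
    """Flag an R-R interval falling in either tail of the predicted
    distribution: a candidate erroneous or ectopic beat."""
    p = inv_gauss_cdf(rr, mu, lam)
    return p < alpha or p > 1.0 - alpha
```

For example, with a predicted mean interval of 0.8 s, an observed interval of 0.2 s (an implausibly early beat) is flagged, while 0.8 s is not.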

    Longer-term Baerveldt to Trabectome glaucoma surgery comparison using propensity score matching

    Purpose: To apply propensity score matching to compare the Baerveldt glaucoma drainage implant (BGI) to Trabectome-mediated ab interno trabeculectomy (AIT). Recent data suggest that AIT can produce results similar to BGI, which is traditionally reserved for more severe glaucoma. Methods: BGI and AIT patients with at least 1 year of follow-up were included. The primary outcome measures were intraocular pressure (IOP), number of glaucoma medications, and a Glaucoma Index (GI) score. GI reflected glaucoma severity based on visual field, the number of preoperative medications, and preoperative IOP. Score matching used a genetic algorithm consisting of age, gender, type of glaucoma, concurrent phacoemulsification, baseline number of medications, and baseline IOP. Patients with neovascular glaucoma, with prior glaucoma surgery, or without a close match were excluded. Results: Of 353 patients, 30 AIT patients were matched to 29 BGI patients. Baseline characteristics, including IOP, number of glaucoma medications, type of glaucoma, degree of VF loss, and GI, were not significantly different between AIT and BGI. BGI had a preoperative IOP of 21.6 ± 6.3 mmHg on 2.8 ± 1.1 medications, compared with 21.5 ± 7.4 mmHg on 2.5 ± 2.3 medications for AIT. At 30 months, the mean IOP was 15.0 ± 3.9 mmHg for AIT versus 15.0 ± 5.7 mmHg for BGI (p > 0.05), while the number of drops was 1.5 ± 1.3 for AIT (change: p = 0.001) versus 2.4 ± 1.2 for BGI (change: p = 0.17; AIT vs BGI: p = 0.007). Success, defined as IOP  0.05) and 50% versus 52% at 2.5 years. Conclusions: A propensity score matched comparison of AIT and BGI demonstrated a similar IOP reduction through 1 year. AIT required fewer medications.
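The matching step can be illustrated with a minimal nearest-neighbour matcher on precomputed propensity scores. The study above used a genetic matching algorithm; this greedy 1:1 caliper match, and its function names and default caliper, are simplifying assumptions:

```python
def match_nearest(treated_ps, control_ps, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.
    Returns (treated_index, control_index) pairs; each control is used
    at most once, and treated units with no control within the caliper
    are left unmatched (cf. 'without a close match were excluded')."""
    pairs, used = [], set()
    for i, pt in enumerate(treated_ps):
        best, best_d = None, caliper
        for j, pc in enumerate(control_ps):
            if j in used:
                continue
            d = abs(pt - pc)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```

The caliper is what causes unequal matched-group sizes (30 vs 29 here): a treated unit with no sufficiently close control is dropped rather than force-matched.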

    Assessing the assignation of public subsidies : do the experts choose the most efficient R&D projects?

    The implementation of public programs to support business R&D projects requires the establishment of a selection process. This selection process faces various difficulties, which include the measurement of the impact of the R&D projects as well as selection process optimization among projects with multiple, and sometimes incomparable, performance indicators. To this end, public agencies generally use the peer review method, which, while presenting some advantages, also demonstrates significant drawbacks. Private firms, on the other hand, tend toward more quantitative methods, such as Data Envelopment Analysis (DEA), in their pursuit of R&D investment optimization. In this paper, the performance of a public agency peer review method of project selection is compared with an alternative DEA method.
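In general, DEA solves one linear program per evaluated unit; in the special case of a single input and a single output, the input-oriented CCR efficiency score reduces to each unit's output/input ratio divided by the best ratio in the sample. A minimal sketch of that single-ratio case (function name assumed):

```python
def dea_ccr_single(inputs, outputs):
    """Input-oriented CCR (constant returns to scale) DEA efficiency for
    one input and one output per decision-making unit: each unit's
    productivity ratio relative to the best observed ratio.
    Units scoring 1.0 lie on the efficient frontier."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

With multiple inputs and outputs (as in R&D project selection), the same frontier logic applies but requires a linear-programming solver.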

    IEA EBC Annex 57 ‘Evaluation of Embodied Energy and CO2eq for Building Construction’

    The current regulations to reduce energy consumption and greenhouse gas (GHG) emissions from buildings have focused on operational energy consumption. Legislation thus excludes measurement and reduction of the embodied energy and embodied GHG emissions over the building life cycle. Embodied impacts are a significant and growing proportion of a building's life-cycle impacts, and it is increasingly recognized that the focus on reducing operational energy consumption needs to be accompanied by a parallel focus on reducing embodied impacts. Over the last six years, Annex 57 has addressed this issue, with researchers from 15 countries working together to develop a detailed understanding of the multiple calculation methods and the interpretation of their results. Based on an analysis of 80 case studies, Annex 57 showed various inconsistencies in current methodological approaches, which inhibit comparison of results and hinder the development of robust reduction strategies. Reinterpreting the studies through an understanding of the methodological differences enabled the cases to be used to demonstrate a number of important strategies for the reduction of embodied impacts. Annex 57 has also produced clear recommendations for uniform definitions and templates, which improve the description of system boundaries, the completeness of inventories, and the quality of data, and consequently the transparency of embodied impact assessments.
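At its core, an embodied-impact inventory multiplies material quantities by emission factors and sums over the bill of materials; the inconsistencies documented above arise largely from differing system boundaries and data quality around this simple calculation. A minimal sketch (the material names and factor values used in any real run would come from a declared database, not from this illustration):

```python
def embodied_co2e(bill_of_materials, emission_factors):
    """Cradle-to-gate embodied CO2-equivalent: sum over materials of
    mass (kg) times emission factor (kg CO2e per kg). A real assessment
    must also declare which life-cycle modules and data sources the
    factors cover, which is exactly what the Annex recommendations
    on system boundaries and data quality address."""
    return sum(mass * emission_factors[material]
               for material, mass in bill_of_materials.items())
```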

    Productive efficiency and regulatory reform : the case of vehicle inspection services

    Measuring productive efficiency provides information on the likely effects of regulatory reform. We present a Data Envelopment Analysis (DEA) of a sample of 38 vehicle inspection units under a concession regime, between the years 2000 and 2004. The differences in efficiency scores show the potential technical efficiency benefit of introducing some form of incentive regulation or of progressing towards liberalization. We also compute scale efficiency scores, showing that only units in territories with very low population density operate at a sub-optimal scale. Among those that operate at an optimal scale, there are significant differences in size; the largest ones operate in territories with the highest population density. This suggests that the introduction of new units in the most densely populated territories (a likely effect of some form of liberalization) would not be detrimental in terms of scale efficiency. We also find that inspection units belonging to a large, diversified firm show higher technical efficiency, reflecting economies of scale or scope at the firm level. Finally, we show that between 2002 and 2004, a period of high regulatory uncertainty in the sample's region, technical change was almost zero. Regulatory reform should take due account of scale and diversification effects, while at the same time avoiding regulatory uncertainty
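The scale-efficiency scores mentioned above follow the standard DEA decomposition: a unit's constant-returns (CRS) technical-efficiency score divided by its variable-returns (VRS) score, with a value of 1.0 indicating operation at optimal scale. A minimal sketch with an assumed function name (the scores in the usage note are hypothetical, not values from the paper):

```python
def scale_efficiency(theta_crs, theta_vrs):
    """Scale efficiency = CRS technical efficiency / VRS ('pure')
    technical efficiency. Equals 1.0 when the unit operates at its most
    productive scale size; values below 1.0 indicate that part of the
    overall inefficiency is due to operating at a sub-optimal scale."""
    if not (0.0 < theta_crs <= theta_vrs <= 1.0):
        raise ValueError("expected 0 < theta_crs <= theta_vrs <= 1")
    return theta_crs / theta_vrs
```

For instance, a unit with a CRS score of 0.72 and a VRS score of 0.90 has scale efficiency 0.8: a fifth of its measured inefficiency is attributable to its scale of operation rather than to wasteful practice.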

    R-process enrichment from a single event in an ancient dwarf galaxy

    Elements heavier than zinc are synthesized through the (r)apid and (s)low neutron-capture processes. The main site of production of the r-process elements (such as europium) has been debated for nearly 60 years. Initial studies of chemical abundance trends in old Milky Way halo stars suggested continual r-process production, in sites like core-collapse supernovae. But evidence from the local Universe favors r-process production mainly during rare events, such as neutron star mergers. The appearance of a europium abundance plateau in some dwarf spheroidal galaxies has been suggested as evidence for rare r-process enrichment in the early Universe, but only under the assumption of no gas accretion into the dwarf galaxies. Cosmologically motivated gas accretion favors continual r-process enrichment in these systems. Furthermore, the universal r-process pattern has not been cleanly identified in dwarf spheroidals. The smaller, chemically simpler, and more ancient ultra-faint dwarf galaxies assembled shortly after the first stars formed, and are ideal systems with which to study nucleosynthesis events such as the r-process. Reticulum II is one such galaxy. The abundances of non-neutron-capture elements in this galaxy (and others like it) are similar to those of other old stars. Here, we report that seven of nine stars in Reticulum II observed with high-resolution spectroscopy show strong enhancements in heavy neutron-capture elements, with abundances that follow the universal r-process pattern above barium. The enhancement in this "r-process galaxy" is 2-3 orders of magnitude higher than that detected in any other ultra-faint dwarf galaxy. This implies that a single rare event produced the r-process material in Reticulum II. The r-process yield and event rate are incompatible with ordinary core-collapse supernovae, but consistent with other possible sites, such as neutron star mergers. Comment: Published in Nature, 21 Mar 2016: http://dx.doi.org/10.1038/nature1742

    Evaluating the impact of public subsidies on a firm's performance : a quasi-experimental approach

    Many regional governments in developed countries design programs to improve the competitiveness of local firms. In this paper, we evaluate the effectiveness of public programs whose aim is to enhance the performance of firms located in Catalonia (Spain). We compare the performance of publicly subsidised companies (treated) with that of similar, but unsubsidised, companies (non-treated). We use the Propensity Score Matching (PSM) methodology to construct a control group which, with respect to its observable characteristics, is as similar as possible to the treated group, allowing us to identify untreated firms with the same propensity to receive public subsidies. Once a valid comparison group has been established, we compare the respective performance of each firm. We find that recipient firms, on average, change their business practices, improve their performance, and increase their value added as a direct result of public subsidy programs.
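Once a matched control group exists, the program effect is estimated as the average within-pair outcome difference, the ATT (average treatment effect on the treated). A minimal sketch, with function and argument names assumed:

```python
def att_estimate(pairs, y_treated, y_control):
    """Average treatment effect on the treated: mean outcome difference
    across matched (treated_index, control_index) pairs, e.g. with the
    outcome being value added or another performance measure."""
    if not pairs:
        raise ValueError("no matched pairs")
    return sum(y_treated[i] - y_control[j] for i, j in pairs) / len(pairs)
```

The credibility of this estimate rests entirely on the matching step: the control firms must be comparable on the observables that drive both subsidy receipt and performance.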

    Instantaneous monitoring of heart beat dynamics during anesthesia and sedation

    Anesthesia-induced altered arousal depends on drugs having their effect in specific brain regions. These effects are also reflected in autonomic nervous system (ANS) outflow dynamics. To this end, instantaneous monitoring of ANS outflow, based on neurophysiological and computational modeling, may provide a more accurate assessment of the action of anesthetic agents on the cardiovascular system. This will aid anesthesia care providers in maintaining homeostatic equilibrium and help to minimize drug administration while maintaining antinociceptive effects. In previous studies, we established a point process paradigm for analyzing heartbeat dynamics and have successfully applied these methods to a wide range of cardiovascular data and protocols. We recently devised a novel instantaneous nonlinear assessment of ANS outflow, also suitable and effective for real-time monitoring of the fast hemodynamic and autonomic effects during induction and emergence from anesthesia. Our goal is to demonstrate that our framework is suitable for instantaneous monitoring of the ANS response during administration of a broad range of anesthetic drugs. Specifically, we compare the hemodynamic and autonomic effects in study participants undergoing propofol (PROP) and dexmedetomidine (DMED) administration. Our methods provide an instantaneous characterization of autonomic state at different stages of sedation and anesthesia by tracking autonomic dynamics at very high time-resolution. Our results suggest that refined methods for analyzing linear and nonlinear heartbeat dynamics during administration of specific anesthetic drugs are able to overcome nonstationarity limitations as well as reduce inter-subject variability, thus providing a potential real-time monitoring approach for patients receiving anesthesia.
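The framework described above is a full point-process model; as a much cruder stand-in that still conveys the idea of beat-by-beat, high-time-resolution monitoring, one can track exponentially weighted estimates of the mean R-R interval and its variance as each beat arrives (the smoothing constant and function names are assumptions, not part of the authors' method):

```python
def track_rr(rr_intervals, alpha=0.2):
    """Per-beat exponentially weighted tracking of heart rate and R-R
    variance. Returns one (heart_rate_bpm, rr_variance) pair per beat,
    updating incrementally as each new R-R interval (seconds) arrives,
    so estimates adapt to nonstationary dynamics such as induction of
    and emergence from anesthesia."""
    mu, var, out = rr_intervals[0], 0.0, []
    for rr in rr_intervals:
        mu = (1.0 - alpha) * mu + alpha * rr
        var = (1.0 - alpha) * var + alpha * (rr - mu) ** 2
        out.append((60.0 / mu, var))
    return out
```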

    Isotopic variation of parity violation in atomic ytterbium

    We report on measurements of atomic parity violation, made on a chain of ytterbium isotopes with mass numbers A = 170, 172, 174, and 176. In the experiment, we optically excite the 6s² ¹S₀ → 5d6s ³D₁ transition in a region of crossed electric and magnetic fields, and observe the interference between the Stark- and weak-interaction-induced transition amplitudes by making field reversals that change the handedness of the coordinate system. This allows us to determine the ratio of the weak-interaction-induced electric-dipole (E1) transition moment to the Stark-induced E1 moment. Our measurements, which are at the 0.5% level of accuracy for three of the four isotopes measured, allow a definitive observation of the isotopic variation of weak-interaction effects in an atom, which is found to be consistent with the prediction of the Standard Model. In addition, our measurements provide information about an additional Z' boson. Comment: 19 pages, 4 figures, 2 tables

    Measuring the signal-to-noise ratio of a neuron

    The signal-to-noise ratio (SNR), a commonly used measure of fidelity in physical systems, is defined as the ratio of the squared amplitude or variance of a signal relative to the variance of the noise. This definition is not appropriate for neural systems in which spiking activity is more accurately represented as point processes. We show that the SNR estimates a ratio of expected prediction errors and extend the standard definition to one appropriate for single neurons by representing neural spiking activity using point process generalized linear models (PP-GLM). We estimate the prediction errors using the residual deviances from the PP-GLM fits. Because the deviance is approximately a χ² random variable, we compute a bias-corrected SNR estimate appropriate for single-neuron analysis and use the bootstrap to assess its uncertainty. In the analyses of four systems neuroscience experiments, we show that the SNRs are -10 dB to -3 dB for guinea pig auditory cortex neurons, -18 dB to -7 dB for rat thalamic neurons, -28 dB to -14 dB for monkey hippocampal neurons, and -29 dB to -20 dB for human subthalamic neurons. The new SNR definition makes explicit, in the measure commonly used for physical systems, the often-quoted observation that single neurons have low SNRs. The neuron's spiking history is frequently a more informative covariate for predicting spiking propensity than the applied stimulus. Our new SNR definition extends to any GLM system in which the factors modulating the response can be expressed as separate components of a likelihood function.
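Stripped of the bias correction and bootstrap machinery, the definition reduces to the ratio of the prediction-error reduction achieved by the covariates to the residual prediction error, expressed in decibels. A simplified sketch (function and argument names are assumptions):

```python
import math

def point_process_snr_db(deviance_null, deviance_model):
    """SNR estimate for a single neuron from PP-GLM residual deviances:
    the prediction error explained by the covariates relative to the
    error that remains, in dB. (The paper's bias correction and
    bootstrap uncertainty assessment are omitted here.)"""
    if deviance_model <= 0 or deviance_null <= deviance_model:
        raise ValueError("need deviance_null > deviance_model > 0")
    snr = (deviance_null - deviance_model) / deviance_model
    return 10.0 * math.log10(snr)
```

A model that reduces the null deviance by only a tenth of the residual deviance scores -10 dB, at the top of the range reported above for auditory cortex neurons.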