
    Hemodynamic Efficacy of a Novel Catheter-Deployed Intra-Aortic Micro-Axial Entrainment Pump in a Porcine Acute HF Model

    Get PDF
    Magazine reporting on the activities and programs of the University Hospital at Boston University Medical Center

    The Sample Complexity of Dictionary Learning

    Full text link
    A large set of signals can sometimes be described sparsely using a dictionary, that is, every element can be represented as a linear combination of few elements from the dictionary. Algorithms for various signal processing applications, including classification, denoising and signal separation, learn a dictionary from a set of signals to be represented. Can we expect that the representation found by such a dictionary for a previously unseen example from the same source will have L_2 error of the same magnitude as that for the given examples? We assume signals are generated from a fixed distribution, and study this question from a statistical learning theory perspective. We develop generalization bounds on the quality of the learned dictionary for two types of constraints on the coefficient selection, as measured by the expected L_2 error in representation when the dictionary is used. For the case of l_1 regularized coefficient selection we provide a generalization bound of the order of O(sqrt(np log(m lambda)/m)), where n is the dimension, p is the number of elements in the dictionary, lambda is a bound on the l_1 norm of the coefficient vector and m is the number of samples, which complements existing results. For the case of representing a new signal as a combination of at most k dictionary elements, we provide a bound of the order O(sqrt(np log(m k)/m)) under an assumption on the level of orthogonality of the dictionary (low Babel function). We further show that this assumption holds for most dictionaries in high dimensions in a strong probabilistic sense. Our results further yield fast rates of order 1/m, as opposed to 1/sqrt(m), using localized Rademacher complexity. We provide similar results in a general setting using kernels with weak smoothness requirements.
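    For readers who want to see the generalization question in practice, the following is a minimal sketch using scikit-learn's DictionaryLearning with l_1 regularized (lasso) coefficient selection; the signal dimension, dictionary size, sample counts and regularization strength are illustrative placeholders, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
n, p, m = 20, 30, 500          # signal dimension, dictionary size, training samples

# Synthetic source: sparse combinations of a hidden ground-truth dictionary.
true_dict = rng.randn(p, n)
def sample(num):
    codes = rng.randn(num, p) * (rng.rand(num, p) < 0.1)   # ~10% non-zero coefficients
    return codes @ true_dict + 0.01 * rng.randn(num, n)

X_train, X_test = sample(m), sample(200)

# Learn a dictionary with l_1 regularized coefficient selection (lasso coding).
dl = DictionaryLearning(n_components=p, transform_algorithm="lasso_lars",
                        transform_alpha=0.1, fit_algorithm="lars",
                        max_iter=50, random_state=0)
codes_train = dl.fit_transform(X_train)
codes_test = dl.transform(X_test)

def mean_sq_l2_error(X, codes):
    # Expected squared L_2 representation error under the learned dictionary.
    return np.mean(np.sum((X - codes @ dl.components_) ** 2, axis=1))

# The generalization question from the abstract: is the representation error
# on unseen signals comparable to the error on the training signals?
print("train error:", mean_sq_l2_error(X_train, codes_train))
print("test  error:", mean_sq_l2_error(X_test, codes_test))
```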

    Construction of a Resting High Fidelity ECG "SuperScore" for Management and Screening of Heart Disease

    Get PDF
    Resting conventional ECG is notoriously insensitive for detecting coronary artery disease (CAD) and only nominally useful in screening for cardiomyopathy (CM). Similarly, conventional exercise stress test ECG is time-consuming and labor-intensive, and its accuracy in identifying CAD is suboptimal for population screening. We retrospectively investigated the accuracy of several advanced resting electrocardiographic (ECG) parameters, both alone and in combination, for detecting CAD and CM.
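    The abstract does not specify how the individual ECG parameters are combined into a single "SuperScore"; the sketch below simply illustrates one conventional way such a composite could be built and evaluated, using logistic regression on synthetic data with hypothetical parameter columns.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
n_patients = 400
# Hypothetical advanced resting ECG parameters, one column each (placeholders).
X = rng.randn(n_patients, 4)
# Synthetic outcome labels: True = CAD/CM present, False = absent (placeholder).
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.3 * rng.randn(n_patients)) > 0

# Combined score: logistic regression over all parameters, scored by AUC.
auc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC of the combined score:", auc.mean().round(3))

# Each parameter alone, for comparison with the combination.
for j in range(X.shape[1]):
    auc_j = cross_val_score(LogisticRegression(), X[:, [j]], y, cv=5,
                            scoring="roc_auc")
    print(f"parameter {j} alone: AUC {auc_j.mean():.3f}")
```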

    Masses of Neutron Stars in High-Mass X-ray Binaries with Optical Astrometry

    Full text link
    Determining the type of matter inside a neutron star (NS) has been a long-standing goal of astrophysics. Despite this, most NS equations of state (EOS) that predict maximum masses in the range 1.4-2.8 solar masses are still viable. Most of the precise NS mass measurements made to date cluster close to 1.4 solar masses, but a reliable measurement of an over-massive NS would narrow down the EOS possibilities. Here, we investigate how optical astrometry at the microarcsecond level can be used to map out the orbits of High-Mass X-ray Binaries (HMXBs), leading to tight constraints on NS masses. While previous studies by Unwin and co-workers and Tomsick and co-workers argued that the future Space Interferometry Mission should be capable of making such measurements, the current work describes detailed simulations for 6 HMXB systems, including predicted constraints on all orbital parameters. We find that the NS masses can be measured directly to an accuracy of 2.5% (1-sigma) in the best case (X Per), to 6.5% for Vela X-1, and to 10% for two other HMXBs. Comment: 8 pages, accepted by ApJ.
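    As a rough illustration of how an astrometrically measured inclination tightens a dynamical mass, the sketch below solves the standard binary mass function for the NS mass given the companion's radial-velocity semi-amplitude, the orbital period, the companion mass and the inclination; the numerical values are placeholders of roughly the right scale for a Vela X-1-like system, not results from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Binary mass function from the optical companion's radial-velocity orbit:
#   f(M) = K_opt^3 P / (2 pi G) = (M_ns sin i)^3 / (M_ns + M_opt)^2
# With i from the astrometric orbit and M_opt estimated, solve for M_ns.

G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg

P = 8.96 * 86400.0       # orbital period (s), placeholder
K_opt = 22.0e3           # optical RV semi-amplitude (m/s), placeholder
M_opt = 24.0 * M_SUN     # companion mass (kg), placeholder
incl = np.radians(78.0)  # inclination from astrometry, placeholder

f = K_opt**3 * P / (2.0 * np.pi * G)          # mass function (kg)

def residual(m_ns):
    return (m_ns * np.sin(incl))**3 / (m_ns + M_opt)**2 - f

m_ns = brentq(residual, 0.1 * M_SUN, 10.0 * M_SUN)
print(f"NS mass: {m_ns / M_SUN:.2f} Msun")
```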

    t-DCF: a Detection Cost Function for the Tandem Assessment of Spoofing Countermeasures and Automatic Speaker Verification

    Get PDF
    The ASVspoof challenge series was born to spearhead research in anti-spoofing for automatic speaker verification (ASV). The two challenge editions in 2015 and 2017 involved the assessment of spoofing countermeasures (CMs) in isolation from ASV using an equal error rate (EER) metric. While a strategic approach to assessment at the time, it has certain shortcomings. First, the CM EER is not necessarily a reliable predictor of performance when ASV and CMs are combined. Second, the EER operating point is ill-suited to user authentication applications, e.g. telephone banking, characterised by a high target user prior but a low spoofing attack prior. We aim to migrate from CM- to ASV-centric assessment with the aid of a new tandem detection cost function (t-DCF) metric. It extends the conventional DCF used in ASV research to scenarios involving spoofing attacks. The t-DCF metric has 6 parameters: (i) false alarm and miss costs for both systems, and (ii) prior probabilities of target and spoof trials (with an implied third, non-target prior). The study is intended to serve as a self-contained, tutorial-like presentation. We analyse with the t-DCF a selection of top-performing CM submissions to the 2015 and 2017 editions of ASVspoof, with a focus on the spoofing attack prior. Whereas there is little to choose between countermeasure systems for lower priors, system rankings derived with the EER and t-DCF show differences for higher priors, and we observe some ranking changes. Findings support the adoption of the DCF-based metric into the roadmap for future ASVspoof challenges, and possibly for other biometric anti-spoofing evaluations.
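    A simplified numerical illustration of the tandem idea follows: a trial is accepted only when both the countermeasure and the ASV system accept it, and the resulting miss and false-alarm rates are weighted by costs and priors. The score distributions, costs and priors are placeholders, and the weighting is a simplified stand-in for the exact published t-DCF rather than a reimplementation of it.

```python
import numpy as np

rng = np.random.RandomState(0)

def scores(n, mean):            # toy Gaussian detection scores
    return rng.normal(mean, 1.0, n)

# CM scores (higher = more likely bona fide) and ASV scores (higher = target speaker).
cm_tar, cm_non, cm_spf = scores(10000, 2.0), scores(10000, 2.0), scores(10000, -2.0)
asv_tar, asv_non, asv_spf = scores(10000, 2.0), scores(10000, -2.0), scores(10000, 1.0)

C_miss, C_fa, C_fa_spoof = 1.0, 10.0, 10.0      # placeholder costs
pi_tar, pi_non, pi_spoof = 0.94, 0.05, 0.01     # high target prior, low spoof prior

asv_thr = 0.0                                    # ASV operating point held fixed
def tandem_cost(cm_thr):
    # A trial is accepted only if the CM and the ASV both accept it.
    p_miss = np.mean((cm_tar < cm_thr) | (asv_tar < asv_thr))
    p_fa_non = np.mean((cm_non >= cm_thr) & (asv_non >= asv_thr))
    p_fa_spf = np.mean((cm_spf >= cm_thr) & (asv_spf >= asv_thr))
    return (C_miss * pi_tar * p_miss
            + C_fa * pi_non * p_fa_non
            + C_fa_spoof * pi_spoof * p_fa_spf)

thresholds = np.linspace(-4, 4, 81)
costs = [tandem_cost(t) for t in thresholds]
best = thresholds[int(np.argmin(costs))]
print(f"minimum tandem cost {min(costs):.4f} at CM threshold {best:.2f}")
```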

    NICMOS Imaging of Molecular Hydrogen Emission in Seyfert Galaxies

    Get PDF
    We present NICMOS imaging of broad band and molecular hydrogen emission in Seyfert galaxies. In 6 of 10 Seyferts we detect resolved or extended emission in the 1-0 S(1) 2.121 micron or 1-0 S(3) 1.9570 micron molecular hydrogen lines. We did not detect emission in the most distant galaxy or in the 2 Seyfert 1 galaxies in our sample because of the luminosity of their nuclear point sources. In NGC 5643, NGC 2110 and MKN 1066, molecular hydrogen emission is detected in the extended narrow line region on scales of a few hundred pc from the nucleus. This emission is coincident with [OIII] and H alpha+[NII] line emission. It also lies near dust lanes seen in the visible to near-infrared color maps, suggesting that a multiphase medium exists near the ionization cones and that the morphology of the line emission depends on the density of the ambient media. The high 1-0 S(1) or S(3) H2 to H alpha flux ratio suggests that shock excitation of molecular hydrogen (rather than UV fluorescence) is the dominant excitation process in these extended features. In NGC 2992 and NGC 3227 the molecular hydrogen emission comes from 800 and 100 pc diameter 'disks' (respectively) which are not directly associated with [OIII] emission and are near high levels of extinction (A_V > 10). In NGC 4945 the molecular hydrogen emission appears to come from the edge of a 100 pc superbubble. In these 3 galaxies the molecular gas could be excited by processes associated with local star formation. We confirm previous spectroscopic studies finding that no single mechanism is likely to be responsible for the molecular hydrogen excitation in Seyfert galaxies. Comment: submitted to ApJ.

    XMM-Newton observation of the persistent Be/neutron-star system X Persei at a high-luminosity level

    Full text link
    We report on the XMM-Newton observation of the HMXRB X Persei, the prototype of the persistent, low-luminosity Be/neutron star pulsars, performed in February 2003. The source was detected at a luminosity level of ~1.4x10^35 erg/s, the highest of the last three decades. The pulsation period has increased to 839.3 s, confirming the overall spin-down of the NS seen in previous observations. The folded light curve has a complex structure, with features not observed at lower luminosities, and shows a significant energy dependence. The spectral analysis reveals a significant excess at low energies over the main power-law spectral component, which can be described by a black-body spectrum of high temperature (kT_BB ~ 1.5 keV) and small emitting region (R_BB ~ 340 m); its properties are consistent with a polar-cap origin. Phase-resolved spectroscopy shows that the emission spectrum varies with pulse phase, but it is not possible to establish whether the thermal component is pulsed. Comment: 8 pages, 8 figures, 4 tables. Accepted for publication by Astronomy & Astrophysics.
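    As a quick order-of-magnitude check, the quoted blackbody temperature and radius can be turned into a luminosity with the Stefan-Boltzmann law and compared with the total ~1.4x10^35 erg/s; the abstract does not give the component luminosities, so the fraction printed below is purely illustrative.

```python
import numpy as np

# L_bb = 4 pi R^2 sigma T^4, using the quoted kT_BB ~ 1.5 keV and R_BB ~ 340 m.
sigma_sb = 5.6704e-5          # erg cm^-2 s^-1 K^-4
k_B_keV = 8.617e-8            # keV per Kelvin

kT_keV = 1.5
T = kT_keV / k_B_keV          # ~1.7e7 K
R_cm = 340.0 * 100.0          # 340 m in cm

L_bb = 4.0 * np.pi * R_cm**2 * sigma_sb * T**4      # erg/s
L_tot = 1.4e35                                      # quoted total luminosity

print(f"blackbody luminosity ~ {L_bb:.2e} erg/s")
print(f"illustrative fraction of the total: {L_bb / L_tot:.2f}")
```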

    Neuroinflammation in the normal-appearing white matter (NAWM) of the multiple sclerosis brain causes abnormalities at the nodes of Ranvier

    Get PDF
    Changes to the structure of nodes of Ranvier in the normal-appearing white matter (NAWM) of multiple sclerosis (MS) brains are associated with chronic inflammation. We show that the paranodal domains in MS NAWM are longer on average than in controls, with Kv1.2 channels dislocated into the paranode. These pathological features are reproduced in a model of chronic meningeal inflammation generated by the injection of lentiviral vectors carrying the lymphotoxin-α (LTα) and interferon-γ (IFNγ) genes. We show that tumour necrosis factor (TNF), IFNγ, and glutamate can provoke paranodal elongation in cerebellar slice cultures, and that this elongation can be reversed by an N-methyl-D-aspartate (NMDA) receptor blocker. When these changes were inserted into a computational model to simulate axonal conduction, a rapid decrease in velocity was observed, reaching conduction failure in small-diameter axons. We suggest that glial cells activated by pro-inflammatory cytokines can produce high levels of glutamate, which triggers paranodal pathology, contributing to axonal damage and conduction deficits.
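    The abstract does not describe the computational model, but the qualitative effect, slower conduction with longer paranodes and eventual failure, can be illustrated with a deliberately crude toy in which paranodal elongation bleeds off part of the depolarizing drive at each node; every number below is an arbitrary assumption, not a parameter from the study.

```python
import numpy as np

# Toy saltatory conduction: each node charges exponentially toward a drive that
# shrinks as the paranode lengthens (more leak under the detached paranode).
V_TH = 15.0          # depolarisation needed to fire the next node (mV, arbitrary)
TAU = 0.05e-3        # nodal charging time constant (s, arbitrary)
INTERNODE = 300e-6   # internode length (m, arbitrary)

def conduction_velocity(paranode_um, v_drive0=40.0, leak_per_um=4.0):
    """Velocity (m/s) for a given paranodal length; None = conduction failure."""
    v_drive = v_drive0 - leak_per_um * (paranode_um - 2.0)  # 2 um = healthy length
    if v_drive <= V_TH:
        return None                               # drive never reaches threshold
    t = -TAU * np.log(1.0 - V_TH / v_drive)       # time for node to reach threshold
    return INTERNODE / t

for L in [2.0, 4.0, 6.0, 8.0, 9.0]:
    v = conduction_velocity(L)
    print(f"paranode {L:.1f} um -> {'failure' if v is None else f'{v:.1f} m/s'}")
```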

    Tandem Assessment of Spoofing Countermeasures and Automatic Speaker Verification: Fundamentals

    Get PDF
    Recent years have seen growing efforts to develop spoofing countermeasures (CMs) to protect automatic speaker verification (ASV) systems from being deceived by manipulated or artificial inputs. The reliability of spoofing CMs is typically gauged using the equal error rate (EER) metric. The primitive EER fails to reflect application requirements and the impact of spoofing and CMs upon ASV, and its use as a primary metric in traditional ASV research has long been abandoned in favour of risk-based approaches to assessment. This paper presents several new extensions to the tandem detection cost function (t-DCF), a recent risk-based approach to assessing the reliability of spoofing CMs deployed in tandem with an ASV system. Extensions include a simplified version of the t-DCF with fewer parameters, an analysis of a special case for a fixed ASV system, simulations which give original insights into its interpretation, and new analyses using the ASVspoof 2019 database. It is hoped that adoption of the t-DCF for CM assessment will help to foster closer collaboration between the anti-spoofing and ASV research communities. Comment: Published in IEEE/ACM Transactions on Audio, Speech, and Language Processing (doi updated).