
    Cardiac re-synchronization therapy in a patient with isolated ventricular non-compaction: a case report.

    Isolated ventricular non-compaction (IVNC) is a rare, congenital, unclassified cardiomyopathy characterized by a prominent trabecular meshwork and deep recesses. The major clinical manifestations of IVNC are heart failure, atrial and ventricular arrhythmias, and thrombo-embolic events. We describe a case of a 69-year-old woman in whom the diagnosis of IVNC was made late, even though earlier echocardiographic examinations had been considered normal. She had known systolic left ventricular dysfunction for 3 years and then became symptomatic (NYHA III). In the past, she had suffered multiple episodes of deep vein thrombosis and pulmonary embolism. The electrocardiogram revealed a wide QRS complex, and transthoracic echocardiography showed typical apical thickening of the left and right ventricular myocardial walls with two distinct layers. The ratio of non-compacted to compacted myocardium was >2:1. Cardiac MRI confirmed the echocardiographic findings. Cerebral MRI revealed multiple ischaemic sequelae. In view of the persistent heart failure refractory to medical treatment, the fulfilment of the classical criteria for cardiac re-synchronization therapy, and the ventricular arrhythmias, a biventricular automatic intracardiac defibrillator (biventricular ICD) was implanted. The 2-year follow-up period was characterized by an improvement in NYHA functional class from III to I and an increase in left ventricular function. We hereby present a case of IVNC with a favourable outcome after biventricular ICD implantation. Cardiac re-synchronization therapy could be considered in the management of this pathology.

    Identification of Wax Esters in Latent Print Residues by Gas Chromatography-Mass Spectrometry and Their Potential Use as Aging Parameters

    Recent studies show that the composition of fingerprint residue varies significantly from the same donor as well as between donors. This variability is a major drawback in latent print dating issues. This study aimed, therefore, at the definition of a parameter that is less variable from print to print, using a ratio of peak area of a target compound degrading over time divided by the summed area of peaks of more stable compounds also found in latent print residues. Gas chromatography-mass spectrometry (GC/MS) analysis of the initial lipid composition of latent prints identifies four main classes of compounds that can be used in the definition of an aging parameter: fatty acids, sterols, sterol precursors, and wax esters (WEs). Although the entities composing the first three groups are quite well known, those composing WEs are poorly reported. Therefore, the first step of the present work was to identify WE compounds present in latent print residues deposited by different donors. Of 29 WEs recorded in the chromatograms, seven were observed in the majority of samples. The identified WE compounds were subsequently used in the definition of ratios in combination with squalene and cholesterol to reduce the variability of the initial composition between latent print residues from different persons and more particularly from the same person. Finally, the influence of a latent print enhancement process on the initial composition was studied by analyzing traces after treatment with magnetic powder, 1,2-indanedione, and cyanoacrylate.
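
    As a hedged illustration, such an aging parameter could take the general form below, where squalene is assumed as the degrading target compound and the identified wax esters as the stable reference set (this particular choice of compounds is an assumption for illustration, not a prescription of the study):

        R(t) = \frac{A_{\mathrm{squalene}}(t)}{\sum_{i \in \mathrm{WE}} A_{i}(t)}

    where $A_x(t)$ denotes the GC/MS peak area of compound $x$ at age $t$; a ratio of this kind would decrease over time as the target degrades while the reference peaks remain comparatively stable.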

    The tension between fire risk and carbon storage: evaluating U.S. carbon and fire management strategies through ecosystem models

    Fire risk and carbon storage are related environmental issues because fire reduction results in carbon storage through the buildup of woody vegetation, and stored carbon is a fuel for fires. The sustainability of the U.S. carbon sink and the extent of fire activity in the next 100 yr depend in part on the type and effectiveness of fire reduction employed. Previous studies have bracketed the range of dynamics from continued fire reduction to the complete failure of fire reduction activities. To improve these estimates, it is necessary to explicitly account for fire reduction in terrestrial models. A new fire reduction submodel that estimates the spatiotemporal pattern of reduction across the United States was developed using gridded data on biomass, climate, land use, population, and economic factors. To the authors' knowledge, it is the first large-scale, gridded fire model that explicitly accounts for fire reduction. The model was calibrated to 1° × 1° burned area statistics [Global Burnt Area 2000 Project (GBA-2000)] and compared favorably to three important diagnostics. The model was then implemented in a spatially explicit ecosystem model and used to analyze 1620 scenarios of future fire risk and fire reduction strategies. Under scenarios of climate change and urbanization, burned area and carbon emissions both increased in scenarios where fire reduction efforts were not adjusted to match new patterns of fire risk. Fuel-reducing management strategies reduced burned area and fire risk, but also limited carbon storage. These results suggest that to promote carbon storage and minimize fire risk in the future, fire reduction efforts will need to be increased and spatially adjusted and will need to employ a mixture of fuel-reducing and non-fuel-reducing strategies.
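
    As a rough, hypothetical sketch of how an explicit fire-reduction term might enter such a gridded model, the Python fragment below scales the potential burned area in each cell by a suppression fraction driven by population density and an economic index; the logistic form, the covariates, and all coefficient values are illustrative assumptions, not the calibrated submodel from the study.

        import numpy as np

        def fire_reduction_fraction(pop_density, econ_index, b0=-1.0, b_pop=0.8, b_econ=0.5):
            # Hypothetical logistic suppression term: fraction of potential fire
            # activity removed in a grid cell (0 = no reduction, 1 = full reduction).
            # The coefficients are illustrative placeholders, not calibrated values.
            z = b0 + b_pop * np.log1p(pop_density) + b_econ * econ_index
            return 1.0 / (1.0 + np.exp(-z))

        def realized_burned_area(potential_burned_area, pop_density, econ_index):
            # Realized burned area per cell = potential burned area x (1 - reduction).
            return potential_burned_area * (1.0 - fire_reduction_fraction(pop_density, econ_index))

        # Toy 1-degree grid: potential burned area (km^2), population density (people/km^2), economic index
        potential = np.array([[12.0, 30.0], [5.0, 50.0]])
        pop = np.array([[0.1, 25.0], [3.0, 200.0]])
        econ = np.array([[0.2, 0.8], [0.4, 1.0]])
        print(realized_burned_area(potential, pop, econ))

    In such a sketch, cells with more people and economic activity see a larger fraction of potential fire suppressed, which is the qualitative behaviour an explicit fire-reduction submodel is meant to capture.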

    The Neutron Halo in Heavy Nuclei Calculated with the Gogny Force

    The proton and neutron density distributions, one- and two-neutron separation energies and radii of nuclei for which neutron halos are experimentally observed, are calculated using the self-consistent Hartree-Fock-Bogoliubov method with the effective interaction of Gogny. Halo factors are evaluated assuming hydrogen-like antiproton wave functions. The factors agree well with experimental data. They are close to those obtained with Skyrme forces and with the relativistic mean field approach. Comment: 13 pages in LaTeX and 17 figures in eps

    RLFC: Random Access Light Field Compression using Key Views and Bounded Integer Encoding

    We present a new hierarchical compression scheme for encoding light field images (LFI) that is suitable for interactive rendering. Our method (RLFC) exploits redundancies in the light field images by constructing a tree structure. The top level (root) of the tree captures the common high-level details across the LFI, and other levels (children) of the tree capture specific low-level details of the LFI. Our decompressing algorithm corresponds to tree traversal operations and gathers the values stored at different levels of the tree. Furthermore, we use bounded integer sequence encoding which provides random access and fast hardware decoding for compressing the blocks of children of the tree. We have evaluated our method for 4D two-plane parameterized light fields. The compression rates vary from 0.08 - 2.5 bits per pixel (bpp), resulting in compression ratios of around 200:1 to 20:1 for a PSNR quality of 40 to 50 dB. The decompression times for decoding the blocks of LFI are 1 - 3 microseconds per channel on an NVIDIA GTX-960 and we can render new views with a resolution of 512x512 at 200 fps. Our overall scheme is simple to implement and involves only bit manipulations and integer arithmetic operations. Comment: Accepted for publication at Symposium on Interactive 3D Graphics and Games (I3D '19)
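
    As a rough illustration of the random-access idea behind bounded integer encoding, the Python sketch below packs a block of small non-negative residuals at a fixed bit width bounded by the block maximum, so that any single value can be read back with shift-and-mask operations only; this is a simplified stand-in for illustration, not the RLFC codec itself.

        def encode_block(residuals):
            # Pack non-negative integer residuals at a fixed bit width (bounded by
            # the largest value in the block), so every value occupies the same bits.
            width = max(1, max(residuals).bit_length())
            bits = 0
            for i, r in enumerate(residuals):
                bits |= r << (i * width)
            return width, bits

        def decode_value(width, bits, index):
            # Random access: extract the value at `index` without decoding its neighbours.
            return (bits >> (index * width)) & ((1 << width) - 1)

        width, packed = encode_block([3, 0, 7, 2, 5])
        print(decode_value(width, packed, 2))  # -> 7, recovered in isolation

    In a hierarchical scheme of the kind described above, a residual decoded this way would then be combined with the common values stored at higher levels of the tree to reconstruct the final pixel.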

    The dating of fingermarks (part II): proposal of a formal approach

    "How old is this fingermark?" This question is raised relatively often during an investigation or in court, when the suspected person admits having left their marks at a crime scene but claims to have done so at a moment other than that of the crime and for an innocent reason. The first part of this article highlighted the current lack of consensus in the answers given to this question by experts in the field, as well as the fact that no methodology has so far been validated and accepted by the forensic community. This second article therefore proposes a formal and pragmatic approach to the question of fingermark dating, based on current research on the aging of lipid compounds detected in fingermarks. This approach makes it possible to identify what type of information the scientist would currently be able to provide to investigators and/or the court in fingermark dating cases, under which conditions, and which developments are still needed.

    Aging of target lipid parameters in fingermark residue using GC/MS: effects of influence factors and perspectives for dating purposes

    Despite the recurrence of fingermark dating issues and the research conducted on fingermark composition and aging, no dating methodology has yet been developed and validated. In order to further evaluate the possibility of developing dating methodologies based on fingermark composition, this research proposed an in-depth study of the aging of target lipid parameters found in fingermark residue and exposed to different influence factors. The selected analytical technique was gas chromatography coupled with mass spectrometry (GC/MS). The effects of donor, substrate and enhancement techniques on the selected parameters were evaluated first. These factors were called known factors, as their value could be obtained in real casework. Using principal component analysis (PCA) and univariate exponential regression, this study highlighted the fact that the effects of these factors were larger than the aging effects, thus preventing the observation of relevant aging patterns. From a fingermark dating perspective, the specific value of these known factors should thus be included in aging models newly built for each case. Then, the effects of deposition moment, pressure, temperature and lighting were also evaluated. These factors were called unknown factors, as their specific value would never be precisely obtained in casework. Aging models should thus be particularly robust to their effects, and for this reason different chemometric tools were tested: PCA, univariate exponential regression and partial least squares regression (PLSR). While the first two models allowed the observation of interesting aging patterns regardless of the value of the applied influence factors, PLSR gave poorer results, as large deviations were obtained. Finally, in order to evaluate the potential of such modelling in realistic situations, blind analyses were carried out on eight test fingermarks. The age of five of them was correctly estimated using soft independent modelling of class analogy (SIMCA) based on PCA classes, univariate exponential regression and PLSR. Furthermore, a probabilistic approach using the calculation of likelihood ratios (LR) through the construction of a Bayesian network was also tested. While the age of all test fingermarks was correctly evaluated when the storage conditions were known, the results were not significant when these conditions were unknown. This model thus clearly highlighted the impact of storage conditions on correct age evaluation. This research showed that reproducible aging modelling could be obtained from fingermark residue exposed to influence factors, as well as promising age estimations. However, the proposed models are still not applicable in practice. Further studies should be conducted concerning the impact of influence factors (in particular, storage conditions) in order to precisely evaluate in which conditions significant evaluations could be obtained. Furthermore, these models should be properly validated before any application in real casework could be envisaged.
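
    As a simplified, hypothetical sketch of the univariate exponential regression step, the Python fragment below fits an exponential decay to a lipid aging parameter measured at known ages and then inverts the fitted curve to estimate the age of a questioned mark; the data values and the single-parameter simplification are assumptions for illustration, not the study's calibrated models.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical aging data: age (days) vs. a normalized lipid ratio
        ages = np.array([0.0, 3.0, 7.0, 14.0, 21.0, 28.0])
        ratios = np.array([1.00, 0.78, 0.61, 0.40, 0.27, 0.18])

        def decay(t, a, k):
            # Univariate exponential aging model: ratio(t) = a * exp(-k * t)
            return a * np.exp(-k * t)

        (a, k), _ = curve_fit(decay, ages, ratios, p0=(1.0, 0.1))

        def estimate_age(observed_ratio):
            # Invert the fitted curve to obtain an age estimate for a questioned mark
            return np.log(a / observed_ratio) / k

        print(round(estimate_age(0.5), 1))  # age estimate (days) for an observed ratio of 0.5

    In practice, as the abstract stresses, such a model would have to be rebuilt with the known factors of the case and shown to be robust to the unknown factors before any age estimate could carry evidential weight.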

    High precision determination of the $Q^2$-evolution of the Bjorken Sum

    We present a significantly improved determination of the Bjorken Sum for $0.6 \leq Q^{2} \leq 4.8$ GeV$^{2}$ using precise new $g_{1}^{p}$ and $g_{1}^{d}$ data taken with the CLAS detector at Jefferson Lab. A higher-twist analysis of the $Q^{2}$-dependence of the Bjorken Sum yields the twist-4 coefficient $f_{2}^{p-n} = -0.064 \pm 0.009\,^{+0.032}_{-0.036}$. This leads to the color polarizabilities $\chi_{E}^{p-n} = -0.032 \pm 0.024$ and $\chi_{B}^{p-n} = 0.032 \pm 0.013$. The strong force coupling is determined to be $\alpha_{s}^{\overline{\mathrm{MS}}}(M_{Z}^{2}) = 0.1124 \pm 0.0061$, which has an uncertainty a factor of 1.5 smaller than earlier estimates using polarized DIS data. This improvement makes the comparison between $\alpha_{s}$ extracted from polarized DIS and other techniques a valuable test of QCD. Comment: Published in Phys. Rev. D. V1: 8 pages, 3 figures. V2: Updated references; included threshold matching in $\alpha_s$ evolution. Corrected a typo on the uncertainty for $\Lambda_{\mathrm{QCD}}$. V3: Published version
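
    For readers less familiar with the notation, the relations conventionally used in such higher-twist analyses (quoted here for context; conventions can differ slightly between papers) connect the measured Bjorken sum to the twist-4 coefficient and the color polarizabilities:

        \Gamma_1^{p-n}(Q^2) = \int_0^1 \left[ g_1^p(x,Q^2) - g_1^n(x,Q^2) \right] dx
            = \Gamma_1^{p-n,\,\mathrm{leading\ twist}}(Q^2) + \frac{\mu_4^{p-n}}{Q^2} + \frac{\mu_6^{p-n}}{Q^4} + \cdots,
        \qquad \mu_4^{p-n} = \frac{M^2}{9}\left( a_2^{p-n} + 4 d_2^{p-n} + 4 f_2^{p-n} \right),

        \chi_E^{p-n} = \frac{2}{3}\left( 2 d_2^{p-n} + f_2^{p-n} \right), \qquad
        \chi_B^{p-n} = \frac{1}{3}\left( 4 d_2^{p-n} - f_2^{p-n} \right),

    where $M$ is the nucleon mass and $a_2$ and $d_2$ are the target-mass and twist-3 matrix elements; with $f_2$ extracted from the $Q^2$-dependence and $d_2$ taken from other measurements, the electric and magnetic color polarizabilities quoted in the abstract follow.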

    Effect of differences in proton and neutron density distributions on fission barriers

    The neutron and proton density distributions obtained in constrained Hartree-Fock-Bogolyubov calculations with the Gogny force along the fission paths of 232Th, 236U, 238U and 240Pu are analyzed. Significant differences in the multipole deformations of neutron and proton densities are found. The effect on potential energy surfaces and on barrier heights of an additional constraint imposing similar spatial distributions to neutrons and protons, as assumed in macroscopic-microscopic models, is studied. Comment: 5 pages in LaTeX, 4 figures in eps