
    The subarcsecond mid-infrared view of local active galactic nuclei: II. The mid-infrared--X-ray correlation

    We present an updated mid-infrared (MIR) versus X-ray correlation for the local active galactic nuclei (AGN) population, based on high angular resolution 12 and 18 um continuum fluxes from the AGN subarcsecond MIR atlas and on 2-10 keV and 14-195 keV data collected from the literature. We isolate a sample of 152 objects with reliable AGN nature, multi-epoch X-ray data, and minimal MIR contribution from star formation. Although the sample is neither homogeneous nor complete, we show that our results are unlikely to be affected by biases. The MIR--X-ray correlation is nearly linear and, within a factor of two, independent of the AGN type and the wavebands used. The observed scatter is <0.4 dex. A possible flattening of the correlation slope at the highest luminosities probed (~ 10^45 erg/s) is indicated but not significant. Unobscured objects have, on average, an MIR--X-ray ratio that is only <= 0.15 dex higher than that of obscured objects. Objects with intermediate X-ray column densities (22 < log N_H < 23) actually show the highest MIR--X-ray ratio on average. Radio-loud objects show a higher mean MIR--X-ray ratio at low luminosities, while the ratio is lower than average at high luminosities. This may be explained by synchrotron emission from the jet contributing to the MIR at low luminosities and adding X-ray emission at high luminosities. True Seyfert 2 candidates and double AGN do not show any deviation from the general behaviour. Finally, we show that the MIR--X-ray correlation can be used to verify the AGN nature of uncertain objects. Specifically, we give equations that allow the intrinsic 2-10 keV luminosities and column densities of objects with complex X-ray properties to be determined to within 0.34 dex. These techniques are applied to the uncertain objects of the remaining AGN MIR atlas, demonstrating the usefulness of the MIR--X-ray correlation as an empirical tool. Comment: Accepted for publication in MNRAS, 40 pages, 25 figures
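    The abstract's luminosity-prediction equations are not reproduced here, but the general form of such a log-linear MIR--X-ray relation can be sketched as follows. The intercept and slope below are hypothetical placeholders, not the paper's fitted coefficients.

```python
import numpy as np

# Hypothetical coefficients for a nearly linear log-log relation,
# log10(L_X) ~ offset + slope * (log10(L_12um) - 43) + 43.
# The actual fitted values come from the paper and are NOT reproduced here.
A_OFFSET = -0.3   # placeholder intercept (dex)
B_SLOPE = 1.0     # placeholder slope ("nearly linear" per the abstract)

def predict_intrinsic_xray_luminosity(log_l_mir_12um):
    """Estimate log10 of the intrinsic 2-10 keV luminosity (erg/s)
    from the nuclear 12 micron luminosity, assuming a log-linear relation."""
    return A_OFFSET + B_SLOPE * (np.asarray(log_l_mir_12um) - 43.0) + 43.0

if __name__ == "__main__":
    log_l_mir = 43.5  # example 12 um luminosity in log10(erg/s)
    log_l_x = predict_intrinsic_xray_luminosity(log_l_mir)
    print(f"Predicted log L_X(2-10 keV): {log_l_x:.2f} "
          "(scatter of ~0.34 dex per the abstract)")
```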

    Statistical Approaches for Signal Processing with Application to Automatic Singer Identification

    In the music world, the singing voice is often regarded as the oldest instrument, and it plays an important role in musical recordings. The singer's identity serves as a primary aid for people to organize, browse, and retrieve music recordings. In this thesis, we focus on the problem of singer identification based on the acoustic features of the singing voice. An automatic singer identification system is constructed and achieves very high identification accuracy. This system consists of three crucial parts: singing voice detection, background music removal, and pattern recognition. These parts are introduced and explored in great detail in this thesis. Specifically, for singing voice detection, we first study a traditional method, the double GMM. An improved method, the single GMM, is then proposed. The experimental results show that the single GMM reaches a detection accuracy of 96.42%. For background music removal, Non-negative Matrix Factorization (NMF) and Robust Principal Component Analysis (RPCA) are demonstrated; the evaluation shows that RPCA outperforms NMF. For pattern recognition, we explore Support Vector Machine (SVM) and Gaussian Mixture Model (GMM) algorithms. Based on the experimental results, the prediction accuracy of the GMM classifier is about 16% higher than that of the SVM.
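    A minimal sketch of the GMM-based pattern-recognition stage described above: fit one Gaussian mixture per singer on frame-level acoustic features and classify a test clip by the model with the highest average log-likelihood. The feature dimensionality, mixture size, and data below are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_singer_models(features_by_singer, n_components=16, seed=0):
    """Fit one GMM per singer on frame-level feature matrices (n_frames x n_dims)."""
    models = {}
    for singer, feats in features_by_singer.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag",
                              random_state=seed)
        gmm.fit(feats)
        models[singer] = gmm
    return models

def identify_singer(models, test_features):
    """Return the singer whose GMM gives the highest mean frame log-likelihood."""
    scores = {s: m.score_samples(test_features).mean() for s, m in models.items()}
    return max(scores, key=scores.get)

# Usage with random placeholder "MFCC-like" 13-dimensional frames:
rng = np.random.default_rng(0)
train = {"singer_a": rng.normal(0, 1, (500, 13)),
         "singer_b": rng.normal(2, 1, (500, 13))}
models = train_singer_models(train)
print(identify_singer(models, rng.normal(2, 1, (200, 13))))  # -> "singer_b"
```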

    Re-Tooling the Agency's Engineering Predictive Practices for Durability and Damage Tolerance

    Over the past decade, the Agency has placed less emphasis on testing and has increasingly relied on computational methods to assess durability and damage tolerance (D&DT) behavior when evaluating design margins for fracture-critical components. With increased emphasis on computational D&DT methods as the standard practice, it is paramount that the capabilities of these methods are understood, that the methods are used within their technical limits, and that validation by well-designed tests confirms this understanding. The D&DT performance of a component is highly dependent on parameters in the neighborhood of the damage. This report discusses D&DT method vulnerabilities.

    Contributions to an improved phenytoin monitoring and dosing in hospitalized patients

    Phenytoin (PHT) is one of the most widely used and well-established anticonvulsants for the treatment of epilepsy and a standard for antiepileptic prophylaxis in adults with severe traumatic brain injury before and after neurosurgical intervention. Its therapeutic use is challenging, as PHT has a narrow therapeutic range and shows non-linear kinetics. It is extensively metabolized by a variety of CYP enzymes and is 85-95% bound to plasma proteins, mostly albumin, which also makes PHT an important drug interaction candidate. Therefore, therapeutic drug monitoring (TDM) is often required. Rational timing of sampling and sound interpretation of the laboratory data, translated into optimal individual dosing, are necessary. Therapeutic guidance, especially in teaching hospitals, is needed and has to be implemented.

    Bayesian Forecasting (BF) versus conventional dosing (CD): a retrospective, long-term, single-centre analysis. In the hospital, medication management for effective antiepileptic therapy with PHT often requires rapid IV loading and subsequent dose adjustment according to TDM. To investigate performance in reaching therapeutic target serum concentrations, a BF regimen was compared with CD according to the official summary of product characteristics. In a Swiss acute care teaching hospital (Kantonsspital Aarau), a retrospective, single-centre, long-term analysis was performed using all PHT serum tests from the central laboratory from 1997 to 2007. The BF regimen consisted of a guided, body-weight-adapted rapid IV PHT loading over five days with pre-defined TDM time points. The CD was applied without written guidance. Assuming non-normally distributed data, non-parametric statistical methods were used. A total of 6'120 PHT serum levels (2'819 BF and 3'301 CD) from 2'589 patients (869 BF and 1'720 CD) were evaluated and compared. 63.6% of the PHT serum levels in the BF group were within the therapeutic range versus only 34.0% in the CD group (p<0.0001). The mean BF serum level was 52.0 ± 22.1 µmol/L (within the target range), whereas the mean CD serum level was 39.8 ± 28.2 µmol/L (below the target range). In the BF group, men had slightly but significantly lower PHT serum levels than women (p<0.0001); the CD group showed no significant gender difference (p=0.187). A comparative sub-analysis of age-related groups (children, adolescents, adults, seniors, and elderly) showed significantly lower target levels (p<0.0001) for each group under CD compared with BF. Overall, BF showed significantly better performance in reaching therapeutic PHT serum levels.

    Free PHT assessment. Total serum levels of difficult-to-dose drugs like PHT are sometimes insufficient, and knowledge of the free fraction is necessary for correct dosing. In a subgroup analysis of the above BF vs. CD study, we evaluated the suitability of the Sheiner-Tozer algorithm for calculating the free PHT fraction in hypoalbuminemic patients. Free PHT serum concentrations were calculated from the total PHT concentration in hypoalbuminemic patients and compared with the measured free PHT. The patients were separated into two groups (a low-albumin group, 25 ≤ albumin ≤ 35 g/L, and a very-low-albumin group, albumin < 25 g/L). The two groups were compared and statistically analysed for the calculated and the measured free PHT concentration. The calculated (1.2 mg/L, SD=0.7) and the measured (1.1 mg/L, SD=0.5) free PHT concentrations correlated. The mean difference in the low-albumin and the very-low-albumin group was 0.10 mg/L (SD=1.4, n=11) and 0.13 mg/L (SD=0.24, n=12), respectively. Although the variability of the data could be a bias, no statistically significant difference between the groups was found: t-test (p=0.78), Passing-Bablok regression, Spearman's rank correlation coefficient (r=0.907, p=0.00), and the Bland-Altman plot including regression analysis between the calculated and the measured values (M=0.11, SD=0.28). We concluded that, in the absence of a measured free PHT serum concentration, the Sheiner-Tozer algorithm is a useful TDM tool for calculating or checking free PHT in hypoalbuminemic patients from the total PHT and albumin concentrations.

    GC-MS analysis of biological PHT samples. To correlate PHT blood serum levels with "brain PHT levels" (the site of action of PHT), extracellular fluid from microdialysates in neurosurgical patients could be analyzed for PHT with an appropriate quantitative analytical method. In this investigation we describe the development and validation of a sensitive gas chromatography-mass spectrometry (GC-MS) method to identify and quantify PHT in brain microdialysate, saliva, and blood from human samples. For sample clean-up, SPE was performed with a nonpolar C8-SCX column. The eluate was evaporated with nitrogen (50°C) and derivatized with trimethylsulfonium hydroxide before GC-MS analysis. 5-(p-methylphenyl)-5-phenylhydantoin was used as the internal standard. The MS was run in scan mode, identification was made with three ion fragment masses, and all peaks were identified with MassLib. Spiked PHT samples showed a recovery after SPE of ≥ 94%. The calibration curve (PHT 50 to 1'200 ng/mL, n=6 at six concentration levels) showed good linearity and correlation (r2 > 0.998). The limit of detection was 15 ng/mL and the limit of quantification was 50 ng/mL. Dried extracted samples were stable within a 15% deviation range for ≥ 4 weeks at room temperature. The method met International Organization for Standardization standards and was able to detect and quantify PHT in different biological matrices and patient samples. The GC-MS method with SPE is specific, sensitive, robust, and reproducible, and is therefore an appropriate candidate for pharmacokinetic assessment of PHT concentrations in different biological samples from treated patients.
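    A minimal sketch of the Sheiner-Tozer correction referred to above, assuming the commonly cited form C_adjusted = C_total / (0.2 × albumin[g/dL] + 0.1) and a g/L-to-g/dL conversion; the coefficient, the ~10% free-fraction estimate, and the example values are assumptions, not the thesis's exact implementation or data.

```python
def sheiner_tozer_adjusted_phenytoin(total_pht_mg_per_l, albumin_g_per_l,
                                     albumin_coeff=0.2):
    """Albumin-adjusted total phenytoin (mg/L) via the Sheiner-Tozer equation.

    Assumes the commonly cited form with albumin in g/dL; the coefficient
    (0.2 here, sometimes revised to 0.25 or 0.275) is an assumption.
    """
    albumin_g_per_dl = albumin_g_per_l / 10.0  # convert g/L -> g/dL
    return total_pht_mg_per_l / (albumin_coeff * albumin_g_per_dl + 0.1)

def estimated_free_phenytoin(total_pht_mg_per_l, albumin_g_per_l):
    """Rough free PHT estimate, assuming ~10% unbound fraction at normal albumin."""
    return 0.1 * sheiner_tozer_adjusted_phenytoin(total_pht_mg_per_l, albumin_g_per_l)

# Example: total PHT 8 mg/L with albumin 22 g/L (hypoalbuminemic patient)
print(round(sheiner_tozer_adjusted_phenytoin(8.0, 22.0), 1))  # ~14.8 mg/L adjusted total
print(round(estimated_free_phenytoin(8.0, 22.0), 2))          # ~1.48 mg/L estimated free
```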

    Automated mass spectrometry-based metabolomics data processing by blind source separation methods

    One of the major bottlenecks in metabolomics is converting raw data into biologically interpretable information. Moreover, mass spectrometry-based metabolomics generates large and complex datasets characterized by co-eluting compounds and experimental artifacts. The main objective of this thesis is to develop automated strategies based on blind source separation to improve the capabilities of current methods that address the limitations of the different steps of the metabolomics data processing workflow. A further objective is to develop tools capable of performing the entire metabolomics workflow for GC-MS, including pre-processing, spectral deconvolution, alignment, and identification. As a result, three new automated methods for spectral deconvolution based on blind source separation were developed. These methods were embedded into two computational tools that automatically convert raw data into biologically interpretable information, and thus allow biological hypotheses to be resolved and new biological insights to be discovered.
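    A generic illustration of blind-source-separation-style spectral deconvolution: factorizing a non-negative (scan × m/z) intensity matrix into elution profiles and pure-component spectra with NMF. This is a minimal sketch of the general idea, not the thesis's actual methods or tools, and the toy data are invented.

```python
import numpy as np
from sklearn.decomposition import NMF

def deconvolve_spectra(intensity_matrix, n_compounds):
    """Factor a non-negative (scans x m/z channels) matrix into
    elution profiles (W) and pure-component spectra (H): X ~ W @ H."""
    model = NMF(n_components=n_compounds, init="nndsvda", max_iter=500)
    elution_profiles = model.fit_transform(intensity_matrix)  # (scans, n_compounds)
    component_spectra = model.components_                      # (n_compounds, n_channels)
    return elution_profiles, component_spectra

# Toy example: two co-eluting compounds with overlapping Gaussian elution profiles
rng = np.random.default_rng(1)
scans = np.arange(100)
profiles = np.stack([np.exp(-0.5 * ((scans - 45) / 5) ** 2),
                     np.exp(-0.5 * ((scans - 55) / 5) ** 2)], axis=1)
spectra = rng.random((2, 300))                      # two synthetic "mass spectra"
X = profiles @ spectra + 0.01 * rng.random((100, 300))
W, H = deconvolve_spectra(X, n_compounds=2)
print(W.shape, H.shape)  # (100, 2) (2, 300)
```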

    Seismic Attribute Analysis of the Mississippian Chert at the Wellington Field, south-central Kansas

    Mississippian chert reservoirs, important hydrocarbon resources in North America, are highly heterogeneous and typically below seismic resolution, and therefore present a challenge for predicting reservoir properties from seismic data. In this study, I conducted a seismic attribute analysis of the Mississippian chert reservoir at the Wellington Field, south-central Kansas, using well and 3D PSTM seismic data. The microporous cherty dolomite reservoir exhibits a characteristic vertical gradational porosity reduction and an associated increase in acoustic velocity, known as a ramp-transition velocity function. I investigated possible relationships of the Mississippian reservoir thickness and porosity with post-stack seismic attributes, including inverted acoustic impedance. The analysis of well-log and seismic data revealed that fault #1 divides the Wellington Field diagonally from the southwestern corner to the northeastern corner. The reservoir in the southeastern part of the field is characterized by a vertical gradational porosity decrease (from 25-30% to 4-6%), variable thickness (6-20 m), lower seismic amplitude and frequency content, a locally developed double reflector, and a high correlation between seismic amplitude and reservoir thickness, conformable with the theoretical amplitude response of a ramp-transition velocity function. The amplitude envelope was used to predict the reservoir thickness in this part of the field. The Mississippian reservoir in the northwestern part of the field has a more heterogeneous porosity distribution within the reservoir interval and thins in the north-northwest direction, and no clear relationship was found between reservoir thickness and instantaneous seismic attributes there. The model-based inversion and the porosity model predicted from inverted impedance supported the well-log and seismic attribute interpretation. The reliability of the predicted porosity model was tested by cross-validation. Resolution limits were determined using wedge modeling as 1/16λ for the amplitude envelope attribute and 1/8λ for the model-based inversion within the Mississippian reservoir characterized by a vertical gradational porosity reduction. The seismic response of a ramp-transition velocity function, well established in theory but poorly studied using field seismic data, could benefit the characterization of similar chert reservoirs, as well as clastic and carbonate reservoirs characterized by downward porosity reduction, as shown in this study.
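    To illustrate the ramp-transition response discussed above, a minimal sketch: build a 1D linear velocity ramp, convert to reflectivity, convolve with a Ricker wavelet, and take the amplitude envelope via the Hilbert transform. The wavelet frequency, velocities, ramp thickness, and sampling are illustrative assumptions, not the study's parameters, and depth-to-time conversion is omitted for brevity.

```python
import numpy as np
from scipy.signal import hilbert

def ricker(freq_hz, dt_s, length_s=0.128):
    """Zero-phase Ricker wavelet."""
    t = np.arange(-length_s / 2, length_s / 2, dt_s)
    a = (np.pi * freq_hz * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def ramp_transition_response(ramp_thickness_m, v_top=3000.0, v_base=6000.0,
                             rho=2650.0, dz=1.0, dt=0.001, freq=30.0):
    """Synthetic trace and amplitude envelope for a linear velocity ramp."""
    n_pad = 200
    vel = np.concatenate([np.full(n_pad, v_top),
                          np.linspace(v_top, v_base, int(ramp_thickness_m / dz)),
                          np.full(n_pad, v_base)])
    imp = vel * rho                                  # acoustic impedance (constant density)
    refl = np.diff(imp) / (imp[1:] + imp[:-1])       # reflection coefficient series
    # Depth-to-time conversion is skipped; samples are treated as time steps.
    trace = np.convolve(refl, ricker(freq, dt), mode="same")
    envelope = np.abs(hilbert(trace))                # amplitude envelope attribute
    return trace, envelope

trace, env = ramp_transition_response(ramp_thickness_m=15.0)
print(f"Peak envelope amplitude: {env.max():.4f}")
```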