
    General method for extracting the quantum efficiency of dispersive qubit readout in circuit QED

    We present and demonstrate a general three-step method for extracting the quantum efficiency of dispersive qubit readout in circuit QED. We use active depletion of post-measurement photons and optimal integration weight functions on two quadratures to maximize the signal-to-noise ratio of the non-steady-state homodyne measurement. We derive analytically and demonstrate experimentally that the method robustly extracts the quantum efficiency for arbitrary readout conditions in the linear regime. We use the proven method to optimally bias a Josephson traveling-wave parametric amplifier and to quantify different noise contributions in the readout amplification chain. (10 pages, 6 figures)
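    As a rough sketch of the optimal-weighting step described above (this is not the authors' code; the array layout, normalisation, and SNR convention are assumptions), the integration weights on the two quadratures can be taken proportional to the difference of the mean response traces for the qubit prepared in its two basis states, and the SNR computed from the weighted single-shot integrals:

    import numpy as np

    def optimal_weights_and_snr(traces0, traces1):
        """traces0, traces1: (n_shots, n_samples, 2) homodyne records (I, Q)
        for the qubit prepared in |0> and |1>. Returns the optimal linear
        integration weights and the resulting signal-to-noise ratio."""
        # Optimal linear weights: difference of the mean traces per quadrature
        w = traces1.mean(axis=0) - traces0.mean(axis=0)   # (n_samples, 2)
        w /= np.abs(w).max()                              # normalise for convenience

        # Weighted integration of every shot -> one scalar per shot
        s0 = np.einsum('ijk,jk->i', traces0, w)
        s1 = np.einsum('ijk,jk->i', traces1, w)

        # One common convention: separation of the two Gaussian clouds over
        # their average width (definitions differ by factors of 2 between papers)
        snr = abs(s1.mean() - s0.mean()) / np.sqrt(0.5 * (s0.var() + s1.var()))
        return w, snr

    # Synthetic example: 500 shots, 200 time samples, 2 quadratures
    rng = np.random.default_rng(0)
    traces0 = rng.normal(0.0, 1.0, (500, 200, 2))
    traces1 = rng.normal(0.1, 1.0, (500, 200, 2))
    print(optimal_weights_and_snr(traces0, traces1)[1])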

    Estimating flood levels for a flash flood in an urban environment: a comparison of two hydrodynamic models of the October 1988 Nîmes flood

    During extreme floods in urban areas, a large share of the flow remains at the surface. To simulate these floods, two models are presented: the one-dimensional software REM2U is designed to simulate the propagation of flood discharges through an entire network of streets, while the two-dimensional software Rubar 20 aims to provide more detailed information on these flows. Both software packages were applied to the October 1988 flood in a district of Nîmes. During this event, maximum water depths exceeded two metres at some points and velocities exceeded 2 m/s, producing locally supercritical flow. From the data collected on the street cross sections, computational grids restricted to the street network were built for both packages to allow a detailed calculation. Comparison of the results with flood marks shows highly contrasting situations from one point to another, even though the average maximum water depth over the whole flooded area is correctly simulated. The deviation in this depth is, on average, 1 m, which stems from uncertainties in the observations, the topography, and the boundary conditions, from approximations made during modelling, and from local features that are not described. Between the two packages, the computed evolution of depths and velocities is generally very close although, as in the comparison with flood marks, significant local differences are observed.

    The hydraulic models used to simulate floods in rural areas are not suited to modelling floods through urban areas, because of details that may divert flows and create strong discontinuities in the water levels, and because of possible flow through the sewer network. However, such modelling is strongly needed because damage is often concentrated in urban areas, so models specifically dedicated to such floods must be developed. In the southern part of France, rains may be very intense but floods generally last only a few hours. During extreme events such as the October 1988 flood in the city of Nîmes, most of the flow remained on the ground with high water depths and high velocities, and the role of the sewer network can be neglected. A 1-D model and a 2-D model were used to calculate such flows, which may become supercritical. On the catchments of the streams that cross the city of Nîmes, the rainfall in October 1988 was estimated at 80 mm in one hour and 250 mm in six hours, although some uncertainties remain; the return period can be estimated at between 150 and 250 years. The zone selected to test the models was an area 1.2 km long and less than 1 km wide in the north-eastern part of the city, including a southern part with a high density of houses. The slope from north (upstream) to south (downstream) averaged more than 1% and decreased from north to south. Various topographical and hydrological data were obtained from the local authorities. The basic data comprised 258 cross sections of 69 streets, with 11 to 19 points per cross section. Observations of the limits of the flooded areas and of the peak water levels at more than 80 points were available to validate the calculation results. The inputs consisted of two discharge hydrographs, estimated with a rainfall-runoff model for rains with a return period of 100 years, which may underestimate these inputs. These two hydrographs correspond to the two main structures that cross the railway embankment, which constitutes an impervious upstream boundary of the modelled area. Whereas the western and eastern boundaries are well delimited by hills above maximum water levels, the downstream southern boundary is more questionable because of possible backwater and inflows from neighbouring areas.

    The 1-D software REM2U solved the Saint-Venant equations on a meshed network. At crossroads, continuity of discharge and of water head was imposed. The hydraulic jump was modelled by numerical diffusion applied wherever high water levels were found. The Lax-Wendroff numerical scheme, comprising a prediction step and a correction step, was implemented, allowing accurate solution of these highly unsteady hyperbolic problems. The software was validated on numerous test cases (Al Mikdad, 2000), which demonstrated its suitability for computations in a network of streets.

    The 2-D software Rubar 20 solves the 2-D shallow water equations by an explicit, second-order, Van Leer-type finite volume scheme on a computational grid made of triangles and quadrilaterals (Paquier, 1998). Discontinuities (hydraulic jumps, for instance) are treated as ordinary points through the solving of Riemann problems. For the Nîmes case, the grid was built from the cross sections of the streets. Four grids were built with 4, 5, 7 or 11 points per cross section; these points correspond to the main features of the cross section: the walls of the buildings, the sidewalks, the gutters and the middle point. The simplest crossroads were described from the crossings of the lines corresponding to these points, providing 16, 25, 49 or 121 computational cells respectively. The space step was about 25 metres along the streets but fell as low as 0.1 m in the crossroads; because the explicit scheme limits the Courant number to 1, the time step was very small and a long computational time was required.

    The computations were performed with a uniform Strickler coefficient of 40 m^(1/3)/s. Both the 1-D and 2-D models provided results that agreed well with observed water levels, and the limits of the flooded area were also quite well simulated. Locally, however, the differences between calculated and observed maximum water depths were high, resulting in an average deviation of about 1 metre. Such deviations have three main causes. First, the uncertainty of the topographical data is relatively high, because of the interpolation between measured cross sections without a detailed complementary DEM (digital elevation model). Second, the observed levels were themselves uncertain and reflect local situations that the hydraulic models cannot reconstruct, since the models provide maximum water levels averaged over a cell that may not coincide with the exact location of the observations. Finally, modelling simplifies the processes, neglecting the level variations caused by obstacles, such as cars, that are not easy to identify.

    In conclusion, both software packages can model a flood, even a flash flood, in an urbanised area. Research is still needed to develop methods that fully exploit urban databases in order to define details more precisely. Improvements to the 1-D software should include better modelling of storage and of crossroads, with the integration of suitable head-loss relations. The 2-D software has greater potential, but the difficulty of building an optimal computational grid leads to long computational times, which limits its use to small areas. For both packages, methods still need to be developed to represent exchanges with the sewer network, storage inside buildings, and inputs coming directly from rainfall.
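    The time-step restriction mentioned for the explicit 2-D scheme follows directly from the Courant condition. As a rough illustration (not code from the study; the cell size, depth and velocity below are merely plausible values taken from the figures quoted above), the admissible time step for an explicit shallow-water solver can be estimated as follows:

    import math

    G = 9.81  # gravitational acceleration (m/s^2)

    def cfl_time_step(dx, h, u, courant=1.0):
        """Largest stable time step for an explicit shallow-water scheme.

        The fastest signal travels at |u| + sqrt(g*h) (flow velocity plus
        gravity-wave celerity), and the Courant number
        Cr = dt * (|u| + sqrt(g*h)) / dx must not exceed `courant`.
        """
        wave_speed = abs(u) + math.sqrt(G * h)
        return courant * dx / wave_speed

    # A 0.1 m cell inside a crossroads, with the extreme values reported
    # above (2 m depth, 2 m/s velocity), forces a step of about 0.016 s.
    print(cfl_time_step(dx=0.1, h=2.0, u=2.0))

    With thousands of cells and hours of simulated flood, such hundredth-of-a-second steps explain the long computational times reported.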

    CT texture analysis: a potential tool for prediction of survival in patients with metastatic clear cell carcinoma treated with sunitinib

    BACKGROUND: To assess CT texture-based quantitative imaging biomarkers for the prediction of progression-free survival (PFS) and overall survival (OS) in patients with clear cell renal cell carcinoma undergoing treatment with sunitinib. METHODS: In this retrospective study, measurable lesions of 40 patients were selected based on RECIST criteria on standard contrast-enhanced CT before and 2 months after treatment with sunitinib. CT texture analysis was performed using TexRAD research software (TexRAD Ltd, Cambridge, UK). Using a Cox regression model, the correlation of texture parameters with measured time to progression and overall survival was assessed. The combination of the International Metastatic Renal-Cell Carcinoma Database Consortium (IMDC) score with texture parameters was also evaluated. RESULTS: Size-normalized standard deviation (nSD) alone, at baseline and at follow-up after treatment, was a predictor of OS (hazard ratio (HR) = 0.01 and 0.02; 95% confidence intervals (CI): 0.00 – 0.29 and 0.00 – 0.39; p = 0.01 and 0.01). Entropy following treatment and entropy change before and after treatment were both significant predictors of OS (HR = 2.68 and 87.77; 95% CI = 1.14 – 6.29 and 1.26 – 6115.69; p = 0.02 and p = 0.04). nSD was also a predictor of PFS at baseline and follow-up (HR = 0.01 and 0.01; 95% CI: 0.00 – 0.31 and 0.001 – 0.22; p = 0.01 and p = 0.003). When nSD at baseline or at follow-up was combined with the IMDC score, the association with OS and PFS improved compared with the IMDC score alone. CONCLUSION: Size-normalized standard deviation from CT at baseline and follow-up scans correlates with OS and PFS in clear cell renal cell carcinoma treated with sunitinib.
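    For readers unfamiliar with the statistics, here is a minimal sketch of the kind of univariable Cox proportional-hazards model reported above (the lifelines package, the toy data, and the column names are illustrative assumptions, not the study's code or data):

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical table: one row per patient, with the size-normalized
    # standard deviation (nSD) texture feature, follow-up time, and event flag.
    df = pd.DataFrame({
        "nSD":         [0.41, 0.22, 0.35, 0.18, 0.29],
        "time_months": [10.0, 24.5, 8.2, 30.1, 15.7],
        "event":       [1, 0, 1, 0, 1],   # 1 = death observed, 0 = censored
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time_months", event_col="event")
    cph.print_summary()   # hazard ratio exp(coef), 95% CI and p-value for nSD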

    COVID-19 Induced Myocarditis: A Rare Cause of Heart Failure.

    Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causing lung injury has been well documented in the recent literature. The virus acts primarily by binding to the membrane-bound form of the angiotensin-converting enzyme 2 (ACE-2) receptor. However, since these receptors are also expressed in the heart and blood vessels, the virus can damage these organs by the same mechanism. A typical case of coronavirus disease 2019 (COVID-19) usually presents with respiratory symptoms such as cough and shortness of breath accompanied by fever. The literature on this pandemic has been growing, and we now know that the effects of this deadly virus are not restricted to the lungs alone. It can, unfortunately, cause various other complications, ranging from neurological damage to, in rare cases, myocardial injury. We present an interesting case of a 40-year-old male patient who presented to us with shortness of breath and, on further investigation, was found to have new-onset heart failure secondary to COVID-19-induced myocarditis.

    Clinical, Radiological, and Molecular Findings of Acute Encephalitis in a COVID-19 Patient: A Rare Case Report.

    We report a case of encephalitis in a young male patient with severe coronavirus disease 2019 (COVID-19) who initially presented with typical symptoms of fever, dry cough, and shortness of breath but later developed acute respiratory distress syndrome and required mechanical ventilation. Two days post-extubation, the patient developed new-onset generalized tonic-clonic seizures and confusion. Brain MRI showed an abnormal signal in the bilateral medial cortical frontal region. Cerebrospinal fluid (CSF) analysis revealed a picture characteristic of viral infection, with a high white blood cell count and normal glucose and protein levels. After ruling out common causes of viral encephalitis such as herpes simplex virus (HSV), and based on a review of the available literature on the neurological manifestations of COVID-19, this case was labeled acute viral encephalitis secondary to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection.

    Comparative Analysis of Eight Numerical Methods Using Weibull Distribution to Estimate Wind Power Density for Coastal Areas in Pakistan

    Currently, Pakistan is facing severe energy crises and global warming effects; hence, there is an urgent need to expand renewable energy generation. In this context, Pakistan possesses massive wind energy potential along its coastal areas. This paper investigates and numerically analyzes the wind power density potential of these coastal areas. Eight state-of-the-art numerical methods, namely the (a) empirical method, (b) graphical method, (c) WAsP algorithm, (d) energy pattern method, (e) moment method, (f) maximum likelihood method, (g) energy trend method, and (h) least-squares regression method, were used to calculate Weibull parameters. We computed Weibull shape parameters (WSP) and Weibull scale parameters (WCP) for four regions of Pakistan: Jiwani, Gwadar, Pasni, and Ormara. The Weibull parameters from the above-mentioned numerical methods were analyzed and compared to find the optimal numerical method for the coastal areas of Pakistan. Three statistical indicators were used to compare the efficiency of these methods: (i) coefficient of determination (R²), (ii) chi-square (χ²), and (iii) root mean square error (RMSE). The performance validation showed that the energy trend and graphical methods performed poorly over the observed period for the four coastal regions. Further, comparative analyses of actual and estimated wind power density showed that Ormara is the best and Jiwani the worst of the four areas for wind power generation.
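    To make the Weibull machinery concrete, here is a small sketch of one of the listed estimators (the moment method, using the standard Justus approximation) together with the mean wind power density formula P/A = 0.5 * rho * c^3 * Gamma(1 + 3/k). This is not the paper's code, and the wind-speed sample is invented for illustration:

    import numpy as np
    from math import gamma

    def weibull_moment_method(v):
        """Estimate Weibull shape k and scale c (m/s) from wind speeds v."""
        mean, std = np.mean(v), np.std(v, ddof=1)
        k = (std / mean) ** -1.086          # Justus empirical approximation
        c = mean / gamma(1 + 1 / k)
        return k, c

    def wind_power_density(k, c, rho=1.225):
        """Mean wind power density (W/m^2) for a Weibull wind-speed model."""
        return 0.5 * rho * c**3 * gamma(1 + 3 / k)

    v = np.array([4.2, 6.8, 5.1, 7.9, 3.6, 8.4, 6.1, 5.5])  # illustrative sample
    k, c = weibull_moment_method(v)
    print(k, c, wind_power_density(k, c))

    The other listed methods (maximum likelihood, least-squares regression, and so on) differ only in how k and c are fitted; the power density formula is the same.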

    Evolutionary history of Carnivora (Mammalia, Laurasiatheria) inferred from mitochondrial genomes

    The order Carnivora, which currently includes 296 species classified into 16 families, is distributed across all continents. The phylogeny and the timing of diversification of members of the order are still a matter of debate. Here, complete mitochondrial genomes were analysed to reconstruct the phylogenetic relationships and to estimate divergence times among species of Carnivora. We assembled 51 new mitogenomes from 13 families and aligned them with available mitogenomes, selecting only those showing more than 1% nucleotide divergence and excluding those suspected to be of low quality or from misidentified taxa. Our final alignment included 220 taxa representing 2,442 mitogenomes. Our analyses led to a robust resolution of suprafamilial and intrafamilial relationships. We identified 21 fossil calibration points to estimate a molecular timescale for carnivorans. According to our divergence time estimates, crown carnivorans appeared during or just after the Early Eocene Climatic Optimum; all major groups of Caniformia (Cynoidea/Arctoidea; Ursidae; Musteloidea/Pinnipedia) diverged from each other during the Eocene, while all major groups of Feliformia (Nandiniidae; Feloidea; Viverroidea) diversified more recently during the Oligocene, with a basal divergence of Nandinia at the Eocene/Oligocene transition; intrafamilial divergences occurred during the Miocene, except for the Procyonidae, as Potos separated from other genera during the Oligocene.
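    The >1% divergence filter mentioned above amounts to discarding near-duplicate mitogenomes before phylogenetic analysis. A toy sketch of such a filter on aligned sequences (illustrative only, not the authors' pipeline; p-distance computed on gap-free sites, with greedy retention):

    def p_distance(a, b):
        """Proportion of differing sites between two aligned sequences,
        ignoring positions where either sequence has a gap."""
        pairs = [(x, y) for x, y in zip(a, b) if x != '-' and y != '-']
        return sum(x != y for x, y in pairs) / len(pairs)

    def filter_divergent(seqs, threshold=0.01):
        """Greedily keep sequences that differ from every kept one by
        more than `threshold` (here 1% nucleotide divergence)."""
        kept = []
        for name, seq in seqs:
            if all(p_distance(seq, s) > threshold for _, s in kept):
                kept.append((name, seq))
        return kept

    seqs = [("taxonA", "ATGCTA-CGT"), ("taxonB", "ATGCTAACGT"), ("taxonC", "ATGTTAACTT")]
    print([n for n, _ in filter_divergent(seqs)])  # taxonB dropped as a near-duplicate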

    Biscuit contaminants, their sources and mitigation strategies: A review

    The scientific literature is rich in investigations of the presence of various contaminants in biscuits and in articles proposing innovative solutions for their control and prevention. However, the relevant information remains fragmented. The objective of this work was therefore to review the current state of the scientific literature on possible contaminants of biscuits, considering physical, chemical, and biological hazards, and to analyse critically the solutions for reducing such contamination. The raw materials are the primary contributors of a wide range of contaminants. The successive processing steps and machinery must be monitored as well because, although they cannot improve the initial safety status, they can worsen it. The most effective mitigation strategies involve product reformulation and the use of alternative baking technologies to minimize the thermal load. Packaging materials with low oxygen permeability (avoiding direct contact with recycled materials) and reformulation are effective in limiting the growth of contamination during biscuit storage. Continuous monitoring of raw materials, intermediates, finished products, and processing conditions is therefore essential, not only to meet current regulatory restrictions but also to move toward eliminating dietary contaminants and the diseases related to them.