
    A new medication-based prediction score for postoperative delirium in surgical patients: development and proof of feasibility in a retrospective patient cohort

    Structured risk screening for postoperative delirium (POD) that takes prehospital medication into account is not established. We aimed to develop a POD-risk prediction score based on known risk factors and delirium-risk-increasing drugs, to be used by pharmacists during medication reconciliation at hospital admission, and to test its feasibility in a retrospective cohort of surgical patients. To this end, established POD-risk factors and drugs were extracted from the literature and a score was generated. The score was then tested for feasibility in a retrospective 3-month cohort of surgical patients; for patients with higher scores, suggesting a higher probability of POD, patient charts were screened for documentation of POD. The following POD-risk factors were defined and assigned points for score calculation: age (≥65 years = 1 point / ≥75 years = 2), male sex (1), renal insufficiency (RI; 1), hepatic impairment (HI; Model for End-Stage Liver Disease (MELD) score 10–14 = 1 / ≥15 = 2), delirium-risk-increasing drugs (1 point per drug class), and anticholinergic drug burden (ACB; ≥3 = 1). In the retrospective test cohort of 1174 surgical patients, these factors were distributed as follows: age ≥65 years, 567 patients (48%) / ≥75 years, 303 (26%); male, 652 (55%); RI, 238 (20%); MELD 10–14, 106 (9%) / ≥15, 65 (5%); ≥1 delirium-risk-increasing drug, 418 (36%); ACB ≥3, 106 (9%). The median POD-risk prediction score was 2 (range 0–9). Of 146 patients (12%) with a score ≥5, POD was documented for 43 (30%), there was no evidence of POD for 91 (62%), and data were inconclusive for 12 (8%). For scores ≥7, POD was documented for 50% of the patients with sufficient POD documentation. Overall, POD documentation was poor. In summary, we developed and successfully tested the feasibility of a POD-prediction score assessable by pharmacists at medication reconciliation at hospital admission.
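    The point assignments above translate directly into a simple scoring function. The following sketch follows the abstract's scheme; the parameter names and the worked example are illustrative assumptions, not part of the study.

```python
# Hedged sketch of the medication-based POD-risk score described in the
# abstract. Point assignments follow the abstract; argument names and
# the example patient below are illustrative only.

def pod_risk_score(age, male, renal_insufficiency,
                   meld, delirium_risk_drug_classes, acb):
    """Return the POD-risk prediction score (range 0-9 in the cohort).

    age: years; male / renal_insufficiency: bool
    meld: Model for End-Stage Liver Disease score
    delirium_risk_drug_classes: number of delirium-risk-increasing
        drug classes on the reconciled medication list
    acb: anticholinergic drug burden score
    """
    score = 0
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    if male:
        score += 1
    if renal_insufficiency:
        score += 1
    if meld >= 15:
        score += 2
    elif meld >= 10:
        score += 1
    score += delirium_risk_drug_classes  # 1 point per drug class
    if acb >= 3:
        score += 1
    return score

# Illustrative patient: 78-year-old man, MELD 12, two delirium-risk
# drug classes, ACB 3 -> 2 + 1 + 1 + 2 + 1 = 7 (above the score-5 cut-off).
print(pod_risk_score(78, True, False, 12, 2, 3))  # → 7
```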

    Partial volume correction of brain PET studies using iterative deconvolution in combination with HYPR denoising

    Background: Accurate quantification of PET studies depends on the spatial resolution of the PET data. The commonly limited PET resolution results in partial volume effects (PVE). Iterative deconvolution methods (IDM) have been proposed as a means to correct for PVE. IDM improves the spatial resolution of PET studies without the need for structural information (e.g. MR scans). On the other hand, deconvolution also amplifies noise, resulting in lower signal-to-noise ratios (SNR). The aim of this study was to implement IDM in combination with HighlY constrained back-PRojection (HYPR) denoising to mitigate the poor SNR of conventional IDM.
    Methods: An anthropomorphic Hoffman brain phantom was filled with an [F-18]FDG solution of ~25 kBq mL−1 and scanned for 30 min on a Philips Ingenuity TF PET/CT scanner (Philips, Cleveland, USA) using a dynamic brain protocol with frame durations ranging from 10 to 300 s. Van Cittert IDM was used for partial volume correction of the scans. In addition, HYPR was used to improve the SNR of the dynamic PET images, applied before and/or after IDM. The Hoffman phantom dataset was used to optimise the IDM parameters (number of iterations, type of algorithm, with/without HYPR) and the order of HYPR implementation, based on the best average agreement between measured and actual activity concentrations in the regions. Next, dynamic [C-11]flumazenil (five healthy subjects) and [C-11]PIB (four healthy subjects and four patients with Alzheimer's disease) scans were used to assess the impact of IDM with and without HYPR on plasma input-derived volumes of distribution (VT) across various brain regions.
    Results: For the [C-11]flumazenil scans, HYPR-IDM-HYPR showed an increase of 5 to 20% in the regional VT, whereas for [C-11]PIB an increase or decrease of 0 to 10% was seen, depending on the volume of interest and type of subject (healthy or patient). References for these comparisons were the VT values from the PVE-uncorrected scans.
    Conclusions: IDM improved the quantitative accuracy of measured activity concentrations. Moreover, IDM in combination with HYPR (HYPR-IDM-HYPR) was able to correct for PVE without increasing noise.
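    The Van Cittert iteration named above has a simple additive form: the current estimate is re-blurred with the point-spread function (PSF) and the residual is added back. A toy 1-D sketch, with an illustrative Gaussian PSF and iteration count that are not the study's actual settings:

```python
import numpy as np

# Toy 1-D sketch of Van Cittert iterative deconvolution:
#   f_{k+1} = f_k + alpha * (g - h * f_k)
# where g is the measured signal and h the PSF. The Gaussian PSF width,
# step size and iteration count are illustrative assumptions.

def gaussian_kernel(sigma):
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()                     # normalised PSF

def van_cittert(measured, kernel, iterations=30, alpha=1.0):
    estimate = measured.copy()
    for _ in range(iterations):
        reblurred = np.convolve(estimate, kernel, mode="same")
        estimate = estimate + alpha * (measured - reblurred)
    return estimate

psf = gaussian_kernel(4.0)
truth = np.zeros(200)
truth[95:100] = 25.0                            # narrow "hot" region
blurred = np.convolve(truth, psf, mode="same")  # simulated PVE
recovered = van_cittert(blurred, psf)

# Deconvolution pushes the blurred peak back toward the true activity,
# at the cost of amplified noise in real data (hence the HYPR step).
print(blurred.max(), recovered.max())
```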

    Antitubercular activity assessment of fluorinated chalcones, 2-aminopyridine-3-carbonitrile and 2-amino-4H-pyran-3-carbonitrile derivatives: In vitro, molecular docking and in-silico drug likeliness studies

    A series of previously synthesized fluorinated chalcones and their 2-aminopyridine-3-carbonitrile and 2-amino-4H-pyran-3-carbonitrile derivatives were screened for in vitro antitubercular activity and evaluated by in silico methods. Compound 40 (MIC ~8 μM) was the most potent of all 60 compounds, with potency comparable to broad-spectrum antibiotics such as ciprofloxacin and streptomycin and three times that of pyrazinamide. Additionally, compound 40 was non-toxic towards the human liver cell line LO2 in the MTT assay, indicating selectivity. Compounds 30, 27, 50, 41, 51, and 60 exhibited streptomycin-like activity (MIC ~16–18 μM). The fluorinated chalcones and the pyridine and pyran derivatives were found to occupy prime positions in the thymidylate kinase enzymatic pocket in molecular docking studies. Compound 40, the most potent molecule, showed a binding energy of −9.67 kcal/mol when docked against thymidylate kinase, consistent with its in vitro MIC value (~8 μM). These findings suggest that 2-aminopyridine-3-carbonitrile and 2-amino-4H-pyran-3-carbonitrile derivatives are prospective lead molecules for the development of novel antitubercular drugs.

    Signature of strong atom-cavity interaction on critical coupling

    We study a critically coupled cavity doped with resonant atoms, with metamaterial slabs as mirrors. We show how resonant atom-cavity interaction can lead to a splitting of the critical-coupling dip. The results are explained in terms of the frequency and lifetime splitting of the coupled system.
    Comment: 8 pages, 5 figures
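    The frequency splitting behind the split dip can be illustrated with the standard two-coupled-mode picture (not the paper's metamaterial transfer-matrix calculation): a cavity mode and an atomic resonance at the same frequency, coupled with strength g, yield eigenfrequencies separated by 2g. All values below are illustrative.

```python
import numpy as np

# Minimal two-mode sketch of atom-cavity frequency splitting. A cavity
# mode and atomic transition, both at frequency w0 (arbitrary units),
# coupled with strength g, give eigenfrequencies w0 ± g, i.e. a
# splitting of 2g. Values are illustrative, not from the paper.

w0, g = 1.0, 0.05
H = np.array([[w0, g],
              [g, w0]])

eigenfrequencies = np.linalg.eigvalsh(H)   # sorted ascending
splitting = eigenfrequencies[1] - eigenfrequencies[0]
print(splitting)                            # splitting = 2g
```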

    DEVELOPMENT AND EVALUATION OF HIGH PROTEIN AND LOW GLUTEN BISCUITS

    The present study was conducted to provide high-protein, low-gluten biscuits for children. First, a formulation was standardized by substituting wheat flour with finger millet and soybean flour to make low-gluten biscuits. Whole egg was then added to the standardized formula, up to the acceptable limit, to make high-protein, low-gluten biscuits. The prepared biscuits were analyzed for chemical, mineral, physical and sensory characteristics. From these results, the best samples, C (100% refined wheat flour), T5 (wheat flour 20%, finger millet flour 40%, soybean flour 40%) and T9 (wheat flour 20%, finger millet flour 40%, soybean flour 40%, whole egg 40%), were selected for shelf-life studies. The sensory scores of the control sample decreased from the initial day to the 90th day, as did those of the T5 and T9 samples. The results revealed a statistically significant increase in the mean moisture content of the biscuits from day 0 to day 90 of storage. The peroxide values of the biscuits on day 0 and day 90 of storage were 5.53±0.14, 5.92±0.09 and 5.90±0.12 meq/kg versus 6.05±0.06, 6.42±0.11 and 6.55± meq/kg for the control, T5 and T9, respectively. Microbial counts (total bacterial count and total fungal count) of the biscuits were recorded on days 0, 30, 60 and 90.

    Comparative study of echocardiography and electrocardiography criteria for detecting left ventricular hypertrophy in hypertensive patients

    Background: The study aimed to compare seven different electrocardiogram (ECG) criteria for diagnosing left ventricular hypertrophy (LVH), with echocardiography as the diagnostic standard, in hypertensive patients.
    Methods: This was a hospital-based, cross-sectional study conducted in the out-patient department and medical wards of a tertiary care hospital in Bangalore, over a total duration of 12 months. All hypertensive patients were examined for the presence of LVH using echocardiography and ECG. Seven different ECG criteria were applied to diagnose LVH, and the sensitivity, specificity, kappa value, positive predictive value and negative predictive value of each criterion were then calculated.
    Results: Of the 100 patients studied, 34 had LVH as diagnosed by echocardiography. The Sokolow-Lyon criteria had a sensitivity of 35% and specificity of 94%. Cornell voltage had a sensitivity of 26% and specificity of 95%. Modified Cornell voltage had a sensitivity of 32% and specificity of 94%. Framingham-adjusted Cornell voltage, the Minnesota code and Cornell product each had a sensitivity of 23.5% and specificity of 98.4%. The Framingham score had a sensitivity of 38% and specificity of 95.4%.
    Conclusions: Among all the criteria used in the study, the Framingham score showed better sensitivity than the others. In the evaluation of hypertensive patients for LVH, ECG with any of the commonly used criteria is of limited value, and echocardiography is the method of choice.
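    All of the reported metrics derive from a 2x2 confusion matrix of each ECG criterion against echocardiographic LVH. A sketch of the calculations; the example counts are illustrative (chosen to be consistent with the reported 35%/94% for Sokolow-Lyon and 34 LVH cases in 100 patients), not the study's raw data.

```python
# Hedged sketch of the diagnostic metrics computed in the study, from
# a 2x2 confusion matrix (ECG criterion vs. echocardiographic LVH).
# The counts below are illustrative, not the study's raw data.

def diagnostic_metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    # Cohen's kappa: observed agreement corrected for chance agreement
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (po - pe) / (1 - pe)
    return sensitivity, specificity, ppv, npv, kappa

# Illustrative counts: 12 of 34 echo-LVH patients positive on the ECG
# criterion, 4 false positives among the 66 without LVH.
sens, spec, ppv, npv, kappa = diagnostic_metrics(tp=12, fp=4, fn=22, tn=62)
print(round(sens, 2), round(spec, 2))  # → 0.35 0.94
```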

    Cosmic rays and the magnetic field in the nearby starburst galaxy NGC 253. II. The magnetic field

    Original article can be found at: http://www.aanda.org/ Copyright The European Southern Observatory (ESO). DOI: 10.1051/0004-6361/200911698
    Context: There are several edge-on galaxies with a known magnetic field structure in their halo. A vertical magnetic field significantly enhances cosmic-ray transport from the disk into the halo, which could explain the existence of the observed radio halos.
    Aims: We observed NGC 253, which possesses one of the brightest radio halos discovered so far. Since this galaxy is not exactly edge-on (i = 78°), the disk magnetic field has to be modeled and subtracted from the observations in order to study the magnetic field in the halo.
    Methods: We used radio continuum polarimetry with the VLA in D-configuration and the Effelsberg 100-m telescope. NGC 253 has a very bright, point-like nuclear source, so we had to correct for instrumental polarization. We used appropriate Effelsberg beam patterns and developed a tailored polarization calibration to cope with the off-axis location of the nucleus in the VLA primary beams. Observations at λλ 6.2 cm and 3.6 cm were combined to calculate the rotation measure (RM) distribution and to correct for Faraday rotation.
    Results: The large-scale magnetic field consists of a disk (r, φ) and a halo (r, z) component. The disk component can be described as an axisymmetric spiral field pointing inwards with a pitch angle of 25° ± 5°, symmetric with respect to the plane (even parity). This field dominates in the disk, so the observed magnetic field orientation is disk-parallel at small distances from the midplane. The halo field shows a prominent X-shape centered on the nucleus, similar to that of other edge-on galaxies. We propose a model in which the halo field lines lie along a cone with an opening angle of 90° ± 30° and point away from the disk in both the northern and southern halo (even parity). We cannot exclude that the field points inwards in the northern halo (odd parity). The X-shaped halo field follows the lobes seen in Hα and soft X-ray emission.
    Conclusions: Dynamo action and a disk wind can explain the X-shaped halo field. The nuclear starburst-driven superwind may further amplify and align the halo field by compressing the lobes of the expanding superbubbles. The disk wind is a promising candidate for the origin of the gas in the halo and for the expulsion of small-scale helical fields, as required for efficient dynamo action.
    Peer reviewed

    Techno-economic analysis of the distribution system with integration of distributed generators and electric vehicles

    Electric vehicles (EVs) have become a feasible alternative to conventional vehicles due to their technical and environmental benefits. The rapid penetration of EVs may significantly impact the distribution system (DS) through the adverse effects of EV charging and grid-integration technologies. To compensate for the additional EV load on top of the existing demand on the DS, distributed generators (DGs) are integrated into the grid. Owing to the stochastic nature of the DGs and the EV load, integrating DGs alone can reduce power losses and raise the voltage level, but not enough to ensure system stability. Here, an EV that acts as a load in grid-to-vehicle (G2V) mode during charging can, through its bidirectional mode of operation, act as an energy source in vehicle-to-grid (V2G) mode while discharging. V2G is a novel resource for energy storage and for the provision of up- and down-regulation. The article proposes a smart charging model for EVs that estimates off-load and peak-load times over a period of time and allocates charging and discharging based on constraints on the state of charge (SoC), power, and intermittent load demand. A standard IEEE 33-node DS integrated with an EV charging station (EVCS) and DGs is used to reduce losses and improve the voltage profile of the proposed system. Simulations are carried out for various cases to assess the effective utilization of V2G for stable operation of the DS. A cost-benefit analysis (CBA) is also performed for the G2V and V2G modes of operation over a 24-h horizon.
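    The core of such a smart-charging scheme, charge in off-peak hours and discharge (V2G) in peak hours subject to power and SoC limits, can be sketched in a few lines. This is a minimal toy, not the article's optimization model; the 24-h load profile, battery capacity and limits are illustrative assumptions.

```python
# Minimal sketch of SoC-constrained G2V/V2G allocation: charge below
# the average-load threshold, discharge above it, with power clipped so
# the state of charge stays within [soc_min, soc_max]. All numbers are
# illustrative assumptions, not the article's data or method.

def schedule_ev(load, soc0, capacity, p_max, soc_min=0.2, soc_max=0.9):
    threshold = sum(load) / len(load)     # simple peak/off-peak split
    soc, plan = soc0, []
    for demand in load:                   # one step per hour
        if demand < threshold:            # off-peak -> G2V (charge)
            p = min(p_max, (soc_max - soc) * capacity)
        else:                             # peak -> V2G (discharge)
            p = -min(p_max, (soc - soc_min) * capacity)
        soc += p / capacity               # kW over 1 h on a kWh battery
        plan.append(round(p, 2))
    return plan, soc

# Illustrative hourly feeder load (kW) over a 24-h horizon:
load = [30, 28, 27, 26, 25, 27, 35, 45, 55, 60, 62, 58,
        55, 50, 48, 47, 50, 58, 65, 70, 66, 55, 42, 34]
plan, final_soc = schedule_ev(load, soc0=0.5, capacity=50.0, p_max=7.0)
print(plan, final_soc)
```

    The same plan vector would feed a cost-benefit analysis directly: revenue accrues on negative (V2G) entries at peak tariffs and cost on positive (G2V) entries at off-peak tariffs.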