
    High- and low-conductance NMDA receptors are present in layer 4 spiny stellate and layer 2/3 pyramidal neurons of mouse barrel cortex

    NMDA receptors are ion channels activated by the neurotransmitter glutamate in the mammalian brain and are important in synaptic function and plasticity, but are also found in extrasynaptic locations and influence neuronal excitability. There are different NMDA receptor subtypes, which differ in their single-channel conductance. Recently, synaptic plasticity has been studied in mouse barrel cortex, the primary sensory cortex for input from the animal's whiskers. Pharmacological data imply the presence of low-conductance NMDA receptors in spiny stellate neurons of cortical layer 4, but of high-conductance NMDA receptors in pyramidal neurons of layer 2/3. Here, to obtain complementary electrophysiological information on the functional NMDA receptors expressed in layer 4 and layer 2/3 neurons, single NMDA receptor currents were recorded with the patch-clamp method. Both cell types were found to contain high-conductance as well as low-conductance NMDA receptors. The results are consistent with the reported pharmacological data on synaptic plasticity, and with previous claims of a prominent role of low-conductance NMDA receptors in layer 4 spiny stellate neurons, including broad integration, amplification and distribution of excitation within the barrel in response to whisker stimulation, as well as modulation of excitability by ambient glutamate. However, layer 4 cells also expressed high-conductance NMDA receptors. The presence of low-conductance NMDA receptors in layer 2/3 pyramidal neurons suggests that some of these functions may be shared with layer 4 spiny stellate neurons.

    Matter Wave Turbulence: Beyond Kinetic Scaling

    Turbulent scaling phenomena are studied in an ultracold Bose gas away from thermal equilibrium. Fixed points of the dynamical evolution are characterized in terms of universal scaling exponents of correlation functions. The scaling behavior is determined analytically in the framework of quantum field theory, using a nonperturbative approximation of the two-particle irreducible effective action. While perturbative Kolmogorov scaling is recovered at higher energies, scaling solutions with anomalously large exponents arise in the infrared regime of the turbulence spectrum. The extraordinary enhancement in the momentum dependence of long-range correlations could be experimentally accessible in dilute ultracold atomic gases. Such experiments have the potential to provide insight into dynamical phenomena directly relevant also in other present-day focus areas like heavy-ion collisions and early-universe cosmology. Comment: 18 pages, 2 figures.
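The fixed-point analysis summarized above amounts to power-law behaviour of the stationary spectrum. As a hedged sketch in generic notation (the exponent symbols are placeholders, not necessarily the paper's), the single-particle momentum distribution is assumed to take the form

```latex
% Generic scaling ansatz for the stationary momentum distribution:
n(k) \;\propto\; k^{-\zeta},
\qquad
\zeta \;=\;
\begin{cases}
\zeta_{\mathrm{UV}} & \text{perturbative (Kolmogorov-type) exponent at high momenta},\\[2pt]
\zeta_{\mathrm{IR}} > \zeta_{\mathrm{UV}} & \text{anomalously large exponent in the infrared regime}.
\end{cases}
```

The infrared exponent exceeding the perturbative value is what the abstract refers to as the "extraordinary enhancement" of long-range correlations.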

    FEMA's Integration of Preparedness and Development of Robust Regional Offices

    In October 2006, Congress enacted major legislation to reform the function and organization of the Federal Emergency Management Agency (FEMA) in response to the recognized failures in preparation for and response to Hurricane Katrina. The Post-Katrina Emergency Management Reform Act of 2006 (PKEMRA) focused national preparedness responsibilities within FEMA and directed additional resources and responsibilities to FEMA's ten regional offices. As directed by Congress, in October 2008 a National Academy Panel began an independent assessment of FEMA's integration of preparedness functions and progress in development of robust regional offices.
Main Findings: Over the past three years, FEMA has taken significant steps in an effort to integrate preparedness and develop more robust regional offices. These efforts, undertaken by both the previous and current Administrations, are documented throughout this report and should be recognized and applauded. However, FEMA has yet to define specific goals and outcomes that would permit it, Congress or the public to determine when preparedness has been fully integrated into all aspects of FEMA's work and whether the development and ongoing operation of robust regional offices has been achieved. In the absence of well-defined, measurable outcome indicators, the National Academy Panel relied upon the assessments of FEMA leaders and staff, documentation provided by FEMA, and a review of secondary source material to inform its findings and recommendations. Based upon this evidence, the Panel has concluded that, while progress has been made: (1) preparedness is not fully integrated across FEMA, (2) FEMA's regional offices do not yet have the capacity required to ensure the nation is fully prepared, (3) stakeholders are not yet full partners with FEMA in national preparedness, and (4) FEMA has ineffective internal business practices, particularly with regard to human resource management.
The Panel made seven recommendations for FEMA:
- Establish a cross-organizational process, with participation from internal and external stakeholders, to develop a shared understanding of preparedness integration
- Establish a robust set of outcome metrics and standards for preparedness integration, as well as a system to monitor and evaluate progress on an ongoing basis
- Work to eliminate organizational barriers that are adversely impacting the full integration of preparedness across the agency
- Continue to build regional office capacity and monitor implementation consistent with the Administrator's recent policy guidance
- Undertake steps to improve the ongoing working relationship between headquarters and the regions in accord with Panel-identified principles
- Take steps to improve stakeholder engagement and relationships at all levels in accord with Panel-identified principles
- Strengthen internal business practices, especially in the area of human capital planning

    A Fibreoptic Endoscopic Study of Upper Gastrointestinal Bleeding at Bugando Medical Centre in Northwestern Tanzania: a Retrospective Review of 240 Cases.

    Upper gastrointestinal (GI) bleeding is recognized as a common and potentially life-threatening abdominal emergency that needs prompt assessment and aggressive emergency treatment. A retrospective study was undertaken at Bugando Medical Centre in northwestern Tanzania between March 2010 and September 2011 to describe our own experiences with fibreoptic upper GI endoscopy in the management of patients with upper gastrointestinal bleeding in our setting and to compare our results with those from other centres in the world. A total of 240 patients, representing 18.7% of all patients (i.e. 1292) who had fibreoptic upper GI endoscopy during the study period, were studied. Males outnumbered females by a ratio of 2.1:1. Their median age was 37 years and most patients (60.0%) were aged 40 years and below. The vast majority of the patients (80.4%) presented with haematemesis alone, followed by melaena alone in 9.2% of cases. The use of non-steroidal anti-inflammatory drugs, alcohol and smoking prior to the onset of bleeding was recorded in 7.9%, 51.7% and 38.3% of cases respectively. A previous history of peptic ulcer disease was reported in 22 (9.2%) patients. Nine (3.8%) patients were HIV positive. The source of bleeding was accurately identified in 97.7% of patients. Diagnostic accuracy was greater within the first 24 h of the bleeding onset, and in the presence of haematemesis. Oesophageal varices were the most frequent cause of upper GI bleeding (51.3%), followed by peptic ulcers in 25.0% of cases. The majority of patients (60.8%) were treated conservatively. Endoscopic and surgical treatments were performed in 30.8% and 5.8% of cases respectively. 140 (58.3%) patients received blood transfusion. The median length of hospitalization was 8 days and it was significantly longer in patients who underwent surgical treatment and those with higher Rockall scores (P < 0.001). Rebleeding was reported in 3.3% of the patients.
The overall mortality rate of 11.7% was significantly higher in patients with variceal bleeding, shock, hepatic decompensation, HIV infection, comorbidities, malignancy, age > 60 years, and in patients with higher Rockall scores and those who underwent surgery (P < 0.001). Oesophageal varices are the commonest cause of upper gastrointestinal bleeding in our environment and are associated with high morbidity and mortality. The diagnostic accuracy of fibreoptic endoscopy was related to the time interval between the onset of bleeding and endoscopy. Therefore, it is recommended that early endoscopy be performed within 24 h of the onset of bleeding.

    Artificial intelligence in dentistry: a scoping review and closing of observed knowledge gaps through a methodological and a clinical study

    Objectives: The aims of this dissertation were to (1) conduct a scoping review of studies on machine learning (ML) in dentistry and appraise their robustness, (2) perform a benchmarking study to systematically compare various ML algorithms for a specific dental task, and (3) evaluate the influence of an ML-based caries detection software on diagnostic accuracy and decision-making in a randomized controlled trial. Methods: The scoping review included studies using ML in dentistry published between 1 January 2015 and 31 May 2021 on MEDLINE, IEEE Xplore, and arXiv. The risk of bias and reporting quality were assessed with the QUADAS-2 and TRIPOD checklists, respectively. In the benchmarking study, 216 ML models were built using permutations of six ML model architectures (U-Net, U-Net++, Feature Pyramid Networks, LinkNet, Pyramid Scene Parsing Network, and Mask Attention Network), 12 model backbones of varying complexities (ResNet18, ResNet34, ResNet50, ResNet101, ResNet152, VGG13, VGG16, VGG19, DenseNet121, DenseNet161, DenseNet169, and DenseNet201), and three initialization strategies (random, ImageNet, and CheXpert weights). 1,625 dental bitewing radiographs were used for training and testing. Five-fold cross-validation was carried out and model performance was assessed using the F1-score. In the clinical trial, each of 22 dentists examined 20 randomly selected bitewing images for proximal caries; 10 images were evaluated with ML and 10 images without ML. Accuracy in lesion detection and the suggested treatment were evaluated. Results: The scoping review included 168 studies, describing different ML tasks, models, input data, methods to generate reference tests, and performance metrics, impeding comparison across studies. The studies showed considerable risk of bias and moderate adherence to reporting standards. In the benchmarking study, more complex models only minimally outperformed their simpler counterparts, if at all. Models initialized with ImageNet or CheXpert weights outperformed those using random weights (p<0.05). The clinical trial demonstrated that dentists using ML showed increased accuracy (area under the receiver operating characteristic curve [mean (95% confidence interval): 0.89 (0.87–0.90)]) compared with those not using ML [0.85 (0.83–0.86); p<0.05], primarily due to their higher sensitivity [0.81 (0.74–0.87) compared to 0.72 (0.64–0.79); p<0.05]. Notably, dentists using ML also showed a higher frequency of invasive treatment decisions than those not using ML (p<0.05). Conclusion: To facilitate comparisons across ML studies in dentistry, a minimum (core) set of outcomes and metrics should be developed, and researchers should strive to improve the robustness and reporting quality of their studies. ML model choice should be made on an informed basis, and simpler models may often be similarly capable to more complex ones. ML can increase dentists' diagnostic accuracy but can also lead to more invasive treatment.
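The evaluation protocol described above (five-fold cross-validation scored with the F1 metric) can be sketched in a few lines. This is a minimal illustration with a toy threshold "model" and synthetic labels, not the dissertation's segmentation pipeline:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import f1_score

# F1 on a single prediction: harmonic mean of precision and recall.
y_true = np.array([1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0])  # one TP, one FP, one FN
print(f1_score(y_true, y_pred))  # 0.5

# Five-fold cross-validation skeleton: each fold is scored with F1
# and the fold scores are averaged, as in the benchmarking study.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = (X[:, 0] > 0).astype(int)

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Stand-in "model": threshold the first feature. A real study would
    # train a U-Net-style network on the training fold here.
    pred = (X[test_idx, 0] > 0).astype(int)
    scores.append(f1_score(y[test_idx], pred))

print(np.mean(scores))  # 1.0 for this perfectly separable toy problem
```

In the actual study, F1 would be computed per pixel (or per lesion) on segmentation masks rather than on class labels, but the cross-validation and averaging structure is the same.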

    Analysis of thymol in beeswax by solid-phase microextraction (SPME)

    The continued application of synthetic lipophilic acaricides in the treatment of honey bees leads to an accumulation that depends on the frequency, lipophilicity and amount of active ingredient used. This effect is more pronounced in beeswax than in honey; moreover, because these residues are highly persistent, it leads to the emergence of resistance and the loss of the acaricidal effect.[1] This has motivated the search for alternative non-toxic and non-persistent compounds with activity against the bee mite, Varroa jacobsoni. Among these compounds is thymol, a volatile phenolic compound present in thyme. Of the various components of essential oils, it is without doubt the one that has shown the greatest acaricidal effect, and it is used in the treatment of bees either directly or as a component of various formulations.[2] In Portugal it was very recently introduced in the commercial form APIGUARD: a thymol-based gel that thermally controls the release of the active ingredient. The control of thymol residues in beeswax and honey is thus a current challenge from both a sanitary and a food-quality point of view. Solid-phase microextraction (SPME) is a sample-preparation technique based on the sorption of analytes onto the coating of a fused-silica fibre, followed by thermal desorption in the injector of a gas chromatograph (GC). Besides combining the extraction, clean-up and concentration of analytes in a single step, SPME offers a number of advantages over conventional extraction techniques such as liquid-liquid extraction and solid-phase extraction, namely its relative simplicity and speed, low cost and solvent-free extraction, in addition to allowing extraction by direct immersion in a gaseous or liquid sample and by headspace sampling of a liquid or solid sample.[3] Unlike traditional techniques, which achieve quantitative extraction of the analytes, SPME is based on a partition equilibrium of the analyte. This makes SPME quite sensitive to experimental parameters that may affect the partition coefficients of the analytes and, consequently, the sensitivity and reproducibility of the results.[4] The aim of this work is to develop a methodology for the analysis of thymol in contaminated waxes, using benzophenone as the internal standard. First, the technique was optimized by determining the amount of wax, the analysis temperature and the fibre-headspace contact time most suitable for the case under study. In a second phase, several wax foundation sheets deliberately contaminated with thymol and subjected to different storage conditions (refrigerated, in air and in an oven) were analysed. Finally, a calibration curve was constructed and the thymol present in the various wax samples was quantified. Based on the results, for the contamination levels evaluated, the most suitable analytical conditions are the use of 1 g of wax, keeping the fibre in contact with the headspace for 40 minutes at a temperature of 60 ºC. Under these experimental conditions a good linear correlation (r2 = 0.990) was obtained over the concentration range 3.5–14 mg/g. The amount of thymol found in the samples is significantly lower than that added during the manufacture of the foundation sheets, indicating that the storage process is not the most suitable; the decrease in thymol was most evident when the wax sheet was kept in the oven.
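The quantification step above rests on an internal-standard calibration curve: the thymol/benzophenone peak-area ratio is regressed against known thymol concentrations, the fit's r² indicates linearity, and unknown samples are read off the inverted line. A minimal sketch with made-up peak-area ratios (the numbers are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical calibration standards: thymol concentration (mg/g) and
# thymol/benzophenone GC peak-area ratios (illustrative values only).
conc = np.array([3.5, 5.0, 7.0, 9.0, 11.0, 14.0])
ratio = np.array([0.42, 0.61, 0.83, 1.08, 1.31, 1.69])

# Least-squares calibration line: ratio = slope*conc + intercept
slope, intercept = np.polyfit(conc, ratio, 1)

# Coefficient of determination r^2 of the fit
fit = slope * conc + intercept
ss_res = np.sum((ratio - fit) ** 2)
ss_tot = np.sum((ratio - ratio.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Invert the calibration for an unknown sample's measured ratio
unknown_ratio = 0.95
unknown_conc = (unknown_ratio - intercept) / slope
print(round(r2, 3), round(unknown_conc, 1))
```

The internal standard compensates for variability in fibre sorption and injection, which is why the ratio, not the raw thymol peak area, is regressed.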

    Introduction to the nonequilibrium functional renormalization group

    In these lectures we introduce the functional renormalization group out of equilibrium. While in thermal equilibrium typically a Euclidean formulation is adequate, nonequilibrium properties require real-time descriptions. For quantum systems specified by a given density matrix at initial time, a generating functional for real-time correlation functions can be written down using the Schwinger-Keldysh closed time path. This can be used to construct a nonequilibrium functional renormalization group along similar lines as for Euclidean field theories in thermal equilibrium. Important differences include the absence of a fluctuation-dissipation relation for general out-of-equilibrium situations. The nonequilibrium renormalization group takes on a particularly simple form at a fixed point, where the corresponding scale-invariant system becomes independent of the details of the initial density matrix. We discuss some basic examples, for which we derive a hierarchy of fixed-point solutions with increasing complexity from vacuum and thermal equilibrium to nonequilibrium. The latter solutions are then associated with the phenomenon of turbulence in quantum field theory. Comment: Lectures given at the 49th Schladming Winter School 'Physics at all scales: The Renormalization Group' (to appear in the proceedings); 24 pages, 3 figures.
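The starting point mentioned above, a generating functional on the Schwinger-Keldysh closed time path with the initial state encoded in a density matrix, has a standard schematic form (notation assumed here, not quoted from the lectures):

```latex
% Closed-time-path generating functional for real-time correlations,
% with the initial state encoded in the density matrix \rho(t_0):
Z[J;\rho] \;=\; \operatorname{Tr}\!\left[\rho(t_0)\,
  \mathcal{T}_{\mathcal{C}} \exp\!\Big( i \int_{\mathcal{C}} \mathrm{d}^{d}x \, J(x)\,\varphi(x) \Big)\right],
```

where 𝒞 denotes the Keldysh closed time path, 𝒯_𝒞 contour ordering, and real-time correlation functions follow by functional differentiation with respect to the source J.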

    Far-from-equilibrium quantum many-body dynamics

    The theory of real-time quantum many-body dynamics as put forward in Ref. [arXiv:0710.4627] is evaluated in detail. The formulation is based on a generating functional of correlation functions where the Keldysh contour is closed at a given time. Extending the Keldysh contour from this time to a later time leads to a dynamic flow of the generating functional. This flow describes the dynamics of the system and has an explicit causal structure. In the present work it is evaluated within a vertex expansion of the effective action, leading to time evolution equations for Green functions. These equations are applicable to strongly interacting systems as well as to studying the late-time behaviour of nonequilibrium time evolution. For the specific case of a bosonic N-component phi^4 theory with contact interactions, an s-channel truncation is identified to yield equations identical to those derived from the 2PI effective action in next-to-leading order of a 1/N expansion. The presented approach makes it possible to directly obtain non-perturbative dynamic equations beyond the widely used 2PI approximations. Comment: 20 pages, 6 figures; submitted version with added references and typos corrected.

    Fluctuation analysis in nonstationary conditions: single Ca channel current in cortical pyramidal neurons

    Fluctuation analysis is a method which allows measurement of the single channel current of ion channels even when it is too small to be resolved directly with the patch clamp technique. This is the case for voltage-gated Ca2+ channels (VGCCs). They are present in all mammalian central neurons, controlling presynaptic release of transmitter, postsynaptic signaling and synaptic integration. The amplitudes of their single channel currents in a physiological concentration of extracellular Ca2+, however, are small and not well determined. But measurement of this quantity is essential for estimating numbers of functional VGCCs in the membrane and the size of channel-associated Ca2+ signaling domains, and for understanding the stochastic nature of Ca2+ signaling. Here, we recorded the VGCC current in nucleated patches from layer 5 pyramidal neurons in rat neocortex, in physiological external Ca2+ (1-2 mM). The ensemble-averaging of current responses required for conventional fluctuation analysis proved impractical because of the rapid rundown of VGCC currents. We therefore developed a more robust method, using mean current fitting of individual current responses and band-pass filtering. Furthermore, voltage ramp stimulation proved useful. We validated the accuracy of the method by analyzing simulated data. At an external Ca2+ concentration of 1 mM, and a membrane potential of -20 mV, we found that the average single channel current amplitude was about 0.04 pA, increasing to 0.065 pA at 2 mM external Ca2+, and 0.12 pA at 5 mM. The relaxation time constant of the fluctuations was in the range 0.2-0.8 ms. The results are relevant to understanding the stochastic properties of dendritic Ca2+ spikes in neocortical layer 5 pyramidal neurons. With the reported method, single channel current amplitude of native VGCCs can be resolved accurately despite conditions of unstable rundown.
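The core of fluctuation (noise) analysis is the parabolic variance-mean relation for N independent channels with unitary current i: var = i·mean − mean²/N. A minimal sketch of recovering i and N by fitting that relation, using synthetic noise-free data for illustration (the parameter values are hypothetical, and this is not the paper's mean-current-fitting/band-pass method):

```python
import numpy as np

# "True" values used to generate synthetic data (hypothetical):
i_true = 0.05   # single-channel current, pA
n_true = 100.0  # number of channels in the patch

# Sweep of open probabilities; for N binomial channels the ensemble
# mean and variance of the macroscopic current are:
#   mean = N * p * i,   var = N * p * (1 - p) * i^2
p = np.linspace(0.05, 0.95, 19)
mean = n_true * p * i_true
var = n_true * p * (1.0 - p) * i_true**2

# Eliminating p gives the classic parabola  var = i*mean - mean^2/N,
# so i and N follow from a linear least-squares fit in (mean, mean^2).
A = np.column_stack([mean, mean**2])
coef, *_ = np.linalg.lstsq(A, var, rcond=None)
i_est = coef[0]          # estimated single-channel current (pA)
n_est = -1.0 / coef[1]   # estimated channel count
print(i_est, n_est)
```

With real recordings the (mean, variance) pairs are estimated from the fluctuations around a fitted mean response rather than generated analytically, and rundown is what makes the ensemble averaging step unreliable, hence the per-response fitting described in the abstract.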