211 research outputs found

    In Vitro Formation of Urinary Stones : Generation of Spherulites of Calcium Phosphate in Gel and Overgrowth with Calcium Oxalate Using a New Flow Model of Crystallization

    Get PDF
    Calcium phosphate (CaP) has been detected in the majority of urinary stones containing predominantly calcium oxalate (CaOx). Therefore, crystal phases of CaP might play an important role in the formation of urinary calcium stones in general. Very often, the CaP found in stones or in human kidney tissue occurs in the shape of small spherulites. In this paper, we report on a new flow model of crystallization (FMCG), which has been used to generate spherulites of CaP in a gel matrix of 1% agar-agar at 37°C from a supersaturated, metastable solution continuously flowing over the gel surface. Scanning electron microscopy (SEM), X-ray diffraction and microscopic Fourier transform infrared spectroscopy (FTIR) revealed that the particles formed (diameter: up to 200 μm) consisted of a poorly crystalline core of carbonatoapatite partly surrounded by a well-crystallized shell of octacalcium phosphate (OCP) showing radially oriented sheet-like structures. Subsequently, CaOx was grown on these spherulites from a flow of a correspondingly supersaturated solution conducted over the gel matrix. SEM showed that growth of calcium oxalate monohydrate (COM) was characteristically induced by the OCP shell: radial sheet-like forms of OCP were directly continued by COM showing a certain radial orientation. The model of crystallization in gel matrices applied here should be well suited to simulating the process of urinary stone formation under in vitro conditions.

    Nosocomial outbreak of VIM-2 metallo-β-lactamase-producing Pseudomonas aeruginosa associated with retrograde urography

    Get PDF
    Pseudomonas aeruginosa is well adapted to the hospital setting and can cause a wide array of nosocomial infections that occasionally culminate in recalcitrant outbreaks. In the present study, we describe the first nosocomial outbreak of infection caused by blaVIM-2-positive P. aeruginosa in Germany. In November and December 2007, highly resistant P. aeruginosa isolates were recovered from the urine of 11 patients in the Department of Urology of a University Hospital. Bacterial isolates were typed by multilocus sequence typing and screened for known metallo-β-lactamase (MBL) genes by PCR. Environmental sources of transmission were tested for bacterial contamination using surveillance cultures. Furthermore, a matched case–control study was performed in search of medical procedures significantly associated with case status. Typing of recovered isolates confirmed VIM-2 MBL-producing P. aeruginosa of sequence type 175 in all cases. Surveillance cultures did not lead to the identification of an environmental source of the outbreak strain. Case–control analysis revealed retrograde urography as the only exposure significantly associated with case status. The analyses suggest the transmission of a single clone of VIM-2 MBL-producing P. aeruginosa leading to the infection of 11 patients within 47 days. Events in temporal proximity to retrograde urographies appear to have facilitated infection in the majority of cases. Department-specific infection control measures, including reinforced hygiene procedures during retrograde urography, quickly terminated the outbreak.
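
    The case–control analysis described above identifies an exposure associated with case status. As a minimal, hypothetical sketch (not the study's data or its matched analysis), an unmatched 2×2 exposure table can be tested for association as follows; the counts are invented, and a matched design would instead call for McNemar's test or conditional logistic regression.

```python
# Hypothetical sketch of testing a case-control exposure association.
# The counts are invented for illustration; they are NOT the outbreak data,
# and the real study used a matched design rather than this unmatched test.
from scipy.stats import fisher_exact

# Rows: exposed / not exposed to retrograde urography
# Columns: cases / controls
table = [[9, 3],
         [2, 19]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```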

    Weighing Counts: Sequential Crowd Counting by Reinforcement Learning

    Full text link
    We formulate counting as a sequential decision problem and present a novel crowd counting model solvable by deep reinforcement learning. In contrast to existing counting models that directly output count values, we divide one-step estimation into a sequence of much easier and more tractable sub-decision problems. This sequential decision nature corresponds exactly to a physical process in reality: scale weighing. Inspired by scale weighing, we propose a novel 'counting scale' termed LibraNet, where the count value is analogized by weight. By virtually placing a crowd image on one side of a scale, LibraNet (the agent) sequentially learns to place appropriate weights on the other side to match the crowd count. At each step, LibraNet chooses one weight (action) from the weight box (the pre-defined action pool) according to the current crowd image features and the weights already placed on the scale pan (state). LibraNet is required to learn to balance the scale according to the feedback of the needle (Q values). We show that LibraNet exactly implements scale weighing by visualizing the decision process of how LibraNet chooses actions. Extensive experiments demonstrate the effectiveness of our design choices and report state-of-the-art results on a few crowd counting benchmarks. We also demonstrate good cross-dataset generalization of LibraNet. Code and models are made available at: https://git.io/libranet Comment: Accepted to Proc. Eur. Conf. Computer Vision (ECCV) 2020
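
    The weighing analogy above maps naturally onto a value-based agent that repeatedly picks a weight from a fixed box until the scale balances. Below is a minimal, hedged sketch of such a loop, assuming a DQN-style Q-network; the weight box, feature dimension, network sizes and stopping action are illustrative assumptions, not the published LibraNet design.

```python
# Sketch of a "counting scale" decision loop: a Q-network repeatedly picks a
# weight from a predefined weight box until it chooses the end action.
# All sizes and values below are assumptions for illustration only.
import torch
import torch.nn as nn

WEIGHT_BOX = [-10, -5, -1, 1, 5, 10, 0]   # last entry treated as the "end" action
FEAT_DIM = 128                            # assumed crowd-image feature dimension

class QNet(nn.Module):
    def __init__(self, feat_dim=FEAT_DIM, n_actions=len(WEIGHT_BOX)):
        super().__init__()
        # state = image features concatenated with the running total of placed weights
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, feat, placed):
        state = torch.cat([feat, placed.unsqueeze(-1)], dim=-1)
        return self.mlp(state)            # one Q-value per weight in the box

@torch.no_grad()
def weigh(qnet, feat, max_steps=20):
    """Greedily place weights until the 'end' action is chosen for every image."""
    placed = torch.zeros(feat.shape[0])
    for _ in range(max_steps):
        q = qnet(feat, placed)
        action = q.argmax(dim=-1)         # pick the weight with the highest Q-value
        if (action == len(WEIGHT_BOX) - 1).all():
            break                         # scale judged balanced for the whole batch
        placed = placed + torch.tensor([float(WEIGHT_BOX[a]) for a in action])
    return placed                         # accumulated weight ~ predicted count

# Usage (hypothetical): counts = weigh(QNet(), image_features)  # image_features: [B, FEAT_DIM]
```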

    On Multi-objective Policy Optimization as a Tool for Reinforcement Learning

    Full text link
    Many advances that have improved the robustness and efficiency of deep reinforcement learning (RL) algorithms can, in one way or another, be understood as introducing additional objectives, or constraints, in the policy optimization step. This includes ideas as far-ranging as exploration bonuses, entropy regularization, and regularization toward teachers or data priors when learning from experts or in offline RL. Often, the task reward and auxiliary objectives are in conflict with each other, and it is therefore natural to treat these examples as instances of multi-objective (MO) optimization problems. We study the principles underlying multi-objective RL (MORL) and introduce a new algorithm, Distillation of a Mixture of Experts (DiME), that is intuitive and scale-invariant under some conditions. We highlight its strengths on standard MO benchmark problems and consider case studies in which we recast offline RL and learning from experts as MO problems. This leads to a natural algorithmic formulation that sheds light on the connection between existing approaches. For offline RL, we use the MO perspective to derive a simple algorithm that optimizes for the standard RL objective plus a behavioral cloning term. This outperforms the state of the art on two established offline RL benchmarks.
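
    The offline-RL case study mentioned above, an algorithm that optimizes the standard RL objective plus a behavioral cloning term, can be illustrated with a generic actor loss of that shape. This is a hedged sketch of such a combined objective, not the DiME algorithm itself; the deterministic policy, weighting and mean-squared-error cloning term are assumptions for illustration.

```python
# Generic sketch of an offline-RL actor loss: maximize the critic's value of the
# policy's actions while staying close to the dataset actions (behavioral cloning).
# The specific weighting and loss form are illustrative assumptions, not DiME.
import torch
import torch.nn.functional as F

def actor_loss(policy, q_net, batch, bc_weight=1.0):
    """batch: dict with 'obs' [B, obs_dim] and 'actions' [B, act_dim] from the offline dataset."""
    obs, data_actions = batch["obs"], batch["actions"]
    pred_actions = policy(obs)                        # deterministic policy output
    rl_term = -q_net(obs, pred_actions).mean()        # standard RL objective: maximize Q
    bc_term = F.mse_loss(pred_actions, data_actions)  # behavioral cloning toward the data
    return rl_term + bc_weight * bc_term
```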

    Feature extraction for the analysis of colon status from the endoscopic images

    Get PDF
    BACKGROUND: Extracting features from colonoscopic images is essential for obtaining descriptors that characterize the properties of the colon. These features are employed in the computer-assisted diagnosis of colonoscopic images to assist the physician in determining the colon status. METHODS: Endoscopic images contain rich texture and color information. Novel schemes are developed to extract new texture features from the texture spectra in the chromatic and achromatic domains, and color features for a selected region of interest from each color component histogram of the colonoscopic images. These features are reduced in size using Principal Component Analysis (PCA) and are evaluated using a Backpropagation Neural Network (BPNN). RESULTS: Features extracted from endoscopic images were tested to classify the colon status as either normal or abnormal. The classification results obtained show the features' capability for classifying the colon status. The average classification accuracy obtained using a hybrid of the texture and color features with PCA (τ = 1%) is 97.72%, higher than the average classification accuracy using only texture (96.96%, τ = 1%) or color (90.52%, τ = 1%) features. CONCLUSION: Novel methods for extracting new texture- and color-based features from colonoscopic images to classify the colon status have been proposed. A new approach using PCA in conjunction with a BPNN for evaluating the features has also been proposed. The preliminary test results support the feasibility of the proposed method.
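
    The pipeline described above (feature vectors reduced with PCA and classified with a backpropagation neural network) can be sketched with off-the-shelf tools. The snippet below is an illustrative stand-in: the feature extraction itself is assumed to have already produced the matrix X, the data are random placeholders, and a 99%-variance PCA setting is used in place of the paper's τ = 1% criterion.

```python
# Illustrative PCA + backpropagation-neural-network classification pipeline.
# X and y are random placeholders; real inputs would be the texture/color
# features extracted from colonoscopic images, labeled normal vs. abnormal.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))          # placeholder feature matrix (one row per image)
y = rng.integers(0, 2, size=200)         # placeholder labels: 0 = normal, 1 = abnormal

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.99),              # keep components explaining 99% of the variance
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```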

    Pattern Recognition and Event Reconstruction in Particle Physics Experiments

    Full text link
    This report reviews methods of pattern recognition and event reconstruction used in modern high energy physics experiments. After a brief introduction to general concepts of particle detectors and statistical evaluation, different approaches in global and local methods of track pattern recognition are reviewed, with their typical strengths and shortcomings. The emphasis then moves to methods which estimate the particle properties from the signals that pattern recognition has associated with each track candidate. Finally, the global reconstruction of the event is briefly addressed. Comment: 101 pages, 58 figures
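
    One concrete example of estimating particle properties from the signals associated by pattern recognition is the track fit. The sketch below shows the simplest possible case, a weighted least-squares straight-line fit to hits already assigned to one track; real experiments use helix models, magnetic-field corrections and Kalman-filter fits, so this is an illustration of the idea only.

```python
# Weighted least-squares straight-line fit to hits associated with one track:
# a minimal illustration of the track-fitting step after pattern recognition.
import numpy as np

def fit_straight_track(z, x, sigma):
    """Fit x(z) = a + b*z to hit positions x at detector planes z, with resolutions sigma."""
    w = 1.0 / sigma**2
    A = np.vstack([np.ones_like(z), z]).T            # design matrix [1, z]
    cov = np.linalg.inv(A.T @ (w[:, None] * A))      # parameter covariance matrix
    params = cov @ (A.T @ (w * x))                   # weighted least-squares estimate (a, b)
    chi2 = np.sum(w * (x - A @ params) ** 2)         # goodness of fit
    return params, cov, chi2

# Toy example: 8 hits along a track, 100-micron resolution (lengths in cm)
z = np.linspace(0.0, 100.0, 8)
x_true = 0.5 + 0.02 * z
x_meas = x_true + np.random.default_rng(1).normal(0.0, 0.01, size=z.size)
params, cov, chi2 = fit_straight_track(z, x_meas, np.full(z.size, 0.01))
print("intercept, slope:", params, " chi2/ndf:", chi2 / (z.size - 2))
```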

    Neural network generated parametrizations of deeply virtual Compton form factors

    Full text link
    We have generated a parametrization of the Compton form factor (CFF) H based on data from deeply virtual Compton scattering (DVCS) using neural networks. This approach offers an essentially model-independent fitting procedure, which provides realistic uncertainties. Furthermore, it facilitates the propagation of uncertainties from experimental data to CFFs. We assumed dominance of the CFF H and used HERMES data on DVCS off unpolarized protons. We predict the beam charge-spin asymmetry for a proton at the kinematics of the COMPASS II experiment. Comment: 16 pages, 5 figures
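
    A common way to obtain realistic, data-driven uncertainties in neural-network fits of this kind is an ensemble ("replica") approach: many small networks are trained on copies of the data resampled within their experimental errors, and the spread of the ensemble at any kinematic point gives the uncertainty band. The sketch below illustrates that generic idea on invented toy data; it is not the paper's actual fitting code, data or architecture, and whether this exact resampling scheme matches the paper's procedure is an assumption.

```python
# Sketch of replica-style uncertainty propagation for a neural-network fit.
# The data, errors and network architecture below are invented toy values.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
xB = np.linspace(0.05, 0.3, 25)                  # toy kinematic points
y_obs = np.sin(10.0 * xB) + 0.1                  # toy observable values
y_err = np.full_like(y_obs, 0.05)                # toy experimental uncertainties

replicas = []
for k in range(50):
    y_rep = y_obs + rng.normal(0.0, y_err)       # resample the data within its errors
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=k)
    net.fit(xB.reshape(-1, 1), y_rep)
    replicas.append(net)

# Prediction band at new kinematics: mean +/- standard deviation over the ensemble
x_new = np.linspace(0.05, 0.3, 100).reshape(-1, 1)
preds = np.array([net.predict(x_new) for net in replicas])
band_mean, band_std = preds.mean(axis=0), preds.std(axis=0)
print("band width at first point:", band_std[0])
```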