
    Resolution of secondary pulmonary alveolar proteinosis following treatment of rhinocerebral aspergillosis

    Summary: Pulmonary alveolar proteinosis can be secondary to inhaled dust exposure, malignancy, and chronic pulmonary infections. However, pulmonary alveolar proteinosis secondary to extrapulmonary aspergillosis has never been reported. We report herein a case of pulmonary alveolar proteinosis secondary to invasive rhinocerebral aspergillosis. Neither immune modulators nor whole lung lavage was applied during the treatment course. The severe respiratory distress subsided, hypoxia resolved, and the radiological infiltrates improved following the successful treatment of the invasive rhinocerebral aspergillosis alone.

    Fabrication of multianalyte CeO2 nanograin electrolyte–insulator–semiconductor biosensors by using CF4 plasma treatment

    Multianalyte CeO2 biosensors have been demonstrated to detect pH, glucose, and urea concentrations. To enhance the multianalyte sensing capability of these biosensors, CF4 plasma treatment was applied to create nanograin structures on the CeO2 membrane surface and thereby increase the contact surface area. Multiple material analyses indicated that crystallization or grainization caused by the incorporation of fluorine atoms during plasma treatment might be related to the formation of the nanograins. Because of the changes in surface morphology and crystalline structure, the multianalyte sensing performance was considerably enhanced. Multianalyte CeO2 nanograin electrolyte–insulator–semiconductor biosensors exhibit potential for use in future biomedical sensing device applications.

    An LSTM Based Generative Adversarial Architecture for Robotic Calligraphy Learning System

    Robotic calligraphy is a very challenging task for robotic manipulators and can support industrial manufacturing. The active writing mechanism of such robots requires a large training set that includes sequence information for the writing trajectory. However, manually labelling this training data is time-consuming for researchers. This paper proposes a machine calligraphy learning system using a Long Short-Term Memory (LSTM) network and a generative adversarial network (GAN), which enables robots to learn and generate the stroke sequences of Chinese characters (i.e., writing trajectories). To reduce the size of the training set, a generative adversarial architecture combining an LSTM network and a discrimination network is established so that a robotic manipulator can learn Chinese calligraphy at the stroke level. In particular, this learning system converts a Chinese character stroke image into a trajectory sequence in the absence of stroke-trajectory writing-order information. Owing to its strength in handling motion sequences, the LSTM network is used to explore the writing order of trajectory points. Each generation pass of the adversarial architecture runs the LSTM for a number of loops. In each loop, the robot continues to write by following a new trajectory point, which the LSTM generates from the previously written strokes; the written stroke, in image form, is fed back into the next loop of the LSTM network until the complete stroke is written. The final output of the LSTM network is then evaluated by the discriminative network. In addition, a policy-gradient algorithm based on reinforcement learning is employed to help the robot find the best policy. The experimental results show that the proposed learning system can effectively produce a variety of high-quality Chinese stroke writing.
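    The autoregressive generation loop described above can be sketched as follows. This is a minimal illustration, not the paper's architecture: the tiny untrained LSTM cell, the output projection, the 2-D trajectory points, and the stand-in discriminator score are all hypothetical placeholders; only the loop structure (each new point conditioned on what was written so far, final stroke scored by a discriminator) mirrors the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class TinyLSTMCell:
        """Minimal LSTM cell with random, untrained weights, standing in
        for the trajectory generator described in the abstract."""
        def __init__(self, input_dim, hidden_dim):
            self.hidden_dim = hidden_dim
            # One stacked weight matrix for the input/forget/cell/output gates.
            self.W = rng.normal(0.0, 0.1, (4 * hidden_dim, input_dim + hidden_dim))
            self.b = np.zeros(4 * hidden_dim)

        def step(self, x, h, c):
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
            return h, c

    def discriminator_score(stroke):
        """Hypothetical discriminator: any bounded score of the finished
        stroke; a trained network would replace this."""
        return float(1.0 / (1.0 + np.exp(-stroke.mean())))

    def generate_stroke(n_points=16, point_dim=2, hidden_dim=8):
        """Autoregressively emit trajectory points: each new point is
        produced by the LSTM conditioned on what has been written so far."""
        cell = TinyLSTMCell(point_dim, hidden_dim)
        W_out = rng.normal(0.0, 0.1, (point_dim, hidden_dim))
        h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
        point = np.zeros(point_dim)       # pen starts at the origin
        stroke = []
        for _ in range(n_points):
            h, c = cell.step(point, h, c)
            point = W_out @ h             # next trajectory point
            stroke.append(point)
        return np.array(stroke)

    stroke = generate_stroke()
    print(stroke.shape)                   # one stroke as an (n_points, 2) array
    print(discriminator_score(stroke))    # a score in (0, 1)
    ```

    In the full system, the discriminator's score would drive a policy-gradient update of the LSTM's weights; here both networks are frozen, so the sketch only shows the data flow of one generation pass.
    
    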

    Distributed Training Large-Scale Deep Architectures

    Scale of data and scale of computation infrastructure together enable the current deep learning renaissance. However, training large-scale deep architectures demands both algorithmic improvement and careful system configuration. In this paper, we focus on employing the system approach to speed up large-scale training. Via lessons learned from our routine benchmarking effort, we first identify bottlenecks and overheads that hinder data parallelism. We then devise guidelines that help practitioners configure an effective system and fine-tune parameters to achieve the desired speedup. Specifically, we develop a procedure for setting minibatch size and choosing computation algorithms. We also derive lemmas for determining the quantity of key components such as the number of GPUs and parameter servers. Experiments and examples show that these guidelines help effectively speed up large-scale deep learning training.
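    The component-sizing guidelines can be illustrated with a back-of-the-envelope sketch. Note the caveat: the effective-minibatch rule below is the standard synchronous data-parallel convention, and the parameter-server count is a simple bandwidth-budget heuristic invented here for illustration; neither is the paper's actual procedure or lemma.

    ```python
    import math

    def effective_minibatch(per_gpu_batch, num_gpus):
        """In synchronous data parallelism each GPU processes its own shard,
        so the effective minibatch per update is the per-GPU batch times the
        number of GPUs (standard convention, not the paper's text)."""
        return per_gpu_batch * num_gpus

    def min_parameter_servers(model_size_bytes, num_workers,
                              net_bandwidth_bps, step_time_s):
        """Illustrative heuristic (not the paper's lemma): provision enough
        parameter servers that the gradient traffic pushed per step fits in
        one step's network budget per server."""
        traffic = model_size_bytes * num_workers        # gradients per step
        budget_per_server = net_bandwidth_bps * step_time_s
        return max(1, math.ceil(traffic / budget_per_server))

    # Example: 8 GPUs at batch 32 each; a 250 MB model, 8 workers,
    # 10 Gb/s (~1.25e9 B/s) links, 0.5 s compute per step.
    print(effective_minibatch(32, 8))                       # 256
    print(min_parameter_servers(250e6, 8, 1.25e9, 0.5))     # 4
    ```

    The heuristic makes the qualitative point of the paper's system approach: as the model grows or the step time shrinks, communication becomes the bottleneck and more parameter servers are needed to keep the GPUs busy.
    
    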