
    Combining social network analysis and the NATO Approach Space to define agility. Topic 2: networks and networking

    This paper takes the NATO SAS-050 Approach Space, a widely accepted model of command and control, and gives each of its primary axes a quantitative measure using social network analysis. This means that the actual point in the approach space adopted by real-life command and control organizations can be plotted, along with the way in which that point varies over time and function. Part 1 of the paper presents the rationale behind this innovation and how it was verified using theoretical data. Part 2 shows how the enhanced approach space was put to use in the context of a large-scale military command post exercise. Agility is represented by the number of distinct areas in the approach space that the organization was able to occupy. There was a marked disparity between where the organization thought it should be and where it actually was; furthermore, agility varied across function. The humans in this particular scenario bestowed upon the organization the levels of agility that were observed, so the findings are properly considered from a socio-technical perspective.
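    The quantitative mapping described above could, for instance, pair each axis with a standard network measure. Below is a minimal Python sketch assuming, hypothetically, that graph density stands in for patterns of interaction and Freeman degree centralization for the allocation of decision rights; these metric choices are illustrative, not the paper's actual measures.

```python
# Hypothetical mapping of social-network metrics onto NATO SAS-050 axes
# (illustrative only; not the measures used in the paper).

def density(edges, n):
    """Fraction of possible undirected links that are present
    (a proxy for 'patterns of interaction')."""
    return 2 * len(edges) / (n * (n - 1))

def degree_centralization(edges, n):
    """Freeman degree centralization: 1.0 for a star (fully centralized),
    0.0 for a ring (a proxy for 'allocation of decision rights')."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    dmax = max(deg)
    # Normalize by the maximum possible sum of differences (a star graph).
    return sum(dmax - d for d in deg) / ((n - 1) * (n - 2))

# A 5-node star: one commander linked to four subordinates.
star = [(0, 1), (0, 2), (0, 3), (0, 4)]
print(density(star, 5))                # 0.4
print(degree_centralization(star, 5))  # 1.0
```

    Tracking such numbers per time window or per function would place an organization as a moving point in the approach space, which is the kind of plotting the abstract describes.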

    Pan-cancer classifications of tumor histological images using deep learning

    Histopathological images are essential for the diagnosis of cancer type and selection of optimal treatment. However, the current clinical process of manual inspection of images is time-consuming and prone to intra- and inter-observer variability. Here we show that key aspects of cancer image analysis can be performed by deep convolutional neural networks (CNNs) across a wide spectrum of cancer types. In particular, we implement CNN architectures based on Google Inception v3 transfer learning to analyze 27,815 H&E slides from 23 cohorts in The Cancer Genome Atlas in studies of tumor/normal status, cancer subtype, and mutation status. For 19 solid cancer types we are able to classify tumor/normal status of whole slide images with extremely high AUCs (0.995±0.008). We are also able to classify cancer subtypes within 10 tissue types with AUC values well above random expectations (micro-average 0.87±0.1). We then perform a cross-classification analysis of tumor/normal status across tumor types. We find that classifiers trained on one type are often effective in distinguishing tumor from normal in other cancer types, with the relationships among classifiers matching known cancer tissue relationships. For the more challenging problem of mutational status, we are able to classify TP53 mutations in three cancer types with AUCs ranging from 0.65 to 0.80 using a fully-trained CNN, and with similar cross-classification accuracy across tissues. These studies demonstrate the power of CNNs not only for classifying histopathological images in diverse cancer types, but also for revealing shared biology between tumors. We have made our software available at https://github.com/javadnoorb/HistCNN.
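    The AUC figures quoted above come from ranking tumor/normal scores produced by the classifier. A minimal sketch of how ROC AUC reduces to the rank-sum (Mann-Whitney) formulation, independent of any particular model; the labels and scores below are made up for illustration.

```python
# ROC AUC via the Mann-Whitney formulation: the probability that a
# randomly chosen positive example outranks a randomly chosen negative.
def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count pairwise wins; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]            # 1 = tumor, 0 = normal (toy data)
scores = [0.9, 0.6, 0.7, 0.2]    # one negative outranks one positive
print(roc_auc(labels, scores))   # 0.75
```

    An AUC of 0.995, as reported for tumor/normal status, means almost every tumor slide is scored above almost every normal slide.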

    The Fire and Smoke Model Evaluation Experiment—A Plan for Integrated, Large Fire–Atmosphere Field Campaigns

    The Fire and Smoke Model Evaluation Experiment (FASMEE) is designed to collect integrated observations from large wildland fires and provide evaluation datasets for new models and operational systems. Wildland fire, smoke dispersion, and atmospheric chemistry models have become more sophisticated, and next-generation operational models will require coordinated, comprehensive datasets for their evaluation and advancement. Integrated measurements are required, including ground-based observations of fuels and fire behavior, estimates of fire-emitted heat and emissions fluxes, and observations of near-source micrometeorology, plume properties, smoke dispersion, and atmospheric chemistry. To address these requirements, the FASMEE campaign design includes a study plan to guide the suite of required measurements in forested sites representative of many prescribed burning programs in the southeastern United States and of increasingly common high-intensity fires in the western United States. Here we provide an overview of the proposed experiment and recommendations for key measurements. The FASMEE study provides a template for additional large-scale experimental campaigns to advance fire science and operational fire and smoke models.

    99mTc-Labeled C2A Domain of Synaptotagmin I as a Target-Specific Molecular Probe for Noninvasive Imaging of Acute Myocardial Infarction

    Get PDF
    Abstract: The exposure of phosphatidylserine (PtdS) is a common molecular marker for both apoptosis and necrosis and enables the simultaneous detection of these distinct modes of cell death. Our aim was to develop a radiotracer based on the PtdS-binding activity of the C2A domain of synaptotagmin I and assess 99mTc-C2A-GST (GST is glutathione S-transferase) using a reperfused acute myocardial infarction (AMI) rat model. Methods: The binding of C2A-GST toward apoptosis and necrosis was validated in vitro. After labeling with 99mTc via 2-iminothiolane thiolation, radiochemical purity and radiostability were tested. Pharmacokinetics and biodistribution were studied in healthy rats. The uptake of 99mTc-C2A-GST within the area at risk was quantified by direct γ-counting, whereas nonspecific accumulation was estimated using inactivated 99mTc-C2A-GST. In vivo planar imaging of AMI in rats was performed on a γ-camera using a parallel-hole collimator. Radioactivity uptake was investigated by region-of-interest analysis, and postmortem tetrazolium staining versus autoradiography. Results: Fluorescently labeled and radiolabeled C2A-GST bound both apoptotic and necrotic cells. 99mTc-C2A-GST had a radiochemical purity of >98% and remained stable. After intravenous injection, the uptake in the liver and kidneys was significant. For 99mTc-C2A-GST, radioactivity uptake in the area at risk reached between 2.40 and 2.63 %ID/g (%ID/g is percentage injected dose per gram) within 30 min and remained at that plateau for at least 3 h. In comparison, with the inactivated tracer the radioactivity reached 1.06 ± 0.49 %ID/g at 30 min, followed by washout to 0.52 ± 0.23 %ID/g. In 7 of 7 rats, the infarct was clearly identifiable as focal uptake in planar images.
At 3 h after injection, the infarct-to-lung ratios were 2.48 ± 0.27, 1.29 ± 0.09, and 1.46 ± 0.04 for acute-infarct rats with 99mTc-C2A-GST, sham-operated rats with 99mTc-C2A-GST, and acute-infarct rats with 99mTc-C2A-GST-NHS (NHS is N-hydroxysuccinimide), respectively. The distribution of radioactivity was confirmed by autoradiography and histology. Conclusion: The C2A domain of synaptotagmin I labeled with fluorochromes or a radioisotope binds to both apoptotic and necrotic cells. Ex vivo and in vivo data indicate that, because of elevated vascular permeability, both specific binding and passive leakage contribute to the accumulation of the radiotracer in the area at risk. However, the latter component alone is insufficient to achieve detectable target-to-background ratios with in vivo planar imaging.

    Stability and Thermal Properties Study of Metal Chalcogenide-Based Nanofluids for Concentrating Solar Power

    Nanofluids are colloidal suspensions of nanomaterials in a fluid which exhibit enhanced thermophysical properties compared to conventional fluids. The addition of nanomaterials to a fluid can increase the thermal conductivity, isobaric specific heat, diffusivity, and convective heat transfer coefficient of the original fluid. For this reason, nanofluids have been studied over the last decades in many fields, such as biomedicine, industrial cooling, nuclear reactors, and solar thermal applications. In this paper, we report the preparation and characterization of nanofluids based on MoS2 nanowires and WS2 nanosheets to improve the thermal properties of the heat transfer fluid currently used in concentrating solar power (CSP) plants. A comparative study of both types of nanofluids was performed to explain the influence of nanostructure morphology on nanofluid stability and thermal properties. The nanofluids prepared in this work present high stability over time and thermal conductivity enhancements of up to 46% for the MoS2-based nanofluid and up to 35% for the WS2-based nanofluid. These results led to an increase in solar collector efficiency of 21.3% and 16.8% when the nanofluids based on MoS2 nanowires or WS2 nanosheets, respectively, were used instead of the typical thermal oil.
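    For context, the classical Maxwell effective-medium model is a common baseline for nanofluid thermal conductivity; it predicts only modest gains at dilute loadings, which is part of what makes enhancements like the 46% reported here notable. A sketch with illustrative values follows; the conductivities and loading below are assumptions, not the paper's data.

```python
# Maxwell effective-medium model for spherical particles (volume fraction
# phi) dispersed in a base fluid. All numeric values are illustrative.
def maxwell_k_eff(k_f, k_p, phi):
    """Effective thermal conductivity of the suspension, W/(m K)."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

k_f = 0.12   # base thermal oil, W/(m K) -- assumed
k_p = 30.0   # MoS2-like particle, W/(m K) -- assumed
phi = 0.01   # 1 vol% loading

k_eff = maxwell_k_eff(k_f, k_p, phi)
print(round(100 * (k_eff / k_f - 1), 1))  # 3.0 -> only ~3% enhancement
```

    The large gap between such baseline predictions and the measured 35-46% enhancements is why particle morphology (nanowires vs. nanosheets) is treated as a key variable in the study.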

    ALOJA: A framework for benchmarking and predictive analytics in Hadoop deployments

    This article presents the ALOJA project and its analytics tools, which leverage machine learning to interpret Big Data benchmark performance data and to guide tuning. ALOJA is part of a long-term collaboration between BSC and Microsoft to automate the characterization of cost-effectiveness of Big Data deployments, currently focusing on Hadoop. Hadoop presents a complex run-time environment, where costs and performance depend on a large number of configuration choices. The ALOJA project has created an open, vendor-neutral repository featuring over 40,000 Hadoop job executions and their performance details. The repository is accompanied by a test-bed and tools to deploy and evaluate the cost-effectiveness of different hardware configurations, parameters, and Cloud services. Despite early success within ALOJA, a comprehensive study requires automation of modeling procedures to allow an analysis of large and resource-constrained search spaces. The predictive analytics extension, ALOJA-ML, provides an automated system for knowledge discovery by modeling environments from observed executions. The resulting models can forecast execution behaviors, predicting execution times for new configurations and hardware choices. This also enables model-based anomaly detection and efficient benchmark guidance by prioritizing executions. In addition, the community can benefit from the ALOJA data-sets and framework to improve the design and deployment of Big Data applications. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 639595). This work is partially supported by the Ministry of Economy of Spain under contracts TIN2012-34557 and 2014SGR1051.
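    The ALOJA-ML workflow above (observed executions in, runtime predictions out) can be caricatured with a least-squares fit. The configuration features, runtimes, and the linear model below are all stand-ins for the project's actual learners and data, chosen only to show the shape of the pipeline.

```python
# Toy version of "model observed executions, predict new configurations".
# Features and runtimes are synthetic, not ALOJA data.
import numpy as np

# Features: [num_mappers, compression_on]; target: runtime in seconds.
X = np.array([[4, 0], [4, 1], [8, 0], [8, 1], [16, 0]], dtype=float)
y = np.array([620.0, 580.0, 340.0, 310.0, 200.0])

A = np.hstack([X, np.ones((len(X), 1))])   # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict an unseen configuration: 16 mappers with compression enabled.
new_cfg = np.array([16.0, 1.0, 1.0])
predicted = float(new_cfg @ coef)
print(round(predicted, 1))
```

    The same fitted model could rank candidate configurations before running them, which is the benchmark-guidance use the abstract mentions.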

    A Computationally Efficient Method for Calculation of Strand Eddy Current Losses in Electric Machines

    In this paper, a fast finite element (FE)-based method is presented for the calculation of eddy current losses in the stator windings of randomly wound electric machines, with a focus on fractional-slot concentrated-winding (FSCW) permanent magnet (PM) machines. The method is particularly suitable for implementation in large-scale design optimization algorithms, where a qualitative characterization of such losses at higher speeds is most beneficial for identifying the design solutions that exhibit the lowest overall losses, including the ac losses in the stator windings. Unlike the common practice of assuming a constant slot fill factor, sf, for all design variations, the maximum sf in the developed method is determined from the individual slot structure/dimensions and the strand wire specifications. Furthermore, in lieu of detailed modeling of the conductor strands in the initial FE model, which significantly adds to the complexity of the problem, an alternative rectangular coil model is pursued, subject to a subsequent flux-mapping technique that determines the flux impinging on each individual strand. The research focus of the paper is the development of a computationally efficient technique for ac winding loss derivation applicable in design optimization, where both the electromagnetic and thermal behavior of the machine are accounted for. The analysis is supplemented with an investigation of the influence of electrical loading on ac winding loss effects for a particular machine design, a subject which has received less attention in the literature. Experimental ac loss measurements on a 12-slot, 10-pole stator assembly are discussed to verify the trends observed in the simulation results.
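    The per-strand evaluation that a flux-mapping step feeds is commonly based on the standard low-frequency eddy-loss estimate for a round wire in a sinusoidal transverse flux density, P = pi * sigma * omega^2 * B^2 * d^4 * l / 128. A sketch with assumed values follows; the paper's actual formulation may differ.

```python
# Standard low-frequency eddy-current loss in one round strand of
# diameter d (m) and length l (m), in a sinusoidal transverse field of
# peak density B_peak (T) at angular frequency omega (rad/s).
# All numeric values below are illustrative assumptions.
import math

def strand_eddy_loss(sigma, omega, B_peak, d, length):
    """Average eddy loss per strand, W: pi*sigma*w^2*B^2*d^4*l/128."""
    return math.pi * sigma * omega**2 * B_peak**2 * d**4 * length / 128

sigma = 5.8e7               # copper conductivity, S/m
omega = 2 * math.pi * 1000  # 1 kHz electrical frequency
B_peak = 0.05               # assumed slot-leakage flux density, T
d = 0.8e-3                  # 0.8 mm strand diameter
length = 0.1                # 0.1 m active length

print(strand_eddy_loss(sigma, omega, B_peak, d, length))
```

    The d^4 dependence is the key design lever: halving the strand diameter cuts the per-strand loss sixteen-fold, which is why stranded (randomly wound) conductors are used at all.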

    Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation

    TensorFlow has been the most widely adopted machine/deep learning framework. However, little exists in the literature that provides a thorough understanding of the capabilities TensorFlow offers for the distributed training of large ML/DL models that need computation and communication at scale. The most commonly used distributed training approaches for TF can be categorized as follows: 1) Google Remote Procedure Call (gRPC); 2) gRPC+X, where X is InfiniBand Verbs, Message Passing Interface (MPI), or GPUDirect RDMA; and 3) No-gRPC: Baidu Allreduce with MPI, Horovod with MPI, and Horovod with NVIDIA NCCL. In this paper, we provide an in-depth performance characterization and analysis of these distributed training approaches on various GPU clusters, including the Piz Daint system (ranked 6th on the Top500 list). We perform experiments to gain novel insights along the following vectors: 1) application-level scalability of DNN training; 2) effect of batch size on scaling efficiency; 3) impact of the MPI library used for the No-gRPC approaches; and 4) type and size of DNN architectures. Based on these experiments, we present two key insights: 1) overall, No-gRPC designs achieve better performance than gRPC-based approaches for most configurations, and 2) the performance of No-gRPC is heavily influenced by the gradient aggregation done with Allreduce. Finally, we propose a truly CUDA-Aware MPI Allreduce design that exploits CUDA kernels and pointer caching to perform large reductions efficiently. Our proposed designs offer 5-17X better performance than NCCL2 for small and medium messages, and reduce latency by 29% for large messages. The proposed optimizations help Horovod-MPI achieve approximately 90% scaling efficiency for ResNet-50 training on 64 GPUs. Further, Horovod-MPI achieves 1.8X and 3.2X higher throughput than the native gRPC method for ResNet-50 and MobileNet, respectively, on the Piz Daint cluster. (10 pages, 9 figures; submitted to IEEE IPDPS 2019 for peer review.)
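    The Allreduce dependence noted above is worth making concrete. Below is a toy, single-process simulation of ring allreduce, the gradient-aggregation pattern behind Baidu Allreduce and Horovod; it illustrates the algorithm itself, not the paper's CUDA-Aware implementation.

```python
# Toy ring allreduce: each "rank" holds a gradient vector split into n
# chunks (scalars here). Reduce-scatter circulates partial sums around
# the ring; allgather then circulates the completed chunks, so every
# rank ends up with the element-wise sum.
def ring_allreduce(data):
    """data[r][c] is chunk c on rank r; returns the post-allreduce state."""
    n = len(data)
    buf = [list(row) for row in data]
    # Reduce-scatter: after n-1 steps, rank r owns the full sum of
    # chunk (r + 1) % n.
    for step in range(n - 1):
        sends = [(r, (r - step) % n, buf[r][(r - step) % n]) for r in range(n)]
        for r, c, val in sends:          # all sends happen "simultaneously"
            buf[(r + 1) % n][c] += val
    # Allgather: circulate each completed chunk once around the ring.
    for step in range(n - 1):
        sends = [(r, (r + 1 - step) % n, buf[r][(r + 1 - step) % n]) for r in range(n)]
        for r, c, val in sends:
            buf[(r + 1) % n][c] = val
    return buf

grads = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # 3 ranks, 3 chunks each
print(ring_allreduce(grads))                 # every rank ends with [12, 15, 18]
```

    Each rank sends only 2(n-1)/n of its data regardless of ring size, which is why the efficiency of the underlying Allreduce (MPI vs. NCCL) dominates No-gRPC performance.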