
    Supercritical Light Water Reactor (SCLWR) with Intermediate Heat Exchanger (IHX)


    Toward automatic comparison of visualization techniques: Application to graph visualization

    Many end-user evaluations of data visualization techniques have been conducted over the last decades. Their results are cornerstones for building efficient visualization systems. However, designing such an evaluation is always complex and time-consuming and may end in a lack of statistical evidence and reproducibility. We believe that modern and efficient computer vision techniques, such as deep convolutional neural networks (CNNs), may help visualization researchers to build and/or adjust their evaluation hypotheses. The basis of our idea is to train machine learning models on several visualization techniques to solve a specific task. Our assumption is that it is possible to compare the efficiency of visualization techniques based on the performance of their corresponding models. As current machine learning models are not able to strictly reflect human capabilities, including their imperfections, such results should be interpreted with caution. However, we think that machine learning-based pre-evaluation, used as a pre-process for standard user evaluations, should help researchers to perform a more exhaustive study of their design space and thus improve their final user evaluation by providing it with better test cases. In this paper, we present the results of two experiments we conducted to assess how correlated the performance of users and computer vision techniques can be. The study compares two mainstream graph visualization techniques: node-link (NL) and adjacency-matrix (MD) diagrams. Using two well-known deep convolutional neural networks, we partially reproduced user evaluations from Ghoniem et al. and from Okoe et al. These experiments showed that some user evaluation results can be reproduced automatically. (35 pages, 6 figures, 4 tables)
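    As an illustration of this idea (not the authors' exact pipeline), one could train the same off-the-shelf CNN on task images rendered with each visualization technique and compare test accuracies as a cautious proxy for technique efficiency. The data layout, directory names, and binary task below are hypothetical placeholders; a minimal sketch in PyTorch:

        # Sketch: compare two visualization techniques by training the same
        # CNN on images rendered with each one (hypothetical data layout,
        # e.g. data/node_link/train/{yes,no}/*.png).
        import torch
        import torch.nn as nn
        from torch.utils.data import DataLoader
        from torchvision import datasets, models, transforms

        def accuracy_for(technique_dir: str) -> float:
            tfm = transforms.Compose([transforms.Resize((224, 224)),
                                      transforms.ToTensor()])
            train = datasets.ImageFolder(f"{technique_dir}/train", tfm)
            test = datasets.ImageFolder(f"{technique_dir}/test", tfm)
            model = models.resnet18(weights=None, num_classes=len(train.classes))
            opt = torch.optim.Adam(model.parameters(), lr=1e-4)
            loss_fn = nn.CrossEntropyLoss()
            model.train()
            for x, y in DataLoader(train, batch_size=32, shuffle=True):
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
            model.eval()
            correct = total = 0
            with torch.no_grad():
                for x, y in DataLoader(test, batch_size=32):
                    correct += (model(x).argmax(1) == y).sum().item()
                    total += y.numel()
            return correct / total

        # Higher model accuracy is read, with caution, as a proxy for
        # the efficiency of the corresponding visualization technique.
        for tech in ("data/node_link", "data/adjacency_matrix"):
            print(tech, accuracy_for(tech))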

    Evaluation of Explanation Methods of AI -- CNNs in Image Classification Tasks with Reference-based and No-reference Metrics

    The most popular methods in the AI/machine learning paradigm are essentially black boxes, which is why the explanation of AI decisions is urgently needed. Although dedicated explanation tools have been massively developed, the evaluation of their quality remains an open research question. In this paper, we generalize methodologies for evaluating post-hoc explainers of CNN decisions in visual classification tasks using reference-based and no-reference metrics. We apply them to our previously developed explainers (FEM, MLFEM) and to the popular Grad-CAM. The reference-based metrics are the Pearson correlation coefficient and Similarity, computed between the explanation map and its ground truth, represented by a Gaze Fixation Density Map obtained from a psycho-visual experiment. As a no-reference metric, we use the stability metric proposed by Alvarez-Melis and Jaakkola. We study its behaviour and its consensus with reference-based metrics, and show that for several kinds of degradation of input images this metric agrees with the reference-based ones. It can therefore be used to evaluate the quality of explainers when ground truth is not available. (Note: due to a bug found in the code, all tables and figures were redone. The new results did not change the main conclusion, except for the best explainer: FEM performed better than MLFEM. 25 pages, 16 tables, 16 figures; submitted to "Advances in Artificial Intelligence and Machine Learning", ISSN 2582-9793.)
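    As a minimal sketch of the reference-based comparison, assuming the explanation map and the Gaze Fixation Density Map are given as equal-sized 2D arrays; the Similarity here is the usual histogram-intersection form from saliency evaluation, which may differ from the authors' exact definition:

        # Sketch: reference-based metrics between an explanation map and a
        # Gaze Fixation Density Map (GFDM), both equal-sized 2D arrays.
        import numpy as np

        def pearson_cc(expl: np.ndarray, gfdm: np.ndarray) -> float:
            # Pearson correlation coefficient between the flattened maps.
            return float(np.corrcoef(expl.ravel(), gfdm.ravel())[0, 1])

        def similarity(expl: np.ndarray, gfdm: np.ndarray) -> float:
            # Histogram-intersection similarity (SIM) on maps normalised
            # to sum to 1; 1.0 means identical distributions.
            p = expl / expl.sum()
            q = gfdm / gfdm.sum()
            return float(np.minimum(p, q).sum())

        rng = np.random.default_rng(0)
        expl = rng.random((224, 224))   # stand-in explanation map
        gfdm = rng.random((224, 224))   # stand-in gaze density map
        print(pearson_cc(expl, gfdm), similarity(expl, gfdm))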

    Accelerating, hyperaccelerating, and decelerating networks

    Many growing networks possess accelerating statistics, where the number of links added with each new node is an increasing function of network size, so the total number of links increases faster than linearly with network size. In particular, biological networks can display quadratic growth in regulator number with genome size even while remaining sparsely connected. These features are mutually incompatible in standard treatments of network theory, which typically require that every new network node possesses at least one connection. To model sparsely connected networks, we generalize existing approaches and add each new node with a probabilistic number of links, generating either accelerating, hyperaccelerating, or even decelerating network statistics in different regimes. Under preferential attachment, for example, slowly accelerating networks display stationary scale-free statistics relatively independent of network size, while more rapidly accelerating networks display a transition from scale-free to exponential statistics as the network grows. Such transitions explain, for instance, the evolutionary record of single-celled organisms, which display strict size and complexity limits.
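    A minimal sketch of the kind of model described: each new node arrives with a probabilistic number of links whose mean scales with network size as m0 * N^alpha, attached preferentially. The Poisson link budget, the parameter values, and the exponent are illustrative assumptions, not taken from the paper; a budget of zero leaves the node isolated, which is what permits sparse, accelerating growth.

        # Sketch: growing network where each new node brings a Poisson number
        # of links with mean m0 * N**alpha (alpha > 0: accelerating,
        # alpha = 0: classic preferential attachment, alpha < 0: decelerating).
        import math
        import random

        def poisson(lam: float, rng: random.Random) -> int:
            # Knuth's algorithm; adequate for the modest means used here.
            L, k, p = math.exp(-lam), 0, 1.0
            while True:
                p *= rng.random()
                if p <= L:
                    return k
                k += 1

        def grow(n_nodes: int, m0: float = 0.5, alpha: float = 0.5, seed: int = 0):
            rng = random.Random(seed)
            degree = [1, 1]                  # seed graph: two connected nodes
            edges = [(0, 1)]
            targets = [0, 1]                 # degree-weighted urn for attachment
            for new in range(2, n_nodes):
                n = len(degree)
                k = poisson(m0 * n ** alpha, rng)  # link budget; may be zero
                chosen = [rng.choice(targets) for _ in range(min(k, n))]
                degree.append(0)
                for old in chosen:           # multi-edges possible; fine here
                    edges.append((new, old))
                    degree[new] += 1
                    degree[old] += 1
                    targets.extend([new, old])
            return degree, edges

        degs, edges = grow(3000, alpha=0.4)
        print("nodes:", len(degs), "edges:", len(edges),
              "isolated:", degs.count(0))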

    Magnetic states and spin-glass properties in Bi0.67Ca0.33MnO3: macroscopic ac measurements and neutron scattering

    We report on the magnetic properties of the manganite Bi_{1-x}Ca_{x}MnO_3 (x=0.33) at low temperature. The analysis of the field expansion of the ac susceptibility and the observation of aging properties make clear that a spin-glass phase appears below T = 39 K, in the presence of magnetic order. Neutron scattering shows both magnetic Bragg scattering and magnetic diffusion at small angles, confirming this coexistence. In contrast to Pr_{1-x}Ca_{x}MnO_3 (x=0.3-0.33), which exhibits a mesoscopic phase separation responsible for a field-driven percolation, the glassy and short-range ferromagnetic order observed here does not cause colossal magnetoresistance (CMR). (Accepted in Phys. Rev.)
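    For context, the "field expansion of the ac susceptibility" conventionally refers to the odd-power expansion of the magnetization in the probing field; this is the standard textbook form, not quoted from the paper:

        M(H) = \chi_1 H + \chi_3 H^3 + \chi_5 H^5 + \cdots

    A sharp anomaly or divergence of the nonlinear coefficient \chi_3 on cooling toward T_g is a classic signature of spin-glass freezing, which is the kind of behaviour such an analysis looks for.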

    Predictive biometrics: A review and analysis of predicting personal characteristics from biometric data

    Interest in the exploitation of soft-biometrics information has continued to develop over the last decade or so. In comparison with traditional biometrics, which focuses principally on person identification, the idea of soft-biometrics processing is to study the utilisation of more general information regarding a system user, which is not necessarily unique. There are increasing indications that this type of data will have great value in providing complementary information for user authentication. However, the authors have also seen a growing interest in broadening the predictive capabilities of biometric data, encompassing both easily definable characteristics such as subject age and, most recently, 'higher-level' characteristics such as emotional or mental states. This study presents a selective review of the predictive capabilities, in the widest sense, of biometric data processing, providing an analysis of the key issues still to be adequately addressed if this concept of predictive biometrics is to be fully exploited in the future.

    The Biomarker Toolkit - an evidence-based guideline to predict cancer biomarker success and guide development

    BACKGROUND: Increasing resources are allocated to cancer biomarker discovery, but very few of these biomarkers are clinically adopted. To bridge the gap between biomarker discovery and clinical use, we aim to generate the Biomarker Toolkit, a tool designed to identify clinically promising biomarkers and promote successful biomarker translation. METHODS: All features associated with a clinically useful biomarker were identified using a mixed methodology, including a systematic literature search, semi-structured interviews, and an online two-stage Delphi survey. Validation of the checklist was achieved by independent systematic literature searches using keywords/subheadings related to clinically and non-clinically utilised breast and colorectal cancer biomarkers. Composite aggregated scores were generated for each selected publication based on the presence/absence of the attributes listed in the Biomarker Toolkit checklist. RESULTS: The systematic literature search identified 129 attributes associated with a clinically useful biomarker, grouped into four main categories: rationale, clinical utility, analytical validity, and clinical validity. The checklist was subsequently developed using semi-structured interviews with biomarker experts (n=34), and 88.23% agreement regarding the identified attributes was achieved via the Delphi survey (consensus level: 75%, n=51). Quantitative validation was completed using clinically and non-clinically implemented breast cancer (BC) and colorectal cancer (CRC) biomarkers. Cox-regression analysis suggested that the total score is a significant driver of biomarker success in both cancer types (BC: p<0.0001, 95.0% CI: 0.869-0.935; CRC: p<0.0001, 95.0% CI: 0.918-0.954). CONCLUSIONS: This novel study generated a validated checklist of literature-reported attributes linked with successful biomarker implementation. Ultimately, this toolkit can be applied to detect biomarkers with the highest clinical potential and to shape how biomarker studies are designed and performed.
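    As an illustration of the composite-score idea only: the attribute names, grouping, and equal weighting below are hypothetical stand-ins (the actual toolkit has 129 attributes across the four categories), a minimal sketch:

        # Sketch: composite aggregated score for a publication, based on the
        # presence/absence of checklist attributes (hypothetical mini-checklist).
        CHECKLIST = {
            "rationale":           ["biological_plausibility", "unmet_clinical_need"],
            "clinical_utility":    ["changes_management", "cost_effectiveness"],
            "analytical_validity": ["assay_reproducibility", "defined_cutoff"],
            "clinical_validity":   ["independent_validation", "prognostic_power"],
        }

        def composite_score(present: set[str]) -> float:
            # Fraction of all checklist attributes reported in the publication.
            all_attrs = [a for attrs in CHECKLIST.values() for a in attrs]
            return sum(a in present for a in all_attrs) / len(all_attrs)

        paper = {"biological_plausibility", "assay_reproducibility",
                 "independent_validation"}
        print(f"composite score: {composite_score(paper):.2f}")  # 0.38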