
    Effective retrieval and new indexing method for case based reasoning: Application in chemical process design

    In this paper we improve the retrieval step of case-based reasoning (CBR) for preliminary design. This improvement concerns three major parts of our CBR system. First, in the preliminary design step, uncertainties such as imprecise or unknown values remain in the problem description, because removing them requires deeper analysis. To deal with this issue, the description of the problem at hand is softened with fuzzy set theory. Features are described with a central value, a percentage of imprecision and a relation with respect to the central value. These additional data allow us to build a domain of possible values for each attribute. This representation changes the calculation of the similarity function: the characteristic function is used to compute the local similarity between two features. Second, we focus on the main goal of the retrieval step in CBR, which is to find cases relevant for adaptation. In this second part, we discuss the assumption that similarity identifies the most appropriate case. We highlight that in some situations this classical similarity must be complemented with further knowledge to facilitate case adaptation. To avoid failures during the adaptation step, we implement a method that couples the similarity measurement with an adaptability measurement, in order to approximate the utility of cases more accurately. The latter gives deeper information for the reuse of cases. In the last part, we present a generic indexing technique for the case base and a new algorithm for searching relevant cases in memory. The sphere indexing algorithm is a domain-independent index whose performance is equivalent to that of decision trees. Its main strength is that it places the current problem at the centre of the search area, avoiding boundary issues. All these points are discussed and exemplified through the preliminary design of a chemical engineering unit operation.
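    The fuzzy feature representation described in the abstract (a central value, a percentage of imprecision and a relation with respect to the central value) can be sketched in a few lines of Python. The class and function names below are illustrative assumptions rather than the authors' implementation, and the local similarity is computed here as the overlap of the two attribute domains, one plausible reading of the characteristic-function approach.

```python
from dataclasses import dataclass

@dataclass
class FuzzyFeature:
    """Illustrative feature description: central value, imprecision (%),
    and a relation ('=', '<=', '>=') with respect to the central value."""
    center: float
    imprecision_pct: float
    relation: str = "="

    def domain(self):
        """Domain of possible values implied by the fuzzy description."""
        delta = abs(self.center) * self.imprecision_pct / 100.0
        if self.relation == "<=":
            return (float("-inf"), self.center + delta)
        if self.relation == ">=":
            return (self.center - delta, float("inf"))
        return (self.center - delta, self.center + delta)

def local_similarity(a: FuzzyFeature, b: FuzzyFeature) -> float:
    """Toy local similarity: fraction of the smaller domain covered by the
    overlap of the two domains (0.0 if disjoint). Only an assumed stand-in
    for the characteristic-function-based similarity in the paper."""
    lo_a, hi_a = a.domain()
    lo_b, hi_b = b.domain()
    overlap = min(hi_a, hi_b) - max(lo_a, lo_b)
    if overlap <= 0:
        return 0.0
    smallest = min(hi_a - lo_a, hi_b - lo_b)
    if smallest == float("inf") or smallest == 0:
        return 1.0
    return min(1.0, overlap / smallest)

# Example: an imprecise query temperature compared with a stored case value
print(local_similarity(FuzzyFeature(350.0, 5.0), FuzzyFeature(360.0, 10.0)))
```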

    Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high-performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top-quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real-time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs. Since the elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. Comment: 12th International Congress on Plasma Physics, 25-29 October 2004, Nice (France).
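    As a rough illustration of the kind of feedback scheme the abstract refers to, the sketch below closes a loop on a single measured quantity with a discrete proportional-integral controller. The plant model, gains and names are invented for the example and are not JET's model-based controllers.

```python
# Minimal sketch of a discrete PI feedback loop on one measured quantity.
# The "plant" is a made-up first-order system; all numbers are illustrative.

def pi_controller(reference, measure, state, kp=0.8, ki=0.2, dt=0.01):
    """One step of a proportional-integral controller."""
    error = reference - measure
    state += error * dt                  # integrator state
    actuation = kp * error + ki * state
    return actuation, state

measure, integ = 0.0, 0.0
for step in range(5000):
    actuation, integ = pi_controller(reference=1.0, measure=measure, state=integ)
    # Toy first-order plant response to the actuation
    measure += 0.01 * (actuation - measure)

print(f"measured value after 5000 control steps: {measure:.3f}")  # close to 1.0
```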

    ATMSeer: Increasing Transparency and Controllability in Automated Machine Learning

    To relieve the pain of manually selecting machine learning algorithms and tuning hyperparameters, automated machine learning (AutoML) methods have been developed to automatically search for good models. Due to the huge model search space, it is impossible to try all models. Users tend to distrust automatic results and increase the search budget as much as they can, thereby undermining the efficiency of AutoML. To address these issues, we design and implement ATMSeer, an interactive visualization tool that supports users in refining the search space of AutoML and analyzing the results. To guide the design of ATMSeer, we derive a workflow of using AutoML based on interviews with machine learning experts. A multi-granularity visualization is proposed to enable users to monitor the AutoML process, analyze the searched models, and refine the search space in real time. We demonstrate the utility and usability of ATMSeer through two case studies, expert interviews, and a user study with 13 end users. Comment: Published in the ACM Conference on Human Factors in Computing Systems (CHI), 2019, Glasgow, Scotland, UK.

    PIN generation using EEG: a stability study

    In a previous study, it has been shown that brain activity, i.e. electroencephalogram (EEG) signals, can be used to generate a personal identification number (PIN). The method was based on brain-computer interface (BCI) technology using a P300-based BCI approach and showed that a single-channel EEG was sufficient to generate a PIN without any error for three subjects. The advantage of this method is obviously its better fraud resistance compared to conventional methods of PIN generation such as entering the numbers using a keypad. Here, we investigate the stability of these EEG signals when used with a neural network classifier, i.e. we investigate the changes in the performance of the method over time. Our results, based on recordings conducted over a period of three months, indicate that a single channel is no longer sufficient and a multiple-electrode configuration is necessary to maintain acceptable performance. Alternatively, a recording session to retrain the neural network classifier can be conducted at shorter intervals, though practically this might not be viable.
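    A hedged sketch of the retraining strategy mentioned at the end of the abstract: a neural network classifier is refitted whenever a fresh recording session becomes available, so that drift in the EEG features does not erode accuracy. The feature arrays and session loop are synthetic placeholders; the paper's actual channels, features and network are not reproduced here.

```python
# Illustrative only: periodic retraining of a neural network classifier on
# new EEG sessions to counter signal drift. Data here is synthetic noise;
# the real study used P300 features from recorded EEG.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def fake_session(n_trials=200, n_features=32, drift=0.0):
    """Placeholder for one recording session's feature matrix and labels."""
    X = rng.normal(drift, 1.0, size=(n_trials, n_features))
    y = rng.integers(0, 10, size=n_trials)   # target digit 0-9 for the PIN
    return X, y

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)

for month, drift in enumerate([0.0, 0.3, 0.6]):     # three monthly sessions
    X, y = fake_session(drift=drift)
    clf.fit(X, y)                                    # retrain on the newest session
    print(f"month {month}: training accuracy {clf.score(X, y):.2f}")
```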

    Discriminatory fees, coordination and investment in shared ATM networks

    This paper empirically examines the effects of discriminatory fees on ATM investment and welfare, and considers the role of coordination in ATM investment between banks. Our main findings are that foreign fees tend to reduce ATM availability and (consumer) welfare, whereas surcharges positively affect ATM availability and the different welfare components when the consumers' price elasticity is not too large. Second, an organization of the ATM market that contains some degree of coordination between the banks may be desirable from a welfare perspective. Finally, ATM availability is always higher when a social planner decides on discriminatory fees and ATM investment to maximize total welfare. This implies that there is underinvestment in ATMs, even in the presence of discriminatory fees. Keywords: investment, coordination, ATMs, network industries, empirical entry models, spatial discrete choice demand models

    New centrality and causality metrics assessing air traffic network interactions

    In ATM systems, the massive number of interacting entities makes it difficult to identify critical elements and paths of disturbance propagation, as well as to predict the system-wide effects that innovations might have. To this end, suitable metrics are required to assess the role of the interconnections between the elements, and complex network science provides several network metrics to evaluate network functioning. Here we focus on centrality and causality metrics, measuring, respectively, the importance of a node and the propagation of disturbances along links. By investigating a dataset of US flights, we show that existing centrality and causality metrics are not suited to characterise the effect of delays in the system. We then propose generalisations of such metrics that we show to be suited to ATM applications. Specifically, the new centrality is able to account for the temporal and multi-layer structure of the ATM network, while the new causality metric focuses on the propagation of extreme events along the system.
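    To make the idea of a temporal, layer-aware centrality concrete, the sketch below computes a plain weighted out-degree per layer in each time window and averages it across windows. This is only an illustrative baseline under assumed data structures, not the generalised metric proposed in the paper.

```python
# Sketch: per-layer, time-averaged weighted out-degree on a toy airport
# delay network. Edge weights stand for delay propagated between airports;
# the structure and numbers are invented for illustration.
from collections import defaultdict

# (time_window, layer, origin, destination, propagated_delay_minutes)
edges = [
    (0, "airline_A", "JFK", "ORD", 12.0),
    (0, "airline_A", "ORD", "LAX", 7.5),
    (0, "airline_B", "JFK", "LAX", 3.0),
    (1, "airline_A", "ORD", "JFK", 9.0),
    (1, "airline_B", "LAX", "ORD", 15.0),
]

def temporal_layer_centrality(edges):
    """Average over time windows of each (layer, node) weighted out-degree."""
    totals = defaultdict(float)
    windows = set()
    for t, layer, origin, dest, delay in edges:
        windows.add(t)
        totals[(layer, origin)] += delay
    return {key: total / len(windows) for key, total in totals.items()}

for (layer, node), score in sorted(temporal_layer_centrality(edges).items()):
    print(f"{layer:>9} {node}: {score:.1f}")
```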

    Learning mutational graphs of individual tumour evolution from single-cell and multi-region sequencing data

    Background. A large number of algorithms are being developed to reconstruct evolutionary models of individual tumours from genome sequencing data. Most methods can analyze multiple samples collected either through bulk multi-region sequencing experiments or the sequencing of individual cancer cells; however, the same method can rarely support both data types. Results. We introduce TRaIT, a computational framework to infer mutational graphs that model the accumulation of multiple types of somatic alterations driving tumour evolution. Compared to other tools, TRaIT supports multi-region and single-cell sequencing data within the same statistical framework, and delivers expressive models that capture many complex evolutionary phenomena. TRaIT improves on competing methods in terms of accuracy, robustness to data-specific errors and computational complexity. Conclusions. We show that the application of TRaIT to single-cell and multi-region cancer datasets can produce accurate and reliable models of single-tumour evolution, quantify the extent of intra-tumour heterogeneity and generate new testable experimental hypotheses.
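    As a loose illustration of what a mutational graph is (not TRaIT's inference algorithm), the sketch below builds edges over a binary cell-by-mutation matrix with a naive containment heuristic: an edge A to B is drawn when every cell carrying B also carries A, suggesting A was acquired first under an accumulation assumption. Gene names and the matrix are invented.

```python
# Toy illustration of a mutational graph from a binary cell x mutation matrix.
# Edge A -> B is drawn when every cell carrying B also carries A (and A is
# strictly more frequent), a naive heuristic; TRaIT's statistical inference
# is not reproduced here.
import numpy as np

mutations = ["TP53", "KRAS", "PIK3CA"]
# rows: single cells, columns: mutations (1 = present)
M = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
])

edges = []
for i, a in enumerate(mutations):
    for j, b in enumerate(mutations):
        if i == j:
            continue
        carriers_a = M[:, i].astype(bool)
        carriers_b = M[:, j].astype(bool)
        # b's carriers are a subset of a's carriers -> a likely precedes b
        if carriers_b.any() and np.all(~carriers_b | carriers_a) \
                and carriers_a.sum() > carriers_b.sum():
            edges.append((a, b))

print(edges)  # [('TP53', 'KRAS'), ('TP53', 'PIK3CA'), ('KRAS', 'PIK3CA')]
```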