
    Modeling, Fabrication, and Optimization of Niobium Cavities: Phase II

    Niobium cavities are important components of integrated NC/SC high-power linacs. Over the years, researchers in several countries have tested various cavity shapes and concluded that elliptically shaped cells are the most appropriate for superconducting cavities. The need for very clean surfaces led to the use of a buffered chemical polishing procedure for surface cleaning to obtain good cavity performance. This proposal covers the second phase of research, in the second year of the project. The first phase (started in Summer 2001) improved the basic understanding of multipacting and of the chemical etching process. Based on our conclusions so far, as well as our interaction with personnel of Los Alamos National Laboratory (LANL), we propose to focus on the following topics in the second phase of this project: 1. Continue optimizing the cavity shape to reduce or minimize the possibilities of multipacting. 2. Redesign the etching process to maximize surface uniformity. 3. Experimental study of multipacting conditions. 4. Experimental study of the etching process and the resulting surface quality.

    Remarks on logic for process descriptions in ontological reasoning: A Drug Interaction Ontology case study

    We present some ideas on logical process descriptions, using relations from the DIO (Drug Interaction Ontology) as examples and explaining how these relations can be naturally decomposed in terms of more basic structured logical process descriptions using terms from linear logic. In our view, such process descriptions can clarify the usual relational descriptions of DIO. In particular, we discuss the use of logical process descriptions in proving linear logical theorems. Among the types of reasoning supported by DIO one can distinguish both (1) basic reasoning about general structures in reality and (2) the domain-specific reasoning of experts. Here we propose a clarification of this important distinction between (realist) reasoning on the basis of an ontology and rule-based inferences on the basis of an expert's view.
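
    As a purely illustrative sketch (the names below are hypothetical and are not actual DIO relation names), a metabolic process and an inhibition interaction can be written as linear-logic process descriptions in which the linear implication consumes its resources:

        \mathrm{drug} \otimes \mathrm{enzyme} \multimap \mathrm{metabolite} \otimes \mathrm{enzyme}
        \mathrm{inhibitor} \otimes \mathrm{enzyme} \multimap \mathrm{inhibitor\text{-}enzyme\ complex}

    Read resource-sensitively, the second description consumes the enzyme, so the first can no longer fire for that copy of it; this is the kind of decomposition of a flat relational assertion such as "inhibits" into a structured process description of the sort discussed above.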

    Confidence Measure for DNA Base Calling Using a Fuzzy System

    Base calling is the central part of any large-scale genomic sequencing effort. Current sequencing technology produces error rates of less than 3.5%, which corresponds to up to 35 errors in a 1000-base read. As the base calling algorithm's error rates drop, the remaining base call errors become harder to locate. Hence, assembly algorithms and human operators use a confidence value to determine how well the base calling algorithm has performed for each base call. This makes it easier to uncover potential errors and correct them, thus increasing the throughput of genetic sequencing. The model developed here employs fuzzy logic, providing flexibility, adaptability and intuition through the use of linguistic variables and fuzzy membership functions. The proposed approach uses a fuzzy logic system to provide the confidence values of the bases called. Three variables that are calculated during the base calling procedure feed the fuzzy system. These variables can be calculated at any spatial location and are: peakness, height, and base spacing. In addition to the most likely candidate (the base called), the peakness and height are also found for the second most likely candidate. The technique has been tested on over 3000 ABI 3700 DNA files and the results have shown improved performance over the existing Phred's and ABI's quality values.
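
    The abstract does not give the paper's actual membership functions or rule base, so the following is only a minimal Python sketch of the idea: three normalized trace features (peakness, height, spacing) are fuzzified with assumed triangular membership functions and combined by illustrative rules into a single confidence value. All names, shapes, and weights are assumptions for illustration, not the published system.

        # Illustrative sketch (not the paper's actual system): a minimal fuzzy
        # confidence score for a base call from three normalized trace features.

        def tri(x, a, b, c):
            """Triangular membership function peaking at b on [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def fuzzify(value):
            """Degrees of membership in 'low', 'medium', 'high' for a value in [0, 1]."""
            return {
                "low":    tri(value, -0.5, 0.0, 0.5),
                "medium": tri(value,  0.0, 0.5, 1.0),
                "high":   tri(value,  0.5, 1.0, 1.5),
            }

        def confidence(peakness, height, spacing):
            """Combine three features (each scaled to [0, 1]) into a confidence in [0, 1].

            Assumed rules: high peakness AND high height AND at-least-medium spacing
            push confidence up; any low feature pushes it down.
            """
            p, h, s = fuzzify(peakness), fuzzify(height), fuzzify(spacing)
            high_conf = min(p["high"], h["high"], max(s["medium"], s["high"]))
            low_conf = max(p["low"], h["low"], s["low"])
            mid_conf = 1.0 - max(high_conf, low_conf)
            # Weighted (centroid-like) defuzzification with assumed output levels.
            num = 0.9 * high_conf + 0.5 * mid_conf + 0.1 * low_conf
            den = high_conf + mid_conf + low_conf
            return num / den if den else 0.0

        print(confidence(peakness=0.9, height=0.8, spacing=0.7))  # strong call, high score
        print(confidence(peakness=0.3, height=0.4, spacing=0.2))  # weak call, low score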

    Credibility coefficients based on frequent sets

    Credibility coefficients are heuristic measures applied to the objects of an information system. They were introduced to assess the similarity of objects with respect to the other data in information systems or decision tables. By applying knowledge discovery methods it is possible to obtain rules and dependencies between data. However, the knowledge obtained from the data can be corrupted or incomplete due to improper data, so the importance of identifying these exceptions cannot be overestimated. It is assumed that the majority of the data is correct and only a minor part may be improper. Credibility coefficients of objects should indicate to which group a particular object probably belongs. The main focus of the paper is an algorithm for calculating credibility coefficients. This algorithm is based on frequent sets, which are produced during data analysis based on rough set theory. Some background on rough set theory is supplied to allow the credibility coefficient formulas to be expressed. The implementation and applications of credibility coefficients are presented, and some practical results of identifying improper data with credibility coefficients are discussed as well.
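
    The exact coefficient formula is not given in the abstract, so the following Python sketch only illustrates the general idea under stated assumptions: frequent attribute-value sets are mined from a toy decision table, and each object is scored by the support of the frequent sets it matches, so that objects with unusual attribute combinations receive lower credibility. The rough-set machinery of the actual algorithm is not reproduced.

        # Hedged sketch of a frequent-set-based credibility score, not the
        # paper's published formula.
        from itertools import combinations
        from collections import Counter

        def frequent_sets(objects, min_support, max_size=2):
            """Find attribute-value sets occurring in at least min_support objects."""
            counts = Counter()
            for obj in objects:
                items = sorted(obj.items())
                for size in range(1, max_size + 1):
                    for combo in combinations(items, size):
                        counts[combo] += 1
            return {fs: c for fs, c in counts.items() if c >= min_support}

        def credibility(obj, freq_sets, n_objects):
            """Average relative support of the frequent sets the object matches."""
            matched = [c / n_objects for fs, c in freq_sets.items()
                       if all(obj.get(a) == v for a, v in fs)]
            return sum(matched) / len(freq_sets) if freq_sets else 0.0

        # Toy decision table: the last object has an unusual attribute combination.
        data = [
            {"fever": "yes", "cough": "yes", "flu": "yes"},
            {"fever": "yes", "cough": "yes", "flu": "yes"},
            {"fever": "no",  "cough": "no",  "flu": "no"},
            {"fever": "no",  "cough": "no",  "flu": "no"},
            {"fever": "yes", "cough": "no",  "flu": "yes"},   # unusual combination
        ]
        fs = frequent_sets(data, min_support=2)
        for i, obj in enumerate(data):
            print(i, round(credibility(obj, fs, len(data)), 3))  # last object scores lowest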

    An infrastructure of stream data mining, fusion and management for monitored patients

    Paper presented at the 19th IEEE International Symposium on Computer-Based Medical Systems, CBMS 2006, Salt Lake City, UT. This paper proposes an infrastructure for data mining, fusion and patient care management using continuous stream data monitored from critically ill patients. Stream data mining, fusion, and management provide efficient ways to increase data utilization and to support knowledge discovery, which can be utilized in many clinical areas to improve the quality of patient care services. The primary goal of our work is to establish a customized infrastructure model designed for critical care services at hospitals; however, this infrastructure can be easily extended to other clinical specialties.

    Development of a system analysis methodology for space industry problems and a study of the dynamics of rocket and space technology objects

    System analysis methods for space industry activities are reviewed. Methods for evaluating space program projects and algorithms for forming scientific and technical programs are discussed. Results of research on the dynamics of large space structures, space tether systems, and microsatellites are described.

    Challenges in the Analysis of Mass-Throughput Data: A Technical Commentary from the Statistical Machine Learning Perspective

    Sound data analysis is critical to the success of modern molecular medicine research that involves the collection and interpretation of mass-throughput data. The novel nature and high dimensionality of such datasets pose a series of nontrivial data analysis problems. This technical commentary discusses the problems of over-fitting, error estimation, the curse of dimensionality, causal versus predictive modeling, integration of heterogeneous types of data, and the lack of standard protocols for data analysis. We attempt to shed light on the nature and causes of these problems and to outline viable methodological approaches to overcome them.
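
    As a hedged illustration of one standard safeguard against the over-fitting and biased error estimation described above (the commentary's own recommended protocols are not reproduced here), the Python sketch below estimates classification error on high-dimensional data with nested cross-validation, keeping feature selection and parameter tuning inside the training folds so the outer error estimate is not over-fit. The dataset, feature selector, and classifier are placeholders.

        # Illustrative only: nested cross-validation with in-fold feature selection.
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import GridSearchCV, cross_val_score
        from sklearn.pipeline import Pipeline
        from sklearn.svm import SVC

        # Synthetic "mass-throughput" data: many features, few samples.
        X, y = make_classification(n_samples=100, n_features=2000,
                                   n_informative=20, random_state=0)

        pipe = Pipeline([
            ("select", SelectKBest(f_classif, k=50)),  # fitted per training fold only
            ("clf", SVC(kernel="linear")),
        ])
        param_grid = {"clf__C": [0.01, 0.1, 1, 10]}

        inner = GridSearchCV(pipe, param_grid, cv=3)   # inner loop: model tuning
        scores = cross_val_score(inner, X, y, cv=5)    # outer loop: error estimation
        print("nested CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))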

    Genetic Studies of Complex Human Diseases: Characterizing SNP-Disease Associations Using Bayesian Networks

    Detecting epistatic interactions plays a significant role in understanding the pathogenesis of complex human diseases and in improving their prevention, diagnosis, and treatment. Applying machine learning or statistical methods to epistatic interaction detection encounters some common problems, e.g., a very limited number of samples, an extremely large search space, a large number of false positives, and how to measure the association between disease markers and the phenotype. RESULTS: To address the problems of computational methods in epistatic interaction detection, we propose a score-based Bayesian network structure learning method, EpiBN, to detect epistatic interactions. We apply the proposed method to both simulated datasets and three real disease datasets. Experimental results on simulated data show that our method outperforms some other commonly used methods in terms of power and sample efficiency, and is especially suitable for detecting epistatic interactions with weak or no marginal effects. Furthermore, our method is scalable to real disease data. CONCLUSIONS: We propose a Bayesian network-based method, EpiBN, to detect epistatic interactions. In EpiBN, we develop a new scoring function, which can reflect higher-order epistatic interactions by estimating the model complexity from the data, and apply a fast Branch-and-Bound algorithm to learn the structure of a two-layer Bayesian network containing only one target node. To make our method scalable to real data, we propose the use of a Markov chain Monte Carlo (MCMC) method to perform the screening process. Applications of the proposed method to some real GWAS (genome-wide association studies) datasets may provide helpful insights into understanding the genetic basis of Age-related Macular Degeneration, late-onset Alzheimer's disease, and autism.
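
    EpiBN's actual scoring function, Branch-and-Bound search, and MCMC screening are not reproduced here; the Python sketch below only illustrates the underlying two-layer structure-learning idea under simplifying assumptions: candidate SNP parent sets of a single disease node are scored with a BIC-style criterion, and small parent sets are searched exhaustively as a stand-in for the pruned search. All names and the toy data are assumptions for illustration.

        # Hedged sketch, not EpiBN: score SNP parent sets of one disease node.
        import numpy as np
        from itertools import combinations
        from collections import Counter

        def bic_score(parents, genotypes, disease):
            """Log-likelihood of disease given joint parent genotypes, minus a
            complexity penalty that grows with the number of parent configurations."""
            n = len(disease)
            joint = [tuple(genotypes[p][i] for p in parents) for i in range(n)]
            counts = Counter(zip(joint, disease))
            totals = Counter(joint)
            loglik = sum(c * np.log(c / totals[cfg]) for (cfg, _), c in counts.items())
            penalty = 0.5 * np.log(n) * len(totals)  # one free parameter per configuration
            return loglik - penalty

        def best_parent_set(genotypes, disease, max_parents=2):
            """Exhaustive search over small SNP subsets (stand-in for Branch-and-Bound)."""
            snps = list(genotypes)
            return max((combo for k in range(1, max_parents + 1)
                        for combo in combinations(snps, k)),
                       key=lambda combo: bic_score(combo, genotypes, disease))

        # Toy data: disease status depends on the interaction of snp1 and snp2.
        rng = np.random.default_rng(0)
        n = 500
        genotypes = {f"snp{i}": rng.integers(0, 3, n) for i in range(1, 6)}
        disease = ((genotypes["snp1"] > 0) & (genotypes["snp2"] > 0)).astype(int)
        print(best_parent_set(genotypes, disease))  # should recover ('snp1', 'snp2')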