
    Flexible Resource Allocation: A Comparison Of Linear Diophantine Analysis And Integer Programming

    To help production managers cope with an ever-changing and complex business environment, we investigate a flexible methodology for solving integer resource allocation problems. Solutions to an example problem obtained with the Linear Diophantine Equation (LDE) methodology presented here are compared with solutions produced by Integer Programming. Tradeoffs are examined and discussed, and suggestions are made for managers facing resource decisions similar to the example studied.
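
    The abstract does not spell out the LDE procedure itself. As a minimal illustrative sketch (not the paper's method), the Python snippet below enumerates every non-negative integer solution of a two-variable linear Diophantine equation a*x + b*y = c via the extended Euclidean algorithm; the job sizes (7 and 11 machine-hours) and the 120-hour budget are hypothetical.

```python
def ext_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def nonneg_solutions(a, b, c):
    """All non-negative integer (x, y) with a*x + b*y = c, for a, b > 0."""
    g, x0, y0 = ext_gcd(a, b)
    if c % g:
        return []                      # the equation has no integer solutions
    x0, y0 = x0 * (c // g), y0 * (c // g)
    sb, sa = b // g, a // g
    # General solution: x = x0 + t*sb, y = y0 - t*sa for integer t;
    # clamp t so that both coordinates stay non-negative.
    t_lo = -(x0 // sb)                 # ceil(-x0 / sb)
    t_hi = y0 // sa                    # floor(y0 / sa)
    return [(x0 + t * sb, y0 - t * sa) for t in range(t_lo, t_hi + 1)]

# Hypothetical allocation: jobs consuming 7 and 11 machine-hours each
# must exhaust a budget of exactly 120 hours.
print(nonneg_solutions(7, 11, 120))    # [(3, 9), (14, 2)]
```

    Unlike a single Integer Programming optimum, this view surfaces the entire family of feasible integer allocations, which is one way to read the flexibility the abstract emphasizes.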

    Logical and uncertainty models for information access: current trends

    This paper briefly reviews the current trends in information access research as they emerged from the 1999 Workshop on Logical and Uncertainty Models for Information Systems (LUMIS'99). We believe that some of these issues will be central to future research on the theory and applications of logical and uncertainty models for information access.

    Total Empiricism: Learning from Data

    Statistical analysis is an important tool for distinguishing systematic from chance findings. Current statistical analyses rely on distributional assumptions reflecting the structure of some underlying model; when these assumptions are not met, the analysis and interpretation of the results become problematic. Instead of trying to fix the model or "correct" the data, we here describe a totally empirical statistical approach that does not rely on ad hoc distributional assumptions, thereby overcoming many problems in contemporary statistics. Starting from elementary combinatorics, we motivate an information-guided formalism to quantify the knowledge extracted from the given data. Subsequently, we derive model-agnostic methods to identify patterns that are evidenced solely by the data, relative to our prior knowledge. The data-centric character of empiricism allows for its universal applicability, particularly as sample size grows larger. In this comprehensive framework, we re-interpret and extend the model distributions, scores, and statistical tests used in different schools of statistics.
    Comment: Keywords: effective description, large-N, operator formalism, statistical testing, inference, information divergence
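
    The operator formalism itself is not given in the abstract. As a generic illustration of a distribution-free, data-driven test in the same empirical spirit (not the paper's actual method), a two-sample permutation test builds its null reference entirely from the observed data; the sample values below are invented.

```python
import random

def permutation_test(xs, ys, n_resamples=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Distribution-free: the null distribution is built by reshuffling
    the observed data itself rather than assumed from a model.
    """
    rng = random.Random(seed)
    pooled = list(xs) + list(ys)
    observed = abs(sum(xs) / len(xs) - sum(ys) / len(ys))
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        a, b = pooled[:len(xs)], pooled[len(xs):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return (hits + 1) / (n_resamples + 1)   # small-sample-safe p-value

print(permutation_test([12.1, 9.8, 11.4, 10.7], [8.9, 9.5, 8.1, 10.0]))
```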

    Combinatorics and Geometry of Transportation Polytopes: An Update

    A transportation polytope consists of all multidimensional arrays or tables of non-negative real numbers that satisfy certain sum conditions on subsets of the entries. These polytopes arise naturally in optimization and statistics, and they are also of interest in discrete mathematics because permutation matrices, Latin squares, and magic squares appear naturally as their lattice points. In this paper we survey advances in the understanding of the combinatorics and geometry of these polyhedra, and we include some recent unpublished results on the diameter of the graphs of these polytopes. In particular, this is a thirty-year update on the status of a list of open questions last visited in the 1984 book by Yemelichev, Kovalev and Kravtsov and the 1986 survey paper of Vlach.
    Comment: 35 pages, 13 figures
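
    To make the lattice-point remark concrete, the following brute-force Python sketch (illustrative only, feasible for tiny margins) enumerates the non-negative integer tables with prescribed row and column sums, i.e. the lattice points of a classical two-way transportation polytope; with all margins equal to 1 it recovers exactly the permutation matrices.

```python
from itertools import product

def lattice_points(row_sums, col_sums):
    """Non-negative integer tables with the given margins: the lattice
    points of the classical two-way transportation polytope."""
    n = len(col_sums)
    if sum(row_sums) != sum(col_sums):
        return []                                  # the polytope is empty
    def rows_with_sum(s):                          # all length-n rows summing to s
        return [r for r in product(range(s + 1), repeat=n) if sum(r) == s]
    return [rows
            for rows in product(*(rows_with_sum(r) for r in row_sums))
            if all(sum(row[j] for row in rows) == col_sums[j] for j in range(n))]

# All margins equal to 1: the lattice points are the 3x3 permutation
# matrices (the vertices of the Birkhoff polytope), 3! = 6 of them.
print(len(lattice_points([1, 1, 1], [1, 1, 1])))   # 6
```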

    An evaluation of two new inference control methods

    An evaluation method is developed to measure the cost-effectiveness of two inference control methods. The factors of the evaluation function are the preparation cost of the control method, the query complexity, and the security level under various attacks. The first control method is based on restriction, and the second on perturbation. Simulation results indicate that both methods have higher preparation cost, better security, and faster response time than L.H. Cox's method (1980) and L.L. Beck's method (1980). Finally, the two methods are compared to each other. In general, control methods based on restriction have higher preparation cost and better security, while control methods based on perturbation have faster query response times but leak more information.
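
    The abstract names the two families without detailing their mechanics. The toy Python sketch below (hypothetical class, threshold, and noise scale; not the evaluated methods themselves) contrasts a restriction control, which refuses a query whose query set is too small, with a perturbation control, which always answers but adds random noise.

```python
import random

class StatDB:
    """Toy statistical database contrasting restriction and perturbation.
    k_min and noise_sd are illustrative parameters, not the paper's."""

    def __init__(self, salaries, k_min=3, noise_sd=100.0, seed=0):
        self.salaries = salaries
        self.k_min = k_min                 # restriction: minimum query-set size
        self.noise_sd = noise_sd           # perturbation: noise magnitude
        self.rng = random.Random(seed)

    def sum_restricted(self, predicate):
        rows = [v for v in self.salaries if predicate(v)]
        if len(rows) < self.k_min:         # refuse small query sets
            raise PermissionError("query set too small; answer refused")
        return sum(rows)                   # exact, but some queries are denied

    def sum_perturbed(self, predicate):
        rows = [v for v in self.salaries if predicate(v)]
        return sum(rows) + self.rng.gauss(0.0, self.noise_sd)  # always answers

db = StatDB([30_000, 45_000, 52_000, 61_000, 75_000])
print(db.sum_perturbed(lambda v: v > 50_000))    # noisy answer, never refused
print(db.sum_restricted(lambda v: v > 50_000))   # exact: 3 rows pass the threshold
```

    The tradeoff reported in the abstract maps directly onto this split: restriction protects better at a higher cost, while perturbation answers every query at the price of some information leakage.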

    Molecular Formula Identification using High Resolution Mass Spectrometry: Algorithms and Applications in Metabolomics and Proteomics

    We investigate several theoretical and practical aspects of identifying the molecular formula of biomolecules using high-resolution mass spectrometry. Owing to recent advances in instrumentation, mass spectrometry (MS) has become one of the key technologies for the analysis of biomolecules in proteomics and metabolomics. It measures the masses of the molecules in a sample with high accuracy and is well suited to high-throughput data acquisition. One of the core tasks in MS-based proteomics and metabolomics is the identification of the molecules in the sample. In metabolomics, metabolites undergo structure elucidation, which begins with the molecular formula of a molecule, i.e., the number of atoms of each element. This is the decisive step in the identification of an unknown metabolite, since a determined formula reduces the number of possible molecular structures to a much smaller set that can be analyzed further with methods for automated structure elucidation. After preprocessing, the output of a mass spectrometer is a list of peaks corresponding to the masses of the molecules and their intensities, i.e., the number of molecules with a given mass. In principle, the molecular formulas of small molecules can be identified from accurate masses alone. However, it has been found that, owing to the large number of chemically legitimate formulas in the higher mass range, excellent mass accuracy alone does not suffice for identification. High-resolution MS allows molecular masses and intensities to be determined with outstanding accuracy. In this thesis, we develop several algorithms and applications that apply this information to the identification of the molecular formulas of biomolecules.
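
    To illustrate why accurate mass alone underdetermines the formula, here is a brute-force Python sketch (illustrative only; the element alphabet, tolerance, and exhaustive search are assumptions, and the thesis develops far more efficient decomposition algorithms that also exploit isotope-pattern intensities) that enumerates all CHNOPS formulas whose monoisotopic mass lies within a ppm tolerance of a measured mass.

```python
# Monoisotopic masses in Da (values rounded; CHNOPS alphabet assumed).
MASSES = {"C": 12.0, "H": 1.007825, "N": 14.003074,
          "O": 15.994915, "P": 30.973762, "S": 31.972071}

def decompose(target, tol_ppm=5.0):
    """All element-count dicts whose mass is within tol_ppm of target."""
    tol = target * tol_ppm * 1e-6
    elems = list(MASSES)
    results = []

    def rec(i, remaining, counts):
        if i == len(elems):
            if abs(remaining) <= tol:      # mass matches within tolerance
                results.append(dict(zip(elems, counts)))
            return
        step = MASSES[elems[i]]
        # Most atoms of this element that still fit into the remaining mass.
        max_n = max(0, int((remaining + tol) // step))
        for n in range(max_n + 1):
            rec(i + 1, remaining - n * step, counts + [n])

    rec(0, target, [])
    return results

# Glucose, C6H12O6, has monoisotopic mass ~180.0634 Da; even at 5 ppm,
# several chemically plausible formulas can compete at higher masses.
for formula in decompose(180.0634):
    print({e: n for e, n in formula.items() if n})
```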