
    Parameter Estimation and Quantitative Parametric Linkage Analysis with GENEHUNTER-QMOD

    Objective: We present a parametric method for linkage analysis of quantitative phenotypes. The method provides a test for linkage as well as estimates of the phenotype distribution parameters. We have implemented the new method in the program GENEHUNTER-QMOD and evaluated its properties in simulations. Methods: The phenotype is modeled as a normally distributed variable, with a separate distribution for each genotype. Parameter estimates are obtained by maximizing the LOD score over the normal distribution parameters with a gradient-based optimization routine (the PGRAD method). Results: The PGRAD method has lower power to detect linkage than variance components analysis (VCA) in the case of a normal distribution and small pedigrees. However, it outperforms the VCA and Haseman-Elston regression for extended pedigrees, nonrandomly ascertained data, and non-normally distributed phenotypes. In these settings, the higher power is accompanied by conservative type I error, whereas the VCA has an inflated type I error. Parameter estimation tends to underestimate residual variances but performs better for the expectation values of the phenotype distributions. Conclusion: GENEHUNTER-QMOD provides a powerful new tool to explicitly model quantitative phenotypes in the context of linkage analysis. It is freely available at http://www.helmholtz-muenchen.de/genepi/downloads. Copyright (C) 2012 S. Karger AG, Basel.
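    The core numerical step, maximizing a likelihood over genotype-specific normal distribution parameters with a gradient-based optimizer, can be illustrated with a deliberately simplified sketch. The function below is hypothetical: it ignores pedigree structure and inheritance probabilities, which GENEHUNTER-QMOD handles, and simply contrasts a three-mean normal model against a single-normal null on unrelated individuals.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def toy_lod(phenotype, genotype):
        """Toy LOD-style statistic: genotype-specific normal means vs. one
        shared normal. Hypothetical simplification of the PGRAD idea; a
        real linkage analysis sums over pedigrees and inheritance vectors."""
        def neg_loglik(params):
            mu = params[:3]                 # one mean per genotype (0, 1, 2)
            sigma = abs(params[3]) + 1e-8   # shared residual std. deviation
            return -norm.logpdf(phenotype, mu[genotype], sigma).sum()

        x0 = np.r_[np.repeat(phenotype.mean(), 3), phenotype.std()]
        alt = minimize(neg_loglik, x0, method="BFGS")   # gradient-based
        null = norm.logpdf(phenotype, phenotype.mean(), phenotype.std()).sum()
        return (-alt.fun - null) / np.log(10)           # log10 likelihood ratio
    ```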

    Relax-and-fix heuristics applied to a real-world lot-sizing and scheduling problem in the personal care consumer goods industry

    This paper addresses an integrated lot-sizing and scheduling problem in the personal care consumer goods industry, a highly competitive market in which customer service level and cost management are decisive in the competition for clients. The operational environment studied is complex: unrelated parallel machines with limited production capacity and sequence-dependent setup times and costs. There is also a limited finished-goods storage capacity, a characteristic not previously addressed in the literature. Backordering is allowed but highly undesirable. The problem is formulated as a mixed integer linear program. Since the problem is NP-hard, relax-and-fix heuristics with hybrid partitioning strategies are investigated. Computational experiments with randomly generated and real-world instances are presented, and the results show the efficacy and efficiency of the proposed approaches. Compared to the solutions currently used by the company, the best proposed strategies yield substantially lower costs, primarily through reduced inventory levels and better allocation of production batches to machines.
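    Although the paper's model and partitioning strategies are specific, the general relax-and-fix scheme is easy to sketch. The skeleton below, written with the PuLP modeling library, is illustrative only and not the paper's exact heuristic: it assumes the binary setup variables have already been partitioned into ordered blocks (for instance, by time period), keeps only the current block integral, and fixes each block at its solved values before moving on.

    ```python
    import pulp

    def relax_and_fix(prob, blocks):
        """Generic relax-and-fix skeleton (illustrative sketch).

        prob:   a pulp.LpProblem containing the binary variables in `blocks`
        blocks: ordered list of lists of binary pulp.LpVariable objects
        """
        # Relax every binary variable to the interval [0, 1].
        for block in blocks:
            for v in block:
                v.cat = pulp.LpContinuous
                v.lowBound, v.upBound = 0, 1

        for block in blocks:
            for v in block:                 # restore integrality for this block
                v.cat = pulp.LpBinary
            prob.solve(pulp.PULP_CBC_CMD(msg=False))
            for v in block:                 # fix the block at its solved values
                val = round(v.value())
                v.lowBound, v.upBound = val, val
        return prob
    ```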

    Field investigation of novel self-sensing asphalt pavement for weigh-in-motion sensing

    The integration of weigh-in-motion (WIM) sensors within highway or bridge structural health monitoring systems is becoming increasingly popular as a means to ensure structural integrity and user safety. Compared to standard technologies, smart self-sensing materials and systems offer a simpler sensing setup, a longer service life, and increased durability against environmental effects. Field deployment of such technologies requires characterization and design optimization at realistic scales. This paper presents a field investigation of the vehicle load-sensing capabilities of a newly developed low-cost, eco-friendly, and highly durable smart composite paving material. The novel contributions of the work include the design and installation of a full-scale sensing pavement section and of the sensing hardware and software, using tailored low-cost electronics and a learning algorithm for vehicle load estimation. The outcomes demonstrate the effectiveness of the proposed system for infrastructure traffic monitoring and WIM sensing, estimating the gross weight of passing trucks to within 20% error over an autonomous sensing period of two months.
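    The abstract does not specify the learning algorithm used for load estimation, so the following is a hypothetical sketch of how such a calibration could look: simple features are extracted from each truck passage's sensor time series, and a standard regression model is fitted against reference gross weights.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def passage_features(signal):
        """Simple features of one passage's sensor time series
        (hypothetical feature set)."""
        return [signal.min(), signal.max(), np.ptp(signal),
                signal.sum(), len(signal)]

    def fit_wim_model(signals, weights):
        """signals: list of 1-D arrays, one per truck passage;
        weights: reference gross weights (e.g., from a static scale)."""
        X = np.array([passage_features(s) for s in signals])
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X, np.asarray(weights))
        return model              # model.predict(X_new) estimates weights
    ```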

    Interventions in school buildings of historic value: Escuela N° 7 "Juan Martín de Pueyrredón" and Escuela Técnica N° 1 "Ing. Otto Krause"

    The Dirección General de Infraestructura, Mantenimiento y Equipamiento of the Ministerio de Educación of the Gobierno de la Ciudad de Buenos Aires is responsible for a stock of close to 750 school buildings, a large percentage of which (63%) are more than 80 years old, many of them declared Historic Monuments (Escuela Industrial N° 1 "Otto Krause", Escuela Normal N° 1 "Roque Sáenz Peña", Escuela Normal N° 2 "Mariano Acosta", Escuela N° 7 DE1 "Pte. Julio A. Roca") or under heritage protection. (Paragraph extracted as a summary)

    Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models

    The evaluation complexity of general nonlinear, possibly nonconvex, constrained optimization is analyzed. It is shown that, under suitable smoothness conditions, an ε-approximate first-order critical point of the problem can be computed in order O(ε^{1-2(p+1)/p}) evaluations of the problem's functions and their first p derivatives. This is achieved by using a two-phase algorithm inspired by Cartis, Gould, and Toint [SIAM J. Optim., 21 (2011), pp. 1721-1739; SIAM J. Optim., 23 (2013), pp. 1553-1574]. It is also shown that strong guarantees (in terms of handling degeneracies) on the possible limit points of the sequence of iterates generated by this algorithm can be obtained at the cost of increased complexity. At variance with previous results, ε-approximate first-order criticality is defined by satisfying a version of the KKT conditions with an accuracy that does not depend on the size of the Lagrange multipliers.
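    To make "unscaled" concrete, an ε-approximate first-order point is typically characterized by KKT-type residual bounds of the generic form below. This is a hedged sketch of the standard notion for an equality-constrained problem, not necessarily the paper's exact definition.

    ```latex
    % Generic unscaled epsilon-approximate KKT conditions for
    %   min f(x)  subject to  c(x) = 0
    \[
      \|\nabla f(x) + J(x)^{\top} y\| \le \epsilon,
      \qquad
      \|c(x)\| \le \epsilon .
    \]
    % A scaled variant would instead bound the first residual by
    % epsilon * (1 + \|y\|), so its accuracy weakens as the Lagrange
    % multipliers y grow; the unscaled form avoids this dependence.
    ```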

    Lecture: Schooling, identity, and teacher education

    Today I want to propose that we revisit some features we already know and connect them with new ones, in order to think about the ways in which teacher identities were historically configured; then, further on, I would like to discuss with you how those classic identities of Argentine teaching are placed under tension today and confront new dilemmas. Facultad de Humanidades y Ciencias de la Educación

    Implementation of an Optimal First-Order Method for Strongly Convex Total Variation Regularization

    We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to μ-strongly convex objective functions with L-Lipschitz continuous gradient. In Nesterov's framework both μ and L are assumed known, an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms that estimate locally sufficient μ and L during the iterations. These mechanisms also allow the method to be applied to non-strongly convex functions. We discuss the iteration complexity of several first-order methods, including the proposed algorithm, and we use a 3D tomography problem to compare their performance. The results show that for ill-conditioned problems solved to high accuracy, the proposed method significantly outperforms state-of-the-art first-order methods, as also suggested by theoretical results.
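    For reference, the baseline scheme with μ and L known, the constant-momentum variant of Nesterov's optimal method, can be sketched in a few lines. This illustrates only the classical method; the paper's contribution, estimating locally sufficient μ and L during the iterations, is not reproduced here.

    ```python
    import numpy as np

    def nesterov_strongly_convex(grad, x0, mu, L, iters=500):
        """Constant-momentum Nesterov method for a mu-strongly convex
        objective with L-Lipschitz gradient (mu and L assumed known)."""
        q = np.sqrt(mu / L)
        beta = (1 - q) / (1 + q)              # optimal momentum coefficient
        x = y = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x_next = y - grad(y) / L          # gradient step at extrapolated point
            y = x_next + beta * (x_next - x)  # momentum extrapolation
            x = x_next
        return x
    ```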

    Guaranteed clustering and biclustering via semidefinite programming

    Identifying clusters of similar objects in data plays a significant role in a wide range of applications. As a model problem for clustering, we consider the densest k-disjoint-clique problem, whose goal is to identify the collection of k disjoint cliques of a given weighted complete graph that maximizes the sum of the densities of the complete subgraphs induced by these cliques. In this paper, we establish conditions ensuring exact recovery of the densest k cliques of a given graph from the optimal solution of a particular semidefinite program. In particular, the semidefinite relaxation is exact for input graphs corresponding to data consisting of k large, distinct clusters and a smaller number of outliers. This approach also yields a semidefinite relaxation for the biclustering problem with similar recovery guarantees. Given a set of objects and a set of features exhibited by these objects, biclustering seeks to simultaneously group the objects and features according to their expression levels. This problem may be posed as partitioning the nodes of a weighted complete bipartite graph such that the sum of the densities of the resulting complete bipartite subgraphs is maximized. As in our analysis of the densest k-disjoint-clique problem, we show that the correct partition of the objects and features can be recovered from the optimal solution of a semidefinite program when the given data consist of several disjoint sets of objects exhibiting similar features. Empirical evidence from numerical experiments supporting these theoretical guarantees is also provided.
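    A semidefinite relaxation of this type can be prototyped directly with an off-the-shelf modeler. The program below is a schematic clustering SDP written in CVXPY; it may not match the paper's exact formulation, and it requires an SDP-capable solver such as SCS.

    ```python
    import numpy as np
    import cvxpy as cp

    def clustering_sdp(W, k):
        """Schematic SDP relaxation for partitioning n objects into k
        clusters, given a symmetric nonnegative affinity matrix W.
        (Illustrative; not necessarily the paper's exact program.)"""
        n = W.shape[0]
        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0,                  # positive semidefinite
                       X >= 0,                  # entrywise nonnegative
                       cp.sum(X, axis=1) == 1,  # rows sum to one
                       cp.trace(X) == k]        # k clusters
        cp.Problem(cp.Maximize(cp.trace(W @ X)), constraints).solve()
        return X.value  # cluster by grouping (near-)identical rows of X
    ```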