
    CORNN: Convex optimization of recurrent neural networks for rapid inference of neural dynamics

    Advances in optical and electrophysiological recording technologies have made it possible to record the dynamics of thousands of neurons, opening up new possibilities for interpreting and controlling large neural populations in behaving animals. A promising way to extract computational principles from these large datasets is to train data-constrained recurrent neural networks (dRNNs). Performing this training in real time could open doors for research techniques and medical applications that model and control interventions at single-cell resolution and drive desired forms of animal behavior. However, existing training algorithms for dRNNs are inefficient and have limited scalability, making it a challenge to analyze large neural recordings even in offline scenarios. To address these issues, we introduce a training method termed Convex Optimization of Recurrent Neural Networks (CORNN). In studies of simulated recordings, CORNN attained training speeds ~100-fold faster than traditional optimization approaches while maintaining or enhancing modeling accuracy. We further validated CORNN on simulations with thousands of cells that performed simple computations, such as those of a 3-bit flip-flop or the execution of a timed response. Finally, we showed that CORNN can robustly reproduce network dynamics and underlying attractor structures despite mismatches between generator and inference models, severe subsampling of observed neurons, or mismatches in neural timescales. Overall, by training dRNNs with millions of parameters in subminute processing times on a standard computer, CORNN constitutes a first step towards real-time network reproduction constrained by large-scale neural recordings and a powerful computational tool for advancing the understanding of neural computation.
    Comment: Accepted at NeurIPS 202
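
    The abstract does not spell out CORNN's formulation, but the core idea it names, convex optimization of a dRNN, can be sketched: when every unit's activity is observed, fitting the recurrent weights of a rate network reduces to one independent convex (ridge) regression per neuron. Below is a minimal sketch under assumed discrete-time leaky-tanh dynamics; the network size, leak rate, and regularizer are hypothetical, and this is not CORNN's actual algorithm.

```python
import numpy as np

# Minimal sketch (not the CORNN algorithm itself): when all unit
# activities r(t) are observed, a rate network of the assumed form
#   r(t+1) = (1 - a) * r(t) + a * tanh(W r(t))
# makes each row of W the solution of an independent convex problem,
# because the pre-activation target arctanh(.) is known from the data.

rng = np.random.default_rng(0)
N, T, a = 50, 500, 0.1                      # neurons, timesteps, leak rate

# Simulated "recording": trajectories from a ground-truth network.
W_true = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
R = np.zeros((T, N))
R[0] = rng.normal(0, 0.1, N)
for t in range(T - 1):
    R[t + 1] = (1 - a) * R[t] + a * np.tanh(R[t] @ W_true.T)

# Invert the dynamics to get per-step pre-activation targets.
targets = np.arctanh(np.clip((R[1:] - (1 - a) * R[:-1]) / a, -0.999, 0.999))

# Ridge regression per neuron: a single convex least-squares solve.
lam = 1e-4
X = R[:-1]
W_fit = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ targets).T

print("relative weight recovery error:",
      np.linalg.norm(W_fit - W_true) / np.linalg.norm(W_true))
```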

    Scalable computation of intracellular metabolite concentrations

    Current mathematical frameworks for predicting the flux state and macromolecular composition of the cell do not rely on thermodynamic constraints to determine the spontaneous direction of reactions. As a result, these predictions may be biologically infeasible. Imposing thermodynamic constraints requires accurate estimates of intracellular metabolite concentrations, which are constrained within physiologically possible ranges to enable an organism to grow in extreme conditions and adapt to its environment. Here, we introduce tractable computational techniques to characterize intracellular metabolite concentrations within a constraint-based modeling framework. This model provides a feasible concentration set, which can generally be nonconvex and disconnected. We examine three approaches based on polynomial optimization, random sampling, and global optimization. We leverage the sparsity and algebraic structure of the underlying biophysical models to enhance the computational efficiency of these techniques. We then compare their performance in two case studies, showing that the global-optimization formulation exhibits more desirable scaling properties than the random-sampling and polynomial-optimization formulations and is thus a promising candidate for handling large-scale metabolic networks.
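
    As a concrete illustration of the random-sampling approach mentioned above, the sketch below draws log-concentrations from physiological bounds and keeps those satisfying the thermodynamic constraint Delta_G = Delta_G0 + RT * S^T ln(c) < 0 for every forward-operating reaction. The toy stoichiometry, standard energies, and bounds are hypothetical, not taken from the paper's case studies.

```python
import numpy as np

# Illustrative sketch of the random-sampling approach: thermodynamic
# feasibility requires Delta_G = Delta_G0 + RT * S^T ln(c) < 0 for every
# reaction carrying forward flux, with ln(c) bounded to physiological
# ranges. The feasible set in ln(c)-space is a polytope here, but it can
# become nonconvex once reaction directions are themselves decisions.

RT = 2.479  # kJ/mol at ~298 K

# Toy network: 3 metabolites, 2 forward-operating reactions (columns of S).
S = np.array([[-1.0,  0.0],
              [ 1.0, -1.0],
              [ 0.0,  1.0]])
dG0 = np.array([-5.0, 2.0])                 # hypothetical standard energies

ln_lo, ln_hi = np.log(1e-6), np.log(1e-2)   # concentration bounds (M)

rng = np.random.default_rng(1)
samples = rng.uniform(ln_lo, ln_hi, size=(100_000, 3))
dG = dG0 + RT * samples @ S                 # Delta_G for each reaction
feasible = samples[(dG < 0).all(axis=1)]

print(f"feasible fraction: {len(feasible) / len(samples):.3f}")
if len(feasible):
    print("example feasible concentrations (M):", np.exp(feasible[0]).round(7))
```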

    Approximate Geometric Regularities


    Topics in exact precision mathematical programming

    The focus of this dissertation is the advancement of theory and computation related to exact precision mathematical programming. Optimization software based on floating-point arithmetic can return suboptimal or incorrect results because of round-off errors or the use of numerical tolerances. Exact or correct results are necessary for some applications. Implementing software entirely in rational arithmetic can be prohibitively slow. A viable alternative is the use of hybrid methods that use fast numerical computation to obtain approximate results that are then verified or corrected with safe or exact computation. We study fast methods for sparse exact rational linear algebra, which arises as a bottleneck when solving linear programming problems exactly. Output-sensitive methods for exact linear algebra are studied. Finally, a new method for computing valid linear programming bounds is introduced and proven effective as a subroutine for solving mixed-integer linear programming problems exactly. Extensive computational results are presented for each topic.
    Ph.D. Committee Chair: Dr. William J. Cook; Committee Member: Dr. George Nemhauser; Committee Member: Dr. Robin Thomas; Committee Member: Dr. Santanu Dey; Committee Member: Dr. Shabbir Ahmed; Committee Member: Dr. Zonghao G
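
    The hybrid numeric/exact strategy described above can be sketched in a few lines: solve in fast floating point, reconstruct a candidate rational solution by continued-fraction rounding, and verify the candidate exactly in rational arithmetic. The example below is a minimal illustration on a 2x2 system, not the dissertation's sparse linear-algebra machinery.

```python
from fractions import Fraction

import numpy as np

# Sketch of the hybrid numeric/exact idea: only the cheap verification
# step needs rational arithmetic; the heavy solve stays in floating point.

A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(3)]]
b = [Fraction(7), Fraction(11)]

# 1) Fast approximate solve in floating point.
x_float = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))

# 2) Rational reconstruction (continued-fraction rounding).
x_rat = [Fraction(float(v)).limit_denominator(10**6) for v in x_float]

# 3) Exact verification: A @ x == b in rational arithmetic, no tolerances.
exact = all(sum(A[i][j] * x_rat[j] for j in range(2)) == b[i] for i in range(2))
print("candidate:", x_rat, "| exactly verified:", exact)
```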

    Mathematical optimization and learning models to address uncertainties and sustainability of supply chain management

    As concerns about climate change, biodiversity loss, and pollution have become more widespread, new worldwide challenges deal with the protection of the environment and the conservation of natural resources. Thus, in order to empower sustainability and circular-economy ambitions, the world has shifted to embrace sustainable practices and policies, primarily through the implementation of sustainable business practices and increased investment in green technology. Advanced information systems, digital technologies, and mathematical models are required to respond to the demanding targets of the sustainability paradigm. This trend is expanding with the growing interest in the sustainability of production and services, aiming to achieve economic growth and development while preventing negative impacts on the environment. A significant step forward in this direction is enabled by Supply Chain Management (SCM) practices that exploit mathematical and statistical modeling to better support decisions affecting both profitability and sustainability targets. Indeed, these targets should not be approached as competing goals, but rather addressed simultaneously within a comprehensive vision that responds adequately to both. Accordingly, Green Supply Chain Management (GSCM) can achieve its goals through innovative management approaches that consider sustainable efficiency and profitability to be clearly linked by the savings that result from applying optimization techniques. Confirming this, there is a growing trend of applying mathematical optimization models to enhance decision-making in pursuit of both environmental and profit performance. Indeed, GSCM takes into account many decision problems, such as facility location, capacity allocation, production planning, and vehicle routing. Besides sustainability, uncertainty is another critical issue in SCM: a deterministic approach would fail to provide concrete decision support when modeling such scenarios. Depending on the hypotheses and strategies adopted, uncertainty can be addressed with several modeling approaches arising from statistics, statistical learning, and mathematical programming. While statistical and learning models account for variability by definition, Robust Optimization (RO) is a modeling approach commonly applied to mathematical programming problems in which a certain set of parameters is subject to uncertainty. In this dissertation, mathematical and learning models are exploited according to different approaches and model combinations, providing new formulations and frameworks to address strategic and operational problems of GSCM under uncertainty. All models and frameworks presented in this dissertation are tested and validated on real-case instances.
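
    To make the Robust Optimization idea concrete, the sketch below builds the robust counterpart of a toy production-planning LP under box uncertainty: a constraint a^T x <= b with a in [a_nom - delta, a_nom + delta] and x >= 0 holds for every realization exactly when (a_nom + delta)^T x <= b, so the robust problem stays an LP. All figures are hypothetical and unrelated to the dissertation's real-case instances.

```python
import numpy as np
from scipy.optimize import linprog

# Toy robust production plan under box uncertainty (hypothetical data).
profit = np.array([40.0, 30.0])            # profit per unit of 2 products
a_nom  = np.array([2.0, 1.0])              # nominal resource use per unit
delta  = np.array([0.3, 0.2])              # uncertainty half-widths
budget = 100.0                             # resource availability

# With x >= 0, the worst case of the uncertain constraint is attained at
# a_nom + delta, so the robust counterpart just tightens the coefficients.
for label, coeff in [("nominal", a_nom), ("robust", a_nom + delta)]:
    res = linprog(-profit, A_ub=[coeff], b_ub=[budget],
                  bounds=[(0, 40), (0, 40)])
    print(f"{label:7s} plan: x = {res.x.round(2)}, profit = {-res.fun:.1f}")
```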

    Combining mathematical programming and SysML for component sizing as applied to hydraulic systems

    In this research, the focus is on improving a designer's capability to determine near-optimal sizes of components for a given system architecture. Component sizing is a hard problem to solve because of the presence of competing objectives, requirements from multiple disciplines, and the need for finding a solution quickly for the architecture being considered. In current approaches, designers rely on heuristics and iterate over the multiple objectives and requirements until a satisfactory solution is found. To improve on this state of practice, this research introduces advances in two areas: (a) formulating a component sizing problem in a manner that is convenient to designers, and (b) solving the component sizing problem efficiently so that all of the imposed requirements are satisfied simultaneously and the solution obtained is mathematically optimal. In particular, an acausal, algebraic, equation-based, declarative modeling approach is taken to solve component sizing problems efficiently. This is because global optimization algorithms exist for algebraic models and the computation time is considerably less than that required to optimize dynamic simulations. In this thesis, the mathematical programming language GAMS (General Algebraic Modeling System) and its associated global optimization solvers are used to solve component sizing problems efficiently. Mathematical programming languages such as GAMS are not convenient for formulating component sizing problems, so the Systems Modeling Language developed by the Object Management Group (OMG SysML) is used to formally capture and organize models related to component sizing into libraries that can be reused to compose new models quickly by connecting them together. Model transformations are then used to generate low-level mathematical programming models in GAMS that can be solved using commercial off-the-shelf solvers such as BARON (Branch and Reduce Optimization Navigator) to determine the component sizes that satisfy the requirements and objectives imposed on the system. This framework is illustrated by applying it to an example application: sizing a hydraulic log splitter.
    M.S. Committee Co-Chair: Paredis, Chris; Committee Co-Chair: Schaefer, Dirk; Committee Member: Goel, Asho
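
    Below is a hedged sketch of the acausal, algebraic sizing formulation described above, with a hypothetical hydraulic log-splitter model standing in for the thesis's GAMS/SysML libraries. A local scipy solver stands in for a global solver such as BARON, and every numeric requirement is an assumed value.

```python
import math

from scipy.optimize import minimize

# Hypothetical requirements for a log-splitter cylinder and pump.
P_MAX = 20e6     # pump pressure limit (Pa)       -- assumed value
F_REQ = 150e3    # required splitting force (N)   -- assumed value
T_MAX = 8.0      # max extension time (s)         -- assumed value
STROKE = 0.6     # cylinder stroke (m)            -- assumed value

def area(d):
    """Piston cross-sectional area (m^2) for diameter d (m)."""
    return math.pi * d ** 2 / 4.0

# Requirements stated as algebraic inequalities, no causal simulation:
#   force: P_MAX * area(d) >= F_REQ
#   speed: STROKE * area(d) / q <= T_MAX
cons = [
    {"type": "ineq", "fun": lambda v: P_MAX * area(v[0]) - F_REQ},
    {"type": "ineq", "fun": lambda v: T_MAX - STROKE * area(v[0]) / v[1]},
]

# Stand-in cost objective that grows with component size (hypothetical).
res = minimize(
    lambda v: 1e3 * v[0] + 1e5 * v[1],      # v = (diameter d, pump flow q)
    x0=[0.15, 1e-3],
    bounds=[(0.02, 0.3), (1e-5, 5e-3)],
    constraints=cons,
)
d, q = res.x
print(f"piston diameter = {d * 1000:.1f} mm, pump flow = {q * 1e3:.3f} L/s")
```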

    Verifying chemical reaction network implementations: A bisimulation approach

    Efforts in programming DNA and other biological molecules have recently focused on general schemes to physically implement arbitrary Chemical Reaction Networks (CRNs). Errors in some of the proposed schemes have driven a desire for formal verification methods. By interpreting each implementation species as a multiset of formal species, the concept of weak bisimulation can be adapted to CRNs in a way that agrees with an intuitive notion of a correct implementation. The theory of CRN bisimulation can be used to prove the correctness of a general implementation scheme or to detect subtle problems. Given a specific formal CRN and a specific implementation CRN, the complexity of finding a valid interpretation between the two CRNs, if one exists, and that of checking whether a given interpretation is valid are both PSPACE-complete in the general case, but become NP-complete and polynomial-time, respectively, under an assumption that holds in many cases of interest. We present effective algorithms for both of those problems. We further discuss features of CRN bisimulation, including a transitivity property and a modularity condition, the precise connection to the general theory of bisimulation, and an extension that takes into account spurious catalysts.
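
    The interpretation idea described above can be sketched directly: map each implementation species to a multiset of formal species, then require each implementation reaction to interpret either to a formal reaction or to a trivial reaction (interpreted reactants equal interpreted products). The toy CRNs below are invented for illustration, and this check covers only part of the paper's full validity condition.

```python
from collections import Counter

def interpret(multiset, m):
    """Apply interpretation m to a multiset of implementation species."""
    out = Counter()
    for species, count in multiset.items():
        for formal, k in m[species].items():
            out[formal] += k * count
    return out

# Formal CRN: A + B -> C
formal_rxns = [(Counter({"A": 1, "B": 1}), Counter({"C": 1}))]

# Implementation CRN: A -> iA ; iA + B -> C   (iA is an intermediate)
impl_rxns = [(Counter({"A": 1}), Counter({"iA": 1})),
             (Counter({"iA": 1, "B": 1}), Counter({"C": 1}))]

# Interpretation: iA "means" A; every other species means itself.
m = {"A": {"A": 1}, "B": {"B": 1}, "C": {"C": 1}, "iA": {"A": 1}}

# Each implementation reaction must interpret to a formal reaction or be
# trivial (interpreted reactants equal interpreted products).
for lhs, rhs in impl_rxns:
    L, R = interpret(lhs, m), interpret(rhs, m)
    ok = (L == R) or ((L, R) in formal_rxns)
    print(f"{dict(lhs)} -> {dict(rhs)}: "
          f"interpreted {dict(L)} -> {dict(R)}, valid={ok}")
```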

    Evolutionary Search Techniques with Strong Heuristics for Multi-Objective Feature Selection in Software Product Lines

    Software design is a process of trading off competing objectives. If the user objective space is rich, then we should use optimizers that can fully exploit that richness. For example, this study configures software product lines (expressed as feature models) using various search-based software engineering methods. Our main result is that as we increase the number of optimization objectives, the methods in widespread use (e.g. NSGA-II, SPEA2) perform much worse than IBEA (Indicator-Based Evolutionary Algorithm). IBEA works best since it makes the most use of user preference knowledge. Hence it does better on the standard measures (hypervolume and spread), but it also generates far more products with 0 violations of domain constraints. We also present significant improvements to IBEA's performance by employing three strong heuristic techniques that we call PUSH, PULL, and seeding. The PUSH technique forces the evolutionary search to respect certain rules and dependencies defined by the feature models, while the PULL technique gives higher weight to constraint satisfaction as an optimization objective and thus achieves a higher percentage of fully-compliant configurations within shorter runtimes. The seeding technique helps in guiding very large feature models to correct configurations very early in the optimization process. Our conclusion is that the methods we apply in search-based software engineering need to be carefully chosen, particularly when studying complex decision spaces with many optimization objectives. Also, we conclude that search methods must be customized to fit the problem at hand. Specifically, the evolutionary search must respect domain constraints.
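
    The PULL technique, as described above, promotes constraint satisfaction to an explicit objective. The sketch below shows the ingredient that makes this possible for a toy feature model: counting violated requires/excludes rules for a configuration and pairing that count with a cost objective. The feature model and costs are invented, and a real run would evolve a population with IBEA against these (and further) objectives rather than just scoring random configurations.

```python
import random

# Toy feature model (hypothetical): dependency and exclusion rules plus
# a per-feature cost.
FEATURES = ["core", "gui", "cli", "crypto", "fast_crypto"]
REQUIRES = [("gui", "core"), ("cli", "core"), ("fast_crypto", "crypto")]
EXCLUDES = [("gui", "cli")]                 # mutually exclusive features
COST = {"core": 5, "gui": 8, "cli": 3, "crypto": 6, "fast_crypto": 9}

def violations(cfg):
    """Count violated requires/excludes rules for a configuration."""
    v = sum(1 for a, b in REQUIRES if cfg[a] and not cfg[b])
    v += sum(1 for a, b in EXCLUDES if cfg[a] and cfg[b])
    return v

def objectives(cfg):
    # Minimize both: (rule violations, total cost). Making violations a
    # first-class objective is the essence of the PULL idea.
    return violations(cfg), sum(COST[f] for f in FEATURES if cfg[f])

random.seed(0)
pop = [{f: random.random() < 0.5 for f in FEATURES} for _ in range(5)]
for cfg in pop:
    print({f for f in FEATURES if cfg[f]}, "->", objectives(cfg))
```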