422 research outputs found

    Enzyme economy in metabolic networks

    Metabolic systems are governed by a compromise between metabolic benefit and enzyme cost. This hypothesis and its consequences can be studied by kinetic models in which enzyme profiles are chosen by optimality principles. In enzyme-optimal states, active enzymes must provide benefits: a higher enzyme level must provide a metabolic benefit that justifies the additional enzyme cost. This entails general relations between metabolic fluxes, reaction elasticities, and enzyme costs: the laws of metabolic economics. The laws can be formulated using economic potentials and loads, state variables that quantify how metabolites, reactions, and enzymes affect the metabolic performance in a steady state. Economic balance equations link them to fluxes, reaction elasticities, and enzyme levels locally in the network. Economically feasible fluxes must be free of futile cycles and must lead from lower to higher economic potentials, just as thermodynamics makes them lead from higher to lower chemical potentials. Metabolic economics provides algebraic conditions for economical fluxes that are independent of the underlying kinetic models. It justifies and extends the principle of minimal fluxes and shows how to construct kinetic models in enzyme-optimal states, in which all enzymes have a positive influence on the metabolic performance.
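    The sign condition on economic potentials described above can be sketched as a small check: every active flux must run from lower to higher economic potential. The reaction names, potential values, and restriction to unimolecular steps below are purely illustrative, not taken from the paper.

```python
# Hypothetical sketch: test the economic sign condition that feasible
# fluxes must satisfy (flux direction agrees with increasing economic
# potential), mirroring the thermodynamic sign condition for chemical
# potentials. All names and numbers are made up for illustration.

def economically_feasible(fluxes, potentials, reactions):
    """fluxes: {rxn: v}, potentials: {metabolite: w},
    reactions: {rxn: (substrate, product)} for unimolecular steps."""
    for rxn, v in fluxes.items():
        if v == 0:
            continue  # inactive reactions impose no condition
        s, p = reactions[rxn]
        # the flux must lead from lower to higher economic potential
        if v * (potentials[p] - potentials[s]) <= 0:
            return False
    return True

reactions = {"r1": ("A", "B"), "r2": ("B", "C")}
potentials = {"A": 0.0, "B": 1.0, "C": 2.5}
print(economically_feasible({"r1": 1.0, "r2": 1.0}, potentials, reactions))   # True
print(economically_feasible({"r1": -1.0, "r2": 1.0}, potentials, reactions))  # False
```

    A futile cycle would force some reaction in the cycle to violate this sign condition, which is why feasible fluxes are automatically cycle-free.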

    Condition number analysis and preconditioning of the finite cell method

    The (Isogeometric) Finite Cell Method - in which a domain is immersed in a structured background mesh - suffers from conditioning problems when cells with small volume fractions occur. In this contribution, we establish a rigorous scaling relation between the condition number of (I)FCM system matrices and the smallest cell volume fraction. Ill-conditioning stems either from basis functions being small on cells with small volume fractions, or from basis functions being nearly linearly dependent on such cells. Based on these two sources of ill-conditioning, an algebraic preconditioning technique is developed, which is referred to as Symmetric Incomplete Permuted Inverse Cholesky (SIPIC). A detailed numerical investigation of the effectiveness of the SIPIC preconditioner in improving (I)FCM condition numbers and in improving the convergence speed and accuracy of iterative solvers is presented for the Poisson problem and for two- and three-dimensional problems in linear elasticity, in which Nitsche's method is applied in either the normal or tangential direction. The accuracy of the preconditioned iterative solver enables mesh convergence studies of the finite cell method.
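    The first source of ill-conditioning named above (basis functions that are small on trimmed cells) can be illustrated with plain symmetric diagonal scaling; this is a simplified stand-in for the actual SIPIC preconditioner, and the toy 2x2 "stiffness" matrix with volume fraction eta is an assumption, not an example from the paper.

```python
# Illustrative sketch (not the SIPIC algorithm itself): symmetric diagonal
# scaling D^{-1/2} K D^{-1/2}, which removes the ill-conditioning caused by
# a basis function scaled down by a tiny volume fraction.
import numpy as np

def diagonal_precondition(K):
    """Return D^{-1/2} K D^{-1/2} with D = diag(K)."""
    d = 1.0 / np.sqrt(np.diag(K))
    return K * np.outer(d, d)

# A stiffness-like matrix where the second basis function lives on a cell
# with volume fraction eta, mimicking a badly cut cell.
eta = 1e-6
K = np.array([[2.0, -1.0 * eta**0.5],
              [-1.0 * eta**0.5, 2.0 * eta]])
print(np.linalg.cond(K))                         # ~1e6, grows as eta shrinks
print(np.linalg.cond(diagonal_precondition(K)))  # 3.0, independent of eta
```

    Diagonal scaling alone does not cure the second source of ill-conditioning, near linear dependence of basis functions on small cells, which is what motivates the incomplete-Cholesky part of SIPIC.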

    Weighted Mahalanobis Distance for Hyper-Ellipsoidal Clustering

    Cluster analysis is widely used in many applications, ranging from image and speech coding to pattern recognition. A new method that uses the weighted Mahalanobis distance (WMD) via the covariance matrix of the individual clusters as the basis for grouping is presented in this thesis. In this algorithm, the Mahalanobis distance is used as a measure of similarity between the samples in each cluster. This thesis discusses some difficulties associated with using the Mahalanobis distance in clustering, and the proposed method provides solutions to these problems. The new algorithm is an approximation to the well-known expectation maximization (EM) procedure used to find the maximum likelihood estimates in a Gaussian mixture model. Unlike the EM procedure, WMD eliminates the requirement of having initial parameters such as the cluster means and variances, as it starts from the raw data set. Properties of the new clustering method are presented by examining the clustering quality for codebooks designed with the proposed method and competing methods on a variety of data sets. The competing methods are the Linde-Buzo-Gray (LBG) algorithm and the Fuzzy c-means (FCM) algorithm, both of which use the Euclidean distance. The neural network for hyperellipsoidal clustering (HEC) that uses the Mahalanobis distance is also studied and compared to the WMD method and the other techniques. The new method provides better results than the competing methods and thus becomes another useful tool for use in clustering.
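    The cluster-wise Mahalanobis distance at the core of the method can be sketched as follows; the weighting scheme of the actual WMD algorithm is not reproduced here, and the toy elongated cluster is an assumption for illustration only.

```python
# Minimal sketch of the per-cluster Mahalanobis distance used as the
# similarity measure. Unlike the Euclidean distance used by LBG and FCM,
# it accounts for the shape (covariance) of each cluster.
import numpy as np

def mahalanobis(x, mean, cov):
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# An elongated (hyper-ellipsoidal) cluster stretched along the x-axis.
cluster = np.array([[0.0, 0.0], [2.0, 0.1], [4.0, -0.1], [6.0, 0.0]])
mean = cluster.mean(axis=0)
cov = np.cov(cluster.T)

# A point far along the elongated axis is "closer" in Mahalanobis terms
# than its Euclidean distance suggests, so it still joins this cluster.
print(mahalanobis(np.array([8.0, 0.0]), mean, cov))
print(np.linalg.norm(np.array([8.0, 0.0]) - mean))
```

    This shape-awareness is what lets Mahalanobis-based methods recover hyper-ellipsoidal clusters that Euclidean-distance codebook designs split or merge incorrectly.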

    Recent Development in Electricity Price Forecasting Based on Computational Intelligence Techniques in Deregulated Power Market

    The development of artificial intelligence (AI)-based techniques for electricity price forecasting (EPF) provides essential information to electricity market participants and managers because of their greater capability of handling complex input-output relationships. This research therefore investigates and analyzes the performance of different optimization methods in the training phase of an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for enhancing the accuracy of EPF. In this work, a multi-objective optimization-based feature selection technique capable of eliminating non-linear and interacting features is implemented to create an efficient day-ahead price forecast. In the first phase, the multi-objective binary backtracking search algorithm (MOBBSA)-based feature selection technique examines various combinations of input variables to choose suitable feature subsets, simultaneously minimizing both the number of features and the estimation error. In the later phase, the selected features are passed to the machine-learning-based techniques, which map the input variables to the output in order to forecast the electricity price. Furthermore, to increase forecasting accuracy, a backtracking search algorithm (BSA) is applied as an efficient evolutionary search algorithm in the learning procedure of the ANFIS approach. The performance of the forecasting methods is investigated and compared for the Queensland power market in the year 2018, well known as the most competitive market in the world, to show the superiority of the proposed methods over other selected methods.
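    The two-stage pipeline described above can be sketched in miniature: score every feature subset on the pair (subset size, validation error), keep the non-dominated subsets, then fit a model on a chosen subset. The exhaustive search and the least-squares model below are deliberate simplifications standing in for MOBBSA and ANFIS; the synthetic "price" data is an assumption.

```python
# Hedged sketch of multi-objective feature selection: minimise the number
# of features and the out-of-sample error simultaneously, keeping the
# Pareto-optimal subsets. Placeholder data and model, not the paper's.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                                    # candidate inputs
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.1 * rng.normal(size=200)  # synthetic "price"

def holdout_error(cols):
    """Fit on the first half, report mean squared error on the second."""
    w, *_ = np.linalg.lstsq(X[:100][:, cols], y[:100], rcond=None)
    resid = X[100:][:, cols] @ w - y[100:]
    return float(np.mean(resid ** 2))

# Objective vector (size, error) for every non-empty subset.
scores = {cols: (len(cols), holdout_error(list(cols)))
          for r in range(1, 5) for cols in combinations(range(4), r)}

# Keep subsets not dominated in both objectives by any other subset.
pareto = [c for c, (n, e) in scores.items()
          if not any(n2 <= n and e2 <= e and (n2, e2) != (n, e)
                     for n2, e2 in scores.values())]
print(pareto)  # the size-vs-error trade-off front
```

    A decision maker (or a second-stage learner) then picks one subset from the front, trading model simplicity against accuracy.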

    Study of hardware and software optimizations of SPEA2 on hybrid FPGAs

    Traditional radar technology consists of multiple platforms, each designed to process only a single mission objective, such as Ground Moving Target Indication (GMTI), Airborne Moving Target Indication (AMTI), or Synthetic Aperture Radar (SAR). This is no longer considered a cost-effective solution, leading to an increased need for a single radar platform that can perform multiple radar missions. Many algorithms have been developed specifically to address multi-objective design problems. One such approach, the Strength Pareto Evolutionary Algorithm 2 (SPEA2), applies the concept of evolution through a Genetic Algorithm (GA) to the design of simultaneous orthogonal waveforms. The objectives of the various radar missions are often conflicting; the goal of SPEA2 is to find the best waveform suite in the Pareto sense. Preliminary results of this algorithm applied to a scaled-down multi-objective mission scenario have been promising. One setback of this algorithm is its high computational cost: even in a scaled-down simulation, performance does not meet expectations. This thesis investigated hardware and software optimizations of SPEA2 applied to simultaneous multi-mission waveform design, using hybrid FPGAs, which contain a combination of one or more embedded processors and reconfigurable hardware. The algorithm was first implemented in C on a PC, then profiled and analyzed. The C code was translated to run on an embedded PowerPC 405 processing core on a Virtex-4 FX (V4FX). The hardware fabric of the V4FX was utilized to offload the main bottleneck of the algorithm from the PowerPC 405 core to hardware for speedup, while various software optimizations were also implemented in an effort to improve performance. Performance results from the V4FX implementation were not ideal; thus, many suggestions for future work are presented.
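    The Pareto-dominance test at the heart of SPEA2's fitness assignment can be sketched as follows; the objective vectors below are arbitrary placeholders for the conflicting radar-mission objectives, and only the raw "strength" value (not SPEA2's full fitness with density estimation) is shown.

```python
# Illustrative sketch of Pareto dominance and SPEA2's "strength" value
# (objectives assumed to be minimised). Toy objective vectors only.
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Each tuple is one candidate waveform suite scored on two mission objectives.
population = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]

# SPEA2 strength: how many population members each solution dominates.
strength = [sum(dominates(p, q) for q in population) for p in population]
print(strength)  # [0, 1, 0, 0]: only (2.0, 2.0) dominates anyone, namely (3.0, 3.0)
```

    This dominance count is evaluated against every pair in the population each generation, which is one reason the algorithm's computational cost grows quickly and why offloading it to the FPGA fabric is attractive.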