    Analysis of the lactose metabolism in E. coli using sum-of-squares decomposition

    We provide a system-theoretic analysis of a mathematical model of lactose induction in E. coli, which predicts the level of lactose induction in the cell for specified values of external lactose. Depending on the level of external lactose and other parameters, the Lac operon is known to have a low steady state, in which it is said to be turned off, and a high steady state, in which it is said to be turned on. Furthermore, the model has been shown experimentally to exhibit bistable behavior. Using ideas from Lyapunov stability theory and sum-of-squares decomposition, we characterize the reachable state space for different sets of initial conditions, calculating estimates of the regions of attraction of the biologically relevant equilibria of this system. The changes in the basins of attraction as model parameters vary can be used to provide biological insight. Specifically, we explain the crucial role played by a small basal transcription rate in the Lac operon: we show that if the basal rate falls below a threshold, the region of attraction of the low steady state grows significantly, indicating that the system is trapped in the off mode and underscoring the importance of the basal rate of transcription.
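    The bistability and basal-rate threshold described above can be illustrated with a minimal one-dimensional Hill-type induction model (a toy sketch with hypothetical parameters, not the paper's full Lac operon model): below the threshold, two initial conditions settle into distinct "off" and "on" steady states, while a larger basal rate eliminates the low state entirely.

```python
def simulate(basal, x0, gamma=0.5, dt=0.01, steps=20000):
    """Forward-Euler integration of the toy model
    dx/dt = basal + x^2/(1 + x^2) - gamma*x."""
    x = x0
    for _ in range(steps):
        x += dt * (basal + x * x / (1.0 + x * x) - gamma * x)
    return x

# Small basal rate: the system is bistable, so nearby initial
# conditions end up in different basins of attraction.
low  = simulate(basal=0.05, x0=0.3)   # below the separatrix -> "off" state
high = simulate(basal=0.05, x0=0.7)   # above the separatrix -> "on" state

# Basal rate above the threshold: the low steady state disappears,
# and even a small initial condition is driven to the "on" state.
forced_on = simulate(basal=0.20, x0=0.3)

print(round(low, 3), round(high, 3), round(forced_on, 3))
```

    The paper's sum-of-squares machinery goes further than this simulation: it certifies the boundaries of the basins of attraction rather than probing them trajectory by trajectory.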

    A Framework for Worst-Case and Stochastic Safety Verification Using Barrier Certificates

    This paper presents a methodology for safety verification of continuous and hybrid systems in both worst-case and stochastic settings. In the worst-case setting, a function of state, termed a barrier certificate, is used to certify that all trajectories of the system starting from a given initial set do not enter an unsafe region. No explicit computation of reachable sets is required in the construction of barrier certificates, which makes it possible to handle nonlinearity, uncertainty, and constraints directly within this framework. In the stochastic setting, our method computes an upper bound on the probability that a trajectory of the system reaches the unsafe set, a bound whose validity is proven by the existence of a barrier certificate. For polynomial systems, barrier certificates can be constructed using convex optimization, and hence the method is computationally tractable. Examples are provided to illustrate the use of the method.
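    The three worst-case conditions a barrier certificate must satisfy can be spot-checked numerically for a hand-picked candidate on a toy system (a hypothetical example, not one of the paper's benchmarks; the paper constructs certificates via convex sum-of-squares optimization rather than by guessing):

```python
import math
import random

# Toy system dx/dt = f(x) = -x (globally stable), with
#   initial set X0: ||x|| <= 1,   unsafe set Xu: ||x|| >= 2,
# and candidate barrier certificate B(x) = x1^2 + x2^2 - 2.
# Safety is certified if:
#   (1) B(x) <= 0 on X0
#   (2) B(x) >  0 on Xu
#   (3) dB/dt = grad(B) . f(x) <= 0 along trajectories
def B(x1, x2):
    return x1 ** 2 + x2 ** 2 - 2.0

def dB_dt(x1, x2):
    # grad(B) . f(x) = 2*x1*(-x1) + 2*x2*(-x2)
    return -2.0 * (x1 ** 2 + x2 ** 2)

rng = random.Random(0)

def sample_annulus(r_min, r_max):
    """Uniformly sample a radius/angle pair in the given annulus."""
    r = rng.uniform(r_min, r_max)
    t = rng.uniform(0.0, 2.0 * math.pi)
    return r * math.cos(t), r * math.sin(t)

ok_init   = all(B(*sample_annulus(0.0, 1.0)) <= 0 for _ in range(1000))      # (1)
ok_unsafe = all(B(*sample_annulus(2.0, 3.0)) > 0 for _ in range(1000))       # (2)
ok_decay  = all(dB_dt(*sample_annulus(0.0, 3.0)) <= 0 for _ in range(1000))  # (3)
print(ok_init, ok_unsafe, ok_decay)
```

    Because B can never increase along trajectories and separates the initial set (where B ≤ 0) from the unsafe set (where B > 0), no trajectory from the initial set can reach the unsafe set; sampling here merely illustrates the conditions that the convex program enforces exactly.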

    A General Framework for Fair Regression

    Fairness, in its many forms and definitions, has become an important issue facing the machine learning community. In this work, we consider how to incorporate group fairness constraints into kernel regression methods, applicable to Gaussian processes, support vector machines, neural network regression, and decision tree regression. We then focus on examining the effect of incorporating these constraints in decision tree regression, with direct applications to random forests and boosted trees, among other widely used inference techniques. We show that the order of complexity of memory and computation is preserved for such models, and we tightly bound the expected perturbation to the model in terms of the number of leaves of the trees. Importantly, the approach works on trained models, so it can easily be applied to models in current use, and group labels are required only on training data.
    Comment: 8 pages, 4 figures, 2 pages of references
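    As a flavor of what a group fairness constraint applied to an already-trained regressor can look like, here is a deliberately simple post-processing sketch: shift each group's predictions so that per-group mean predictions agree (a statistical-parity-style constraint). This is a hypothetical illustration only, not the paper's kernel-based formulation, and `equalize_group_means` is an invented helper name.

```python
# Hypothetical post-processing sketch: adjust the outputs of a trained
# model so that the mean prediction is equal across groups, without
# retraining the model itself.
def equalize_group_means(preds, groups):
    """Return predictions shifted per group so all group means coincide."""
    overall = sum(preds) / len(preds)
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    shift = {g: overall - sum(v) / len(v) for g, v in by_group.items()}
    return [p + shift[g] for p, g in zip(preds, groups)]

preds  = [3.0, 2.0, 5.0, 1.0, 4.0, 6.0]   # outputs of some trained regressor
groups = ["a", "a", "a", "b", "b", "b"]   # group labels (needed on training data only)
adjusted = equalize_group_means(preds, groups)
```

    Note how this mirrors two properties claimed in the abstract: it operates on a trained model's outputs, and the group labels are consumed only where the adjustment is fitted.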

    Coordinating Complementary Waveforms for Sidelobe Suppression

    We present a general method for constructing radar transmit pulse trains and receive filters for which the radar point-spread function in delay and Doppler, given by the cross-ambiguity function of the transmit pulse train and the pulse train used in the receive filter, is essentially free of range sidelobes inside a Doppler interval around the zero-Doppler axis. The transmit pulse train is constructed by coordinating the transmission of a pair of Golay complementary waveforms across time according to the zeros and ones of a binary sequence P. The pulse train used to filter the received signal is constructed in a similar way, in terms of sequencing the Golay waveforms, but each waveform in the pulse train is weighted by an element of another sequence Q. We show that a spectrum jointly determined by the P and Q sequences controls the size of the range sidelobes of the cross-ambiguity function, and that by properly choosing P and Q we can clear out the range sidelobes inside a Doppler interval around the zero-Doppler axis. The joint design of P and Q enables a trade-off between the order of the spectral null for range sidelobe suppression and the signal-to-noise ratio at the receiver output. We establish this trade-off and derive a necessary and sufficient condition for the construction of P and Q sequences that produce a null of a desired order.
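    The property underlying this construction is the defining feature of a Golay complementary pair: the aperiodic autocorrelations of the two waveforms sum to an impulse, so their range sidelobes cancel exactly. A quick sketch (using the standard length-doubling construction, not the paper's P/Q sequencing):

```python
def autocorr(seq):
    """Aperiodic autocorrelation; entry k is the correlation at lag k >= 0."""
    n = len(seq)
    return [sum(seq[i] * seq[i + k] for i in range(n - k)) for k in range(n)]

def extend_golay(a, b):
    """Length-doubling construction: if (a, b) is a Golay pair,
    so is (a|b, a|-b), where | denotes concatenation."""
    return a + b, a + [-x for x in b]

# Start from the length-2 Golay pair and double twice to length 8.
a, b = [1, 1], [1, -1]
for _ in range(2):
    a, b = extend_golay(a, b)

summed = [ra + rb for ra, rb in zip(autocorr(a), autocorr(b))]
# Complementary property: all sidelobes cancel, leaving a single
# impulse of height 2N at lag zero.
print(summed)
```

    For a single pair this cancellation holds only at zero Doppler; the paper's contribution is choosing the transmit sequence P and receive weights Q so that the cancellation is pushed to higher order across a whole Doppler interval.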

    Spectrum sensing by cognitive radios at very low SNR

    Spectrum sensing is one of the enabling functionalities for cognitive radio (CR) systems operating in spectrum white space. To protect primary incumbent users from interference, the CR is required to detect incumbent signals at very low signal-to-noise ratio (SNR). In this paper, we present a spectrum sensing technique based on correlating spectra for the detection of television (TV) broadcasting signals. The basic strategy is to correlate the periodogram of the received signal with the a priori known spectral features of the primary signal. We show that, under the Neyman-Pearson criterion, this spectral correlation-based sensing technique is asymptotically optimal at very low SNR with a large sensing time. From the system design perspective, we analyze the effect of the spectral features on spectrum sensing performance. Through this optimization analysis, we obtain useful insights on how to choose effective spectral features to achieve reliable sensing. Simulation results show that the proposed sensing technique can reliably detect analog and digital TV signals at SNRs as low as -20 dB.
    Comment: IEEE Global Communications Conference 200
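    The basic detector is easy to sketch: compute the periodogram of the received samples and correlate it against the known spectral template of the primary signal. The example below is a toy version at a moderate SNR for a quick demonstration (the band placement and amplitudes are hypothetical stand-ins for TV spectral features, not the paper's simulation setup):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024

# A priori spectral template: the primary signal concentrates its power
# in a known band of frequency bins.
band = np.arange(100, 110)
template = np.zeros(N)
template[band] = 1.0
template /= np.linalg.norm(template)

def detect_statistic(x):
    """Correlate the periodogram of x with the known spectral template."""
    periodogram = np.abs(np.fft.fft(x)) ** 2 / N
    return float(periodogram @ template)

# Primary signal: tones inside the known band, buried in white noise.
t = np.arange(N)
signal = sum(np.cos(2 * np.pi * k / N * t) for k in band) * 0.2
noise0 = rng.standard_normal(N)
noise1 = rng.standard_normal(N)

stat_h0 = detect_statistic(noise0)            # H0: noise only
stat_h1 = detect_statistic(signal + noise1)   # H1: primary present
print(stat_h1 > stat_h0)
```

    A Neyman-Pearson test then thresholds the statistic at a value chosen to meet a target false-alarm probability; the abstract's claim is that at very low SNR and long sensing times this spectral correlation is asymptotically the optimal statistic.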

    Analysis of Fisher Information and the Cramér-Rao Bound for Nonlinear Parameter Estimation after Compressed Sensing

    In this paper, we analyze the impact of compressed sensing with complex random matrices on the Fisher information and the Cramér-Rao bound (CRB) for estimating unknown parameters in the mean-value function of a complex multivariate normal distribution. We consider the class of random compression matrices whose distribution is right-orthogonally invariant; the compression matrix whose elements are i.i.d. standard normal random variables is one such matrix. We show that for all such compression matrices, the Fisher information matrix has a complex matrix beta distribution. We also derive the distribution of the CRB. These distributions can be used to quantify the loss in CRB as a function of the Fisher information of the non-compressed data. In our numerical examples, we consider a direction-of-arrival estimation problem and discuss the use of these distributions as guidelines for choosing compression ratios based on the resulting loss in CRB.
    Comment: 12 pages, 3 figures
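    The loss the distributions quantify can be seen in a one-parameter sketch (a hypothetical scalar model chosen for simplicity, not the paper's direction-of-arrival example): for mean-value model y = θs + w with white complex Gaussian noise, the Fisher information is proportional to ||s||², and compressing with a row-orthonormal random matrix A replaces ||s||² by ||As||² ≤ ||s||², so the CRB can only grow.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 16          # ambient dimension, compressed dimension
sigma2 = 0.5           # noise variance

# Mean-value model y = theta * s + w, w ~ CN(0, sigma2 * I): for a real
# scalar theta the Fisher information is (2/sigma2) * ||s||^2, and the
# CRB is its inverse.
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)
fim_full = 2.0 / sigma2 * np.linalg.norm(s) ** 2

# Compress with a random Gaussian matrix whose rows are orthonormalized,
# so the compressed noise stays white and the same formula applies to A s.
G = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Q, _ = np.linalg.qr(G.conj().T)   # Q: n x m with orthonormal columns
A = Q.conj().T                    # m x n with A A^H = I
fim_comp = 2.0 / sigma2 * np.linalg.norm(A @ s) ** 2

ratio = fim_comp / fim_full       # fraction of Fisher information retained
print(round(ratio, 3))
```

    Averaged over the random matrix, the retained fraction concentrates near m/n; the paper characterizes its full (matrix beta) distribution, which is what lets one pick a compression ratio for a target CRB loss.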