
    Learning-aided Stochastic Network Optimization with Imperfect State Prediction

    We investigate the problem of stochastic network optimization in the presence of imperfect state prediction and non-stationarity. Based on a novel distribution-accuracy curve prediction model, we develop the predictive learning-aided control (PLC) algorithm, which jointly utilizes historic and predicted network state information for decision making. PLC is an online algorithm that requires zero a priori system statistical information, and consists of three key components, namely sequential distribution estimation and change detection, dual learning, and online queue-based control. Specifically, we show that PLC simultaneously achieves good long-term performance, short-term queue size reduction, accurate change detection, and fast algorithm convergence. In particular, for stationary networks, PLC achieves a near-optimal $[O(\epsilon), O(\log^2(1/\epsilon))]$ utility-delay tradeoff. For non-stationary networks, PLC obtains an $[O(\epsilon), O(\log^2(1/\epsilon) + \min(\epsilon^{c/2-1}, e_w/\epsilon))]$ utility-backlog tradeoff for distributions that last $\Theta(\frac{\max(\epsilon^{-c}, e_w^{-2})}{\epsilon^{1+a}})$ time, where $e_w$ is the prediction accuracy and $a = \Theta(1) > 0$ is a constant (the Backpressure algorithm \cite{neelynowbook} requires an $O(\epsilon^{-2})$ length for the same utility performance with a larger backlog). Moreover, PLC detects distribution changes $O(w)$ slots faster with high probability ($w$ is the prediction size) and achieves an $O(\min(\epsilon^{-1+c/2}, e_w/\epsilon) + \log^2(1/\epsilon))$ convergence time. Our results demonstrate that state prediction (even imperfect) can help (i) achieve faster detection and convergence, and (ii) obtain better utility-delay tradeoffs.
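    A minimal sketch of the flavor of the third component, online queue-based control, is the drift-plus-penalty decision used by Backpressure-type algorithms. The single queue, the candidate rate set, and the quadratic cost below are illustrative assumptions, not the paper's network model; PLC additionally warm-starts such a controller with a Lagrange multiplier learned from the estimated distribution (the dual-learning step), which is omitted here.

```python
import random

# Toy drift-plus-penalty step in the spirit of Backpressure-style
# queue-based control (hypothetical single queue, candidate service
# rates, and quadratic power cost; not the paper's network model).

V = 100.0                  # utility-delay tradeoff knob; roughly V ~ 1/epsilon
Q = 0.0                    # queue backlog
RATES = [0.0, 0.5, 1.0]    # candidate service rates, with cost(a) = a**2

for t in range(10_000):
    # Pick the rate minimizing V*cost(a) - Q*a: serve aggressively
    # only once the backlog Q grows large enough to justify the cost.
    a = min(RATES, key=lambda x: V * x * x - Q * x)
    Q = max(Q - a, 0.0) + (1.0 if random.random() < 0.4 else 0.0)

print(f"backlog after 10k slots: {Q:.1f}")   # scales roughly with V
```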

    SIFTER search: a web server for accurate phylogeny-based protein function prediction.

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.
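    As a cartoon of the underlying idea (propagating experimental annotations through a phylogenetic tree while allowing function to change along branches), the sketch below computes a toy leaf-to-root belief for a binary function. The tree, the symmetric flip probability, and the child-averaging rule are all simplifying assumptions; SIFTER's actual statistical model of function evolution is considerably richer.

```python
# Toy flavor of phylogeny-based annotation propagation: a node's
# probability of carrying a function blends its children's beliefs
# with a small chance of function change along each branch. This is
# only a cartoon, not SIFTER's model.

MUTATE = 0.05  # assumed probability the function flips along a branch

def belief(node, children, leaf_prob):
    """P(node has the function), from experimentally annotated leaves."""
    if not children.get(node):                # leaf: annotated or unknown (0.5)
        return leaf_prob.get(node, 0.5)
    msgs = []
    for child in children[node]:
        p = belief(child, children, leaf_prob)
        msgs.append(p * (1 - MUTATE) + (1 - p) * MUTATE)
    return sum(msgs) / len(msgs)

children = {"root": ["A", "B"], "A": [], "B": []}
print(belief("root", children, {"A": 1.0, "B": 0.9}))
```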

    Development of a Non-Iterative Balance Load Prediction Algorithm for the NASA Ames Unitary Plan Wind Tunnel

    A non-iterative load prediction algorithm for strain-gage balances was developed for the NASA Ames Unitary Plan Wind Tunnels that computes balance loads from the electrical outputs of the balance bridges and a set of state variables. A state variable could be, for example, a balance temperature difference or the bellows pressure of a flow-through balance. The algorithm directly uses regression models of the balance loads for the load prediction that were obtained by applying global regression analysis to balance calibration data. This choice greatly simplifies both implementation and use of the load prediction process for complex balance configurations as no load iteration needs to be performed. The regression model of a balance load is constructed by using terms from a total of nine term groups. Four term groups are derived from a Taylor series expansion of the relationship between the load, gage outputs, and state variables. The remaining five term groups are defined by using absolute values of the gage outputs and state variables. Terms from these groups should only be included in the regression model if calibration data from a balance with known bi-directional outputs is analyzed. It is illustrated in detail how global regression analysis may be applied to obtain the coefficients of the chosen regression model of a load component assuming that no linear or massive near-linear dependencies between the regression model terms exist. Data from the machine calibration of a six-component force balance is used to illustrate both application and accuracy of the non-iterative load prediction process.
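    A minimal sketch of the global-regression step follows: build a design matrix from gage outputs and a state variable using a few Taylor-series terms plus one absolute-value term, then solve by ordinary least squares. The synthetic data and the particular column choices are illustrative assumptions; the paper's models draw terms from nine groups and assume no massive near-linear dependencies among them.

```python
import numpy as np

# Sketch of a global-regression fit for one load component; columns
# and data are illustrative, not the paper's nine full term groups.

rng = np.random.default_rng(0)
n = 200
g1, g2 = rng.normal(size=(2, n))   # two gage outputs (toy data)
s = rng.normal(size=n)             # state variable, e.g. temperature difference

X = np.column_stack([
    np.ones(n),        # intercept
    g1, g2, s,         # linear Taylor-series terms
    g1 * g2, g1 * s,   # cross terms
    g1**2, g2**2,      # quadratic terms
    np.abs(g1),        # absolute-value term (bi-directional balances only)
])
load = 3.0 * g1 - 1.5 * g2 + 0.4 * np.abs(g1) + rng.normal(0, 0.01, n)

coef, *_ = np.linalg.lstsq(X, load, rcond=None)
predicted = X @ coef   # non-iterative: loads follow directly from the outputs
```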

    When Backpressure Meets Predictive Scheduling

    Motivated by the increasing popularity of learning and predicting human user behavior in communication and computing systems, in this paper, we investigate the fundamental benefit of predictive scheduling, i.e., predicting and pre-serving arrivals, in controlled queueing systems. Based on a lookahead window prediction model, we first establish a novel equivalence between the predictive queueing system with a fully-efficient scheduling scheme and an equivalent queueing system without prediction. This connection allows us to analytically demonstrate that predictive scheduling necessarily improves system delay performance and can drive it to zero with increasing prediction power. We then propose the Predictive Backpressure (PBP) algorithm for achieving optimal utility performance in such predictive systems. PBP efficiently incorporates prediction into stochastic system control and avoids the great complication due to the exponential state space growth in the prediction window size. We show that PBP can achieve a utility performance that is within $O(\epsilon)$ of the optimal, for any $\epsilon > 0$, while guaranteeing that the system delay distribution is a shifted-to-the-left version of that under the original Backpressure algorithm. Hence, the average packet delay under PBP is strictly better than that under Backpressure, and vanishes with increasing prediction window size. This implies that the resulting utility-delay tradeoff with predictive scheduling beats the known optimal $[O(\epsilon), O(\log(1/\epsilon))]$ tradeoff for systems without prediction.
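    The sketch below illustrates the pre-serving idea in isolation: with a perfect lookahead window of $w$ slots, a single server may begin working on a packet up to $w$ slots before it arrives, shifting each packet's delay to the left. The Bernoulli arrivals, unit service rate, and perfect prediction are assumptions of this sketch, not part of the PBP algorithm itself.

```python
import random
from collections import deque

# Toy "pre-serving" illustration: the server sees (and may start
# serving) each packet w slots before its true arrival.

def sim(w, T=100_000, lam=0.9):
    random.seed(1)
    arrivals = [random.random() < lam for _ in range(T + w)]
    queue = deque()                  # packets keyed by their true arrival slot
    total_delay = served = 0
    for t in range(T):
        if arrivals[t + w]:          # packet becomes visible w slots early
            queue.append(t + w)
        if queue:                    # unit-rate server: one packet per slot
            arr = queue.popleft()
            total_delay += max(t - arr, 0)   # served early => zero delay
            served += 1
    return total_delay / max(served, 1)

print(sim(0), sim(10))   # average delay shrinks as the window w grows
```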

    Using accelerometer, high sample rate GPS and magnetometer data to develop a cattle movement and behaviour model

    The study described in this paper developed a model of animal movement, which explicitly recognised each individual as the central unit of measure. The model was developed by learning from a real dataset that measured and calculated, for individual cows in a herd, their linear and angular positions and directional and angular speeds. Two learning algorithms were implemented: a hidden Markov model (HMM) and a long-term prediction algorithm. It is shown that a HMM can be used to describe the animal's movement and state transition behaviour within several “stay” areas where cows remained for long periods. Model parameters were estimated for hidden behaviour states such as relocating, foraging and bedding. For cows’ movement between the “stay” areas a long-term prediction algorithm was implemented. By combining these two algorithms it was possible to develop a successful model, which achieved results similar to the animal behaviour data collected. This modelling methodology could easily be applied to interactions of other animal species.
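    A minimal Viterbi decoder for a three-state behaviour HMM of this kind might look as follows. The transition and emission probabilities below are made-up stand-ins, and the observations are discretised speed bins; the study estimated its parameters from the collected accelerometer, GPS and magnetometer data.

```python
import numpy as np

# Viterbi decoding for a toy three-state behaviour HMM; all
# probabilities here are illustrative placeholders.

states = ["relocating", "foraging", "bedding"]
A = np.array([[0.80, 0.15, 0.05],    # transition probabilities
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
B = np.array([[0.10, 0.30, 0.60],    # emission: P(speed bin | state),
              [0.30, 0.50, 0.20],    # bins: low, medium, high
              [0.70, 0.25, 0.05]])
pi = np.array([1 / 3] * 3)

obs = [2, 2, 1, 0, 0, 0, 1, 2]       # discretised speed observations

logv = np.log(pi) + np.log(B[:, obs[0]])
back = []
for o in obs[1:]:
    scores = logv[:, None] + np.log(A)    # best predecessor per state
    back.append(scores.argmax(axis=0))
    logv = scores.max(axis=0) + np.log(B[:, o])

path = [int(logv.argmax())]               # backtrack the most likely path
for bp in reversed(back):
    path.append(int(bp[path[-1]]))
print([states[i] for i in reversed(path)])
```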