
    Knowledge Representation Concepts for Automated SLA Management

    Outsourcing of complex IT infrastructure to IT service providers has increased substantially in recent years. IT service providers must be able to fulfil their service-quality commitments based upon predefined Service Level Agreements (SLAs) with the service customer. They need to manage, execute and maintain thousands of SLAs for different customers and different types of services, which requires levels of flexibility and automation not available with current technology. The complexity of contractual logic in SLAs calls for new forms of knowledge representation to automatically draw inferences and execute contractual agreements. A logic-based approach provides several advantages, including automated rule chaining allowing for compact knowledge representation as well as the flexibility to adapt to rapidly changing business requirements. We suggest adequate logical formalisms for the representation and enforcement of SLA rules and describe a proof-of-concept implementation. The article describes selected formalisms of the ContractLog KR and their adequacy for automated SLA management, and presents results of experiments that demonstrate the flexibility and scalability of the approach.
    Comment: Paschke, A. and Bichler, M.: Knowledge Representation Concepts for Automated SLA Management, Int. Journal of Decision Support Systems (DSS), submitted 19th March 200
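
    As a rough illustration of the rule chaining such a logic-based SLA representation enables (the rule names, thresholds, and the miniature forward-chaining engine below are hypothetical and not taken from ContractLog), a penalty can be derived from monitored facts like so:

```python
# Hypothetical illustration of logic-based SLA rule chaining (not ContractLog itself).
# Rules are Horn-style: if all premises hold, the conclusion is added to the fact base.

RULES = [
    ({"availability_below_99.5"}, "sla_violation"),              # threshold chosen for illustration
    ({"sla_violation", "gold_customer"}, "penalty_10_percent"),
    ({"sla_violation", "standard_customer"}, "penalty_5_percent"),
]

def forward_chain(facts, rules):
    """Derive all conclusions reachable from the initial facts (fixpoint iteration)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

measured = {"availability_below_99.5", "gold_customer"}
print(forward_chain(measured, RULES))   # includes 'sla_violation' and 'penalty_10_percent'
```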

    Profitable Scheduling on Multiple Speed-Scalable Processors

    We present a new online algorithm for profit-oriented scheduling on multiple speed-scalable processors and provide a tight analysis of its competitiveness. Our results generalize and improve upon work by \textcite{Chan:2010}, which considers a single speed-scalable processor. Using significantly different techniques, we not only extend their model to multiprocessors but also prove an enhanced and tight competitive ratio for our algorithm. In our scheduling problem, jobs arrive over time and are preemptable. They have different workloads, values, and deadlines. The scheduler may decide not to finish a job and instead suffer a loss equal to the job's value. However, to process a job's workload by its deadline the scheduler must invest a certain amount of energy. The cost of a schedule is the sum of lost values and invested energy. In order to finish a job the scheduler has to determine which processors to use and set their speeds accordingly. A processor's energy consumption is its power $P(s)$ integrated over time, where $P(s)=s^{\alpha}$ is the power consumption when running at speed $s$. Since we consider the online variant of the problem, the scheduler has no knowledge about future jobs. This problem was introduced by \textcite{Chan:2010} for the case of a single processor. They presented an online algorithm which is $\alpha^{\alpha}+2e\alpha$-competitive. We provide an online algorithm for the case of multiple processors with an improved competitive ratio of $\alpha^{\alpha}$.
    Comment: Extended abstract submitted to STACS 201
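
    To make the cost model concrete: since the power function $s^{\alpha}$ is convex, a job of workload $w$ that runs alone on one processor and must finish by deadline $d$ is cheapest to run at the constant speed $w/d$, costing energy $d\,(w/d)^{\alpha}$; the scheduler weighs this energy against the job's value. The sketch below (Python; the single-job simplification and all names are illustrative, not the paper's online algorithm) shows that comparison.

```python
# Single-job illustration of the cost model (not the paper's online algorithm).
# Power at speed s is s**alpha; by convexity, a lone job of workload w with
# deadline d is cheapest to run at the constant speed w/d.

def min_energy(workload: float, deadline: float, alpha: float) -> float:
    """Minimum energy to finish `workload` within `deadline` on one processor."""
    speed = workload / deadline
    return deadline * speed ** alpha

def offline_decision(workload, deadline, value, alpha=3.0):
    """Finish the job only if the required energy does not exceed its value."""
    energy = min_energy(workload, deadline, alpha)
    return ("finish", energy) if energy <= value else ("drop", value)

print(offline_decision(workload=4.0, deadline=2.0, value=20.0))  # ('finish', 16.0)
print(offline_decision(workload=4.0, deadline=1.0, value=20.0))  # ('drop', 20.0)
```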

    Exact algorithms for $L^1$-TV regularization of real-valued or circle-valued signals

    We consider $L^1$-TV regularization of univariate signals with values on the real line or on the unit circle. While the real data space leads to a convex optimization problem, the problem is non-convex for circle-valued data. In this paper, we derive exact algorithms for both data spaces. A key ingredient is the reduction of the infinite search spaces to a finite set of configurations, which can be scanned by the Viterbi algorithm. To reduce the computational complexity of the involved tabulations, we extend the technique of distance transforms to non-uniform grids and to the circular data space. In total, the proposed algorithms have complexity $\mathscr{O}(KN)$, where $N$ is the length of the signal and $K$ is the number of different values in the data set. In particular, the complexity is $\mathscr{O}(N)$ for quantized data. It is the first exact algorithm for TV regularization with circle-valued data, and it is competitive with the state-of-the-art methods for scalar data, assuming that the latter are quantized.
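
    The reduction to a finite set of configurations can be illustrated with a plain dynamic program: restrict the solution values to the $K$ distinct data values and sweep once over the signal. The sketch below is a naive $\mathscr{O}(NK^2)$ Python version for real-valued data; the paper's distance-transform speed-up to $\mathscr{O}(KN)$ and the circle-valued case are not reproduced here.

```python
# Naive O(N*K^2) dynamic program for 1-D L1-TV:
#   minimize  sum_i |x_i - y_i| + lam * sum_i |x_{i+1} - x_i|
# with x_i restricted to the distinct data values (a finite label set).

def l1_tv_denoise(y, lam):
    levels = sorted(set(y))                                # candidate values (K labels)
    K, N = len(levels), len(y)
    INF = float("inf")

    cost = [abs(levels[k] - y[0]) for k in range(K)]       # best cost ending in level k
    back = []                                              # backpointers for recovery
    for i in range(1, N):
        new_cost, arg = [0.0] * K, [0] * K
        for k in range(K):
            best, best_j = INF, 0
            for j in range(K):
                c = cost[j] + lam * abs(levels[k] - levels[j])
                if c < best:
                    best, best_j = c, j
            new_cost[k] = best + abs(levels[k] - y[i])
            arg[k] = best_j
        cost, back = new_cost, back + [arg]

    # Trace back the optimal label sequence.
    k = min(range(K), key=lambda k: cost[k])
    x = [levels[k]]
    for arg in reversed(back):
        k = arg[k]
        x.append(levels[k])
    return x[::-1]

print(l1_tv_denoise([0.0, 0.1, 5.0, 5.2, 0.0], lam=0.5))
```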

    A statistical approach for array CGH data analysis

    BACKGROUND: Microarray-CGH experiments are used to detect and map chromosomal imbalances by hybridizing targets of genomic DNA from a test and a reference sample to sequences immobilized on a slide. These probes are genomic DNA sequences (BACs) that are mapped on the genome. The signal has a spatial coherence that can be handled by specific statistical tools, and segmentation methods seem to be a natural framework for this purpose. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose BACs share the same relative copy number on average. We model a CGH profile by a random Gaussian process whose distribution parameters are affected by abrupt changes at unknown coordinates. Two major problems arise: determining which parameters are affected by the abrupt changes (the mean and the variance, or the mean only), and selecting the number of segments in the profile. RESULTS: We demonstrate that existing methods for estimating the number of segments are not well adapted to array CGH data, and we propose an adaptive criterion that detects previously mapped chromosomal aberrations. The performance of this method is discussed based on simulations and publicly available data sets. We then discuss the choice of model for array CGH data and show that the model with a homogeneous variance is adapted to this context. CONCLUSIONS: Array CGH data analysis is an emerging field that needs appropriate statistical tools. Process segmentation and model selection provide a theoretical framework that allows precise biological interpretations. Adaptive methods for model selection give promising results concerning the estimation of the number of altered regions on the genome.
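
    For a fixed number of segments, the best segmentation under the homogeneous-variance Gaussian model can be found by dynamic programming on the within-segment sum of squares; choosing the number of segments is then the model-selection step discussed above. The sketch below is a generic least-squares change-point dynamic program in Python, not the authors' adaptive criterion.

```python
# Generic dynamic program for segmenting a profile into S segments by
# minimizing the within-segment sum of squared deviations (the Gaussian
# homogeneous-variance cost). Choosing S itself is the model-selection
# problem addressed by the paper's adaptive criterion (not reproduced here).

def segment(y, n_segments):
    N = len(y)
    prefix, prefix2 = [0.0], [0.0]                # running sums of y and y^2
    for v in y:
        prefix.append(prefix[-1] + v)
        prefix2.append(prefix2[-1] + v * v)

    def seg_cost(a, b):                           # RSS of y[a:b] around its mean, b > a
        s, s2, n = prefix[b] - prefix[a], prefix2[b] - prefix2[a], b - a
        return s2 - s * s / n

    INF = float("inf")
    # dp[s][b] = best cost of covering y[0:b] with s segments
    dp = [[INF] * (N + 1) for _ in range(n_segments + 1)]
    cut = [[0] * (N + 1) for _ in range(n_segments + 1)]
    dp[0][0] = 0.0
    for s in range(1, n_segments + 1):
        for b in range(s, N + 1):
            for a in range(s - 1, b):
                c = dp[s - 1][a] + seg_cost(a, b)
                if c < dp[s][b]:
                    dp[s][b], cut[s][b] = c, a

    # Recover the segment boundaries.
    bounds, b = [], N
    for s in range(n_segments, 0, -1):
        a = cut[s][b]
        bounds.append((a, b))
        b = a
    return dp[n_segments][N], bounds[::-1]

print(segment([0.1, -0.2, 0.0, 1.9, 2.1, 2.0, -0.1, 0.2], n_segments=3))
```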

    Computationally Efficient Trajectory Optimization for Linear Control Systems with Input and State Constraints

    This paper presents a trajectory generation method that optimizes a quadratic cost functional with respect to linear system dynamics and linear input and state constraints. The method is based on continuous-time flatness-based trajectory generation, and the outputs are parameterized using a polynomial basis. A method to parameterize the constraints is introduced using a result on polynomial nonpositivity. The resulting parameterized problem remains linear-quadratic and can be solved using quadratic programming. The problem can be further simplified to a linear programming problem by linearization around the unconstrained optimum. The method promises to be computationally efficient for constrained systems with a long optimization horizon. As an application, a predictive torque controller for a permanent magnet synchronous motor based on real-time optimization is presented.
    Comment: Proceedings of the American Control Conference (ACC), pp. 1904-1909, San Francisco, USA, June 29 - July 1, 201
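
    The computational core, before any inequality constraints are added, is an equality-constrained quadratic program in the polynomial coefficients of the flat output. The sketch below (Python/NumPy; the basis size, acceleration-energy cost, and boundary conditions are illustrative choices, not the paper's formulation) solves that core step via its KKT system; the parameterized input and state constraints would be layered on top and passed to a QP solver.

```python
# Minimal sketch of the unconstrained core of flatness-based trajectory
# generation: the flat output y(t) is a degree-5 polynomial, the cost is the
# integral of squared acceleration over [0, T], and boundary conditions on
# position and velocity enter as linear equality constraints. The KKT system
# of this equality-constrained QP is solved directly with NumPy.
import numpy as np

def plan(y0, yT, T, degree=5):
    n = degree + 1
    # Cost Hessian: H[j,k] = int_0^T (t^j)'' (t^k)'' dt for monomial basis t^j, t^k
    H = np.zeros((n, n))
    for j in range(2, n):
        for k in range(2, n):
            H[j, k] = j*(j-1)*k*(k-1) * T**(j+k-3) / (j+k-3)
    # Equality constraints: y(0)=y0, y'(0)=0, y(T)=yT, y'(T)=0
    A = np.zeros((4, n))
    A[0, 0] = 1.0
    A[1, 1] = 1.0
    A[2, :] = [T**k for k in range(n)]
    A[3, :] = [k * T**(k-1) for k in range(n)]
    b = np.array([y0, 0.0, yT, 0.0])
    # KKT system: [[H, A^T], [A, 0]] [c; lambda] = [0; b]
    kkt = np.block([[H, A.T], [A, np.zeros((4, 4))]])
    rhs = np.concatenate([np.zeros(n), b])
    coeffs = np.linalg.solve(kkt, rhs)[:n]
    return coeffs                                 # polynomial coefficients of the flat output

c = plan(y0=0.0, yT=1.0, T=2.0)
print(np.polyval(c[::-1], [0.0, 1.0, 2.0]))       # trajectory samples at t = 0, 1, 2
```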

    Optimal Navigation Functions for Nonlinear Stochastic Systems

    This paper presents a new methodology for crafting navigation functions for nonlinear systems with stochastic uncertainty. The method relies on the transformation of the Hamilton-Jacobi-Bellman (HJB) equation into a linear partial differential equation. This approach allows optimality criteria to be incorporated into the navigation function, and it generalizes several existing results on navigation functions. It is shown that the HJB equation and existing navigation functions in the literature sit at opposite ends of a spectrum of optimization problems, along which tradeoffs in problem complexity can be made. In particular, it is shown that under certain criteria the optimal navigation function is related to Laplace's equation, previously used in the literature, through an exponential transform. Further, analytical solutions to the HJB equation are available in simplified domains, yielding guidance towards optimality for approximation schemes. Examples illustrate the roles that noise and optimality can play in navigation system design.
    Comment: Accepted to IROS 2014. 8 Page
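
    In the linearly-solvable control setting, the exponential transform mentioned above is the standard desirability substitution. As a sketch under notational assumptions not taken from the paper (dynamics $dx = (f + Gu)\,dt + B\,d\omega$, cost rate $q(x) + \tfrac{1}{2}u^{\top}Ru$, and the matching condition $\lambda\,G R^{-1} G^{\top} = B B^{\top}$), it reads:

```latex
% Desirability transform linearizing the stationary HJB (notation assumed, not the paper's).
% Value function V, desirability \Psi = \exp(-V/\lambda).
\begin{align*}
  0 &= q + (\nabla V)^\top f
       - \tfrac{1}{2}\,(\nabla V)^\top G R^{-1} G^\top \nabla V
       + \tfrac{1}{2}\operatorname{tr}\!\big(B B^\top \nabla^2 V\big), \\
  \Psi &= e^{-V/\lambda}, \qquad \lambda\, G R^{-1} G^\top = B B^\top
  \;\Longrightarrow\;
  \tfrac{q}{\lambda}\,\Psi = f^\top \nabla \Psi
       + \tfrac{1}{2}\operatorname{tr}\!\big(B B^\top \nabla^2 \Psi\big).
\end{align*}
% With f = 0, q = 0 and isotropic noise this reduces to Laplace's equation
% \nabla^2 \Psi = 0, recovering harmonic-function navigation as a special case.
```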
