
    A regret model applied to the facility location problem with limited capacity facilities

    This article addresses issues related to location and allocation problems. Herein, we intend to demonstrate the influence of congestion in such systems, modelled through random number generation, on the final solutions. An algorithm is presented which, in addition to GRASP, incorporates Regret with the pminmax method to evaluate the heuristic solution obtained with regard to its robustness across different scenarios. Taking as our point of departure the Facility Location Problem proposed by Balinski [27], an alternative perspective is added that associates regret values with particular solutions.
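
    The abstract does not spell out the regret computation, but the standard minmax-regret criterion it alludes to can be sketched as follows: for each candidate solution and each scenario, regret is the gap between that solution's cost and the best cost achievable in that scenario, and the preferred solution minimises the worst-case regret. A minimal Python sketch, with purely hypothetical costs and solution names standing in for GRASP-generated solutions:

```python
# Minimal sketch of minmax-regret evaluation over demand scenarios.
# Costs are illustrative; in the paper they would come from GRASP
# solutions to capacitated facility-location instances.

def minmax_regret(scenario_costs):
    """scenario_costs[s][x] = cost of candidate x under scenario s."""
    candidates = scenario_costs[0].keys()
    # Best achievable cost in each scenario, over all candidates.
    best = [min(row.values()) for row in scenario_costs]
    # Worst-case regret of each candidate across all scenarios.
    worst_regret = {
        x: max(row[x] - b for row, b in zip(scenario_costs, best))
        for x in candidates
    }
    # Keep the candidate whose worst-case regret is smallest.
    return min(worst_regret, key=worst_regret.get), worst_regret

scenario_costs = [                 # three hypothetical demand scenarios
    {"A": 120, "B": 130, "C": 150},
    {"A": 160, "B": 140, "C": 145},
    {"A": 125, "B": 155, "C": 135},
]
chosen, regrets = minmax_regret(scenario_costs)
print(chosen, regrets)             # -> A {'A': 20, 'B': 30, 'C': 30}
```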

    A regret model applied to the maximum capture location problem

    This article addresses issues related to location and allocation problems. Herein, we intend to demonstrate the influence of congestion in such systems, modelled through random number generation, on the final solutions. An algorithm is presented which, in addition to GRASP, incorporates Regret with the pminmax method to evaluate the heuristic solution obtained with regard to its robustness across different scenarios. Taking as our point of departure the Maximum Capture Location Problem proposed by Church and ReVelle [1, 26], an alternative perspective is added in which the choice behaviour of the server depends not only on the elapsed time from the demand point to the centre, but also on the service waiting time.
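
    As an illustration of this extended choice rule (not the authors' exact model), the sketch below scores each centre by travel time plus an M/M/1 waiting-time estimate, so a congested nearby centre can lose to an idle distant one. All rates and times are assumed for the example:

```python
# Illustrative facility choice under congestion, assuming an M/M/1
# waiting time W = 1 / (mu - lambda); the paper's congestion model,
# driven by random number generation, may differ.

def perceived_time(travel_time, arrival_rate, service_rate):
    if arrival_rate >= service_rate:
        return float("inf")              # unstable queue: never chosen
    return travel_time + 1.0 / (service_rate - arrival_rate)

# A demand point choosing between a congested nearby centre and an
# idle distant one (hypothetical numbers).
centres = {
    "near_but_busy": perceived_time(2.0, arrival_rate=9.9, service_rate=10.0),
    "far_but_idle":  perceived_time(5.0, arrival_rate=2.0, service_rate=10.0),
}
print(min(centres, key=centres.get))     # -> 'far_but_idle' (5.125 vs 12.0)
```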

    zCap: a zero configuration adaptive paging and mobility management mechanism

    Today, cellular networks rely on fixed collections of cells (tracking areas) for user equipment localisation. Locating users within these areas involves broadcast search (paging), which consumes radio bandwidth but reduces the user equipment signalling required for mobility management. Tracking areas are today manually configured, are hard to adapt to local mobility, and influence the load on several key resources in the network. We propose a decentralised and self-adaptive approach to mobility management based on a probabilistic model of local mobility. By estimating the parameters of this model from observations of user mobility collected online, we obtain a dynamic model from which we construct local neighbourhoods of cells where we are most likely to locate user equipment. We propose to replace the static tracking areas of current systems with neighbourhoods local to each cell. The model is also used to derive a multi-phase paging scheme, in which the division of neighbourhood cells into consecutive phases balances response times against paging cost. The complete mechanism requires no manual tracking area configuration and performs localisation efficiently in terms of signalling and response times. Detailed simulations show that significant potential gains in localisation efficiency are possible while eliminating manual configuration of mobility management parameters. Variants of the proposal can be implemented within current (LTE) standards.
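
    The multi-phase paging idea can be illustrated with a short sketch: order the cells of a neighbourhood by estimated location probability and page them in phases, each phase covering a further slice of the probability mass. The probabilities below are assumed rather than learned online as zCap would do:

```python
# Sketch of probability-ordered multi-phase paging (illustrative, not
# the zCap algorithm itself). p[c] estimates the probability that the
# user equipment is currently in cell c.

def build_phases(p, mass_per_phase=0.8):
    """Group cells into paging phases by descending probability."""
    phases, current, mass = [], [], 0.0
    for c in sorted(p, key=p.get, reverse=True):
        current.append(c)
        mass += p[c]
        if mass >= mass_per_phase:   # close a phase once enough mass is covered
            phases.append(current)
            current, mass = [], 0.0
    if current:
        phases.append(current)
    return phases

p = {"c1": 0.55, "c2": 0.25, "c3": 0.10, "c4": 0.06, "c5": 0.04}
print(build_phases(p))               # -> [['c1', 'c2'], ['c3', 'c4', 'c5']]
```

    Paging only the first phase locates the user most of the time at a fraction of the broadcast cost; later phases add response time but guarantee coverage.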

    A loudspeaker response interpolation model based on one-twelfth octave interval frequency measurements

    A practical loudspeaker frequency response interpolation model is developed using a modification of the Tuneable Approximate Piecewise Linear Regression (TAPLR) model that can provide a complete magnitude and phase response over the full frequency range of the loudspeaker. This is achieved by first taking standard one-twelfth octave frequency interval acoustic intensity measurements at a one metre distance in front of the loudspeaker. These measurements are inserted directly into the formulation, which then requires only minimal tuning to achieve a magnitude response model accurate to within ±1 dB of the magnitude of the Fourier transform of the impulse response for typical hi-fi loudspeakers. The Hilbert transform can then be used to compute the corresponding phase response directly from the resulting magnitude response. Even though it is initially based on consecutive piecewise linear sections, this new model provides a continuous smooth interpolation between the measured values that is much more satisfactory than normal piecewise linear segment interpolation and much simpler than polynomial interpolation. It requires the tuning of only a single parameter to control the degree of smoothness, from a stair-step response at one extreme to a straight mean horizontal line at the other. The best tuning parameter value between these two extremes is easy to find, either by trial and error or by minimisation of a mean squared interpolation error.
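
    The TAPLR formulation itself is not given in the abstract, but its advertised limiting behaviour (a single smoothness parameter sweeping from a stair-step through the measurements to a flat mean line) matches that of a simple kernel smoother on a log-frequency axis. The sketch below uses a Gaussian kernel as a stand-in, with an assumed 1/12-octave measurement grid and fake measurement data:

```python
# Single-parameter smoother standing in for TAPLR (an assumption, not
# the paper's formula): bandwidth -> 0 approaches a stair-step through
# the measurements, bandwidth -> inf approaches their flat mean.
import numpy as np

def smooth_response(f_meas, mag_db, f_query, bandwidth):
    """Interpolate magnitude (dB) at f_query from measured points."""
    # Work in log2(frequency) so octave spacing is uniform.
    x, xq = np.log2(f_meas), np.log2(f_query)
    w = np.exp(-(((xq[:, None] - x[None, :]) / bandwidth) ** 2))
    return (w @ mag_db) / w.sum(axis=1)

f_meas = 1000.0 * 2.0 ** (np.arange(-24, 25) / 12.0)  # 1/12-octave grid, 250 Hz-4 kHz
mag_db = np.random.default_rng(0).normal(0.0, 1.0, f_meas.size)  # fake measurements
f_query = np.geomspace(f_meas[0], f_meas[-1], 500)
smoothed = smooth_response(f_meas, mag_db, f_query, bandwidth=0.1)
```

    As the abstract notes, a corresponding minimum-phase response can then be derived from the smoothed magnitude via the Hilbert transform of the log-magnitude (e.g. using scipy.signal.hilbert).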

    Risk-sharing and probabilistic network structure

    This paper studies the impact of a probabilistic risk-sharing network structure on optimal portfolio composition. We show that, even assuming identical agents, we can differentiate their optimal risk choices once we assume that the link structure defining their relationships is probabilistic. In particular, each agent's final portfolio composition is a function of his location in the network. If we assume positive asset-correlation coefficients, the relative location of a player in the graph influences his risk behaviour as much as the locations of his direct and indirect partners, in a way that is not straightforward. We also analyse two potential "centrality measures" able to select the key player in the risk-sharing network. The findings may help to select the "central" agent in a risk-sharing community and to forecast the risk exposure of the players. Finally, this paper may explain natural differences between the choices of identical rational agents emerging in a probabilistic network setup.
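
    The abstract does not define the two centrality measures, but one natural candidate for a probabilistic network is eigenvector centrality computed on the expected adjacency matrix, whose (i, j) entry is the probability that the link between agents i and j exists. A small sketch with a hypothetical four-agent network:

```python
# Eigenvector centrality on the expected adjacency matrix of a
# probabilistic risk-sharing network (one plausible measure; the
# paper's own measures are not specified in the abstract).
import numpy as np

link_prob = np.array([        # hypothetical link-existence probabilities
    [0.0, 0.9, 0.1, 0.0],
    [0.9, 0.0, 0.8, 0.1],
    [0.1, 0.8, 0.0, 0.7],
    [0.0, 0.1, 0.7, 0.0],
])
eigvals, eigvecs = np.linalg.eigh(link_prob)   # symmetric, so eigh applies
centrality = np.abs(eigvecs[:, -1])            # leading eigenvector
print(centrality / centrality.sum())           # normalised centrality per agent
```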

    Bayesian decision support for complex systems with many distributed experts

    Complex decision support systems often consist of component modules which, encoding the judgements of panels of domain experts, each describe a particular sub-domain of the overall system. Ideally these modules need to be pasted together to provide a comprehensive picture of the whole process. The challenge of building such an integrated system is that, whilst the overall qualitative features are common knowledge to all, the explicit forecasts and their associated uncertainties are expressed only individually by each panel, resulting from its own analysis. The structure of the integrated system therefore needs to facilitate the coherent piecing together of these separate evaluations. If such a system is not available, there is a serious danger that this might drive decision makers to incoherent and therefore indefensible policy choices. In this paper we develop a graphically based framework which embeds a set of conditions, consisting of the agreement, usually made in practice, on certain probability and utility models, that, if satisfied in a given context, are sufficient to ensure that the composite system is truly coherent. Furthermore, we develop new message passing algorithms, entailing the transmission of expected utility scores between the panels, that enable the uncertainties within each module to be fully accounted for in the evaluation of the available alternatives in these composite systems.
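
    The essence of expected-utility message passing can be shown in a two-panel chain: the downstream panel collapses its probability model and the utility into a score for each state of the shared variable, and the upstream panel needs only that score, never the downstream model itself. A minimal sketch with hypothetical numbers (the paper's algorithms are more general):

```python
# Two-panel expected-utility message passing (illustrative).
# Panel B owns P(Y | X) and the utility u(y); it sends back the score
# U_B(x) = E[u(Y) | X = x]. Panel A owns P(X | decision) and folds the
# score into its own expectation.

# Panel B's model and utilities (hypothetical).
p_y_given_x = {"lo": {"good": 0.9, "bad": 0.1},
               "hi": {"good": 0.4, "bad": 0.6}}
utility = {"good": 10.0, "bad": -5.0}

# Message from B to A: expected utility for each state of X.
score_b = {x: sum(p * utility[y] for y, p in dist.items())
           for x, dist in p_y_given_x.items()}

# Panel A evaluates decisions without ever seeing B's internal model.
p_x_given_d = {"d1": {"lo": 0.8, "hi": 0.2},
               "d2": {"lo": 0.3, "hi": 0.7}}
expected_utility = {d: sum(p * score_b[x] for x, p in dist.items())
                    for d, dist in p_x_given_d.items()}
print(max(expected_utility, key=expected_utility.get))  # -> 'd1' (7.0 vs 3.25)
```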

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite their often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and on its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
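
    The core computation behind probabilistic model checking can be shown concretely: the probability of reaching a "bad" state in a discrete-time Markov chain within a bounded number of steps, computed by value iteration. The toy two-sensor chain below is an assumption for illustration; tools such as PRISM perform this kind of analysis on far larger models:

```python
# Bounded reachability in a toy discrete-time Markov chain: the
# probability that both sensors fail within k steps (illustrative
# model, not taken from the report).
import numpy as np

# States: 0 = both sensors ok, 1 = one failed, 2 = both failed (absorbing).
P = np.array([[0.990, 0.010, 0.000],
              [0.000, 0.950, 0.050],
              [0.000, 0.000, 1.000]])
bad = 2

x = np.zeros(3)
x[bad] = 1.0                 # x[s] = P(reach 'bad' from s within k steps)
for _ in range(24):          # k = 24 steps
    x = P @ x                # one step of the value-iteration recurrence
    x[bad] = 1.0             # 'bad' stays reached once entered
print(x[0])                  # failure probability from the all-ok state
```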