
    Scoping study on the significance of mesh resolution vs. scenario uncertainty in the CFD modelling of residential smoke control systems

    Computational fluid dynamics (CFD) modelling is a commonly adopted tool for supporting the specification and design of common corridor ventilation systems in UK residential buildings. Inputs for the CFD modelling of common corridor ventilation systems are typically premised on a ‘reasonable worst case’, i.e. no specific uncertainty quantification process is undertaken to evaluate the safety level. As such, where the performance of a specific design sits on a probability spectrum is not defined. Furthermore, the mesh cell sizes adopted are typically c. 100 – 200 mm. For a large eddy simulation (LES) based CFD code, this is considered coarse for this application and creates a further uncertainty with respect to capturing key behaviours in the CFD model. Both co-existing practices summarised above create uncertainty, arising either from parameter choice or from the (computational fire and smoke) model itself. What is not clear is the relative importance of these uncertainties. This paper summarises a scoping study that subjects the noted common corridor CFD application to a probabilistic risk assessment (PRA) using the MaxEnt method. The uncertainty associated with the performance of a reference design is considered at different grid scales (achieving different ‘a posteriori’ mesh quality indicators), with the aim of quantifying the relative importance of uncertainties associated with inputs and scenarios vs. the fidelity of the CFD model. For the specific case considered herein, it is found that parameter uncertainty has a more significant impact on the confidence in a given design solution than that arising from grid resolution, for grid sizes of 100 mm or less. Above this grid size, uncertainty associated with the model dominates. Given the specific ventilation arrangement modelled in this work, care should be taken in generalising such conclusions.
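    The abstract names the MaxEnt method for the PRA but does not spell out its formulation. The sketch below is a minimal illustration, assuming a generic maximum-entropy fit: a pmf over binned CFD outcomes (e.g. corridor visibility at a given grid size) constrained to match a sample mean. The function name, bins, and constraint are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch, assuming a generic maximum-entropy fit (the paper's exact
# MaxEnt formulation is not given in the abstract). Returns the max-entropy
# pmf over binned outcome values subject to matching a target mean, which has
# the exponential-family form p_i proportional to exp(lam * x_i).
import numpy as np
from scipy.optimize import brentq

def maxent_pmf(bin_centers, target_mean):
    x = np.asarray(bin_centers, dtype=float)

    def mean_error(lam):
        z = lam * x
        w = np.exp(z - z.max())          # shift exponent for numerical stability
        p = w / w.sum()
        return p @ x - target_mean       # zero when the pmf matches the target mean

    # Bracket assumes target_mean lies strictly between min(x) and max(x).
    lam = brentq(mean_error, -50.0, 50.0)
    z = lam * x
    w = np.exp(z - z.max())
    return w / w.sum()
```

    A pmf fitted this way at each grid size could then be compared against the spread obtained by varying scenario parameters at a fixed grid, mirroring the parameter-vs-mesh comparison the study describes.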

    Impromptu Deployment of Wireless Relay Networks: Experiences Along a Forest Trail

    We are motivated by the problem of impromptu or as-you-go deployment of wireless sensor networks. As an application example, a person, starting from a sink node, walks along a forest trail, makes link quality measurements (with the previously placed nodes) at equally spaced locations, and deploys relays at some of these locations, so as to connect a sensor placed at some a priori unknown point on the trail with the sink node. In this paper, we report our experimental experiences with some as-you-go deployment algorithms. Two algorithms are based on Markov decision process (MDP) formulations; these require a radio propagation model. We also study purely measurement-based strategies: one heuristic that is motivated by our MDP formulations, one asymptotically optimal learning algorithm, and one inspired by a popular heuristic. We extract a statistical model of the propagation along a forest trail from raw measurement data, implement the algorithms experimentally in the forest, and compare them. The results provide useful insights regarding the choice of the deployment algorithm and its parameters, and also demonstrate the necessity of a proper theoretical formulation.
    Comment: 7 pages, accepted in IEEE MASS 201
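    The abstract does not reproduce the deployment rules themselves, so the following is only an illustrative threshold-based as-you-go rule in the spirit of the measurement-based strategies: at each step the walker measures link quality back to the last placed node and back-places a relay at the previous location once the link degrades. The class name, RSSI threshold, and step interface are assumptions for illustration.

```python
# Illustrative sketch of a measurement-based as-you-go placement rule; not one of
# the paper's algorithms. The RSSI threshold and interface are assumed.
class ThresholdDeployer:
    def __init__(self, rssi_threshold_dbm=-85.0):
        self.rssi_threshold_dbm = rssi_threshold_dbm
        self.prev_location = None        # last measurement location along the trail
        self.relay_locations = []        # locations where relays were (back-)placed

    def step(self, location, rssi_to_last_node_dbm):
        """Called at each equally spaced measurement point along the trail.
        Returns True if a relay was just placed at the previous location."""
        placed = False
        if (rssi_to_last_node_dbm < self.rssi_threshold_dbm
                and self.prev_location is not None):
            self.relay_locations.append(self.prev_location)   # back-place one step
            placed = True                # caller should now measure to the new relay
        self.prev_location = location
        return placed
```

    An MDP-based rule would replace the fixed threshold with a placement policy computed from a radio propagation model, which is the distinction the paper evaluates experimentally.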

    Assessment of accident investigation methods for wildland firefighting incidents by case study method


    Improved Hardness for Cut, Interdiction, and Firefighter Problems

    We study variants of the classic s-t cut problem and prove the following improved hardness results assuming the Unique Games Conjecture (UGC).
    * For Length-Bounded Cut and Shortest Path Interdiction, we show that both problems are hard to approximate within any constant factor, even if we allow bicriteria approximation. If we want to cut vertices or the graph is directed, our hardness ratio for Length-Bounded Cut matches the best approximation ratio up to a constant. Previously, the best hardness ratio was 1.1377 for Length-Bounded Cut and 2 for Shortest Path Interdiction.
    * For any constant k >= 2 and epsilon > 0, we show that Directed Multicut with k source-sink pairs is hard to approximate within a factor k - epsilon. This matches the trivial k-approximation algorithm. By a simple reduction, our result for k = 2 implies that Directed Multiway Cut with two terminals (also known as s-t Bicut) is hard to approximate within a factor 2 - epsilon, matching the trivial 2-approximation algorithm.
    * Assuming a variant of the UGC (implied by another variant of Bansal and Khot), we prove that it is hard to approximate Resource Minimization Fire Containment within any constant factor. Previously, the best hardness ratio was 2. For directed layered graphs with b layers, our hardness ratio Omega(log b) matches the best approximation algorithm.
    Our results are based on a general method of converting an integrality gap instance to a length-control dictatorship test for variants of the s-t cut problem, which may be useful for other problems.
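    The "trivial k-approximation algorithm" referenced for Directed Multicut cuts each source-sink pair independently with a minimum cut and takes the union of the crossing edges; each individual cut costs at most the optimum multicut, so the union costs at most k times the optimum. A minimal sketch using networkx (the graph and function names are illustrative, not from the paper):

```python
# Sketch of the trivial k-approximation for Directed Multicut: cut each (s_i, t_i)
# pair separately with a minimum cut and take the union of the crossing edges.
import networkx as nx

def trivial_multicut(G: nx.DiGraph, pairs, capacity="capacity"):
    cut_edges = set()
    for s, t in pairs:
        _, (source_side, sink_side) = nx.minimum_cut(G, s, t, capacity=capacity)
        for u in source_side:
            for v in G.successors(u):
                if v in sink_side:
                    cut_edges.add((u, v))   # edge crossing the i-th minimum cut
    return cut_edges
```

    The hardness result k - epsilon in the abstract says that, under the UGC, no polynomial-time algorithm can do essentially better than this union-of-cuts bound.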

    Optimisation of stochastic networks with blocking: a functional-form approach

    This paper introduces a class of stochastic networks with blocking, motivated by applications arising in cellular network planning, mobile cloud computing, and spare parts supply chains. Blocking results in lost revenue due to customers or jobs being permanently removed from the system. We are interested in striking a balance between mitigating blocking by increasing service capacity, and maintaining low costs for service capacity. This problem is further complicated by the stochastic nature of the system. Owing to the complexity of the system, there are no analytical results available that formulate and solve the relevant optimisation problem in closed form. Traditional simulation-based methods may work well for small instances, but the associated computational costs are prohibitive for networks of realistic size. We propose a hybrid functional-form based approach for finding the optimal resource allocation, combining the speed of an analytical approach with the accuracy of simulation-based optimisation. The key insight is to replace the computationally expensive gradient estimation in simulation optimisation with a closed-form analytical approximation that is calibrated using a single simulation run. We develop two implementations of this approach and conduct extensive computational experiments on complex examples to show that it is capable of substantially improving system performance. We also provide evidence that our approach has substantially lower computational costs compared to stochastic approximation.
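    The abstract states that the expensive simulation-based gradient is replaced by a closed-form approximation calibrated from a single simulation run, but it does not give the functional form. As an illustration only, the sketch below uses the classical Erlang-B blocking formula as a closed-form surrogate for a single station, scales it to match one simulated blocking estimate, and differences it to obtain a cheap sensitivity to added capacity; the function names and the calibration scheme are assumptions, not the paper's method.

```python
# Illustrative surrogate-gradient sketch (assumed functional form: Erlang-B).
def erlang_b(offered_load, servers):
    """Erlang-B blocking probability via the standard stable recursion."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

def calibrated_blocking(offered_load, servers, simulated_blocking, sim_servers):
    """Scale the closed-form surrogate so it matches one simulated observation."""
    scale = simulated_blocking / erlang_b(offered_load, sim_servers)
    return scale * erlang_b(offered_load, servers)

def surrogate_gradient(offered_load, servers, simulated_blocking, sim_servers):
    """Cheap sensitivity of blocking to one extra server from the calibrated
    surrogate (unit finite difference, since server counts are integer)."""
    return (calibrated_blocking(offered_load, servers + 1, simulated_blocking, sim_servers)
            - calibrated_blocking(offered_load, servers, simulated_blocking, sim_servers))
```

    In the spirit of the hybrid approach, such a surrogate sensitivity could drive the capacity updates that a pure simulation-optimisation loop would otherwise estimate through repeated, costly simulation runs.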