
    Expert Elicitation for Reliable System Design

    This paper reviews the role of expert judgement to support reliability assessments within the systems engineering design process. Generic design processes are described to give the context, and the nature of the reliability assessments required in the different systems engineering phases is discussed. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential for future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large-scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process.

    Comment: This paper is commented in [arXiv:0708.0285], [arXiv:0708.0287] and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Animation of a process for identifying and merging raster polygon areas


    Energy efficiency in wireless communications for mobile user devices

    The market for mobile user devices has experienced exponential growth worldwide over the last decade, and wireless communications are the main driver for the next generation of 5G networks. The ubiquity of battery-powered connected devices makes energy efficiency a major research issue. While most studies assumed that network interfaces dominate the energy consumption of wireless communications, a recent work unveils that the frame processing carried out by the device can drain as much energy as the interface itself for many devices. This discovery casts doubt on prior energy models for wireless communications and forces us to reconsider existing energy-saving schemes. From this standpoint, this thesis is devoted to the study of the energy efficiency of mobile user devices at multiple layers. To that end, we assemble a comprehensive energy measurement framework, and a robust methodology, to characterise a wide range of mobile devices, as well as individual parts of such devices.

    Building on this, we first delve into the energy consumption of frame processing within the devices' protocol stack. Our results identify the CPU as the leading cause of this energy consumption. Moreover, we discover that the characterisation of the energy toll ascribed to the device is much more complex than previous work showed: devices with complex CPUs (several frequencies and sleep states) require novel methodologies and models to characterise their consumption successfully.

    We then turn our attention to lower levels of the communication stack by investigating the behaviour of idle WiFi interfaces. Due to the design of the 802.11 protocol, together with the growing trend of network densification, WiFi devices spend a long time receiving frames addressed to other devices when they might be dormant. To mitigate this issue, we study the timing constraints of a commercial WiFi card, which we develop into a standard-compliant algorithm that saves energy during such transmissions.

    At a higher level, rate adaptation and power control techniques adapt the data rate and output power to the channel conditions. However, these have typically been studied with metrics other than energy efficiency in mind (i.e., performance figures such as throughput and capacity). In fact, our analyses and simulations unveil an inherent trade-off between throughput maximisation and energy efficiency maximisation in 802.11. We show that rate adaptation and power control techniques may incur inefficiencies at mode transitions, and we provide energy-aware heuristics that make such decisions following a conservative approach.

    Finally, our research experience with simulation methods pointed us towards the need for new simulation tools committed to a middle-way approach: less specificity than complex network simulators in exchange for easier and faster prototyping. As a result, we developed a process-oriented and trajectory-based discrete-event simulation package for the R language, designed as an easy-to-use yet powerful framework with automatic monitoring capabilities. The use of this simulator in networking is demonstrated through the energy modelling of an Internet-of-Things scenario with thousands of metering devices in just a few lines of code.

    Doctoral degree with International Mention. Official Doctoral Programme in Telematic Engineering. Committee: President: Juan Manuel López Soler; Secretary: Francisco Valera Pintor; Member: Paul Horatiu Patra.
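
    The process-oriented simulation style described above can be illustrated with an analogous sketch in Python using the SimPy discrete-event library; the thesis's own package targets R, and all device counts, timings and power figures below are invented stand-ins, not values from the thesis:

        # Hypothetical sketch: aggregate energy of thousands of metering
        # devices in a process-oriented discrete-event simulator (SimPy).
        import random
        import simpy

        TX_TIME = 0.05      # seconds per transmission (assumed)
        TX_POWER = 1.2      # watts while transmitting (assumed)
        IDLE_POWER = 0.001  # watts while dormant (assumed)
        energy_J = [0.0]    # accumulated energy in joules

        def meter(env, period):
            # One metering device: sleep, wake, transmit, repeat.
            while True:
                sleep = random.expovariate(1.0 / period)
                yield env.timeout(sleep)
                energy_J[0] += IDLE_POWER * sleep
                yield env.timeout(TX_TIME)
                energy_J[0] += TX_POWER * TX_TIME

        env = simpy.Environment()
        for _ in range(5000):                 # thousands of devices
            env.process(meter(env, period=60.0))
        env.run(until=3600.0)                 # one simulated hour
        print(f"aggregate energy: {energy_J[0]:.1f} J")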

    Information Sharing for improved Supply Chain Collaboration – Simulation Analysis

    Collaboration between consumer goods manufacturers and retailers is vital to elevating their performance. Such mutual cooperation, which looks beyond day-to-day business and transforms a contract-based relationship into a value-based one, is well received in industry. Coupling information sharing with this collaboration is valued as an effective step forward, and the advent of new technologies naturally supports information sharing across the supply chain. Satisfying consumer demand is the main goal of any supply chain, so studying supply chain behaviour with demand as the shared information is especially beneficial. This thesis analyses demand information sharing in a two-stage supply chain. Three collaboration scenarios (none, partial and full) are simulated using discrete-event simulation, and their impact on supply chain costs is analysed; the Arena software is used to simulate the inventory control scenarios. The simulation results show that total system costs decrease as the level of information sharing increases: compared with the no-sharing scenario, costs improve by 7% when information is partially shared and by 43% when it is fully shared. The proposed work can assist decision makers in the design and planning of information sharing scenarios between supply chain partners to gain competitive advantage.
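
    As a rough illustration of the comparison described above, the sketch below simulates a two-stage, periodic-review supply chain in Python, where the manufacturer forecasts from either the retailer's orders ("none") or the end-customer demand itself ("full"). All parameters (demand distribution, smoothing factors, order-up-to rule, cost rates) are invented; the thesis's actual Arena models are not reproduced here:

        # Hypothetical two-stage supply chain: compare total cost with and
        # without demand information sharing. Parameters are illustrative.
        import random

        def simulate(sharing, periods=2000, seed=1):
            rng = random.Random(seed)
            r_inv, f_inv = 40.0, 40.0        # retailer / factory inventory
            r_fc, m_fc = 20.0, 20.0          # retailer / manufacturer forecasts
            cost = 0.0
            for _ in range(periods):
                demand = rng.gauss(20, 5)
                r_inv -= demand                          # serve customers
                r_fc += 0.4 * (demand - r_fc)            # retailer smooths demand
                order = max(0.0, 2.0 * r_fc - r_inv)     # order-up-to policy
                shipped = min(order, max(f_inv, 0.0))    # limited by factory stock
                f_inv -= shipped
                r_inv += shipped
                # The manufacturer forecasts from whatever signal it observes.
                signal = demand if sharing == "full" else order
                m_fc += 0.4 * (signal - m_fc)
                f_inv += m_fc                            # produce to forecast
                cost += sum(1.0 * max(v, 0.0) + 5.0 * max(-v, 0.0)
                            for v in (r_inv, f_inv))     # holding + backlog
            return cost

        for mode in ("none", "full"):
            print(mode, round(simulate(mode)))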

    SCS: 60 years and counting! A time to reflect on the Society's scholarly contribution to M&S from the turn of the millennium.

    The Society for Modeling and Simulation International (SCS) is celebrating its 60th anniversary this year. Since its inception, the Society has widely disseminated advancements in the field of modeling and simulation (M&S) through its peer-reviewed journals. In this paper we profile research published in the journal SIMULATION: Transactions of the Society for Modeling and Simulation International from the turn of the millennium to 2010; the objective is to acknowledge the contribution of the authors and their seminal research papers, their respective universities/departments, and the geographical diversity of the authors' affiliations. A further objective is to contribute towards the understanding of the overall evolution of the discipline of M&S; this is achieved through the classification of M&S techniques and their frequency of use, together with an analysis of the sectors that have seen the predominant application of M&S and the context of its application. It is expected that this paper will lead to further appreciation of the contribution of the Society in influencing the growth of M&S as a discipline and, indeed, in steering its future direction.

    A Survey of Monte Carlo Tree Search Methods

    Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
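
    The canonical four phases of the core algorithm (selection, expansion, simulation, backpropagation) can be condensed into a short UCT sketch in Python. The game-state interface below is hypothetical, and player-perspective handling of rewards is omitted for brevity:

        # Minimal UCT (UCB1-based MCTS) sketch. `state` is assumed to offer
        # legal_moves(), apply(move), is_terminal() and result().
        import math
        import random

        class Node:
            def __init__(self, state, parent=None):
                self.state, self.parent = state, parent
                self.children, self.untried = [], state.legal_moves()
                self.visits, self.value = 0, 0.0

            def ucb1(self, c=1.4):
                return (self.value / self.visits +
                        c * math.sqrt(math.log(self.parent.visits) / self.visits))

        def search(root_state, iterations=1000):
            root = Node(root_state)
            for _ in range(iterations):
                node = root
                # 1. Selection: descend via UCB1 while fully expanded.
                while not node.untried and node.children:
                    node = max(node.children, key=Node.ucb1)
                # 2. Expansion: add one child for an untried move.
                if node.untried:
                    child = Node(node.state.apply(node.untried.pop()), node)
                    node.children.append(child)
                    node = child
                # 3. Simulation: uniformly random rollout to a terminal state.
                state = node.state
                while not state.is_terminal():
                    state = state.apply(random.choice(state.legal_moves()))
                # 4. Backpropagation: update statistics along the path.
                reward = state.result()
                while node is not None:
                    node.visits += 1
                    node.value += reward
                    node = node.parent
            return max(root.children, key=lambda n: n.visits).state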

    Simulation and Flight Test Capability for Testing Prototype Sense and Avoid System Elements

    NASA Langley Research Center (LaRC) and The MITRE Corporation (MITRE) have developed, and successfully demonstrated, an integrated simulation-to-flight capability for evaluating sense and avoid (SAA) system elements. This integrated capability consists of a MITRE-developed fast-time computer simulation for evaluating SAA algorithms, and a NASA LaRC surrogate unmanned aircraft system (UAS) equipped to support hardware- and software-in-the-loop evaluation of SAA system elements (e.g., algorithms, sensors, architecture, communications, autonomous systems), concepts, and procedures. The fast-time computer simulation subjects algorithms to simulated flight encounters/conditions and generates a fitness report that records strengths, weaknesses, and overall performance. Reviewed algorithms (and their fitness reports) are then transferred to NASA LaRC, where additional (joint) airworthiness evaluations are performed on the candidate SAA system-element configurations, concepts, and/or procedures of interest; software and hardware components are integrated into the surrogate UAS research systems; and flight safety and mission planning activities are completed. Onboard the surrogate UAS, candidate SAA system-element configurations, concepts, and/or procedures are subjected to flight evaluations and in-flight performance is monitored. The surrogate UAS, which can be controlled remotely via generic ground station uplink or automatically via onboard systems, operates with a NASA Safety Pilot/Pilot in Command onboard to permit safe operations in mixed airspace with manned aircraft. An end-to-end demonstration of a typical application of the capability was performed in non-exclusionary airspace in October 2011; additional research, development, flight testing, and evaluation efforts using this integrated capability are planned throughout fiscal years 2012 and 2013.

    COllective INtelligence with sequences of actions

    The design of a Multi-Agent System (MAS) to perform well on a collective task is non-trivial. Straightforward application of learning in a MAS can lead to suboptimal solutions as agents compete or interfere. The COllective INtelligence (COIN) framework of Wolpert et al. proposes an engineering solution for MASs where agents learn to focus on actions which support a common task. As a case study, we investigate the performance of COIN for representative token retrieval problems found to be difficult for agents using classic Reinforcement Learning (RL). We further investigate several techniques from RL (model-based learning, Q(λ)) to scale the application of the COIN framework. Lastly, the COIN framework is extended to improve performance for sequences of actions.
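
    A central idea in the COIN framework is to give each agent a private utility aligned with the world utility G; one common form is the difference utility D_i = G(z) - G(z_{-i}), which credits an agent only for its own contribution. The minimal Python sketch below illustrates this on an invented token-retrieval example; the actual problems and utilities studied in the paper are not reproduced:

        # Hypothetical sketch of difference rewards: each agent is credited
        # with the world utility G minus G computed as if its own action
        # were removed, which isolates that agent's contribution.
        def global_utility(tokens):
            # World utility G: number of distinct tokens retrieved.
            return len(set(tokens))

        def difference_rewards(choices):
            g = global_utility(choices)
            return [g - global_utility(choices[:i] + choices[i + 1:])
                    for i in range(len(choices))]

        # Two agents grabbing the same token earn 0 (interference);
        # the agent on a distinct token earns 1.
        print(difference_rewards(["a", "a", "b"]))  # -> [0, 0, 1]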

    A neural network architecture for data editing in the Bank of Italy's business surveys

    This paper presents an application of neural network models to predictive classification for data quality control. Our aim is to identify data affected by measurement error in the Bank of Italy's business surveys. We build an architecture consisting of three feed-forward networks for variables related to employment, sales and investment respectively: the networks are trained on input matrices extracted from the error-free final survey database for the 2003 wave and subjected to stochastic transformations reproducing known error patterns. A binary indicator of unit perturbation is used as the output variable. The networks are trained with the Resilient Propagation learning algorithm. On the training and validation sets, correct predictions occur in about 90 per cent of the records for employment, 94 per cent for sales, and 75 per cent for investment. On independent test sets, the respective quotas average 92, 80 and 70 per cent. On our data, neural networks perform much better as classifiers than logistic regression, one of the most popular competing methods. They appear to provide a valid means of improving the efficiency of the quality control process and, ultimately, the reliability of survey data.

    Keywords: data quality, data editing, binary classification, neural networks, measurement error
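
    A minimal sketch of the kind of classifier described above can be written in Python with PyTorch, whose torch.optim.Rprop optimizer implements resilient propagation. The layer sizes and synthetic data below are invented; the paper's real inputs are survey variables with injected error patterns:

        # Hypothetical feed-forward binary classifier trained with Rprop.
        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        X = torch.randn(500, 10)                      # stand-in survey features
        y = (X.sum(dim=1, keepdim=True) > 0).float()  # stand-in perturbation flag

        model = nn.Sequential(
            nn.Linear(10, 16), nn.Tanh(),
            nn.Linear(16, 1), nn.Sigmoid(),
        )
        opt = torch.optim.Rprop(model.parameters())   # resilient propagation
        loss_fn = nn.BCELoss()

        for epoch in range(200):
            opt.zero_grad()
            loss_fn(model(X), y).backward()
            opt.step()

        acc = ((model(X) > 0.5).float() == y).float().mean()
        print(f"training accuracy: {acc:.2%}")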