18 research outputs found

    2014 International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    The present book includes a set of selected extended papers from the 4th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2014), held in Vienna, Austria, from 28 to 30 August 2014. The conference brought together researchers, engineers and practitioners interested in methodologies and applications of modeling and simulation. New and innovative solutions are reported in this book. SIMULTECH 2014 received 167 submissions from 45 countries across all continents. After a double-blind paper review performed by the Program Committee, 23% were accepted as full papers and thus selected for oral presentation. Additional papers were accepted as short papers and posters. A further selection was made after the conference, based also on the assessment of presentation quality and audience interest, so that this book includes the extended and revised versions of the very best papers of SIMULTECH 2014. Commitment to high quality standards is a major concern of SIMULTECH and will be maintained in future editions, considering not only the stringent paper acceptance ratios but also the quality of the program committee, keynote lectures, participation level and logistics.

    Modeling network-controlled device-to-device communications in SimuLTE

    In Long Term Evolution-Advanced (LTE-A), network-controlled device-to-device (D2D) communications allow User Equipments (UEs) to communicate directly, without involving the Evolved Node-B in data relaying, while the latter still retains control of resource allocation. This paradigm allows reduced latencies for the UEs and increased resource efficiency for the network operator, and is therefore foreseen to support several services, from machine-to-machine to vehicular communications. D2D communications introduce research challenges that might affect the performance of applications and upper-layer protocols, hence simulation represents a valuable tool for evaluating these aspects. However, simulating D2D features might pose an additional computational burden on the simulation environment, so careful modeling is required to reduce this overhead. In this paper we describe our modeling of network-controlled D2D communications in SimuLTE, a system-level LTE-A simulation library based on OMNeT++. We describe the core modeling choices of SimuLTE and show how these allow an easy extension to D2D communications. Moreover, we describe in detail the modeling of specific problems arising with D2D communications, such as scheduling with frequency reuse, connection mode switching and broadcast transmission. We document the computational efficiency of our modeling choices, showing that simulation of D2D communications is no more complex than simulation of classical cellular communications of comparable scale. Results show that the heaviest computational burden of D2D communication lies in estimating the Sidelink channel quality. We show that SimuLTE allows one to evaluate the interplay between D2D communication and the end-to-end performance of UDP- and TCP-based services. Moreover, we assess the accuracy of using a binary interference model for frequency reuse, and we evaluate the trade-off between speed of execution and accuracy in modeling the reception probability.
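
    The abstract mentions scheduling with frequency reuse and a binary interference model. The following minimal sketch, in plain Python rather than SimuLTE's actual C++/OMNeT++ modules (the distance threshold and all names are illustrative assumptions), shows how a binary interference check can drive a greedy frequency-reuse assignment of D2D pairs to resource blocks:

        import math

        # Illustrative binary interference model: two D2D pairs may share a
        # resource block only if each transmitter is farther than REUSE_DISTANCE
        # from the other pair's receiver (a yes/no abstraction of the SINR check).
        REUSE_DISTANCE = 150.0  # metres, hypothetical threshold

        def distance(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def can_reuse(pair_a, pair_b):
            (tx_a, rx_a), (tx_b, rx_b) = pair_a, pair_b
            return (distance(tx_a, rx_b) > REUSE_DISTANCE and
                    distance(tx_b, rx_a) > REUSE_DISTANCE)

        def greedy_reuse_schedule(pairs):
            """Assign each D2D pair to the first resource block whose current
            occupants it does not conflict with (in the binary sense)."""
            blocks = []  # each block is a list of pairs sharing the same resources
            for p in pairs:
                for block in blocks:
                    if all(can_reuse(p, q) for q in block):
                        block.append(p)
                        break
                else:
                    blocks.append([p])
            return blocks

        # Example: pairs given as ((tx_x, tx_y), (rx_x, rx_y)) coordinates.
        pairs = [((0, 0), (10, 0)), ((400, 0), (410, 0)), ((50, 0), (60, 0))]
        print(greedy_reuse_schedule(pairs))  # first and second pair share a block

    A full model would replace the binary check with a threshold on the estimated Sidelink SINR, whose estimation the paper identifies as the dominant computational cost.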

    Integral seismic risk assessment through fuzzy models

    The usage of indicators as constituent parts of composite indices is a widespread practice in many fields of knowledge. Even when rigorous statistical analyses are implemented, many of the methodologies follow simple arithmetic assumptions to aggregate indicators into an index. One of the consequences of such assumptions can be the concealment of the influence of some of the composite index’s components. We developed a fuzzy method that aggregates indicators using non-linear methods and, in this paper, compare it to a well-known example in the field of risk assessment, called Moncho’s equation, which combines physical and social components and uses a linear aggregation method to estimate a level of seismic risk. By comparing the spatial pattern of the risk level obtained from these two methodologies, we were able to evaluate to what extent a fuzzy approach allows a more realistic representation of how social vulnerability levels might shape the seismic risk panorama in an urban environment. We found that, in some cases, this approach can lead to risk level values that are up to 80% greater than those obtained using a linear aggregation method for the same areas.
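
    To illustrate why a non-linear aggregation can diverge from a weighted sum, the sketch below contrasts a linear composite index with a small hand-rolled Mamdani-style fuzzy aggregation. It is neither Moncho's equation nor the authors' model; the weights, membership functions and rule consequents are hypothetical.

        import numpy as np

        def linear_risk(physical, social, w_phys=0.6, w_soc=0.4):
            """Linear aggregation: weighted sum of normalised indicators in [0, 1]."""
            return w_phys * physical + w_soc * social

        def tri(x, a, b, c):
            """Triangular membership function with corners a <= b <= c."""
            rising = (x - a) / (b - a) if b > a else 1.0
            falling = (c - x) / (c - b) if c > b else 1.0
            return max(min(rising, falling), 0.0)

        def fuzzy_risk(physical, social):
            # Membership degrees for each input (low / high); breakpoints hypothetical.
            phys_low, phys_high = tri(physical, 0, 0, 0.6), tri(physical, 0.4, 1, 1)
            soc_low, soc_high = tri(social, 0, 0, 0.6), tri(social, 0.4, 1, 1)

            # Rules (min = AND, max = OR). High social vulnerability pushes risk up
            # even for moderate physical damage, which a linear sum tends to conceal.
            rule_low = min(phys_low, soc_low)                      # -> risk around 0.2
            rule_mid = max(min(phys_high, soc_low),
                           min(phys_low, soc_high))                # -> risk around 0.6
            rule_high = min(phys_high, soc_high)                   # -> risk around 0.95

            # Weighted-centroid defuzzification over the rule consequents.
            weights = np.array([rule_low, rule_mid, rule_high])
            centres = np.array([0.2, 0.6, 0.95])
            return float(weights @ centres / weights.sum()) if weights.sum() else 0.0

        for p, s in [(0.5, 0.5), (0.3, 0.9)]:
            print(p, s, round(linear_risk(p, s), 2), round(fuzzy_risk(p, s), 2))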

    Enhancing System Realisation in Formal Model Development

    Software for mission-critical systems is sometimes analysed using formal specification to increase the chances of the system behaving as intended. When sufficient insights into the system have been obtained from the formal analysis, the formal specification is realised in the form of a software implementation. One way to realise the system's software is by automatically generating it from the formal specification -- a technique referred to as code generation. However, in general it is difficult to make guarantees about the correctness of the generated code -- especially while requiring automation of the steps involved in realising the formal specification. This PhD dissertation investigates ways to improve the automation of the steps involved in realising and validating a system based on a formal specification. The approach aims to develop properly designed software tools which support the integration of formal methods tools into the software development life cycle, and which leverage the formal specification in the subsequent validation of the system. The tools developed use a new code generation infrastructure that has been built as part of this PhD project and implemented in the Overture tool -- a formal methods tool that supports the Vienna Development Method. The development of the code generation infrastructure has involved the re-design of the software architecture of Overture. The new architecture brings forth the reuse and extensibility features of Overture to take into account the needs and requirements of software extensions targeting Overture. The tools developed in this PhD project have successfully supported three case studies from externally funded projects. The feedback received from the case study work has further helped improve the code generation infrastructure and the tools built using it
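
    As a generic illustration of the code-generation step described above (this is not Overture's code-generation infrastructure or the VDM abstract syntax; the toy AST and the target language are assumptions), a code generator is essentially a recursive walk over the specification's syntax tree that emits target-language text:

        from dataclasses import dataclass
        from typing import List, Union

        # A toy specification AST, far simpler than a real VDM model.
        @dataclass
        class Num:
            value: int

        @dataclass
        class Var:
            name: str

        @dataclass
        class BinOp:
            op: str            # '+', '*', ...
            left: "Expr"
            right: "Expr"

        @dataclass
        class FunDef:
            name: str
            params: List[str]
            body: "Expr"

        Expr = Union[Num, Var, BinOp]

        def gen_expr(node: Expr) -> str:
            """Recursively translate an expression node into target-language text."""
            if isinstance(node, Num):
                return str(node.value)
            if isinstance(node, Var):
                return node.name
            if isinstance(node, BinOp):
                return f"({gen_expr(node.left)} {node.op} {gen_expr(node.right)})"
            raise TypeError(f"unsupported node: {node!r}")

        def gen_fun(fun: FunDef) -> str:
            """Emit a Python function for a specification-level function definition."""
            return (f"def {fun.name}({', '.join(fun.params)}):\n"
                    f"    return {gen_expr(fun.body)}\n")

        spec = FunDef("double_plus_one", ["x"],
                      BinOp("+", BinOp("*", Num(2), Var("x")), Num(1)))
        print(gen_fun(spec))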

    A cloudification methodology for high performance simulations

    Many scientific areas make extensive use of computer simulations to study complex real-world processes. These computations are typically very resource-intensive and present scalability issues as experiments get larger, even in dedicated supercomputers, since these are limited by their own hardware resources. Cloud computing arises as an option to move towards the ideal of unlimited scalability by providing virtually infinite resources, yet applications must be adapted to this paradigm. The major goal of this thesis is to analyze the suitability of performing simulations in clouds by performing a paradigm shift, from classic parallel approaches to data-centric models, in those applications where that is possible. The aim is to maintain the scalability achieved in traditional HPC infrastructures while taking advantage of the features of the cloud computing paradigm. The thesis also explores the characteristics that make simulators suitable or unsuitable to be deployed on HPC or cloud infrastructures, defining a generic architecture and extracting common elements present in the majority of simulators. As a result, we propose a generalist cloudification methodology based on the MapReduce paradigm to migrate high performance simulations into the cloud and provide greater scalability. We analysed its viability by applying it to a real engineering simulator and running the resulting implementation on HPC and cloud environments. Our evaluations aim to show that the cloudified application is highly scalable, that there is still a large margin to improve the theoretical model and its implementations, and that the approach can be extended to a wider range of simulations.
    Funding: Administrador de Infraestructuras Ferroviarias (ADIF), Estudio y realización de programas de cálculo de pórticos rígidos de catenaria (CALPOR) y de sistema de simulación de montaje de agujas aéreas de línea aérea de contacto (SIA), JM/RS 3.6/4100.0685-9/00100; Administrador de Infraestructuras Ferroviarias (ADIF), Proyecto para la Investigación sobre la aplicación de las TIC a la innovación de las diferentes infraestructuras correspondientes a las instalaciones de electrificación y suministro de energía (SIRTE), JM/RS 3.9/1500.0009/0-00000; Spanish Ministry of Education, TIN2010-16497, Scalable Input/Output techniques for high-performance distributed and parallel computing environments; Spanish Ministry of Economics and Competitiveness, TIN2013-41350-P, Técnicas de gestión escalable de datos para high-end computing systems; European Union, COST Action IC1305, "Network for Sustainable Ultrascale Computing Platforms" (NESUS); European Union, COST Action IC0805, "Open European Network for High Performance Computing on Complex Environments"; Spanish Ministry of Economics and Competitiveness, TIN2011-15734-E, Red de Computación de Altas Prestaciones sobre Arquitecturas Paralelas Heterogéneas (CAPAP-H). Doctoral programme: Programa Oficial de Doctorado en Ciencia y Tecnología Informática, with International Doctorate mention. Committee: Domenica Talia (president), José Daniel García Sánchez (president), José Manuel Moya Fernández (secretary).
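
    The cloudification methodology above is built around the MapReduce paradigm. A minimal sketch of that data-centric pattern applied to a simulation sweep (plain Python standing in for a real MapReduce deployment; the partitioning scheme and the placeholder kernel are assumptions, not the thesis's actual simulator):

        from functools import reduce
        from multiprocessing import Pool

        def map_partition(partition):
            """Map step: run the expensive local computation on one domain partition.
            A placeholder kernel stands in for integrating the simulator's governing
            equations over the cells of the partition."""
            part_id, cells = partition
            local_result = sum(c * c for c in cells)   # stand-in for local physics
            return (part_id, local_result)

        def reduce_results(acc, item):
            """Reduce step: merge partial results into the global simulation state."""
            part_id, local_result = item
            acc["total"] += local_result
            acc["per_partition"][part_id] = local_result
            return acc

        if __name__ == "__main__":
            # Split a toy 1-D domain into partitions (the 'input splits' of a MR job).
            domain = list(range(1_000))
            partitions = [(i, domain[i::4]) for i in range(4)]

            with Pool(4) as pool:                    # map phase, data-parallel
                mapped = pool.map(map_partition, partitions)

            state = reduce(reduce_results, mapped,   # reduce phase, aggregation
                           {"total": 0, "per_partition": {}})
            print(state["total"])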

    Model-based development of energy-efficient automation systems

    Power consumption is an increasingly important decision criterion that has to be included in the search for good architectural and design alternatives for technical systems. This monograph presents a methodology for the model-based engineering of energy-aware automation systems. In this monograph, an embedded system is considered as a combination of the processor hardware and the software part. In the developed method, the former is described by an operational model, which depicts all possible states and transitions of the system under consideration. The latter is represented by an application model, which reflects the workflow of a concrete program created for this system. Together, these two models are translated into one stochastic Petri net to make analysis of the system possible. The developed transformation rules are presented and described mathematically. It is then possible to predict the system’s power consumption by a standard evaluation of Petri nets. The Unified Modeling Language (UML) is used in this monograph for modeling real-time systems. State machine diagrams extended with the MARTE profile (Modeling and Analysis of Real-Time and Embedded Systems) are chosen for modeling and performance evaluation. The presented methodology is supported by an implementation of the necessary algorithms and graphical editors in the software tool TimeNET.
The developed extension implements the presented method for power consumption modeling and evaluation based on the extended UML models, which can now be automatically transformed into a stochastic Petri net. The system’s power consumption can then be predicted by the standard Petri net analysis modules of TimeNET. The methodology is validated and its advantages are demonstrated using application examples.
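
    The power prediction rests on standard evaluation of the generated stochastic Petri net, whose reachable markings form a continuous-time Markov chain. A minimal sketch of that idea (not TimeNET's algorithms; the states, rates and per-state power values are assumed for illustration): solve for the steady-state distribution and weight each state's power draw by its probability.

        import numpy as np

        # Hypothetical 3-state model of a processor: 0 = sleep, 1 = idle, 2 = active.
        # Q[i, j] is the transition rate from state i to state j (per second).
        Q = np.array([
            [-0.2,  0.2,  0.0],
            [ 0.1, -0.6,  0.5],
            [ 0.0,  0.8, -0.8],
        ])

        power_watts = np.array([0.05, 0.4, 1.5])   # assumed per-state power draw

        # Steady state: pi @ Q = 0 with sum(pi) = 1. Replace one balance equation
        # with the normalisation constraint and solve the linear system.
        A = np.vstack([Q.T[:-1], np.ones(len(Q))])
        b = np.zeros(len(Q))
        b[-1] = 1.0
        pi = np.linalg.solve(A, b)

        mean_power = float(pi @ power_watts)       # expected power in watts
        print(pi, mean_power)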

    Image Processing and Simulation Toolboxes of Microscopy Images of Bacterial Cells

    Recent advances in microscopy imaging technology have allowed the characterization of the dynamics of cellular processes at the single-cell and single-molecule level. Particularly in bacterial cell studies, using E. coli as a case study, these techniques have been used to detect and track internal cell structures such as the nucleoid and the cell wall, and fluorescently tagged molecular aggregates such as FtsZ proteins, Min system proteins, inclusion bodies and the different types of RNA molecules. These studies have been performed using multi-modal, multi-process, time-lapse microscopy, producing both morphological and functional images. To facilitate the finding of relationships between cellular processes, from small scale, such as gene expression, to large scale, such as cell division, an image processing toolbox was implemented with several automatic and/or manual features, such as cell segmentation and tracking, intra- and inter-modal image registration, as well as the detection, counting and characterization of several cellular components. Two segmentation algorithms for cellular components were implemented, the first based on the Gaussian distribution and the second based on thresholding and morphological structuring functions. These algorithms were used to perform the segmentation of nucleoids and to identify the different stages of FtsZ ring formation (allied with the use of machine learning algorithms), which allowed us to understand how temperature influences the physical properties of the nucleoid and to correlate those properties with the exclusion of protein aggregates from the center of the cell. Another study used the segmentation algorithms to examine how temperature affects the formation of the FtsZ ring. The validation of the developed image processing methods and techniques has been based on benchmark databases manually produced and curated by experts. When dealing with thousands of cells and hundreds of images, these manually generated datasets can become the biggest cost in a research project. To expedite these studies and lower the cost of manual labour, an image simulation toolbox was implemented to generate realistic artificial images. The proposed image simulation toolbox can generate biologically inspired objects that mimic the spatial and temporal organization of bacterial cells and their processes, such as cell growth and division, cell motility, and cell morphology (shape, size and cluster organization). The image simulation toolbox was shown to be useful in the validation of three cell tracking algorithms: Simple Nearest-Neighbour, Nearest-Neighbour with Morphology and the DBSCAN cluster identification algorithm. It was shown that Simple Nearest-Neighbour still performed with great reliability when simulating objects with small velocities, while the other algorithms performed better for higher velocities and when larger clusters were present.
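
    One of the validated tracking algorithms is Simple Nearest-Neighbour. A minimal sketch of frame-to-frame nearest-neighbour matching of cell centroids (plain Python with a hypothetical gating distance; not the toolbox's implementation):

        import numpy as np
        from scipy.spatial.distance import cdist

        def nearest_neighbour_track(prev_centroids, curr_centroids, max_dist=5.0):
            """Match each cell in the previous frame to its closest centroid in the
            current frame, greedily and one-to-one, rejecting matches farther than
            max_dist (e.g. fast-moving cells or newly appeared ones)."""
            prev = np.asarray(prev_centroids, dtype=float)
            curr = np.asarray(curr_centroids, dtype=float)
            dist = cdist(prev, curr)                 # pairwise centroid distances
            matches, used = {}, set()
            for i in np.argsort(dist.min(axis=1)):   # most confident matches first
                for j in np.argsort(dist[i]):
                    if j not in used and dist[i, j] <= max_dist:
                        matches[int(i)] = int(j)
                        used.add(int(j))
                        break
            return matches                           # {prev_index: curr_index}

        prev = [(10.0, 12.0), (40.0, 41.0)]
        curr = [(41.5, 42.0), (11.0, 12.5), (80.0, 80.0)]
        print(nearest_neighbour_track(prev, curr))   # {0: 1, 1: 0}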

    Flexible Views for View-based Model-driven Development

    Modern software development faces the problem of fragmentation of information across heterogeneous artefacts in different modelling and programming languages. In this dissertation, the Vitruvius approach for view-based engineering is presented. Flexible views offer a compact definition of user-specific views on software systems and can be defined using the novel ModelJoin language. The process is supported by a change metamodel for metamodel evolution and change impact analysis.

    Demand and Capacity Modelling of Acute Services Using Simulation and Optimization Techniques

    The level of difficulty that hospital management have been experiencing over the past decade in balancing demand and capacity needs is unprecedented in the UK. Due to shortage of capacity, hospitals are unable to treat patients, and in some cases patients are transferred to other hospitals, outpatient referrals are delayed, and accident and emergency (A&E) waiting times are prolonged. It is therefore time to do things differently, because the status quo is not an option. A whole-hospital-level decision support system (DSS) was developed to assess and respond to the needs of local populations. The model integrates every component of a hospital (including A&E and all outpatient and inpatient specialties) to aid the efficient and effective use of scarce resources. An individual service or specialty cannot be assumed to be independent; they are all interconnected. It is clear from the literature that this level of generic hospital simulation model has never been developed before, so this is an innovative DSS. Using the Hospital Episode Statistics and local datasets, 768 forecasting models for the 28 outpatient and inpatient specialties were developed to capture demand. Within this context, a variety of forecasting models (i.e. ARIMA, exponential smoothing, stepwise linear regression and STLF) were developed for each outpatient and inpatient specialty, including the A&E department. The best forecasting methods and periods were selected by comparing four forecasting methods and three periods (i.e. daily, weekly and monthly) according to forecast accuracy values calculated with the mean absolute scaled error (MASE). Demand forecasts were then used as an input into the simulation model for the entire hospital (all specialties). The generic hospital simulation model was developed by taking into account all specialties and the interactions amongst the A&E, outpatient and inpatient specialties. Six hundred observed frequency distributions were established for the simulation model. All distributions used in the model were based on age groups. Using other inputs (i.e. financial inputs, number of follow-ups, etc.), the hospital was then modelled to measure key output metrics for strategic planning. This decision support system eliminates the deficiencies of current and past studies around modelling hospitals within a single framework. A new output metric called the ‘demand coverage ratio’ was developed to measure the percentage of patients who are admitted and discharged with the available resources of the associated specialty. In addition, a full factorial experimental design with four factors (A&E, elective and non-elective admissions and outpatient attendance) at two levels (possible 5% and 10% demand increases) was carried out in order to investigate the effects of demand increases on the key outputs (i.e. demand coverage ratio, bed occupancy rate and total revenue). As a result, each factor was found to affect total revenue, as was the interaction between elective and non-elective admissions. The demand coverage ratio is affected by changes in outpatient demand as well as A&E arrivals and non-elective admissions, while A&E arrivals, non-elective admissions and elective admissions are, in that order, the most important factors for bed occupancy rates. After an exhaustive review of the literature we note that an entire hospital model combining forecasting, simulation and optimization techniques has never been developed before.
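
    Forecasting methods and periods are selected by the mean absolute scaled error. A minimal sketch of that selection step (the standard MASE definition; the demand series, seasonal period and candidate forecasts are made-up illustrations, not Hospital Episode Statistics data):

        import numpy as np

        def mase(y_train, y_true, y_pred, m=7):
            """Mean Absolute Scaled Error: MAE of the forecast divided by the
            in-sample MAE of the seasonal naive forecast (period m, e.g. 7 for
            daily data with a weekly cycle). Values below 1 beat the benchmark."""
            y_train, y_true, y_pred = map(np.asarray, (y_train, y_true, y_pred))
            naive_mae = np.mean(np.abs(y_train[m:] - y_train[:-m]))
            return np.mean(np.abs(y_true - y_pred)) / naive_mae

        # Illustrative daily A&E attendances: pick the candidate with lowest MASE.
        rng = np.random.default_rng(0)
        history = 300 + 20 * np.sin(np.arange(120) * 2 * np.pi / 7) + rng.normal(0, 5, 120)
        train, actual = history[:-14], history[-14:]
        candidates = {
            "seasonal_naive": train[-7:].tolist() * 2,
            "mean": np.full(14, train.mean()),
        }
        scores = {name: mase(train, actual, pred) for name, pred in candidates.items()}
        print(min(scores, key=scores.get), scores)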
A linear optimization model was developed to estimate the required bed capacity and staffing needs of a mid-size hospital in England (using essential outputs from forecasting and forecasting-simulation) for each inpatient elective and non-elective specialty. In conclusion, these results provide key decision makers with a decision support tool for short- and long-term strategic planning, enabling rational and realistic plans. This hospital decision support system can become a crucial instrument for decision makers seeking efficient hospital services in England and other parts of the world.
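
    For the capacity side, one plausible linear-programming formulation (hypothetical costs, demands and turnover rates; not the thesis's actual model) chooses beds per specialty to maximise covered demand under a budget, using scipy's LP solver:

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical data for three inpatient specialties.
        demand   = np.array([120.0, 80.0, 60.0])   # forecast admissions per week
        turnover = np.array([2.0, 1.5, 1.0])       # admissions one bed can serve/week
        bed_cost = np.array([3.0, 4.0, 5.0])       # weekly cost per staffed bed
        budget   = 300.0

        n = len(demand)
        # Decision vector x = [covered_1..n, beds_1..n]; maximise total covered demand.
        c = np.concatenate([-np.ones(n), np.zeros(n)])

        # covered_i - turnover_i * beds_i <= 0  (can only cover what the beds serve)
        A_cap = np.hstack([np.eye(n), -np.diag(turnover)])
        b_cap = np.zeros(n)
        # total bed cost within budget
        A_budget = np.concatenate([np.zeros(n), bed_cost])[None, :]
        b_budget = [budget]

        bounds = [(0, d) for d in demand] + [(0, None)] * n
        res = linprog(c, A_ub=np.vstack([A_cap, A_budget]),
                      b_ub=np.concatenate([b_cap, b_budget]),
                      bounds=bounds, method="highs")

        covered, beds = res.x[:n], res.x[n:]
        print("beds per specialty:", np.round(beds, 1))
        print("demand coverage ratio:", np.round(covered / demand, 2))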