81 research outputs found

    Turku Centre for Computer Science – Annual Report 2013

    Due to a major reform of the organization and responsibilities of TUCS, its role, activities, and even structures were under reconsideration in 2013. The traditional pillar of collaboration at TUCS, doctoral training, was reorganized following changes at both universities under the renewed national system for doctoral education. Computer Science and Engineering and Information Systems Science are now accompanied by Mathematics and Statistics in newly established doctoral programmes at both the University of Turku and Åbo Akademi University. Moreover, both universities granted sufficient resources to their respective programmes for doctoral training in these fields, so that joint activities at TUCS can continue. This reorganization has the potential to prove a success in terms of scientific profile as well as the quality and quantity of scientific and educational results. International activities, characteristic of TUCS since its inception, remain strong. TUCS’ participation in European collaboration through the EIT ICT Labs Master’s and Doctoral School is now more active than ever. The new double-degree programmes at MSc and PhD level between the University of Turku and Fudan University in Shanghai, P.R. China, were successfully set up and are now running for their first year. The joint students will add to the already international atmosphere of the ICT House. The four new thematic research programmes set up according to the decision of the TUCS Board have now established themselves, and a number of events and other activities saw the light of day in 2013. The TUCS Distinguished Lecture Series gathered a large audience with its several prominent speakers. The development of these and other research centre activities continues, and new practices and structures will be initiated to support the tradition of close academic collaboration. The TUCS slogan, Where Academic Tradition Meets the Exciting Future, has proven true throughout these changes. Despite the dark clouds in the national and European economic sky, science and higher education in the field have retained all the key ingredients for success. Indeed, the future of ICT and Mathematics in Turku looks exciting.

    Workflow models for heterogeneous distributed systems

    The role of data in modern scientific workflows is becoming more and more crucial. The unprecedented amount of data available in the digital era, combined with recent advancements in Machine Learning and High-Performance Computing (HPC), has let computers surpass human performance in a wide range of fields, such as Computer Vision, Natural Language Processing and Bioinformatics. However, a solid data management strategy becomes crucial for key aspects like performance optimisation, privacy preservation and security. Most modern programming paradigms for Big Data analysis adhere to the principle of data locality: moving computation closer to the data to remove transfer-related overheads and risks. Still, there are scenarios in which it is worthwhile, or even unavoidable, to transfer data between different steps of a complex workflow. The contribution of this dissertation is twofold. First, it defines a novel methodology for distributed modular applications, allowing topology-aware scheduling and data management while separating business logic, data dependencies, parallel patterns and execution environments. In addition, it introduces computational notebooks as a high-level and user-friendly interface to this new kind of workflow, aiming to flatten the learning curve and improve the adoption of such a methodology. Each of these contributions is accompanied by a full-fledged, Open Source implementation, which has been used for evaluation purposes and allows the interested reader to experience the related methodology first-hand. The validity of the proposed approaches has been demonstrated on a total of five real scientific applications in the domains of Deep Learning, Bioinformatics and Molecular Dynamics Simulation, executing them on large-scale mixed cloud-HPC infrastructures.
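
    The separation between business logic, data dependencies and execution environments described above can be pictured with a small sketch. The Step and Site classes, the dataset names and the greedy placement rule below are invented for illustration and are not the dissertation's actual API; they only show what topology-aware, locality-driven placement of a workflow step means.

        # Illustrative sketch (invented API): workflow steps declare their data
        # dependencies, and a toy scheduler places each step on the site that
        # already holds most of its input data.
        from dataclasses import dataclass, field

        @dataclass
        class Step:
            name: str
            inputs: list      # names of datasets the step reads
            outputs: list     # names of datasets the step produces
            run: object       # business logic, independent of placement

        @dataclass
        class Site:
            name: str                                  # e.g. "cloud", "hpc"
            datasets: set = field(default_factory=set)

        def schedule(step, sites):
            # Topology-aware placement: prefer the site holding the most inputs,
            # so only the missing datasets have to be transferred.
            return max(sites, key=lambda s: len(s.datasets & set(step.inputs)))

        cloud = Site("cloud", {"raw_reads"})
        hpc = Site("hpc", {"raw_reads", "reference_genome"})
        align = Step("align", inputs=["raw_reads", "reference_genome"],
                     outputs=["alignments"], run=lambda: None)

        target = schedule(align, [cloud, hpc])
        print(f"run '{align.name}' on '{target.name}'")   # -> run 'align' on 'hpc'
        target.datasets.update(align.outputs)              # outputs stay where produced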

    Contributions to Desktop Grid Computing : From High Throughput Computing to Data-Intensive Sciences on Hybrid Distributed Computing Infrastructures

    Since the mid-1990s, Desktop Grid Computing - i.e. the idea of using a large number of remote PCs distributed over the Internet to execute large parallel applications - has proved to be an efficient paradigm for providing large computational power at a fraction of the cost of a dedicated computing infrastructure. This document presents my contributions over the last decade to broadening the scope of Desktop Grid Computing. My research has followed three directions. The first established new methods to observe and characterize Desktop Grid resources and developed experimental platforms to test and validate our approach under conditions close to reality. The second focused on integrating Desktop Grids into e-science Grid infrastructures (e.g. EGI), which requires addressing many challenges such as security, scheduling, quality of service, and more. The third investigated how to support large-scale data management and data-intensive applications on such infrastructures, including support for new and emerging data-oriented programming models. This manuscript reports not only on the scientific achievements and the technologies developed to support our objectives, but also on the international collaborations and projects I have been involved in, as well as the scientific mentoring that motivates my candidature for the Habilitation à Diriger des Recherches.

    Laitteistokiihdytetyn vuoronnuksen suorituskykyanalyysi (Performance analysis of hardware-accelerated scheduling)

    Performance analysis of heterogeneous MPSoCs (Multiprocessor Systems-on-Chip) is difficult. The non-determinism of parallel computation, communication delays and memory accesses force the system components into complex interactions. Hardware acceleration is used both to speed up the computations and the scheduling on MPSoCs. Finding an accompanying software structure and efficient scheduling algorithms is not a straightforward task. In this thesis we investigate the use of simulation, measurement and modeling methods for analyzing the performance of heterogeneous MPSoCs. The viewpoint of this thesis is simulation and modeling: how a high-abstraction-level simulation methodology can be used for modeling and analyzing parallel systems based on MPSoCs. In particular, we are interested in the efficient use of hardware-accelerated scheduling mechanisms and how they can be analyzed. Both parallel simulation and the simulation of parallel systems comprise many different methods, tools and approaches that attempt to balance competing goals and cope with a specific subset of the problem space. The challenge is that in all approaches most of the simulation and modeling problems remain, and new ones emerge. This thesis shows that the resource-network methodology and dynamic scheduling models are a viable approach to modeling heterogeneous MPSoCs with accelerators. The concrete contributions are based on upgrading an existing simulation framework to support parallelism: on the one hand, the modeling concepts have been widened, and on the other hand, the supporting mechanisms have been implemented. The work in progress was published in a peer-reviewed international scientific workshop and the final results in a peer-reviewed international scientific conference. The toolset has also been used in teaching organized jointly by several universities, as well as in industry.
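
    As a toy illustration of why hardware-accelerated scheduling matters for makespan, the following sketch dispatches a fixed task list greedily to processing elements, charging a constant cost per scheduling decision. The task durations, the number of processing elements and the overhead values are invented, and the model ignores communication and memory effects; it is a toy greedy-dispatch model, not the resource-network methodology or the simulation framework developed in the thesis.

        # Toy discrete-event sketch (invented numbers): tasks are dispatched one
        # scheduling decision at a time, and each decision costs a fixed overhead,
        # so a cheaper (hardware) scheduler shortens the makespan.
        def simulate(task_durations, n_pes, dispatch_overhead):
            pe_free_at = [0.0] * n_pes           # when each processing element is idle again
            clock = 0.0                          # the scheduler's own clock
            for d in task_durations:
                clock += dispatch_overhead       # the scheduling decision itself takes time
                pe = min(range(n_pes), key=lambda i: pe_free_at[i])
                start = max(clock, pe_free_at[pe])
                pe_free_at[pe] = start + d
            return max(pe_free_at)

        tasks = [5.0, 3.0, 8.0, 2.0, 6.0, 4.0, 7.0, 1.0]
        print("software-scheduled makespan:", simulate(tasks, n_pes=4, dispatch_overhead=1.0))
        print("hardware-scheduled makespan:", simulate(tasks, n_pes=4, dispatch_overhead=0.1))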

    Optimización de arquitecturas distribuidas para el procesado de datos masivos (Optimization of distributed architectures for massive data processing)

    Thesis by compendium of publications. The use of systems for the efficient treatment of large data volumes has grown in popularity during the last few years. This has led to the development of new technologies, methods and algorithms that allow infrastructures to be used efficiently. The treatment of large data volumes is not exempt from numerous problems and challenges, some of which this work attempts to improve. Within the existing possibilities, we must take into account the evolution that systems have undergone during the last years and the room for improvement in each of them. The first system of study, the Grid, constitutes an initial approach to massive processing and represents one of the first distributed systems for the treatment of large data sets. By working on the modernization of one of the data access mechanisms, the treatments carried out in current genomics are improved. The studies presented are centred on the Burrows-Wheeler transform, already known in genomic analysis for its ability to improve alignment times for short polynucleotide chains. This improvement in times is refined by reducing remote accesses through an intermediate cache system that optimizes execution in an already consolidated Grid system. The cache is implemented as a complement to the GFAL standard access library used in the IberGrid infrastructure. In a second step, the treatment of data in Big Data architectures is considered. Improvements are made in both the Lambda and the Kappa architecture by searching for methods to treat large volumes of multimedia information. While the Lambda architecture uses Apache Hadoop as the technology for this treatment, the Kappa architecture uses Apache Storm as a real-time distributed computing system. In both architectures the scope of use is extended and the execution is optimized by applying algorithms that alleviate the problems of each technology. The data volume problem is the centre of a last step, which allows the microservices architecture to be improved. The total number of nodes running in a processing system gives an estimate of the magnitudes that can be handled when treating large volumes; the ability of the system to grow or shrink therefore enables optimal governance. By proposing a bio-inspired system, a dynamic and distributed self-scaling method is provided that improves the behaviour of commonly used methods in the face of unpredictable changing circumstances. The three key magnitudes of Big Data, also known as the V's, are represented and improved: velocity, enriching data access by reducing the treatment times of searches in bioinformatics Grid systems; variety, using multimedia data, which is less common than tabular data; and finally volume, increasing self-scaling capabilities through software containers and bio-inspired algorithms. Herrera Hernández, J. (2020). Optimización de arquitecturas distribuidas para el procesado de datos masivos [Unpublished doctoral dissertation]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/149374
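
    The Burrows-Wheeler transform at the heart of the Grid-side work can be shown in a few lines. The naive construction below sorts all rotations explicitly and is only meant to illustrate the transform; production read aligners derive it from a suffix array instead.

        # Naive Burrows-Wheeler transform (illustration only): production aligners
        # construct it from a suffix array instead of sorting all rotations.
        def bwt(text, sentinel="$"):
            s = text + sentinel                               # unique end-of-string marker
            rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
            return "".join(rot[-1] for rot in rotations)      # last column of sorted rotations

        print(bwt("ACAACG"))   # -> 'GC$AAAC': equal characters cluster together, aiding indexing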

    Security Analysis of System Behaviour - From "Security by Design" to "Security at Runtime" -

    The Internet today provides the environment for novel applications and processes which may evolve far beyond their pre-planned scope and purpose. Security analysis is growing in complexity with the increase in functionality, connectivity, and dynamics of current electronic business processes. Technical processes within critical infrastructures also have to cope with these developments. To tackle the complexity of security analysis, the application of models is becoming standard practice. However, model-based support for security analysis is needed not only in pre-operational phases but also during process execution, in order to provide situational security awareness at runtime. This cumulative thesis provides three major contributions to modelling methodology. Firstly, it provides an approach for model-based analysis and verification of security and safety properties in order to support fault prevention and fault removal in system design or redesign. Furthermore, some construction principles for the design of well-behaved scalable systems are given. The second topic is the analysis of the exposure of vulnerabilities in the software components of networked systems to exploitation by internal or external threats. This kind of fault forecasting allows the security assessment of alternative system configurations and security policies. Validation and deployment of security policies that minimise the attack surface can then improve fault tolerance and mitigate the impact of successful attacks. Thirdly, the approach is extended to runtime applicability. An observing system monitors an event stream from the observed system with the aim of detecting faults - deviations from the specified behaviour or security compliance violations - at runtime. Furthermore, knowledge about the expected behaviour given by an operational model is used to predict faults in the near future. Building on this, a holistic security management strategy is proposed. The architecture of the observing system is described, and the applicability of model-based security analysis at runtime is demonstrated using processes from several industrial scenarios. The results of this cumulative thesis are provided by 19 selected peer-reviewed papers.
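
    The runtime part of the approach - checking an observed event stream against a model of the specified behaviour - can be sketched as follows. The state-machine model, the event names and the monitor interface are invented for illustration; the operational models and industrial processes used in the thesis are considerably richer.

        # Minimal runtime monitor (invented model and events): replay an event
        # stream against a state machine of the specified behaviour and flag
        # every event the model does not allow.
        MODEL = {                                   # (state, event) -> next state
            ("idle", "login"): "authenticated",
            ("authenticated", "open_session"): "active",
            ("active", "transfer_data"): "active",
            ("active", "close_session"): "idle",
        }

        def monitor(events, state="idle"):
            for ev in events:
                nxt = MODEL.get((state, ev))
                if nxt is None:
                    yield ev, f"DEVIATION in state '{state}'"   # possible fault or attack
                else:
                    state = nxt
                    yield ev, "ok"

        stream = ["login", "open_session", "transfer_data", "escalate_privileges"]
        for ev, verdict in monitor(stream):
            print(f"{ev:22s} {verdict}")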

    Proyecto Docente e Investigador, Trabajo Original de Investigación y Presentación de la Defensa, preparado por Germán Moltó para concursar a la plaza de Catedrático de Universidad, concurso 082/22, plaza 6708, área de Ciencia de la Computación e Inteligencia Artificial

    This document contains the teaching and research project of the candidate Germán Moltó Martínez, submitted as a requirement for the competitive examination for access to positions in the University Teaching Bodies. Specifically, the document concerns the competition for position 6708, Full Professor (Catedrático de Universidad) in the area of Computer Science, in the Departamento de Sistemas Informáticos y Computación of the Universitat Politècnica de València. The position is attached to the Escola Técnica Superior d'Enginyeria Informàtica, and its teaching profile covers the courses "Infraestructuras de Cloud Público" (Public Cloud Infrastructures) and "Estructuras de Datos y Algoritmos" (Data Structures and Algorithms). The academic, teaching and research record is also included, as well as the presentation used during the defence. Germán Moltó Martínez (2022). Proyecto Docente e Investigador, Trabajo Original de Investigación y Presentación de la Defensa, preparado por Germán Moltó para concursar a la plaza de Catedrático de Universidad, concurso 082/22, plaza 6708, área de Ciencia de la Computación e Inteligencia Artificial. http://hdl.handle.net/10251/18903

    Distributed and Lightweight Meta-heuristic Optimization method for Complex Problems

    The world is becoming more and more complex every day. Resources are limited, and using them efficiently is one of the most important requirements. Finding an efficient and optimal solution to complex problems calls for practical methods. During the last decades, several optimization approaches have been presented that can be applied to different optimization problems and achieve different performance on them. Parameters such as the type of search space can have a significant effect on the results. Of the two main categories of optimization methods (deterministic and stochastic), stochastic methods work more efficiently on large complex problems than deterministic ones. In highly complex problems, however, stochastic optimization methods also suffer from issues such as long execution times, convergence to local optima, incompatibility with distributed systems, and dependence on the type of search space. This thesis therefore presents a distributed and lightweight meta-heuristic optimization method (MICGA) for complex problems, focusing on four main tracks: 1) the primary goal is to improve execution time; 2) the proposed method increases the stability and reliability of the results by using a multi-population strategy; 3) MICGA is compatible with distributed systems; and 4) MICGA is applied to different types of optimization problems with different kinds of search spaces (continuous, discrete and order-based optimization problems). MICGA has been compared with other efficient optimization approaches, and the results show that the proposed work achieves clear improvements on the main issues of stochastic methods mentioned above.
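
    The multi-population strategy can be illustrated with a generic island-model genetic algorithm: several populations evolve independently and periodically exchange their best individuals, which improves stability and maps naturally onto distributed execution. The sketch below minimises a simple sphere function; the operators, parameters and ring migration scheme are generic textbook choices, not the MICGA implementation.

        # Generic island-model sketch (not the MICGA implementation): several
        # populations evolve independently and periodically exchange their best
        # individuals over a ring topology.
        import random

        DIM, ISLANDS, POP, GENS, MIGRATE_EVERY = 5, 4, 20, 100, 10
        fitness = lambda x: sum(v * v for v in x)            # sphere function, to be minimised

        def evolve(pop):
            # One generation: tournament selection, blend crossover, Gaussian mutation.
            def pick():
                return min(random.sample(pop, 3), key=fitness)
            return [[(a + b) / 2 + random.gauss(0, 0.1) for a, b in zip(pick(), pick())]
                    for _ in range(len(pop))]

        random.seed(0)
        islands = [[[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
                   for _ in range(ISLANDS)]

        for gen in range(1, GENS + 1):
            islands = [evolve(pop) for pop in islands]
            if gen % MIGRATE_EVERY == 0:                     # ring migration of the best
                bests = [min(pop, key=fitness) for pop in islands]
                for i, pop in enumerate(islands):
                    pop[random.randrange(POP)] = bests[(i - 1) % ISLANDS]

        print("best fitness:", min(fitness(min(pop, key=fitness)) for pop in islands))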

    Choosing between remote I/O versus staging in distributed environments

    Today, scientific applications and experiments have become increasingly complex and more demanding in terms of their computational and data requirements. The amount of data generated and used has grown at a very rapid rate. While tens or hundreds of terabytes of data for a single application are common today, petabytes and even exabytes will be common in a few years. One of the major challenges in distributed computing environments is how to access these large datasets remotely over the network. Data staging and remote I/O are the most widely used data access methods for distributed applications. Application developers generally choose one over the other intuitively, without making any scientific comparison specific to their applications, since there is no generic model available that they can use. In this thesis, we develop generic models and set guidelines to help application developers choose the most appropriate data access method for their application. We define the parameters that potentially affect the end-to-end performance of distributed applications which need to access remote data. To achieve our goal, we implement a series of synthetic benchmark applications to simulate different data access patterns. We run these benchmark applications on different distributed computing settings with different parameters, such as network bandwidth, server and client capabilities, and data access ratio. We also use different remote I/O protocols to show the importance of the protocol in making a decision. We use regression analysis to develop applicable generic models for comparing different data access methods, and test our models in a real-life application. The main contribution of this thesis is a set of generic models that can be applied to most data-intensive distributed applications to decide the best data access technique for those applications. Our models give scientists and application developers an opportunity to choose the best data access method before actually running the application.
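
    The staging versus remote I/O trade-off can be pictured with a deliberately simplified cost model: staging pays for transferring the whole dataset once, while remote I/O moves only the bytes actually accessed but pays a round trip per request. The formulas and all parameter values below are invented for illustration and are not the regression models derived in the thesis.

        # Deliberately simplified cost model (invented parameters): staging moves
        # the whole dataset once; remote I/O moves only the accessed fraction but
        # pays one round trip per request.
        def staging_time(dataset_gb, bandwidth_mbps):
            return dataset_gb * 8000 / bandwidth_mbps                 # seconds; local access afterwards ~ free

        def remote_io_time(dataset_gb, access_ratio, bandwidth_mbps, n_requests, rtt_s):
            return dataset_gb * access_ratio * 8000 / bandwidth_mbps + n_requests * rtt_s

        dataset_gb, bw = 100, 1000                                    # 100 GB over a 1 Gb/s link
        for ratio in (0.05, 0.5, 1.0):                                # fraction of the data actually read
            s = staging_time(dataset_gb, bw)
            r = remote_io_time(dataset_gb, ratio, bw, n_requests=10_000, rtt_s=0.01)
            print(f"access ratio {ratio:4.2f}: staging {s:6.0f} s, remote I/O {r:6.0f} s")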

    MINA - a tool for MSC-based performance analysis and simulation of distributed systems

    Performance analysis can help to address quantitative system analysis from the early stages of the system development life cycle, e.g., to compare design alternatives or to identify system bottlenecks. This thesis addresses the problem of performance evaluation of distributed systems by unifying analytical and simulative evaluation techniques in the MINA tool, so that both techniques can be exploited. We suggest a modelling tool chain to evaluate the performance of distributed systems, such as computer and communication systems, based on a Message Sequence Chart (MSC) description of the system. MSC-based performance evaluation uses performance models that are based on an MSC description of a system to evaluate system performance measures. To determine system performance, these descriptions can be extended with notions of time consumption and resource usage and then included in a system performance model. Based on this single model specification, analytical as well as simulative techniques can be applied to obtain either quick mean-value results through queueing network analysis, or confidence intervals and transient measures through simulation. The applicability to real-world systems and the advantages of the tool have been demonstrated with a large application example in the field of mobile communication systems, and its effectiveness has been evaluated by comparing it with other approaches. The experimental results show that the tool is scalable, in that it can model simple as well as complex systems. Moreover, it is straightforward to use and able to find reasonable solutions in an efficient manner.
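
    The complementary use of analysis and simulation can be illustrated on the simplest queueing model: for an M/M/1 station the mean response time has the closed form 1/(mu - lambda), and a short simulation of the same station should approach it. This is a textbook example, not one of the MINA models.

        # Textbook M/M/1 example (not a MINA model): analytic mean response time
        # 1/(mu - lam) versus an estimate from simulating the same queue with
        # Lindley's recurrence W_next = max(0, W + S - A).
        import random

        lam, mu, n = 0.8, 1.0, 200_000        # arrival rate, service rate, customers
        print("analytic mean response time :", 1 / (mu - lam))

        random.seed(1)
        w = total = 0.0
        for _ in range(n):
            s = random.expovariate(mu)        # service time of this customer
            a = random.expovariate(lam)       # inter-arrival time to the next one
            total += w + s                    # response time = waiting + service
            w = max(0.0, w + s - a)
        print("simulated mean response time:", total / n)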