
    High Availability and Scalability of Mainframe Environments using System z and z/OS as example

    Mainframe computers are the backbone of industrial and commercial computing, hosting the most relevant and critical data of businesses. One of the most important mainframe environments is IBM System z with the operating system z/OS. This book introduces the mainframe technology of System z and z/OS with respect to high availability and scalability, and highlights how both are addressed at different levels of the hardware and software stack to satisfy the needs of large IT organizations.

    A technical study of charge back and monitoring systems in virtual environment

    In recent years the shared-services concept has become an integral part of business. These shared services can take the form of information technology, engineering, and much more. Service providers spend huge amounts of money to build an infrastructure that can provide efficient and valued services to their customers. In the IT business these services vary from providing basic consultancy and managing customers' IT operations to running high-priority business processes such as online banking. Customers pay for these services, so a resource-usage metering mechanism is required to charge users accurately; at the same time, a monitoring mechanism is required to check the services being provided for resource contention and service degradation and to support future capacity planning. If a service provider is unable to develop an accurate chargeback and monitoring mechanism, the relationship between provider and customer becomes a point of frustration for both sides. Chargeback and monitoring systems developed for physical environments cannot measure resource usage in a virtual environment, because in a virtual environment (z/VM) resources are shared between users and it becomes difficult to attribute resource usage to a specific user. To date, only a few tools have been developed that provide efficient resource metering and monitoring in a z/VM virtual environment, and because every business has its own requirements and system setup, these tools usually need customization to fit the business. This work concentrates on what kind of resource-utilization data is available on z/VM and on the Linux guests running on it, in order to effectively charge customers running their guest Linux operating systems in a z/VM-based virtual environment, and to monitor CPU and memory utilization to check whether the memory-allocation estimates made by the system (PWSS) for Linux guests running different applications are good estimates or require optimization; memory utilization is considered especially expensive in a virtual environment in the context of system performance. The study also includes a comparison between this chargeback technique and some commercial products from IBM and CA (Computer Associates) that provide chargeback and monitoring facilities in z/VM-based virtual environments, and discusses the benefits of this work in the proposed environment. (Master's thesis in network and system administration.)
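
    As a rough illustration of the chargeback idea described above, the sketch below computes a per-guest charge from CPU and memory usage figures. It is a minimal sketch: the rates, guest names, usage numbers and record layout are assumptions for illustration, not taken from the thesis or from z/VM accounting records.

    # Illustrative chargeback sketch (Python). Rates, guest names and usage
    # figures are hypothetical; real input would come from z/VM monitor or
    # accounting records for each Linux guest.

    CPU_RATE_PER_SECOND = 0.002    # currency units per CPU-second (assumed tariff)
    MEM_RATE_PER_GIB_HOUR = 0.01   # currency units per GiB-hour of resident memory

    # Per-guest usage for one billing period.
    usage = {
        "LINUX01": {"cpu_seconds": 54_000, "avg_mem_gib": 4.0, "hours": 720},
        "LINUX02": {"cpu_seconds": 12_500, "avg_mem_gib": 2.0, "hours": 720},
    }

    def chargeback(record: dict) -> float:
        """Charge = CPU usage cost + memory residency cost, the two resources metered here."""
        cpu_cost = record["cpu_seconds"] * CPU_RATE_PER_SECOND
        mem_cost = record["avg_mem_gib"] * record["hours"] * MEM_RATE_PER_GIB_HOUR
        return round(cpu_cost + mem_cost, 2)

    for guest, record in usage.items():
        print(f"{guest}: {chargeback(record)} units")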

    The economic value of remote sensing of earth resources from space: An ERTS overview and the value of continuity of service. Volume 5: Inland water resources

    The economic value of an ERTS system in the area of inland water resources management is investigated. Benefits are attributed to new capabilities for managing inland water resources in the fields of power generation, agriculture, and urban water supply. These benefits are obtained in the areas of equal capability (cost savings) and increased capability (equal budget), and are estimated by applying conservative assumptions to Federal budgeting information, Congressional appropriation hearings, and ERTS technical capabilities.

    Introducing the Game Design Matrix: A Step-by-Step Process for Creating Serious Games

    The Game Design Matrix makes effective game design accessible to novice game designers. Serious games are a powerful tool for educators seeking to boost student engagement and application in academic environments, but they can be difficult to incorporate into existing courses due to the limited availability and high cost of quality game design. The Game Design Matrix was used by two educators, both novice game designers, to create serious games. The games were assessed in an academic setting and observed to be effective in terms of engagement, interaction, and achieving higher levels of learning.

    Certifications of Critical Systems – The CECRIS Experience

    In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level both in terms of risks to human life and in terms of economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on cost-effective verification and validation processes that were gained during work in the European research project CECRIS (Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on those aspects that turn out to be most difficult and most important for the current and future critical-systems industry: the effective use of methodologies, processes and tools. The CECRIS project took a step forward in the growing field of development, verification, validation and certification of critical systems, focusing on the most difficult and most important aspects of that process. Starting from the scientific and industrial state-of-the-art methodologies for system development and the impact of their usage on the verification, validation and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, setting guidelines to support engineers during the planning of the verification and validation phases.

    A General Methodology to Optimize and Benchmark Edge Devices

    The explosion of Internet of Things (IoT), embedded, and “smart” devices has also brought an influx of “general purpose” single-board computers, also referred to as “edge devices.” Determining whether one of these generic devices meets the needs of a given new task, however, can be challenging. Software written generically to be portable or plug-and-play may be too bloated to work properly without significant modification due to the much tighter hardware resources. Previous work in this area has focused on micro- or chip-level benchmarking, which is mainly useful for chip designers or low-level system integrators. A higher, macro-level method is needed not only to observe the behavior of these devices under load but also to ensure they are appropriately configured for the new task, especially as they begin to be integrated on platforms with a higher cost of failure, such as self-driving cars or drones. In this research we propose a macro-level methodology that iteratively benchmarks and optimizes specific workloads on edge devices. With automation provided by Ansible, a multi-stage 2^k full factorial experiment and a robust analysis process ensure the test workload is maximizing the use of available resources before a final benchmark score is established. Framing the validation tests with a family of network security monitoring applications provides an end-to-end scenario that fully exercises and validates the developed process, and also offers an additional vector for future research in the realm of network security. The analysis of the results shows that the developed process met its original design goals and intentions, and that the latest edge devices such as the XAVIER, TX2 and RPi4 can easily perform as an edge network sensor.
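
    As a minimal sketch of the 2^k full factorial screening step mentioned above (the real process is orchestrated with Ansible on the devices themselves), the snippet below enumerates all high/low combinations of a few configuration factors and estimates each factor's main effect on a benchmark score. The factor names and the stand-in measure() function are illustrative assumptions, not the actual workloads or tuning knobs used in the study.

    # Hedged sketch of a 2^k full factorial screening: run every high/low
    # combination of k factors and estimate each factor's main effect.
    from itertools import product

    factors = ["high_cpu_governor", "extra_worker_threads", "large_ring_buffers"]  # k = 3

    def measure(settings: dict) -> float:
        # Placeholder for a real benchmark run (e.g. packets/s from a sensor workload).
        return 100 + 20 * settings["extra_worker_threads"] + 5 * settings["large_ring_buffers"]

    runs = []
    for levels in product((0, 1), repeat=len(factors)):   # 2^k combinations
        settings = dict(zip(factors, levels))
        runs.append((settings, measure(settings)))

    for name in factors:
        high = [score for s, score in runs if s[name] == 1]
        low = [score for s, score in runs if s[name] == 0]
        effect = sum(high) / len(high) - sum(low) / len(low)   # main-effect estimate
        print(f"{name}: main effect ~ {effect:.1f}")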

    A risk management framework for a complex adaptive transport system

    Get PDF
    Over the last ten to fifteen years, science has made significant advances in fields relevant for risk management. However, current risk management practices in industry have not yet benefitted from these developments.
The research question addressed in this dissertation is: What kind of risk management framework should be used for managing transport risks when the modern risk perspectives and the latest understanding of safety are embraced, and the transport system is considered a complex adaptive system? The focus of this research is on transport risks, taking the perspective of a national transport safety agency, tasked with overseeing safety across several modes of transport, including aviation, maritime, railway and road safety. The scientific literature on risk and risk assessment, safety and safety management, and complex adaptive systems is reviewed. The research illustrates that a modern risk perspective recognizes the importance of uncertainty and strength of knowledge in risk analysis, as well as the role of surprises. The transport system is identified as a complex adaptive system, characterized by a high number of interactions, emergence, multiple feedback loops, nonlinear phenomena, unpredictability and counter-intuitiveness. The recommended ways to interact with such complex systems and to try to achieve positive change are explained. Concepts related to safety management are also investigated, especially the concept of resilience, which is interpreted as graceful extensibility of teams or organizations, or as sustained adaptability. Evidence of existing risk management frameworks in both the industry and the scientific literature is outlined, and reference is made to the international ISO 31000 standard for risk management. Based on the literature review, a set of criteria for a modern risk management process is developed. A risk management framework for managing transport risks which embraces modern risk perspectives and accounts for the transport system as a complex adaptive system is proposed. It enables risks in all transport modes to be presented in a single risk picture and supports decision-making to maximize the safety impact achievable with limited resources. The impact is further enhanced by intervention strategies such as adaptive policies and experimentation, which are well-suited to complex systems. The framework is validated against the criteria developed, and by comparison to existing methods. A case study presents the ongoing implementation of the developed risk management framework at the Finnish Transport Safety Agency. Both the proposed risk management framework and the dissertation are structured according to the ISO 31000 framework.
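
    To make the idea of a single cross-modal risk picture more concrete, the sketch below shows one possible data structure: each risk entry carries a likelihood, a consequence, and a strength-of-knowledge annotation in line with the modern risk perspective, and entries from all transport modes are ranked together to support allocation of limited resources. The field names, scales, and example entries are the editor's illustrative assumptions, not part of the framework itself.

    # Illustrative only: a toy cross-modal risk register ranked for resource allocation.
    from dataclasses import dataclass

    @dataclass
    class RiskItem:
        mode: str            # transport mode: aviation, maritime, railway or road
        description: str
        likelihood: int      # 1 (rare) .. 5 (frequent), assumed ordinal scale
        consequence: int     # 1 (minor) .. 5 (catastrophic), assumed ordinal scale
        knowledge: str       # strength of knowledge behind the assessment: weak / medium / strong

        def score(self) -> int:
            return self.likelihood * self.consequence

    register = [
        RiskItem("road", "fatigue-related heavy-vehicle accidents", 4, 4, "medium"),
        RiskItem("aviation", "runway incursions at mixed-traffic airports", 2, 5, "strong"),
        RiskItem("maritime", "groundings in narrow fairways", 3, 4, "weak"),
    ]

    # Rank all modes in one picture, flagging weak-knowledge items so that
    # potential surprises are not hidden behind a single number.
    for item in sorted(register, key=RiskItem.score, reverse=True):
        flag = "  (weak knowledge - interpret with caution)" if item.knowledge == "weak" else ""
        print(f"{item.mode:8} {item.description}: score {item.score()}{flag}")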

    Cross-System Personalization

    The World Wide Web provides access to a wealth of information and services to a huge and heterogeneous user population on a global scale. One important and successful design mechanism in dealing with this diversity of users is to personalize Web sites and services, i.e. to customize system content, characteristics, or appearance with respect to a specific user. Each system independently builds up user profiles and uses this information to personalize the service offering. Such isolated approaches have two major drawbacks: firstly, investments of users in personalizing a system either through explicit provision of information or through long and regular use are not transferable to other systems. Secondly, users have little or no control over the information that defines their profile, since user data are deeply buried in personalization engines running on the server side. Cross-system personalization (CSP) (Mehta, Niederee, & Stewart, 2005) allows for sharing information across different information systems in a user-centric way and can overcome the aforementioned problems. Information about users, which is originally scattered across multiple systems, is combined to obtain maximum leverage and reuse of information. Our initial approaches to cross-system personalization relied on each user having a unified profile which different systems can understand. The unified profile contains facets modeling aspects of a multidimensional user, and is stored inside a "Context Passport" that the user carries along in his/her journey across information space. The user's Context Passport is presented to a system, which can then understand the context in which the user wants to use the system. The basis of 'understanding' in this approach is of a semantic nature, i.e. the semantics of the facets and dimensions of the unified profile are known, so that the latter can be aligned with the profiles maintained internally at a specific site. The results of the personalization process are then transferred back to the user's Context Passport via a protocol understood by both parties. The main challenge in this approach is to establish some common and globally accepted vocabulary and to create a standard every system will comply with. Machine learning techniques provide an alternative approach to enable CSP without the need for accepted semantic standards or ontologies. The key idea is that one can try to learn dependencies between profiles maintained within one system and profiles maintained within a second system based on data provided by users who use both systems and who are willing to share their profiles across systems – which we assume is in the interest of the user. Here, instead of requiring a common semantic framework, it is only required that a sufficient number of users cross between systems and that there is enough regularity among users that one can learn within a user population, a fact that is commonly exploited in collaborative filtering. In this thesis, we aim to provide a principled approach towards achieving cross-system personalization. We describe both semantic and learning approaches, with a stronger emphasis on the learning approach. We also investigate the privacy and scalability aspects of CSP and provide solutions to these problems. Finally, we also explore in detail the aspect of robustness in recommender systems.
We motivate several approaches for robustifying collaborative filtering and provide the best-performing algorithm reported so far for detecting malicious attacks. The personalization of software systems is of steadily increasing importance, particularly in connection with web applications such as search engines, community portals, or electronic-commerce sites that address large, highly diversified user groups. Since explicit personalization typically involves considerable effort for the user, many applications fall back on implicit techniques for automatic personalization, in particular recommender systems, which typically use methods such as collaborative or social filtering. While these methods do not require the explicit construction of user profiles through answering questions and giving explicit feedback, the quality of the implicit personalization depends strongly on the available data volume, such as transaction, query, or click logs. If little is known about a user in this sense, no reliable personal adaptations or recommendations can be made. This dissertation addresses the question of how personalization can be enabled and supported across system boundaries ("cross system"), discussing mainly implicit personalization techniques but also, to a limited extent, explicit approaches such as the semantic Context Passport. The dissertation thus treats an important research question of high practical relevance that has been addressed only incompletely and unsatisfactorily in the recent scientific literature on this topic. Automatic recommender systems using social-filtering techniques have been popularized since roughly the mid-1990s with the emergence of the first e-commerce wave, in particular through projects such as Information Tapestry, GroupLens, and Firefly. In the late 1990s and the early 2000s, the main focus of the research literature was on improved statistical methods and advanced inference techniques by which implicit observations can be mapped to concrete adaptation or recommendation actions. In recent years, the question of how personalization systems can be better adapted to the practical requirements of particular applications has come to the fore, in particular the suitable adaptation and extension of existing techniques. It is within this context that the present work is situated.
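
    As a toy illustration of the learning approach to CSP outlined above, the sketch below fits a linear mapping between two systems' profile spaces using users who appear in both, and then predicts the second-system profile of a user known only to the first system. The data, profile dimensions, and the choice of a plain least-squares mapping are assumptions made for illustration; they stand in for the more elaborate learning methods described in the thesis.

    # Illustrative sketch: learn a linear map between two systems' profile spaces
    # from users who appear in both, then predict a missing profile.
    import numpy as np

    rng = np.random.default_rng(0)

    # Fake profile matrices for the overlapping users (rows = users).
    # System A profiles have 5 features, system B profiles have 3.
    n_overlap = 200
    profiles_a = rng.normal(size=(n_overlap, 5))
    true_map = rng.normal(size=(5, 3))
    profiles_b = profiles_a @ true_map + 0.1 * rng.normal(size=(n_overlap, 3))

    # Least-squares estimate of the mapping A -> B from the overlapping users.
    mapping, *_ = np.linalg.lstsq(profiles_a, profiles_b, rcond=None)

    # A user known only to system A: predict what their system-B profile would be.
    new_user_a = rng.normal(size=(1, 5))
    predicted_b = new_user_a @ mapping
    print("predicted system-B profile:", np.round(predicted_b, 2))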