
    Heterogeneous concurrent computing with exportable services

    Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data-driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experience has demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
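    The abstract does not spell out TPVM's programming interface, but the service/thread model it describes can be illustrated with a small sketch. The Python toy below is a minimal sketch only: the names (ServiceHost, export_service, invoke) and the in-process registry are hypothetical, standing in for what a real TPVM program would do by registering services with the PVM daemons and running threads on remote hosts.

```python
# Illustrative sketch of exportable services with data-driven, thread-based
# activation; all names are hypothetical, not TPVM's actual API.
import threading
import queue

class ServiceHost:
    """Toy host that 'exports' named services and runs each invocation on a lightweight thread."""
    def __init__(self):
        self._services = {}

    def export_service(self, name, fn):
        # In TPVM this would advertise the service across the virtual machine;
        # here it is purely local.
        self._services[name] = fn

    def invoke(self, name, payload):
        # Data-driven activation: the arrival of 'payload' triggers the computation.
        result_q = queue.Queue(maxsize=1)
        worker = threading.Thread(
            target=lambda: result_q.put(self._services[name](payload)))
        worker.start()
        return result_q  # caller retrieves the result later, overlapping other work

host = ServiceHost()
host.export_service("square", lambda x: x * x)
pending = host.invoke("square", 7)
print(pending.get())  # -> 49
```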

    An Architectural Framework for Performance Analysis: Supporting the Design, Configuration, and Control of DIS/HLA Simulations

    Technology advances are providing greater capabilities for most distributed computing environments. However, the advances in capabilities are paralleled by progressively increasing amounts of system complexity. In many instances, this complexity can lead to a lack of understanding regarding bottlenecks in the run-time performance of distributed applications. This is especially true in the domain of distributed simulations, where a myriad of enabling technologies are used as building blocks to provide large-scale, geographically dispersed, dynamic virtual worlds. Persons responsible for the design, configuration, and control of distributed simulations need to understand the impact of decisions made regarding the allocation and use of the logical and physical resources that comprise a distributed simulation environment and how they affect run-time performance. Distributed Interactive Simulation (DIS) and High Level Architecture (HLA) simulation applications historically provide some of the most demanding distributed computing environments in terms of performance, and as such have a justified need for performance information sufficient to support decision-makers trying to improve system behavior. This research addresses two fundamental questions: (1) Is there an analysis framework suitable for characterizing DIS and HLA simulation performance? and (2) What kind of mechanism can be used to adequately monitor, measure, and collect performance data to support different performance analysis objectives for DIS and HLA simulations? This thesis presents a unified, architectural framework for DIS and HLA simulations, provides details on a performance monitoring system, and shows its effectiveness through a series of use cases that include practical applications of the framework to support real-world U.S. Department of Defense (DoD) programs. The thesis also discusses the robustness of the constructed framework and its applicability to performance analysis of more general distributed computing applications.
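    The thesis's monitoring system is not detailed in the abstract, so the following is only a generic sketch of the kind of run-time performance probe such a framework relies on: timing a recurring simulation operation and summarizing the samples for an operator. The class and metric names are assumptions.

```python
# Generic run-time performance monitoring sketch, not the thesis's actual system:
# probe points time code regions and a collector summarizes the samples.
import statistics
import time
from collections import defaultdict

class PerfMonitor:
    def __init__(self):
        self._samples = defaultdict(list)  # metric name -> list of latencies (seconds)

    def probe(self, metric):
        """Context manager that times a code region and records the sample."""
        monitor = self
        class _Probe:
            def __enter__(self):
                self.t0 = time.perf_counter()
            def __exit__(self, *exc):
                monitor._samples[metric].append(time.perf_counter() - self.t0)
        return _Probe()

    def report(self):
        # Summary statistics of the kind a simulation operator would inspect.
        return {m: {"count": len(v), "mean_s": statistics.mean(v), "max_s": max(v)}
                for m, v in self._samples.items()}

mon = PerfMonitor()
for _ in range(100):
    with mon.probe("entity_state_update"):
        time.sleep(0.001)  # stand-in for processing one DIS entity-state PDU
print(mon.report())
```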

    NASA high performance computing and communications program

    The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long-term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects as well as summaries of individual research and development programs within each project.

    Green Buildings and Ambient Intelligence: case study for N.A.S.A. Sustainability Base and future Smart Infrastructures

    With the advent of smart infrastructures, a collective expression used here to refer to novel concepts such as smart cities and the smart grid, building automation and control networks are having their role expanded beyond the traditional boundaries of the isolated environments they are designed to manage, supervise and optimize. From being confined within residential or commercial buildings as islanded, self-contained systems, they are starting to gain an important role on a wider scale in more complex scenarios at the urban or infrastructure level. Examples of this ongoing process are current experimental setups in cities worldwide to automate urban street lighting, diffused residential facilities (often referred to as smart connected communities) and local micro-grids generated by the federation of several residential units into so-called virtual power plants. Given this underlying process, expectations are rising dramatically about the potential of control networks to introduce sophisticated features on one side and energy efficiency on the other, both on a wide scale. Unfortunately, these two objectives are in several ways conflicting and make it necessary to settle for reasonable design trade-offs. This research work assesses current control and automation technologies to identify the terms of this trade-off, with a stronger focus on energy efficiency, which is analyzed following a holistic approach covering several aspects of the problem. Given the complexity of the wide technology scenario of future smart infrastructures, the work does not aim to be systematic; rather, it seeks to provide a valuable contribution to knowledge in the field, prioritizing research challenges that are often neglected. Green networking, that is, energy efficiency of the network operation itself, is one such challenge. The current worldwide IT infrastructure is built upon networking equipment that collectively consumes 21.4 TWh/year (Global e-Sustainability Initiative, 2010). This is the result of an overall unawareness of the energy-efficiency implications of communication protocol specifications and a general tendency toward over-provisioning and worst-case, redundant design. As automation and control networks become global, they may be subject to the same issue and introduce an additional carbon footprint alongside that of the Internet. This research work assesses the dimension of this problem and proposes an alternative approach to the hardware and protocol designs found in commercial building automation technologies. Shifting from the control network to the physical environment, another objective of this work concerns plug load management systems, which handle electrical loads otherwise unmanaged by any building automation system; these are characterized as to their performance and limitations, highlighting potential design pitfalls and proposing an approach toward integrating them into more general energy management systems. Finally, the mechanism introduced above to increase networking energy efficiency also demonstrated a potential to provide real-time awareness about the context being managed. This potential is currently under investigation for its implications in performing basic load/peak forecasting to support demand-side management architectures for the smart grid, through a partnership with the Italian electric utility Enel Distribuzione.
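    The scale of the green-networking argument is easy to make concrete with back-of-the-envelope arithmetic. The sketch below compares an always-on automation node against one that duty-cycles its transceiver; every figure in it (2 W draw, 5% duty cycle, ten million nodes) is an illustrative assumption, not a measurement from the thesis.

```python
# Back-of-the-envelope energy model; all parameters are illustrative assumptions.
HOURS_PER_YEAR = 8760

def annual_kwh(power_watts, node_count):
    """Annual consumption in kWh for a fleet of identical nodes at constant draw."""
    return power_watts * HOURS_PER_YEAR * node_count / 1000

always_on_w = 2.0        # assumed draw of a mains-powered automation node, W
sleep_w = 0.05           # assumed draw with the transceiver asleep, W
duty_cycle = 0.05        # fraction of time the radio is actually active
nodes = 10_000_000       # hypothetical infrastructure-scale deployment

# Average draw of a duty-cycled node: active fraction at full power, rest asleep.
avg_duty_cycled_w = duty_cycle * always_on_w + (1 - duty_cycle) * sleep_w

print(f"always-on:   {annual_kwh(always_on_w, nodes):,.0f} kWh/year")       # ~175,200,000
print(f"duty-cycled: {annual_kwh(avg_duty_cycled_w, nodes):,.0f} kWh/year") # ~12,921,000
```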

    Probabilistic grid scheduling based on job statistics and monitoring information

    This transfer thesis presents a novel, probabilistic approach to scheduling applications on computational Grids based on their historical behaviour, the current state of the Grid, and predictions of the future execution times and resource utilisation of such applications. The work lays a foundation for enabling a more intuitive, user-friendly and effective scheduling technique termed deadline scheduling. Initial work has established motivation and requirements for a more efficient Grid scheduler, able to adaptively handle the dynamic nature of Grid resources and the submitted workload. Preliminary scheduler research identified the need for detailed monitoring of Grid resources at the process level, and for a tool to simulate the non-deterministic behaviour and statistical properties of Grid applications. A simulation tool, GridLoader, has been developed to enable modelling of application loads similar to a number of typical Grid applications. GridLoader is able to simulate CPU utilisation, memory allocation and network transfers according to limits set through command-line parameters or a configuration file. Its specific strength is in achieving set resource utilisation targets in a probabilistic manner, thus creating a dynamic environment suitable for testing the scheduler's adaptability and its prediction algorithm. To enable highly granular monitoring of Grid applications, a monitoring framework based on the Ganglia Toolkit was developed and tested. The suite is able to collect resource usage information of individual Grid applications, integrate it into a standard XML-based information flow, provide visualisation through a Web portal, and export data into a format suitable for off-line analysis. The thesis also presents an initial investigation of the utilisation of the University College London Central Computing Cluster facility running Sun Grid Engine middleware. The feasibility of basic prediction concepts based on historical information and process meta-data has been successfully established, and possible scheduling improvements using such predictions identified. The thesis is structured as follows: Section 1 introduces Grid computing and its major concepts; Section 2 presents open research issues and the specific focus of the author's research; Section 3 gives a survey of the related literature, schedulers, monitoring tools and simulation packages; Section 4 presents the platform for the author's work, the Self-Organising Grid Resource management project; Sections 5 and 6 give detailed accounts of the monitoring framework and simulation tool developed; Section 7 presents the initial data analysis, while Section 8 concludes the thesis with appendices and references.
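    The deadline-scheduling idea can be illustrated with a toy estimator. Assuming only that historical runtimes of an application on each resource are available (the kind of data the monitoring framework collects), the hypothetical sketch below estimates, per resource, the empirical probability of finishing before a user-supplied deadline and picks the resource that maximizes it; this is a minimal stand-in, not the thesis's actual prediction algorithm.

```python
# Toy probabilistic deadline scheduler; the empirical-probability rule and the
# sample data are illustrative assumptions only.
def prob_meets_deadline(historical_runtimes, deadline):
    """Empirical P(runtime <= deadline) from past executions on one resource."""
    hits = sum(1 for t in historical_runtimes if t <= deadline)
    return hits / len(historical_runtimes)

def pick_resource(history_by_resource, deadline):
    """Choose the resource most likely to complete the job before the deadline."""
    return max(history_by_resource,
               key=lambda r: prob_meets_deadline(history_by_resource[r], deadline))

history = {
    "cluster-a": [310, 290, 305, 400, 295],   # seconds; consistent but slower
    "cluster-b": [220, 260, 510, 240, 600],   # faster on average but more variable
}
print(pick_resource(history, deadline=320))   # -> 'cluster-a' (4/5 vs 3/5 within deadline)
```

    Note how the variance of past runtimes, not just the mean, drives the choice: cluster-b has the lower average runtime, yet cluster-a is the safer bet for this deadline.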

    Active security vulnerability notification and resolution

    The early version of the Internet was designed for connectivity only, without consideration of security, and the Internet is consequently an open structure. Networked systems are vulnerable for a number of reasons: design errors, implementation flaws, and poor management. A vulnerability is a hole or weak point that can be exploited to compromise the security of a system. Operating systems and applications are often vulnerable because of design errors. Software vendors release patches for discovered vulnerabilities and rely upon system administrators to accept and install patches on their systems. Many system administrators fail to install patches on time and consequently leave their systems vulnerable to exploitation by hackers. This exploitation can result in various security breaches, including website defacement, denial of service, or malware attacks. The overall problem is significant, with an average of 115 vulnerabilities per week being documented during 2005. This thesis considers the problem of vulnerabilities in IT networked systems and maps the vulnerability types into a technical taxonomy. The thesis presents a thorough analysis of the existing methods of vulnerability management, determines that these methods have failed to manage the problem in a comprehensive way, and shows the need for a comprehensive management system capable of addressing the awareness and patch deployment problems. A critical examination of vulnerability database statistics over the past few years is provided, together with a benchmarking of the problem in a reference environment and a discussion of why a new approach is needed. The research examined and compared different vulnerability advisories, and proposed a generic vulnerability format towards automating the notification process. The thesis identifies the standard process of addressing vulnerabilities and the over-reliance upon manual methods. An automated management system must take into account new vulnerabilities and patch deployment to provide a comprehensive solution. The overall aim of the research has therefore been to design a new framework that addresses these flaws in networked systems and is harmonised with the standard system administrator process. The approach, known as AVMS (Automated Vulnerability Management System), is capable of filtering and prioritising the relevant messages, and then downloading the associated patches and deploying them to the required machines. The framework is validated through a proof-of-concept prototype system. A series of tests involving different advisories is used to illustrate how AVMS would behave. This helped to prove that the automated vulnerability management system prototype is indeed viable, and that the research has provided a suitable contribution to knowledge in this important domain.
    The Saudi Government and the Network Research Group at the University of Plymouth
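    AVMS's internal data formats are not given in the abstract, so the following is only a minimal sketch of the filter-and-prioritise step it describes: hypothetical advisory records are matched against a software inventory and queued for patch deployment in severity order. The advisory schema, field names and severity scale are all assumptions.

```python
# Minimal sketch of advisory filtering and prioritisation in the spirit of AVMS;
# the Advisory schema and the 1-10 severity scale are hypothetical.
from dataclasses import dataclass

@dataclass
class Advisory:
    vuln_id: str
    product: str
    severity: int      # assumed scale: 1 (low) .. 10 (critical)
    patch_url: str

def relevant_advisories(advisories, installed_products):
    """Keep only advisories that affect software actually present on managed hosts."""
    return [a for a in advisories if a.product in installed_products]

def deployment_queue(advisories, installed_products):
    """Filter, then order most-severe-first for patch download and rollout."""
    return sorted(relevant_advisories(advisories, installed_products),
                  key=lambda a: a.severity, reverse=True)

feed = [
    Advisory("VULN-001", "apache-httpd", 9, "https://example.org/p1"),
    Advisory("VULN-002", "some-other-app", 7, "https://example.org/p2"),
    Advisory("VULN-003", "openssh", 5, "https://example.org/p3"),
]
inventory = {"apache-httpd", "openssh"}
for adv in deployment_queue(feed, inventory):
    print(adv.vuln_id, adv.patch_url)   # VULN-001 first; VULN-002 is filtered out
```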