20 research outputs found

    TCG based approach for secure management of virtualized platforms: state-of-the-art

    There is a strong trend in favor of adopting virtualization to realize business benefits. The provisioning of virtualized enterprise resources is one of many possible scenarios. While virtualization promises clear advantages, it also poses new security challenges that must be addressed to gain stakeholders' confidence in the dynamics of the new environment. One important facet of these challenges is establishing 'Trust', a basic primitive for any viable business model. The Trusted Computing Group (TCG) offers the technologies and mechanisms required to establish this trust in target platforms. Moreover, TCG technologies enable the protection of sensitive data at rest and in transit. This report explores the applicability of relevant TCG concepts to virtualizing enterprise resources securely for provisioning, establishing trust in the target platforms, and securely managing these virtualized trusted platforms.
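    The abstract above surveys TCG mechanisms rather than prescribing code, but the core trust-establishment step it refers to can be illustrated. The following minimal Python sketch mimics TPM-style PCR extension and remote attestation against a known-good measurement log; the component names and the use of SHA-256 are illustrative assumptions, not details from the report.

```python
import hashlib

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new_value = H(old_value || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def expected_pcr(golden_log: list[bytes]) -> bytes:
    """Replay a known-good measurement log to compute the expected PCR."""
    pcr = b"\x00" * 32                      # PCRs start zeroed at platform reset
    for component in golden_log:
        pcr = extend_pcr(pcr, hashlib.sha256(component).digest())
    return pcr

def attest(reported_pcr: bytes, golden_log: list[bytes]) -> bool:
    """Trust decision: does the platform's reported PCR match the golden log?"""
    return reported_pcr == expected_pcr(golden_log)

# Hypothetical boot chain of a virtualized platform.
golden = [b"bootloader-v2", b"hypervisor-v5", b"vm-image-2024"]
quote = expected_pcr(golden)           # stand-in for a signed TPM quote
print(attest(quote, golden))           # True  -> platform considered trusted
print(attest(quote, golden[:-1]))      # False -> measurements diverged
```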

    A Model for Resource Sharing for Internet Data Center Providers within the Grid

    Internet data center providers are still struggling to lower the operational costs of their data centers. One reason is the low utilization of servers over long periods of the day. This paper describes a system for optimizing the server resources within Internet data centers, which host different services such as web servers or enterprise resource planning systems. The system, called the resource management system, allows Internet data center providers to allocate their resources in an economically efficient way. The results may indicate that there is free capacity or a lack of capacity. Based on these results, the resource management system can sell or purchase resources on the Grid. The idea behind this approach is to enable Internet data center providers to transition gradually from the current environment to one in which utility computing is possible. Our approach separates local resource allocation from external (Grid) allocation.
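    As a hedged illustration of the buy/sell decision the resource management system is described as making, the sketch below compares aggregate load against usable capacity and reports either a surplus to offer on the Grid or a shortfall to purchase. The Server structure, the reserve margin, and the thresholds are assumptions for illustration, not the paper's design.

```python
from dataclasses import dataclass

@dataclass
class Server:
    capacity: float      # e.g. normalized CPU capacity
    load: float          # current demand placed on the server

def grid_decision(servers: list[Server], reserve: float = 0.15):
    """Decide whether to offer surplus capacity to the Grid or buy capacity.

    `reserve` is a safety margin kept free for local demand spikes
    (an illustrative parameter, not from the paper).
    """
    total_capacity = sum(s.capacity for s in servers)
    total_load = sum(s.load for s in servers)
    usable = total_capacity * (1.0 - reserve)
    if total_load < usable:
        return ("sell", usable - total_load)   # surplus to offer on the Grid
    return ("buy", total_load - usable)        # shortfall to purchase

# Usage with a small hypothetical data center:
dc = [Server(100, 40), Server(100, 35), Server(100, 20)]
print(grid_decision(dc))   # ('sell', 160.0): 255 usable minus 95 load
```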

    QoS Assessment and Modelling of Connected Vehicle Network within Internet of Vehicles

    Connected vehicles have huge potential for improving road safety and reducing traffic congestion. The primary aim of this paper is threefold: first, to present an overview of network models in connected vehicles; second, to analyze the factors that impact the Quality of Service (QoS) of connected vehicles; and third, to present initial modelling results on link QoS. We use the open-access Geometry-based Efficient Propagation Model (GEMV²) data to carry out Analysis of Variance, Principal Component Analysis and Classical Multi-Dimensional Scaling on the link quality for vehicle-2-vehicle (V2V) and vehicle-2-infrastructure (V2i) data, and found that both line of sight and non-line of sight have a significant impact on link quality. We then modelled the connected vehicle network (CVN) in terms of link QoS using the system identification method, based on the parameters identified by the QoS assessment. We evaluated the CVN in terms of a step response, achieving steady state within 80 seconds for the V2V data and 500 seconds for the V2i data. The work presented here will further help in the development of CVN prediction models and control for V2V and vehicle-2-anything connectivity.
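    The assessment step named in the abstract (ANOVA and PCA on link-quality data) can be sketched directly. Since the GEMV² column layout is not given here, the example below runs the same analyses on synthetic LOS/NLOS received-power samples; all distributions and feature names are stand-ins.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic stand-in for GEMV2 link-quality samples: received power (dBm)
# for line-of-sight (LOS) vs non-line-of-sight (NLOS) V2V links.
los = rng.normal(-70, 4, 200)
nlos = rng.normal(-85, 6, 200)

# One-way ANOVA: does the LOS/NLOS condition significantly affect link quality?
f_stat, p_value = stats.f_oneway(los, nlos)
print(f"ANOVA F={f_stat:.1f}, p={p_value:.2e}")  # small p -> significant effect

# PCA over several link features (power, a correlated proxy, distance) to see
# which combinations explain most of the variance in link quality.
power = np.concatenate([los, nlos])
X = np.column_stack([
    power,
    power + rng.normal(0, 1, 400),   # correlated quality proxy
    rng.uniform(10, 300, 400),       # Tx-Rx distance (m)
])
pca = PCA(n_components=2).fit(X)
print("explained variance ratios:", pca.explained_variance_ratio_)
```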

    Planning of production and utility systems under unit performance degradation and alternative resource-constrained cleaning policies

    A general optimization framework for the simultaneous operational planning of utility and production systems is presented, with the main purpose of reducing the energy needs and material resource utilization of the overall system. The proposed mathematical model focuses mainly on the utility system and considers, for the utility units: (i) unit commitment constraints, (ii) performance degradation and recovery, (iii) different types of cleaning tasks (online or offline, and fixed or flexible time-window), (iv) alternative options for cleaning tasks in terms of associated durations, cleaning resource requirements and costs, and (v) constrained availability of resources for cleaning operations. The objective function includes the operating costs for utility and production systems, cleaning costs for utility systems, and energy consumption costs. Several case studies are presented to highlight the applicability and the significant benefits of the proposed approach. In particular, in comparison with the traditional sequential planning approach for production and utility systems, the proposed integrated approach can achieve considerable reductions in startup/shutdown and cleaning costs and, most importantly, in utilities purchases, as shown in one of the case studies.
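    To make the flavour of such a model concrete, here is a deliberately tiny unit-commitment-style MILP with one offline cleaning task in a flexible time window, written with the PuLP library as an assumption about tooling. It captures only the on/off, demand, and cleaning-window ideas; the paper's full formulation (degradation, recovery, resource-constrained cleaning options) is much richer.

```python
import pulp

T = range(4)                       # planning periods
demand = [60, 80, 70, 90]          # utility demand per period (e.g. MW)
cap = {"U1": 60, "U2": 60}         # unit capacities
cost = {"U1": 2.0, "U2": 3.0}      # operating cost per unit of output

m = pulp.LpProblem("utility_planning", pulp.LpMinimize)
on = pulp.LpVariable.dicts("on", (cap, T), cat="Binary")
q = pulp.LpVariable.dicts("q", (cap, T), lowBound=0)
clean = pulp.LpVariable.dicts("clean_U1", T, cat="Binary")

# Objective: operating cost plus a flat (illustrative) cleaning cost.
m += pulp.lpSum(cost[u] * q[u][t] for u in cap for t in T) \
     + pulp.lpSum(5.0 * clean[t] for t in T)

for t in T:
    m += pulp.lpSum(q[u][t] for u in cap) >= demand[t]   # meet demand
    for u in cap:
        m += q[u][t] <= cap[u] * on[u][t]                # output only when on
    m += on["U1"][t] + clean[t] <= 1                     # offline cleaning
m += pulp.lpSum(clean[t] for t in T) == 1                # exactly one cleaning

m.solve(pulp.PULP_CBC_CMD(msg=False))
# The solver places the cleaning in the period where U2 can cover demand alone.
for t in T:
    print(t, pulp.value(q["U1"][t]), pulp.value(q["U2"][t]), pulp.value(clean[t]))
```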

    Enabling Data-Guided Evaluation of Bioinformatics Workflow Quality

    Bioinformatics can be divided into two phases: the first phase is conversion of raw data into processed data, and the second phase is using processed data to obtain scientific results. It is important to consider the first “workflow” phase carefully, as there are many paths on the way to a final processed dataset. Some workflow paths may be different enough to influence the second phase, thereby leading to ambiguity in the scientific literature. Workflow evaluation in bioinformatics enables the investigator to plan carefully how to process their data. A system that uses real data to determine the quality of a workflow can be based on the inherent biological relationships in the data itself. To our knowledge, a general software framework that performs real data-driven evaluation of bioinformatics workflows does not exist. The Evaluation and Utility of workFLOW (EUFLOW) decision-theoretic framework, developed and tested on gene expression data, enables users of bioinformatics workflows to evaluate alternative workflow paths using inherent biological relationships. EUFLOW is implemented as an R package to enable users to evaluate workflow data. EUFLOW also permits user-guided utility and loss functions, which enables the type of analysis to be considered in the workflow path decision. This framework was originally developed to address the quality of identifier mapping services between UniProt accessions and Affymetrix probesets to facilitate integrated analysis. An extension to this framework evaluates Affymetrix probeset filtering methods on real data from endometrial cancer and TCGA ovarian serous carcinoma samples. Further evaluation of RNA-Seq workflow paths demonstrates the generalizability of the EUFLOW framework. Three separate evaluations are performed: 1) identifier filtering of features with biological attributes, 2) threshold selection parameter choice for low gene count features, and 3) commonly utilized RNA-Seq data workflow paths on The Cancer Genome Atlas data. The EUFLOW decision-theoretic framework developed and tested in my dissertation enables users of bioinformatics workflows to evaluate alternative workflow paths guided by inherent biological relationships and user utility.
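    EUFLOW itself is an R package; the short Python sketch below only conveys the decision-theoretic idea it applies, scoring a workflow path by whether the processed data preserves known biological relationships under user-chosen utility and loss weights. The function name, the weights, and the two hypothetical paths are illustrative, not EUFLOW's API.

```python
from statistics import mean

def expected_utility(agreements, fp_loss=1.0, tp_gain=1.0):
    """Score a workflow path against inherent biological relationships.

    `agreements` holds, per known related feature pair (e.g. genes on the
    same pathway), whether the processed data preserves their relationship.
    User-guided utility/loss weights trade off lost signal against noise.
    """
    return mean(tp_gain if a else -fp_loss for a in agreements)

# Two hypothetical workflow paths evaluated on the same relationship set:
path_a = [True, True, False, True, True]    # e.g. stringent probeset filtering
path_b = [True, False, False, True, False]  # e.g. no filtering
scores = {"stringent": expected_utility(path_a),
          "none": expected_utility(path_b, fp_loss=2.0)}
print(max(scores, key=scores.get), scores)  # pick the higher-utility path
```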

    Service Quality and Profit Control in Utility Computing Service Life Cycles

    Utility Computing is one of the most discussed business models in the context of Cloud Computing. Service providers are increasingly pushed into the role of utilities by their customers' expectations. Subsequently, the demand for predictable service availability and pay-per-use pricing models increases. Furthermore, new virtualisation techniques offer providers a new opportunity to optimise resource usage. In this context, the control of service quality and profit depends on a deep understanding of the relationship between business and technology. This research analyses the relationship between the Utility Computing business model and Service-oriented Computing architectures hosted in Cloud environments. The relations are clarified in detail for the entire service life cycle and throughout all architectural layers. Based on the elaborated relations, an approach to a delivery framework is developed to enable the optimisation of the relation attributes as the service implementation passes through business planning, development, and operations. Related work from the academic literature does not cover the collected requirements on service offers in this context; this finding is revealed by a critical review of approaches in the fields of Cloud Computing, Grid Computing, and Application Clusters. The related work is analysed regarding appropriate provision architectures and quality assurance approaches. The main concepts of the delivery framework are evaluated based on a simulation model. To demonstrate the ability of the framework to model complex pay-per-use service cascades in Cloud environments, several experiments have been conducted. First outcomes show that the contributions of this research enable the optimisation of service quality and profit in Cloud-based Service-oriented Computing architectures.
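    As a rough sketch of the quality/profit relationship the thesis controls, the example below computes a provider's monthly profit under a pay-per-use tariff with an availability SLA. The ServiceMonth fields, the 99.5% target, and the flat penalty are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ServiceMonth:
    usage_hours: float        # metered pay-per-use consumption
    price_per_hour: float     # pay-per-use tariff
    infra_cost: float         # provider's fixed + variable cost
    uptime: float             # measured availability, 0..1

def monthly_profit(s: ServiceMonth, sla: float = 0.995, penalty: float = 500.0):
    """Profit under a pay-per-use model with an availability SLA.

    If measured uptime falls below the agreed SLA, a contractual penalty
    is deducted (the 99.5% target and flat penalty are assumptions).
    """
    revenue = s.usage_hours * s.price_per_hour
    sla_penalty = penalty if s.uptime < sla else 0.0
    return revenue - s.infra_cost - sla_penalty

good = ServiceMonth(usage_hours=10_000, price_per_hour=0.12,
                    infra_cost=800.0, uptime=0.999)
bad = ServiceMonth(usage_hours=10_000, price_per_hour=0.12,
                   infra_cost=800.0, uptime=0.990)
print(monthly_profit(good))  # 400.0
print(monthly_profit(bad))   # -100.0: the SLA breach erodes profit
```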

    Improving the performance of a dismounted Future Force Warrior by means of C4I2SR

    This thesis comprises seven peer-reviewed articles and examines systems and applications suitable for increasing Future Force Warrior performance, minimizing collateral damage, and improving situational awareness and the Common Operational Picture. Based on a literature study, missing functionalities of the Future Force Warrior were identified, and new ideas, concepts and solutions were created as part of the early stages of Systems of Systems creation. These ideas have not yet been implemented or tested in combat, and for this reason benefit analyses are excluded. The main results of this thesis include the following. A new networking concept, the Wireless Polling Sensor Network, in which a swarm of a few Unmanned Aerial Vehicles forms an ad-hoc network and polls a large number of fixed sensor nodes; the system is more robust in a military environment than traditional Wireless Sensor Networks. A Business Process approach to Service Oriented Architecture in a tactical setting, a concept for scheduling and sharing limited resources; new components for a military Service Oriented Architecture are also introduced. Other results include an investigation of the use of Free Space Optics in tactical communications, a proposal for tracking neutral forces, a system for upgrading simple collaboration tools for command, control and collaboration purposes, a three-level hierarchy of the Future Force Warrior, and methods for reducing incidents of fratricide.
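    The Wireless Polling Sensor Network concept can be hinted at with a small sketch: a few UAVs divide a field of passive, fixed sensor nodes among themselves and poll them, rather than letting the nodes form a multi-hop mesh. The nearest-UAV assignment below is an illustrative strategy, not the scheme proposed in the thesis.

```python
import math

uavs = {"uav1": (0.0, 0.0), "uav2": (100.0, 0.0)}        # patrol anchor points
sensors = {f"s{i}": (i * 12.0, 5.0) for i in range(10)}  # fixed node positions

def nearest_uav(pos):
    """Assign a sensor node to the closest UAV's polling round."""
    return min(uavs, key=lambda u: math.dist(uavs[u], pos))

# Each UAV gets a polling round over its closest sensors; nodes stay passive
# (they only answer polls), which is the property the thesis argues makes the
# scheme more robust in a military environment than a self-organising WSN.
schedule = {u: [] for u in uavs}
for node, pos in sensors.items():
    schedule[nearest_uav(pos)].append(node)
print(schedule)
```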

    Knowledge-centric autonomic systems

    Autonomic computing revolutionised the commonplace understanding of proactiveness in the digital world by introducing self-managing systems. Built on top of IBM’s structural and functional recommendations for implementing intelligent control, autonomic systems are meant to pursue high-level goals, while adequately responding to changes in the environment, with a minimum amount of human intervention. One of the lead challenges in implementing this type of behaviour in practical situations stems from the way autonomic systems manage their inner representation of the world. Specifically, all the components involved in the control loop have shared access to the system’s knowledge, which, for seamless cooperation, needs to be kept consistent at all times. A possible solution lies with another popular technology of the 21st century, the Semantic Web, and the knowledge representation media it fosters: ontologies. These formal yet flexible descriptions of the problem domain are equipped with reasoners, inference tools that, among other functions, check knowledge consistency. The immediate application of reasoners in an autonomic context is to ensure that all components share and operate on a logically correct and coherent “view” of the world. At the same time, ontology change management is a difficult task to complete with semantic technologies alone, especially if little to no human supervision is available. This invites the idea of delegating change management to an autonomic manager, as the intelligent control loop it implements is engineered specifically for that purpose. Despite the inherent compatibility between autonomic computing and semantic technologies, their integration is non-trivial and insufficiently investigated in the literature. This gap represents the main motivation for this thesis. Moreover, existing attempts at provisioning autonomic architectures with semantic engines represent bespoke solutions for specific problems (load balancing in autonomic networking, deconflicting high-level policies, and informing the process of correlating diverse enterprise data are just a few examples). The main drawback of these efforts is that they provide only limited scope for reuse and cross-domain analysis; design guidelines, architectural models that would scale well across different applications, and modular components that could be integrated in other systems are poorly represented. This work proposes KAS (Knowledge-centric Autonomic System), a hybrid architecture combining semantic tools such as:
    • an ontology to capture domain knowledge,
    • a reasoner to keep domain knowledge consistent as well as infer new knowledge,
    • a semantic querying engine,
    • a tool for semantic annotation analysis
    with a customised autonomic control loop featuring:
    • a novel algorithm for extracting knowledge authored by the domain expert,
    • “software sensors” to monitor user requests and environment changes,
    • a new algorithm for analysing the monitored changes, matching them against known patterns and producing plans for taking the necessary actions,
    • “software effectors” to implement the planned changes and modify the ontology accordingly.
    The purpose of KAS is to act as a blueprint for the implementation of autonomic systems harvesting semantic power to improve self-management. To this end, two KAS instances were built and deployed in two different problem domains, namely self-adaptive document rendering and autonomic decision support for career management. The former case study is a desktop application, whereas the latter is a large-scale, web-based system built to capture and manage knowledge sourced by an entire (relevant) community. The two problems are representative of their respective application classes – desktop tools required to respond in real time and, respectively, online decision support platforms expected to process large volumes of data undergoing continuous transformation – and were therefore selected to demonstrate the cross-domain applicability (which state-of-the-art approaches tend to lack) of the proposed architecture. Moreover, analysing KAS behaviour in these two applications enabled the distillation of design guidelines and lessons learnt from practical implementation experience while building on and adapting state-of-the-art tools and methodologies from both fields. KAS is described and analysed from design through to implementation. The design is evaluated using ATAM (Architecture Tradeoff Analysis Method), whereas the performance of the two practical realisations is measured both globally and deconstructed in an attempt to isolate the impact of each autonomic and semantic component. This last type of evaluation employs state-of-the-art metrics for each of the two domains. The experimental findings show that both instances of the proposed hybrid architecture successfully meet the prescribed high-level goals and that the semantic components have a positive influence on the system’s autonomic behaviour.
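    A minimal sketch of the kind of knowledge-centric control loop KAS describes may help: a MAPE-style manager whose shared knowledge store is checked for consistency before every change, standing in for an ontology guarded by a Semantic Web reasoner. All class names and the toy consistency rule below are hypothetical.

```python
class KnowledgeBase:
    """Toy stand-in for an ontology: a flat set of string facts."""
    def __init__(self):
        self.facts = set()

    def consistent(self) -> bool:
        # Toy reasoner: no proposition may coexist with its negation.
        return not any(("not " + f) in self.facts for f in self.facts)

class AutonomicManager:
    """MAPE-style loop: monitor, analyse/plan, execute over shared knowledge."""
    def __init__(self, kb: KnowledgeBase):
        self.kb = kb

    def monitor(self, event: str) -> str:            # "software sensor"
        return event.strip().lower()

    def analyse_and_plan(self, fact: str):
        trial = KnowledgeBase()                       # test the change first
        trial.facts = set(self.kb.facts) | {fact}
        return fact if trial.consistent() else None   # reject inconsistency

    def execute(self, fact):                          # "software effector"
        if fact is not None:
            self.kb.facts.add(fact)

mgr = AutonomicManager(KnowledgeBase())
for evt in ["printer online", "not printer online", "user prefers dark mode"]:
    mgr.execute(mgr.analyse_and_plan(mgr.monitor(evt)))
print(mgr.kb.facts)  # the contradictory update was filtered out
```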