
    Autonomous platform for life-critical decision support in the ICU

    The Intensive Care Unit (ICU) is a complex, data-intensive and critical environment in which the adoption of Information Technology is growing. As physicians become more dependent on computing technology to support decisions and raise real-time alerts and notifications of patient-specific conditions, this software has strong dependability requirements. The dependability challenges are expressed in terms of availability, reliability, performance, usability and maintainability of the system. Our research focuses on the design and development of a generic autonomous ICU service platform. COSARA is a computer-based platform for infection surveillance and antibiotic management in the ICU. During its design, development and evaluation, we identified both technological and human factors that affect robustness. We present the identified research questions that will be addressed in detail during the PhD research.

    Resource allocation for massively multiplayer online games using fuzzy linear assignment technique

    This paper investigates the possible use of a fuzzy system and the Linear Assignment Problem (LAP) for resource allocation in Massively Multiplayer Online Games (MMOGs). Because the design capacity of such complex MMOGs is limited, the resources available in the game cannot be unlimited. Resources in this context refer to items used to support the game play and activities in MMOGs, also known as in-game resources. Network resources are also an important research area for MMOGs due to the increasing number of players, where one of the main objectives is to ensure Quality of Service (QoS) for each player in the MMOG environment. Regardless of the context in which the resource is defined, the proposed method can still be used. Simulation results based on network resources for ensuring QoS show that the proposed method is a viable alternative.
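    A minimal sketch of the fuzzy linear-assignment idea in Python: triangular fuzzy allocation costs are defuzzified by their centroid, and the resulting crisp LAP is solved with SciPy. The cost values and the centroid defuzzification are illustrative assumptions, not the paper's exact formulation.

```python
# Fuzzy-LAP sketch: defuzzify triangular fuzzy costs, then solve a crisp
# assignment problem. Costs are invented for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Fuzzy "cost" of granting resource j to player i, as triangular numbers
# (low, mode, high), e.g. derived from latency/QoS measurements.
fuzzy_cost = np.array([
    [(2, 3, 5), (4, 6, 8), (1, 2, 4)],
    [(3, 4, 6), (1, 2, 3), (5, 7, 9)],
    [(2, 5, 7), (3, 3, 4), (2, 4, 6)],
], dtype=float)

# Defuzzify each triangular number by its centroid (l + m + h) / 3.
crisp_cost = fuzzy_cost.mean(axis=2)

# Solve the resulting crisp Linear Assignment Problem.
players, resources = linear_sum_assignment(crisp_cost)
for i, j in zip(players, resources):
    print(f"player {i} -> resource {j} (cost {crisp_cost[i, j]:.2f})")
```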

    Planning Rural Water Services in Nicaragua: A Systems-Based Analysis of Impact Factors Using Graphical Modeling

    The success or failure of rural water services in the developing world is a result of numerous factors that interact in a complex set of connections that are difficult to separate and identify. This research effort presented a novel means to empirically reveal the systemic interactions of factors that influence rural water service sustainability in the municipalities of Darío and Terrabona, Nicaragua. To accomplish this, the study employed graphical modeling to build and analyze factor networks. Influential factors were first identified by qualitatively and quantitatively analyzing transcribed interviews with community water committee members. Factor influences were then inferred by graphical modeling to create factor network diagrams that revealed the direct and indirect interaction of factors. Finally, network analysis measures were used to identify “impact factors” based on their relative influence within each factor network. Findings from this study elucidated the systemic nature of such factor interactions in both Darío and Terrabona, and highlighted key areas for programmatic impact on water service sustainability in both municipalities. Specifically, in Darío the impact areas related to the current importance of water service management by community water committees, while in Terrabona the impact areas related to the current importance of finances, viable water sources, and community capacity building by external support. Overall, this study presents a rigorous and useful means to identify impact factors as a way to facilitate the thoughtful planning and evaluation of sustainable rural water services in Nicaragua and beyond.
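    As an illustration of the factor-network step, a small Python sketch using networkx: build a directed influence graph and rank factors by a centrality measure. The factor names, edges and the choice of betweenness centrality are invented stand-ins for the study's actual data and measures.

```python
# Factor-network sketch: directed "A influences B" edges inferred from
# interviews, then a centrality ranking as a proxy for "impact factors".
import networkx as nx

influences = [
    ("tariff collection", "committee management"),
    ("committee management", "system maintenance"),
    ("external support", "community capacity"),
    ("community capacity", "committee management"),
    ("water source viability", "system maintenance"),
    ("system maintenance", "service sustainability"),
    ("committee management", "service sustainability"),
]

G = nx.DiGraph(influences)

# Betweenness centrality highlights factors that mediate many influence
# paths in the network; high scorers are candidate impact factors.
ranking = sorted(nx.betweenness_centrality(G).items(),
                 key=lambda kv: kv[1], reverse=True)
for factor, score in ranking:
    print(f"{factor}: {score:.3f}")
```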

    A framework for network RTK data processing based on grid computing

    Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at the centimetre accuracy level in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (N-RTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an N-RTK system is single-base RTK. In Australia there are several N-RTK services operating in different states and over 1000 single-base RTK systems to support precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations with multiple frequencies, including modernised GPS, Galileo, GLONASS, and Compass, have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of the various isolated operating networks, single-base RTK systems and multiple GNSS constellations for extended service coverage and improved performance. Several computational challenges have been identified for future N-RTK services, including:
    • multiple GNSS constellations and multiple frequencies;
    • large-scale, wide-area N-RTK services with a network of networks;
    • complex computation algorithms and processes;
    • a greater part of the positioning process shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous user requests (reverse RTK).
    These four challenges translate into two major requirements for N-RTK data processing: expandable computing power and scalable data-sharing and transfer capability. This research explores new approaches to address these future N-RTK challenges and requirements using a Grid Computing facility, in particular for large data-processing burdens and complex computation algorithms. A Grid Computing based N-RTK framework is proposed in this research, consisting of three layers: 1) a client layer in the form of a Grid portal; 2) a service layer; and 3) an execution layer. A user's request is passed through these layers and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework is performed in a five-node Grid environment at QUT and also on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open-source software is adopted to download real-time RTCM data from multiple reference stations through the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed, and the results preliminarily demonstrate the concepts and functionality of the new N-RTK framework based on Grid Computing, while some aspects of the system's performance are yet to be improved in future work.
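    The data-collection step above pulls real-time RTCM streams from reference stations via Ntrip. The following is a minimal, illustrative NTRIP client request in Python; the caster host, mountpoint and credentials are placeholders rather than details from the described system.

```python
# Minimal NTRIP client sketch: request one mountpoint's RTCM stream from a
# caster over a raw TCP/HTTP connection. All connection details are
# hypothetical placeholders.
import base64
import socket

CASTER, PORT = "ntrip.example.org", 2101          # hypothetical caster
MOUNTPOINT, USER, PWD = "STAT00AUS0", "user", "pass"

creds = base64.b64encode(f"{USER}:{PWD}".encode()).decode()
request = (
    f"GET /{MOUNTPOINT} HTTP/1.1\r\n"
    f"Host: {CASTER}\r\n"
    "Ntrip-Version: Ntrip/2.0\r\n"
    "User-Agent: NTRIP PythonSketch/0.1\r\n"
    f"Authorization: Basic {creds}\r\n\r\n"
)

with socket.create_connection((CASTER, PORT)) as sock:
    sock.sendall(request.encode())
    header = sock.recv(1024)      # expect "ICY 200 OK" or "HTTP/1.1 200 OK"
    print(header.decode(errors="replace"))
    rtcm_chunk = sock.recv(4096)  # raw RTCM 3.x frames to hand to a decoder
    print(f"received {len(rtcm_chunk)} bytes of RTCM data")
```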

    Stochastic performance analysis of Network Function Virtualisation in future internet

    Network Function Virtualisation (NFV) has been considered a promising technology for the future Internet to increase network flexibility, accelerate service innovation and reduce Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) costs by migrating network functions from dedicated network devices to commodity hardware. Recent studies reveal that although this migration of network functions brings the network operation unprecedented flexibility and controllability, NFV-based architectures suffer from serious performance degradation compared with traditional service provisioning on dedicated devices. In order to achieve a comprehensive understanding of the service provisioning capability of NFV, this paper proposes a novel analytical model based on Stochastic Network Calculus (SNC) to quantitatively investigate the end-to-end performance bounds of NFV networks. To capture the dynamic and on-demand NFV features, both non-bursty traffic, e.g. the Poisson process, and bursty traffic, e.g. the Markov Modulated Poisson Process (MMPP), are jointly considered in the developed model to characterise the arriving traffic. To address the challenges of resource competition and end-to-end NFV chaining, the convolution associativity property and leftover service techniques of SNC are exploited to calculate the available resources of Virtual Network Function (VNF) nodes in the presence of multiple competing traffic flows, and to transform the complex NFV chain into an equivalent system for performance derivation and analysis. Both numerical analysis and extensive simulation experiments are conducted to validate the accuracy of the proposed analytical model. Results demonstrate that the analytical performance metrics match well with those obtained from the simulation experiments and numerical analysis. In addition, the developed model is used as a practical and cost-effective tool to investigate strategies for service chain design and resource allocation in NFV networks. Engineering and Physical Sciences Research Council (EPSRC).
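    For context, a sketch of the standard (σ(θ), ρ(θ)) MGF-based SNC bounds that this kind of analysis builds on; the paper's exact derivation, notation and treatment of Poisson/MMPP arrivals may differ from this simplified form.

```latex
% Standard MGF-based SNC ingredients (simplified; the paper's derivation
% may differ):
% - leftover service at a VNF node of capacity C with cross-traffic rate
%   rho_c:  rho_S(theta) = C - rho_c(theta)
% - end-to-end service of a chain is the min-plus convolution of the
%   per-node services:  S_e2e = S_1 \otimes S_2 \otimes \dots \otimes S_n
% - delay-violation bound for (sigma_A, rho_A)-bounded arrivals over a
%   (sigma_S, rho_S) server, valid when rho_A(theta) < rho_S(theta):
\Pr\{W > d\} \;\le\;
  \frac{e^{\theta\,(\sigma_A(\theta) + \sigma_S(\theta))}}
       {1 - e^{-\theta\,(\rho_S(\theta) - \rho_A(\theta))}}\;
  e^{-\theta\,\rho_S(\theta)\,d}
```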

    VNF performance modelling: from stand-alone to chained topologies

    One of the main incentives for deploying network functions on a virtualized or cloud-based infrastructure is the ability for on-demand orchestration and elastic resource scaling following the workload demand. This can also be combined with a multi-party service creation cycle: the service provider sources various network functions from different vendors or developers and combines them into a modular network service. This way, multiple virtual network functions (VNFs) are connected into more complex topologies called service chains. Deployment speed is important here, so it is beneficial if the service provider can limit extra validation testing of the combined service chain and rely on the provided profiling results of the supplied individual VNFs. Our research shows, however, that it is not always possible to accurately predict the performance of a total service chain from the isolated benchmark or profiling tests of its discrete network functions. To mitigate this, we propose a two-step deployment workflow: first, a general trend estimate of the chain performance is derived from the stand-alone VNF profiling results, together with an initial resource allocation. This information then optimizes the second phase, where online monitored data of the service chain is used to quickly adjust the estimated performance model where needed. Our tests show that this can lead to a more efficient VNF chain deployment, needing fewer scaling iterations to meet the chain performance specification, while avoiding the need for a complete proactive and time-consuming VNF chain validation.
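    A toy Python sketch of the two-step workflow: an offline chain estimate from stand-alone profiles, then an online correction from monitored data. The profiling numbers, the min-based bottleneck estimate and the rescaling rule are simplifying assumptions, not the authors' exact model.

```python
# Step 1: offline estimate from stand-alone VNF profiling results
# (max throughput in kpps at the initially allocated resources; values
# are invented for illustration).
standalone_profile = {"firewall": 850.0, "nat": 700.0, "dpi": 420.0}

def estimate_chain_throughput(profile):
    """First-order estimate: the chain is bottlenecked by its slowest VNF."""
    return min(profile.values())

estimate = estimate_chain_throughput(standalone_profile)

# Step 2: online correction -- monitored chain throughput is usually lower
# than the stand-alone bound (shared caches, vSwitch overhead, ...), so the
# model is rescaled by the observed ratio before the next scaling decision.
measured = 310.0                               # kpps, from online monitoring
correction = measured / estimate
calibrated = {vnf: t * correction for vnf, t in standalone_profile.items()}

print(f"initial estimate: {estimate:.0f} kpps, measured: {measured:.0f} kpps")
print(f"calibrated per-VNF model: {calibrated}")
```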

    Segment Routing: a Comprehensive Survey of Research Activities, Standardization Efforts and Implementation Results

    Fixed and mobile telecom operators, enterprise network operators and cloud providers strive to face the challenging demands coming from the evolution of IP networks (e.g. huge bandwidth requirements, integration of billions of devices and millions of services in the cloud). Proposed in the early 2010s, the Segment Routing (SR) architecture helps face these challenging demands, and it is currently being adopted and deployed. The SR architecture is based on the concept of source routing and has interesting scalability properties, as it dramatically reduces the amount of state information to be configured in the core nodes to support complex services. The SR architecture was first implemented with the MPLS dataplane and then, quite recently, with the IPv6 dataplane (SRv6). The IPv6 SR architecture (SRv6) has been extended from the simple steering of packets across nodes to a general network programming approach, making it very suitable for use cases such as Service Function Chaining and Network Function Virtualization. In this paper we present a tutorial and a comprehensive survey on SR technology, analyzing standardization efforts, patents, research activities and implementation results. We start with an introduction to the motivations for Segment Routing and an overview of its evolution and standardization. Then we provide a tutorial on Segment Routing technology, with a focus on the novel SRv6 solution. We discuss the standardization efforts and the patents, providing details on the most important documents and mentioning other ongoing activities. We then thoroughly analyze research activities according to a taxonomy. We have identified eight main categories during our analysis of the current state of play: Monitoring, Traffic Engineering, Failure Recovery, Centrally Controlled Architectures, Path Encoding, Network Programming, Performance Evaluation and Miscellaneous.
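    To make the "segment list carried in the packet" idea concrete, a small illustrative sketch building an SRv6 packet with scapy (assuming its IPv6ExtHdrSegmentRouting class and field names); the addresses use the IPv6 documentation prefix and the path is invented.

```python
# SRv6 sketch: source routing by embedding an ordered list of segments
# (SIDs) in an IPv6 Segment Routing Header. Addresses are placeholders.
from scapy.layers.inet6 import IPv6, IPv6ExtHdrSegmentRouting
from scapy.packet import Raw

# Per RFC 8754 the segment list is encoded in reverse order: the last
# segment of the path comes first, and the first waypoint sits at the end.
sids = ["2001:db8:3::1", "2001:db8:2::1", "2001:db8:1::1"]

pkt = (
    IPv6(src="2001:db8:a::1", dst=sids[-1])        # dst = active (first) segment
    / IPv6ExtHdrSegmentRouting(addresses=sids,
                               segleft=len(sids) - 1,
                               lastentry=len(sids) - 1)
    / Raw(b"payload")
)
pkt.show()   # inspect the assembled headers
```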

    Solving multi-objective hub location problems by hybrid algorithms

    In many logistic, telecommunications and computer networks, direct routing of commodities between any origin and destination is not viable due to economic and technological constraints. In such cases, a network with centralized units, known as hub facilities, and a small number of links is commonly used to connect any origin-destination pair. The purpose of these hub facilities is to consolidate, sort and efficiently transship any commodity in the network. Hub location problems (HLPs) consider the design of these networks by locating a set of hub facilities, establishing an interhub subnet, and routing the commodities through the network while optimizing some objective(s) based on cost or service. Hub location has evolved into a rich research area, where a huge number of papers have been published since the seminal work of O'Kelly [1]. Early works focused on analogous facility location problems, making assumptions that simplify network design. Recent works [2] have studied more complex models that relax some of these assumptions and incorporate additional real-life features. In most HLPs considered in the literature, the input parameters are assumed to be known and deterministic. However, in practice this assumption is unrealistic, since there is high uncertainty in relevant parameters such as costs, demands or even distances. In this work, we study multi-objective hub location problems under uncertainty. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
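    For reference, the seminal single-allocation p-hub median formulation of O'Kelly [1] that this line of work builds on; the multi-objective and uncertainty-aware variants studied here extend such a baseline. The notation below is the textbook one, not necessarily the authors'.

```latex
% O'Kelly's single-allocation p-hub median model [1]: x_{ik} = 1 if node i
% is allocated to hub k, alpha < 1 discounts the consolidated inter-hub
% flow, w_{ij} are origin-destination demands and d_{..} the distances.
\min \sum_{i}\sum_{j} w_{ij}
     \Bigl( \sum_{k} d_{ik}\,x_{ik}
          + \alpha \sum_{k}\sum_{l} d_{kl}\,x_{ik}\,x_{jl}
          + \sum_{l} d_{lj}\,x_{jl} \Bigr)
\quad \text{s.t.}\quad
  \sum_{k} x_{ik} = 1 \;\;\forall i, \qquad
  \sum_{k} x_{kk} = p, \qquad
  x_{ik} \le x_{kk} \;\;\forall i,k, \qquad
  x_{ik} \in \{0,1\}
```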

    Autonomic management of software defined networks: DAIM can provide the environment for building autonomy in distributed electronic environments - using OpenFlow networks as the case study

    University of Technology Sydney, Faculty of Engineering and Information Technology. Next-generation networks need to support a broad range of services and functionalities with capabilities such as autonomy, scalability, and adaptability for managing network complexity. Network infrastructures are becoming increasingly complex and challenging to administer due to their scale and heterogeneous nature. Furthermore, with various vendors, services, and platforms involved, managing networks requires expert operators with expertise in all of these different fields. This research relies on the distributed active information model (DAIM) to establish a foundation that will meet future network management requirements. DAIM is an information model for network solutions that addresses the challenges of autonomic functionality, in which network devices can make local and overall network decisions based on collected information. The DAIM model can facilitate network management by introducing autonomic behaviours. Autonomic behaviours lead communication networks to be self-managed and have emerged as a promising way to manage network complexity; autonomic network management aims to relieve network operators of low-level tasks. Over the years, researchers have proposed a number of models for developing self-managed network solutions. One such example is the common information model (CIM), which describes the managed environment, attempts to merge and extend existing conventional management, and uses object-oriented constructs for overall network representation. However, the CIM has limitations in coping with complex distributed electronic environments spanning multiple disciplines. The goal of this research is the development of a network architecture, or a solution based on the DAIM model, that effectively distributes and automates network functions across the various network devices. The research first looks into the possibilities of local decision-making and programmability of network elements for distributed electronic environments, with the intention of simplifying network management by providing abstracted network infrastructures. After investigating and implementing different elements of the DAIM model in network forwarding devices by utilising virtual network switches, it finds that a common high-level interface and framework for network devices are essential for the development of network solutions that will meet future network requirements. The outcome of this research is the development of the DAIM OS specification. The DAIM OS is a network forwarding device operating system that is compliant with the DAIM model for network infrastructure management and provides a high-level abstracted application programming interface (the DAIM OS API) for creating network service applications. Through the DAIM OS, network elements will be able to adapt to ever-changing environments to meet the goals of service providers, vendors, and end users. Furthermore, the DAIM OS API aims to reduce the complexity and time of network service application development. If the developed DAIM OS specification is implemented and functions as predicted in the design analyses, it will mark a significant milestone in the development of distributed network management.
    This dissertation has an introduction in chapter 1, followed by five parts that draw a blueprint for the information model as a distributed, independent computing environment for autonomic network management: lending weight to the proposition, gaining confidence in the proposition, drawing conclusions, supporting work and, lastly, appendices. The introduction in chapter 1 covers the motivations for the research, the main challenges of the research, the overall objectives, and a review of the research contributions. To lend weight to the proposition as the first part of the dissertation, chapter 2 presents the background and literature review, and chapter 3 develops a theoretical foundation for the proposed model. The foundation consists of a generic architecture for complex network management and agents that aggregate distributed network information; chapter 3 is more a state of the art in software engineering than a real implementation for engineering autonomic network management. The second part of the dissertation gains confidence in the proposition: chapter 4 implements the DAIM model, with tests reporting good performance regarding convergence and robustness for the service configuration process of network management, and chapter 5 gives a specification of true abstraction layers, which proposes a high-level abstraction for forwarding network devices and provides an application programming interface for network service applications developed by network operators and service providers. The implementation in chapter 4 is supported by the fourth part of the dissertation, chapter 10, which supports the theoretical foundation, design, modelling, and development of the distributed active information model via simulation, emulation and real environments. The third part of the dissertation draws conclusions in chapter 7, which contains the overall research summary, validation of the propositions, contributions and discussion, limitations and, finally, recommendations for future work. Finally, Appendices A, B, C and D provide the development code of the core DAIM model and show the different set-ups of the testbed environments.
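    Since the summary names but does not show the DAIM OS API, the following is a purely hypothetical Python sketch of the kind of autonomic, locally deciding agent loop the model describes; every class and method name here is invented for illustration.

```python
# Hypothetical sketch of a DAIM-style agent: monitor local state on a
# (stubbed) virtual switch, decide locally, act locally. Not taken from
# the DAIM OS specification.
import random
import time

class StubSwitch:
    """Stand-in for a virtual OpenFlow switch exposing local state."""
    def __init__(self, dpid):
        self.dpid = dpid
    def read_port_utilisation(self):
        # Fake per-port utilisation in [0, 1] for demonstration.
        return {p: random.random() for p in ("eth0", "eth1", "eth2")}
    def reroute_flows(self, port):
        print(f"[{self.dpid}] rerouting flows away from {port}")

class DaimAgent:
    """Monitor / analyse / act loop making decisions from local information."""
    def __init__(self, switch, threshold=0.8):
        self.switch, self.threshold = switch, threshold
    def step(self):
        state = self.switch.read_port_utilisation()                # monitor
        hot = [p for p, u in state.items() if u > self.threshold]  # analyse
        for port in hot:                                           # act
            self.switch.reroute_flows(port)

agent = DaimAgent(StubSwitch("sw1"))
for _ in range(3):
    agent.step()
    time.sleep(0.1)
```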