
    Service introduction in an active network

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 1999. Includes bibliographical references (p. 151-157). By David J. Wetherall.

    Mechanisms of Funding for Universal Service Obligations: the Electricity Case

    The transition towards a more competitive regime in network industries (and especially in the electricity sector) raises the important question of how to fund Universal Service Obligations (USOs). Our paper focuses on two ways of funding universal service and equal treatment obligations (ubiquity and non-discrimination constraints): funding through an access charge (CS regime) or through taxation (T regime). Using a network model featuring competition between an historical monopoly (in charge of the USOs) and an entrant, we obtain results concerning the gains and losses of social welfare due to these mechanisms. We show that most of the time it is socially better to let the historical monopoly remain active, whatever the type of USO funding applied and whatever the profitability of the firms. However, when the entrant is active, we show that introducing the T regime (compared with the CS one) implies either a welfare deterioration or an entry-prevention strategy by the historical firm. Therefore, the T regime cannot serve as an argument for the regulator to promote vertical separation of the historical firm (along the European Community line).
    Keywords: ELECTRICITY SECTOR; NETWORKS; REGULATION; UNIVERSAL SERVICE

    The structural and parametrical organization of elements of a power supply system in the conditions of network centrism

    Purpose. To develop indicators for the structural and parametrical organization of an effective active-adaptive service system for power supply systems under the Smart Grid ideology. Methodology. When the Smart Grid ideology is applied to increase the intellectualization of an electrical power system, the principle of network centrism must be introduced into the structural and parametrical organization of power supply system elements, which requires satisfying the principle of Situational Awareness. The essence of this principle is that information on the state of the system has to be presented in a form convenient for analysis, recognition, transfer, distribution and storage, and coordinated for flexible and optimal development at the subsystem and object-by-object levels. Results. Structural and parametrical optimization of power supply system elements under network centrism and the Smart Grid concept involves applying systems theory and the concepts of multicriteria optimizing synthesis. A modified adaptive indicator of the generalizing effect of synthesizing the structure of an active-adaptive service system for power supply systems is proposed, defined as the difference between the generalizing effects of the introduced structural option and the baseline one. Originality. The introduced adaptive indicator for synthesizing the service system takes into account the concept of «servicing the system on the basis of a response» in the presence of false and true failures. Practical value.
Using this indicator makes it possible to refine the procedure for selecting competing options so as to define the set of admissible structures that satisfy the criterion function. The article considers the development trends and organizational principles of intelligent power systems when the notion of network centrism is introduced under the Smart Grid ideology. As a solution to these problems, the creation of an active-adaptive system implementing the concept of «servicing the system on the basis of a response» is proposed.
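
The adaptive indicator above is defined as a difference of generalizing effects between a proposed structural option and the baseline one. A minimal sketch of how such an indicator could screen candidate structures (the function names and the scalar "effect" values are illustrative assumptions, not taken from the paper):

```python
def indicator(candidate_effect, baseline_effect):
    """Adaptive indicator: generalizing effect of a candidate
    service-system structure minus that of the baseline (illustrative)."""
    return candidate_effect - baseline_effect

def admissible_structures(candidates, baseline_effect):
    """Keep only candidate structures whose indicator is positive,
    i.e. those that improve on the baseline structure."""
    return [name for name, effect in candidates.items()
            if indicator(effect, baseline_effect) > 0]

options = {"structure_A": 5.2, "structure_B": 2.9, "structure_C": 4.1}
print(admissible_structures(options, 3.0))  # ['structure_A', 'structure_C']
```

In practice the effect values would come from the multicriteria synthesis the abstract describes; the sketch only shows the selection step.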

    Privacy preserving social network data publication

    The introduction of online social networks (OSN) has transformed the way people connect and interact with each other as well as share information. OSN have led to a tremendous explosion of network-centric data that could be harvested for better understanding of interesting phenomena such as sociological and behavioural aspects of individuals or groups. As a result, online social network service operators are compelled to publish the social network data for use by third party consumers such as researchers and advertisers. As social network data publication is vulnerable to a wide variety of reidentification and disclosure attacks, developing privacy-preserving mechanisms is an active research area. This paper presents a comprehensive survey of recent developments in social network data publishing: privacy risks, attacks, and privacy-preserving techniques. We survey and present various types of privacy attacks and the information exploited by adversaries to perpetrate privacy attacks on anonymized social network data. We present an in-depth survey of the state-of-the-art privacy-preserving techniques for social network data publishing, metrics for quantifying the anonymity level provided, and information loss, as well as challenges and new research directions. The survey helps readers understand the threats, the various privacy-preserving mechanisms, and their vulnerabilities to privacy breach attacks in social network data publishing, as well as observe common themes and future directions.
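
One family of techniques covered by such surveys is k-anonymity applied to node degrees: a published graph is k-degree anonymous when every degree value is shared by at least k nodes, so a degree-based re-identification attack can narrow an adversary down to no fewer than k candidates. A minimal sketch of the check (the function name is a hypothetical illustration, not from the survey):

```python
from collections import Counter

def is_k_degree_anonymous(degrees, k):
    """Return True if every degree value in the sequence is shared
    by at least k nodes -- the k-degree-anonymity condition."""
    counts = Counter(degrees)
    return all(c >= k for c in counts.values())

# A star on 5 nodes: the centre's degree (4) is unique, so an
# adversary who knows a victim's degree can re-identify the centre.
print(is_k_degree_anonymous([4, 1, 1, 1, 1], 2))  # False
# A cycle on 4 nodes: every node has degree 2.
print(is_k_degree_anonymous([2, 2, 2, 2], 2))     # True
```

Anonymization algorithms in this family edit the graph (adding or removing edges) until the condition holds while minimizing information loss.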

    Autonomic management of software defined networks : DAIM can provide the environment for building autonomy in distributed electronic environments - using OpenFlow networks as the case study

    University of Technology Sydney. Faculty of Engineering and Information Technology. Next generation networks need to support a broad range of services and functionalities, with capabilities such as autonomy, scalability, and adaptability, for managing network complexity. Network infrastructures today are becoming increasingly complex and challenging to administer due to their scale and heterogeneous nature. Furthermore, with the variety of vendors, services, and platforms involved, managing networks requires expert operators with expertise across all of these fields. This research relied on the distributed active information model (DAIM) to establish a foundation that will meet future network management requirements. DAIM is an information model for network solutions that addresses the challenges of autonomic functionality: network devices can make local and overall network decisions based on collected information. The DAIM model can facilitate network management by introducing autonomic behaviours. Autonomic behaviours lead communication networks to become self-managed and have emerged as a promising way to manage network complexity. Autonomic network management aims to relieve network operators of low-level tasks. Over the years, researchers have proposed a number of models for developing self-managed network solutions. One such example is the common information model (CIM), which describes the managed environment, attempts to merge and extend existing conventional management, and uses object-oriented constructs for overall network representation. However, the CIM has limitations in coping with complex distributed electronic environments spanning multiple disciplines. The goal of this research is the development of a network architecture, or a solution based on the DAIM model, that effectively distributes and automates network functions across various network devices.
The research first looks into the possibilities of local decision-making and the programmability of network elements in distributed electronic environments, with the intention of simplifying network management by providing abstracted network infrastructures. After investigating and implementing different elements of the DAIM model in network forwarding devices using virtual network switches, it finds that a common high-level interface and framework for network devices are essential for developing network solutions that will meet future network requirements. The outcome of this research is the development of the DAIM OS specification. The DAIM OS is a network forwarding device operating system that is compliant with the DAIM model for network infrastructure management and provides a high-level abstracted application programming interface (the DAIM OS API) for creating network service applications. Through the DAIM OS, network elements will be able to adapt to ever-changing environments to meet the goals of service providers, vendors, and end users. Furthermore, the DAIM OS API aims to reduce the complexity and development time of network service applications. If the developed DAIM OS specification is implemented and functions as predicted in the design analyses, it will be a significant milestone in the development of distributed network management. This dissertation has an introduction in chapter 1, followed by five parts that draw a blueprint for an information model as a distributed independent computing environment for autonomic network management. The five parts are: lending weight to the proposition, gaining confidence in the proposition, drawing conclusions, supporting work, and the appendices. The introduction in chapter 1 covers the motivations for the research, its main challenges, the overall objectives, and a review of the research contributions.
To lend weight to the proposition, the first part of the dissertation comprises chapter 2, which presents the background and literature review, and chapter 3, which lays a theoretical foundation for the proposed model. The foundation consists of a generic architecture for complex network management and agents that aggregate distributed network information. Chapter 3 is more a state-of-the-art treatment from software engineering than a real implementation for engineering autonomic network management. The second part of the dissertation, gaining confidence in the proposition, includes an implementation of the DAIM model in chapter 4, with tests reporting good performance regarding convergence and robustness for the service configuration process of network management, and a specification of true abstraction layers in chapter 5. The specification of true abstraction layers proposes a high-level abstraction for forwarding network devices and provides an application programming interface for network service applications developed by network operators and service providers. The implementation in chapter 4 is supported by the fourth part of the dissertation, in chapter 10, which supports the theoretical foundation, design, modelling, and development of the distributed active information model via simulation, emulation, and real environments. The third part of the dissertation draws conclusions in chapter 7, which contains the overall research summary, validation of the propositions, contributions and discussion, limitations, and recommendations for future work. Finally, Appendices A, B, C, and D provide the development code of the core DAIM model and describe the various testbed environment setups.

    Service-oriented networking architecture

    University of Technology, Sydney. Faculty of Information Technology. Demand for new services offered across shared networking infrastructure, such as the Internet, is at an ever-increasing level. Every day, innovative services are proposed and developed to meet end users' demands. However, the monolithic and inflexible design of current networking infrastructure constrains the deployment of such new services. Current networking infrastructure consists of a fixed set of connectivity functions governed by static overlays of Service Level Agreements between administrative boundaries. This infrastructure reduces new service deployment to a slow process of standardisation and legal agreements, and requires large capital expenditure for the roll-out of new network elements. Service-Oriented Networking is a new paradigm aimed at transforming networking infrastructure to meet new demands in a responsive and inexpensive manner. It proposes enabling on-demand introduction of services across shared and heterogeneous networking infrastructure. However, architecting the building blocks of a feasible service-oriented network poses many critical research challenges. The first challenge is providing an architecture that enables on-demand injection and programmability of services. This architecture must not compromise the scalability and performance levels of current networks. Furthermore, due to the heterogeneous nature of networks, this architecture must cater for a large number of platforms with varying capabilities. The second challenge is enforcing security among the services of competing entities leveraging shared infrastructure. With the possibility of faulty or malicious services being deployed, mechanisms are needed to isolate risk and maintain a robust network. These mechanisms must scale to a large number of entities and should not impose restrictions on programmability that would limit the operations of services.
Furthermore, this needs to be achieved without introducing checking operations into the path of network traffic, which would impede the performance of the network. The third challenge is guaranteeing Quality of Service (QoS) levels across competing services in a differentiated and fair manner. Providing QoS guarantees is no longer just a problem of bandwidth allocation; it now involves allocating the computational resources needed to fulfil a service. The critical issue is formulating a resource allocation scheme among competing services where resource requirements or availability cannot be predetermined. Again, any mechanism used must be scalable to large numbers of services. Recent research in the fields of Active and Programmable Networks has produced novel architectures which adopt user-extensible software components or programmable network processors to enable rapid service deployment. However, it is currently impractical to adopt such concepts, as the associated challenges (outlined above) have only been partially addressed. Meanwhile, commercial platforms are becoming both faster and increasingly more programmable. However, commercial manufacturers have developed their platforms in a proprietary and closed manner, thereby restricting users from deploying new services or customising existing ones. This thesis explores a holistic approach to overcoming the challenges of Service-Oriented Networks. Specifically, it presents a novel architecture called Serviter: a Service-Oriented Network Architecture for Shared Networks. With this architecture, a new class of network elements enriched with programmable functionality can be deployed to serve as the fundamental building blocks of a new Service-Oriented Networking model. Under this model, service provisioning responsibilities are divided among manufacturers, network providers, and service providers.
Manufacturers' responsibilities focus on the provisioning of increasingly programmable high-performance infrastructure and their system-level drivers. Network providers are responsible for the management of their infrastructure, which would be divided into isolated shares and opened to third party service providers. The service providers are then able to deploy new services within their shares of a domain. These services can then be aggregated across domains to provision end-to-end services through the purchase of dedicated shares, or a collaborative model, spanning the required paths. Serviter enables on-demand service deployment onto commercial programmable platforms leveraging their high performance and scalability characteristics. These characteristics are maintained by enforcing the separation of the control and the forwarding planes. A programmability interface is provided through a layer of System Services. To cater for the heterogeneous nature of networks, the System Services layer is extensible. It enables each manufacturer to utilise a unified programmability approach to develop and deploy new System Services to exploit the functionality of their reprogrammable hardware. The programmability of the underlying modules is offered through a structured and flexible approach of Active Flow Manipulation (AFM) Paths. Users deploy User Services that construct AFM Paths to offer new network services. Serviter introduces novel scalable and simple partitioning techniques to address the issues of network integrity and security. Serviter provides each service provider with a secure, separate, and resource assured partition, representing a 'Virtual Router', to accommodate their services. These partitions span all components and restrict services from constructing AFM Paths on traffic outside of the Virtual Networks associated with their partition. 
To allocate internal router resources among competing partitions, and among services within a partition, Serviter employs a scalable and autonomic resource management model called Control plane-Quality of Service (C-QoS). Because it is difficult to determine resource availability in heterogeneous infrastructure, or the resource requirements of a service, this model dynamically adapts to demand and availability patterns on a per-resource basis. To demonstrate the significance of the new architecture, this thesis presents an implementation of Serviter along with its deployment onto an advanced commercial networking platform. The implementation is assessed and evaluated for its ability to map onto commercial infrastructure, its partitioning enforcement, and its overall performance and scalability. This platform is used to implement novel services demonstrating Serviter's capabilities. It is shown that Serviter is capable of facilitating on-demand deployment of a variety of services constrained only by forwarding-plane capabilities. This architecture opens the opportunity for service-oriented networking in large-scale shared networks, putting forth new challenging issues in the complete automation of service deployment - specifically, capability discovery, location selection, and dynamic domain aggregation to provide end-to-end service construction.
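
The C-QoS model described above adapts allocations to demand and availability per resource. As a rough sketch of the underlying idea, a single resource can be divided among partitions in proportion to their current demand (the function and its demand-proportional rule are assumptions for illustration; the thesis's actual mechanism is not reproduced here):

```python
def allocate_resource(capacity, demands):
    """Divide one control-plane resource (e.g. a CPU share) among
    partitions in proportion to their currently observed demand.
    Illustrative stand-in for a demand-adaptive, per-resource scheme."""
    total = sum(demands.values())
    if total == 0:
        # No demand reported yet: split the capacity evenly.
        return {p: capacity / len(demands) for p in demands}
    return {p: capacity * d / total for p, d in demands.items()}

shares = allocate_resource(100.0, {"partition_a": 1.0, "partition_b": 3.0})
print(shares)  # {'partition_a': 25.0, 'partition_b': 75.0}
```

Re-running the allocation as demand observations change gives the dynamic, per-resource adaptation the abstract alludes to.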

    A Quality of Service Monitoring System for Service Level Agreement Verification

    Service-level-agreement (SLA) monitoring measures network Quality-of-Service (QoS) parameters to evaluate whether service performance complies with the SLAs. It is becoming increasingly important to both Internet service providers (ISPs) and their customers. However, the rapid expansion of the Internet makes SLA monitoring a challenging task. As an efficient method to reduce both the complexity and overheads of QoS measurements, sampling techniques have been used in SLA monitoring systems. In this thesis, I conduct a comprehensive study of sampling methods for network QoS measurements. I develop an efficient sampling strategy, which makes the measurements less intrusive and more efficient, and I design network performance monitoring software that monitors QoS parameters such as packet delay, packet loss and jitter for SLA monitoring and verification. The thesis starts with a discussion of the characteristics of QoS metrics relevant to the design of the monitoring system and the challenges in monitoring these metrics. Major measurement methodologies for monitoring these metrics are introduced. Existing monitoring systems can be broadly classified into two categories: active and passive measurement. The advantages and disadvantages of both methodologies are discussed, and an active measurement methodology is chosen to realise the monitoring system. Secondly, the thesis describes the most common sampling techniques, such as systematic sampling, Poisson sampling and stratified random sampling. Theoretical analysis is performed on the fundamental limits of sampling accuracy, and on the performance of the sampling techniques, which is validated using simulation with real traffic. Both the theoretical analysis and the simulation results show that stratified random sampling with optimum allocation achieves the best performance of the sampling methods compared.
However, stratified sampling with optimum allocation requires extra statistics about the parent traffic traces, which cannot be obtained in real applications. To overcome this shortcoming, a novel adaptive stratified sampling strategy is proposed, based on stratified sampling with optimum allocation. A least-mean-square (LMS) linear prediction algorithm is employed to predict the required statistics from past observations. Simulation results show that the proposed adaptive stratified sampling method closely approaches the performance of stratified sampling with optimum allocation. Finally, a detailed introduction to the design of the SLA monitoring software is presented. Measurement results that calibrate the systematic error in the measurements are displayed. Measurements between various remote sites have demonstrated the impressively good QoS provided by Australian ISPs for premium services.
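
Stratified sampling with optimum (Neyman) allocation, which the thesis finds best-performing, assigns the sampling budget to each stratum in proportion to N_h·S_h, the stratum size times its standard deviation. A minimal sketch of that allocation rule (the per-stratum standard deviations are exactly the statistics that, as noted above, must be predicted in practice, e.g. by the LMS scheme):

```python
def neyman_allocation(stratum_sizes, stratum_stds, budget):
    """Optimum (Neyman) allocation: the sample size for stratum h is
    proportional to N_h * S_h (stratum size times standard deviation)."""
    weights = [n * s for n, s in zip(stratum_sizes, stratum_stds)]
    total = sum(weights)
    return [budget * w / total for w in weights]

# Two equally sized strata; the second is three times as variable,
# so it receives three times the sampling effort.
print(neyman_allocation([100, 100], [1.0, 3.0], 40))  # [10.0, 30.0]
```

Allocating effort toward the more variable strata is what drives the variance reduction over plain systematic or Poisson sampling.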

    Actor-Network Theory and its role in understanding the implementation of information technology developments in healthcare

    Background: Actor-Network Theory (ANT) is an increasingly influential, but still deeply contested, approach to understanding humans and their interactions with inanimate objects. We argue that health services research, and in particular evaluations of complex IT systems in health service organisations, may benefit from being informed by Actor-Network Theory perspectives. Discussion: Despite some limitations, an Actor-Network Theory-based approach is conceptually useful in helping to appreciate the complexity of reality (including the complexity of organisations) and the active role of technology in this context. This can prove helpful in understanding how social effects are generated as a result of associations between different actors in a network. Of central importance in this respect is that Actor-Network Theory provides a lens through which to view the role of technology in shaping social processes. Attention to this shaping role can contribute to a more holistic appreciation of the complexity of technology introduction in healthcare settings. It can also prove practically useful in providing a theoretically informed approach to sampling (by drawing on informants that are related to the technology in question) and analysis (by providing a conceptual tool and vocabulary that can form the basis for interpretations). We draw on existing empirical work in this area and our ongoing work investigating the integration of electronic health record systems introduced as part of England's National Programme for Information Technology to illustrate salient points. Summary: Actor-Network Theory needs to be used pragmatically with an appreciation of its shortcomings. Our experiences suggest it can be helpful in investigating technology implementations in healthcare settings.

    Cost-causality based tariffs for distribution networks with distributed generation

    Around the world, the amount of distributed generation (DG) deployed in distribution networks is increasing. It is well understood that DG has the potential to reduce network losses, decrease network utilization, postpone new investment in central generation, increase security of supply, and contribute to service quality through voltage regulation. In addition, DG can increase competition in electricity markets and, in the case of renewable DG, provide environmental benefits. The increasing penetration of DG in power systems worldwide has changed the concept of the distribution network. Traditionally, the costs of these networks were allocated only to demand customers, not generation, because these networks were viewed as serving demand only. In this sense, traditional distribution networks were considered passive networks, unlike transmission networks, which serve both generation and demand and have always been considered active networks. The introduction of DG transforms a distribution network from a passive network into an active network. Present tariff schemes at the distribution level were conceived under the traditional concept of distribution and do not recognize the new situation. Tariffs have been, and still are, designed for networks which only have loads connected. These tariffs, which normally average costs among network users, are not able to capture the real costs and benefits of customers such as DG. Consequently, traditional tariff schemes at the distribution level can affect the competitiveness of DG and can actually hinder or stop its development. In this work a cost-causality based tariff is proposed for distribution, taking into account that new distribution networks tend to be active networks, much like transmission networks. Two concepts based on the same philosophy used for transmission pricing are proposed.
The first is nodal pricing for distribution networks, an economically efficient pricing mechanism for short-term operation with which there is a great deal of experience and confidence from its use at the transmission level. The second is an extent-of-use method for the allocation of fixed costs that uses the marginal changes in a circuit's current flow with respect to active and reactive power changes at the nodes, and is thus called the Amp-mile method. The proposed scheme for distribution pricing proves to give adequate locational and operational price signals for both generation and loads. An example application based on a typical 30 kV rural radial network in Uruguay is used to show the properties of the proposed methodology.
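
The Amp-mile idea above allocates a circuit's fixed cost by extent of use: the marginal sensitivity of the circuit's current to each user's power injection, weighted by circuit length. A minimal sketch under that reading (the sensitivity values and names are hypothetical illustrations; the thesis's exact formulation is not reproduced here):

```python
def amp_mile_shares(circuit_cost, length_miles, current_sensitivity):
    """Split one circuit's fixed cost among users in proportion to
    |dI/dP| * length -- an extent-of-use ("Amp-mile") style rule.
    current_sensitivity maps user -> marginal change of the circuit's
    current with respect to that user's injection (assumed precomputed)."""
    extent = {u: abs(s) * length_miles
              for u, s in current_sensitivity.items()}
    total = sum(extent.values())
    return {u: circuit_cost * e / total for u, e in extent.items()}

# A load raises the circuit current; a DG unit reduces it, but the
# magnitude of its use still earns it a share of the circuit's cost.
shares = amp_mile_shares(1000.0, 2.0, {"load": 0.75, "dg": -0.25})
print(shares)  # {'load': 750.0, 'dg': 250.0}
```

Taking the absolute value of the sensitivity reflects that both increasing and counter-flowing users occupy circuit capacity; a signed variant could instead credit DG for relieving the circuit.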