
    Enabling Micro-level Demand-Side Grid Flexibility in Resource-Constrained Environments

    The increased penetration of uncertain and variable renewable energy presents various resource and operational electric grid challenges. Micro-level (household and small commercial) demand-side grid flexibility could be a cost-effective strategy to integrate high penetrations of wind and solar energy, but literature and field deployments exploring the necessary information and communication technologies (ICTs) are scant. This paper presents an exploratory framework for enabling information-driven grid flexibility through the Internet of Things (IoT), and a proof-of-concept wireless sensor gateway (FlexBox) to collect the parameters necessary for adequately monitoring and actuating the micro-level demand side. In the summer of 2015, thirty sensor gateways were deployed in the city of Managua (Nicaragua) to develop a baseline for a small-scale demand response pilot implementation in the near future. FlexBox field data has begun shedding light on relationships between ambient temperature and load energy consumption, on load and building envelope energy efficiency challenges, on communication network latency challenges, and on opportunities to engage existing demand-side user behavioral patterns. Information-driven grid flexibility strategies present a great opportunity to develop new technologies, system architectures, and implementation approaches that can easily scale across regions, incomes, and levels of development.
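As a rough illustration of the kind of micro-level telemetry such a gateway might report (the FlexBox firmware itself is not described in the abstract; the endpoint URL and payload fields below are assumptions), a gateway loop could periodically post load and ambient readings to a collection server:

```python
# Rough illustration only (not the FlexBox firmware): a gateway loop that
# periodically posts household load and ambient-temperature readings to a
# collection server. Endpoint URL and payload fields are assumptions.
import json
import time
import urllib.request

ENDPOINT = "http://flexbox-server.example.org/telemetry"   # hypothetical endpoint

def read_sensors():
    # Placeholder for real meter / thermistor reads on the gateway hardware.
    return {"gateway_id": "house-07",
            "timestamp": time.time(),
            "load_watts": 412.5,
            "ambient_temp_c": 31.2}

while True:
    body = json.dumps(read_sensors()).encode()
    req = urllib.request.Request(ENDPOINT, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)     # generous timeout for lossy, high-latency links
    time.sleep(60)                              # one sample per minute
```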

    Research Based on Telecommunication in Mobile Service Providers' Performance Using an Enhanced Naive Bayes Classifier

    In recent years, mobile service providers have rapidly expanded across all countries. Given unpredictable development trends, mobile service providers are essential to knowledge-based service businesses. Performance may be improved by creating and disseminating new information through innovation activities based on the use of business intelligence. This research examined the performance of mobile service providers across countries using an enhanced Naive Bayes classifier based on telecommunication data. In comparison with quantitative variables, Naive Bayes performs quite well. First, data is collected and a normalization technique is applied for preprocessing. Feature extraction is carried out using Term Frequency-Inverse Document Frequency (TF-IDF), and a Decision Tree algorithm is used for data analysis. Features are then selected with a two-stage Markov blanket algorithm. The enhanced Naive Bayes classifier is the proposed algorithm for the telecommunication analysis, and finally the performance of the system is evaluated. The proposed algorithm's assessment of mobile service providers' performance is compared with existing algorithms on the following metrics: throughput, packet loss, packet duplication, and user quality of experience. The proposed algorithm is more effective and produces better results.
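The paper's exact pipeline is not reproduced here; the sketch below, in Python with scikit-learn, only approximates the described flow (normalization, TF-IDF feature extraction, feature selection, Naive Bayes classification). The dataset file, column names, and the use of a mutual-information filter in place of the two-stage Markov blanket selector are assumptions.

```python
# Approximate sketch of the described pipeline, not the authors' code.
# The Markov blanket selection stage is stood in for by a mutual-information
# filter; dataset path and column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

df = pd.read_csv("provider_records.csv")            # hypothetical dataset
X_train, X_test, y_train, y_test = train_test_split(
    df["service_text"], df["performance_label"], test_size=0.2, random_state=0)

model = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True)),              # normalization + feature extraction
    ("select", SelectKBest(mutual_info_classif, k=500)),     # stand-in for Markov blanket selection
    ("nb", MultinomialNB()),                                  # Naive Bayes classifier
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```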

    NASA and the challenge of ISDN: The role of satellites in an ISDN world

    To understand what role satellites may play in the Integrated Services Digital Network (ISDN), it is necessary to understand the concept of ISDN, including the key organizations involved, the current status of key standards recommendations, and domestic and international progress in implementing ISDN. Each of these areas is explained. Also covered are a summary of the technical performance criteria for ISDN, current standards for satellites in ISDN, the key players in the ISDN environment, and the steps that can be taken to encourage the application of satellites in ISDN.

    AN INVESTIGATION INTO AN EXPERT SYSTEM FOR TELECOMMUNICATION NETWORK DESIGN

    Many telephone companies, especially in Eastern Europe and the 'third world', are developing new telephone networks. In such situations the network design engineer needs computer-based tools that not only supplement his own knowledge but also help him to cope with situations where not all the information necessary for the design is available. Traditional network design tools are often somewhat removed from the practical world for which they were developed: they frequently ignore the significant uncertain and statistical nature of the input data, they use data taken from a fixed point in time to solve a time-variable problem, and their cost formulae tend to be an average per line or port rather than for the specific case. Indeed, data is often not available or simply unreliable. The engineer has to rely on rules of thumb honed over many years of experience in designing networks and must be able to cope with missing data. The complexity of telecommunication networks and the rarity of specialists in this area often make the network design process very difficult for a company. It is therefore an important area for the application of expert systems. Designs resulting from the use of expert systems will have a measure of uncertainty in their solution, and adequate account must be taken of the risk involved in implementing their design recommendations.
The thesis reviews the status of expert systems as used for telecommunication network design. It further shows that such an expert system needs to decompose a large network problem into its component parts, use different modules to solve them, and then combine the results to create a total solution. It shows how the various sub-problems are integrated to solve the general network design problem. The thesis then presents details of such an expert system and the databases necessary for network design: three new algorithms are introduced for traffic analysis, node location and network design, and these produce results that correlate closely with designs taken from BT Consultancy archives.
It was initially supposed that an efficient combination of existing techniques for dealing with uncertainty within expert systems would suffice as the basis of the new system. It soon became apparent, however, that to allow for the differing attributes of facts, rules and data, and the varying degrees of importance or rank within each area, a new and radically different method would be needed. Having investigated the existing uncertainty problem, it is believed that a new, more rational method has been found. The work has involved the invention of the 'Uncertainty Window' technique and its testing on various aspects of network design, including demand forecasting, network dimensioning, node and link system sizing, etc., using a selection of networks designed by BT Consultancy staff. From the results of the analysis, modifications to the technique have been incorporated with the aim of optimising the heuristics and procedures, so that the structure gives an accurate solution as early as possible. The essence of the process is one of associating the uncertainty windows with their relevant rules, data and facts, which provides the network designer with an insight into the uncertainties that have helped produce the overall system design: it indicates which sources of uncertainty and which assumptions were critical and merit further investigation to improve confidence in the overall design. The windowing technique works by virtue of its ability to retain the composition of the uncertainty and its associated values, assumptions, etc., and so allows better solutions to be attained.
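The abstract does not spell out the Uncertainty Window formulation itself; purely as a loose illustration of the idea of attaching a window to each fact and retaining its composition through design rules, one might sketch it as follows (all values and rule names are hypothetical):

```python
# Loose illustration only (not the thesis's actual formulation): an
# "uncertainty window" kept as a [low, high] interval together with the
# sources that produced it, so a design result carries its own provenance.
from dataclasses import dataclass, field

@dataclass
class UncertainValue:
    low: float
    high: float
    sources: list = field(default_factory=list)

def multiply(a: UncertainValue, b: UncertainValue, rule: str) -> UncertainValue:
    # A dimensioning rule combines two windows (positive intervals) and
    # records itself as another contributor, keeping the result explainable.
    return UncertainValue(a.low * b.low, a.high * b.high,
                          a.sources + b.sources + [rule])

# Hypothetical demand-forecast inputs.
lines = UncertainValue(900, 1100, ["line-count survey"])
traffic_per_line = UncertainValue(0.04, 0.07, ["assumed busy-hour Erlangs per line"])

total_traffic = multiply(lines, traffic_per_line, "dimensioning rule")
print(f"{total_traffic.low:.0f}-{total_traffic.high:.0f} Erlangs,"
      f" driven by: {total_traffic.sources}")
```

The point of retaining the source list is the one the abstract emphasises: the designer can see which assumptions dominate the width of the final window and therefore where further investigation buys the most confidence.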

    Data distribution satellite

    A description is given of a data distribution satellite (DDS) system. The DDS would operate in conjunction with the Tracking and Data Relay Satellite System to give ground-based users real-time, two-way access to instruments in space and to space-gathered data. The scope of work includes the following: (1) user requirements are derived; (2) communication scenarios are synthesized; (3) system design constraints and projected technology availability are identified; (4) the DDS communications payload configuration is derived, and the satellite is designed; (5) requirements for earth terminals and network control are given; (6) system costs are estimated, both life-cycle costs and user fees; and (7) technology developments are recommended, and a technology development plan is given. The most important results obtained are as follows: (1) a satellite designed for launch in 2007 is feasible and has 10 Gb/s capacity, 5.5 kW power, and 2000 kg mass; (2) DDS features include on-board baseband switching, use of Ku- and Ka-bands, and multiple optical intersatellite links; and (3) system user costs are competitive with projected terrestrial communication costs.

    Integrated monitoring of multi-domain backbone connections -- Operational experience in the LHC optical private network

    Novel large-scale research projects often require cooperation between project partners spread around the entire world. They need not only huge computing resources but also a reliable network to operate on. The Large Hadron Collider (LHC) at CERN is a representative example of such a project. Its experiments produce a vast amount of data, which is of interest to researchers around the world. For transporting the data from CERN to 11 data processing and storage sites, an optical private network (OPN) has been constructed. As the experiment data is highly valuable, the LHC places very high requirements on the underlying network infrastructure. In order to fulfil those requirements, the connections have to be managed and monitored permanently. In this paper, we present the integrated monitoring solution developed for the LHCOPN. We first outline the requirements and show how they are met on the individual network layers. After that, we describe how those single-layer measurements can be combined into an integrated view. We cover design concepts as well as tool implementation highlights. Comment: International Journal of Computer Networks & Communications (IJCNC)
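The paper's actual tooling is not shown in this abstract; as a minimal sketch of the underlying idea of merging per-layer checks into one status per backbone connection (layer names and thresholds below are assumptions, not LHCOPN values), one could write:

```python
# Minimal sketch (not the LHCOPN tooling itself) of combining per-layer
# measurements into one integrated status per backbone connection.
# Layer names and thresholds are illustrative assumptions.
from enum import Enum

class Status(Enum):
    OK = 0
    DEGRADED = 1
    DOWN = 2

def layer_status(metrics: dict) -> dict:
    """Map raw per-layer measurements to a per-layer status."""
    return {
        "optical":  Status.OK if metrics["light_level_dbm"] > -28 else Status.DOWN,
        "ip":       Status.OK if metrics["packet_loss_pct"] < 0.1 else Status.DEGRADED,
        "transfer": Status.OK if metrics["throughput_gbps"] > 5 else Status.DEGRADED,
    }

def integrated_status(per_layer: dict) -> Status:
    """The connection is only as healthy as its worst layer."""
    return max(per_layer.values(), key=lambda s: s.value)

measurements = {"light_level_dbm": -24.0, "packet_loss_pct": 0.02, "throughput_gbps": 7.3}
per_layer = layer_status(measurements)
print(per_layer, "->", integrated_status(per_layer))
```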

    Live-Migration in Cloud Computing Environment

    Global IP traffic has increased fivefold over the past five years and is expected to increase threefold over the next five; overall IP traffic was projected to grow at a compound annual growth rate (CAGR) of nearly 3.9x from 2013 to 2018. Service Providers are experiencing this exponential growth of IP traffic, which comes from the sharply increased number of devices and users connected to the Internet, along with their demands for various resources and network services such as multimedia content distribution, security and mobility. Service Providers are therefore finding it difficult to introduce new revenue-generating services and to optimize and adapt their expensive infrastructures, data centers, wide-area networks and enterprise networks (COMpuTIN, 2015). These networks continue to have serious known problems, such as agility, manageability, mobility and time-to-application, that have not been successfully addressed so far. Thus, novel Network Function Virtualization (NFV) models and Software-Defined Networking (SDN) technologies have been proposed to address non-optimal capital and operational expenditures and the networks' limitations (Lopez, 2014; Hakiri and Berthou, 2015).
In order to solve these issues, the European Telecommunications Standards Institute (ETSI) and other standards organizations are proposing new network architecture approaches. According to ETSI, Network Functions Virtualization is a powerful emerging technique with widespread applicability, aiming to transform the way network operators design networks by evolving standard IT virtualization technology to consolidate many network equipment types: high-volume servers, routers, switches and storage (Xilouris et al., 2014). In this thesis, current Software-Defined Networking (SDN) and Network Function Virtualization (NFV) solutions were used to build a use case that can address the growth of network traffic beyond its maximum capacity. To develop and evaluate the solution, the OpenStack cloud computing platform was installed in order to deploy, manage and test a live migration use case.
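The thesis's own deployment scripts are not reproduced in the abstract; as a minimal sketch, assuming python-novaclient and keystoneauth1 are available and that the endpoint, credentials, instance name and target host are placeholders, a live migration could be triggered programmatically like this (argument names for live_migrate vary slightly across Nova API microversions):

```python
# Minimal sketch (assumptions, not the thesis's actual scripts) of triggering
# a live migration through python-novaclient against an OpenStack deployment.
from keystoneauth1 import loading, session
from novaclient import client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="http://controller:5000/v3",        # hypothetical Keystone endpoint
    username="admin", password="secret",
    project_name="admin",
    user_domain_name="Default", project_domain_name="Default")
nova = client.Client("2.1", session=session.Session(auth=auth))

server = nova.servers.find(name="demo-vm")       # hypothetical instance
# Move the running VM to another compute node without shutting it down.
server.live_migrate(host="compute-2", block_migration=True)
```

Setting block_migration=True copies the instance's disks during the migration, which avoids the need for shared storage between the two compute nodes; with shared storage it would normally be left False.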

    The Relative Costs of Local Telephony Across Five Countries

    In this paper we take three steps towards evaluating the relative performance of the telecommunications markets in these five countries. First, we identify and then limit our examination to only those components of the telecommunications network that may generate market power concerns. We determine that market power concerns, where they exist at all, are limited to those facilities connecting a customer with its neighboring central office, collectively called the local loop. Second, we identify regional factors that may significantly affect relative costs for the local loop. Using data and a cost model from the US to examine these regional factors, we find that customer density is the most significant. Third, using this same cost model along with regional data, we estimate local loop costs relative to the US for New Zealand, Australia, the UK and Sweden.

    Space station data system analysis/architecture study. Task 2: Options development, DR-5. Volume 2: Design options

    The primary objective of Task 2 is the development of an information base that will support the conduct of trade studies and provide sufficient data to make key design/programmatic decisions. This includes: (1) the establishment of option categories that are most likely to influence Space Station Data System (SSDS) definition; (2) the identification of preferred options in each category; and (3) the characterization of these options with respect to performance attributes, constraints, cost and risk. This volume contains the options development for the design category. This category comprises alternative structures, configurations and techniques that can be used to develop designs that are responsive to the SSDS requirements. The specific areas discussed are software, including database management and distributed operating systems; system architecture, including fault tolerance, system growth/automation/autonomy and system interfaces; time management; and system security/privacy. Also discussed are space communications and local area networking.