
    A new test framework for communications-critical large scale systems

    None of today’s large scale systems could function without the reliable availability of a varied range of network communications capabilities. Whilst software, hardware and communications technologies have advanced throughout the past two decades, the methods commonly used by industry for testing large scale systems which incorporate critical communications interfaces have not kept pace. This paper argues the need for a specifically tailored framework to achieve effective and precise testing of communications-critical large scale systems (CCLSSs). The paper briefly discusses how generic test approaches lead to inefficient and costly test activities in industry. It then outlines the features of an alternative CCLSS domain-specific test framework and provides an example based on a real case study. The paper concludes with an evaluation of the benefits observed during the case study and an outline of the available evidence that such benefits can be realized with other comparable systems.

    APMEC: An Automated Provisioning Framework for Multi-access Edge Computing

    Novel use cases and verticals such as connected cars and human-robot cooperation in the areas of 5G and the Tactile Internet can significantly benefit from the flexibility and reduced latency provided by Network Function Virtualization (NFV) and Multi-Access Edge Computing (MEC). Existing frameworks for managing and orchestrating MEC and NFV are either tightly coupled or completely separated. The former design is inflexible and increases the complexity of a single framework, whereas the latter leads to inefficient use of computation resources because information is not shared. We introduce APMEC, a dedicated framework for MEC that enables collaboration with the management and orchestration (MANO) frameworks for NFV. The new design allows allocated network services to be reused, thus maximizing resource utilization. Measurement results show that APMEC can allocate up to 60% more network services. Developed on top of OpenStack, APMEC is an open source project, available for collaboration and facilitating further research activities.
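The service-reuse idea described above can be illustrated with a minimal sketch (all names are hypothetical; this is not the actual APMEC API): a registry keys running network-service instances by their descriptor, so a later request with the same descriptor reuses the existing instance instead of triggering a new allocation.

```python
class ServiceRegistry:
    """Toy sketch of network-service reuse during MEC provisioning.

    Allocated network services are keyed by their descriptor, so a
    later request with the same descriptor reuses the running instance
    instead of allocating a new one.
    """

    def __init__(self):
        self._instances = {}   # descriptor -> instance id
        self.allocations = 0   # count of real allocations performed

    def provision(self, descriptor):
        if descriptor in self._instances:
            return self._instances[descriptor]   # reuse existing service
        self.allocations += 1
        instance = f"ns-{self.allocations}"
        self._instances[descriptor] = instance
        return instance

# Hypothetical descriptors; two requests share one allocated service.
registry = ServiceRegistry()
a = registry.provision("edge-video-cache")
b = registry.provision("edge-video-cache")   # reused, no new allocation
c = registry.provision("v2x-gateway")
```

Reuse keeps the allocation count below the request count, which is the mechanism behind the higher resource utilization the abstract reports.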

    ERIGrid Holistic Test Description for Validating Cyber-Physical Energy Systems

    Smart energy solutions aim to modify and optimise the operation of existing energy infrastructure. Such cyber-physical technology must be mature before deployment to the actual infrastructure, and competitive solutions will have to be compliant with standards still under development. Achieving this technology readiness and harmonisation requires reproducible experiments and appropriately realistic testing environments. Such testbeds for multi-domain cyber-physical experiments are complex in and of themselves. This work addresses a method for the scoping and design of experiments where both testbed and solution each require detailed expertise. This empirical work first revisited present test description approaches, developed a new description method for cyber-physical energy systems testing, and matured it by means of user involvement. The new Holistic Test Description (HTD) method facilitates the conception, deconstruction and reproduction of complex experimental designs in the domains of cyber-physical energy systems. This work develops the background and motivation, offers a guideline and examples for the proposed approach, and summarises experience from three years of its application. This work received funding from the European Community’s Horizon 2020 Program (H2020/2014–2020) under project “ERIGrid” (Grant Agreement No. 654113).

    Non-Intrusive Subscriber Authentication for Next Generation Mobile Communication Systems

    The last decade has witnessed massive growth in both the technological development and the consumer adoption of mobile devices such as mobile handsets and PDAs. The recent introduction of wideband mobile networks has enabled the deployment of new services with access to traditionally well protected personal data, such as banking details or medical records. Secure user access to this data has, however, remained a function of the mobile device's authentication system, which is only protected from masquerade abuse by the traditional PIN, originally designed to protect against telephony abuse. This thesis presents novel research in relation to advanced subscriber authentication for mobile devices. The research began by assessing the threat of masquerade attacks on such devices by way of a survey of end users. This revealed that the current methods of mobile authentication remain extensively unused, leaving terminals highly vulnerable to masquerade attack. Further investigation revealed that, in the context of the more advanced wideband enabled services, users are receptive to many advanced authentication techniques and principles, including the discipline of biometrics, which naturally lends itself to the area of advanced subscriber based authentication. To address the requirement for a more personal authentication capable of being applied in a continuous context, a novel non-intrusive biometric authentication technique was conceived, drawn from the discrete disciplines of biometrics and Auditory Evoked Responses. The technique forms a hybrid multi-modal biometric where variations in the behavioural stimulus of the human voice (due to the propagation effects of acoustic waves within the human head) are used to verify the identity of a user. The resulting approach is known as the Head Authentication Technique (HAT). Evaluation of the HAT authentication process is realised in two stages.
Firstly, the generic authentication procedures of registration and verification are automated within a prototype implementation. Secondly, a HAT demonstrator is used to evaluate the authentication process through a series of experimental trials involving a representative user community. The results from the trials confirm that multiple HAT samples from the same user exhibit a high degree of correlation, yet samples between users exhibit a high degree of discrepancy. Statistical analysis of the prototype's performance realised early system error rates of FNMR = 6% and FMR = 0.025%. The results clearly demonstrate the authentication capabilities of this novel biometric approach and the contribution this new work can make to the protection of subscriber data in next generation mobile networks. Orange Personal Communication Services Lt
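The FNMR and FMR figures quoted above are standard biometric error rates, computed by thresholding match scores. A minimal sketch of that computation (the scores below are hypothetical, not HAT trial data): FNMR is the fraction of genuine attempts scoring below the decision threshold, and FMR the fraction of impostor attempts scoring at or above it.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Compute (FNMR, FMR) for a similarity-score biometric matcher.

    FNMR: fraction of genuine attempts falling below the threshold.
    FMR:  fraction of impostor attempts at or above the threshold.
    """
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fnmr, fmr

# Hypothetical match scores in [0, 1]; higher means a closer match.
genuine = [0.91, 0.88, 0.95, 0.52, 0.90]
impostor = [0.10, 0.22, 0.31, 0.15, 0.81]
fnmr, fmr = error_rates(genuine, impostor, threshold=0.6)
```

Raising the threshold trades FMR against FNMR, which is why evaluations like the one described report both figures for a chosen operating point.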

    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming formulation of the bandwidth scheduling problem which takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
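The admission check underlying advance bandwidth reservation can be sketched in a few lines (this is an illustrative single-link check, not the paper's ILP formulation or its optimization algorithms): a request with a known time window and bandwidth is admitted only if, at every instant it overlaps existing reservations, the total reserved bandwidth stays within link capacity.

```python
def can_admit(existing, request, capacity):
    """Check whether an advance reservation fits on a single link.

    existing: list of (start, end, bandwidth) tuples already admitted.
    request:  (start, end, bandwidth) for the new transfer.
    Intervals are half-open [start, end); load is piecewise constant,
    so it suffices to test the request start and every reservation
    start inside the request window.
    """
    r_start, r_end, r_bw = request
    points = {r_start}
    points.update(s for s, e, b in existing if r_start <= s < r_end)
    for t in points:
        load = sum(b for s, e, b in existing if s <= t < e)
        if load + r_bw > capacity:
            return False
    return True

# Hypothetical 10 Gb/s link with two transfers booked in advance.
booked = [(0, 4, 6), (2, 6, 3)]
rejected = can_admit(booked, (3, 5, 2), capacity=10)  # peak 6+3+2 = 11 > 10
accepted = can_admit(booked, (4, 6, 6), capacity=10)  # peak 3+6 = 9 <= 10
```

An ILP formulation generalises this to multiple links and flexible start times, optimising which requests to admit and when to schedule them rather than testing them one by one.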

    Architectural Model for Evaluating Space Communication Networks

    The space exploration endeavor started in 1957 with the launch and operation of the first man-made satellite, the USSR's Sputnik 1. Since then, multiple space programs have been developed, pushing the limits of technology and science but foremost unveiling the mysteries of the universe. In all these cases, the need for flexible and reliable communication systems has been primordial, allowing the return of collected science data and, when necessary, ensuring the well-being and safety of astronauts. To that end, multiple space communication networks have been globally deployed, be it through geographically distributed ground assets or through space relay satellites. Until now most of these systems have relied upon mature technology standards that have been adapted to the specific needs of particular missions and customers. Nevertheless, current trends in space programs suggest that a shift of paradigm is needed: an Internet-like space network would increase the capacity and reliability of an interplanetary network while dramatically reducing its overall costs. In this context, the System Architecting Paradigm can be a good starting point. Through its formal decomposition of the system, it can help determine the architecturally distinguishing decisions and identify potential areas of commonality and cost reduction. This thesis presents a general framework to evaluate space communication relay systems for the near Earth domain. It indicates the sources of complexity in the modeling process, and discusses the validity and appropriateness of past approaches to the problem. In particular, it proposes a discussion of current models vis-à-vis the System Architecting Paradigm and how they fit into tradespace exploration studies. Next, the thesis introduces a computational performance model for the analysis and fast simulation of space relay satellite systems.
The tool takes advantage of a purpose-built rule-based expert system for storing the constitutive elements of the architecture and performing logical interactions between them. Analogously, it uses numerical models to assess the network topology over a given timeframe, perform physical layer computations and calculate plausible schedules for the overall system. In particular, it presents a newly developed heuristic scheduler that guarantees prioritization of specific missions and services while ensuring manageable computational times.
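Priority-first contact scheduling of the kind the abstract describes can be sketched with a simple greedy heuristic (this is an illustration, not the thesis's scheduler; mission names and times are hypothetical): serve requests in priority order, admitting each one only if its window does not overlap a contact already on the antenna's schedule.

```python
def schedule_contacts(requests):
    """Greedy priority-first contact scheduler for a single antenna.

    requests: list of (mission, priority, start, end); a lower priority
    value means more important. Windows are half-open [start, end).
    Returns the admitted (mission, start, end) contacts.
    """
    admitted = []
    # Serve the most important (lowest priority value) requests first,
    # so high-priority missions can never be displaced by lower ones.
    for mission, prio, start, end in sorted(requests, key=lambda r: r[1]):
        overlaps = any(not (end <= s or e <= start) for _, s, e in admitted)
        if not overlaps:
            admitted.append((mission, start, end))
    return admitted

# Hypothetical contact requests: (mission, priority, start, end) in minutes.
reqs = [("ISS", 0, 10, 40), ("HST", 1, 30, 60), ("TDRS-E", 2, 45, 70)]
plan = schedule_contacts(reqs)
```

Because admission is decided strictly in priority order, the heuristic runs in near-linear time after sorting, which is how such schedulers keep computational times manageable at the cost of global optimality.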

    Infrastructure sharing of 5G mobile core networks on an SDN/NFV platform

    When looking towards the deployment of 5G network architectures, mobile network operators will continue to face many challenges. With the number of customers approaching maximum market penetration, the number of devices per customer increasing, and the number of non-human-operated devices estimated to reach the tens of billions, network operators have a formidable task ahead of them. The proliferation of cloud computing techniques has created a multitude of applications for network service deployments, at the forefront of which is the adoption of Software-Defined Networking (SDN) and Network Functions Virtualisation (NFV). Mobile network operators (MNOs) have the opportunity to leverage these technologies to enable the delivery of traditional networking functionality in cloud environments, with the benefit of reduced capital and operational expenditure on network infrastructure. In NFV, how a Virtualised Network Function (VNF) is designed, implemented, and placed over physical infrastructure can play a vital role in the performance metrics achieved by the network function. Not paying careful attention to this aspect can drastically reduce the performance of network functions, defeating the purpose of adopting virtualisation solutions. The success of mobile network operators in the 5G arena will depend heavily on their ability to shift from their old operational models and embrace new technologies, design principles and innovation in both the business and technical aspects of the environment. The primary goal of this thesis is to design, implement and evaluate the viability of the data centre and cloud network infrastructure sharing use case. More specifically, the core question addressed by this thesis is how virtualisation of network functions in a shared infrastructure environment can be achieved without adverse performance degradation.
5G should be operational with high penetration beyond the year 2020, with data traffic rates increasing exponentially and the number of connected devices expected to surpass tens of billions. Requirements for 5G mobile networks include higher flexibility, scalability, cost effectiveness and energy efficiency. Towards these goals, Software-Defined Networking (SDN) and Network Functions Virtualisation have been adopted in recent proposals for future mobile network architectures, as they are considered critical technologies for 5G. A Shared Infrastructure Management Framework was designed and implemented for this purpose, and was further enhanced for performance optimisation of network functions and the underlying physical infrastructure. The objective achieved was the identification of requirements for the design and development of an experimental testbed for future 5G mobile networks. This testbed deploys high performance virtualised network functions (VNFs) while catering for the infrastructure sharing use case of multiple network operators. The management and orchestration of the VNFs allow automation, scalability, fault recovery, and security to be evaluated. The testbed developed is readily re-creatable and based on open-source software.

    Factors shaping the evolution of electronic documentation systems

    The main goal is to prepare the space station technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge. By anticipating advances, the design of Space Station Project (SSP) information systems can be tailored to facilitate a progression of increasingly sophisticated strategies as the space station evolves. Future generations of advanced information systems will use increases in power to deliver environmentally meaningful, contextually targeted, interconnected data (knowledge). The concept of a Knowledge Base Management System emerges when the problem is framed as how information systems can perform such a conversion of raw data. Such a system would include traditional management functions for large space databases. Added artificial intelligence features might encompass co-existing knowledge representation schemes; effective control structures for deductive, plausible, and inductive reasoning; means for knowledge acquisition, refinement, and validation; explanation facilities; and dynamic human intervention. The major areas covered include: alternative knowledge representation approaches; advanced user interface capabilities; computer-supported cooperative work; the evolution of information system hardware; standardization, compatibility, and connectivity; and organizational impacts of information intensive environments.