
    Decoupling User Interface Design Using Libraries of Reusable Components

    The integration of electronic and mechanical hardware, software and interaction design presents a challenging design space for researchers developing physical user interfaces and interactive artifacts. Currently in the academic research community, physical user interfaces and interactive artifacts are predominantly designed and prototyped either as one-off instances from the ground up, or using functionally rich hardware toolkits and prototyping systems. During this prototyping phase, an integral design of the interface or interactive artifact’s electronic hardware is frequently constraining, owing to the tight couplings between the different design realms and the typical need for iterations as the design matures. Several current toolkit designs have consequently embraced component-sharing and component-swapping modular designs with a view to extending flexibility and improving researcher freedom by disentangling and softening these cause-effect couplings. Encouraged by the early successes of these toolkits, this research work strives to further enhance these freedoms by pursuing an alternative style and dimension of hardware modularity. Another motivation is our goal to facilitate the design and development of certain classes of interfaces and interactive artifacts for which current electronic design approaches are argued to be restrictively constraining (e.g., relating to scale and complexity). Unfortunately, this goal of a new platform architecture is met with conceptual and technical challenges on the embedded-system networking front. In response, this research investigates and extends a growing field of multi-module distributed embedded systems. We identify and characterize a sub-class of these systems, calling them embedded aggregates. We then outline and develop a framework for realizing the embedded aggregate class of systems. Toward this end, this thesis examines several architectures, topologies and communication protocols, making the case for, and taking substantial steps toward, the development of a suite of networking protocols and control algorithms to support embedded aggregates. We define a set of protocols, mechanisms and communication packets that collectively form the underlying framework for the aggregates. Following the design of the aggregates, we develop blades and tiles to support user interface researchers.
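
    A minimal sketch of how one inter-module message in such an aggregate might be framed is shown below; the field names, widths and ordering (source and destination module addresses, message type, hop count, length-prefixed payload) are illustrative assumptions rather than the protocol actually defined in this work.

        # Hypothetical framing for one inter-module message in an embedded aggregate.
        # Field names, widths and ordering are assumptions for illustration only.
        import struct

        HEADER_FMT = "!BBBB"  # src module id, dst module id, message type, hop count

        def pack_message(src: int, dst: int, msg_type: int, hops: int, payload: bytes) -> bytes:
            """Serialize a header plus a length-prefixed payload into one frame."""
            header = struct.pack(HEADER_FMT, src, dst, msg_type, hops)
            return header + struct.pack("!H", len(payload)) + payload

        def unpack_message(frame: bytes):
            """Parse a frame back into its header fields and payload."""
            src, dst, msg_type, hops = struct.unpack_from(HEADER_FMT, frame, 0)
            (length,) = struct.unpack_from("!H", frame, 4)
            return src, dst, msg_type, hops, frame[6:6 + length]

        # Example: module 3 reports a two-byte sensor reading to the coordinator (module 0).
        frame = pack_message(src=3, dst=0, msg_type=0x01, hops=0, payload=b"\x02\x9a")
        print(unpack_message(frame))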

    Reconfigurable microarchitectures at the programmable logic interface

    Distributed collaborative knowledge management for optical network

    Network automation has long been envisioned. In fact, the Telecommunications Management Network (TMN), defined by the International Telecommunication Union (ITU), is a hierarchy of management layers (network element, network, service, and business management), where high-level operational goals propagate from upper to lower layers. The network management architecture has evolved with the development of the Software Defined Networking (SDN) concept, which brings programmability to simplify configuration (it breaks down high-level service abstractions into lower-level device abstractions), orchestrates operation, and automatically reacts to changes or events. In addition, the development and deployment of solutions based on Artificial Intelligence (AI) and Machine Learning (ML) for making decisions (control loop) based on the collected monitoring data enables network automation, which aims to reduce operational costs. AI/ML approaches usually require large datasets for training purposes, which are difficult to obtain. The lack of data can be compensated for with a collective self-learning approach. In this thesis, we go beyond the aforementioned traditional control loop to achieve an efficient knowledge management (KM) process that enhances network intelligence while bringing down complexity. We propose a general architecture to support the KM process based on four main pillars, which enable creating, sharing, assimilating and using knowledge. Next, two alternative strategies, based on model inaccuracies and on model combination, are proposed. To highlight the capacity of KM to adapt to different applications, two use cases are considered to implement KM in a purely centralized and in a distributed optical network architecture. Along with them, various policies are considered for evaluating KM under data- and model-based strategies. The results aim to minimize the amount of data that needs to be shared and to reduce the convergence error. We apply KM to multilayer networks and propose the PILOT methodology for modeling connectivity services in a sandbox domain. PILOT uses active probes deployed in Central Offices (COs) to obtain real measurements that are used to tune a simulation scenario reproducing the real deployment with high accuracy. A simulator is then used to generate large amounts of realistic synthetic data for ML training and validation. We also apply the KM process to a more complex network system that consists of several domains, where intra-domain controllers assist a broker plane in estimating accurate inter-domain delay. In addition, the broker identifies and corrects intra-domain model inaccuracies and computes an accurate compound model. Such models can be used for quality of service (QoS) and accurate end-to-end delay estimation. Finally, we investigate the application of KM in the context of Intent-based Networking (IBN). Knowledge in terms of traffic models and/or traffic perturbations is transferred among agents in a hierarchical architecture. This architecture can support autonomous network operation, such as capacity management.
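
    As a rough illustration of the broker-plane composition described above, the sketch below combines simplified per-domain delay models into an end-to-end estimate; the linear load-delay model, the additive composition and all numeric values are assumptions for illustration, not the compound model developed in the thesis.

        # Illustrative broker-side composition of per-domain delay models into an
        # end-to-end estimate. The linear per-domain model and the additive composition
        # are simplifying assumptions, not the model developed in this work.
        from dataclasses import dataclass

        @dataclass
        class DomainDelayModel:
            base_ms: float      # fixed propagation/processing contribution
            per_gbps_ms: float  # load-dependent contribution

            def estimate(self, load_gbps: float) -> float:
                return self.base_ms + self.per_gbps_ms * load_gbps

        def compound_delay(path_models, load_gbps: float) -> float:
            """Sum the per-domain estimates along a multi-domain path."""
            return sum(m.estimate(load_gbps) for m in path_models)

        # Each intra-domain controller shares its (possibly corrected) model with the broker.
        models = [DomainDelayModel(2.0, 0.15), DomainDelayModel(1.2, 0.30), DomainDelayModel(3.1, 0.10)]
        print(f"end-to-end delay at 40 Gb/s: {compound_delay(models, 40.0):.2f} ms")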

    Quantum computing challenges in the software industry. A fuzzy AHP-based approach

    Context: The current technology revolution has posed unexpected challenges for the software industry. In recent years, the field of quantum computing (QC) technologies has continued to grow in influence and maturity, and it is now poised to revolutionise software engineering. However, the evaluation and prioritisation of QC challenges in the software industry remain unexplored, relatively under-identified and fragmented. Objective: The purpose of this study is to identify, examine and prioritise the most critical challenges in the software industry by implementing a fuzzy analytic hierarchy process (F-AHP). Method: First, to identify the key challenges, we conducted a systematic literature review by drawing data from the four relevant digital libraries and supplementing these efforts with a forward and backward snowballing search. Second, we followed the F-AHP approach to evaluate and rank the identified challenges, or barriers. Results: The results show that the key barriers to QC adoption are the lack of technical expertise, information accuracy and organisational interest in adopting the new process. Another critical barrier is the lack of standards for secure communication techniques for implementing QC. Conclusion: By applying F-AHP, we identified institutional barriers as the highest and organisational barriers as the second-highest ranked categories by global weight among the main QC challenges facing the software industry. We observed that the highest-ranked local barriers facing the software technology industry are the lack of resources for design and initiative, while the lack of organisational interest in adopting the new process is the most significant organisational barrier. Our findings, which entail implications for both academics and practitioners, reveal the emergent nature of QC research and the increasing need for interdisciplinary research to address the identified challenges.
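
    As a rough illustration of the F-AHP step, the sketch below ranks three hypothetical barrier categories using triangular fuzzy numbers and Buckley's geometric-mean method (one common F-AHP variant); the pairwise judgements are invented and the output does not reproduce the study's actual weights.

        # Toy fuzzy-AHP ranking with triangular fuzzy numbers (TFNs) using Buckley's
        # geometric-mean method. The pairwise judgements are invented for illustration.
        from math import prod

        def geo_mean(row):
            """Element-wise geometric mean of a row of (l, m, u) TFNs."""
            n = len(row)
            return tuple(prod(t[i] for t in row) ** (1.0 / n) for i in range(3))

        def fuzzy_ahp_weights(matrix):
            rows = [geo_mean(row) for row in matrix]
            total = tuple(sum(r[i] for r in rows) for i in range(3))
            # Fuzzy weight w_i = r_i (x) (sum_j r_j)^-1, defuzzified by the centroid (l+m+u)/3.
            crisp = [(r[0] / total[2] + r[1] / total[1] + r[2] / total[0]) / 3 for r in rows]
            s = sum(crisp)
            return [w / s for w in crisp]  # normalise to sum to 1

        # Hypothetical pairwise comparisons of three barrier categories
        # (organisational, technological, institutional) as (l, m, u) TFNs.
        M = [
            [(1, 1, 1),       (2, 3, 4), (1/3, 1/2, 1)],
            [(1/4, 1/3, 1/2), (1, 1, 1), (1/5, 1/4, 1/3)],
            [(1, 2, 3),       (3, 4, 5), (1, 1, 1)],
        ]
        print(fuzzy_ahp_weights(M))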

    Data transmission, handling and dissemination issues of EUCLID Data

    The key features of the Euclid Science Ground Segment (SGS) are the amount of data that the mission will generate, the heavy processing load that is needed to go from the raw data to the science products, the number of parties involved in the data processing, and the accuracy and quality control level that are required at every step of the processing. This enforces a data-centric approach, in the sense that all the operations of the SGS will revolve around a Euclid Archive System (EAS) that will play a central role in the storage of data products and their metadata
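
    The data-centric approach can be pictured with a minimal sketch in which every processing step registers its outputs, together with their metadata, in a central archive; the record fields and the in-memory store below are illustrative assumptions, not the actual EAS schema.

        # Illustrative sketch of a data-centric flow: every processing step publishes
        # its data products and metadata to a central archive rather than to a peer.
        # Field names are assumptions, not the actual Euclid Archive System (EAS) schema.
        from dataclasses import dataclass, field

        @dataclass
        class DataProduct:
            product_id: str
            processing_level: str              # e.g. "raw", "calibrated", "science"
            metadata: dict = field(default_factory=dict)
            quality_flags: dict = field(default_factory=dict)

        class Archive:
            def __init__(self):
                self._store = {}

            def ingest(self, product: DataProduct) -> None:
                self._store[product.product_id] = product

            def query(self, processing_level: str):
                return [p for p in self._store.values()
                        if p.processing_level == processing_level]

        archive = Archive()
        archive.ingest(DataProduct("VIS-000123", "raw", metadata={"exposure_s": 565}))
        print(len(archive.query("raw")))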