5,688 research outputs found

    Paving the Way for a Real-Time Context-Aware Predictive Architecture

    Get PDF
    The Internet of Things society generates and needs to consume huge amounts of data in demanding context-aware scenarios. Such exponentially growing data sources require novel processing methodologies, technologies and tools to facilitate data processing, in order to detect and prevent situations of interest for users in their particular context. To address this issue, we propose an architecture which, making use of emerging technologies and cloud platforms, can process huge amounts of heterogeneous data and promptly alert users to situations relevant to a particular domain according to their context. Last, but not least, we provide a graphical tool with which domain experts can easily model the situations to be detected and the actions to be taken in consequence, automatically generate the corresponding code, and deploy it. The proposal will be evaluated through a real case study on air quality monitoring and lung diseases, in collaboration with a lung disease specialist at a public hospital.
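The "detect and alert according to context" step this abstract describes can be sketched as a simple rule over user context. This is a minimal illustration only: the `Reading`/`UserContext` types, the PM2.5 criterion, and the thresholds are invented for the example, not taken from the paper's architecture.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    station: str
    pm25: float  # particulate matter concentration, in µg/m³

@dataclass
class UserContext:
    station: str          # the station closest to the user
    lung_condition: bool  # part of the user's health context

def alerts(readings, users, threshold=25.0):
    """Emit an alert for every user whose local station exceeds the
    PM2.5 threshold; the limit is halved for users with a lung condition."""
    latest = {r.station: r.pm25 for r in readings}  # last value per station wins
    out = []
    for u in users:
        limit = threshold / 2 if u.lung_condition else threshold
        level = latest.get(u.station)
        if level is not None and level > limit:
            out.append((u, level))
    return out
```

A real deployment would evaluate such rules continuously over a stream rather than over a batch of readings, but the context-dependent threshold is the essential idea.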

    6G White Paper on Machine Learning in Wireless Communication Networks

    Full text link
    The focus of this white paper is on machine learning (ML) in wireless communications. 6G wireless communication networks will be the backbone of the digital transformation of societies by providing ubiquitous, reliable, and near-instant wireless connectivity for humans and machines. Recent advances in ML research have enabled a wide range of novel technologies such as self-driving vehicles and voice assistants. Such innovation is possible as a result of the availability of advanced ML models, large datasets, and high computational power. On the other hand, the ever-increasing demand for connectivity will require a lot of innovation in 6G wireless networks, and ML tools will play a major role in solving problems in the wireless domain. In this paper, we provide an overview of our vision of how ML will impact wireless communication systems. We first give an overview of the ML methods that have the highest potential to be used in wireless networks. Then, we discuss the problems that can be solved by using ML in various layers of the network, such as the physical layer, medium access layer, and application layer. Zero-touch optimization of wireless networks using ML is another interesting aspect that is discussed in this paper. Finally, at the end of each section, important research questions that the section aims to answer are presented.

    Anticipatory Mobile Computing: A Survey of the State of the Art and Research Challenges

    Get PDF
    Today's mobile phones are far from the mere communication devices they were ten years ago. Equipped with sophisticated sensors and advanced computing hardware, phones can be used to infer users' location, activity, social setting and more. As devices become increasingly intelligent, their capabilities evolve beyond inferring context to predicting it, and then reasoning and acting upon the predicted context. This article provides an overview of the current state of the art in mobile sensing and context prediction, paving the way for full-fledged anticipatory mobile computing. We present a survey of phenomena that mobile phones can infer and predict, and offer a description of the machine learning techniques used for such predictions. We then discuss proactive decision making and decision delivery via the user-device feedback loop. Finally, we discuss the challenges and opportunities of anticipatory mobile computing.
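As a toy illustration of the context-prediction techniques such surveys cover, a first-order frequency model can predict the next context from the current one. The context labels and training history below are invented for the example; real systems use far richer features and models.

```python
from collections import defaultdict, Counter

def train(history):
    """First-order model: count transitions between successive contexts."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict(transitions, current):
    """Return the most frequently observed successor of the current context,
    or None if this context was never followed by anything in training."""
    counts = transitions.get(current)
    return counts.most_common(1)[0][0] if counts else None
```

Even this trivial predictor captures the anticipatory loop the article describes: observe context, predict the next one, and let the device act on the prediction.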

    Procesamiento de Datos Heterogéneos en el Internet de las Cosas

    Get PDF
    Day after day, the number of Internet of Things (IoT) and smart devices capable of producing, consuming, and exchanging information increases considerably. In most cases, the structure of the information produced by such devices is completely different, resulting in heterogeneous information. This fact is becoming a challenge for researchers working on IoT, who need to perform homogenisation and pre-processing tasks before using IoT data in their analytics. Moreover, the volume of these heterogeneous data sources is usually huge, leading us to the term Big Data, which relies on the three V's: Velocity, Volume, and Variety. Being able to work with these large and heterogeneous datasets, performing domain-specific analytics and reacting in real time to situations of interest, would be a big competitive advantage. Hence, there is a need to consume, process, and analyse these heterogeneous data. In this context, Data Serialization Systems (DSS), Stream Processing (SP) platforms, and Complex Event Processing (CEP) are postulated as potential tools that will help developers overcome the challenges just described. Firstly, DSS allow us to transmit and transport data quickly and effectively thanks to their serialization strategies. Secondly, SP platforms make it possible to build architectures capable of consuming, processing, and transforming vast amounts of data in real time. Finally, CEP is a well-established technology that facilitates the analysis of data streams, detecting and notifying about anomalies in real time. At the same time, these powerful tools require years of training to master and to use efficiently and effectively. Providing these technologies to domain experts, users who are experts on the domain itself but usually lack computer science or programming skills, is therefore a must. This is where Model-Driven Development (MDD) comes in.
MDD is a software development paradigm that makes complex technologies easier to use, since it abstracts users from implementation details and allows them to focus on defining the problem directly. Therefore, in this PhD thesis, we aim to solve these issues. On the one hand, we have developed an architecture for processing and analysing data coming from heterogeneous sources with different structures in IoT scopes, allowing researchers to focus on data analysis without having to worry about the structures of the data to be processed. This architecture combines the real-time SP paradigm and DSS for processing and transforming information with CEP for analysing it. The combination of these three technologies allows developers and researchers to build systems that can consume, process, transform, and analyse large amounts of heterogeneous data in real time. On the other hand, to bring this architecture to any kind of user, we have developed MEdit4CEP-SP, a model-driven system and extension of the MEdit4CEP tool, which integrates SP, DSS, and CEP for consuming, processing and analysing heterogeneous data in real time. It provides domain experts with a graphical editor that allows them to infer and define heterogeneous data domains, as well as to model, in a user-friendly way, the situations of interest to be detected in such domains. In this editor, the graphical definitions are automatically transformed into code, thanks to the use of MDD techniques, and deployed in the processing system at runtime. Moreover, all these definitions are persistently stored in a NoSQL database, so any user can reuse the definitions already stored.
Furthermore, this set of tools can be used collaboratively, since it can be deployed on the cloud: several domain experts or final users can work together with their own MEdit4CEP-SP instances, from their own computers, adding, removing and updating event types and event patterns on the same CEP engine. We have also evaluated our solution thoroughly. First, we tested our SP architecture to prove its scalability and computing capacity, showing that the system can process more data by using more nodes. Its performance is outstanding, reaching a maximum Transactions Per Second (TPS) rate of 135 080 Mb/s using 4 nodes. Next, we tested the graphical editor with real users to show that it provides its functionalities in a friendly and intuitive way: the users were asked to complete a series of tasks and then answered a questionnaire evaluating their experience with the editor, with positive results. Finally, the benefits of this system were compared with other existing approaches in the literature, with excellent results. In this comparative analysis, we contrasted our proposal against the others on a series of key features that systems for modelling, consuming, processing, and analysing heterogeneous data in real time should offer.
Universidad de Cádiz (2017-020/PU/EPIF-FPI-CT/CP); Ministerio de Economía y Competitividad (TIN2015-65845-C3-3-R); Ministerio de Ciencia, Innovación y Universidades (RTI2018-093608-B-C33).
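A complex-event pattern of the kind such editors generate can be sketched as a sliding-window rule over a stream. The window size, threshold, and event shape below are illustrative assumptions, not the thesis's actual pattern language.

```python
from collections import deque

def detect_high_average(stream, window=3, threshold=50.0):
    """Minimal CEP-style sketch: emit a complex event (index, average)
    whenever the moving average of the last `window` simple events
    exceeds the threshold."""
    buf = deque(maxlen=window)  # the sliding window of recent values
    events = []
    for i, value in enumerate(stream):
        buf.append(value)
        if len(buf) == window:
            avg = sum(buf) / window
            if avg > threshold:
                events.append((i, avg))
    return events
```

In a real CEP engine this rule would be expressed declaratively (e.g. as an event pattern) and evaluated continuously, rather than over a finite list.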

    Industrial internet of things platform for predictive maintenance

    Get PDF
    POCI-01-0247-FEDER-038436. Industry 4.0, allied with the growth and democratization of Artificial Intelligence (AI) and the advent of IoT, is paving the way for the complete digitization and automation of industrial processes. Maintenance is one of these processes, where the introduction of a predictive approach, as opposed to traditional techniques, is expected to considerably improve industry maintenance strategies, with gains such as reduced downtime, improved equipment effectiveness, lower maintenance costs, increased return on assets, risk mitigation, and, ultimately, profitable growth. With predictive maintenance, dedicated sensors monitor the critical points of assets. The sensor data then feed into machine learning algorithms that can infer the asset health status and inform operators and decision-makers. With this in mind, in this paper we present TIP4.0, a platform for predictive maintenance based on a modular software solution for edge computing gateways. TIP4.0 is built around Yocto, which makes it readily available and compliant with Commercial Off-the-Shelf (COTS) or proprietary hardware. TIP4.0 was conceived with an industry mindset, with communication interfaces that allow it to serve sensor networks on the shop floor and a modular software architecture that allows it to be easily adjusted to new deployment scenarios. To showcase its potential, the TIP4.0 platform was validated on COTS hardware, and we considered a public dataset for the simulation of predictive maintenance scenarios. We used a Convolutional Neural Network (CNN) architecture, which provided competitive performance over state-of-the-art approaches while being approximately four times and two times faster than the uncompressed model inference on the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU), respectively. These results highlight the capabilities of distributed large-scale edge computing in industrial scenarios.
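The "sensor data feed into algorithms that infer asset health" pipeline can be illustrated with a toy health check over a window of vibration samples. The paper uses a CNN on an edge gateway; here a simple RMS threshold stands in for the model, and the threshold value is an invented placeholder.

```python
def health_status(vibration, warn_rms=1.5):
    """Toy predictive-maintenance sketch: summarize a window of vibration
    samples by their RMS energy and flag the asset when it exceeds a
    warning level. (A trained model would replace this threshold.)"""
    rms = (sum(v * v for v in vibration) / len(vibration)) ** 0.5
    return {"rms": rms, "status": "warning" if rms > warn_rms else "healthy"}
```

On a gateway, a routine like this would run per sensor window, with the resulting status forwarded to operators and decision-makers.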

    Digital Twin for Non-Terrestrial Networks: Vision, Challenges, and Enabling Technologies

    Full text link
    This paper explores the transformative potential of digital twin (DT) technology in the context of non-terrestrial networks (NTNs). NTNs, encompassing both airborne and space-borne elements, present unique challenges in network control, management, and optimization. DT is a novel approach to designing and managing complex cyber-physical systems with a high degree of automation, intelligence, and resilience. The adoption of DTs within NTNs offers a dynamic and detailed virtual representation of the entire ecosystem, enabling real-time monitoring, simulations, and data-driven decision-making. This paper delves into the envisioned integration of DTs in NTNs, discussing the technical challenges and highlighting key enabling technologies. Emphasis is placed on technologies such as the Internet of Things (IoT), artificial intelligence (AI), space-based cloud computing, quantum computing, and others, providing a comprehensive overview of their potential to empower DT development for NTNs. In closing, we present a case study involving the implementation of a data-driven DT model to facilitate dynamic and service-oriented network slicing within an open radio access network (O-RAN) architecture for NTNs. This work contributes to shaping the future of network control and management in the dynamic and evolving landscape of non-terrestrial communication systems.