10 research outputs found

    Smart Computing and Sensing Technologies for Animal Welfare: A Systematic Review

    Get PDF
    Animals play a profoundly important and intricate role in our lives today. Dogs have been human companions for thousands of years, but they now also work closely with us to assist the disabled, and in combat and search-and-rescue situations. Farm animals are a critical part of the global food supply chain, and there is increasing consumer interest in organically fed and humanely raised livestock, and in how it impacts our health and environmental footprint. Wild animals are threatened with extinction by human-induced factors and by shrinking, compromised habitats. This review aims to systematically survey the existing literature on smart computing and sensing technologies for domestic, farm and wild animal welfare. We use the notion of "animal welfare" in broad terms, reviewing technologies for assessing whether animals are healthy, free of pain and suffering, and also positively stimulated in their environment. Similarly, the notion of "smart computing and sensing" is used broadly, referring to computing and sensing systems that are not isolated but interconnected with communication networks, and capable of remote data collection, processing, exchange and analysis. We review smart technologies for domestic animals, indoor and outdoor animal farming, as well as animals in the wild and in zoos. The findings of this review are expected to motivate future research and contribute to data, information and communication management, as well as policy, for animal welfare.

    A comprehensive survey on reinforcement-learning-based computation offloading techniques in Edge Computing Systems

    Get PDF
    In recent years, the number of embedded computing devices connected to the Internet has increased exponentially. At the same time, new applications are becoming more complex and computationally demanding, which can be a problem for devices, especially when they are battery powered. In this context, the concepts of computation offloading and edge computing, which allow applications to be fully or partially offloaded and executed on servers close to the devices in the network, have arisen and received increasing attention. Consequently, the design of algorithms to decide which applications or tasks should be offloaded, and where to execute them, is crucial. One of the options that has been gaining momentum lately is the use of Reinforcement Learning (RL) and, in particular, Deep Reinforcement Learning (DRL), which enables learning optimal or near-optimal offloading policies adapted to each particular scenario. Although the use of RL techniques to solve the computation offloading problem in edge systems has been covered by some surveys, it has been done in a limited way. For example, some surveys have analysed the use of RL to solve various networking problems, with computation offloading being one of them, but not the primary focus. Other surveys, on the other hand, have reviewed techniques to solve the computation offloading problem, with RL being just one of the approaches considered. To the best of our knowledge, this is the first survey that specifically focuses on the use of RL and DRL techniques for computation offloading in edge computing systems. We present a comprehensive and detailed survey, where we analyse and classify the research papers in terms of use cases, network and edge computing architectures, objectives, RL algorithms, decision-making approaches, and time-varying characteristics considered in the analysed scenarios.
In particular, we include a series of tables to help researchers identify relevant papers based on specific features, and we analyse which scenarios and techniques are most frequently considered in the literature. Finally, this survey identifies a number of research challenges, future directions and areas for further study.
Funding: Consejería de Educación de la Junta de Castilla y León and FEDER (VA231P20); Ministerio de Ciencia e Innovación and Agencia Estatal de Investigación (projects PID2020-112675RB-C42, PID2021-124463OBI00 and RED2018-102585-T, funded by MCIN/AEI/10.13039/501100011033).
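The decision problem the survey covers can be illustrated with a minimal sketch: a tabular Q-learning agent that chooses between local execution and offloading. Everything here is an illustrative assumption (the state buckets, the latency cost model, the hyperparameters), not a method from any surveyed paper.

```python
import random

# Hypothetical toy model: states are (task_size, server_load) buckets,
# actions are 0 = execute locally, 1 = offload to the edge server.
SIZES, LOADS, ACTIONS = 3, 3, 2
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def latency(size, load, action):
    # Illustrative cost model: local cost grows with task size,
    # offload cost grows with server load plus a fixed transmission cost.
    return (size + 1) * 2.0 if action == 0 else 1.0 + (load + 1) * 1.2

Q = {(s, l): [0.0, 0.0] for s in range(SIZES) for l in range(LOADS)}

random.seed(0)
for _ in range(20000):
    state = (random.randrange(SIZES), random.randrange(LOADS))
    # Epsilon-greedy action selection.
    a = random.randrange(ACTIONS) if random.random() < EPS else max(
        range(ACTIONS), key=lambda x: Q[state][x])
    reward = -latency(*state, a)  # minimise latency
    nxt = (random.randrange(SIZES), random.randrange(LOADS))
    Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])

# Derived policy: for each state, pick the action with the higher Q-value.
policy = {s: max(range(ACTIONS), key=lambda a: Q[s][a]) for s in Q}
```

Under this cost model the learned policy keeps the smallest tasks local and offloads large ones; DRL approaches replace the table with a neural network to cope with continuous, high-dimensional state spaces.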

    An Efficient Method for Service Level Agreement Assessment

    No full text
    On-line end-to-end Service Level Agreement (SLA) monitoring is of key importance nowadays. For this purpose, recent research has focused on measuring (when possible), or estimating (most of the time), network Quality of Service (QoS) and performance parameters. Up to now, all the proposed solutions have the drawback of requiring a huge amount of resources with low accuracy, generally leading to unscalable systems. We observe, however, that accurate estimation of network QoS parameters is not necessarily required for SLA assessment. What is required is an efficient and scalable method to directly detect SLA violations. To this end, this paper makes the following contributions. First, we introduce a polynomial-complexity algorithm based on the Hausdorff Distance that efficiently detects SLA violations. Second, we propose a Simplified Hausdorff Distance, which provides better accuracy at lower computational cost. Our solution works on simple-to-measure time series: the received Inter-Packet Arrival Times (IPATs) in our case. The validity of our proposal is confirmed by comparison with perfect knowledge of the QoS status, as well as with other existing alternatives, in a real testbed.
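The core idea can be sketched with the standard (naive, O(nm)) Hausdorff distance between two IPAT series. The series values and the violation threshold below are illustrative assumptions, not the paper's data or its Simplified Hausdorff Distance variant.

```python
def directed_hausdorff(A, B):
    # For each point in A, distance to its nearest point in B; take the worst case.
    return max(min(abs(a - b) for b in B) for a in A)

def hausdorff(A, B):
    # Symmetric Hausdorff distance between two 1-D point sets.
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Hypothetical usage: compare a measured IPAT series against a reference
# series recorded while the SLA was known to be honoured.
reference_ipats = [10.0, 10.2, 9.9, 10.1, 10.0]   # milliseconds
measured_ipats  = [10.1, 10.0, 25.0, 10.2, 9.8]   # one large gap suggests congestion

THRESHOLD = 5.0  # ms; assumed violation threshold
violation = hausdorff(reference_ipats, measured_ipats) > THRESHOLD
```

A single outlying inter-packet gap dominates the directed distance from the measured series to the reference, which is what makes the metric sensitive to violations without estimating QoS parameters explicitly.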

    A Combined Intra-Domain and Inter-Domain QoS Routing Model for Optical Networks

    No full text
    End-to-end Quality of Service (QoS) provisioning has become a strong requirement in the present Internet, and this requirement will also be present in the next-generation, optical-based worldwide network. At present, end-to-end QoS Routing (QoSR) represents a complex problem, mainly because the de-facto standard inter-domain routing protocol, the Border Gateway Protocol (BGP), has no built-in QoSR capabilities. Moreover, BGP entirely obscures the availability of intra-domain resources in any transit domain within an end-to-end inter-domain path, which pushes any tentative proposal for inter-domain QoSR even farther from optimality. Given that inter-domain routing in optical networks is currently an active research area, it seems wise to address the issue of QoSR provisioning from its very foundations. Thus, in this paper we introduce a combined intra-domain and inter-domain QoSR model for optical networks. Our goal is to provide a highly efficient coupling between both routing schemes, so that the combined model can supply multiconstrained end-to-end optical paths closer to optimality.
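A multiconstrained path computation of the kind the abstract mentions can be sketched with a pruned Dijkstra search: links below a bandwidth floor are discarded, and the delay-shortest path is accepted only if it meets an end-to-end delay bound. The topology, link metrics, and bounds are all illustrative assumptions, not the paper's model.

```python
import heapq

def constrained_path(graph, src, dst, min_bw, max_delay):
    # Dijkstra on delay; links below the bandwidth floor are pruned up front,
    # and the resulting path is accepted only if it meets the delay bound.
    heap = [(0.0, src, [src])]
    visited = set()
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path if d <= max_delay else None
        if node in visited:
            continue
        visited.add(node)
        for nbr, (delay, bw) in graph.get(node, {}).items():
            if bw >= min_bw and nbr not in visited:
                heapq.heappush(heap, (d + delay, nbr, path + [nbr]))
    return None

# Two domains joined at a border node 'B'; edges carry (delay_ms, bandwidth_gbps).
net = {
    'A1': {'A2': (2.0, 10), 'B': (5.0, 40)},
    'A2': {'B': (1.0, 2)},   # low delay but insufficient bandwidth
    'B':  {'C1': (3.0, 40)},
    'C1': {},
}
path = constrained_path(net, 'A1', 'C1', min_bw=10, max_delay=10.0)
```

The low-bandwidth link through 'A2' is pruned even though it is delay-cheaper, so the search returns the feasible route through the border node directly.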

    Securing the LISP map registration process

    No full text
    The motivation behind the Locator/Identifier Separation Protocol (LISP) has shifted over time from routing scalability issues in the core Internet to a set of use cases for which LISP stands as a technology enabler. Among these are the mobility of physical and virtual appliances without breaking their TCP connections, seamless migration and fast deployment of IPv6, multihoming, and data-center applications. However, LISP was born without security, and is therefore susceptible to attacks on its control plane. The IETF's LISP working group has recently started to work in this direction, but the protocol still lacks end-to-end mechanisms for securing the overall registration process on the mapping system. In this paper, we address this issue and propose a solution that counters these attacks. We have deployed LISP in a real testbed and compared the performance of our proposal with current LISP implementations, in terms of both messaging and packet-size overhead. Our preliminary results show that our solution offers much higher security with minimal overhead.
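The basic building block for authenticating a registration is a keyed MAC over the registration fields, in the spirit of the HMAC authentication data that LISP Map-Register messages carry. The sketch below is not the paper's protocol or the LISP wire format; the field layout and function names are illustrative.

```python
import hashlib
import hmac
import os

# Pre-shared key between the registering router (xTR) and the Map-Server.
shared_key = os.urandom(32)

def sign_registration(key, eid_prefix, rloc, nonce):
    # Illustrative message encoding: in real LISP the HMAC covers the
    # binary Map-Register message, not a delimited string.
    msg = f"{eid_prefix}|{rloc}|{nonce}".encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_registration(key, eid_prefix, rloc, nonce, tag):
    expected = sign_registration(key, eid_prefix, rloc, nonce)
    return hmac.compare_digest(expected, tag)  # constant-time comparison

tag = sign_registration(shared_key, "2001:db8::/32", "198.51.100.1", 7)
ok = verify_registration(shared_key, "2001:db8::/32", "198.51.100.1", 7, tag)
forged = verify_registration(shared_key, "2001:db8::/48", "198.51.100.1", 7, tag)
```

Tampering with any signed field (here, the EID prefix) invalidates the tag; the nonce guards against replaying an old registration. End-to-end schemes such as the one the paper proposes additionally have to protect the path through the full mapping system, not just one hop.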

    Leveraging Network Data Analytics Function and Machine Learning for Data Collection, Resource Optimization, Security and Privacy in 6G Networks

    No full text
    The full deployment of sixth-generation (6G) networks is inextricably connected with a holistic network redesign able to deal with various emerging challenges, such as the integration of heterogeneous technologies and devices, as well as support for latency- and bandwidth-demanding applications. In such a complex environment, resource optimization and security and privacy enhancement can be quite demanding, due to the vast and diverse data-generation endpoints and associated hardware elements. Therefore, efficient data collection mechanisms are needed that can be deployed on any network infrastructure. In this context, the network data analytics function (NWDAF), which can perform data collection from various network functions (NFs), has already been defined in the fifth-generation (5G) architecture from Release 15 of 3GPP. When combined with advanced machine learning (ML) techniques, full-scale network optimization can be supported, according to traffic demands and service requirements. In addition, the data collected by the NWDAF can be used for anomaly detection and thus for security and privacy enhancement. Therefore, the main goal of this paper is to present the current state of the art on the role of the NWDAF in data collection, resource optimization and security enhancement in next-generation broadband networks. Furthermore, various key enabling technologies for data collection and threat mitigation in the 6G framework are identified and categorized, along with advanced ML approaches. Finally, a high-level architectural approach based on the NWDAF is presented and discussed, for efficient data collection and ML model training in large-scale heterogeneous environments.
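The anomaly-detection use of NWDAF-collected data can be illustrated with a deliberately simple statistical rule. The z-score detector, the traffic series, and the threshold below are illustrative assumptions; a real deployment would feed the collected series into a trained ML model as the paper discusses.

```python
import statistics

def flag_anomalies(samples, z_threshold=3.0):
    # Flag indices whose z-score exceeds the threshold.
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > z_threshold]

# Per-minute traffic volumes (MB) reported by a network function;
# one synthetic spike injected at index 6.
traffic = [52, 48, 50, 51, 49, 50, 400, 51, 50, 49]
suspect = flag_anomalies(traffic, z_threshold=2.0)
```

Only the injected spike is flagged; in the architecture the abstract outlines, such flags from collected NF data would feed security and privacy enhancement functions rather than a hard-coded rule.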

    Designing an efficient clustering strategy for combined Fog-to-Cloud scenarios

    Get PDF
    The evolution of the Internet of Things (IoT) is imposing many distinct challenges, particularly to guarantee both wide and global systems communication, and to ensure an optimal execution of services. To that end, IoT services must make the most of both cloud and fog computing (turning into combined fog–cloud scenarios), which indeed requires novel and efficient resource management procedures to handle such diversity of resources in a coordinated way. Most of the related works in the literature focus on resource mapping for service-specific execution in fog–cloud; however, those works assume a control and management architecture already deployed. Interestingly, few works propose algorithms to set up that control architecture, which is necessary to execute services and to effectively implement service and resource mapping. This paper addresses that challenge by solving the problem of optimally clustering devices located at the edge of the network that offer their resources to support fog computing, while defining the control and management role of each device in the architecture, in order to ensure access to management functions in combined fog–cloud scenarios. In particular, we formulate the Fog–Cloud Clustering (FCC) problem as a multi-objective optimization problem, including realistic, novel and stringent constraints; e.g., to improve the architecture's robustness by means of a device acting as a backup in the cluster. We model the FCC problem as a Mixed Integer Linear Programming (MILP) formulation, for which a lower and an upper bound on the number of required clusters are derived. We also propose a machine-learning-based heuristic that provides scalable and near-optimal solutions in realistic scenarios in which, due to the high number of connected devices, solving the MILP formulation is not viable.
By means of a simulation study, we demonstrate the effectiveness of the algorithms by comparing their results with those of the MILP formulation.
Funding: Spanish Thematic Networks (contracts RED2018-102585-T and TEC2015-71932-REDT); Ministerio de Economía, Industria y Competitividad and Fondo Europeo de Desarrollo Regional (projects RTI2018-094532-B-I00 and TEC2017-84423-C3-1-P); INTERREG V-A España-Portugal (POCTEP) programme (project 0677_DISRUPTIVE_2_E).
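The clustering-with-roles idea can be sketched with a trivial greedy baseline: partition devices into bounded-size clusters and appoint the two highest-capacity members of each as leader and backup (echoing the backup-device constraint the abstract mentions). The FCC MILP optimises this jointly under many more constraints; the device names, capacities, and greedy rule below are purely illustrative.

```python
def greedy_clusters(devices, max_size):
    # devices: {name: capacity}. Sort by capacity so that within each cluster
    # the first two members are its strongest devices (leader and backup).
    ordered = sorted(devices, key=devices.get, reverse=True)
    clusters = [ordered[i:i + max_size] for i in range(0, len(ordered), max_size)]
    return [{'members': c,
             'leader': c[0],
             'backup': c[1] if len(c) > 1 else None}
            for c in clusters]

# Hypothetical edge devices with abstract capacity scores.
edge = {'d1': 8, 'd2': 3, 'd3': 9, 'd4': 5, 'd5': 7, 'd6': 2}
plan = greedy_clusters(edge, max_size=3)
```

A heuristic like this gives a feasible starting point; the paper's ML-based heuristic instead aims for near-optimal solutions when the MILP is intractable at scale.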