25 research outputs found

    Global burden of 369 diseases and injuries in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019

    Background: In an era of shifting global agendas and expanded emphasis on non-communicable diseases and injuries along with communicable diseases, sound evidence on trends by cause at the national level is essential. The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) provides a systematic scientific assessment of published, publicly available, and contributed data on incidence, prevalence, and mortality for a mutually exclusive and collectively exhaustive list of diseases and injuries. Methods: GBD estimates incidence, prevalence, mortality, years of life lost (YLLs), years lived with disability (YLDs), and disability-adjusted life-years (DALYs) due to 369 diseases and injuries, for two sexes, and for 204 countries and territories. Input data were extracted from censuses, household surveys, civil registration and vital statistics, disease registries, health service use, air pollution monitors, satellite imaging, disease notifications, and other sources. Cause-specific death rates and cause fractions were calculated using the Cause of Death Ensemble model and spatiotemporal Gaussian process regression. Cause-specific deaths were adjusted to match the total all-cause deaths calculated as part of the GBD population, fertility, and mortality estimates. Deaths were multiplied by standard life expectancy at each age to calculate YLLs. A Bayesian meta-regression modelling tool, DisMod-MR 2.1, was used to ensure consistency between incidence, prevalence, remission, excess mortality, and cause-specific mortality for most causes. Prevalence estimates were multiplied by disability weights for mutually exclusive sequelae of diseases and injuries to calculate YLDs. We considered results in the context of the Socio-demographic Index (SDI), a composite indicator of income per capita, years of schooling, and fertility rate in females younger than 25 years. Uncertainty intervals (UIs) were generated for every metric using the 25th and 975th ordered 1000 draw values of the posterior distribution. Findings: Global health has steadily improved over the past 30 years as measured by age-standardised DALY rates. After taking into account population growth and ageing, the absolute number of DALYs has remained stable. Since 2010, the pace of decline in global age-standardised DALY rates has accelerated in age groups younger than 50 years compared with the 1990–2010 time period, with the greatest annualised rate of decline occurring in the 0–9-year age group. Six infectious diseases were among the top ten causes of DALYs in children younger than 10 years in 2019: lower respiratory infections (ranked second), diarrhoeal diseases (third), malaria (fifth), meningitis (sixth), whooping cough (ninth), and sexually transmitted infections (which, in this age group, is fully accounted for by congenital syphilis; ranked tenth). In adolescents aged 10–24 years, three injury causes were among the top causes of DALYs: road injuries (ranked first), self-harm (third), and interpersonal violence (fifth). Five of the causes that were in the top ten for ages 10–24 years were also in the top ten in the 25–49-year age group: road injuries (ranked first), HIV/AIDS (second), low back pain (fourth), headache disorders (fifth), and depressive disorders (sixth). In 2019, ischaemic heart disease and stroke were the top-ranked causes of DALYs in both the 50–74-year and 75-years-and-older age groups. 
Since 1990, there has been a marked shift towards a greater proportion of burden due to YLDs from non-communicable diseases and injuries. In 2019, there were 11 countries where non-communicable disease and injury YLDs constituted more than half of all disease burden. Decreases in age-standardised DALY rates have accelerated over the past decade in countries at the lower end of the SDI range, while improvements have started to stagnate or even reverse in countries with higher SDI. Interpretation: As disability becomes an increasingly large component of disease burden and a larger component of health expenditure, greater research and development investment is needed to identify new, more effective intervention strategies. With a rapidly ageing global population, the demands on health services to deal with disabling outcomes, which increase with age, will require policy makers to anticipate these changes. The mix of universal and more geographically specific influences on health reinforces the need for regular reporting on population health in detail and by underlying cause to help decision makers to identify success stories of disease control to emulate, as well as opportunities to improve. Funding: Bill & Melinda Gates Foundation. © 2020 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license.
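As a rough illustration of how the metrics above fit together (using made-up numbers, not GBD estimates), the sketch below computes YLLs as deaths multiplied by standard life expectancy, YLDs as prevalence multiplied by a disability weight, DALYs as their sum, and a 95% uncertainty interval from the 25th and 975th ordered values of 1000 draws.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (made-up) numbers, not GBD estimates.
deaths_at_age_60 = 1_000
standard_life_expectancy_at_60 = 25.0     # remaining years in the standard life table
yll = deaths_at_age_60 * standard_life_expectancy_at_60     # years of life lost

prevalent_cases = 50_000
disability_weight = 0.05                  # weight for one sequela
yld = prevalent_cases * disability_weight                   # years lived with disability

daly = yll + yld                          # DALYs are the sum of YLLs and YLDs

# 95% uncertainty interval from 1000 posterior draws: the 25th and 975th ordered draws.
draws = np.sort(rng.normal(loc=daly, scale=0.1 * daly, size=1000))  # placeholder draws
ui_lower, ui_upper = draws[24], draws[974]    # 25th and 975th ordered values (0-indexed)

print(f"DALYs: {daly:.0f} (95% UI {ui_lower:.0f}-{ui_upper:.0f})")
```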

    Allocation des ressources dans les futurs réseaux d'accès radio

    No full text
    This dissertation considers radio and computing resource allocation in future radio access networks, more precisely Cloud Radio Access Network (Cloud-RAN) and Open Radio Access Network (Open-RAN). In these architectures, the baseband processing of multiple base stations is centralized and virtualized, which permits better network optimization and reduces capital and operational expenditure. In the first part, we consider a coordination scheme between the radio and computing schedulers. When computing resources are insufficient, the computing scheduler sends feedback to the radio scheduler to update the radio parameters. While this reduces the user's radio throughput, it guarantees that the frame will be processed at the computing scheduler level. We model this coordination scheme using Integer Linear Programming (ILP) with the objectives of maximizing total throughput and users' satisfaction. The results demonstrate the ability of this scheme to improve several metrics, including reducing wasted transmission power. We then propose low-complexity heuristics and test them in an environment with multiple services having different requirements. In the second part, we consider joint radio and computing resource allocation, where both resource types are allocated together with the aim of minimizing energy consumption. The problem is modeled as a Mixed Integer Linear Programming (MILP) problem and compared with another MILP problem that maximizes total throughput. The results demonstrate the ability of joint allocation to minimize energy consumption compared with sequential allocation. Finally, we propose a low-complexity matching-game-based algorithm as an alternative to solving the high-complexity MILP problem. In the last part, we investigate the use of machine learning tools. First, we consider a deep learning model that learns to reproduce the solutions of the coordination ILP problem in a much shorter time. Then, we consider a reinforcement learning model that allocates computing resources to users so as to maximize the operator's profit.
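As a rough sketch of the coordination idea described above (not the thesis's actual formulation), the toy ILP below uses PuLP to pick one Modulation and Coding Scheme (MCS) per user so as to maximize throughput while the total baseband processing cost stays within the BBU pool's computing capacity; all user, MCS, and capacity numbers are illustrative.

```python
# Toy coordinated radio/computing allocation (illustrative, not the thesis model):
# each user gets exactly one MCS; a higher MCS yields more throughput but needs
# more computing at the BBU pool, which has limited capacity.
import pulp

users = ["u1", "u2", "u3"]
mcs_options = {          # MCS index: (throughput in Mb/s, computing cost in CPU units)
    0: (5, 1),
    1: (10, 2),
    2: (20, 4),
}
computing_capacity = 7   # CPU units available in the BBU pool (assumed)

prob = pulp.LpProblem("coordinated_allocation", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (users, mcs_options), cat="Binary")  # x[u][m] = 1 if user u uses MCS m

# Objective: maximize the total radio throughput.
prob += pulp.lpSum(mcs_options[m][0] * x[u][m] for u in users for m in mcs_options)

# Radio side: each user is assigned exactly one MCS.
for u in users:
    prob += pulp.lpSum(x[u][m] for m in mcs_options) == 1

# Computing side: the selected MCSs must be processable within the pool's capacity.
prob += pulp.lpSum(mcs_options[m][1] * x[u][m] for u in users for m in mcs_options) <= computing_capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for u in users:
    chosen = next(m for m in mcs_options if x[u][m].value() > 0.5)
    print(u, "-> MCS", chosen, "throughput/cost:", mcs_options[chosen])
```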

    Reinforcement Learning based model for Maximizing Operator's Profit in Open-RAN

    No full text
    Open Radio Access Network (O-RAN) is a novel architecture that enables the disaggregation and virtualization of network components. It provides new ways to mix and match network components by "opening up" the interfaces between them. O-RAN drives down the cost of network deployments and allows new players to enter the RAN market. It enables network operators to maximize resource utilization and deliver new network-edge services at lower cost, resulting in higher profits for operators. In this context, we consider a computing resource allocation problem aimed at maximizing the operator's profit. Given that an operator receives subscribers' payments and pays the infrastructure provider's costs, we model the problem using Mixed Integer Linear Programming (MILP). We then propose to solve the problem using Reinforcement Learning (RL). Our simulation results demonstrate the ability of the RL agent to increase the operator's profit while avoiding the high algorithmic complexity of the MILP solver.
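A minimal sketch of the general approach (not the paper's model): a one-step tabular Q-learning agent that observes user demand, chooses how many computing units to rent, and receives the operator's profit as reward. The prices, costs, and demand distribution are assumed for illustration.

```python
# One-step tabular Q-learning sketch (illustrative, not the paper's model): the
# agent observes the user demand, chooses how many computing units to rent from
# the infrastructure provider, and is rewarded with the operator's profit.
import random

random.seed(0)

ACTIONS = range(5)        # computing units that can be rented
PRICE_PER_USER = 3.0      # revenue per served user (assumed)
COST_PER_UNIT = 2.0       # infrastructure cost per rented unit (assumed)

def profit(demand, units):
    served = min(demand, units)               # one unit serves one user (assumed)
    return served * PRICE_PER_USER - units * COST_PER_UNIT

q = {(d, a): 0.0 for d in range(5) for a in ACTIONS}
alpha, epsilon = 0.1, 0.2

for episode in range(20000):
    demand = random.randint(0, 4)             # stochastic demand observed by the agent
    if random.random() < epsilon:             # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(demand, a)])
    reward = profit(demand, action)
    q[(demand, action)] += alpha * (reward - q[(demand, action)])  # one-step Q update

policy = {d: max(ACTIONS, key=lambda a: q[(d, a)]) for d in range(5)}
print("learned policy (demand -> units to rent):", policy)
```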

    Impact of Network Performance on GLOSA

    No full text

    A Recurrent Neural Network Based Approach for Coordinating Radio and Computing Resources Allocation in Cloud-RAN

    No full text
    Cloud Radio Access Network (Cloud-RAN) is a novel architecture that aims at centralizing the baseband processing of base stations. This architecture opens paths for joint, flexible, and optimal management of radio and computing resources. To increase the benefit of this architecture, efficient resource management algorithms need to be devised. In this paper, we consider a coordinated allocation of radio and computing resources to mobile users. Optimal resource allocation that respects the Hybrid Automatic Repeat Request (HARQ) deadline may require formulating high-complexity and resource-heavy algorithms. We consider two Integer Linear Programming (ILP) problems that implement a coordinated allocation of radio and computing resources with the objectives of maximizing throughput and maximizing users' satisfaction, respectively. Since solving these highly complex problems requires a long execution time, we investigate low-complexity alternatives based on machine learning models, more precisely on Recurrent Neural Networks (RNNs). These RNN models aim to reproduce the decisions of the ILP problems with a much lower execution time. Our simulation results demonstrate the ability of RNN models to perform very closely to the ILP problems while reducing the execution time by up to 99.65%.
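The sketch below gives a rough idea of such an imitation approach (not the paper's architecture): a small PyTorch LSTM reads per-user features and predicts the MCS that an ILP scheduler would select. The labels here are random placeholders; in the setting described above they would come from solving the ILP offline.

```python
# Minimal RNN-imitates-ILP sketch (illustrative, not the paper's model): the
# network reads a sequence of per-user features (e.g. channel quality and
# computing demand) and outputs, for each user, a predicted MCS class.
import torch
import torch.nn as nn

n_users, n_features, n_mcs = 8, 4, 16

class SchedulerRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_mcs)

    def forward(self, x):                     # x: (batch, n_users, n_features)
        out, _ = self.lstm(x)
        return self.head(out)                 # (batch, n_users, n_mcs) per-user MCS logits

model = SchedulerRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):                       # toy training loop on random data
    x = torch.randn(32, n_users, n_features)
    y = torch.randint(0, n_mcs, (32, n_users))       # placeholder labels from an ILP solver
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, n_mcs), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final imitation loss:", loss.item())
```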

    On Coordinated Scheduling of Radio and Computing Resources in Cloud-RAN

    No full text
    Cloud Radio Access Network (Cloud-RAN) is a promising mobile network architecture based on centralizing the baseband processing of many cellular base stations in a BaseBand Unit (BBU) pool. Such an architecture has many advantages; however, computing resources are shared among the base stations connected to the BBU pool, and it is challenging to schedule the processing of users' data, especially on overloaded BBU pools, while respecting the time constraints imposed by the Hybrid Automatic Repeat Request (HARQ) mechanism. Given that the processing time of users' data and the computing requirement depend on radio parameters such as the Modulation and Coding Scheme (MCS), we propose to enable coordination between the radio and computing resource schedulers; such coordination makes the selection of the MCS dependent on the availability of radio and computing resources and on the ability to process the data before the HARQ deadline. In this context, we propose and evaluate three Integer Linear Programming (ILP)-based schemes and three low-complexity heuristics, demonstrating their ability to reduce wasted transmission power. Moreover, we evaluate the performance of the coordination under a multi-service scenario consisting of two services with heterogeneous requirements: enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low-Latency Communication (URLLC).
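As a rough illustration of the coordination principle (not one of the paper's schemes or heuristics), the greedy sketch below lets the radio scheduler request the best MCS the channel supports and downgrades it whenever the estimated baseband processing no longer fits within the computing time left before the HARQ deadline; all timing numbers are invented.

```python
# Greedy coordination sketch (illustrative, not one of the paper's heuristics):
# the radio scheduler proposes the best MCS supported by each user's channel,
# and the MCS is downgraded whenever its baseband processing would exceed the
# computing time still available before the HARQ deadline. Numbers are made up.
CPU_TIME_BUDGET_US = 2000                    # computing time available this TTI (assumed)
proc_time_us = {m: 150 + 60 * m for m in range(16)}   # processing time per MCS (assumed)

def coordinate(users, budget_us=CPU_TIME_BUDGET_US):
    """users: list of (user_id, best MCS supported by the channel)."""
    allocation, remaining = {}, budget_us
    for uid, best_mcs in sorted(users, key=lambda u: -u[1]):   # serve best channels first
        mcs = best_mcs
        while mcs >= 0 and proc_time_us[mcs] > remaining:      # downgrade until it fits
            mcs -= 1
        if mcs >= 0:
            allocation[uid] = mcs
            remaining -= proc_time_us[mcs]
        else:
            allocation[uid] = None                             # cannot be processed in time
    return allocation

print(coordinate([("u1", 15), ("u2", 10), ("u3", 7)]))
```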

    Dynamic Placement of O-CU and O-DU Functionalities in Open-RAN Architecture

    No full text
    Open Radio Access Network (O-RAN) has recently emerged as a new trend for mobile network architecture. It is based on four founding principles: disaggregation, intelligence, virtualization, and open interfaces. In particular, RAN disaggregation involves dividing the base station's virtualized network functions (VNFs) into three distinct components, the Open Central Unit (O-CU), the Open Distributed Unit (O-DU), and the Open Radio Unit (O-RU), enabling each component to be implemented independently. Such disaggregation aims to improve system performance and allow rapid, open innovation in individual components while ensuring multi-vendor operability. As the disaggregation of the network architecture becomes a key enabler of O-RAN, the deployment scenarios of VNFs over O-RAN clouds become critical. In this context, we propose an optimal and dynamic placement scheme for the O-CU and O-DU functionalities, either on the edge or in regional O-Clouds. The objective is to maximize the users' admittance ratio while accounting for mid-haul delay and server capacity requirements. We develop an Integer Linear Programming (ILP) model for VNF placement in the O-RAN architecture. Additionally, we introduce a Recurrent Neural Network (RNN) heuristic model that can effectively replicate the behavior of the ILP model. We obtain promising results, improving the users' admittance ratio by up to 10% compared with state-of-the-art baselines. Moreover, our proposed model minimizes the deployment costs and increases the overall throughput.
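A minimal sketch of this kind of placement problem (not the paper's ILP), written with PuLP: each cell's O-CU/O-DU pair is placed either in the edge O-Cloud or in the regional O-Cloud, delay-sensitive cells are restricted to the edge as a stand-in for the mid-haul delay constraint, and both sites have limited server capacity; the objective maximizes admitted users. All numbers are illustrative.

```python
# Toy O-CU/O-DU placement (illustrative, not the paper's model): place each
# cell's functions at the edge or regional O-Cloud to maximize admitted users,
# subject to server capacity and a simplified mid-haul delay restriction.
import pulp

cells = {  # cell: (users, CPU demand, delay-sensitive?)
    "c1": (120, 3, True),
    "c2": (80, 2, False),
    "c3": (150, 4, True),
    "c4": (60, 2, False),
}
capacity = {"edge": 5, "regional": 6}   # CPU units per O-Cloud site (assumed)

prob = pulp.LpProblem("ocu_odu_placement", pulp.LpMaximize)
place = pulp.LpVariable.dicts("place", (cells, capacity), cat="Binary")

# Maximize the number of admitted users (a cell's users are admitted only if
# its O-CU/O-DU pair is placed at some site).
prob += pulp.lpSum(cells[c][0] * place[c][s] for c in cells for s in capacity)

for c in cells:
    # Each cell is placed at most once; an unplaced cell is rejected.
    prob += pulp.lpSum(place[c][s] for s in capacity) <= 1
    if cells[c][2]:
        # Simplified mid-haul delay constraint: delay-sensitive cells stay at the edge.
        prob += place[c]["regional"] == 0

for s in capacity:
    # Server capacity at each O-Cloud site.
    prob += pulp.lpSum(cells[c][1] * place[c][s] for c in cells) <= capacity[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for c in cells:
    site = [s for s in capacity if place[c][s].value() > 0.5]
    print(c, "->", site[0] if site else "rejected")
```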