A Game-Theoretic Approach for Runtime Capacity Allocation in MapReduce
Nowadays, many companies have large amounts of raw, unstructured data
available. Among Big Data enabling technologies, a central place is held by the
MapReduce framework and, in particular, by its open source implementation,
Apache Hadoop. For cost effectiveness considerations, a common approach entails
sharing server clusters among multiple users. The underlying infrastructure
should provide every user with a fair share of computational resources,
ensuring that Service Level Agreements (SLAs) are met and avoiding waste. In
this paper we consider two mathematical programming problems that model the
optimal allocation of computational resources in a Hadoop 2.x cluster, with the
aim of developing new capacity allocation techniques that guarantee better
performance in shared data centers. Our goal is to achieve a substantial reduction
of power consumption while respecting the deadlines stated in the SLAs and
avoiding penalties associated with job rejections. The core of this approach is
a distributed algorithm for runtime capacity allocation, based on Game Theory
models and techniques, that mimics the MapReduce dynamics by means of
interacting players, namely the central Resource Manager and Class Managers.
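The runtime interplay between the central Resource Manager and the per-class managers can be illustrated with a toy sketch. This is not the paper's game-theoretic algorithm: it is a minimal proportional-scaling loop, with invented class names and numbers, in which each class requests the minimum capacity that meets its deadline and the manager scales requests to fit the cluster.

```python
# Toy capacity-allocation sketch (illustrative only, not the paper's
# game-theoretic method). Each job class requests the minimum number of
# slots needed to meet its SLA deadline; a central manager proportionally
# scales all requests whenever they exceed cluster capacity.

def required_capacity(work, deadline):
    """Minimum slots a class needs to finish `work` units by `deadline`."""
    return work / deadline

def allocate(classes, cluster_capacity, rounds=50):
    """classes: {name: (work, deadline)}. Returns {name: allocated slots}."""
    alloc = {c: required_capacity(w, d) for c, (w, d) in classes.items()}
    for _ in range(rounds):
        total = sum(alloc.values())
        if total <= cluster_capacity:
            break                          # everything fits: done
        scale = cluster_capacity / total   # shrink all requests fairly
        alloc = {c: a * scale for c, a in alloc.items()}
    return alloc

# Invented demo: two classes competing for a 12-slot cluster.
demo = {"interactive": (100.0, 10.0), "batch": (300.0, 60.0)}
print(allocate(demo, cluster_capacity=12.0))
```

When demand exceeds capacity, each class keeps its proportional share, which mimics (very loosely) the fair-share outcome the paper's distributed scheme is designed to reach.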
Virtualisation and resource allocation in MEC-enabled metro optical networks
The appearance of new network services and the ever-increasing network traffic and number
of connected devices will push the evolution of current communication networks towards the
Future Internet.
In the area of optical networks, wavelength routed optical networks (WRONs) are evolving
to elastic optical networks (EONs) in which, thanks to the use of OFDM or Nyquist WDM,
it is possible to create super-channels with custom-size bandwidth. The basic element in
these networks is the lightpath, i.e., an all-optical circuit between two network nodes. The
establishment of lightpaths requires the selection of the route that they will follow and the
portion of the spectrum to be used in order to carry the requested traffic from the source to
the destination node. That problem is known as the routing and spectrum assignment (RSA)
problem, and new algorithms must be proposed to address this design problem.
Some early studies on elastic optical networks considered gridless scenarios, in which a slice
of spectrum of variable size is assigned to a request. However, the most common approach to
the spectrum allocation is to divide the spectrum into slots of fixed width and allocate multiple,
consecutive spectrum slots to each lightpath, depending on the requested bandwidth. Moreover,
EONs also allow the proposal of more flexible routing and spectrum assignment techniques,
like the split-spectrum approach in which the request is divided into multiple "sub-lightpaths".
In this thesis, four RSA algorithms are proposed combining two different levels of
flexibility with the well-known k-shortest paths and first fit heuristics. After comparing the
performance of those methods, a novel spectrum assignment technique, Best Gap, is proposed
to overcome the inefficiencies that emerged when combining the first fit heuristic with highly
flexible networks. A simulation study is presented to demonstrate that, thanks to the use of
Best Gap, EONs can exploit the network flexibility and reduce the blocking ratio.
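The slot-based allocation described above can be illustrated with a minimal first-fit sketch. This shows only the well-known first fit heuristic mentioned in the text, not the proposed Best Gap technique, and the slot layout is invented.

```python
# Toy first-fit spectrum assignment: scan a link's fixed-width slots and
# return the start of the first run of `needed` contiguous free slots,
# or None if the request would be blocked.

def first_fit(free, needed):
    """free: list of booleans, True = slot available on this link."""
    run_start, run_len = 0, 0
    for i, slot_free in enumerate(free):
        if slot_free:
            if run_len == 0:
                run_start = i          # a new candidate gap begins here
            run_len += 1
            if run_len == needed:
                return run_start       # first gap large enough wins
        else:
            run_len = 0                # gap broken by an occupied slot
    return None                        # blocking: no gap is wide enough

link = [True, False, True, True, True, False, True, True]
print(first_fit(link, 3))  # first run of 3 contiguous free slots starts at 2
```

A Best-Gap-style policy would instead score all candidate gaps and pick the one minimising fragmentation; first fit simply takes the earliest, which is exactly the source of the inefficiencies discussed above.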
On the other hand, operators must face profound architectural changes to increase the
adaptability and flexibility of networks and ease their management. Thanks to the use of
network function virtualisation (NFV), the necessary network functions that must be applied
to offer a service can be deployed as virtual appliances hosted by commodity servers, which
can be located in data centres, network nodes or even end-user premises. The appearance of
new computation and networking paradigms, like multi-access edge computing (MEC), may
facilitate the adaptation of communication networks to the new demands. Furthermore, the
use of MEC technology will enable the possibility of installing those virtual network functions
(VNFs) not only at data centres (DCs) and central offices (COs), the traditional hosts of VNFs, but
also at the edge nodes of the network. Since data processing is performed closer to the end-user,
the latency associated with each service connection request can be reduced. MEC nodes
will usually be connected to each other and to the DCs and COs by optical networks.
In such a scenario, deploying a network service requires completing two phases: the
VNF-placement, i.e., deciding the number and location of VNFs, and the VNF-chaining,
i.e., connecting the VNFs that the traffic associated with a service must traverse in order to
establish the connection. In the chaining process, not only the existence of VNFs with available
processing capacity but also the availability of network resources must be taken into account to
avoid the rejection of the connection request. Taking into consideration that the backhaul of
this scenario will be usually based on WRONs or EONs, it is necessary to design the virtual
topology (i.e., the set of lightpaths established in the network) in order to transport the traffic
from one node to another. The process of designing the virtual topology includes deciding the
number of connections or lightpaths, allocating them a route and spectral resources, and finally
grooming the traffic into the created lightpaths.
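The two phases above (VNF-placement, then VNF-chaining) can be sketched with a toy greedy routine. The node names, capacities and edge-first ordering are invented for illustration; none of the thesis's actual algorithms are reproduced here.

```python
# Illustrative greedy VNF placement: walk a service chain and host each
# VNF on the first node, ordered edge -> CO -> DC, that still has spare
# processing capacity. Returns None when the request must be rejected.

def place_chain(chain, nodes):
    """chain: list of (vnf_name, cpu_demand);
    nodes: ordered list of {'name': ..., 'cpu': spare_capacity} dicts.
    Returns {vnf_name: node_name}, or None if any VNF cannot be hosted."""
    placement = {}
    for vnf, demand in chain:
        for node in nodes:
            if node["cpu"] >= demand:
                node["cpu"] -= demand      # consume capacity on this host
                placement[vnf] = node["name"]
                break
        else:
            return None                    # no node can host this VNF
    return placement

# Invented demo: a small edge node, a central office and a data centre.
nodes = [{"name": "edge1", "cpu": 4}, {"name": "co1", "cpu": 8},
         {"name": "dc1", "cpu": 32}]
chain = [("firewall", 2), ("nat", 3), ("dpi", 6)]
print(place_chain(chain, nodes))
```

The chaining step would then route lightpaths between consecutive hosts (edge1 to co1, co1 to dc1) over the optical backhaul, which is where the virtual topology design described above comes in.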
Lastly, a failure in the equipment of a node in an NFV environment can cause the
disruption of the service chains (SCs) traversing that node. This can cause the loss of huge
amounts of data and affect thousands of end-users. In consequence, it is key to provide the
network with fault-management techniques able to guarantee the resilience of the established
connections when a node fails.
For the mentioned reasons, it is necessary to design orchestration algorithms which solve
the VNF-placement, chaining and network resource allocation problems in 5G networks
with optical backhaul. Moreover, some versions of those algorithms must also implement
protection techniques to guarantee the resilience of the system in case of failure.
This thesis makes contributions along those lines. Firstly, a genetic algorithm is proposed to solve
the VNF-placement and VNF-chaining problems in a 5G network with optical backhaul based
on star topology: GASM (genetic algorithm for effective service mapping). Then, we propose
a modification of that algorithm so that it can be applied to dynamic scenarios in which
reconfiguration of the planning is allowed. Furthermore, we enhance the modified algorithm
to include a learning step, with the objective of improving its performance.
In this thesis, we also propose an algorithm to solve not only the VNF-placement and
VNF-chaining problems but also the design of the virtual topology, considering that a WRON
is deployed as the backhaul network connecting MEC nodes and CO. Moreover, a version
including individual VNF protection against node failure has also been proposed, and the
effects of using shared/dedicated and end-to-end SC/individual VNF protection schemes are
also analysed.
Finally, a new algorithm that solves the VNF-placement and chaining problems and
the virtual topology design implementing a new chaining technique is also proposed.
Its corresponding versions implementing individual VNF protection are also presented.
Furthermore, since the method works with any type of WDM mesh topology, a techno-economic
study is presented to compare the effect of using different network topologies on
both network performance and cost.
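The genetic-algorithm approach mentioned above can be illustrated with a minimal skeleton. GASM's actual encoding, operators, fitness function and learning step are not reproduced; in this invented toy, a genome assigns each VNF to a node and the fitness only penalises overloaded nodes.

```python
# Minimal genetic-algorithm skeleton for a service-mapping-style problem
# (illustrative only). A genome maps each of N_VNFS VNFs to one of
# N_NODES nodes; fitness 0 means no node exceeds its capacity CAP.

import random

N_VNFS, N_NODES, CAP = 6, 3, 3
random.seed(0)  # deterministic demo

def fitness(genome):
    load = [genome.count(n) for n in range(N_NODES)]
    return -sum(max(0, l - CAP) for l in load)   # 0 = feasible mapping

def evolve(pop_size=20, generations=30):
    pop = [[random.randrange(N_NODES) for _ in range(N_VNFS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, N_VNFS)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[random.randrange(N_VNFS)] = random.randrange(N_NODES)  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

A realistic fitness would also account for chaining latency, lightpath availability and rejection penalties; this toy keeps only the capacity constraint to show the evolutionary loop itself.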
Big Earth Data and Machine Learning for Sustainable and Resilient Agriculture
Big streams of Earth images from satellites or other platforms (e.g., drones
and mobile phones) are becoming increasingly available at low or no cost and
with enhanced spatial and temporal resolution. This thesis recognizes the
unprecedented opportunities offered by the high quality and open access Earth
observation data of our times and introduces novel machine learning and big
data methods to properly exploit them towards developing applications for
sustainable and resilient agriculture. The thesis addresses three distinct
thematic areas, i.e., the monitoring of the Common Agricultural Policy (CAP),
the monitoring of food security and applications for smart and resilient
agriculture. The methodological innovations of the developments related to the
three thematic areas address the following issues: i) the processing of big
Earth Observation (EO) data, ii) the scarcity of annotated data for machine
learning model training and iii) the gap between machine learning outputs and
actionable advice.
This thesis demonstrates how big data technologies such as data cubes,
distributed learning, linked open data and semantic enrichment can be used to
exploit the data deluge and extract knowledge to address real user needs.
Furthermore, this thesis argues for the importance of semi-supervised and
unsupervised machine learning models that circumvent the ever-present challenge
of scarce annotations and thus allow for model generalization in space and
time. Specifically, it is shown how only a few ground truth data points are needed to
generate high quality crop type maps and crop phenology estimations. Finally,
this thesis argues there is considerable distance in value between model
inferences and decision making in real-world scenarios and thereby showcases
the power of causal and interpretable machine learning in bridging this gap.
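The scarce-annotation argument can be illustrated with a toy pseudo-labelling pass. None of the thesis's models are reproduced here; the crop classes and feature vectors are invented, and the method shown is plain nearest-centroid self-labelling.

```python
# Toy semi-supervised sketch: fit one centroid per class from a handful
# of labelled samples, then assign every unlabelled sample to its
# nearest class centroid ("few ground truth data" style).

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))   # squared Euclidean

def centroid(points):
    return tuple(sum(xs) / len(points) for xs in zip(*points))

def pseudo_label(labelled, unlabelled):
    """labelled: {class: [feature vectors]}. Returns the same structure
    with each unlabelled vector appended to its nearest class."""
    cents = {c: centroid(pts) for c, pts in labelled.items()}
    out = {c: list(pts) for c, pts in labelled.items()}
    for x in unlabelled:
        c = min(cents, key=lambda k: dist(x, cents[k]))
        out[c].append(x)
    return out

# Invented 2-D "spectral" features: one labelled pixel per crop class.
seeds = {"maize": [(0.2, 0.8)], "wheat": [(0.9, 0.1)]}
pool = [(0.25, 0.7), (0.85, 0.2), (0.8, 0.15)]
result = pseudo_label(seeds, pool)
print({c: len(pts) for c, pts in result.items()})  # {'maize': 2, 'wheat': 3}
```

Iterating this step and refitting centroids on the pseudo-labelled pool is the basic self-training loop that lets a model generalise from very few annotations.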
Spatial-temporal responses of Louisiana forests to climate change and hurricane disturbance
This dissertation research focused on three questions: (1) what is the current carbon stock in Louisiana’s forest ecosystems? (2) how will the biomass carbon stock respond to future climate change? and (3) how vulnerable are the coastal forest resources to natural disturbances, such as hurricanes? The research utilized a geographic information system, remote sensing techniques, ecosystem modeling, and statistical approaches with existing data and in-situ measurements. Future climate changes were adapted from predictions by the Community Climate System Model on the basis of low (B1), moderate (A1B), and high (A2) greenhouse gas emission scenarios. The study on forest carbon assessment found that Louisiana’s forests currently store 219.2 Tg of biomass carbon, 90% of which is stored in wetland and evergreen forests. Spatial variation of the carbon storage was mainly affected by forest biomass distribution. No correlation was identified between watershed carbon storage and the average watershed slope or drainage density. The modeling study on growth response to future climate found that forest net primary productivity (NPP) would decline from 2000 to 2050 under scenario B1, but may increase under scenarios A1B and A2, due primarily to minimum temperature and precipitation changes. Uncertainties in the NPP prediction were apparent, owing to the spatial resolution of the climate variables. The remote sensing study on hurricane disturbance to coastal forests found that increases in the intensity of severe weather in the future would likely increase the turn-over rate of the coastal forest carbon stock. Forest attributes and site conditions had a variety of effects on the vulnerability of forests to hurricane disturbance and, thereby, on the spatial patterns of the disturbed landscape. Soil groups and stand factors, including forest types, forest coverage, and stand density, contributed 85% of the accuracy in modeling the probability of Hurricane Katrina disturbance to forests.
In conclusion, this research demonstrated that quantification of forest biomass carbon, using geo-referenced datasets and GIS techniques, provides a credible approach to increase accuracy and constrain the uncertainty of large-scale carbon assessment. A combination of ecosystem modeling and GIS/Remote Sensing techniques can provide insight into future climate change effects on forest carbon change at the landscape scale.
Using cryptographic identifiers to secure IPv6
IPv6, the successor to IPv4, is currently being deployed in the Internet. It relies heavily on the Neighbor Discovery Protocol (NDP) mechanism. NDP not only allows two IPv6 nodes to communicate, much like the Address Resolution Protocol (ARP) mechanism in IPv4, but also brings new functions, such as IPv6 address autoconfiguration. Securing it is therefore critical for an Internet based on IPv6. The security mechanism standardized by the Internet Engineering Task Force (IETF) is Secure Neighbor Discovery (SEND).
It is based on the use of cryptographic identifiers, IPv6 addresses named Cryptographically Generated Addresses (CGA) that are generated from a public/private key pair, together with X.509 certificates. The goal of this PhD thesis is the study of these cryptographic identifiers, CGA addresses, as well as the SEND mechanism that employs them, and their potential re-use to secure IPv6. In the first part of this thesis, we present the state of the art. In the second part, we examine the reliability of the main known mechanism employing CGA addresses, SEND. In the third and last part, we present uses of cryptographic identifiers to secure IPv6.
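The CGA idea, binding an IPv6 address to a public key, can be sketched as follows. This is a deliberately simplified illustration and is NOT RFC 3972-compliant: the real scheme additionally involves a modifier, the subnet prefix, a collision count, a Sec parameter and hash extensions.

```python
# Simplified CGA-style derivation (not RFC 3972): the 64-bit interface
# identifier of an IPv6 address is taken from a hash of the owner's
# public key, so a verifier can recompute the hash from the sender's
# public key and check it against the address.

import hashlib

def toy_cga(prefix, public_key: bytes):
    """prefix: 64-bit network prefix as 4 hex groups, e.g. '2001:db8:0:1'."""
    digest = hashlib.sha1(public_key).digest()
    iid = bytearray(digest[:8])      # take 64 bits of the hash
    iid[0] &= 0b11111100             # clear the 'u' and 'g' bits of the IID
    groups = [int.from_bytes(iid[i:i + 2], "big") for i in range(0, 8, 2)]
    return prefix + ":" + ":".join(f"{g:x}" for g in groups)

# Invented key bytes; a real deployment hashes an encoded RSA public key.
addr = toy_cga("2001:db8:0:1", b"-----example public key bytes-----")
print(addr)
```

Because only the holder of the matching private key can sign NDP messages for this address, spoofing the address requires finding a key pair whose hash collides with it, which is what gives SEND its address-ownership proof.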
Cross-layer modeling and optimization of next-generation internet networks
Scaling traditional telecommunication networks so that they are able to cope with the volume of future traffic demands and the stringent European Commission (EC) regulations on emissions would entail unaffordable investments. For this very reason, the design of an innovative ultra-high-bandwidth, power-efficient network architecture is nowadays a hot topic within the research community. So far, the independent evolution of network layers has resulted in isolated, and hence far-from-optimal, contributions, which have eventually led to the issues today's networks are facing, such as inefficient energy strategies, limited network scalability and flexibility, reduced network manageability and increased overall network and customer service costs. Consequently, there is currently broad consensus among network operators and the research community that cross-layer interaction and coordination are fundamental for the proper architectural design of next-generation Internet networks.
This thesis actively contributes to this goal by addressing the modeling, optimization and performance analysis of a set of potential technologies to be deployed in future cross-layer network architectures. By applying a transversal design approach (i.e., jointly considering several network layers), we aim to maximize the integration of the different network layers involved in each specific problem. To this end, Part I provides a comprehensive evaluation of optical transport networks (OTNs) based on layer 2 (L2) sub-wavelength switching (SWS) technologies, also taking into consideration the impact of physical layer impairments (PLIs) (L0 phenomena). Indeed, recent advances in optical technologies have dramatically increased the impact that PLIs have on optical signal quality, particularly in the context of SWS networks. Then, in Part II of the thesis, we present a set of case studies showing that the application of operations research (OR) methodologies in the design/planning stage of future cross-layer Internet network architectures leads to the successful joint optimization of key network performance indicators (KPIs) such as cost (i.e., CAPEX/OPEX), resource usage and energy consumption. OR can play an important role by allowing network designers/architects to obtain good near-optimal solutions to real-sized problems within practical running times.
A Dynamic Access Control Model Using Authorising Workflow and Task Role-Based Access Control
Access control is fundamental and a prerequisite to governing and safeguarding information assets within an organisation. Organisations generally use Web-enabled remote access coupled with application access distributed across various networks. These networks face various challenges, including increased operational burden and monitoring issues, due to the dynamic and complex nature of security policies for access control. The increasingly dynamic nature of collaborations means that in one context a user should have access to sensitive information, whilst not being allowed access in other contexts. Current access control models are static and lack dynamic Segregation of Duties (SoD), task-instance-level segregation, and real-time decision making. This thesis addresses these limitations and describes tools to support access management in borderless network environments with dynamic SoD capability and real-time access control decision making and policy enforcement. This thesis makes three contributions: i) defining an Authorising Workflow Task Role-Based Access Control (AW-TRBAC) model using existing task and workflow concepts. This new workflow integrates dynamic SoD, whilst considering task instance restrictions to ensure overall access governance and accountability. It enhances existing access control models such as Role-Based Access Control (RBAC) by dynamically granting users access rights and providing access governance. ii) Extending the OASIS-standard XACML policy language to support dynamic access control requirements and enforce access control rules for real-time decision making. This mitigates risks relating to access control, such as escalation of privilege through broken access control, and insufficient logging and monitoring. iii) Implementing the AW-TRBAC model by extending the open-source XACML (Balana) policy engine to demonstrate its applicability to a real industrial use case from a financial institution.
The results show that AW-TRBAC is scalable, can process relatively large numbers of complex requests, and meets the requirements of real-time access control decision making, governance and mitigation of broken access control risk.
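The dynamic separation-of-duty idea described above can be sketched with a toy decision function. This is illustrative only: it is not the AW-TRBAC model or XACML policy syntax, and the task names and conflict pairs are invented.

```python
# Toy dynamic SoD check: a decision point denies a request if the same
# user has already executed a conflicting task within the same workflow
# instance (task-instance-level segregation), and permits it otherwise.

CONFLICTS = {("create_payment", "approve_payment")}   # invented pairs

def conflicting(task_a, task_b):
    return (task_a, task_b) in CONFLICTS or (task_b, task_a) in CONFLICTS

def decide(history, user, task, workflow_instance):
    """history: list of (user, task, workflow_instance) already executed."""
    for u, t, w in history:
        if u == user and w == workflow_instance and conflicting(t, task):
            return "Deny"      # same user, same instance, conflicting task
    return "Permit"

log = [("alice", "create_payment", "wf-42")]
print(decide(log, "alice", "approve_payment", "wf-42"))  # Deny (SoD)
print(decide(log, "bob", "approve_payment", "wf-42"))    # Permit
```

The key point, matching the abstract, is that the decision depends on the runtime execution history of this workflow instance, not on static role assignments alone.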
Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art
Software-Defined Networking (SDN) is an evolutionary networking paradigm
which has been adopted by large network and cloud providers, among which are
Tech Giants. However, embracing a new and futuristic paradigm as an alternative
to the well-established and mature legacy networking paradigm requires a lot of
time along with considerable financial resources and technical expertise.
Consequently, many enterprises cannot afford it. A compromise solution, then, is
a hybrid networking environment (a.k.a. Hybrid SDN (hSDN)) in which SDN
functionalities are leveraged while existing traditional network
infrastructures are acknowledged. Recently, hSDN has been seen as a viable
networking solution for a diverse range of businesses and organizations.
Accordingly, the body of literature on hSDN research has grown remarkably.
On this account, we present this paper as a comprehensive state-of-the-art
survey which examines hSDN from many different perspectives.
Agents and Robots for Reliable Engineered Autonomy
This book contains the contributions of the Special Issue entitled "Agents and Robots for Reliable Engineered Autonomy". The Special Issue was based on the successful first edition of the "Workshop on Agents and Robots for reliable Engineered Autonomy" (AREA 2020), co-located with the 24th European Conference on Artificial Intelligence (ECAI 2020). The aim was to bring together researchers from the autonomous agents, software engineering and robotics communities, as combining knowledge from these three research areas may lead to innovative approaches that solve complex problems related to the verification and validation of autonomous robotic systems.
Land Degradation Assessment with Earth Observation
This Special Issue (SI) on “Land Degradation Assessment with Earth Observation” comprises 17 original research papers with a focus on land degradation in arid, semiarid and dry-subhumid areas (i.e., desertification) in addition to temperate rangelands, grasslands, woodlands and the humid tropics. The studies cover different spatial, spectral and temporal scales and employ a wealth of different optical and radar sensors. Some studies incorporate time-series analysis techniques that assess the general trend of vegetation or the timing and duration of the reduction in biological productivity caused by land degradation. As anticipated from the latest trend in Earth Observation (EO) literature, some studies utilize the cloud-computing infrastructure of Google Earth Engine to cope with the unprecedented volume of data involved in current methodological approaches. This SI clearly demonstrates the ever-increasing relevance of EO technologies when it comes to assessing and monitoring land degradation. With the recently published IPCC Reports informing us of the severe impacts and risks to terrestrial and freshwater ecosystems and the ecosystem services they provide, the EO scientific community has a clear obligation to increase its efforts to address any remaining gaps—some of which have been identified in this SI—and produce highly accurate and relevant land-degradation assessment and monitoring tools.