Developing Landsat Based Algorithms To Augment In Situ Monitoring Of Freshwater Lakes And Reservoirs
Many lakes and reservoirs lack adequate water quality monitoring programs. With little information on the state of these systems, managing these resources and their contributing watersheds is a challenge. Remote sensing presents an opportunity to better characterize these freshwater systems. The full potential of using the Landsat program to measure optically active water quality parameters, such as chlorophyll-a, suspended sediments, and water clarity, was explored using the Qaraoun Reservoir in Lebanon as a case study. An in situ monitoring program was developed and synchronized with the overpasses of Landsat 7 and the newly launched Landsat 8 satellites in an effort to develop, calibrate, and validate empirical relationships that link water quality parameters with sensor radiances. The collected monitoring data revealed that the reservoir was hypereutrophic, with median summer chlorophyll-a concentrations exceeding 70 µg/L. The generated models showed promise in capturing the state of the reservoir, with some differences between the models developed for Landsat 7 and Landsat 8. These differences are expected to have implications for the transferability of the developed algorithms and for blending data from both satellites. Nevertheless, the results highlight the value of using the Landsat program as part of future monitoring activities as well as for hindcasting surface water quality, both key steps toward tracking changes in the system over time.
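The empirical approach described above can be sketched as a regression that maps satellite-observed band values to in situ chlorophyll-a. The sketch below assumes a log-linear model on a single band ratio; the band choice, the sample values, and the model form are illustrative, not the calibration actually derived in the study.

```python
import numpy as np

# Hypothetical paired samples: a Landsat band ratio observed at the
# satellite overpass, and in situ chlorophyll-a (ug/L) measured the
# same day. All values are made up for illustration.
band_ratio = np.array([0.8, 1.1, 1.5, 2.0, 2.6, 3.1])   # e.g. NIR/red
chl_a      = np.array([12., 25., 48., 70., 95., 120.])

# Calibrate a log-linear empirical model: ln(chl-a) = a * ratio + b
a, b = np.polyfit(band_ratio, np.log(chl_a), 1)

def predict_chl_a(ratio):
    """Estimate chlorophyll-a (ug/L) from a band ratio."""
    return float(np.exp(a * ratio + b))
```

Validation would then compare `predict_chl_a` against held-out in situ samples; sensor differences between Landsat 7 (ETM+) and Landsat 8 (OLI) would show up as different fitted coefficients per satellite.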
Every Moment Counts: Synchrophasors for Distribution Networks with Variable Resources
Chapter 34 in the textbook "Renewable Energy Integration: Practical Management of Variability, Uncertainty and Flexibility".
Proactive cloud management for highly heterogeneous multi-cloud infrastructures
Several studies have demonstrated that the cloud computing paradigm can help improve the availability and performance of applications affected by software anomalies. Indeed, the cloud resource provisioning model enables users to rapidly access new processing resources, even distributed across different geographical regions, that can be promptly used in the case of, e.g., crashes or hangs of running machines, as well as to balance the load in the case of overloaded machines. Nevertheless, managing a complex geographically distributed cloud deployment can be a complex and time-consuming task. The Autonomic Cloud Manager (ACM) Framework is an autonomic framework for supporting proactive management of applications deployed over multiple cloud regions. It uses machine learning models to predict failures of virtual machines and to proactively redirect the load to healthy machines and cloud regions. In this paper, we study different policies for performing efficient proactive load balancing across cloud regions in order to mitigate the effect of software anomalies. These policies use predictions of the mean time to failure of virtual machines. We consider the case of heterogeneous cloud regions, i.e., regions with different amounts of resources, and we provide an experimental assessment of these policies in the context of the ACM Framework.
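A policy of the kind described above can be sketched as follows. This is a minimal illustration, not the ACM Framework's actual policy: it weights traffic across heterogeneous regions by capacity and predicted mean time to failure (MTTF), and proactively drains regions whose VMs are predicted to fail soon. The region names, capacities, and threshold are assumptions.

```python
def balance_load(regions, mttf_floor=60.0):
    """Return per-region traffic shares.

    regions: dict mapping region name -> (capacity, predicted MTTF in seconds).
    Regions whose predicted MTTF falls below `mttf_floor` are drained
    proactively; the rest receive load proportional to capacity * MTTF.
    """
    weights = {
        name: cap * mttf
        for name, (cap, mttf) in regions.items()
        if mttf >= mttf_floor
    }
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

shares = balance_load({
    "eu-west":  (100, 3600.0),   # large region, healthy VMs
    "us-east":  (50,  3600.0),   # smaller region, healthy VMs
    "ap-south": (80,  10.0),     # failure predicted soon: drain it
})
```

Here the unhealthy region receives no traffic, and the remaining load splits 2:1 according to capacity, since both healthy regions share the same predicted MTTF.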
A model-driven approach to broaden the detection of software performance antipatterns at runtime
Performance antipatterns document bad design patterns that have a negative influence on system performance. In our previous work we formalized such antipatterns as logical predicates defined over four views: (i) the static view, which captures the software elements (e.g., classes, components) and the static relationships among them; (ii) the dynamic view, which represents the interactions (e.g., messages) that occur between the software entities to provide the system functionalities; (iii) the deployment view, which describes the hardware elements (e.g., processing nodes) and the mapping of the software entities onto the hardware platform; and (iv) the performance view, which collects specific performance indices. In this paper we present a lightweight infrastructure that is able to detect performance antipatterns at runtime through monitoring. The proposed approach pre-evaluates the predicates, identifies the antipatterns whose static, dynamic, and deployment sub-predicates are satisfied by the current system configuration, and defers the verification of the performance sub-predicates to runtime. The proposed infrastructure leverages model-driven techniques to generate probes for monitoring the performance sub-predicates and detecting antipatterns at runtime.
Comment: In Proceedings FESCA 2014, arXiv:1404.043
Design and experimental validation of a software-defined radio access network testbed with slicing support
Network slicing is a fundamental feature of 5G systems for partitioning a single network into a number of segregated logical networks, each optimized for a particular type of service or dedicated to a particular customer or application. The realization of network slicing is particularly challenging in the Radio Access Network (RAN), where multiple slices can be multiplexed over the same radio channel and Radio Resource Management (RRM) functions must be used to split the cell radio resources and achieve the expected behaviour per slice. In this context, this paper describes the key design and implementation aspects of a Software-Defined RAN (SD-RAN) experimental testbed with slicing support. The testbed has been designed consistently with the slicing capabilities and related management framework established by 3GPP in Release 15. The testbed is used to demonstrate the provisioning of RAN slices (e.g., the preparation, commissioning, and activation phases) and the operation of the implemented RRM functionality for slice-aware admission control and scheduling.
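The slice-aware admission control described above can be sketched as a quota check per slice over the shared cell resources. This is a minimal illustration under assumed quotas and units (abstract resource blocks), not the testbed's actual RRM implementation.

```python
class SliceAdmissionControl:
    """Admit sessions only while their slice stays within its share
    of the cell radio resources (illustrative units)."""

    def __init__(self, quotas):
        self.quotas = dict(quotas)          # slice -> resource quota
        self.used = {s: 0 for s in quotas}  # slice -> resources in use

    def admit(self, slice_id, demand):
        """Admit the session iff the slice keeps within its quota."""
        if self.used[slice_id] + demand <= self.quotas[slice_id]:
            self.used[slice_id] += demand
            return True
        return False

ac = SliceAdmissionControl({"eMBB": 60, "URLLC": 40})
ok1 = ac.admit("eMBB", 50)    # fits within the eMBB quota
ok2 = ac.admit("eMBB", 20)    # would exceed the quota: rejected
ok3 = ac.admit("URLLC", 30)   # the URLLC quota is unaffected
```

Keeping per-slice accounting separate is what gives each logical network its segregated behaviour: congestion in one slice cannot cause rejections in another.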