
    The future of Earth observation in hydrology

    In just the past 5 years, the field of Earth observation has progressed beyond the offerings of conventional space-agency-based platforms to include a plethora of sensing opportunities afforded by CubeSats, unmanned aerial vehicles (UAVs), and smartphone technologies that are being embraced by both for-profit companies and individual researchers. Over the previous decades, space agency efforts have brought forth well-known and immensely useful satellites such as the Landsat series and the Gravity Recovery and Climate Experiment (GRACE) system, with costs typically of the order of 1 billion dollars per satellite and with concept-to-launch timelines of the order of 2 decades (for new missions). More recently, the proliferation of smartphones has helped to miniaturize sensors and energy requirements, facilitating advances in the use of CubeSats that can be launched by the dozens, while providing ultra-high (3-5 m) resolution sensing of the Earth on a daily basis. Start-up companies that did not exist a decade ago now operate more satellites in orbit than any space agency, and at costs that are a mere fraction of traditional satellite missions. With these advances come new space-borne measurements, such as real-time high-definition video for tracking air pollution, storm-cell development, flood propagation, and precipitation, or even for constructing digital surface models using structure-from-motion techniques. Closer to the surface, measurements from small unmanned drones and tethered balloons have mapped snow depth and flooding and estimated evaporation at sub-metre resolutions, pushing back on spatio-temporal constraints and delivering new process insights. At ground level, precipitation has been measured using signal attenuation between antennae mounted on cell phone towers, while the proliferation of mobile devices has enabled citizen scientists to catalogue photos of environmental conditions, estimate daily average temperatures from battery state, and sense other hydrologically important variables such as channel depths using commercially available wireless devices. Global internet access is being pursued via high-altitude balloons, solar planes, and hundreds of planned satellite launches, providing a means to exploit the "internet of things" as an entirely new measurement domain. Such global access will enable real-time collection of data from billions of smartphones or from remote research platforms. This future will produce petabytes of data that can only be accessed via cloud storage and will require new analytical approaches to interpret. The extent to which today's hydrologic models can usefully ingest such massive data volumes is unclear. Nor is it clear whether this deluge of data will be usefully exploited, either because the measurements are superfluous, inconsistent, not accurate enough, or simply because we lack the capacity to process and analyse them. What is apparent is that the tools and techniques afforded by this array of novel and game-changing sensing platforms present our community with a unique opportunity to develop new insights that advance fundamental aspects of the hydrological sciences. To accomplish this will require more than just an application of the technology: in some cases, it will demand a radical rethink on how we utilize and exploit these new observing systems.
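
    The cell-tower technique mentioned above rests on a simple physical relation: rain attenuates the microwave signal between two antennae, and the specific attenuation follows a power law in the rain rate. A minimal sketch, assuming an ITU-style power law A = a * R**b with illustrative (not authoritative) coefficients:

```python
# Rain-rate retrieval from a commercial microwave link: a sketch, not the
# method of any specific paper. Coefficients a, b are illustrative
# placeholders; real values depend on link frequency and polarization.

def rain_rate_from_attenuation(path_loss_db, baseline_db, link_km,
                               a=0.12, b=1.1):
    """Invert the power law A = a * R**b, i.e. R = (A / a)**(1 / b)."""
    # Excess loss relative to the dry-weather baseline, per kilometre of path.
    specific_attenuation = max(path_loss_db - baseline_db, 0.0) / link_km
    if specific_attenuation == 0.0:
        return 0.0
    return (specific_attenuation / a) ** (1.0 / b)  # rain rate in mm/h

# Example: 4.8 dB of excess loss over a 3 km tower-to-tower link.
print(rain_rate_from_attenuation(path_loss_db=62.3, baseline_db=57.5,
                                 link_km=3.0))  # ~10 mm/h
```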

    Auto-Scaling Network Resources using Machine Learning to Improve QoS and Reduce Cost

    Virtualization of network functions (e.g., virtual routers and virtual firewalls) enables network owners to respond efficiently to the increasing dynamicity of network services. Virtual Network Functions (VNFs) are easy to deploy, update, monitor, and manage. The number of VNF instances, like generic computing resources in the cloud, can be easily scaled based on load. Hence, auto-scaling (of resources without human intervention) has been receiving attention. Prior studies on auto-scaling use measured network traffic load to react dynamically to traffic changes. In this study, we propose a proactive Machine Learning (ML) based approach to perform auto-scaling of VNFs in response to dynamic traffic changes. Our proposed ML classifier learns from past VNF scaling decisions and the seasonal/spatial behavior of network traffic load to generate scaling decisions ahead of time. Compared to existing approaches for ML-based auto-scaling, our study explores how the properties (e.g., start-up time) of the underlying virtualization technology impact Quality of Service (QoS) and cost savings. We consider four different virtualization technologies: Xen and KVM, based on hypervisor virtualization, and Docker and LXC, based on container virtualization. Our results show promising accuracy of the ML classifier using real data collected from a private ISP. We report an in-depth analysis of the learning process (learning-curve analysis), feature ranking (feature selection, Principal Component Analysis (PCA), etc.), the impact of different sets of features, training time, and testing time. Our results show how the proposed methods improve QoS and reduce operational cost for network owners. We also demonstrate a practical use-case example (a Software-Defined Wide Area Network (SD-WAN) with VNFs and a backbone network) to show that our ML methods save significant cost for network service lessees.
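
    As a rough illustration of the proactive approach described above (not the authors' implementation), the sketch below trains a classifier on synthetic traffic history, where hour of day stands in for the seasonal behavior the study exploits; the features, label encoding, and thresholds are all assumptions:

```python
# Proactive auto-scaling sketch: a classifier predicts the next scaling
# action from recent load and time of day, one interval ahead of time.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic history: [current load, load one step ago, hour of day].
X = rng.uniform(0, 1, size=(500, 3))
X[:, 2] = rng.integers(0, 24, size=500)   # hour of day captures seasonality

# Labels: 0 = hold, 1 = scale out, 2 = scale in (illustrative policy).
y = np.where(X[:, 0] > 0.7, 1, np.where(X[:, 0] < 0.2, 2, 0))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Decide the next action before the load change actually arrives.
next_window = [[0.82, 0.75, 18]]          # rising evening load
print(clf.predict(next_window))           # e.g. [1] -> scale out in advance
```

    In practice the prediction horizon would be chosen to cover the start-up time of the chosen virtualization technology, which is precisely the property the study shows matters for QoS and cost.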

    A Novel Fog Computing Approach for Minimization of Latency in Healthcare using Machine Learning

    One of the most challenging current requirements is handling the massive volume of multimedia data generated by Internet of Things (IoT) devices, which is difficult to manage through the cloud alone. Fog computing has emerged as an intelligent solution that operates in a distributed environment. The objective of this paper is latency minimization in e-healthcare through fog computing. In IoT multimedia data transmission, the transmission delay, network delay, and computation delay must therefore be reduced, as there is high demand for healthcare multimedia analytics. Fog computing provides processing, storage, and analysis of data nearer to IoT devices and end-users, reducing latency. In this paper, a novel Intelligent Multimedia Data Segregation (IMDS) scheme using machine learning (k-fold random forest) is proposed for the fog computing environment; it segregates the multimedia data, and a model is used to calculate total latency (transmission, computation, and network). In simulation, the model achieves 92% classification accuracy and reduces latency by approximately 95% compared with the pre-existing model, improving the quality of service in e-healthcare.
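
    The latency model summarized above decomposes total latency into transmission, computation, and network delays. A minimal sketch under that decomposition, with all numeric inputs as illustrative assumptions, shows why placing computation at a nearby fog node can beat a faster but more distant cloud:

```python
# Total latency = transmission delay + computation delay + network delay,
# following the decomposition in the abstract. All numbers are illustrative.

def transmission_delay(packet_bits, bandwidth_bps):
    return packet_bits / bandwidth_bps        # time to push bits onto the link

def computation_delay(cycles_required, node_cps):
    return cycles_required / node_cps         # processing time at the node

def total_latency(packet_bits, bandwidth_bps, cycles, node_cps, network_delay_s):
    return (transmission_delay(packet_bits, bandwidth_bps)
            + computation_delay(cycles, node_cps)
            + network_delay_s)

# Fog node (nearby, modest CPU) vs. cloud (distant, fast CPU) for one record.
fog = total_latency(8e6, 50e6, 2e8, 4e9, network_delay_s=0.005)
cloud = total_latency(8e6, 50e6, 2e8, 16e9, network_delay_s=0.120)
print(f"fog: {fog:.3f}s  cloud: {cloud:.3f}s")
# With light per-record processing, proximity outweighs the cloud's faster CPU.
```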

    Distributed Particle Filters for Data Assimilation in Simulation of Large Scale Spatial Temporal Systems

    Assimilating real-time sensor data into a running simulation model can improve simulation results for large-scale spatial temporal systems such as wildfires, road traffic, and floods. Particle filters are important methods for supporting data assimilation. While particle filters can work effectively with sophisticated simulation models, they have high computation cost due to the large number of particles needed to converge to the true system state. This is especially true for large-scale spatial temporal simulation systems, which have high-dimensional state spaces and high computation costs of their own. To address the performance issue of particle filter-based data assimilation, this dissertation developed distributed particle filters and applied them to large-scale spatial temporal systems. We first implemented a particle filter-based data assimilation framework and carried out data assimilation to estimate system state and model parameters, based on an application of wildfire spread simulation. We then developed advanced particle routing methods in distributed particle filters to route particles among the Processing Units (PUs) after resampling in an effective and efficient manner. In particular, for distributed particle filters with centralized resampling, we developed two routing policies, named the minimal transfer particle routing policy and the maximal balance particle routing policy. For distributed particle filters with decentralized resampling, we developed a hybrid particle routing approach that combines global routing with local routing to take advantage of both. The developed routing policies are evaluated in terms of communication cost and data assimilation accuracy, based on the application of data assimilation for large-scale wildfire spread simulations. Moreover, as cloud computing gains popularity, we also developed a parallel and distributed particle filter based on Hadoop and MapReduce to support large-scale data assimilation.
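
    For context, the sketch below shows one particle-filter assimilation step with systematic resampling on a toy scalar state. The resampling step is what produces the particle exchange that the routing policies above manage; this is an illustrative stand-in, not the dissertation's wildfire implementation:

```python
# One particle-filter data-assimilation step: propagate, weight by the
# observation likelihood, then systematically resample. Toy scalar model.
import numpy as np

rng = np.random.default_rng(1)
N = 1000                                   # number of particles

particles = rng.normal(0.0, 1.0, N)        # prior ensemble of system states
particles += rng.normal(0.1, 0.05, N)      # propagate through the model

observation, obs_std = 0.4, 0.2            # incoming real-time sensor reading
weights = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
weights /= weights.sum()                   # normalized importance weights

# Systematic resampling: one uniform offset, N evenly spaced pointers.
positions = (rng.uniform() + np.arange(N)) / N
indices = np.searchsorted(np.cumsum(weights), positions)
particles = particles[indices]             # resampled ensemble; in a
                                           # distributed PF these indices
                                           # determine which particles must
                                           # be routed between PUs

print(particles.mean())                    # posterior state estimate
```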