
    Residential building damage from hurricane storm surge: proposed methodologies to describe, assess and model building damage

    Although hydrodynamic models are used extensively to quantify the physical hazard of hurricane storm surge, the connection between the physical hazard and its effects on the built environment has not been well addressed. The focus of this dissertation research is the improvement of our understanding of the interaction of hurricane storm surge with the built environment. This is accomplished through proposed methodologies to describe, assess and model residential building damage from hurricane storm surge. Current methods to describe damage from hurricane events rely on the initiating mechanism. To describe hurricane damage to residential buildings, a combined wind and flood damage scale is developed that categorizes hurricane damage on a loss-consistent basis, regardless of the primary damage mechanism. The proposed Wind and Flood (WF) Damage Scale incorporates existing damage and loss assessment methodologies for wind and flood events and describes damage using a seven-category discrete scale. Assessment of hurricane damage has traditionally been conducted through field reconnaissance deployments in which damage information is captured and cataloged. The increasing availability of high-resolution satellite and aerial imagery in recent years has led to damage assessments that rely on remotely sensed information. Existing remote sensing damage assessment methodologies are reviewed for high-velocity flood events at the regional, neighborhood and per-building levels. The suitability of using remote sensing to assess residential building damage from hurricane storm surge at the neighborhood and per-building levels is investigated through visual analysis of damage indicators. Existing models for flood damage in the United States generally quantify the economic loss that results from flooding as a function of depth, rather than assessing a level of physical damage. As a first work in this area, a framework for the development of an analytical damage model for residential structures is presented. Input conditions are provided by existing hydrodynamic storm surge models, and building performance is determined through a comparison of physical hazard and building resistance parameters in a geospatial computational environment. The proposed damage model consists of a two-tier framework in which overall structural response and the performance of specific components are evaluated.
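    As a rough illustration of the kind of two-tier hazard-versus-resistance comparison the framework describes, the Python sketch below checks a hypothetical building's surge exposure against illustrative resistance parameters. The attribute names, thresholds and damage categories are assumptions for exposition only, not the dissertation's actual model.

        from dataclasses import dataclass

        @dataclass
        class SurgeHazard:
            depth_m: float        # still-water surge depth at the site (from a hydrodynamic model)
            wave_height_m: float  # local wave height at the site

        @dataclass
        class Building:
            first_floor_elev_m: float  # first-floor elevation above local grade
            collapse_depth_m: float    # illustrative inundation depth assumed to cause structural failure

        def damage_category(hazard: SurgeHazard, bldg: Building) -> int:
            """Return an illustrative discrete damage category (0 = none ... 6 = destroyed)."""
            inundation = (hazard.depth_m + 0.5 * hazard.wave_height_m) - bldg.first_floor_elev_m
            # Tier 1: overall structural response.
            if inundation >= bldg.collapse_depth_m:
                return 6
            # Tier 2: component-level performance, graded by inundation depth.
            if inundation <= 0.0:
                return 0
            return min(5, 1 + int(inundation // 0.5))

        print(damage_category(SurgeHazard(depth_m=2.4, wave_height_m=0.8),
                              Building(first_floor_elev_m=1.0, collapse_depth_m=2.5)))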

    Revista Economica


    3D Indoor Positioning in 5G networks

    Over the past two decades, the challenge of accurately positioning objects or users indoors, especially in areas where Global Navigation Satellite Systems (GNSS) are not available, has been a significant focus for the research community. With the rise of 5G IoT networks, the quest for precise 3D positioning in various industries has driven researchers to explore a range of machine learning-based positioning techniques. Within this context, researchers are leveraging a mix of existing and emerging wireless communication technologies such as cellular, Wi-Fi, Bluetooth, Zigbee, Visible Light Communication (VLC), etc., as well as integrating any available useful data to enhance the speed and accuracy of indoor positioning. Methods for indoor positioning combine various parameters such as received signal strength (RSS), time of flight (TOF), time of arrival (TOA), time difference of arrival (TDOA), direction of arrival (DOA) and more. Among these, fingerprint-based positioning stands out as a popular technique in Real Time Localisation Systems (RTLS) due to its simplicity and cost-effectiveness. Positioning systems based on fingerprint maps or other relevant methods find applications in diverse scenarios, including malls for indoor navigation and geo-marketing, hospitals for monitoring patients, doctors and critical equipment, logistics for asset tracking and optimising storage spaces, and homes for providing Ambient Assisted Living (AAL) services. A significant challenge facing all indoor positioning systems is the objective evaluation of their performance. This challenge is compounded by the coexistence of heterogeneous technologies and the rapid advancement of computation, and there is vast potential for information fusion that remains to be explored. These observations motivate this work; as a result, two novel algorithms and a framework are introduced in this thesis.
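    As a concrete illustration of the fingerprint-based positioning mentioned above, the Python sketch below estimates a 3D position by weighted k-nearest-neighbour matching of an observed RSS vector against an offline radio map. The fingerprint values, reference positions and the choice of k are assumptions for exposition, not results from the thesis.

        import numpy as np

        # Offline fingerprint map: RSS (dBm) from 4 access points measured at known 3D reference points.
        fingerprints = np.array([
            [-45.0, -70.0, -80.0, -60.0],
            [-60.0, -50.0, -75.0, -65.0],
            [-75.0, -65.0, -48.0, -70.0],
            [-68.0, -72.0, -66.0, -52.0],
        ])
        positions = np.array([   # (x, y, z) in metres for each reference point
            [1.0, 1.0, 0.0],
            [6.0, 1.5, 0.0],
            [6.5, 7.0, 3.0],
            [1.5, 6.5, 3.0],
        ])

        def locate(rss_observed: np.ndarray, k: int = 3) -> np.ndarray:
            """Weighted k-NN over Euclidean distance in RSS space."""
            d = np.linalg.norm(fingerprints - rss_observed, axis=1)
            nearest = np.argsort(d)[:k]
            w = 1.0 / (d[nearest] + 1e-6)   # closer fingerprints get more weight
            return (positions[nearest] * w[:, None]).sum(axis=0) / w.sum()

        print(locate(np.array([-50.0, -62.0, -78.0, -63.0])))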

    AUGUR: Forecasting the Emergence of New Research Topics

    Being able to rapidly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. The literature presents several approaches to identifying the emergence of new research topics, which rely on the assumption that the topic is already exhibiting a certain degree of popularity and is consistently referred to by a community of researchers. However, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. We address this issue by introducing Augur, a novel approach to the early detection of research topics. Augur analyses the diachronic relationships between research areas and is able to detect clusters of topics that exhibit dynamics correlated with the emergence of new research topics. Here we also present the Advanced Clique Percolation Method (ACPM), a new community detection algorithm developed specifically for supporting this task. Augur was evaluated on a gold standard of 1,408 debutant topics in the 2000-2011 interval and outperformed four alternative approaches in terms of both precision and recall.
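    The ACPM itself is specific to this work, but the classical Clique Percolation Method it builds on can be sketched with off-the-shelf tooling. The Python example below finds k-clique communities in a small topic co-occurrence graph using networkx; the graph and the choice of k = 3 are illustrative assumptions, not data from the paper.

        import networkx as nx
        from networkx.algorithms.community import k_clique_communities

        # Toy topic co-occurrence graph: nodes are topics, edges mean the topics
        # co-occur in publications (illustrative data only).
        G = nx.Graph()
        G.add_edges_from([
            ("semantic web", "ontologies"), ("semantic web", "linked data"),
            ("ontologies", "linked data"), ("linked data", "knowledge graphs"),
            ("ontologies", "knowledge graphs"),
            ("neural networks", "deep learning"), ("deep learning", "word embeddings"),
            ("neural networks", "word embeddings"),
        ])

        # Classical CPM: a community is a union of k-cliques that share k-1 nodes.
        for community in k_clique_communities(G, 3):
            print(sorted(community))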

    Towards more intelligent wireless access networks


    Monitoring and analysis system for performance troubleshooting in data centers

    It was not long ago: on Christmas Eve 2012, a war of troubleshooting began in Amazon data centers. It started at 12:24 PM with a mistaken deletion of the state data of the Amazon Elastic Load Balancing service (ELB for short), which was not realized at the time. The mistake first led to a local issue in which a small number of ELB service APIs were affected. In about six minutes, it evolved into a critical one in which EC2 customers were significantly affected. One example was Netflix, which was using hundreds of Amazon ELB services and experienced an extensive streaming outage in which many customers could not watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours and 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world's attention. As this Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time consuming. To address this troubleshooting challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers. VScope provides primitive operations that data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel software architecture for VScope so that the overlay networks can be generated, executed and terminated automatically, on demand. On the troubleshooting side, we design novel anomaly detection algorithms and implement them in VScope. By running these algorithms in VScope, data center operators are notified when performance anomalies happen. We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze these interactions to find out which components are relevant to a performance issue. VScope's capabilities and performance are evaluated on a testbed with over 1,000 virtual machines (VMs). Experimental results show that the VScope runtime negligibly perturbs system and application performance, and requires mere seconds to deploy monitoring and analytics functions on over 1,000 nodes. This demonstrates VScope's ability to support fast operation and online queries against a comprehensive set of application- to system/platform-level metrics, and a variety of representative analytics functions. When supporting algorithms with high computational complexity, VScope serves as a 'thin layer' that accounts for no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found via application-level monitoring alone, and in one of the use cases explored in the dissertation, it operates with levels of perturbation over 400% lower than those of brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces. The experimental results show that VFocus has a troubleshooting accuracy of 83% on average.
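    The dissertation's anomaly detection algorithms are its own; as a minimal stand-in, the Python sketch below flags anomalous points in a per-VM metric stream using an exponentially weighted moving average (EWMA) with a z-score style threshold, the kind of lightweight online check a VScope-like monitoring function might run. The metric values and thresholds are illustrative assumptions, not VScope's actual algorithm.

        def ewma_anomalies(values, alpha=0.3, threshold=3.0):
            """Flag points deviating from the EWMA by more than `threshold` * EWM std-dev."""
            mean, var, flags = values[0], 0.0, [False]
            for x in values[1:]:
                std = var ** 0.5
                flags.append(std > 0 and abs(x - mean) > threshold * std)
                # Update the exponentially weighted mean and variance.
                diff = x - mean
                mean += alpha * diff
                var = (1 - alpha) * (var + alpha * diff * diff)
            return flags

        # Illustrative request-latency samples (ms) from one VM; the spike should be flagged.
        latencies = [12.0, 11.5, 12.2, 11.8, 12.1, 12.0, 48.0, 12.3, 11.9]
        print([t for t, anomalous in zip(latencies, ewma_anomalies(latencies)) if anomalous])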