
    From Traditional Adaptive Data Caching to Adaptive Context Caching: A Survey

    Context data is in demand more than ever with the rapid increase in the development of context-aware Internet of Things applications. Research in context and context-awareness is being conducted to broaden its applicability in light of many practical and technical challenges. One of these challenges is maintaining performance when responding to a large number of context queries. Context Management Platforms that infer and deliver context to applications measure this problem using Quality of Service (QoS) parameters. Although caching is a proven way to improve QoS, the transiency of context, together with features such as the variability and heterogeneity of context queries, poses an additional real-time cost management problem. This paper presents a critical survey of the state of the art in adaptive data caching, with the objective of developing a body of knowledge on cost- and performance-efficient adaptive caching strategies. We comprehensively survey a large number of research publications and evaluate, compare, and contrast different techniques, policies, approaches, and schemes in adaptive caching. Our critical analysis is motivated by the focus on adaptively caching context as a core research problem. A formal definition of adaptive context caching is then proposed, followed by the identified features and requirements of a well-designed, objectively optimal adaptive context caching strategy. Comment: This paper was under review with the ACM Computing Surveys journal at the time of publishing on arXiv.org.
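The core cost-management trade-off the survey describes can be illustrated with a minimal sketch: cache a transient context item only while its expected benefit (retrieval latency saved over its remaining lifetime) outweighs the cost of keeping it fresh. All class and parameter names here are hypothetical, not from the surveyed platforms.

```python
import time

class AdaptiveContextCache:
    """Toy sketch of a cost-aware adaptive context cache: an item is
    cached only if its expected benefit exceeds its refresh cost, and
    transient items are evicted once their lifetime expires."""

    def __init__(self):
        self.store = {}  # key -> (value, expiry timestamp)

    def should_cache(self, access_rate, retrieval_latency, lifetime, refresh_cost):
        # Expected latency saved while the item is still valid.
        benefit = access_rate * lifetime * retrieval_latency
        return benefit > refresh_cost

    def put(self, key, value, lifetime, access_rate, retrieval_latency, refresh_cost):
        if self.should_cache(access_rate, retrieval_latency, lifetime, refresh_cost):
            self.store[key] = (value, time.time() + lifetime)
            return True
        return False  # not worth caching under the cost model

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if time.time() > expiry:  # transient context: evict on expiry
            del self.store[key]
            return None
        return value
```

A real Context Management Platform would estimate access rates and lifetimes online rather than receive them as parameters; this sketch only makes the cost-benefit decision explicit.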

    Progressive collapse risk analysis: literature survey, relevant construction standards and guidelines

    A technical literature survey has been conducted concerning the problems of building robustness and progressive collapse. These issues gained special interest in construction after the partial collapse of the Ronan Point apartment building in London in 1968. Enhanced interest appeared again after the disproportionate collapse of the A.P. Murrah Federal Building in Oklahoma City in 1995, and the total collapse of the World Trade Center towers in 2001, both caused by terrorist attacks. This report, which is an updated version of the 2009 one, aims at summarising the state of the art on the progressive collapse risk of civil engineering structures. First, a list of the main terms and definitions related to progressive collapse is presented. Then, a review of procedures and strategies for progressive collapse avoidance is provided, based on selected EU and US design codes, standards, and guidelines. A review of research efforts and results in the field follows, as reported in international journals and conference papers. Different proposals for robustness measures of structures are also examined, and some characteristic cases of progressive collapses of real buildings are presented. JRC.G.5 - European laboratory for structural assessment.

    A Sociotechnical Systems Analysis of Building Information Modelling (STSaBIM) Implementation in Construction Organisations

    The concept of BIM is nascent but evolving rapidly; its deployment has thus become the latest shibboleth amongst both academics and practitioners in the construction sector in recent years. Owing to construction clients' buy-in of the BIM concept, the entire industry is encouraged to pursue a vision of changing work practices in line with BIM ideas. Existing research also recognises that the implementation of BIM affects all areas of the construction process, from the design of the building, through the organisation of projects, to the way in which the construction process is executed and how the finished product is maintained. The problem, however, is that existing research on technology utilisation in general, and the BIM literature in particular, has offered limited help to practitioners trying to implement BIM, because it focuses predominantly on technology-centric views. Not surprisingly, therefore, the current BIM literature emphasises topics such as capability maturity models and the anticipated outcomes of BIM rollouts. Rarely does the extant literature offer practitioners a cohesive approach to BIM implementation. Such technology-centric views inevitably represent a serious barrier to utilising the inscribed capabilities of BIM. This research is therefore predicated on the need to strengthen BIM implementation theory by monitoring and analysing its implementation in practice. Thus, the focus of this thesis is to carry out a sociotechnical systems (STS) analysis of BIM implementation in construction organisations. The concept of STS accommodates the dualism of the inscribed functions of BIM technologies and the contextual issues in the organisations, and allows for the analysis of their interactive combination in producing the anticipated effect from BIM appropriation. An interpretive research methodology is adopted to study practitioners through a change process involving the implementation of BIM in their work contexts.
The study is based on constructivist ontological interpretations of participants. It adopts an abductive research approach, which ensures a back-and-forth movement between research sites and the theoretical phenomenon, effectively comparing the empirical findings with existing theories to eventually generate new theoretical understanding and knowledge of the phenomenon under investigation. A two-stage process is formulated for the empirical data collection, comprising: 1) an initial exploratory study to help establish the framework for analysing BIM implementation in the construction context; and 2) a case studies approach to provide a context for formulating novel understanding and validating theory regarding BIM implementation in construction organisations. The analysis and interpretation of the empirical work follows the qualitative content analysis technique to observe and reflect on the results. The findings show that BIM implementation demands a complete break from the status quo. Contrary to the prevailing understanding of a top-down approach to BIM utilisation, the study revealed that different organisations with a plethora of visions, expectations, and skills combine with artefacts to form or transform BIM practices. The rollout and appropriation of BIM occur when organisations shape sociotechnical systems of institutions, processes, and technologies to support certain practices over others. The study also showed that BIM implementation endures in a causal chain of influences, as different project organisations with their localised BIM ambitions and expectations combine to develop holistic BIM-enabled project visions. Thus, distributed responsibilities for holistic BIM protocols among the different levels of influence are instituted and enforced under binding contractual obligations. The study has illuminated the centrality of both the technical challenges and the sociological factors in shaping BIM deployment in construction.
It is also one of the few studies to have produced accounts of BIM deployment that are strongly mediated by the institutional contexts of construction organisations. However, it is acknowledged that the research's focus on qualitative interpretive enquiry does not permit generalising from specific cases to broader populations and contexts. It is therefore suggested that further quantitative studies, using a much larger sample of BIM-enabled construction organisations, could provide an interesting point of comparison to the conclusions derived from the research findings.

    Risk Management in Environment, Production and Economy

    The term "risk" is very often associated with negative meanings. However, in most cases, many opportunities can present themselves to deal with the events and to develop new solutions which can convert a possible danger into an unforeseen, positive event. This book is a structured collection of papers dealing with the subject and stressing the importance of risk management. The aim is to present the problem in various fields of application of risk management theories, highlighting the approaches that can be found in the literature.

    System design for periodic data production management

    This research project introduces a new type of information system, the periodic data production management system, and proposes several innovative system design concepts for this application area. Periodic data production systems are common in the information industry for the production of information. These systems process large quantities of data in order to produce statistical reports at predefined intervals. The workflow of such a system is typically distributed world-wide and consists of several semi-computerized production steps which transform data packages. For example, market research companies apply these systems in order to sell marketing information over specified timelines. A lack of concepts for IT-aided management in this area has been identified. This thesis clearly defines the complex requirements of periodic data production management systems. It is shown that these systems can be defined as IT support for planning, monitoring, and controlling periodic data production processes. Their significant advantage is that the information industry will be enabled to increase production performance, and to ease (and speed up) the identification of production progress as well as the achievable optimisation potential in order to control rationalisation goals. In addition, this thesis provides solutions for the generic problem of how to introduce such a management system on top of an unchangeable periodic data production system.
Two promising system designs for periodic data production management are derived, analysed, and compared in order to gain knowledge about appropriate concepts for this application area. Production planning systems are the metaphor model used for the so-called closely coupled approach; the metaphor model for the loosely coupled approach is project management. The latter approach is prototyped as an application in the market research industry and used as a case study. The evaluation results are real-world experiences which demonstrate the extraordinary efficiency of systems based on the loosely coupled approach. Of particular note is a scenario-based evaluation that accurately demonstrates the many improvements achievable with this approach. The main results are that production planning and process quality can be vitally improved. Finally, among other propositions, it is suggested that future work concentrate on the development of product lines for periodic data production management systems in order to increase their reuse.
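The loosely coupled, project-management view described above can be sketched as follows: each reporting period is modelled as a small project whose progress is the fraction of completed transformation steps. The class and step names are purely illustrative, not taken from the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class ProductionStep:
    """One semi-computerized step that transforms a data package."""
    name: str
    done: bool = False

@dataclass
class PeriodicProductionRun:
    """Toy sketch of monitoring a periodic production process as a
    project: progress is the share of completed steps."""
    period: str
    steps: list = field(default_factory=list)

    def complete(self, name):
        # Mark the named step as finished.
        for step in self.steps:
            if step.name == name:
                step.done = True

    def progress(self):
        # Fraction of steps completed in this reporting period.
        if not self.steps:
            return 0.0
        return sum(step.done for step in self.steps) / len(self.steps)
```

A production planning metaphor (the closely coupled approach) would instead schedule steps against capacities and deadlines; the point of this sketch is only how progress identification becomes trivial once steps are modelled explicitly.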

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-path effects in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments demonstrate that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves an overall accuracy of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches accuracies of 98.54%, 94.25%, and 95.09% across those same environments.
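The attention component of such a model can be illustrated in isolation: given the sequence of hidden states a BiLSTM produces over a CSI time series, attention scores each time step, normalises the scores with a softmax, and returns the weighted sum as a fixed-size feature for classification. This is a generic sketch of attention pooling, not the authors' exact architecture, and the scoring vector `w` would normally be learned.

```python
import numpy as np

def attention_pool(hidden, w):
    """Attention pooling over a (T, D) matrix of hidden states, as used
    on top of a BiLSTM: score each time step, softmax the scores, and
    return the attention-weighted sum of the states."""
    scores = hidden @ w                    # (T,) unnormalised relevance scores
    scores = scores - scores.max()         # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention weights, sum to 1
    context = alpha @ hidden               # (D,) weighted combination of states
    return context, alpha
```

In the full pipeline, `context` would feed a dense softmax layer over the 12 activity classes; the weights `alpha` also make the model partially interpretable, showing which time steps drove the prediction.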

    On the assessment of precipitation extremes in reanalysis and ensemble forecast datasets

    Precipitation extremes can trigger natural hazards with large impacts. The accurate quantification of the probability, and the prediction of the occurrence, of heavy precipitation events is crucial for the mitigation of precipitation-related hazards. This PhD thesis provides methods for the assessment of precipitation extremes. The methods are applied to different gridded datasets. The framework of extreme value theory, and more precisely the extended generalized Pareto distribution (EGPD), is used to quantify precipitation distributions. Chapter 2 compares the ERA-5 precipitation dataset with observation-based datasets and identifies the regions of low or high agreement of ERA-5 precipitation with observations. ERA-5 is a reanalysis dataset, i.e. a reconstruction of past weather obtained by combining past observations with weather forecast models. The strengths of reanalysis precipitation fields are their regular spatio-temporal coverage and their consistency with the reanalysis data on the atmospheric circulation. However, precipitation in ERA-5 stems from short-term forecasts, and the precipitation data calculation does not include observed precipitation. Therefore, a comparison with observational datasets is needed to assess the quality of the precipitation data. We compare ERA-5 precipitation with two observation-based gridded datasets: EOBS (station-based) over Europe and CMORPH (satellite-based) globally. Both the intensity and the occurrence of precipitation extremes are compared. We measure the co-occurrence of extremes between ERA-5 and the observational datasets with a hit rate of binary extreme events. We find a decrease in the hit rate with increasing rarity of events. Over Europe, the hit rate is rather homogeneous, except near arid regions where it has a larger variability.
In the global comparison, the midlatitude oceans are the regions with the largest agreement on the occurrence of extremes between the satellite observations and the reanalysis dataset. The areas with the largest disagreement are the tropics, especially over Africa. We compare the precipitation intensity extremes between ERA-5 and the observational datasets using confidence intervals on the estimation of extreme quantiles and a test based on the Kullback-Leibler divergence. Both the confidence intervals and the Kullback-Leibler divergence calculations are based on fitting the precipitation distribution with the EGPD. The quantile comparison indicates an overlap of the confidence intervals on extreme quantiles (with a probability of non-exceedance of 0.9) for about 85% of the grid points over Europe and 72% globally. The regions with non-overlapping confidence intervals between ERA-5 and EOBS correspond to regions where the observation coverage is sparse and therefore where EOBS is more uncertain. The two datasets agree well over countries with dense observational coverage. ERA-5 and CMORPH precipitation intensities agree well over the midlatitudes. The tropics are a region of disagreement: ERA-5 underestimates quantiles for heavy precipitation compared to CMORPH. In Chapter 3, we provide return levels of heavy precipitation events with regional fittings of the EGPD. The goal of this chapter is to develop a regional fitting method that offers a good trade-off between robust estimation of the distribution and model parsimony, with a focus on precipitation extremes. We apply the method to ERA-5 precipitation data over Europe. This area of the dataset contains more than 20,000 grid points. A local fit of EGPD distributions for all grid points in Europe would therefore imply estimating a large number of parameters. To reduce the number of estimated parameters, we identify homogeneous regions in terms of extreme precipitation behaviour.
Locations with a similar distribution of extremes (up to a normalizing factor) are first clustered with a partitioning-around-medoids (PAM) procedure. The distance used in the clustering procedure is based on a scale-invariant ratio of probability-weighted moments focusing on the upper tail of the distribution. We then fit an EGPD with a constraint: only one parameter (out of three) is allowed to vary within a homogeneous region. The outputs of Chapter 3 are 1) a step-by-step blueprint that leverages a recently developed and fast clustering algorithm to infer return level estimates over large spatial domains, and 2) maps of return levels over Europe for different return periods and seasons. The relatively parsimonious model, with only one spatially varying parameter, competes well against statistical models of higher complexity. The last part of this thesis (Chapter 4) evaluates the prediction skill of operational forecasts on a subseasonal (S2S) time scale. Good forecasts of extreme precipitation are crucial for warnings and the subsequent mitigation of natural hazard impacts. The skill of extreme precipitation forecasts is assessed over Europe in the S2S forecast model produced by the European Centre for Medium-Range Weather Forecasts. ERA-5 precipitation is used as a reference. Extreme events are defined as daily precipitation exceeding the 95th seasonal percentile. The precipitation data is transformed into a binary dataset (threshold exceedance vs. no threshold exceedance). The percentiles are calculated independently for the forecast and the reference dataset: the direct comparison of dataset-specific quantiles removes potential biases in the data. The Brier score is computed as a reference metric to quantify the skill of the forecast model.
In addition to the Brier score, a binary loss function is used to focus the verification on the occurrence of the extreme, discarding the days when the daily precipitation is not extreme in both the forecast and the verification datasets. A daily and local verification of extremes is conducted; the analysis is then extended by aggregating the data in space and time. Results consistently show higher skill in winter compared to summer. Portugal, Norway, and the south of the Alps are generally the regions with the highest skill. The Mediterranean region also presents relatively good skill in winter. Spatial and temporal aggregation increases the skill. Each part of this thesis provides methods to model and evaluate precipitation extremes. The outcome of Chapter 2 is an evaluation of ERA-5 precipitation; Europe is found to be a region of good performance in this dataset. ERA-5 is therefore used to apply the regionalized estimation of return levels developed in Chapter 3. Furthermore, the reanalysis dataset is used as a reference for the estimation of the S2S forecast skill for precipitation extremes in Chapter 4. The appendix contains the additional articles in which I was involved during my PhD project, as a lead author or as a coauthor.
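The binary verification idea recurs throughout the thesis: extremes are defined as exceedances of a dataset-specific percentile, so each dataset is thresholded against its own quantile before comparing occurrences. A minimal sketch of that step, computing a hit rate and a Brier score for deterministic binary exceedances (function and variable names are illustrative; the thesis' Brier score is applied to probabilistic ensemble forecasts):

```python
import numpy as np

def extreme_verification(forecast, reference, q=0.95):
    """Threshold each dataset at its own q-th percentile (removing mean
    bias between datasets), then score the co-occurrence of extremes
    with a hit rate and a Brier score on the binary events."""
    f_ext = forecast >= np.quantile(forecast, q)    # forecast extremes
    r_ext = reference >= np.quantile(reference, q)  # observed extremes
    hits = np.sum(f_ext & r_ext)                    # joint exceedances
    hit_rate = hits / max(np.sum(r_ext), 1)         # fraction of observed extremes hit
    brier = np.mean((f_ext.astype(float) - r_ext.astype(float)) ** 2)
    return hit_rate, brier
```

With probabilistic forecasts, `f_ext` would be replaced by the ensemble fraction exceeding the threshold, and the hit rate would be computed per rarity level, reproducing the decrease in hit rate with increasing rarity reported in Chapter 2.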