12 research outputs found

    Building Infrastructure for Preservation and Publication of Earthquake Engineering Research Data

    The objective of this paper is to showcase the progress of the earthquake engineering community during a decade-long effort supported by the National Science Foundation in the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES). During the four years that NEES network operations have been headquartered at Purdue University, the NEEScomm management team has facilitated an unprecedented cultural change in the ways research is performed in earthquake engineering. NEES has not only played a major role in advancing the cyberinfrastructure required for transformative engineering research, but NEES research outcomes are also making an impact by contributing to safer structures throughout the USA and abroad. This paper reflects on some of the developments and initiatives that helped instil change in the ways that the earthquake engineering and tsunami community share and reuse data and collaborate in general.

    Exploring data practices of the earthquake engineering community

    There is a need to compare and contrast the data practices of different disciplines and groups. This study explores data practices in earthquake engineering (EE), an interdisciplinary field with a variety of research activities and dynamic data types and forms. Findings identify the activities of typical EE research projects, the types and forms of data produced and used in those activities, the project roles played by EE researchers in connection with data practices, the tools used to manage data in those activities, the types and sources of data quality problems in EE, and the perceptions of data quality in EE. A strong relation exists among these factors, with a stronger role for test specimens and high-quality documentation, and more blurring of project roles, than in other fields. Suggestions are provided for resolving contradictions impeding EE researchers’ curation and archiving activities and for future research on data practices.

    Report of the 2014 NSF Cybersecurity Summit for Large Facilities and Cyberinfrastructure

    This event was supported in part by the National Science Foundation under Grant Number 1234408. Any opinions, findings, and conclusions or recommendations expressed at the event or in this report are those of the authors and do not necessarily reflect the views of the National Science Foundation.

    Evaluation of Strength Reduction Factor for Concentrically Braced Frames Based on Nonlinear Single Degree-of-Freedom Systems

    The Strength Reduction Factor (R-Factor), often referred to as the Response Modification Factor, is commonly used in the design of lateral force resisting systems under seismic loading. R-Factors allow for a reduction in design base shear demands, leading to more economical designs. The reduction in strength is compensated for by ductile behavior in properly detailed members. Modern seismic codes and provisions recommend R-Factors for many types of lateral force resisting systems. However, these factors are independent of the system's fundamental frequency and many other important system properties, and may therefore produce an unfavorable seismic response. To evaluate the validity of prescribed R-Factors, an extensive analytical parameter study was conducted using a FEM single degree-of-freedom Concentrically Braced Frame (CBF) subjected to incremental dynamic analysis over a suite of ground motions. Parameters of the study include brace slenderness, fundamental frequency, increment resolution, FEM mesh refinement, effects of leaning columns, and effects of low-cycle fatigue. Results suggest that the R-Factor can vary drastically for CBF systems with differing properties.
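
    As an illustration of the technique this abstract describes, the sketch below estimates R-Factors for a bilinear single degree-of-freedom oscillator: it integrates the equation of motion with the central difference method, finds the peak elastic force demand, then reduces the yield strength by trial R values and reports the resulting ductility demand, in the spirit of an incremental dynamic analysis. This is a minimal sketch, not the paper's FEM CBF model; the period, damping, excitation, and all other values are illustrative placeholders.

        # Hedged sketch: R-Factor study on an elastoplastic SDOF oscillator.
        # All parameters are illustrative, not the paper's FEM CBF model.
        import numpy as np

        def sdof_peak(ag, dt, T=0.5, zeta=0.05, fy=np.inf):
            """Central-difference integration of an elastoplastic SDOF system.
            fy is the yield force per unit mass; np.inf gives the elastic case.
            Returns (peak |displacement|, stiffness)."""
            m = 1.0
            k = m * (2.0 * np.pi / T) ** 2            # stiffness from period
            c = 2.0 * zeta * np.sqrt(k * m)           # viscous damping
            u = np.zeros(len(ag))
            up = 0.0                                  # plastic displacement offset
            u_m1 = -ag[0] * dt ** 2 / 2.0             # start-up value (u0 = v0 = 0)
            denom = m / dt ** 2 + c / (2.0 * dt)
            for i in range(len(ag) - 1):
                fs = k * (u[i] - up)                  # elastoplastic restoring force
                if abs(fs) > fy:
                    fs = np.sign(fs) * fy
                    up = u[i] - fs / k                # update plastic offset
                u_prev = u[i - 1] if i > 0 else u_m1
                p = (-m * ag[i] - fs + (2.0 * m / dt ** 2) * u[i]
                     - (m / dt ** 2 - c / (2.0 * dt)) * u_prev)
                u[i + 1] = p / denom
            return np.max(np.abs(u)), k

        # Illustrative excitation: a decaying sine standing in for a record.
        dt = 0.005
        t = np.arange(0.0, 20.0, dt)
        ag = 3.0 * np.exp(-0.2 * t) * np.sin(2.0 * np.pi * 1.5 * t)

        u_el, k = sdof_peak(ag, dt)                   # elastic analysis
        f_el = k * u_el                               # peak elastic force demand
        for R in (2.0, 4.0, 6.0):
            fy = f_el / R                             # strength reduced by R
            u_pk, _ = sdof_peak(ag, dt, fy=fy)
            mu = u_pk / (fy / k)                      # displacement ductility demand
            print(f"R = {R:.1f} -> ductility demand mu = {mu:.2f}")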

    2nd EFAST Workshop, Reliable Testing of Seismic Performance

    The EFAST project consisted of a design study for a new major seismic testing facility in Europe, comparable with the important testing installations now operating or under construction in Japan, the USA, China, and Taiwan. The presentations by invited experts during the 2nd EFAST Workshop, held at the end of the project, emphasized the basic idea that experiments remain necessary because reliable engineering still cannot rely solely on numerical predictions. The relation between experimental research and the improvements in building codes over recent decades also suggests that a consistent experimental activity is fundamental for properly understanding and predicting the real behaviour of complex structural elements. Today in many fields, as in the assessment of nuclear facilities for example, more reliability is required in order to increase safety, giving new impulse to experimental testing of components, subsystems, soil-structure interaction effects, and so on. The necessity and characteristics of the available testing methods were reviewed with up-to-date examples and studies on aspects such as shaking table, pseudo-dynamic, and hybrid testing methods, centrifuge facilities, scale models, soil-structure interaction, control strategies, and performance. Within the EFAST design study as presented, several solutions are proposed for the future experimental facility, among which the reference one is a laboratory composed mainly of a high performance shaking table array and a reaction structure where both traditional (pseudo-static/dynamic) and innovative testing techniques (e.g. real-time hybrid testing) can be applied and combined. These shaking tables can be moved in the trench and can also be rigidly coupled together if necessary. A large SDOF shaking table for geotechnical studies is also foreseen in this solution. The discussion of the different solutions covered aspects such as costs (including safety, maintenance, and operation), demand for experiments, flexibility, and performance, among others.
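
    The pseudo-dynamic and real-time hybrid testing methods reviewed at the workshop share a simple closed loop: a numerical integrator computes the next displacement, the physical specimen is driven to it, and the measured restoring force is fed back into the equation of motion. The sketch below illustrates that loop with the classic explicit Newmark scheme and a software stub standing in for the specimen; every name and number here is illustrative, not any facility's actual control software.

        # Hedged sketch of a pseudo-dynamic / hybrid test loop: the
        # "specimen" is a software stub (a mildly degrading spring); in a
        # laboratory it would be an actuator plus load cell.
        import numpy as np

        class SpecimenStub:
            """Physical substructure stand-in: commanded displacement in,
            'measured' restoring force out."""
            def __init__(self, k=600.0):
                self.k = k
            def measure(self, u):
                self.k *= 0.99995            # mild stiffness degradation
                return self.k * u

        def pseudo_dynamic_test(ag, dt, m=1.0, c=1.5):
            """Explicit Newmark (beta = 0, gamma = 1/2) integration loop."""
            spec = SpecimenStub()
            u = np.zeros(len(ag))
            v = 0.0
            a = -ag[0]                       # initial acceleration (u0 = v0 = 0)
            for i in range(len(ag) - 1):
                u[i + 1] = u[i] + dt * v + 0.5 * dt ** 2 * a   # predictor
                fr = spec.measure(u[i + 1])  # drive the specimen, read back force
                p = -m * ag[i + 1]           # effective earthquake load
                a_new = (p - fr - c * (v + 0.5 * dt * a)) / (m + 0.5 * dt * c)
                v += 0.5 * dt * (a + a_new)  # velocity corrector
                a = a_new
            return u

        dt = 0.002
        t = np.arange(0.0, 10.0, dt)
        ag = 2.0 * np.exp(-0.3 * t) * np.sin(2.0 * np.pi * 2.0 * t)
        u = pseudo_dynamic_test(ag, dt)
        print(f"peak commanded displacement: {np.abs(u).max():.4f} m")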

    Managing the Unmanageable: How IS Research Can Contribute to the Scholarship of Cyber Projects

    Cyber projects are large-scale efforts to implement computer, information, and communication technologies in scientific communities. These projects seek to build scientific cyberinfrastructure that will promote new scientific collaborations and transform science in novel and unimagined ways. Their scope and complexity, the number and diversity of stakeholders, and their transformational goals make cyber projects extremely challenging to understand and manage. Consequently, scholars from multiple disciplines, including computer science, information science, sociology, and information systems, have begun to study cyber projects and their impacts. As IS scholars, our goal is to contribute to this growing body of interdisciplinary knowledge by considering three areas of IS research that are particularly germane to this class of project, given its characteristics: development approaches, conflict, and success factors. After describing cyber projects, we explore how IS research findings in these three areas are relevant for cyber projects and suggest promising avenues of future research. We conclude by discussing the importance and unique challenges of cyber projects and propose that, given our expertise and knowledge of project management, IS researchers are particularly well suited to contribute to the interdisciplinary study of these projects.

    A computational framework for data-driven infrastructure engineering using advanced statistical learning, prediction, and curing

    Over the past few decades, in most science and engineering fields, data-driven research has become a promising next-generation research paradigm due to noticeable advances in computing power and the accumulation of valuable databases. Despite this accomplishment, the leveraging of these databases is still in its infancy. To address this issue, this dissertation investigates the following studies that use advanced statistical methods. The first study aims to develop a computational framework for collecting and transforming data obtained from heterogeneous databases in the Federal Aviation Administration and to build a flexible predictive model using a generalized additive model (GAM) to predict runway incursions (RI) over 15 years at the top 36 major US airports. Results show that GAM is a powerful method for RI prediction with high prediction accuracy. A direct search for finding the best predictor variables appears superior to the variable selection approach based on principal component analysis. The prediction power of GAM turns out to be comparable to that of an artificial neural network (ANN). The second study builds an accurate predictive model based on earthquake engineering databases. As in the previous study, GAM is adopted as the predictive model. The results show promising predictive power of GAM applied to existing reinforced concrete shear wall databases. The primary objective of the third study is to suggest an efficient predictor variable selection method and to provide the relative importance among predictor variables using field survey pavement and simulated airport pavement data. Results show that the direct search method always finds the best predictor model, but the method takes a long time depending on the size of the data and the dimensions of the variables. The results also show that not all variables are necessary for the best prediction and identify the relative importance of the variables selected for the GAM model. The fourth study deals with the impact of fractional hot-deck imputation (FHDI) on statistical and machine learning prediction using practical engineering databases. Multiple response rates and internal parameters (i.e., category number and donor number) are investigated regarding the behavior and impact of FHDI on prediction models. GAM, ANN, support vector machines, and extremely randomized trees are adopted as predictive models. Results show that FHDI has a positive impact on prediction for engineering-based databases. Optimal internal parameters are also suggested to achieve better prediction accuracy. The last study aims to offer a systematic computational framework, including data collection, transformation, and squashing, to develop a prediction model for the structural behavior of a target bridge. Missing values in the bridge data are cured using the FHDI method to avoid inaccurate data analysis due to bias and sparseness of the data. Results show that the application of FHDI improves prediction performance. This dissertation is expected to provide a notable computational framework for data processing, suggest a seamless data curing method, and offer an advanced statistical predictive model based on multiple projects. This novel research approach will help researchers to investigate their databases with a better understanding and build statistical models with high accuracy according to their knowledge of the data.
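
    As an illustration of the modeling step that recurs across these studies, the sketch below fits a GAM with one smooth term per predictor and evaluates it on held-out data, using the third-party pygam library on synthetic data. The features are placeholders, not the actual FAA, shear wall, pavement, or bridge variables, and the smoothing grid search only loosely mirrors the dissertation's direct search over predictors.

        # Hedged sketch: fit a GAM to synthetic data and score it on a
        # held-out split. The three features stand in for database fields
        # (e.g., traffic volume, wall geometry); none are the study's own.
        import numpy as np
        from pygam import LinearGAM, s
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 500
        X = rng.uniform(0.0, 1.0, size=(n, 3))         # three predictors
        y = (np.sin(4.0 * X[:, 0])                     # smooth nonlinearity
             + 2.0 * (X[:, 1] - 0.5) ** 2
             + 0.5 * X[:, 2]
             + rng.normal(0.0, 0.1, n))                # observation noise

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # One smooth term per predictor; the smoothing penalty is chosen
        # by pygam's built-in grid search.
        gam = LinearGAM(s(0) + s(1) + s(2)).gridsearch(X_tr, y_tr)

        pred = gam.predict(X_te)
        rmse = np.sqrt(np.mean((pred - y_te) ** 2))
        print(f"held-out RMSE: {rmse:.3f}")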

    Enabling global experiments with interactive reconfiguration and steering by multiple users

    In global scientific experiments with collaborative scenarios involving multinational teams, there are big challenges related to data access: data movement to other regions or clouds is often precluded by latency costs, data privacy, and data ownership constraints. Furthermore, each site processes local data sets using specialized algorithms, producing intermediate results that serve as inputs to applications running on remote sites. This paper shows how to model such collaborative scenarios as a scientific workflow implemented with AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic), a decentralized framework offering a feasible solution for running the workflow activities on distributed data centers in different regions without the need for large data movements. The AWARD workflow activities are independently monitored and can be dynamically reconfigured and steered by different users, namely by hot-swapping the algorithms to enhance the computation results or by changing the workflow structure to support feedback dependencies, where an activity receives feedback output from a successor activity. A real implementation of one practical scenario and its execution on multiple data centers of the Amazon Cloud is presented, including experimental results with steering by multiple users.
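
    The abstract does not show AWARD's API, so the sketch below only illustrates the hot-swapping idea in generic Python: a long-running workflow activity drains a control queue between items, so a user can replace its processing algorithm mid-run without stopping the activity. All class and function names here are hypothetical, not AWARD's actual interface.

        # Hedged sketch of algorithm hot-swapping in a workflow activity.
        # Hypothetical names; this is not AWARD's API.
        import queue
        import threading

        class Activity:
            def __init__(self, algorithm):
                self.algorithm = algorithm   # current processing function
                self.control = queue.Queue() # steering commands from users
                self.inbox = queue.Queue()   # input tokens from predecessors
                self.outbox = queue.Queue()  # results for successor activities

            def swap_algorithm(self, new_algorithm):
                """Steering entry point: queue an algorithm replacement."""
                self.control.put(new_algorithm)

            def run(self, n_items):
                for _ in range(n_items):
                    # apply any pending reconfiguration before the next item
                    while not self.control.empty():
                        self.algorithm = self.control.get()
                    item = self.inbox.get()
                    self.outbox.put(self.algorithm(item))

        # Two interchangeable algorithms for the same activity.
        def coarse(x):
            return round(x, 1)

        def fine(x):
            return round(x, 4)

        act = Activity(coarse)
        for v in (1.23456, 2.34567, 3.45678, 4.56789):
            act.inbox.put(v)

        worker = threading.Thread(target=act.run, args=(4,))
        worker.start()
        act.swap_algorithm(fine)             # a user steers the activity mid-run
        worker.join()
        while not act.outbox.empty():
            print(act.outbox.get())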