120,930 research outputs found

    On systematic approaches for interpreted information transfer of inspection data from bridge models to structural analysis

    In conjunction with improved methods for monitoring damage and degradation processes, interest in the reliability assessment of reinforced concrete bridges has been increasing in recent years. Automated image-based inspections of the structural surface provide valuable data from which quantitative information about deteriorations, such as crack patterns, can be extracted. However, the knowledge gain results from processing this information in a structural context, i.e. relating the damage artifacts to building components. This enables the transfer to structural analysis. This approach sets two further requirements: availability of structural bridge information and standardized storage for interoperability with subsequent analysis tools. Since the large datasets involved can only be processed efficiently in an automated manner, this work targets the implementation of the complete workflow from damage and building data to structural analysis. First, domain concepts are derived from the back-end tasks: structural analysis, damage modeling, and life-cycle assessment. The common interoperability format, the Industry Foundation Classes (IFC), and the processes in these domains are then assessed. The need for user-controlled interpretation steps is identified, and the developed prototype therefore allows interaction at subsequent model stages. This has the advantage that interpretation steps can be assigned individually to a structural analysis model, a damage information model, or a combination of both. This approach to damage information processing from the perspective of structural analysis is then validated in different case studies.
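
    The component-level linkage described in this abstract lends itself to a simple data structure. Below is a minimal Python sketch of such a damage information record, assuming inspection results are keyed to IFC component identifiers (GlobalId); the class names, fields, and grouping helper are hypothetical illustrations, not the authors' data model.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class DamageRecord:
        """One deterioration artifact extracted from an image-based inspection."""
        component_guid: str   # IFC GlobalId of the affected building component (assumed key)
        damage_type: str      # e.g. "crack", "spalling"
        width_mm: float       # quantitative measure taken from the surface image
        location_uv: tuple    # position on the component surface (parametric coordinates)

    def group_by_component(records):
        """Relate damage artifacts to building components for structural analysis."""
        grouped = defaultdict(list)
        for r in records:
            grouped[r.component_guid].append(r)
        return grouped

    # Hypothetical inspection output for two girders of a bridge model.
    records = [
        DamageRecord("1xGirderGuidA", "crack", 0.3, (0.42, 0.10)),
        DamageRecord("1xGirderGuidA", "crack", 0.5, (0.45, 0.12)),
        DamageRecord("1xGirderGuidB", "spalling", 0.0, (0.10, 0.80)),
    ]
    for guid, artifacts in group_by_component(records).items():
        print(guid, "->", len(artifacts), "damage artifact(s)")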

    Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)

    Debris flows are among the most hazardous phenomena in mountain areas. To cope with debris flow hazard, it is common to delineate the risk-prone areas through routing models. The most important input to debris flow routing models is the topographic data, usually in the form of Digital Elevation Models (DEMs). The quality of DEMs depends on the accuracy, density, and spatial distribution of the sampled points; on the characteristics of the surface; and on the applied gridding methodology. Therefore, the choice of the interpolation method affects the realistic representation of the channel and fan morphology, and thus potentially the outcomes of debris flow routing modeling. In this paper, we initially investigate the performance of common interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor, Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging) in building DEMs of the complex topography of a debris flow channel located in the Venetian Dolomites (North-eastern Italian Alps), using small-footprint full-waveform Light Detection And Ranging (LiDAR) data. The investigation is carried out through a combination of statistical analysis of vertical accuracy, algorithm robustness, spatial clustering of vertical errors, and multi-criteria shape reliability assessment. After that, we examine the influence of the tested interpolation algorithms on the performance of a Geographic Information System (GIS)-based cell model for simulating stony debris flow routing. In detail, we investigate both the correlation between the uncertainty in DEM heights resulting from the gridding procedure and that in the corresponding simulated erosion/deposition depths, and the effect of the interpolation algorithms on simulated areas, erosion and deposition volumes, solid-liquid discharges, and channel morphology after the event. The comparison among the tested interpolation methods highlights that the ANUDEM and ordinary kriging algorithms are not suitable for building DEMs of complex topography. Conversely, linear triangulation, the natural neighbor algorithm, and the thin-plate spline plus tension and completely regularized spline functions ensure the best trade-off between accuracy and shape reliability. Nevertheless, the evaluation of the effects of gridding techniques on debris flow routing modeling reveals that the choice of the interpolation algorithm does not significantly affect the model outcomes.
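
    A compact way to reproduce the kind of comparison this abstract describes is to hold out a subset of the LiDAR ground points and score each gridding method by the vertical RMSE on the held-out points. The Python sketch below does this for a few SciPy stand-ins (linear triangulation, nearest neighbor, cubic, and a thin-plate radial basis function); the synthetic terrain and all parameter values are illustrative assumptions, not the study's data or its full set of interpolators.

    import numpy as np
    from scipy.interpolate import griddata, Rbf

    rng = np.random.default_rng(0)
    # Hypothetical LiDAR ground points (x, y, elevation); real data would come from the survey.
    pts = rng.uniform(0, 100, size=(2000, 2))
    z = np.sin(pts[:, 0] / 15.0) * 5.0 + 0.05 * pts[:, 1] + rng.normal(0, 0.1, 2000)

    # Hold out 10% of the points to estimate the vertical accuracy of each gridding method.
    test = rng.choice(2000, 200, replace=False)
    mask = np.ones(2000, dtype=bool)
    mask[test] = False
    train_pts, train_z = pts[mask], z[mask]
    test_pts, test_z = pts[test], z[test]

    def rmse(pred, ref):
        ok = ~np.isnan(pred)            # ignore held-out points outside the convex hull
        return float(np.sqrt(np.mean((pred[ok] - ref[ok]) ** 2)))

    for method in ("linear", "nearest", "cubic"):
        pred = griddata(train_pts, train_z, test_pts, method=method)
        print(f"{method:10s} RMSE = {rmse(pred, test_z):.3f} m")

    # Thin-plate spline as a stand-in for the radial basis function interpolators.
    tps = Rbf(train_pts[:, 0], train_pts[:, 1], train_z, function="thin_plate", smooth=0.1)
    print(f"thin-plate RMSE = {rmse(tps(test_pts[:, 0], test_pts[:, 1]), test_z):.3f} m")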

    Transistor-Level Synthesis of Pipeline Analog-to-Digital Converters Using a Design-Space Reduction Algorithm

    A novel transistor-level synthesis procedure for pipeline ADCs is presented. This procedure is able to directly map high-level converter specifications onto transistor sizes and biasing conditions. It is based on the combination of behavioral models for performance evaluation, optimization routines to minimize the power and area consumption of the circuit solution, and an algorithm to efficiently constrain the converter design space. This algorithm avoids the cost of lengthy bottom-up verifications and speeds up the synthesis task. The approach is herein demonstrated via the design of a 0.13 μm CMOS 10 bits@60 MS/s pipeline ADC with an energy consumption per conversion of only 0.54 pJ@1 MHz, making it one of the most energy-efficient 10-bit video-rate pipeline ADCs reported to date. The computational cost of this design is only 25 min of CPU time, including the evaluation of 13 different pipeline architectures potentially feasible for the targeted specifications. The optimum design derived from the synthesis procedure has been fine-tuned to support PVT variations, laid out together with other auxiliary blocks, and fabricated. The experimental results show a power consumption of 23 [email protected] V and an effective resolution of 9.47-bit@1 MHz. Bearing in mind that no specific power reduction strategy has been applied, these results confirm the reliability of the proposed approach. Ministerio de Ciencia e Innovación TEC2009-08447; Junta de Andalucía TIC-0281
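
    The abstract does not detail the design-space reduction algorithm, but its flavor can be conveyed with a toy enumeration: list candidate pipeline stage-resolution sequences that meet the target resolution, prune the rest, and rank the survivors with a cheap behavioral cost model. Everything below (stage options, effective-bit bookkeeping, the power estimate) is a hypothetical stand-in for the paper's behavioral models and optimizer, not the published method.

    from itertools import product

    TARGET_BITS = 10
    # Hypothetical per-stage options: sub-ADC resolutions with redundancy,
    # each resolving (b - 0.5) effective bits; a ~2-bit flash closes the pipeline.
    STAGE_BITS = (1.5, 2.5, 3.5)

    def candidate_architectures(target=TARGET_BITS, max_stages=8):
        """Enumerate stage-resolution sequences whose total meets the target."""
        for n in range(2, max_stages + 1):
            for combo in product(STAGE_BITS, repeat=n):
                if abs(sum(b - 0.5 for b in combo) + 2 - target) < 0.25:
                    yield combo

    def estimated_power(arch):
        # Placeholder behavioral cost model: earlier stages need more accuracy,
        # so they are weighted more heavily. A real flow would call the behavioral simulator.
        return sum((len(arch) - i) * 2 ** b for i, b in enumerate(arch))

    feasible = list(candidate_architectures())
    best = min(feasible, key=estimated_power)
    print(len(feasible), "candidate pipelines; lowest-cost stage split:", best)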

    A survey of carbon nanotube interconnects for energy efficient integrated circuits

    This article is a review of state-of-the-art carbon nanotube interconnects for silicon applications with respect to the recent literature. Among all the research on carbon nanotube interconnects, the topics discussed here cover 1) challenges with current copper interconnects, 2) process and growth of carbon nanotube interconnects compatible with back-end-of-line integration, and 3) modeling and simulation for circuit-level benchmarking and performance prediction. The focus is on the evolution of carbon nanotube interconnects from process, theoretical modeling, and experimental characterization to on-chip interconnect applications. We provide an overview of the current advancements in carbon nanotube interconnects as well as the prospects for designing energy-efficient integrated circuits. Each selected category is presented in an accessible manner, aiming to serve as a survey and informative cornerstone on carbon nanotube interconnects relevant to students and scientists from fields ranging from physics and processing to circuit design.
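
    For readers coming from circuit design, the benchmarking mentioned in item 3 typically starts from the standard low-bias resistance model of a carbon nanotube: a quantum/contact term of roughly h/4e² ≈ 6.45 kΩ per metallic single-walled tube plus a scattering term that grows with length over the mean free path. The sketch below compares a CNT bundle with a copper wire using that model; the geometry, tube count, metallic fraction, and the size-effect-inflated copper resistivity are illustrative assumptions only.

    # Minimal sketch of the widely used low-bias CNT resistance model, with
    # illustrative (not authoritative) numbers for a copper comparison.
    H_OVER_4E2 = 6.45e3      # ohms, quantum resistance of a metallic SWCNT (2 channels)
    MFP = 1.0e-6             # m, assumed electron mean free path in a CNT

    def swcnt_resistance(length_m, mfp=MFP):
        """Single metallic SWCNT: contact/quantum term plus length-dependent scattering term."""
        return H_OVER_4E2 * (1.0 + length_m / mfp)

    def bundle_resistance(length_m, n_tubes, metallic_fraction=1/3):
        """Parallel combination of the metallic tubes in a bundle."""
        return swcnt_resistance(length_m) / (n_tubes * metallic_fraction)

    def copper_resistance(length_m, width_m, height_m, rho=5e-8):
        """Copper wire; rho is an assumed size-effect-inflated resistivity, not the bulk value."""
        return rho * length_m / (width_m * height_m)

    L = 10e-6  # 10 um local interconnect
    print("CNT bundle :", round(bundle_resistance(L, n_tubes=200)), "ohm")
    print("Cu wire    :", round(copper_resistance(L, 20e-9, 40e-9)), "ohm")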

    Agent and cyber-physical system based self-organizing and self-adaptive intelligent shopfloor

    The increasing demand for customized production poses huge challenges to traditional manufacturing systems. In order to allocate resources in a timely manner according to production requirements and to reduce disturbances, a framework for the future intelligent shopfloor is proposed in this paper. The framework consists of three primary models, namely the smart machine agent model, the self-organizing model, and the self-adaptive model. A cyber-physical system for the manufacturing shopfloor, based on multi-agent technology, is developed to realize the above-mentioned function models. Gray relational analysis and hierarchy conflict resolution methods are applied to achieve the self-organizing and self-adaptive capabilities, thereby improving the reconfigurability and responsiveness of the shopfloor. A prototype system is developed, which has adequate flexibility and robustness to configure resources and to deal with disturbances effectively. This research provides a feasible method for designing an autonomous factory with exception-handling capabilities.
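
    Gray relational analysis, mentioned in this abstract as the basis of the self-organizing capability, essentially scores each candidate resource by its closeness to an ideal reference sequence across several criteria. The Python sketch below shows one common formulation (normalization, relational coefficients with distinguishing coefficient rho = 0.5, and the mean grade); the criteria and machine data are hypothetical, and the hierarchy conflict resolution step is not shown.

    import numpy as np

    def gray_relational_grades(candidates, larger_is_better, rho=0.5):
        """Rank candidate machines with gray relational analysis.

        candidates: (n_machines, n_criteria) array, e.g. [speed, quality, capacity, cost].
        larger_is_better: one boolean per criterion.
        """
        x = np.asarray(candidates, dtype=float)
        # Normalize each criterion to [0, 1], flipping 'smaller is better' columns.
        lo, hi = x.min(axis=0), x.max(axis=0)
        norm = (x - lo) / np.where(hi > lo, hi - lo, 1.0)
        norm = np.where(larger_is_better, norm, 1.0 - norm)
        # Distance to the ideal reference sequence (all ones after normalization).
        delta = np.abs(1.0 - norm)
        coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        return coeff.mean(axis=1)

    # Hypothetical criteria: processing speed, quality score, remaining capacity, cost.
    machines = [[120, 0.92, 5, 30], [100, 0.97, 2, 25], [140, 0.88, 8, 40]]
    grades = gray_relational_grades(machines, larger_is_better=[True, True, True, False])
    print("grades:", grades.round(3), "-> allocate task to machine", int(grades.argmax()))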

    Systems cost performance analysis (study 2.3). Volume 1: Executive summary

    A methodology was developed for producing balanced spacecraft subsystem designs that interrelate cost, performance, safety, and schedule considerations. The methodology consists of a two-step process: selecting all hardware designs that satisfy the given performance and safety requirements, and estimating the cost and schedule required to design, build, and operate each spacecraft design. Aggregate equations were written to describe the performance, safety (reliability), cost, and schedule for one type of stabilization and control subsystem in terms of the equipment used. The methodology was applied to unmanned, automated spacecraft subsystems, and the resulting model was implemented as a digital computer program.
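
    The two-step process described above can be illustrated with a toy version of such aggregate equations: series reliability and a weakest-element performance figure screen candidate equipment combinations, and cost and schedule are then aggregated for the survivors. All equipment entries, numbers, and relations below are hypothetical placeholders, not the study's actual model.

    from dataclasses import dataclass
    from itertools import product

    @dataclass
    class Equipment:
        name: str
        performance: float   # e.g. pointing-accuracy figure of merit (0..1)
        reliability: float   # probability of surviving the mission
        cost: float          # $M to design and build
        schedule: float      # months to deliver

    # Hypothetical catalog of stabilization-and-control equipment options.
    SENSORS = [Equipment("star_tracker", 0.95, 0.98, 4.0, 18),
               Equipment("sun_sensor",   0.70, 0.995, 1.0, 8)]
    ACTUATORS = [Equipment("reaction_wheels", 0.90, 0.97, 3.0, 14),
                 Equipment("thrusters_only",  0.60, 0.99, 1.5, 10)]

    def feasible_designs(perf_req, rel_req):
        """Step 1: keep every sensor/actuator pairing that meets the requirements."""
        for s, a in product(SENSORS, ACTUATORS):
            perf = min(s.performance, a.performance)   # weakest element limits performance
            rel = s.reliability * a.reliability        # series reliability
            if perf >= perf_req and rel >= rel_req:
                yield (s, a)

    def cost_and_schedule(sensor, actuator):
        """Step 2: aggregate cost and schedule for the surviving designs."""
        return sensor.cost + actuator.cost, max(sensor.schedule, actuator.schedule)

    for s, a in feasible_designs(perf_req=0.8, rel_req=0.94):
        print(s.name, "+", a.name, "-> cost/schedule:", cost_and_schedule(s, a))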

    System cost performance analysis (study 2.3). Volume 1: Executive summary

    A study is described that was initiated to identify and quantify the interrelationships between and within the performance, safety, cost, and schedule parameters for unmanned, automated payload programs. The result of the investigation was a systems cost/performance model, implemented as a digital computer program, that can be used to perform initial program planning, cost/performance trade-offs, and sensitivity analyses for mission model and advanced payload studies. Program objectives and results are described briefly.
