71 research outputs found

    HBIM application in historic timber structures: A systematic review

    Despite the recent significant increase in the use of Building Information Modelling (BIM) in the cultural heritage field, its application to heritage timber structures for their conservation and assessment has not yet been fully established. Compared with other construction materials, timber presents singular features that must be addressed in order to carry out a proper condition assessment. For this reason, this review summarizes existing work on historical timber structures using Historical BIM (HBIM), focusing not only on the various geometric surveying and 3D modelling methods, but also on the non-geometric information included in the model, which is especially related to conservation, testing, and monitoring. In addition, this work illustrates the gain in effectiveness of structural analysis, used to assess structural health, when it is implemented within an HBIM-based framework. To that aim, a global framework is proposed in which the development and implementation level of the different analysis stages are described.

    This work has been supported by Xunta de Galicia through grant GRC-ED431C 2020/01. This work was partly financed by FCT/MCTES through national funds (PIDDAC) under the R&D Unit Institute for Sustainability and Innovation in Structural Engineering (ISISE), under reference UIDB/04029/2020. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 769255. The sole responsibility for the content of this publication lies with the authors. It does not necessarily reflect the opinion of the European Union. Neither the Innovation and Networks Executive Agency (INEA) nor the European Commission is responsible for any use that may be made of the information contained therein.

    Optical characterisation of oxidised carbon nanohorn nanofluids for direct solar energy absorption applications

    Carbon nanohorn and oxidised carbon nanohorn-based nanofluids were characterised, taking their analysis one step forward. The advantage of studying oxidised carbon nanohorns is that they are surfactant-free. The stability of both nanofluids was checked 3 months after preparation and at high temperature, conditions that come closer to real applications. Two different dynamic light scattering (DLS) systems were used to measure stability at high temperature, and their results were compared. A thorough optical analysis was performed: an integrating sphere was attached to the classic spectrophotometer to determine the scattering of the nanofluids. After obtaining the experimental values of the optical parameters for both nanofluids, Kubelka-Munk theory was applied to obtain the optical coefficients. Finally, the scattering albedo was calculated to facilitate comparisons with the literature. Studying both nanofluid types provided new knowledge about their potential use as direct solar absorbers in solar thermal collectors.
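The Kubelka-Munk step mentioned above maps measured diffuse reflectance to the ratio of absorption (K) to scattering (S) coefficients, and the single-scattering albedo compares the scattering contribution with total extinction. A minimal sketch of these two standard relations (the function names are ours, not the authors'):

```python
def kubelka_munk(reflectance):
    """Kubelka-Munk function F(R) = (1 - R)**2 / (2 * R) = K / S,
    with R the diffuse reflectance of an optically thick sample."""
    if not 0.0 < reflectance <= 1.0:
        raise ValueError("reflectance must be in (0, 1]")
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def scattering_albedo(absorption, scattering):
    """Single-scattering albedo: fraction of extinction due to scattering."""
    return scattering / (absorption + scattering)
```

A perfectly reflecting sample (R = 1) gives F(R) = 0, i.e. no absorption relative to scattering, which is a quick sanity check for the implementation.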

    Automatic Extraction of Road Points from Airborne LiDAR Based on Bidirectional Skewness Balancing

    Road extraction from Light Detection and Ranging (LiDAR) data has become a hot topic in recent years. Nevertheless, it is still challenging to perform this task in a fully automatic way. Experiments are often carried out over small datasets with a focus on urban areas, and it is unclear how these methods perform in less urbanized sites. Furthermore, some methods require the manual input of critical parameters, such as an intensity threshold. Aiming to address these issues, this paper proposes a method for the automatic extraction of road points that is suitable for different landscapes. Road points are identified using pipeline filtering based on a set of constraints defined on intensity, curvature, local density, and area. We focus especially on the intensity constraint, as it is the key factor for distinguishing between road and ground points. The optimal intensity threshold is established automatically by an improved version of the skewness balancing algorithm. Evaluation was conducted on ten study sites with different degrees of urbanization. Road points were successfully extracted in all of them, with an overall completeness of 93%, a correctness of 83%, and a quality of 78%. These results are competitive with the state of the art.

    This work has received financial support from the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2019-2022 ED431G-2019/04 and reference competitive group 2019-2021, ED431C 2018/19) and the European Regional Development Fund (ERDF), which acknowledges the CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System. This work was also supported in part by Babcock International Group PLC (Civil UAVs Initiative Fund of Xunta de Galicia) and the Ministry of Education, Culture and Sport, Government of Spain (Grant Number TIN2016-76373-P).
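The classic skewness balancing idea behind the automatic intensity threshold is simple: as long as the intensity histogram is positively skewed, the highest value is removed; the last surviving maximum becomes the threshold. The sketch below shows the basic unidirectional form only; the paper's bidirectional improvement is not reproduced, and the function names are ours:

```python
import numpy as np

def skewness(x):
    """Sample skewness: third central moment over cubed standard deviation."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3 if s > 0 else 0.0

def skewness_balancing_threshold(intensities):
    """Iteratively drop the highest intensity until the sample skewness
    is no longer positive; the remaining maximum acts as the threshold."""
    vals = np.sort(np.asarray(intensities, dtype=float))
    while vals.size > 2 and skewness(vals) > 0:
        vals = vals[:-1]          # drop the current maximum
    return vals[-1]
```

For a distribution with a long right tail of bright returns, the loop peels off the tail and the threshold settles just above the bulk of the data.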

    Characterizing zebra crossing zones using LiDAR data

    Light detection and ranging (LiDAR) scanning in urban environments produces accurate and dense three-dimensional point clouds in which the different elements of the scene can be precisely characterized. In this paper, two complementary LiDAR-based algorithms are proposed. The first is a novel profiling method robust to noise and obstacles. It accurately characterizes the curvature, the slope, the height of the sidewalks, obstacles, and defects such as potholes. It was effective for 48 of the 49 detected zebra crossings, even in the presence of pedestrians or vehicles in the crossing zone. The second is a detailed quantitative summary of the state of the zebra crossing, containing information about the location, the geometry, and the road marking. Coarse-grain statistics are more prone to obstacle-related errors and are only fully reliable for the 18 zebra crossings free from significant obstacles. However, all the anomalous statistics can be analyzed by inspecting the associated profiles. The results can help in the maintenance of urban roads; more specifically, they can be used to improve the quality and safety of pedestrian routes.

    Consellería de Cultura, Educación e Ordenación Universitaria, Grant/Award Numbers: accreditation 2019-2022 ED431G-2019/04, 2022-2024 ED431C 2022/16, ED481A-2020/231; European Regional Development Fund (ERDF); CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System; Ministry of Economy and Competitiveness, Government of Spain, Grant/Award Number: PID2019-104834GB-I00; National Department of Traffic (DGT) through the project Analysis of Indicators Big-Geodata on Urban Roads for the Dynamic Design of Safe School Roads, Grant/Award Number: SPIP2017-02340.
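A profile of the kind described can be illustrated by projecting the points near a cross-section line onto that line and summarising the heights per bin; slope, curb height, or pothole depth can then be read off the resulting curve. This is an illustrative sketch under our own naming and parameter choices, not the paper's algorithm:

```python
import numpy as np

def cross_section_profile(points, p0, p1, width=0.2, bin_size=0.1):
    """Project points lying within `width` metres of the segment p0->p1
    onto it and return (distance_along_line, median_height) per bin."""
    points = np.asarray(points, dtype=float)          # columns: x, y, z
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    length = np.hypot(*d)
    u = d / length                                    # unit direction
    rel = points[:, :2] - p0
    along = rel @ u                                   # distance along the line
    across = np.abs(rel @ np.array([-u[1], u[0]]))    # perpendicular offset
    mask = (across <= width) & (along >= 0) & (along <= length)
    bins = (along[mask] / bin_size).astype(int)
    prof = {}
    for b, z in zip(bins, points[mask, 2]):
        prof.setdefault(b, []).append(z)
    # median per bin gives some robustness to isolated noisy returns
    return sorted((b * bin_size, float(np.median(zs))) for b, zs in prof.items())
```

Using the median rather than the mean per bin is one simple way to gain robustness against isolated noisy returns, in the spirit of the robustness claims above.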

    A fast and optimal pathfinder using airborne LiDAR data

    Determining the optimal path between two points in a 3D point cloud is a problem that has been addressed in many different situations: from road planning and the determination of escape routes, to network routing and facility layout. The problem is addressed using different kinds of input information, 3D point clouds being one of the most valuable. Its main utility is to save costs, whatever the field of application. In this paper, we present a fast algorithm to determine the least-cost path in an Airborne Laser Scanning point cloud. In some situations, such as finding escape routes, computing the solution in a very short time is crucial, and few works have addressed this requirement. State-of-the-art methods are mainly based on a digital terrain model (DTM) for calculating these routes, and such methods do not reflect the topography along the edges of the graph well. Moreover, the use of a DTM leads to a significant loss of both information and precision when calculating the characteristics of possible routes between two points. In this paper, a new method that does not require a DTM and is suitable for airborne point clouds, whether classified or not, is proposed. The problem is modelled by defining a graph using the information given by a segmentation and a Voronoi tessellation of the point cloud. Performance tests show that the algorithm is able to compute the optimal path between two points by processing up to 678,820 points per second in a point cloud of 40,000,000 points covering 16 km².

    This work has received financial support from the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2019-2022 ED431G-2019/04, reference competitive group 2019-2021, ED431C 2018/19) and the European Regional Development Fund (ERDF), which acknowledges the CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System. This work was also supported by the Ministry of Economy and Competitiveness, Government of Spain (Grant No. PID2019-104834GB-I00). We also acknowledge the Centro de Supercomputación de Galicia (CESGA) for the use of their computers.
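Once the point cloud has been reduced to a weighted graph (here, nodes could stand for the segmentation/Voronoi cells of the paper, with edge weights encoding traversal cost), the least-cost path itself can be found with a standard Dijkstra search. A self-contained sketch, not the authors' implementation:

```python
import heapq

def dijkstra(graph, start, goal):
    """Least-cost path on a weighted graph {node: [(neighbour, cost), ...]}.
    Nodes might represent, e.g., centroids of point-cloud segments or
    Voronoi cells; edge costs encode distance, slope, terrain, etc."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]          # min-heap keyed by tentative cost
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nb, w in graph.get(node, ()):
            nd = d + w
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                prev[nb] = node
                heapq.heappush(pq, (nd, nb))
    if goal not in dist:
        return None, float("inf")
    path = [goal]                # reconstruct by walking predecessors
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]
```

The speed reported in the abstract comes from how cheaply the graph is built from the cloud, not from the search itself, which is the textbook part shown here.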

    Fast Ground Filtering of Airborne LiDAR Data Based on Iterative Scan-Line Spline Interpolation

    Over the last two decades, a wide range of applications has been developed from Light Detection and Ranging (LiDAR) point clouds. Most LiDAR-derived products require the distinction between ground and non-ground points. Because of this, ground filtering is one of the most studied topics in the literature, and robust methods are nowadays available. However, these methods have been designed to work with offline data and are generally not well suited for real-time scenarios. Aiming to address this issue, this paper proposes an efficient method for ground filtering of airborne LiDAR data based on scan-line processing. In our proposal, an iterative 1-D spline interpolation is performed in each scan line sequentially. The final spline knots of a scan line are taken into account for the next scan line, so that valuable 2-D information is also considered without compromising computational efficiency. Points are labelled as ground or non-ground by analysing their residuals with respect to the final spline. When tested against synthetic ground truth, the method yields a mean kappa value of 88.59% and a mean total error of 0.50%. Experiments with real data also show satisfactory results under visual inspection. Performance tests on a workstation show that the method can process up to 1 million points per second. The original implementation was ported to a low-cost development board to demonstrate its feasibility for embedded systems, where throughput was improved by using programmable-logic hardware acceleration. Analysis shows that real-time filtering is possible in a high-end board prototype, as it can process, with low energy consumption, the number of points per second that current lightweight scanners acquire.

    This work was supported by the Ministry of Education, Culture, and Sport, Government of Spain (Grant Number TIN2016-76373-P), the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2016-2019, ED431G/08, and ED431C 2018/2019), and the European Union (European Regional Development Fund, ERDF).
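The per-scan-line idea of fitting a surface through low knots and labelling points by their residuals can be sketched as follows. This is our own simplified version: `np.interp` (piecewise linear) stands in for the paper's 1-D splines, the cell size and residual tolerance are illustrative, and knot carry-over between scan lines is omitted:

```python
import numpy as np

def filter_scan_line(x, z, cell=5.0, tol=0.3, n_iter=3):
    """Label points of one scan line (x sorted along the line) as
    ground (True) or non-ground (False).  Knots start at the lowest
    point of each cell; points near the interpolated surface become
    the new knots and the surface is refitted."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    cells = (x / cell).astype(int)
    knots_x, knots_z = [], []
    for c in np.unique(cells):
        idx = np.where(cells == c)[0]
        j = idx[np.argmin(z[idx])]        # lowest point in the cell
        knots_x.append(x[j])
        knots_z.append(z[j])
    ground = np.zeros(x.size, dtype=bool)
    for _ in range(n_iter):
        surf = np.interp(x, knots_x, knots_z)
        ground = np.abs(z - surf) <= tol  # residual test against the surface
        if not ground.any():
            break
        knots_x, knots_z = x[ground], z[ground]
    return ground
```

Because each scan line is processed independently and in one dimension, the work per point is tiny, which is what makes the throughput figures quoted above plausible on embedded hardware.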

    A Developer-Friendly “Open Lidar Visualizer and Analyser” for Point Clouds With 3D Stereoscopic View

    Light detection and ranging has become a hot topic in the remote sensing field, and the development of robust point cloud processing methods is essential for the adoption of this technology. In order to understand, evaluate, and showcase these methods, it is key to visualize their outputs. Several visualization tools exist, although it is usually difficult to find one suited to a specific application. On the one hand, proprietary (closed source) projects are not flexible enough, because they cannot be modified to adapt them to particular applications. On the other hand, current open source projects lack an effortless way to create custom visualizations. For these reasons, we present Olivia, a developer-friendly open source visualization tool for point clouds. Olivia provides the backbone for any type of point cloud visualization, and it can be easily extended and tailored to meet the requirements of a specific application. It supports stereoscopic 3-D viewing, aiding both the evaluation and the presentation of processing methods. In this paper, several case studies are presented to demonstrate the usefulness of Olivia, along with its computational performance.

    Effect of ZrO2 nanoparticles on thermophysical and rheological properties of three synthetic oils

    This article presents an experimental study of some thermophysical properties (density, viscosity, and adiabatic bulk modulus) of six nanolubricants based on synthetic oils and ZrO2 nanoparticles. A two-step method with an ultrasonic disruptor was used to prepare the nanodispersions. The morphology, degree of crystallinity, and elemental composition of the nanoparticles were analyzed by electron microscopy. Visual observation, the temporal variation of the refractive index, and dynamic light scattering were used to analyze the stability of the nanolubricants and the average size of the aggregates. The presence of new interactions between nanoparticles and base oils was studied by Fourier transform infrared spectroscopy. Vibrating-tube densimeters, a rotational viscometer, and a rheometer equipped with cone-plate geometry were used within the temperature range from (278.15 to 373.15) K. The ability of some simple theoretical models to predict the densities and viscosities of these nanolubricants as a function of temperature and nanoparticle concentration was also checked.

    This work was supported by the Spanish Ministry of Economy and Competitiveness and the EU FEDER programme through the ENE2014-55489-C2-1-R, ENE2014-55489-C2-2-R, ENE2017-86425-C2-1-R and ENE2017-86425-C2-2-R projects. Moreover, this work was funded by the Xunta de Galicia (AGRUP2015/11 and GRC ED431C 2016/001). D.C. was the recipient of a postdoctoral fellowship from the Xunta de Galicia (Spain).
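The "simple theoretical models" commonly checked in such studies include the volume-weighted mixing rule for nanofluid density and Einstein's dilute-suspension equation for viscosity. The abstract does not name the exact models used, so the following is a hedged illustration of these two standard relations (phi is the nanoparticle volume fraction):

```python
def nanofluid_density(phi, rho_p, rho_bf):
    """Volume-weighted mixing rule: rho_nf = phi*rho_p + (1 - phi)*rho_bf,
    with rho_p the particle density and rho_bf the base-fluid density."""
    return phi * rho_p + (1.0 - phi) * rho_bf

def einstein_viscosity(phi, mu_bf):
    """Einstein dilute-suspension model: mu_nf = mu_bf * (1 + 2.5*phi).
    Valid only for small volume fractions of non-interacting spheres."""
    return mu_bf * (1.0 + 2.5 * phi)
```

For example, with illustrative values of roughly 5680 kg/m³ for ZrO2 and 850 kg/m³ for a synthetic oil, a 1 vol% loading raises the predicted density to about 898 kg/m³.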

    Influence of Multiple Conformations and Paths on Rate Constants and Product Branching Ratios. Thermal Decomposition of 1-Propanol Radicals

    The potential energy surface involved in the thermal decomposition of 1-propanol radicals was investigated in detail using automated codes (tsscds2018 and Q2DTor). From the predicted elementary reactions, a relevant reaction network was constructed to study the decomposition at temperatures in the range 1000-2000 K. Specifically, this network comprises 18 conformational reaction channels (CRCs), which in general exhibit a wealth of conformers of reactants and transition states. Rate constants for all the CRCs were calculated using two approaches within the formulation of variational transition-state theory (VTST), as incorporated in the TheRa program. The simplest, one-well (1W) approach considers only the most stable conformer of the reactant and that of the transition state. The second, more accurate approach takes contributions from all the reactant and transition-state conformers into account using the multipath (MP) formulation of VTST. In addition, kinetic Monte Carlo (KMC) simulations were performed to compute product branching ratios. The results show significant differences between the values of the rate constants calculated with the two VTST approaches. Moreover, the KMC simulations carried out with the two sets of rate constants indicate that, depending on the radical considered as reactant, the 1W and MP approaches may give qualitatively different pictures of the whole decomposition process.

    This work was partially supported by the Consellería de Cultura, Educación e Ordenación Universitaria and the Consellería de Economía, Emprego e Industria (Axuda para Consolidación e Estructuración de unidades de investigación competitivas do Sistema Universitario de Galicia, Xunta de Galicia ED431C 2017/17 & Centro singular de investigación de Galicia acreditación 2016-2019, ED431G/09), the Ministerio de Economía y Competitividad of Spain (Research Grant No. CTQ2014-58617-R), and the European Regional Development Fund (ERDF). D.F.-C. also thanks the Xunta de Galicia for financial support through a postdoctoral grant. The authors thank the Centro de Supercomputación de Galicia (CESGA) for the use of their computational facilities.
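The difference between the 1W and MP pictures can be illustrated with a strongly simplified Eyring-style estimate: the multipath rate Boltzmann-sums over all transition-state and reactant conformers, while the one-well rate keeps only the most stable conformer of each. This sketch (our own function names; free energies in J/mol; tunnelling, variational effects, and full partition functions are ignored) is not the TheRa implementation:

```python
import math

KB = 1.380649e-23     # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
R = 8.314462618       # gas constant, J/(mol*K)

def eyring_multipath(T, g_ts, g_reac):
    """Multipath-style estimate: k = (kB*T/h) * Q_ts / Q_reac, where each
    Q is a Boltzmann sum over the conformer free energies (J/mol)."""
    q_ts = sum(math.exp(-g / (R * T)) for g in g_ts)
    q_reac = sum(math.exp(-g / (R * T)) for g in g_reac)
    return (KB * T / H) * q_ts / q_reac

def one_well(T, g_ts, g_reac):
    """One-well approximation: keep only the most stable conformer of the
    reactant and of the transition state."""
    return eyring_multipath(T, [min(g_ts)], [min(g_reac)])
```

Because the multipath numerator gains a positive term for every extra transition-state conformer, it is easy to see in this toy form why the two approaches can diverge when many conformers lie close in energy.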

    Individual based model links thermodynamics, chemical speciation and environmental conditions to microbial growth

    Individual-based Models (IbM) must transition from research tools to engineering tools. To make this transition, we must aspire to develop large, three-dimensional, and physically and biologically credible models. Biological credibility can be promoted by grounding the biology, as far as possible, in thermodynamics. Thermodynamic principles are known to have predictive power in microbial ecology. However, this in turn requires a model that incorporates pH and chemical speciation. Physical credibility implies plausible mechanics and a connection with the wider environment. Here, we propose a step toward that ideal by presenting an individual-based model connecting thermodynamics, pH and chemical speciation, and environmental conditions to microbial growth for 5·10⁵ individuals. We showcase the model in two scenarios: a two-functional-group nitrification model and a three-functional-group anaerobic community. In the former, pH and the connection to the environment had an important effect on the simulated outcomes. In the latter, pH was less important, but the spatial arrangements and community productivity (that is, methane production) were highly dependent on thermodynamic and reactor coupling. We conclude that if IbM are to attain their potential as tools to evaluate the emergent properties of engineered biological systems, it will be necessary to combine the chemical, physical, mechanical, and biological aspects along the lines we have proposed. We have still fallen short of our ideals because we cannot (yet) calculate specific uptake rates and must develop the capacity for longer runs of larger models. However, we believe such advances are attainable, ideally in a common, fast, and modular platform, for future innovations in IbM will only be of use if they can be coupled with all the previous advances.
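One common way to ground individual growth kinetics in thermodynamics, in the spirit described above, is to scale a Monod rate by a thermodynamic factor that vanishes as the catabolic free energy approaches zero. The sketch below follows the Jin and Bethke style factor; it is an illustrative stand-in for, not a reproduction of, the model in the paper, and all names and defaults are ours:

```python
import math

R = 8.314462618  # gas constant, J/(mol*K)

def thermodynamic_factor(dG, T=298.15, chi=1.0):
    """F_T = max(0, 1 - exp(dG / (chi*R*T))), with dG the free energy of
    the catabolic reaction (J/mol, negative when favourable).  Growth
    stops as dG approaches zero from below."""
    return max(0.0, 1.0 - math.exp(dG / (chi * R * T)))

def growth_rate(mu_max, s, ks, dG, T=298.15):
    """Monod kinetics scaled by the thermodynamic factor: far from
    equilibrium the factor is ~1 and classical Monod growth is recovered."""
    return mu_max * s / (ks + s) * thermodynamic_factor(dG, T)
```

Coupling such a factor to pH and chemical speciation (which together set dG for each individual's local environment) is what links the chemistry to the emergent spatial patterns the abstract describes.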