
    Convergence analysis and validation of low cost distance metrics for computational cost reduction of the Iterative Closest Point algorithm

    The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. The algorithm is applied in many areas: in medicine, for volumetric reconstruction of tomography data; in robotics, to reconstruct surfaces or scenes from range sensor information; in industrial systems, for quality control of manufactured objects; and even in biology, to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature to improve performance, either by reducing the number of points or the required iterations, or by lowering the complexity of the most expensive phase: the closest-neighbour search. Despite decreasing the complexity, some of these variants tend to degrade the final registration precision or shrink the convergence domain, thus limiting the possible application scenarios. The goal of this work is to reduce the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. To that end, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with a lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm.
In that analysis, the behaviour of the algorithm in diverse topological spaces, each characterized by a different metric, has been studied to assess the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to significantly improve the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been experimentally analyzed and validated as comparable to the Euclidean distance over a heterogeneous set of objects, scenarios and initial situations.

    Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm

    The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration, to the point that it has become a standard in the Robotics and Computer Vision communities. Many applications take advantage of its popularity and simplicity to align 2D/3D surfaces. Nevertheless, some of its phases have a high computational cost, which rules out certain applications. In this work, an efficient approach for the matching phase of the Iterative Closest Point algorithm is proposed. This stage is the main bottleneck of the method, so any efficiency improvement has a large positive impact on the overall performance of the algorithm. The proposal consists in using low-cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distances, chosen for their simpler formulation. The experiments carried out validate the performance, robustness and quality of the proposal. Different experimental cases and configurations were set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results show that an average speed-up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results.
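The idea behind the speed-up can be sketched with a brute-force matching phase in which the point-to-point metric is pluggable. This is an illustrative sketch, not the authors' implementation; all names, shapes and values below are assumptions:

```python
import numpy as np

def pairwise_dist(src, dst, metric="euclidean"):
    """All-pairs distances between an (N, d) and an (M, d) point set."""
    diff = src[:, None, :] - dst[None, :, :]          # (N, M, d)
    if metric == "euclidean":
        return np.sqrt((diff ** 2).sum(axis=2))       # squares plus a sqrt
    if metric == "manhattan":
        return np.abs(diff).sum(axis=2)               # additions only
    if metric == "chebyshev":
        return np.abs(diff).max(axis=2)               # comparisons only
    raise ValueError(metric)

def closest_points(src, dst, metric="euclidean"):
    """Matching phase: index of the nearest destination point for each source point."""
    return pairwise_dist(src, dst, metric).argmin(axis=1)

rng = np.random.default_rng(0)
dst = rng.normal(size=(100, 3))
src = dst + rng.normal(scale=1e-3, size=dst.shape)    # slightly perturbed copy

# With a small perturbation, all three metrics recover the same matching,
# while Manhattan and Chebyshev avoid the square root per distance.
m_euc = closest_points(src, dst, "euclidean")
m_man = closest_points(src, dst, "manhattan")
m_che = closest_points(src, dst, "chebyshev")
print(np.array_equal(m_euc, m_man), np.array_equal(m_euc, m_che))  # -> True True
```

The cheaper metrics change only which neighbour is declared "closest" in borderline cases, which is why the papers above validate that convergence and final error remain comparable to the Euclidean baseline.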

    Electrostimulation as a complement to voluntary isometric training for improving maximal isometric strength. Differences between middle-aged men and women

    This study analyses the efficacy of electrostimulation as a complement to voluntary training of maximal isometric strength (MIS). To this end, the improvement in MIS was studied in 20 middle-aged subjects of both sexes (n = 20). After the training protocol, significant differences were found for both training regimes. Greater stability was found in the response of the group that used only voluntary isometric training; however, a greater range of improvement was found in the group that combined electrostimulation with the training. The results point to the need for specific, fully personalised work to find the optimal training parameters when using these technologies, since the gain in performance will then be much more favourable.


    Changes only up to a point: Residential segregation and economic inequalities in Montevideo (1996–2015)

    This article analyzes the dynamics of residential segregation and economic inequality in the city of Montevideo from the 1990s to 2015. Over the past two decades Montevideo has seen significant reductions in economic inequality; despite this, inequalities in the urban space have not diminished to the same extent. To address the problem, we use quantitative indicators of inequality and poverty built from official statistical sources (national household surveys). We describe the evolution of these indicators between 1996 and 2015 and compare them across different regions of the city. The regions are defined by applying a residential segregation index based on household socioeconomic information at the census-segment level, using the 1996 population, housing and household censuses as the source. The main indicators of economic inequality considered are the Gini index, the incidence of poverty and generalized entropy indices; a comparative analysis of housing values at the neighbourhood level in Montevideo for 1996 and 2015 is also carried out, in order to identify areas whose value has appreciated or depreciated.
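The Gini index mentioned above can be computed directly from a sorted income vector. A minimal sketch, using synthetic values rather than the article's survey data:

```python
import numpy as np

def gini(income):
    """Gini coefficient via the sorted-rank (mean absolute difference) formulation."""
    x = np.sort(np.asarray(income, dtype=float))
    n = x.size
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with ranks i = 1..n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

print(round(gini([1, 1, 1, 1]), 3))   # perfect equality  -> 0.0
print(round(gini([0, 0, 0, 4]), 3))   # one household holds everything -> 0.75
```

Computed per region, an index like this is what allows the comparison of inequality trajectories across the segregation-defined zones of the city.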

    Energy solutions for everyday life

    Research report, Universidad de Costa Rica, Vicerrectoría de Acción Social, Trabajo Comunal Universitario, 2002. For further information, write to [email protected]. Design of an adequate lighting system for the Mechanics Workshop of the ICE facility in Rincón Grande de Pavas, following the guidelines of the Energy Efficiency Laboratory of the Energy Conservation Area.

    Characterisation of snowfall events in the northern Iberian Peninsula and the synoptic classification of heavy episodes (1988-2018)

    Historic snowfall events recorded in the northern Iberian Peninsula between 1988 and 2018 are presented and analysed. This study uses data collected over the course of 31 years from 105 observation stations. The weather reports describe the temporal and spatial characteristics of five Spanish provinces facing the Cantabrian Sea. The average number of snow events observed per year (across all 105 stations) was 133, with a maximum of 421 snow events recorded in 2010 and a minimum of 24 recorded in 2002. The monthly distribution of snow events peaked at 630 events (February), with a mean monthly value of 170 snow events. Other features, such as the distribution of snow events by altitude in each province studied and the corresponding spatial patterns, are also shown. Furthermore, the circulation patterns responsible for heavy snowfall in the region were examined. To carry out this analysis, we considered the daily patterns at 1200 UTC of the geopotential height at the 500 and 850 hPa pressure levels, the sea-level pressure, and the temperature at 500 and 850 hPa. The synoptic situations were classified using a principal component analysis coupled with K-means clustering, and four groups associated with heavy snowfall events were subsequently identified. The analysis of the daily synoptic patterns showed a trough over or near the Iberian Peninsula and a low over the Mediterranean Sea or Central Europe. The low-level flow was from the north (N) or northeast (NE) in ~85% of the cases, and the temperature at the 850 hPa pressure level was below -3 °C in ~70% of the cases.
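The classification pipeline described above (principal component analysis followed by K-means clustering of daily synoptic fields) can be sketched as follows. Grid sizes, component counts and the random data are assumptions for illustration, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_days, n_gridpoints = 300, 500          # e.g. a flattened Z500 field per day
fields = rng.normal(size=(n_days, n_gridpoints))

# Step 1: retain the leading components that explain most daily variance.
pca = PCA(n_components=10)
scores = pca.fit_transform(fields)       # (n_days, 10) PC scores

# Step 2: cluster days in PC space; the study identifies four groups
# associated with heavy snowfall.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)
labels = kmeans.labels_                  # synoptic type assigned to each day
print(np.bincount(labels))               # number of days per synoptic group
```

Working in PC space rather than on the raw gridded fields keeps the clustering tractable and filters out small-scale noise before the synoptic types are formed.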

    Meteorological patterns linked to landslide triggering in Asturias (NW Spain): A preliminary analysis

    Asturias is one of the most landslide-prone areas in the north of Spain. Most landslides are linked to intense and continuous rainfall events, especially between October and May, which points to precipitation as the main triggering factor in the study area. Thirteen rainfall episodes that caused 1064 landslides between 2008 and 2016 were selected for this study. Landslide records come from the Principality of Asturias Landslide Database (BAPA) and meteorological data from the Spanish Meteorological Agency (AEMET). The meteorological conditions during each period were characterized using NCEP/NCAR Reanalysis data. Four main landslide-triggering meteorological patterns have been identified for the Asturian territory: the Strong Atlantic Anticyclone pattern (SAA), the Atlantic Depression pattern (AD), the Anticyclonic Ridge pattern (AR) and the Cut-off Low pattern (CL). This research is funded by the Department of Employment, Industry and Tourism of the Government of Asturias, Spain, and the European Regional Development Fund (FEDER), within the framework of the research grant “GEOCANTABRICA: Procesos geológicos modeladores del relieve de la Cordillera Cantábrica” (FC-15-GRUPIN14-044), and builds on the cooperation between the Department of Geology at the University of Oviedo and AEMET.

    Stability and accuracy of deterministic project duration forecasting methods in earned value management

    [EN] Purpose: Earned Value Management (EVM) is a project monitoring and control technique that enables the forecasting of a project's duration. Many EVM metrics and project duration forecasting methods have been proposed, but very few studies have compared their accuracy and stability. Design/methodology/approach: This paper presents an exhaustive stability and accuracy analysis of 27 deterministic EVM project duration forecasting methods. Stability is measured via Pearson's, Spearman's and Kendall's correlation coefficients, while accuracy is measured by the Mean Squared and Mean Absolute Percentage Errors. These parameters are determined at ten-percentile intervals of project progress across 4,100 artificial project networks with varied topologies. Findings: The findings show that stability and accuracy are inversely correlated for most forecasting methods, and that both worsen significantly as project networks become more parallel. However, the AT + PD-ESmin forecasting method stands out as the most accurate and reliable. Practical implications: The results allow construction project managers to resort to the simplest, most accurate and most stable EVM metrics when forecasting project duration, and to anticipate how the project topology (i.e., the network of activity predecessors) and the stage of project progress condition their accuracy and stability. Originality/value: Unlike previous research comparing EVM forecasting methods, this study includes all deterministic methods (classical and recent alike) and measures their performance according to several parameters. Activity durations and costs are also modelled akin to those of construction projects.
The first author acknowledges the University of Talca for his Doctoral Program Scholarship (RU-056-2019). The second author acknowledges the Spanish Ministry of Science and Innovation for his Ramón y Cajal contract (RYC-2017-22222), co-funded by the European Social Fund.
Barrientos-Orellana, A.; Ballesteros-Pérez, P.; Mora-Melià, D.; González-Cruz, M.; Vanhoucke, M. (2022). Stability and accuracy of deterministic project duration forecasting methods in earned value management. Engineering, Construction and Architectural Management, 29(3), 1449-1469. https://doi.org/10.1108/ECAM-12-2020-1045
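The accuracy and stability measures described above can be illustrated on simulated duration forecasts. The data, noise model and sample sizes below are assumptions for illustration, not the paper's 4,100 project networks:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_projects = 50
true_duration = rng.uniform(80, 120, n_projects)   # final durations (days)

# Forecasts at each 10% progress interval: noisy early on, converging late.
percentiles = np.arange(10, 101, 10)
forecasts = np.array([
    true_duration + rng.normal(scale=20 * (1 - p / 100) + 1, size=n_projects)
    for p in percentiles
])                                                 # (10, n_projects)

# Accuracy at each progress point: Mean Absolute Percentage Error (MAPE).
mape = np.mean(np.abs(forecasts - true_duration) / true_duration, axis=1) * 100

# Stability: correlation between the forecasts produced at consecutive
# progress points (a stable method does not reshuffle its estimates).
stability = [pearsonr(forecasts[i], forecasts[i + 1])[0]
             for i in range(len(percentiles) - 1)]

print(np.round(mape, 1))        # the error shrinks as the project progresses
print(np.round(stability, 2))   # correlations rise toward project completion
```

This also makes the trade-off noted in the findings concrete: a method can score well on one axis (low late-stage MAPE) while remaining volatile early on, which is why both measures are tracked across the whole progress range.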

    Impact of multi-output and stacking methods on feed efficiency prediction from genotype using machine learning algorithms

    Feeding represents the largest economic cost in meat production; therefore, selection to improve traits related to feed efficiency is a goal in most livestock breeding programs. Residual feed intake (RFI), that is, the difference between the actual feed intake and the intake expected from the animal's requirements, has been used as the selection criterion to improve feed efficiency since it was proposed by Koch in 1963. In growing pigs, it is computed as the residual of the multiple regression of daily feed intake (DFI) on average daily gain (ADG), backfat thickness (BFT), and metabolic body weight (MW). Recently, prediction using single-output machine learning algorithms with SNP information as predictor variables has been proposed for genomic selection in growing pigs but, as in other species, the prediction quality achieved for RFI has generally been poor. It has been suggested, however, that it could be improved through multi-output or stacking methods. For this purpose, four strategies were implemented to predict RFI. Two of them compute RFI indirectly from the predicted values of its components, obtained from (i) individual predictions (multiple single-output strategy) or (ii) simultaneous predictions (multi-output strategy). The other two predict RFI directly, using (iii) the individual predictions of its components as predictor variables jointly with the genotype (stacking strategy), or (iv) only the genotypes as predictors (single-output strategy). The single-output strategy was considered the benchmark. This research aimed to test the former three hypotheses using data recorded from 5828 growing pigs and 45,610 SNPs. For all the strategies, two different learning methods were fitted: random forest (RF) and support vector regression (SVR). A nested cross-validation (CV), with an outer 10-fold CV and an inner threefold CV for hyperparameter tuning, was implemented to test all strategies.
This scheme was repeated using as predictor variables different subsets with an increasing number (from 200 to 3000) of the most informative SNPs identified with RF. Results showed that the highest prediction performance was achieved with 1000 SNPs, although the stability of the feature selection was poor (0.13 points out of 1). For all SNP subsets, the benchmark showed the best prediction performance. Using RF as the learner and the 1000 most informative SNPs as predictors, the means (SD) of the 10 values obtained in the test sets were 0.23 (0.04) for the Spearman correlation, 0.83 (0.04) for the zero-one loss, and 0.33 (0.03) for the rank distance loss. We conclude that the information on the predicted components of RFI (DFI, ADG, MW, and BFT) does not help improve the quality of the prediction of this trait over that obtained with the single-output strategy.
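The definition of RFI as a regression residual, as described above, can be sketched with simulated data. The coefficients, scales and sample size below are assumptions for illustration, not estimates from the study:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 500
adg = rng.normal(0.9, 0.1, n)    # average daily gain (kg/day)
bft = rng.normal(12.0, 2.0, n)   # backfat thickness (mm)
mw = rng.normal(25.0, 3.0, n)    # metabolic body weight (kg^0.75)

# Simulated daily feed intake: intake expected from the animal's
# requirements plus an individual efficiency deviation.
dfi = 0.5 + 1.8 * adg + 0.05 * bft + 0.04 * mw + rng.normal(0, 0.15, n)

# RFI = observed DFI minus the intake predicted by the multiple regression
# of DFI on ADG, BFT and MW.
X = np.column_stack([adg, bft, mw])
model = LinearRegression().fit(X, dfi)
rfi = dfi - model.predict(X)

print(abs(rfi.mean()) < 1e-8)    # OLS residuals are centred at zero -> True
```

Because RFI is by construction the part of intake that the components cannot explain, predictions of those same components (strategies i-iii) carry little extra signal about it, which is consistent with the study's conclusion that the single-output benchmark was not beaten.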