22 research outputs found

    Assessing the performance of several rainfall interpolation methods as evaluated by a conceptual hydrological model

    The objective of this study was to assess the performance of several rainfall interpolation methods as evaluated by a conceptual hydrological model. To this end, the upper Toro River catchment (43.15 km2), located in Costa Rica, was selected as the case study. Deterministic and geostatistical interpolation methods were used to generate time series of daily and hourly average rainfall over a period of 10 years (2001-2010). These time series served as inputs to the HBV-TEC hydrological model and were individually calibrated against observed streamflow data. Based on the model results, the performance of the deterministic methods is comparable to that of the geostatistical methods at daily time steps. At hourly time steps, however, the deterministic methods considerably outperformed the geostatistical methods.
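A typical deterministic method in comparisons of this kind is inverse distance weighting (IDW). A minimal sketch of the idea, assuming planar coordinates and illustrative station data (not values from the study):

```python
import math

def idw(stations, target, power=2.0):
    """Inverse-distance-weighted rainfall estimate at a target point.

    stations: list of ((x, y), rainfall) tuples for the rain gauges
    target:   (x, y) coordinates of the ungauged location
    """
    num = den = 0.0
    for (x, y), rain in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0:
            return rain  # target coincides with a gauge
        w = d ** -power  # closer gauges get larger weights
        num += w * rain
        den += w
    return num / den
```

Geostatistical alternatives such as ordinary kriging replace the fixed inverse-distance weights with weights derived from a fitted variogram.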

    Development of the HBV-TEC hydrological model

    In this paper, the HBV-TEC hydrological model is presented. This model version aims to provide researchers and scholars with a stable and robust implementation of the HBV hydrological model based on the R programming language. To evaluate its performance, HBV-TEC was applied to three subcatchments of the Aguacaliente river catchment, an experimental catchment located in the province of Cartago, Costa Rica. Results suggest a satisfactory model performance for two of the subcatchments and an unsatisfactory performance for the remaining one, most of which can be attributed to insufficient meteorological data along with a highly heterogeneous spatial rainfall distribution.
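At the core of any HBV implementation is a soil-moisture accounting routine in which the fraction of rainfall that recharges the runoff stores grows as the soil store approaches field capacity. A sketch of one time step, using the standard HBV parameter names FC and beta from the literature (not necessarily the exact HBV-TEC R code):

```python
def hbv_soil_step(sm, precip, fc, beta):
    """One time step of the standard HBV soil-moisture routine.

    sm:     current soil moisture storage (mm)
    precip: rainfall during the step (mm)
    fc:     maximum soil storage, field capacity FC (mm)
    beta:   shape parameter controlling recharge nonlinearity
    """
    # Recharge fraction (SM/FC)^beta rises from 0 (dry soil) to 1 (saturated).
    recharge = precip * (sm / fc) ** beta
    sm_new = min(sm + precip - recharge, fc)
    return sm_new, recharge
```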

    Interfaces in Virtual Reality Environments (iReal 2011-2012)

    Research project. Instituto Tecnológico de Costa Rica. Escuela de Matemática, Escuela de Diseño Industrial, Escuela de Ingeniería en Computación, 2012. The objective of iReal was to develop the technology to equip TEC with a virtual reality facility. The project had to define a strategy for the use and development of the interface elements, and of the software and hardware needed to project three-dimensional environments in real time, in which spatial phenomena can be experienced with the user immersed in the environment either physically or virtually. Such three-dimensional interfaces are still scarcely developed worldwide. At the start of the project, in March 2010, several members of the eScience group (including researchers Franklin Hernández and José Castro) attended the PRAGMA 18 meeting in San Diego, California. That visit offered a view of the state of the art in several of the most advanced countries in this area, among them the United States, Canada, Japan, India, and Korea. The hardware side of the field is well advanced; the persisting problem, however, lies in the visualization of high-resolution information as virtual three-dimensional environments and, even more critically, in the manipulation of those systems.

    A comparison of generalized extreme value, Gumbel, and log-Pearson distributions for the development of intensity duration frequency curves. A case study in Costa Rica

    Global warming has already affected the frequency and intensity of extreme rainfall events. This makes the evaluation of current and alternative statistical distributions used in the formulation of Intensity Duration Frequency (IDF) curves highly relevant. This study aims to evaluate the suitability of the Generalized Extreme Value (GEV) and the Log-Pearson type 3 (LP3) probability distributions against the traditionally used Gumbel (EV1) distribution to derive IDF curves for a flood-prone area located in northern Costa Rica. A ranking system based on a normalized total score from five metrics was implemented to identify the best distribution. GEV proved to be the most suitable distribution for most storm durations and was therefore selected for development of the IDF curves with return periods ranging from 2 to 100 years. As return periods get longer, however, deviations between the rainfall estimates obtained from the different distributions become more prominent. Hence, regardless of whether GEV or any other distribution is used, a meticulous goodness-of-fit analysis should be undertaken to select the most adequate probability distribution for estimating extreme events with return periods of 50 years or more. Results also reinforce the need to identify the distribution that best fits the observed data for a particular weather station, especially when time series are asymmetric.
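For the Gumbel (EV1) member of the comparison, a simple method-of-moments fit and the associated return-level (quantile) formula can be sketched as follows; the study's five-metric ranking system is not reproduced here:

```python
import math
import statistics

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_fit_moments(annual_maxima):
    """Method-of-moments Gumbel (EV1) fit to an annual-maximum series."""
    mean = statistics.mean(annual_maxima)
    std = statistics.stdev(annual_maxima)
    scale = std * math.sqrt(6) / math.pi      # Gumbel scale from variance
    loc = mean - EULER_GAMMA * scale          # Gumbel location from mean
    return loc, scale

def gumbel_return_level(loc, scale, T):
    """Rainfall depth with return period T years: the (1 - 1/T) quantile."""
    return loc - scale * math.log(-math.log(1 - 1 / T))
```

GEV and LP3 fits follow the same pattern but add a shape parameter, which is what lets them diverge from Gumbel at the long return periods discussed above.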

    Improving uncertainty estimations for mammogram classification using semi-supervised learning

    Computer-aided diagnosis for mammogram images has seen positive results through the use of deep learning architectures. However, limited sample sizes for the target datasets may prevent the use of a deep learning model in real-world scenarios. Using unlabeled data to improve the accuracy of the model is one approach to tackle the lack of target data. Moreover, model attributes important for the medical domain, such as model uncertainty, might also be improved through the use of unlabeled data. Therefore, in this work we explore the impact of using unlabeled data for mammogram images through the implementation of a recent approach known as MixMatch. We evaluate the improvement in the accuracy and uncertainty of the model using popular and simple approaches to estimate uncertainty. To this end, we propose the uncertainty balanced accuracy metric.
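One popular and simple estimator of per-image uncertainty from a classifier's softmax output is the predictive entropy; a minimal sketch (the paper's uncertainty balanced accuracy metric itself is not reproduced here):

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a predicted class distribution.

    Higher entropy means the model is less certain about this image;
    a confident one-hot prediction has entropy 0.
    """
    return -sum(p * math.log(p) for p in probs if p > 0)
```

Thresholding such an uncertainty score is what allows accuracy to be reported separately for certain and uncertain predictions.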

    An iterative software development methodology for micro-enterprises

    In recent years, Costa Rica has experienced a growing number of software development micro-enterprises, but this growth has not been accompanied by the use of development methodologies appropriate for this type of company. Several reasons explain this situation: the company founders' lack of software engineering training, the apparent urgency to produce code at all costs to the detriment of planning, and the misuse of methodologies that generate an excess of administrative work, among others. In addition, the business world is undergoing rapid transformations that demand solutions able to adapt quickly to change. Iterative development methodologies, such as the one proposed in this work, adapt more nimbly to the requirements variability the IT sector faces. This work first diagnoses the characteristics of Costa Rican software development micro-enterprises; it then identifies best practices among software development methodologies, and finally carries out a comparative analysis to propose a development methodology suited to this type of organization.

    Performance of Deep Learning models with transfer learning for multiple-step-ahead forecasts in monthly time series

    Deep learning and transfer learning models are being used to generate time series forecasts; however, evidence about their predictive performance is scarce, and even more so for monthly time series. The purpose of this paper is to compare deep learning models, with and without transfer learning, against traditional methods used for monthly forecasts, in order to answer three questions about the suitability of deep learning and transfer learning for generating time series predictions. Time series from the M4 and M3 competitions were used for the experiments. The results suggest that deep learning models based on TCN, LSTM, and CNN with transfer learning tend to surpass the predictive performance of the traditional methods. On the other hand, TCN and LSTM trained directly on the target time series achieved similar or better performance than the traditional methods for some forecast horizons.
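Forecasts on the M3/M4 series are conventionally scored with the symmetric MAPE (sMAPE) over the forecast horizon; a minimal sketch of the metric:

```python
def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent.

    This is the headline accuracy metric of the M3/M4 forecasting
    competitions, averaged over the multiple-step-ahead horizon.
    """
    n = len(actual)
    return 200 / n * sum(
        abs(a - f) / (abs(a) + abs(f))
        for a, f in zip(actual, forecast)
    )
```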

    A semi-supervised clustering algorithm combining SUBCLU and constraint-based clustering for group detection in high-dimensional datasets

    High-dimensional data poses a challenge to traditional clustering algorithms, whose similarity measures are not meaningful over the full data space, affecting the quality of the groups. As a result, subspace clustering algorithms have been proposed as an alternative, aiming to find all groups in all subspaces of the dataset. Since groups are detected in lower-dimensional spaces, each group may belong to different subspaces of the original dataset. Therefore, attributes the user considers of interest may be excluded from some or all groups, decreasing the value of the result for data analysts. In this project, a new algorithm is proposed that combines SUBCLU and constraint-based clustering, allowing users to identify variables as attributes of interest based on prior domain knowledge; this steers group detection toward subspaces that include the user's attributes of interest and thereby generates more meaningful groups.
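The constraint idea can be illustrated by restricting the Apriori-style enumeration of candidate subspaces to those containing the user's attributes of interest. This is a sketch of the filtering step only, not of SUBCLU's density-based pruning; all names are illustrative:

```python
from itertools import combinations

def candidate_subspaces(n_dims, max_dim, must_include):
    """Enumerate candidate subspaces as attribute-index tuples,
    keeping only those that contain every attribute of interest.

    This is the constraint that steers group detection toward
    subspaces involving the user's attributes.
    """
    must = set(must_include)
    return [
        subspace
        for k in range(1, max_dim + 1)
        for subspace in combinations(range(n_dims), k)
        if must <= set(subspace)
    ]
```

Pruning candidates this way shrinks the exponential subspace lattice before any density-based clustering is run on each surviving subspace.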

    Comparison of local and global optimization methods for the calibration and sensitivity analysis of a conceptual hydrological model

    Eight global and eight local optimization methods were used to calibrate the HBV-TEC hydrological model on the upper Toro river catchment in Costa Rica for four calibration periods (4, 8, 12, and 16 years). To evaluate their sensitivity to getting trapped in local minima, each method was tested against 50 sets of randomly generated initial model parameters. All methods were then evaluated in terms of optimization performance and computational cost. Results show a comparable performance among several global and local methods, as they correlate highly with one another. Nonetheless, local methods are in general more sensitive to getting trapped in local minima, irrespective of the duration of the calibration period. The performance of the various methods seems to be independent of the total number of model calls, which may vary by several orders of magnitude depending on the selected optimization method. The selection of an optimization method is largely influenced by its efficiency and the available computational resources, regardless of whether it is global or local.
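The sensitivity to initial parameters that the study measures can be illustrated with a toy objective that has two local minima: plain gradient descent (a stand-in for the local methods) ends up in whichever basin the random start falls into. This sketch uses an invented one-dimensional objective, not the HBV-TEC calibration problem:

```python
import random

def f(x):
    """Toy objective with two local minima, at x = -1 and x = +1."""
    return (x * x - 1) ** 2

def gradient_descent(x, lr=0.01, steps=2000):
    """Plain local search: follows df/dx = 4x(x^2 - 1) downhill."""
    for _ in range(steps):
        x -= lr * 4 * x * (x * x - 1)
    return x

# 50 randomly generated starting points, mirroring the study's setup.
random.seed(1)
starts = [random.uniform(-2, 2) for _ in range(50)]
ends = [gradient_descent(x0) for x0 in starts]
basins = {round(x) for x in ends}  # which of the two minima were reached
```

A global method with enough restarts or population diversity would report the same best value from every run, whereas the local runs above split between the two basins depending on the initial guess.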