
    Foreground detection enhancement using Pearson correlation filtering

    Foreground detection algorithms are commonly employed as an initial module in video processing pipelines for automated surveillance. The masks produced by these algorithms are usually postprocessed in order to improve their quality. In this work, a postprocessing filter based on the Pearson correlation among the pixels in a neighborhood of the pixel at hand is proposed. The flow of information among pixels is controlled by the correlation that exists among them. In this way, the filtering performance is improved with respect to several state-of-the-art proposals, as demonstrated on a selection of benchmark videos.
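
    The abstract does not give the filter's exact formulation, but the core idea — letting information flow between a pixel and its neighbors in proportion to their Pearson correlation — can be sketched as follows. All names and the positive-correlation weighting rule are assumptions for illustration, not the paper's method.

```python
import numpy as np

def pearson_filter(mask, frames, radius=1, threshold=0.5):
    """Hypothetical sketch: smooth a binary foreground mask by averaging
    each pixel's neighborhood, weighting every neighbor by the Pearson
    correlation between its recent intensity history and the center's."""
    T, H, W = frames.shape  # T recent grayscale frames
    out = np.zeros_like(mask, dtype=float)
    for i in range(H):
        for j in range(W):
            num, den = 0.0, 0.0
            ci = frames[:, i, j] - frames[:, i, j].mean()
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if not (0 <= ni < H and 0 <= nj < W):
                        continue
                    cn = frames[:, ni, nj] - frames[:, ni, nj].mean()
                    denom = np.sqrt((ci ** 2).sum() * (cn ** 2).sum())
                    r = (ci @ cn) / denom if denom > 0 else 0.0
                    w = max(r, 0.0)  # only positively correlated neighbors contribute
                    num += w * mask[ni, nj]
                    den += w
            # fall back to the original label when no neighbor correlates
            out[i, j] = num / den if den > 0 else mask[i, j]
    return (out >= threshold).astype(np.uint8)
```

    With uncorrelated (e.g. static) pixel histories the mask passes through unchanged; when all pixels move together, the filter reduces to plain neighborhood averaging.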

    Solving Discrete Ordered Median Problems with Induced Order

    Ordered median functions have been developed to model flexible discrete location problems. A weight is associated with the distance from a customer to its closest facility, depending on the position of that distance relative to the distances of all the customers. In this paper, the above idea is extended by adding a second type of facility and, consequently, a second weight, whose values are based on the position of the first weights. An integer programming formulation for solving models of this kind is provided in this work.
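
    To make the base idea concrete: the classical ordered median objective sorts the customer-to-closest-facility distances and applies a position-dependent weight to each. The helper below illustrates only that single-level function, not the paper's induced-order extension; all names are ours.

```python
import numpy as np

def ordered_median(dist, facilities, lam):
    """Ordered median objective: sort each customer's distance to its
    closest open facility and weight the k-th smallest by lam[k]."""
    d = dist[:, facilities].min(axis=1)  # closest-facility distance per customer
    return float(np.sort(d) @ np.asarray(lam, dtype=float))
```

    Choosing lam = (1, ..., 1) recovers the p-median objective, lam = (0, ..., 0, 1) the p-center objective, which is what makes the family flexible.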

    Solving discrete ordered median problems with induced order: preliminary results

    The Discrete Ordered Median Problem with Induced Order (DOMP+IO) is a multi-level version of the classical DOMP, which has been widely studied. In this work, a DOMP+IO with two types of facilities (levels) is considered and some preliminary results are provided.

    Análisis de la desigualdad económica en España

    This undergraduate thesis analyzes economic inequality in Spain and its regions. The variable under study is income, and the years analyzed are 2009 and 2013. The data come from the Living Conditions Survey (Encuesta de Condiciones de Vida) produced by the Instituto Nacional de Estadística. The instruments used in the analysis are the Lorenz curve and the Gini index, and the conclusion is that economic inequality in Spain increased during this period. At the regional level, a significant increase in inequality is detected only for Aragón and Castilla-La Mancha. Finally, by combining the evolution of mean income with the evolution of inequality, an income-based welfare analysis is carried out.
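
    For reference, the Gini index used in such analyses can be computed directly from a sorted income vector; the standard formula below equals twice the area between the Lorenz curve and the line of perfect equality (this is the textbook estimator, not necessarily the exact variant used in the thesis).

```python
import numpy as np

def gini(incomes):
    """Gini index via the mean-absolute-difference formula:
    G = sum_i (2i - n - 1) x_(i) / (n * sum x), with x sorted ascending
    and i = 1..n. Returns 0 for perfect equality, approaching 1 as all
    income concentrates in one unit."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return float(((2 * i - n - 1) @ x) / (n * x.sum()))
```

    A uniform income vector yields 0, while concentrating all income in one household of four yields 0.75.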

    Function approximation in Hilbert spaces: a general sequential method and a particular implementation with neural networks

    A sequential method for approximating vectors in Hilbert spaces, called Sequential Approximation with Optimal Coefficients (SAOC), is presented. Most existing sequential methods choose the new term so that it matches the previous residue as closely as possible. Although this strategy leads to approximations that converge towards the target function, it may be far from the best strategy with regard to the number of terms in the approximation. SAOC combines two key ideas. The first is the optimization of the coefficients (the linear part of the approximation). The second is the flexibility to choose the frequencies (the nonlinear part). The only relation with the residue has to do with its capability to approximate the target vector f. SAOC maintains orthogonal-like properties. The theoretical results obtained prove that, under reasonable conditions, the construction of the approximation is always possible and that, in the limit, the residue of the approximation obtained with SAOC is the best that can be obtained with any subset of the given set of vectors. In addition, it appears to achieve the same accuracy as other existing sequential methods with fewer terms. In the particular case of L^2, it can be applied to polynomials, Fourier series, wavelets and neural networks, among others. A particular implementation using neural networks is also presented. In fact, the benefit is reciprocal, because SAOC can be used as an inspiration to construct and train a neural network.
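
    The coefficient-optimization idea can be sketched greedily: at every step, re-solve the least-squares problem over all selected atoms plus each candidate, and keep the candidate that most reduces the residue. This mirrors orthogonal matching pursuit; SAOC's actual frequency-selection rule may differ, and all names here are ours.

```python
import numpy as np

def saoc_like(f, dictionary, n_terms):
    """Greedy sketch in the spirit of SAOC: add the dictionary column
    whose inclusion, with ALL coefficients re-optimized by least squares,
    leaves the smallest residue norm."""
    selected = []
    for _ in range(n_terms):
        best, best_err = None, np.inf
        for k in range(dictionary.shape[1]):
            if k in selected:
                continue
            A = dictionary[:, selected + [k]]
            coef, *_ = np.linalg.lstsq(A, f, rcond=None)
            err = np.linalg.norm(f - A @ coef)
            if err < best_err:
                best, best_err = k, err
        selected.append(best)
    A = dictionary[:, selected]
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)  # final optimal coefficients
    return selected, coef
```

    Because every step re-optimizes the whole linear part, earlier coefficients are never frozen, which is what distinguishes this family from plain matching pursuit.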

    Solving multi-objective hub location problems by hybrid algorithms

    In many logistic, telecommunications and computer networks, direct routing of commodities between any origin and destination is not viable due to economic and technological constraints. In such cases, a network with centralized units, known as hub facilities, and a small number of links is commonly used to connect any origin-destination pair. The purpose of these hub facilities is to consolidate, sort and transship any commodity in the network efficiently. Hub location problems (HLPs) consider the design of these networks by locating a set of hub facilities, establishing an interhub subnet, and routing the commodities through the network while optimizing some objective(s) based on cost or service. Hub location has evolved into a rich research area, where a huge number of papers have been published since the seminal work of O'Kelly [1]. Early works focused on analogous facility location problems, making assumptions that simplify the network design. Recent works [2] have studied more complex models that relax some of these assumptions and incorporate additional real-life features. In most HLPs considered in the literature, the input parameters are assumed to be known and deterministic. However, in practice, this assumption is unrealistic, since there is high uncertainty in relevant parameters such as costs, demands or even distances. In this work, we study multi-objective hub location problems under uncertainty.

    Solving Multi-Objective Hub Location Problems with Robustness

    Hub location problems (HLPs) arise in many logistic, telecommunications, and computer networks, where the design of the network is optimized with respect to some objective(s) related to cost or service. In those settings, direct routing between any origin and destination is not viable due to economic or technological constraints. Since the seminal work of O'Kelly, a huge number of works have been published in the literature. Early contributions focused on analogous facility location problems, making assumptions that simplify the network design. Recent works have studied more complex models by incorporating additional real-life features and relaxing some of these assumptions, although the input parameters are still assumed to be known in most of the HLPs considered in the literature. This assumption is unrealistic in practice, since there is high uncertainty in relevant parameters of real problems, such as costs, demands, or even distances. Consequently, a decision maker usually prefers several solutions with low uncertainty in their objective functions to the optimum of an assumed deterministic objective function. In this work we use a three-objective integer linear programming model of the p-hub location problem in which the average transportation cost, its variance, and the processing time at the hubs are minimized. The number of variables is O(n^4), where n is the number of nodes of the graph. ILP solvers can only solve small instances of the problem, so we propose the use of a recent hybrid algorithm combining heuristic and exact methods: Construct, Merge, Solve, and Adapt.
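
    The first of the three objectives, average transportation cost, has a standard form in single-allocation hub models: each origin-destination flow travels origin-to-hub, hub-to-hub (discounted by a factor alpha), and hub-to-destination. The sketch below shows that objective only, under assumed names; the variance and hub processing-time objectives of the paper's model are omitted.

```python
import numpy as np

def transport_cost(d, w, alloc, alpha=0.75):
    """Average transportation cost of a single-allocation hub network:
    flow w[i, j] is routed i -> alloc[i] -> alloc[j] -> j, with the
    interhub leg discounted by alpha (economies of scale)."""
    n = d.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            hi, hj = alloc[i], alloc[j]
            total += w[i, j] * (d[i, hi] + alpha * d[hi, hj] + d[hj, j])
    return total / w.sum()
```

    Routing every pair through hubs trades longer paths for the discounted, consolidated interhub links, which is why alpha < 1 is what makes hub networks pay off.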

    Weighted Contrastive Divergence

    Learning algorithms for energy-based Boltzmann architectures that rely on gradient descent are in general computationally prohibitive, typically due to the exponential number of terms involved in computing the partition function. One therefore has to resort to approximation schemes for the evaluation of the gradient. This is the case for Restricted Boltzmann Machines (RBMs) and their learning algorithm, Contrastive Divergence (CD). It is well known that CD has a number of shortcomings, and its approximation to the gradient has several drawbacks. Overcoming these defects has been the basis of much research, and new algorithms have been devised, such as persistent CD. In this manuscript we propose a new algorithm, which we call Weighted CD (WCD), built from small modifications of the negative phase in standard CD. However small these modifications may be, the experimental work reported in this paper suggests that WCD provides a significant improvement over standard CD and persistent CD at a small additional computational cost.
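
    For context, standard CD-1 — the baseline that WCD modifies in its negative phase — approximates the log-likelihood gradient of a Bernoulli RBM with a single Gibbs reconstruction. The sketch below shows only that baseline; how WCD reweights the negative-phase samples is not specified by the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_grad(v0, W, b, c):
    """One CD-1 step for a Bernoulli RBM with weights W, visible bias b,
    hidden bias c. Positive phase uses the data batch v0; negative phase
    uses a single Gibbs reconstruction."""
    h0 = sigmoid(v0 @ W + c)                      # positive hidden probabilities
    h_sample = (rng.random(h0.shape) < h0) * 1.0  # sample hidden states
    v1 = sigmoid(h_sample @ W.T + b)              # reconstructed visible probabilities
    h1 = sigmoid(v1 @ W + c)                      # negative hidden probabilities
    dW = v0.T @ h0 - v1.T @ h1                    # approximate gradient of log-likelihood
    db = (v0 - v1).sum(axis=0)
    dc = (h0 - h1).sum(axis=0)
    return dW, db, dc
```

    The parameters are then updated by ascent, e.g. W += lr * dW; persistent CD differs only in keeping the negative-phase chain alive across updates instead of restarting it at the data.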

    RibEx: a web server for locating riboswitches and other conserved bacterial regulatory elements

    We present RibEx (riboswitch explorer), a web server capable of searching any sequence for known riboswitches as well as other predicted, but highly conserved, bacterial regulatory elements. It allows the visual inspection of the identified motifs in relation to attenuators and open reading frames (ORFs). The sequence of any of the ORFs or regulatory elements can be obtained with a click and submitted to NCBI's BLAST. Alternatively, the genome context of all other genes regulated by the same element can be explored with our genome context tool (GeConT). RibEx is available at