
    Laplacian regularization in the dual space for SVMs

    Máster Universitario en Investigación e Innovación en Inteligencia Computacional y Sistemas Interactivos. Nowadays, Machine Learning (ML) is a field with great impact because of its usefulness in solving many types of problems. However, large amounts of data are handled today, so traditional learning methods can be severely limited in performance. To address this problem, Regularized Learning (RL) is used, where the objective is to make the model as flexible as possible while preserving its generalization properties, so that overfitting is avoided. Many models use regularization in their formulations, such as Lasso, while others rely on intrinsic regularization, such as the Support Vector Machine (SVM). In this model, the margin of a separating hyperplane is maximized, resulting in a solution that depends only on a subset of the samples called support vectors. This Master Thesis aims to develop an SVM model with Laplacian regularization in the dual space, under the intuitive idea that close patterns should have similar coefficients. To construct the Laplacian term we use as a basis the Fused Lasso model, which penalizes the differences between consecutive coefficients; in our case, however, we penalize the differences between every pair of samples, using the elements of the kernel matrix as weights. This thesis presents the different phases carried out in the implementation of the new proposal, starting from the standard SVM and followed by comparative experiments between the new model and the original method. As a result, we see that Laplacian regularization is very useful, since the new proposal outperforms the standard SVM on most of the datasets used, both in classification and regression. Furthermore, we observe that if we consider only the Laplacian term and set the parameter C (the upper bound for the coefficients) as if it were infinite, we still obtain better performance than the standard SVM method.
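    As a minimal sketch (not the thesis' own implementation), the penalty described above can be written as a graph-Laplacian quadratic form over the dual coefficients, with the kernel matrix providing the pairwise weights; the names below are illustrative assumptions, and a symmetric kernel matrix is assumed.

    ```python
    import numpy as np

    def laplacian_penalty(alpha, K):
        """Sum over all pairs of K_ij * (alpha_i - alpha_j)^2.

        For a symmetric kernel matrix K this equals 2 * alpha^T (D - K) alpha,
        where D is the diagonal matrix of row sums of K."""
        D = np.diag(K.sum(axis=1))
        L = D - K                       # graph Laplacian of the kernel matrix
        return 2.0 * alpha @ L @ alpha
    ```

    Adding such a term to the dual objective pushes the coefficients of samples with high kernel similarity towards each other, which matches the intuition stated in the abstract.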

    Digital marketing actions that achieve a better attraction and loyalty of users: an analytical study

    Currently, the digital economy contributes decisively to increased competitiveness, especially since digital transformation involves migrating to new technological models in which digital marketing is a key part of growth and user-loyalty strategies. The Internet and Digital Marketing have become important factors in campaigns that attract and retain Internet users. This study aims to identify the main ways in which users can be gained and retained by using Digital Marketing. The methodology used in this study was the Delphi method with in-depth interviews. The results show the most important actions for achieving user recruitment and loyalty with Digital Marketing, according to the opinions of the consulted experts. The limitations of this study relate to the number of experts included and the number of research papers consulted in the literature review. The literature review and the results of this research are used to propose further research with a consolidated critical methodology. This research takes a new approach to optimizing web technologies for evolving user trends and will therefore be of academic and professional use to marketing managers and web-solution developers. The conclusions identify the key factors, discarding others that do not affect the optimization of conversions in B2C businesses, such as session duration and bounce rate. Likewise, the results identify the specific actions that must be carried out to attract and retain users in B2C companies that use the Digital Marketing ecosystem on the Internet. The requirements for companies that wish to implement a model to optimize conversions in the current digital economy are also shown.

    Spatiotemporal Stacked Sequential Learning for Pedestrian Detection

    Pedestrian classifiers decide which image windows contain a pedestrian. In practice, such classifiers provide a relatively high response at neighboring windows overlapping a pedestrian, while the responses around potential false positives are expected to be lower. Analogous reasoning applies to image sequences: if a pedestrian is located within a frame, the same pedestrian is expected to appear close to the same location in neighboring frames. Therefore, such a location is likely to receive high classification scores during several frames, while false positives are expected to be more spurious. In this paper we propose to exploit such correlations to improve the accuracy of base pedestrian classifiers. In particular, we propose two-stage classifiers which rely not only on the image descriptors required by the base classifiers but also on the responses of those base classifiers in a given spatiotemporal neighborhood. More specifically, we train pedestrian classifiers using a stacked sequential learning (SSL) paradigm. We use a new pedestrian dataset, acquired from a car, to evaluate our proposal at different frame rates. We also test on a well-known dataset: Caltech. The obtained results show that our SSL proposal boosts detection accuracy significantly with a minimal impact on the computational cost. Interestingly, SSL improves accuracy most in the most dangerous situations, i.e. when a pedestrian is close to the camera. Comment: 8 pages, 5 figures, 1 table.
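    The following is a hypothetical sketch of the two-stage idea, assuming the base classifier has already produced one score per candidate window and that the spatiotemporal neighbours of each window are known; function and variable names are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    def stack_features(descriptors, base_scores, neighbors):
        """descriptors: (n, d) image descriptors of candidate windows.
        base_scores: (n,) responses of the base pedestrian classifier.
        neighbors: (n, k) indices of each window's spatiotemporal neighbours."""
        context = base_scores[neighbors]          # (n, k) neighbouring responses
        return np.hstack([descriptors, context])  # augmented SSL features

    # Second stage: train any classifier on the augmented features, e.g.
    # clf2 = LinearSVC().fit(stack_features(X, scores, nb), labels)
    ```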

    Non-regular grid search

    Machine learning is a branch of artificial intelligence (AI) that gives systems the ability to learn and improve automatically from experience without being explicitly programmed. The main goal is to reduce the error made by these systems by optimizing (minimizing) the function that represents that error. The optimization methods used before the advent of high-speed computers were indirect: they relied on analytical properties of the function (derivatives and optimality conditions). With the growth of computing power, however, direct methods have become widespread; in these, the function is treated as a black box and successive trials are made, observing only the input and output values of the function. Direct methods have the advantage of being very easy to understand, although they are less efficient than indirect ones. This Bachelor's Thesis focuses on developing a modification of the classical regular grid search method, which belongs to the family of direct optimization methods, since the original procedure has limitations and is computationally very inefficient. To this end, variations of the regular grid are built so that the points are distributed over the space in a more strategic way for the subsequent evaluation of the hyperparameters. A variant of random search is also created in order to evaluate its performance. This document presents the different phases carried out: the initial design, the implementation of the modified grids, and the subsequent performance tests comparing them with the original methods. As a conclusion, we observe that the new non-regular grid and the modified random search behave well when minimizing random functions. Regarding hyperparameter optimization in a machine learning model, the new grids perform at least as well as the regular grid in our experiments, clearly outperforming it in two of the four experiments carried out.
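    A minimal illustration of the idea, assuming a 2-D hyperparameter space normalised to [0, 1]^2; the exact non-regular layout used in the thesis is not specified here, so a jittered regular grid stands in as one possible non-regular variant.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def regular_grid(n_per_axis):
        axis = np.linspace(0.0, 1.0, n_per_axis)
        return np.array([(x, y) for x in axis for y in axis])

    def non_regular_grid(n_per_axis):
        # perturb each regular point inside its own cell (illustrative scheme)
        cell = 1.0 / max(n_per_axis - 1, 1)
        pts = regular_grid(n_per_axis)
        return np.clip(pts + rng.uniform(-cell / 2, cell / 2, pts.shape), 0.0, 1.0)

    def random_search(n_points):
        return rng.uniform(0.0, 1.0, size=(n_points, 2))

    # Each candidate point is then mapped to the actual hyperparameter ranges and
    # evaluated with the model's validation error; the best point is kept.
    ```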

    Software-driven definition of virtual testbeds to validate emergent network technologies

    This paper is an extended version of our paper “Definición de Testbeds Virtualizados Utilizando Perfiles de Actividad de Red”, published in XIII Jornadas de Ingeniería Telemática (JITEL 2017), Valencia, Spain, 27–29 September 2017. The lack of privileged access to emergent and operational deployments is one of the key issues during validation and testing of novel telecommunication systems and technologies. It jeopardizes the repeatability of experiments, which creates burdens for innovation and research in these areas. In this light, we present a method and architecture that make the software-driven definition of virtual testbeds easier. As distinguishing features, our proposal can mimic operational deployments by using high-dimensional activity patterns. These activity patterns shape the behavior of a control module that triggers agents for the generation of network traffic. The solution exploits the capabilities of network emulation and virtualization systems, which nowadays can be easily deployed on commodity servers. With this, we accomplish a reproducible definition of realistic experimental conditions and the introduction of real agent implementations in a cost-effective fashion. We evaluate our solution in a case study comprising the validation of a network-monitoring tool for Voice over IP (VoIP) deployments. Our experimental results support the viability of the method and illustrate how this formulation can improve experimentation with emergent technologies. This work has been partially funded by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund under the projects TRÁFICA (MINECO/FEDER TEC2015-69417-C2-1-R) and RACING DRONES (MINECO/FEDER RTC-2016-4744-7).
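    Purely as an assumption-laden sketch of the control idea (no names or interfaces from the paper), an activity pattern giving the expected number of events per time slot can drive when traffic-generating agents are launched:

    ```python
    import time
    import numpy as np

    rng = np.random.default_rng()

    def control_loop(activity_pattern, trigger_agent, slot_seconds=1.0):
        """activity_pattern: iterable of expected agent activations per slot.
        trigger_agent: callback that launches one traffic-generation agent."""
        for expected in activity_pattern:
            for _ in range(rng.poisson(expected)):  # draw activations for the slot
                trigger_agent()
            time.sleep(slot_seconds)
    ```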

    Characteristics of the dimensions and sub-dimensions of young basketball players' personalities

    The aim of the present work is to assess the personality traits of young basketball players aged 16-18 years (n=186) through a description of the dimensions and sub-dimensions of the Big Five Questionnaire (BFQ). This was a non-experimental study using a descriptive cross-sectional design. The results obtained indicate that the players in the selected sample are characterized as people who are: a) moderately dynamic, extraverted and dominant; b) moderately altruistic, understanding and tolerant; c) moderately responsible, orderly and diligent; d) moderately balanced, calm, patient, and able to manage their emotions moderately well; and e) rather uncreative, unimaginative, and not well informed.

    Laboratory and telescope demonstration of the TP3-WFS for the adaptive optics segment of AOLI

    AOLI (Adaptive Optics Lucky Imager) is a state-of-the-art instrument that combines adaptive optics (AO) and lucky imaging (LI) with the objective of obtaining diffraction-limited images at visible wavelengths on mid- and large-size ground-based telescopes. The key innovation of AOLI is the development and use of the new TP3-WFS (Two Pupil Plane Positions Wavefront Sensor). The TP3-WFS, working in the visible band, represents an advance over classical wavefront sensors such as the Shack-Hartmann WFS (SH-WFS) because it can theoretically use fainter natural reference stars, which would ultimately provide better sky coverage to AO instruments using this newer sensor. This paper describes the software, algorithms and procedures that enabled AOLI to become the first astronomical instrument performing real-time adaptive optics corrections in a telescope with this new type of WFS, including the first control-related results at the William Herschel Telescope (WHT). This work was supported by the Spanish Ministry of Economy under the projects AYA2011-29024, ESP2014-56869-C2-2-P, ESP2015-69020-C2-2-R and DPI2015-66458-C2-2-R, by project 15345/PI/10 from the Fundación Séneca, by the Spanish Ministry of Education under grant FPU12/05573, by project ST/K002368/1 from the Science and Technology Facilities Council, and by ERDF funds from the European Commission. The results presented in this paper are based on observations made with the William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. Special thanks go to Lara Monteagudo and Marcos Pellejero for their timely contributions.

    Estimation of the parameters of token-buckets in multi-hop environments

    Bandwidth verification in shaping scenarios receives much attention from both operators and clients because of its impact on Quality of Service (QoS). As a result, measuring shapers’ parameters, namely the Committed Information Rate (CIR), Peak Information Rate (PIR) and Maximum Burst Size (MBS), is a relevant issue when it comes to assessing QoS. In this paper, we present a novel algorithm, TBCheck, which serves to accurately measure such parameters with minimal intrusiveness. These measurements are the cornerstone for the validation of Service Level Agreements (SLAs) along an end-to-end path with multiple shaping elements. As a further outcome of this measurement method, we define a formal taxonomy of multi-hop shaping scenarios. A thorough performance evaluation covering this taxonomy shows the advantages of TBCheck compared to other tools in the state of the art, yielding more accurate results even in the presence of cross-traffic. Additionally, our findings show that MBS estimation is unfeasible when the link load is high, regardless of the measurement technique, because the token bucket will always be empty. Consequently, we propose an estimation policy which maximizes accuracy by measuring CIR during busy hours, and PIR and MBS during off-peak hours. This work was partially supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund under the project Tráfica (MINECO/FEDER TEC2015-69417-C2-1-R).
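    A minimal token-bucket sketch (hypothetical parameter names, single hop) helps to see why MBS becomes unobservable under high load: tokens accumulate up to the bucket capacity only during idle periods, so a persistently busy link never exposes bursts larger than what the CIR alone allows.

    ```python
    class TokenBucket:
        """Single-rate token bucket: fill rate derived from the CIR and
        capacity equal to the MBS (illustrative names, not TBCheck's API)."""

        def __init__(self, cir_bps, mbs_bytes):
            self.rate = cir_bps / 8.0      # token fill rate in bytes per second
            self.capacity = mbs_bytes      # maximum accumulated burst (MBS)
            self.tokens = mbs_bytes
            self.last = 0.0

        def conforms(self, packet_bytes, now):
            elapsed = now - self.last
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True                # packet passes the shaper
            return False                   # packet is delayed or dropped
    ```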

    A CUDA Fortran GPU-parallelised hydrodynamic tool for high-resolution and long-term eco-hydraulic modelling

    Eco-hydraulic models are widely used tools to assess physical habitat suitability in aquatic environments. Currently, the application of these tools is limited to short river stretches and steady-flow simulations. However, this limitation can be overcome with the application of a high-performance computing technique: graphics processing unit (GPU) computing. R-Iber is a GPU-based hydrodynamic code parallelised in CUDA Fortran that, with the integration of a physical habitat module, performs as an eco-hydraulic numerical tool. R-Iber was validated and applied to real cases by using an optimised instream flow incremental methodology on long river reaches and long-term simulations. R-Iber reduces the computation time considerably, reaching speed-ups of two orders of magnitude compared to traditional computing. R-Iber overcomes the current limitations of eco-hydraulic tools by allowing high-resolution numerical models to be computed in a reasonable timeframe, which provides a better representation of the hydrodynamics and the physical habitat. The contract of D.D.-S. is funded by the International Center for Numerical Methods in Engineering (VAC-2021-1).

    Advances in applied research through physical and numerical modelling in dam engineering design

    At present, a very significant number of hydraulic engineering projects of diverse nature (dams, canals, desalination plants, storm tanks, hydroelectric power plants, sanitation works, etc.) are under development all over the world, with Spain frequently serving as a reference. This article presents some of the main ongoing research in the field of physical and numerical modelling in dam engineering, with the aim of improving the understanding of the hydraulic phenomena involved in dam operation and of developing new design tools to solve complex hydraulic problems.