
    Synthetic Data Generation using Benerator Tool

    Datasets with different characteristics are needed by the research community for experimental purposes. However, real data may be difficult to obtain due to privacy concerns, and it may not exhibit the specific characteristics required to verify new approaches under certain conditions. Given these limitations, synthetic data is a viable alternative to complement real data. In this report, we describe the process followed to generate synthetic data using Benerator, a publicly available tool. The results show that the synthetic data preserves a high level of accuracy compared to the original data. The generated datasets are microdata containing records with social, economic, and demographic attributes that mimic the distribution of aggregated statistics from the 2011 Irish Census.
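
    The report's actual Benerator descriptors are not reproduced in this listing, but the underlying idea, drawing attribute values so each one mimics a published marginal distribution, can be sketched in a few lines of Python. The category names and shares below are illustrative placeholders, not the actual 2011 Irish Census figures.

        import numpy as np

        rng = np.random.default_rng(42)

        # Aggregate shares of the kind a census publishes (illustrative
        # numbers, NOT the actual 2011 Irish Census figures).
        age_bands  = ["0-14", "15-39", "40-64", "65+"]
        age_shares = [0.21, 0.35, 0.31, 0.13]
        emp_status = ["employed", "unemployed", "inactive"]
        emp_shares = [0.55, 0.12, 0.33]

        # Draw microdata records so each attribute mimics its marginal.
        n = 10_000
        records = {
            "age_band":   rng.choice(age_bands, size=n, p=age_shares),
            "employment": rng.choice(emp_status, size=n, p=emp_shares),
        }

    Note that this sketch only matches marginal distributions; preserving joint distributions requires modelling dependencies between attributes.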

    SysGpr: System of Generation of Pseudo-realistic Synthetic Signals

    Signals obtained from sensors are widely used in different scientific fields. However, the resources needed to obtain such data are not always available, due to structural, physical, economic, or environmental constraints, data-collection failures, etc. It is in this scenario that synthetic data generation comes into play. Generating synthetic data has several benefits, such as reducing waiting times compared to the long periods some sensors require to collect large volumes of samples; in addition, the generated data can be as robust as users need it to be. This paper therefore presents a pseudo-realistic synthetic signal generation system for use in the validation of methods and the design of experiments. The proposed generation method uses statistical models and the gradient of the signal to produce new data. The developed system is publicly available as a web tool. This work was partially funded by project DPI2013-47347-C2-2-R. León, F.; Rodríguez-Lozano, F.J.; Cubero-Fernández, A.; Palomares, J.M.; Olivares, J. (2019). SysGpr: Sistema de generación de señales sintéticas pseudo-realistas. Revista Iberoamericana de Automática e Informática. 16(3):369-379. https://doi.org/10.4995/riai.2019.10025
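
    The paper's exact model is not detailed in this abstract; the following minimal sketch (plain Python with numpy, all names hypothetical) only conveys the general idea of driving generation with the statistics of the signal's gradient.

        import numpy as np

        def pseudo_realistic(signal, n_out, seed=0):
            """Fit a simple statistical model to the increments (discrete
            gradient) of a reference signal and integrate freshly sampled
            increments to obtain a new, statistically similar signal."""
            rng = np.random.default_rng(seed)
            grad = np.diff(np.asarray(signal, dtype=float))  # discrete gradient
            mu, sigma = grad.mean(), grad.std()              # gradient statistics
            steps = rng.normal(mu, sigma, size=n_out)        # sample new increments
            return signal[0] + np.concatenate(([0.0], np.cumsum(steps)))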

    Fraud detection for online banking for scalable and distributed data

    Online fraud causes billions of dollars in losses for banks; online banking fraud detection is therefore an important field of study. However, research in fraud detection faces many challenges. One constraint is the unavailability of bank datasets for research, or of datasets whose attributes have the required characteristics. Numeric data usually yields better performance for machine learning algorithms, yet most transaction data also has categorical (nominal) features, and some platforms, such as Apache Spark, only accept numeric data. Techniques such as one-hot encoding (OHE) are therefore needed to transform categorical features into numerical ones, but OHE has its own challenges: the transformed data is sparse, and the distinct values of an attribute are not always known in advance. Efficient feature engineering can improve an algorithm's performance but usually requires detailed domain knowledge to identify the correct features. Techniques like Ripple Down Rules (RDR) are suitable for fraud detection because of their low maintenance and incremental learning features, but achieving high classification accuracy on mixed datasets, especially at scale, is challenging, and evaluating RDR on distributed platforms is also difficult because it is not available on them. The thesis proposes the following solutions to these challenges:
    • A technique, Highly Correlated Rule Based Uniformly Distribution (HCRUD), to generate highly correlated, rule-based, uniformly distributed synthetic data.
    • A technique, One-hot Encoded Extended Compact (OHE-EC), to transform categorical features into numeric features by compacting sparse data even when not all distinct values are known.
    • A technique, Feature Engineering and Compact Unified Expressions (FECUE), to improve model efficiency through feature engineering where the domain of the data is not known in advance.
    • A Unified Expression RDR fraud detection technique (UE-RDR) for big data, proposed and evaluated on the Spark platform.
    Empirical tests were executed on a multi-node Hadoop cluster using well-known classifiers on bank data, synthetic bank datasets, and publicly available datasets from the UCI repository. These evaluations demonstrated substantial improvements in classification accuracy, ruleset compactness, and execution speed.
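
    As a point of reference for the OHE challenges mentioned above, here is ordinary one-hot encoding with scikit-learn, not the proposed OHE-EC compaction: the output is sparse, and a category unseen at fit time can only be mapped to an all-zeros row.

        from sklearn.preprocessing import OneHotEncoder

        # Two categorical transaction features (illustrative values).
        train = [["visa", "online"], ["mastercard", "atm"], ["visa", "atm"]]
        test  = [["amex", "online"]]          # "amex" never seen during fit

        enc = OneHotEncoder(handle_unknown="ignore")  # unknown values -> all zeros
        enc.fit(train)
        print(enc.transform(test).toarray())  # [[0. 0. 0. 1.]] (sparse by default)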

    Does k-anonymous microaggregation affect machine-learned macrotrends?

    In the era of big data, the availability of massive amounts of information makes privacy protection more necessary than ever. Among a variety of anonymization mechanisms, microaggregation is a common approach to satisfying the popular requirement of k-anonymity in statistical databases. In essence, k-anonymous microaggregation aggregates quasi-identifiers to hide the identity of each data subject within a group of k - 1 other subjects. Like any perturbative mechanism, however, anonymization comes at the cost of some information loss, which may hinder the ulterior purpose of the released data, very often the building of machine-learning models for macrotrend analysis. To assess the impact of microaggregation on the utility of the anonymized data, it is necessary to evaluate the resulting accuracy of said models. In this paper, we address the problem of measuring the effect of k-anonymous microaggregation on the empirical utility of microdata. We quantify utility as the accuracy of classification models learned from microaggregated data and evaluated over original test data. Our experiments indicate, with some consistency, that the impact of the de facto microaggregation standard (maximum distance to average vector, MDAV) on the performance of machine-learning algorithms is often minor to negligible for a wide range of k, for a variety of classification algorithms and data sets. Furthermore, experimental evidence suggests that the traditional measure of distortion in the microdata-anonymization community may be inappropriate for evaluating the utility of microaggregated data.
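
    For concreteness, a simplified numpy sketch of MDAV follows; the canonical algorithm treats the final 2k-to-3k-1 leftover records slightly differently, whereas this version lumps them into one group.

        import numpy as np

        def mdav(X, k):
            """Replace each record with its group centroid; every group has
            at least k records, hiding each record among k - 1 others.
            Assumes len(X) >= k."""
            X = np.asarray(X, dtype=float)
            idx = np.arange(len(X))
            out = np.empty_like(X)
            while len(idx) >= 3 * k:
                c = X[idx].mean(axis=0)                                    # centroid
                r = idx[np.argmax(np.linalg.norm(X[idx] - c, axis=1))]     # farthest from centroid
                s = idx[np.argmax(np.linalg.norm(X[idx] - X[r], axis=1))]  # farthest from r
                for ref in (r, s):                   # group each seed with its k-1 nearest
                    d = np.linalg.norm(X[idx] - X[ref], axis=1)
                    grp = idx[np.argsort(d)[:k]]
                    out[grp] = X[grp].mean(axis=0)
                    idx = np.setdiff1d(idx, grp)
            out[idx] = X[idx].mean(axis=0)           # last < 3k records form one group
            return out

    A utility experiment in the spirit of the paper would train a classifier on mdav(X_train, k) and evaluate it on the untouched original test set.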

    Efficient data transmission modeling for threshold-based multivariate events

    This doctoral thesis delves into the optimization of communications in sensor networks for a specific purpose: evaluating threshold-based events that depend on multiple distributed variables. The research, presented as a compendium of papers, is structured in three scientific contributions. The more theoretical work is described in two of them, while the third presents a methodological support tool of particular relevance to this thesis. Building on the two theoretical contributions, a solution is proposed and stated as a hypothesis. The first contribution is the mathematical foundation for modelling data reduction in the sensor network and measuring its influence on the quality of the event evaluation. To this end, a set of functions and parameters is defined; this logic modifies the cardinality of the mathematical domains in which the information is defined in order to save traffic. Specific metrics that account for the time delays in the state changes of the evaluated condition are also defined. The second contribution is an adaptive algorithm that, taking the logical context of the system information into account, parameterizes the proposed model at runtime. As a result, this technique simultaneously maximizes traffic reduction and minimizes error in the evaluation of the event, with promising results. The third, methodological contribution is a procedure for generating pseudo-realistic random signals, a useful tool for easily obtaining large datasets suitable for experimentation, which has been applied in the contributions described above.
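
    The thesis's model itself is not reproduced in this abstract; the sketch below (generic send-on-delta reporting, all names hypothetical) merely illustrates the traffic-versus-accuracy trade-off at stake when a threshold event depends on variables known only through occasional reports.

        def send_on_delta(samples, delta):
            """A sensor reports a sample only when it differs from the last
            reported value by more than delta; the rest stay local."""
            reported, last = [], None
            for t, x in enumerate(samples):
                if last is None or abs(x - last) > delta:
                    reported.append((t, x))
                    last = x
            return reported

        # Sink-side evaluation of a threshold event over two distributed
        # variables, each known only through its last report; a larger delta
        # cuts traffic but can delay detected state changes of the condition.
        def event_active(last_x1, last_x2, threshold):
            return last_x1 + last_x2 > threshold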

    Context-Aware Recommendation Systems in Mobile Environments

    Nowadays, the huge amount of information available may easily overwhelm users when they need to make a decision that involves choosing among several options. As a solution to this problem, Recommendation Systems (RS) have emerged to offer relevant items to users. The main goal of these systems is to recommend certain items based on user preferences. Unfortunately, traditional recommendation systems do not consider the user's context as an important dimension for ensuring high-quality recommendations. Motivated by the need to incorporate contextual information during the recommendation process, Context-Aware Recommendation Systems (CARS) have emerged. However, these recent recommendation systems are not designed with mobile users in mind, where the context and the movements of users and items may be important factors to consider when deciding which items should be recommended. Therefore, context-aware recommendation models should be able to effectively and efficiently exploit the dynamic context of the mobile user in order to offer suitable recommendations and keep them up to date. The research area of this thesis belongs to the fields of context-aware recommendation systems and mobile computing. We focus on the following scientific problem: how can we facilitate the development of context-aware recommendation systems in mobile environments that provide users with relevant recommendations? This work is motivated by the lack of generic and flexible context-aware recommendation frameworks that consider aspects related to mobile users and mobile computing. To solve the identified problem, we pursue the following general goal: the design and implementation of a context-aware recommendation framework for mobile computing environments that facilitates the development of context-aware recommendation applications for mobile users. In this thesis, we contribute to bridging the gap not only between recommendation systems and context-aware computing, but also between CARS and mobile computing.
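
    As a rough illustration of what distinguishes CARS from traditional recommenders, here is a minimal contextual pre-filtering sketch in Python (one standard CARS strategy, not this thesis's framework; all names hypothetical):

        from dataclasses import dataclass

        @dataclass
        class Rating:
            user: str
            item: str
            score: float
            context: dict   # e.g. {"location": "home", "time": "evening"}

        def recommend(ratings, target_ctx, n=3):
            """Keep only ratings whose context matches the target context,
            then rank items by their mean rating in that context."""
            filtered = [r for r in ratings
                        if all(r.context.get(k) == v for k, v in target_ctx.items())]
            by_item = {}
            for r in filtered:
                by_item.setdefault(r.item, []).append(r.score)
            return sorted(by_item,
                          key=lambda i: -sum(by_item[i]) / len(by_item[i]))[:n]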

    A preliminary systems-engineering study of an advanced nuclear-electrolytic hydrogen-production facility

    An advanced nuclear-electrolytic hydrogen-production facility concept was synthesized at a conceptual level with the objective of minimizing estimated hydrogen-production costs. The concept is a closely integrated, fully dedicated (only hydrogen energy is produced) system whose components and subsystems are predicated on "1985 technology." The principal components are: (1) a high-temperature gas-cooled reactor (HTGR) operating a helium-Brayton/ammonia-Rankine binary cycle with a helium reactor-core exit temperature of 980 C; (2) acyclic d-c generators; and (3) high-pressure, high-current-density electrolyzers based on solid-polymer electrolyte technology. Based on an assumed 3,000-MWt HTGR, the facility is capable of producing 8.7 million std cu m/day of hydrogen at pipeline conditions (6,900 kPa). Coproduct oxygen is also available at pipeline conditions at one-half this volume. It has further been shown that the incorporation of advanced technology provides an overall efficiency of about 43 percent, compared with 25 percent for a contemporary nuclear-electric plant powering close-coupled contemporary industrial electrolyzers.
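
    The quoted 43 percent can be sanity-checked from the stated figures, assuming a higher heating value for hydrogen of about 12.75 MJ per standard cubic meter (an assumption, not given in the source):

        # Sanity check of the ~43% overall efficiency from the stated figures.
        H2_HHV_MJ_PER_M3 = 12.75            # assumed hydrogen HHV per std cubic meter
        h2_volume_m3_per_day = 8.7e6        # stated production rate
        thermal_power_mw = 3000.0           # stated HTGR rating

        h2_energy_mj_per_day = h2_volume_m3_per_day * H2_HHV_MJ_PER_M3  # ~1.11e8 MJ/day
        heat_input_mj_per_day = thermal_power_mw * 86_400               # MW * s/day

        print(f"overall efficiency ~ {h2_energy_mj_per_day / heat_input_mj_per_day:.0%}")
        # -> overall efficiency ~ 43%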

    Contribution to privacy-enhancing technologies for machine learning applications

    For some time now, big data applications have been enabling revolutionary innovation in every aspect of our daily life by taking advantage of the wealth of data generated from users' interactions with technology. Supported by machine learning and unprecedented computation capabilities, different entities can efficiently exploit such data to obtain significant utility. However, since personal information is involved, these practices raise serious privacy concerns. Although multiple privacy protection mechanisms have been proposed, several challenges must be addressed for these mechanisms to be adopted in practice, that is, to be "usable" beyond the privacy guarantee offered. To start, the real impact of privacy protection mechanisms on data utility is not clear, so an empirical evaluation of this impact is crucial. Moreover, since privacy is commonly obtained through the perturbation of large data sets, usable privacy technologies may require not only preservation of data utility but also computationally efficient algorithms. Satisfying both requirements is key to encouraging the adoption of privacy initiatives. Although considerable effort has been devoted to designing less "destructive" privacy mechanisms, the utility metrics employed may not be appropriate, so the merit of such mechanisms may be measured incorrectly. On the other hand, despite the advent of big data, more efficient approaches are not being considered, and failing to meet the requirements of current applications may hinder the adoption of privacy technologies. In the first part of this thesis, we address the problem of measuring the effect of k-anonymous microaggregation on the empirical utility of microdata. We quantify utility as the accuracy of classification models learned from microaggregated data and evaluated over original test data. Our experiments show that the impact of the de facto microaggregation standard on the performance of machine-learning algorithms is often minor for a variety of data sets. Furthermore, experimental evidence suggests that the traditional measure of distortion in the community of microdata anonymization may be inappropriate for evaluating the utility of microaggregated data. Second, we address the problem of preserving the empirical utility of data. By transforming the original data records into a different data space, our approach, based on linear discriminant analysis, enables k-anonymous microaggregation to be adapted to the application domain of the data. To do this, the data is first rotated (projected) towards the direction of maximum discrimination and then scaled in this direction, penalizing distortion across the classification threshold. As a result, data utility is preserved in terms of the accuracy of machine-learned models for a number of standardized data sets. Afterwards, we propose a mechanism to reduce the running time of the k-anonymous microaggregation algorithm, obtained by simplifying the internal operations of the original algorithm. Through extensive experimentation over multiple data sets, we show that the new algorithm is significantly faster; interestingly, this remarkable speedup is achieved with no additional loss of data utility. Finally, in a more applied vein, a tool is proposed for protecting the privacy of individuals and organizations by anonymizing sensitive data included in security logs. Several anonymization mechanisms are designed and implemented according to the definition of a privacy policy, in the context of a European project aimed at building a unified security system.
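
    The rotate-and-scale step can be sketched with scikit-learn's LinearDiscriminantAnalysis. Binary classes are assumed, and since the thesis's exact scaling scheme is not given in this abstract, the scale factor below is an illustrative stand-in.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def lda_scaled(X, y, scale=3.0):
            """Stretch data along the direction of maximum class discrimination
            so that microaggregation groups are less likely to straddle the
            classification boundary. `scale` is an illustrative knob, not the
            thesis's calibrated penalty."""
            X = np.asarray(X, dtype=float)
            w = LinearDiscriminantAnalysis().fit(X, y).coef_[0]  # binary case
            w = w / np.linalg.norm(w)        # unit discriminant direction
            proj = X @ np.outer(w, w)        # component of each record along w
            return X + (scale - 1.0) * proj  # stretch by `scale` along w only

        # X_anon = mdav(lda_scaled(X_train, y_train), k=5)  # with the MDAV sketch above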
