15 research outputs found

    Urban Informatics

    Get PDF
    This open access book is the first to systematically introduce the principles of urban informatics and its application to every aspect of the city that involves its functioning, control, management, and future planning. It introduces new models and tools being developed to understand and implement these technologies that enable cities to function more efficiently – to become ‘smart’ and ‘sustainable’. The smart city has quickly emerged as computers have become ever smaller to the point where they can be embedded into the very fabric of the city, as well as being central to new ways in which the population can communicate and act. When cities are wired in this way, they have the potential to become sentient and responsive, generating massive streams of ‘big’ data in real time as well as providing immense opportunities for extracting new forms of urban data through crowdsourcing. This book offers a comprehensive review of the methods that form the core of urban informatics, from various kinds of urban remote sensing to new approaches to machine learning and statistical modelling. It provides a detailed technical introduction to the wide array of tools information scientists need to develop the key urban analytics that are fundamental to learning about the smart city, and it outlines ways in which these tools can be used to inform design and policy so that cities can become more efficient with a greater concern for environment and equity.

    Diseño de una metodología cibermétrica de cálculo del éxito para la optimización de contenidos web

    Full text link
    The object of this study is the design of a cybermetric methodology that measures the success of the content published in an online medium and, where possible, predicts the selected success variables. Framed in the field of digital journalism, it responds to the need to analyse the success of web content in a way that supports the decision-making of the editorial team of a digital medium. The line of research focuses on the content itself, providing a vision that is new with respect to previous research, and a methodology that serves as a basis for future scientific advances. The contribution is valuable information, drawn either from the statistical analysis of the data or from the prediction of the success indicators of greatest interest to the medium; fed back into the content strategy, it can drive the iterative optimisation of future content. The work rests on an extensive review of the academic literature across the disciplines involved: digital content communication, Twitter, the diffusion of news on Twitter, web analytics, cybermetrics, Twitter analytics, the analysis of Twitter trends, and web advertising. The main objective is therefore the design of a cybermetric methodology for calculating the success of an online publication, with the following specific objectives: to investigate the concept of success in digital journalism, the social network Twitter, web analytics, and web advertising; to design the methodology and determine which tools and reports are needed; to extract and process the data for statistical analysis; to run regressions that yield prediction equations for the selected success variables; and to validate the prediction equations against test data and obtain their precision, which serves as a degree of confidence in the prediction. Applying the methodology revealed significant over-dispersion in the data and demonstrated that the success of a piece of web content is strongly multifactorial, which lowers the variability explained by the indicators proposed in previous research. This thesis therefore lays the foundation for a line of research of great interest both academically and commercially: the prediction of the success of digital content.

    Yeste Moreno, VM. (2021). Diseño de una metodología cibermétrica de cálculo del éxito para la optimización de contenidos web [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/176009
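
    As a concrete illustration of the regression step, a minimal sketch follows, assuming a hypothetical dataset of articles with a count-valued success metric. The file and column names are placeholders, not the thesis's actual variables; a negative binomial family is one standard way to handle the over-dispersion the thesis reports.

    ```python
    # Minimal sketch: predicting an over-dispersed success count
    # (hypothetical "page_views") from content features. The CSV files
    # and column names are placeholders, not the thesis's data.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    articles = pd.read_csv("articles_train.csv")  # hypothetical training set

    # A negative binomial family handles count targets whose variance
    # exceeds their mean, i.e. the over-dispersion observed in the data.
    model = smf.glm(
        "page_views ~ tweets + headline_length + C(section)",
        data=articles,
        family=sm.families.NegativeBinomial(alpha=1.0),
    ).fit()
    print(model.summary())

    # Validate against held-out data: the gap between predictions and
    # observed counts serves as a degree of confidence in the prediction.
    test = pd.read_csv("articles_test.csv")       # hypothetical test split
    predicted = model.predict(test)
    ```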

    Factories of the Future

    Get PDF
    Engineering; Industrial engineering; Production engineering

    Calibración de un algoritmo de detección de anomalías marítimas basado en la fusión de datos satelitales

    Get PDF
    Fusing different data sources provides significant support in the decision-making process. This article describes the development of a platform that detects maritime anomalies by fusing data from the Automatic Identification System (AIS) for vessel tracking with Synthetic Aperture Radar (SAR) satellite imagery. These anomalies are presented to the operator as a set of detections that must be monitored to determine their nature. The detection process first identifies objects within the SAR images by applying CFAR algorithms, and then correlates the detected objects with the data reported through the AIS. We report tests performed with different configurations of the parameters of the detection and association algorithms, analyse the platform's response, and report the parameter combination that yields the best results for the images used. This is a first step towards our future goal of developing a system that adjusts the parameters dynamically depending on the available images.

    XVI Workshop Computación Gráfica, Imágenes y Visualización (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI).
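
    For illustration only, the sketch below shows a minimal cell-averaging CFAR (CA-CFAR) pass over a SAR intensity image, the kind of detection step the platform applies before associating detections with AIS reports. The window sizes and threshold factor are illustrative assumptions, not the calibrated parameter combination reported in the article.

    ```python
    # Minimal sketch of cell-averaging CFAR (CA-CFAR) over a SAR
    # intensity image. Window sizes and the threshold factor are
    # illustrative, not the calibrated values reported in the article.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def ca_cfar(image, guard=2, train=8, threshold_factor=5.0):
        """Boolean mask of pixels exceeding the local clutter estimate."""
        outer = 2 * (guard + train) + 1   # guard + training window width
        inner = 2 * guard + 1             # guard-only window width

        # Box-filter means scaled to sums; subtracting the guard region
        # leaves just the training cells that estimate background clutter.
        sum_outer = uniform_filter(image, outer, mode="reflect") * outer**2
        sum_inner = uniform_filter(image, inner, mode="reflect") * inner**2
        clutter_mean = (sum_outer - sum_inner) / (outer**2 - inner**2)

        return image > threshold_factor * clutter_mean

    # Hypothetical usage: detections = ca_cfar(sar_intensity)
    # Each detection would then be matched against AIS position reports;
    # unmatched detections are the candidate anomalies shown to operators.
    ```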

    Footstep: An Approach to Track Web Usage in Real Time

    Get PDF
    Understanding user behaviour is essential to the success of a website. Existing approaches that analyse user behaviour rely on server logs. These logs record details about what each user accessed, for example in an online store. Although logs are useful for providing a good sense of user behaviour, they do not provide detailed information about the actions users perform on the pages they access. To attack this problem, this work proposes Footstep, a complete system that collects, processes, and analyses the events fired by users at the level of individual page elements. Footstep tracks users as they navigate through pages, collecting information about the DOM structure of the elements associated with each event. Footstep also provides a graph-based data model that makes it easy to extract useful information from the collected events. Finally, an analytics tool is provided to visualise information about interactions with pages and elements, such as audience, navigation flows, and conversion rates of pages and elements.
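
    As a hedged sketch of how a graph-based model over collected events might look, the snippet below loads hypothetical page-transition events into a directed graph and reads off a navigation flow and a conversion rate. The event fields are invented for illustration; Footstep's actual schema also captures the DOM structure of the element that fired each event.

    ```python
    # Minimal sketch of a graph-based model over collected user events.
    # The event fields are hypothetical, not Footstep's real schema.
    import networkx as nx

    events = [  # stand-in for events gathered by the tracker
        {"user": "u1", "page": "/home",    "next_page": "/product"},
        {"user": "u1", "page": "/product", "next_page": "/checkout"},
        {"user": "u2", "page": "/home",    "next_page": "/product"},
        {"user": "u2", "page": "/product", "next_page": "/home"},
    ]

    g = nx.DiGraph()
    for e in events:
        # Edge weights count how often users move page -> next_page.
        if g.has_edge(e["page"], e["next_page"]):
            g[e["page"]][e["next_page"]]["weight"] += 1
        else:
            g.add_edge(e["page"], e["next_page"], weight=1)

    # Navigation flow out of /product and a simple conversion rate:
    out_flow = g.out_degree("/product", weight="weight")
    to_checkout = g["/product"]["/checkout"]["weight"]
    print(f"conversion rate from /product: {to_checkout / out_flow:.0%}")
    ```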

    Geospatial Computing: Architectures and Algorithms for Mapping Applications

    Get PDF
    Beginning with the MapTube website (1), which was launched in 2007 for crowd-sourcing maps, this project investigates approaches to exploratory Geographic Information Systems (GIS) using web-based mapping, or ‘web GIS’. Users can log in to upload their own maps and overlay different layers of GIS data sets. This work looks into the theory behind how web-based mapping systems function and whether their performance can be modelled and predicted. One of the important questions when dealing with different geospatial data sets is how they relate to one another. Internet data stores provide another source of information, which can be exploited if more generic geospatial data-mining techniques are developed. Identifying similarities between thousands of maps is a GIS technique that can give structure to the overall fabric of the data, once the problems of scalability and of comparing different geographies are solved. After nine years of running MapTube to crowd-source data, the natural progression is from visualising individual maps to the wider question of what additional knowledge can be discovered from the collected data. In the new ‘data science’ age, real-time data sets pose a fresh challenge for web-based mapping applications. Mapping real-time geospatial systems is technically challenging, but has the potential to show inter-dependencies as they emerge in the time series. Combined geospatial and temporal data mining of real-time sources can provide archives of transport and environmental data from which to accurately model the systems under investigation. Using techniques from machine learning, the models can be built directly from the real-time data stream and then used for analysis and experimentation, being derived directly from city data. This in turn leads to an analysis of the behaviours of the interacting systems. (1) The MapTube website: http://www.maptube.org
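
    To make the map-similarity idea concrete, the sketch below scores two thematic layers once both have been rasterised onto a common grid, sidestepping the cross-geography problem the abstract mentions. It is a minimal sketch under that assumption, not the project's actual method.

    ```python
    # Minimal sketch: first-pass similarity between two map layers that
    # have already been rasterised onto the same grid. Comparing maps on
    # different geographies would first require this regridding step.
    import numpy as np

    def map_similarity(layer_a, layer_b):
        """Pearson correlation over grid cells where both layers have data."""
        valid = ~np.isnan(layer_a) & ~np.isnan(layer_b)
        return np.corrcoef(layer_a[valid], layer_b[valid])[0, 1]

    # Hypothetical usage with two crowd-sourced layers on a common grid:
    # score = map_similarity(population_density, house_prices)
    # Scoring all pairs across thousands of maps gives the structure the
    # project describes, once the scalability problem is addressed.
    ```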

    A visual approach for probing learned models

    Get PDF
    Deep learning models are complex neural networks that can accomplish a wide range of tasks effectively, including machine translation, speech recognition, and image classification. However, recent research has shown that transformations of the input data can dramatically degrade the performance of these models. This effect is especially startling with adversarial perturbations, which aim to fool a deep neural network while remaining barely perceptible. The complexity of these networks makes it hard to understand where and why they fail. Previous work has attempted to provide insights into the inner workings of these models in various ways. A survey of these existing systems concludes that they fail to provide an integrated approach for probing how specific changes to the input data are represented within a trained model. This thesis introduces Advis, a visualization system for analyzing the impact of input data transformations on a model's performance and on its internal representations. For performance analysis, it displays various metrics of prediction quality and robustness using lists and a radar chart. An interactive confusion matrix supports pattern detection and input image selection. Insights into the impact of data distortions on internal representations can be gained by combining a color-coded computation graph with detailed activation visualizations. The system is based on a highly flexible architecture that enables users to adapt it to the specific requirements of their task. Three use cases demonstrate the usefulness of the system for probing and comparing the impact of input transformations on the performance metrics and internal representations of various networks. The insights gained through this system show that interactive visual approaches for understanding the effect of input perturbations on deep learning models are an area worth further investigation.
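
    For context on the input transformations Advis is designed to probe, the fast gradient sign method (FGSM) is a standard way to craft a barely perceptible adversarial perturbation. The sketch below is a generic PyTorch version, not code from the thesis; `model` stands for any differentiable classifier.

    ```python
    # Generic sketch of the fast gradient sign method (FGSM); `model` is
    # any differentiable classifier, `image` a batched tensor in [0, 1],
    # and `label` the true class indices. Not code from the thesis.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.01):
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel in the direction that most increases the loss,
        # then clamp back to the valid image range.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

    # Usage: compare model(image) with model(fgsm_perturb(model, image, y));
    # a prediction that flips under this barely perceptible change is
    # exactly the failure mode a system like Advis helps to inspect.
    ```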

    Irish Machine Vision and Image Processing Conference Proceedings 2017

    Get PDF