    Comparative Analysis of Techniques Used to Detect Copy-Move Tampering for Real-World Electronic Images

    The evolution of computationally powerful computers, the easy availability of innovative image-editing software, and high-definition image-capture tools have made it effortless to produce image forgeries. Threats arising from the misuse and misinterpretation of digital images and scenes have been observed for a long time, and considerable research has been devoted to developing techniques for authenticating digital images. This research is not limited to checking the validity of digital photos; it also explores the specific signs of distortion or forgery. Such analysis requires neither prior knowledge of the intrinsic content of the corresponding digital image nor prior embedding of watermarks. In this paper, recent progress in digital image tampering detection is discussed, and a benchmarking study is presented with qualitative and quantitative results. Across a variety of methodologies and concepts, different applications of forgery detection are reviewed together with their outcomes, with a particular focus on machine learning and deep learning methods for building efficient automated forgery detection systems. Future applications and the development of advanced soft-computing techniques for digital image tampering detection are also discussed.
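
    As a rough illustration of one technique family such surveys compare, the sketch below implements naive block-matching copy-move detection: overlapping blocks are quantised and hashed, and an offset shared by many matching block pairs flags a likely duplicated region. All parameters (block size, quantisation step, pair threshold) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def detect_copy_move(gray, block=8, quant=16, min_pairs=10):
    """Return {offset: pair count} for offsets shared by many duplicate blocks."""
    h, w = gray.shape
    seen = {}      # quantised block content -> first (row, col) where it appeared
    offsets = {}   # (dy, dx) -> number of matching block pairs
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = (gray[y:y + block, x:x + block] // quant).tobytes()
            if key in seen:
                sy, sx = seen[key]
                off = (y - sy, x - sx)
                offsets[off] = offsets.get(off, 0) + 1
            else:
                seen[key] = (y, x)
    return {o: n for o, n in offsets.items() if n >= min_pairs}

# Toy check: plant a copied patch in a synthetic image and recover its offset.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
img[40:56, 40:56] = img[8:24, 8:24]   # simulated copy-move, offset (32, 32)
print(detect_copy_move(img))          # expect a dominant (32, 32) entry
```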

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and presents challenges in a variety of applications across both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand, ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; the interested reader can therefore choose any chapter and skip to another without losing continuity.

    Gender and Ethnicity Classification Using Partial Face in Biometric Applications

    As the number of biometric applications increases, so does the use of non-ideal information such as images that are not strictly controlled, images taken covertly, or images in which the main subject is partially occluded. Face images are a specific example of this. In these non-ideal instances, other information, such as gender and ethnicity, can be determined to narrow the search space and/or improve the recognition results. Some research exists on gender classification using partial-face images, but there is little research involving ethnicity classification on such images. Few datasets have had the ethnic diversity and the number of subjects per ethnicity needed to perform this evaluation. Research is also lacking on how gender and ethnicity classifications on partial-face images are impacted by age. If the extracted gender and ethnicity information is to be integrated into a larger system, some measure of the reliability of the extracted information is needed. This study provides an analysis of gender and ethnicity classification on large datasets captured by non-researchers under day-to-day operations, using texture, color, and shape features extracted from partial-face regions. This analysis allows for a greater understanding of the limitations of various facial regions for gender and ethnicity classification. These limitations will guide the integration of automatically extracted partial-face gender and ethnicity information into a biometric face application in order to improve recognition under non-ideal circumstances. Overall, the results from this work showed that reliable gender and ethnicity classification can be achieved from partial-face images. Different regions of the face hold varying amounts of gender and ethnicity information. For machine classification, the upper face regions hold more ethnicity information, while the lower face regions hold more gender information. All regions were impacted by age, but the eyes were impacted the most in texture and color. The shape of the nose changed more with respect to age than any of the other regions.
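
    The abstract does not specify the exact features or classifiers used, so the following is a hedged sketch of one common texture pipeline for a face region: a local binary pattern (LBP) histogram fed to a linear SVM. The data, the LBP variant, and the classifier choice here are stand-in assumptions, exercised on synthetic textures.

```python
import numpy as np
from sklearn.svm import SVC

def lbp_histogram(region):
    """Normalised 8-neighbour local binary pattern histogram (256 bins)."""
    r = region.astype(float)
    c = r[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = r[1 + dy:r.shape[0] - 1 + dy, 1 + dx:r.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def box_blur(a):
    """3x3 mean filter, used only to synthesise a smoother texture class."""
    p = np.pad(a.astype(float), 1, mode="edge")
    return sum(p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9

# Two synthetic texture classes standing in for face-region crops.
rng = np.random.default_rng(1)
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        img = rng.integers(0, 256, (32, 32)).astype(float)
        if label:
            img = box_blur(img)          # class 1: smoother texture
        X.append(lbp_histogram(img))
        y.append(label)
X, y = np.array(X), np.array(y)
clf = SVC(kernel="linear").fit(X[::2], y[::2])   # train on half
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```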

    The brushstroke and materials of Amadeo de Souza-Cardoso combined in an authentication tool

    Nowadays, authentication studies for paintings require a multidisciplinary approach, based on the analysis of visual features but also on the characterization of materials and techniques. Moreover, it is important that the assessment of the authorship of a painting is supported by technical studies of a selected number of original artworks that cover the entire career of an artist. This dissertation is concerned with the work of the modernist painter Amadeo de Souza-Cardoso and is divided into three parts. In the first part, we propose a tool based on image processing that combines information obtained from brushstroke and materials analysis. The resulting tool provides qualitative and quantitative evaluation of the authorship of paintings; the quantitative element is particularly relevant, as it could be crucial in solving authorship controversies, such as judicial disputes. The brushstroke analysis was performed by combining two algorithms for feature detection, namely the Gabor filter and the Scale Invariant Feature Transform. Thanks to this combination (and to the use of the Bag-of-Features model), the proposed method shows an accuracy higher than 90% in distinguishing between images of Amadeo's paintings and images of artworks by other contemporary artists. For the molecular analysis, we implemented a semi-automatic system that uses hyperspectral imaging and elemental analysis. The system outputs an image that maps the pigments present, together with any areas made using materials not consistent with Amadeo's palette. This visual output is a simple and effective way of assessing the results of the system. The tool, based on the combination of brushstroke and molecular information, was tested on twelve paintings with promising results.
The second part of the thesis presents a systematic study of four selected paintings made by Amadeo in 1917. Although untitled, three of these paintings are commonly known as BRUT, Entrada and Coty; they are considered his most successful and genuine works. The materials and techniques of these artworks had never been studied before. The paintings were studied with a multi-analytical approach using micro-Energy Dispersive X-ray Fluorescence spectroscopy, micro-Infrared and Raman spectroscopy, micro-spectrofluorimetry and Scanning Electron Microscopy. The characterization of the materials and techniques Amadeo used in his last paintings, as well as the investigation of some of the conservation problems that affect them, is essential to enrich our knowledge of this artist. Moreover, the study of the materials in the four paintings reveals commonalities between BRUT and Entrada. This observation is also supported by the analysis of the elements present in a photograph of a collage (conserved at the Art Library of the Calouste Gulbenkian Foundation), the only remaining evidence of a supposed maquette of these paintings.
The final part of the thesis describes the application of the image processing tools developed in the first part to a set of case studies; this experience demonstrates the potential of the tool to support painting analysis and authentication studies. The brushstroke analysis was used as an additional analysis in the evaluation process of four paintings attributed to Amadeo, and the system based on hyperspectral analysis was applied to the painting dated 1917. The case studies therefore serve as a bridge between the first two parts of the dissertation.
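
    As a compact sketch of the Bag-of-Features brushstroke signature described above (the thesis combines Gabor filters with SIFT; this sketch uses only the Gabor side): filter-bank responses sampled on a grid are quantised against a k-means vocabulary and histogrammed per image. Filter parameters and vocabulary size are illustrative assumptions, not the thesis configuration.

```python
import numpy as np
from scipy.signal import fftconvolve
from sklearn.cluster import KMeans

def gabor_kernel(theta, lam=8.0, sigma=4.0, size=15):
    """Real Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

BANK = [gabor_kernel(t) for t in np.linspace(0, np.pi, 6, endpoint=False)]

def patch_descriptors(img, step=8):
    """One descriptor per grid point: the 6 Gabor response magnitudes."""
    responses = np.stack([np.abs(fftconvolve(img, k, mode="same")) for k in BANK])
    return responses[:, ::step, ::step].reshape(len(BANK), -1).T

def bof_signature(img, vocab):
    """Histogram of visual-word assignments: the image's BoF signature."""
    words = vocab.predict(patch_descriptors(img))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

# Toy usage with random "paintings"; real use would pool descriptors from many
# images to train the vocabulary, then classify the resulting histograms.
rng = np.random.default_rng(2)
imgs = [rng.random((128, 128)) for _ in range(4)]
all_desc = np.vstack([patch_descriptors(i) for i in imgs])
vocab = KMeans(n_clusters=32, n_init=4, random_state=0).fit(all_desc)
print(bof_signature(imgs[0], vocab).shape)   # (32,) image signature
```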

    The Optimisation of Elementary and Integrative Content-Based Image Retrieval Techniques

    Image retrieval plays a major role in many image processing applications. However, a number of factors (e.g. rotation, non-uniform illumination, noise and lack of spatial information) can disrupt the outputs of image retrieval systems such that they cannot produce the desired results. In recent years, many researchers have introduced different approaches to overcome this problem. Colour-based CBIR (content-based image retrieval) and shape-based CBIR have been the most commonly used techniques for obtaining image signatures. Although the colour histogram and the shape descriptor have produced satisfactory results for certain applications, they still suffer from many theoretical and practical problems, a prominent one being the well-known "curse of dimensionality". In this research, a new Fuzzy Fusion-based Colour and Shape Signature (FFCSS) approach for integrating colour-only and shape-only features has been investigated to produce an effective image feature vector for database retrieval. The proposed technique is based on an optimised fuzzy colour scheme and robust shape descriptors. Experimental tests were carried out to check the behaviour of the FFCSS-based system, including the sensitivity and robustness of the proposed signature for the sampled images, especially under varied conditions of rotation, scaling, noise and light intensity. To further improve the retrieval efficiency of the devised signature model, the target image repositories were clustered into several groups using the k-means clustering algorithm at system runtime, with the search beginning at the centres of each cluster. The FFCSS-based approach has proven superior to other benchmarked classic CBIR methods; hence this research makes a substantial contribution on both theoretical and practical fronts.
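
    A minimal sketch of the retrieval flow just described, assuming stand-in descriptors: a fused colour-plus-shape signature per image, a k-means-clustered repository, and a search that starts from the nearest cluster centre. The descriptor choices and fusion weights below are illustrative, not the FFCSS design.

```python
import numpy as np
from sklearn.cluster import KMeans

def colour_hist(img, bins=8):
    """Normalised 3-D colour histogram, flattened to a vector."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    return h.ravel() / h.sum()

def shape_signature(gray, k=16):
    """Rotation-tolerant proxy: FFT magnitude pooled into k radial bins."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    cy, cx = np.array(f.shape) // 2
    y, x = np.indices(f.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    radial = np.bincount(r.ravel(), weights=f.ravel(), minlength=r.max() + 1)
    edges = np.linspace(0, len(radial), k + 1, dtype=int)
    sig = np.array([radial[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])
    return sig / sig.sum()

def signature(img, w=0.5):
    """Fused colour + shape feature vector with a simple weighting."""
    gray = img.mean(axis=2)
    return np.concatenate([w * colour_hist(img), (1 - w) * shape_signature(gray)])

# Build a toy repository, cluster it, and search the nearest cluster first.
rng = np.random.default_rng(3)
repo = [rng.integers(0, 256, (64, 64, 3)).astype(float) for _ in range(40)]
sigs = np.array([signature(i) for i in repo])
km = KMeans(n_clusters=4, n_init=4, random_state=0).fit(sigs)
q = signature(repo[7])
cluster = km.predict(q[None])[0]                    # start at nearest centre
members = np.where(km.labels_ == cluster)[0]
best = members[np.argmin(np.linalg.norm(sigs[members] - q, axis=1))]
print("query 7 ->", best)                           # expect 7
```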

    Intelligent Systems

    This book is dedicated to intelligent systems of broad-spectrum application, such as personal and social biosafety and intelligent sensory micro-nanosystems such as the "e-nose", "e-tongue" and "e-eye". In addition, effective information acquisition, knowledge management and improved knowledge transfer in any medium, as well as the modeling of information content using meta- and hyper-heuristics and semantic reasoning, all benefit from the systems covered in this book. Intelligent systems can also be applied in education, in generating intelligent distributed eLearning architectures, and in a large number of technical fields, such as industrial design, manufacturing and utilization, e.g., in precision agriculture, cartography, electric power distribution systems, intelligent building management systems, and drilling operations. Furthermore, decision making using fuzzy logic models, computational recognition of comprehension uncertainty, the joint synthesis of goals and means of intelligent behavior in biosystems, and diagnostic and human support in the healthcare environment have also been made easier.

    24th International Conference on Information Modelling and Knowledge Bases

    In the last three decades, information modelling and knowledge bases have become essential subjects not only in academic communities related to information systems and computer science but also in business areas where information technology is applied. The series of European-Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a co-operation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe and other countries. The workshop character of the conference - discussion, ample time for presentations, and a limited number of participants (50) and papers (30) - is typical of the series. Suggested topics include, but are not limited to:
1. Conceptual modelling: modelling and specification languages; domain-specific conceptual modelling; concepts, concept theories and ontologies; conceptual modelling of large and heterogeneous systems; conceptual modelling of spatial, temporal and biological data; methods for developing, validating and communicating conceptual models.
2. Knowledge and information modelling and discovery: knowledge discovery, knowledge representation and knowledge management; advanced data mining and analysis methods; conceptions of knowledge and information; modelling information requirements; intelligent information systems; information recognition and information modelling.
3. Linguistic modelling: models of HCI; information delivery to users; intelligent informal querying; linguistic foundations of information and knowledge; fuzzy linguistic models; philosophical and linguistic foundations of conceptual models.
4. Cross-cultural communication and social computing: cross-cultural support systems; integration, evolution and migration of systems; collaborative societies; multicultural web-based software systems; intercultural collaboration and support systems; social computing, behavioral modeling and prediction.
5. Environmental modelling and engineering: environmental information systems (architecture); spatial, temporal and observational information systems; large-scale environmental systems; collaborative knowledge base systems; agent concepts and conceptualisation; hazard prediction, prevention and steering systems.
6. Multimedia data modelling and systems: modelling multimedia information and knowledge; content-based multimedia data management; content-based multimedia retrieval; privacy and context-enhancing technologies; semantics and pragmatics of multimedia data; metadata for multimedia information systems.
Overall, we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the programme committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised after the conference and published in the series "Frontiers in Artificial Intelligence and Applications" by IOS Press (Amsterdam). The books "Information Modelling and Knowledge Bases" are edited by the Editing Committee of the conference. We believe that the conference will be productive and fruitful in advancing the research and application of information modelling and knowledge bases. Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyoki

    From Molecules to the Masses: Visual Exploration, Analysis, and Communication of Human Physiology

    The overarching theme of this thesis is the cross-disciplinary application of medical illustration and visualization techniques to address challenges in exploring, analyzing, and communicating aspects of physiology to audiences with differing expertise. Describing the myriad biological processes occurring in living beings over time, the science of physiology is complex and critical to our understanding of how life works. It spans many spatio-temporal scales to combine and bridge the basic sciences (biology, physics, and chemistry) to medicine. Recent years have seen an explosion of new and finer-grained experimental and acquisition methods to characterize these data. The volume and complexity of these data necessitate effective visualizations to complement standard analysis practice.
Visualization approaches must carefully consider and be adaptable to the user's main task, be it exploratory, analytical, or communication-oriented. This thesis contributes to the areas of theory, empirical findings, methods, applications, and research replicability in visualizing physiology. Our contributions open with a state-of-the-art report exploring the challenges and opportunities in visualization for physiology. This report is motivated by the need for visualization researchers, as well as researchers in various application domains, to have a centralized, multiscale overview of visualization tasks and techniques. Using a mixed-methods search approach, this is the first report of its kind to broadly survey the space of visualization for physiology. Our approach to organizing the literature in this report enables the lookup of topics of interest according to spatio-temporal scale. It further subdivides works according to any combination of three high-level visualization tasks: exploration, analysis, and communication. This provides an easily-navigable foundation for discussion and future research opportunities for audience- and task-appropriate visualization for physiology. From this report, we identify two key areas for continued research that begin narrowly and subsequently broaden in scope: (1) exploratory analysis of multifaceted physiology data for expert users, and (2) communication for experts and non-experts alike. Our investigation of multifaceted physiology data takes place over two studies. Each targets processes occurring at different spatio-temporal scales and includes a case study with experts to assess the applicability of our proposed method. At the molecular scale, we examine data from magnetic resonance spectroscopy (MRS), an advanced biochemical technique used to identify small molecules (metabolites) in living tissue that are indicative of metabolic pathway activity. Although highly sensitive and specific, the output of this modality is abstract and difficult to interpret. Our design study investigating the tasks and requirements for expert exploratory analysis of these data led to SpectraMosaic, a novel application enabling domain researchers to analyze any permutation of metabolites in ratio form for an entire cohort, or by sample region, individual, acquisition date, or brain activity status at the time of acquisition. A second approach considers the exploratory analysis of multidimensional physiological data at the opposite end of the spatio-temporal scale: population. An effective exploratory data analysis workflow critically must identify interesting patterns and relationships, which becomes increasingly difficult as data dimensionality increases. Although this can be partially addressed with existing dimensionality reduction techniques, the nature of these techniques means that subtle patterns may be lost in the process. In this approach, we describe DimLift, an iterative dimensionality reduction technique enabling user identification of interesting patterns and relationships that may lie subtly within a dataset through dimensional bundles. Key to this method is the user's ability to steer the dimensionality reduction technique to follow their own lines of inquiry. Our third question considers the crafting of visualizations for communication to audiences with different levels of expertise. It is natural to expect that experts in a topic may have different preferences and criteria to evaluate a visual communication relative to a non-expert audience. 
This impacts the success of an image in communicating a given scenario. Drawing from diverse techniques in biomedical illustration and visualization, we conducted an exploratory study of the criteria that audiences use when evaluating a biomedical process visualization targeted for communication. From this study, we identify opportunities for further convergence of biomedical illustration and visualization techniques for more targeted visual communication design. One opportunity that we discuss in greater depth is the development of semantically-consistent guidelines for the coloring of molecular scenes. The intent of such guidelines is to elevate the scientific literacy of non-expert audiences in the context of molecular visualization, which is particularly relevant to public health communication. All application code and empirical findings are open-sourced and available for reuse by the scientific community and public. The methods and findings presented in this thesis contribute to a foundation of cross-disciplinary biomedical illustration and visualization research, opening several opportunities for continued work in visualization for physiology.
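
    A hedged sketch of the idea behind dimensional bundles as described for DimLift: correlated variables are grouped into bundles, each bundle is summarised by one principal component for the overview, and the analyst can iteratively "open" a bundle to inspect its member dimensions. This is a generic reconstruction under those assumptions, not the published DimLift implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

def make_bundles(X, n_bundles=3):
    """Group columns of X whose absolute pairwise correlation is high."""
    dist = 1 - np.abs(np.corrcoef(X, rowvar=False))
    z = linkage(dist[np.triu_indices_from(dist, 1)], method="average")
    return fcluster(z, t=n_bundles, criterion="maxclust")

def bundle_view(X, labels):
    """One PCA component per bundle: the reduced overview the user explores."""
    cols = [PCA(n_components=1).fit_transform(X[:, labels == b])
            for b in np.unique(labels)]
    return np.hstack(cols)

# Toy data: 9 observed variables generated from 3 latent factors.
rng = np.random.default_rng(4)
latent = rng.normal(size=(200, 3))
X = np.repeat(latent, 3, axis=1) + 0.1 * rng.normal(size=(200, 9))
labels = make_bundles(X)
print("bundle assignment per column:", labels)
view = bundle_view(X, labels)                 # 200 x 3 overview
# "Opening" bundle 1 exposes its raw member columns for closer inspection:
print("bundle 1 members:", np.where(labels == 1)[0])
```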

    Digital traces and urban research : Barcelona through social media data

    Most of the world's population now resides in urban areas, and almost all of the planet's growth is expected to be concentrated in them for the next 30 years, making the improvement of the quality of life in cities one of the big challenges of this century. To that end, it is crucial to have information on how people use the spaces of the city, so that urban planning can successfully respond to their needs. This dissertation proposes using data shared voluntarily by the millions of users that make up social networks' communities as a valuable tool for studying the complexity of the city, because of its capacity to provide an unprecedented volume of urban information with geographic, temporal, semantic and multimedia components. However, the volume and variety of the data raise important challenges regarding retrieval, manipulation, analysis and representation, requiring the adoption of best practices in data science and a multi-faceted approach to urban studies, with a strong emphasis on the reproducibility of the developed methodologies. This research focuses on the case study of the city of Barcelona, using public data collected from Panoramio, Flickr, Twitter and Instagram. After a literature review, the methods to access the different services are discussed, along with their available data and limitations. Next, the retrieved data are analyzed at different spatial and temporal scales. The first approximation to the data focuses on the origins of users who took geotagged pictures of Barcelona, geocoding the hometowns that appear in their Flickr public profiles; this allows the identification of the regions, countries and cities with the largest influx of visitors, relating the results to multiple indicators at a global scale. The next scale of analysis treats the city as a whole, developing methodologies for representing the spatial distribution of the collected locations while avoiding the artifacts produced by overplotting. To this end, locations are aggregated in regular tessellations whose size is determined empirically from their spatial distribution. Two spatial statistics techniques (Moran's I and Getis-Ord's G*) are used to visualize the local spatial autocorrelation of the areas with exceptionally high or low densities, under a statistical significance framework. Finally, kernel density estimation is introduced as a non-parametric alternative. The third level of detail follows the official administrative division of Barcelona into 73 neighborhoods and 12 districts, which obeys historical, morphological and functional criteria. Micromaps are introduced as a representation technique capable of providing geographical context to commonly used statistical graphics, along with a methodology to produce these micromaps automatically. This technique is compared to annotated scatterplots to relate picture intensity to different urban indicators at the neighborhood scale. The hypothesis of spatial homogeneity is abandoned at the most detailed scale, focusing the analysis on the street network. Two techniques to assign events to road segments in the street graph are presented (directly by shortest distance, or by proxy through postal addresses), as well as the generalization of kernel density estimation from Euclidean space to a network topology. Beyond the spatial domain, the interactions of three temporal cycles are further analyzed using the timestamps available in the picture metadata: daytime/nighttime (daily cycle), work/leisure (weekly cycle) and seasonal (yearly cycle).
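
    As a small illustration of one analysis step described above, the sketch below bins synthetic geotagged points into a regular grid and computes local Moran's I under a rook (4-neighbour) weights scheme. Grid size and weights are illustrative; the thesis also employs Getis-Ord's G* and kernel density estimation.

```python
import numpy as np

def local_morans_i(counts):
    """Local Moran's I per cell of a 2-D count grid, rook contiguity."""
    z = counts - counts.mean()
    p = np.pad(z, 1)                      # zero-pad so borders have fewer neighbours
    lag = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    return z * lag / z.var()

# Toy usage: uniform background points plus one dense "hotspot" near (0.7, 0.7),
# aggregated into a 10x10 tessellation.
rng = np.random.default_rng(5)
pts = rng.uniform(0, 1, (500, 2))
pts = np.vstack([pts, 0.05 * rng.normal(size=(300, 2)) + [0.7, 0.7]])
counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                              bins=10, range=((0, 1), (0, 1)))
I = local_morans_i(counts)
print("max local I at cell:", np.unravel_index(I.argmax(), I.shape))
```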