169 research outputs found

    Perceptual video quality assessment: the journey continues!

    Perceptual Video Quality Assessment (VQA) is one of the most fundamental and challenging problems in the field of Video Engineering. Along with video compression, it has become one of the two dominant theoretical and algorithmic technologies in television streaming and social media. Over the last two decades, the volume of video traffic over the internet has grown exponentially, powered by rapid advancements in cloud services, faster video compression technologies, and increased access to high-speed, low-latency wireless internet connectivity. This has given rise to issues related to delivering extraordinary volumes of picture and video data to an increasingly sophisticated and demanding global audience. Consequently, developing algorithms to measure the quality of pictures and videos as perceived by humans has become increasingly critical, since these algorithms can be used to perceptually optimize trade-offs between quality and bandwidth consumption. VQA models have evolved from algorithms developed for generic 2D videos to specialized algorithms explicitly designed for on-demand video streaming, user-generated content (UGC), virtual and augmented reality (VR and AR), cloud gaming, high dynamic range (HDR), and high frame rate (HFR) scenarios. Along the way, we also describe the advancement in algorithm design, beginning with traditional hand-crafted feature-based methods and finishing with the current deep-learning models powering accurate VQA algorithms. We also discuss the evolution of subjective video quality databases containing videos and human-annotated quality scores, which are the necessary tools to create, test, compare, and benchmark VQA algorithms. To finish, we discuss emerging trends in VQA algorithm design and general perspectives on the evolution of Video Quality Assessment in the foreseeable future.
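
    As a concrete illustration of the simplest end of this algorithmic spectrum, the sketch below (not taken from the article) computes a full-reference quality score by evaluating per-frame PSNR and mean-pooling it over time; the synthetic frames, 8-bit peak value, and pooling strategy are illustrative assumptions only.

```python
import numpy as np

def frame_psnr(ref: np.ndarray, dist: np.ndarray, peak: float = 255.0) -> float:
    """PSNR of one distorted frame against its reference (both HxWx3, uint8)."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def simple_vqa_score(ref_frames, dist_frames) -> float:
    """Mean-pool per-frame PSNR over time, the simplest temporal pooling strategy."""
    scores = [frame_psnr(r, d) for r, d in zip(ref_frames, dist_frames)]
    return float(np.mean(scores))

# Toy usage with synthetic frames standing in for decoded video.
rng = np.random.default_rng(0)
ref = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(30)]
dist = [np.clip(f.astype(int) + rng.normal(0, 5, f.shape), 0, 255).astype(np.uint8) for f in ref]
print(f"pooled PSNR: {simple_vqa_score(ref, dist):.2f} dB")
```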

    Satellite based methane emission estimation for flaring activities in oil and gas industry: A data-driven approach (SMEEF-OGI)

    Climate change, precipitated in part by greenhouse gas emissions, presents a critical global challenge. Methane, a highly potent greenhouse gas with a global warming potential of 80 times that of carbon dioxide, is a significant contributor to this crisis. Sources of methane emissions include the oil and gas industry, agriculture, and waste management, with flaring in the oil and gas industry constituting a significant emission source. Flaring, a standard process in the oil and gas industry, is often assumed to be 98% efficient at converting methane into the less harmful carbon dioxide. However, recent research from the University of Michigan, Stanford, the Environmental Defense Fund, and Scientific Aviation indicates that this widely accepted 98% efficiency may be inaccurate. This investigation re-evaluates the efficiency of the flaring process and its role in methane conversion. To address this issue, this work focuses on creating a method to independently calculate methane emissions from oil and gas activities. Satellite data, a helpful tool for estimating greenhouse gas emissions from various sources, is central to the suggested methodology: in addition to standard monitoring techniques, it offers an independent, non-intrusive, affordable, and continuous monitoring approach. Based on this, the problem statement for this work is the following: "How can a data-driven approach be developed to enhance the accuracy and quality of methane emission estimation from flaring activities in the Oil and Gas industry, using satellite data from selected platforms to detect and quantify future emissions based on Machine learning more effectively?" To achieve this, the following objectives and activities were carried out:
    * Theoretical framework and key concepts
    * Technical review of current state-of-the-art satellite platforms and the existing literature
    * Development of a proof of concept
    * Proposal of an evaluation of the method
    * Recommendations and further work
    This work adopted a systematic approach, starting with a comprehensive theoretical framework covering the use of flaring, the environmental implications of methane, the current state of research, and the state of the art in satellite remote sensing. Based on the framework developed during the initial phases of this work, a data-driven methodology was formulated, using the VIIRS dataset to identify geographical areas of interest. Hyperspectral and methane data were aggregated from the Sentinel-2 and Sentinel-5P satellite datasets. This information was processed via a proposed pipeline, with initial alignment and enhancement; in this work, the images were enhanced by calculating the Normalized Burn Index. The result was a dataset containing the locations of known flare sites, with data from both the Sentinel-2 and Sentinel-5P satellites. The results underscore the disparities in coverage between Sentinel-2 and Sentinel-5P data, a factor that could influence the precision of methane emission estimates. The applied preprocessing techniques markedly improved data clarity and usability, but their efficacy may hinge on the specific characteristics of the flaring sites and the quality of the raw data. Moreover, despite certain limitations, the combination of Sentinel-2 and Sentinel-5P data yielded a comprehensive dataset suitable for further analysis. In conclusion, this project introduces an encouraging methodology for estimating methane emissions from flaring activities within the oil and gas industry. It lays a foundational stepping stone for future research, continually enhancing the precision and quality of data for combating climate change. This methodology can be seen in the flow chart below. Based on the work done in this project, future work could focus on incorporating alternative sources of methane data, broadening the areas of interest through industry collaboration, and attempting to extract further features through image segmentation methods. This project signifies a start, paving the way for subsequent explorations to build upon.
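
    For illustration only, the following sketch shows how a normalized-difference burn-type index of the kind mentioned above could be computed from two Sentinel-2 bands; the specific bands (B8A and B12), the detection threshold, and the synthetic reflectance values are assumptions and are not taken from the thesis.

```python
import numpy as np

def normalized_burn_index(nir: np.ndarray, swir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Classic normalized-difference form (NIR - SWIR) / (NIR + SWIR).

    Strongly negative values indicate a strong SWIR response (hot, actively
    flaring or recently burned pixels); values near +1 indicate vegetation.
    """
    nir = nir.astype(np.float64)
    swir = swir.astype(np.float64)
    return (nir - swir) / (nir + swir + eps)

# Toy usage with synthetic reflectance patches standing in for
# Sentinel-2 B8A (narrow NIR) and B12 (SWIR) tiles around a flare site.
rng = np.random.default_rng(42)
b8a = rng.uniform(0.05, 0.4, (128, 128))
b12 = rng.uniform(0.05, 0.6, (128, 128))
index = normalized_burn_index(b8a, b12)
hot_pixels = index < -0.2  # illustrative threshold, not from the thesis
print("candidate flare pixels:", int(hot_pixels.sum()))
```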

    The State of Applying Artificial Intelligence to Tissue Imaging for Cancer Research and Early Detection

    Artificial intelligence represents a new frontier in human medicine that could save more lives and reduce costs, thereby increasing accessibility. As a consequence, the rate of advancement of AI in cancer medical imaging, and more particularly in tissue pathology, has exploded, opening it to ethical and technical questions that could impede its adoption into existing systems. In order to chart the path of AI in its application to cancer tissue imaging, we review current work and identify how it can improve cancer pathology diagnostics and research. In this review, we identify five core tasks that models are developed for: regression, classification, segmentation, generation, and compression. We address the benefits and challenges that such methods face, and how they can be adapted for use in cancer prevention and treatment. The studies examined in this paper represent the beginning of this field, and future experiments will build on the foundations that we highlight.
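
    As a minimal, hypothetical example of the classification task listed above, the sketch below defines a tiny patch-level classifier in PyTorch; the architecture, patch size, and two-class output are placeholders rather than any model from the reviewed studies.

```python
import torch
from torch import nn

class PatchClassifier(nn.Module):
    """Minimal patch-level tumor/non-tumor classifier, kept deliberately tiny."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Toy forward pass on a batch of 8 RGB patches (e.g. 64x64 tiles cut from a slide).
model = PatchClassifier()
logits = model(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 2])
```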

    Rethinking auto-colourisation of natural images in the context of deep learning

    Auto-colourisation is the ill-posed problem of creating a plausible full-colour image from a grey-scale prior. The current state of the art utilises image-to-image Generative Adversarial Networks (GANs). The standard method for training colourisation is to reformulate RGB images into a luminance prior and a two-channel chrominance supervisory signal. However, progress in auto-colourisation is inherently limited by multiple prerequisite dilemmas, where unsolved problems are mutual prerequisites. This thesis advances the field of colourisation on three fronts: architecture, measures, and data. Changes are recommended to common GAN colourisation architectures: first, removing batch normalisation from the discriminator so that it can learn the primary statistics of plausible colour images; second, eliminating the direct L1 loss on the generator, as L1 limits discovery of the plausible colour manifold. The lack of an objective measure of plausible colourisation necessitates resource-intensive human evaluation and repurposed objective measures from other fields. There is no consensus on the best objective measure, owing to a knowledge gap regarding how well objective measures model the mean human opinion of plausible colourisation. An extensible dataset of human-evaluated colourisations, the Human Evaluated Colourisation Dataset (HECD), is presented. The results from this dataset are compared to the commonly used objective measures, uncovering a poor correlation between the objective measures and mean human opinion. The HECD can assess the future appropriateness of proposed objective measures. An interactive tool supplied with the HECD allows a first exploration of the space of plausible colourisation. Finally, it is shown that the luminance channel is not representative of the legacy black-and-white images that will be presented to models when deployed; this leads to out-of-distribution errors in all three channels of the final colour image. A novel technique is proposed to simulate priors that match any black-and-white media for which the spectral response is known.
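
    The sketch below illustrates the standard reformulation described above, splitting an RGB training image into a luminance prior and a two-channel chrominance supervisory signal via the CIELAB colour space; the use of scikit-image and the toy input are assumptions, and, as the thesis argues, this luminance prior need not match real legacy black-and-white media.

```python
import numpy as np
from skimage import color  # scikit-image

def split_luminance_chrominance(rgb: np.ndarray):
    """Standard colourisation setup: L channel as the grey-scale prior,
    (a, b) channels as the two-channel chrominance supervisory signal."""
    lab = color.rgb2lab(rgb.astype(np.float64) / 255.0)
    luminance = lab[..., :1]    # H x W x 1, roughly in [0, 100]
    chrominance = lab[..., 1:]  # H x W x 2, roughly in [-128, 127]
    return luminance, chrominance

# Toy usage on a random RGB image standing in for a training sample.
rng = np.random.default_rng(1)
rgb = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
L, ab = split_luminance_chrominance(rgb)
print(L.shape, ab.shape)  # (32, 32, 1) (32, 32, 2)
```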

    Bibliographic Control in the Digital Ecosystem

    With the contributions of international experts, the book aims to explore the new boundaries of universal bibliographic control. Bibliographic control is radically changing because the bibliographic universe is radically changing: resources, agents, technologies, standards, and practices. Among the main topics addressed: library cooperation networks; legal deposit; national bibliographies; new tools and standards (IFLA LRM, RDA, BIBFRAME); authority control and new alliances (Wikidata, Wikibase, identifiers); new ways of indexing resources (artificial intelligence); institutional repositories; the new book supply chain; “discoverability” in the IIIF digital ecosystem; the role of thesauri and ontologies in the digital ecosystem; and bibliographic control and search engines.

    Deep learning based objective quality assessment of multidimensional visual content

    Doctoral thesis, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2022. In the last decade, there has been a tremendous increase in the popularity of multimedia applications, hence increasing multimedia content. When these contents are generated, transmitted, reconstructed, and shared, their original pixel values are transformed. In this scenario, it becomes more crucial and demanding to assess the visual quality of the affected visual content so that the requirements of end-users are satisfied. In this work, we investigate effective spatial, temporal, and angular features by developing no-reference algorithms that assess the visual quality of distorted multi-dimensional visual content. We use machine learning and deep learning algorithms to obtain prediction accuracy. For two-dimensional (2D) image quality assessment, we use multiscale local binary patterns and saliency information, and train/test these features using a Random Forest Regressor. For 2D video quality assessment, we introduce a novel concept of spatial and temporal saliency and custom objective quality scores. We use a light-weight Convolutional Neural Network (CNN) based model for training and testing on selected patches of video frames. For objective quality assessment of four-dimensional (4D) light field images (LFI), we propose seven LFI quality assessment (LF-IQA) methods in total. Considering that an LFI is composed of dense multi-views, and inspired by the Human Visual System (HVS), we propose our first LF-IQA method, which is based on a two-stream CNN architecture. The second and third LF-IQA methods are also based on a two-stream architecture, which incorporates CNN, Long Short-Term Memory (LSTM), and diverse bottleneck features. The fourth LF-IQA method is based on CNN and Atrous Convolution Layers (ACL), while the fifth method uses CNN, ACL, and LSTM layers. The sixth LF-IQA method is also based on a two-stream architecture, in which horizontal and vertical EPIs are processed in the frequency domain. Last but not least, the seventh LF-IQA method is based on a Graph Convolutional Neural Network. For all of the methods mentioned above, we performed intensive experiments, and the results show that these methods outperformed state-of-the-art methods on popular quality datasets. Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
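
    As a hedged sketch of the 2D image quality recipe described above (multiscale local binary patterns fed to a Random Forest Regressor), the example below concatenates uniform-LBP histograms at several radii and regresses placeholder opinion scores; the radii, histogram settings, and synthetic training data are illustrative assumptions, not the thesis configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestRegressor

def multiscale_lbp_features(gray: np.ndarray, radii=(1, 2, 3), points=8) -> np.ndarray:
    """Concatenate uniform-LBP histograms computed at several radii."""
    feats = []
    for r in radii:
        codes = local_binary_pattern(gray, P=points, R=r, method="uniform")
        hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# Toy training loop: random images with synthetic scores stand in for a real
# IQA database (distorted images plus mean opinion scores).
rng = np.random.default_rng(7)
images = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(40)]
X = np.stack([multiscale_lbp_features(img) for img in images])
y = rng.uniform(1.0, 5.0, size=40)  # placeholder MOS values
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("predicted quality:", model.predict(X[:3]))
```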

    Remote Sensing Data Compression

    A huge amount of data is acquired nowadays by different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. In scenarios where transmission or storage resources are restricted, data compression is strongly desired or necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ considerably, so practical implementation aspects have to be taken into account. The Special Issue paper collection on which this book is based touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute extremely large data arrays holding rich information that can be retrieved for various applications. Another important aspect is the impact of lossless compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing with positive outcomes are observed. We hope that readers will find our book useful and interesting.
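
    To make the idea of predictive lossless coding concrete, the sketch below compresses a synthetic hyperspectral cube by predicting each band from the previous one and entropy-coding the residuals with zlib; this is an illustrative toy codec under assumed data shapes, not any of the standards or methods covered in the book.

```python
import zlib
import numpy as np

def compress_cube(cube: np.ndarray) -> bytes:
    """Lossless toy codec: predict each band from the previous one,
    then entropy-code the residuals with zlib."""
    cube = cube.astype(np.int16)
    residuals = np.concatenate([cube[:1], np.diff(cube, axis=0)], axis=0)
    return zlib.compress(residuals.tobytes(), level=9)

def decompress_cube(blob: bytes, shape) -> np.ndarray:
    residuals = np.frombuffer(zlib.decompress(blob), dtype=np.int16).reshape(shape)
    return np.cumsum(residuals, axis=0).astype(np.int16)

# Toy usage: a synthetic 16-band cube with strong inter-band correlation.
rng = np.random.default_rng(3)
base = rng.integers(0, 1024, (64, 64), dtype=np.int16)
cube = np.stack([base + rng.integers(-8, 9, base.shape, dtype=np.int16) for _ in range(16)])
blob = compress_cube(cube)
assert np.array_equal(decompress_cube(blob, cube.shape), cube)  # lossless round trip
print(f"compression ratio: {cube.nbytes / len(blob):.2f}x")
```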

    Electronic Imaging & the Visual Arts. EVA 2017 Florence

    The publication follows the yearly editions of EVA FLORENCE. It presents the state of the art in the application of technologies, in particular digital technologies, to Cultural Heritage, together with the most recent research results in this area. Information technologies of interest for Cultural Heritage are presented: multimedia systems, databases, data protection, access to digital content, and virtual galleries. Particular attention is reserved for digital images (Electronic Imaging & the Visual Arts) in the context of cultural institutions (museums, libraries, palaces and monuments, archaeological sites). The international conference includes the following sessions: Strategic Issues; New Sciences and Culture Developments and Applications; New Technical Developments & Applications; Museums - Virtual Galleries and Related Initiatives; Art and Humanities Ecosystem & Applications; and Access to Culture Information. Two workshops address Innovation and Enterprise, and cloud systems connected to culture (the eCulture Cloud) in the Smart Cities context. The most recent results of national and international research in the area of technologies and Cultural Heritage are reported, together with experimental demonstrations of the developed activities.

    A sense of self for power side-channel signatures: instruction set disassembly and integrity monitoring of a microcontroller system

    Cyber-attacks are on the rise, costing billions of dollars annually in damages, response, and investment. Critical United States National Security and Department of Defense weapons systems are no exception; however, the stakes go well beyond the financial. Dependence upon a global supply chain without sufficient insight or control poses a significant issue. Additionally, systems are often designed with a presumption of trust, despite their microelectronics and software foundations being inherently untrustworthy. Achieving cybersecurity requires coordinated and holistic action across disciplines, commensurate with the specific systems, mission, and threat. This dissertation explores an existing gap in low-level cybersecurity and proposes a side-channel based security monitor to support attack detection and the establishment of trusted foundations for critical embedded systems. Background on side-channel origins, the more typical side-channel attacks, and microarchitectural exploits is provided. A survey of related side-channel efforts is organized through a set of side-channel organizing principles, which enable comparison of dissimilar works across the side-channel spectrum. We find that the maturity of existing side-channel security monitors is insufficient, as key transition-to-practice considerations are often not accounted for or resolved. We then document the development, maturation, and assessment of a power side-channel disassembler, the Time-series Side-channel Disassembler (TSD), and extend it for use as a security monitor, the TSD-Integrity Monitor (TSD-IM). We also introduce a prototype microcontroller power side-channel collection fixture, with benefits for experimentation and transition to practice. TSD-IM is finally applied to a notional Point of Sale (PoS) application for a proof-of-concept evaluation. We find that TSD and TSD-IM advance the state of the art for side-channel disassembly and security monitoring in the open literature. In addition to our TSD and TSD-IM research on microcontroller signals, we explore beneficial side-channel measurement abstractions as well as the characterization of the underlying microelectronic circuits through Impulse Signal Analysis (ISA). While some positive results were obtained, we find that further research in these areas is necessary. Although the need for a non-invasive, on-demand microelectronics-integrity capability is supported, other methods may provide suitable near-term alternatives to ISA.
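
    A minimal sketch of the core idea behind power side-channel disassembly, assuming a simple template-matching classifier rather than the TSD method itself: per-instruction templates are averaged from labelled traces, and unseen trace windows are assigned to the best-correlating template. The mnemonics, trace length, and noise model below are synthetic placeholders.

```python
import numpy as np

def build_templates(traces: dict) -> dict:
    """Average labelled power traces into one template per instruction mnemonic."""
    return {mnemonic: t.mean(axis=0) for mnemonic, t in traces.items()}

def classify_window(window: np.ndarray, templates: dict) -> str:
    """Assign the instruction whose template correlates best with the window."""
    scores = {m: np.corrcoef(window, tpl)[0, 1] for m, tpl in templates.items()}
    return max(scores, key=scores.get)

# Toy usage: synthetic per-instruction power signatures plus noise stand in
# for aligned trace windows captured from a microcontroller shunt resistor.
rng = np.random.default_rng(5)
shapes = {"NOP": np.sin(np.linspace(0, 2, 200)), "MUL": np.sin(np.linspace(0, 8, 200))}
traces = {m: s + rng.normal(0, 0.05, (20, 200)) for m, s in shapes.items()}
templates = build_templates(traces)
unknown = shapes["MUL"] + rng.normal(0, 0.05, 200)
print(classify_window(unknown, templates))  # expected: MUL
```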

    UAVs for the Environmental Sciences

    This book gives an overview of the use of UAVs in the environmental sciences, covering technical basics, data acquisition with different sensors, and data processing schemes, and illustrating various examples of application.