Intelligent Side Information Generation in Distributed Video Coding
Distributed video coding (DVC) reverses the traditional coding paradigm of complex encoders allied with basic decoding to one where the computational cost is largely incurred by the decoder. This is attractive because the theoretical results of Wyner-Ziv (WZ) and Slepian-Wolf (SW) show that the performance of such a system should match that of a conventional coder. Despite these solid theoretical foundations, current DVC qualitative and quantitative performance falls short of existing conventional coders, and crucial limitations remain. A key constraint governing DVC performance is the quality of side information (SI), a coarse representation of the original video frames that are not available at the decoder. Techniques to generate SI have usually been based on linear motion compensated temporal interpolation (LMCTI), though these do not always produce satisfactory SI quality, especially in sequences exhibiting non-linear motion.
This thesis presents an intelligent higher order piecewise trajectory temporal interpolation (HOPTTI) framework for SI generation, with original contributions that afford better SI quality than existing LMCTI-based approaches. The major elements of this framework are: (i) a cubic trajectory interpolation model that significantly improves the accuracy of motion vector estimation; (ii) an adaptive overlapped block motion compensation (AOBMC) model that reduces both blocking and overlapping artefacts in the SI arising from the block matching algorithm; (iii) an empirical mode switching algorithm; and (iv) an intelligent switching mechanism that constructs SI by automatically selecting the best macroblock from the intermediate SI generated by the HOPTTI and AOBMC algorithms. Rigorous analysis and evaluation confirm that the new framework achieves significant quantitative and perceptual improvements in SI quality.
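To illustrate why a higher-order trajectory helps over linear interpolation, here is a minimal numerical sketch. It is hypothetical and not the HOPTTI implementation: block tracking, AOBMC, and mode switching are omitted, and the four-frame track is invented for illustration. It only shows that fitting a cubic through block positions in neighbouring frames follows accelerating motion that a constant-velocity (LMCTI-style) midpoint misses.

```python
import numpy as np

def linear_midpoint(p1, p2):
    """LMCTI-style estimate: assume constant velocity between two frames."""
    return (np.asarray(p1, dtype=float) + np.asarray(p2, dtype=float)) / 2.0

def cubic_midpoint(positions, times, t_mid):
    """Higher-order estimate (sketch): fit a cubic trajectory through block
    positions tracked over four neighbouring frames, then evaluate it at the
    time of the frame to be interpolated."""
    positions = np.asarray(positions, dtype=float)   # shape (4, 2): (x, y) per frame
    coeffs_x = np.polyfit(times, positions[:, 0], 3)
    coeffs_y = np.polyfit(times, positions[:, 1], 3)
    return np.array([np.polyval(coeffs_x, t_mid), np.polyval(coeffs_y, t_mid)])

# A block accelerating along x (x = t^2), tracked at t = 0, 1, 2, 3.
track = [(0, 0), (1, 0), (4, 0), (9, 0)]
lin = linear_midpoint(track[1], track[2])          # x = 2.5: overshoots
cub = cubic_midpoint(track, [0, 1, 2, 3], 1.5)     # x = 2.25: the true position
```

The cubic fit is exact for the four tracked points, so the interpolated position lands on the true curved trajectory rather than on the chord between the two neighbouring frames.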
Towards Computational Efficiency of Next Generation Multimedia Systems
To address the throughput demands of complex applications (such as multimedia), a next-generation system designer needs to co-design and co-optimize the hardware and software layers. Hardware/software knobs must be tuned in synergy to increase throughput efficiency. This thesis provides such algorithmic and architectural solutions while considering new technology challenges (power caps and memory aging). The goal is to maximize throughput efficiency under timing and hardware constraints.
Semantics-Empowered Communication: A Tutorial-cum-Survey
With the rapid rise of semantics-empowered communication (SemCom) research, an unprecedented interest is growing in a wide range of its aspects (e.g., theories, applications, metrics, and implementations) in both academia and industry. In this work, we primarily aim to provide a comprehensive survey of both the background and the research taxonomy, as well as a detailed technical tutorial. Specifically, we start by reviewing the literature and answering the "what" and "why" questions of semantic transmission. Afterwards, we present the ecosystem of SemCom, including its history, theories, metrics, datasets, and toolkits, on top of which the taxonomy of research directions is presented. Furthermore, we propose to categorize the critical enabling techniques into explicit and implicit reasoning-based methods, and elaborate on how they evolve and contribute to modern content and channel semantics-empowered communications. Besides reviewing and summarizing the latest efforts in SemCom, we discuss its relation to other communication levels (e.g., conventional communications) from a holistic and unified viewpoint. Subsequently, to facilitate future developments and industrial applications, we highlight advanced practical techniques for boosting semantic accuracy, robustness, and large-scale scalability, to mention just a few. Finally, we discuss the technical challenges that shed light on future research opportunities.
Comment: Submitted to an IEEE journal. Copyright may be transferred without further notice.
Multimedia Forensic Analysis via Intrinsic and Extrinsic Fingerprints
Digital imaging has experienced tremendous growth in recent decades, and digital images have been used in a growing number of applications. With the increasing popularity of imaging devices and the availability of low-cost image editing software, the integrity of image content can no longer be taken for granted. A number of forensic and provenance questions often arise, including how an image was generated; where an image came from; and what has been done to the image since its creation, by whom, when, and how. This thesis presents two different sets of techniques to address the problem, via intrinsic and extrinsic fingerprints.
The first part of this thesis introduces a new methodology based on intrinsic fingerprints for forensic analysis of digital images. The proposed method is motivated by the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on the final output data. We present methods to identify these intrinsic fingerprints via component forensic analysis, and demonstrate that these traces can serve as useful features for forensic applications such as building a robust device identifier and identifying potential technology infringement or licensing violations.
Building upon component forensics, we develop a general authentication and provenance framework to reconstruct the processing history of digital images. We model post-device processing as a manipulation filter and estimate its coefficients using a linear time invariant approximation. Absence of in-device fingerprints, presence of new post-device fingerprints, or any inconsistencies in the estimated fingerprints across different regions of the test image all suggest that the image is not a direct device output and has possibly undergone some kind of processing, such as content tampering or steganographic embedding, after device capture.
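The general idea behind estimating a manipulation filter with a linear time invariant (LTI) approximation can be sketched as follows. This is a hypothetical toy, not the thesis's estimator: it works on a 1-D signal rather than an image, the filter `true_h` and tap count are invented, and it simply solves a least-squares system relating a reference signal to the observed output.

```python
import numpy as np

def estimate_filter(x, y, taps):
    """Least-squares estimate of an LTI 'manipulation filter' (toy sketch).
    Builds a convolution matrix from the reference signal x and solves for
    the filter taps that best explain the observed output y."""
    n = len(y)
    X = np.zeros((n, taps))
    for i in range(n):
        for j in range(taps):
            if 0 <= i - j < len(x):
                X[i, j] = x[i - j]   # y[i] ≈ sum_j h[j] * x[i-j]
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h

# Simulate a smoothing manipulation and recover its coefficients.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)                 # reference (pre-manipulation) signal
true_h = np.array([0.5, 0.3, 0.2])           # hypothetical manipulation filter
y = np.convolve(x, true_h)[:256]             # observed (post-manipulation) signal
h_est = estimate_filter(x, y, 3)             # ≈ [0.5, 0.3, 0.2]
```

In the forensic setting described above, inconsistencies between filters estimated from different regions of an image would be the tell-tale sign of localized tampering.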
While component forensics is widely applicable in a number of scenarios, it has performance limitations. To understand the fundamental limits of component forensics, we develop a new theoretical framework based on estimation and pattern classification theories, and define formal notions of forensic identifiability and classifiability of components. We show that the proposed framework provides a solid foundation to study information forensics and helps design optimal input patterns to improve parameter estimation accuracy via semi non-intrusive forensics.
The final part of the thesis investigates a complementary extrinsic approach via image hashing that can be used for content-based image authentication and other media security applications. We show that the proposed hashing algorithm is robust to common signal processing operations, and present a systematic evaluation of the security of the image hash against estimation and forgery attacks.
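To make the notion of a processing-robust image hash concrete, here is a toy block-mean hash. It is a generic illustration and explicitly not the thesis's algorithm (which also addresses security against estimation and forgery attacks): the grid size, threshold rule, and distortion level are all invented for the example.

```python
import numpy as np

def block_mean_hash(img, grid=8):
    """Toy robust hash (sketch): reduce the image to grid x grid block means
    and threshold each against the overall median, giving grid*grid bits."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    bh, bw = h // grid, w // grid
    blocks = img[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
    means = blocks.mean(axis=(1, 3))
    return (means > np.median(means)).astype(int).ravel()

def hamming(h1, h2):
    """Number of differing hash bits."""
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
noisy = img + 0.01 * rng.standard_normal((64, 64))   # mild processing
# hamming(block_mean_hash(img), block_mean_hash(noisy)) stays small,
# so mildly processed copies still authenticate as the same content
```

Because block means average out small perturbations, the hash changes in only a few bits under mild distortion, which is the robustness property the abstract refers to.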
Anuário Científico – 2009 & 2010: Abstracts of Articles, Communications, Theses, Patents, Books and Master's Monographs
The Technical-Scientific Council of the Instituto Superior de Engenharia de Lisboa (ISEL), continuing its effort to consolidate the dissemination of the knowledge and science produced by our teaching staff, presents another edition of the Anuário Científico, covering the scientific output of 2009 and 2010.
Research, as a strategic dimension of the Instituto Superior de Engenharia de Lisboa (ISEL), has contributed to its national and international recognition as an institution of reference and quality in engineering education.
It is also through research that ISEL consolidates its connection to Portuguese and international society, transferring the technology and knowledge that result from its scientific and pedagogical activity and thereby contributing to sustained development and growth.
The Anuário Científico comprises all content with ISEL affiliation: abstracts of articles published in books, journals, and conference proceedings that ISEL faculty presented at national and international forums and congresses, as well as theses and patents.
Since 2002, the year the first edition was published, the number of scientific publications has grown steadily, the fruit of the work of faculty who have applied themselves with determination and perseverance.
However, in these two years (2009 and 2010) a decrease in the number of publications was observed, particularly in 2010. One likely cause is directly related to the reduction in higher-education funding, which constrains all research within R&D activity and scientific production.
Following the implementation of the Bologna Process in 2006, ISEL created Master's programmes, offering a more complete and diversified education to its own students as well as those of other institutions, equipping them with innovative competences suited to a labour market that is now more competitive and dynamic. With the academic terms and the students' monographs concluded, the abstracts of these monographs are likewise included in this Anuário, covering the Master's degrees completed in 2009 and 2010. To improve accessibility for the scientific community and civil society, the Anuário Científico will henceforth be published in electronic format.
Exceptionally, this edition covers publications from two years: 2009 and 2010.
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume of Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions from researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume were either published or presented after the dissemination of the fourth volume in 2015, in international conferences, seminars, workshops and journals, or are new. The contributions within each part of this volume are chronologically ordered.
The first part of this book presents some theoretical advances in DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
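To give a flavour of the PCR rules discussed above, here is a minimal sketch of PCR5 for two sources. It is deliberately simplified and not taken from the book's Matlab codes: it handles only singleton hypotheses (full PCR5 operates on arbitrary focal elements of the power set), and the two basic belief assignments are invented for the example.

```python
from itertools import product

def pcr5(m1, m2):
    """PCR5 combination of two basic belief assignments over singleton
    hypotheses (sketch). Agreeing mass m1(X)*m2(X) is kept on X; the partial
    conflict m1(X)*m2(Y) for X != Y is redistributed back to X and Y in
    proportion to the masses m1(X) and m2(Y) that generated it."""
    out = {h: 0.0 for h in set(m1) | set(m2)}
    for x, y in product(m1, m2):
        prod = m1[x] * m2[y]
        if x == y:
            out[x] += prod                    # conjunctive (agreeing) part
        else:
            s = m1[x] + m2[y]
            if s > 0:                         # proportional redistribution
                out[x] += m1[x] * prod / s
                out[y] += m2[y] * prod / s
    return out

m1 = {"A": 0.6, "B": 0.4}
m2 = {"A": 0.7, "B": 0.3}
fused = pcr5(m1, m2)   # mass concentrates on A; total mass remains 1
```

Unlike Dempster's rule, nothing is discarded by normalization: every unit of conflicting mass is returned to the hypotheses that produced it, which is the neutrality-preserving behaviour the improved PCR5/PCR6 rules refine.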
Because more applications of DSmT have emerged in the years since the appearance of the fourth DSmT book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.