4 research outputs found

    Full Geo-localized Mobile Video in Android Mobile Telephones

    The evolution of mobile telephones has produced smart devices that allow the user not only to talk but also to use a wide range of telematic services. Smart mobile telephones produce high-quality photos and videos, and the Global Positioning System (GPS) receiver available in these telephones allows users to tag them with their location. Several mobile applications exist for tagging photos and whole videos, but there is no mobile application that lets the user tag the individual frames of a video. This full-tagging process allows the user to geo-tag independent video frames and thereby exploit the photo-like properties of the whole video. In this paper we present a mobile application and a server application that allow the user to fully tag mobile videos and share them with other users registered on the server. We also discuss some trade-offs that arise in the design of the tagging process.

    Macias Lopez, EM.; Abdelfatah, H.; Suarez Sarmiento, A.; Canovas Solbes, A. (2011). Full Geo-localized Mobile Video in Android Mobile Telephones. Network Protocols and Algorithms. 3(1):64-81. doi:10.5296/npa.v3i1.641
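    The per-frame tagging the paper describes amounts to associating a GPS fix with individual frame positions in the recorded video. A minimal sketch of one possible sidecar structure follows; the class and field names, the JSON format and the sample coordinates are all illustrative assumptions, not the authors' implementation.

        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class FrameTag:
            frame_index: int   # index of the tagged frame in the video
            timestamp_ms: int  # offset from the start of recording
            latitude: float    # GPS latitude, decimal degrees
            longitude: float   # GPS longitude, decimal degrees

        def save_sidecar(tags, path):
            """Write the per-frame geo-tags as a JSON sidecar file."""
            with open(path, "w") as f:
                json.dump([asdict(t) for t in tags], f, indent=2)

        # Tag two frames with fixes taken while recording.
        tags = [FrameTag(0, 0, 28.0997, -15.4134),
                FrameTag(250, 10000, 28.1003, -15.4140)]
        save_sidecar(tags, "video_0001.geotags.json")

    A server that shares such videos with registered users would only need to store each video together with its sidecar and serve both on request.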

    Video enhancement: content classification and model selection

    The purpose of video enhancement is to improve the subjective picture quality. The field covers a broad range of topics, such as removing noise from the video, highlighting specified features and improving the appearance or visibility of the video content. The common difficulty is how to make images or videos subjectively better. Traditional approaches involve many iterations between subjective assessment experiments and redesigns of the algorithm, which is very time consuming. Researchers have attempted to design a video quality metric to replace subjective assessment, but so far without success. As a way to avoid heuristics in enhancement algorithm design, least-mean-square methods have received considerable attention: they optimize filter coefficients automatically by minimizing, over a training set, the difference between processed videos and their desired versions. However, these methods are only optimal on average, not locally. To solve this problem, one can apply the least-mean-square optimization separately to categories of content classified by the local image structure. The most interesting example is Kondo's concept of local content adaptivity for image interpolation, which we found could be generalized into a framework for content-adaptive video processing. We identify two parts in the concept, content classification and adaptive processing, and by exploring new classifiers for the former and new models for the latter we generalize the framework to more enhancement applications.

    For content classification, new classifiers are proposed for different image degradations such as coding artifacts and focal blur. For coding artifacts, a novel classifier is proposed based on the combination of local structure and contrast, which does not require detection of the coding block grid. For focal blur, we propose a novel edge-based local blur estimation method that does not require edge orientation detection and yields more robust blur estimates. With these classifiers, the framework is extended to coding-artifact-robust enhancement and blur-dependent enhancement. As content adaptivity extends to more image features, the number of content classes can grow significantly; we show that this number can be reduced without sacrificing much performance.

    For model selection, several nonlinear filters are introduced into the framework. We also propose a new type of nonlinear filter, the trained bilateral filter, which combines the advantages of the original bilateral filter with least-mean-square optimization. With these nonlinear filters, the framework shows better performance than with linear filters. Furthermore, we give a proof of concept for obtaining contrast enhancement by supervised learning: transfer curves are optimized based on the classification of global or local image content, showing that the desired effect can be learned from computationally expensive enhancement algorithms or from expert-tuned examples. Looking back, the thesis presents a single versatile framework for video enhancement. It widens the application scope through new content classifiers and new processing models, and it offers scalability through solutions that reduce the number of classes, which can greatly accelerate algorithm design.
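    The classified least-mean-square training at the core of this framework can be sketched compactly: classify each local patch into a content class, then solve an ordinary least-squares problem per class for the filter coefficients. The sketch below uses a 1-bit ADRC code as the classifier, a common choice in Kondo-style classified filtering; the function names and patch size are assumptions, not the thesis code.

        import numpy as np

        def adrc_class(patch):
            """1-bit ADRC: threshold each pixel against the patch mean."""
            bits = (patch.ravel() >= patch.mean()).astype(int)
            return int("".join(map(str, bits)), 2)

        def train_classified_lms(patches, targets):
            """Least-squares filter coefficients, trained per content class."""
            groups = {}
            for p, t in zip(patches, targets):
                groups.setdefault(adrc_class(p), []).append((p.ravel(), t))
            filters = {}
            for c, samples in groups.items():
                X = np.array([x for x, _ in samples])  # N x k design matrix
                y = np.array([t for _, t in samples])  # N desired outputs
                filters[c], *_ = np.linalg.lstsq(X, y, rcond=None)
            return filters

        def enhance_pixel(patch, filters, fallback):
            """Apply the filter of the patch's class (fallback if unseen)."""
            w = filters.get(adrc_class(patch), fallback)
            return float(patch.ravel() @ w)

    With 3 x 3 patches this gives 9 coefficients per class and up to 2^9 = 512 classes, which illustrates why reducing the number of classes matters.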

    Multimedia Forensic Analysis via Intrinsic and Extrinsic Fingerprints

    Digital imaging has experienced tremendous growth in recent decades, and digital images are used in a growing number of applications. With the increasing popularity of imaging devices and the availability of low-cost image editing software, the integrity of image content can no longer be taken for granted. A number of forensic and provenance questions arise: how an image was generated; where it came from; and what has been done to it since its creation, by whom, when and how. This thesis presents two sets of techniques to address these questions, via intrinsic and extrinsic fingerprints.

    The first part of the thesis introduces a new methodology based on intrinsic fingerprints for forensic analysis of digital images. The proposed method is motivated by the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on the final output data. We present methods to identify these intrinsic fingerprints via component forensic analysis, and demonstrate that the traces can serve as useful features for forensic applications such as building a robust device identifier and identifying potential technology infringement or licensing. Building upon component forensics, we develop a general authentication and provenance framework to reconstruct the processing history of digital images. We model post-device processing as a manipulation filter and estimate its coefficients using a linear time-invariant approximation. The absence of in-device fingerprints, the presence of new post-device fingerprints, or any inconsistency in the estimated fingerprints across different regions of the test image all suggest that the image is not a direct device output and has possibly undergone some kind of processing, such as content tampering or steganographic embedding, after capture.

    While component forensics is widely applicable, it has performance limits. To understand the fundamental limits of component forensics, we develop a new theoretical framework based on estimation and pattern classification theories, and define formal notions of forensic identifiability and classifiability of components. We show that the proposed framework provides a solid foundation for studying information forensics and helps design optimal input patterns that improve parameter estimation accuracy via semi non-intrusive forensics.

    The final part of the thesis investigates a complementary extrinsic approach via image hashing that can be used for content-based image authentication and other media security applications. We show that the proposed hashing algorithm is robust to common signal processing operations, and we present a systematic evaluation of the security of the image hash against estimation and forgery attacks.
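    The manipulation-filter estimation described above can be illustrated with a small least-squares fit: given a reference device output and a test image, estimate a short FIR kernel that maps one to the other, then compare kernels fitted on different regions, since inconsistent kernels hint at local processing. This is only a sketch of the linear time-invariant approximation under assumed names, not the thesis implementation.

        import numpy as np

        def estimate_manipulation_filter(reference, observed, k=3):
            """Least-squares fit of a k x k kernel h so that, over the
            valid interior region, observed ~= reference correlated with h."""
            H, W = reference.shape
            r = k // 2
            rows, rhs = [], []
            for i in range(r, H - r):
                for j in range(r, W - r):
                    rows.append(reference[i-r:i+r+1, j-r:j+r+1].ravel())
                    rhs.append(observed[i, j])
            A, b = np.array(rows), np.array(rhs)
            h, *_ = np.linalg.lstsq(A, b, rcond=None)
            return h.reshape(k, k)

        # Kernels estimated on disjoint regions should agree for an
        # untouched image; a large difference flags local processing.
        # h_top = estimate_manipulation_filter(ref[:128], img[:128])
        # h_bot = estimate_manipulation_filter(ref[128:], img[128:])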

    Framing digital image credibility: image manipulation problems, perceptions and solutions

    Image manipulation is subverting the credibility of photographs as a whole. Currently there is no practical solution for asserting the authenticity of a photograph. People express concern about this when asked, but continue to operate in a ‘business as usual’ fashion. While a range of digital forensic technologies has been developed to detect falsified digital photographs, such technologies begin with ‘sourceless’ images and conclude with results expressed in equivocal terms of probability, without addressing the meaning and content of the image. It is notable that there is extensive research into computer-based image forgery detection, but very little research into how we as humans perceive, or fail to perceive, these forgeries when we view them. The survey, eye-gaze tracking experiments and neural network analysis undertaken in this research contribute to this limited pool of knowledge. The research described in this thesis investigates human perceptions of images that are manipulated and, by comparison, images that are not. The data collected, and their analyses, demonstrate that humans are poor at identifying that an image has been manipulated. I consider some of the implications of digital image manipulation, explore current approaches to image credibility, and present a potential digital image authentication framework that uses technology and tools exploiting social factors such as reputation and trust to technologically package/wrap images with social assertions of authenticity and surfaced metadata information.

    The thesis is organised into six chapters.

    Chapter 1: Introduction. I briefly introduce the history of photography, highlighting its importance as reportage, and discuss how it has changed from its introduction in the early 19th century to today. I discuss photo manipulation and consider how it has changed along with photography. I describe the relevant literature on image authentication and on the use of eye-gaze tracking and neural networks in identifying the role of human vision in detecting image manipulation, and I situate my research within this context.

    Chapter 2: Literature review. I describe the various types of image manipulation, giving examples, and then canvas the literature to map the landscape of image manipulation problems and extant solutions, namely:
    • the nature of image manipulation,
    • investigations of human perceptions of image manipulation,
    • eye-gaze tracking and manipulated images,
    • known efforts to create solutions for preserving unadulterated photographic representations and the meanings they hold.
    Finally, I position my research activities within the context of the literature.

    Chapter 3: The research. I describe the survey and experiments I undertook to investigate attitudes toward image manipulation, to research human perceptions of manipulated and unmanipulated images, and to trial elements of a new wrapper-style file format that I call .msci (mobile self-contained image), designed to address image authenticity issues. Methods, results and discussion for each element are presented both in explanatory text and in the papers resulting from the experiments.

    Chapter 4: Analysis of eye-gaze data using classification neural networks. I describe pattern-classifying neural network analysis applied to selected data from the experiments, and the insights this analysis provided into the opaque realm of cognitive perception as seen through the lens of eye gaze.

    Chapter 5: Discussion. I synthesise and discuss the outcomes of the survey and experiments, and consider the need for a distinction between photographs and photo art. I offer a theoretical formula with which the overall authenticity of an image can be assessed. In addition, I present a potential image authentication framework built around the .msci file format, designed in light of my investigation of the image manipulation problem space and the experimental work undertaken in this research.

    Chapter 6: Conclusions and future work. The thesis concludes with a summary of the outcomes of my research and a consideration of the future experimentation needed to expand on the insights gained to date. I also note some ways forward in developing an image authentication framework to address the ongoing problem of image authenticity.
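    As a rough illustration of the Chapter 4 analysis, a small pattern-classifying network can be trained on per-viewing gaze features to predict whether the viewed image had been manipulated. Everything below, including the three features, the synthetic data and the network size, is an assumption made for illustration; it is not the thesis's model or data.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        # Hypothetical per-viewing features: fixation count, mean fixation
        # duration, and dwell fraction on the (possibly) manipulated region.
        X = rng.normal(size=(200, 3))
        # Synthetic labels loosely tied to the dwell feature (1 = manipulated).
        y = (X[:, 2] + 0.5 * rng.normal(size=200) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                            random_state=0)
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))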