
    3D Reconstruction of Small Solar System Bodies using Rendered and Compressed Images

    Synthetic image generation and reconstruction of Small Solar System Bodies, and the influence of compression on both, is becoming an important study topic because of the advent of small spacecraft in deep-space missions. Most of these missions are fly-by scenarios, as in the Comet Interceptor mission. Because of the limited data budgets of small satellite missions, maximising scientific return requires investigating the effects of lossy compression. A preliminary simulation pipeline had been developed that uses physics-based rendering combined with procedural terrain generation to overcome the limitations of commonly used rendering methods such as the Hapke model. The rendered Small Solar System Body images are combined with a star background and photometrically calibrated to represent realistic imagery. A Structure-from-Motion pipeline then reconstructs three-dimensional models from the rendered images. In this work, the preliminary simulation pipeline was developed further into the Space Imaging Simulator for Proximity Operations software package, and a compression package was added. The compression package was used to investigate the effects of lossy compression on the reconstructed models and the data reduction achievable with lossy compression relative to lossless compression. Several scenarios with fly-by distances ranging from 50 km to 400 km and body sizes of 1 km and 10 km were simulated and compressed losslessly with PNG and at several quality levels of lossy compression with JPEG 2000. It was found that low compression ratios introduce artefacts resembling random noise, while high compression ratios remove surface features. The random-noise artefacts introduced by low compression ratios frequently increased the number of vertices and faces of the reconstructed three-dimensional model.
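
    As an illustration of the compression comparison described above, the sketch below encodes a single frame losslessly with PNG and at several JPEG 2000 quality settings and reports the resulting compression ratios; it assumes an OpenCV/NumPy environment, and the synthetic stand-in image and the quality values are arbitrary choices, not the settings used in the paper.

        # Compare lossless PNG against several JPEG 2000 quality levels for one frame.
        import cv2
        import numpy as np

        # Synthetic stand-in for a rendered Small Solar System Body frame.
        image = np.tile(np.arange(1024, dtype=np.uint8), (1024, 1))
        raw_bytes = image.size * image.itemsize

        # Lossless baseline: PNG at maximum compression effort.
        ok, png_buf = cv2.imencode(".png", image, [cv2.IMWRITE_PNG_COMPRESSION, 9])
        print(f"PNG (lossless): {raw_bytes / len(png_buf):.1f}:1")

        # Lossy JPEG 2000 at a few quality settings (parameter is scaled by 1000).
        for quality in (1000, 500, 100, 50):
            ok, jp2_buf = cv2.imencode(
                ".jp2", image, [cv2.IMWRITE_JPEG2000_COMPRESSION_X1000, quality]
            )
            print(f"JPEG 2000 q={quality}: {raw_bytes / len(jp2_buf):.1f}:1")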

    Forensic Video Analytic Software

    Law enforcement officials depend heavily on Forensic Video Analytic (FVA) software in their evidence extraction process. However, present-day FVA software is complex, time consuming, equipment dependent and expensive, and developing countries struggle to gain access to it. The term forensic pertains to the application of scientific methods to the investigation of crime through post-processing, whereas surveillance is the close monitoring of real-time feeds. The principal objective of this Final Year Project was to develop an efficient and effective FVA software that addresses these shortcomings, guided by a stringent and systematic review of scholarly research papers, online databases and legal documentation. The scope spans multiple-object detection, multiple-object tracking, anomaly detection, activity recognition, tampering detection, general and specific image enhancement, and video synopsis. The methods employed include machine learning techniques, GPU acceleration and an efficient, integrated architecture for both real-time and post-processing operation; CNNs, GMMs, multithreading and OpenCV C++ coding were used. The proposed methodology would substantially speed up the FVA process, especially through the novel video synopsis research arena. The project has resulted in three research outcomes: Moving Object Based Collision Free Video Synopsis, Forensic and Surveillance Analytic Tool Architecture, and Tampering Detection Inter-Frame Forgery. The results include forensic and surveillance panel outcomes, with emphasis on video synopsis and the Sri Lankan context. The principal conclusions concern optimization and efficient algorithm integration to overcome limitations in processing power and memory, and the compromise between real-time performance and accuracy. A demo video of the Forensic Video Analytic software is available at https://www.youtube.com/watch?v=vsZlYKQxSk
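
    The project itself used OpenCV in C++; purely as an illustration of the GMM-based moving-object detection that feeds object-based video synopsis, here is a minimal Python sketch, where the video path, thresholds and minimum blob area are assumptions.

        # Detect moving objects with a Gaussian Mixture Model background subtractor.
        import cv2

        capture = cv2.VideoCapture("surveillance.mp4")   # hypothetical input video
        subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

        while True:
            ok, frame = capture.read()
            if not ok:
                break
            mask = subtractor.apply(frame)                               # GMM foreground mask
            mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            for contour in contours:
                if cv2.contourArea(contour) > 500:                       # ignore small blobs
                    x, y, w, h = cv2.boundingRect(contour)
                    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("moving objects", frame)
            if cv2.waitKey(1) == 27:                                     # Esc quits
                break

        capture.release()
        cv2.destroyAllWindows()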

    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. To establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used to represent these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
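
    As a small worked example of the vector-space treatment of color mentioned above, the sketch below converts an 8-bit sRGB pixel to CIE XYZ using the standard sRGB (D65) matrix; the sample pixel value is arbitrary.

        # Linearize gamma-encoded sRGB channels, then apply the 3x3 sRGB-to-XYZ matrix.
        import numpy as np

        SRGB_TO_XYZ = np.array([
            [0.4124, 0.3576, 0.1805],
            [0.2126, 0.7152, 0.0722],
            [0.0193, 0.1192, 0.9505],
        ])

        def srgb_to_xyz(rgb8):
            c = np.asarray(rgb8, dtype=float) / 255.0
            linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
            return SRGB_TO_XYZ @ linear

        print(srgb_to_xyz([200, 120, 40]))   # arbitrary sample pixel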

    Importance of Defining ROI: A Semi-Automated Algorithm for Predicting Importance Maps

    The importance of defining regions of interest (ROI) in grayscale images is clearly established. A new way of apportioning distortion between ROI and non-ROI regions is presented. The various factors influencing the ROI have been studied, and the relationship between these factors and perceived interest has been established. A new algorithm is proposed to produce an importance map for a given grayscale image, taking into account factors such as size, location, blur and contrast. The results of the algorithm were found to be satisfactory. A comparison was made between the proposed algorithm and two other widely used algorithms, and the proposed algorithm gave significantly better results.
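
    A rough sketch of how per-pixel factor maps could be weighted into a single importance map is given below; the weights, window sizes and the way contrast, blur and location are estimated are illustrative assumptions rather than the algorithm proposed in the paper, and the size factor is omitted because it would require object segmentation.

        # Weighted combination of per-pixel factor maps into an importance map.
        import numpy as np
        from scipy.ndimage import uniform_filter, gaussian_filter

        def importance_map(gray, weights=(0.4, 0.3, 0.3)):
            gray = gray.astype(float) / 255.0
            # Local contrast: standard deviation inside a 15x15 window.
            mean = uniform_filter(gray, size=15)
            contrast = np.sqrt(np.clip(uniform_filter(gray ** 2, size=15) - mean ** 2, 0, None))
            # Sharpness: detail removed by a Gaussian blur (low values = blurred regions).
            sharpness = np.abs(gray - gaussian_filter(gray, sigma=3))
            # Location prior: pixels near the image centre are weighted higher.
            h, w = gray.shape
            yy, xx = np.mgrid[0:h, 0:w]
            location = 1.0 - np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
            # Normalise each factor to [0, 1] and blend with the given weights.
            factors = [contrast, sharpness, location]
            factors = [(f - f.min()) / (f.max() - f.min() + 1e-9) for f in factors]
            return sum(wt * f for wt, f in zip(weights, factors))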

    Recent Advances in Steganography

    Steganography is the art and science of communicating in a way that hides the existence of the communication. Steganographic technologies are an important part of the future of Internet security and privacy on open systems such as the Internet. This book focuses on a relatively new field of study and introduces the reader to various concepts of steganography and steganalysis. It gives a brief history of steganography and surveys steganalysis methods with respect to their modeling techniques. Some new steganography techniques for hiding secret data in images are presented. Furthermore, steganography in speech is reviewed, and a new approach for hiding data in speech signals is introduced.
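
    To make the image-hiding idea concrete, here is a minimal least-significant-bit embedding sketch; LSB substitution is only one of many techniques in the literature and is not necessarily the method presented in the book.

        # Hide a byte string in the least significant bits of a grayscale cover image.
        import numpy as np

        def embed_lsb(cover, payload: bytes):
            bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
            flat = cover.flatten()                    # copy of the cover pixels
            if bits.size > flat.size:
                raise ValueError("payload too large for cover image")
            flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
            return flat.reshape(cover.shape)

        def extract_lsb(stego, num_bytes: int):
            bits = stego.flatten()[:num_bytes * 8] & 1
            return np.packbits(bits).tobytes()

        cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in cover image
        stego = embed_lsb(cover, b"secret")
        print(extract_lsb(stego, 6))   # b'secret'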

    The 1993 Space and Earth Science Data Compression Workshop

    The Earth Observing System Data and Information System (EOSDIS) is described in terms of its data volume, data rate and data distribution requirements. Opportunities for data compression in EOSDIS are discussed.

    Digital Image Processing

    This book presents several recent advances that are related to, or fall under the umbrella of, digital image processing, with the purpose of providing insight into the possibilities offered by digital image processing algorithms in various fields. The presented mathematical algorithms are accompanied by graphical representations and illustrative examples for enhanced readability. The chapters are written so that even a reader with basic experience and knowledge of digital image processing can properly understand the presented algorithms. At the same time, the information is structured so that fellow scientists can use it to push the development of the presented subjects even further.

    Verification of a Modular Graph-Based Image Processing System

    Electronic devices today have become complex; any non-trivial device consists of both hardware and software. Tightening time-to-market and cost requirements put pressure on the device development process: software and hardware need to be developed concurrently and must be verified in an early phase of product development. This thesis introduces a graph-based image processing system. An image processing system is a complex system that usually consists of software, firmware and hardware. The possibilities and methods of graph verification are investigated in this thesis. Graphs can be used to handle the complexity of the system by encapsulating the functionality of the underlying implementations. Graphs provide modularity and configurability that can be utilized in the development and verification of the system, and reuse of software is increased by the consistent, well-defined nature of graphs and their vertices. A shift left in software development can be enabled by verifying graph vertices in isolation on pre-silicon development platforms. In this thesis, image processing system graphs were also used in a real-life product development project. Graph verification was initiated early in the product development, and shift left was exercised by running the graph verification on several pre-silicon platforms. Functional, performance and stability testing was implemented, and both complete graphs and their vertices were verified in isolation. Graph verification provided many benefits to the product development: implementations could be tested in several different environments in isolation using only a light test framework, issues could be found and fixed early, and performance bottlenecks could be pinpointed and acted upon. With the foundations laid in this project, it would be possible in the future to take more advantage of graphs. More advanced automated image quality testing would allow efficient verification, and finer-granularity graphs would allow more configurability and more focused testing. Shift left could be increased further by adapting algorithm development to use graphs; this would narrow the gap between algorithms and the actual vertex implementations and also introduce the existing test infrastructure to algorithm development.
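
    A toy sketch of the graph idea described above is given below: vertices encapsulate processing stages behind a common interface, so complete graphs and individual vertices can be exercised with the same light test framework; the vertex names and the NumPy-based stages are illustrative assumptions, not the actual system.

        # A minimal graph of image-processing vertices; each vertex hides its
        # implementation behind process(), so it can be verified in isolation.
        import numpy as np

        class Vertex:
            def process(self, image):
                raise NotImplementedError

        class Denoise(Vertex):
            def process(self, image):
                # Crude outlier suppression standing in for a real denoiser.
                return np.clip(image, np.percentile(image, 1), np.percentile(image, 99))

        class Normalize(Vertex):
            def process(self, image):
                image = image.astype(float)
                return (image - image.min()) / (image.max() - image.min() + 1e-9)

        class Graph:
            def __init__(self, vertices):
                self.vertices = vertices          # linear chain for simplicity
            def run(self, image):
                for vertex in self.vertices:
                    image = vertex.process(image)
                return image

        # Full-graph test and per-vertex test share the same framework.
        frame = np.random.randint(0, 1024, (480, 640)).astype(np.uint16)
        assert Graph([Denoise(), Normalize()]).run(frame).max() <= 1.0
        assert Normalize().process(frame).min() == 0.0   # vertex verified in isolation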

    Advances in Image Pre-Processing to Improve Automated 3D Reconstruction

    Tools and algorithms for automated image processing and 3D reconstruction have become more and more available, making it possible to process any dataset of unoriented and markerless images. Typically, dense 3D point clouds (or textured 3D polygonal models) are produced in a reasonable processing time. In this paper, we evaluate how the radiometric pre-processing of image datasets (particularly in RAW format) can help improve the performance of state-of-the-art automated image processing tools. Besides a review of common pre-processing methods, an efficient pipeline based on color enhancement, image denoising, RGB-to-gray conversion and image content enrichment is presented. The performed tests, only partly reported for reasons of space, demonstrate how effective image pre-processing that considers the entire dataset under analysis can improve the automated orientation procedure and the dense 3D point cloud reconstruction, even in poor-texture scenarios.
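
    A compact sketch of a pre-processing chain of the kind described above (color enhancement, denoising, RGB-to-gray conversion, content enrichment) is shown below, assuming an OpenCV/Python environment; the white-balance and CLAHE steps and all parameter values are illustrative choices, not the exact pipeline of the paper.

        # Pre-process one image before feeding it to a Structure-from-Motion tool.
        import cv2

        def preprocess(path):
            bgr = cv2.imread(path, cv2.IMREAD_COLOR)
            # Color enhancement: simple white balance (requires the opencv-contrib package).
            balanced = cv2.xphoto.createSimpleWB().balanceWhite(bgr)
            # Denoising on the color image.
            denoised = cv2.fastNlMeansDenoisingColored(balanced, None, 5, 5, 7, 21)
            # RGB-to-gray conversion.
            gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
            # Content enrichment: CLAHE to boost weak texture for feature extraction.
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            return clahe.apply(gray)

        enhanced = preprocess("frame_0001.jpg")   # hypothetical dataset image
        cv2.imwrite("frame_0001_pre.png", enhanced)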