
    Multum in parvo: Toward a generic compression method for binary images.

    Data compression remains an active field of research, as the need to store and retrieve data efficiently, at minimum time and cost, persists. Lossless and lossy compression of bi-level data, such as binary images, is equally important. In this work, we explore a generic, application-independent method for lossless binary image compression. The first component of the proposed algorithm is a predetermined fixed-size codebook comprising 8 x 8-bit blocks of binary images along with corresponding codes of shorter length. Two variants of the codebook, one based on Huffman codes and one on arithmetic codes, yield considerable compression ratios for various binary images. To attain higher compression, we introduce a second component, row-column reduction coding, which removes additional redundancy. The proposed method is tested on two major areas involving bi-level data. The first area of application consists of binary images; empirical results suggest that our algorithm outperforms the standard JBIG2 by at least 5% on average. The second area involves images consisting of a predetermined number of discrete colors, such as digital maps and graphs. By separating such images into binary layers, we employed our algorithm and attained efficient compression down to 0.035 bits per pixel. The original print copy of this thesis may be available here: http://wizard.unbc.ca/record=b173649
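
    The block-codebook idea is straightforward to prototype. The following Python sketch is illustrative only, not the thesis's algorithm: it builds a Huffman codebook over the 8 x 8 blocks of a single image (the proposed method uses a predetermined, application-independent codebook and adds row-column reduction coding on top), and it ignores the cost of storing the codebook itself.

        import heapq
        from collections import Counter

        import numpy as np

        def blocks_8x8(img):
            """Yield each 8 x 8 block of a binary image as a hashable byte string."""
            h, w = img.shape
            for y in range(0, h - h % 8, 8):
                for x in range(0, w - w % 8, 8):
                    yield img[y:y+8, x:x+8].tobytes()

        def huffman_code(freqs):
            """Return {symbol: bitstring} built from a frequency table."""
            heap = [[n, i, [sym, ""]] for i, (sym, n) in enumerate(freqs.items())]
            heapq.heapify(heap)
            if len(heap) == 1:                      # degenerate one-symbol image
                return {heap[0][2][0]: "0"}
            while len(heap) > 1:
                lo, hi = heapq.heappop(heap), heapq.heappop(heap)
                for pair in lo[2:]:
                    pair[1] = "0" + pair[1]         # extend codes on the 0-branch
                for pair in hi[2:]:
                    pair[1] = "1" + pair[1]         # extend codes on the 1-branch
                heapq.heappush(heap, [lo[0] + hi[0], lo[1], *lo[2:], *hi[2:]])
            return dict(heap[0][2:])

        img = (np.random.rand(256, 256) > 0.9).astype(np.uint8)  # sparse binary image
        code = huffman_code(Counter(blocks_8x8(img)))
        bits = sum(len(code[b]) for b in blocks_8x8(img))
        print(f"{bits} coded bits vs {img.size} raw bits")

    Frequent blocks (here, the all-zero block of a sparse image) receive the shortest codes, which is where the compression gain comes from.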

    Transmission of Images over Noisy Channels Using Error-resilient Wavelet Coding and Forward Error Correction

    A novel embedded wavelet coding scheme is proposed for the transmission of images over unreliable channels. The proposed scheme partitions information into a number of layers that can be decoded independently, provided that a small amount of critical, highly protected information is first transmitted to the decoder without error. Forward Error Correction is used in conjunction with the error-resilient source coder to protect the compressed stream. Unlike many other robust coding schemes presented to date, the proposed scheme is able to decode portions of the bitstream even after the occurrence of uncorrectable errors. This coding strategy is well suited for use with block coding schemes such as those defined by the JPEG2000 standard. The proposed scheme is compared with other robust image coders and is shown to perform well for the transmission of images over memoryless channels.
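
    As a toy illustration of the unequal-error-protection idea (a generic Python sketch, not the paper's coder): a short critical header is heavily protected so the decoder can always synchronize, while each independently decodable layer carries its own length and checksum, so layers that arrived intact remain usable even after an uncorrectable error later in the stream.

        import zlib

        def protect_header(header: bytes, copies: int = 5) -> bytes:
            """Strong protection for the critical part: a simple repetition code."""
            return header * copies

        def recover_header(stream: bytes, length: int, copies: int = 5) -> bytes:
            """Majority vote per byte position across the repeated copies."""
            votes = [stream[i * length:(i + 1) * length] for i in range(copies)]
            return bytes(max(set(col), key=col.count) for col in zip(*votes))

        def frame_layer(layer: bytes) -> bytes:
            """Prefix a layer with its length and CRC32 so errors are detectable."""
            return (len(layer).to_bytes(4, "big")
                    + zlib.crc32(layer).to_bytes(4, "big") + layer)

        def decode_layers(stream: bytes):
            """Return the prefix of layers that survived the channel intact."""
            ok, pos = [], 0
            while pos + 8 <= len(stream):
                n = int.from_bytes(stream[pos:pos + 4], "big")
                crc = int.from_bytes(stream[pos + 4:pos + 8], "big")
                layer = stream[pos + 8:pos + 8 + n]
                if len(layer) < n or zlib.crc32(layer) != crc:
                    break                    # uncorrectable error: stop here,
                ok.append(layer)             # everything before it still decodes
                pos += 8 + n
            return ok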

    The Unified-FFT Method for Fast Solution of Integral Equations as Applied to Shielded-Domain Electromagnetics

    Electromagnetic (EM) solvers are widely used within computer-aided design (CAD) to improve and ensure the success of circuit designs. Unfortunately, due to the complexity of Maxwell's equations, they are often computationally expensive. While considerable progress has been made in the realm of speed-enhanced EM solvers, these fast solvers generally achieve their results through methods that introduce additional error components by way of geometric approximations, sparse-matrix approximations, multilevel decomposition of interactions, and more. This work introduces a new method, Unified-FFT (UFFT). A derivative of the method of moments, UFFT scales as O(N log N) and achieves fast analysis through the unique combination of FFT-enhanced matrix fill operations (MFO) with FFT-enhanced matrix solve operations (MSO). In this work, two versions of UFFT are developed: UFFT-Precorrected (UFFT-P) and UFFT-Grid Totalizing (UFFT-GT). UFFT-P uses precorrected FFT for MSO and allows the use of basis functions that do not conform to a regular grid. UFFT-GT uses conjugate gradient FFT for MSO and can reduce the error of the solution down to machine precision. The main contribution of UFFT-P is a fast solver that utilizes FFT for both MFO and MSO. It is demonstrated in this work not only to provide simulation results for large problems considerably faster than state-of-the-art commercial tools, but also to be capable of simulating geometries that are too complex for conventional simulation. In UFFT-P these benefits come at the expense of a minor penalty to accuracy. UFFT-GT contains further contributions, as it demonstrates that such a fast solver can be accurate to numerical precision compared with a full, direct analysis. It is shown to provide even more algorithmic efficiency and faster performance than UFFT-P. UFFT-GT makes an additional contribution in that it is developed not only for planar geometries but also for the case of multilayered dielectrics and metallization. This functionality is particularly useful for multilayered printed circuit boards (PCBs) and integrated circuits (ICs). Finally, UFFT-GT contributes a 3D planar solver, which allows current to be discretized in the z-direction. This enables similarly fast and accurate simulation with the inclusion of some 3D features, such as vias connecting metallization planes.
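
    The core trick behind conjugate-gradient-FFT solvers is easy to show in isolation. In the Python sketch below (a generic illustration with a made-up kernel and grid, not UFFT itself), the system matrix is Toeplitz because the kernel is translation invariant on a regular grid, so the matrix-vector product inside conjugate gradient costs O(N log N) via FFT instead of O(N^2) for a dense product:

        import numpy as np

        n = 256
        kernel = np.exp(-np.arange(n) / 16.0)   # stand-in translation-invariant kernel
        col = np.concatenate([kernel, [0.0], kernel[:0:-1]])  # circulant embedding
        K = np.fft.fft(col)                     # kernel spectrum, precomputed once

        def matvec(x):
            """Toeplitz matrix-vector product via zero-padded FFT convolution."""
            return np.fft.ifft(K * np.fft.fft(x, len(col)))[:n].real

        def cg(matvec, b, tol=1e-12, maxiter=2000):
            """Plain conjugate gradient driven only by the fast matvec."""
            x = np.zeros_like(b)
            r = b - matvec(x)
            p, rr = r.copy(), r @ r
            for _ in range(maxiter):
                Ap = matvec(p)
                alpha = rr / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rr_new = r @ r
                if np.sqrt(rr_new) < tol:       # residual down to machine level
                    break
                p = r + (rr_new / rr) * p
                rr = rr_new
            return x

        b = np.random.rand(n)
        x = cg(matvec, b)
        print("residual:", np.linalg.norm(matvec(x) - b))

    Because the iteration runs to a machine-level residual rather than truncating an approximation, accuracy is limited only by floating-point precision, mirroring the UFFT-GT property described above.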

    Prioritizing Content of Interest in Multimedia Data Compression

    Image and video compression techniques make data transmission and storage in digital multimedia systems more efficient and feasible given a system's limited storage and bandwidth. Many generic image and video compression techniques, such as JPEG and H.264/AVC, have been standardized and are now widely adopted. Despite their great success, we observe that these standard compression techniques are not the best solution for data compression in special types of multimedia systems such as microscopy videos and low-power wireless broadcast systems. In these application-specific systems, where the content of interest in the multimedia data is known and well defined, we should rethink the design of the data compression pipeline. We hypothesize that by identifying and prioritizing multimedia data's content of interest, new compression methods can be invented that are far more effective than standard techniques. In this dissertation, a set of new data compression methods based on the idea of prioritizing the content of interest is proposed for three different kinds of multimedia systems. I show that the key to designing efficient compression techniques in all three cases is to prioritize the content of interest in the data; its definition depends on the application. First, I show that for microscopy videos, the content of interest consists of the spatial regions of the video frame whose pixels contain more than just noise. Keeping those regions at high quality and discarding the remaining information yields a novel microscopy video compression technique. Second, I show that for a Bluetooth low energy beacon based system, practical multimedia data storage and transmission become possible by prioritizing the content of interest; I designed custom image compression techniques that preserve the edges in a binary image, or the foreground regions of a color image of indoor or outdoor objects. Last, I present a new indoor Bluetooth low energy beacon based augmented reality system that integrates a 3D moving-object compression method that prioritizes the content of interest.
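
    A toy version of the noise-only criterion (not the dissertation's codec) makes the idea concrete: score each block of a frame by its variance, keep high-variance blocks untouched as content of interest, and flatten low-variance blocks to their mean, which costs almost nothing to encode.

        import numpy as np

        def prioritize(frame, block=16, thresh=5.0):
            """Keep high-variance blocks; flatten noise-only blocks to their mean."""
            out = frame.astype(np.float32).copy()
            h, w = frame.shape
            kept = 0
            for y in range(0, h - h % block, block):
                for x in range(0, w - w % block, block):
                    tile = out[y:y+block, x:x+block]
                    if tile.var() < thresh:     # background: noise only
                        tile[:] = tile.mean()
                    else:                       # content of interest
                        kept += 1
            return out, kept

        frame = np.random.randn(128, 128).astype(np.float32)  # stand-in noise frame
        frame[32:64, 32:64] *= 10.0             # high-variance "cell" region
        recon, kept = prioritize(frame)
        print("foreground blocks kept:", kept)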

    Compression of dynamic polygonal meshes with constant and variable connectivity

    This work was supported by the projects 20-02154S and 17-07690S of the Czech Science Foundation and SGS-2019-016 of the Czech Ministry of Education. Polygonal mesh sequences with variable connectivity are incredibly versatile dynamic surface representations, as they allow a surface to change topology or details to suddenly appear or disappear. This, however, comes at the cost of large storage size. Current compression methods exploit the temporal coherence of general data inefficiently, because the correspondence between two subsequent frames might not be bijective. We study the current state of the art, including the special class of mesh sequences for which connectivity is static, and we also survey the state of the art in the related field of dynamic point cloud sequences. Further, we point out parts of the compression pipeline with potential for improvement. We present the progress we have already made in designing a temporal model capturing the temporal coherence of the sequence, and we point out directions for future research.
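
    For the constant-connectivity case, the simplest way to exploit temporal coherence is closed-loop delta coding of the geometry (a minimal Python sketch under that assumption, not the compressors studied here); variable connectivity is precisely where this breaks down, because a vertex in frame t may have no counterpart in frame t+1.

        import numpy as np

        def encode(frames, step=1e-3):
            """frames: list of (V, 3) float arrays sharing one vertex order."""
            deltas, recon = [], np.zeros_like(frames[0])
            for cur in frames:
                d = np.round((cur - recon) / step).astype(np.int32)
                deltas.append(d)                # small ints: cheap to entropy-code
                recon = recon + d * step        # track decoder state (no drift)
            return deltas

        def decode(deltas, step=1e-3):
            recon, frames = np.zeros_like(deltas[0], dtype=np.float64), []
            for d in deltas:
                recon = recon + d * step
                frames.append(recon.copy())
            return frames

    Predicting against the decoder's reconstruction rather than the true previous frame keeps quantization error from accumulating across the sequence.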

    Visual Tracking and Motion Estimation for an On-orbit Servicing of a Satellite

    This thesis addresses visual tracking of a non-cooperative as well as a partially cooperative satellite, to enable close-range rendezvous between a servicer and a target satellite. Visual tracking and estimation of the relative motion between a servicer and a target satellite are critical abilities for rendezvous and proximity operations such as repair and deorbiting. For this purpose, Lidar has been widely employed in cooperative rendezvous and docking missions. Despite its robustness to harsh space illumination, Lidar is heavy, contains rotating parts, and consumes considerable power, which conflicts with the stringent constraints of satellite design. On the other hand, inexpensive on-board cameras can provide an effective solution, working across a wide range of distances. However, space lighting conditions are particularly challenging for image-based tracking algorithms: direct sunlight exposure and the glossy surface of the satellite create strong reflections and image saturation, which complicate the tracking procedure. To address these difficulties, the relevant literature is examined in the fields of computer vision and satellite rendezvous and docking. Two classes of problems are identified, and solutions, implemented on a standard computer, are provided. First, in the absence of a geometric model of the satellite, the thesis presents a robust feature-based method with prediction capability in case of insufficient features, relying on a point-wise motion model. Second, we employ a robust model-based hierarchical position localization method to handle the change of image features across a range of distances and to localize an attitude-controlled (partially cooperative) satellite. Moreover, the thesis presents a pose tracking method that addresses ambiguities in edge matching, and a pose detection algorithm based on appearance model learning. For the validation of the methods, real camera images and ground-truth data, generated with a laboratory test bed that approximates space conditions, are used. The experimental results indicate that camera-based methods provide robust and accurate tracking for the approach of malfunctioning satellites despite the difficulties associated with specularities and direct sunlight. Exceptional lighting conditions associated with the sun angle are also discussed, with the aim of achieving a fully reliable localization system over an entire mission.
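
    The prediction fallback can be illustrated with a generic point-wise constant-velocity model (an illustrative stand-in, not the thesis's tracker): while matching succeeds, the velocity estimate is refreshed; when glare or saturation leaves too few matched features, the points coast forward on their last estimated motion.

        import numpy as np

        class PointPredictor:
            def __init__(self, pts):
                self.pts = np.asarray(pts, dtype=np.float64)  # (N, 2) image points
                self.vel = np.zeros_like(self.pts)

            def update(self, measured):
                """Matching succeeded: refresh the smoothed velocity estimate."""
                measured = np.asarray(measured, dtype=np.float64)
                self.vel = 0.7 * self.vel + 0.3 * (measured - self.pts)
                self.pts = measured

            def predict(self):
                """Too few features matched: coast on the last velocity."""
                self.pts = self.pts + self.vel
                return self.pts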

    A system based on compression techniques for the recognition of handwritten digits

    The recognition of handwritten digits is a human-acquired ability. With little effort, a human can properly recognize, in milliseconds, a sequence of handwritten digits. With the help of a computer, the task of handwriting recognition can be easily automated, improving and speeding up a significant number of processes. Postal mail sorting, bank check verification, and handwritten digit data entry are among a wide group of applications that can be performed in a more effective and automated way. In recent years, a number of techniques and methods have been proposed to automate the handwritten digit recognition mechanism. However, this challenging image recognition problem is typically solved with complex and computationally demanding machine learning techniques, as is the case with deep learning. This dissertation introduces a novel solution to the problem of handwritten digit recognition, using metrics of similarity between digit images. The metrics are computed based on data compression, namely through the use of Finite Context Models.
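
    Compression-based similarity is remarkably compact to express. The Python sketch below uses zlib as a stand-in compressor purely for illustration (the dissertation computes its metrics with Finite Context Models): two digit images that compress well together are likely to show the same digit.

        import zlib

        def ncd(a: bytes, b: bytes) -> float:
            """Normalized Compression Distance: near 0 = similar, near 1 = unrelated."""
            ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
            cab = len(zlib.compress(a + b))
            return (cab - min(ca, cb)) / max(ca, cb)

        def classify(query: bytes, references: dict) -> str:
            """references maps a digit label to raw image bytes; nearest NCD wins."""
            return min(references, key=lambda label: ncd(query, references[label]))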

    NASA Tech Briefs, September 2009

    Topics covered include: Filtering Water by Use of Ultrasonically Vibrated Nanotubes; Computer Code for Nanostructure Simulation; Functionalizing CNTs for Making Epoxy/CNT Composites; Improvements in Production of Single-Walled Carbon Nanotubes; Progress Toward Sequestering Carbon Nanotubes in PmPV; Two-Stage Variable Sample-Rate Conversion System; Estimating Transmitted-Signal Phase Variations for Uplink Array Antennas; Board Saver for Use with Developmental FPGAs; Circuit for Driving Piezoelectric Transducers; Digital Synchronizer without Metastability; Compact, Low-Overhead, MIL-STD-1553B Controller; Parallel-Processing CMOS Circuitry for M-QAM and 8PSK TCM; Differential InP HEMT MMIC Amplifiers Embedded in Waveguides; Improved Aerogel Vacuum Thermal Insulation; Fluoroester Co-Solvents for Low-Temperature Li+ Cells; Using Volcanic Ash to Remove Dissolved Uranium and Lead; High-Efficiency Artificial Photosynthesis Using a Novel Alkaline Membrane Cell; Silicon Wafer-Scale Substrate for Microshutters and Detector Arrays; Micro-Horn Arrays for Ultrasonic Impedance Matching; Improved Controller for a Three-Axis Piezoelectric Stage; Nano-Pervaporation Membrane with Heat Exchanger Generates Medical-Grade Water; Micro-Organ Devices; Nonlinear Thermal Compensators for WGM Resonators; Dynamic Self-Locking of an OEO Containing a VCSEL; Internal Water Vapor Photoacoustic Calibration; Mid-Infrared Reflectance Imaging of Thermal-Barrier Coatings; Improving the Visible and Infrared Contrast Ratio of Microshutter Arrays; Improved Scanners for Microscopic Hyperspectral Imaging; Rate-Compatible LDPC Codes with Linear Minimum Distance; PrimeSupplier Cross-Program Impact Analysis and Supplier Stability Indicator Simulation Model; Integrated Planning for Telepresence With Time Delays; Minimizing Input-to-Output Latency in Virtual Environment; Battery Cell Voltage Sensing and Balancing Using Addressable Transformers; Gaussian and Lognormal Models of Hurricane Gust Factors; Simulation of Attitude and Trajectory Dynamics and Control of Multiple Spacecraft; Integrated Modeling of Spacecraft Touch-and-Go Sampling; Spacecraft Station-Keeping Trajectory and Mission Design Tools; Efficient Model-Based Diagnosis Engine; and DSN Simulator.

    The Spatial Inductive Bias of Deep Learning

    In the past few years, Deep Learning has become the method of choice for producing state-of-the-art results on machine learning problems involving images, text, and speech. The explosion of interest in these techniques has resulted in a large number of successful applications of deep learning, but relatively few studies exploring the nature of, and reasons for, that success. This dissertation is motivated by a desire to understand and reproduce the performance characteristics of deep learning systems, particularly Convolutional Neural Networks (CNNs). One factor in the success of CNNs is that they have an inductive bias assuming a certain type of spatial structure is present in the data. We give a formal definition of how this type of spatial structure can be characterized, along with statistical tools for testing whether spatial structure is present in a given dataset. These tools are applied to several standard image datasets, and the results are analyzed. We demonstrate that CNNs rely heavily on the presence of such structure, and then show several ways that a similar bias can be introduced into other methods. The first is a partition-based method for training Restricted Boltzmann Machines and Deep Belief Networks, which speeds up convergence significantly without changing the overall representational power of the network. The second is a deep, partitioned version of Principal Component Analysis, which demonstrates that a spatial bias can be useful even in a model that is non-connectionist and completely linear. The third is a variation on projective Random Forests, which shows that we can introduce a spatial bias with only minor changes to the algorithm and no externally imposed partitioning. In each case, we show that introducing a spatial bias results in improved performance on spatial data.
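
    One simple statistic of the kind described (a generic illustration, not the dissertation's test) compares pixel correlation at short versus long range: spatially structured images show strong short-range correlation, and randomly permuting pixel positions weakens it sharply.

        import numpy as np

        def pair_correlation(images, dx):
            """Mean correlation between pixels dx columns apart, over a batch."""
            a = images[:, :, :-dx].ravel()
            b = images[:, :, dx:].ravel()
            return np.corrcoef(a, b)[0, 1]

        imgs = np.random.rand(100, 28, 28).cumsum(axis=2)  # smooth stand-in "images"
        print("d=1: ", pair_correlation(imgs, 1))          # strong local structure
        print("d=20:", pair_correlation(imgs, 20))         # weaker at long range

        shuffled = np.apply_along_axis(np.random.permutation, 2, imgs)
        print("shuffled d=1:", pair_correlation(shuffled, 1))  # markedly weaker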