
    A Systolic Image Difference Algorithm for RLE-Compressed Images

    A new systolic algorithm that computes image differences in run-length encoded (RLE) format is described. The binary image difference operation is commonly used in many image processing applications, including automated inspection systems, character recognition, fingerprint analysis, and motion detection. The efficiency of these operations can be improved significantly with the availability of a fast systolic system that computes the image difference as described in this paper. It is shown that for images with a high similarity measure, the time complexity of the systolic algorithm is small and, in some cases, constant with respect to the image size. A formal proof of correctness for the algorithm is also given.

    A Systolic Algorithm to Process Compressed Binary Images

    A new systolic algorithm that computes image differences in run-length encoded (RLE) format is described. The binary image difference operation is commonly used in many image processing applications, including automated inspection systems, character recognition, fingerprint analysis, and motion detection. The efficiency of these operations can be improved significantly with the availability of a fast systolic system that computes the image difference as described in this paper. It is shown that for images with a high similarity measure, the time complexity of the systolic algorithm is small and, in some cases, constant with respect to the image size. The time for the systolic algorithm is proportional to the difference between the number of runs in the two images, while the time for the sequential algorithm is proportional to the total number of runs in the two images together. A formal proof of correctness for the algorithm is also given.

    Systolic Algorithm for Processing RLE Images

    The image difference operation is commonly used in on-line automated printed circuit board (PCB) inspection systems as well as many other image processing applications. In this paper, we describe a new systolic algorithm and its system architecture, which computes image differences in run-length encoded (RLE) format. The efficiency of this operation greatly affects the overall performance of the inspection system. It is shown that, for images with a high similarity measure, the time complexity of the systolic algorithm is a small constant. A formal proof of correctness for the algorithm is also given in the paper.
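
    The three entries above accelerate the binary image difference directly on run-length encoded data. As a point of reference only, the sketch below is a minimal sequential Python version of that operation (not the systolic algorithm from the papers); the (value, length) run representation and the function name are assumptions made for illustration:

        # Sequential sketch: pixel-wise XOR (difference) of two equal-length
        # binary rows stored as run-length encoded (value, length) pairs,
        # e.g. [(0, 5), (1, 3)] for the row 00000111.
        def rle_row_xor(row_a, row_b):
            result = []
            ia = ib = 0                  # indices into the two run lists
            rem_a = rem_b = 0            # pixels left in the current runs
            val_a = val_b = 0
            while ia < len(row_a) or rem_a:
                if rem_a == 0:
                    val_a, rem_a = row_a[ia]; ia += 1
                if rem_b == 0:
                    val_b, rem_b = row_b[ib]; ib += 1
                step = min(rem_a, rem_b)     # advance past the overlapping stretch
                out_val = val_a ^ val_b
                if result and result[-1][0] == out_val:
                    result[-1] = (out_val, result[-1][1] + step)  # merge adjacent runs
                else:
                    result.append((out_val, step))
                rem_a -= step
                rem_b -= step
            return result

        a = [(0, 4), (1, 4), (0, 8)]
        b = [(0, 4), (1, 6), (0, 6)]
        print(rle_row_xor(a, b))             # [(0, 8), (1, 2), (0, 6)]

    This sequential merge touches every run of both rows, which matches the sequential cost quoted above and is what the systolic design avoids when the two images are highly similar.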

    Dynamically Reconfigurable Systolic Array Accelerators: A Case Study with Extended Kalman Filter and Discrete Wavelet Transform Algorithms

    Field programmable gate arrays (FPGAs) are increasingly being adopted as the primary on-board computing system for autonomous deep space vehicles. There is a need to support several complex applications for navigation and image processing in a rapidly responsive on-board FPGA-based computer. This requires exploring and combining several design concepts such as systolic arrays, hardware-software partitioning, and partial dynamic reconfiguration. A microprocessor/co-processor design that can accelerate two single precision floating-point algorithms, the extended Kalman filter and a discrete wavelet transform, is presented. This research makes three key contributions. (i) A polymorphic systolic array framework comprising reconfigurable partial region-based sockets to accelerate algorithms amenable to being mapped onto linear systolic arrays. When implemented on a low end Xilinx Virtex4 SX35 FPGA, the design provides a speedup of at least 4.18x and 6.61x over a state of the art microprocessor used in spacecraft systems for the extended Kalman filter and discrete wavelet transform algorithms, respectively. (ii) Switchboxes to enable communication between static and partial reconfigurable regions, and a simple protocol to enable schedule changes when a socket's contents are dynamically reconfigured to alter the concurrency of the participating systolic arrays. (iii) A hybrid partial dynamic reconfiguration method that combines Xilinx early access partial reconfiguration, on-chip bitstream decompression, and bitstream relocation to enable fast scaling of systolic arrays on the PolySAF. This technique provided a 2.7x improvement in reconfiguration time compared to an off-chip partial reconfiguration technique that used a Flash card on the FPGA board, and a 44% improvement in BRAM usage compared to not using compression.
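
    As a generic illustration of the linear systolic arrays this framework targets (not the PolySAF sockets or the EKF/DWT mappings of the thesis), the Python sketch below simulates, cycle by cycle, a weight-stationary linear array computing an FIR convolution; the register arrangement and names are illustrative assumptions:

        def systolic_fir(x, w):
            """Weight-stationary linear systolic array computing
            y[n] = sum_i w[i] * x[n - i]. Partial sums hop one PE per cycle;
            inputs pass through two delay registers per PE, so sample x[n - i]
            meets the partial sum for y[n] exactly at PE i."""
            P, N = len(w), len(x)
            y_pipe = [0.0] * P                 # partial sum held at each PE
            x_pipe = [0.0] * (2 * P)           # two-stage input delay line per PE
            out = []
            for t in range(N + 2 * P):         # feed all samples, then flush
                x_in = x[t] if t < N else 0.0
                x_pipe = [x_in] + x_pipe[:-1]  # input advances one register
                # every PE performs one multiply-accumulate in parallel
                y_new = [y_pipe[i] + w[i] * x_pipe[2 * i] for i in range(P)]
                out.append(y_new[-1])          # value leaving the last PE
                y_pipe = [0.0] + y_new[:-1]    # sums hop right; a fresh sum enters
            return out[P - 1 : P - 1 + N + P - 1]   # drop pipeline fill/flush

        print(systolic_fir([1.0, 2.0], [3.0, 4.0]))  # [3.0, 10.0, 8.0]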

    Efficient data encoder for endoscopic imaging applications

    The invention of medical imaging technology revolutionized the process of diagnosing diseases and opened a new world for studying the inside of the human body. In order to capture images of different human organs, different devices have been developed. Gastro-endoscopy is an example of a medical imaging device which captures images of the human gastrointestinal tract. With the advancement of technology, the issues regarding such devices have started to be rectified. For example, with the invention of the swallowable pill camera, called Wireless Capsule Endoscopy (WCE), pain, examination time, and bleeding risk for patients have decreased radically. The development of such technologies and devices has continued, and the demand for instruments providing better performance grows over time. In the case of WCE, special feature requirements such as a small size (as small as an ordinary pill) and wireless transmission of the captured images impose restrictions on power consumption and area usage. This research focuses on reducing the image encoder hardware cost for endoscopic imaging applications. Several encoding algorithms have been studied and the comparative results are discussed. An efficient data encoder based on the Lempel-Ziv-Welch (LZW) algorithm is presented. The encoder is library-based, where the size of the library can be modified by the user, and hence the output data rate can be controlled according to the bandwidth requirement. The simulation is carried out with several endoscopic images, and the results show that a minimum compression ratio of 92.5% can be achieved with a minimum reconstruction quality of 30 dB. The hardware architecture and implementation results in a Field-Programmable Gate Array (FPGA) for the proposed window-based LZW are also presented. A new lossy LZW algorithm is proposed and implemented in FPGA, which provides promising results for such an application.
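
    As a hedged software model of the size-limited ("library-based") LZW encoding described here, the sketch below applies standard LZW with a user-set cap on dictionary size; the parameter names and cap value are assumptions, and the thesis's window-based and lossy hardware variants are not reproduced:

        def lzw_encode(data: bytes, max_dict_size: int = 4096):
            # Greedy LZW: emit the code of the longest known phrase, then add
            # that phrase plus one byte to the dictionary while room remains.
            dictionary = {bytes([i]): i for i in range(256)}
            next_code = 256
            codes = []
            phrase = b""
            for byte in data:
                candidate = phrase + bytes([byte])
                if candidate in dictionary:
                    phrase = candidate              # keep extending the match
                else:
                    codes.append(dictionary[phrase])
                    if next_code < max_dict_size:   # capped dictionary growth
                        dictionary[candidate] = next_code
                        next_code += 1
                    phrase = bytes([byte])
            if phrase:
                codes.append(dictionary[phrase])
            return codes

        print(lzw_encode(b"abababababababab", max_dict_size=512))

    Capping the dictionary bounds the code width, which is one simple way to keep the output data rate within a bandwidth budget.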

    Joint Optimization of Low-power DCT Architecture and Efficient Quantization Technique for Embedded Image Compression

    The Discrete Cosine Transform (DCT)-based image compression is widely used in today's communication systems. Significant research devoted to this domain has demonstrated that optical compression methods can offer higher speed but suffer from poor image quality and growing complexity. To meet the challenges of higher image quality and high speed processing, in this chapter, we present a joint system for DCT-based image compression by combining a VLSI architecture of the DCT algorithm and an efficient quantization technique. Our approach is, firstly, based on a new granularity method in order to take advantage of the adjacent pixel correlation of the input blocks and to improve the visual quality of the reconstructed image. Second, a new architecture based on the Canonical Signed Digit and a novel Common Subexpression Elimination technique is proposed to replace the constant multipliers. Finally, a reconfigurable quantization method is presented to effectively save computational complexity. Experimental results obtained with a prototype based on an FPGA implementation and comparisons with existing works corroborate the validity of the proposed optimizations in terms of power reduction, speed increase, silicon area saving and PSNR improvement.
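
    The constant multipliers are replaced using Canonical Signed Digit recoding with Common Subexpression Elimination. As a general illustration of CSD recoding only (not the chapter's DCT architecture or its CSE step; the function names are illustrative), the sketch below recodes a constant and multiplies by shifts and adds:

        def csd_digits(k: int):
            # Canonical signed digit (non-adjacent form) recoding of k > 0:
            # returns digits in {-1, 0, +1}, LSB first, with no two adjacent
            # non-zero digits, such that k = sum(d_i * 2**i).
            digits = []
            while k:
                if k & 1:
                    d = 2 - (k & 3)      # +1 if k % 4 == 1, -1 if k % 4 == 3
                    k -= d
                else:
                    d = 0
                digits.append(d)
                k >>= 1
            return digits

        def csd_multiply(x: int, k: int) -> int:
            # Multiply by a constant using only shifts and adds/subtracts,
            # the way a multiplierless datapath would.
            return sum(d * (x << i) for i, d in enumerate(csd_digits(k)))

        # 23 = 10111b has four non-zero bits, but its CSD form 32 - 8 - 1
        # needs only three shift-and-add/subtract terms.
        print(csd_digits(23))        # [-1, 0, 0, -1, 0, 1]
        print(csd_multiply(5, 23))   # 115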

    Dictionary-Based Data Compression (Slovníková komprese dat)

    Abstract Data compression has gained increased attention from researchers and developers in the last few decades. It comes in handy when users have limited storage capacity or transmission bandwidth, as it reduces the size of data without much loss. Many types of compression techniques have been proposed by researchers so far, the Lempel-Ziv series of algorithms, for example. The Lempel-Ziv algorithms are lossless and follow a dictionary-based approach to data compression, where a dictionary keeps references to repeated text or words, which are then omitted to reduce the size of the data. Out of all the Lempel-Ziv algorithms, this study focuses on the Lempel-Ziv-Ross-Williams (LZRW) algorithm, proposed by Ross Williams in 1991, and its implementation. The study opens with a brief introduction to the basics of data compression, followed by the compression techniques. Various dictionary-based algorithms are also compared with each other, along with their advantages and weaknesses. There is also a brief description of how data compression works for the L1 cache, and the design issues faced by designers when implementing data cache compression are noted.
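
    As a rough sketch of the dictionary idea behind LZRW-style compressors (a small table remembering where short strings occurred earlier), and not Ross Williams' exact LZRW1 token format, hash function, or limits, the Python code below emits literal and (offset, length) copy tokens:

        def lzrw_like_compress(data: bytes, window: int = 4095, max_len: int = 18):
            # One previous position is remembered per 3-byte string; a match
            # within the window becomes a copy token, anything else a literal.
            table = {}
            tokens = []              # ('lit', byte) or ('copy', offset, length)
            i = 0
            while i < len(data):
                key = data[i:i + 3]
                pos = table.get(key)
                if len(key) == 3:
                    table[key] = i   # remember this position for later matches
                if pos is not None and i - pos <= window:
                    length = 3       # extend the guaranteed 3-byte match greedily
                    while (length < max_len and i + length < len(data)
                           and data[pos + length] == data[i + length]):
                        length += 1
                    tokens.append(('copy', i - pos, length))
                    i += length
                else:
                    tokens.append(('lit', data[i]))
                    i += 1
            return tokens

        print(lzrw_like_compress(b"the cat sat on the mat on the mat"))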

    A novel approach for the hardware implementation of a PPMC statistical data compressor

    This thesis aims to understand how to design high-performance compression algorithms suitable for hardware implementation and to provide hardware support for an efficient compression algorithm. Lossless data compression techniques have been developed to exploit the available bandwidth of applications in data communications and computer systems by reducing the amount of data they transmit or store. As the amount of data to handle is ever increasing, traditional methods for compressing data become insufficient. To overcome this problem, more powerful methods have been developed. Among those are the so-called statistical data compression methods that compress data based on their statistics. However, their high complexity and space requirements have prevented their hardware implementation and the full exploitation of their potential benefits. This thesis looks into the feasibility of the hardware implementation of one of these statistical data compression methods by exploring the potential for reorganising and restructuring the method for hardware implementation and investigating ways of achieving efficient and cost-effective designs. [Continues.]
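
    As a hedged illustration of the statistical modelling at the heart of PPMC (not the reorganised hardware design this thesis develops), the sketch below keeps an order-1 context model with method-C escape estimation, in which a context's escape count equals the number of distinct symbols seen in it; a complete compressor would feed these probabilities to an arithmetic coder:

        from collections import defaultdict

        class Order1PPMC:
            def __init__(self):
                self.contexts = defaultdict(lambda: defaultdict(int))

            def probability(self, prev, symbol):
                # Return (P(symbol | prev), P(escape | prev)) under method C.
                counts = self.contexts[prev]
                total = sum(counts.values())
                distinct = len(counts)
                if total == 0:
                    return 0.0, 1.0          # nothing seen yet: always escape
                p_escape = distinct / (total + distinct)
                p_symbol = counts.get(symbol, 0) / (total + distinct)
                return p_symbol, p_escape

            def update(self, prev, symbol):
                self.contexts[prev][symbol] += 1

        model = Order1PPMC()
        text = b"abracadabra"
        for prev, cur in zip(text, text[1:]):
            print(chr(prev), chr(cur), model.probability(prev, cur))
            model.update(prev, cur)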

    An Image Processing Approach Toward a Visual Intra-Cortical Stimulator

    Abstract Visual impairment may be caused by various factors ranging from trauma and birth defects to disease. To date there are no viable medical treatments for this condition; hence biomedical approaches are being employed to overcome it. The Cortivision team has been working on an intra-cortical implant that can bypass the retina and optic nerve and directly stimulate the visual cortex. In this work we aimed to implement a modular, reusable, and parameterizable object recognition system that tends to "simplify" video data prior to stimulation, thereby opening new horizons for partial vision restoration, navigational and even recognition abilities. We identified the Scale Invariant Feature Transform (SIFT) algorithm as a robust candidate for our application's needs. A multithreaded software prototype of the SIFT and Lucas-Kanade tracker was implemented to ensure proper overall operation. The feature extractor, the difference of Gaussians (DoG) part of the SIFT, being the most computationally expensive, was migrated to an FPGA implementation due to real-time constraints that are not achievable on a host machine. The VHDL implementation is highly parameterizable for different application needs and tradeoffs. We introduced a novel architecture employing the sub-kernel trick to reduce resource usage compared to preexisting architectures while remaining comparably accurate to a software floating point implementation. In order to alleviate transmission bottlenecks, the system also includes a new parallel Huffman encoder design that is capable of performing lossless compression of both images and scale space image pyramids, taking into account spatial and scale data correlations during the predictor phase. The encoder was able to achieve compression ratios of 27.3% on the Caltech-256 dataset. Furthermore, a new camera and fiducial markers setup based on image processing was proposed in order to target the phosphene map estimation problem, which affects the quality of the final stimulation perceived by the patient.----------RÉSUMÉ (translated) Introduction and objectives: Visual impairment, defined as the total or partial loss of vision, is currently not medically treatable. Modern biomedical approaches are used to electrically stimulate vision; these approaches can be divided into three main groups: the first targeting retinal implants Humayun et al. (2003), Kim et al. (2004), Chow et al. (2004); Palanker et al. (2005), Toledo et al. (2005); Yanai et al. (2007), Winter et al. (2007); Zrenner et al. (2011), the second targeting optic nerve implants Veraart et al. (2003), Sakaguchi et al. (2009), and the third targeting intra-cortical implants Doljanu and Sawan (2007); Coulombe et al. (2007); Srivastava et al. (2007). The main drawback of the first two groups is that they are not generic enough to overcome the majority of visual impairment diseases, since they depend on the patient having an intact optic nerve and/or a partially functional retina, which is not the case for the third group. The Polystim Neurotechnologies Laboratory team is currently working on an intra-cortical implant that directly stimulates the primary visual cortex (region V1); the overall project is named Cortivision. The system uses a camera, an image processing module, an RF (radio frequency) transmitter, and an implantable stimulator.
This method is robust and generic because it bypasses the eye and the optic nerve. One of the major challenges is the image processing required to "simplify" the data prior to stimulation, extracting the useful information and discarding the superfluous data. The pixels captured by the camera do not map one-to-one onto the visual cortex as in a rectangular image; rather, they are mapped onto a complex map of "phosphenes" Coulombe et al. (2007); Srivastava et al. (2007). Phosphenes are points of light that appear in the patient's visual field when the brain is electrically stimulated. These points change in size, brightness, and location depending on how the electrical stimulation is performed (that is, changes in frequency, voltage, duration, etc.) and even on the physical placement of the electrodes in the visual cortex. Current approaches aim to stimulate low-resolution, monochrome phosphene images. Knowing this, we should expect rather low-quality vision, which makes activities such as navigating, interpreting objects, or reading difficult for the patient. This is mainly due to the complexity of calibrating the phosphene map and its correspondence, and also to the non-trivial question of how to simplify the data coming from the camera images so that only the relevant data is kept. Figure 1.1 is an example demonstrating the non-triviality of transforming a grayscale image into a phosphene stimulation.
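
    The difference-of-Gaussians front end that was moved to the FPGA can be sketched in software as follows; this sketch uses SciPy's gaussian_filter and common SIFT parameter choices (sigma0 = 1.6, three intervals per octave), which are assumptions rather than the parameters of the thesis hardware:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_octave(image, num_intervals=3, sigma0=1.6):
            # One octave of the SIFT scale space: blur with geometrically
            # increasing sigmas, then subtract adjacent blurred copies.
            # Extrema of the DoG images are the candidate keypoints.
            k = 2.0 ** (1.0 / num_intervals)
            sigmas = [sigma0 * k ** i for i in range(num_intervals + 3)]
            blurred = [gaussian_filter(image.astype(np.float32), s) for s in sigmas]
            return [b2 - b1 for b1, b2 in zip(blurred[:-1], blurred[1:])]

        # Synthetic test image: a bright square on a dark background.
        img = np.zeros((64, 64), dtype=np.float32)
        img[24:40, 24:40] = 1.0
        pyramid = dog_octave(img)
        print(len(pyramid), pyramid[0].shape)   # 5 (64, 64)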