56 research outputs found

    Test Segmentation of MRC Document Compression and Decompression by Using MATLAB

    Abstract: The mixed raster content (MRC) standard specifies a framework for document compression that can dramatically improve the compression/quality tradeoff compared to traditional lossy image compression algorithms. The key to MRC compression is the separation of the document into foreground and background layers, represented by a binary mask; the resulting quality and compression ratio of an MRC document encoder therefore depend heavily on the segmentation algorithm used to compute that mask. In this paper, we propose a novel multiscale segmentation scheme for MRC document encoding based on the sequential application of two algorithms. The first, cost optimized segmentation (COS), is a blockwise segmentation algorithm formulated in a global cost optimization framework. The second, connected component classification (CCC), refines the initial segmentation by classifying feature vectors of connected components with a Markov random field (MRF) model. The combined COS/CCC segmentation is then incorporated into a multiscale framework to improve the segmentation accuracy for text of varying size.
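    The quality/compression tradeoff described above rests entirely on how the binary mask is computed. As a minimal illustrative sketch only (a toy blockwise thresholding rule with an assumed block size and contrast threshold, not the COS/CCC algorithms proposed in the paper), a foreground mask can be derived per block as follows:

```python
import numpy as np

def blockwise_mask(gray, block=8, contrast_min=25):
    """Toy foreground/background mask for MRC-style layer separation.

    Each `block` x `block` tile is thresholded at its own mean; tiles with
    too little contrast are treated as pure background. This is only an
    illustration of mask computation, not the COS/CCC segmentation
    described in the abstract.
    """
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = gray[y:y + block, x:x + block]
            if int(tile.max()) - int(tile.min()) < contrast_min:
                continue  # low-contrast tile: leave as background
            # dark pixels (below the tile mean) become foreground (text/line art)
            mask[y:y + block, x:x + block] = tile < tile.mean()
    return mask

if __name__ == "__main__":
    page = np.full((64, 64), 220, dtype=np.uint8)
    page[20:28, 10:50] = 30  # a dark "text" stroke on a light background
    print(blockwise_mask(page).sum(), "foreground pixels")
```

    A real MRC encoder would then code the mask, foreground, and background layers separately; the paper's contribution is a much more robust way of computing the mask itself.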

    Compound Image Compression Using Sub-pixel Gradients

    Doctoral dissertation, Seoul National University, Department of Electrical and Computer Engineering, February 2014 (advisor: Suhwan Kim). As computer performance and network speeds have improved, compound images displayed on computer screens have made video and interactive services possible over a wide range of transmission environments. However, because a compound image mixes several kinds of content, the content types must be clearly distinguished and each type processed with a method suited to it. As this processing grows more complex, a performance imbalance between server and client can prevent the data from being generated and reproduced smoothly. When classifying a compound image, regions consisting of text must not be misclassified as another content type: in block-based classification, applying different coding methods to adjacent blocks lowers the perceived image quality. To address this problem, this study proposes a sub-pixel gradient block classification method that works backwards from the way text is rendered. Flat-panel displays adjust color transitions at sub-pixel granularity to render text smoothly; if the image is analyzed only at whole-pixel granularity, text regions cannot be delineated clearly. Experiments confirm that the proposed sub-pixel gradient block classification accurately separates text regions from non-text regions. Among text coding methods, lossy compression performs poorly because text imagery contains high spatial frequencies, so quantization and transform steps introduce large losses; lossless compression, on the other hand, produces large amounts of data and requires higher transmission rates. This study therefore proposes a coding method for text regions based on sub-pixel gradients: exploiting the characteristics of text imagery, the gradients occurring in the image are coded directly, which reduces image loss and improves text legibility. At the same compression ratio, the proposed method yields better text quality and legibility than other compression algorithms. Unlike natural video, compound images exhibit simple motion and contain no noise, which enables motion estimation of lower complexity than conventional methods. This study presents a group motion estimation method that exploits these properties: before examining pixel-level motion, the motion of the regions produced by the content classification is estimated first, and the final motion is derived from it. Experiments confirm that, compared with conventional search-window approaches, group motion estimation minimizes the search area and lowers complexity.
The dissertation is organized as follows: Chapter 1, introduction (research background, research scope, and thesis organization); Chapter 2, text rendering and existing compression methods (text rendering, standard image compression, H.264 inter prediction, and compression algorithms for compound images); Chapter 3, the sub-pixel gradient block classification method (background and text color extraction, text de-colorization, and classification experiments); Chapter 4, the sub-pixel gradient text block coding method (gradient fitting, text coding, the step-by-step coding sequence of whole-pixel, forward/backward sub-pixel gradient, and local minimum/maximum coding passes, and coding experiments); Chapter 5, the group motion estimation method (block grouping, group matching, group motion vector calculation, and estimation experiments); Chapter 6, conclusion.
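    To make the block classification idea concrete, the following Python fragment is a rough stand-in (not the thesis's sub-pixel gradient classifier; the features and thresholds below are assumptions) that labels a block as text or picture from a distinct-color count and whole-pixel gradient energy:

```python
import numpy as np

def classify_block(rgb_block, max_colors=8, grad_thresh=20.0):
    """Heuristic text/picture label for one image block.

    Illustration only: the thesis's actual classifier works on sub-pixel
    gradients produced by flat-panel text rendering, whereas this toy
    version combines a distinct-color count with average whole-pixel
    gradient magnitude. Feature choices and thresholds are assumptions.
    """
    flat = rgb_block.reshape(-1, rgb_block.shape[-1])
    n_colors = len(np.unique(flat, axis=0))          # text blocks use few colors
    gray = flat.mean(axis=1).reshape(rgb_block.shape[:2])
    gy, gx = np.gradient(gray)                       # whole-pixel gradients
    grad_energy = float(np.mean(np.hypot(gx, gy)))   # text edges give high energy
    return "text" if (n_colors <= max_colors and grad_energy > grad_thresh) else "picture"
```

    The thesis goes further by measuring gradients at sub-pixel granularity, which is what allows text rendered with sub-pixel anti-aliasing to be separated reliably from non-text regions.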

    Scanline calculation of radial influence for image processing

    Efficient methods for the calculation of radial influence are described and applied to two image processing problems: digital halftoning and mixed-content image compression. The methods operate recursively on scanlines of image values, spreading intensity from scanline to scanline in proportions approximating a Cauchy distribution. For error diffusion halftoning, experiments show that this recursive scanline spreading provides an ideal pattern of error distribution, and error diffusion using masks generated to provide this distribution alleviates the "worm" artifacts of error diffusion. The recursive, scanline-by-scanline application of a spreading filter and a complementary filter can be used to reconstruct an image from its horizontal and vertical pixel difference values. When combined with a downsampled image, the reconstruction is robust to incomplete and quantized pixel difference data. Such gradient field integration methods are described in detail, proceeding from the representation of images by gradient values along contours through to a variety of efficient algorithms. Comparisons show that this form of gradient field integration by convolution produces less distortion than other high-speed gradient integration methods, which can be attributed to its success in approximating a radial pattern of influence. An approach to edge-based image compression is proposed using integration of gradient data along edge contours together with regularly sampled low-resolution image data. This edge-based compression model is similar to previous sketch-based image coding methods but allows a simple and efficient calculation of an edge-based approximation image. A low-complexity implementation of this approach is described: it extracts and represents gradient data along edge contours as pixel differences and calculates an approximate image by integrating the pixel difference data with scanline convolution. The implementation was developed as a prototype for compression of mixed-content image data in printing systems. Compression results are reported and strengths and weaknesses of the implementation are identified.
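    For context, the sketch below shows conventional Floyd-Steinberg error diffusion in Python; the work summarized above replaces its fixed four-tap error weights with masks derived from recursive scanline spreading that approximates a Cauchy-shaped radial influence, which is what alleviates the "worm" artifacts.

```python
import numpy as np

def floyd_steinberg(gray):
    """Classic Floyd-Steinberg error diffusion (baseline illustration only).

    The described work derives spreading masks that approximate a radial,
    Cauchy-shaped pattern of influence; this function only shows the
    conventional scheme whose artifacts that work aims to reduce.
    """
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0   # binarize the current pixel
            out[y, x] = new
            err = old - new                      # quantization error to spread
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```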

    Optimum Implementation of Compound Compression of a Computer Screen for Real-Time Transmission in Low Network Bandwidth Environments

    Remote working has become increasingly prevalent in recent times. A large part of remote working involves sharing computer screens between servers and clients. The image content presented when sharing a computer screen consists of natural, camera-captured image data as well as computer-generated graphics and text. The attributes of natural camera-captured image data differ greatly from those of computer-generated image data; an image containing a mixture of the two is known as a compound image. The research presented in this thesis focuses on the challenge of constructing a compound compression strategy that applies the 'best fit' compression algorithm to the mixed content found in a compound image. The research also involves analysis and classification of the types of data a given compound image may contain. While researching optimal types of compression, consideration is given to the computational overhead of each algorithm, because the work targets real-time systems such as cloud computing services, where latency has a detrimental impact on end-user experience. Previous and current state-of-the-art video codecs have been studied, along with many recent publications from academia, to design and implement a novel low-complexity compound compression algorithm suitable for real-time transmission. The compound compression algorithm uses a mixture of lossless and lossy compression algorithms, with parameters that control its performance. An objective image quality assessment is needed to determine whether the proposed algorithm can produce an acceptable-quality image after processing: both a traditional metric, Peak Signal to Noise Ratio (PSNR), and a more modern metric, the Structural Similarity Index (SSIM), are used to define the quality of the decompressed image. Finally, the compression strategy is tested on a set of generated compound images. Using open-source software, the same images are compressed with previous and current state-of-the-art video codecs to compare the three main metrics: compression ratio, computational complexity, and objective image quality.
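    The two quality metrics named above are standard and easy to reproduce; a small Python sketch (assuming 8-bit grayscale images and scikit-image's SSIM implementation) might look like this:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(reference, decoded, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images."""
    diff = reference.astype(np.float64) - decoded.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def quality_report(reference, decoded):
    """Return the two objective metrics used in the evaluation above.

    Assumes 2-D (grayscale) uint8 arrays of equal shape.
    """
    return {
        "psnr_db": psnr(reference, decoded),
        "ssim": ssim(reference, decoded, data_range=255),
    }
```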

    Processing and codification images based on jpg standard

    This project addresses the current need for image compression and the different methods of compression and codification. Specifically, it focuses on lossy compression, using the JPEG [1] standard. The main goal of this project is to implement a MATLAB program which encodes and compresses an image of any format into a "jpg" image, following the premises of the JPEG standard. JPEG compresses images based on their spatial frequency, or level of detail. Areas with low levels of detail, like blue sky, are compressed better than areas with high levels of detail, like hair, blades of grass, or hard-edged transitions. The JPEG algorithm takes advantage of the human eye's increased sensitivity to small differences in brightness versus small differences in color, especially at higher frequencies. The algorithm first transforms the image from RGB to the luminance/chrominance (Y-Cb-Cr) color space, separating brightness/grayscale (Y) from the two color components. It then downsamples the color components and leaves the brightness component untouched. Next, the algorithm approximates each 8x8 block of pixels with a base value representing the average, plus frequency coefficients for nearby variations. Quantization then reduces the precision of these DCT coefficients: higher frequencies and chroma are quantized with larger step sizes than lower frequencies and luminance, so more of the brightness information is kept than high-frequency and color information. Thus, the lower the level of detail and the fewer abrupt color or tonal transitions, the more efficient the JPEG algorithm becomes.
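    As a hedged sketch of the block transform and quantization steps described above (using SciPy's orthonormal DCT-II and the example luminance quantization table from Annex K of the JPEG standard; this is an illustration, not the project's MATLAB implementation):

```python
import numpy as np
from scipy.fft import dctn, idctn

# Example luminance quantization table from the JPEG standard (Annex K).
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def encode_block(block):
    """DCT and quantization of one 8x8 luminance block (values 0..255)."""
    shifted = block.astype(np.float64) - 128.0    # level shift
    coeffs = dctn(shifted, norm="ortho")          # 2-D DCT-II
    return np.round(coeffs / Q_LUMA).astype(int)  # coarser steps at high frequencies

def decode_block(quantized):
    """Dequantize and inverse-transform back to pixel values."""
    coeffs = quantized * Q_LUMA
    return np.clip(idctn(coeffs, norm="ortho") + 128.0, 0, 255)
```

    Dividing by larger table entries at high frequencies (and, with a separate table, for chroma) is exactly what discards the detail the eye is least sensitive to, as the abstract explains.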

    IMPACT Best Practice Guide: Metadata for Text Digitisation and OCR


    The 1992 4th NASA SERC Symposium on VLSI Design

    Papers from the fourth annual NASA Symposium on VLSI Design, co-sponsored by the IEEE, are presented. Each year this symposium is organized by the NASA Space Engineering Research Center (SERC) at the University of Idaho and is held in conjunction with a quarterly meeting of the NASA Data System Technology Working Group (DSTWG). One task of the DSTWG is to develop new electronic technologies that will meet next-generation electronic data system needs. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data systems performance. The NASA SERC is proud to offer, at its fourth symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories, the electronics industry, and universities. These speakers share insights into next-generation advances that will serve as a basis for future VLSI design.

    Recent Advances in Signal Processing

    Signal processing is a critical part of most new technological developments and of a wide variety of applications across science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and they have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be grouped into five areas depending on the application at hand, ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Doctor of Philosophy

    Over 40 years ago, the first computer simulation of a protein was reported: the atomic motions of a 58-amino-acid protein were simulated for a few picoseconds. With today's supercomputers, simulations of large biomolecular systems with hundreds of thousands of atoms can reach biologically significant timescales. Through dynamics information, biomolecular simulations can provide new insights into molecular structure and function to support the development of new drugs or therapies. While recent advances in high-performance computing hardware and computational methods have enabled scientists to run longer simulations, they have also created new challenges for data management. Investigators need to use local and national resources to run these simulations and store their output, which can reach terabytes of data on disk. Because of the wide variety of computational methods and software packages available to the community, no standard data representation has been established to describe the computational protocol and the output of these simulations, preventing data sharing and collaboration. Data exchange is also limited by the lack of repositories and tools to summarize, index, and search biomolecular simulation datasets. In this dissertation, a common data model for biomolecular simulations is proposed to guide the design of future databases and APIs. The data model was then extended to a controlled vocabulary that can be used in the context of the semantic web. Two different approaches to data management are also proposed. The iBIOMES repository offers a distributed environment where input and output files are indexed via common data elements. The repository includes a dynamic web interface to summarize, visualize, search, and download published data. A simpler tool, iBIOMES Lite, was developed to generate summaries of datasets hosted at remote sites where user privileges and/or IT resources might be limited. These two informatics-based approaches to data management offer new means for the community to keep track of distributed and heterogeneous biomolecular simulation data and to create collaborative networks.
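    As a purely hypothetical illustration of what indexing simulation output via common data elements might look like (the field names below are assumptions for illustration, not the data model published in the dissertation), a minimal record could be:

```python
# Hypothetical common-data-element record for one simulation dataset.
# Field names are illustrative assumptions, not the published iBIOMES schema.
simulation_record = {
    "id": "sim-000123",
    "method": "molecular dynamics",
    "system": {"description": "58-residue protein in explicit water",
               "atom_count": 24510},
    "protocol": {"ensemble": "NPT", "temperature_K": 300,
                 "timestep_fs": 2, "length_ns": 100},
    "files": [
        {"role": "topology", "path": "system.prmtop"},
        {"role": "trajectory", "path": "traj.nc", "size_gb": 12.4},
    ],
}
```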