2,768 research outputs found

    Efficient data reliability management of cloud storage systems for big data applications

    Cloud service providers are consistently striving to provide efficient and reliable service for their clients' Big Data storage needs. Replication is a simple and flexible method to ensure the reliability and availability of data. However, it is not an efficient solution for Big Data, which routinely scales to terabytes and petabytes. Hence, erasure coding is gaining traction despite its shortcomings. Deploying erasure coding in cloud storage confronts several challenges, such as encoding/decoding complexity, load balancing, heavy resource consumption during data repair, and read latency. This thesis addresses several of these challenges. Even though data durability and availability should not be compromised for any reason, a client's requirements on read performance (access latency) may vary with the nature of the data and its access pattern. Access latency is an important metric, and the acceptable latency range can be recorded in the client's SLA. Several proactive recovery methods for erasure codes are proposed in this research to reduce the resource consumption caused by recovery. In addition, a novel cache-based solution is proposed to mitigate the access latency issue of erasure coding.
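    The replication-versus-erasure-coding trade-off described in this abstract can be illustrated with a minimal sketch, not taken from the thesis. It assumes a hypothetical RS(k, m) layout and 3-way replication as the baseline, and compares storage overhead and the number of blocks read to repair one lost block.

```python
def replication_overhead(copies: int = 3) -> float:
    """Bytes stored per byte of user data under n-way replication."""
    return float(copies)

def erasure_overhead(k: int, m: int) -> float:
    """Bytes stored per byte of user data under a (k, m) erasure code:
    each stripe holds k data blocks plus m parity blocks."""
    return (k + m) / k

def repair_read_blocks(k: int) -> int:
    """Blocks read to rebuild one lost block with a classic MDS code
    such as Reed-Solomon: any k surviving blocks of the stripe."""
    return k

if __name__ == "__main__":
    k, m = 6, 3  # hypothetical RS(6, 3) layout
    print(f"3-way replication overhead: {replication_overhead():.2f}x")
    print(f"RS({k},{m}) overhead:       {erasure_overhead(k, m):.2f}x")
    print(f"RS({k},{m}) repair reads {repair_read_blocks(k)} blocks to rebuild 1, "
          f"versus 1 block under replication")
```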

    A Survey on Energy-Efficient Strategies in Static Wireless Sensor Networks

    This article presents a comprehensive analysis of energy-efficient strategies in static Wireless Sensor Networks (WSNs) that are not equipped with energy harvesting modules. First, a novel generic mathematical definition of Energy Efficiency (EE) is proposed that simultaneously takes into account the acquisition rate of valid data, the total energy consumption, and the network lifetime of a WSN. To the best of our knowledge, this is the first time the EE of WSNs has been mathematically defined. The energy consumption characteristics of individual sensor nodes and of the network as a whole are expounded at length. Accordingly, the related concepts of the Energy-Efficient Means, the Energy-Efficient Tier, and the Energy-Efficient Perspective are introduced. Subsequently, the relevant energy-efficient strategies proposed from 2002 to 2019 are tracked and reviewed. They are classified into five categories: the Energy-Efficient Media Access Control protocol, the Mobile Node Assistance Scheme, the Energy-Efficient Clustering Scheme, the Energy-Efficient Routing Scheme, and the Compressive Sensing-based Scheme. Both the basic principles and the evolution of these strategies are elaborated in detail. Finally, the categories are analyzed further and conclusions are drawn: the interdependence among the categories, and the relationships between each category and the Energy-Efficient Means, Tier, and Perspective, are examined in detail. The applicable scenarios for each category and the relevant statistical analysis are also presented, with the proportion and the number of citations for each category illustrated in statistical charts. In addition, the opportunities and challenges facing WSNs in the context of new computing paradigms, and feasible future directions concerning EE, are pointed out.
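    The survey's actual EE definition is not reproduced in this abstract. The sketch below is only a hypothetical composite metric built from the three factors the authors mention (valid-data acquisition, total energy consumption, and network lifetime); it should not be read as the paper's formula, and the function name and units are assumptions.

```python
def energy_efficiency(valid_bits: float,
                      total_energy_j: float,
                      lifetime_s: float) -> float:
    """Hypothetical composite EE metric: valid data delivered per joule,
    weighted by how long the network stays alive. Units: bit*s/J."""
    if total_energy_j <= 0:
        raise ValueError("total energy must be positive")
    return (valid_bits / total_energy_j) * lifetime_s

# Example: 10 Mbit of valid data, 500 J consumed, 30-day network lifetime
print(energy_efficiency(valid_bits=10e6,
                        total_energy_j=500.0,
                        lifetime_s=30 * 24 * 3600))
```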

    A Comprehensive Review of Distributed Coding Algorithms for Visual Sensor Network (VSN)

    Since the advent of low-cost cameras, they have been widely incorporated into the sensor nodes of Wireless Sensor Networks (WSNs) to form Visual Sensor Networks (VSNs). However, the use of cameras brings a set of new challenges, because all sensor nodes are battery powered. Energy consumption is therefore one of the most critical issues to be taken into consideration. In addition, reliance on batteries limits the resources (memory, processor) that can be incorporated into a sensor node. The lifetime of a VSN decreases quickly as images are transferred to the destination. One solution to this problem is to reduce the amount of data transferred through the network by using image compression. This paper provides a comprehensive survey and analysis of distributed coding algorithms that can be used to encode images in a VSN, including an overview of these algorithms together with their advantages and deficiencies when implemented in a VSN. The algorithms are then compared to determine which is most suitable for VSNs.
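    To make the energy argument concrete, the sketch below uses the widely cited first-order radio model (transmit energy E_elec·b + ε_amp·b·d²) to show how compressing an image before transmission reduces per-hop energy. The constants and compression ratios are illustrative assumptions, not values from the paper.

```python
E_ELEC = 50e-9      # J/bit spent in the radio electronics (illustrative)
EPS_AMP = 100e-12   # J/bit/m^2 spent in the transmit amplifier (illustrative)

def tx_energy(bits: float, distance_m: float) -> float:
    """First-order radio model: energy to transmit `bits` over `distance_m`."""
    return E_ELEC * bits + EPS_AMP * bits * distance_m ** 2

raw_bits = 320 * 240 * 8          # one uncompressed 8-bit QVGA frame
for ratio in (1, 4, 16):          # hypothetical compression ratios
    e = tx_energy(raw_bits / ratio, distance_m=50)
    print(f"compression {ratio:>2}:1 -> {e * 1e3:.3f} mJ per hop")
```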

    Antioxidants: nanotechnology and biotechnology fusion for medicine in overall

    An antioxidant is a chemical substance naturally found in our food. It can prevent or reduce oxidative stress in the physiological system. Because the body constantly uses oxygen, it continuously produces free radicals. An excessive number of free radicals can cause cellular damage in the human body, which can lead to various diseases such as cancer, muscular degeneration and diabetes. Antioxidants help counteract the effects of these free radicals. Antioxidants are found in abundance in plants, but their delivery is often a problem. One solution is nanotechnology, which holds great potential for advanced medical science. Nanodevices and nanoparticles have a significant impact because they can interact with the body at the subcellular level with a high degree of specificity. Thus, treatment can achieve maximum efficacy with few side effects.

    Balancing Compression and Encryption of Satellite Imagery

    With the rapid development of remote sensing technologies and services, there is a need for combined compression and encryption of satellite imagery. Onboard compression is used to minimize the storage and communication bandwidth requirements of high-data-rate satellite applications, while encryption is employed to secure these resources and prevent illegal use of sensitive image information. In this paper, we propose an approach that addresses these challenges as they arise in the highly dynamic, satellite-based networked environment. The approach combines compression algorithms (Huffman and SPIHT) and encryption algorithms (RC4, Blowfish and AES) into three complementary modes: (1) secure lossless compression, (2) secure lossy compression and (3) secure hybrid compression. Extensive experiments on a dataset of 126 satellite images showed that our approach outperforms traditional and state-of-the-art approaches, saving approximately 53% of computational resources. An interesting feature of the approach is that its three modes reflect real deployments, each offering a different way to deal with the problem of limited computing and communication resources.
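    The paper's exact pairings of Huffman/SPIHT with RC4/Blowfish/AES are not detailed in this abstract. As a stand-in, the sketch below shows the general compress-then-encrypt order behind a "secure lossless compression" mode, using zlib (DEFLATE, which internally uses Huffman coding) and AES-GCM from the third-party `cryptography` package; the function names are hypothetical.

```python
import os
import zlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def secure_lossless_compress(image_bytes: bytes, key: bytes) -> tuple[bytes, bytes]:
    """Compress first, then encrypt: encrypting already-compressed data keeps
    the bandwidth savings while hiding the image content."""
    compressed = zlib.compress(image_bytes, level=9)   # lossless stage
    nonce = os.urandom(12)                             # 96-bit GCM nonce
    ciphertext = AESGCM(key).encrypt(nonce, compressed, None)
    return nonce, ciphertext

def secure_lossless_decompress(nonce: bytes, ciphertext: bytes, key: bytes) -> bytes:
    """Reverse the pipeline: decrypt, then decompress."""
    compressed = AESGCM(key).decrypt(nonce, ciphertext, None)
    return zlib.decompress(compressed)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=128)
    raw = bytes(range(256)) * 1024                     # stand-in "image" payload
    nonce, ct = secure_lossless_compress(raw, key)
    assert secure_lossless_decompress(nonce, ct, key) == raw
    print(f"raw {len(raw)} B -> compressed+encrypted {len(ct)} B")
```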

    Algorithms and Hardware Co-Design of HEVC Intra Encoders

    Digital video has become extremely important, and its importance has grown greatly over the last two decades. Due to the rapid development of information and communication technologies, demand for Ultra-High Definition (UHD) video applications is growing. However, the most prevalent video compression standard, H.264/AVC, released in 2003, is inefficient for UHD video. The desire for compression efficiency superior to H.264/AVC led to the standardization of High Efficiency Video Coding (HEVC). Compared with H.264/AVC, HEVC offers roughly double the compression ratio at the same video quality, or a substantial improvement in video quality at the same bitrate. However, although HEVC/H.265 possesses superior compression efficiency, its complexity is several times that of H.264/AVC, impeding high-throughput implementation. Most researchers have so far focused merely on algorithm-level adaptations of the HEVC/H.265 standard to reduce computational intensity, without considering hardware feasibility. Moreover, the exploration of efficient hardware architecture design is not exhaustive; only a few research works have explored efficient hardware architectures for the HEVC/H.265 standard. In this dissertation, we investigate efficient algorithm adaptations and hardware architecture design for HEVC intra encoders. We also explore a deep learning approach to mode prediction. From the algorithm point of view, we propose three efficient hardware-oriented algorithm adaptations: mode reduction, fast coding unit (CU) cost estimation, and group-based CABAC (context-adaptive binary arithmetic coding) rate estimation. Mode reduction aims to reduce the mode candidates of each prediction unit (PU) in the rate-distortion optimization (RDO) process, which is both computation-intensive and time-consuming. Fast CU cost estimation is applied to reduce the complexity of the rate-distortion (RD) calculation for each CU. Group-based CABAC rate estimation is proposed to parallelize the processing of syntax elements and thereby greatly improve rate estimation throughput. From the hardware design perspective, a fully parallel hardware architecture of an HEVC intra encoder is developed to sustain UHD video compression at 4K@30fps. The fully parallel architecture introduces four prediction engines (PEs), and each PE independently performs the full cycle of mode prediction, transform, quantization, inverse quantization, inverse transform, reconstruction, and rate-distortion estimation. PU blocks of different sizes are processed by different prediction engines simultaneously. In addition, an efficient hardware implementation of the group-based CABAC rate estimator is incorporated into the proposed HEVC intra encoder for accurate and high-throughput rate estimation. To take advantage of deep learning, we also propose a fully connected layer based neural network (FCLNN) mode preselection scheme to reduce the number of RDO modes for luma prediction blocks. All angular prediction modes are classified into 7 prediction groups; each group contains 3-5 prediction modes that exhibit a similar prediction angle. A rough angle detection algorithm determines the prediction direction of the current block, and a small-scale FCLNN then refines the mode prediction.
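    The dissertation's exact grouping and network sizes are not given in this abstract. The sketch below only illustrates the general idea of rough angle detection followed by group preselection: estimate a block's dominant gradient orientation, then map it onto one of 7 groups of angular modes from which a small classifier would refine the final candidates. The gradient operator and the uniform group boundaries are assumptions, not the proposed design.

```python
import numpy as np

def dominant_angle(block: np.ndarray) -> float:
    """Rough angle detection: dominant gradient orientation of a luma block,
    estimated from simple finite differences (illustrative, not HEVC-specific)."""
    gy, gx = np.gradient(block.astype(np.float64))
    return float(np.degrees(np.arctan2(gy.sum(), gx.sum()))) % 180.0

def angular_group(angle_deg: float, n_groups: int = 7) -> int:
    """Map an orientation in [0, 180) onto one of `n_groups` equal-width
    groups of angular modes (hypothetical uniform grouping)."""
    return int(angle_deg // (180.0 / n_groups))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 8x8 block with a strong horizontal ramp plus a little noise
    block = np.tile(np.arange(8), (8, 1)) * 16 + rng.integers(0, 4, (8, 8))
    ang = dominant_angle(block)
    print(f"dominant angle ~{ang:.1f} deg -> candidate group {angular_group(ang)}")
```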

    A Case Study of Marine Arthropod Evolution through De Novo Whole-Genome Assembly and Analysis

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Natural Sciences, School of Biological Sciences, August 2020. Kim, Won. De novo genome assembly has become an essential approach for studying non-model organisms since the arrival of the post-genome era, and reported de novo genome assemblies of non-model arthropods have increased dramatically in recent years. Marine arthropods, however, are among the least sequenced animal groups despite their surprisingly high taxonomic and morphological diversity, and de novo genome studies on them remain limited in both number and assembly quality. This study therefore conducted the first de novo genome research in Korea focusing on under-sampled marine arthropod groups, the class Pycnogonida and the infraorder Brachyura. One mitochondrial genome and four whole genomes were de novo assembled, and their genomic characteristics are discussed. While the two de novo genomes assembled from short-read sequencing showed limited assembly quality, the long-read based assemblies of Nymphon striatum and Chionoecetes opilio provided highly informative, high-quality genomes. The preliminary phylogenomic analysis of this study, which for the first time included representative genomes of a pycnogonid and a brachyuran decapod, also implied that the recent hypothesis of Xiphosura nesting within the most derived clade, Arachnopulmonata, is indeed plausible. Furthermore, the limitations of de novo genome research carried out in laboratories lacking a bioinformatics background were analyzed in order to establish an optimized research workflow for genomic studies of non-model marine arthropods.
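    The thesis compares short-read and long-read assembly quality; a standard summary statistic for such comparisons is the contig N50. Below is a minimal sketch of how N50 is computed from a list of contig lengths (the lengths are made up, not taken from the assemblies in the thesis).

```python
def n50(contig_lengths: list[int]) -> int:
    """Contig N50: the length L such that contigs of length >= L
    cover at least half of the total assembly size."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Illustrative contig lengths in bp, not from the thesis assemblies
print(n50([5_000_000, 3_200_000, 900_000, 450_000, 120_000, 80_000]))
```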

    Hybrid DWT-DCT algorithm for image and video compression applications

    Digital images and video in their raw form require an enormous amount of storage capacity. Considering the important role played by digital imaging and video, it is necessary to develop a system that produces a high degree of compression while preserving critical image/video information. Various transformation techniques are used for data compression; the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are the most commonly used. The DCT has a high energy compaction property and requires fewer computational resources, while the DWT is a multiresolution transformation. In this work, we propose a hybrid DWT-DCT algorithm for image compression and reconstruction that benefits from the advantages of both transforms. The algorithm performs the DCT on the DWT coefficients. Simulations have been conducted on several natural, benchmark, medical and endoscopic images; several QCIF, high-definition, and endoscopic videos have also been used to demonstrate the advantage of the proposed scheme. The simulation results show that the proposed hybrid DWT-DCT algorithm performs much better than the standalone JPEG-based DCT, DWT, and WHT algorithms in terms of peak signal-to-noise ratio (PSNR), as well as visual perception at higher compression ratios. The new scheme significantly reduces "false contouring" and "blocking artifacts". The rate-distortion analysis shows that, for a fixed level of distortion, the number of bits required to transmit the hybrid coefficients is less than that required by the other schemes. Furthermore, the proposed algorithm is compared with some existing hybrid algorithms; the comparison shows that the proposed hybrid algorithm offers better performance and reconstruction quality. The proposed scheme is intended to be used as the image/video compressor engine in imaging and video applications.
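    A minimal sketch of the hybrid idea described above: take a one-level 2-D DWT and then apply a 2-D DCT to each subband. The exact band handling, quantization, and entropy coding of the proposed scheme are not detailed in this abstract, so the thresholding step below is purely illustrative; the sketch assumes the third-party packages NumPy, PyWavelets and SciPy.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hybrid_forward(image: np.ndarray):
    """One-level 2-D DWT, then a 2-D DCT on each subband."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float64), "haar")
    return [dctn(band, norm="ortho") for band in (cA, cH, cV, cD)]

def hybrid_inverse(bands):
    """Invert the DCT on each subband, then the DWT."""
    cA, cH, cV, cD = (idctn(band, norm="ortho") for band in bands)
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

def psnr(ref: np.ndarray, rec: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(np.float64) - rec) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (64, 64)).astype(np.float64)  # stand-in image
    bands = hybrid_forward(img)
    # Crude "compression": zero out small hybrid coefficients (illustrative only)
    bands = [np.where(np.abs(b) < 20, 0.0, b) for b in bands]
    rec = hybrid_inverse(bands)
    print(f"PSNR after thresholding: {psnr(img, rec):.2f} dB")
```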