11 research outputs found

    DWT-based Compression of X-Ray Images using Fuzzy C-Means

    Hospitals and many other organizations store large numbers of files on their storage devices. Once the storage reaches its limit, organizations try to reduce file sizes using compression techniques. In this paper, we focus on lossless compression of DICOM images using a clustering approach. Clusters of data points are formed by fuzzy C-means clustering, which selects a threshold automatically: pixels with intensity above the threshold are grouped into one cluster and pixels with intensity below it into another, yielding a region of interest (ROI) and a non-region of interest (N-ROI). The discrete wavelet transform (DWT) is used to compress the image, and the inverse DWT to reconstruct it.
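    The paper above does not include its implementation; a minimal sketch of the two ingredients it names — an automatic threshold from two-cluster fuzzy C-means, and a DWT/inverse-DWT round trip (here a one-level Haar transform, my own simplification; the function names are mine) — might look like this:

```python
import numpy as np

def fuzzy_cmeans_threshold(pixels, n_iter=50, m=2.0, seed=0):
    """Two-cluster fuzzy C-means on 1-D intensities; returns the
    midpoint of the two centroids as an automatic threshold."""
    rng = np.random.default_rng(seed)
    x = pixels.astype(float).ravel()
    u = rng.random((2, x.size))            # random membership matrix
    u /= u.sum(axis=0)
    for _ in range(n_iter):
        w = u ** m
        c = (w @ x) / w.sum(axis=1)        # fuzzy centroids
        d = np.abs(x[None, :] - c[:, None]) + 1e-12
        u = 1.0 / d ** (2 / (m - 1))       # standard FCM membership update
        u /= u.sum(axis=0)
    return c.mean()

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, (LH, HL, HH))."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2     # row-wise average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2     # row-wise difference
    LL = (lo[0::2] + lo[1::2]) / 2
    LH = (lo[0::2] - lo[1::2]) / 2
    HL = (hi[0::2] + hi[1::2]) / 2
    HH = (hi[0::2] - hi[1::2]) / 2
    return LL, (LH, HL, HH)

def haar_idwt2(LL, bands):
    """Exact inverse of haar_dwt2."""
    LH, HL, HH = bands
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi = np.empty_like(lo)
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out
```

Compression in this scheme then amounts to quantizing or discarding detail coefficients more aggressively in the N-ROI cluster than in the ROI.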

    Optimization of LZW Compression Algorithm With Modification of Dictionary Formation

    At present, data are increasingly stored and accessed in digital form, and their use has grown very rapidly. This transformation is driven by the rapid growth of Internet use and the massive growth of mobile devices. People tend to store many files and transfer them from one medium to another. As a storage medium approaches its limit, fewer files can be stored, so a compression technique is required to reduce file sizes. Dictionary coding is one of the lossless compression techniques, and LZW is an algorithm that applies it. In the LZW algorithm, the dictionary is built as a future-based dictionary and the encoding process uses a fixed-length code, which allows the encoder to produce sequences that are still quite long. This study modifies the dictionary-formation process and uses a variable-length code to optimize the compression ratio. On the test data used in this study, the average compression ratio for the LZW algorithm is 42.85%, and for our proposed algorithm 38.35%. This shows that the modification of dictionary formation we proposed was not able to improve the compression ratio of the LZW algorithm.
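    The paper's modified dictionary formation is not reproduced here, but the baseline it measures against is classic LZW; a compact sketch (standard single-byte seed dictionary, longest-match-plus-next-byte growth) might look like this. With the variable-length code the study mentions, each emitted index would need only ceil(log2(dictionary size)) bits instead of a fixed width:

```python
def lzw_encode(data: bytes) -> list[int]:
    """Classic LZW: dictionary seeded with all 256 single bytes;
    each emitted code is the longest dictionary match so far."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                          # extend the current match
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)  # new dictionary entry
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decode(codes: list[int]) -> bytes:
    """Rebuilds the same dictionary on the fly while decoding."""
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = dictionary.get(code, w + w[:1])  # KwKwK special case
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)
```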

    A Comparative Analysis of Audio Compression Using Principal Component Analysis and Wavelet Transformation

    One requirement arising from the development of information technology is memory and transmission efficiency, which can be addressed with data compression. Compression is a method for obtaining compact data of smaller size that still preserves similarity to the original data. Principal Component Analysis (PCA) is a machine-learning algorithm used to reduce dimensionality: high-dimensional data are transformed into a new subspace of lower dimension, so that a few principal components represent the original data. The wavelet transform represents a signal in a set of basis functions through filter analysis; wavelets concentrate information into approximation and detail coefficients. The wavelet transform produces many coefficients that are zero or close to zero and can be neglected, which reduces storage space. In this research, we propose implementations of PCA and wavelets for digital audio compression. The audio files were in .wav format, and the compressed audio was evaluated using Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE). The mean PSNR when using the wavelet was 47.61601 dB, with an average MSE of 3.76 × 10⁻⁵. When using PCA, the mean PSNR was 57.3962772 dB and the average MSE was 4.59 × 10⁻⁵. Four out of five compressed audio files had a larger PSNR and smaller MSE when using PCA. Thus, the Principal Component Analysis algorithm can serve better for audio compression than the level-1 Symlet wavelet transformation.
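    The two evaluation metrics, plus a PCA-style frame compressor of the kind the abstract describes, can be sketched as follows (the SVD route to the principal components and all function names are my own choices, not the paper's):

```python
import numpy as np

def mse(original, reconstructed):
    """Mean square error between two equal-shaped signals."""
    return np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)

def psnr(original, reconstructed, peak=1.0):
    """PSNR in dB; `peak` is the maximum possible sample value
    (1.0 for normalized audio, 32767 for 16-bit .wav samples)."""
    e = mse(original, reconstructed)
    if e == 0:
        return float("inf")
    return 10 * np.log10(peak ** 2 / e)

def pca_compress(frames, k):
    """Keep the top-k principal components of a (n_frames, frame_len)
    matrix of audio frames; only mean, comps and coords need storing."""
    mean = frames.mean(axis=0)
    X = frames - mean
    # SVD gives the principal directions without forming the covariance matrix
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    comps = Vt[:k]                 # k principal axes
    coords = X @ comps.T           # reduced representation
    return mean, comps, coords

def pca_reconstruct(mean, comps, coords):
    """Project the reduced coordinates back to the frame space."""
    return coords @ comps + mean
```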

    Linguistic-family-specific Encoders and Decoders for Multilingual Spoken Machine Translation

    This project presents a spoken language translation (SLT) system trained on the UN Parallel Corpus and MuST-C, aiming to study the correlation between the linguistic families of the languages involved and the performance of the translation tasks. The SLT system consists of a text-to-text neural machine translation model, whose dataset includes six languages from five linguistic families, and an automatic speech recognition model, whose dataset contains four languages from four linguistic families. The combined SLT system is an end-to-end system, a relatively new task, and this project analyzes how different linguistic families perform when trained under the same conditions. Besides measuring performance with the BLEU score, the project also performs fine-tuning and zero-shot translation experiments. In general, the obtained BLEU scores are good and similar to the original baseline models studied in the UNPC and MuST-C papers. The fine-tuning and zero-shot translation experiments also obtained reasonable results, supporting the hypothesized positive correlation between the closeness of languages and the performance of the translation tasks.
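    BLEU, the metric used throughout this study, is simple enough to sketch; a single-reference, unsmoothed sentence-level version (real evaluations use corpus-level BLEU with smoothing, e.g. via sacreBLEU — this minimal form is an illustration only):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counts of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty (single reference)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    # brevity penalty: 1 when the candidate is at least as long as the reference
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```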

    JBE. Concept and application

    To save storage space, users try to reduce file sizes to a minimum using data compression software. This article presents a new algorithm for data compression: j-bit encoding (JBE). The algorithm manipulates each bit of data inside a file to minimize its size without losing any data after decoding, so it is classified as lossless compression. This basic algorithm can be combined with other data compression algorithms to optimize the compression ratio, and the implementation consists of such a combination.
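    The article does not reproduce the algorithm in code. As j-bit encoding is usually described, the input is split into the nonzero bytes (data I) and a one-bit-per-byte map of which positions were nonzero (data II), each of which then compresses better separately; a minimal round-trip sketch under that reading might be:

```python
def jbe_encode(data: bytes):
    """Split input into its nonzero bytes (data I) and a bitmap
    (data II) with one bit per original byte: 1 = nonzero, 0 = zero."""
    nonzero = bytes(b for b in data if b != 0)
    bits, bitmap = 0, bytearray()
    for i, b in enumerate(data):
        bits = (bits << 1) | (1 if b else 0)
        if i % 8 == 7:                      # flush every 8 bits
            bitmap.append(bits)
            bits = 0
    if len(data) % 8:
        bitmap.append(bits << (8 - len(data) % 8))  # pad the last byte
    return len(data), nonzero, bytes(bitmap)

def jbe_decode(length, nonzero, bitmap):
    """Re-interleave zeros and nonzero bytes using the bitmap."""
    out, it = bytearray(), iter(nonzero)
    for i in range(length):
        bit = (bitmap[i // 8] >> (7 - i % 8)) & 1
        out.append(next(it) if bit else 0)
    return bytes(out)
```

In the combined pipeline the article describes, data I and data II would each be passed on to further compressors (e.g. RLE or Huffman coding) rather than stored raw.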

    Enhancing Program Management with Predictive Analytics Algorithms (PAAs)

    The growth of big data and the emergence of analytics technologies have created the opportunity to apply algorithm-development techniques, using machine learning (ML) languages, to predict future events. The aim is an inclusive analysis of the contemporary literature on program management themes, including state-of-the-art methodologies for current plausible predictive analytics models. The methodology used is a review and application of the relevant programming platforms available. Program management requires utilizing existing ML languages to understand future events, enabling decision makers to set strategic goals, objectives, and missions. The use of PAAs has gained thematic significance in the automotive industry, the energy sector, financial organizations, industrial operations, medical services, governments, and more. PAAs promote the management of future events such as workflow or operational activities, so that institutions can schedule their activities to optimize performance, use existing big data to predict future performance, and mitigate risks. Improvements in information technology and data analytics procedures have enabled businesses to make effective use of historical data in making predictions, supporting evidence-based planning, risk mitigation, and production optimization.
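    As one concrete instance of the kind of prediction described above, a workload series can be extrapolated with an ordinary least-squares trend (the series and the linear-trend model here are illustrative assumptions, not from the paper):

```python
import numpy as np

def forecast_next(history, periods=1):
    """Fit a linear trend to a historical series with least squares
    and extrapolate the next `periods` values."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, deg=1)  # OLS line fit
    future = np.arange(len(history), len(history) + periods)
    return slope * future + intercept
```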

    The Use of ChatGPT for Generating Scientific Citations: An Experiment

    This research examined the capability of ChatGPT to produce numbered reference lists for a range of topics, with the accuracy of each reference assessed by manual evaluation. The results suggest moderate precision in generating reference lists: 55% accuracy in titles, 43% in authors, 44% in sources, and 54% in overall relevance. Given the relatively low accuracy of the generated references, this study introduced and applied a novel "Reverse Order Method". The method involves generating a list of references, manually validating each, and then instructing ChatGPT to compose a theoretical introduction based on the validated references alone. The assumption is that the model's precise reproduction of a reference indicates repeated exposure to, and understanding of, its content, enabling reliable citation in the final text. All final texts for all topics were evaluated as convincing, good-quality scientific text with citations in place. Although the assessment of the final texts was purely subjective, the study suggests the promising utility of the Reverse Order Method in crafting scientific texts using ChatGPT 3.5. The study underscores the potential of AI tools like ChatGPT in scientific writing, emphasising the role of manual validation in improving precision and the careful use of AI-generated references. To aid understanding of the results, the study also explored the inner workings of ChatGPT, concentrating on its transformer architecture and its learning objectives, offering intuition into the essential principles of language comprehension. A graphical representation, built upon existing research, was employed to illuminate this complex procedure.
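    The Reverse Order Method is a workflow rather than an algorithm, but its control flow can be sketched. `generate_references`, `validate`, and `compose` below are caller-supplied stand-ins (the first and last would be ChatGPT prompts, the middle the manual check), not a real API:

```python
def reverse_order_draft(topic, generate_references, validate, compose):
    """Reverse Order Method: generate candidate references first,
    keep only those that pass validation, then compose the text
    from the validated references alone."""
    candidates = generate_references(topic)
    validated = [ref for ref in candidates if validate(ref)]
    text = compose(topic, validated)
    return text, validated
```

With stub callables in place of the model, the round trip can be exercised end to end before plugging in real prompts.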

    A New Algorithm for Data Compression Optimization

    No full text