32 research outputs found

    Bioinformatics: Computational Approaches for Genomics and Proteomics

    Bioinformatics is a fast-evolving field that combines biology, computer science, and statistics to analyze and interpret enormous volumes of biological data. With the introduction of high-throughput technologies such as next-generation sequencing and mass spectrometry, genomics and proteomics research has generated vast datasets, necessitating computational tools to process them and extract useful insights. This work presents a survey of computational approaches in bioinformatics, with a particular emphasis on their application to genomics and proteomics. Genomics covers the study of entire genomes, including genome annotation, assembly, and comparative genomics. Proteomics focuses on the investigation of proteins, including their identification, quantification, structural analysis, and functional characterization. Consequently, the importance of bioinformatics as a field has continued to grow.

    Quantum Computing: Algorithms, Architectures, and Applications

    Cryptography, optimization, simulation, and machine learning are just a few of the fields that could be transformed by quantum computing. This abstract gives a thorough introduction to quantum computing, with an emphasis on its algorithms, architectures, and applications. It highlights the revolutionary potential of quantum computing for tackling difficult problems that are beyond the reach of conventional computers, laying the groundwork for further research into and understanding of this rapidly developing field.

    Parallel and Distributed Computing for High-Performance Applications

    Parallel and distributed computing has become an important area of computer science because it makes it possible to build high-performance software that handles challenging computational tasks effectively. This study gives a thorough introduction to parallel and distributed computing techniques as they are used in high-performance applications. The core idea underpinning parallel and distributed computing is the partitioning of a computation into smaller subtasks that can be executed concurrently on multiple processors or machines, which enables shorter execution times and better overall performance. Parallel and distributed computing is essential for high-performance applications such as scientific simulations, data analysis, and artificial intelligence, since these workloads frequently demand significant computational resources. This article offers a thorough review of the theories, methods, challenges, and developments in parallel and distributed computing for high-performance applications. By understanding the underlying concepts and applying the most recent advances, researchers and practitioners can fully exploit the potential of parallel and distributed computing and open up new vistas in computational science and engineering.
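    To illustrate the partitioning idea described above, here is a minimal, hypothetical sketch in Python: a computation is split into independent subtasks that a process pool executes concurrently. The function and data are placeholders for this example, not taken from the study.

    from multiprocessing import Pool

    def subtask(chunk):
        # Placeholder work: sum of squares over one partition of the data.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        # Partition the computation into smaller subtasks.
        chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
        with Pool() as pool:
            # Each chunk is processed concurrently by a separate worker process.
            partial_results = pool.map(subtask, chunks)
        print(sum(partial_results))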

    Computational Intelligence for Solving Complex Optimization Problems

    Computational intelligence (CI) has proven to be a powerful and versatile discipline for solving complex optimization problems. Because real-world problems are becoming increasingly complicated, traditional optimization approaches frequently struggle to offer efficient and effective solutions. Evolutionary algorithms, neural networks, fuzzy systems, and swarm intelligence are just a few of the methods that fall under the umbrella of computational intelligence and draw inspiration from both natural and artificial intelligence. This abstract examines how computational intelligence techniques are used to solve complex optimization problems, highlighting their benefits, drawbacks, and most recent developments. In summary, computational intelligence techniques provide a powerful and adaptable way to tackle challenging optimization problems; they are well suited to the non-linear relationships, uncertainties, and multi-objective settings that arise in real-world applications. Recent advances in hybrid techniques and metaheuristics have pushed the limits of computational intelligence, even though obstacles in algorithm design and parameter tuning remain. As technology continues to advance, computational intelligence is expected to play an increasingly significant role in tackling complex optimization problems and fostering innovation across a variety of disciplines.
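    As an illustration of the evolutionary-algorithm family mentioned above, here is a minimal, hypothetical sketch of a (1+1) evolution strategy minimizing a simple benchmark function. The objective and parameter values are placeholders chosen for the example, not taken from the paper.

    import random

    def sphere(x):
        # Simple benchmark objective: sum of squares (minimum at the origin).
        return sum(v * v for v in x)

    def one_plus_one_es(dim=5, sigma=0.3, iters=2000, seed=0):
        rng = random.Random(seed)
        parent = [rng.uniform(-5, 5) for _ in range(dim)]
        best = sphere(parent)
        for _ in range(iters):
            # Mutate the parent with Gaussian noise; keep the child only if it improves.
            child = [v + rng.gauss(0, sigma) for v in parent]
            fitness = sphere(child)
            if fitness < best:
                parent, best = child, fitness
        return parent, best

    if __name__ == "__main__":
        solution, value = one_plus_one_es()
        print(value)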

    Data Privacy and Security in Cloud Computing Environments

    Cloud computing has been adopted worldwide as an environment for data storage, processing, and access. This technological development, however, has raised questions about data security and privacy in cloud computing environments. The purpose of this abstract is to offer a thorough review of the challenges, solutions, and future developments related to data privacy and security in cloud computing. The central difficulty in cloud computing systems is keeping data private and secure while it is processed and stored in third-party data centres. The abstract discusses the dangers of insider threats, data breaches, and unauthorized access to sensitive information, and digs further into the legal and compliance requirements that businesses must follow in order to protect user data in the cloud. In conclusion, data privacy and security in cloud computing environments remain critical concerns for organizations and individuals alike. The survey explains how cloud storage is used globally and describes its challenges, solutions, and future innovations. It underscores the importance of robust encryption, access controls, user awareness, and emerging technologies in safeguarding data in the cloud. By addressing these concerns, organizations can leverage the power of cloud computing while maintaining the confidentiality, integrity, and availability of their data.
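    To make the point about robust encryption concrete, here is a minimal, hypothetical sketch of client-side encryption before data leaves for a cloud store, using the third-party Python cryptography package. The upload step is a placeholder and is not described in the survey.

    from cryptography.fernet import Fernet

    # Generate and keep the key on the client; the cloud provider never sees it.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b"patient_id=123;diagnosis=confidential"
    ciphertext = cipher.encrypt(record)

    # upload_to_cloud(ciphertext)  # placeholder for the actual storage API call

    # Only holders of the key can recover the plaintext after download.
    assert cipher.decrypt(ciphertext) == record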

    Modeling ionic liquids mixture viscosity using Eyring theory combined with a SAFT-based EOS

    This work aims to calculate the viscosities of ionic liquid mixtures using the Eyring theory combined with the SAFT-VR Morse EOS. The free volume theory was used to correlate the pure viscosity of ionic liquids (ILs) and solvents. Three model parameters were adjusted using experimental viscosity data of ILs between 282 K and 413 K and 1 bar to 350 bar. The average ARD%, Bias%, and rmsd between the model estimates and the experimental viscosity data for pure ILs were 4.9 %, 1.015 %, and 0.67, respectively. The average error of the proposed model tends to increase at pressures above 200 bar. The average ARD% for [C2mim][Tf2N] and [C6mim][Tf2N] is about 3.8 % and 3.4 % at pressures below 200 bar, while the average ARD% values increase sharply at higher pressures. This is due to the weak performance of the SAFT-VR Morse EOS in calculating IL density at high pressures. The SAFT-VR Morse EOS has been coupled with the Eyring theory and the Redlich-Kister mixing rule to estimate the mixture viscosity of IL-IL and IL-solvent systems. The thermal contribution of the excess activation free energy has been calculated using the Redlich-Kister mixing rule with four adjustable parameters. The average ARD%, rmsd, and Bias% for fifteen binary mixtures were 3.9 %, 2.51, and 0.57 %, respectively. The average errors for the mixture viscosity of IL-polar solvent systems are higher than for non-polar solvents. For binary IL-IL systems, the model results are in good agreement with the experimental data. The model performance has also been evaluated using the viscosity deviation property. The SAFT-VR Morse EOS predicts a negative viscosity deviation; attractive interactions that are stronger in the mixture than in the pure components are the major contribution to this negative deviation. The results show that the new model can calculate the mixture viscosity and viscosity deviation of binary systems satisfactorily. The obtained mixture-viscosity errors show that the Eyring theory can be coupled with a SAFT-based EOS to calculate the viscosity of ILs satisfactorily over a wide range of pressures and temperatures.
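    For orientation, the general textbook form of an Eyring-type mixing rule with a Redlich-Kister expansion is sketched below in LaTeX; the specific parameterization used with the SAFT-VR Morse EOS in this work may differ.

    % Eyring-type expression for the mixture viscosity eta_m, where V is the molar
    % volume (here obtainable from the SAFT-based EOS) and x_i are mole fractions:
    \ln(\eta_m V_m) = \sum_i x_i \ln(\eta_i V_i) + \frac{\Delta G^{E,\neq}}{RT}

    % Redlich-Kister expansion of the excess activation free energy for a binary
    % mixture; truncating at k = 3 gives four adjustable parameters A_k:
    \frac{\Delta G^{E,\neq}}{RT} = x_1 x_2 \sum_{k=0}^{3} A_k \,(x_1 - x_2)^k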

    Robust Classification and Detection of Big Medical Data Using Advanced Parallel K-Means Clustering, YOLOv4, and Logistic Regression

    Big-medical-data classification and image detection are crucial tasks in the field of healthcare, as they can assist with diagnosis, treatment planning, and disease monitoring. Logistic regression and YOLOv4 are popular algorithms that can be used for these tasks. However, these techniques have limitations and performance issues with big medical data. In this study, we presented a robust approach for big-medical-data classification and image detection using logistic regression and YOLOv4, respectively. To improve the performance of these algorithms, we proposed the use of advanced parallel k-means pre-processing, a clustering technique that identifies patterns and structures in the data. Additionally, we leveraged the acceleration capabilities of a neural engine processor to further enhance the speed and efficiency of our approach. We evaluated our approach on several large medical datasets and showed that it could accurately classify large amounts of medical data and detect medical images. Our results demonstrated that the combination of advanced parallel k-means pre-processing and the neural engine processor significantly improved the performance of logistic regression and YOLOv4, making them more reliable for use in medical applications. This new approach offers a promising solution for medical data classification and image detection and may have significant implications for the field of healthcare.
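    One common way to combine k-means pre-processing with logistic regression is to append cluster-distance features to the raw features before classification. The sketch below is a minimal, hypothetical illustration of that idea using scikit-learn and synthetic data; it does not reproduce the paper's advanced parallel k-means implementation, the YOLOv4 branch, or the neural-engine acceleration.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a large tabular medical dataset.
    X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # K-means pre-processing: learn cluster structure on the training data,
    # then use distances to the cluster centres as extra features.
    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_train)
    X_train_aug = np.hstack([X_train, kmeans.transform(X_train)])
    X_test_aug = np.hstack([X_test, kmeans.transform(X_test)])

    clf = LogisticRegression(max_iter=1000).fit(X_train_aug, y_train)
    print("test accuracy:", clf.score(X_test_aug, y_test))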

    Employment of multi-classifier and multi-domain features for PCG recognition

    In this paper, K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) classifiers are employed with multi-domain features as a proposed methodology for recognizing the normality status of heart sound recordings (phonocardiograms, PCG). The dataset used in this paper is provided by the PhysioNet/CinC Challenge 2016. Heart sounds are complex signals that require trained clinicians for diagnosis, which motivated us to develop an algorithm for automatic classification of heart sounds into two classes: normal and abnormal. The nine feature domains employed are entropy, high-order statistics, cyclo-stationarity, cepstrum, the frequency spectrum of the records, energy, state amplitude, the frequency spectrum of the states, and time intervals. A total of 527 features are extracted from these domains and used to train the K-Nearest Neighbor and Support Vector Machine classifiers. The Fine-KNN classifier outperformed all SVM variants with an accuracy of 93.5%, while the Cubic-SVM classifier achieved 90.9%, the highest accuracy among the SVMs. Both the Fine-KNN classifier and the proposed features are efficient and significant for PCG recognition.
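    "Fine KNN" and "Cubic SVM" are MATLAB Classification Learner presets, corresponding roughly to a 1-nearest-neighbour classifier and an SVM with a cubic polynomial kernel. The sketch below is a minimal, hypothetical scikit-learn analogue operating on an already-extracted 527-dimensional feature matrix (the feature extraction itself is not shown); it is not the authors' code, and the random data are placeholders.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder data: rows are recordings, columns are the 527 multi-domain
    # features, labels are 0 = normal, 1 = abnormal.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 527))
    y = rng.integers(0, 2, size=400)

    fine_knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=1))
    cubic_svm = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))

    for name, model in [("Fine-KNN", fine_knn), ("Cubic-SVM", cubic_svm)]:
        scores = cross_val_score(model, X, y, cv=5)
        print(name, "mean CV accuracy:", scores.mean())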

    Content-based image retrieval using combined histogram and moment methods

    In this work, we introduce a content-based image retrieval (CBIR) system. Colour is one of the essential features in image processing and CBIR, so we use colour histogram and colour moment features to compare a query image with the images in the database. The two methods, colour histogram and colour moments, achieved state-of-the-art results when applied to the WANG database images. For testing purposes, we used 100 images (10 images from each class). The mean retrieval precision of the histogram was 74.4 % and that of the colour moments was 72.4 % when each algorithm was tested alone; the result improves when they are combined, reaching 75.1 % with a constant weight, and the precision increases to 81.9 % when the combination weight is made adjustable by the user.
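    As a rough illustration of the weighted combination described above, here is a minimal, hypothetical sketch that computes per-channel colour histograms and colour moments (mean, standard deviation, skewness) for RGB image arrays and blends the two distances with a user-set weight. The weight value and distance measures are illustrative choices, not those of the paper.

    import numpy as np

    def colour_histogram(img, bins=16):
        # Per-channel normalised histograms, concatenated into one vector.
        hists = [np.histogram(img[..., c], bins=bins, range=(0, 256), density=True)[0]
                 for c in range(3)]
        return np.concatenate(hists)

    def colour_moments(img):
        # Mean, standard deviation, and skewness for each colour channel.
        feats = []
        for c in range(3):
            ch = img[..., c].astype(float).ravel()
            mean, std = ch.mean(), ch.std()
            skew = ((ch - mean) ** 3).mean() / (std ** 3 + 1e-9)
            feats.extend([mean, std, skew])
        return np.array(feats)

    def combined_distance(query, candidate, weight=0.5):
        # Blend histogram and moment distances; the weight is user-adjustable.
        # In practice the two distances would be normalised to comparable scales first.
        d_hist = np.linalg.norm(colour_histogram(query) - colour_histogram(candidate))
        d_mom = np.linalg.norm(colour_moments(query) - colour_moments(candidate))
        return weight * d_hist + (1 - weight) * d_mom

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        a = rng.integers(0, 256, size=(64, 64, 3))
        b = rng.integers(0, 256, size=(64, 64, 3))
        print(combined_distance(a, b, weight=0.6))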
