Computing and Informatics (E-Journal - Institute of Informatics, SAS, Bratislava)
Identification of KLF9 and FOSL2 as Endoplasmic Reticulum Stress Signature Genes in Osteoarthritis with Multiple Machine Learning Approaches
Objective: This study aims to screen osteoarthritis (OA) endoplasmic reticulum (ER) stress signature genes using machine learning approaches and to provide new insights and methods for OA treatment. Methods: We obtained the GSE55235 and GSE98918 datasets from the Gene Expression Omnibus (GEO) database and identified ER stress-related genes from the GeneCards database. We used R to perform batch correction, extract OA ER stress-related genes, and conduct differential expression analysis. We performed Gene Ontology (GO) functional analysis, Kyoto Encyclopedia of Genes and Genomes (KEGG) signaling pathway analysis, and gene set enrichment analysis (GSEA) on the differentially expressed genes (DEGs). We then applied machine learning algorithms, including Least Absolute Shrinkage and Selection Operator (LASSO) regression, support vector machine recursive feature elimination (SVM-RFE), and weighted gene co-expression network analysis (WGCNA), to screen OA ER stress signature genes. Human chondrocytes were used to establish the OA model; untreated cells served as the control. Results: We obtained 236 DEGs related to OA ER stress. GO and KEGG enrichment analyses showed that these genes were mainly involved in the positive regulation of leukocyte activation, the collagen-containing extracellular matrix, the phagosome, and other biological functions and signaling pathways. GSEA-GO analysis revealed that ER stress genes were significantly enriched in the negative regulation of nucleobase-containing compound metabolic processes (NES = -2.50, P < 0.001), while OA ER stress genes were significantly enriched in peptide antigen processing and presentation (NES = 2.40, P < 0.001). By intersecting the results of WGCNA, LASSO regression, and SVM-RFE, we identified KLF9 and FOSL2 as candidate OA ER stress signature genes, and validation confirmed their accuracy as OA signature genes. KLF9 expression in the OA group was higher than in the control group, while FOSL2 expression was lower (P < 0.05). Conclusion: Machine learning combined with co-expression network analysis can effectively identify genes and potential factors characteristic of ER stress in OA, which may help elucidate its pathogenesis and provide a new direction for improved clinical treatment.
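As one illustration of the feature-selection intersection described in this abstract, a minimal Python sketch might look like the following. The expression matrix, labels, and feature count are placeholders, and the authors' actual pipeline runs in R (limma/glmnet/e1071/WGCNA), not scikit-learn.

    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.svm import SVC
    from sklearn.feature_selection import RFE

    def select_signature_genes(X, y, gene_names):
        """X: samples x genes expression matrix, y: 0 = control, 1 = OA."""
        # LASSO: genes with non-zero coefficients survive the L1 penalty.
        lasso = LassoCV(cv=5).fit(X, y)
        lasso_genes = {g for g, c in zip(gene_names, lasso.coef_) if c != 0}

        # SVM-RFE: recursively drop the least informative genes.
        rfe = RFE(SVC(kernel="linear"), n_features_to_select=20).fit(X, y)
        svm_genes = {g for g, keep in zip(gene_names, rfe.support_) if keep}

        # Candidate signature genes = intersection of both selectors
        # (the study additionally intersects with WGCNA module genes).
        return lasso_genes & svm_genes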
MGRF: Multi-Graph Recommendation Framework with Heterogeneous and Homogeneous Graph Iterative Fusion
With the development of deep learning, deep neural methods have been introduced to boost the performance of Collaborative Filtering (CF) models. However, most models rely solely on the user-item heterogeneous graph and only implicitly capture homogeneous information, which limits their performance. Although some state-of-the-art methods try to compensate with additional graphs, they either aggregate the information of multiple graphs only in the initial embedding step or merge the multi-graph information only in the final embedding step. Such one-time multi-graph integration loses interactive and topological information during the intermediate propagation process. This paper proposes a novel Multi-Graph iterative fusion Recommendation Framework (MGRF) for CF recommendation. Its core components are dual information crossing interaction and multi-graph fusing propagation. The former enables repeated feature crossing between heterogeneous nodes throughout the whole embedding process. The latter repeatedly integrates homogeneous nodes and their topological relationships based on the constructed user-user and item-item graphs. Thus, MGRF improves embedding quality by iteratively fusing the user-item heterogeneous graph with the user-user and item-item homogeneous graphs. Extensive experiments on three public benchmarks demonstrate the effectiveness of MGRF, which outperforms state-of-the-art baselines in terms of Recall and NDCG.
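A minimal sketch of the per-layer fusion idea, not the authors' actual MGRF code: here A_ui is a normalized user-item adjacency matrix, A_uu and A_ii are the constructed homogeneous graphs, and the mixing weight alpha is an assumption. Embeddings are fused at every propagation layer rather than only at the initial or final step.

    import numpy as np

    def mgrf_propagate(A_ui, A_uu, A_ii, U0, V0, layers=3, alpha=0.5):
        U, V = U0, V0
        outs_u, outs_v = [U], [V]
        for _ in range(layers):
            # heterogeneous crossing: users gather from items and vice versa
            U_het, V_het = A_ui @ V, A_ui.T @ U
            # homogeneous propagation on user-user and item-item graphs
            U_hom, V_hom = A_uu @ U, A_ii @ V
            # per-layer fusion of both information sources
            U = alpha * U_het + (1 - alpha) * U_hom
            V = alpha * V_het + (1 - alpha) * V_hom
            outs_u.append(U)
            outs_v.append(V)
        # final embeddings average all layers (LightGCN-style readout)
        return sum(outs_u) / len(outs_u), sum(outs_v) / len(outs_v)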
Intelligent Fusion Recommendation Algorithm for Social Network Based on Fuzzy Perception
To improve intelligent fusion recommendation in the context of social networks, this paper develops an intelligent fusion recommendation algorithm for social networks based on a fuzzy perception algorithm. Moreover, it proposes a task offloading scheme that relies on V2V communication to utilize idle computing resources in a "resource pool". In addition, it formulates the computational task execution time as a min-max problem, reducing storage overhead while optimizing the total task execution time. Numerical results show that the proposed scheme greatly reduces task execution time, and the introduced particle swarm optimization algorithm achieves good convergence speed and accuracy on the optimization problem. The research verifies that the fuzzy-perception-based intelligent fusion recommendation algorithm for social networks achieves a good data fusion effect and can effectively improve intelligent fusion recommendation.
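As a rough illustration of the min-max formulation solved with particle swarm optimization, the sketch below minimizes the longest per-vehicle completion time for a task-to-vehicle assignment. The cost model, swarm size, and PSO coefficients are invented for the example and are not taken from the paper.

    import numpy as np

    def pso_minmax(cost, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
        """cost[t, v] = execution time of task t on vehicle v (idle V2V resource)."""
        n_tasks, n_vehicles = cost.shape
        rng = np.random.default_rng(0)
        # continuous positions, decoded to a discrete assignment by truncation
        pos = rng.uniform(0, n_vehicles, (n_particles, n_tasks))
        vel = np.zeros_like(pos)

        def fitness(p):
            assign = np.clip(p.astype(int), 0, n_vehicles - 1)
            per_vehicle = np.zeros(n_vehicles)
            for t, v in enumerate(assign):
                per_vehicle[v] += cost[t, v]
            return per_vehicle.max()          # min-max objective: makespan

        pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_fit.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0, n_vehicles - 1e-9)
            fits = np.array([fitness(p) for p in pos])
            better = fits < pbest_fit
            pbest[better], pbest_fit[better] = pos[better], fits[better]
            gbest = pbest[pbest_fit.argmin()].copy()
        return gbest.astype(int), pbest_fit.min()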
BTAN: Lightweight Super-Resolution Network with Target Transform and Attention
In the realm of single-image super-resolution (SISR), generating high-resolution (HR) images from a low-resolution (LR) input remains a challenging task. While deep neural networks have shown promising results, they often require significant computational resources. To address this issue, we introduce a lightweight convolutional neural network, named BTAN, that leverages the connection between LR and HR images to enhance performance without increasing the number of parameters. Our approach includes a target transform module that adjusts output features to match the target distribution and improve reconstruction quality, as well as a spatial and channel-wise attention module that modulates feature maps based on visual attention at multiple layers. We demonstrate the effectiveness of our approach on four benchmark datasets, showcasing superior accuracy, efficiency, and visual quality when compared to state-of-the-art methods.
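A rough PyTorch sketch of a combined channel- and spatial-attention module in the spirit of the description above; the layer sizes, reduction ratio, and kernel size are assumptions, not the published BTAN architecture.

    import torch
    import torch.nn as nn

    class ChannelSpatialAttention(nn.Module):
        def __init__(self, channels, reduction=4):
            super().__init__()
            # channel attention: squeeze with global pooling, excite with a small MLP
            self.channel = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )
            # spatial attention: a one-channel mask from a convolution over the features
            self.spatial = nn.Sequential(
                nn.Conv2d(channels, 1, kernel_size=7, padding=3),
                nn.Sigmoid(),
            )

        def forward(self, x):
            x = x * self.channel(x)   # re-weight channels
            x = x * self.spatial(x)   # re-weight spatial positions
            return x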
Leveraging Genetic Algorithms for Efficient Search-Based Higher Order Mutation Testing
Higher order mutation testing is a type of white-box testing in which the source code is changed repeatedly using two or more mutation operators to generate mutated programs. The objective is to improve the design and execution phases of testing by allowing testers to automatically evaluate their test cases. However, generating higher order mutants is challenging due to the large number of mutants needed and the complexity of the mutation search space. To address this challenge, the problem is modeled as a search problem, and this study proposes a genetic algorithm-based search technique for mutation testing. The expected outcome is a reduction in the number of equivalent higher order mutants produced, leading to a minimal set of mutants that still yields an adequate mutation score. Experiments were carried out comparing a random search algorithm with four versions of the proposed genetic algorithm that use different selection methods: roulette wheel, tournament, rank, and truncation selection. The results indicate that, depending on the selection method, the proposed genetic algorithm reduces both the number of equivalent mutants and the execution cost.
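A hypothetical sketch of such a search loop, where each individual is a bit mask over first-order mutants and two of the selection methods mentioned above (roulette wheel and tournament) are shown. The fitness function, population encoding, and parameters are placeholders rather than the paper's exact setup.

    import random

    def tournament(pop, fitness, k=3):
        return max(random.sample(pop, k), key=fitness)

    def roulette(pop, fitness):
        total = sum(fitness(ind) for ind in pop)
        pick, acc = random.uniform(0, total), 0.0
        for ind in pop:
            acc += fitness(ind)
            if acc >= pick:
                return ind
        return pop[-1]

    def evolve(pop, fitness, select, generations=50, p_mut=0.02):
        for _ in range(generations):
            nxt = []
            while len(nxt) < len(pop):
                a, b = select(pop, fitness), select(pop, fitness)
                cut = random.randrange(1, len(a))                       # one-point crossover
                child = a[:cut] + b[cut:]
                child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)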
Visual Communication Design and Color Balance Algorithm for Multimedia Image Analysis
As culture continues to evolve, the field of visual communication design faces new challenges in the era of new media. To address these challenges, this paper proposes innovative ideas for the industry's growth and development. Specifically, it suggests that incorporating visual communication design and color balance into multimedia image analysis can enhance the visual impact of images. Results showed that images analyzed with this approach received a visual effect score 10 % higher than those analyzed without it, validating the proposal: the visual effect score of Image 2 (the visual design proposed in this article) was 10 % higher than that of Image 1 (general visual design). A comparison of the spatial Modulation Transfer Function (MTF) showed an increase of about 12.5 % under this method. Overall, this paper offers new perspectives on improving visual communication design for the era of new media.
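For readers unfamiliar with color balancing, a simple gray-world white-balance step is one concrete example of the kind of operation the abstract refers to; the paper's own balancing algorithm and scoring are not reproduced here, so treat this as an illustration only.

    import numpy as np

    def gray_world_balance(img):
        """img: H x W x 3 float array in [0, 1]; scales channels to a common mean."""
        channel_means = img.reshape(-1, 3).mean(axis=0)
        gain = channel_means.mean() / np.maximum(channel_means, 1e-6)
        return np.clip(img * gain, 0.0, 1.0)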
Dangerousness of Client-Side Code Execution with Microsoft Office
Gaining unauthorized remote access to an environment is generally done either by exploiting a vulnerable internet-facing service or application, or by tricking a user into executing malicious code. The former is typically simpler since it requires no user interaction. The latter requires much more effort on the attackers' side, since they must find a way to entice the victim into opening a malicious document or interacting with an HTML page in a web browser. In this paper, we focus on the latter technique, which falls into the social engineering category, as it involves the use of a phishing attack. The reason for this choice is that user behavior is difficult to correct, which increases the attackers' chance of performing a successful attack; with the former technique, a simple patch, upgrade, or update can prevent adversaries from succeeding. Since Microsoft Office is widely used and trusted software (both in personal and commercial use), we make use of its features to build our payloads and eventually gain remote code execution on a victim's system. Performing a successful phishing attack involves many barriers that must be crossed, such as the need for similarity, the purchase of domains, and the use of encoding and encryption. Nowadays, companies frequently employ very aggressive antivirus software that deletes malicious files as soon as they land on the system. Therefore, bypassing these security protections must be taken into account, which is also addressed in this paper.
Low-Light Image Enhancement via Weighted Fractional-Order Model
Low-light image enhancement (LLIE) serves high-level vision tasks and improves their efficiency. Retinex-based methods are widely recognized as a representative technique for LLIE, but they still suffer from inflexible regularization terms when decomposing illumination and reflectance. In this paper, we propose a new weighted fractional-order variational model based on the Retinex model. First, the weighted fractional-order variational model estimates a piecewise-smooth, weakly pixel-shifted illumination in a structure- and texture-aware manner. Then, to solve the model accurately, we adopt a semi-decoupled approach together with an alternating minimization method. Finally, the designed multi-illumination fusion method enhances the structure-rich dark regions of the image through well-exposedness and local entropy weights, while realizing adaptive enhancement based on a naturalness-preserving parameter estimation algorithm. Subjective and objective experiments on several challenging low-light datasets demonstrate that the proposed method is more competitive in enhancing low-light images than state-of-the-art methods.
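A sketch of the fusion weights mentioned above (well-exposedness and local entropy), assuming single-channel images in [0, 1]; the variational illumination estimation itself is omitted, and the window size, bin count, and sigma value are assumptions rather than the paper's settings.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def well_exposedness(img, sigma=0.2):
        # Gaussian-shaped preference for mid-tone pixels
        return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

    def local_entropy(img, win=9, bins=16):
        # crude windowed entropy via per-bin local histograms
        ent = np.zeros_like(img)
        for b in range(bins):
            mask = ((img >= b / bins) & (img < (b + 1) / bins)).astype(float)
            p = uniform_filter(mask, size=win)
            ent -= np.where(p > 0, p * np.log(p + 1e-12), 0.0)
        return ent

    def fuse(enhanced_versions):
        """Weighted fusion of several differently enhanced versions of an image."""
        weights = [well_exposedness(v) * local_entropy(v) + 1e-6 for v in enhanced_versions]
        total = sum(weights)
        return sum(w * v for w, v in zip(weights, enhanced_versions)) / total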
MIDWRSeg: Acquiring Adaptive Multi-Scale Contextual Information for Road-Scene Semantic Segmentation
We present MIDWRSeg, a simple semantic segmentation model based on a convolutional neural network architecture. For complex road scenes, a large receptive field gathered at multiple scales is crucial for semantic segmentation. CNN architectures need a way to establish long-range dependencies (large receptive fields) akin to the attention mechanism of the Transformer architecture, yet the high complexity of attention formed by the Query, Key and Value matrix operations is too costly for real-time semantic segmentation models. Therefore, a Multi-Scale Convolutional Attention (MSCA) block is constructed from inexpensive convolution operations to form long-range dependencies. In the initial encoding stage, the model adopts a Simple Inverted Residual (SIR) block for feature extraction. After downsampling, the reduced-resolution feature maps pass through a sequence of stacked MSCA blocks, forming multi-scale long-range dependencies. To further enrich the adaptive receptive field, an Internal Depth Wise Residual (IDWR) block is introduced. In the decoding stage, a simple FCN-like decoder is used to limit computational cost. Our method is competitive with existing encoder-decoder real-time semantic segmentation models on the Cityscapes and CamVid datasets: MIDWRSeg achieves 74.2 % mIoU at 88.9 FPS on the Cityscapes test set and 76.8 % mIoU at 95.2 FPS on the CamVid test set.
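A loose PyTorch sketch of a multi-scale convolutional attention block in the spirit of MSCA: depth-wise strip convolutions at several scales produce an attention map that re-weights the input instead of Query/Key/Value attention. The kernel sizes and branch count are assumptions, not the exact MIDWRSeg configuration.

    import torch
    import torch.nn as nn

    class MSCABlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.local = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
            # depth-wise strip convolutions approximate large receptive fields cheaply
            self.branches = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels),
                    nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels),
                )
                for k in (7, 11, 21)
            ])
            self.mix = nn.Conv2d(channels, channels, 1)

        def forward(self, x):
            attn = self.local(x)
            attn = attn + sum(branch(attn) for branch in self.branches)
            attn = self.mix(attn)
            return x * attn   # convolutional attention re-weighting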