
    Development and Effectiveness Testing of “Punsook”: A Smartphone Application for Intermittent Urinary Catheter Users with Spinal Cord Injury

    Objective: To develop a smartphone application to assist the self-management of intermittent urinary catheter users and to study its effectiveness. Methods: In phase 1, 10 intermittent urinary catheter users used the first version of “Punsook”, a web-based application (app) for smartphones, alongside usual intermittent urinary catheterization (IC), and gave feedback on their experiences. Their qualitative opinions were used to develop a second version of the “Punsook” app. In phase 2, the new version was used by 35 participants, who were asked to complete an effectiveness questionnaire after using the app, including details on their history of urinary tract infection (UTI), urinary leakage, and catheterization-related pain. This information was gathered at the end of the first and third months of the second phase of the study. Results: More than half of the participants agreed at the end of the first month that every part of the app was acceptably pleasant. They reported liking the simplicity of the app with regard to ease of use, accessibility, ease of returning to use, and interest in the program. No statistically significant changes in urinary leakage, UTI, or pain were found. Conclusion: The app was considered effective in terms of positive user satisfaction with every part of the program. However, despite this positive reception, the app might not actually have helped users improve their bladder control.

    SC-FDMA-based resource allocation and power control scheme for D2D communication using LTE-A uplink resource

    Device-to-device (D2D) communication-enabled cellular networks allow cellular devices to communicate directly with each other without any evolved NodeB (eNB). D2D communication aims to improve spectral efficiency and increase overall system capacity. For future mobile networks, intelligent radio resource allocation and power control schemes are required to accommodate the increasing number of cellular devices and their growing demand for data traffic. In this paper, a combined resource allocation and power control scheme for D2D communication is proposed. In the proposed scheme, D2D communication reuses the uplink (UL) resources of conventional cellular user equipments (CUEs); therefore, we have adopted single-carrier frequency division multiple access (SC-FDMA) as the UL transmission scheme. The proposed scheme uses a fractional frequency reuse (FFR)-based architecture to efficiently allocate resources and mitigate the interference between CUEs and D2D user equipments (DUEs). In order to guarantee user fairness, the proposed scheme uses the well-known proportional fair (PF) scheduling algorithm for resource allocation. We also propose an intelligent power control scheme which gives both CUEs and DUEs an equal opportunity to achieve a certain minimum signal-to-interference-plus-noise ratio (SINR) value. The performance evaluation results show that the proposed scheme significantly improves the overall cell capacity and achieves a low peak-to-average power ratio (PAPR).
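    The abstract names two standard building blocks: proportional fair scheduling and SINR-targeted power control. A minimal sketch of both, assuming a simplified single-metric view rather than the authors' SC-FDMA/FFR implementation; all function and parameter names here are illustrative.

```python
def pf_schedule(instant_rates, avg_rates):
    """Proportional fair rule: serve the user maximizing
    instantaneous rate / long-term average rate."""
    return max(instant_rates, key=lambda u: instant_rates[u] / avg_rates[u])

def update_average(avg_rate, instant_rate, scheduled, window=100):
    """Exponential moving average of a user's served throughput."""
    alpha = 1.0 / window
    return (1 - alpha) * avg_rate + alpha * (instant_rate if scheduled else 0.0)

def tx_power(target_sinr, interference_plus_noise, channel_gain, p_max):
    """Smallest transmit power meeting the SINR target, capped at p_max."""
    return min(p_max, target_sinr * interference_plus_noise / channel_gain)
```

    The PF metric favors users whose current channel is good relative to what they have received so far, which is how it balances throughput against fairness between CUEs and DUEs.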

    A Log Parsing Framework for ALICE O2 Facilities

    The ALICE (A Large Ion Collider Experiment) detector at the European Organization for Nuclear Research (CERN) generates a substantial volume of experimental data, demanding efficient online and offline processing. To enhance the stability and reliability of the ALICE computing system, this study introduces an Artificial Intelligence-based logging system designed to detect, identify, and resolve issues through the analysis of system runtime information contained in logs. Existing online log parsing methods, however, often lack full automation and generality, relying instead on manually defined parameters and regular expressions that are better suited to static logs. In this study, we propose a novel and fully automated online log parsing framework for ALICE O2 (Online-Offline). To overcome key challenges, we employ the Term Frequency-Inverse Document Frequency (TF-IDF) algorithm to create ground truth, use genetic programming to generate regular expressions, apply the Artificial Bee Colony (ABC) algorithm for hyperparameter optimization, and implement a log template reduction algorithm to reduce similarity among log templates. Our framework’s effectiveness is validated through experiments on 5 benchmark log datasets and ALICE application logs, comparing its performance with the state-of-the-art online log parsing framework, Drain. The empirical results demonstrate the automated nature of our approach and its ability to parse with high accuracy (i.e., 99.89% on the ALICE application log).
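    The TF-IDF step mentioned above can be sketched in a few lines. This is a generic TF-IDF scorer, not the paper's exact ground-truth procedure; the whitespace tokenization and any thresholding on the score are assumptions.

```python
import math
from collections import Counter

def tf_idf(tokenized_logs):
    """Per-token TF-IDF for each log line; rare tokens score higher."""
    n = len(tokenized_logs)
    doc_freq = Counter()                      # in how many lines each token occurs
    for tokens in tokenized_logs:
        doc_freq.update(set(tokens))
    scores = []
    for tokens in tokenized_logs:
        tf = Counter(tokens)
        scores.append({
            t: (tf[t] / len(tokens)) * math.log(n / doc_freq[t])
            for t in tf
        })
    return scores
```

    Tokens appearing in every log line get an IDF of zero, so constant template words score 0 while line-specific values (identifiers, counters) score higher; thresholding on this score is one plausible way to separate template tokens from variables when building ground truth.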

    A Topic Modeling for ALICE's Log Messages using Latent Dirichlet Allocation

    In modern-day software, where digital technology is everywhere, a system can generate a massive amount of log messages every second. Like other data, logs can provide insight and in-depth knowledge of a system given enough resources and time. However, not all systems have an organized log system, and an unorganized log is messy and difficult to navigate. Organizing log messages poses several challenges. Because the amount of log data generated is massive, it cannot be handled by human labor alone. A log message is not regular human communication; to thoroughly understand the content inside a log, assistance from specialists of that particular system is required. These problems exist everywhere, with no exception even for high-performance computing systems like those used in the ALICE experiment at CERN. In this paper, we propose a topic modeling approach for ALICE’s log messages using the Latent Dirichlet Allocation algorithm. The objective is to convert the messy log messages into categorized ones. We preprocessed the log messages using a bag-of-words representation. Then we performed hyperparameter tuning to find a suitable number of topics, using topic coherence as the evaluation measure. Additionally, we applied the same method to the HDFS log dataset to verify the validity of the model. Finally, the outputs were handed to CERN domain experts for final evaluation. From the results, we could create a practical topic modeling framework for ALICE’s log messages in a real scenario.
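    To make the LDA step concrete, here is a tiny collapsed Gibbs sampler over bag-of-words documents. It shows the mechanics of topic assignment only; the hyperparameters, iteration count, and sampler choice are illustrative and not taken from the paper, which does not specify its LDA implementation.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, iters=100, seed=0):
    """Tiny collapsed Gibbs sampler for LDA; returns a topic per token."""
    rng = random.Random(seed)
    vocab_size = len({w for doc in docs for w in doc})
    doc_topic = [[0] * n_topics for _ in docs]                # n_{d,k}
    topic_word = [defaultdict(int) for _ in range(n_topics)]  # n_{k,w}
    topic_total = [0] * n_topics                              # n_k
    assign = []
    for d, doc in enumerate(docs):                            # random init
        zs = []
        for w in doc:
            k = rng.randrange(n_topics)
            zs.append(k)
            doc_topic[d][k] += 1; topic_word[k][w] += 1; topic_total[k] += 1
        assign.append(zs)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = assign[d][i]                              # remove token
                doc_topic[d][k] -= 1; topic_word[k][w] -= 1; topic_total[k] -= 1
                weights = [                                   # full conditional
                    (doc_topic[d][k] + alpha)
                    * (topic_word[k][w] + beta)
                    / (topic_total[k] + vocab_size * beta)
                    for k in range(n_topics)
                ]
                k = rng.choices(range(n_topics), weights=weights)[0]
                assign[d][i] = k                              # re-add token
                doc_topic[d][k] += 1; topic_word[k][w] += 1; topic_total[k] += 1
    return assign
```

    In practice the number of topics would be swept and scored with topic coherence, as the abstract describes, rather than fixed in advance.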

    Computing Resource Optimization for a Log Monitoring System

    A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) in the European Organization for Nuclear Research (CERN) laboratory was built to study heavy-ion collisions and the properties of the quark-gluon plasma. The Online and Offline (O2) software systems of the experiment generate a huge amount of log data that is used for monitoring to detect potential system failures. Elasticsearch was selected as the log storage and search engine for the monitoring system. One of the main problems is how to allocate computing resources for Elasticsearch while minimizing cost and satisfying performance thresholds (i.e., throughput). Moreover, lacking knowledge of the search engine's behavior makes it difficult to find the best configuration. The exhaustive search method is a potential approach for solving this problem; however, it is not practical since it consumes a lot of time and computing resources. Due to the limited resources, Bayesian optimization is applied as a solution. The Bayesian method requires only a few samples to create a surrogate function that roughly represents the objective function, i.e., minimizing cost while satisfying the performance needs. Then, the method explores only the area where the optimal solution exists with high probability. The results show that Bayesian optimization provides the optimal or near-optimal computing resource configuration for the given benchmark experiments while requiring only about half the evaluations of other methods, e.g., exhaustive search, regression, and machine learning. The impact of several acquisition functions and initial sample generators was studied in order to find the best solution. These insights can help system operators search for an optimal computing resource configuration quickly and efficiently.
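    The surrogate-plus-acquisition loop described above can be sketched as follows. To stay self-contained, a toy kernel-smoothing surrogate with a distance-based uncertainty stands in for the Gaussian process the Bayesian method would normally use, and a lower-confidence-bound acquisition stands in for whichever acquisition functions the paper compared; the candidate set and cost function are made up.

```python
import math

def surrogate(x, samples, bandwidth=2.0):
    """Kernel-smoothed cost estimate plus a crude distance-based uncertainty
    (toy stand-in for a Gaussian process posterior)."""
    weighted = [(math.exp(-((x - sx) / bandwidth) ** 2), sy) for sx, sy in samples]
    total = sum(w for w, _ in weighted)
    mean = sum(w * y for w, y in weighted) / total
    uncertainty = min(abs(x - sx) for sx, _ in samples)
    return mean, uncertainty

def bayes_opt(cost, candidates, n_init=3, n_iter=10, kappa=1.0):
    """Minimize cost over a discrete candidate set with few evaluations."""
    samples = [(x, cost(x)) for x in candidates[:n_init]]   # initial samples
    for _ in range(n_iter):
        tried = {x for x, _ in samples}
        untried = [x for x in candidates if x not in tried]
        if not untried:
            break
        def lcb(x):                     # lower-confidence-bound acquisition
            mean, unc = surrogate(x, samples)
            return mean - kappa * unc
        best = min(untried, key=lcb)    # most promising unevaluated config
        samples.append((best, cost(best)))
    return min(samples, key=lambda s: s[1])
```

    The key property the paper exploits is visible here: each expensive evaluation (a benchmark run) is spent only where the surrogate suggests a good configuration may exist, instead of sweeping every configuration exhaustively.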

    A Hyperparameter Tuning Approach for an Online Log Parser

    The European Organization for Nuclear Research (CERN) deployed ALICE's upgraded computing system in 2020 to improve the performance of the system. One aim of the upgraded computing system is to complement the monitoring system with an AI-based logging system, since logs include valuable system runtime information. This allows developers and administrators to monitor their systems and identify abnormal behavior and errors. The new computing system is expected to generate large quantities of logs due to the scale and complexity of the system. Therefore, log parsing is required to transform unstructured, free-text log messages into structured logs that are ready to use as input to an automated monitoring system in ALICE. Drain is a popular online log parsing method using a parsing tree technique. However, the performance of Drain depends on the values of its parameters (i.e., similarity threshold, maximum depth of the tree, and maximum child nodes of the tree). To achieve the best performance in a reasonable time, we propose a hyperparameter tuning approach using the Artificial Bee Colony (ABC) algorithm to support Drain. We evaluate our proposed method on two log datasets, HDFS and Apache ZooKeeper, in terms of precision, recall, F-measure, and parsing accuracy.
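    A minimal sketch of the ABC search loop, applied to a generic box-constrained objective. In the paper the objective would be Drain's parsing accuracy over the similarity threshold, tree depth, and child-node parameters; here the cost function, bounds, and colony settings are placeholders.

```python
import random

def abc_minimize(cost, bounds, n_food=10, limit=10, iters=60, seed=1):
    """Minimize cost(params) over box-constrained params with the ABC loop."""
    rng = random.Random(seed)
    dim = len(bounds)
    new_source = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    foods = [new_source() for _ in range(n_food)]   # candidate food sources
    costs = [cost(f) for f in foods]
    trials = [0] * n_food                           # stagnation counters

    def mutate(i):
        """Perturb one dimension toward/away from a random other source."""
        k = rng.randrange(n_food)
        d = rng.randrange(dim)
        cand = foods[i][:]
        cand[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        cand[d] = max(bounds[d][0], min(bounds[d][1], cand[d]))
        c = cost(cand)
        if c < costs[i]:                            # greedy selection
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                     # employed bee phase
            mutate(i)
        weights = [1.0 / (1.0 + c) for c in costs]  # assumes cost >= 0
        for _ in range(n_food):                     # onlooker bee phase
            mutate(rng.choices(range(n_food), weights=weights)[0])
        for i in range(n_food):                     # scout bee phase
            if trials[i] > limit:
                foods[i] = new_source()
                costs[i] = cost(foods[i])
                trials[i] = 0
    best = min(range(n_food), key=costs.__getitem__)
    return foods[best], costs[best]
```

    To tune a parser one would wrap a parsing run as the cost function, e.g. `cost = lambda p: 1.0 - parsing_accuracy(p)` (with `parsing_accuracy` a hypothetical evaluation of Drain on a labeled dataset), and round integer-valued parameters such as tree depth before use.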

    A runtime estimation framework for ALICE

    The European Organization for Nuclear Research (CERN) is the largest research organization for particle physics. ALICE, short for A Large Ion Collider Experiment, serves as one of the main detectors at CERN and produces approximately 15 petabytes of data each year. The computing associated with an ALICE experiment consists of both online and offline processing. An online cluster retrieves data while an offline cluster farm performs a broad range of data analysis. Online processing occurs as collision events are streamed from the detector to the online cluster. This process compresses and calibrates the data before storing it in a data storage system for subsequent offline processing, e.g., event reconstruction. Due to the large volume of stored data to process, offline processing seeks to minimize the execution time and data-staging time of applications via a two-tier offline cluster: the Event Processing Node (EPN) as the first tier and the Worldwide LHC Computing Grid (WLCG) as the second tier. This two-tier cluster requires a smart job scheduler to efficiently manage the running of applications. Thus, we propose a runtime estimation method for offline processing in the ALICE environment. Our approach exploits application profiles to predict the runtime of a high-performance computing (HPC) application without the need for any additional metadata. To evaluate our proposed framework, we performed experiments on actual ALICE applications. In addition, we also tested the efficacy of our runtime estimation method by predicting the run times of HPC applications on the Amazon EC2 cloud. The results show that our approach generally delivers accurate predictions, i.e., low error percentages.
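    The core idea of profile-based runtime estimation can be illustrated with a toy model: fit past (profile feature, runtime) observations and extrapolate to unseen runs. The choice of input size as the feature and of a linear least-squares fit is an assumption for illustration, not the paper's actual model.

```python
def fit_linear(sizes, runtimes):
    """Ordinary least squares for runtime = a * size + b."""
    n = len(sizes)
    mean_x = sum(sizes) / n
    mean_y = sum(runtimes) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, runtimes))
             / sum((x - mean_x) ** 2 for x in sizes))
    return slope, mean_y - slope * mean_x      # (a, b)

def predict_runtime(model, size):
    """Predict the runtime of a new job from its profiled input size."""
    a, b = model
    return a * size + b
```

    A scheduler for the two-tier cluster could use such predictions to decide whether a job fits on the EPN tier or should be dispatched to the grid tier, trading prediction error against queue time.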