
    Development of a web-based recommendation service based on the analysis of user behavioral data

    Get PDF
    In today's information society, web services that provide personalized information and recommendations have become essential and popular. With the advancement of data analysis and machine learning technologies, developing web recommendation services based on user behavior analysis is a relevant task in the field of information technology. This article discusses the development of a recommendation web service that analyzes user behavior data to provide personalized recommendations, aiming to enhance user experience and meet individual needs. The tasks involved in achieving this goal include data collection and processing, data analysis, development of a recommendation algorithm, and the creation of a user-friendly web interface. The recommendations take into account user preferences, activities, and context. In conclusion, the development of such a service is an important and timely task in today's information society: leveraging user behavior data and machine learning algorithms makes it possible to improve the user experience and provide personalized recommendations, and further advancements in these services can enhance personalization and the effectiveness of information retrieval for users.
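    The article does not fix a particular recommendation algorithm, so the following is a minimal, hypothetical sketch of one common choice for this kind of service: item-based collaborative filtering over implicit behavioral data (clicks/views). The function name and toy interaction matrix are invented for illustration.

```python
# Illustrative sketch only: the article does not specify a concrete algorithm,
# so this shows one common choice -- item-based collaborative filtering over an
# implicit-feedback (click/view) matrix. All names and data are hypothetical.
import numpy as np

def recommend(interactions: np.ndarray, user: int, top_n: int = 3) -> list[int]:
    """interactions[u, i] = number of times user u interacted with item i."""
    # Cosine similarity between item columns.
    norms = np.linalg.norm(interactions, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    normalized = interactions / norms
    item_sim = normalized.T @ normalized          # (items x items)
    # Score items by their similarity to what the user already consumed.
    scores = item_sim @ interactions[user]
    scores[interactions[user] > 0] = -np.inf      # do not re-recommend seen items
    return list(np.argsort(scores)[::-1][:top_n])

if __name__ == "__main__":
    # Toy behavioral data: 4 users x 5 items.
    clicks = np.array([
        [3, 0, 1, 0, 0],
        [2, 0, 0, 1, 0],
        [0, 4, 0, 0, 2],
        [0, 1, 0, 0, 3],
    ], dtype=float)
    print(recommend(clicks, user=0))
```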

    KFC Server: interactive forecasting of protein interaction hot spots

    Get PDF
    The KFC Server is a web-based implementation of the KFC (Knowledge-based FADE and Contacts) model, a machine learning approach for the prediction of binding hot spots, or the subset of residues that account for most of a protein interface's binding free energy. The server facilitates the automated analysis of a user-submitted protein–protein or protein–DNA interface and the visualization of its hot spot predictions. For each residue in the interface, the KFC Server characterizes its local structural environment, compares that environment to the environments of experimentally determined hot spots, and predicts whether the interface residue is a hot spot. After the computational analysis, the user can visualize the results using an interactive job viewer that quickly highlights predicted hot spots and surrounding structural features within the protein structure. The KFC Server is accessible at http://kfc.mitchell-lab.org
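    As a rough illustration of the "featurize each interface residue, then classify it" workflow described above, the hedged sketch below trains a decision-tree classifier on invented per-residue features; the actual KFC features (FADE points, atomic contacts) and its trained model are not reproduced here.

```python
# Hedged sketch: the real KFC model uses its own structural features and trained
# models; this only illustrates the general shape of "featurize each interface
# residue, then classify it as hot spot or not" with made-up features and toy data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-residue features: [buried surface area, packing density, n_contacts]
X_train = np.array([[120.0, 0.8, 14],
                    [ 30.0, 0.4,  5],
                    [ 95.0, 0.7, 11],
                    [ 15.0, 0.3,  3]])
y_train = np.array([1, 0, 1, 0])          # 1 = experimentally determined hot spot

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Classify the residues of a newly submitted interface.
X_new = np.array([[110.0, 0.75, 12], [20.0, 0.35, 4]])
print(clf.predict(X_new))                 # e.g. [1 0] -> first residue predicted hot spot
```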

    Web-based eTutor for learning electrical circuit analysis

    Get PDF
    This paper discusses a web-based eTutor for learning electrical circuit analysis. The eTutor system components, mainly the user interface and the assessment model, are described. The system architecture developed provides a framework to support interactive sessions between the human and the machine, both for the case when the human is a student and the machine a tutor, and for the case when the roles of the human and the machine are swapped. To motivate the usefulness of the data gathered, some examples of interactive sessions are given, and models to capture both declarative and procedural knowledge during learning are discussed. A probabilistic assessment model is reviewed and future directions in the field of eTutors for electrical circuits are discussed.
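    The abstract only states that a probabilistic assessment model is reviewed, without giving its equations; as a generic illustration (an assumption, not necessarily the model used in the paper), a Bayesian-knowledge-tracing-style update of a student's mastery estimate from graded answers could look like this:

```python
# Illustration only: a Bayesian-knowledge-tracing-style update of the probability
# that a student has mastered a skill, given one observed answer. The paper's own
# probabilistic assessment model may differ; all parameter values are invented.
def update_mastery(p_mastery: float, correct: bool,
                   p_guess: float = 0.2, p_slip: float = 0.1,
                   p_learn: float = 0.15) -> float:
    if correct:
        posterior = (p_mastery * (1 - p_slip)) / (
            p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess)
    else:
        posterior = (p_mastery * p_slip) / (
            p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess))
    # Allow for learning between practice opportunities.
    return posterior + (1 - posterior) * p_learn

p = 0.3
for answer in [True, True, False, True]:   # graded responses from one session
    p = update_mastery(p, answer)
    print(f"P(mastery) = {p:.2f}")
```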

    A Web Based User Interface for Machine Learning Analysis of Health and Education Data

    Full text link
    The objective of this thesis is to develop a user-friendly web application that will be used to analyse data sets using various machine learning algorithms. The application design follows human–computer interaction design guidelines and principles to make a user-friendly interface [Shn03]. It uses the Linear Regression, Logistic Regression, and Backpropagation machine learning algorithms for prediction. The application is built using Java, the Play framework, Bootstrap, and the IntelliJ IDE. Java is used in the backend to create a model that maps the input to the output data based on any of the above learning algorithms, while the Play framework and Bootstrap are used to display content in the frontend. The Play framework is used because it is based on a web-friendly architecture; as a result, it uses predictable, minimal resources (CPU, memory, threads) for highly scalable applications. It is also developer-friendly: changes can be made in the code, and hitting the refresh button in the browser updates the interface. Bootstrap is used to style the web application; it adds responsiveness to the interface along with cross-browser-compatible designs, so the website is responsive and fits the screen size of the computer. Using this web application, users can predict a feature or the category of an entity in a data set. The user needs to submit a data set in which each row represents the attributes of an entity. Once the data is submitted, the application builds a model using the user-selected machine learning algorithm: logistic regression, linear regression, or backpropagation. After the model is developed, in the second stage of the application the user can submit the attributes of the entity whose category needs to be predicted. The predicted category is displayed on screen in the third stage of the application, and the interface shows the application's current active stage. The models are built using 80% of the submitted data set, and the remaining 20% is used to test the accuracy of the application. In this thesis, the prediction accuracy of each algorithm is tested using UCI breast cancer data sets. When tested on breast cancer data with 10 attributes, both Logistic Regression and Backpropagation gave 98.5% accuracy; when tested on breast cancer data with 31 attributes, Logistic Regression gave 92.85% accuracy and Backpropagation gave 94.64%.
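    As a sketch of the train/evaluate flow described above (80/20 split, logistic regression, then prediction for a new entity), the snippet below uses Python and scikit-learn rather than the thesis's Java/Play stack, with scikit-learn's bundled breast cancer data standing in for the exact UCI files; accuracies will therefore differ from those reported.

```python
# Sketch of the evaluation flow described above (80/20 split, logistic regression),
# shown with scikit-learn rather than the thesis's Java/Play stack, and with
# scikit-learn's bundled breast cancer data standing in for the UCI files.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)      # 80% train / 20% test

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

# Third stage: predict the category of a newly submitted entity (one row of attributes).
print("predicted class:", model.predict(X_test[:1])[0])
```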

    Unveiling the Veiled: Unmasking Fileless Malware through Memory Forensics and Machine Learning

    Get PDF
    In recent times, significant advancements in malware development have dramatically reshaped the threat landscape. The techniques for targeting a system have undergone a complete transformation, shifting from file-based to fileless malware. Fileless malware poses a significant cybersecurity threat, challenging traditional detection methods. This research introduces an approach that combines memory forensics and machine learning to effectively detect and mitigate fileless malware. By analyzing volatile memory and leveraging machine learning algorithms, our system automates detection. We employ virtual machines to capture memory snapshots and conduct thorough analysis using the Volatility framework. Among various algorithms, we have determined that the Random Forest algorithm is the most effective, achieving an overall accuracy of 93.33%. Specifically, it demonstrates a True Positive Rate (TPR) of 87.5% while maintaining a zero False Positive Rate (FPR) when applied to fileless malware obtained from the HatchingTriage, AnyRun, VirusShare, PolySwarm, and JoESandbox datasets. To enhance user interaction, a user-friendly graphical interface is provided, and scalability and processing capabilities are optimized through Amazon Web Services. Experimental evaluations demonstrate high accuracy and efficiency in detecting fileless malware. This framework contributes to the advancement of cybersecurity, providing practical tools for detecting evolving fileless malware threats.
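    The sketch below illustrates only the final classification step in hedged form: hypothetical per-snapshot features of the kind that might be derived from Volatility plugin output are fed to a Random Forest, and TPR/FPR are computed on a held-out split. The paper's actual feature set, datasets, and tuning are not reproduced.

```python
# Hedged sketch of the classification step only: invented features that could be
# extracted from Volatility output (column names are for illustration) are fed to
# a Random Forest, and TPR/FPR are computed on a held-out split.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Hypothetical per-snapshot features, e.g. counts from pslist/malfind/netscan plugins.
df = pd.DataFrame({
    "hidden_processes":   [0, 2, 0, 3, 1, 0, 4, 0],
    "injected_regions":   [0, 5, 1, 7, 4, 0, 6, 0],
    "suspicious_sockets": [1, 3, 0, 4, 2, 0, 5, 1],
    "label":              [0, 1, 0, 1, 1, 0, 1, 0],   # 1 = fileless malware present
})
X, y = df.drop(columns="label"), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"TPR = {tp / (tp + fn):.2f}, FPR = {fp / (fp + tn):.2f}")
```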

    DrugComb update: a more comprehensive drug sensitivity data repository and analysis portal

    Get PDF
    Combinatorial therapies that target multiple pathways have shown great promise for treating complex diseases. DrugComb (https://drugcomb.org/) is a web-based portal for the deposition and analysis of drug combination screening datasets. Since its first release, DrugComb has received continuous updates to the coverage of its data resources, as well as to the functionality of the web server, to improve the analysis, visualization and interpretation of drug combination screens. Here, we report significant updates to DrugComb, including: (i) manual curation and harmonization of more comprehensive drug combination and monotherapy screening data, not only for cancers but also for other diseases such as malaria and COVID-19; (ii) enhanced algorithms for assessing the sensitivity and synergy of drug combinations; (iii) network modelling tools to visualize the mechanisms of action of drugs or drug combinations for a given cancer sample; and (iv) state-of-the-art machine learning models to predict drug combination sensitivity and synergy. These improvements come with a more user-friendly graphical interface and faster database infrastructure, which make DrugComb the most comprehensive web-based resource for the study of drug sensitivities across multiple diseases.
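    As an illustration of the kind of synergy scoring such a portal supports, the snippet below computes the textbook Bliss excess for a single dose pair; this is the standard formula, not DrugComb's exact implementation or API.

```python
# Sketch of one standard synergy metric (Bliss excess) of the kind reported by
# drug combination portals; this is the textbook formula, not DrugComb's code.
def bliss_excess(effect_a: float, effect_b: float, effect_combo: float) -> float:
    """Effects are fractional inhibitions in [0, 1] for each monotherapy and the combination."""
    expected = effect_a + effect_b - effect_a * effect_b   # Bliss independence expectation
    return effect_combo - expected                         # > 0 suggests synergy

# Example: two drugs inhibiting 40% and 30% alone, 70% in combination.
print(f"Bliss excess: {bliss_excess(0.4, 0.3, 0.7):+.2f}")   # +0.12 -> mildly synergistic
```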

    Towards a Knowledge Graph Enhanced Automation and Collaboration Framework for Digital Twins

    Get PDF
    The Digital Twin (DT) provides a digital representation of a physical system and allows users to interactively study the physical processes of a real system via the digital representation, in different scenarios, in real time. The development of a DT is highly complex; it requires not only expertise from multiple disciplines but also the integration of often heterogeneous software components, e.g., simulations, machine learning, visualization, and user interface components, across distributed environments. This poster presents a Knowledge Graph-based ontological framework to boost automation and collaboration during the DT lifecycle stages. We implement our methods by developing a what-if analysis service for a DT of a wetland ecosystem and its automated deployment to the Amazon Web Services (AWS) cloud.
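    The poster's actual ontology is not detailed here, so the following is only a toy illustration of how digital-twin components and their dependencies could be expressed as a small RDF knowledge graph with rdflib; the namespace and component names are hypothetical.

```python
# Toy illustration only: the poster's ontology and AWS deployment are not described
# in detail, so this just shows how DT components and their dependencies could be
# modelled as a small RDF knowledge graph. All names are hypothetical.
from rdflib import Graph, Namespace, RDF, Literal

DT = Namespace("http://example.org/digitaltwin#")   # hypothetical namespace
g = Graph()
g.bind("dt", DT)

g.add((DT.WetlandTwin, RDF.type, DT.DigitalTwin))
g.add((DT.HydrologySim, RDF.type, DT.SimulationComponent))
g.add((DT.WhatIfService, RDF.type, DT.AnalysisService))
g.add((DT.WetlandTwin, DT.hasComponent, DT.HydrologySim))
g.add((DT.WhatIfService, DT.consumesOutputOf, DT.HydrologySim))
g.add((DT.WhatIfService, DT.deployedOn, Literal("AWS")))

print(g.serialize(format="turtle"))
```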

    Data Processing Engine (DPE): Data Analysis Tool for Particle Tracking and Mixed Radiation Field Characterization with Pixel Detectors Timepix

    Full text link
    Hybrid semiconductor pixelated detectors from the Timepix family are advanced detectors for online particle tracking, offering energy measurement and precise time stamping capabilities for particles of various types and energies. This inherent capability makes them highly suitable for various applications, including imaging, medical fields such as radiotherapy and particle therapy, space-based applications aboard satellites and the International Space Station, and industrial applications. The data generated by these detectors is complex, necessitating the development and deployment of various analytical techniques to extract essential information. For this purpose, and to aid the Timepix user community, the "Data Processing Engine" (DPE) was designed and developed as an advanced data processing tool explicitly for Timepix detectors. The functionality of the DPE is structured into three distinct processing levels: i) pre-processing, which involves clusterization and the application of necessary calibrations and corrections; ii) processing, which includes particle classification employing machine learning algorithms and the recognition of radiation fields; and iii) post-processing, in which various analyses are performed, such as directional analysis, coincidence analysis, frame analysis, Compton directional analysis, and the generation of physics products. The core of the DPE is supported by an extensive experimental database containing calibrations and reference radiation fields of typical environments, including protons, ions, electrons, gamma rays and X-rays, as well as thermal and fast neutrons. To enhance accessibility, the DPE is available through several user interface platforms: a command-line tool, an application programming interface, and a graphical user interface in the form of a web portal.
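    As a minimal illustration of the pre-processing level described above (clusterization of pixel hits in a Timepix frame), the sketch below applies connected-component labelling to a toy frame; it is not DPE code, and calibration, classification, and post-processing are omitted.

```python
# Minimal sketch of the pre-processing step described above: clusterization of
# pixel hits in a (toy) Timepix frame via connected-component labelling.
# This is not DPE code; calibration, classification and post-processing are omitted.
import numpy as np
from scipy import ndimage

# Toy 8x8 frame of per-pixel deposited energy (keV); zeros are empty pixels.
frame = np.zeros((8, 8))
frame[1, 1:4] = [5.0, 12.0, 7.0]        # a track-like cluster
frame[5:7, 5:7] = 3.0                   # a small blob-like cluster

labels, n_clusters = ndimage.label(frame > 0)
for cluster_id in range(1, n_clusters + 1):
    mask = labels == cluster_id
    print(f"cluster {cluster_id}: size={mask.sum()} px, "
          f"energy={frame[mask].sum():.1f} keV")
```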

    VEAP: a visualisation engine and analyzer for preSS#

    Get PDF
    Computer science courses have been shown to have a low rate of student retention. There are many possible reasons for this, and our research group has had considerable success in pinpointing the factors that influence outcomes when learning to program. The earlier we are able to make these predictions, the earlier a teacher can intervene and provide help to an at-risk student before they fail and/or drop out. PreSS (Predict Student Success) is a semi-automated machine learning system, developed between 2002 and 2006, that can predict the performance of students on an introductory programming module with 80% accuracy after minimal programming exposure. Between 2013 and 2015, a fully automated web-based system known as PreSS# was developed, which replicates the original system but provides a streamlined user interface, an easy acquisition process, automatic modeling, and reporting. Currently, the reporting component of PreSS# outputs a value that indicates whether a student is a "weak" or "strong" programmer, along with a measure of confidence in the prediction. This paper will discuss the development of VEAP: a Visualisation Engine and Analyser for PreSS#. This software provides a comprehensive data visualisation and user interface that will allow teachers to view data gathered and processed about institutions, classes, and individual students, and provides access to further user-defined analysis, allowing a teacher to view how an intervention could influence a student's predicted outcome.
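    As a hedged sketch of how a "weak"/"strong" label plus a confidence measure can be produced (PreSS#'s actual features and model are not reproduced here), a classifier's predicted class probabilities can be used directly:

```python
# Hedged sketch only: PreSS#'s actual features and model are not reproduced here.
# This shows the general idea of emitting a "weak"/"strong" label together with a
# confidence value, using a classifier's predicted class probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented early-of-term features: [lab score %, hours practised, prior programming (0/1)].
X = np.array([[35, 2, 0], [80, 9, 1], [50, 4, 0], [90, 12, 1], [40, 3, 0], [75, 8, 1]])
y = np.array([0, 1, 0, 1, 0, 1])        # 0 = "weak", 1 = "strong"

model = LogisticRegression(max_iter=1000).fit(X, y)

new_student = np.array([[60, 5, 0]])
proba = model.predict_proba(new_student)[0]
label = "strong" if proba[1] >= 0.5 else "weak"
print(f"prediction: {label} (confidence {max(proba):.0%})")
```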