10 research outputs found

    Evaluating Complex Entity Knowledge Propagation for Knowledge Editing in LLMs

    No full text
    In today’s world, where information keeps growing rapidly and changing constantly, language models play a crucial role in making our lives easier across different fields. However, it is tough to keep these models updated with all the new data while making sure they stay accurate and relevant. To tackle this challenge, our study proposes an innovative approach to facilitate the propagation of complex entity knowledge within language models through extensive triplet representation. Using a specially curated dataset (CTR-KE) derived from reliable sources like Wikipedia and Wikidata, the research assesses the efficacy of editing methods in handling intricate relationships between entities across multiple tiers of information. By employing a comprehensive triplet representation strategy, the study aims to enrich contextual understanding while mitigating the risks associated with distorting or forgetting critical information. The study evaluates its proposed methodology using various evaluation metrics and four distinct editing methods across three diverse language models (GPT2-XL, GPT-J, and Llama-2-7b). The results indicate the superiority of mass-editing memory in a transformer (MEMIT) and in-context learning for knowledge editing (IKE) in efficiently executing multiple updates within the triplet representation framework. This research signifies a promising pathway for deeper exploration of data representation for knowledge editing within large language models, and for improved contextual understanding to facilitate continual learning.
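The triplet-based edit the abstract describes can be illustrated with a minimal sketch. The data layout and helper below are hypothetical (not the CTR-KE schema): a fact is a (subject, relation, object) triplet, and an edit replaces the object of a matching (subject, relation) pair.

```python
def apply_edit(triplets, edit):
    """Replace the object of the triplet matching (subject, relation)."""
    s, r, new_o = edit
    return [(ts, tr, new_o) if (ts, tr) == (s, r) else (ts, tr, to)
            for (ts, tr, to) in triplets]

facts = [
    ("Eiffel Tower", "located_in", "Paris"),
    ("Paris", "capital_of", "France"),
]
# A counterfactual edit request: move the Eiffel Tower to Rome.
edited = apply_edit(facts, ("Eiffel Tower", "located_in", "Rome"))
print(edited[0])  # ('Eiffel Tower', 'located_in', 'Rome')
```

Propagation then means that dependent triplets (here, facts about Rome rather than Paris) must also surface when the model is queried about the edited entity.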

    A Context-Aware Location Recommendation System for Tourists Using Hierarchical LSTM Model

    No full text
    The significance of contextual data has been recognized by analysts and specialists in numerous disciplines such as personalization, information retrieval, ubiquitous and mobile computing, data mining, and management. While considerable research has already been performed in the area of recommender systems, the vast majority of existing approaches focus on recommending the most relevant items to customers, usually neglecting additional contextual information such as time, location, weather, or the popularity of different locations. Therefore, we proposed a context-enriched hierarchical model based on deep long short-term memory (LSTM) networks. The proposed model had two levels of hierarchy, each comprising a deep LSTM network with a different task. At the first level, the LSTM learned from user travel history and predicted next-location probabilities. A contextual learning unit operated between the two levels; it extracted the maximum possible contexts related to a location, the user, and the environment, such as weather, climate, and risks, and also estimated other effective parameters such as the popularity of a location. To avoid feature congestion, XGBoost was used to rank feature importance, and features with no importance were discarded. At the second level, another LSTM network learned these contextual features embedded with the location probabilities and produced the top-ranked places. The proposed approach achieved the highest accuracy of 97.2%, followed by the gated recurrent unit (GRU) (96.4%) and bidirectional LSTM (94.2%). We also performed experiments to find the optimal size of travel history for effective recommendations.
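Both hierarchy levels are built from LSTM cells. As a minimal sketch of the underlying recurrence (scalar toy weights, not the paper's trained model), a single cell combines forget, input, and output gates to carry state across a sequence such as a travel history:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step for scalar inputs; w maps gate name -> (wx, wh, b)."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    c = f * c_prev + i * g          # new cell state
    h = o * math.tanh(c)            # new hidden state
    return h, c

w = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for x in [0.2, 0.7, 0.1]:  # a toy encoded "travel history" sequence
    h, c = lstm_step(x, h, c, w)
print(-1.0 < h < 1.0)  # hidden state stays bounded by the tanh/sigmoid gating
```

In the model described above, the first-level network's output probabilities, concatenated with the selected contextual features, would become the input sequence of the second-level network.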

    Topic Predictions and Optimized Recommendation Mechanism Based on Integrated Topic Modeling and Deep Neural Networks in Crowdfunding Platforms

    No full text
    The accelerated growth of internet users and applications, primarily e-business, has accustomed people to writing comments and reviews about the products they receive. These reviews are highly influential in shaping customers’ decisions. However, in crowdfunding, where investors finance innovative ideas in exchange for rewards or products, the comments of investors are often ignored. These comments can play a significant role in helping crowdfunding platforms combat fraudulent activities. We take advantage of language modeling techniques and merge them with neural networks to identify hidden discussion patterns in the comments. Our objective is to design a language-modeling-based neural network architecture in which a recurrent neural network (RNN) with long short-term memory (LSTM) is used to predict discussion trends, i.e., towards scam or non-scam. The LSTM layers are fed with the latent topic distribution learned from a pre-trained latent Dirichlet allocation (LDA) model. To optimize the recommendations, we used particle swarm optimization (PSO) as a baseline algorithm; this module helps investors find secure projects to invest in (with the highest chances of delivery) within their preferred categories. We used prediction accuracy, the optimal number of identified topics, and the number of epochs as performance metrics, and compared our results with simple neural networks (NNs) and NN-LDA. The strengths of both integrated models suggest that the proposed model can play a substantial role in better understanding crowdfunding comments.
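The pipeline's key input is a per-comment topic distribution. As a rough illustration only (a hypothetical topic-word weight table standing in for a trained LDA model, not the paper's implementation), a comment can be mapped to topic proportions by summing word weights per topic and normalizing:

```python
# Hypothetical topic-word weights standing in for a trained LDA model.
topic_word = {
    "scam":     {"refund": 2.0, "delay": 1.5, "fake": 3.0},
    "delivery": {"shipped": 2.5, "received": 2.0, "refund": 0.5},
}

def topic_distribution(tokens, topic_word):
    """Score each topic by summed word weights, then normalize to proportions."""
    scores = {t: sum(w.get(tok, 0.0) for tok in tokens)
              for t, w in topic_word.items()}
    total = sum(scores.values()) or 1.0
    return {t: s / total for t, s in scores.items()}

dist = topic_distribution(["refund", "delay", "fake"], topic_word)
print(abs(sum(dist.values()) - 1.0) < 1e-9)  # a valid probability distribution
```

A sequence of such per-comment distributions is what the LSTM layers would consume to predict whether the discussion trends towards scam or non-scam.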

    Effectiveness of Machine Learning Approaches Towards Credibility Assessment of Crowdfunding Projects for Reliable Recommendations

    No full text
    Recommendation systems aim to decipher user interests, preferences, and behavioral patterns automatically. However, making trustworthy and reliable recommendations becomes trickier when users’ hard-earned money is at risk, so the credibility of the recommendation is of paramount importance in crowdfunding project recommendation. This research work devises a hybrid machine learning-based approach for credible crowdfunding project recommendations by incorporating backers’ sentiments and other influential features. The proposed model has four modules: a feature extraction module, a hybrid LDA-LSTM (latent Dirichlet allocation and long short-term memory) based latent topic evaluation module, a credibility formulation module, and a recommendation module. The credibility analysis correlates the project creator’s proficiency, reviewers’ sentiments, and their influence to estimate a project’s authenticity level, which makes our model robust to unauthentic and untrustworthy projects and profiles. The recommendation module selects the projects with the highest credibility scores that match the user’s interests and recommends them. The proposed method harnesses numeric data and the sentiment expressed in comments, backers’ preferences, profile data, and the creator’s credibility for quantitative examination of several alternative projects. The evaluation shows that credibility assessment based on the hybrid machine learning approach yields better results (98% accuracy) than existing recommendation models. We also evaluated our credibility assessment technique on different categories of projects, i.e., suspended, canceled, delivered, and never-delivered projects, and achieved satisfactory outcomes: 93%, 84%, 58%, and 93% of projects, respectively, were accurately classified into the desired credibility range.
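The credibility formulation combines creator proficiency, reviewer sentiment, and reviewer influence. The exact formula is not given in the abstract, so the weighted blend below is purely a hypothetical sketch (invented weights, all inputs assumed pre-scaled to [0, 1]):

```python
def credibility(creator_proficiency, avg_sentiment, reviewer_influence,
                weights=(0.4, 0.4, 0.2)):
    """Hypothetical weighted blend of the three credibility signals."""
    w1, w2, w3 = weights
    return (w1 * creator_proficiency
            + w2 * avg_sentiment
            + w3 * reviewer_influence)

# A project with a strong creator, mostly positive reviews, and
# moderately influential reviewers.
score = credibility(0.9, 0.75, 0.6)
print(round(score, 2))  # 0.78
```

A recommendation module along the lines described would then rank the candidate projects in a user's preferred categories by such a score and return the top ones.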

    Backers Beware: Characteristics and Detection of Fraudulent Crowdfunding Campaigns

    No full text
    Crowdfunding has seen an enormous rise, becoming a new alternative funding source for emerging companies or new startups in recent years. As crowdfunding prevails, it is also under substantial risk of the occurrence of fraud. Though a growing number of articles indicate that crowdfunding scams are a new imminent threat to investors, little is known about them, primarily due to the lack of measurement data collected from real scam cases. This paper fills the gap by collecting, labeling, and analyzing publicly available data of a hundred fraudulent campaigns on a crowdfunding platform. In order to find and understand distinguishing characteristics of crowdfunding scams, we propose to use a broad range of traits, including project-based traits, project creator-based ones, and content-based ones such as linguistic cues and named entity recognition features. We then propose to use the feature selection method called forward stepwise logistic regression, through which 17 key discriminating features (including six original and hitherto unused ones) of scam campaigns are discovered. Based on the selected 17 key features, we present and discuss our findings and insights on distinguishing characteristics of crowdfunding scams, and build our scam detection model with 87.3% accuracy. We also explore the feasibility of early scam detection, building a model with 70.2% classification accuracy right at the time of project launch. We discuss which features from which sections are more helpful for early scam detection on day 0 and thereafter.
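Forward stepwise selection greedily adds, at each round, the feature that most improves a fitted model's score, stopping when no remaining feature helps. The sketch below shows the selection loop only; the toy scorer is a hypothetical stand-in for the cross-validated accuracy of a logistic regression refit on each candidate subset:

```python
def forward_stepwise(features, score, max_features=None):
    """Greedily add the best-scoring feature; stop when none improves the score."""
    selected, best = [], score([])
    remaining = list(features)
    while remaining and (max_features is None or len(selected) < max_features):
        gains = [(score(selected + [f]), f) for f in remaining]
        top_score, top_f = max(gains)
        if top_score <= best:          # no remaining feature helps
            break
        selected.append(top_f)
        remaining.remove(top_f)
        best = top_score
    return selected

# Toy scorer: hypothetical per-feature gains; unknown features cost a little.
useful = {"goal_ratio": 0.3, "creator_age": 0.2, "entity_count": 0.1}
toy_score = lambda feats: sum(useful.get(f, -0.05) for f in feats)
print(forward_stepwise(["goal_ratio", "noise", "creator_age"], toy_score))
# ['goal_ratio', 'creator_age']
```

With a real scorer, the same loop would surface the kind of 17-feature subset the paper reports.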

    A Comprehensive Predictive-Learning Framework for Optimal Scheduling and Control of Smart Home Appliances Based on User and Appliance Classification

    No full text
    Energy consumption is increasing daily, and with that comes a continuous increase in energy costs. Predicting future energy consumption and building an effective energy management system for smart homes has become essential for many industrialists to solve the problem of energy wastage. Machine learning has shown significant outcomes in the field of energy management systems. This paper presents a comprehensive predictive-learning based framework for smart home energy management systems. We propose five modules: classification, prediction, optimization, scheduling, and controllers. In the classification module, we classify the category of users and appliances by using k-means clustering and support vector machine based classification. In the prediction module, we predict the future energy consumption and energy cost for each user category using long short-term memory (LSTM). We define objective functions for optimization and use grey wolf optimization and particle swarm optimization for scheduling appliances. For each case, we give priority to user preferences and indoor and outdoor environmental conditions. We define control rules to control the usage of appliances according to the schedule while prioritizing user preferences and minimizing energy consumption and cost. We perform experiments to evaluate the performance of our proposed methodology, and the results show that our proposed approach significantly reduces energy cost while providing an optimized solution for energy consumption that prioritizes user preferences and considers both indoor and outdoor environmental factors.
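The scheduling idea can be illustrated with a much simpler stand-in than the metaheuristics used in the paper: given a time-of-use tariff, place each shiftable load in its cheapest allowed hour. The tariff and appliance data below are invented for illustration:

```python
# Hypothetical time-of-use tariff: peak price 5-9 pm, off-peak otherwise.
tariff = [0.30 if 17 <= h <= 21 else 0.12 for h in range(24)]

def schedule(appliances, tariff):
    """Place each shiftable load in its cheapest user-allowed hour."""
    plan = {}
    for name, (kwh, allowed_hours) in appliances.items():
        best_hour = min(allowed_hours, key=lambda h: tariff[h])
        plan[name] = (best_hour, kwh * tariff[best_hour])  # (hour, cost)
    return plan

# kWh per run and the hours the user permits (a stand-in for preferences).
appliances = {"washer": (1.5, range(8, 23)), "dishwasher": (1.0, range(18, 24))}
plan = schedule(appliances, tariff)
print(plan["washer"][0] < 17 or plan["washer"][0] > 21)  # avoids the peak window
```

The paper's grey wolf and particle swarm optimizers address the harder joint problem, where appliance runs interact through comfort constraints and aggregate load, but the objective (cost within user-allowed windows) has this shape.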

    PSO Based Optimized Ensemble Learning and Feature Selection Approach for Efficient Energy Forecast

    No full text
    Swarm intelligence techniques have been applied with considerable success to a broad range of irregular and interdisciplinary problems. However, their impact on ensemble models is considerably unexplored. This study proposes an optimized-ensemble model for smart home energy consumption management based on ensemble learning and particle swarm optimization (PSO). The proposed model exploits PSO in two distinct ways: first, PSO-based feature selection is performed to select the essential features from the raw dataset. Secondly, with larger datasets and wide-ranging problems, manually tuning hyper-parameters in a trial-and-error manner can become a cumbersome task; therefore, PSO was used as an optimization technique to fine-tune the hyper-parameters of the selected ensemble model. A hybrid ensemble model is built by using combinations of five different baseline models. The hyper-parameters of each combination model were optimized using PSO, followed by training on different random samples. We compared our proposed model with our previously proposed ANN-PSO model and a few other state-of-the-art models. The results show that optimized-ensemble learning models outperform individual models and the ANN-PSO model, minimizing RMSE to 6.05 from 9.63 and increasing the prediction accuracy to 95.6%. Moreover, our results show that random sampling can help improve prediction results compared to the ANN-PSO model, from 92.3% to around 96%.
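Both uses of PSO in the abstract rest on the same core loop: each particle is pulled towards its personal best and the swarm's global best. A minimal sketch on a toy objective (illustrative coefficients; the paper applies this to feature masks and hyper-parameters, not a sphere function):

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm minimizing f over [-5, 5]^dim."""
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    gbest = min(pbest, key=f)[:]                # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda p: sum(x * x for x in p)
best = pso(sphere)
print(sphere(best) < 1e-3)  # converges near the optimum at the origin
```

For hyper-parameter tuning, `f` would evaluate validation RMSE of the ensemble at the decoded particle position; for feature selection, each dimension would be thresholded into an include/exclude mask.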

    A Feature Selection-Based Predictive-Learning Framework for Optimal Actuator Control in Smart Homes

    No full text
    In today’s world, smart buildings are considered an overarching system that automates a building’s complex operations and increases security while reducing environmental impact. One of the primary goals of building management systems is to promote sustainable and efficient use of energy, requiring coherent task management and execution of control commands for actuators. This paper proposes a predictive-learning framework based on contextual feature selection and an optimal actuator control mechanism for minimizing energy consumption in smart buildings. We aim to assess multiple parameters and select the most relevant contextual features that would optimize energy consumption. We have implemented an artificial neural network-based particle swarm optimization (ANN-PSO) algorithm for predictive learning to train the framework on feature importance. Based on the relevance of attributes, our model was also capable of re-adding features. The extracted features are then applied as input parameters for the training of long short-term memory (LSTM) and the optimal control module. We have proposed an objective function using a velocity boost-particle swarm optimization (VB-PSO) algorithm that reduces energy cost for optimal control. We then generated and defined the control tasks based on the fuzzy rule set and the optimal values obtained from VB-PSO. In the evaluation section, we compared our model’s performance with and without feature selection using the root mean square error (RMSE) metric. This paper also presents how optimal control can reduce energy cost and improve performance through fewer learning cycles and decreased error rates.
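The final stage maps predicted conditions and optimized set-points to actuator commands via rules. The sketch below uses crisp rules as a stand-in for the paper's fuzzy rule set, with invented thresholds and actuator names; rule order acts as priority:

```python
# Hypothetical rule set: (condition on predicted state, (actuator, command)).
rules = [
    (lambda s: s["temp"] > 26 and s["occupied"], ("ac", "on")),
    (lambda s: s["temp"] <= 26,                  ("ac", "off")),
    (lambda s: s["lux"] < 200 and s["occupied"], ("lights", "on")),
    (lambda s: not s["occupied"],                ("lights", "off")),
]

def control_tasks(state, rules):
    """Return the first matching command per actuator (rule order = priority)."""
    tasks = {}
    for cond, (actuator, cmd) in rules:
        if actuator not in tasks and cond(state):
            tasks[actuator] = cmd
    return tasks

# A warm, dim, occupied room as predicted by the LSTM stage.
print(control_tasks({"temp": 28.0, "lux": 150, "occupied": True}, rules))
```

A fuzzy version would replace the boolean conditions with membership degrees and pick the command with the highest aggregated activation, but the task-generation flow is the same.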

    SARS-CoV-2 vaccination modelling for safe surgery to save lives: data from an international prospective cohort study

    No full text
    Background: Preoperative SARS-CoV-2 vaccination could support safer elective surgery. Vaccine numbers are limited, so this study aimed to inform their prioritization by modelling. Methods: The primary outcome was the number needed to vaccinate (NNV) to prevent one COVID-19-related death in 1 year. NNVs were based on postoperative SARS-CoV-2 rates and mortality in an international cohort study (surgical patients), and community SARS-CoV-2 incidence and case fatality data (general population). NNV estimates were stratified by age (18-49, 50-69, 70 or more years) and type of surgery. Best- and worst-case scenarios were used to describe uncertainty. Results: NNVs were more favourable in surgical patients than the general population. The most favourable NNVs were in patients aged 70 years or more needing cancer surgery (351; best case 196, worst case 816) or non-cancer surgery (733; best case 407, worst case 1664). Both exceeded the NNV in the general population (1840; best case 1196, worst case 3066). NNVs for surgical patients remained favourable at a range of SARS-CoV-2 incidence rates in sensitivity analysis modelling. Globally, prioritizing preoperative vaccination of patients needing elective surgery ahead of the general population could prevent an additional 58 687 (best case 115 007, worst case 20 177) COVID-19-related deaths in 1 year. Conclusion: As global roll out of SARS-CoV-2 vaccination proceeds, patients needing elective surgery should be prioritized ahead of the general population.
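An NNV is the reciprocal of the absolute risk reduction achieved by vaccination. The sketch below shows that arithmetic with wholly illustrative inputs; it is not the study's model, which stratifies real cohort and community data by age and surgery type:

```python
def nnv(infection_rate, case_fatality, vaccine_efficacy):
    """Number needed to vaccinate to prevent one death over the period:
    1 / (infection risk x case fatality x efficacy) - an illustrative
    simplification of an absolute-risk-reduction calculation."""
    deaths_averted_per_person = infection_rate * case_fatality * vaccine_efficacy
    return 1.0 / deaths_averted_per_person

# Hypothetical inputs for a high-risk elective-surgery cohort:
# 1.5% postoperative infection risk, 25% case fatality, 80% efficacy.
print(round(nnv(infection_rate=0.015, case_fatality=0.25, vaccine_efficacy=0.8)))
# 333
```

Because postoperative infection and death rates exceed community rates, the denominator is larger for surgical patients, which is why their NNVs come out lower (more favourable) than the general population's.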