76 research outputs found

    Time Aware Knowledge Extraction for Microblog Summarization on Twitter

    Microblogging services like Twitter and Facebook collect millions of user-generated posts every moment about trending news, occurring events, and so on. Nevertheless, finding information of interest among this huge amount of posts, which are often noisy and redundant, is a real challenge. In general, social media analytics services have attracted increasing attention from both research and industry. Specifically, the dynamic context of microblogging requires managing not only the meaning of information but also the evolution of knowledge over the timeline. This work defines the Time Aware Knowledge Extraction (briefly TAKE) methodology, which relies on a temporal extension of Fuzzy Formal Concept Analysis. In particular, a microblog summarization algorithm has been defined that filters the concepts organized by TAKE in a time-dependent hierarchy. The algorithm addresses topic-based summarization on Twitter. Besides considering the timing of the concepts, another distinguishing feature of the proposed microblog summarization framework is the possibility of producing more or less detailed summaries, according to the user's needs, with good levels of quality and completeness, as highlighted in the experimental results. Comment: 33 pages, 10 figures
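
    To make the idea concrete, here is a minimal, hypothetical sketch of time-aware concept extraction in the spirit of TAKE: a fuzzy tweet-term context is closed into concepts, and each concept is ranked by an exponential recency weight. The toy tweets, the membership threshold theta and the decay rate lam are illustrative assumptions, not the paper's actual algorithm.

```python
# Hedged sketch: fuzzy tweet/term context -> concepts ranked by recency.
# Data, threshold and decay rate are illustrative assumptions.
from itertools import combinations
import math

# Fuzzy context: tweet -> {term: membership in [0, 1]} (e.g., normalized TF)
context = {
    "t1": {"flood": 0.9, "rescue": 0.7},
    "t2": {"flood": 0.8, "rescue": 0.2, "storm": 0.6},
    "t3": {"storm": 0.9, "rescue": 0.8},
}
timestamps = {"t1": 0.0, "t2": 2.0, "t3": 5.0}  # hours since topic onset
terms = sorted({w for tw in context.values() for w in tw})

def extent(intent, theta=0.5):
    """Tweets whose membership is >= theta for every term in the intent."""
    return {t for t, tw in context.items()
            if all(tw.get(w, 0.0) >= theta for w in intent)}

def recency_weight(tweets, now=6.0, lam=0.3):
    """Exponential time decay: recently active concepts score higher."""
    if not tweets:
        return 0.0
    return sum(math.exp(-lam * (now - timestamps[t])) for t in tweets) / len(tweets)

# Enumerate candidate concepts (intent, extent) and rank them by recency.
concepts = []
for r in range(1, len(terms) + 1):
    for intent in combinations(terms, r):
        ext = extent(intent)
        if ext:
            concepts.append((intent, ext, recency_weight(ext)))

for intent, ext, w in sorted(concepts, key=lambda c: -c[2]):
    print(intent, sorted(ext), round(w, 3))
```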

    NSL-BP: A Meta Classifier Model Based Prediction of Amazon Product Reviews

    In machine learning, product rating prediction based on the semantic analysis of consumers' reviews is a relevant topic. Amazon is one of the most popular online retailers, with millions of customers purchasing and reviewing products. In the literature, many research projects address the rating prediction of a given review. In this research project, we introduce a novel approach that enhances the accuracy of machine-learning-based rating prediction by processing the review text. We trained our model using several methods, and we propose a combined model to predict the ratings of products corresponding to a given review. First, using k-means and LDA, we cluster the products and topics, so that products and reviews of the same kind are grouped together and rating prediction becomes easier. We trained low, neutral, and high models based on the clusters and topics of products. Then, adopting a stacking ensemble model, we combine Naïve Bayes, Logistic Regression, and SVM to predict the ratings, merging these models into a two-level stack. We call this newly introduced model the NSL model and compare its prediction performance with state-of-the-art methods
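
    As a rough illustration of the pipeline described above, the following hedged sketch clusters reviews via LDA topics and k-means, then stacks Naïve Bayes, Logistic Regression and SVM under a logistic meta-classifier with scikit-learn. The toy corpus and all hyperparameters are assumptions; the paper's per-cluster training is collapsed into a single stack for brevity.

```python
# Hedged sketch of an NSL-style two-level stack; data and settings are toys.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import StackingClassifier

reviews = ["great product, works perfectly", "terrible, broke after a day",
           "okay value for the price", "excellent quality, highly recommend",
           "awful experience, do not buy", "decent but shipping was slow"] * 5
ratings = ["high", "low", "neutral", "high", "low", "neutral"] * 5

# Step 1: extract topics (LDA) and cluster the reviews on them (k-means).
counts = CountVectorizer().fit_transform(reviews)
topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(topics)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])

# Step 2: two-level stack (NB + LR + SVM -> logistic meta-classifier).
# The full approach would train one stack per cluster; one stack shown here.
X = TfidfVectorizer().fit_transform(reviews)
stack = StackingClassifier(
    estimators=[("nb", MultinomialNB()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("svm", LinearSVC())],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X, ratings)
print(stack.predict(X[:3]))
```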

    Drift-Aware Methodology for Anomaly Detection in Smart Grid

    Energy efficiency and sustainability are important factors to address in the context of smart cities. In this sense, smart metering and nonintrusive load monitoring play a crucial role in fighting energy theft and in optimizing the energy consumption of homes, buildings, cities, and so forth. The estimated number of smart meters will exceed 800 million by 2020. By providing near real-time data about power consumption, smart meters can be used to analyze electricity usage trends and to point out anomalies, guaranteeing companies' safety and avoiding energy waste. In the literature, there are many proposals approaching the problem of anomaly detection. Most of them are limited because they lack context and time awareness, and their false positive rate is affected by changes in consumer habits. This research work focuses on the need for an anomaly detection method capable of facing concept drift, for instance, family structure changes, a house becoming a second residence, and so forth. The proposed methodology adopts a long short-term memory (LSTM) network in order to profile and forecast the consumers' behavior based on their recent past consumption. The continuous monitoring of the consumption prediction errors allows us to distinguish between possible anomalies and changes (drifts) in normal behavior, which correspond to different error motifs. The experimental results demonstrate the suitability of the proposed framework by pointing out an anomaly in near real time after a training period of one week
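
    The following minimal sketch illustrates the error-monitoring idea: a forecaster predicts each hour's consumption, an isolated large error is flagged as an anomaly, and a sustained error shift is treated as drift that calls for re-profiling. A naive same-hour-yesterday predictor stands in for the paper's LSTM, and the injected events, thresholds and window size are illustrative assumptions.

```python
# Hedged sketch: anomaly vs. drift discrimination from prediction errors.
import numpy as np

rng = np.random.default_rng(0)
daily = np.tile(np.sin(np.linspace(0, 2 * np.pi, 24)) + 2, 30)  # 30 days, hourly
daily[300] += 4.0          # injected anomaly (e.g., a theft-like spike)
daily[500:] += 1.5         # injected drift (e.g., new family member)

def forecast(series, t):
    """Stand-in predictor: same hour, previous day (the paper uses an LSTM)."""
    return series[t - 24]

errors = np.array([abs(daily[t] - forecast(daily, t)) for t in range(24, len(daily))])

window, spike_thr, drift_thr = 24, 2.0, 0.5
for t in range(window, len(errors)):
    recent = errors[t - window:t]
    if errors[t] > spike_thr and recent.mean() < drift_thr:
        # Note: a spike echoes one day later through the naive forecaster.
        print(f"hour {t + 24}: ANOMALY (isolated large error)")
    elif recent.mean() > drift_thr:
        print(f"hour {t + 24}: DRIFT suspected (sustained shift) -> re-profile")
        break  # stop after the first drift alarm for brevity
```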

    Imputation of Rainfall Data Using the Sine Cosine Function Fitting Neural Network

    Missing rainfall data reduce the quality of hydrological data analysis because they are an essential input for hydrological modeling. Much research has focused on rainfall data imputation. However, the compatibility of precipitation (rainfall) and non-precipitation (meteorological) input data has received less attention. First, we propose a novel pre-processing mechanism for non-precipitation data using principal component analysis (PCA). Before the imputation, PCA is used to extract the most relevant features from the meteorological data. The final output of the PCA is combined with the rainfall data from the nearest-neighbor gauging stations and then used as the input to the neural network for missing data imputation. Second, a sine cosine algorithm is presented to optimize the neural network for infilling the missing rainfall data. The proposed sine cosine function fitting neural network (SC-FITNET) was compared with the sine cosine feedforward neural network (SC-FFNN), feedforward neural network (FFNN) and long short-term memory (LSTM) approaches. The results showed that SC-FITNET outperformed the LSTM, SC-FFNN and FFNN imputations in terms of mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (R), with an average accuracy of 90.9%. This study revealed that as the percentage of missingness increased, the precision of the four imputation methods decreased. In addition, this study also revealed that PCA has potential for pre-processing meteorological data into an understandable format for missing data imputation
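
    As a hedged illustration of the optimization step, the sketch below uses the sine cosine algorithm (sin/cos-modulated moves of candidate solutions toward the best one found so far, with a linearly shrinking amplitude r1) to fit the weights of a tiny one-hidden-layer network on synthetic data. The network size, SCA settings and data are assumptions, not the SC-FITNET configuration.

```python
# Hedged sketch: sine cosine algorithm (SCA) training a tiny neural network.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (80, 3))        # e.g., PCA features + neighbor rainfall
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]    # synthetic target (missing rainfall)

H = 5                                   # hidden units
dim = 3 * H + H + H + 1                 # W1, b1, W2, b2 flattened

def mse(w):
    W1 = w[:3 * H].reshape(3, H); b1 = w[3 * H:3 * H + H]
    W2 = w[3 * H + H:3 * H + 2 * H]; b2 = w[-1]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

# SCA (Mirjalili, 2016): amplitude r1 shrinks over time; each dimension
# takes a sin- or cos-shaped step toward the best position found so far.
pop, T, a = 30, 300, 2.0
agents = rng.uniform(-1, 1, (pop, dim))
best = min(agents, key=mse).copy()
for t in range(T):
    r1 = a - t * a / T
    for i in range(pop):
        r2 = rng.uniform(0, 2 * np.pi, dim)
        r3 = rng.uniform(0, 2, dim)
        step = np.abs(r3 * best - agents[i])
        move = r1 * np.where(rng.random(dim) < 0.5, np.sin(r2), np.cos(r2)) * step
        agents[i] = np.clip(agents[i] + move, -5, 5)
        if mse(agents[i]) < mse(best):
            best = agents[i].copy()

print("final MSE:", round(mse(best), 4))
```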

    A sensitivity analysis on the influence of the external constraints on the dynamic behaviour of a low pollutant emissions aircraft combustor-rig

    The need to reduce pollutant emissions leads engineers to design new aeronautic combustors characterized by lean burn at relatively low temperatures. This requirement can easily cause flame instability phenomena and consequent pressure pulsations which may seriously damage the combustor's structure and/or compromise its fatigue life. Hence the need to study the combustor's structural dynamics and the interaction between elastic, thermal and acoustic phenomena. The finite element method represents a widely used and fairly reliable tool to address these studies; on the other hand, the idealization process may lead to results quite far from reality when oversimplifying assumptions are made. Constraint modelling represents a key issue for all dynamic FE analyses; a wrong simulation of the constraints may indeed compromise an entire analysis, even one running on a very accurate, mesh-refined structural model. In this paper, a probabilistic approach to characterize the influence of external constraints on the modal behaviour of an aircraft combustor-rig is presented. The finite element model was first validated by comparing numerical and experimental results for the free-free condition (no constraints). Once the model was validated, the effect of constraint elasticity on natural frequencies was investigated by means of a probabilistic design simulation (PDS); using a specific tool developed in the ANSYS® software, a preliminary statistical analysis was performed via the Monte Carlo Simulation (MCS) method. The results were then correlated with the experimental ones via the Response Surface Method (RSM)
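
    To make the probabilistic step concrete, here is a hypothetical Monte Carlo sketch: the stiffness of an external constraint is sampled around its nominal value and the resulting natural frequencies of a toy 2-DOF spring-mass system, standing in for the combustor-rig FE model, are collected. Masses, stiffnesses and the sampling distribution are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo study of constraint elasticity vs. eigenfrequency.
import numpy as np

rng = np.random.default_rng(42)
m1, m2, k12 = 2.0, 1.0, 5.0e4       # masses [kg], coupling stiffness [N/m]

def natural_frequencies(k_constraint):
    """Eigenfrequencies [Hz] of a grounded 2-DOF chain for a given
    constraint (ground spring) stiffness."""
    K = np.array([[k_constraint + k12, -k12],
                  [-k12,                k12]])
    Ms = np.diag(1.0 / np.sqrt([m1, m2]))      # M^(-1/2), M is diagonal
    eigvals = np.linalg.eigvalsh(Ms @ K @ Ms)  # solves K x = w^2 M x
    return np.sqrt(eigvals) / (2 * np.pi)

# Monte Carlo simulation: constraint stiffness varies +-30% around nominal.
samples = rng.uniform(0.7 * 1.0e5, 1.3 * 1.0e5, size=5000)
freqs = np.array([natural_frequencies(k) for k in samples])

for mode in range(2):
    f = freqs[:, mode]
    print(f"mode {mode + 1}: mean {f.mean():.1f} Hz, std {f.std():.2f} Hz")
```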


    Sigh in patients with acute hypoxemic respiratory failure and acute respiratory distress syndrome: the PROTECTION pilot randomized clinical trial

    Background: Sigh is a cyclic brief recruitment manoeuvre: previous physiological studies showed that its use could be an interesting addition to pressure support ventilation to improve lung elastance, decrease regional heterogeneity and increase release of surfactant. Research question: Is the clinical application of sigh during pressure support ventilation (PSV) feasible? Study design and methods: We conducted a multi-center non-inferiority randomized clinical trial on adult intubated patients with acute hypoxemic respiratory failure or acute respiratory distress syndrome undergoing PSV. Patients were randomized to the No Sigh group, treated by PSV alone, or to the Sigh group, treated by PSV plus sigh (increase of airway pressure to 30 cmH2O for 3 seconds once per minute) until day 28 or death or successful spontaneous breathing trial. The primary endpoint of the study was feasibility, assessed as non-inferiority (5% tolerance) in the proportion of patients failing assisted ventilation. Secondary outcomes included safety, physiological parameters in the first week from randomization, 28-day mortality and ventilator-free days. Results: Two hundred fifty-eight patients (31% women; median age 65 [54-75] years) were enrolled. In the Sigh group, 23% of patients failed to remain on assisted ventilation vs. 30% in the No Sigh group (absolute difference -7%, 95%CI -18% to 4%; p=0.015 for non-inferiority). Adverse events occurred in 12% vs. 13% in Sigh vs. No Sigh (p=0.852). Oxygenation was improved while tidal volume, respiratory rate and corrected minute ventilation were lower over the first 7 days from randomization in Sigh vs. No Sigh. There was no significant difference in terms of mortality (16% vs. 21%, p=0.342) and ventilator-free days (22 [7-26] vs. 22 [3-25] days, p=0.300) for Sigh vs. No Sigh. Interpretation: Among hypoxemic intubated ICU patients, application of sigh was feasible and without increased risk
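
    As a worked check of the primary endpoint, the sketch below recomputes a normal-approximation 95% confidence interval for the difference in failure proportions and compares its upper bound with the 5% non-inferiority margin. The equal 129/129 split is an assumption (only the 258 total is reported), and the Wald interval is a simplification of the trial's actual analysis.

```python
# Hedged worked example of the non-inferiority comparison; equal arm sizes
# are assumed, and the Wald interval is a textbook simplification.
import math

n_sigh, n_ctrl = 129, 129               # assumed equal split of 258 patients
p_sigh, p_ctrl = 0.23, 0.30             # failure of assisted ventilation
margin = 0.05                           # non-inferiority tolerance

diff = p_sigh - p_ctrl
se = math.sqrt(p_sigh * (1 - p_sigh) / n_sigh + p_ctrl * (1 - p_ctrl) / n_ctrl)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"difference: {diff:+.0%}, 95% CI ({lo:+.1%}, {hi:+.1%})")
# Non-inferiority holds if the upper CI bound stays below the +5% margin;
# this reproduces the reported -7% (-18% to +4%) within rounding.
print("non-inferior:", hi < margin)
```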

    Building Pedagogical Models by Formal Concept Analysis

    The Pedagogical Model is one of the main components of an Intelligent Tutoring System. It is exploited to select a suitable action (e.g., feedback, hint) that the intelligent tutor provides to the learner in order to react to her interaction with the system. Such selection depends on the implemented pedagogical strategy and, typically, takes into account several aspects such as the correctness and delay of the learner's response, the learner's profile, the context, and so on. The main idea of this paper is to exploit Formal Concept Analysis to automatically learn pedagogical models from data representing human tutoring behaviours. The paper describes the proposed approach by applying it to an early case study
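
    A minimal sketch of the underlying machinery may help: given a binary context of logged interactions and response attributes, all formal concepts (closed extent/intent pairs) are enumerated; each concept could then be mapped to a tutor action. The toy context and attribute names are illustrative assumptions, not the paper's data.

```python
# Hedged sketch: enumerating formal concepts from a tutoring-interaction context.
from itertools import combinations

# Binary context: interaction -> set of observed attributes.
context = {
    "i1": {"correct", "fast"},
    "i2": {"correct", "slow"},
    "i3": {"wrong", "slow", "asked_hint"},
    "i4": {"wrong", "fast"},
}
attrs = sorted(set().union(*context.values()))

def extent(intent):
    """Objects (interactions) having every attribute in the intent."""
    return frozenset(o for o, a in context.items() if intent <= a)

def intent(objs):
    """Attributes shared by every object in the extent."""
    return (frozenset.intersection(*(frozenset(context[o]) for o in objs))
            if objs else frozenset(attrs))

# Every concept's intent is the closure of some attribute subset, so closing
# all subsets enumerates the full concept lattice of this small context.
concepts = set()
for r in range(len(attrs) + 1):
    for cand in combinations(attrs, r):
        e = extent(frozenset(cand))
        concepts.add((e, intent(e)))

for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(e), "->", sorted(i))   # e.g., map each concept to a tutor action
```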

    A Hybrid approach to Semantic Web Services Matchmaking

    Deploying the semantics embedded in web services is a mandatory step in the automation of discovery, invocation and composition activities. Semantic annotation is the "add-on" needed to cope with current interoperability limitations and to ensure valid support for the interpretation of service capabilities. Nevertheless, many issues have to be addressed to support semantics in web services and to guarantee accurate functionality descriptions. Early efforts address automatic matchmaking tasks, in order to find eligible advertised services which appropriately meet the consumer's demand. In most approaches, this activity is entrusted to software agents, able to drive reasoning/planning activities and to discover the required service, which can be single or composed of several atomic services. This paper presents a hybrid framework which achieves fuzzy matchmaking of semantic web services. A central role is entrusted to task-oriented agents that, given a service request, interact to discover an approximate reply when no exact match occurs among the available web services. The matchmaking activity exploits a mathematical model, the fuzzy multiset, to suitably represent the multi-granular information enclosed in an OWL-S-based description of a semantic web service
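
    To illustrate the approximate-match idea, the hedged sketch below scores advertised services against a request whose capability terms carry several membership degrees (a fuzzy multiset, roughly one degree per granularity level of the description). The toy services and the coverage-style scoring rule are assumptions, not the paper's exact model.

```python
# Hedged sketch: ranking advertised services by fuzzy-multiset match degree.

def match_degree(request, advert):
    """Average, over request terms, of how well the advert's membership
    degrees cover the requested ones (compared position by position)."""
    scores = []
    for term, req_degrees in request.items():
        adv_degrees = sorted(advert.get(term, []), reverse=True)
        req = sorted(req_degrees, reverse=True)
        # Pad the shorter sequree with 0 and credit min(adv, req) per position.
        pairs = [(adv_degrees[i] if i < len(adv_degrees) else 0.0, r)
                 for i, r in enumerate(req)]
        scores.append(sum(min(a, r) for a, r in pairs) / sum(req))
    return sum(scores) / len(scores)

request = {"book_flight": [0.9, 0.6], "pay_card": [0.8]}
adverts = {
    "TravelWS":  {"book_flight": [0.8, 0.7], "pay_card": [0.9]},
    "BookingWS": {"book_flight": [0.5]},
}

# Rank advertised services by approximate match when no exact match exists.
for name, adv in sorted(adverts.items(),
                        key=lambda kv: -match_degree(request, kv[1])):
    print(name, round(match_degree(request, adv), 3))
```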