7 research outputs found

    Modelling Patterns in Continuous Streams of Data

    The untapped source of information produced by the increasing number of sensors can be explored to improve and optimize several systems. Yet, hand in hand with this growth goes the increasing difficulty of managing and organizing all this new information. The lack of a standard context representation scheme is one of the main struggles in this research area. Conventional methods for extracting knowledge from data rely on a standard representation or a priori relations, which may not be feasible for IoT and M2M scenarios. With this in mind, we propose a stream characterization model that provides the foundations for a novel stream similarity metric. Complementing previous work on context organization, we aim to provide an automatic stream organization model without enforcing specific representations. In this paper, we extend our work on stream characterization and devise a novel similarity method.
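    The characterization model itself is not reproduced in this abstract, so the sketch below only illustrates the general idea in Python: each stream is reduced to a small feature vector (mean, spread, and lag-1 autocorrelation are stand-in features, not the paper's) and two streams are compared by the cosine of their characterizations.

        # A minimal, hypothetical sketch: the paper's characterization model is not given
        # in the abstract, so simple summary statistics stand in for its features.
        import numpy as np

        def characterize(stream):
            """Reduce a numeric stream to a small feature vector (stand-in features)."""
            s = np.asarray(stream, dtype=float)
            lag1 = float(np.corrcoef(s[:-1], s[1:])[0, 1]) if len(s) > 2 else 0.0
            return np.array([s.mean(), s.std(), lag1])

        def stream_similarity(a, b):
            """Cosine similarity between two stream characterizations."""
            fa, fb = characterize(a), characterize(b)
            return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-12))

        # Two sensors following the same pattern score close to 1.
        print(stream_similarity([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))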

    Stock market prediction using machine learning classifiers and social media, news

    Accurate stock market prediction is of great interest to investors; however, stock markets are driven by volatile factors such as microblogs and news, which make it hard to predict the stock market index from historical data alone. The enormous stock market volatility emphasizes the need to effectively assess the role of external factors in stock prediction. Stock markets can be predicted using machine learning algorithms on information contained in social media and financial news, as this data can change investors’ behavior. In this paper, we apply machine learning algorithms to social media and financial news data to discover the impact of this data on stock market prediction accuracy for ten subsequent days. To improve the performance and quality of predictions, feature selection and spam tweet reduction are performed on the data sets. Moreover, we perform experiments to find stock markets that are difficult to predict and those that are more influenced by social media and financial news. We compare the results of different algorithms to find a consistent classifier. Finally, to achieve maximum prediction accuracy, deep learning is used and some classifiers are ensembled. Our experimental results show that the highest prediction accuracies of 80.53% and 75.16% are achieved using social media and financial news, respectively. We also show that the New York and Red Hat stock markets are hard to predict, that New York and IBM stocks are more influenced by social media, and that London and Microsoft stocks are more influenced by financial news. The random forest classifier is found to be consistent, and the highest accuracy of 83.22% is achieved by its ensemble.
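    The abstract does not include the authors' feature pipeline, so the following is only a hedged sketch of the final modelling step with scikit-learn: a random forest combined with other classifiers in a soft-voting ensemble, trained on hypothetical daily sentiment and price features (the arrays below are synthetic placeholders, not the paper's data).

        # Hypothetical features: e.g. tweet sentiment, news sentiment, and lagged returns.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 6))                                               # placeholder feature matrix
        y = (X[:, 0] + X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)   # next-day up/down label

        # Keep the time ordering: no shuffling when splitting a market time series.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        ensemble = VotingClassifier(
            estimators=[("rf", rf),
                        ("lr", LogisticRegression(max_iter=1000)),
                        ("svm", SVC(probability=True))],
            voting="soft")
        ensemble.fit(X_tr, y_tr)
        print("random forest alone:", rf.fit(X_tr, y_tr).score(X_te, y_te))
        print("soft-voting ensemble:", ensemble.score(X_te, y_te))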

    Tesla Stock Close Price Prediction Using KNNR, DTR, SVR, and RFR

    The growth of every nation's economy depends heavily on the stock market. Because of their strong predictive powers, K-Nearest Neighbor Regression (KNNR), Random Forest Regression (RFR), Decision Tree Regression (DTR), and Support Vector Regression (SVR) are commonly used. Previous studies show that the chosen state of the art is the Bi-LSTM model of Shah et al. (2021), but that model has a high Mean Squared Error (MSE) of 5% when predicting Tesla stock. We therefore propose SVR, KNNR, DTR, and RFR models in the hope of reducing the error. However, these models can be difficult to tune, so we pick the parameter values most commonly used in the literature to enhance the predictive capabilities of each proposed model: SVR with an RBF kernel, C = [1, 10, 100], gamma = [0.1, 0.2], and epsilon = 0.1; KNNR with K = [1, 3, 5, 7, 9]; DTR with criterion = ['squared_error', 'friedman_mse']; and RFR with 0 < n_estimators < 100. Each proposed model is evaluated, and the best of them is compared to the state of the art of the previous study, Bi-LSTM. The dataset used is the Tesla stock historical data from Yahoo Finance. The results show that RFR with n_estimators = 87 is superior, with 0.005452 RMSE, 0.000029 MSE, and 0.999296 R2, compared to Bi-LSTM with about 0.11773 RMSE, 0.01386 MSE, and 0.791263 R2. Based on this study, it can be concluded that RFR with n_estimators = 87 has superior predictive performance compared with the other models and the state of the art of previous studies. DOI: 10.28991/HEF-2022-03-04-01
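    Since the parameter grids are spelled out above, a small scikit-learn sketch can make them concrete. The Yahoo Finance download is not shown, and the lagged-close feature construction below is a hypothetical stand-in for however the authors framed the regression.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_squared_error, r2_score
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.svm import SVR
        from sklearn.tree import DecisionTreeRegressor

        # Placeholder series; in the paper this would be the Tesla close prices from Yahoo Finance.
        rng = np.random.default_rng(1)
        close = np.cumsum(rng.normal(size=600)) + 100.0
        X = np.column_stack([close[i:i + len(close) - 5] for i in range(5)])  # 5 lagged closes
        y = close[5:]                                                         # next close
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

        grids = {
            "SVR": (SVR(kernel="rbf"), {"C": [1, 10, 100], "gamma": [0.1, 0.2], "epsilon": [0.1]}),
            "KNNR": (KNeighborsRegressor(), {"n_neighbors": [1, 3, 5, 7, 9]}),
            "DTR": (DecisionTreeRegressor(), {"criterion": ["squared_error", "friedman_mse"]}),
            "RFR": (RandomForestRegressor(), {"n_estimators": list(range(1, 100))}),
        }
        for name, (model, grid) in grids.items():
            search = GridSearchCV(model, grid, scoring="neg_mean_squared_error", cv=3)
            pred = search.fit(X_tr, y_tr).predict(X_te)
            print(name, search.best_params_,
                  "RMSE=%.4f" % np.sqrt(mean_squared_error(y_te, pred)),
                  "R2=%.4f" % r2_score(y_te, pred))

    On the actual Tesla data, the abstract reports the search settling on RFR with n_estimators = 87 as the best model.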

    Extração de conhecimento a partir de fontes semi-estruturadas (Knowledge extraction from semi-structured sources)

    The increasing number of small, cheap devices full of sensing capabilities leads to an untapped source of data that can be explored to improve and optimize multiple systems, from small-scale home automation to large-scale applications such as agriculture monitoring, traffic flow, and industrial maintenance prediction. Yet, hand in hand with this growth goes the increasing difficulty of collecting, storing, and organizing all these new data. The lack of standard context representation schemes is one of the main struggles in this area. Furthermore, conventional methods for extracting knowledge from data rely on standard representations or a priori relations. These a priori relations add latent information to the underlying model, in the form of context representation schemes, table relations, or even ontologies. Nonetheless, these relations are created and maintained by human users. While feasible for small-scale scenarios or specific areas, this becomes increasingly difficult to maintain when considering the potential dimension of IoT and M2M scenarios.
    This thesis addresses the problem of storing and organizing context information from IoT/M2M scenarios in a meaningful way, without imposing a representation scheme or requiring a priori relations. It proposes a d-dimension organization model optimized for IoT/M2M data. The model relies on machine learning features to identify similar context sources. These features are then used to learn relations between data sources automatically, providing the foundations for automatic knowledge extraction, upon which machine learning, or even conventional methods, can rely to extract knowledge from a potentially relevant dataset.
    During this work, two different machine learning techniques were tackled: semantic similarity and stream similarity. Semantic similarity estimates the similarity between concepts (in textual form). This thesis proposes an unsupervised learning method for semantic features based on distributional profiles, without requiring any specific corpus. This allows the organizational model to organize data based on concept similarity instead of string matching. Another advantage is that the learning method does not require input from users, making it ideal for massive IoT/M2M scenarios. Stream similarity metrics estimate the similarity between two streams of data. Although these methods have been extensively researched for DNA sequencing, they commonly rely on variants of the longest common sub-sequence. This thesis proposes a generative model for stream characterization, specially optimized for IoT/M2M data. The model can be used to generate statistically significant data streams and to estimate the similarity between streams. This is then used by the context organization model to identify context sources with similar stream patterns.
    The work proposed in this thesis was extensively discussed, developed, and published in several international publications. The multiple contributions to projects and collaborations with fellow colleagues, in which parts of the developed work were used successfully, support the claim that although the context organization model (and its similarity features) was optimized for IoT/M2M data, it can potentially be extended to deal with any kind of context information in a wide array of applications.
    The present study was developed in the scope of the Smart Green Homes Project [POCI-01-0247-FEDER-007678], a co-promotion between Bosch Termotecnologia S.A. and the University of Aveiro. It is financed by Portugal 2020 under the Competitiveness and Internationalization Operational Program, and by the European Regional Development Fund. Programa Doutoral em Informática (Doctoral Programme in Informatics).
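    As an illustration of the semantic-similarity idea only (the thesis's unsupervised feature-learning method is not reproduced here), the sketch below builds distributional profiles as windowed co-occurrence counts over a toy corpus and compares two terms by the cosine of their profiles.

        # Toy corpus and window size are hypothetical; the thesis learns profiles
        # without requiring a specific corpus.
        from collections import Counter, defaultdict
        import math

        docs = ["temperature sensor reports room temperature",
                "thermostat sensor reports room heat",
                "traffic camera reports road congestion"]
        window = 2

        profiles = defaultdict(Counter)
        for doc in docs:
            tokens = doc.split()
            for i, word in enumerate(tokens):
                for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                    if j != i:
                        profiles[word][tokens[j]] += 1

        def cosine(p, q):
            num = sum(p[t] * q[t] for t in set(p) & set(q))
            den = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
            return num / den if den else 0.0

        # Terms used in similar contexts score higher than unrelated ones.
        print(cosine(profiles["temperature"], profiles["thermostat"]))
        print(cosine(profiles["temperature"], profiles["congestion"]))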