7,626 research outputs found

    Knowledge Discovery in the SCADA Databases Used for the Municipal Power Supply System

    This paper addresses the development of an intelligent data analysis system to support decision making in the management of municipal power supply services. The management problems of the municipal power supply system are specified with regard to modern trends in technologies that increase energy efficiency. Findings from an analysis of the system-level problems of integrated computer-aided control of the city's power supply are presented, and a hierarchical management decomposition model is considered. The objective is defined as increasing energy efficiency by minimizing expenditures and energy losses during the generation and transportation of energy carriers to the consumer, optimizing power consumption at the prescribed level of reliability of pipelines and networks, and satisfying consumers. To improve decision support, a new approach is proposed for monitoring the engineering systems and technological processes involved in energy consumption and transportation, based on geospatial analysis and Knowledge Discovery in Databases (KDD) technologies. Data acquisition for the analytical tasks is carried out over a heterogeneous wireless medium that includes soft-touch VPN segments of ZigBee technology implementing the 6LoWPAN standard over IEEE 802.15.4, as well as cellular network segments. JBoss Application Server is used as the server platform for the tools that retrieve data collected from sensor nodes, PLCs, and energy consumption metering devices. The KDD tools are developed on the Java Enterprise Edition platform using Spring and the Hibernate ORM.
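
    The abstract describes a KDD pipeline over SCADA sensor, PLC, and metering data, implemented by the authors on a Java EE / Spring / Hibernate stack. Purely as an illustration of one typical knowledge-discovery step, the Python sketch below clusters daily consumption profiles and flags outliers; the table and column names (meter_id, timestamp, kwh) are hypothetical, not taken from the paper.

    # Illustrative KDD step over SCADA meter readings (not the paper's Java stack).
    # Assumes a DataFrame with a datetime column "timestamp" and columns
    # "meter_id" and "kwh"; all names here are hypothetical.
    import numpy as np
    import pandas as pd
    from sklearn.cluster import KMeans

    def daily_profiles(readings: pd.DataFrame) -> pd.DataFrame:
        # Pivot hourly kWh readings into one 24-value profile per meter per day.
        readings = readings.copy()
        readings["date"] = readings["timestamp"].dt.date
        readings["hour"] = readings["timestamp"].dt.hour
        return readings.pivot_table(index=["meter_id", "date"], columns="hour",
                                    values="kwh", aggfunc="sum").fillna(0.0)

    def flag_anomalies(profiles: pd.DataFrame, n_clusters: int = 4, z: float = 3.0):
        # Cluster profiles; flag profiles unusually far from their cluster centre.
        model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(profiles)
        dist = np.linalg.norm(profiles.values - model.cluster_centers_[model.labels_], axis=1)
        return profiles.index[dist > dist.mean() + z * dist.std()]

    # Usage: suspicious = flag_anomalies(daily_profiles(raw_readings))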

    Patent Analytics Based on Feature Vector Space Model: A Case of IoT

    The number of approved patents worldwide increases rapidly each year, which calls for new patent analytics to efficiently mine the valuable information attached to these patents. The vector space model (VSM) represents documents as high-dimensional vectors, where each dimension corresponds to a unique term. While originally proposed for information retrieval systems, VSM has also seen wide application in patent analytics and is used as a fundamental tool to map patent documents to structured data. However, the VSM method suffers from several limitations when applied to patent analysis tasks, such as loss of sentence-level semantics and curse-of-dimensionality problems. To address these limitations, we propose a patent analytics approach based on a feature vector space model (FVSM), where the FVSM is constructed by mapping patent documents to feature vectors extracted by convolutional neural networks (CNN). Applications of FVSM to three typical patent analysis tasks, i.e., patent similarity comparison, patent clustering, and patent map generation, are discussed. A case study using patents related to Internet of Things (IoT) technology demonstrates the performance and effectiveness of FVSM. The proposed FVSM can be adopted by other patent analysis studies in place of VSM, and various big data learning tasks can be performed on top of it.
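
    As a hedged illustration of the downstream tasks named in the abstract, the sketch below assumes the CNN feature extraction has already been done and that `features` is an (n_patents, d) array; it shows only similarity comparison and clustering on top of such vectors, not the paper's own implementation.

    # Similarity and clustering on top of precomputed CNN feature vectors.
    # The CNN feature extractor itself is not reproduced here.
    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.cluster import AgglomerativeClustering

    def similarity_matrix(features: np.ndarray) -> np.ndarray:
        # Pairwise cosine similarity between patent feature vectors.
        return cosine_similarity(features)

    def cluster_patents(features: np.ndarray, n_clusters: int = 10) -> np.ndarray:
        # Group patents by cosine distance (the "metric" argument is named
        # "affinity" in scikit-learn versions before 1.2).
        clusterer = AgglomerativeClustering(n_clusters=n_clusters,
                                            metric="cosine", linkage="average")
        return clusterer.fit_predict(features)

    # Example with random stand-in features:
    # labels = cluster_patents(np.random.rand(500, 128))
    # sims = similarity_matrix(np.random.rand(500, 128))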

    Forecasting of commercial sales with large scale Gaussian Processes

    This paper argues that applications of Gaussian Processes in the fast-moving consumer goods industry have received too little attention. Yet the technique can be important: it can, for example, provide automatic feature relevance determination, and the posterior mean can unlock insights into the data. Significant challenges are the large size and high dimensionality of commercial point-of-sale data. The study reviews approaches to Gaussian Process modeling for large data sets, evaluates their performance on commercial sales data, and shows the value of this type of model as a decision-making tool for management. Comment: 10 pages, 5 figures.
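
    A minimal sketch of the kind of model discussed, on a small synthetic "sales" dataset: a Gaussian Process regression with per-feature length scales, whose learned values provide the automatic feature relevance determination mentioned above. For genuinely large point-of-sale data one would switch to the sparse or approximate methods the paper reviews; the feature names and data here are invented for illustration.

    # Small exact GP with per-feature length scales; after fitting, larger learned
    # length scales indicate less relevant features (ARD-style relevance).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 3))                  # e.g. price, promotion flag, weekday
    y = 2.0 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(scale=0.1, size=300)

    kernel = RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    print(gp.kernel_)                              # learned per-feature length scales
    mean, std = gp.predict(X[:5], return_std=True)  # posterior mean and uncertainty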

    Air Quality Prediction in Smart Cities Using Machine Learning Technologies Based on Sensor Data: A Review

    The influence of machine learning technologies is rapidly increasing and penetrating almost every field, and air pollution prediction is no exception. This paper reviews studies on air pollution prediction using machine learning algorithms based on sensor data in the context of smart cities. The most relevant papers were selected from the most popular databases by applying the corresponding filters. After thoroughly reviewing those papers, their main features were extracted and served as a basis for linking and comparing them. As a result, we conclude that: (1) rather than simple machine learning techniques, authors currently apply advanced and sophisticated techniques, (2) China was the leading country in terms of case studies, (3) particulate matter with a diameter of 2.5 micrometers or less (PM2.5) was the main prediction target, (4) in 41% of the publications the prediction was carried out for the next day, (5) 66% of the studies used data sampled at an hourly rate, (6) 49% of the papers used open data, with a rising tendency since 2016, and (7) for effective air quality prediction it is important to consider external factors such as weather conditions, spatial characteristics, and temporal features.
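
    To make the typical setup identified by the review concrete, here is a hedged Python sketch with hypothetical column names: hourly sensor and weather data aggregated to daily values, with the next day's PM2.5 as the prediction target and a standard ensemble regressor as the model.

    # Hypothetical columns: timestamp, pm25, temperature, humidity, wind_speed.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    def make_supervised(hourly: pd.DataFrame):
        # Aggregate hourly rows to daily means; the target is the next day's PM2.5.
        daily = hourly.set_index("timestamp").resample("D").mean()
        X = daily[["pm25", "temperature", "humidity", "wind_speed"]]
        y = daily["pm25"].shift(-1)
        keep = y.notna()
        return X[keep], y[keep]

    # X, y = make_supervised(sensor_df)
    # X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)
    # model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    # print("next-day PM2.5 MAE:", mean_absolute_error(y_te, model.predict(X_te)))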

    Deep learning macroeconomics

    Limited datasets and complex nonlinear relationships are among the challenges that may emerge when applying econometrics to macroeconomic problems. This research proposes deep learning as an approach to transfer learning in the former case and to mapping relationships between variables in the latter case. Several machine learning techniques have been incorporated into the econometric framework, but deep learning has remained focused on time-series forecasting. Firstly, transfer learning is proposed as an additional strategy for empirical macroeconomics. Macroeconomists already apply transfer learning when, for example, they assume a given a priori distribution in a Bayesian context, estimate a structural VAR with sign restrictions, or calibrate parameters based on results observed in other models; the innovation we introduce is to advance toward a more systematic transfer learning strategy in applied macroeconomics. When developing economic modeling strategies, the lack of data may be an issue that transfer learning can fix. We start by presenting theoretical concepts related to transfer learning and propose a connection with a typology of macroeconomic models. Next, we explore the proposed strategy empirically, showing that data from different but related domains, a form of transfer learning, help to identify business cycle phases when there is no business cycle dating committee and to quickly estimate an economics-based output gap. In both cases, the strategy also helps to improve learning when data are limited. The approach integrates the idea of storing knowledge gained from one region's economic experts and applying it to other geographic areas; the first part is captured with a supervised deep neural network model, and the second by applying it to another dataset, a domain adaptation procedure. Overall, classification with transfer learning improves on baseline models. To the best of our knowledge, the combined deep and transfer learning approach is underused in macroeconomic applications, indicating that there is plenty of room for further research. Secondly, since deep learning methods learn representations formed by the composition of multiple non-linear transformations that yield more abstract representations, we apply deep learning to map low-frequency variables from high-frequency variables. There are situations in which we know, sometimes by construction, that a relationship exists between input and output variables, but this relationship is difficult to map, a challenge on which deep learning models have shown excellent performance. The results obtained show the suitability of deep learning models applied to macroeconomic problems. Additionally, deep learning proved adequate for mapping low-frequency variables from high-frequency data in order to interpolate, distribute, and extrapolate time series by related series. The application of this technique to Brazilian data proved compatible with benchmarks based on other techniques.
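
    A minimal sketch of the domain-adaptation idea described above, with illustrative tensors and dimensions: pre-train a small phase classifier on a source region that has labelled business cycle phases, then freeze all but the output layer and fine-tune on the target region's limited data. This is only one plausible instantiation, not the thesis's exact architecture.

    # Pre-train a small business cycle phase classifier on a source region,
    # then freeze all but the last layer and fine-tune on the target region.
    import torch
    from torch import nn

    def make_model(n_features: int, n_phases: int = 2) -> nn.Sequential:
        return nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                             nn.Linear(32, 16), nn.ReLU(),
                             nn.Linear(16, n_phases))

    def train(model, X, y, lr=1e-3, epochs=200):
        # Only parameters left trainable (requires_grad=True) are optimized.
        opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss_fn(model(X), y).backward()
            opt.step()
        return model

    # model = train(make_model(n_features=8), X_source, y_source)   # source region
    # for p in model[:-1].parameters():
    #     p.requires_grad = False                                   # keep shared layers
    # model = train(model, X_target, y_target, epochs=50)           # domain adaptation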

    A demand-driven approach for a multi-agent system in Supply Chain Management

    This paper presents the architecture of a multi-agent decision support system for Supply Chain Management (SCM) designed to compete in the TAC SCM game. The behaviour of the system is demand-driven, and the agents plan, predict, and react dynamically to changes in the market. The main strength of the system lies in the ability of the Demand agent to predict customer winning bid prices, the highest prices the agent can offer customers and still obtain their orders. This paper investigates the effect of the ability to predict customer order prices on the overall performance of the system. Four strategies for predicting such prices are proposed and compared. The experimental results reveal which strategies are better and show a correlation between the accuracy of the models' predictions and the overall system performance: the more accurate the prediction of customer order prices, the higher the profit. © 2010 Springer-Verlag Berlin Heidelberg.
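
    The sketch below mirrors the experimental setup in spirit only: two simple, hypothetical strategies for predicting customer winning bid prices (a moving average and a fitted linear trend) are compared by prediction error on synthetic order data, the kind of accuracy comparison the paper relates to overall profit. The paper's four actual strategies are not reproduced.

    # Synthetic falling-price market; compare a moving-average predictor with a
    # fitted linear trend by mean absolute error on held-out days.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(1)
    days = np.arange(200)
    price = 1800 - 1.5 * days + rng.normal(scale=30, size=days.size)

    split = 150
    X_tr, X_te = days[:split].reshape(-1, 1), days[split:].reshape(-1, 1)
    y_tr, y_te = price[:split], price[split:]

    ma_pred = np.full(y_te.shape, y_tr[-5:].mean())                  # strategy A
    lr_pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)       # strategy B

    print("moving average MAE:", mean_absolute_error(y_te, ma_pred))
    print("linear trend   MAE:", mean_absolute_error(y_te, lr_pred))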