6,479 research outputs found

    Hybrid Ventilation System and Soft-Sensors for Maintaining Indoor Air Quality and Thermal Comfort in Buildings

    Get PDF
    Maintaining both indoor air quality (IAQ) and thermal comfort in buildings while optimizing energy consumption is a challenging problem. This investigation presents a novel design for a hybrid ventilation system, enabled by predictive control and soft-sensors, that achieves both IAQ and thermal comfort by combining predictive control with demand-controlled ventilation (DCV). First, we show that maintaining IAQ, thermal comfort and optimal energy use is a multi-objective optimization problem with competing objectives, and that a predictive control approach is required to control the system intelligently. This raises several implementation challenges, which are addressed by designing a hybrid ventilation scheme supported by predictive control and soft-sensors. The main idea of the hybrid ventilation system is to achieve thermal comfort by varying the ON/OFF times of the air conditioners, using predictive control to keep the temperature within user-defined bands, while IAQ is maintained using Healthbox 3.0, a DCV device. Furthermore, this study designs soft-sensors by combining Internet of Things (IoT)-based sensors with deep-learning tools. The hardware realization of the control and IoT prototype is also discussed. The proposed hybrid ventilation system and soft-sensors are demonstrated in a real research laboratory, the Center for Research in Automatic Control Engineering (C-RACE) at Kalasalingam University, India. Our results show the perceived benefits of hybrid ventilation, predictive control, and soft-sensors.
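The band-keeping ON/OFF idea described in the abstract can be sketched as a toy predictive switching loop. Everything below (function names, cooling/heating rates, band limits) is hypothetical, not the paper's controller or building model:

```python
# Minimal sketch (hypothetical, not the paper's controller): keep room
# temperature inside a user-defined band by switching an AC ON/OFF based
# on a one-step-ahead temperature prediction.

def predict_next_temp(temp, ac_on, cooling_rate=0.5, heat_gain=0.3):
    """Toy one-step model: the room heats passively and cools when the AC runs."""
    return temp - cooling_rate if ac_on else temp + heat_gain

def control_step(temp, ac_on, low=22.0, high=24.0):
    """Switch the AC so the *predicted* next temperature stays in [low, high]."""
    predicted = predict_next_temp(temp, ac_on)
    if predicted > high:
        return True   # turn/keep AC ON
    if predicted < low:
        return False  # turn/keep AC OFF
    return ac_on      # otherwise keep the current state

# Simulate a few steps starting above the comfort band.
temp, ac_on, trace = 26.0, False, []
for _ in range(10):
    ac_on = control_step(temp, ac_on)
    temp = predict_next_temp(temp, ac_on)
    trace.append(round(temp, 1))
```

A real predictive controller would replace the one-step toy model with an identified thermal model and optimize the switching schedule over a longer horizon, which is where the multi-objective trade-off against energy use enters.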

    Box-Jenkin’s Methodology in Python for Stock Managing

    Get PDF
    At the end of 2019, the world was shaken when social media reported that a potential worldwide pandemic might be beginning. Early in 2020, most countries affected by the pandemic declared a state of emergency, announcing that people could not leave their houses. Confronted with these security policies, many companies faced new management challenges regarding physical and technological resources. On the one hand, companies had to adapt their work style, allowing their employees to work remotely (some companies even adopted hybrid work once the restrictions ended or were paused); on the other, they had to adapt their technological resources so that information was safely accessible to every employee. For this purpose, large companies had to quickly spend thousands or millions adapting their information systems, both to acquire more powerful virtual private networks and to improve capacity in online channel integration and invoicing systems, as the online channel was the only one available for buying non-essential goods. This thesis addressed the possibility of using Machine Learning (ML) to build a predictive model that forecasts sales behavior over time by analysing a time series. The idea is to build a stock-management model that is updated daily (with sales up to the previous day) and automatically predicts future sales behavior, allowing an automated stock-management process with neither shortages nor overstocking. As a result, a fully automated ML model was achieved, using an S3 bucket (from Amazon Web Services) connected to a Databricks instance (launched through the S3 bucket), with the capacity to receive the sales daily, process the data and forecast the future data points of this sales time series.
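A minimal sketch of the Box-Jenkins idea the thesis builds on, assuming nothing about its actual pipeline: fit an AR(1) model, the simplest member of the ARIMA family, to a daily sales series and forecast ahead. The sales numbers are invented for illustration:

```python
# Minimal sketch (illustrative, not the thesis' pipeline): fit an AR(1)
# model y[t] = c + phi * y[t-1] by least squares and forecast future sales.

def fit_ar1(series):
    """Least-squares estimate of y[t] = c + phi * y[t-1]."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    c = my - phi * mx
    return c, phi

def forecast(series, steps=1):
    """Iterate the fitted model forward to forecast the next days' sales."""
    c, phi = fit_ar1(series)
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Ten days of (invented) daily sales with a mild upward drift.
sales = [100, 103, 101, 106, 108, 107, 111, 113, 112, 116]
print(forecast(sales, steps=3))
```

A production setup like the one described would instead run full Box-Jenkins identification and diagnostics (e.g., with statsmodels' ARIMA) and refit every day as the previous day's sales land in the S3 bucket.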

    Internet of Things-aided Smart Grid: Technologies, Architectures, Applications, Prototypes, and Future Research Directions

    Full text link
    Traditional power grids are being transformed into Smart Grids (SGs) to address issues in the existing power system arising from uni-directional information flow, energy wastage, growing energy demand, and reliability and security concerns. SGs offer bi-directional energy flow between service providers and consumers, involving power generation, transmission, distribution and utilization systems. SGs employ a very large number of devices, deployed at power plants, distribution centers and consumers' premises, to monitor, analyze and control the grid. Hence, an SG requires connectivity, automation and tracking for these devices, which is achieved with the help of the Internet of Things (IoT). IoT helps SG systems support various network functions throughout the generation, transmission, distribution and consumption of energy by incorporating IoT devices (such as sensors, actuators and smart meters) and by providing connectivity, automation and tracking for them. In this paper, we provide a comprehensive survey of IoT-aided SG systems, covering existing architectures, applications and prototypes. The survey also highlights open issues, challenges and future research directions for IoT-aided SG systems.
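The monitoring role the survey attributes to IoT devices can be illustrated with a toy, broker-less telemetry loop. All names here are hypothetical; a real SG deployment would publish over an actual messaging protocol such as MQTT to a broker, not an in-process queue:

```python
# Toy sketch of smart-meter telemetry: meters push JSON readings onto a
# message bus, and a monitor drains the bus to aggregate consumption.
import json
from queue import Queue

bus = Queue()  # in-process stand-in for a real message broker

def publish_reading(meter_id, kwh):
    """A smart meter serializes and publishes one consumption reading."""
    bus.put(json.dumps({"meter": meter_id, "kwh": kwh}))

def aggregate():
    """The monitoring side drains the bus and totals the consumption."""
    total = 0.0
    while not bus.empty():
        total += json.loads(bus.get())["kwh"]
    return total

publish_reading("meter-01", 1.2)
publish_reading("meter-02", 0.8)
print(aggregate())  # total consumption seen by the monitor
```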

    Une approche géostatistique spatio-temporelle pour la prévision immédiate d'ensemble de pluies (A space-time geostatistical approach for ensemble rainfall nowcasting)

    Get PDF
    3rd European Conference on Flood Risk Management, Lyon, France, 17/10/2016 - 21/10/2016. Nowcasting systems are essential to anticipate extreme events and reduce their socio-economic impacts. The major challenge for these systems is to capture high-risk situations in advance, with good accuracy in both location and time. Uncertainties associated with precipitation events affect hydrological forecasts, especially for localized flash flood events. Radar monitoring can help detect the space-time evolution of rain fields, but nowcasting techniques are needed to go beyond observation and provide rainfall scenarios for the next hours of the event. In this study, we investigate a space-time geostatistical framework to generate multiple scenarios of future rainfall. The rainfall ensemble is generated from the space-time properties of precipitation fields given by radar measurements and rainfall data from rain gauges. The aim of this study is to investigate the potential of a framework that applies a geostatistical conditional simulation method to generate an ensemble nowcast of rainfall fields. The Var region (south-eastern France) and 14 events are used to validate the approach. Results show that the proposed method can combine information from radar fields and rain gauges to generate nowcast rainfall fields adapted to flash flood alerts.
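A drastically simplified, purely illustrative sketch of geostatistical conditional simulation, the core tool the abstract names, in one spatial dimension (the paper's framework is space-time and far richer): draw Gaussian fields with an exponential covariance and condition them on two "gauge" observations via simple kriging, so every ensemble member honors the observed values. All locations, values and covariance parameters are invented:

```python
# Toy 1-D conditional Gaussian simulation: ensemble members share the
# spatial covariance structure and all pass through the gauge observations.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)                         # 1-D "space"
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)   # exponential covariance

obs_idx = np.array([10, 40])   # gauge locations (grid indices)
obs_val = np.array([3.0, 1.0])  # observed values at the gauges

# Simple-kriging weights: interpolate the observations to every location.
K = cov[np.ix_(obs_idx, obs_idx)]
k = cov[:, obs_idx]
w = k @ np.linalg.inv(K)

# Conditional simulation: unconditional draw + kriged correction.
L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(x)))
ensemble = []
for _ in range(20):
    z = L @ rng.standard_normal(len(x))              # unconditional field
    ensemble.append(z + w @ (obs_val - z[obs_idx]))  # condition on gauges
```

Note the Gaussian fields can go negative; real rainfall nowcasting works in a transformed (e.g., Gaussian-anamorphosed) space and back-transforms, and conditions on full radar fields rather than two points.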

    Spatiotemporal analysis of gapfilled high spatial resolution time series for crop monitoring.

    Full text link
    Reliable crop classification maps are important for many agricultural applications, such as field monitoring and food security. Several crop cover databases already exist at different spatial scales and temporal resolutions for different parts of the world (e.g., Corine Land Cover (CORINE) in Europe or the Cropland Data Layer (CDL) in the United States (US)). However, these databases are historical crop cover maps and hence do not reflect the actual crops on the ground. Usually, these maps take a specific time (a year) to generate, owing to the diversity of crop phenologies. The aims of this work are twofold: (1) to analyze the multi-scale spatial crop distribution to identify the most representative areas; (2) to analyze the temporal range needed to generate crop cover maps promptly. The analysis is done over the contiguous US (CONUS) in 2019. To address these objectives, different types of data are used: the CDL, a robust and complete cropland map of the CONUS that provides annual geo-referenced land cover raster data, and multispectral gap-filled data at 30 m spatial resolution, preprocessed to fill gaps caused by clouds and aerosols. The latter dataset was generated by fusing Landsat and Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. Google Earth Engine (GEE), a cloud-based application specialized in geospatial processing, is used to handle this large amount of data. GEE can be used to map crops globally, but it requires efficient algorithms. In this study, different machine learning algorithms are analyzed to generate classification crop maps as promptly as possible. Several options are available in GEE, from simple decision trees to more complex algorithms such as support vector machines or neural networks. This study presents first results on the potential to generate crop classification maps at 30 m spatial resolution using as little temporal information as possible.
    Rajadel Lambistos, C. (2020). Análisis espaciotemporal de series temporales sin huecos de alta resolución espacial. Universitat Politècnica de València. http://hdl.handle.net/10251/155879
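The question of how little of the season suffices for classification can be sketched locally with toy data (the study itself runs in Google Earth Engine on real imagery; the NDVI profiles and crop names below are invented): classify a pixel by its nearest reference NDVI time series, using only as many dates as are available so far.

```python
# Toy nearest-profile crop classifier: compare a pixel's NDVI time series
# against per-crop reference profiles, truncated to the dates seen so far.
import math

# Invented seasonal NDVI profiles, one per crop class.
profiles = {
    "corn":    [0.2, 0.4, 0.7, 0.8, 0.5],
    "soybean": [0.2, 0.3, 0.5, 0.7, 0.6],
    "fallow":  [0.2, 0.2, 0.2, 0.2, 0.2],
}

def classify(series):
    """Assign the crop whose profile is closest over the available dates."""
    n = len(series)
    def dist(crop):
        ref = profiles[crop]
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(series, ref[:n])))
    return min(profiles, key=dist)

full_season = [0.21, 0.41, 0.68, 0.79, 0.52]
early_season = full_season[:3]   # only the first three dates of the season
print(classify(full_season), classify(early_season))
```

Even this toy separates the classes from a truncated early-season series, which is the intuition behind shrinking the temporal range; the study's GEE classifiers (decision trees, SVMs, neural networks) apply the same idea at CONUS scale.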

    An Integrative Remote Sensing Application of Stacked Autoencoder for Atmospheric Correction and Cyanobacteria Estimation Using Hyperspectral Imagery

    Get PDF
    Hyperspectral image sensing can be used to effectively detect the distribution of harmful cyanobacteria. To accomplish this, physical- and/or model-based simulations have been conducted to perform an atmospheric correction (AC) and an estimation of pigments, including phycocyanin (PC) and chlorophyll-a (Chl-a), in cyanobacteria. However, such simulations were undesirable in certain cases, due to the difficulty of representing dynamically changing aerosol and water vapor in the atmosphere and the optical complexity of inland water. Thus, this study was focused on the development of a deep neural network model for AC and cyanobacteria estimation, without considering the physical formulation. The stacked autoencoder (SAE) network was adopted for the feature extraction and dimensionality reduction of hyperspectral imagery. The artificial neural network (ANN) and support vector regression (SVR) were sequentially applied to achieve AC and estimate cyanobacteria concentrations (i.e., SAE-ANN and SAE-SVR). Further, the ANN and SVR models without SAE were compared with SAE-ANN and SAE-SVR models for the performance evaluations. In terms of AC performance, both SAE-ANN and SAE-SVR displayed reasonable accuracy with the Nash–Sutcliffe efficiency (NSE) > 0.7. For PC and Chl-a estimation, the SAE-ANN model showed the best performance, by yielding NSE values > 0.79 and > 0.77, respectively. SAE, with fine-tuning operators, improved the accuracy of the original ANN and SVR estimations, in terms of both AC and cyanobacteria estimation. This is primarily attributed to the high-level feature extraction of SAE, which can represent the spatial features of cyanobacteria. Therefore, this study demonstrated that the deep neural network has a strong potential to realize an integrative remote sensing application.
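The extract-then-estimate pipeline the abstract describes (autoencoder features feeding a regressor) can be sketched with a heavily simplified stand-in: a single linear autoencoder layer instead of a stacked nonlinear SAE, a least-squares fit instead of an ANN/SVR, and synthetic "spectra" instead of hyperspectral imagery. All data and dimensions are invented:

```python
# Toy stand-in for the SAE-ANN pipeline: learn a low-dimensional code by
# reconstruction, then regress the target (e.g. a pigment level) on the code.
import numpy as np

rng = np.random.default_rng(1)
# 200 samples, 20 "bands"; the signal really lives in 2 latent factors.
latent = rng.standard_normal((200, 2))
mix = rng.standard_normal((2, 20))
X = latent @ mix + 0.01 * rng.standard_normal((200, 20))
y = latent @ np.array([1.5, -0.7])   # synthetic target driven by the factors

# Train encoder W (20 -> 2) and decoder V (2 -> 20) to reconstruct X.
W = 0.01 * rng.standard_normal((20, 2))
V = 0.01 * rng.standard_normal((2, 20))
lr = 0.002
for _ in range(300):
    code = X @ W
    err = code @ V - X                    # reconstruction error
    V -= lr * code.T @ err / len(X)       # gradient step on the decoder
    W -= lr * X.T @ (err @ V.T) / len(X)  # gradient step on the encoder

# Regress the target on the learned codes (least squares) and score with NSE.
code = X @ W
A = np.c_[code, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
nse = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(round(float(nse), 3))
```

The paper's actual models stack several nonlinear layers, fine-tune them, and evaluate against measured PC/Chl-a; this sketch only shows why compressing to informative features before regression can work.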