
    Residential Energy Management for Renewable Energy Systems Incorporating Data-Driven Unravelling of User Behavior

    The penetration of distributed energy resources (DERs), such as photovoltaics (PV), at the residential level has increased rapidly in recent years. This will inevitably induce a paradigm shift in end-user behaviour and in the operation of local energy markets. The energy-community initiative, with its high integration of DERs, allows users to manage their generation (for prosumers) and consumption more efficiently, resulting in various economic, social, and environmental benefits. Specifically, local energy communities and their members can legally engage in energy generation, distribution, supply, consumption, storage, and sharing to increase their autonomy from the power grid, advance energy efficiency, reduce energy costs, and decrease carbon emissions. Reducing energy consumption costs is difficult for residential energy management without an understanding of users' preferences. Advanced measurement and communication technologies provide opportunities for individual consumers/prosumers and local energy communities to adopt a more active role in renewable-rich smart grids. Non-intrusive load monitoring (NILM) monitors load activities from a single point source, such as a smart meter, based on the assumption that different appliances have different power consumption levels and features. NILM can extract users' load consumption from the smart meter to support the development of the smart grid for better energy management and demand response (DR). Yet to date, how to design residential energy management, including home energy management systems (HEMS) and community energy management systems (CEMS), with an understanding of user preferences and willingness to participate in energy management, is still far from fully investigated. This thesis aims to develop methodologies for a residential energy management system for renewable energy systems (RES) incorporating data-driven unravelling of the user's energy consumption behaviour.
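As a rough illustration of the NILM assumption described above (different appliances have distinct power levels), the following sketch flags on/off events in an aggregate smart-meter trace from step changes between consecutive readings. The threshold and the synthetic load profile are hypothetical; real NILM methods use far richer features than single step sizes.

```python
import numpy as np

def detect_events(power, threshold=50.0):
    """Flag samples where aggregate power changes by more than
    `threshold` watts between consecutive readings (a hypothetical
    step-change detector, not a full NILM disaggregator)."""
    deltas = np.diff(power)
    events = []
    for i, d in enumerate(deltas):
        if abs(d) > threshold:
            events.append((i + 1, float(d)))  # (sample index, step size in W)
    return events

# Synthetic aggregate load: a 100 W base with a 1500 W appliance
# switching on at sample 3 and off at sample 7.
power = np.array([100, 100, 100, 1600, 1600, 1600, 1600, 100, 100, 100],
                 dtype=float)
print(detect_events(power))  # two events: +1500 W at index 3, -1500 W at index 7
```

A real system would additionally match the detected step sizes against a library of known appliance signatures to label each event.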

    The characterisation of multiple defects in components using artificial neural networks

    This thesis investigates the use of artificial neural networks (ANNs) as a means of processing signals from non-destructive tests, to characterise defects and provide more information regarding the condition of the component than would otherwise be possible for an operator to obtain from the test data. ANNs are used both as pattern classifiers and as function approximators. In the first part of the thesis, finite element analysis was carried out on a simple component containing a single defect modelled as a void, simulating three kinds of non-destructive test: an impact method that sent a stress wave through the component, an analysis of natural frequencies, and an ultrasonic pulse-echo method. The inputs to the ANNs were data from the numerical model, and the outputs were the x and y co-ordinates of the defect in the case of the impact and frequency methods, and the size and distance to the defect in the case of the ultrasonic method. Very good accuracy was observed in all three methods. Experimental validation of the ultrasonic method was carried out, and the ANNs returned accurate outputs for the position and size of a circular hole in a steel plate when presented with experimental data. When the ANNs were presented with noisy input data, their reduction in accuracy was small in comparison with published data from similar studies. In the second part of the thesis, the case of two defects lying within one wavelength of each other was considered, where the reflected ultrasonic waves from each defect overlapped, partially cancelling each other out and reducing the overall amplitude. A novel ANN-based approach was developed to decouple the overlapping signals, characterising each defect in terms of its position and size. Optimisation of the ANN architecture was carried out to maximise the ability of the ANN to generalise when presented with previously unseen data. 
Finally, an ANN-based general defect characterisation ‘expert system’ is presented, using data from an ultrasonic test as its input, and classifying cases according to the number of defects present. The system then characterised the defects present in the component in terms of their location and size, providing more information regarding the component’s condition than would be possible by existing techniques.
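The use of an ANN as a function approximator for defect location can be sketched as follows: a one-hidden-layer network trained by gradient descent to map arrival-time features at four sensors to defect (x, y) coordinates. The geometry, network size, and training data here are synthetic stand-ins, not the thesis's finite element data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each sample is a vector of wave arrival times at
# four corner sensors; the target is the defect's (x, y) position.
def make_samples(n):
    xy = rng.uniform(0.1, 0.9, size=(n, 2))
    sensors = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
    times = np.linalg.norm(xy[:, None, :] - sensors[None, :, :], axis=2)
    return times, xy

X, Y = make_samples(200)

# One-hidden-layer network trained by plain gradient descent:
# an ANN acting as a function approximator, features -> coordinates.
W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.05
losses = []
for epoch in range(500):
    H = np.tanh(X @ W1 + b1)   # hidden activations
    P = H @ W2 + b2            # predicted (x, y)
    err = P - Y
    losses.append(float((err ** 2).mean()))
    # Backpropagation of the mean-squared error
    gP = 2 * err / len(X)
    gW2 = H.T @ gP; gb2 = gP.sum(0)
    gH = gP @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The same structure serves as a classifier by swapping the output layer for class scores, which is how the expert system's defect-counting stage could be framed.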

    Advanced Geoscience Remote Sensing

    Nowadays, advanced remote sensing technology plays a tremendous role in building a quantitative and comprehensive understanding of how the Earth system operates. Advanced remote sensing technology is also widely used to monitor and survey natural disasters and man-made pollution. In addition, telecommunication is considered a precise tool of advanced remote sensing technology. Indeed, precise use of remote sensing and telecommunication is not possible without a comprehensive understanding of mathematics and physics. This book has three parts: (i) microwave remote sensing applications; (ii) nuclear, geophysics and telecommunication; and (iii) environmental remote sensing investigations.

    Signal classification at discrete frequencies using machine learning

    Incidents such as the 2018 shutdown of Gatwick Airport due to a small Unmanned Aerial System (UAS) airfield incursion have shown that we do not have routine and consistent detection and classification methods in place to recognise unwanted signals in an airspace. Today, incidents of this nature take place around the world regularly. The first stage in mitigating a threat is knowing whether a threat is present. This thesis focuses on the detection and classification of Global Navigation Satellite System (GNSS) jamming radio frequency (RF) signal types and of RF signals from small commercially available UAS, using machine learning for early warning systems. RF signals can be computationally heavy, and sometimes sensitive, to collect. As neural networks require a lot of data to train from scratch, the thesis explores the use of transfer learning from the object detection field to lessen this burden by using graphical representations of the signal in the frequency and time domains. The thesis shows that combining the benefits of transfer learning, in both supervised and unsupervised settings, with graphical signal representations can provide high-accuracy detection and classification, down to the fidelity of whether a small UAS is flying or stationary. By treating the classification of RF signals as an image classification problem, this thesis has shown that transfer learning through CNN feature extraction reduces the need for large datasets while still providing high-accuracy results. CNN feature extraction and transfer learning were also shown to improve accuracy as a precursor to unsupervised learning, but at a cost in time, while raw images provided a good overall solution for timely clustering. Lastly, the thesis has shown that implementing the machine learning models on a Raspberry Pi with a software-defined radio (SDR) provides a viable option for low-cost early warning systems.
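The transfer-learning idea above, freezing a pretrained feature extractor and fitting only a lightweight classifier on spectrogram "images", can be sketched as follows. Here a fixed random projection stands in for the frozen convolutional layers (the thesis uses real pretrained object-detection networks), and the two synthetic signal classes, a tone and a chirp, are hypothetical stand-ins for jammer types.

```python
import numpy as np

rng = np.random.default_rng(1)

def spectrogram(sig, win=64):
    """Magnitude spectrogram via short-time FFT: a time-frequency
    'image' of the signal, as in the graphical representations."""
    frames = sig[: len(sig) // win * win].reshape(-1, win)
    return np.abs(np.fft.rfft(frames, axis=1))

def make_signal(kind, n=1024):
    t = np.arange(n) / n
    if kind == "tone":           # stand-in for a narrowband jammer
        sig = np.sin(2 * np.pi * 100 * t)
    else:                        # stand-in for a swept (chirp) jammer
        sig = np.sin(2 * np.pi * (50 + 200 * t) * t)
    return sig + 0.1 * rng.normal(size=n)

# Frozen "feature extractor": a fixed random projection standing in
# for the convolutional layers of a pretrained CNN (hypothetical).
feat_dim = 32
proj = rng.normal(size=(16 * 33, feat_dim))  # 16 frames x 33 rfft bins

def features(sig):
    return spectrogram(sig).ravel() @ proj

# Fit only a lightweight classifier (nearest centroid) on the frozen
# features: this is the small trainable head of transfer learning.
train = {k: np.mean([features(make_signal(k)) for _ in range(20)], axis=0)
         for k in ("tone", "chirp")}

def classify(sig):
    f = features(sig)
    return min(train, key=lambda k: np.linalg.norm(f - train[k]))

print(classify(make_signal("tone")), classify(make_signal("chirp")))
```

Because only the small head is trained, far fewer labelled examples are needed than for training a CNN from scratch, which is the burden reduction the abstract describes.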

    Deep combination of radar with optical data for gesture recognition: role of attention in fusion architectures

    Multimodal time series classification is an important aspect of human gesture recognition, in which limitations of individual sensors can be overcome by combining data from multiple modalities. In a deep learning pipeline, the attention mechanism further allows for a selective, contextual concentration on relevant features. However, while the standard attention mechanism is an effective tool when working with Natural Language Processing (NLP), it is not ideal when working with temporally or spatially sparse multi-modal data. In this paper, we present a novel attention mechanism, Multi-Modal Attention Preconditioning (MMAP). We first demonstrate that MMAP outperforms regular attention for the task of classification of modalities involving temporal and spatial sparsity, and secondly investigate the impact of attention in the fusion of radar and optical data for gesture recognition via three specific modalities: dense spatiotemporal optical data, spatially sparse/temporally dense kinematic data, and sparse spatiotemporal radar data. We explore the effect of attention on early, intermediate, and late fusion architectures and compare eight different pipelines in terms of accuracy and their ability to preserve detection accuracy when modalities are missing. Results highlight fundamental differences between late and intermediate attention mechanisms with respect to the fusion of radar and optical data.
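For reference, the standard scaled dot-product attention that the paper takes as its baseline can be sketched as below; MMAP itself is not specified in the abstract and is not reproduced here. The tensor shapes are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Standard scaled dot-product attention (the baseline that
    MMAP is compared against)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # contextual weighting over time steps
    return weights @ V, weights

# Toy cross-modal step: 4 time steps of 8-dim features from one
# modality attending over 6 time steps of another.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out, w = attention(Q, K, V)
print(out.shape, w.sum(axis=-1))  # weights over keys sum to 1 per query
```

With sparse inputs, many key positions carry no signal yet still receive softmax mass, which illustrates why plain attention struggles on temporally or spatially sparse modalities.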

    Deep Learning Methods for Remote Sensing

    Remote sensing is a field in which important physical characteristics of an area are extracted from emitted radiation, generally captured by satellite cameras, sensors onboard aerial vehicles, etc. The captured data help researchers develop solutions to sense and detect various phenomena such as forest fires, flooding, changes in urban areas, crop diseases, soil moisture, etc. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches and led to results that were unachievable until recently in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing.

    Pathology detection mechanisms through continuous acquisition of biological signals

    International Mention in the doctoral degree. Pattern identification is a widely known technology, used on a daily basis for both identification and authentication. Examples include biometric identification (fingerprint or facial), number plate recognition and voice recognition. However, when we move into the world of medical diagnostics this changes substantially. The field applies many recent innovations and technologies, but it is harder to find cases of pattern recognition applied to diagnostics, and the cases that do occur are always supervised by a specialist and performed in controlled environments. This behaviour is expected since, in this field, a false negative (failure to identify a pathology when it does exist) can be critical and lead to serious consequences for the patient. This can be mitigated by configuring the algorithm to be safe against false negatives; however, this raises the false positive rate, which may increase the workload of the specialist in the best case or even result in a treatment being given to a patient who does not need it. This means that, in many cases, validation of the algorithm's decision by a specialist is necessary; there may, however, be cases where this validation is not so essential, or where the first identification can be treated as a guideline to help the specialist. With this objective in mind, this thesis focuses on the development of an algorithm for the identification of lower-body pathologies. The identification is carried out by means of the way people walk (gait). Gait differs from one person to another, even making biometric identification possible through its use. However, when a person has a pathology, whether physical or psychological, their gait is affected, and this alteration generates a common pattern depending on the type of pathology. This thesis focuses exclusively on the identification of physical pathologies.
Another important aspect of this thesis is that the different algorithms are created with portability in mind, avoiding the obligation for the user to carry out the walks under excessive restrictions (in terms of both clothing and location). First, different algorithms are developed using different smartphone configurations for database acquisition; in particular, configurations of 1, 2 and 4 phones are used. The phones are placed on the legs using special holders so that they cannot move freely. Once all the walks have been captured, the first step is to filter the signals to remove possible noise. The signals are then processed to extract the different gait cycles (each corresponding to two steps) that make up the walks, and features are extracted from each cycle. Once the feature extraction process is finished, part of the features are used to train different machine learning algorithms, which are then used to classify the remaining features. However, the evidence obtained through the experiments with the different configurations and algorithms indicates that it is not feasible to perform pathology identification using smartphones. This can mainly be attributed to three factors: the quality of the signals captured by the phones, the unstable sampling frequency and the lack of synchrony between the phones. Second, owing to the poor results obtained using smartphones, the capture device is changed to a professional motion acquisition system, and two types of algorithm are proposed, one based on neural networks and the other based on the algorithms used previously. First, the acquisition of a new database is proposed; to facilitate data capture, a procedure is established, designed to leave the user in an unconstrained environment. Once all the data are available, the preprocessing is similar to that applied previously: the signals are filtered to remove noise and the different gait cycles that make up the walks are extracted.
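A minimal sketch of the filtering and gait-cycle segmentation steps just described, on a synthetic leg-mounted accelerometer signal; the sampling rate, smoothing window, peak threshold and minimum step spacing are all hypothetical parameters, not values from the thesis.

```python
import numpy as np

fs = 100  # Hz; hypothetical sampling rate of a leg-mounted sensor

# Synthetic vertical acceleration: roughly 2 steps per second plus
# noise, standing in for a real captured walk.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 2 * t) + 0.3 * rng.normal(size=t.size)

# 1. Filtering: a simple moving average stands in for the per-signal
#    low-pass filters chosen empirically in the thesis.
win = 11
smooth = np.convolve(signal, np.ones(win) / win, mode="same")

# 2. Step detection: local maxima above a threshold, at least 0.25 s
#    apart, are taken as steps.
peaks = []
for i in range(1, len(smooth) - 1):
    if smooth[i - 1] < smooth[i] >= smooth[i + 1] and smooth[i] > 0.5:
        if not peaks or i - peaks[-1] > fs // 4:
            peaks.append(i)

# 3. Gait cycles: one cycle spans two consecutive steps.
cycles = [(peaks[i], peaks[i + 2]) for i in range(0, len(peaks) - 2, 2)]
print(f"{len(peaks)} steps -> {len(cycles)} gait cycles")
```

Each extracted cycle would then be parameterised into features (cycle duration, amplitudes, and so on) before being fed to the classifiers.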
However, as the capture device provides information from several sensors and several locations, instead of using a common cut-off frequency we empirically set a cut-off frequency for each signal and position. With the data ready, a recurrent neural network based on the literature is created to give a first approximation to the problem. Given the feasibility of the neural network, different experiments are carried out with the aim of improving its performance. Finally, the other algorithm builds on the first part of the thesis: as before, it is based on the parameterisation of the gait cycles and employs machine learning algorithms. Unlike with the raw time signals, parameterising the cycles can generate spurious data; to eliminate these data, the dataset undergoes a preparation phase (cleaning and scaling). The prepared dataset is then split in two: one part is used to train the algorithms, which are then used to classify the remaining samples. The results of these experiments validate the feasibility of this algorithm for pathology detection. Next, different experiments are carried out with the aim of reducing the amount of information needed to identify a pathology without compromising accuracy. From these experiments, it can be concluded that it is feasible to detect pathologies using only 2 sensors placed on one leg.
Doctoral Programme in Electrical, Electronic and Automatic Engineering, Universidad Carlos III de Madrid. Committee: President: María del Carmen Sánchez Ávila; Secretary: Mariano López García; Member: Richard Matthew Gues