
    LIDAR-based wind speed modelling and control system design

    The main objective of this work is to explore the feasibility of using LIght Detection And Ranging (LIDAR) measurements and to develop a feedforward control strategy that improves wind turbine operation. First, pseudo-LIDAR measurement data are produced with the software package GH Bladed over the distance from the turbine to the wind measurement points. Next, a transfer function representing the evolution of the wind speed is developed. Based on this wind evolution model, a model-inverse feedforward control strategy is employed for pitch control in above-rated wind conditions, with the LIDAR-measured wind speed driving the feedforward path. Finally, the baseline feedback controller is augmented with the developed feedforward control. The control system is designed on a Supergen 5 MW wind turbine model linearised at the operating point and tested on the nonlinear model of the same system. System performance with and without the feedforward channel is compared. Simulation results suggest that, with LIDAR information, the added feedforward control has the potential to reduce blade and tower loads compared with the baseline feedback control alone.
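    As a rough illustration of the model-inverse feedforward idea described above, the sketch below adds a feedforward pitch term driven by a previewed wind signal to cancel the wind disturbance path. The first-order transfer functions, gains, and gust profile are illustrative placeholders, not the Supergen 5 MW model or the GH Bladed data used in the work.

```python
import numpy as np
from scipy import signal

# Minimal sketch of LIDAR-assisted model-inverse feedforward pitch control.
# G(s): pitch -> rotor-speed deviation; Gd(s): wind speed -> rotor-speed deviation.
# Both are illustrative first-order placeholders (assumptions).
G  = signal.TransferFunction([-2.0], [10.0, 1.0])
Gd = signal.TransferFunction([ 1.5], [10.0, 1.0])

# Ideal disturbance-rejecting feedforward: beta_ff = -G^{-1} * Gd * v_lidar.
# With identical denominators the inverse reduces to a static gain; in general
# a low-pass filter is added to keep the inverse proper.
k_ff = -(Gd.num[0] / G.num[0])

t = np.arange(0.0, 60.0, 0.05)
v_preview = 2.0 * (t > 20.0)          # hypothetical 2 m/s gust previewed by the LIDAR
beta_ff = k_ff * v_preview            # feedforward pitch command (added to feedback)

# Rotor-speed deviation with and without the feedforward contribution
_, y_no_ff, _ = signal.lsim(Gd, v_preview, t)
_, y_ff_plant, _ = signal.lsim(G, beta_ff, t)
print("peak deviation without FF:", np.max(np.abs(y_no_ff)))
print("peak deviation with    FF:", np.max(np.abs(y_no_ff + y_ff_plant)))
```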

    An integrated data-driven model-based approach to condition monitoring of the wind turbine gearbox

    Condition monitoring (CM) is considered an effective method for improving the reliability of wind turbines and implementing cost-effective maintenance. This paper presents a single-hidden-layer feedforward neural network (SLFN), trained using an extreme learning machine (ELM) algorithm, for condition monitoring of wind turbines. Gradient-based algorithms are commonly used to train SLFNs; however, these algorithms are slow and may become trapped in local optima. The ELM algorithm can dramatically reduce learning time and overcome the issues associated with local optima. In this paper, the ELM model is optimized using a genetic algorithm. The residual signal, obtained by comparing the model output with the actual output, is analyzed using the Mahalanobis distance measure because of its ability to capture correlations among multiple variables. An accumulated Mahalanobis distance value, obtained from a range of components, is used to evaluate the health of the gearbox, one of the critical subsystems of a wind turbine. Models have been identified from supervisory control and data acquisition (SCADA) data obtained from a working wind farm. The results show that the proposed training method is considerably faster than traditional techniques and that it can efficiently identify faults and the health condition of the gearbox in wind turbines.
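    A minimal sketch of the two core steps, ELM training of an SLFN and Mahalanobis-distance analysis of the residual, is given below. The synthetic data, number of hidden nodes, and target signal are assumptions for illustration; the genetic-algorithm optimization of the ELM is omitted.

```python
import numpy as np

# Minimal sketch: ELM training of a single-hidden-layer feedforward network,
# followed by Mahalanobis-distance analysis of the residuals.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                    # stand-in SCADA inputs (assumption)
Y = np.sin(X[:, :2]).sum(axis=1, keepdims=True)  # stand-in target signal (assumption)

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights (fixed in ELM)
b = rng.normal(size=(1, n_hidden))               # random biases (fixed in ELM)
H = np.tanh(X @ W + b)                           # hidden-layer output matrix

# ELM step: output weights by least squares (Moore-Penrose pseudo-inverse of H)
beta = np.linalg.pinv(H) @ Y

# Residuals between model output and measured output
residuals = Y - H @ beta

# Mahalanobis distance of each residual from the healthy-data mean
mu = residuals.mean(axis=0)
cov = np.cov(residuals, rowvar=False).reshape(residuals.shape[1], residuals.shape[1])
inv_cov = np.linalg.pinv(cov)
d = np.sqrt(np.einsum('ij,jk,ik->i', residuals - mu, inv_cov, residuals - mu))
print("accumulated Mahalanobis distance:", d.sum())
```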

    Predicting potential customer needs and wants for agile design and manufacture in an industry 4.0 environment

    Manufacturing is currently experiencing a paradigm shift in the way that products are designed, produced, and serviced, brought about mainly by the extensive use of the Internet and digital technologies. As a result of this shift, a new industrial revolution is emerging, termed “Industry 4.0” (i4), which promises to accommodate mass customisation at a mass production cost. For i4 to become a reality, however, multiple challenges need to be addressed, highlighting the need for design for agile manufacturing and, for this, a framework capable of integrating big data analytics arising from the service end, business informatics through the manufacturing process, and artificial intelligence (AI) across the entire manufacturing value chain. This thesis addresses these issues, with a focus on design for agile manufacturing. First, the state of the art in combining cutting-edge digital manufacturing technologies with big data analytics to support agile manufacturing is reviewed. The work then focuses on developing an AI-based framework to address one of the customisation issues in smart design and agile manufacturing: the prediction of potential customer needs and wants. Within this framework, an AI-based approach is developed to predict design attributes that help manufacturers decide on the best virtual designs to meet emerging customer needs and wants. In particular, various machine learning approaches are developed to explain at least 85% of the design variance when building a model to predict potential customer needs and wants. These approaches include k-means clustering, self-organizing maps, fuzzy k-means clustering, and decision trees, together with a support vector machine, to evaluate and extract conscious and subconscious customer needs and wants. A model capable of accurately predicting customer needs and wants for at least 85% of the classified design attributes is thus obtained, together with an analysis that determines the best design attributes and features for predicting customer needs and wants. As the analysed information can be used to advise the selection of desired attributes, it is fed back into a closed loop of the manufacturing value chain: design → manufacture → management/service → design → … A total of four case studies are undertaken to test and demonstrate the efficacy and effectiveness of the developed framework: 1) an evaluation model of consumer cars with multiple attributes, including categorical and numerical ones; 2) specifications of automotive vehicles in terms of various characteristics, including categorical and numerical instances; 3) fuel consumption of various car models and makes, taking into account a desire for low fuel costs and low CO2 emissions; and 4) computer parts design for recommending the best design attributes when buying a computer. The results show that decision trees, as a machine learning approach, work best in predicting customer needs and wants for smart design. With the tested framework and methodology, this thesis presents a holistic attempt to address the missing link between manufacture and customisation, that is, meeting customer needs and wants. Effective ways of achieving customisation for i4 and smart manufacturing are identified, achieved by predicting potential customer needs and wants and applying the prediction at the product design stage so that agile manufacturing can meet individual requirements at a mass production cost; such agility is one key element in realising Industry 4.0. Overall, this thesis contributes to improving the process of analysing data to predict potential customer needs and wants, which can then be used as input to customising product designs agilely.
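    The sketch below illustrates, under stated assumptions, the kind of pipeline described above: an unsupervised clustering step followed by a decision tree that predicts a design attribute. The synthetic data, feature meanings, cluster count, and the check against the 85% target are all illustrative, not the thesis's case-study data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Stand-in customer/design data (assumption): e.g. price, fuel cost, CO2, power, size
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 5))
attr = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # stand-in design attribute (binary)

# Unsupervised step: group customers into segments
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, segments])             # segment label used as an extra feature

# Supervised step: decision tree predicting the design attribute
X_tr, X_te, y_tr, y_te = train_test_split(X_aug, attr, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
acc = tree.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}  (target in the thesis: >= 0.85)")
```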

    Absolute calibration of radiometric partial discharge sensors for insulation condition monitoring in electrical substations

    Measurement of partial discharge (PD) is an important tool in monitoring the insulation integrity of high voltage (HV) equipment. Partial discharge is traditionally measured using galvanic contact techniques based on the IEC 60270 standard or near-field coupling [1]. Free-space radiometric (FSR) detection of PD is a relatively new technique. This work advances the calibration method for FSR measurements and proposes a methodology for FSR measurement of absolute PD intensity. Until now, it has been believed that absolute measurement of partial discharge intensity using the radiometric method is not possible; in this thesis it is demonstrated that such measurement is possible, and the first such absolute measurements are presented. Partial discharge sources were specially constructed, including a floating-electrode PD emulator, an acrylic-cylinder internal PD emulator, and an epoxy-dielectric internal PD emulator. Radiated signals are captured using a wideband biconical antenna [1]. Free-space radiometric and galvanic contact measurement techniques are compared, and discharge pulse shapes and PD characteristics under high voltage DC and AC conditions are obtained. The comparison shows greater similarity between the two measurements than was expected; it is inferred that the dominant mechanism shaping the spectrum is the band-limiting effect of the radiating structure rather than band limiting by the receiving antenna. The cumulative energies of PD pulses in both the time and frequency domains are also considered [2]. The frequency spectrum is obtained by FFT analysis of the time-domain pulses, and the relative spectral densities in the frequency bands 50 MHz – 290 MHz, 290 MHz – 470 MHz and 470 MHz – 800 MHz are determined. The calibration of the PD sources for use in the development of a Wireless Sensor Network (WSN) is presented. A method of estimating the absolute PD activity level from a radiometric measurement, by relating effective radiated power (ERP) to PD intensity using a PD calibration device, is proposed and demonstrated. The PD sources have been simulated using CST Microwave Studio, and the simulations are used to establish a relationship between radiated PD signals and PD intensity as defined by apparent charge transfer. To this end, the radiated fields predicted in the simulations are compared with measurements. There is sufficient agreement between simulations and measurements to suggest that the simulations could be used to investigate the relationship between PD intensity and the field strength of radiated signals [3].
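    As a rough illustration of the spectral-density split mentioned above, the sketch below FFTs a synthetic time-domain pulse and sums the energy in the three stated bands. The damped-oscillation pulse shape and the sample rate are assumptions, not the measured PD waveforms.

```python
import numpy as np

# Band-energy split of a PD-like pulse via FFT (all waveform parameters assumed)
fs = 2.0e9                                                   # 2 GS/s sample rate
t = np.arange(0, 2e-6, 1 / fs)
pulse = np.exp(-t / 50e-9) * np.sin(2 * np.pi * 300e6 * t)   # stand-in PD pulse

spectrum = np.fft.rfft(pulse)
freqs = np.fft.rfftfreq(len(pulse), d=1 / fs)
psd = np.abs(spectrum) ** 2

bands = {"50-290 MHz": (50e6, 290e6),
         "290-470 MHz": (290e6, 470e6),
         "470-800 MHz": (470e6, 800e6)}
total = psd[(freqs >= 50e6) & (freqs < 800e6)].sum()
for name, (lo, hi) in bands.items():
    frac = psd[(freqs >= lo) & (freqs < hi)].sum() / total
    print(f"{name}: {100 * frac:.1f}% of in-band energy")
```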

    Nonlinear dynamic process monitoring using kernel methods

    The application of kernel methods in process monitoring is well established. However, existing techniques need to be extended through novel implementation strategies in order to improve process monitoring performance. For example, process monitoring using kernel principal component analysis (KPCA) has been reported; nevertheless, the effect of combining kernel density estimation (KDE)-based control limits with KPCA for nonlinear process monitoring has not been adequately investigated and documented. Therefore, process monitoring using KPCA with KDE-based control limits is carried out in this work, and a new KPCA-KDE fault identification technique is also proposed. Furthermore, most process systems are complex, and data collected from them have more than one characteristic. Three techniques are therefore developed in this work to capture more than one process behaviour: linear latent variable CVA (LLV-CVA), kernel CVA using QR decomposition (KCVA-QRD), and kernel latent variable CVA (KLV-CVA). LLV-CVA captures both linear and dynamic relations in the process variables, while KCVA-QRD and KLV-CVA account for both nonlinearity and process dynamics. The previously reported CVA with kernel density estimation (CVA-KDE) technique does not address the nonlinear problem directly, while the regular kernel CVA approach requires regularisation of the constructed kernel data to avoid computational instability, which compromises process monitoring performance. The results of the work show that KPCA-KDE is more robust and detects faults at higher rates and earlier than the KPCA technique based on a Gaussian assumption on the process data. The proposed nonlinear dynamic methods also performed better than the aforementioned existing techniques, without employing ridge-type regularisation.
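    A minimal sketch of KPCA monitoring with a KDE-based control limit is shown below. The synthetic normal and faulty data, the RBF kernel width, the T²-style statistic, and the 99% limit are assumptions for illustration; the CVA-based variants are not sketched.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
X_train = rng.normal(size=(400, 8))                        # normal-operation data (assumed)
X_test = np.vstack([rng.normal(size=(100, 8)),
                    rng.normal(loc=2.0, size=(50, 8))])    # last 50 samples "faulty"

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1).fit(X_train)
scores_train = kpca.transform(X_train)
mu = scores_train.mean(axis=0)
inv_cov = np.linalg.pinv(np.cov(scores_train, rowvar=False))

def t2(scores):
    """T2-like statistic in the kernel principal component space."""
    d = scores - mu
    return np.einsum('ij,jk,ik->i', d, inv_cov, d)

# KDE fitted to the training statistic; the control limit is its 99th percentile,
# estimated here by resampling from the fitted density.
kde = gaussian_kde(t2(scores_train))
limit = np.percentile(kde.resample(20000).ravel(), 99)

alarms = t2(kpca.transform(X_test)) > limit
print(f"alarm rate on faulty segment: {alarms[-50:].mean():.2f}")
```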

    Impact of assistive technologies in supporting people with dementia.

    In recent decades, many Assistive Technologies (ATs) have been developed to promote independence among people with dementia (PWD). Although there is a high rate of AT abandonment, only a handful of studies have focused on AT usability evaluation from the user's point of view. The aim of this thesis is to empirically investigate the usability of ATs from the perspective of PWD and to measure their impact on PWD's lives. Following a multi-methods research approach, the first part of the thesis uses secondary research methods, including a literature review and systematic mapping studies. The second part uses primary research methods, including interviews (N=20) and questionnaire-based surveys (N=327), for data collection and requirements elicitation. The third part covers the design, development, and testing of an assistive software application through case studies (N=8). The first mapping study categorised existing general ATs into five major categories: robotics, monitoring, reminders, communication, and software. The second mapping study categorised software-based ATs into nine categories: cognitive help, reminders, health/activity monitoring, socialization, leisure, travel help, dementia detection, dementia prevention, and rehabilitation. The qualitative results showed that most PWD use ATs for socialization, and highlighted user interface efficacy, tailoring to individual needs, and simplified functions as the major limitations of existing ATs. The quantitative results identified eleven factors for AT usage: operational support, physical support, psychological support, social support, cultural match, reduced external help, affordability, travel help, compatibility, effectiveness, and retention. The statistical analysis showed that improved (social, psychological, and travel) support and a reduced need for external help in operating ATs greatly affect AT effectiveness and retention. Based on PWD requirements, an assistive software application named E-Community for Dementia (ECD) was developed and tested through case studies involving 8 PWD and 40 volunteers. The participants were able to obtain their daily needed items in less time and in a friendlier manner with the help of their neighbours, and the involvement of caregivers for medication, meals, prayers, etc. was significantly reduced. The painting function helped evoke their memories and encouraged them to perform activities from their youth; the news and weather functions kept them updated about the world around them; and the travel tutor guided them in travelling safely outside the home and ensured they got back home independently. The enhanced interaction between the PWD and their neighbours significantly reduced their social isolation. The results support the idea of creating dementia-friendly communities at street level, which is a cost-effective and reliable solution. The major outcomes of this thesis are AT categorisation, evaluation of user experiences, factor identification and ranking, user requirements elicitation, assistive software application development, and case studies. The thesis contributes considerably to the empirical investigation of the impact of ATs in supporting PWD, and the implementation of the ECD contributes to the wellbeing of PWD and saves costs spent on caregivers and care companies. In future, the same study could be conducted in other settings to analyse the role of culture in AT acceptance.

    Modelling and Control of Chemical Processes using Local Linear Model Networks

    Recently, technology and research in control systems have made rapid progress in numerous fields, such as chemical process engineering. Modelling and control can be challenging because the procedures applied to chemical reactors and processes are nonlinear. The aim of this research is therefore to address these challenges by applying a local linear model network technique to identify and control temperature, pH, and dissolved oxygen. The reactor studied is nonlinear, with heating power, base flow rate, and air flow rate as the input parameters and temperature, pH, and dissolved oxygen (pO2) as the output parameters. The local linear model network technique is proposed and applied to identify and control the pH process. This method was selected following a comparison with radial basis function neural networks (RBFNN) and the adaptive neuro-fuzzy inference system (ANFIS); the results revealed that local linear model networks yielded lower mean square errors than RBFNN and ANFIS. Proportional-integral (PI) and local linear model controllers are then implemented for the pH process using the direct design method. These controllers were designed on the first-order pH model with four local models and a width scaling factor of 20. Local linear model networks are also used to identify and control the dissolved oxygen level. To select the best identification method, a gradient descent learning algorithm is used to update the width scaling factor in the network, with findings compared to the manual approach; the results demonstrated that manually updating the scaling factor yielded a lower mean square error than gradient descent. Consequently, PI and local linear model controllers are designed using the direct design method to control and maintain the dissolved oxygen level. These controllers were designed on first- and second-order pO2 models with three local models and a scaling factor of 20. The results for the first-order model revealed good control performance; however, the second-order model led to ringing poles, which caused an unstable output with oscillation in the input. This problem was solved by zero cancellation in the controller design, after which the results show good control performance. Finally, the temperature process was identified using local linear model networks, and PI and local linear model controllers were designed using the direct design method. The results show that the first-order model gives acceptable output responses compared with the higher-order model. The control action behaved much better on the first-order model with M=4 local models than with M=3 or M=5, and the mean square error was lower with M=4 local models in the controller than with M=3 or M=5.
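    A minimal sketch of a local linear model network of the kind used above is given below: the predicted output is a validity-weighted blend of simple local linear models. The operating-point centres, local model parameters, input signal, and the way the width scaling factor is applied are illustrative assumptions, not the identified pH, pO2, or temperature models from the thesis.

```python
import numpy as np

# Local linear model network: output = validity-weighted blend of local models.
centres = np.array([3.0, 5.0, 7.0, 9.0])       # M = 4 local models, centred on pH (assumed)
width = 20.0                                   # width scaling factor (value from the text)
sigma = (centres[1] - centres[0]) / np.sqrt(width)

# First-order local models: y[k+1] = a_i * y[k] + b_i * u[k]  (parameters assumed)
a = np.array([0.90, 0.85, 0.80, 0.75])
b = np.array([0.10, 0.15, 0.20, 0.25])

def validity(y):
    """Normalised Gaussian validity functions evaluated at operating point y."""
    phi = np.exp(-0.5 * ((y - centres) / sigma) ** 2)
    return phi / phi.sum()

def step(y, u):
    """One-step prediction of the network: blend of the local first-order models."""
    w = validity(y)
    return float(np.dot(w, a * y + b * u))

# Simulate the blended model under a constant input (e.g. base flow rate)
y, traj = 4.0, []
for _ in range(100):
    y = step(y, u=2.0)
    traj.append(y)
print(f"predicted steady-state output: {traj[-1]:.2f}")
```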

    Technologies and Applications for Big Data Value

    This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications in the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that positions the following chapters in terms of their contributions to the technology frameworks that are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is arranged in two parts. The first part, “Technologies and Methods”, contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, “Processes and Applications”, details experience reports and lessons from using big data and data-driven approaches in processes and applications; its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the nucleus of the European data community, bringing businesses together with leading researchers to harness the value of data for the benefit of society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in fields including big data, data science, data engineering, and machine learning and AI; and second, practitioners and industry experts engaged in data-driven systems and software design and deployment projects who are interested in employing these advanced methods to address real-world problems.

    Perceptions Measurement of Professional Certifications to Augment Buffalo State College Baccalaureate Technology Programs, as a Representative American Postsecondary Educational Institution

    The purpose of this study was to assess, measure, and analyze whether voluntary, nationally recognized professional certification credentials are important for augmenting technology programs at Buffalo State College (BSC), as a representative postsecondary baccalaureate degree-granting institution offering technology curricula. Six BSC undergraduate technology programs were evaluated within the scope of this study: 1.) Computer Information Systems; 2.) Electrical Engineering, Electronics; 3.) Electrical Engineering, Smart Grid; 4.) Industrial Technology; 5.) Mechanical Engineering; and 6.) Technology Education. The study considered three aspects of the problem: a.) postsecondary technology program enrollment and graduation trends; b.) the value and awareness of professional certifications among employers and students; and c.) professional certification relevancy and integration into postsecondary curricula. The study was conducted through surveys and interviews with four technology-related purposive sample groups: 1.) BSC program alumni; 2.) BSC and non-BSC technology program faculty; 3.) hiring managers and industry leaders; and 4.) non-BSC alumni and certification holders. In addition, the study included an analysis of relevant professional certification organizations and student enrollment data from the six technology programs within scope. Research methods included both quantitative and qualitative analytical techniques. The study concluded that undergraduate technology students benefit from a greater awareness of relevant professional certifications and their perceived value, and also found that the academic community may be well served by acknowledging the increasing trend of integrating professional certifications into postsecondary technology programs.
