357 research outputs found

    Method of Lines and Runge-Kutta method in solving partial differential equation for heat equation

    Solving the differential equation for Newton's cooling law has traditionally relied on a collection of methods developed over a long period. However, stiff problems cannot be solved efficiently by some of these methods. This research attempts to overcome such problems and compares results from two classes of numerical methods for heat equation problems. The heat or diffusion equation, an example of a parabolic equation, is classified as a partial differential equation. Two classes of numerical methods, the Method of Lines and the Runge-Kutta method, are applied and discussed. The development, analysis and implementation were carried out in the Matlab language, with graphs exhibited to highlight the accuracy and efficiency of the numerical methods. The solutions of the equations show that better accuracy is achieved through the new combined Method of Lines and Runge-Kutta method
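As a minimal sketch of the combined approach described above (not the authors' Matlab code; the grid size, diffusivity, and boundary/initial conditions are illustrative), the heat equation can be semi-discretized in space with the Method of Lines and the resulting ODE system integrated with classical fourth-order Runge-Kutta:

```python
import numpy as np

# Method of Lines: discretize u_t = alpha * u_xx in space with central
# differences, then integrate the resulting ODE system with classical RK4.
alpha, L, n = 1.0, 1.0, 21          # diffusivity, rod length, grid points
dx = L / (n - 1)
x = np.linspace(0.0, L, n)
u = np.sin(np.pi * x)               # initial temperature profile

def rhs(u):
    """Semi-discretization: du_i/dt = alpha * (u_{i-1} - 2u_i + u_{i+1}) / dx^2."""
    du = np.zeros_like(u)
    du[1:-1] = alpha * (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2
    return du                        # endpoints stay fixed (Dirichlet boundaries)

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, t_end = 1e-4, 0.1                # dt chosen well inside RK4 stability limit
for _ in range(int(t_end / dt)):
    u = rk4_step(u, dt)

# For this initial condition the exact solution is known, so the error
# of the combined MOL-RK4 scheme can be checked directly.
exact = np.sin(np.pi * x) * np.exp(-alpha * np.pi**2 * t_end)
print(float(np.max(np.abs(u - exact))))   # small error confirms accuracy
```

Note that for an explicit integrator the time step must respect the stiffness of the semi-discretized system: the spectral radius of the discrete Laplacian grows like 4&#47;dx², which is exactly the stiffness issue the abstract mentions.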

    Life jacket

    Anyone who cannot swim well should wear a life jacket whether they are in or around the water. Even those who can swim well should wear a life jacket when swimming, fishing, boating or doing any other water-related activity. A life jacket is a kind of safety jacket that keeps the wearer afloat in the water. Whether the wearer is conscious or unconscious, wearing a life jacket minimizes the risk of drowning, because the life jacket assists the wearer in staying afloat in the water

    Design optimization for the two-stage bivariate pattern recognition scheme

    In manufacturing operations, unnatural process variation has become a major contributor to poor product quality. Therefore, monitoring and diagnosis of variation is critical in quality control. Monitoring refers to identifying the process condition, whether it is running in a statistically in-control or out-of-control state, whereas diagnosis refers to identifying the source of an out-of-control process. Selection of an SPC scheme becomes more challenging when two correlated variables are involved, a setting known as bivariate quality control (BQC). Generally, traditional SPC charting schemes are known to be effective for monitoring, but they are unable to provide information for diagnosis. To overcome this issue, many researchers have proposed artificial neural network (ANN)-based pattern recognition schemes. Such schemes mainly utilize raw data as the input representation to an ANN recognizer, which results in limited performance. In this research, an integrated MEWMA-ANN scheme was investigated, and the optimal design parameters for the MEWMA control chart were studied. The study focused on BQC with variation in mean shifts (μ = ±0.75 ~ 3.00 standard deviations) and cross correlation (ρ = 0.1 ~ 0.9). The monitoring and diagnosis performances were evaluated based on average run length (ARL0, ARL1) and recognition accuracy (RA), respectively. The selected optimal design parameters, λ = 0.10 and H = 8.64, gave the best performance among the designs, namely average run length ARL1 = 3.24 ~ 16.93 (for the out-of-control process) and recognition accuracy RA = 89.05 ~ 97.73%. For the in-control process, the design parameters λ = 0.40 and H = 10.31 gave superior performance with ARL0 = 676.81 ~ 921.71, which is more effective in avoiding false alarms at any correlation
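The MEWMA statistic underlying such a chart can be sketched as follows. The design values λ = 0.10 and H = 8.64 come from the abstract; the in-control covariance matrix and the data stream are illustrative stand-ins, and this is not the authors' implementation:

```python
import numpy as np

# Multivariate EWMA: Z_i = lam * X_i + (1 - lam) * Z_{i-1}; signal when
# T^2_i = Z_i' Cov(Z_i)^{-1} Z_i exceeds the control limit H.
lam, H = 0.10, 8.64
sigma = np.array([[1.0, 0.5], [0.5, 1.0]])   # assumed in-control covariance

def mewma_t2(X, lam, sigma):
    """Return the MEWMA T^2 statistic for each row of X (n x p)."""
    z = np.zeros(X.shape[1])
    t2 = []
    for i, x in enumerate(X, start=1):
        z = lam * x + (1.0 - lam) * z
        # exact covariance of Z at step i
        cov_z = lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * i)) * sigma
        t2.append(z @ np.linalg.solve(cov_z, z))
    return np.array(t2)

rng = np.random.default_rng(0)
in_ctrl = rng.multivariate_normal([0, 0], sigma, size=50)
shifted = rng.multivariate_normal([1.5, 1.5], sigma, size=50)  # mean shift
t2 = mewma_t2(np.vstack([in_ctrl, shifted]), lam, sigma)
print(int(np.argmax(t2 > H)))  # index of the first out-of-control signal
```

In an integrated MEWMA-ANN scheme, windows of such statistics (rather than raw data) would then feed the ANN recognizer for diagnosis.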

    Maintenance Management of Wind Turbines

    “Maintenance Management of Wind Turbines” considers the main concepts and the state of the art, as well as advances and case studies on this topic. Maintenance is a critical variable for industry in order to reach competitiveness; together with operations, it is the most important variable in the wind energy industry. Therefore, the correct management of corrective, predictive and preventive policies for any wind turbine is required. The content also considers original research works that focus on content complementary to other sub-disciplines, such as economics, finance, marketing, decision and risk analysis, engineering, etc., in the maintenance management of wind turbines. This book focuses on real case studies, concerning topics such as failure detection and diagnosis, fault trees and related sub-disciplines (e.g., FMECA, FMEA, etc.). Most of them link these topics with financial, scheduling, resource and downtime considerations in order to increase productivity, profitability, maintainability, reliability, safety and availability, and to reduce costs and downtime in a wind turbine. Advances in mathematics, models, computational techniques and dynamic analysis are employed for maintenance management analytics in this book. Finally, the book considers computational techniques, dynamic analysis, probabilistic methods and mathematical optimization techniques that are expertly blended to support the analysis of multi-criteria decision-making problems with defined constraints and requirements

    Degradation Modeling and Remaining Useful Life Estimation: From Statistical Signal Processing to Deep Learning Models

    Aging critical infrastructures and valuable machinery, together with recent catastrophic incidents such as the collapse of the Morandi bridge or the Gulf of Mexico oil spill disaster, call for an urgent quest to design advanced and innovative prognostic solutions and to efficiently incorporate multi-sensor streaming data sources for industrial development. Prognostic health management (PHM) is among the most critical disciplines exploiting the strong interdependency between signal processing and machine learning techniques, forming a key enabling technology for the maintenance of complex industrial and safety-critical systems. Recent advancements in predictive analytics have empowered the PHM paradigm to move from traditional condition-based monitoring solutions and preventive maintenance programs to predictive maintenance that provides an early warning of failure, in several domains ranging from manufacturing and industrial systems to transportation and aerospace. PHM is centered on two core dimensions: the first takes into account the behavior and the evolution over time of a fault once it occurs, while the second aims at estimating or predicting the remaining useful life (RUL) during which a device can perform its intended function. The first dimension is degradation, which is usually determined by a degradation model derived from measurements of critical parameters relevant to the system. Developing an accurate model for the degradation process is a primary objective in prognosis and health management, and extensive research has been conducted to develop new theories and methodologies for degradation modeling and to accurately capture the degradation dynamics of a system.
However, a unified degradation framework has not yet been developed, due to: (i) structural uncertainties in the state dynamics of the system and (ii) the complex nature of the degradation process, which is often non-linear and difficult to model statistically. Thus, even for a single system, there is no consensus on the best degradation model. In this regard, this thesis tries to bridge this gap by proposing a general model that is able to capture the true degradation path without any prior knowledge of the true degradation model of the system. Modeling and analysis of degradation behavior lead us to RUL estimation, which is the second dimension of PHM and the second part of the thesis. The RUL, the time a machine is expected to work before requiring repair or replacement, is the main pillar of preventive maintenance. Effective and accurate RUL estimation can avoid catastrophic failures, maximize operational availability, and consequently reduce maintenance costs. RUL estimation is therefore of paramount importance and has gained significant attention for improving systems health management in complex fields including the automotive, nuclear, chemical, and aerospace industries, to name but a few. A vast number of studies on different approaches to remaining useful life have been proposed, and they can be divided into three broad categories: (i) physics-based, (ii) data-driven, and (iii) hybrid (multiple-model) approaches. Each category has its own limitations and issues: physics-based approaches hardly adapt to different prognostic applications, while data-driven approaches suffer from accuracy degradation, because the learned models deviate from the real behavior of the system, and hardly sustain good generalization.
Our thesis belongs to the third category, as it is the most promising one; in particular, it proposes new hybrid models, based on two different architectures of deep neural networks, which have great potential to tackle the complex prognostic issues associated with systems with complex and unknown degradation processes
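The degradation-to-RUL pipeline described above can be illustrated with a deliberately simple, hypothetical data-driven sketch (a linear trend fit and threshold extrapolation, not the thesis's deep hybrid models; the health indicator, noise level and failure threshold are all made up for illustration):

```python
import numpy as np

# Fit a degradation trend to a noisy health indicator, then extrapolate
# to a failure threshold to estimate the remaining useful life (RUL).
rng = np.random.default_rng(1)
t = np.arange(0, 60.0)                        # hours observed so far
true_path = 0.05 * t                          # hidden linear degradation
hi = true_path + rng.normal(0, 0.02, t.size)  # noisy health indicator
threshold = 5.0                               # failure level of the indicator

# Least-squares fit of hi ~ a*t + b (the degradation model)
a, b = np.polyfit(t, hi, deg=1)
t_fail = (threshold - b) / a                  # time the fit crosses the threshold
rul = t_fail - t[-1]                          # remaining useful life estimate
print(round(float(rul), 1))
```

Real prognostic models replace the linear fit with stochastic or learned degradation dynamics and attach uncertainty to the crossing time, but the estimate-the-path, extrapolate-to-threshold structure is the same.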

    Deep Clustering and Deep Network Compression

    The use of deep learning has grown increasingly in recent years, thereby becoming a much-discussed topic across a diverse range of fields, especially in computer vision, text mining, and speech recognition. Deep learning methods have proven to be robust in representation learning and have attained extraordinary achievements. Their success is primarily due to the ability of deep learning to discover and automatically learn feature representations by mapping input data into abstract and composite representations in a latent space. Deep learning’s ability to deal with high-level representations from data has inspired us to make use of learned representations, aiming to enhance unsupervised clustering and to evaluate the characteristic strength of internal representations to compress and accelerate deep neural networks. Traditional clustering algorithms attain a limited performance as the dimensionality increases. Therefore, the ability to extract high-level representations provides beneficial components that can support such clustering algorithms. In this work, we first present DeepCluster, a clustering approach embedded in a deep convolutional auto-encoder (DCAE). We introduce two clustering methods, namely DCAE-Kmeans and DCAE-GMM. DeepCluster allows data points to be grouped into their respective clusters in the latent space by simultaneously optimizing the clustering objective and the DCAE objective in a joint cost function, producing stable representations appropriate for the clustering process. Both qualitative and quantitative evaluations of the proposed methods are reported, showing the efficiency of deep clustering on several public datasets in comparison to previous state-of-the-art methods. Following this, we propose a new version of the DeepCluster model that includes varying degrees of discriminative power. This introduces a mechanism which enables the imposition of regularization techniques and the involvement of a supervision component.
The key idea of our approach is to distinguish the discriminatory power of numerous structures when searching for a compact structure to form robust clusters. The effectiveness of injecting various levels of discriminatory power into the learning process is investigated, alongside an exploration and analytical study of the discriminatory power obtained through the use of two discriminative attributes: data-driven discriminative attributes, with the support of regularization techniques, and supervision discriminative attributes, with the support of the supervision component. An evaluation is provided on four different datasets. The use of neural networks in various applications is accompanied by a dramatic increase in computational costs and memory requirements. Making use of the characteristic strength of learned representations, we propose an iterative pruning method that simultaneously identifies the critical neurons and prunes the model during training, without involving any pre-training or fine-tuning procedures. We introduce a majority voting technique that compares the activation values among neurons and assigns a voting score to evaluate their importance quantitatively. This mechanism effectively reduces model complexity by eliminating the less influential neurons, and aims to determine a subset of the whole model that can represent the reference model with far fewer parameters within the training process. Empirically, we demonstrate that our pruning method is robust across various scenarios, including fully-connected networks (FCNs), sparsely-connected networks (SCNs), and convolutional neural networks (CNNs), using two public datasets. Moreover, we also propose a novel framework to measure the importance of individual hidden units by computing a measure of relevance that identifies the most critical filters and prunes them to compress and accelerate CNNs.
Unlike existing methods, we introduce the use of the activation of feature maps to detect valuable information and the essential semantic parts, with the aim of evaluating the importance of feature maps, inspired by recent work on neural network interpretability. A majority voting technique based on the degree of alignment between a semantic concept and individual hidden unit representations is utilized to evaluate feature-map importance quantitatively. We also propose a simple yet effective method to estimate new convolution kernels based on the remaining crucial channels to accomplish effective CNN compression. Experimental results show the effectiveness of our filter selection criteria, which outperform the state-of-the-art baselines. To conclude, we present a comprehensive, detailed review of time-series data analysis, with emphasis on deep time-series clustering (DTSC), and a founding contribution to the area of applying deep clustering to time-series data by presenting the first case study in the context of movement behavior clustering utilizing the DeepCluster method. The results are promising, showing that the latent space encodes sufficient patterns to facilitate accurate clustering of movement behaviors. Finally, we identify the state of the art and present an outlook on this important field of DTSC from five important perspectives
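The activation-based majority-voting idea can be roughly illustrated as follows. This is a simplified stand-in, not the thesis's implementation: here each input sample casts a vote for every hidden unit whose activation exceeds the layer median, and the lowest-voted units are pruned. The activation matrix is random, standing in for a real layer's outputs:

```python
import numpy as np

# Majority-voting importance score for hidden units: count, over a batch,
# how often each unit's activation is above the layer median, then keep
# only the most-voted units.
rng = np.random.default_rng(0)
acts = rng.random((256, 32))                 # (batch samples, hidden units)

votes = (acts > np.median(acts, axis=1, keepdims=True)).sum(axis=0)
keep_ratio = 0.75                            # prune the bottom 25% of units
n_keep = int(keep_ratio * acts.shape[1])
keep = np.argsort(votes)[-n_keep:]           # indices of surviving units
print(sorted(keep.tolist()))
```

Applied iteratively during training, this kind of score lets the less influential units be removed without a separate pre-training or fine-tuning stage.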

    DATA-DRIVEN ANALYTICAL MODELS FOR IDENTIFICATION AND PREDICTION OF OPPORTUNITIES AND THREATS

    During the lifecycle of mega engineering projects such as energy facilities, infrastructure projects, or data centers, the executives in charge should take into account the potential opportunities and threats that could affect the execution of such projects. These opportunities and threats can arise from different domains, including, for example, the geopolitical, economic, or financial domains, and can have an impact on different entities, such as countries, cities, or companies. The goal of this research is to provide a new approach to identify and predict opportunities and threats, using large and diverse data sets and ensemble Long Short-Term Memory (LSTM) neural network models to inform domain-specific foresights. In addition to predicting the opportunities and threats, this research proposes new techniques to help decision-makers with deduction and reasoning. The proposed models and results provide structured output to inform the executive decision-making process concerning large engineering projects (LEPs). This research proposes new techniques that provide not only reliable time-series predictions but also uncertainty quantification, to help make more informed decisions. The proposed ensemble framework consists of the following components: first, processed domain knowledge is used to extract a set of entity-domain features; second, structured learning based on Dynamic Time Warping (DTW), to learn similarity between sequences, and Hierarchical Clustering Analysis (HCA) is used to determine which features are relevant for a given prediction problem; and finally, an automated decision based on the input and the structured learning from the DTW-HCA is used to build a training data set which is fed into a deep LSTM neural network for time-series predictions. A set of deeper ensemble programs is proposed, such as Monte Carlo Simulations and Time Label Assignment, to offer a controlled setting for assessing the impact of external shocks and a temporal alert system, respectively.
The developed model can be used to inform decision-makers about the set of opportunities and threats that their entities and assets face as a result of being engaged in an LEP, accounting for epistemic uncertainty
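The DTW component of the framework can be sketched with the textbook dynamic-programming recurrence; the sine series below are purely illustrative:

```python
import numpy as np

# Textbook dynamic time warping (DTW) distance: D[i, j] holds the minimal
# accumulated cost of aligning a[:i] with b[:j], using the three standard
# moves (match, insertion, deletion).
def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A phase-shifted copy of a series stays close under DTW even when the
# point-wise (L1) distance is large; this tolerance to temporal misalignment
# is why DTW suits sequence similarity ahead of hierarchical clustering.
t = np.linspace(0, 2 * np.pi, 50)
a, b = np.sin(t), np.sin(t + 0.5)
print(float(dtw(a, b)), float(np.abs(a - b).sum()))
```

In the framework above, the pairwise DTW distances between feature sequences would then form the distance matrix handed to the hierarchical clustering step.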

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state-of-the-art of smart processing of data coming from wearable, implantable or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Revealing examples are brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine

    Advances in Robotics, Automation and Control

    The book presents an excellent overview of recent developments in the different areas of robotics, automation and control. Through its 24 chapters, this book presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. In this book, we also find navigation and vision algorithms, and automatic handwriting comprehension and speech recognition systems that will be included in the next generation of productive systems developed by man