    Rolling element bearing fault diagnostics using the blind deconvolution technique

    Bearing failure is one of the foremost causes of breakdown in rotating machinery; such failures can be catastrophic and result in costly downtime, so bearing condition monitoring plays an important role in machine maintenance. The signal observed at a measurement point is often corrupted by extraneous noise during transmission, which makes the early detection of an incipient bearing fault difficult, and it is important to detect incipient faults before catastrophic failure occurs. Numerous advanced signal processing techniques have been developed to detect defective bearing signals, but with varying degrees of success, because they require a high Signal to Noise Ratio (SNR): the fault components need to be larger than the background noise. Vibration analyses in the time and frequency domains are commonly used to detect machinery failure, but these methods likewise require a relatively high SNR. Hence, it is essential to minimize the noise component in the observed signal before post-processing is conducted. In this research, the detection of rolling element bearing faults by vibration analysis is investigated. The expected time intervals between the impacts of faulty bearing components are analysed using blind deconvolution as a feature extraction technique to recover the source signal. Blind deconvolution refers to the process of learning the inverse of an unknown channel and applying it to the observed signal to recover the source signal of a damaged bearing. The technique improves the estimate of the time period between impacts and consequently provides a better approach to identifying a damaged bearing. The procedure for obtaining the optimum inverse equalizer filter is addressed to provide the filter parameters for the blind deconvolution process.
The efficiency and robustness of the proposed algorithm are assessed initially using different kinds of corrupting noise. The results show that the proposed algorithm works well with simulated periodic corrupting noise, and that blind deconvolution behaves as a notch filter in removing the noise components. This research applies the blind deconvolution technique with an optimum equalizer design to improve the SNR for the detection of damaged rolling element bearings. The filter length of the blind equalizer needs to be adjusted continually for different operating conditions and for machines of different size and structure. To determine the optimum filter length, a simulation test was conducted with a pre-recorded bearing signal (the source) corrupted with noise of varying magnitude. From the output, the modified Crest Factor (CF) and Arithmetic Mean (AM) of the recovered signal are plotted against the filter length, and the optimum filter length is selected by observing where the plots converge close to the pre-determined source feature values. The selected filter lengths, with the corresponding CF and AM values, are stored in a training set, and a pre-trained neural network is designed to learn this behaviour and predict the optimum filter length. The performance of the blind deconvolution technique was assessed using kurtosis values. The capability of blind deconvolution with the optimum filter length developed in the simulation studies was then applied on a bearing life test rig: lifetime testing was conducted to gauge the performance of the technique in detecting the growing potential failure of a new bearing that is eventually run to failure.
Results from unseeded new-bearing tests differ from seeded tests, because seeded defects have known characteristic defect frequencies that can be used to track a specific damaged frequency component. In this test, the test bearing was operated continuously until failure occurred; the proposed technique was then applied to monitor its condition, and a trend over the bearing's life was established. The results revealed the superiority of the technique in identifying the periodic components of the bearing signal before final breakdown of the test bearing, and show that the proposed technique with optimum filter length does improve the SNR of the deconvolved signal and can be used for automatic feature extraction and fault classification. This technique has potential for use in machine diagnostics.
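The impulsiveness features the abstract relies on (Crest Factor for filter-length selection, kurtosis for assessing the deconvolved output) can be sketched in a few lines. This is a minimal NumPy illustration on a simulated impact train, not the paper's implementation; the signal length, impact period, and noise level are invented for the example:

```python
import numpy as np

def crest_factor(x):
    # Crest Factor (CF): peak amplitude over RMS; impulsive fault signals score high.
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

def kurtosis(x):
    # Normalised fourth central moment: ~3 for Gaussian noise, much larger
    # for the periodic impacts produced by a damaged bearing element.
    x = x - np.mean(x)
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

# Simulated bearing signal: periodic impacts at the fault's characteristic
# interval, buried in broadband Gaussian noise picked up during transmission.
rng = np.random.default_rng(0)
n, period = 4096, 128
source = np.zeros(n)
source[::period] = 5.0
observed = source + rng.normal(0.0, 1.0, n)

# The noise masks the impulsiveness both features are designed to expose,
# which is why the signal is deconvolved before feature extraction.
print(crest_factor(source) > crest_factor(observed))  # True
print(kurtosis(source) > kurtosis(observed))          # True
```

Both features drop sharply as noise is added, which is what makes them usable as convergence indicators when sweeping the equalizer filter length.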

    Permutation based decision making under fuzzy environment using Tabu search

    The permutation method is one of the techniques used for Multiple Criteria Decision Making (MCDM). In its classical form, the weights and decision matrix components are assumed to be crisp. However, when group decision making is under consideration and the decision makers cannot agree on crisp values for the weights and decision matrix components, fuzzy numbers should be used. In this article, the fuzzy permutation technique for MCDM problems is explained. The main deficiency of the permutation method is its large computational time, so a Tabu Search (TS) based algorithm is proposed to reduce it. A numerical example illustrates the proposed approach, and some benchmark instances drawn from the literature are then solved by the proposed TS. Analysis of the results shows the good performance of the proposed method.
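The idea can be sketched with a crisp (non-fuzzy) toy instance: the permutation method scores every candidate ranking of the alternatives by net concordance, and Tabu search explores the ranking space instead of enumerating all n! permutations. This is a minimal illustration under those assumptions, not the authors' algorithm; the decision matrix, weights, and tabu parameters are invented:

```python
import itertools
import random

def permutation_score(perm, decision, weights):
    # Net concordance of a candidate ranking: for every ordered pair
    # (i ranked before j), each criterion that agrees adds its weight
    # and each criterion that disagrees subtracts it.
    score = 0.0
    for a, i in enumerate(perm):
        for j in perm[a + 1:]:
            for c, w in enumerate(weights):
                score += w if decision[i][c] >= decision[j][c] else -w
    return score

def tabu_search(decision, weights, iters=100, seed=0):
    # Tabu search over rankings with an adjacent-swap neighbourhood;
    # a short tabu list forbids immediately undoing recent moves.
    rng = random.Random(seed)
    n = len(decision)
    tabu_len = max(1, (n - 1) // 2)   # keep at least one move available
    current = list(range(n))
    rng.shuffle(current)
    best, best_val = current[:], permutation_score(current, decision, weights)
    tabu = []
    for _ in range(iters):
        candidates = []
        for i in range(n - 1):
            if (i, i + 1) in tabu:
                continue
            nb = current[:]
            nb[i], nb[i + 1] = nb[i + 1], nb[i]
            candidates.append((permutation_score(nb, decision, weights), (i, i + 1), nb))
        val, move, nb = max(candidates)   # best non-tabu neighbour (may be downhill)
        current = nb
        tabu.append(move)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if val > best_val:
            best, best_val = nb[:], val
    return best, best_val

# Tiny crisp example: 3 alternatives x 2 criteria, weights summing to 1.
decision = [[7, 9], [8, 6], [5, 5]]
weights = [0.6, 0.4]
best, val = tabu_search(decision, weights)

# At this size an exhaustive check is feasible and agrees with the search.
brute = max(itertools.permutations(range(3)),
            key=lambda p: permutation_score(list(p), decision, weights))
print(val == permutation_score(list(brute), decision, weights))  # True
```

The point of the TS layer is exactly the abstract's: the exhaustive check above costs n!, while the search only ever evaluates a small neighbourhood per iteration.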

    Published Books in the Library and Information Science Studies in Iran between 2006 and 2016: A Bibliometric Study

    This study aims to assess the books on library and information science published in Iran, using the bibliometric method. The research population comprised the books published in the field of information science and knowledge studies during 2006-2016. The total number of books published during these years was 764; after removing reprints, 583 books remained for evaluation. The data collection tool was a researcher-made checklist approved by faculty members of the information science and knowledge studies department. The findings showed that, in line with the revisions of the academic syllabi in 2010 and 2014, most works were published in 2011 and 2015. In addition, 78% of the works were compiled, and 24% were translated. About 55.7% of the 476 authors were women, while 44.3% were men. With over ten works each, Hamid Mohseni and Rahmatullah Fattahi were the most prolific writers in this field. Besides, 83% of the books had been published in Tehran, and the rest in other cities. Ketabdar & Chapar publication, the National Library and Archives of the Islamic Republic of Iran, The Center for Study and Compiling University Books in Humanities, and Payame Noor University were among the top 5 publishers in this field. Price surveys showed that the price of the published books was rising, reaching its highest level in 2015 and 2016. The three topics of information organization, library and information science, and test books were the most widely published. On the other hand, less attention has been paid to scientometrics, bibliometrics, webometrics, information management, and social networks. https://dorl.net/dor/20.1001.1.20088302.2022.20.2.17

    An Intelligent Approach to Detecting Novel Fault Classes for Centrifugal Pumps Based on Deep CNNs and Unsupervised Methods

    Despite the recent success of data-driven fault diagnosis of rotating machines, challenges remain in this field. Among the issues to be addressed is the lack of information about the variety of faults the system may encounter in the field. In this paper, we assume partial knowledge of the system faults and use the corresponding data to train a convolutional neural network. A combination of the t-SNE method and clustering techniques is then employed to detect novel faults. Upon detection, the network is augmented using the new data. Finally, a test setup is used to validate this two-stage methodology on a centrifugal pump; experimental results show high accuracy in detecting novel faults.
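The detection stage can be sketched as follows. The paper embeds CNN features with t-SNE and clusters the embedding; as a minimal stand-in, this NumPy-only sketch works directly on 2-D features and flags a sample as a candidate novel fault when it lies far from every known-class centroid. The class means, threshold, and synthetic data are all invented for illustration:

```python
import numpy as np

def fit_centroids(features, labels):
    # One centroid per known fault class in the (embedded) feature space.
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def detect_novel(x, centroids, threshold):
    # A sample farther than `threshold` from every known-class centroid is
    # flagged as a candidate novel fault, to be clustered and used to
    # augment the network with a new class.
    d = min(np.linalg.norm(x - c) for c in centroids.values())
    return d > threshold

# Two known fault classes in a toy 2-D feature space.
rng = np.random.default_rng(1)
known_a = rng.normal([0.0, 0.0], 0.1, (50, 2))   # known fault class 0
known_b = rng.normal([3.0, 0.0], 0.1, (50, 2))   # known fault class 1
feats = np.vstack([known_a, known_b])
labels = np.array([0] * 50 + [1] * 50)

cents = fit_centroids(feats, labels)
novel_sample = np.array([10.0, 10.0])            # unseen fault far from both classes
print(detect_novel(novel_sample, cents, threshold=1.0))   # True
print(detect_novel(known_a[0], cents, threshold=1.0))     # False
```

In the paper's pipeline the same decision is made in the t-SNE embedding of CNN features, where samples from an unseen fault class form their own cluster away from the known ones.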

    Landau Kleffner Syndrome and Misdiagnosis of Autism Spectrum Disorder: A Mini-Review

    Autism spectrum disorder (ASD) is the name for a group of developmental disorders involving a wide range of signs, symptoms, and degrees of disability. Landau-Kleffner syndrome (LKS), or acquired epileptic aphasia, is a pediatric disorder characterized by the association of epileptiform electroencephalographic (EEG) abnormalities with acquired aphasia. The early stages of LKS may manifest with symptoms of autism, leading to misdiagnosis. Since LKS is a progressive disease, its misdiagnosis leads to greater neurocognitive deterioration, which may result in seizures in the final stages. The purpose of this review is to provide an overview of the available research on the ASD population and on patients with LKS, and of the relationship between these two conditions.

    Context aware saliency map generation using semantic segmentation

    Saliency map detection, as a method for identifying the important regions of an image, is used in many applications such as image classification and recognition. We propose that context detection can play an essential role in image saliency detection; this requires the extraction of high-level features. In this paper, a saliency map is proposed based on image context detection, using semantic segmentation as a high-level feature. The saliency map derived from semantic information is fused with color- and contrast-based saliency maps to generate the final saliency map. Simulation results on the PASCAL VOC 2011 image dataset show 99% accuracy in context detection, and the final saliency map produced by our proposed method shows acceptable results in detecting salient points.
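The fusion step can be sketched as a normalised weighted sum of the three cue maps. This is a minimal illustration of map fusion in general, not the paper's fusion rule; the weights, map sizes, and toy "segmentation" are invented for the example:

```python
import numpy as np

def normalise(m):
    # Scale a cue map to [0, 1] so no single cue dominates the fusion.
    m = m.astype(float)
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

def fuse_saliency(semantic, color, contrast, weights=(0.5, 0.25, 0.25)):
    # Weighted sum of the high-level (semantic) and low-level (color,
    # contrast) cue maps; the weights here are illustrative only.
    maps = [normalise(m) for m in (semantic, color, contrast)]
    fused = sum(w * m for w, m in zip(weights, maps))
    return normalise(fused)

# Toy 4x4 example: a segmented object region supplies the semantic cue,
# while the color and contrast cues are stand-in random maps.
semantic = np.zeros((4, 4))
semantic[1:3, 1:3] = 1.0                              # "object" region
color = np.random.default_rng(0).random((4, 4))       # low-level color cue
contrast = np.random.default_rng(1).random((4, 4))    # low-level contrast cue

sal = fuse_saliency(semantic, color, contrast)
print(sal[1, 1] > sal[0, 0])   # True: the semantic region dominates the fusion
```

Normalising each cue before summing is what lets the high-level semantic map steer the result without completely suppressing the low-level cues.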

    An Inconsistency between Being and Time in Presentism

    Presentists argue that only present entities exist absolutely and unrestrictedly. Presentism, itself a temporal analog of the modal doctrine of actualism, is an ontological view about time and existence set against theories such as eternalism, possibilism, and growing block theory. Presentists thus deny the existence of atemporal or timeless entities and describe presentism as a version of the A-theory, which distinguishes between present, past, and future. However, presentists are not able to ontologically justify the existence of entities such as Socrates or the year 3000 in the past and the future, or of relations involving non-present objects, such as 'Abraham Lincoln was taller than Napoleon Bonaparte'. Presentism has therefore been modified by the addition of an abstract four-dimensional manifold of ersatz time, a type of B-theory series, to identify all ontological entities and account for a dynamic world. This inquiry attempts to put a completely different perspective on presentism, and the result obtained is that, despite this new conceptualization of time, there is an inconsistency between time and being in presentism. Presentists have therefore failed to give an ontological identification of cross-temporal relations, of reference and propositions, and of truth-makers. Moreover, whatever the final outcome of the debate between presentism and other views, most significantly eternalism, it is hard, on ersatz presentism, to make sense of the idea that things change from one moment to the next.