667 research outputs found

    PROGNOSTICS-BASED QUALIFICATION OF WHITE LIGHT-EMITTING DIODES (LEDS)

    Light-emitting diode (LED) applications have expanded from display backlighting in computers and smartphones to more demanding applications, including automotive headlights and street lighting. With these new applications, LED manufacturers must ensure that their products meet the performance requirements expected by end users, which in many cases call for lifetimes of 10 years or more. The qualification tests traditionally conducted to assess such lifetimes often run as long as 6,000 hours, yet even this duration does not guarantee that the lifetime requirements will be met. This research aims to reduce the qualification time by employing anomaly detection and prognostic methods that utilize optical, electrical, and thermal parameters of LEDs. The outcome of this research is an in-situ monitoring approach that enables parameter sensing, data acquisition, and signal processing to identify potential failure modes, such as electrical, thermal, and optical degradation, during the qualification test. To detect anomalies, a similarity-based metric test has been developed that identifies anomalies without utilizing historical libraries of healthy and unhealthy data. This test extracts features from the spectral power distributions using peak analysis, reduces the dimensionality of the features using principal component analysis, and partitions the resulting principal components into groups using a KNN-kernel density-based clustering technique. A detection algorithm then evaluates the distance from the centroid of each cluster to each test point and flags an anomaly when the distance exceeds a threshold. From this analysis, the dominant degradation processes associated with the LED die and the phosphors in the LED package can be identified. When implemented, the results of this research will enable a shorter qualification time. Prognostics of LEDs is developed using spectral power distribution (SPD) prediction for color failure.
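    The distance-to-centroid anomaly test described above can be sketched as follows. This is an illustrative sketch only: the thesis extracts peak features from spectral power distributions and uses a KNN-kernel density-based clustering technique, whereas here synthetic feature vectors and k-means stand in for both, and the 3-sigma threshold is an assumption.

```python
# Sketch of PCA dimensionality reduction + clustering + distance-to-centroid
# anomaly detection. Synthetic features replace the thesis's SPD peak
# features; KMeans replaces its KNN-kernel density-based clustering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(0.0, 1.0, size=(200, 10))        # healthy training features
test_points = np.vstack([rng.normal(0.0, 1.0, size=(5, 10)),
                         rng.normal(8.0, 1.0, size=(5, 10))])  # last 5 anomalous

# Reduce dimensionality and partition the principal components into groups.
pca = PCA(n_components=3).fit(features)
scores = pca.transform(features)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)

# Distance from each point to its nearest cluster centroid; a test point is
# flagged when its distance exceeds a threshold derived from training data.
def nearest_centroid_dist(points):
    d = np.linalg.norm(points[:, None, :] - clusters.cluster_centers_[None, :, :],
                       axis=2)
    return d.min(axis=1)

train_dist = nearest_centroid_dist(scores)
threshold = train_dist.mean() + 3.0 * train_dist.std()   # assumed 3-sigma rule

anomalous = nearest_centroid_dist(pca.transform(test_points)) > threshold
print(anomalous)
```

    With the synthetic data above, the shifted test points fall far from every centroid and are flagged, while the in-distribution points stay below the threshold.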
The SPD is deconvolved into a die SPD and a phosphor SPD using asymmetric double sigmoidal functions. The future SPD is predicted by using a particle filter algorithm to estimate the propagating parameters of these functions. SPD prediction also enables diagnostics, indicating die, phosphor, or package degradation based on the shape of the degraded SPD. SPDs are converted to light output and CIE 1976 color coordinates using colorimetric conversion with color matching functions. Remaining useful life (RUL) is predicted using a 7-step SDCM (standard deviation of color matching) threshold (i.e., a color distance of 0.007 in the CIE 1976 chromaticity coordinates). To conduct prognostics that utilize historical libraries of healthy and unhealthy data from other devices, this research employs similarity-based statistical measures for a prognostics-based qualification method, using optical, electrical, and thermal covariates as health indices. Prognostics is conducted using the similarity-based statistical measure with relevance vector machine regression to capture degradation trends. Historical training data are used to extract features and define failure thresholds. Based on the relevance vector machine regression results, which construct the background health knowledge from the historical training units, a similarity weight measures the similarity between each training unit and the test unit. The weighted sum is then used to estimate the remaining useful life of the test unit.
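    The similarity-weighted RUL estimate at the end of the abstract can be sketched numerically. This is a minimal sketch under stated assumptions: the synthetic linear degradation trends, the training RUL values, and the Gaussian similarity kernel are all illustrative, and the relevance vector machine regression the thesis uses to fit the trends is omitted.

```python
# Similarity-based RUL estimation: weight each historical training unit's
# known remaining life by how closely its degradation trend matches the
# test unit, then take the weighted sum.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)

# Degradation trends of three fully-run training units and their RULs (hours).
# Both the trends and the RUL values are assumptions for illustration.
train_trends = np.stack([1 - 0.5 * t, 1 - 0.7 * t, 1 - 0.9 * t])
train_rul = np.array([4000.0, 2800.0, 2100.0])

# Test unit degrades almost like the second training unit.
test_trend = 1 - 0.68 * t + rng.normal(0, 0.005, size=t.size)

# Similarity weight: Gaussian kernel on the mean squared trend distance
# (the kernel bandwidth choice here is an assumption).
d2 = np.mean((train_trends - test_trend) ** 2, axis=1)
w = np.exp(-d2 / d2.min())
w /= w.sum()

rul_estimate = float(np.sum(w * train_rul))
print(round(rul_estimate))
```

    Because the test unit's trend nearly matches the second training unit, that unit dominates the weighted sum and the estimate lands near its 2,800-hour RUL.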

    Machine Learning-based Predictive Maintenance for Optical Networks

    Optical networks provide the backbone of modern telecommunications by connecting the world faster than ever before. However, such networks are susceptible to several failures (e.g., optical fiber cuts, malfunctioning optical devices), which might result in degradation of network operation, massive data loss, and network disruption. It is challenging to accurately and quickly detect and localize such failures due to the complexity of these networks, the time required to identify and pinpoint a fault using conventional approaches, and the lack of proactive, efficient fault management mechanisms. Therefore, it is highly beneficial to perform fault management in optical communication systems in order to reduce the mean time to repair, meet service level agreements more easily, and enhance network reliability. In this thesis, the aforementioned challenges and needs are tackled by investigating the use of machine learning (ML) techniques for implementing efficient proactive fault detection, diagnosis, and localization schemes for optical communication systems. In particular, the adoption of ML methods for solving the following problems is explored:
    - degradation prediction of semiconductor lasers,
    - lifetime (mean time to failure) prediction of semiconductor lasers,
    - remaining useful life (the length of time a machine is likely to operate before it requires repair or replacement) prediction of semiconductor lasers,
    - optical fiber fault detection, localization, characterization, and identification for different optical network architectures,
    - anomaly detection in optical fiber monitoring.
    These ML approaches outperform the conventionally employed methods in all the investigated use cases by achieving better prediction accuracy and earlier prediction or detection capability.

    STRUCTURAL INFLUENCES ON INTENSITY INTERFEROMETRY CORRELATION

    Pairing the information received from multiple telescopes to explore the universe is typically based on the interference between the amplitudes of light waves rather than their intensities. Brighter sources and larger telescopes allow greater amounts of light to be collected, but do not specifically exploit the intensity interference of electromagnetic fields. An alternate method of imaging distant objects, Intensity Interferometry (II), is less sensitive to atmospheric distortions and to aberrations of telescope surfaces, and its deficiencies are being overcome as photodetectors become more sensitive and computers more powerful. In recognition of this possibility, this dissertation investigates how the deformation of a large optical surface influences the accuracy of II. This research first involved developing an understanding of the theoretical foundation of II and the statistics (based on quantum mechanics) of photon correlations. Optical ray tracing and finite element analyses were then integrated to answer this question: how does the correlation of the intensity field change as a large, lightweight reflective structure deforms? Analytical models based on the theory of the deformation of shells were developed to validate the finite element analyses. In this study, a single-focus parabolic reflector of an II system is simulated. The extent to which the dynamic focal properties of the parabolic reflector change the statistics of the light at a detector is analyzed. A ray-tracing algorithm is used to examine how the statistics of simulated monochromatic stellar light change from the source to the detector. By varying the position of the detector relative to the focal plane and the surface profile of the mirror, a metric is developed to understand how these scenarios affect the statistics of the detected light and the correlation measurement between source and detector.
Photon streams are evaluated for light distribution, time of flight, and statistical changes at a detector. This research and analysis serve as the basis for a tool that quantifies how structural perturbations of focal mirrors affect the statistics of the photon stream detections inherent in II instrumentation and science.
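    The correlation measurement at the heart of II is the normalized second-order intensity correlation, g2 = <I1 I2> / (<I1><I2>), which exceeds 1 for correlated (bunched) thermal light and equals 1 for independent streams. A minimal numerical illustration, assuming synthetic exponentially distributed intensity samples rather than the dissertation's ray-traced photon streams:

```python
# Zero-lag normalized intensity correlation between two detectors.
import numpy as np

rng = np.random.default_rng(2)

def g2(i1, i2):
    """Normalized second-order correlation <I1*I2> / (<I1><I2>)."""
    return float(np.mean(i1 * i2) / (np.mean(i1) * np.mean(i2)))

# Thermal-like source: both detectors see the same fluctuating intensity
# (exponentially distributed) plus small independent detector noise.
common = rng.exponential(1.0, size=100_000)
det1 = common + rng.normal(0, 0.05, size=common.size)
det2 = common + rng.normal(0, 0.05, size=common.size)

# Uncorrelated control: two independent intensity streams.
ind1 = rng.exponential(1.0, size=100_000)
ind2 = rng.exponential(1.0, size=100_000)

print(g2(det1, det2), g2(ind1, ind2))
```

    For exponentially distributed (single-mode thermal) intensity, the correlated pair gives g2 near 2 while the independent pair gives g2 near 1, which is the contrast an II instrument measures and which surface deformations would degrade.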

    Sleep detection with photoplethysmography for wearable-based health monitoring

    Remote health monitoring has gained increasing attention in recent years. Detecting sleep patterns provides users with insights into their personal health and can help in the diagnosis of various sleep disorders. Conventional methods either focus on acceleration data or, like polysomnography, are not suitable for continuous monitoring. Wearable devices enable continuous measurement of the photoplethysmography signal, which contains information on multiple physiological systems and can be used to detect sleep patterns. Sleep detection using a wearable-based photoplethysmography signal thus offers a convenient and easy way to monitor health. In this thesis, a photoplethysmography-based sleep detection method for wearable-based health monitoring is described. The technique aims to separate wakefulness and sleep states with adequate accuracy. To examine the importance of good-quality data in sleep detection, the quality of the signal is assessed. The proposed method uses statistical and heart-rate-based features extracted from the photoplethysmography signal. Using the most relevant features, various supervised learning algorithms are trained, compared, and evaluated: logistic regression, decision tree, random forest, support vector machine, k-nearest neighbors, and naive Bayes. The best performance is obtained by the random forest classifier, which achieves an overall accuracy of 81 percent, detecting sleep periods with 86 percent accuracy and awake periods with 74 percent accuracy. Motion artifacts occurring during awake time caused distortion of the signal; features related to the shape of the signal therefore improved the accuracy of sleep detection, since signal distortion was associated with awake time. It is concluded that the photoplethysmography signal provides a good alternative for wearable-based sleep detection.
Future studies with more comprehensive sleep-level analysis could be conducted to provide valuable information on the quality of sleep.
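    The classification step described above can be sketched with a random forest on PPG-derived features. This is a hedged illustration: the two features used here (a mean heart-rate value and a signal-shape distortion score) and all numeric values are assumptions standing in for the thesis's statistical and heart-rate-based feature set.

```python
# Random forest separating sleep from wake epochs on synthetic PPG features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n = 500
# Sleep epochs: lower heart rate, little motion distortion (assumed values).
sleep = np.column_stack([rng.normal(55, 4, n), rng.normal(0.1, 0.05, n)])
# Wake epochs: higher heart rate, more motion-artifact distortion.
wake = np.column_stack([rng.normal(72, 8, n), rng.normal(0.4, 0.15, n)])

X = np.vstack([sleep, wake])
y = np.array([1] * n + [0] * n)  # 1 = asleep, 0 = awake

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"accuracy: {acc:.2f}")
```

    Note how the shape-distortion feature mirrors the thesis's finding that signal distortion is associated with awake time and therefore helps the classifier.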

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in understanding the motivation and methodology of the various ML algorithms, so that they can be invoked for hitherto unexplored services and scenarios of future wireless networks.

    Neural network-based data-driven modelling of anomaly detection in thermal power plant

    Thermal power plants are among the most complex dynamical systems and must function properly at all times at the least cost. More sophisticated monitoring systems, with early detection of failures and abnormal behaviour of the plant, are therefore required. Detecting anomalies in historical data using machine learning techniques can support system health monitoring. The goal of this research is to build a neural network-based data-driven model to be used for anomaly detection in selected sections of a thermal power plant: the steam superheaters and the steam drum. The inputs to the neural networks are some of the most important process variables of these sections; all are observable from the plant's installed monitoring system, and their anomalous/normal behaviour is labelled based on operators' experience. The results of applying three different types of neural networks (MLP, recurrent, and probabilistic) to the anomaly detection problem confirm that neural network-based data-driven modelling has the potential to be integrated into a real-time health monitoring system for a thermal power plant.
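    The MLP variant of the approach above can be sketched as a small classifier over labelled process variables. The two synthetic variables (standing in for, say, drum pressure and superheater temperature), their ranges, and the operating points queried are all assumptions; real inputs would come from the plant's monitoring system with labels drawn from operator experience.

```python
# MLP classifying anomalous operating points from two synthetic
# process variables labelled normal (0) or anomalous (1).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
n = 400
# Assumed nominal and drifted operating regimes for the two variables.
normal = np.column_stack([rng.normal(100, 3, n), rng.normal(540, 5, n)])
anomaly = np.column_stack([rng.normal(110, 3, n), rng.normal(560, 5, n)])

X = np.vstack([normal, anomaly])
y = np.array([0] * n + [1] * n)

# Scale inputs, then train a small multilayer perceptron.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
).fit(X, y)

# Query two operating points: one in the nominal regime, one drifted.
probe = np.array([[101.0, 542.0], [111.0, 561.0]])
print(model.predict(probe))
```

    A recurrent network would be the natural substitute when the anomaly signature lies in the temporal evolution of the process variables rather than in single operating points.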

    Spintronics: Fundamentals and applications

    Spintronics, or spin electronics, involves the study of the active control and manipulation of spin degrees of freedom in solid-state systems. This article reviews the current status of the subject, including both recent advances and well-established results. The primary focus is on the basic physical principles underlying the generation of carrier spin polarization, spin dynamics, and spin-polarized transport in semiconductors and metals. Spin transport differs from charge transport in that spin is a nonconserved quantity in solids due to spin-orbit and hyperfine coupling. The authors discuss in detail spin decoherence mechanisms in metals and semiconductors. Various theories of spin injection and spin-polarized transport are applied to hybrid structures relevant to spin-based devices and to fundamental studies of materials properties. Experimental work is reviewed with an emphasis on projected applications, in which external electric and magnetic fields and illumination by light are used to control spin and charge dynamics, creating new functionalities not feasible or ineffective with conventional electronics.