
    Artificial Neural Network-based error compensation procedure for low-cost encoders

    An Artificial Neural Network-based error compensation method is proposed for improving the accuracy of resolver-based 16-bit encoders by compensating for their respective systematic error profiles. The error compensation procedure for a particular encoder involves obtaining its error profile by calibrating it on a precision rotary table, training the neural network on part of these data, and then determining the corrected encoder angle by subtracting the ANN-predicted error from the measured encoder angle. Since the resolvers cannot be guaranteed to have identical error profiles, owing to inherent micro-scale differences in their construction, the ANN has been trained on one error profile at a time, and the corresponding weight file is then used only for compensating the systematic error of that particular encoder. The systematic nature of each encoder's error profile has also been validated by repeated calibration over a period of time, and the error profiles of a given encoder recorded at different epochs show nearly reproducible behaviour. The ANN-based error compensation procedure has been implemented for 4 encoders by training the ANN with their respective error profiles, and the results indicate that encoder accuracy can be improved by nearly an order of magnitude, from quoted values of ~6 arc-min to ~0.65 arc-min, when the corresponding ANN-generated weight files are used to determine the corrected encoder angle. Comment: 16 pages, 4 figures. Accepted for publication in Measurement Science and Technology (MST).
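    The abstract does not spell out the network architecture or training details; the following is a minimal sketch of the compensation idea, assuming a small scikit-learn regressor trained on placeholder (measured angle, calibration error) pairs standing in for a real rotary-table calibration run.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Placeholder calibration data: measured encoder angles (deg) and the
        # systematic error (arc-min) obtained from a precision rotary table.
        rng = np.random.default_rng(0)
        angle_meas = np.linspace(0.0, 360.0, 720)
        true_error = 4.0 * np.sin(np.radians(2 * angle_meas)) + 1.5 * np.sin(np.radians(8 * angle_meas))
        error_obs = true_error + rng.normal(0.0, 0.2, angle_meas.size)     # arc-min

        # Train on part of the calibration data, hold out the rest for checking.
        train = np.arange(angle_meas.size) % 2 == 0
        X = (angle_meas / 360.0).reshape(-1, 1)                            # scaled angle as the only input
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
        net.fit(X[train], error_obs[train])

        # Corrected angle = measured angle - ANN-predicted error (converted to degrees).
        pred_error = net.predict(X[~train])                                # arc-min
        angle_corr = angle_meas[~train] - pred_error / 60.0
        residual = error_obs[~train] - pred_error
        print(f"mean residual error after compensation: {np.abs(residual).mean():.2f} arc-min")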

    Data sciences and teaching methods—learning

    Data Science (DS) is an interdisciplinary field responsible for extracting knowledge from data. The discipline is particularly complex in the face of Big Data: large volumes of data are difficult to store, process and analyze with standard computer-science technologies. The new revolution in Data Science is already changing the way we do business, healthcare, politics, education and innovation. This article describes three different teaching and learning models for Data Science, inspired by the experiential learning paradigm.

    Industry 4.0: A Special Section in IEEE Access

    Industry 4.0 can be described as the current trend of automation and data exchange in manufacturing technologies. The term "Industrie 4.0" originally comes from a project in the high-tech strategy of the German government, which aims to promote the computerization of manufacturing. It usually involves terms such as cyber-physical systems, the Internet of Things, and cloud computing. Industry 4.0 has become an emerging buzzword that is gaining significant interest among all stakeholders of the global industry-related R&D market, from academia to international companies. It is a new business model attracting much interest, yet its definitions are not yet mature, and it is a melting pot of disruptive technologies. To maximize the impact of Industry 4.0, researchers from different fields and from industry will have to work together to apply the new technologies in practice. At the top of the wave, it is timely to analyze the cross-section of stakeholders who can benefit from the novel achievements of Industry 4.0.

    Independent Component Analysis-motivated Approach to Classificatory Decomposition of Cortical Evoked Potentials

    BACKGROUND: Independent Component Analysis (ICA) proves useful in the analysis of neural activity, as it allows distinct sources of activity to be identified. Applied to measurements registered in a controlled setting and under exposure to an external stimulus, it can facilitate analysis of the impact of the stimulus on those sources. The link between the stimulus and a given source can be verified by a classifier that is able to "predict" the condition a given signal was registered under, solely on the basis of the components. However, ICA's assumption of statistical independence of the sources is often unrealistic and turns out to be insufficient to build an accurate classifier. Therefore, we propose a novel method, based on a hybridization of ICA, multi-objective evolutionary algorithms (MOEA), and rough sets (RS), that attempts to improve the effectiveness of signal decomposition techniques by providing them with "classification-awareness." RESULTS: The preliminary results described here are very promising, and further investigation of other MOEAs and/or RS-based classification accuracy measures should be pursued. Even a quick visual analysis of these results provides an interesting insight into the problem of neural activity analysis. CONCLUSION: We present a methodology for classificatory decomposition of signals. One of the main advantages of our approach is that, rather than relying solely on often unrealistic assumptions about statistical independence of the sources, the components are generated in the light of the underlying classification problem itself.
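    The abstract does not detail the decomposition-plus-classification pipeline; the sketch below only illustrates the baseline it builds on, assuming epoched signals in a NumPy array, scikit-learn's FastICA for the decomposition, and a logistic-regression classifier predicting the stimulus condition from the extracted components. The paper's MOEA/rough-set machinery is not reproduced here.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Placeholder data: 200 epochs x 64 time samples, with a binary stimulus label.
        rng = np.random.default_rng(1)
        epochs = rng.normal(size=(200, 64))
        labels = rng.integers(0, 2, size=200)
        epochs[labels == 1, 10:20] += 0.5           # toy stimulus-evoked deflection

        # Decompose the epochs into a small number of components (classic ICA step).
        ica = FastICA(n_components=8, random_state=1)
        components = ica.fit_transform(epochs)      # per-epoch component activations

        # Check whether the components alone let a classifier "predict" the condition.
        clf = LogisticRegression(max_iter=1000)
        scores = cross_val_score(clf, components, labels, cv=5)
        print(f"mean cross-validated accuracy: {scores.mean():.2f}")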

    Identification and classification of high risk groups for Coal Workers' Pneumoconiosis using an artificial neural network based on occupational histories: a retrospective cohort study

    BACKGROUND: Coal workers' pneumoconiosis (CWP) is a preventable, but not fully curable, occupational lung disease. More and more coal miners are likely to be at risk of developing CWP owing to an increase in coal production and utilization, especially in developing countries. Coal miners with different occupational categories and durations of dust exposure may be at different levels of risk for CWP. It is necessary to identify and classify the different levels of risk for CWP in coal miners with different work histories. In this way, we can recommend different intervals between medical examinations according to the different levels of risk for CWP. Our findings may provide a basis for further amending the measures for CWP prevention and control. METHODS: The study was performed using longitudinal retrospective data from the Tiefa Colliery in China. A three-layer artificial neural network with 6 input variables, 15 neurons in the hidden layer, and 1 output neuron was developed in conjunction with coal miners' occupational exposure data. Sensitivity and ROC analyses were used to assess the importance of the input variables and the performance of the neural network. The occupational characteristics and the predicted probability values were used to categorize coal miners by their level of risk for CWP. RESULTS: The sensitivity analysis showed that the influence of the duration of dust exposure and of occupational category on CWP was 65% and 67%, respectively. The area under the ROC curve in the 3 sets was 0.981, 0.969, and 0.992. There were 7959 coal miners with a probability value < 0.001; their average duration of dust exposure was 15.35 years and their average duration of ex-dust exposure was 0.69 years. Of these coal miners, 79.27% worked in helping and mining, and most were born after 1950 and were first exposed to dust after 1970. One hundred forty-four coal miners had a probability value ≥ 0.1; their average durations of dust exposure and ex-dust exposure were 25.70 and 16.30 years, respectively. Most of these miners were born before 1950 and began to be exposed to dust before 1980, and 90.28% of them worked in tunneling. CONCLUSION: The duration of dust exposure and occupational category were the two most important factors for CWP. Coal miners at different levels of risk for CWP could be classified by three-layer neural network analysis based on occupational history.
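    Architectural details beyond the 6-15-1 layout are not given in the abstract; as a rough sketch, a comparable model could be set up with scikit-learn as below, assuming six numeric occupational-history features per miner. The features, data and risk cut-off handling here are placeholders, not the study's cohort.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        # Placeholder cohort: 6 occupational-history features per miner, binary CWP label.
        rng = np.random.default_rng(2)
        X = rng.normal(size=(2000, 6))               # e.g. exposure duration, job category, ...
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 1.5).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

        # Three-layer network: 6 inputs, 15 hidden neurons, 1 output (as in the abstract).
        net = MLPClassifier(hidden_layer_sizes=(15,), max_iter=2000, random_state=2)
        net.fit(X_tr, y_tr)

        # ROC analysis and predicted probabilities used to group miners by risk level.
        prob = net.predict_proba(X_te)[:, 1]
        print(f"AUC: {roc_auc_score(y_te, prob):.3f}")
        high_risk = prob >= 0.1                      # the abstract's >= 0.1 probability cut-off
        print(f"miners above the 0.1 risk cut-off: {high_risk.sum()}")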

    Real Estate valuation and forecasting in non-homogeneous markets: A case study in Greece during the financial crisis

    In this paper we develop an automatic valuation model for property valuation using a large database of historical prices from Greece. The Greek property market is an inefficient, non-homogeneous market, still in its infancy and governed by a lack of information. As a result, modelling the Greek real estate market is a very interesting and challenging problem. The available data cover a wide range of properties across time and include the financial-crisis period in Greece, which led to tremendous changes in the dynamics of the real estate market. We formulate and compare linear and non-linear models based on regression, hedonic equations and artificial neural networks. The forecasting ability of each method is evaluated out-of-sample. Special care is taken in measuring the success of the forecasts, and also in identifying the property characteristics that lead to large forecasting errors. Finally, by examining the strengths and the performance of each method, we apply a combined forecasting rule to improve forecasting accuracy. Our results indicate that the proposed methodology constitutes an accurate tool for property valuation in a non-homogeneous, newly developed market.
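    The exact hedonic specification and network configuration are not given in the abstract; the following is a minimal sketch of the general workflow, assuming a tabular dataset of property characteristics, a linear hedonic-style regression and an MLP as the competing models, out-of-sample evaluation, and a simple average of the two forecasts standing in for the paper's combination rule.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_percentage_error

        # Placeholder property data: size, age, floor, distance to centre -> price.
        rng = np.random.default_rng(3)
        X = rng.uniform(0, 1, size=(1000, 4))
        price = 1000 * X[:, 0] - 200 * X[:, 1] + 50 * X[:, 2] * X[:, 0] + rng.normal(0, 20, 1000) + 500

        X_tr, X_te, y_tr, y_te = train_test_split(X, price, test_size=0.25, random_state=3)

        linear = LinearRegression().fit(X_tr, y_tr)                  # hedonic-style linear model
        ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                           random_state=3).fit(X_tr, y_tr)           # non-linear model

        # Out-of-sample evaluation and a simple combined forecast.
        pred_lin, pred_ann = linear.predict(X_te), ann.predict(X_te)
        pred_comb = 0.5 * (pred_lin + pred_ann)
        for name, pred in [("linear", pred_lin), ("ANN", pred_ann), ("combined", pred_comb)]:
            print(name, f"MAPE = {mean_absolute_percentage_error(y_te, pred):.3f}")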

    Narzędzia technologii informacyjnej w rozwoju Business Intelligence w organizacjach [Information technology tools in the development of Business Intelligence in organizations]

    The research objective of this study is to investigate different Information Technology (IT) tools for Business Intelligence (BI) development. First, the nature of BI is identified and the stages in BI development are presented. Then, the most important IT tools used in BI development are discussed. Finally, some strengths and weaknesses of the described tools are demonstrated. The study was based mainly on a critical analysis of the literature, creative thinking and an interpretive philosophy. The results of this research can be used by IT and business leaders as they plan and develop BI applications in their organizations.

    Multiplicative Algorithm for Correntropy-Based Nonnegative Matrix Factorization

    Nonnegative matrix factorization (NMF) is a popular dimension-reduction technique used for clustering by extracting latent features from high-dimensional data, and it is widely used for text mining. Several optimization algorithms have been developed for NMF with different cost functions. In this paper we evaluate the correntropy similarity cost function. Correntropy is a nonlinear, localized similarity measure which quantifies the similarity between two random variables using an entropy-based criterion and is especially robust to outliers. Some gradient-descent algorithms have been used with the correntropy cost function, but their convergence is highly dependent on proper initialization, step size and other parameter choices. The proposed general multiplicative factorization algorithm uses gradient descent with an adaptive step size to maximize the correntropy similarity between the data matrix and its factorization. After devising the algorithm, its performance is evaluated for document clustering, and the results are compared with constrained gradient descent using the steepest descent and L-BFGS methods. The simulations show that the convergence of steepest descent and L-BFGS is highly dependent on the gradient-descent step size, which in turn depends on the σ parameter of the correntropy cost function. The multiplicative algorithm, however, is shown to be less sensitive to the σ parameter and yields better clustering results than the other algorithms, improving clustering performance as measured by entropy and purity. The multiplicative correntropy-based algorithm also shows less variation in the accuracy of document clusters for a variable number of clusters. The convergence of each algorithm is also investigated, and the experiments show that the multiplicative algorithm converges faster than the L-BFGS and steepest descent methods.
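    The abstract does not give the exact update rules. As a hedged sketch, correntropy maximization is often handled as an iteratively reweighted Euclidean NMF, in which a Gaussian kernel of the current residuals acts as an element-wise weight inside otherwise standard multiplicative updates; the code below follows that pattern. The σ value, initialization and stopping rule are assumptions, not the paper's settings.

        import numpy as np

        def correntropy_nmf(X, rank, sigma=1.0, n_iter=200, eps=1e-9, seed=0):
            """Sketch of multiplicative NMF updates weighted by a correntropy (Gaussian) kernel."""
            rng = np.random.default_rng(seed)
            U = rng.uniform(size=(X.shape[0], rank))
            V = rng.uniform(size=(rank, X.shape[1]))
            for _ in range(n_iter):
                R = X - U @ V
                W = np.exp(-R**2 / (2.0 * sigma**2))             # element-wise correntropy weights
                WX, WUV = W * X, W * (U @ V)
                U *= (WX @ V.T) / (WUV @ V.T + eps)              # multiplicative update for U
                V *= (U.T @ WX) / (U.T @ (W * (U @ V)) + eps)    # multiplicative update for V
            return U, V

        # Toy nonnegative term-document matrix factorized into two latent topics.
        X = np.abs(np.random.default_rng(1).normal(size=(100, 40)))
        U, V = correntropy_nmf(X, rank=2, sigma=0.5)
        print(U.shape, V.shape)

    A small σ makes the weights aggressively down-weight large residuals (outliers), which is the parameter the abstract identifies as critical for the gradient-based competitors.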