
    Risk-based neuro-grid architecture for multimodal biometrics

    Recent research indicates that multimodal biometrics is the way forward for the highly reliable adoption of biometric identification systems in applications such as banks, businesses, and government

    Modelling of the Breakdown Voltage of Solid Insulating Materials using Soft Computing Techniques

    The aim of this project is to use Soft Computing Techniques (SCT) to model the breakdown voltage of solid insulating materials. Since breakdown-voltage behaviour is non-linear, it is best modelled with SCT such as Artificial Neural Networks (ANN), Radial Basis Function (RBF) networks, and Fuzzy Logic (FL) techniques. To obtain experimental data on the breakdown voltage, experiments are conducted under AC and DC conditions, and all the SCT models are then applied to the data. Predicting the breakdown voltage of solid insulating materials is a challenging task, and SCT offer an effective way to model and predict it
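    One of the techniques named above, an RBF network, can be sketched in a few lines: Gaussian basis functions centred on training points plus a least-squares fit of the output weights. The data here is synthetic (the real inputs and breakdown-voltage measurements from the AC/DC experiments are not available), so the feature names and values are purely illustrative.

    ```python
    import numpy as np

    # Hypothetical inputs (e.g. thickness, void density) -> breakdown voltage;
    # real values would come from the AC/DC experiments described above.
    rng = np.random.default_rng(42)
    X = rng.uniform(0.5, 3.0, size=(80, 2))
    y = 20 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 0.5, 80)  # stand-in target

    # Gaussian RBF design matrix with centres taken from the training points
    centres = X[::8]                      # 10 centres
    width = 1.0
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2 * width ** 2))
    Phi = np.hstack([Phi, np.ones((X.shape[0], 1))])  # bias column

    # Output weights by least squares
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    pred = Phi @ w
    rmse = np.sqrt(np.mean((pred - y) ** 2))
    ```

    An ANN or FL model would replace the design-matrix step; the fit/predict structure stays the same.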

    Predicting Hazardous Driving Behaviour with Quantum Neural Networks

    Quantum Neural Networks (QNN) were used to predict both future steering wheel signals and upcoming lane departures for N=34 drivers undergoing 37 h of sleep deprivation. The drivers drove in a moving-base truck simulator for 55 min once every third hour, resulting in 31 200 km of highway driving, of which 8 432 km were on straights. Predicting the steering wheel signal one time step ahead, 0.1 s, was achieved with a 15-40-20-1 time-delayed feed-forward QNN with a root-mean-square error of RMSEtot = 0.007 a.u., corresponding to a 0.4 % relative error. The best prediction of the number of lane departures during the subsequent 10 s was achieved using the maximum peak-to-peak amplitude of the steering wheel signal from the previous ten 1 s segments as inputs to a 10-15-5-1 time-delayed feed-forward QNN. A correct prediction was achieved in 55 % of cases, and the overall sensitivity and specificity were 31 % and 80 %, respectively
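    The time-delayed input scheme above (15 lagged samples predicting the value 0.1 s ahead) can be illustrated independently of the QNN itself. The sketch below builds the time-delay embedding on a synthetic signal and fits a plain linear one-step predictor as a classical stand-in; the signal and all parameters other than the 15-sample window are assumptions, not the study's data.

    ```python
    import numpy as np

    # Synthetic stand-in for a steering-wheel signal sampled at 10 Hz (0.1 s steps)
    rng = np.random.default_rng(0)
    t = np.arange(3000) * 0.1
    signal = np.sin(0.3 * t) + 0.05 * rng.standard_normal(t.size)

    lags = 15
    # Time-delay embedding: each row holds the previous 15 samples
    X = np.stack([signal[i:i + lags] for i in range(signal.size - lags)])
    y = signal[lags:]                     # value one step (0.1 s) ahead

    # Linear least-squares predictor as a simple classical baseline
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    rmse = np.sqrt(np.mean((X @ w - y) ** 2))
    ```

    In the study, the rows of `X` feed the 15-input layer of the 15-40-20-1 network instead of a linear model.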

    Dynamic analysis of synchronous machine using neural network based characterization clustering and pattern recognition

    Synchronous generators are the principal source of electric energy in power systems. Dynamic analysis of a synchronous machine under transient conditions is carried out for different faults. Synchronous machine models are simulated numerically from mathematical models, with main-flux saturation ignored in one model and taken into account in another. The developed models were compared and scrutinized under transient conditions for different kinds of faults: loss of field (LOF), disturbance in torque (DIT), and short circuit (SC). The simulation was run for LOF and DIT at different fault levels and time durations, whereas for SC it was run for different time durations only. The model is also examined against stability stipulations. Based on the synchronous machine model, a neural network model of the machine is developed using neural network based characterization. The model is trained to approximate different transient conditions, such as loss of field, disturbance in torque, and short circuit. In the case of multiple or mixed faults, neural network based clustering is used to distinguish and identify specific fault conditions from the behaviour of the load angle. By observing the weight distribution pattern of the Self Organizing Map (SOM) space, specific kinds of faults are recognized. Neural network pattern identification is used to identify and specify unknown fault patterns. Once a fault is identified, pattern identification is further used to indicate its level or time duration
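    The SOM-based clustering step can be sketched with a minimal self-organizing map in plain NumPy. The three synthetic clusters below merely stand in for load-angle features of the LOF, DIT, and SC fault classes; the grid size, feature dimension, and training schedule are illustrative assumptions, not the thesis's configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical 2-D load-angle features for three fault classes (LOF, DIT, SC)
    centres = np.array([[0.2, 0.8], [0.8, 0.2], [0.5, 0.5]])
    data = np.vstack([c + 0.05 * rng.standard_normal((30, 2)) for c in centres])

    # 3x3 SOM: grid coordinates and randomly initialised weight vectors
    grid = np.array([(i, j) for i in range(3) for j in range(3)], dtype=float)
    weights = rng.random((9, 2))

    def qerror(w):
        # mean distance from each sample to its best-matching unit (BMU)
        d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
        return d.min(axis=1).mean()

    err0 = qerror(weights)
    epochs = 100
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)            # decaying learning rate
        sigma = 1.5 * (1 - epoch / epochs) + 0.2   # shrinking neighbourhood
        for x in data[rng.permutation(data.shape[0])]:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    ```

    After training, inspecting which grid units win for each fault class corresponds to the weight-distribution-pattern reading described above.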

    Churn prediction in insurance companies (Previsão de churn em companhias de seguros)

    Master's dissertation in Informatics Engineering. Across any industry, customer retention is a highly important aspect that deserves all possible attention. The abandonment of a product or a service by a customer, a situation usually referred to as churn, is an indicator that service-provider companies should watch closely. Together with Customer Relationship Management (CRM) techniques, churn prediction offers companies a strong competitive advantage, since it allows them to obtain better results in customer retention. With the constant growth and maturing of information systems, it becomes ever more feasible to use Data Mining techniques that can extract behaviour patterns revealing intrinsic information hidden in the data. This dissertation focuses on using Data Mining techniques to predict customer churn in insurance companies, with the main objective of predicting churn cases and thereby providing enough information to take actions aimed at preventing customer desertion. To this end, a set of predictive churn models was developed using different Data Mining techniques: decision trees, neural networks, logistic regression, and SVM. Implementing several models with these techniques allowed a comparative evaluation, which led to the conclusion that the most suitable techniques for predicting churn in an insurance company are decision trees and logistic regression; in addition, a study of the most relevant churn indicators was carried out
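    Two of the techniques the dissertation compared, decision trees and logistic regression, can be sketched side by side with scikit-learn. The insurance-customer records are not available, so the sketch uses a synthetic imbalanced dataset (churners as the minority class); every parameter value here is an illustrative assumption.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic churn-like data: 20 % positives stand in for churners
    X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                               weights=[0.8, 0.2], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Fit both models and record held-out accuracy for comparison
    scores = {
        "decision_tree": DecisionTreeClassifier(max_depth=5, random_state=0)
            .fit(X_tr, y_tr).score(X_te, y_te),
        "logistic_regression": LogisticRegression(max_iter=1000)
            .fit(X_tr, y_tr).score(X_te, y_te),
    }
    ```

    On real churn data, class imbalance usually makes precision/recall or AUC more informative than plain accuracy.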

    Optimizing AI at the Edge: from network topology design to MCU deployment

    The first topic analyzed in the thesis is Neural Architecture Search (NAS). I will focus on two different tools that I developed: one to optimize the architecture of Temporal Convolutional Networks (TCNs), a recently emerged convolutional model for time-series processing, and one to optimize the data precision of tensors inside CNNs. The first NAS explicitly targets the optimization of the most peculiar architectural parameters of TCNs, namely dilation, receptive field, and the number of features in each layer. Note that this is the first NAS that explicitly targets these networks. The second NAS instead focuses on finding the most efficient data format for a target CNN, at the granularity of the layer filter. Note that applying these two NASes in sequence allows an "application designer" to minimize the structure of the neural network employed, minimizing the number of operations or the memory usage of the network. The second topic described is the optimization of neural network deployment on edge devices. Importantly, exploiting edge platforms' scarce resources is critical for efficient NN execution on MCUs. To do so, I will introduce DORY (Deployment Oriented to memoRY), an automatic tool to deploy CNNs on low-cost MCUs. DORY, in different steps, can automatically manage the different levels of memory inside the MCU, offload the computation workload (i.e., the different layers of a neural network) to dedicated hardware accelerators, and automatically generate ANSI C code that orchestrates off- and on-chip transfers together with the computation phases. On top of this, I will introduce two optimized computation libraries that DORY can exploit to deploy TCNs and Transformers efficiently at the edge. I conclude the thesis with two different applications on bio-signal analysis, i.e., heart rate tracking and sEMG-based gesture recognition
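    The TCN parameters the first NAS searches over are linked by simple arithmetic: for stacked dilated causal convolutions, the receptive field grows as RF = 1 + Σ (k − 1)·d over the layers. A minimal sketch (the kernel sizes and dilation schedule below are illustrative, not the thesis's search space):

    ```python
    def tcn_receptive_field(kernel_sizes, dilations):
        """Receptive field of stacked dilated causal convolutions:
        RF = 1 + sum((k - 1) * d) over the layers."""
        return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))

    # Four layers, kernel size 3, dilations doubling per layer (a common TCN pattern)
    rf = tcn_receptive_field([3, 3, 3, 3], [1, 2, 4, 8])  # -> 31 samples
    ```

    A NAS over dilation and receptive field effectively trades this coverage of the input history against the number of operations per layer.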