
    Debt detection and debt recovery with advanced classification techniques

    University of Technology Sydney. Faculty of Engineering and Information Technology.
    My study is part of an ARC Linkage Project between the University of Technology, Sydney and Centrelink Australia, which aims to apply data mining techniques to optimise debt detection and debt recovery. A debt indicates an overpayment made by the government to a customer who is not entitled to that payment. In social security, an interaction between a customer and the government department is recorded as an activity. Each customer's activities occur sequentially over time and can therefore be regarded as a sequence. Based on the experience of debt detection experts, there are usually patterns in the activity sequences of customers who incur debts. The patterns indicating a customer's intention to be overpaid can thus be used to discover or predict debt occurrence. Developing debt detection and recovery over sequential transaction data, however, is challenging for the following reasons. (1) The volume of transaction data is vast, and the data are generated continuously as the business operates. (2) Transaction data are always time-stamped by the business system, and their temporal order is closely related to the business logic. (3) The patterns and relationships hidden behind the transaction data may be affected by many factors; they depend not only on business domain knowledge but also on seasonal and social factors outside the business. Based on a survey of existing methods for debt detection and recovery, this thesis studies data mining techniques to detect and recover debt in an adaptive and efficient fashion. Firstly, sequence data are used to model the evolution of customer activities, and sequential patterns generalise the trends of sequences. For long-running sequence classification problems, even if the sequences come from the same source, the sequential patterns may vary over time. An adaptive sequence classification model is built so that the classification adapts to this pattern variation. The model is applied to 15,931 activity sequences from Centrelink comprising 849,831 activity records. The experimental results show that the proposed adaptive sequence classification framework performs effectively on continuously arriving data. Secondly, a new technique for sequence classification using both positive and negative patterns is studied, which is able to capture the relationship between activity sequences and debt occurrences as well as the impact of oncoming activities on debt occurrence. The same dataset is used for evaluation. The results show that, when built with the same number of rules, the classifier using both positive and negative rules outperforms traditional classifiers with only positive rules in terms of recall under most conditions. Finally, decision trees are built to model debt recovery and to predict the response of customers contacted by phone. The customer contact strategy driven by this model aims to improve the efficiency of the debt recovery process. The model was used in a real-life pilot project for debt recovery at Centrelink, and the pilot results outperformed traditional random customer selection. In summary, this thesis studies debt detection and debt recovery in social security using data mining techniques. The proposed models are novel and effective, showing potential in real business.
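
    As an illustration of the positive-and-negative-rule classifier described above, the sketch below scores an activity sequence against two rule sets. The rule contents, activity names, and the simple vote-style scoring are assumptions for illustration only, not the thesis's actual patterns or classification model.

    # Minimal sketch: classifying an activity sequence with positive and
    # negative sequential rules (illustrative rules and scoring only).
    def contains_subsequence(sequence, pattern):
        """True if `pattern` occurs in `sequence` as an ordered (not
        necessarily contiguous) subsequence."""
        it = iter(sequence)
        return all(any(item == p for item in it) for p in pattern)

    # Hypothetical rules: positive rules indicate a likely debt,
    # negative rules indicate a likely non-debt.
    POSITIVE_RULES = [("claim_lodged", "income_not_declared"),
                      ("review_missed", "payment_continued")]
    NEGATIVE_RULES = [("income_declared", "payment_adjusted")]

    def predict_debt(sequence, threshold=1):
        """Predict debt when matched positive rules outweigh negative ones."""
        pos = sum(contains_subsequence(sequence, r) for r in POSITIVE_RULES)
        neg = sum(contains_subsequence(sequence, r) for r in NEGATIVE_RULES)
        return (pos - neg) >= threshold

    # A customer's time-ordered activity sequence.
    activities = ["claim_lodged", "review_missed", "payment_continued"]
    print(predict_debt(activities))  # True under these illustrative rules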

    Neural Networks based Smart e-Health Application for the Prediction of Tuberculosis using Serverless Computing.

    The convergence of the Internet of Things (IoT) with e-health records is creating a new era of advancements in the diagnosis and treatment of disease, which is reshaping the modern landscape of healthcare. In this paper, we propose a neural network-based smart e-health application for the prediction of Tuberculosis (TB) using serverless computing. The performance of various Convolutional Neural Network (CNN) architectures using transfer learning is evaluated to demonstrate that this technique holds promise for enhancing the capabilities of IoT and e-health systems in predicting the manifestation of TB in the lungs. The work involves training, validating, and comparing the DenseNet-201, VGG-19, and MobileNet-V3-Small architectures based on performance metrics such as test binary accuracy, test loss, intersection over union, precision, recall, and F1 score. The findings hint at the potential of integrating these advanced Machine Learning (ML) models within IoT and e-health frameworks, thereby paving the way for more comprehensive and data-driven approaches to smart healthcare. The best-performing model, VGG-19, is selected for different deployment strategies using server-based and serverless environments. We used JMeter to measure the performance of the deployed model, including the average response rate, throughput, and error rate. This study provides valuable insights into the selection and deployment of ML models in healthcare, highlighting the advantages and challenges of different deployment options. Furthermore, it allows future studies to integrate such models into IoT and e-health systems, which could enhance healthcare outcomes through more informed and timely treatments.
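
    To make the transfer-learning comparison concrete, the sketch below builds a binary TB classifier on a pretrained backbone with TensorFlow/Keras. The input size, the frozen backbone, the single sigmoid head, and the training settings are assumptions; the paper's exact pipeline and hyperparameters may differ.

    # Illustrative transfer-learning sketch for binary TB classification.
    import tensorflow as tf

    def build_classifier(backbone_name="VGG19", input_shape=(224, 224, 3)):
        backbones = {
            "DenseNet201": tf.keras.applications.DenseNet201,
            "VGG19": tf.keras.applications.VGG19,
            "MobileNetV3Small": tf.keras.applications.MobileNetV3Small,
        }
        # Pretrained ImageNet weights; the feature extractor is frozen.
        base = backbones[backbone_name](include_top=False, weights="imagenet",
                                        input_shape=input_shape, pooling="avg")
        base.trainable = False
        outputs = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
        model = tf.keras.Model(base.input, outputs)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["binary_accuracy",
                               tf.keras.metrics.Precision(),
                               tf.keras.metrics.Recall()])
        return model

    model = build_classifier("VGG19")
    # model.fit(train_ds, validation_data=val_ds, epochs=10)
    # `train_ds` and `val_ds` are hypothetical tf.data datasets of chest images.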

    Influence of ferromagnetic spin waves on persistent currents in one-dimensional mesoscopic rings

    The influence of the electron-magnon and the electron-phonon interactions on the persistent current in a one-dimensional mesoscopic ring is studied. We show that, due to the electron-magnon interaction, the amplitude of the persistent current is exponentially reduced compared with the free case. Two features occur in the presence of an electron-phonon interaction. For the normal state of electrons, the persistent current is weakened by the Debye-Waller factor. Considering the so-called Peierls distortion, we show that the Peierls instability significantly suppresses the amplitude of the persistent current (i.e., the oscillation with respect to the flux), so that the persistent current becomes practically undetectable in the case of a wide-gap Peierls material. © 1996 The American Physical Society.
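
    For reference, the persistent current discussed here is conventionally defined via the flux derivative of the free energy and is periodic in the flux quantum; the relations below are standard textbook expressions with an assumed notation for the amplitude reduction, not formulas taken from the paper.

    % Persistent current in a 1D ring threaded by magnetic flux \Phi:
    I(\Phi) = -\frac{\partial F}{\partial \Phi},
    \qquad I(\Phi + \Phi_0) = I(\Phi), \qquad \Phi_0 = \frac{h}{e}.
    % Schematic interaction-induced amplitude reduction (W is an assumed
    % Debye-Waller-like exponent):
    I \sim I_{\mathrm{free}}\, e^{-W}.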

    State-independent error-disturbance trade-off for measurement operators

    Computation Energy Efficiency Maximization for Intelligent Reflective Surface-Aided Wireless Powered Mobile Edge Computing

    A wide variety of Mobile Devices (MDs) are adopted in Internet of Things (IoT) environments, resulting in a dramatic increase in the volume of task data and in greenhouse gas emissions. However, due to the limited battery power and computing resources of MDs, it is critical to process more data with less energy. This paper studies a Wireless Power Transfer-based Mobile Edge Computing (WPT-MEC) network system assisted by an Intelligent Reflective Surface (IRS) to enhance communication performance while improving the battery life of the MDs. In order to maximize the Computation Energy Efficiency (CEE) of the system and reduce the carbon footprint of the MEC server, we jointly optimize the CPU frequencies of the MDs and the MEC server, the transmit power of the Power Beacon (PB), the processing time of the MEC server, the offloading time and energy harvesting time of the MDs, the local processing time and offloading power of the MDs, and the phase shift coefficient matrix of the IRS. Moreover, we transform this joint optimization problem into a fractional programming problem. We then propose the Dinkelbach Iterative Algorithm with Gradient Updates (DIA-GU) to solve it effectively. With the help of convex optimization theory, we obtain closed-form solutions, revealing the correlations between different variables. Compared to other algorithms, the DIA-GU algorithm not only exhibits superior performance in enhancing the system's CEE but also demonstrates significant reductions in carbon emissions.
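
    Because the CEE is a ratio of computed bits to consumed energy, the Dinkelbach method is the standard route for such fractional programs. The sketch below runs the classic Dinkelbach iteration on a toy one-variable objective; the functions, bounds, and inner solver are placeholders, not the paper's DIA-GU algorithm or system model.

    # Classic Dinkelbach iteration for maximising a ratio f(x)/g(x).
    import numpy as np
    from scipy.optimize import minimize_scalar

    def f(x):   # computed bits as a function of one resource variable (toy)
        return np.log2(1.0 + 10.0 * x)

    def g(x):   # energy consumed (toy)
        return 0.1 + x

    def dinkelbach(lo=0.0, hi=1.0, tol=1e-6, max_iter=100):
        lam = 0.0                        # current estimate of the optimal ratio
        for _ in range(max_iter):
            # Inner problem: maximise f(x) - lam * g(x) over the feasible set.
            res = minimize_scalar(lambda x: -(f(x) - lam * g(x)),
                                  bounds=(lo, hi), method="bounded")
            x_star = res.x
            if abs(f(x_star) - lam * g(x_star)) < tol:   # optimality condition
                break
            lam = f(x_star) / g(x_star)  # ratio update
        return x_star, lam

    x_opt, cee = dinkelbach()
    print(f"x* = {x_opt:.3f}, CEE = {cee:.3f} bits/J")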

    Load Balancing in SDN-Enabled WSNs Toward 6G IoE: Partial Cluster Migration Approach

    The vision for the sixth-generation (6G) network involves the integration of communication and sensing capabilities in the Internet of Everything (IoE), enabling broader interconnection among the devices of distributed Wireless Sensor Networks (WSNs). Moreover, merging Software-Defined Networking (SDN) policies into 6G IoE-based WSNs, i.e., SDN-enabled WSNs, improves the network's reliability and scalability via the integration of sensing and communication (ISAC). Such a network consists of multiple controllers that deploy control services closer to the data plane for a speedy response through control messages. However, controller placement and load balancing are major challenges in SDN-enabled WSNs due to the dynamic nature of data plane devices. To address the controller placement problem, an optimal number of controllers is identified using the articulation point method. Furthermore, a nature-inspired cheetah optimization algorithm is proposed for the efficient placement of controllers, considering latency and synchronization overhead. Moreover, a Load-Sharing-based Control Node Migration (LS-CNM) method is proposed to address controller load balancing dynamically. LS-CNM identifies the overloaded controller and a corresponding assistant controller with low utilization; a suitable control node is then chosen for partial migration in accordance with the load of the assistant controller. Subsequently, LS-CNM ensures dynamic load balancing by considering threshold loads, intelligent assistant controller selection, and real-time monitoring for effective partial load migration. The proposed LS-CNM scheme is executed on the Open Network Operating System (ONOS) controller, and the whole network is simulated in the ns-3 simulator. The simulation results show that the proposed LS-CNM outperforms the state of the art in terms of the frequency of controller overload, the load variation of each controller, round-trip time, and average delay.
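
    As a concrete illustration of the articulation point step, the sketch below finds the cut vertices of a small hypothetical WSN topology with NetworkX; the topology and the rule of one candidate controller site per articulation point are assumptions for illustration, not the paper's full placement procedure, which further applies cheetah optimization.

    # Illustrative articulation-point analysis of a WSN topology.
    import networkx as nx

    # Hypothetical topology: nodes are sensor devices, edges are wireless links.
    topology = nx.Graph([
        (1, 2), (2, 3), (3, 1),      # cluster A
        (3, 4),                      # bridge link
        (4, 5), (5, 6), (6, 4),      # cluster B
        (6, 7), (7, 8), (8, 6),      # cluster C
    ])

    # Articulation points are nodes whose removal disconnects the network,
    # i.e. structurally critical positions for hosting control services.
    candidates = sorted(nx.articulation_points(topology))
    print("candidate controller sites:", candidates)          # [3, 4, 6]
    print("suggested number of controllers:", len(candidates))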

    Magnetoresistance in La- and Ca-doped YBa2Cu3O7–δ

    We studied the microstructural, electronic, and magnetic properties of La-doped and La- and Ca-codoped YBa2Cu3O7−δ (YBCO). The superconducting transition temperature remains unchanged for La doping up to 10%. Competition between electrons and holons is assumed based on the variation of Tc0 under La and Ca codoping in YBCO. A magnetoresistance (MR) effect of about 8% is clearly observed near the critical temperature and is independent of the La content in La-doped YBCO. The MR increases up to about 40% with the incorporation of Ca in La-doped YBCO. We present possible explanations for the magnetoresistance effect in polycrystalline samples based on the microstructure and the increase of oxygen vacancies at the grain-boundary interfaces. © 2006 American Institute of Physics.
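
    For context, the MR percentages quoted above follow the conventional definition of the magnetoresistance ratio (standard notation, not reproduced from the paper):

    \mathrm{MR}(H) = \frac{\rho(H) - \rho(0)}{\rho(0)} \times 100\%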

    Breath-hold FSE for accurate imaging of myocardial and hepatic R2

    Session 21: Hepatic Storage Disease - Oral presentation
    MRI provides a means to non-invasively assess tissue iron concentration by exploiting the paramagnetic effects of iron on T2 or T2*. The most widely used method, T2* imaging, is sensitive to non-iron-related magnetic field (B0) inhomogeneities, which can confound T2* measurements within the whole heart and liver. An alternative method is T2 imaging, but T2 measurements are generally performed during free breathing with respiratory gating because of their low data acquisition efficiency. The purpose of this study was to develop a breath-hold fast spin echo (FSE) sequence for fast and accurate imaging of myocardial and hepatic T2.
    The 17th Scientific Meeting & Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM), Honolulu, HI, 18-24 April 2009. In Proceedings of the ISMRM 17th Scientific Meeting & Exhibition, 2009, p. 20
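
    For reference, R2 relaxometry rests on the standard monoexponential decay of the signal with echo time TE (textbook form, not taken from the abstract), so R2, and hence tissue iron burden, is estimated by fitting this decay over several echo times:

    S(\mathrm{TE}) = S_0 \, e^{-\mathrm{TE}\,R_2}, \qquad R_2 = \frac{1}{T_2}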

    Data Driven Stochastic Game Network-Based Smart Home Monitoring System Using IoT-Enabled Edge Computing Environments

    Edge computing plays a crucial role in the processing of Consumer Internet of Things (IoT)-enabled latency-sensitive applications. In smart homes, dynamic action strategies based on multiple IoT objects with edge processing can be the best solution for handling adverse events. To address these challenges, the use of a Stochastic Game Net (SGN), which models IoT devices as players with predefined action sets, is one feasible solution. In this context, an edge-assisted, IoT-enabled, data-driven SGN model is proposed to handle various events in the smart home environment. Stochastic Petri Nets (SPNs) and game theory are integrated into our proposed model to build data-driven dynamic SGNs for the smart home environment. Dynamic SGNs for a comprehensive smart home system are generated in real time through transitions based on sensor data, enhancing interoperability and scalability in smart home environments. We use the NetLogo tool and state-of-the-art smart home sensor datasets to generate dynamic SGNs for various events. Experimental results demonstrate the effectiveness of the proposed model within a data-driven smart home environment and show that the present work significantly outperforms other state-of-the-art techniques in terms of decision-making at the edge layer. Moreover, using the proposed system, the energy efficiency increased to around 39 mJ/K nodes, and the average temporal delay for different events was reduced significantly.
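
    To give a flavour of the stochastic-game-style dynamics, the sketch below samples the next home state from a transition distribution given a joint action chosen by the devices (players) from a sensor reading. The states, actions, and probabilities are invented for illustration and do not reproduce the paper's SGN construction.

    # Toy stochastic-game-style transition for a smart home event.
    import random

    # Hypothetical table: (state, joint_action) -> {next_state: probability}
    TRANSITIONS = {
        ("smoke_detected", ("alarm_on", "vent_open")): {"safe": 0.8, "fire_spread": 0.2},
        ("smoke_detected", ("alarm_off", "vent_closed")): {"safe": 0.2, "fire_spread": 0.8},
    }

    def choose_actions(sensor_reading):
        """Each device (player) picks its action from the shared sensor reading."""
        if sensor_reading == "smoke_detected":
            return ("alarm_on", "vent_open")      # assumed cooperative policy
        return ("alarm_off", "vent_closed")

    def step(state):
        """Sample the next state of the home given the players' joint action."""
        actions = choose_actions(state)
        next_states, probs = zip(*TRANSITIONS[(state, actions)].items())
        return random.choices(next_states, weights=probs, k=1)[0]

    print(step("smoke_detected"))   # usually "safe" under the cooperative policy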

    Blockchain and Reinforcement Neural Network for Trusted Cloud-Enabled IoT Network

    The rapid integration of Internet of Things (IoT) services and applications across various sectors is primarily driven by their ability to process real-time data and create intelligent environments through artificial intelligence for service consumers. However, the security and privacy of data have emerged as significant threats to consumers within IoT networks. Issues such as node tampering, phishing attacks, malicious code injection, malware threats, and the potential for Denial of Service (DoS) attacks pose serious risks to the safety and confidentiality of information. To address these problems, we propose an integrated autonomous IoT network within a cloud architecture, employing Blockchain technology to strengthen network security. The primary goal of this approach is to establish a Heterogeneous Autonomous Network (HAN), wherein data is processed and transmitted through the cloud architecture. This network is integrated with a Reinforced Neural Network (RNN) called ClouD_RNN, specifically designed to classify the data perceived and collected by sensors. Further, the collected data is continuously monitored by the autonomous network and classified to detect faults and malicious activity. In addition, network security is enhanced by the Blockchain Adaptive Windowing Meta Optimization Protocol (BAWMOP). Extensive experimental results validate that our proposed approach significantly outperforms state-of-the-art approaches in terms of throughput, accuracy, end-to-end delay, data delivery ratio, network security, and energy efficiency.
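
    As a generic stand-in for the sensor-data classification stage, the sketch below defines a plain Keras LSTM classifier over fixed-length windows of sensor readings; the window length, feature count, class labels, and architecture are assumptions and are not the paper's ClouD_RNN or BAWMOP design.

    # Generic recurrent classifier for windows of IoT sensor readings.
    import tensorflow as tf

    SEQ_LEN = 50        # assumed window length (time steps)
    NUM_FEATURES = 8    # assumed number of sensor channels
    NUM_CLASSES = 3     # e.g. normal, fault, malicious (assumed labels)

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(SEQ_LEN, NUM_FEATURES)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(windows, labels, epochs=5)
    # `windows` and `labels` are hypothetical arrays of shape
    # (N, SEQ_LEN, NUM_FEATURES) and (N,), respectively.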