
    Measurement of body temperature and heart rate for the development of healthcare system using IOT platform

According to the World Health Organization (WHO), health can be defined as a state of complete mental, physical and social well-being and not merely the absence of disease or infirmity [1]. Having a healthy body is the greatest blessing of life; healthcare, the maintenance or improvement of health through the diagnosis, prevention and treatment of injury, disease, illness and other mental and physical impairments, is therefore required to maintain or improve health. The novel paradigm of the Internet of Things (IoT) has the potential to transform modern healthcare and improve the well-being of society as a whole [2]. IoT is a concept that aims to connect…
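The abstract above does not specify how the readings reach the IoT platform, so the following is only a minimal sketch of one plausible route: packaging a body-temperature and heart-rate sample as JSON and sending it to the platform over HTTP. The endpoint URL, the field names and the placeholder sensor values are all invented for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Locale;

public class VitalSignsPublisher {
    // Hypothetical endpoint of the IoT platform; not specified in the abstract.
    private static final String ENDPOINT = "https://iot.example.com/api/v1/vitals";

    public static void main(String[] args) throws Exception {
        // Placeholder values standing in for real sensor readings
        // (e.g. a digital temperature sensor and a pulse sensor).
        double bodyTemperatureC = 36.8;
        int heartRateBpm = 72;

        String json = String.format(Locale.ROOT,
            "{\"patientId\":\"demo-001\",\"temperatureC\":%.1f,\"heartRateBpm\":%d,\"timestamp\":%d}",
            bodyTemperatureC, heartRateBpm, System.currentTimeMillis());

        // POST the reading as JSON to the platform's ingestion endpoint.
        HttpRequest request = HttpRequest.newBuilder(URI.create(ENDPOINT))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Platform responded with status " + response.statusCode());
    }
}
```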

    Episodic Memory for Cognitive Robots in Dynamic, Unstructured Environments

Elements from cognitive psychology have been applied in a variety of ways to artificial intelligence. One of the lesser-studied areas is how episodic memory can assist learning in cognitive robots. In this dissertation, we investigate how episodic memories can assist a cognitive robot in learning which behaviours are suited to different contexts. We demonstrate the learning system in a domestic robot designed to assist human occupants of a house. People are generally good at anticipating the intentions of others. When around people we are familiar with, we can predict what they are likely to do next, based on what we have observed them doing before. Our ability to record and recall different types of events, and to know what is relevant to those types of events, is one reason our cognition is so powerful. For a robot to assist rather than hinder a person, it too requires this functionality. This work makes three main contributions. Since episodic memory requires context, we first propose a novel approach to segmenting a metric map into a collection of rooms and corridors. Our approach is based on identifying critical points on a Generalised Voronoi Diagram and creating regions around these critical points. Our results show state-of-the-art accuracy, with 98% precision and 96% recall. Our second contribution is our approach to event recall in episodic memory. We take a novel approach in which events in memory are typed and a unique recall policy is learned for each type of event. These policies are learned incrementally, using only information presented to the agent and without any need to take the agent offline. Ripple Down Rules provide a suitable learning mechanism. Our results show that, when trained appropriately, we achieve near-perfect recall of episodes that match an observation. Finally, we propose a novel approach to how recall policies are trained. Commonly, an RDR policy is trained with a human guide, where the instructor has the option to discard information that is irrelevant to the situation. However, we show that by using Inductive Logic Programming it is possible to train a recall policy for a given type of event after only a few observations of that type of event.
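As a rough illustration of the second contribution (typed events with a per-type recall policy), the sketch below stores episodes tagged with an event type and recalls only those whose policy-selected attributes match a new observation. The event types, attribute names and hand-coded policies are invented; in the thesis the policies are learned with Ripple Down Rules and Inductive Logic Programming rather than written by hand.

```java
import java.util.*;

/** Minimal illustration: episodes are typed, and each type has its own recall policy
 *  that decides which attributes of an observation must match a stored episode. */
public class EpisodicMemoryDemo {

    record Episode(String type, Map<String, String> attributes) {}

    static class EpisodicMemory {
        private final List<Episode> episodes = new ArrayList<>();
        // Per-type recall policy: which attribute names are relevant when matching.
        // Hand-coded here purely for illustration; learned in the thesis.
        private final Map<String, Set<String>> recallPolicies = new HashMap<>();

        void store(Episode e) { episodes.add(e); }

        void setPolicy(String type, Set<String> relevantAttributes) {
            recallPolicies.put(type, relevantAttributes);
        }

        List<Episode> recall(String type, Map<String, String> observation) {
            Set<String> relevant = recallPolicies.getOrDefault(type, observation.keySet());
            List<Episode> matches = new ArrayList<>();
            for (Episode e : episodes) {
                if (!e.type().equals(type)) continue;
                boolean ok = true;
                for (String attr : relevant) {
                    if (!Objects.equals(e.attributes().get(attr), observation.get(attr))) {
                        ok = false;
                        break;
                    }
                }
                if (ok) matches.add(e);
            }
            return matches;
        }
    }

    public static void main(String[] args) {
        EpisodicMemory memory = new EpisodicMemory();
        // The policy for this invented event type ignores the time attribute.
        memory.setPolicy("kettle-used", Set.of("person", "room"));
        memory.store(new Episode("kettle-used",
                Map.of("person", "Alice", "room", "kitchen", "time", "07:30")));

        // A new observation of Alice in the kitchen recalls the earlier episode,
        // which the robot could use to anticipate that the kettle may be needed.
        List<Episode> recalled = memory.recall("kettle-used",
                Map.of("person", "Alice", "room", "kitchen", "time", "18:10"));
        System.out.println("Recalled " + recalled.size() + " matching episode(s)");
    }
}
```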

    Dynamic Rule Covering Classification in Data Mining with Cyber Security Phishing Application

Data mining is the process of discovering useful patterns from datasets using intelligent techniques to help users make certain decisions. A typical data mining task is classification, which involves predicting a target variable, known as the class, in previously unseen data based on models learnt from an input dataset. Covering is a well-known classification approach that derives models with If-Then rules. Covering methods, such as PRISM, have a predictive performance competitive with other classical classification techniques such as greedy, decision tree and associative classification. Covering models are therefore appropriate decision-making tools, and users favour them when carrying out decisions. Despite the use of the Covering approach in data processing for different classification applications, it is also acknowledged that this approach suffers from a noticeable drawback: it induces massive numbers of rules, making the resulting model large and unmanageable by users. This issue is attributed to the way Covering techniques induce the rules: they keep adding items to the rule's body, despite the limited data coverage (the number of training instances that the rule classifies), until the rule reaches zero error. This excessive learning overfits the training dataset and also limits the applicability of Covering models in decision making, because managers normally prefer a summarised set of knowledge that they are able to control and comprehend rather than a high-maintenance model. In practice, there should be a trade-off between the number of rules offered by a classification model and its predictive performance. Another issue associated with Covering models is the overlapping of training data among the rules, which happens when a rule's classified data are discarded during the rule discovery phase. Unfortunately, the impact of a rule's removed data on other potential rules is not considered by this approach. However, when the training data linked with a rule are removed, both the frequency and the rank of other rules' items that appeared in the removed data change. The impacted rules should maintain their true rank and frequency in a dynamic manner during the rule discovery phase, rather than keeping the frequency initially computed from the original input dataset. In response to the aforementioned issues, a new dynamic learning technique based on Covering and rule induction, which we call Enhanced Dynamic Rule Induction (eDRI), is developed. eDRI has been implemented in Java and embedded in the WEKA machine learning tool. The developed algorithm incrementally discovers the rules using primarily frequency and rule strength thresholds. In practice, these thresholds limit the search space for both items and potential rules by discarding any with insufficient data representation as early as possible, resulting in an efficient training phase. More importantly, eDRI substantially cuts down the number of training-example scans by continuously updating potential rules' frequency and strength parameters in a dynamic manner whenever a rule is inserted into the classifier. In particular, for each derived rule, eDRI adjusts on the fly the frequencies and ranks of the remaining potential rules' items, specifically those that appeared within the deleted training instances of the derived rule. This gives a more realistic model with minimal rule redundancy, and makes the process of rule induction efficient and dynamic rather than static.

Moreover, the proposed technique minimises the classifier's number of rules at preliminary stages by stopping learning when a rule does not meet the rule strength threshold, thereby minimising overfitting and ensuring a manageable classifier. Lastly, eDRI's prediction procedure not only prioritises the best-ranked rule for class forecasting of test data but also restricts the use of the default class rule, thus reducing the number of misclassifications. The aforementioned improvements guarantee classification models of smaller size that do not overfit the training dataset, while maintaining their predictive performance. The models derived by eDRI particularly benefit users taking key business decisions, since they provide a rich knowledge base to support decision making: their predictive accuracy is high, and they are easy to understand, controllable and robust, i.e. flexible enough to be amended without drastic change. eDRI's applicability has been evaluated on the hard problem of phishing detection. Phishing normally involves creating a fake, well-designed website that closely imitates an existing, trusted business website, aiming to trick users and illegally obtain their credentials, such as login information, in order to access their financial assets. Experimental results against large phishing datasets revealed that eDRI is highly useful as an anti-phishing tool, since it derived models of manageable size compared with other traditional techniques without hindering classification performance. Further evaluation on several other classification datasets from different domains, obtained from the University of California Data Repository, corroborated eDRI's competitive performance with respect to accuracy, size of the knowledge representation, training time and item space reduction. This makes the proposed technique not only efficient in inducing rules but also effective.
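The following single-condition sketch illustrates the two ideas the abstract emphasises: frequency and rule-strength thresholds that prune weak candidates early, and recomputing candidate statistics from the remaining training rows after each accepted rule removes the rows it covers. It is a toy illustration of dynamic covering, not the eDRI algorithm itself; the thresholds, attribute names and tiny dataset are invented.

```java
import java.util.*;

public class DynamicCoveringSketch {

    record Row(Map<String, String> attrs, String label) {}

    record Rule(String attribute, String value, String label, double strength, int freq) {
        boolean covers(Row r) { return value.equals(r.attrs().get(attribute)); }
        @Override public String toString() {
            return String.format(Locale.ROOT, "IF %s=%s THEN %s  (freq=%d, strength=%.2f)",
                    attribute, value, label, freq, strength);
        }
    }

    /** Greedy covering with single-condition rules. Candidate statistics are
     *  recomputed from the remaining rows after every accepted rule, which is the
     *  "dynamic" updating of item frequency and rank described in the abstract. */
    static List<Rule> induce(List<Row> data, int minFreq, double minStrength) {
        List<Rule> rules = new ArrayList<>();
        List<Row> remaining = new ArrayList<>(data);

        while (!remaining.isEmpty()) {
            Rule best = null;
            for (Row seed : remaining) {
                for (Map.Entry<String, String> item : seed.attrs().entrySet()) {
                    int covered = 0, correct = 0;
                    for (Row r : remaining) {
                        if (item.getValue().equals(r.attrs().get(item.getKey()))) {
                            covered++;
                            if (r.label().equals(seed.label())) correct++;
                        }
                    }
                    double strength = covered == 0 ? 0 : (double) correct / covered;
                    if (correct >= minFreq && strength >= minStrength
                            && (best == null || strength > best.strength()
                                || (strength == best.strength() && correct > best.freq()))) {
                        best = new Rule(item.getKey(), item.getValue(), seed.label(), strength, correct);
                    }
                }
            }
            if (best == null) break;        // no candidate meets the thresholds: stop early
            rules.add(best);
            Rule accepted = best;
            remaining.removeIf(accepted::covers); // discard covered rows; next pass recounts
        }
        return rules;
    }

    public static void main(String[] args) {
        List<Row> data = List.of(
            new Row(Map.of("https", "no",  "popup", "yes"), "phishing"),
            new Row(Map.of("https", "no",  "popup", "no"),  "phishing"),
            new Row(Map.of("https", "yes", "popup", "no"),  "legitimate"),
            new Row(Map.of("https", "yes", "popup", "yes"), "legitimate"));
        induce(data, 2, 0.8).forEach(System.out::println);
    }
}
```

On the toy dataset this yields one rule per class, with the second rule's frequency and strength computed only on the rows left after the first rule's coverage has been removed.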

    Internet banking fraud detection using prudent analysis

The threat posed by cybercrime to individuals, banks and other online financial service providers is real and serious. Through phishing, unsuspecting victims' Internet banking usernames and passwords are stolen and their accounts robbed. In addressing this issue, commercial banks and other financial institutions use a generically similar approach in their Internet banking fraud detection systems. This common approach involves the use of a rule-based system combined with an Artificial Neural Network (ANN). The approach used by commercial banks has limitations that affect its efficiency in curbing new fraudulent transactions. Firstly, the banks' security systems are focused on preventing unauthorised entry and have no way of conclusively detecting an imposter using stolen credentials. Also, updating these systems is slow, and their maintenance is labour-intensive and ultimately costly to the business. A major limitation of these rule bases is brittleness: an inability to recognise the limits of their knowledge. To address the limitations highlighted above, this thesis proposes, develops and evaluates a new system for Internet banking fraud detection using Prudence Analysis, a technique through which a system can detect when its knowledge is insufficient for a given case. Specifically, the thesis proposes the following contributions: …
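As a loose sketch of the prudence idea (a system recognising the limits of its own knowledge), the rule below records the range of transaction amounts and the set of countries it saw during training, and refers any case outside that envelope to a human analyst instead of classifying it. The attributes, thresholds and verdicts are invented for illustration and are not taken from the thesis.

```java
import java.util.*;

/** Illustrative prudent rule: alongside its conclusion it keeps the range of
 *  attribute values it was built from, and flags any case outside that range
 *  instead of silently classifying it. */
public class PrudentRuleSketch {

    enum Verdict { LEGITIMATE, FRAUDULENT, REFER_TO_ANALYST }

    static class PrudentRule {
        private double minAmountSeen = Double.POSITIVE_INFINITY;
        private double maxAmountSeen = Double.NEGATIVE_INFINITY;
        private final Set<String> countriesSeen = new HashSet<>();

        /** Record the evidence the rule's knowledge is based on. */
        void observeTrainingCase(double amount, String country) {
            minAmountSeen = Math.min(minAmountSeen, amount);
            maxAmountSeen = Math.max(maxAmountSeen, amount);
            countriesSeen.add(country);
        }

        Verdict classify(double amount, String country) {
            // Prudence check: if the case lies outside everything seen in training,
            // admit the limit of the rule's knowledge rather than guess.
            boolean novelAmount = amount < minAmountSeen || amount > maxAmountSeen;
            boolean novelCountry = !countriesSeen.contains(country);
            if (novelAmount || novelCountry) return Verdict.REFER_TO_ANALYST;

            // A toy rule body standing in for the bank's actual rule base.
            return amount > 5_000 ? Verdict.FRAUDULENT : Verdict.LEGITIMATE;
        }
    }

    public static void main(String[] args) {
        PrudentRule rule = new PrudentRule();
        rule.observeTrainingCase(120.0, "AU");
        rule.observeTrainingCase(4_300.0, "AU");
        rule.observeTrainingCase(9_800.0, "NZ");

        System.out.println(rule.classify(250.0, "AU"));    // LEGITIMATE
        System.out.println(rule.classify(8_000.0, "NZ"));  // FRAUDULENT
        System.out.println(rule.classify(50_000.0, "AU")); // REFER_TO_ANALYST (amount never seen)
    }
}
```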

    A study on machine learning algorithms for fall detection and movement classification

Falls among the elderly are an important health issue. Fall detection and movement tracking techniques are therefore instrumental in dealing with this issue. This thesis responds to the challenge of classifying different movement types as part of a system designed to fulfil the need for a wearable device that collects data for fall and near-fall analysis. Four different fall activities (forward, backward, left and right), three normal activities (standing, walking and lying down) and near-fall situations are identified and detected. Different machine learning algorithms are compared and the best one is used for the real-time classification. The comparison is made using the Waikato Environment for Knowledge Analysis (WEKA). The system also has the ability to adapt to the different gaits of different people. A feature selection algorithm is also introduced to reduce the number of features required for the classification problem.
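A comparison of classifiers in WEKA can be scripted in a few lines of Java; the sketch below runs 10-fold cross-validation over four common learners. The dataset file name is a placeholder, and the abstract does not say which algorithms were actually compared, so the four chosen here are examples only. WEKA's attribute-selection package could similarly stand in for the feature selection step.

```java
import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.functions.SMO;
import weka.classifiers.lazy.IBk;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

/** Compares a few standard WEKA classifiers with 10-fold cross-validation.
 *  "movements.arff" is a placeholder for a dataset of labelled movement features
 *  (fall directions, normal activities, near-falls). */
public class MovementClassifierComparison {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("movements.arff");
        data.setClassIndex(data.numAttributes() - 1); // last attribute is the activity label

        Classifier[] candidates = { new J48(), new NaiveBayes(), new IBk(3), new SMO() };
        for (Classifier c : candidates) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(c, data, 10, new Random(1));
            System.out.printf("%-12s accuracy: %5.2f%%%n",
                    c.getClass().getSimpleName(), eval.pctCorrect());
        }
    }
}
```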

    The use of proof plans in tactic synthesis

We undertake a programme of tactic synthesis. We first formalize the notion of a tactic as a rewrite rule, then give a correctness criterion for this by means of a reflection mechanism in the constructive type theory OYSTER. We further formalize the notion of a tactic specification, given as a synthesis goal and a decidability goal. We use a proof planner, CLAM, to guide the search for inductive proofs of these, and are able to successfully synthesize several tactics in this fashion. This involves two extensions to existing methods: context-sensitive rewriting and higher-order wave rules. Further, we show that from a proof of the decidability goal one may compile a pseudo-tactic to a Prolog program which may be run to efficiently simulate the input/output behaviour of the synthetic tactic.
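To make the phrase "a tactic as a rewrite rule" concrete, the toy sketch below represents a tactic as a left-hand-side pattern and a right-hand-side template: the tactic rewrites a goal term when the pattern matches and reports failure otherwise. The term language and the rule plus(0, X) -> X are invented for the example; the thesis works inside the OYSTER type theory with the CLAM proof planner, none of which is reproduced here.

```java
import java.util.*;

public class RewriteTacticSketch {

    /** Simple first-order terms; variables are functors prefixed with "?". */
    record Term(String functor, List<Term> args) {
        static Term of(String functor, Term... args) { return new Term(functor, List.of(args)); }
        static Term var(String name) { return new Term("?" + name, List.of()); }
        boolean isVar() { return functor.startsWith("?"); }
        @Override public String toString() {
            return args.isEmpty() ? functor
                 : functor + "(" + String.join(", ", args.stream().map(Term::toString).toList()) + ")";
        }
    }

    /** Match a pattern against a term, binding pattern variables; empty if no match. */
    static Optional<Map<String, Term>> match(Term pattern, Term term, Map<String, Term> bindings) {
        if (pattern.isVar()) {
            bindings.put(pattern.functor(), term);
            return Optional.of(bindings);
        }
        if (!pattern.functor().equals(term.functor()) || pattern.args().size() != term.args().size())
            return Optional.empty();
        for (int i = 0; i < pattern.args().size(); i++) {
            if (match(pattern.args().get(i), term.args().get(i), bindings).isEmpty())
                return Optional.empty();
        }
        return Optional.of(bindings);
    }

    /** Replace bound variables in a template with their matched subterms. */
    static Term substitute(Term t, Map<String, Term> bindings) {
        if (t.isVar()) return bindings.getOrDefault(t.functor(), t);
        return new Term(t.functor(), t.args().stream().map(a -> substitute(a, bindings)).toList());
    }

    /** The "tactic": apply the rewrite rule lhs -> rhs at the root of the goal. */
    static Optional<Term> rewriteTactic(Term lhs, Term rhs, Term goal) {
        return match(lhs, goal, new HashMap<>()).map(b -> substitute(rhs, b));
    }

    public static void main(String[] args) {
        Term lhs  = Term.of("plus", Term.of("0"), Term.var("X"));
        Term rhs  = Term.var("X");
        Term goal = Term.of("plus", Term.of("0"), Term.of("s", Term.of("0")));
        // Prints "s(0)": the tactic rewrites plus(0, s(0)) to s(0).
        System.out.println(rewriteTactic(lhs, rhs, goal).map(Term::toString).orElse("tactic not applicable"));
    }
}
```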