13,226 research outputs found

    Medical imaging analysis with artificial neural networks

    Given that neural networks have been widely reported in the medical imaging research community, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. The concluding section highlights comparisons among many neural network applications to provide a global view of computational intelligence with neural networks in medical imaging.
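
    As a hedged illustration of point (i) above, applying a known fixed-structure network to a medical imaging task, the following minimal Python/PyTorch sketch defines a small encoder-decoder segmenter and one training step. The layer sizes, data shapes and training loop are illustrative assumptions, not a model taken from the surveyed literature.

        # Minimal sketch, assuming PyTorch is available; all sizes are illustrative.
        import torch
        import torch.nn as nn

        class TinySegmenter(nn.Module):
            """A small fixed-structure encoder-decoder for per-pixel mask prediction."""
            def __init__(self):
                super().__init__()
                self.encode = nn.Sequential(
                    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                              # halve the resolution
                    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
                )
                self.decode = nn.Sequential(
                    nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
                    nn.Conv2d(8, 1, 1),                           # per-pixel logit
                )

            def forward(self, x):
                return self.decode(self.encode(x))

        model = TinySegmenter()
        optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.BCEWithLogitsLoss()

        # One illustrative training step on random stand-in data (1-channel 64x64 scans).
        images = torch.randn(4, 1, 64, 64)
        masks = torch.randint(0, 2, (4, 1, 64, 64)).float()
        optimiser.zero_grad()
        loss = loss_fn(model(images), masks)
        loss.backward()
        optimiser.step()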

    How Europe can deliver: Optimising the division of competences among the EU and its member states

    This study aims to give guidance for a better-performing EU through an improved allocation of competences between the European Union and its member states. The study analyses eight specific policies from a wide range of fields with respect to their preferable assignment. The analysis applies a unified quantified approach and is precise in its definition of ‘counterfactuals’. These counterfactuals are understood as conceptual alternatives to the allocation of competences under the status quo. As such, they either relate to a new European competence (if the policy is currently a national responsibility) or a new national competence (if the policy is currently assigned to the EU). The comprehensive, quantification-based assessments indicate that it would be preferable to keep responsibility for higher education and for farmers' income support at the national level. Conversely, a shift of competences to the EU level would be advantageous when it comes to asylum policies, defence, corporate taxation, development aid and a (complementary) unemployment insurance scheme in the euro area. For one policy, railway freight transport, the findings are indeterminate. Overall, the study recommends a differentiated integration strategy comprising both new European policies and a roll-back of EU competences in other fields.

    Intelligent data mining using artificial neural networks and genetic algorithms: techniques and applications

    Data Mining (DM) refers to the analysis of observational datasets to find relationships and to summarize the data in ways that are both understandable and useful. Many DM techniques exist. Compared with other DM techniques, Intelligent Systems (ISs) based approaches, which include Artificial Neural Networks (ANNs), fuzzy set theory, approximate reasoning, and derivative-free optimization methods such as Genetic Algorithms (GAs), are tolerant of imprecision, uncertainty, partial truth, and approximation. They provide flexible information processing capability for handling real-life situations. This thesis is concerned with the ideas behind the design, implementation, testing and application of a novel ISs based DM technique. The unique contribution of this thesis is in the implementation of a hybrid IS DM technique (Genetic Neural Mathematical Method, GNMM) for solving novel practical problems, the detailed description of this technique, and the illustration of several applications solved by it. GNMM consists of three steps: (1) GA-based input variable selection, (2) Multi-Layer Perceptron (MLP) modelling, and (3) mathematical programming based rule extraction. In the first step, GAs are used to evolve an optimal set of MLP inputs. An adaptive method based on the average fitness of successive generations is used to adjust the mutation rate, and hence the exploration/exploitation balance. In addition, GNMM uses the elite group and appearance percentage to minimize the randomness associated with GAs. In the second step, MLP modelling serves as the core DM engine in performing classification/prediction tasks. An Independent Component Analysis (ICA) based weight initialization algorithm is used to determine optimal weights before the commencement of training algorithms. The Levenberg-Marquardt (LM) algorithm is used to achieve a second-order speedup compared to conventional Back-Propagation (BP) training. In the third step, mathematical programming based rule extraction is used not only to identify the premises of multivariate polynomial rules, but also to explore features from the extracted rules based on the data samples associated with each rule. Therefore, the methodology can provide regression rules and features not only in the polyhedrons with data instances, but also in the polyhedrons without data instances. A total of six datasets from environmental and medical disciplines were used as case study applications. These datasets involve the prediction of the longitudinal dispersion coefficient, classification of electrocorticography (ECoG)/electroencephalogram (EEG) data, eye bacteria Multisensor Data Fusion (MDF), and diabetes classification (denoted by Data I through to Data VI). GNMM was applied to all six datasets to explore its effectiveness, although the emphasis differed between datasets. For example, the emphasis of Data I and II was to give a detailed illustration of how GNMM works; Data III and IV aimed to show how to deal with difficult classification problems; the aim of Data V was to illustrate the averaging effect of GNMM; and finally Data VI was concerned with GA parameter selection and benchmarking GNMM against other IS DM techniques such as the Adaptive Neuro-Fuzzy Inference System (ANFIS), Evolving Fuzzy Neural Network (EFuNN), Fuzzy ARTMAP, and Cartesian Genetic Programming (CGP). In addition, datasets obtained from published works (i.e. Data II & III) or public domains (i.e. Data VI), where previous results were available in the literature, were also used to benchmark GNMM's effectiveness.
    As a closely integrated system, GNMM has the merit of requiring little human interaction. With some predefined parameters, such as the GA's crossover probability and the shape of the ANNs' activation functions, GNMM is able to process raw data until human-interpretable rules are extracted. This is an important practical feature, as users of a DM system often have little or no need to fully understand its internal components. Through the case study applications, it has been shown that the GA-based variable selection stage is capable of: filtering out irrelevant and noisy variables, thereby improving the accuracy of the model; making the ANN structure less complex and easier to understand; and reducing the computational complexity and memory requirements. Furthermore, rule extraction ensures that the MLP training results are easily understandable and transferable.
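
    To make step (1) more concrete, the sketch below shows the kind of adaptive mutation-rate rule described above, applied to a GA that evolves a bit mask over candidate MLP inputs. The fitness function, population size, elite size and update factors are illustrative assumptions, not the actual GNMM settings from the thesis.

        # Illustrative sketch only: the mutation rate is adapted from the average
        # fitness of successive generations (more exploration when progress stalls).
        import random

        def fitness(mask):
            # Stand-in score that rewards small input subsets; a real GNMM run would
            # instead score an MLP trained on the selected inputs.
            return 1.0 / (1 + sum(mask))

        def select_inputs(n_inputs=10, pop_size=20, generations=30):
            population = [[random.randint(0, 1) for _ in range(n_inputs)]
                          for _ in range(pop_size)]
            mutation_rate, prev_avg = 0.05, None
            for _ in range(generations):
                scored = sorted(population, key=fitness, reverse=True)
                avg = sum(fitness(m) for m in population) / pop_size
                if prev_avg is not None:
                    # Adaptive rule: raise the rate when average fitness stops improving.
                    if avg <= prev_avg:
                        mutation_rate = min(0.5, mutation_rate * 1.5)
                    else:
                        mutation_rate = max(0.01, mutation_rate * 0.75)
                prev_avg = avg
                elite = scored[:2]                        # elite group kept unchanged
                children = []
                while len(children) < pop_size - len(elite):
                    a, b = random.sample(scored[:10], 2)  # crossover between good parents
                    cut = random.randrange(1, n_inputs)
                    child = a[:cut] + b[cut:]
                    children.append([1 - g if random.random() < mutation_rate else g
                                     for g in child])
                population = elite + children
            return max(population, key=fitness)

        print(select_inputs())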

    A Common Protocol for Agent-Based Social Simulation

    Traditional (i.e. analytical) modelling practices in the social sciences rely on a very well established, although implicit, methodological protocol, both with respect to the way models are presented and to the kinds of analysis that are performed. Unfortunately, computer-simulated models often lack such a reference to an accepted methodological standard. This is one of the main reasons for the scepticism among mainstream social scientists that results in the low acceptance of papers with agent-based methodology in the top journals. We identify some methodological pitfalls that, in our view, are common in papers employing agent-based simulations, and propose appropriate solutions. We discuss each issue with reference to a general characterization of dynamic micro models, which encompasses both analytical and simulation models. Along the way, we also clarify some confusing terminology. We then propose a three-stage process that could lead to the establishment of methodological standards in social and economic simulations. Keywords: Agent-Based, Simulations, Methodology, Calibration, Validation, Sensitivity Analysis

    USTOPIA REQUIREMENTS: THOUGHTS ON A USER-FRIENDLY SYSTEM FOR TRANSFORMATION OF PROGRAMS IN ABSTRACTO

    Transformational programming is a program development method which is usually applied using 'pen and paper'. Since this requires a lot of clerical work (copying expressions, consistent substitution) which is tiresome and prone to error, some form of machine support is desirable. In this paper, a number of systems that have already been built with this aim are described, and some of their shortcomings and limitations are identified. Based on experience with program transformation and transformation systems, a long list of features is given that would be useful in a 'utopian' transformation system. This list is presented using an orthogonal division of the problem area. A number of problems with the realisation of some aspects of our 'utopian' system are identified, and some areas for further research are indicated.
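
    To make concrete the clerical work such a system would automate (copying expressions and consistent substitution), here is a small hedged sketch of one rewrite rule applied exhaustively over an expression tree; the rule and the tuple representation are illustrative and not drawn from any of the systems described in the paper.

        # Illustrative sketch: apply the rule "e + 0  =>  e" bottom-up over a tree,
        # the kind of consistent substitution a transformation system performs mechanically.
        def rewrite(expr):
            if isinstance(expr, tuple):
                op, *args = expr
                expr = (op, *(rewrite(a) for a in args))
                if op == "+" and expr[2] == 0:   # e + 0  =>  e
                    return expr[1]
            return expr

        # ((x + 0) * (y + 0))  =>  (x * y)
        print(rewrite(("*", ("+", "x", 0), ("+", "y", 0))))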

    Rewritability in Monadic Disjunctive Datalog, MMSNP, and Expressive Description Logics

    We study rewritability of monadic disjunctive Datalog programs, (the complements of) MMSNP sentences, and ontology-mediated queries (OMQs) based on expressive description logics of the ALC family and on conjunctive queries. We show that rewritability into FO and into monadic Datalog (MDLog) are decidable, and that rewritability into Datalog is decidable when the original query satisfies a certain condition related to equality. We establish 2NExpTime-completeness for all studied problems except rewritability into MDLog, for which there remains a gap between 2NExpTime and 3ExpTime. We also analyze the shape of rewritings, which in the MMSNP case correspond to obstructions, and give a new construction of canonical Datalog programs that is more elementary than existing ones and also applies to formulas with free variables.
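
    As a purely illustrative example (not taken from the paper), consider the OMQ formed by the ALC axiom ∃r.A ⊑ A together with the conjunctive query q(x) = A(x). Its certain answers over an ABox are the individuals from which an asserted A-instance can be reached along a (possibly empty) chain of r-edges, which the following monadic Datalog program computes; because this amounts to reachability, the OMQ is MDLog-rewritable but has no FO-rewriting.

        A1(x) :- A(x)
        A1(x) :- r(x, y), A1(y)
        goal(x) :- A1(x)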

    Intelligent systems in manufacturing: current developments and future prospects

    Global competition and rapidly changing customer requirements are demanding increasing changes in manufacturing environments. Enterprises are required to constantly redesign their products and continuously reconfigure their manufacturing systems. Traditional approaches to manufacturing systems do not fully satisfy this new situation. Many authors have proposed that artificial intelligence will bring the flexibility and efficiency needed by manufacturing systems. This paper is a review of artificial intelligence techniques used in manufacturing systems. The paper first defines the components of a simplified intelligent manufacturing system (IMS) and the different Artificial Intelligence (AI) techniques to be considered, and then shows how these AI techniques are used for the components of an IMS.

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided that covers, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, to detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together, updating each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation and adaptation are more readily facilitated.
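
    A hedged sketch of the hybrid pattern discussed above, assuming a simple dictionary format for call-record events: a rule component screens for known misuse signatures, an anomaly component flags deviations from a learned profile of normal behaviour, and the two verdicts are combined. The signatures, features and thresholds are illustrative assumptions only.

        # Illustrative hybrid detector: rules capture known misuse (risking false
        # negatives on novel attacks), an anomaly profile captures deviations from
        # normal behaviour (risking false positives); combining both mitigates each.
        from statistics import mean, pstdev

        KNOWN_MISUSE_RULES = [
            lambda e: e["failed_logins"] >= 5,                        # brute-force signature
            lambda e: e["call_minutes"] > 600 and e["premium_rate"],  # toll-fraud signature
        ]

        class AnomalyProfile:
            def __init__(self, normal_events, threshold=3.0):
                volumes = [e["call_minutes"] for e in normal_events]
                self.mu = mean(volumes)
                self.sigma = pstdev(volumes) or 1.0
                self.threshold = threshold

            def is_anomalous(self, event):
                return abs(event["call_minutes"] - self.mu) / self.sigma > self.threshold

        def classify(event, profile):
            if any(rule(event) for rule in KNOWN_MISUSE_RULES):
                return "known misuse"
            if profile.is_anomalous(event):
                return "possible novel misuse"
            return "normal"

        normal = [{"call_minutes": m, "failed_logins": 0, "premium_rate": False}
                  for m in (20, 35, 50, 40)]
        profile = AnomalyProfile(normal)
        print(classify({"call_minutes": 900, "failed_logins": 0, "premium_rate": False}, profile))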

    Continuous trust management frameworks: concept, design and characteristics

    PhD Thesis. A Trust Management Framework is a collection of technical components and governing rules and contracts to establish secure, confidential, and Trustworthy transactions among the Trust Stakeholders, whether they are Users, Service Providers, or Legal Authorities. Despite the presence of many Trust Framework projects, they still fail to present a mature Framework that can be Trusted by all its Stakeholders. In particular, most of the current research focuses on the Security aspects that may satisfy some Stakeholders but ignores other vital Trust Properties like Privacy, Legal Authority Enforcement, Practicality, and Customizability. This thesis is about understanding and utilising the state-of-the-art technologies of Trust Management to develop a Trust Management Framework that could be Trusted by all its Stakeholders by providing Continuous Data Control, where the exchanged data would be handled in a Trustworthy manner before and after its release from one party to another. For that reason, we call it the Continuous Trust Management Framework. In this thesis, we present a literature survey in which we illustrate the general picture of the main categories of current research as well as the main Trust Stakeholders, Trust Challenges, and Trust Requirements. We picked a few samples representing each of the main categories in the literature of Trust Management Frameworks for a detailed comparison to understand the strengths and weaknesses of those categories. Showing that the current Trust Management Frameworks focus on fulfilling most of the Trust Attributes needed by the Trust Stakeholders except for the Continuous Data Control Attribute, we argued for the necessity of our proposed generic design of the Continuous Trust Management Framework. To demonstrate our Design's practicality, we present a prototype implementing its basic Stakeholders, such as the Users, Service Providers, Identity Provider, and Auditor, on top of the OpenID Connect protocol. The sample use-case of our prototype is to protect the Users' email addresses. That is, Users would ask for their emails not to be shared with third parties, but some Providers would act maliciously and share these emails with third parties who would, in turn, send spam emails to the victim Users. While the prototype Auditor would be able to protect and track data before their release to the Service Providers, it would not be able to enforce the data access policy after release. We later generalise our sample use-case to cover various Mass Active Attacks on Users' Credentials, such as using stolen credit cards or illegally impersonating a third-party identity. To protect the Users' Credentials after release, we introduce a set of theories and building blocks to aid our Continuous Trust Framework's Auditor, which would act as the Trust Enforcement point. These theories rely primarily on analysing the data logs recorded by our prototype prior to releasing the data. To test our theories, we present a Simulation Model of the Auditor to optimise its parameters. During some of our Simulation Stages, we assumed the availability of a Data Governance Unit (DGU) that would provide hardware roots of Trust. This DGU is to be installed on the Service Providers' server side to govern how they handle the Users' data. The final simulation results include a set of different Defensive Strategies' Flavours that could be utilised by the Auditor depending on the environment where it operates.
    This thesis concludes that utilising Hard Trust Measures such as the DGU without effective Defensive Strategies may not provide the ultimate Trust solution. That is especially true at the bootstrapping phase, where Service Providers would be reluctant to adopt a restrictive technology like our proposed DGU. Nevertheless, even in the current absence of the DGU technology, deploying the developed Defensive Strategies' Flavours that do not rely on the DGU would still provide significant improvements in terms of enforcing Trust even after data release, compared to the currently widely deployed Strategy: doing nothing!
    Public Authority for Applied Education and Training in Kuwait, PAAET
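
    As a hedged sketch of the log-based auditing idea described in the abstract above (the class, thresholds and scoring rule are illustrative assumptions, not the thesis's actual Defensive Strategies' Flavours), an Auditor can correlate misuse reports against its release log to score Service Providers and withhold further releases from suspect ones.

        # Illustrative Auditor sketch: correlate misuse reports with the data-release
        # log to estimate which Service Provider mishandled a credential, then stop
        # releasing data to providers whose suspicion score crosses a threshold.
        from collections import defaultdict

        class Auditor:
            def __init__(self, block_threshold=0.5):
                self.release_log = defaultdict(set)    # provider -> credentials released to it
                self.misuse_counts = defaultdict(int)  # provider -> misused credentials among those
                self.block_threshold = block_threshold

            def record_release(self, provider, credential):
                self.release_log[provider].add(credential)

            def report_misuse(self, credential):
                # Every provider that ever received this credential becomes a suspect.
                for provider, released in self.release_log.items():
                    if credential in released:
                        self.misuse_counts[provider] += 1

            def allow_release(self, provider):
                released = self.release_log[provider]
                if not released:
                    return True
                suspicion = self.misuse_counts[provider] / len(released)
                return suspicion < self.block_threshold

        auditor = Auditor()
        auditor.record_release("provider-a", "alice@example.com")
        auditor.record_release("provider-a", "bob@example.com")
        auditor.report_misuse("alice@example.com")
        auditor.report_misuse("bob@example.com")
        print(auditor.allow_release("provider-a"))     # False: most of its releases were misused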