
    PhysicsGP: A Genetic Programming Approach to Event Selection

    We present a novel multivariate classification technique based on Genetic Programming. The technique is distinct from Genetic Algorithms and offers several advantages compared to Neural Networks and Support Vector Machines. The technique optimizes a set of human-readable classifiers with respect to some user-defined performance measure. We calculate the Vapnik-Chervonenkis dimension of this class of learning machines and consider a practical example: the search for the Standard Model Higgs Boson at the LHC. The resulting classifier is very fast to evaluate, human-readable, and easily portable. The software may be downloaded at: http://cern.ch/~cranmer/PhysicsGP.html (16 pages, 9 figures, 1 table; submitted to Comput. Phys. Commun.)
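    As a rough illustration of the kind of classifier such a technique evolves (this is not the PhysicsGP code; it uses plain accuracy instead of a user-defined performance measure, and mutation without crossover), the sketch below grows random expression trees over event features, scores them, and keeps mutating the best ones. An event is accepted when the expression evaluates above zero.

```python
# A minimal genetic-programming classifier sketch (illustrative only; not the
# PhysicsGP implementation).  Individuals are expression trees over the event
# features; an event is accepted when the expression evaluates above zero.
import random
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def random_tree(n_features, depth=3):
    """Grow a random tree: ('op', left, right), ('x', index) or ('c', value)."""
    if depth == 0 or random.random() < 0.3:
        if random.random() < 0.5:
            return ('x', random.randrange(n_features))
        return ('c', random.uniform(-1, 1))
    op = random.choice(list(OPS))
    return (op, random_tree(n_features, depth - 1), random_tree(n_features, depth - 1))

def evaluate(tree, event):
    kind = tree[0]
    if kind == 'x':
        return event[tree[1]]
    if kind == 'c':
        return tree[1]
    return OPS[kind](evaluate(tree[1], event), evaluate(tree[2], event))

def fitness(tree, events, labels):
    """Stand-in performance measure: plain classification accuracy."""
    preds = [evaluate(tree, e) > 0 for e in events]
    return sum(p == bool(l) for p, l in zip(preds, labels)) / len(labels)

def mutate(tree, n_features):
    """Replace the whole tree or one branch with a fresh random subtree."""
    if tree[0] in ('x', 'c') or random.random() < 0.2:
        return random_tree(n_features, depth=2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, n_features), right)
    return (op, left, mutate(right, n_features))

def evolve(events, labels, pop_size=50, generations=30):
    n_features = len(events[0])
    pop = [random_tree(n_features) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda t: fitness(t, events, labels), reverse=True)
        survivors = scored[:pop_size // 2]            # truncation selection
        pop = survivors + [mutate(random.choice(survivors), n_features)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=lambda t: fitness(t, events, labels))
```

    A realistic implementation would add subtree crossover, depth limits, and a physics-motivated figure of merit such as expected signal significance.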

    The state of MIIND

    MIIND (Multiple Interacting Instantiations of Neural Dynamics) is a highly modular, multi-level C++ framework that aims to shorten the development time for models in Cognitive Neuroscience (CNS). It offers reusable code modules (libraries of classes and functions) aimed at solving problems that occur repeatedly in modelling, but tries not to impose a specific modelling philosophy or methodology. At the lowest level, it offers support for the implementation of sparse networks. For example, the library SparseImplementationLib supports sparse random networks and the library LayerMappingLib can be used for sparse regular networks of filter-like operators. The library DynamicLib, which builds on top of SparseImplementationLib, offers a generic framework for simulating network processes. Presently, several specific network process implementations are provided in MIIND: processes of the Wilson–Cowan and Ornstein–Uhlenbeck type, and population density techniques for leaky-integrate-and-fire neurons driven by Poisson input. A design principle of MIIND is to support detailing: the refinement of an originally simple model into a form where more biological detail is included. Another design principle is extensibility: the reuse of an existing model in a larger, more extended one. One of the main uses of MIIND so far has been the instantiation of neural models of visual attention. Recently, we have added a library for implementing biologically inspired models of artificial vision, such as HMAX and its recent successors. In the long run we hope to be able to apply suitably adapted neuronal mechanisms of attention to these artificial models.
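    For readers unfamiliar with the network processes named above, the sketch below integrates the classic Wilson–Cowan excitatory/inhibitory population equations with a forward-Euler step. It uses plain NumPy, a single simplified response function, and generic textbook parameters; it does not use the MIIND DynamicLib API.

```python
# A generic Wilson-Cowan population sketch (plain NumPy; not the MIIND API,
# just the kind of network process such a framework simulates).
import numpy as np

def sigmoid(x, a=1.0, theta=4.0):
    """Simplified population response function."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan(T=200.0, dt=0.1, P=1.25, Q=0.0):
    """Forward-Euler integration of coupled excitatory (E) and inhibitory (I)
    population activities with classic coupling weights."""
    w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0
    tau_e, tau_i = 1.0, 1.0
    steps = int(T / dt)
    E = np.zeros(steps)
    I = np.zeros(steps)
    for t in range(steps - 1):
        dE = (-E[t] + sigmoid(w_ee * E[t] - w_ei * I[t] + P)) / tau_e
        dI = (-I[t] + sigmoid(w_ie * E[t] - w_ii * I[t] + Q)) / tau_i
        E[t + 1] = E[t] + dt * dE
        I[t + 1] = I[t] + dt * dI
    return E, I
```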

    High Performance Data Mining Techniques For Intrusion Detection

    The rapid growth of computers has transformed the way in which information and data are stored. With this new paradigm of data access comes the threat of this information being exposed to unauthorized and unintended users. Many systems have been developed which scrutinize the data for a deviation from the normal behavior of a user or system, or search for a known signature within the data. These systems are termed Intrusion Detection Systems (IDS). They employ different techniques ranging from statistical methods to machine learning algorithms. Intrusion detection systems use audit data generated by operating systems, application software or network devices. These sources produce huge datasets with tens of millions of records. To analyze this data, data mining is used: the process of extracting useful patterns from large volumes of information. A major obstacle is that traditional data mining and learning algorithms are overwhelmed by the volume and complexity of the available data, which makes them impractical for time-critical tasks like intrusion detection because of their long execution times. Our approach to this issue uses high performance data mining techniques to expedite the process by exploiting the parallelism in existing data mining algorithms and the underlying hardware. We show how high performance and parallel computing can be used to scale data mining algorithms to handle large datasets, allowing the data mining component to search a much larger set of patterns and models than traditional computational platforms and algorithms would allow. We develop parallel data mining algorithms by parallelizing existing machine learning techniques using cluster computing. These algorithms include parallel backpropagation and parallel fuzzy ARTMAP neural networks. We evaluate the performance of the developed models in terms of speedup over the traditional algorithms, prediction rate and false alarm rate. Our results show that the traditional backpropagation and fuzzy ARTMAP algorithms can benefit from high performance computing techniques, which makes them well suited for time-critical tasks like intrusion detection.
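    The parallelization idea can be illustrated with a minimal data-parallel gradient step: each worker computes the gradient of a simple one-layer model on its shard of the audit data and the master averages the results. This is a sketch of the general scheme using Python multiprocessing, not the cluster-based backpropagation or fuzzy ARTMAP code developed in the thesis.

```python
# Minimal data-parallel training sketch: workers compute the gradient of a
# one-layer logistic model on their data shard; the master averages gradients.
import numpy as np
from multiprocessing import Pool

def local_gradient(args):
    """Gradient of the logistic loss on one data shard."""
    w, X, y = args
    p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted intrusion probability
    return X.T @ (p - y) / len(y)

def parallel_train(X, y, n_workers=4, epochs=50, lr=0.1):
    w = np.zeros(X.shape[1])
    X_shards = np.array_split(X, n_workers)  # split the audit records
    y_shards = np.array_split(y, n_workers)
    with Pool(n_workers) as pool:
        for _ in range(epochs):
            grads = pool.map(local_gradient,
                             [(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)])
            w -= lr * np.mean(grads, axis=0)  # average the shard gradients
    return w
```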

    Artificial cognitive architecture with self-learning and self-optimization capabilities. Case studies in micromachining processes

    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defence: 22-09-201

    Parallel computing for artificial neural network training

    Big data is the oil of this century. A high amount of computational power is required to extract knowledge from data, and parallel and distributed computing is essential for processing large amounts of it. Artificial Neural Networks (ANNs) need as much data as possible to achieve high accuracy, and parallel processing can help save time in ANN training. In this paper, we implement an exemplary parallelization of neural network training using Java and its native socket libraries. During the experiments, we noticed that the Java implementation tends to have memory issues when large training data sets are involved. We also observed that this exemplary parallelization of neural network training does not improve drastically once additional nodes are introduced beyond a certain point, which is largely due to the network communication overhead in the system.
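    The saturation effect described above can be illustrated with a toy cost model in which per-epoch time is the compute time divided by the number of nodes plus a communication term that grows with the number of nodes. The constants below are invented for illustration; they are not measurements from the paper.

```python
# Toy cost model for why adding nodes stops paying off: per-epoch time is
# compute/p plus a communication term that grows with the node count p.
# Constants are invented for illustration only.
def epoch_time(p, compute=100.0, comm_per_node=2.0):
    return compute / p + comm_per_node * p

def speedup(p, **kw):
    return epoch_time(1, **kw) / epoch_time(p, **kw)

for p in (1, 2, 4, 8, 16, 32):
    print(p, round(speedup(p), 2))
# Speedup rises, peaks around p = sqrt(compute / comm_per_node), then falls
# as communication dominates.
```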

    A Model of Emotion as Patterned Metacontrol

    Adaptive agents use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control subsystem can be used to change four different elements of the controller: parameters associated to the control model, the control model itself, the functional organization of the agent and the functional realization of the agent. There are many change alternatives and hence the complexity of the agent's space of potential configurations is daunting. The only viable alternative for space- and time-constrained agents (in practical, economical, evolutionary terms) is to achieve a reduction of the dimensionality of this configuration space. Emotions play a critical role in this reduction. The reduction is achieved by functionalization, interface minimization and by patterning, i.e. by selection among a predefined set of organizational configurations. This analysis lets us state how autonomy emerges from the integration of cognitive, emotional and autonomic systems in strict functional terms: autonomy is achieved by the closure of functional dependency. Emotion-based morphofunctional systems are able to exhibit complex adaptation patterns at a reduced cognitive cost. In this article we present a general model of how emotion supports functional adaptation and how biological emotional systems operate following this theoretical model. We also show how this model is applicable to the construction of a wide spectrum of artificial systems.

    Designing Round-Trip Systems by Change Propagation and Model Partitioning

    Software development processes incorporate a variety of different artifacts (e.g., source code, models, and documentation). For multiple reasons, the data contained in these artifacts exposes some degree of redundancy. Ensuring global consistency across artifacts during all stages in the development of software systems is required, because inconsistent artifacts can lead to failures. Consistency can be ensured either by reducing the amount of redundancy or by synchronizing the information that is shared across multiple artifacts. The discipline of software engineering that addresses these problems is called Round-Trip Engineering (RTE). In this thesis we present a conceptual framework for the design of RTE systems. This framework delivers precise definitions for essential terms in the context of RTE and a process that can be used to address new RTE applications. The main idea of the framework is to partition models into parts that require synchronization (skeletons) and parts that do not (clothings); a toy sketch of this partitioning follows below. Once such a partitioning is obtained, the relations between the elements of the skeletons determine whether a deterministic RTE system can be built. If not, manual decisions by developers may be required. Based on this conceptual framework, two concrete approaches to RTE are presented. The first one, Backpropagation-based RTE, employs change translation, traceability and synchronization fitness functions to allow for the synchronization of artifacts that are connected by non-injective transformations. The second approach, Role-based Tool Integration, provides means to avoid redundancy. To do so, a novel tool design method that relies on role modeling is presented. Tool integration is then performed by creating role bindings between role models. In addition to the two concrete approaches to RTE, which form the main contributions of the thesis, we investigate the creation of bridges between technical spaces. We consider these bridges an essential prerequisite for performing logical synchronization between artifacts. The feasibility of semantic web technologies is also a subject of the thesis, because the specification of synchronization rules was identified as a blocking factor during our problem analysis. The thesis is complemented by an evaluation of all presented RTE approaches in different scenarios. Based on this evaluation, the strengths and weaknesses of the approaches are identified, and the practical feasibility of our approaches is confirmed with respect to the presented RTE applications.
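    A toy illustration of the skeleton/clothing partitioning (not the thesis' tooling, and the class names are invented): elements of a model that are shared with another artifact form the skeleton and must be synchronized, while the rest is clothing and can be ignored by the synchronizer.

```python
# Toy skeleton/clothing partition: model elements that also appear in a
# second artifact must be kept in sync (skeleton); the rest is clothing.
def partition(model_elements, shared_with_other_artifact):
    skeleton = {e for e in model_elements if e in shared_with_other_artifact}
    clothing = set(model_elements) - skeleton
    return skeleton, clothing

uml_classes = {"Customer", "Order", "OrderView", "OrderController"}
code_classes = {"Customer", "Order", "Invoice"}
skeleton, clothing = partition(uml_classes, code_classes)
# skeleton -> {'Customer', 'Order'}: changes here must be propagated
# clothing -> {'OrderView', 'OrderController'}: artifact-local, no sync needed
```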

    Web-based strategies in the manufacturing industry

    The explosive growth of Internet-based architectures is allowing efficient access to information resources over geographically dispersed areas. This fact is exerting a major influence on current manufacturing practices. Business activities involving customers, partners, employees and suppliers are being rapidly and efficiently integrated through networked information management environments. Therefore, efforts are required to take advantage of distributed infrastructures that can satisfy information integration and collaborative work strategies in corporate environments. In this research, Internet-based distributed solutions focused on the manufacturing industry are proposed. Three different systems have been developed for the tooling sector, specifically for the company Seco Tools UK Ltd (industrial collaborator). They are summarised as follows. SELTOOL is a Web-based open tool selection system involving the analysis of technical criteria to establish the appropriate selection of inserts, toolholders and cutting data for turning, threading and grooving operations. It is oriented to world-wide Seco customers. SELTOOL provides an interactive, cross-referenced way of searching for tooling parameters, rather than the conventional representation schemes provided by catalogues. Mechanisms were developed to filter, convert and migrate data from different formats to the (SQL-based) database used by SELTOOL. TTS (Tool Trials System) is a Web-based system developed by the author and two other researchers to support Seco sales engineers and technical staff, who perform tooling trials in geographically dispersed machining centres and benefit from sharing the data and results generated by these tests. Through TTS, tooling engineers (authorised users) can submit and retrieve highly specific technical tooling data for both milling and turning operations. Moreover, tooling engineers can avoid executing new tool trials when another engineer has previously carried out the same trials in a physically distant place, because those results are available through the system. The system incorporates encrypted security features suitable for restricted use on the World Wide Web. An urgent need exists for tools that make sense of raw data, extracting useful knowledge from the increasingly large collections of data now being constructed and made available from networked information environments. This explosive growth in the availability of information is overwhelming the capability of traditional information management systems to provide efficient ways of detecting anomalies and significant patterns in large sets of data. Inexorably, the tooling industry is generating valuable experimental data; it is a promising and unexplored sector for the application of knowledge-capturing systems. Hence, to address this issue, a knowledge discovery system called DISKOVER was developed. DISKOVER is an integrated Java application consisting of five data mining modules that can be operated through the Internet. Kluster and Q-Fast are two of these modules, entirely developed by the author. Fuzzy-K was developed by the author in collaboration with another research student in the group at Durham, and the final two modules (R-Set and MQG) were developed by another member of the Durham group. To develop Kluster, a complete clustering methodology was proposed. Kluster is a clustering application able to combine the analysis of quantitative as well as categorical data (conceptual clustering) to establish data classification processes. This module incorporates two original contributions: consistent indicators to measure the quality of the final classification, and the application of optimisation methods to the final groups obtained. Kluster also gives users the possibility of introducing case studies to generate cutting parameters for particular input requirements. Fuzzy-K is an application that has the advantages of hierarchical clustering while applying fuzzy membership functions to support the generation of similarity measures. The implementation of fuzzy membership functions helped to optimise the grouping of categorical data containing missing or imprecise values. As the tooling database is accessed through the Internet, which is a relatively slow access platform, it was decided to rely on faster information retrieval mechanisms. Q-Fast is an SQL-based exploratory data analysis (EDA) application implemented for this purpose.
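    As an illustration of the kind of mixed quantitative/categorical analysis Kluster combines, the sketch below uses a k-prototypes-style distance: squared Euclidean distance on numeric attributes plus a weighted mismatch count on categorical ones. It is a hypothetical example, not the DISKOVER/Kluster implementation, and the attribute names are invented.

```python
# Minimal mixed-data distance and assignment step, in the spirit of
# k-prototypes (illustration only; not the DISKOVER/Kluster code).
import random

def mixed_distance(a, b, num_idx, cat_idx, gamma=1.0):
    """Squared Euclidean distance on numeric attributes plus a weighted
    mismatch count on categorical attributes (gamma balances the two)."""
    num = sum((a[i] - b[i]) ** 2 for i in num_idx)
    cat = sum(a[i] != b[i] for i in cat_idx)
    return num + gamma * cat

def assign(records, prototypes, num_idx, cat_idx):
    """Assign each record (e.g., a tooling trial) to its nearest prototype."""
    return [min(range(len(prototypes)),
                key=lambda k: mixed_distance(r, prototypes[k], num_idx, cat_idx))
            for r in records]

# Invented example: (cutting speed, feed rate, insert grade, coolant)
records = [(220.0, 0.15, "P25", "yes"), (400.0, 0.30, "K10", "no"),
           (210.0, 0.12, "P25", "yes")]
prototypes = random.sample(records, 2)
print(assign(records, prototypes, num_idx=(0, 1), cat_idx=(2, 3)))
```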

    The Feature-Architecture Mapping Method for Feature-Oriented Development of Software Product Lines

    Software product lines are the answer of software engineering to the increasing complexity and shorter time-to-market of contemporary software systems. Nonetheless, software product lines demand advanced maintainability and high flexibility. The latter can be achieved through the proper separation of concerns. Features pose the main concerns in the context of software product lines. Consequently, one feature should ideally be implemented in exactly one architectural component. In practice, this is not always feasible. Therefore, at least a strong mapping between features and the architecture must exist. The state-of-the-art product line development methodologies introduce significant scattering and tangling of features. In this work, the Feature-Architecture Mapping (FArM) method is developed to provide a stronger mapping between features and the product line architecture. FArM receives as input an initial feature model created by a domain analysis method. The initial feature model undergoes a series of transformations. The transformations strive to achieve a balance between the customer and architectural perspectives. Feature interaction is explicitly optimized during the feature model transformations. For each feature of the transformed feature model, one architectural component is derived. The architectural components implement the application logic of the respective features. The component communication reflects the feature interaction. Compared to the state-of-the-art product line methodologies, this approach allows for a stronger feature-architecture mapping and higher variability on the feature level. These attributes provide higher maintainability and an improved generative approach to product instantiation, which in turn enhances product line flexibility. FArM has been evaluated through its application in a number of domains, e.g., the mobile phone domain and the Integrated Development Environment (IDE) domain. This work presents FArM on the basis of a case study in the domain of Artificial Neural Networks.
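    A toy rendering of the mapping rule described above (illustration only, with invented feature names; not FArM's tooling): each feature of the transformed feature model becomes exactly one architectural component, and feature interactions become component connectors.

```python
# Toy one-component-per-feature mapping: interactions become connectors.
transformed_features = ["Persistence", "Networking", "TrainingAlgorithm"]
interactions = [("TrainingAlgorithm", "Persistence")]   # hypothetical example

components = {f: {"implements": f, "requires": []} for f in transformed_features}
for source, target in interactions:
    components[source]["requires"].append(target)       # connector mirrors interaction

# components["TrainingAlgorithm"]["requires"] -> ["Persistence"]
```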

    Creating and Maintaining Consistent Documents with Elucidative Development

    Software systems usually consist of multiple artefacts, such as requirements, class diagrams, or source code. Documents, such as specifications and documentation, can also be viewed as artefacts. In practice, however, writing and updating documents is often neglected because it is expensive and brings no immediate benefit. Consequently, documents are often outdated and communicate wrong information about the software. The price is paid later, when a software system must be maintained and much of the implicit knowledge that existed at the time of the original development has been lost. A simple way to keep documents up to date is generation. However, not all documents can be fully generated. Usually, at least some content must be written by a human author. This handwritten content is lost if the documents must be regenerated. In this thesis, Elucidative Development is introduced. It is an approach to create documents by partial generation. Partial generation means that some parts of the document are generated whereas others are handwritten. Elucidative Development retains manually written content when the document is regenerated. An integral part of Elucidative Development is a guidance system, which informs the author about changes in the generated content and helps him update the handwritten content. Contents: 1 Introduction (1.1 Contributions; 1.2 Scope of the Thesis; 1.3 Organisation); 2 Problem Analysis and Solution Outline (2.1 Redundancy and Inconsistency; 2.2 Improving Consistency with Partial Generation; 2.3 Conclusion); 3 Background (3.1 Grammar-Based Modularisation; 3.2 Model-Driven Software Development; 3.3 Round-Trip Engineering; 3.4 Conclusion); 4 Elucidative Development (4.1 General Idea and Running Example; 4.2 Requirements of Elucidative Development; 4.3 Structure and Basic Concepts of Elucidative Documents; 4.4 Presentation Layer; 4.5 Guidance; 4.6 Conclusion); 5 Model-Driven Elucidative Development (5.1 General Idea and Running Example; 5.2 Requirements of Model-Driven Elucidative Development; 5.3 Structure and Basic Concepts of Elucidative Documents in Model-Driven Elucidative Development; 5.4 Guidance; 5.5 Conclusion); 6 Extensions of Elucidative Development (6.1 Validating XML-based Elucidative Documents; 6.2 Backpropagation-Based Round-Trip Engineering for Computed Text Document Fragments; 6.3 Conclusion); 7 Tool Support for an Elucidative Development Environment (7.1 Managing Active References; 7.2 Inserting Computed Document Fragments; 7.3 Caching the Computed Document Fragments; 7.4 Elucidative Document Validation with Schemas; 7.5 Conclusion); 8 Related Work (8.1 Related Documentation Approaches; 8.2 Consistency Approaches; 8.3 Compound Documents; 8.4 Conclusion); 9 Evaluation (9.1 Creating and Maintaining the Cool Component Specification; 9.2 Creating and Maintaining the UML Specification; 9.3 Feasibility Studies; 9.4 Conclusion); 10 Conclusion
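    The core mechanism of partial generation can be sketched as follows: handwritten regions are delimited by markers, harvested before regeneration, and re-inserted into the freshly generated document. The marker syntax and helper functions are illustrative assumptions, not the thesis' Elucidative Development tooling.

```python
# Minimal partial-generation sketch: generated parts are rebuilt from the
# model while handwritten regions between the markers are carried over.
# Marker syntax and API are illustrative assumptions, not the thesis' tooling.
import re

HANDWRITTEN = re.compile(r"<!-- handwritten:(\w+) -->(.*?)<!-- end -->", re.S)

def extract_handwritten(document):
    """Collect the handwritten regions by name before regenerating."""
    return dict(HANDWRITTEN.findall(document))

def regenerate(template, model, handwritten):
    """Fill generated placeholders from the model and re-insert the saved
    handwritten regions (empty if a region is new)."""
    doc = template.format(**model)
    return re.sub(r"<!-- handwritten:(\w+) -->",
                  lambda m: ("<!-- handwritten:%s -->%s<!-- end -->"
                             % (m.group(1), handwritten.get(m.group(1), ""))),
                  doc)

template = "# {component} API\n<!-- handwritten:rationale -->\n"
old_doc = ("# Cooler API\n<!-- handwritten:rationale -->Keep this note."
           "<!-- end -->\n")
new_doc = regenerate(template, {"component": "Cooler"},
                     extract_handwritten(old_doc))
```

    A guidance system, as described in the abstract, would additionally compare the old and new generated content and point the author to handwritten regions that may need updating.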