
    Dynamical Systems Theory for Transparent Symbolic Computation in Neuronal Networks

    In this thesis, we explore the interface between symbolic and dynamical system computation, with particular regard to dynamical system models of neuronal networks. In doing so, we adhere to a definition of computation as the physical realization of a formal system, where we say that a dynamical system performs a computation if a correspondence can be found between its dynamics on a vector space and the formal system's dynamics on a symbolic space. Guided by this definition, we characterize computation in a range of neuronal network models. We first present a constructive mapping between a range of formal systems and Recurrent Neural Networks (RNNs), through the introduction of a Versatile Shift and a modular network architecture supporting its real-time simulation. We then move on to more detailed models of neural dynamics, characterizing the computation performed by networks of delay-pulse-coupled oscillators supporting the emergence of heteroclinic dynamics. We show that a correspondence can be found between these networks and Finite-State Transducers, and use the derived abstraction to investigate how noise affects computation in this class of systems, unveiling a surprising facilitatory effect on information transmission. Finally, we present a new dynamical framework for computation in neuronal networks based on the slow-fast dynamics paradigm, and discuss the consequences of our results for future work, particularly as they concern the fields of interactive computation and Artificial Intelligence.
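    The Finite-State Transducer is the symbolic object that the oscillator networks above are mapped onto. As a point of reference, the minimal sketch below defines a generic FST in Python; the states, alphabet, and parity example are illustrative assumptions, not the construction derived in the thesis.

```python
# Minimal finite-state transducer (FST): the symbolic abstraction referred to
# above.  States, symbols, and the parity example are illustrative only.
class FST:
    def __init__(self, transitions, start):
        # transitions: (state, input_symbol) -> (next_state, output_symbol)
        self.transitions = transitions
        self.start = start

    def run(self, inputs):
        state, outputs = self.start, []
        for sym in inputs:
            state, out = self.transitions[(state, sym)]
            outputs.append(out)
        return state, outputs

# Toy example: a transducer that reports whether the number of 1s seen so far
# is even ('E') or odd ('O').
parity = FST({
    ("even", 0): ("even", "E"), ("even", 1): ("odd", "O"),
    ("odd", 0): ("odd", "O"),   ("odd", 1): ("even", "E"),
}, start="even")

print(parity.run([1, 0, 1, 1]))  # ('odd', ['O', 'O', 'E', 'O'])
```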

    Dynamic Matching of Services by Negotiation

    This paper presents my research project on the topic of dynamic matching of services by negotiation. The goal of the research is to develop a design theory for information systems that offer negotiation services. The result of the research will be a specification of a design framework for such systems. This framework will contain knowledge about architectural choices one has to make when designing an information system with particular properties satisfying particular requirements.

    Context-driven methodologies for context-aware and adaptive systems

    Applications which are both context-aware and adaptive enhance users' experience by anticipating their needs in relation to their environment and adapting their behavior according to environmental changes. Being by definition both context-aware and adaptive, these applications suffer from faults related to their context-awareness, from faults related to their adaptive nature, and from a novel variety of faults originating from the combination of the two. This research work analyzes, classifies, detects, and reports faults belonging to this novel class, aiming to improve the robustness of these Context-Aware Adaptive Applications (CAAAs). To better understand the peculiar dynamics driving the CAAA adaptation mechanism, a general high-level architectural model has been designed. This architectural model clearly depicts the stream of information coming from sensors and being computed all the way to the adaptation mechanism. The model identifies a stack of common components representing increasing abstractions of the context, and their general interconnections. Known faults involving context data can be re-examined according to this architecture and classified in terms of the component in which they occur and of their abstraction from the environment. The result of this classification is a CAAA-oriented fault taxonomy. Our architectural model also underlines that there is a common evolutionary path for CAAAs and shows the importance of the adaptation logic. Indeed, most adaptation failures are caused by invalid interpretations of the context by the adaptation logic. To prevent such faults we defined a model, the Adaptation Finite-State Machine (A-FSM), describing how the application adapts in response to changes in the context. The A-FSM model is a powerful instrument that allows developers to focus on those context-aware and adaptive aspects in which faults reside. In this model we have identified a set of fault patterns representing the most common faults in this application domain. Such faults are represented as violations of given properties in the A-FSM. We have created four techniques to detect such faults. Our proposed algorithms are based on three different technologies: enumerative, symbolic and goal planning. These techniques complement each other. We have evaluated them by comparing them to each other, using both crafted models and models extracted from existing commercial and free applications. In the evaluation we assess validity, readability of the reported faults, scalability, and behavior in limited-memory environments. We conclude this Thesis by suggesting possible extensions.
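    For concreteness, the sketch below shows what an A-FSM and an enumerative fault check might look like; the states, context variables, and the "nondeterministic adaptation" pattern are hypothetical illustrations, not the exact formalism or fault patterns of the thesis.

```python
from itertools import product

# Illustrative A-FSM sketch: transitions fire when a boolean context predicate
# holds.  The names and the fault check are assumptions made for illustration.
class AFSM:
    def __init__(self, transitions):
        # transitions: list of (source_state, guard(context) -> bool, target_state)
        self.transitions = transitions

    def enabled(self, state, context):
        return [t for t in self.transitions if t[0] == state and t[1](context)]

# Two context variables, enumerated exhaustively (an "enumerative" check).
contexts = [dict(gps=g, wifi=w) for g, w in product([True, False], repeat=2)]

afsm = AFSM([
    ("Outdoor", lambda c: not c["gps"], "Indoor"),
    ("Outdoor", lambda c: c["wifi"], "Indoor"),   # overlaps with the guard above
    ("Indoor",  lambda c: c["gps"], "Outdoor"),
])

# Report states where two adaptations are enabled under the same context.
for state in ("Outdoor", "Indoor"):
    for ctx in contexts:
        if len(afsm.enabled(state, ctx)) > 1:
            print("Nondeterministic adaptation in", state, "under", ctx)
```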

    Generic low power reconfigurable distributed arithmetic processor

    Higher performance, lower cost, ever-smaller integrated circuit components, and higher packaging density of chips are ongoing goals of the microelectronic and computer industry. As these goals are being achieved, however, power consumption and flexibility are increasingly becoming bottlenecks that need to be addressed by new technology in Very Large-Scale Integration (VLSI) design. Modern systems require more energy to support the powerful computational capability demanded by increasing requirements, and these requirements drive changes of standards not only in audio and video broadcasting but also in communication, such as wireless connections and network protocols. High flexibility and low power consumption are conflicting goals, but their combination in one system is the ultimate aim of designers. A generic domain-specific low-power reconfigurable processor for the distributed arithmetic algorithm is presented in this dissertation. This domain-specific reconfigurable processor features high efficiency in terms of area, power and delay, approaching the performance of an ASIC design while retaining the flexibility of programmable platforms. The architecture not only supports the typical distributed arithmetic algorithms found in most still-picture compression and video conferencing standards, but also allows the implementation of other distributed arithmetic algorithms found in digital signal processing, telecommunication protocols and automatic control. In this processor, a simple reconfigurable low-power control unit is implemented with good performance in area, power and timing. The generic character of the architecture makes it applicable to any small or medium-sized finite state machine that can be used as a control unit to implement complex system behaviour, as found in almost all engineering disciplines. Furthermore, to map target applications efficiently onto the proposed architecture, a new algorithm is introduced for searching for the best set of common sharing terms, keeping the area and power consumption of the implementation at a low level. The software implementation of this algorithm is presented, which can be used not only for the architecture proposed in this dissertation but also for all implementations based on adder-based distributed arithmetic algorithms. In addition, some low-power design techniques are applied in the architecture, such as an unsymmetrical design style including unsymmetrical interconnection arrangement, unsymmetrical PTB selection and unsymmetrical mapping of basic computing units. All these design techniques achieve substantial power savings. It is believed that they can be extended to more low-power designs and architectures. The processor presented in this dissertation can be used to implement complex, high-performance distributed arithmetic algorithms for communication and image processing applications with low cost in area and power compared with traditional methods.
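    To make the core idea concrete, the sketch below implements plain adder-based distributed arithmetic in software: an inner product with fixed coefficients is computed bit-serially by indexing a precomputed table of partial sums. The coefficients, bit width, and code are illustrative only and are not taken from the dissertation.

```python
def build_lut(coeffs):
    """Precompute partial sums for every combination of one bit per input."""
    n = len(coeffs)
    return [sum(c for i, c in enumerate(coeffs) if (addr >> i) & 1)
            for addr in range(1 << n)]

def da_inner_product(coeffs, xs, width=8):
    """Bit-serial distributed arithmetic: y = sum_k coeffs[k] * xs[k],
    with xs given as two's-complement integers that fit in `width` bits."""
    lut = build_lut(coeffs)
    acc = 0
    for j in range(width):
        # Address the LUT with the j-th bit of every input sample.
        addr = 0
        for i, x in enumerate(xs):
            bit = (x >> j) & 1  # Python's sign extension gives the two's-complement bit
            addr |= bit << i
        partial = lut[addr]
        # The most significant bit carries negative weight in two's complement.
        acc += (-partial if j == width - 1 else partial) << j
    return acc

print(da_inner_product([2, 3], [1, -1], width=4))  # 2*1 + 3*(-1) = -1
```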

    A modular architecture for transparent computation in recurrent neural networks

    Publisher: Elsevier. Article title: A modular architecture for transparent computation in recurrent neural networks. Journal: Neural Networks. Article link: http://dx.doi.org/10.1016/j.neunet.2016.09.001. Content type: article. Copyright © 2016 Elsevier Ltd. All rights reserved.

    Model Driven Communication Protocol Engineering and Simulation based Performance Analysis using UML 2.0

    The automated functional and performance analysis of communication systems specified with some Formal Description Technique (FDT) has long been a goal of telecommunication engineers. In the past, SDL and Petri nets have been the most popular FDTs for the purpose. With the growth in popularity of UML, the most obvious question to ask is whether one can translate one or more UML diagrams describing a system into a performance model. Until the advent of UML 2.0 that was an impossible task, since the semantics were not clear. Even though the UML semantics are still not entirely clear for the purpose, with UML 2.0 now released and using ITU recommendation Z.109, we describe in this dissertation a methodology and tool called proSPEX (protocol Software Performance Engineering using XMI) for the design and performance analysis of communication protocols specified with UML. Our first consideration in the development of our methodology was to identify the roles of UML 2.0 diagrams in the performance modelling process. In addition, questions regarding the specification of non-functional duration constraints, or temporal aspects, were considered. We developed a semantic time model with which the lack of means for specifying communication delay and processing times in the language is addressed. Environmental characteristics such as channel bandwidth and buffer space can be specified, and realistic assumptions are made regarding time and signal transfer. With proSPEX we aimed to integrate a commercial UML 2.0 model editing tool and a discrete-event simulation library. Such an approach has been advocated as necessary in order to develop a closer integration of performance engineering with formal design and implementation methodologies. In order to realize the integration we first identified a suitable simulation library and then extended it with features required to represent high-level SDL abstractions, such as extended finite state machines (EFSMs) and signal addressing. In implementing proSPEX we filtered the XML output of our editor and used text templates for code generation. The filtering of the XML output and the need to extend our simulation library with EFSM abstractions turned out to be significant implementation challenges. Lastly, in order to illustrate the utility of proSPEX we conducted a performance analysis case study in which the efficient short remote operations (ESRO) protocol is used in a wireless e-commerce scenario.
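    As background for the EFSM abstraction mentioned above, the sketch below shows a minimal extended finite state machine with guarded, variable-updating transitions driven by incoming signals; the retransmission example and all names are hypothetical and are not part of proSPEX or its simulation library.

```python
# Minimal EFSM sketch: states plus variables, transitions guarded on the
# incoming signal and a predicate over the variables.  Illustrative only.
class EFSM:
    def __init__(self, start, variables):
        self.state, self.vars = start, dict(variables)
        self.rules = []  # (state, signal, guard, action, next_state)

    def on(self, state, signal, guard, action, next_state):
        self.rules.append((state, signal, guard, action, next_state))

    def receive(self, signal):
        for st, sig, guard, action, nxt in self.rules:
            if st == self.state and sig == signal and guard(self.vars):
                action(self.vars)
                self.state = nxt
                return
        # No matching transition: the signal is discarded (SDL-like behaviour).

# Toy sender that retransmits at most twice before giving up.
sender = EFSM("wait_ack", {"retries": 0})
sender.on("wait_ack", "timeout", lambda v: v["retries"] < 2,
          lambda v: v.update(retries=v["retries"] + 1), "wait_ack")
sender.on("wait_ack", "timeout", lambda v: v["retries"] >= 2,
          lambda v: None, "failed")
sender.on("wait_ack", "ack", lambda v: True, lambda v: None, "done")

for sig in ["timeout", "timeout", "timeout"]:
    sender.receive(sig)
print(sender.state, sender.vars)  # failed {'retries': 2}
```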

    Analysis and Applications of Receptive Safety Properties in Concurrent Systems

    Formal verification of complex concurrent systems is a computationally intensive and, in some cases, intractable process. The complexity is an inherent part of the verification process, because the system complexity is an exponential function of the sizes of its components. However, some properties can be enforced by automatically synchronizing the components, thus eliminating the need for verification. Moreover, the complexity of the analysis required to enforce the properties grows incrementally with the addition of new components and properties that make the system complexity grow exponentially. The properties in question are the receptive safety properties, a subset of safety properties that can only be violated by component actions. The receptive safety properties represent the realizable subset of the general safety properties, because a system that satisfies any non-receptive safety property must satisfy a related receptive safety property. This implies that any system with realizable safety requirements can be described as a set of components and receptive safety properties that specify the component interaction satisfying the requirements. We have developed a method that automatically synchronizes complex concurrent systems to enforce their receptive safety properties. Many non-safety properties can also be related to receptive safety properties, and automated synchronization can be used to enforce them. (Also cross-referenced as UMIACS-TR-98-11.)
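    A toy illustration of the enforcement idea (not the report's synthesis algorithm): a monitor automaton tracks component actions, and the synchronized composition simply blocks any action that would violate the safety property. All states and actions below are hypothetical.

```python
# Illustrative sketch: enforcing a receptive safety property by synchronization.
# The monitor automaton, states, and actions are assumptions for illustration.
class SafetyMonitor:
    def __init__(self, transitions, start, bad):
        self.transitions, self.state, self.bad = transitions, start, bad

    def allows(self, action):
        nxt = self.transitions.get((self.state, action), self.state)
        return nxt not in self.bad

    def advance(self, action):
        self.state = self.transitions.get((self.state, action), self.state)

# Property: a component must not "send" before it has performed "open".
monitor = SafetyMonitor(
    transitions={("closed", "open"): "opened", ("closed", "send"): "violation"},
    start="closed", bad={"violation"})

for action in ["send", "open", "send"]:
    if monitor.allows(action):          # synchronization point
        monitor.advance(action)
        print("performed", action)
    else:
        print("blocked  ", action)      # property enforced, no verification needed
```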

    Evaluation of XPath Queries against XML Streams

    XML is nowadays the de facto standard for electronic data interchange on the Web. Available XML data ranges from small Web pages to ever-growing repositories of, e.g., biological and astronomical data, and even to rapidly changing and possibly unbounded streams, as used in Web data integration and publish-subscribe systems. Animated by the ubiquity of XML data, the basic task of XML querying is becoming of great theoretical and practical importance. Recent years have witnessed efforts from both practitioners and theoreticians towards defining an appropriate XML query language. At the core of this common effort lies a navigational approach to locating information in XML data, embodied in a practical and simple query language called XPath. This work brings together the two aforementioned ``worlds'', i.e., XPath query evaluation and XML data streams, and shows both the theoretical and the practical relevance of this fusion. Its relevance cannot be subsumed by traditional database management systems, because the latter are not designed for rapid and continuous loading of individual data items, and do not directly support the continuous queries that are typical for stream applications. The first central contribution of this work consists in the definition and theoretical investigation of three term rewriting systems that rewrite queries with reverse predicates, like parent or ancestor, into equivalent forward queries, i.e., queries without reverse predicates. Our rewriting approach is vital to the evaluation of queries with reverse predicates against unbounded XML streams, because neither the storage of past fragments of the stream nor several stream traversals, as required by the evaluation of reverse predicates, is affordable. Beyond their declared main purpose of providing equivalences between queries with reverse predicates and forward queries, the applications of our rewriting systems shed light on other query language properties, like the expressivity of some of its fragments, query minimization, or even the complexity of query evaluation. For example, using these systems, one can rewrite any graph query into an equivalent forward forest query. The second main contribution consists in a streamed and progressive evaluation strategy for forward queries against XML streams. The evaluation is specified using compositions of so-called stream processing functions, and is implemented using networks of deterministic pushdown transducers. The complexity of this evaluation strategy is polynomial in both the query and the data sizes for forward forest queries and even for a large fragment of graph queries. The third central contribution consists in two real monitoring applications that directly use the results of this work: the monitoring of processes running on UNIX computers, and a system that graphically provides real-time traffic and travel information, as broadcast within ubiquitous radio signals.
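    As a small illustration of the reverse-to-forward rewriting idea (using a well-known XPath equivalence rather than the rewriting systems defined in the thesis), the snippet below checks that a query with a reverse ancestor step selects the same nodes as a forward query that needs no backward navigation; lxml is assumed to be installed.

```python
from lxml import etree  # assumed available: pip install lxml

doc = etree.fromstring("<r><a id='1'><c><b/></c></a><a id='2'/></r>")

# Reverse-axis query: a elements reached backwards from b via ancestor::.
reverse_q = doc.xpath("//b/ancestor::a")
# Forward rewrite: a elements that have a b descendant, no backward navigation.
forward_q = doc.xpath("//a[.//b]")

assert [a.get("id") for a in reverse_q] == [a.get("id") for a in forward_q]
print([a.get("id") for a in forward_q])  # ['1']
```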

    Reverse Engineering and Testing of Rich Internet Applications

    The World Wide Web experiences a continuous and constant evolution, where new initiatives, standards, approaches and technologies are continuously proposed for developing more effective and higher-quality Web applications. To satisfy the market's growing demand for Web applications, new technologies, frameworks, tools and environments that allow Web and mobile applications to be developed with minimal effort and in a very short time have been introduced in recent years. These new technologies have made possible the dawn of a new generation of Web applications, named Rich Internet Applications (RIAs), that offer greater usability and interactivity than traditional ones. This evolution has been accompanied by some drawbacks that are mostly due to the lack of application of well-known software engineering practices and approaches. As a consequence, new research questions and challenges have emerged in the field of Web and mobile application maintenance and testing. The research activity described in this thesis has addressed some of these topics with the specific aim of proposing new and effective solutions to the problems of modelling, reverse engineering, comprehending, re-documenting and testing existing RIAs. Due to the growing relevance of mobile applications in the renewed Web scenario, the problem of testing mobile applications developed for the Android operating system has been addressed too, in an attempt to explore and propose new test automation techniques for this type of application.