Selective applicative functors & probabilistic programming
Integrated Master's dissertation in Informatics Engineering

In functional programming, selective applicative functors (SAFs) are an abstraction sitting between
applicative functors and monads. The abstraction requires all effects to be declared statically,
but provides a way to select dynamically which effects to execute. SAFs have been shown to
be a useful abstraction in several examples, including two industrial case studies, and selective
functors have been used for their static analysis capabilities. Because they collect information
about all possible effects in a computation and enable speculative execution,
they can be exploited to describe probabilistic computations without resorting to
monads. In particular, selective functors appear to provide a way to obtain a more efficient
implementation of probability distributions than monads.
This dissertation develops a probabilistic interpretation of the Arrow and Selective abstractions
in the light of the linear algebra of programming discipline, and explores
ways of offering SAF capabilities to probabilistic programming by exposing sampling as a
concurrency problem. As a result, it provides a type-safe Haskell matrix library capable of
expressing probability distributions and probabilistic computations as typed matrices, and a
probabilistic programming eDSL that explores various techniques in order to offer a novel,
performant solution to probabilistic functional programming.
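The selective interface at the heart of this abstract can be sketched over discrete distributions. The following is a minimal Python illustration (the dissertation's actual artefacts are in Haskell, and all names here are invented): a distribution is a list of (value, probability) pairs, and `select` statically declares a handler distribution that is only "executed" for Left-tagged outcomes.

```python
# Illustrative Python sketch of selective choice over discrete
# distributions (not the dissertation's Haskell library).

def pure(x):
    return [(x, 1.0)]

def bind(dist, f):
    # Monadic bind: run f on each outcome, multiplying probabilities.
    return [(y, p * q) for (x, p) in dist for (y, q) in f(x)]

def select(dist_either, dist_handler):
    # dist_either yields ("L", a) or ("R", b); the handler distribution
    # is declared up front but applied only to "L"-tagged outcomes.
    def step(tagged):
        tag, v = tagged
        if tag == "L":
            return [(h(v), q) for (h, q) in dist_handler]
        return pure(v)
    return bind(dist_either, step)

# A fair coin that needs the handler on heads ("L") only.
coin = [(("L", 0), 0.5), (("R", 1), 0.5)]
out = select(coin, pure(lambda _: 42))
# out == [(42, 0.5), (1, 0.5)]
```

Because the handler is visible before execution, a static analysis can enumerate every possible effect, which is the property the abstract exploits for probabilistic computations.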
Industrial Robotics
This book covers a wide range of topics relating to advanced industrial robotics, sensors, and automation technologies. Although highly technical and complex in nature, the papers presented in this book represent some of the latest cutting-edge technologies and advancements in industrial robotics. The topics covered include networking, properties of manipulators, forward and inverse robot arm kinematics, motion path planning, machine vision, and many other practical subjects too numerous to list here. The authors and editor of this book wish to inspire people, especially young ones, to get involved with robotic and mechatronic engineering technology and to develop new and exciting practical applications, perhaps using the ideas and concepts presented herein.
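As a taste of the forward-kinematics topic listed above, the standard textbook formula for a two-link planar arm can be written in a few lines (this is a generic illustration, not code from the book):

```python
import math

# End-effector position of a two-link planar arm: joint angles t1, t2
# (radians) and link lengths l1, l2. Standard textbook forward kinematics.
def fk_2link(t1, t2, l1=1.0, l2=1.0):
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

# With both joints at zero the arm lies along the x-axis at (l1 + l2, 0).
```

Inverse kinematics runs this relation backwards, recovering joint angles from a desired end-effector position, and generally admits multiple solutions.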
Automatically Comparing Memory Consistency Models
A memory consistency model (MCM) is the part of a programming language or computer architecture specification that defines which values can legally be read from shared memory locations. Because MCMs take into account various optimisations employed by architectures and compilers, they are often complex and counterintuitive, which makes them challenging to design and to understand.
We identify four tasks involved in designing and understanding MCMs: generating conformance tests, distinguishing two MCMs, checking compiler optimisations, and checking compiler mappings. We show that all four tasks are instances of a general constraint-satisfaction problem to which the solution is either a program or a pair of programs. Although this problem is intractable for automatic solvers when phrased over programs directly, we show how to solve analogous constraints over program executions, and then construct programs that satisfy the original constraints.
Our technique, which is implemented in the Alloy modelling framework, is illustrated on several software- and architecture-level MCMs, both axiomatically and operationally defined. We automatically recreate several known results, often in a simpler form, including: distinctions between variants of the C11 MCM; a failure of the ‘SC-DRF guarantee’ in an early C11 draft; that x86 is ‘multi-copy atomic’ and Power is not; bugs in common C11 compiler optimisations; and bugs in a compiler mapping from OpenCL to AMD-style GPUs. We also use our technique to develop and validate a new MCM for NVIDIA GPUs that supports a natural mapping from OpenCL.
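The kind of question such conformance tests answer can be illustrated, far more simply than the paper's Alloy machinery, with the classic store-buffering litmus test enumerated under sequential consistency:

```python
from itertools import permutations

# Illustration only (not the paper's technique): thread 0 runs x=1; r0=y
# and thread 1 runs y=1; r1=x. Enumerating every sequentially consistent
# interleaving shows that r0 == r1 == 0 is impossible under SC, although
# weaker models such as x86-TSO permit it.
t0 = [("store", "x"), ("load", "y", "r0")]
t1 = [("store", "y"), ("load", "x", "r1")]

def sc_outcomes():
    outcomes = set()
    for order in set(permutations([0, 0, 1, 1])):  # which thread steps when
        mem = {"x": 0, "y": 0}
        regs, idx = {}, [0, 0]
        for tid in order:
            op = (t0, t1)[tid][idx[tid]]
            idx[tid] += 1
            if op[0] == "store":
                mem[op[1]] = 1
            else:
                regs[op[2]] = mem[op[1]]
        outcomes.add((regs["r0"], regs["r1"]))
    return outcomes

# (0, 0) never appears among the SC outcomes.
```

The paper's contribution is, in effect, to run this kind of search in reverse: given a constraint on outcomes, synthesise the distinguishing program.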
Extending Two-Dimensional Knowledge Management System Theory with Organizational Activity Systems' Workflow Dynamics
Between 2005 and 2010, across 48 countries including the United States, an increasing positive correlation emerged between national intellectual capital and gross domestic product per capita. The problem remains that organizations operating with increasingly complex knowledge networks often lose intellectual capital as a result of ineffective knowledge management practices. The purpose of this study was to provide management opportunities to reduce intellectual capital loss. The first research question addressed how an enhanced intelligent, complex, and adaptive system (ICAS) model could clarify management's understanding of organizational knowledge transfer. The second research question addressed how interdisciplinary theory could be infused more meaningfully to enhance management practices of the organization's knowledge ecosystem. The nature of this study was phenomenological, to gain deeper understanding of individual experiences related to knowledge flow phenomena. Data were collected from a single historical research dataset containing 11 subject interviews and analyzed using Moustakas' heuristic framework. The original interviews were collected in 2012 during research within a military unit and were included in this study based on theme alignment. Organizational, knowledge management, emergent systems, and cognition theories were synthesized to enhance understanding of emergent ICAS forces. Individuals create unique ICAS flow emergent force dynamics in relation to micro- and macro-meso sensemaking and sensegiving. Findings indicated that individual knowledge work significantly shapes emergent ICAS flow dynamics. Collectively enhancing knowledge stewardship over time could foster positive social change by improving national welfare.
Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected Works), Vol. 4
The fourth volume on Advances and Applications of Dezert-Smarandache Theory (DSmT) for information fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics. The contributions (see List of Articles published in this book, at the end of the volume) have been published or presented after disseminating the third volume (2009, http://fs.unm.edu/DSmT-book3.pdf) in international conferences, seminars, workshops and journals.
The first part of this book presents theoretical advances in DSmT, dealing with belief functions, conditioning and deconditioning, the Analytic Hierarchy Process, decision making, multi-criteria analysis, evidence theory, combination rules, evidence distance, conflicting belief, sources of evidence with different importances and reliabilities, importance of sources, the pignistic probability transformation, qualitative reasoning under uncertainty, imprecise belief structures, 2-tuple linguistic labels, the Electre Tri method, hierarchical proportional redistribution, basic belief assignment, subjective probability measures, Smarandache codification, neutrosophic logic, outranking methods, Dempster-Shafer theory, the Bayes fusion rule, frequentist probability, mean square error, controlling factors, optimal assignment solutions, data association, the Transferable Belief Model, and others.
More applications of DSmT have emerged in the years since the appearance of the third DSmT book in 2009. Accordingly, the second part of this volume covers applications of DSmT in connection with electronic support measures, belief functions, sensor networks, ground moving target and multiple-target tracking, vehicle-borne improvised explosive devices, the belief interacting multiple model filter, seismic and acoustic sensors, support vector machines, alarm classification, the ability of the human visual system, the Uncertainty Representation and Reasoning Evaluation Framework, threat assessment, handwritten signature verification, automatic aircraft recognition, dynamic data-driven application systems, adjustment of secure communication trust analysis, and so on.
Finally, the third part presents a list of references related to DSmT, published or presented over the years since its inception in 2004, in chronological order.
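Several of the listed topics (combination rules, conflicting belief, Dempster-Shafer theory) revolve around fusing belief masses. As background, the classical Dempster rule of combination can be sketched as follows; note this is the classical rule that DSmT generalises and replaces, not DSmT itself, and the example masses are invented:

```python
# Classical Dempster's rule of combination from Dempster-Shafer theory.
# Masses map frozenset hypotheses to belief mass summing to 1.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q  # mass landing on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.5, frozenset({"a", "b"}): 0.5}
fused = dempster_combine(m1, m2)
# The 0.3 of conflicting mass is discarded and the rest renormalised.
```

The way this rule discards conflicting mass is precisely what the proportional conflict redistribution rules studied in DSmT aim to improve.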
Actes des Cinquièmes journées nationales du Groupement De Recherche CNRS du Génie de la Programmation et du Logiciel
National audience. This document contains the proceedings of the fifth national meeting of the Groupement De Recherche CNRS du Génie de la Programmation et du Logiciel (GDR GPL), held in Nancy from 3 to 5 April 2013. The contributions presented in this document were selected by the GDR's various working groups. They consist of abstracts, new versions, posters, and demonstrations corresponding to work that has already been validated by the programme committees of other conferences and journals, and whose rights belong exclusively to their authors.
Extending relational model transformations to better support the verification of increasingly autonomous systems
Over the past decade the capabilities of autonomous systems have been steadily increasing. Unmanned systems are moving from being predominantly remotely operated to including a basic decision-making capability. This trend is expected to continue, with autonomous systems making decisions in increasingly complex environments based on more abstract, higher-level missions and goals. These changes have significant implications for how such systems should be designed and engineered. Indeed, as the goals and tasks these systems are to achieve become more abstract, and the environments they operate in become more complex, are current approaches to verification and validation sufficient?
Domain Specific Modelling is a key technology for the verification of autonomous systems. Verifying these systems will ultimately involve understanding a significant number of domains. This includes goals/tasks, environments, systems functions and their associated performance. Relational Model Transformations provide a means to utilise, combine and check models for consistency across these domains. In this thesis an approach that utilises relational model transformation technologies for systems verification, Systems MDD, is presented along with the results of a series of trials conducted with an existing relational model transformation language (QVT-Relations). These trials identified a number of problems with existing model transformation languages, including poorly or loosely defined semantics, differing interpretations of specifications across different tools and the lack of a guarantee that a model transformation would generate a model that was compliant with its associated meta-model.
To address these problems, two related solvers were developed to assist with realising the Systems MDD approach. The first solver, MMCS, is concerned with partial model completion, where a partial model is defined as a model that does not fully conform with its associated meta-model. It identifies appropriate modifications to be made to a partial model in order to bring it into full compliance. The second solver, TMPT, is a relational model transformation engine that prioritises target models. It considers multiple interpretations of a relational transformation specification, chooses an interpretation that results in a compliant target model (if one exists) and, optionally, maximises some other attribute associated with the model. A series of experiments applied these solvers to common transformation problems from the published literature.
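The partial-model-completion idea behind a solver like MMCS can be sketched with a deliberately tiny invented metamodel (this is an illustration of the concept, not the thesis's actual solver): a model holds nodes and edges, the metamodel requires every edge endpoint to name an existing node, and completion repairs any dangling reference.

```python
# Hypothetical sketch of partial model completion. The "metamodel"
# constraint: every edge endpoint must name an existing node.
def complete(model):
    nodes = set(model["nodes"])
    for src, dst in model["edges"]:
        nodes.update((src, dst))  # minimal repair: create missing nodes
    return {"nodes": sorted(nodes), "edges": list(model["edges"])}

partial = {"nodes": ["a"], "edges": [("a", "b")]}  # "b" is dangling
fixed = complete(partial)
# fixed["nodes"] == ["a", "b"]: conformance is restored.
```

A real solver must of course search among many candidate repairs and handle far richer constraints; the point here is only the notion of bringing a partial model into compliance with its meta-model.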
Development and Evaluation of Methodologies for Vulnerability Analysis of Ad-hoc Routing Protocols
This thesis presents a number of methodologies for computer-assisted vulnerability analysis of routing protocols in ad-hoc networks, towards the goal of automating the process of finding vulnerabilities (possible attacks) in such protocols and correcting them. The methodologies developed are each based on a different representation (model) of the routing protocol, which determines the quantitative methods and algorithms used. Each methodology is evaluated with respect to effectiveness, feasibility, and the possibility of application to realistically sized networks. The first methodology studied is based on formal models of the protocols and associated symbolic partially ordered model checkers. Using this methodology, a simple attack on unsecured AODV is demonstrated, and an extension of the Strands model suitable for such routing protocols is developed. The second methodology is based on timed-probabilistic formal models, which are necessary due to the probabilistic nature of ad-hoc routing protocols; it uses natural extensions of the first. A nondeterministic timing model based on partially ordered events is considered for application to the model-checking problem. Determining probabilities within this structure requires calculating the volume of a particular type of convex body, which is known to be #P-hard. A new algorithm is derived, exploiting the particular problem structure, that reduces the time needed to compute these quantities compared with conventional algorithms. We show that timed-probabilistic formal models can be linked to trace-based techniques by sampling methods, and conversely that execution traces can serve as starting points for formal exploration of the state space. We show that an approach combining trace-based and formal methods can converge faster than either alone on a set of problems.
However, the applicability of both of these techniques to ad-hoc network routing protocols is limited to small networks and relatively simple attacks, and we provide evidence to this end. To address this limitation, a final technique employing only trace-based methods within an optimization framework is developed. In an application of this third methodology, it is shown that it can be used to evaluate the effects of a simple attack on OLSR. The result can be viewed, from a certain perspective, as an example of automatically discovering a new attack on the OLSR routing protocol.
Extracting information from manufacturing data using data mining methods
EThOS - Electronic Theses Online Service, United Kingdom.
Automated Reasoning in Quantified Modal and Temporal Logics
Centre for Intelligent Systems and their Applications

This thesis is about automated reasoning in quantified modal and temporal logics, with an application to formal methods. Quantified modal and temporal logics are extensions of classical first-order logic in which the notion of truth is extended to take into account its necessity or, equivalently, in the temporal setting, its persistence through time.
Due to their high complexity, these logics are less widely known and studied than their propositional counterparts. Moreover, little so far is known about their mechanisability and usefulness for formal methods.
The relevant contributions of this thesis are threefold: firstly, we devise a sound and complete set of sequent calculi for quantified modal logics; secondly, we extend the approach to the quantified temporal logic of linear, discrete time and develop a framework for doing automated reasoning via Proof Planning in it; thirdly, we show a set of experimental results obtained by applying the framework to the problem of
Feature Interactions in telecommunication systems.
These results indicate that (a) the problem can be concisely and effectively modeled in the aforementioned logic, (b) proof planning actually captures common structures in the related proofs, and (c) the approach is viable also from the point of view of efficiency.
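The possible-worlds reading of necessity that underlies these logics can be sketched with a minimal propositional Kripke-semantics evaluator (the thesis treats the much harder *quantified* modal and temporal case; this propositional sketch, with invented names, only illustrates the base notion of box and diamond):

```python
# Propositional Kripke semantics: a formula is a nested tuple, `access`
# maps a world to its accessible worlds, and `val` maps a world to the
# set of atoms true there.
def holds(w, formula, access, val):
    op = formula[0]
    if op == "atom":
        return formula[1] in val[w]
    if op == "not":
        return not holds(w, formula[1], access, val)
    if op == "box":    # necessity: true in every world accessible from w
        return all(holds(u, formula[1], access, val)
                   for u in access.get(w, []))
    if op == "dia":    # possibility: true in some accessible world
        return any(holds(u, formula[1], access, val)
                   for u in access.get(w, []))
    raise ValueError("unknown operator: " + str(op))

access = {"w0": ["w1", "w2"]}
val = {"w0": set(), "w1": {"p"}, "w2": {"p"}}
# ("box", ("atom", "p")) holds at w0: p is true in both accessible worlds.
```

In the temporal setting the accessibility relation becomes the successor relation on discrete time, so box reads as "always" and diamond as "eventually".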