libcppa - Designing an Actor Semantic for C++11
Parallel hardware makes concurrency mandatory for efficient program
execution. However, writing concurrent software is both challenging and
error-prone. C++11 provides standard facilities for multiprogramming, such as
atomic operations with acquire/release semantics and RAII mutex locking, but
these primitives remain too low-level. Using them both correctly and
efficiently still requires expert knowledge and hand-crafting. The actor model
replaces implicit communication by sharing with an explicit message passing
mechanism. It applies to concurrency as well as distribution, and a lightweight
actor model implementation that schedules all actors in a properly
pre-dimensioned thread pool can outperform equivalent thread-based
applications. However, the actor model has not yet entered the domain of native
programming languages beyond vendor-specific island solutions. With the
open-source library libcppa, we want to combine the actor model's support for
building reliable and distributed systems with the performance and
resource-efficiency of C++11.
Comment: 10 pages
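The explicit message-passing discipline described above is language-agnostic. As an illustrative sketch only (not libcppa's API, which the abstract does not show), here is a minimal actor in Python: private state, a mailbox, and a single thread that processes messages one at a time, so no locks are needed around the state:

```python
import threading
import queue

class Actor:
    """A minimal actor: private state, a mailbox, and a behavior.

    The state is touched only by the actor's own thread, so all
    communication happens through explicit message passing.
    """
    def __init__(self):
        self.mailbox = queue.Queue()
        self.total = 0  # private state, never shared directly
        self.thread = threading.Thread(target=self._run)
        self.thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:              # poison pill: shut down
                return
            kind, payload = msg
            if kind == "add":
                self.total += payload
            elif kind == "get":
                payload.put(self.total)  # reply via a channel carried in the message

    def send(self, msg):
        self.mailbox.put(msg)

counter = Actor()
for i in range(10):
    counter.send(("add", i))
reply = queue.Queue()
counter.send(("get", reply))
result = reply.get()   # messages are processed in order, so result == 0+1+...+9 == 45
counter.send(None)
counter.thread.join()
```

A real actor runtime such as libcppa would multiplex many such actors over a pre-dimensioned thread pool instead of dedicating one OS thread per actor.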
A LISP-Ada connection
The prototype demonstrates the feasibility of using Ada for expert systems and of implementing an expert-friendly interface that supports knowledge entry. In the Ford LISP-Ada Connection (FLAC) system, LISP and Ada are used in ways that complement their respective capabilities. Future investigation will concentrate on enhancing the expert knowledge entry/debugging interface and on the issues associated with multitasking and real-time expert system implementation in Ada.
A Case Study in Coordination Programming: Performance Evaluation of S-Net vs Intel's Concurrent Collections
We present a programming methodology and runtime performance case study
comparing the declarative data flow coordination language S-Net with Intel's
Concurrent Collections (CnC). As a coordination language S-Net achieves a
near-complete separation of concerns between sequential software components
implemented in a separate algorithmic language and their parallel orchestration
in an asynchronous data flow streaming network. We investigate the merits of
S-Net and CnC with the help of a relevant and non-trivial linear algebra
problem: tiled Cholesky decomposition. We describe two alternative S-Net
implementations of tiled Cholesky factorization and compare them with two CnC
implementations, one with explicit performance tuning and one without, that
have previously been used to illustrate Intel CnC. Our experiments on a 48-core
machine demonstrate that S-Net manages to outperform CnC on this problem.
Comment: 9 pages, 8 figures, 1 table; accepted for the PLC 2014 workshop
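Tiled Cholesky factorization, the benchmark used above, splits the matrix into square tiles and expresses the factorization as per-tile kernels (factor a diagonal tile, triangular-solve the tiles below it, update the trailing tiles); the resulting task graph is what S-Net and CnC orchestrate in parallel. Below is a sequential pure-Python sketch of the tile-level algorithm, not taken from either implementation in the paper:

```python
import math

def potrf(a):
    # Cholesky of one diagonal tile: a == L @ L^T, L lower triangular.
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        L[j][j] = math.sqrt(a[j][j] - sum(L[j][k] ** 2 for k in range(j)))
        for i in range(j + 1, n):
            L[i][j] = (a[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

def trsm(Lkk, a):
    # Solve X @ Lkk^T == a for one off-diagonal tile.
    n = len(a)
    X = [row[:] for row in a]
    for r in range(n):
        for c in range(n):
            X[r][c] = (X[r][c] - sum(X[r][m] * Lkk[c][m] for m in range(c))) / Lkk[c][c]
    return X

def gemm_sub(c, a, b):
    # c -= a @ b^T  (SYRK when a and b are the same tile).
    n = len(c)
    for i in range(n):
        for j in range(n):
            c[i][j] -= sum(a[i][m] * b[j][m] for m in range(n))

def tiled_cholesky(tiles):
    # tiles: nt x nt grid of b x b tiles of an SPD matrix (lower part used).
    nt = len(tiles)
    for k in range(nt):
        tiles[k][k] = potrf(tiles[k][k])                     # 1. factor diagonal tile
        for i in range(k + 1, nt):
            tiles[i][k] = trsm(tiles[k][k], tiles[i][k])     # 2. triangular solves
        for i in range(k + 1, nt):
            for j in range(k + 1, i + 1):
                gemm_sub(tiles[i][j], tiles[i][k], tiles[j][k])  # 3. trailing update
    return tiles

# Example: a 4x4 SPD matrix split into a 2x2 grid of 2x2 tiles.
A = [[4.0, 2.0, 0.0, 0.0],
     [2.0, 5.0, 2.0, 0.0],
     [0.0, 2.0, 5.0, 2.0],
     [0.0, 0.0, 2.0, 5.0]]
b = 2
T = [[[row[bj * b:(bj + 1) * b] for row in A[bi * b:(bi + 1) * b]]
      for bj in range(2)] for bi in range(2)]
tiled_cholesky(T)
# Reassemble the lower-triangular factor L and check L @ L^T recovers A.
L = [[T[i // b][j // b][i % b][j % b] if j // b <= i // b else 0.0
      for j in range(4)] for i in range(4)]
err = max(abs(sum(L[i][m] * L[j][m] for m in range(4)) - A[i][j])
          for i in range(4) for j in range(4))
```

The dependencies among steps 1-3 for different values of k are exactly what gives the runtime system (S-Net's streaming network or CnC's step collections) its exploitable parallelism.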
Intelligent Integrated Management for Telecommunication Networks
As communication networks keep growing in size, with faster connections, cooperating technologies, and increasingly diverse equipment and data communications, managing the resulting networks becomes ever more important and time-critical. More advanced tools are needed to support this activity. In this article we describe the design and implementation of a management platform using Artificial Intelligence reasoning techniques; for this goal we make use of an expert system. This study focuses on an intelligent framework and a language for formalizing knowledge-management descriptions and combining them with the existing OSI management model. We propose a new paradigm in which intelligent network management is integrated into the conceptual repository of management information called the Management Information Base (MIB). This paper outlines the development of an expert system prototype based on our proposed GDMO+ standard and describes the most important facets, advantages, and drawbacks that were found after prototyping our proposal.
Data Cleaning for XML Electronic Dictionaries via Statistical Anomaly Detection
Many important forms of data are stored digitally in XML format. Errors can
occur in the textual content of the data in the fields of the XML. Fixing these
errors manually is time-consuming and expensive, especially for large amounts
of data. There is increasing interest in the research, development, and use of
automated techniques for assisting with data cleaning. Electronic dictionaries
are an important form of data frequently stored in XML format, and they often
have errors introduced through a mixture of manual typographical entry errors
and optical character recognition errors. In this paper we describe methods for
flagging statistical anomalies as likely errors in electronic dictionaries
stored in XML format. We describe six systems based on different sources of
information. The systems detect errors using various signals in the data
including uncommon characters, text length, character-based language models,
word-based language models, tied-field length ratios, and tied-field
transliteration models. Four of the systems detect errors based on expectations
automatically inferred from content within elements of a single field type. We
call these single-field systems. Two of the systems detect errors based on
correspondence expectations automatically inferred from content within elements
of multiple related field types. We call these tied-field systems. For each
system, we provide an intuitive analysis of the type of error that it is
successful at detecting. Finally, we describe two larger-scale evaluations
using crowdsourcing with Amazon's Mechanical Turk platform and using the
annotations of a domain expert. The evaluations consistently show that the
systems are useful for improving the efficiency with which errors in XML
electronic dictionaries can be detected.
Comment: 8 pages, 4 figures, 5 tables; published in Proceedings of the 2016 IEEE Tenth International Conference on Semantic Computing (ICSC), Laguna Hills, CA, USA, pages 79-86, February 2016
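The character-based language-model signal mentioned above can be sketched as follows: estimate character-bigram statistics from the clean values of one field type, then flag the entry whose average per-character log-probability is lowest. This is a hypothetical minimal version, not the paper's system; the example words and the add-alpha smoothing are illustrative assumptions:

```python
import math
from collections import Counter

def train_bigram(entries):
    """Character-bigram and context counts over entries of one field type."""
    pairs, chars = Counter(), Counter()
    for text in entries:
        padded = "^" + text + "$"          # boundary markers
        for a, b in zip(padded, padded[1:]):
            pairs[(a, b)] += 1
            chars[a] += 1
    return pairs, chars

def avg_logprob(text, pairs, chars, alpha=0.5):
    """Mean log P(next char | current char), with add-alpha smoothing."""
    vocab = len(chars) + 1                 # +1 for unseen characters
    padded = "^" + text + "$"
    total = 0.0
    for a, b in zip(padded, padded[1:]):
        p = (pairs[(a, b)] + alpha) / (chars[a] + alpha * vocab)
        total += math.log(p)
    return total / (len(padded) - 1)

# Hypothetical field contents; "c@re" mimics an OCR-damaged headword.
clean = ["cat", "car", "cart", "care", "card", "cast", "case", "cane", "cape"]
pairs, chars = train_bigram(clean)
candidates = ["care", "cart", "c@re"]
scores = {w: avg_logprob(w, pairs, chars) for w in candidates}
flagged = min(scores, key=scores.get)      # lowest score = most anomalous
```

The uncommon-character and text-length signals from the paper follow the same single-field pattern: infer an expectation from one field type's contents, then score each entry against it.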
Competing creole transcripts on trial
A criminal prosecution of Jamaican Creole (JC) speaking ‘posse’ (= gang) members in New York included evidence of recorded speech in JC. Clandestine recordings (discussions of criminal events, including narration of a homicide) were introduced at trial. Taped data were translated for the prosecution by a non-linguist native speaker of JC. The defense disputed these texts and commissioned alternative transcriptions from a creolist linguist, who was a non-speaker of JC. The prosecution in turn hired another creolist, a near-native speaker of and specialist in JC, to testify on the relative accuracy of both sets of earlier texts. Differing representations of key conversations were submitted to a non-creole-speaking judge/jury, both linguists testified, and the defendants were convicted. The role of linguistic testimony and practice (especially transcription) in the trial is analysed. A typology of linguistic expertise is given, and the effects of the language’s Creole status and lack of instrumentalization on the trial are discussed.