Massively Parallel Continuous Local Search for Hybrid SAT Solving on GPUs
Although state-of-the-art (SOTA) SAT solvers based on conflict-driven clause
learning (CDCL) have achieved remarkable engineering success, their sequential
nature limits the parallelism that may be extracted for acceleration on
platforms such as the graphics processing unit (GPU). In this work, we propose
FastFourierSAT, a highly parallel hybrid SAT solver based on gradient-driven
continuous local search (CLS). This is realized by a novel parallel algorithm
inspired by the Fast Fourier Transform (FFT)-based convolution for computing
the elementary symmetric polynomials (ESPs), which is the major computational
task in previous CLS methods. The complexity of our algorithm matches the best
previous result. Furthermore, the substantial parallelism inherent in our
algorithm can leverage the GPU for acceleration, demonstrating significant
improvement over previous CLS approaches. We also propose incorporating
restart heuristics into CLS to improve search efficiency. We compare our
approach with the SOTA parallel SAT solvers on several benchmarks. Our results
show that FastFourierSAT computes the gradient 100+ times faster than previous
prototypes implemented on CPU. Moreover, FastFourierSAT solves most instances
and demonstrates promising performance on larger instances.
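The abstract's core computational step, evaluating the elementary symmetric polynomials, can be organized as a balanced product of linear polynomials in which every multiplication is a convolution. A minimal CPU sketch of that idea (function names illustrative; the paper's GPU algorithm replaces the direct convolution with FFT-based convolution):

```python
import numpy as np

def esp(xs):
    """Elementary symmetric polynomials of xs.

    e_k is the coefficient of z^k in prod_i (1 + x_i * z), so the ESPs fall
    out of a balanced product of linear polynomials, where each polynomial
    multiplication is a convolution (the FFT-friendly, parallel core step).
    """
    polys = [np.array([1.0, x]) for x in xs]  # ascending coefficients of 1 + x_i*z
    while len(polys) > 1:
        nxt = [np.convolve(polys[i], polys[i + 1])
               for i in range(0, len(polys) - 1, 2)]
        if len(polys) % 2:
            nxt.append(polys[-1])  # odd one out is carried to the next round
        polys = nxt
    return polys[0]

print(esp([1.0, 2.0, 3.0]))  # e0..e3 = 1, 6, 11, 6
```

Each round halves the number of polynomials, and all convolutions within a round are independent, which is what a GPU can exploit.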
Analog Network Coding for Multi-User Spread-Spectrum Communication Systems
This work presents another look at an analog network coding scheme for
multi-user spread-spectrum communication systems. Our proposed system combines
coding and cooperation between a relay and users to boost the throughput and to
exploit interference. To this end, each pair of users, say A and B, that
communicate with each other via a relay shares the same spreading code. The
relay has two roles: it synchronizes network transmissions and it broadcasts
the combined signals received from the users. From user A's point of view, the
signal is decoded, and then the data transmitted by user B is recovered by
subtracting user A's own data. We derive the analytical performance of this
system for an additive white Gaussian noise channel with the presence of
multi-user interference, and we confirm its accuracy by simulation. Comment: 6 pages, 2 figures, to appear at IEEE WCNC'1
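The subtraction step at each user can be illustrated with a toy baseband simulation, assuming BPSK symbols, a hypothetical length-64 spreading code, and a unit-gain amplify-and-forward relay (all parameters are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.choice([-1.0, 1.0], size=64)   # spreading code shared by the user pair
a, b = 1.0, -1.0                       # BPSK data of users A and B

# The relay superposes both users' spread signals (plus AWGN) and broadcasts.
broadcast = a * c + b * c + 0.1 * rng.standard_normal(64)

# User A despreads the broadcast, then removes its own known contribution
# to recover user B's data.
combined = broadcast @ c / (c @ c)     # estimate of a + b
b_hat = np.sign(combined - a)
print(b_hat)                           # -1.0, matching b
```

Because both users share one spreading code, the relay never needs to separate them: the "interference" of one user on the other is exploited rather than suppressed.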
Representing Conversations for Scalable Overhearing
Open distributed multi-agent systems are gaining interest in the academic
community and in industry. In such open settings, agents are often coordinated
using standardized agent conversation protocols. The representation of such
protocols (for analysis, validation, monitoring, etc) is an important aspect of
multi-agent applications. Recently, Petri nets have been shown to be an
interesting approach to such representation, and radically different approaches
using Petri nets have been proposed. However, their relative strengths and
weaknesses have not been examined. Moreover, their scalability and suitability
for different tasks have not been addressed. This paper addresses both these
challenges. First, we analyze existing Petri net representations in terms of
their scalability and appropriateness for overhearing, an important task in
monitoring open multi-agent systems. Then, building on the insights gained, we
introduce a novel representation using Colored Petri nets that explicitly
represent legal joint conversation states and messages. This representation
approach offers significant improvements in scalability and is particularly
suitable for overhearing. Furthermore, we show that this new representation
offers a comprehensive coverage of all conversation features of FIPA
conversation standards. We also present a procedure for transforming AUML
conversation protocol diagrams (a standard human-readable representation), to
our Colored Petri net representation.
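The idea of explicit joint conversation states can be made concrete as a small token game over a FIPA-request-like fragment, with conversation ids playing the role of token colours (the state and message names below are illustrative, not taken from the paper):

```python
# Places are joint conversation states; arcs are labelled by message
# performatives. Tokens are coloured with a conversation id, so one net
# can overhear many interleaved conversations at once.
NET = {
    "start":     {"request": "requested"},
    "requested": {"agree": "agreed", "refuse": "done"},
    "agreed":    {"inform-done": "done", "failure": "done"},
}

def overhear(messages):
    """Track each conversation's joint state from an overheard message stream."""
    marking = {}  # conversation id -> current place
    for conv_id, performative in messages:
        place = marking.get(conv_id, "start")
        arcs = NET.get(place, {})
        if performative not in arcs:
            raise ValueError(f"illegal {performative!r} in state {place!r} ({conv_id})")
        marking[conv_id] = arcs[performative]
    return marking

print(overhear([("c1", "request"), ("c2", "request"),
                ("c1", "agree"), ("c1", "inform-done")]))
# {'c1': 'done', 'c2': 'requested'}
```

An overhearing monitor scales because the net itself is fixed: adding a conversation adds only a token, not more places or transitions.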
A Novel Construction of Vector Combinatorial (VC) Code Families and Detection Scheme for SAC OCDMA Systems
There has been growing interest in using optical code division multiple access
(OCDMA) systems for next-generation high-speed optical fiber networks. The
advantage of spectral amplitude coding (SAC-OCDMA) over conventional OCDMA
systems is that, when an appropriate detection technique is used, the multiple
access interference (MAI) can be completely canceled. The motivation of this
research is to develop new code families to enhance the overall performance of
OCDMA systems. Four aspects are tackled in this research. Firstly, a
comprehensive discussion of existing codes is given, covering their advantages
and disadvantages. Two algorithms are proposed to construct the new code
families, namely the Vector Combinatorial (VC) codes. Secondly, a new detection technique
based on exclusive-OR (XOR) logic is developed and compared to the reported
detection techniques. Thirdly, a software simulation of a SAC-OCDMA system with
the VC families is conducted using a commercial optical simulation tool,
Virtual Photonic Instrument ("VPI™ TransmissionMaker 7.1"). Finally, an
extensive investigation is conducted to study and characterize VC-OCDMA in a
local area network (LAN).
For the performance analysis, the effects of phase-induced intensity noise (PIIN), shot
noise, and thermal noise are considered simultaneously. The performance of the
system relative to reported systems was characterized in terms of the
signal-to-noise ratio (SNR), the bit error rate (BER) and the effective
received power (Psr). Numerical
results show that an acceptable BER of 10^-9 was achieved by the VC codes with
120 active users, while much better performance can be achieved when the
effective received power Psr > -26 dBm. In particular, the BER can be
significantly improved when the VC optimal channel-spacing width is carefully
selected; the best performance occurs at a spacing bandwidth between 0.8 and
1 nm. The simulation results indicate that the VC code has superior performance
compared to other reported codes for the same transmission quality. It is also
found that for a transmitted power of 0 dBm, the BERs estimated from the
eye-diagram patterns are 10^-14 and 10^-5 for the VC and Modified Quadratic
Congruence (MQC) codes, respectively.
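The SNR-to-BER conversion in such analyses is commonly the Gaussian approximation BER = (1/2)·erfc(sqrt(SNR/8)). A small helper, assuming that standard formula rather than the thesis's exact noise expressions for PIIN, shot and thermal noise:

```python
from math import erfc, sqrt

def ber_from_snr(snr):
    # Gaussian approximation widely used in SAC-OCDMA performance analysis:
    # BER = (1/2) * erfc(sqrt(SNR / 8))
    return 0.5 * erfc(sqrt(snr / 8.0))

# An SNR near 144 (about 21.6 dB) sits at the BER = 10^-9 acceptability
# threshold quoted in such studies.
print(ber_from_snr(144.0))
```

With a full noise model, `snr` would itself be a function of the number of active users, the code weight, and Psr, which is how curves like "BER of 10^-9 at 120 users" are produced.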
A graphical formalism for mixed multi-unit combinatorial auctions
Mixed multi-unit combinatorial auctions are auctions that allow participants to bid for bundles of goods to buy, for bundles of goods to sell, and for transformations of goods. The intuitive meaning of a bid for a transformation is that the bidder is offering to produce a set of output goods after having received a set of input goods. To solve such an auction, the auctioneer has to choose a set of bids to accept and decide on a sequence in which to implement the associated transformations. Mixed auctions can potentially be employed for the automated assembly of supply chains of agents. However, mixed auctions can be effectively applied only if we can also ensure their computational feasibility without jeopardising optimality. To this end, we propose a graphical formalism, based on Petri nets, that facilitates the compact representation of both the search space and the solutions associated with the winner determination problem for mixed auctions. This approach allows us to dramatically reduce the number of decision variables required for solving a broad class of mixed auction winner determination problems. An additional major benefit of our graphical formalism is that it provides new ways to formally analyse the structural and behavioural properties of mixed auctions. © 2009 Springer Science+Business Media, LLC. This work was funded by the Jose Castillejo programme (JC2008-00337), IEA (TIN2006-15662-C02-01), OK (IST-4-027253-STP), eREP (EC-FP6-CIT5-28575) and Agreement Technologies (CONSOLIDER CSD2007-0022, INGENIO 2010). Peer reviewed.
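The winner determination problem described here can be stated in miniature: choose a set of transformation bids and an order in which the required goods are available at every step, maximizing revenue. A brute-force toy with hypothetical bid data (the paper's contribution is precisely avoiding this enumeration via the Petri-net encoding):

```python
from itertools import combinations, permutations

# A bid is (inputs consumed, outputs produced, payment to the auctioneer);
# a negative payment means the auctioneer pays the bidder. Toy data only.
BIDS = [
    ({}, {"wood": 2}, -4),            # bidder sells 2 wood for 4
    ({"wood": 1}, {"chair": 1}, -2),  # bidder turns wood into a chair for 2
    ({"chair": 1}, {}, 10),           # bidder buys a chair for 10
]

def feasible(seq):
    """Check that goods are in stock when each transformation fires."""
    stock = {}
    for ins, outs, _ in seq:
        for g, n in ins.items():
            if stock.get(g, 0) < n:
                return False
            stock[g] -= n
        for g, n in outs.items():
            stock[g] = stock.get(g, 0) + n
    return True

def solve(bids):
    """Enumerate bid subsets and orderings; return (best revenue, ordering)."""
    best = (0, ())
    for r in range(len(bids) + 1):
        for subset in combinations(range(len(bids)), r):
            for order in permutations(subset):
                seq = [bids[i] for i in order]
                if feasible(seq):
                    revenue = sum(p for _, _, p in seq)
                    if revenue > best[0]:
                        best = (revenue, order)
    return best

print(solve(BIDS))  # (4, (0, 1, 2)): sell wood, make a chair, buy the chair
```

The sequencing decision is what makes the problem harder than ordinary combinatorial auctions, and it is exactly what a Petri net's token flow captures natively.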
Automated streamliner portfolios for constraint satisfaction problems
Funding: This work is supported by the EPSRC grants EP/P015638/1 and EP/P026842/1, and Nguyen Dang is a Leverhulme Early Career Fellow. We used the Cirrus UK National Tier-2 HPC Service at EPCC (http://www.cirrus.ac.uk) funded by the University of Edinburgh and EPSRC (EP/P020267/1). Constraint Programming (CP) is a powerful technique for solving large-scale combinatorial problems. Solving a problem proceeds in two distinct phases: modelling and solving. Effective modelling has a huge impact on the performance of the solving process. Even with the advance of modern automated modelling tools, the search spaces involved can be so vast that problems remain difficult to solve. To constrain the model further, a more aggressive step is the addition of streamliner constraints, which are not guaranteed to be sound but are designed to focus effort on a highly restricted but promising portion of the search space. Previously, producing effective streamlined models was a manual, difficult and time-consuming task. This paper presents a completely automated process for the generation, search and selection of streamliner portfolios, producing a substantial reduction in search effort across a diverse range of problems. The results demonstrate a marked improvement in performance for both Chuffed, a CP solver with clause learning, and lingeling, a modern SAT solver. Peer reviewed.
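The effect of a streamliner can be illustrated on a toy backtracking search: restricting choices to a promising subspace (here, even values only, an assumption that is unsound in general, like any streamliner) can cut the explored nodes while still finding a solution:

```python
def search(n, k, target, streamline=False):
    """Pick k distinct values from 1..n summing to target; count search nodes.

    With streamline=True, only even values are tried: like a streamliner
    constraint, this is not guaranteed to be sound (it may rule out every
    solution), but it sharply shrinks the search space.
    """
    stats = {"nodes": 0}

    def go(start, chosen, total):
        stats["nodes"] += 1
        if len(chosen) == k:
            return list(chosen) if total == target else None
        for v in range(start, n + 1):
            if streamline and v % 2:
                continue  # streamliner: even values only
            found = go(v + 1, chosen + [v], total + v)
            if found:
                return found
        return None

    return go(1, [], 0), stats["nodes"]

sol_plain, nodes_plain = search(30, 4, 60)
sol_slim, nodes_slim = search(30, 4, 60, streamline=True)
print(sol_plain, nodes_plain)  # a valid solution, many nodes explored
print(sol_slim, nodes_slim)    # an all-even solution, far fewer nodes
```

A portfolio approach hedges the unsoundness: several candidate streamliners are raced, and the unstreamlined model remains as a fallback.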
Reducing sequencing complexity in dynamical quantum error suppression by Walsh modulation
We study dynamical error suppression from the perspective of reducing
sequencing complexity, in order to facilitate efficient semi-autonomous
quantum-coherent systems. With this aim, we focus on digital sequences where
all interpulse time periods are integer multiples of a minimum clock period and
compatibility with simple digital classical control circuitry is intrinsic,
using so-called Walsh functions as a general mathematical framework. The
Walsh functions are an orthonormal set of basis functions which may be
associated directly with the control propagator for a digital modulation
scheme, and dynamical decoupling (DD) sequences can be derived from the
locations of digital transitions therein. We characterize the suite of the
resulting Walsh dynamical decoupling (WDD) sequences, and identify the number
of periodic square-wave (Rademacher) functions required to generate a Walsh
function as the key determinant of the error-suppressing features of the
relevant WDD sequence. WDD forms a unifying theoretical framework as it
includes a large variety of well-known and novel DD sequences, providing
significant flexibility and performance benefits relative to basic
quasi-periodic design. We also show how Walsh modulation may be employed for
the protection of certain nontrivial logic gates, providing an implementation
of a dynamically corrected gate. Based on these insights we identify Walsh
modulation as a digital-efficient approach for physical-layer error
suppression. Comment: 15 pages, 3 figures
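The construction in the abstract, Walsh functions as products of Rademacher square waves with DD pulses at the digital transitions, can be sketched directly (Paley ordering and the 64-slot clock grid are illustrative choices):

```python
import numpy as np

def rademacher(k, t):
    # R_k(t) = sign(sin(2^k * pi * t)): a square wave on [0, 1).
    return np.sign(np.sin(2.0 ** k * np.pi * t))

def walsh(j, t):
    # Paley-ordered Walsh function: the product of the Rademacher
    # functions selected by the binary digits of j.
    w = np.ones_like(t)
    k = 1
    while j:
        if j & 1:
            w *= rademacher(k, t)
        j >>= 1
        k += 1
    return w

def dd_pulse_times(j, clock_periods=64):
    # DD pulses sit at the digital transitions of the Walsh modulation;
    # sampling at the clock grid keeps all times integer multiples of
    # the minimum clock period.
    t = (np.arange(clock_periods) + 0.5) / clock_periods
    w = walsh(j, t)
    return (np.nonzero(np.diff(w))[0] + 1) / clock_periods

print(dd_pulse_times(1))  # [0.5]       -> spin echo (one pi pulse at T/2)
print(dd_pulse_times(3))  # [0.25 0.75] -> CPMG (pulses at T/4 and 3T/4)
```

The number of Rademacher factors (the Hamming weight of j) is the quantity the paper identifies as governing the error-suppressing order of the resulting WDD sequence.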
A specification-based QoS-aware design framework for service-based applications
Effective and accurate service discovery and composition rely on complete specifications of service behaviour, containing inputs and preconditions that are required before service execution, outputs, effects and ramifications of a
successful execution and explanations for unsuccessful executions. The previously defined Web Service Specification Language (WSSL) relies on the fluent calculus formalism to produce such rich specifications for atomic and composite
services. In this work, we propose further extensions that focus on the specification of QoS profiles, as well as partially observable service states. Additionally, a design framework for service-based applications is implemented
based on WSSL, advancing the state of the art as the first service framework to
simultaneously provide several desirable capabilities: supporting ramifications
and partial observability, as well as non-determinism in composition schemas
using heuristic encodings; providing explanations for unexpected behaviour; and
providing QoS-awareness through goal-based techniques. These capabilities are
illustrated through a comparative evaluation against prominent state-of-the-art
approaches, based on a typical SBA design scenario.
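The flavour of precondition/effect specifications combined with QoS-aware selection can be sketched with toy service records and a greedy forward-chaining composer (all service names, atoms and QoS figures below are invented for illustration; WSSL's fluent-calculus semantics is far richer):

```python
# Toy service records: preconditions and effects as sets of atoms,
# plus a simple QoS profile.
SERVICES = [
    {"name": "geocode",  "pre": {"address"}, "eff": {"coords"},   "qos": {"latency_ms": 120}},
    {"name": "geocode2", "pre": {"address"}, "eff": {"coords"},   "qos": {"latency_ms": 80}},
    {"name": "weather",  "pre": {"coords"},  "eff": {"forecast"}, "qos": {"latency_ms": 200}},
]

def compose(state, goal, services):
    """Greedy forward chaining: repeatedly apply the lowest-latency
    applicable service that adds something new, until the goal holds."""
    plan, state = [], set(state)
    while not goal <= state:
        applicable = [s for s in services
                      if s["pre"] <= state and not s["eff"] <= state]
        if not applicable:
            return None  # goal unreachable from this state
        best = min(applicable, key=lambda s: s["qos"]["latency_ms"])
        plan.append(best["name"])
        state |= best["eff"]
    return plan

print(compose({"address"}, {"forecast"}, SERVICES))  # ['geocode2', 'weather']
```

Even this toy shows why QoS profiles belong in the specification itself: with two functionally equivalent geocoders, only the QoS annotation distinguishes them at composition time.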