On the complexity of exchanging
We analyze the computational complexity of the problem of deciding whether, for a given simple game, there exists the possibility of rearranging the participants in a set of j given losing coalitions into a set of j winning coalitions. We also look at the problem of turning winning coalitions into losing coalitions. We analyze the problem when the simple game is represented by a list of winning, losing, minimal winning or maximal losing coalitions.
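To fix ideas, here is a minimal sketch of the representation the abstract refers to, assuming a simple game given by its list of minimal winning coalitions (a coalition is winning iff it contains some minimal winning coalition); the example game and function names are illustrative, not from the paper.

```python
# Hypothetical sketch: a simple game represented by its minimal winning
# coalitions. A coalition is winning iff it contains some minimal winning
# coalition as a subset; otherwise it is losing.

def is_winning(coalition, minimal_winning):
    """Check whether `coalition` (a set of players) is winning."""
    return any(mw <= coalition for mw in minimal_winning)

# Example: a 3-player majority game, minimal winning coalitions = all pairs.
minimal_winning = [{1, 2}, {1, 3}, {2, 3}]

print(is_winning({1, 2}, minimal_winning))   # True: contains the pair {1, 2}
print(is_winning({3}, minimal_winning))      # False: no pair is a subset
```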
Experimental quantum "Guess my Number" protocol using multiphoton entanglement
We present an experimental demonstration of a modified version of the
entanglement-assisted "Guess my Number" protocol for the reduction of
communication complexity among three separated parties. The results of
experimental measurements imply that the separated parties can compute a
function of distributed inputs by exchanging less classical information than by
using any classical strategy. The results also demonstrate the advantage of
entanglement-enhanced communication, which comes very close to that of full
quantum communication. This advantage derives from the properties of
Greenberger-Horne-Zeilinger states.

Comment: 4 pages, 2 figures
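As a hedged illustration of the resource behind such protocols (not the experiment itself), the following sketch checks the deterministic parity correlations of the three-qubit Greenberger-Horne-Zeilinger state: for |GHZ> = (|000> + |111>)/sqrt(2), the observable X⊗X⊗X has expectation +1 while X⊗Y⊗Y has expectation -1, a pattern no local classical strategy reproduces.

```python
import numpy as np

# Pauli operators.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# |GHZ> = (|000> + |111>) / sqrt(2) as an 8-dimensional state vector.
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def expectation(ops, state):
    """Expectation value <state| op1 (x) op2 (x) op3 |state>."""
    O = np.kron(np.kron(ops[0], ops[1]), ops[2])
    return float(np.real(state.conj() @ O @ state))

print(round(expectation([X, X, X], ghz)))   # 1
print(round(expectation([X, Y, Y], ghz)))   # -1
```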
A task-based metric for telerobotic performance assessment
A methodology is described for developing a task complexity index based on combining the six basic motion primitives (three translation, three orientation) with force control and accuracy requirements. The result of this development is a set of complexity values that can be assigned to the high-level task primitives derived from a relatively shallow top-down mission analysis. These values are then averaged to arrive at total average mission complexities, such as for the mission of exchanging the Hubble Space Telescope (HST) battery modules. Application of this metric to a candidate set of NASA Flight Telerobotic Servicer evaluation tasks is discussed using the HST battery module mission for an in-depth example.
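The averaging idea can be sketched as follows; the weighting scheme, primitive names and numbers below are purely illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a task-complexity index: each high-level task
# primitive gets a complexity value combining its motion degrees of freedom
# (up to 3 translations + 3 rotations) with force-control and accuracy
# weights; mission complexity is the average over its primitives.

def primitive_complexity(translations, rotations, force_control, accuracy):
    dof = translations + rotations          # 0..6 basic motion primitives
    return dof * (1 + force_control) * (1 + accuracy)

# A toy "battery module exchange" mission as a list of task primitives.
mission = [
    primitive_complexity(3, 0, force_control=0.0, accuracy=0.2),  # translate to worksite
    primitive_complexity(1, 2, force_control=0.5, accuracy=0.5),  # align connector
    primitive_complexity(1, 0, force_control=0.8, accuracy=0.8),  # insert module
]
mission_complexity = sum(mission) / len(mission)
print(round(mission_complexity, 2))   # 4.53
```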
Symmetry properties of the Novelli-Pak-Stoyanovskii algorithm
The number of standard Young tableaux of a fixed shape is famously given by
the hook-length formula due to Frame, Robinson and Thrall. A bijective proof of
Novelli, Pak and Stoyanovskii relies on a sorting algorithm akin to
jeu-de-taquin which transforms an arbitrary filling of a partition into a
standard Young tableau by exchanging adjacent entries. Recently, Krattenthaler
and M\"uller defined the complexity of this algorithm as the average number of
performed exchanges, and Neumann and the author proved it fulfils some nice
symmetry properties. In this paper we recall and extend the previous results
and provide new bijective proofs.

Comment: 13 pages, 3 figures; submitted to FPSAC 2014, Chicago
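For context, the hook-length formula mentioned above is easy to state in code: the number of standard Young tableaux of shape lambda is n! divided by the product of all hook lengths of the diagram. A minimal sketch (shape given as a weakly decreasing tuple of row lengths):

```python
from math import factorial

def hook_lengths(shape):
    """Hook length of cell (i, j): arm + leg + 1, for a partition `shape`."""
    cols = [sum(1 for row in shape if row > j) for j in range(shape[0])]
    return [
        (shape[i] - j - 1) + (cols[j] - i - 1) + 1
        for i in range(len(shape))
        for j in range(shape[i])
    ]

def count_syt(shape):
    """Frame-Robinson-Thrall hook-length formula: n! / prod(hooks)."""
    n = sum(shape)
    prod = 1
    for h in hook_lengths(shape):
        prod *= h
    return factorial(n) // prod

print(count_syt((3, 2)))   # 5 standard Young tableaux of shape (3, 2)
print(count_syt((2, 2)))   # 2
```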
Measuring and Evaluating a Design Complexity Metric for XML Schema Documents
The eXtensible Markup Language (XML) has been gaining extraordinary acceptance from many diverse enterprise software companies for their object repositories, data interchange, and development tools. Further, many different domains, organizations and content providers have been publishing and exchanging information via the Internet using XML and standard schemas. Efficient implementation of XML in these domains requires well-designed XML schemas. From this point of view, the design of XML schemas plays an extremely important role in the software development process and needs to be quantified for ease of maintainability. In this paper, an attempt has been made to evaluate the quality of XML schema documents (XSD) written in the W3C XML Schema language. We propose a metric which measures the complexity due to the internal architecture of XSD components and due to recursion. This single metric covers all major factors responsible for the complexity of an XSD. The metric has been empirically and theoretically validated, demonstrated with examples, and supported by comparison with other well-known structure metrics applied to XML schema documents.
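To make the idea of a structural XSD metric concrete, here is a crude illustrative score (not the paper's actual metric): it counts declared components, weighting nested anonymous structures more heavily than top-level ones. The weights and the toy schema are assumptions for demonstration only.

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def xsd_complexity(xsd_text, nested_weight=2):
    """Illustrative structural score: +1 per element/attribute declaration,
    +nested_weight per complexType/sequence/choice below the top level."""
    root = ET.fromstring(xsd_text)
    score = 0
    def walk(node, depth):
        nonlocal score
        for child in node:
            if child.tag in (XS + "element", XS + "attribute"):
                score += 1
            elif child.tag in (XS + "complexType", XS + "sequence", XS + "choice"):
                score += nested_weight if depth > 0 else 1
            walk(child, depth + 1)
    walk(root, 0)
    return score

schema = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="book">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="author" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

print(xsd_complexity(schema))   # 7
```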
Iterative Slepian-Wolf Decoding and FEC Decoding for Compress-and-Forward Systems
While many studies have concentrated on providing theoretical analysis for relay-assisted compress-and-forward (CF) systems, little effort has yet been made toward the construction and evaluation of a practical system. In this paper, a practical CF system incorporating an error-resilient multilevel Slepian-Wolf decoder is introduced, and a novel iterative processing structure is proposed which allows information exchange between the Slepian-Wolf decoder and the forward error correction decoder of the main source message. In addition, a new quantization scheme is incorporated to avoid the complexity of reconstructing the relay signal at the final decoder of the destination. The results demonstrate that the iterative structure not only reduces the decoding loss of the Slepian-Wolf decoder but also improves the decoding performance of the main message from the source.
Supply chain risks: an automotive case study
The supply chain is a complex system exchanging information, goods, material and money within enterprises, as well as between enterprises within the value chain. Effective supply chain management contributes to large corporate profits and is therefore a valid path to reinforcing an enterprise's competitiveness. However, the supply chain is exposed to undesirable influences both from the outside environment and from the entities in the chain. Moreover, industrial trends towards lean production, increasing outsourcing, globalisation and reliance on supply network capabilities and innovations increase the complexity of the supply chain. Therefore, managers need to identify and manage risks, as well as opportunities, from a more diverse range of sources and contexts. This paper contributes to identifying and categorising supply chain risks based on a literature study and an automotive manufacturer's viewpoint. The empirical results indicate suppliers and raw material prices as the major internal and external potential risks.
An Assurance Framework for Independent Co-assurance of Safety and Security
Integrated safety and security assurance for complex systems is difficult for
many technical and socio-technical reasons such as mismatched processes,
inadequate information, differing use of language and philosophies, etc. Many
co-assurance techniques rely on disregarding some of these challenges in order
to present a unified methodology. Even with this simplification, no methodology
has been widely adopted, primarily because this approach is unrealistic when met
with the complexity of real-world system development.
This paper presents an alternate approach by providing a Safety-Security
Assurance Framework (SSAF) based on a core set of assurance principles. This is
done so that safety and security can be co-assured independently, as opposed to
unified co-assurance which has been shown to have significant drawbacks. This
also allows for separate processes and expertise from practitioners in each
domain. With this structure, the focus is shifted from simplified unification
to integration through exchanging the correct information at the right time
using synchronisation activities.
LDPC Code Design for Noncoherent Physical Layer Network Coding
This work considers optimizing LDPC codes in the physical-layer network coded
two-way relay channel using noncoherent FSK modulation. The error-rate
performance of channel decoding at the relay node during the multiple-access
phase was improved through EXIT-based optimization of Tanner graph variable
node degree distributions. Codes drawn from the DVB-S2 and WiMAX standards were
used as a basis for design and performance comparison. The computational
complexity characteristics of the standard codes were preserved in the
optimized codes by maintaining the extended irregular repeat-accumulate (eIRA) structure.
The relay receiver performance was optimized considering two modulation orders
M = {4, 8} using iterative decoding in which the decoder and demodulator refine
channel estimates by exchanging information. The code optimization procedure
yielded unique optimized codes for each case of modulation order and available
channel state information. Performance of the standard and optimized codes was
measured using Monte Carlo simulation in the flat Rayleigh fading channel, and
error-rate improvements of up to 1.2 dB are demonstrated, depending on system
parameters.

Comment: Six pages, submitted to the 2015 IEEE International Conference on Communications
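As background to the degree-distribution optimization mentioned above, the design rate of an irregular LDPC ensemble follows directly from its edge-perspective degree distributions lambda(x) and rho(x): R = 1 - (sum_j rho_j / j) / (sum_i lambda_i / i). A minimal sketch with made-up distributions (not the DVB-S2 or WiMAX ones):

```python
def design_rate(lam, rho):
    """Design rate from edge-perspective degree distributions.
    lam, rho: dicts mapping node degree -> edge fraction (each sums to 1)."""
    v = sum(frac / d for d, frac in lam.items())   # integral of lambda(x)
    c = sum(frac / d for d, frac in rho.items())   # integral of rho(x)
    return 1 - c / v

lam = {2: 0.3, 3: 0.4, 6: 0.3}   # variable-node edge degree distribution
rho = {6: 1.0}                   # regular degree-6 check nodes

print(round(design_rate(lam, rho), 4))   # 0.5
```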