Disjoint difference families and their applications
Difference sets and their generalisations to difference families arise from the study of designs and many other applications. Here we give a brief survey of some of these applications, noting in particular the diverse definitions of difference families and the variations in priorities in constructions. We propose a definition of disjoint difference families that encompasses these variations and allows a comparison of the similarities and disparities. We then focus on two constructions of disjoint difference families arising from frequency hopping sequences and show that they are in fact the same. We conclude with a discussion of the notion of equivalence for frequency hopping sequences and for disjoint difference families.
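The defining property of a difference set can be checked mechanically. The sketch below is our own illustration, not taken from the survey: it verifies that D = {1, 2, 4} is a (7, 3, 1)-difference set in Z_7, i.e. every nonzero residue occurs exactly once (lambda = 1) among the pairwise differences.

```python
# Illustrative example (the set and parameters are our assumptions):
# check that D = {1, 2, 4} is a (7, 3, 1)-difference set in Z_7.
from collections import Counter

def difference_multiset(D, n):
    """Multiset of differences d1 - d2 (mod n) over distinct d1, d2 in D."""
    return Counter((d1 - d2) % n for d1 in D for d2 in D if d1 != d2)

D, n = {1, 2, 4}, 7
diffs = difference_multiset(D, n)
# Each of the 6 nonzero residues mod 7 appears exactly lambda = 1 time.
assert all(diffs[r] == 1 for r in range(1, n))
```

A difference family generalises this by taking several base blocks whose difference multisets jointly cover the nonzero residues the required number of times.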
The NASA computer science research program plan
A taxonomy of computer science is included, and the state of the art of each of the major computer science categories is summarized. A functional breakdown of NASA programs under Aeronautics R and D, space R and T, and institutional support is also included. These areas were assessed against the computer science categories. Concurrent processing, highly reliable computing, and information management are identified.
Postevent information and the impairment of eyewitness memory: a methodological examination
Recent work in the cognitive psychology of memory suggests that misleading information may permanently alter memory for an event. This work, which takes much of its impetus from the prospect of applying itself to the legal question of eyewitness evidence, has recently come under severe criticism. McCloskey & Zaragoza (1985a, 1985b) provide evidence to suggest that the experimental design used by almost all relevant studies is seriously flawed, and that results which appear to indicate the deleterious effect of misinformation on memory are artifactual. An analysis of the misinformation paradigm is presented here, with particular attention being paid to the claim of artifactuality. Two lines of approach are adopted in the analysis. In the first, the misinformation paradigm is assessed for its theoretical basis. The notion of 'application' that informs the paradigm is subjected to conceptual scrutiny, and the body of research that constitutes the paradigm is reviewed in terms of its applied orientation. In the second line of approach, the claim of artifactuality is investigated directly. Three methods are devised to test the claim of artifactuality. In two of these, post-hoc analyses are performed, one of which suggests that the claim of artifactuality is incorrect in at least some respects. The third method is constituted by an experiment which submits the claim of artifactuality to exhaustive empirical test. The results of the experiment support the claim that findings of memorial alteration are artifactual. The two lines of approach are united by showing how the experimental work developed out of the applied basis of the paradigm. It is argued that the inadequacies in the experimental design reflect the impoverished theoretical basis of the research. It is further argued that the question regarding the effect that false information has on memory for an event is one that is still eminently worth pursuing.
A few preliminary remarks are made regarding applied considerations relevant to this pursuit
Algorithm Based Fault Tolerance in Massively Parallel Systems
A complex computer system consists of billions of transistors, miles of wires, and many interactions with an unpredictable environment. Correct results must be produced despite faults that dynamically occur in some of these components. Many techniques have been developed for fault-tolerant computation. General-purpose methods are independent of the application, yet incur an overhead cost which may be unacceptable for massively parallel systems. Algorithm-specific methods, which can operate at lower cost, are a developing alternative [1, 72]. This paper first reviews the general-purpose approach and then focuses on the algorithm-specific method, with an eye toward massively parallel processors. Algorithm-based fault tolerance has the attraction of low overhead; furthermore, it addresses both the detection and the correction problems. The principle is to build low-cost checking and correcting mechanisms based exclusively on the redundancies inherent in the system.
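A concrete instance of algorithm-based fault tolerance is the checksum scheme for matrix multiplication in the style of Huang and Abraham: appending a column-checksum row to A and a row-checksum column to B yields a product whose checksums locate a single corrupted entry. The matrices and the injected fault below are illustrative assumptions, not from the paper.

```python
# Minimal ABFT sketch (Huang & Abraham-style checksums); values are
# illustrative assumptions.
import numpy as np

def checksum_product(A, B):
    Ac = np.vstack([A, A.sum(axis=0)])                  # column-checksum matrix
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row-checksum matrix
    return Ac @ Br                                      # full-checksum product

def locate_fault(Cf):
    """Return (row, col) of a single corrupted entry, or None if consistent."""
    body = Cf[:-1, :-1]
    row_err = body.sum(axis=1) - Cf[:-1, -1]            # row checksum residuals
    col_err = body.sum(axis=0) - Cf[-1, :-1]            # column checksum residuals
    rows, cols = np.nonzero(row_err)[0], np.nonzero(col_err)[0]
    if len(rows) == 0 and len(cols) == 0:
        return None
    return rows[0], cols[0]

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
Cf = checksum_product(A, B)
Cf[0, 1] += 9.0                                         # inject a single fault
r, c = locate_fault(Cf)
Cf[r, c] -= Cf[:-1, c].sum() - Cf[-1, c]                # correct via column checksum
```

The only overhead is one extra row and column per operand, which is what makes the approach attractive for massively parallel arrays.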
SInCom 2015
2nd Baden-Württemberg Center of Applied Research Symposium on Information and Communication Systems, SInCom 2015, 13 November 2015 in Konstanz
A Survey on Quantum Channel Capacities
Quantum information processing exploits the quantum nature of information. It offers fundamentally new solutions in the field of computer science and extends the possibilities to a level that cannot be imagined in classical communication systems. For quantum communication channels, many new capacity definitions have been developed in comparison to their classical counterparts. A quantum channel can be used to realize classical information transmission or to deliver quantum information, such as quantum entanglement. Here we review the properties of the quantum communication channel, the various capacity measures and the fundamental differences between the classical and quantum channels.
Comment: 58 pages; Journal-ref: IEEE Communications Surveys and Tutorials (2018); updated and improved version of arXiv:1208.1270
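One capacity-related quantity that separates quantum channels from classical ones is the Holevo quantity chi = S(rho) - sum_i p_i S(rho_i), which upper-bounds the classical information extractable from a quantum ensemble. The sketch below is our own numerical illustration (the ensemble {|0>, |+>} with equal priors is an assumption chosen for simplicity, not from the survey):

```python
# Holevo quantity for an illustrative two-state ensemble (our assumption).
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits; zero eigenvalues contribute nothing."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

ket0 = np.array([1.0, 0.0])
ketplus = np.array([1.0, 1.0]) / np.sqrt(2)
states = [np.outer(ket0, ket0), np.outer(ketplus, ketplus)]
probs = [0.5, 0.5]

rho = sum(p * s for p, s in zip(probs, states))   # average state
chi = von_neumann_entropy(rho) - sum(
    p * von_neumann_entropy(s) for p, s in zip(probs, states)
)
print(round(chi, 3))  # ≈ 0.601
```

Although the two states encode one classical bit each, their non-orthogonality caps the extractable information at about 0.601 bits, a restriction with no classical analogue.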
Quantum cryptography: key distribution and beyond
Uniquely among the sciences, quantum cryptography has driven both foundational research and practical real-life applications. We review the progress of quantum cryptography in the last decade, covering quantum key distribution and other applications.
Comment: a review of quantum cryptography, not restricted to QKD
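The central protocol of quantum key distribution, BB84, can be sketched classically for the noiseless, eavesdropper-free case: Alice sends bits in random bases, Bob measures in random bases, and the two keep only the positions where the bases agree. Everything below (names, the coin-flip model for mismatched bases) is a simplifying assumption for illustration, not a faithful quantum simulation.

```python
# Toy BB84 sifting sketch: ideal channel, no eavesdropper (assumptions).
import random

def bb84_sift(n, seed=0):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]   # 0 = Z basis, 1 = X basis
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]
    # Matching bases reproduce Alice's bit exactly; mismatched bases give
    # an unbiased coin flip (the quantum measurement statistics).
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Sifting: publicly compare bases, keep only the agreeing positions.
    key_a = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_b = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    return key_a, key_b

ka, kb = bb84_sift(1000)
assert ka == kb     # identical sifted keys on a noiseless channel
```

On a real channel the two sifted keys differ, and the observed error rate is what reveals an eavesdropper; error correction and privacy amplification then follow.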
Combining Error-Correcting Codes and Decision Diagrams for the Design of Fault-Tolerant Logic
In modern logic circuits, fault-tolerance is increasingly important, since even atomic-scale imperfections can result in circuit failures as the size of the components is shrinking. Therefore, in addition to existing techniques for providing fault-tolerance to logic circuits, it is important to develop new techniques for detecting and correcting possible errors resulting from faults in the circuitry.
Error-correcting codes are typically used in data transmission for error detection and correction. Their theory is far developed, and linear codes, in particular, have many useful properties and fast decoding algorithms. The existing fault-tolerance techniques utilizing error-correcting codes require less redundancy than other error detection and correction schemes, and such techniques are usually implemented using special decoding circuits.
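As a concrete example of a linear code with a fast decoder, the Hamming(7,4) code corrects any single bit flip by syndrome decoding: the syndrome of a corrupted word equals the parity-check column of the flipped position. The matrices follow the standard textbook construction; the data word and injected fault are our illustrative assumptions.

```python
# Hamming(7,4) syndrome decoding sketch (standard construction; example
# values are our assumptions).
import numpy as np

# Generator (4 data bits -> 7 code bits) and parity-check matrix over GF(2).
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(data):
    return data @ G % 2

def correct(word):
    syndrome = H @ word % 2
    if syndrome.any():                       # nonzero syndrome: find the flip
        col = np.where((H.T == syndrome).all(axis=1))[0][0]
        word = word.copy()
        word[col] ^= 1                       # flip it back
    return word

data = np.array([1, 0, 1, 1])
code = encode(data)
code[2] ^= 1                                 # inject a single-bit fault
assert (correct(code)[:4] == data).all()     # data bits recovered
```

Because decoding is a matrix-vector product plus a table lookup, such codes are cheap enough to embed directly in hardware checking logic.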
Decision diagrams are an efficient graphical representation for logic functions which, depending on the technology, directly determines the complexity and layout of the resulting circuit, making them straightforward to implement.
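The basic construction behind such diagrams can be sketched in a few lines: Shannon-expand the function over a fixed variable order while sharing identical subgraphs and skipping redundant tests, which yields a reduced ordered BDD. This is our own minimal illustration (the thesis's diagram construction is more elaborate); the 3-input majority function is an assumed example.

```python
# Minimal reduced-ordered-BDD sketch (our illustration; example function
# and names are assumptions).
TRUE, FALSE = ('leaf', 1), ('leaf', 0)
unique = {}                                   # unique table: shares subgraphs

def node(var, low, high):
    if low == high:                           # redundant test: skip the node
        return low
    return unique.setdefault((var, low, high), ('node', var, low, high))

def build(f, order, env=()):
    """Shannon-expand f over the remaining variables in `order`."""
    if not order:
        return TRUE if f(dict(env)) else FALSE
    v, rest = order[0], order[1:]
    low  = build(f, rest, env + ((v, 0),))
    high = build(f, rest, env + ((v, 1),))
    return node(v, low, high)

maj = lambda e: (e['x'] + e['y'] + e['z']) >= 2
bdd = build(maj, ('x', 'y', 'z'))

def evaluate(n, env):
    """Follow one root-to-leaf path; cost is at most one test per variable."""
    while n[0] == 'node':
        _, var, low, high = n
        n = high if env[var] else low
    return n[1]

assert evaluate(bdd, {'x': 1, 'y': 0, 'z': 1}) == 1
```

Evaluation traverses a single path from root to leaf, which is what maps so directly onto multiplexer-based circuit layouts.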
In this thesis, error-correcting codes are combined with decision diagrams to obtain a new method for providing fault-tolerance in logic circuits. The resulting method of designing fault-tolerant logic, namely error-correcting decision diagrams, introduces redundancy already at the level of the representations of logic functions, and as a consequence no additional checker circuits are needed in the circuit layouts obtained with the new method. The purpose of the thesis is to introduce this original concept and provide fault-tolerance analysis for the obtained decision diagrams.
The fault-tolerance analysis of error-correcting decision diagrams carried out in this thesis shows that the obtained robust diagrams have a significantly reduced probability of an incorrect output in comparison with non-redundant diagrams. However, such useful properties are not obtained without a cost, since adding redundancy also adds complexity; consequently, better error-correcting properties result in increased complexity in the circuit layout.