
    Probing Contact Interactions at High Energy Lepton Colliders

    Fermion compositeness and other new physics can be signalled by the presence of a strong four-fermion contact interaction. Here we present a study of ℓℓqq and ℓℓℓ′ℓ′ contact interactions using the reactions ℓ⁺ℓ⁻ → ℓ′⁺ℓ′⁻, bb̄, cc̄ at future e⁺e⁻ linear colliders with √s = 0.5–5 TeV and μ⁺μ⁻ colliders with √s = 0.5, 4 TeV. We find that very large compositeness scales can be probed at these machines and that the use of polarized beams can unravel their underlying helicity structure.
    Comment: 12 pg, to appear in the Proceedings of the 1996 DPF/DPB Summer Study on New Directions for High Energy Physics - Snowmass96, Snowmass, CO, 25 June - 12 July, 199
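The abstract does not spell out the parametrization, but the conventional form for such four-fermion contact terms (the standard Eichten-Lane-Peskin convention, stated here as general background rather than taken from the paper) is:

```latex
\mathcal{L}_{\mathrm{CI}}
  = \frac{g^2}{\Lambda^2} \sum_{i,j = L,R}
    \eta_{ij}\,\bigl(\bar{f}_i \gamma^\mu f_i\bigr)
              \bigl(\bar{f}'_j \gamma_\mu f'_j\bigr),
\qquad \frac{g^2}{4\pi} = 1,
\qquad \eta_{ij} = \pm 1 \ \text{or}\ 0 .
```

Here Λ is the compositeness scale being probed, and the signs η_LL, η_RR, η_LR, η_RL select the helicity combinations that polarized beams can disentangle.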

    Frame synchronization methods based on channel symbol measurements

    The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and a minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.
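As a toy illustration of the marker-correlation idea (not the DSN algorithm; the marker pattern, frame length, and noise level below are invented for the example), one can accumulate the marker correlation at every candidate offset across several frames and pick the maximum:

```python
import random

def correlate(stream, marker, offset):
    """Number of positions where the stream agrees with the marker."""
    return sum(stream[offset + i] == m for i, m in enumerate(marker))

def acquire_sync(stream, marker, frame_len):
    """Accumulate marker correlation at each candidate offset across all
    full frames in the stream; return the offset with the highest score."""
    n_frames = (len(stream) - len(marker)) // frame_len
    scores = {
        off: sum(correlate(stream, marker, off + k * frame_len)
                 for k in range(n_frames))
        for off in range(frame_len)
    }
    return max(scores, key=scores.get)

# Hypothetical setup: 16-bit marker, 64-bit frames, 2% channel bit errors.
random.seed(1)
marker = [1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
frame_len, true_offset, n_frames = 64, 23, 8
stream = [random.randint(0, 1) for _ in range(true_offset)]
for _ in range(n_frames):
    stream += marker
    stream += [random.randint(0, 1) for _ in range(frame_len - len(marker))]
noisy = [b ^ (random.random() < 0.02) for b in stream]  # flip ~2% of bits

print(acquire_sync(noisy, marker, frame_len))
```

Averaging the statistic over many frames is what trades acquisition timeliness against miss/false-alarm probability, which is the design space the abstract describes.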

    Node synchronization schemes for the Big Viterbi Decoder

    The Big Viterbi Decoder (BVD), currently under development for the DSN, includes three separate algorithms to acquire and maintain node and frame synchronization. The first measures the number of decoded bits between two consecutive renormalization operations (renorm rate), the second detects the presence of the frame marker in the decoded bit stream (bit correlation), while the third searches for an encoded version of the frame marker in the encoded input stream (symbol correlation). A detailed account of the operation of the three methods is given, together with a performance comparison.

    Performance of Galileo's concatenated codes with nonideal interleaving

    The Galileo spacecraft employs concatenated coding schemes with Reed-Solomon interleaving depth 2. The bit error rate (BER) performance of Galileo's concatenated codes is compared for different interleaving depths, including infinite depth (ideal interleaving). It is observed that Galileo's depth 2 interleaving, when used with the experimental (15, 1/4) code, requires about 0.4 to 0.5 dB additional signal-to-noise ratio to achieve the same BER performance as the concatenated code with ideal interleaving. When used with the standard (7, 1/2) code, depth 2 interleaving requires about 0.2 dB more signal-to-noise ratio than ideal interleaving.
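The benefit of interleaving can be seen with a toy burst-error count (the (255, 223) code parameters and burst length below are illustrative, not Galileo's exact configuration): depth-d symbol interleaving distributes consecutive channel symbols round-robin over d Reed-Solomon codewords, so a burst longer than one code's correction capability t can be split into per-codeword shares that each stay at or below t:

```python
def errors_per_codeword(burst_positions, depth):
    """With depth-d symbol interleaving, channel symbol p belongs to
    codeword p % d; count how many burst symbols hit each codeword."""
    counts = [0] * depth
    for p in burst_positions:
        counts[p % depth] += 1
    return counts

t = 16                          # a (255, 223) RS code corrects 16 symbol errors
burst = list(range(100, 120))   # a 20-symbol channel burst

depth1 = errors_per_codeword(burst, 1)
depth2 = errors_per_codeword(burst, 2)
print(depth1, depth2)  # depth 1 overwhelms one codeword; depth 2 splits the burst
```

With no interleaving the 20-symbol burst exceeds t = 16 and the codeword is lost; at depth 2 each codeword sees only 10 errors and both decode. The residual 0.2-0.5 dB gap quoted in the abstract comes from bursts that depth 2 still cannot split finely enough.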

    Detection of the Horizontal Divergent Flow prior to the Solar Flux Emergence

    It is widely accepted that solar active regions, including sunspots, are formed by magnetic flux emerging from the deep convection zone. In previous numerical simulations, we found that a horizontal divergent flow (HDF) occurs before the flux emergence at the photospheric height. This paper reports the detection of the HDF prior to the flux emergence of NOAA AR 11081, which is located away from the disk center. We use SDO/HMI data to study the temporal changes of the Doppler and magnetic patterns relative to those of the reference quiet Sun. We find that the HDF appears about 100 minutes before the flux emergence. The horizontal speed of the HDF during this time gap is estimated to be 0.6 to 1.5 km s^-1, up to 2.3 km s^-1. The HDF is caused by plasma escaping horizontally from the rising magnetic flux, and the interval between the HDF and the flux emergence may reflect the latency during which the magnetic flux beneath the solar surface awaits the onset of the instability that leads to further emergence. Moreover, SMART Halpha images show that the chromospheric plages appear about 14 min later, co-spatial with the photospheric pores. This indicates that the plages are caused by plasma flowing down along the magnetic fields that connect the pores at their footpoints. One importance of observing the HDF may be the possibility of predicting sunspot appearances several hours in advance.
    Comment: 32 pages, 8 figures, 3 tables, accepted for publication in Ap
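The lead-time measurement reduces to comparing two threshold-crossing onset times, one for the horizontal flow speed and one for the magnetic flux. A minimal sketch on synthetic series (all numbers below are invented for illustration; they are not the SDO/HMI measurements):

```python
def onset_time(times, values, threshold):
    """Return the first time at which `values` exceeds `threshold`, or None."""
    for t, v in zip(times, values):
        if v > threshold:
            return t
    return None

# Synthetic time series (minutes): horizontal flow speed (km/s) and
# unsigned magnetic flux (arbitrary units) around an emergence event.
times = list(range(0, 300, 10))
speed = [0.1] * 10 + [0.8, 1.2, 1.5, 1.4, 1.0] + [0.9] * 15
flux  = [1.0] * 20 + [1.5, 3.0, 6.0, 9.0, 12.0] + [14.0] * 5

hdf_onset  = onset_time(times, speed, 0.5)  # speed above quiet-Sun level
flux_onset = onset_time(times, flux, 2.0)   # flux above background
print(flux_onset - hdf_onset)               # lead time in minutes
```

The actual analysis compares full Doppler and magnetogram maps against a quiet-Sun reference rather than single time series, but the onset-gap logic is the same.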

    Fast transform decoding of nonsystematic Reed-Solomon codes

    A Reed-Solomon (RS) code is considered to be a special case of a redundant residue polynomial (RRP) code, and a fast transform decoding algorithm to correct both errors and erasures is presented. This decoding scheme is an improvement of the decoding algorithm for the RRP code suggested by Shiozaki and Nishida, and can be realized readily on very-large-scale integration (VLSI) chips.
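The polynomial-evaluation view behind the RRP framing can be illustrated with a toy erasure decoder: a length-k message defines a degree-less-than-k polynomial, the codeword is its evaluations at n points, and any k surviving symbols determine the polynomial. This sketch uses plain Lagrange interpolation over the prime field GF(257) for brevity, not the paper's fast-transform algorithm over GF(2^m):

```python
P = 257  # toy prime field; practical RS codes work over GF(2^m)

def poly_eval(coeffs, x):
    """Horner evaluation of a polynomial mod P (coeffs[i] multiplies x**i)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def encode(message, n):
    """Codeword = evaluations of the message polynomial at points 1..n."""
    return [poly_eval(message, x) for x in range(1, n + 1)]

def lagrange_eval(points, x0):
    """Evaluate at x0 the unique degree<k polynomial through k points, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def decode_erasures(received, k):
    """Fill in erased symbols (None) from any k surviving positions."""
    known = [(x + 1, y) for x, y in enumerate(received) if y is not None][:k]
    return [y if y is not None else lagrange_eval(known, x + 1)
            for x, y in enumerate(received)]

# (n, k) = (7, 3): any 3 of the 7 symbols determine the codeword,
# so up to n - k = 4 erasures are correctable.
codeword = encode([42, 7, 19], 7)
received = [s if i not in (1, 4, 6) else None for i, s in enumerate(codeword)]
print(decode_erasures(received, 3) == codeword)
```

Handling unknown error locations (not just erasures) requires the syndrome machinery that the transform algorithm accelerates; this sketch shows only the erasure half of the problem.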

    Compressed/reconstructed test images for CRAF/Cassini

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near-lossless, high-compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
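The transform-quantize-reconstruct loop can be sketched on a single 8x8 block (an orthonormal DCT with a uniform quantizer; the project's actual quantizers and entropy coders are not reproduced here, and the test block is made up):

```python
import math

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    return [[(math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n))
             * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
             for i in range(n)] for k in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(r) for r in zip(*a)]

def rmse(a, b):
    n = sum(len(r) for r in a)
    return math.sqrt(sum((x - y) ** 2 for ra, rb in zip(a, b)
                         for x, y in zip(ra, rb)) / n)

C = dct_matrix(8)
Ct = transpose(C)

# Hypothetical smooth 8x8 block of 8-bit gray levels (a diagonal ramp).
block = [[16 * j + i for j in range(8)] for i in range(8)]

coeffs = matmul(matmul(C, block), Ct)       # forward 2-D DCT
q = 8                                       # uniform quantizer step
quantized = [[round(c / q) for c in row] for row in coeffs]
restored = [[c * q for c in row] for row in quantized]
recon = matmul(matmul(Ct, restored), C)     # inverse 2-D DCT

kept = sum(1 for row in quantized for c in row if c != 0)
print(kept, round(rmse(block, recon), 2))
```

On a smooth block most quantized coefficients are zero, which is what the entropy coder then exploits; the reconstruction error stays within a couple of gray levels, consistent with the quality range quoted in the abstract.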