
    A solenoidal electron spectrometer for a precision measurement of the neutron β\beta-asymmetry with ultracold neutrons

    We describe an electron spectrometer designed for a precision measurement of the neutron β-asymmetry with spin-polarized ultracold neutrons. The spectrometer consists of a 1.0-Tesla solenoidal field with two identical multiwire proportional chamber and plastic scintillator electron detector packages situated within 0.6-Tesla field-expansion regions. Selected results from performance studies of the spectrometer with calibration sources are reported.
    Comment: 30 pages, 19 figures, 1 table; submitted to NIM

    Precision bounds for noisy nonlinear quantum metrology

    We derive the ultimate bounds on the performance of nonlinear measurement schemes in the presence of noise. In particular, we investigate the precision of the second-order estimation scheme in the presence of the two most detrimental types of noise: photon loss and phase diffusion. We find that the second-order estimation scheme is affected by both types of noise in a manner analogous to the linear one. Moreover, we observe that for both types of noise the gain in phase sensitivity with respect to the linear estimation scheme is given by a multiplicative term O(1/N). Interestingly, we also find that under certain circumstances, a careful engineering of the environment can, in principle, improve the performance of measurement schemes affected by phase diffusion.
    Comment: 9 pages, 2 figures, 1 table, 1 appendix; v3 contains an improved analysis and a stronger precision bound for the case of photon loss; published version
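The multiplicative O(1/N) gain quoted in the abstract is consistent with the standard scaling argument for nonlinear metrology, sketched here under the usual assumption that a k-th-order probe Hamiltonian yields a phase sensitivity scaling as N^{-(k-1/2)}:

```latex
% Linear (k = 1) scheme: shot-noise-limited sensitivity
\Delta\phi_{\mathrm{lin}} \sim \frac{1}{\sqrt{N}},
% Second-order (k = 2) nonlinear scheme
\qquad
\Delta\phi^{(2)}_{\mathrm{nl}} \sim \frac{1}{N^{3/2}},
% Relative gain of the nonlinear over the linear scheme
\qquad
\frac{\Delta\phi^{(2)}_{\mathrm{nl}}}{\Delta\phi_{\mathrm{lin}}}
  = \mathcal{O}\!\left(\frac{1}{N}\right).
```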

    New Method of Measuring TCP Performance of IP Network using Bio-computing

    The performance of an Internet Protocol (IP) network can be measured via the Transmission Control Protocol (TCP), because TCP guarantees that data sent from one end of a connection actually reaches the other end, in the order it was sent; otherwise an error is reported. Several methods exist for measuring TCP performance, among them genetic algorithms, neural networks and data mining, but all of these methods have weaknesses and cannot measure TCP performance exactly. This paper proposes a new method of measuring TCP performance for a real-time IP network using bio-computing, in particular molecular calculation, because it provides reliable results and can exploit the facilities of phylogenetic analysis. The new method is applied in real time to a Biological Kurdish Messenger (BIOKM) model designed to measure TCP performance under two protocols: File Transfer Protocol (FTP) and Internet Relay Chat Daemon (IRCD). This application gives a TCP performance very close to that obtained from Little's law using the same model (BIOKM); i.e., the utilization (busy time, or traffic intensity) and idle time obtained from the new bio-computing method differ from those given by Little's law by (nearly) 0.13%. KEYWORDS: Bio-computing, TCP performance, Phylogenetic tree, Hybridized Model (Normalized), FTP, IRCD
    Comment: 17 pages, 10 figures, 5 tables
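The Little's-law baseline that the paper compares against can be sketched as follows. This is a minimal illustration of the standard queueing relations, not the paper's BIOKM model; the rates used are hypothetical.

```python
# Little's-law baseline for a single-server queue: utilization (busy
# fraction) and idle fraction follow from the arrival rate lam and the
# service rate mu; the mean number in the system follows L = lam * W.

def utilization(lam: float, mu: float) -> float:
    """Fraction of time the server is busy (rho = lam / mu)."""
    if lam >= mu:
        raise ValueError("queue is unstable when lam >= mu")
    return lam / mu

def idle_fraction(lam: float, mu: float) -> float:
    """Fraction of time the server is idle (1 - rho)."""
    return 1.0 - utilization(lam, mu)

def mean_in_system(lam: float, mean_wait: float) -> float:
    """Little's law: L = lam * W (W = mean time in system)."""
    return lam * mean_wait

if __name__ == "__main__":
    lam, mu = 80.0, 100.0  # illustrative packet rates, not from the paper
    print(f"utilization: {utilization(lam, mu):.2%}")
    print(f"idle time:   {idle_fraction(lam, mu):.2%}")
    print(f"L = {mean_in_system(lam, 0.05):.1f} packets in system")
```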

    Compressive Sensing for Spread Spectrum Receivers

    With the advent of ubiquitous computing, two design parameters of wireless communication devices become very important: power efficiency and production cost. Compressive sensing enables the receiver in such devices to sample below the Shannon-Nyquist sampling rate, which may lead to a decrease in both design parameters. This paper investigates the use of Compressive Sensing (CS) in a general Code Division Multiple Access (CDMA) receiver. We show that when using spread spectrum codes in the signal domain, the CS measurement matrix may be simplified. This measurement scheme, named Compressive Spread Spectrum (CSS), allows for a simple, effective receiver design. Furthermore, we numerically evaluate the proposed receiver in terms of bit error rate under different signal-to-noise ratio conditions and compare it with other receiver structures. These numerical experiments show that although the bit error rate performance is degraded by the subsampling in CS-enabled receivers, this may be remedied by including quantization in the receiver model. We also study the computational complexity of the proposed receiver design under different sparsity and measurement ratios. Our work shows that it is possible to subsample a CDMA signal using CSS and that in one example the CSS receiver outperforms the classical receiver.
    Comment: 11 pages, 11 figures, 1 table; accepted for publication in IEEE Transactions on Wireless Communications
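The subsampling idea can be illustrated with a toy example. This is a minimal sketch of compressive subsampling of a spread signal, not the paper's CSS receiver: one bit is spread by a ±1 code of length N, the receiver keeps only M << N randomly chosen chips, and the bit is detected by correlating those chips with the matching entries of the code. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 128, 32  # chips per bit, and measurements kept after subsampling

code = rng.choice([-1.0, 1.0], size=N)              # known spreading code
bit = -1.0
signal = bit * code + 0.3 * rng.standard_normal(N)  # noisy chip stream

# Compressive step: keep M randomly selected chips instead of all N.
idx = rng.choice(N, size=M, replace=False)
measured = signal[idx]

# Despread using only the retained chips; the correlation's sign gives
# the bit decision, at a quarter of the full sampling effort.
decision = np.sign(measured @ code[idx])
print("detected bit:", decision)
```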

    Bell's inequality violation with spins in silicon

    Bell's theorem sets a boundary between the classical and quantum realms by providing a strict proof of the existence of entangled quantum states with no classical counterpart. An experimental violation of Bell's inequality demands simultaneously high fidelities in the preparation, manipulation and measurement of multipartite quantum entangled states. For this reason the Bell signal has been tagged as a single-number benchmark for the performance of quantum computing devices. Here we demonstrate deterministic, on-demand generation of two-qubit entangled states of the electron and the nuclear spin of a single phosphorus atom embedded in a silicon nanoelectronic device. By sequentially reading the electron and the nucleus, we show that these entangled states violate the Bell/CHSH inequality with a Bell signal of 2.50(10). An even higher value of 2.70(9) is obtained by mapping the parity of the two-qubit state onto the nuclear spin, which allows for high-fidelity quantum non-demolition (QND) measurement of the parity. Furthermore, we complement the Bell-inequality entanglement witness with full two-qubit state tomography exploiting QND measurement, which reveals that our prepared states match the target maximally entangled Bell states with >96% fidelity. These experiments demonstrate complete control of the two-qubit Hilbert space of a phosphorus atom, and show that this system is able to maintain its simultaneously high initialization, manipulation and measurement fidelities past the single-qubit regime.
    Comment: 10 pages, 3 figures, 1 table, 4 extended data figures
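For context on the quoted Bell signals of 2.50(10) and 2.70(9): the CHSH quantity for an ideal maximally entangled pair reaches 2√2 ≈ 2.83, against a classical bound of 2. The sketch below (an illustration of the standard CHSH construction, not the paper's data) uses the textbook correlator E = −cos(a − b) and the canonical analyser angles.

```python
import math

def E(a: float, b: float) -> float:
    """Correlator for a maximally entangled pair at analyser angles a, b."""
    return -math.cos(a - b)

# Canonical CHSH angle choices that maximise the violation.
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
print(f"S = {S:.3f} (classical bound 2, Tsirelson bound {2*math.sqrt(2):.3f})")
```

Any measured S above 2 certifies entanglement; the paper's 2.70(9) sits close to the Tsirelson bound computed here.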

    Low latency vision-based control for robotics : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Mechatronics at Massey University, Manawatu, New Zealand

    In this work, the problem of controlling a high-speed dynamic tracking and interception system using computer vision as the measurement unit was explored. High-speed control systems alone present many challenges, and these challenges are compounded when combined with the high volume of data processing required by computer vision systems. A semi-automated foosball table was chosen as the test-bed system because it combines all the challenges associated with a vision-based control system into a single platform. While computer vision is extremely useful and can solve many problems, it can also introduce problems such as latency, the need for lens and spatial calibration, potentially high power consumption, and high cost. The objective of this work was to explore how to implement computer vision as the measurement unit in a high-speed controller while minimising latencies caused by the vision itself, communication interfaces, data processing/strategy, instruction execution, and actuator control. A further objective was to implement the solution in a single low-latency, low-power, low-cost embedded system. A field programmable gate array (FPGA) system on chip (SoC), which combines programmable digital logic with a dual-core ARM processor (HPS) on the same chip, was hypothesised to be capable of running the described vision-based control system. The FPGA was used to perform streamed image pre-processing and concurrent stepper motor control, and to provide communication channels for user input, while the HPS performed the lens distortion mapping, intercept calculation and “strategy” control tasks, as well as controlling the overall function of the system. Individual vision systems were compared for latency performance.

    Interception performance of the semi-automated foosball table was then tested for straight, moderate-speed shots with limited view time; latency was then artificially added to the system, and the interception results for the same centre-field shot were tested with a variety of added latencies. The FPGA-based system performed best in both the steady-state latency and the novel event detection latency tests. The developed stepper motor control modules performed well in terms of speed, smoothness, resource consumption and versatility: they are capable of constant-velocity, constant-acceleration and variable-acceleration profiles, and are completely parameterisable. The interception modules on the foosball table achieved a 100% interception rate, with a confidence interval of 95% and a reliability of 98.4%. As artificial latency was added to the system, performance dropped in terms of the overall number of successful intercepts. The decrease was roughly linear, with a 60% reduction in performance caused by 100 ms of added latency; performance dropped to 0% successful intercepts when 166 ms of latency was added. The implication of this work is that FPGA SoC technology may, in future, enable computer vision to be used as a general-purpose, high-speed measurement system for a wide variety of control problems.
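The reported trend (roughly linear degradation, about 60% performance loss at 100 ms of added latency and 0% at 166 ms) is consistent with a simple linear model anchored at the 166 ms zero-crossing. The sketch below is an illustration of that trend, not the thesis's own fit.

```python
# Linear model of intercept success vs. added latency, anchored at the
# reported point where no intercepts succeeded (166 ms added latency).

ZERO_CROSSING_MS = 166.0

def intercept_rate(added_latency_ms: float) -> float:
    """Fraction of successful intercepts under the linear model."""
    return max(0.0, 1.0 - added_latency_ms / ZERO_CROSSING_MS)

if __name__ == "__main__":
    for ms in (0, 50, 100, 166, 200):
        print(f"{ms:3d} ms added latency -> {intercept_rate(ms):.0%} intercepts")
```

At 100 ms the model gives about 40% success, i.e. the ~60% reduction reported in the abstract.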

    In-Orbit Instrument Performance Study and Calibration for POLAR Polarization Measurements

    POLAR is a compact space-borne detector designed to perform reliable measurements of the polarization of transient sources such as Gamma-Ray Bursts in the energy range 50-500 keV. The instrument is based on the Compton scattering principle, with plastic scintillators as the main detection material read out by multi-anode photomultiplier tubes. POLAR was launched successfully onboard the Chinese space laboratory TG-2 on 15 September 2016. In order to reconstruct the polarization information reliably, a highly detailed understanding of the instrument is required for both data analysis and Monte Carlo studies. For this purpose a full study of the in-orbit performance was performed in order to obtain the instrument calibration parameters, such as noise, pedestal, gain nonlinearity of the electronics, threshold, crosstalk and gain, as well as the effect of temperature on these parameters. Furthermore, the relationship between the gain and the high voltage of the multi-anode photomultiplier tube has been studied, and the errors on all measured values are presented. Finally, the typical systematic error on polarization measurements of Gamma-Ray Bursts due to the measurement error of the calibration parameters is estimated using Monte Carlo simulations.
    Comment: 43 pages, 30 figures, 1 table; preprint accepted by NIM
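The Compton-polarimetry principle the instrument relies on can be illustrated briefly. In such polarimeters the azimuthal scattering-angle distribution is modulated as N(φ) = A·(1 + μ·cos 2(φ − φ₀)), and the modulation factor μ carries the polarization information. The sketch below (an illustration of that textbook relation, not POLAR's calibration code, with made-up parameter values) recovers μ from the curve's extrema.

```python
import numpy as np

def modulation_curve(phi, A, mu, phi0):
    """Azimuthal count distribution of Compton-scattered photons."""
    return A * (1.0 + mu * np.cos(2.0 * (phi - phi0)))

# Noise-free counts in 24 azimuthal bins, with illustrative parameters.
phi = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)
counts = modulation_curve(phi, A=1000.0, mu=0.3, phi0=0.5)

# Recover the modulation factor: mu = (max - min) / (max + min).
mu_est = (counts.max() - counts.min()) / (counts.max() + counts.min())
print(f"recovered modulation factor: {mu_est:.3f}")
```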