    Apollo Experiment Report: Lunar-Sample Processing in the Lunar Receiving Laboratory High-Vacuum Complex

    A high-vacuum complex composed of an atmospheric decontamination system, sample-processing chambers, storage chambers, and a transfer system was built to process and examine lunar material while maintaining quarantine status. Problems identified, equipment modifications, and procedure changes made for Apollo 11 and 12 sample processing are presented. The sample-processing experience indicates that only a few operating personnel are required to process samples efficiently, safely, and rapidly in the high-vacuum complex. The high-vacuum complex was designed to handle the many contingencies, both quarantine-related and scientific, associated with handling an unknown entity such as the lunar sample. This necessitated a complex system that could not respond rapidly to changing scientific requirements as the characteristics of the lunar sample became better defined. Although the complex successfully handled the processing of the Apollo 11 and 12 lunar samples, the scientific requirement for vacuum samples was deleted after the Apollo 12 mission, just as the vacuum system was reaching its full potential.

    A Microfluidic Platform for Precision Small-volume Sample Processing and Its Use to Size Separate Biological Particles with an Acoustic Microdevice.

    A major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete, automated, and integrated microfluidic platform that enables precise processing of 0.15-1.5 ml samples using microfluidic devices. Important aspects of this system include a modular device layout and robust fixtures, which provide reliable and flexible world-to-chip connections, and fully automated fluid handling, which accomplishes closed-loop sample collection, system cleaning, and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization and performance optimization, and demonstrate its use for size separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate the sample. Although this architecture requires the integration of multiple pieces of equipment, its advantages include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
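    The closed-loop collection idea can be illustrated with a minimal control-loop sketch; the callables, threshold, and timings below are hypothetical stand-ins, not the platform's actual drivers or parameters.

    import random
    import time

    def closed_loop_collect(read_signal, set_valve, threshold=0.5,
                            duration_s=5.0, poll_s=0.05):
        """Open the collection valve only while the outlet signal exceeds a
        threshold, so dilute flow goes to waste and the sample is concentrated."""
        t_end = time.time() + duration_s
        while time.time() < t_end:
            set_valve(read_signal() > threshold)
            time.sleep(poll_s)
        set_valve(False)  # finish routed to waste

    # Demo with simulated hardware in place of real pump/valve drivers.
    closed_loop_collect(read_signal=random.random,
                        set_valve=lambda open_collection: None)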

    Development of a low-cost automated sample presentation and analysis system for counting and classifying nematode eggs : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Mechatronics at Massey University, Manawatu, New Zealand

    This thesis discusses the concept development and design of a low-cost, automated sample presentation system for faecal egg counting and classification. The system developed uses microfluidics to present nematode eggs for digital imaging, producing images suitable for image analysis and classification. System costs are kept low by using simple manufacturing methods and commonly available equipment to produce microfluidic counting chambers, which can be interfaced with conventional microscopes. The thesis includes details of the design and implementation of the software developed to allow capture and processing of images from the presentation system, as well as the measures taken to correct for the optical aberrations introduced by the sample presentation system.
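    As an illustration of the kind of image-analysis step such a system performs (a generic thresholding-and-labelling sketch, not the thesis's algorithm; the threshold and size limits are arbitrary assumptions):

    import numpy as np
    from scipy import ndimage

    def count_candidate_eggs(gray, thresh=0.6, min_area=50):
        """Count connected bright regions in a normalized grayscale frame,
        discarding regions smaller than min_area pixels as debris."""
        mask = gray > thresh                    # segment candidate objects
        labels, n = ndimage.label(mask)         # connected-component labelling
        areas = ndimage.sum(mask, labels, index=range(1, n + 1))
        return int(np.sum(areas >= min_area))

    # Demo on a synthetic frame containing two bright blobs.
    frame = np.zeros((100, 100))
    frame[20:30, 20:30] = 1.0
    frame[60:75, 60:75] = 1.0
    print(count_candidate_eggs(frame))          # -> 2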

    Exploiting graphic processing units parallelism to improve intelligent data acquisition system performance in JET's correlation reflectometer

    The performance of intelligent data acquisition systems relies heavily on their processing capabilities and local bus bandwidth, especially in applications with high sample rates or a high number of channels. This is the case for the self-adaptive sampling rate data acquisition system installed as a pilot experiment on the KG8B correlation reflectometer at JET. The system, which is based on the ITMS platform, continuously adapts the sample rate during acquisition depending on the signal bandwidth. To do so, it must transfer the acquired data to a memory buffer in the host processor and run computationally heavy algorithms on each data block. The processing capabilities of the host CPU and the bandwidth of the PXI bus limit the maximum sample rate that can be achieved, and therefore the maximum bandwidth of the phenomena that can be studied. Graphics processing units (GPUs) are becoming an alternative for speeding up compute-intensive kernels in scientific, imaging, and simulation applications. However, integrating this technology into data acquisition systems is not a straightforward step, let alone exploiting its parallelism efficiently. This paper discusses the use of GPUs with new high-speed data bus interfaces to improve the performance of the self-adaptive sampling rate data acquisition system installed at JET. Integration issues are discussed and performance evaluations are presented.
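    The rate-adaptation step can be sketched as follows; NumPy is shown, and a GPU array library with the same interface (e.g. CuPy) could move the FFT off the host CPU. The set of available rates and the 99%-power bandwidth criterion are illustrative assumptions, not the algorithm used on the ITMS platform.

    import numpy as np   # a GPU array library with the same API could be swapped in

    def next_sample_rate(block, fs, rates=(1e6, 2e6, 5e6, 10e6), frac=0.99):
        """Estimate the occupied bandwidth of one acquired block and return the
        lowest available sample rate that still satisfies Nyquist for it."""
        spec = np.abs(np.fft.rfft(block * np.hanning(len(block)))) ** 2
        freqs = np.fft.rfftfreq(len(block), d=1.0 / fs)
        cum = np.cumsum(spec) / np.sum(spec)
        bw = freqs[np.searchsorted(cum, frac)]   # frequency containing 99% of power
        for rate in sorted(rates):
            if rate >= 2.0 * bw:                 # Nyquist with the chosen margin
                return rate
        return max(rates)

    # Example: a 100 kHz tone acquired at 10 MS/s lets the system drop to 1 MS/s.
    t = np.arange(65536) / 10e6
    print(next_sample_rate(np.sin(2 * np.pi * 1e5 * t), fs=10e6))   # -> 1000000.0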

    Engineering Crowdsourced Stream Processing Systems

    A crowdsourced stream processing (CSP) system is a system that incorporates crowdsourced tasks in the processing of a data stream. This can be seen as enabling crowdsourcing work to be applied to a sample of large-scale data at high speed, or equivalently, enabling stream processing to employ human intelligence. It also leads to a substantial expansion of the capabilities of data processing systems. Engineering a CSP system requires the combination of human and machine computation elements. From a general systems theory perspective, this means taking into account inherited as well as emerging properties from both these elements. In this paper, we position CSP systems within a broader taxonomy, outline a series of design principles and evaluation metrics, present an extensible framework for their design, and describe several design patterns. We showcase the capabilities of CSP systems through a case study that applies our proposed framework to the design and analysis of a real system (AIDR) that classifies social media messages during time-critical crisis events. Results show that, compared to a pure stream processing system, AIDR can achieve higher data classification accuracy, while compared to a pure crowdsourcing solution, the system makes better use of human workers by requiring far less manual effort.
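    The core human/machine division of labour can be sketched as a confidence-gated routing step; the callables and the 0.8 cutoff below are illustrative, not AIDR's actual interfaces.

    from queue import Queue

    def route(message, classify, crowd_queue, confidence_cutoff=0.8):
        """Accept the machine label when it is confident enough; otherwise
        enqueue the item for crowdsourced labelling."""
        label, confidence = classify(message)
        if confidence >= confidence_cutoff:
            return label                  # machine path: no human effort spent
        crowd_queue.put(message)          # human path: resolved asynchronously
        return None

    # Demo with a toy classifier.
    crowd = Queue()
    toy = lambda m: ("relevant", 0.95) if "urgent" in m else ("unknown", 0.4)
    print(route("urgent: bridge damaged", toy, crowd))   # -> relevant
    route("unrelated chatter", toy, crowd)               # goes to the crowd queue
    print(crowd.qsize())                                 # -> 1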

    The S2 VLBI Correlator: A Correlator for Space VLBI and Geodetic Signal Processing

    We describe the design of a correlator system for ground- and space-based VLBI. The correlator contains unique signal-processing functions: flexible LO frequency switching for bandwidth synthesis; 1 ms dump intervals; multi-rate digital signal-processing techniques that allow correlation of signals at different sample rates; and a digital filter for very high resolution cross-power spectra. It also includes autocorrelation, tone extraction, pulsar gating, and signal-statistics accumulation. (44 pages, 13 figures)
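    The multi-rate correlation idea can be illustrated in software: bring the two streams to a common rate with a polyphase resampler, then cross-correlate. This is a generic sketch, not the S2 correlator's filter design.

    from math import gcd
    import numpy as np
    from scipy.signal import resample_poly

    def correlate_mixed_rate(x, fs_x, y, fs_y):
        """Resample the faster stream down to the slower one's rate,
        then return the full cross-correlation of the overlapping span."""
        g = gcd(int(fs_x), int(fs_y))
        if fs_x > fs_y:
            x = resample_poly(x, up=int(fs_y) // g, down=int(fs_x) // g)
        elif fs_y > fs_x:
            y = resample_poly(y, up=int(fs_x) // g, down=int(fs_y) // g)
        n = min(len(x), len(y))
        return np.correlate(x[:n], y[:n], mode="full")

    # Demo: the same 100 kHz tone recorded at 8 and 16 MS/s peaks at zero lag.
    t1 = np.arange(8000) / 8e6
    t2 = np.arange(16000) / 16e6
    r = correlate_mixed_rate(np.sin(2 * np.pi * 1e5 * t1), 8e6,
                             np.sin(2 * np.pi * 1e5 * t2), 16e6)
    print(np.argmax(np.abs(r)) - (len(r) // 2))   # -> 0 (to within a sample)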

    General purpose rocket furnace

    A multipurpose furnace for space vehicles, used for material-processing experiments in an outer-space environment, is described. The furnace contains three separate cavities designed to process samples covering the widest possible range of materials and thermal requirements. Each cavity contains three heating elements capable of independent operation under the direction of an automatic and programmable control system. A heat-removal mechanism is also provided for each cavity; it operates in conjunction with the control system to establish an isothermally heated cavity or a wide range of thermal gradients and cooldown rates. A monitoring system compatible with the rocket telemetry provides furnace performance and sample growth-rate data throughout the processing cycle.

    Progress in AMS target production in sub-milligram samples at the NERC Radiocarbon Laboratory

    Recent progress in graphite target production for sub-milligram environmental samples at our facility is presented. We describe an optimized hydrolysis procedure now routinely used for the preparation of CO2 from inorganic samples, a new high-vacuum line dedicated to small-sample processing (combining sample distillation and graphitization units), as well as a modified graphitization procedure. Although measurements of graphite targets as small as 35 µg C have been achieved, system background and measurement uncertainties increase significantly below 150 µg C. As target lifetime can become critically short for targets <150 µg C, the facility currently only processes inorganic samples down to 150 µg C. All radiocarbon measurements are made at the Scottish Universities Environmental Research Centre (SUERC) accelerator mass spectrometry (AMS) facility. Sample processing and analysis are labor-intensive, taking approximately three times longer than for samples ≥500 µg C. The technical details of the new system, the graphitization yield, the fractionation introduced during the process, and the system blank are discussed in detail.

    Optical coherence tomography with a Fizeau interferometer configuration

    We report the investigation of a Fizeau interferometer-based OCT system. A secondary processing interferometer is necessary in this configuration to compensate for the optical path difference formed in the Fizeau interferometer between the end of the fibre and the sample. The Fizeau configuration has the advantage of 'downlead insensitivity', which eliminates polarisation fading. An optical circulator is used in our system to route light efficiently from the source to the sample, and to route the light returning from the sample and the fibre end through to the Mach-Zehnder processing interferometer. The choice of a Mach-Zehnder processing interferometer, from which both antiphase outputs are available, facilitates the incorporation of balanced detection, which often results in a large improvement in signal-to-noise ratio (SNR) compared with the use of a single detector. Balanced detection subtracts the two antiphase interferometer outputs, so the signal amplitude is doubled and the noise is substantially reduced. The SNR was observed to drop when the refractive-index variation at a boundary is small. Several OCT images of samples (resin, resin + crystals, fibre composite) are presented.
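    The balanced-detection argument can be stated schematically (a textbook-style sketch rather than the paper's own derivation, with I_r and I_s the reference and sample intensities and \phi the interferometric phase):

    I_{1,2} \;=\; \tfrac{1}{2}\,(I_r + I_s) \;\pm\; \sqrt{I_r I_s}\,\cos\phi,
    \qquad
    I_{\mathrm{bal}} \;=\; I_1 - I_2 \;=\; 2\sqrt{I_r I_s}\,\cos\phi .

    Subtraction cancels the common term (I_r + I_s)/2, which carries the source intensity noise, while the interference term appears with twice the amplitude of either single output; this is the SNR gain the balanced scheme provides.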