
    Amplification of light pulses with orbital angular momentum (OAM) in nitrogen ions lasing

    Nitrogen ions pumped by intense femtosecond laser pulses give rise to optical amplification in the ultraviolet range. Here, we demonstrate that a seed light pulse carrying orbital angular momentum (OAM) can be significantly amplified in nitrogen plasma excited by a Gaussian femtosecond laser pulse. For topological charges of +1 and -1, we observed energy amplification of the seed pulse by two orders of magnitude, while the amplified pulse carries the same OAM as the incident seed pulse. Moreover, we show that a spatial misalignment of the plasma amplifier with respect to the OAM seed beam leads to amplified emission in a Gaussian mode without OAM, owing to the donut-shaped intensity distribution of the OAM seed pulse. Exploiting this misalignment, we can implement an optical switch that toggles the output signal between the Gaussian mode and the OAM mode. This work not only confirms the transfer of phase from the seed light to the amplified signal, but also highlights the important role of the spatial overlap between the donut-shaped seed beam and the gain region of the nitrogen plasma in achieving OAM beam amplification. (10 pages, 7 figures)
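    As a rough illustration of the overlap argument, the sketch below computes the fraction of a donut-shaped (|l| = 1 Laguerre-Gaussian) seed's power that falls inside a narrow Gaussian gain column as a function of transverse misalignment. All beam parameters, and the function name overlap_fraction, are assumptions made for illustration, not values from the experiment.

```python
# Minimal sketch: overlap of a donut-shaped (|l| = 1) seed with a Gaussian
# gain column versus transverse misalignment. All parameters are invented.
import numpy as np

w_seed = 100e-6   # assumed seed beam waist (m)
w_gain = 40e-6    # assumed gain-column radius (m)

x = np.linspace(-400e-6, 400e-6, 801)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2

# Intensity of an l = +/-1 Laguerre-Gaussian mode: a ring with a dark core.
I_seed = (2 * R2 / w_seed**2) * np.exp(-2 * R2 / w_seed**2)

def overlap_fraction(dx):
    """Fraction of seed power inside a gain column offset by dx."""
    gain = np.exp(-2 * ((X - dx)**2 + Y**2) / w_gain**2)
    return float((I_seed * gain).sum() / I_seed.sum())

for dx in (0.0, 50e-6, 100e-6):
    print(f"offset {dx * 1e6:5.0f} um -> overlap fraction {overlap_fraction(dx):.3f}")
```

    Because the donut has a dark core, a narrow gain column sees very different seed intensity depending on where it sits relative to the ring, which is consistent with the misalignment-controlled switching between OAM and Gaussian output described above.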

    Reliable Detection of Myocardial Ischemia Using Machine Learning Based on Temporal-Spatial Characteristics of Electrocardiogram and Vectorcardiogram

    Background: Myocardial ischemia is a common early symptom of cardiovascular disease (CVD). Reliable detection of myocardial ischemia using computer-aided analysis of electrocardiograms (ECG) provides an important reference for the early diagnosis of CVD. The vectorcardiogram (VCG) could improve the performance of ECG-based myocardial ischemia detection by affording temporal-spatial characteristics related to myocardial ischemia and capturing subtle changes in the ST-T segment over continuous cardiac cycles. We aim to investigate whether the combination of ECG and VCG can improve the performance of machine learning algorithms in automatic myocardial ischemia detection. Methods: The ST-T segments of 20-second, 12-lead ECGs and VCGs were extracted from 377 patients with myocardial ischemia and 52 healthy controls. Then, the sample entropy (SampEn, of the 12 ECG leads and of the three VCG leads), the spatial heterogeneity index (SHI, of the VCG), and the temporal heterogeneity index (THI, of the VCG) were calculated. Using a grid search, four SampEn features and two features were selected as inputs for the ECG-only and VCG-only models based on the support vector machine (SVM), respectively. Similarly, three features (S(I), THI, and SHI, where S(I) is the SampEn of lead I) were selected for the ECG + VCG model. Five-fold cross-validation was used to assess the performance of the ECG-only, VCG-only, and ECG + VCG models. To fully evaluate the generalization ability of the algorithm, the best-performing model was tested on a third, independent dataset of 148 patients with myocardial ischemia and 52 healthy controls. Results: The ECG + VCG model with three features (S(I), THI, and SHI) yields better classification results than the ECG-only and VCG-only models, with an average accuracy of 0.903, sensitivity of 0.903, specificity of 0.905, F1 score of 0.942, and AUC of 0.904, showing better performance with fewer features than existing work. On the third independent dataset, testing showed an AUC of 0.814. Conclusion: The SVM algorithm based on the ECG + VCG model can reliably detect myocardial ischemia, providing a potential tool to assist cardiologists in the early diagnosis of CVD during routine screening in primary care.
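    For readers who want the shape of the pipeline, here is a compact, self-contained sketch: a simplified sample-entropy implementation plus an SVM evaluated with five-fold cross-validation on synthetic stand-in features. The SampEn parameters (m = 2, r = 0.2) and every numeric value are assumptions; nothing below reproduces the study's data or results.

```python
# Sketch of the described pipeline on synthetic data; simplified SampEn.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sample_entropy(x, m=2, r=0.2):
    """Simplified SampEn: -ln(A/B), where B counts m-point template pairs
    within tolerance r * std(x) (Chebyshev distance) and A counts the same
    for (m+1)-point templates; self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        t = sliding_window_view(x, mm)
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        return (d <= tol).sum() - len(t)   # drop self-comparisons
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(0)

# S(I) of a synthetic stand-in for a lead-I ST-T series.
lead_i = np.sin(np.linspace(0, 20 * np.pi, 400)) + 0.1 * rng.normal(size=400)
print("S(I) =", round(sample_entropy(lead_i), 3))

# Invented [S(I), THI, SHI] feature rows; label 1 = ischemia, 0 = control.
X = rng.normal(size=(120, 3))
y = (X @ np.array([0.8, 0.5, 0.6]) + rng.normal(scale=0.5, size=120) > 0).astype(int)

auc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5, scoring="roc_auc")
print("5-fold AUC on synthetic data: %.3f" % auc.mean())
```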

    Acceleration and execution of relational queries using general purpose graphics processing unit (GPGPU)

    This thesis first maps relational computation onto graphics processing units (GPUs) by designing a series of tools, and then explores different opportunities for reducing the limitations imposed by the memory hierarchy across the CPU-GPU system. First, a complete end-to-end compiler and runtime infrastructure, Red Fox, is proposed. Evaluation on the full set of industry-standard TPC-H queries on a single-node GPU shows that Red Fox is on average 11.20x faster than a commercial database system on a state-of-the-art CPU machine. Second, a new compiler technique called kernel fusion is designed to fuse the code bodies of several relational operators to reduce data movement. Third, a multi-predicate join algorithm is designed for GPUs that provides much better performance and can be used with more flexibility than kernel fusion. Fourth, the GPU-optimized multi-predicate join is integrated into a multi-threaded CPU database runtime system that supports out-of-core data sets to solve real-world problems. This thesis presents key insights, lessons learned, measurements from the implementations, and opportunities for further improvement. (Ph.D. thesis)
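    The kernel-fusion idea can be shown in miniature. The sketch below uses plain Python/NumPy as a stand-in for GPU kernels: the unfused version materializes the intermediate relation that a selection produces before a projection reads it back, while the fused version evaluates both operator bodies in a single pass. The table columns and the query are invented for illustration.

```python
# Illustration of kernel fusion for relational operators (CPU stand-in).
import numpy as np

rng = np.random.default_rng(1)
price = rng.uniform(0.0, 100.0, size=100_000)   # column of an invented relation
qty = rng.integers(1, 50, size=100_000)          # second column

def unfused(price, qty):
    """Two operator 'kernels'; the intermediate is materialized between them."""
    mask = price > 90.0                 # kernel 1: selection
    p, q = price[mask], qty[mask]       # intermediate relation written to memory
    return p * q                        # kernel 2: arithmetic projection

def fused(price, qty):
    """Both operator bodies execute in one pass; no intermediate relation."""
    out = []
    for p, q in zip(price, qty):
        if p > 90.0:                    # selection body ...
            out.append(p * q)           # ... fused with the projection body
    return np.array(out)

assert np.allclose(unfused(price, qty), fused(price, qty))
```

    On a GPU the payoff comes from emitting both bodies into one kernel and cutting global-memory traffic; the Python loop only mimics the single-pass structure.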

    ThreadMarks: A Framework for Input-Aware Prediction of Parallel Application Behavior

    Chip multiprocessors (CMPs) are quickly becoming entrenched as the mainstream architectural platform in computer systems. One of the critical challenges facing CMPs is designing applications that effectively leverage the computational resources they provide. Modifying applications to run effectively on CMPs requires understanding their bottlenecks, which in turn necessitates a detailed understanding of architectural features. Unfortunately, identifying bottlenecks is complex and often requires enumerating a wide range of behaviors. To assist in identifying bottlenecks, this paper presents a framework for developing analytical models based on dynamic program behaviors. That is, given a program and a set of training inputs, the framework generates several analytical models that accurately predict online program behaviors such as memory utilization and synchronization overhead, while taking program input into consideration. These models can prove invaluable for online optimization systems and input-specific analysis of program behavior. We demonstrate that this framework is practical and accurate on a wide range of synthetic and real-world parallel applications over various workloads.
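    A minimal sketch of an input-aware analytical model, under stated assumptions: fit a regression from input parameters observed over training runs (here, an invented pair of input size and thread count) to a behavior metric such as synchronization overhead, then predict that metric for an unseen input. The model family and all numbers are illustrative, not the paper's.

```python
# Sketch: training-run measurements -> analytical model -> prediction.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Training runs: (input size, thread count), with a synthetic "measured"
# synchronization overhead (ms) standing in for profiled executions.
inputs = rng.uniform([1e4, 2], [1e6, 32], size=(40, 2))
overhead = (5e-5 * inputs[:, 0]
            + 3.0 * inputs[:, 1] ** 1.5
            + rng.normal(scale=5.0, size=40))

# An assumed quadratic model family over the input parameters.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(inputs, overhead)

# Predict the behavior metric for an unseen input configuration.
print("predicted sync overhead: %.1f ms" % model.predict([[5e5, 16]])[0])
```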

    A stress test on 235U(n, f) in adjustment with HCI and HMI benchmarks

    To understand how compensation errors arise in a nuclear data adjustment devoted mostly to U-Pu fuelled fast critical experiments with only limited information on U-235 data, a stress test on 235U(n,f) was suggested, using critical benchmarks sensitive to 235U(n,f) in the 1–10 keV region. The adjustment benchmark exercise with 20 integral data suggested by the NEA WPEC/SG33 was used as the reference; in it, practically only one experiment gave information on U-235 data. The keff values of the HCI4.1 and HCI6.2 experimental benchmarks were then used separately as the 21st and 22nd integral data to perform the stress tests. The adjusted integral values and cross sections based on 20, 21, and 22 integral data, using the same nuclear data and covariance data sets, were compared. The results confirm that compensation errors can be created by missing essential constraints.
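    For the mechanics of such an exercise, here is a toy generalized-least-squares (GLS) adjustment, the standard machinery behind these studies: x' - x0 = M S^T (S M S^T + V)^{-1} (E - C). All sensitivities, covariances, and discrepancies below are invented solely to show how appending one more integral datum (a "21st" benchmark) re-pulls the adjusted parameters.

```python
# Toy GLS nuclear data adjustment; every number here is invented.
import numpy as np

def gls_adjust(S, M, V, d):
    """Parameter shift x' - x0 = M S^T (S M S^T + V)^-1 d, with d = E - C
    in relative units. S: sensitivities, M: prior parameter covariance,
    V: integral-experiment covariance."""
    G = S @ M @ S.T + V
    return M @ S.T @ np.linalg.solve(G, d)

M = np.diag([0.02, 0.05]) ** 2          # prior covariance of 2 parameters
S20 = np.array([[0.9, 0.1]])            # baseline: one net constraint
V20 = np.diag([0.003]) ** 2
d20 = np.array([0.005])                 # 0.5% C/E discrepancy

S21 = np.vstack([S20, [[0.1, 0.8]]])    # add a datum sensitive to parameter 2
V21 = np.diag([0.003, 0.004]) ** 2
d21 = np.append(d20, -0.002)

print("adjustment, baseline data:", gls_adjust(S20, M, V20, d20))
print("adjustment, +1 datum     :", gls_adjust(S21, M, V21, d21))
```

    In the baseline case the weakly constrained second parameter is free to drift and compensate; the added datum pins it down, which is the "missing essential constraint" effect the abstract refers to.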