    Two-Dimensional Digitized Picture Arrays and Parikh Matrices

    The Parikh matrix mapping, or Parikh matrix, of a word was introduced in the literature to count the scattered subwords of the word, and several properties of Parikh matrices have been extensively investigated. A picture array is a two-dimensional connected digitized rectangular array consisting of a finite number of pixels, with each pixel in a cell carrying a label from a finite alphabet. Here we extend the notion of the Parikh matrix of a word to a picture array, associating with it two kinds of Parikh matrices, called the row Parikh matrix and the column Parikh matrix. Two picture arrays A and B are defined to be M-equivalent if their row Parikh matrices are the same and their column Parikh matrices are the same. This makes it possible to extend the notion of M-ambiguity to picture arrays. In the binary and ternary cases, conditions that ensure M-ambiguity are then obtained.
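
    The definitions are concrete enough for a small sketch. The Python below builds the Parikh matrix of a word as a product of elementary upper-triangular matrices (the standard construction) and then, under our assumption that the row (column) Parikh matrix of an array is the Parikh matrix of the word read off row by row (column by column), checks M-equivalence of two binary arrays; that reading of the row/column definitions is an assumption, not a quotation from the paper.

```python
import numpy as np

def parikh_matrix(word, alphabet):
    """Parikh matrix of `word` over the ordered `alphabet`, built as a
    product of elementary matrices I + E[i, i+1], one per letter."""
    k = len(alphabet)
    M = np.eye(k + 1, dtype=int)
    for ch in word:
        E = np.eye(k + 1, dtype=int)
        i = alphabet.index(ch)
        E[i, i + 1] = 1
        M = M @ E
    return M

def row_parikh(array, alphabet):
    # word obtained by reading the array row by row, left to right
    return parikh_matrix("".join(array), alphabet)

def column_parikh(array, alphabet):
    # word obtained by reading the array column by column, top to bottom
    cols = ("".join(row[j] for row in array) for j in range(len(array[0])))
    return parikh_matrix("".join(cols), alphabet)

def m_equivalent(A, B, alphabet):
    return (np.array_equal(row_parikh(A, alphabet), row_parikh(B, alphabet)) and
            np.array_equal(column_parikh(A, alphabet), column_parikh(B, alphabet)))

# Two distinct binary arrays whose row and column Parikh matrices agree.
A = ["ab",
     "ba"]
B = ["ba",
     "ab"]
print(m_equivalent(A, B, ["a", "b"]))  # True: A and B are M-equivalent
```

    For the binary alphabet {a, b}, the resulting 3x3 matrix stores the counts of a, of b, and of the scattered subword ab above the diagonal.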

    Algebraic Properties of Parikh Matrices of Binary Picture Arrays

    A word is a finite sequence of symbols. The Parikh matrix of a word is an upper triangular matrix with ones on the main diagonal and non-negative integers above the main diagonal, which count certain scattered subwords of the word. A picture array, on the other hand, is a rectangular arrangement of symbols and extends the notion of a word to two dimensions. Parikh matrices associated with a picture array have been introduced and their properties studied. Here we obtain certain algebraic properties of Parikh matrices of binary picture arrays based on the notions of power, fairness, and a restricted shuffle operator, extending the corresponding notions studied in the case of words. We also obtain properties of Parikh matrices of arrays formed by certain geometric operations.
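
    In the binary case, fairness admits a one-pass illustration. Assuming the usual definition for words, that a binary word is fair when the scattered subwords ab and ba occur equally often, the sketch below counts both in a single pass; applying it to an array's row or column word follows the construction in the previous entry.

```python
def subword_counts(word):
    """Counts of a, b and of the scattered subwords ab, ba in a binary word."""
    na = nb = ab = ba = 0
    for ch in word:
        if ch == "a":
            na += 1
            ba += nb   # every earlier b pairs with this a to form ba
        else:
            nb += 1
            ab += na   # every earlier a pairs with this b to form ab
    return na, nb, ab, ba

def is_fair(word):
    _, _, ab, ba = subword_counts(word)
    return ab == ba

print(is_fair("abba"))  # True: ab and ba each occur twice
print(is_fair("aabb"))  # False: ab occurs four times, ba never
```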

    Image Reconstructions of Compressed Sensing MRI with Multichannel Data

    Magnetic resonance imaging (MRI) provides high spatial resolution, high-quality soft-tissue contrast, and multi-dimensional images. However, the speed of data acquisition limits potential applications. Compressed sensing (CS) theory, which allows data to be sampled below the Nyquist rate, offers a way to accelerate the MRI scan time. Since most MRI scanners are currently equipped with multi-channel receiver systems, integrating CS with multi-channel systems can further shorten the scan time and also provide better image quality. In this dissertation, we develop several techniques for integrating CS with parallel MRI. First, we propose a method which extends reweighted l1 minimization to CS-MRI with multi-channel data. The individual channel images are recovered with the reweighted l1 minimization algorithm, and the final image is then combined by the sum-of-squares method. Computer simulations show that the new method improves reconstruction quality at a slightly increased computational cost. Second, we propose a reconstruction approach that uses the ubiquitously available multi-core CPU to accelerate CS reconstructions of multi-channel data. CS reconstructions for phased-array systems using iterative l1 minimization are significantly time-consuming, with computational complexity that scales with the number of channels. The experimental results show that reconstruction efficiency benefits significantly from parallelizing the CS reconstructions and pipelining multi-channel data on multi-core processors. In our experiments, an additional speedup factor of 1.6 to 2.0 was achieved using the proposed method on a quad-core CPU. Finally, we present an efficient reconstruction method for high-dimensional CS MRI on a GPU platform to shorten the time of iterative computations. Data management as well as the iterative algorithm are designed to suit SIMD (single instruction/multiple data) parallelization. For three-dimensional multi-channel data, all slices along the frequency-encoding direction and all channels are parallelized and processed simultaneously within the GPU. Reconstructing a simulated 4-channel data set with a volume size of 256×256×32 takes only 2.3 seconds on the GPU, compared to 67 seconds on a CPU, a 28-fold speedup. The rapid reconstruction algorithms demonstrated in this work are expected to help bring high-dimensional, multi-channel parallel CS MRI closer to clinical applications.
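
    As a rough illustration of the first contribution, the sketch below runs reweighted l1 minimization, here realized as iterative soft-thresholding whose weights are refreshed as w = 1/(|x| + eps) in the style of Candes, Wakin, and Boyd, on each channel of a toy undersampled acquisition, then combines the channel images by sum of squares. Image-domain sparsity, the random mask, and all parameter values are assumptions made to keep the example short; this is not the dissertation's implementation.

```python
import numpy as np

def soft(x, t):
    """Complex soft-thresholding, elementwise."""
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0)

def reweighted_l1_recon(y, mask, lam=0.01, outer=3, inner=50, eps=1e-3):
    """One channel: outer loops refresh the l1 weights, inner loops are
    plain iterative soft-thresholding on the masked Fourier residual."""
    x = np.fft.ifft2(y)
    w = np.ones(x.shape)
    for _ in range(outer):
        for _ in range(inner):
            resid = mask * (y - np.fft.fft2(x))
            x = soft(x + np.fft.ifft2(resid), lam * w)
        w = 1.0 / (np.abs(x) + eps)   # reweighting step
    return x

rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[20:24, 30:34] = 1.0   # sparse phantom
mask = rng.random((64, 64)) < 0.4                   # ~40% of k-space sampled
channels = []
for c in range(4):                                  # 4 receiver channels
    sens = 0.5 + 0.5 * rng.random()                 # crude channel sensitivity
    channels.append(reweighted_l1_recon(mask * np.fft.fft2(sens * img), mask))
sos = np.sqrt(sum(np.abs(x) ** 2 for x in channels))  # sum-of-squares combine
print(f"peak of combined image: {sos.max():.3f}")
```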

    Index to 1985 NASA Tech Briefs, volume 10, numbers 1-4

    Short announcements of new technology derived from the R&D activities of NASA are presented. These briefs emphasize information considered likely to be transferable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. This index for 1985 Tech Briefs contains abstracts and four indexes: subject, personal author, originating center, and Tech Brief number. The following areas are covered: electronic components and circuits, electronic systems, physical sciences, materials, life sciences, mechanics, machinery, fabrication technology, and mathematics and information sciences.

    Collaborative Regression and Classification via Bootstrapping

    In modern machine learning problems and applications, we deal with vast quantities of data that are often high dimensional, making data analysis time-consuming and computationally inefficient. Sparse recovery algorithms are developed to extract the underlying low-dimensional structure from the data. Classical signal recovery based on ℓ1 minimization solves the least squares problem with all available measurements via sparsity-promoting regularization, and has shown promising performance in regression and classification. Previous work in Compressed Sensing (CS) theory shows that when the true solution is sparse and the number of measurements is large enough, solutions to ℓ1 minimization converge to the ground truth. In practice, when the number of measurements is low, when the noise level is high, or when measurements arrive sequentially in streaming fashion, conventional ℓ1 minimization algorithms tend to under-perform. This research aims at using multiple local measurements, generated by resampling with the bootstrap or by sub-sampling, to efficiently make global predictions in the aforementioned challenging scenarios. We develop two main approaches: one extends the conventional bagging scheme in sparse regression beyond a fixed bootstrapping ratio, whereas the other, called JOBS, applies a support-consistency constraint among bootstrapped estimators in a collaborative fashion. We first derive rigorous theoretical guarantees for both proposed approaches and then carefully evaluate them with extensive simulations to quantify their performance. Our algorithms are quite robust compared to conventional ℓ1 minimization, especially in scenarios with high measurement noise and a low number of measurements. Our theoretical analysis also provides key guidance on how to choose optimal parameters, including the bootstrapping ratio and the number of collaborative estimates. Finally, we demonstrate that our proposed approaches yield significant performance gains in both sparse regression and classification, two crucial problems in signal processing and machine learning.
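
    For concreteness, here is a minimal sketch of the baseline idea: bagging Lasso estimates fitted on bootstrapped subsets of the measurement rows. The support-consistency constraint that distinguishes JOBS is not implemented here, and the parameter values (n_estimators, ratio, alpha) are illustrative choices rather than values from this work.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Sparse ground truth and noisy linear measurements y = Ax + noise.
n, m, k = 200, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.05 * rng.normal(size=m)

def bagged_lasso(A, y, n_estimators=30, ratio=0.7, alpha=0.01):
    """Average Lasso solutions over bootstrap subsamples of the rows.
    `ratio` = L/m is the bootstrapping ratio, one of the parameters
    the abstract says the theory helps choose."""
    m = len(y)
    L = int(ratio * m)
    coefs = []
    for _ in range(n_estimators):
        idx = rng.choice(m, size=L, replace=True)   # bootstrap draw
        coefs.append(Lasso(alpha=alpha, max_iter=5000).fit(A[idx], y[idx]).coef_)
    return np.mean(coefs, axis=0)

x_bag = bagged_lasso(A, y)
err = np.linalg.norm(x_bag - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.3f}")
```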

    NASA Tech Briefs, October 1988

    Topics include: New Product Ideas; NASA TU Services; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences; Life Sciences.

    On-scalp MEG using high-Tc SQUIDs: Measuring brain activity with superconducting magnetometers

    This thesis describes work done towards realizing on-scalp magnetoencephalography (MEG) based on high critical temperature (high-Tc) superconducting quantum interference device (SQUID) sensors. MEG is a non-invasive neuroimaging modality that records the magnetic fields produced by neural currents with good spatial and high temporal resolution. However, state-of-the-art MEG is limited by the use of liquid helium-cooled sensors (T ~ 4 K). The amount of thermal insulation between the sensors and the subject's head that is required to achieve the extreme temperature difference (~300 K), typically realized in the form of superinsulation foil and ~2 centimeters of vacuum, limits measurable signals. Replacing the sensors with high-Tc SQUIDs can mitigate this problem. High-Tc SQUIDs operate at much higher temperatures (90 K), allowing a significant reduction of the stand-off distance (to ~1 mm). They can furthermore be cooled with liquid nitrogen (77 K), a cheaper, more sustainable alternative to the liquid helium used for cooling in conventional MEG systems. The work described in this thesis can be divided into three main areas: (I) simulation work for practical implementations of on-scalp systems, (II) development of a 7-channel high-Tc SQUID-based on-scalp MEG system, and (III) on-scalp MEG recordings. In the first part, spatial information density (SID), a metric to evaluate the performance of simulated MEG sensor arrays, is introduced and, along with total information capacity, used to compare the performance of various simulated full-head on-scalp MEG sensor arrays. Simulations demonstrate the potential of on-scalp MEG, with all on-scalp systems exhibiting higher information capacity than the state-of-the-art. SID further reveals more homogeneous sampling of the brain with flexible systems. A method for localizing magnetometers in on-scalp MEG systems is introduced and tested in simulations. The method uses small, magnetic dipole-like coils to determine the location and orientation of individual sensors, enabling straightforward co-registration in flexible on-scalp MEG systems. The effects of different uncertainties and errors on the accuracy of the method were quantified. In the second part, the design, construction, and performance of a 7-channel on-scalp MEG system are described. The system houses seven densely packed (2 mm edge-to-edge), head-aligned high-Tc SQUID magnetometers (9.2 mm x 8.6 mm) inside a single, liquid nitrogen-cooled cryostat. With a single filling, the system can be used for MEG recordings for >16 h with low noise levels (~0-130 fT). Using synchronized clocks and a direct injection feedback scheme, the system achieves low sensor crosstalk (<0.6%). In the third part, on-scalp MEG recordings with the 7-channel system, as well as its predecessor, a single-channel system, are presented. The recordings are divided into proof-of-principle and benchmarking experiments. The former consist of well-studied, simple paradigms such as auditory evoked activity and visual alpha; expected signal components were clearly seen in the on-scalp recordings. The benchmarking studies were done to compare and contrast on-scalp with state-of-the-art MEG. To this end, a number of experimental stimulus paradigms were recorded on human subjects with the high-Tc SQUID-based on-scalp systems as well as a state-of-the-art, commercial full-head MEG system. Results include the expected signal gains associated with recording on-scalp, as well as new details of the neurophysiological signals. Using the previously described on-scalp MEG co-registration method enabled source localization in high agreement with the full-head recording (the distance between dipoles localized with the two systems was 4.2 mm).
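
    The coil-based co-registration method lends itself to a small numerical illustration. In the sketch below, which is our own construction rather than the thesis software, a magnetometer with known orientation measures the fields of point-dipole calibration coils at known positions, and the sensor position is recovered by nonlinear least squares; the coil layout, dipole moments, and noise-free measurement model are all assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(r_sensor, r_coil, m):
    """Field of a point magnetic dipole with moment `m` at `r_coil`,
    evaluated at `r_sensor` (positions in metres, field in tesla)."""
    r = r_sensor - r_coil
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0 / (4 * np.pi) * (3 * rhat * (m @ rhat) - m) / d**3

# Four calibration coils with known positions and moments (made up here).
coils = [(np.array([0.05, 0.00, 0.00]), np.array([0.0, 0.0, 1e-3])),
         (np.array([0.00, 0.05, 0.00]), np.array([0.0, 0.0, 1e-3])),
         (np.array([-0.05, -0.05, 0.00]), np.array([1e-3, 0.0, 0.0])),
         (np.array([0.00, -0.06, 0.02]), np.array([0.0, 1e-3, 0.0]))]
n_sensor = np.array([0.0, 0.0, 1.0])    # known sensor orientation
p_true = np.array([0.01, 0.02, 0.08])   # unknown sensor position (to recover)

meas = np.array([n_sensor @ dipole_field(p_true, rc, m) for rc, m in coils])

def residuals(p):
    pred = np.array([n_sensor @ dipole_field(p, rc, m) for rc, m in coils])
    return pred - meas

fit = least_squares(residuals, x0=np.array([0.0, 0.0, 0.10]))
print("recovered sensor position (m):", np.round(fit.x, 4))
```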

    Cumulative index to NASA Tech Briefs, 1986-1990, volumes 10-14

    Tech Briefs are short announcements of new technology derived from the R&D activities of the National Aeronautics and Space Administration. These briefs emphasize information considered likely to be transferable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. This cumulative index of Tech Briefs contains abstracts and four indexes (subject, personal author, originating center, and Tech Brief number) and covers the period 1986 to 1990. The abstract section is organized by the following subject categories: electronic components and circuits, electronic systems, physical sciences, materials, computer programs, life sciences, mechanics, machinery, fabrication technology, and mathematics and information sciences.

    Fast Monte Carlo Simulations for Quality Assurance in Radiation Therapy

    Monte Carlo (MC) simulation is generally considered to be the most accurate method for dose calculation in radiation therapy. However, it suffers from low simulation efficiency (hours to days) and complex configuration, which impede its application in clinical studies. The recent rise of MRI-guided radiation platforms (e.g. ViewRay's MRIdian system) brings an urgent need for fast MC algorithms, because the strong magnetic field they introduce can cause large errors in other algorithms. My dissertation focuses on resolving the conflict between the accuracy and efficiency of MC simulations through four different approaches: (1) GPU parallel computation, (2) transport mechanism simplification, (3) variance reduction, and (4) DVH constraints. Accordingly, we took several steps to thoroughly study the performance and accuracy impact of these methods. As a result, three Monte Carlo simulation packages, named gPENELOPE, gDPMvr, and gDVH, were developed to strike a subtle balance between performance and accuracy in different application scenarios. For example, the most accurate, gPENELOPE, is usually used as the gold standard for radiation meter modeling, while the fastest, gDVH, is usually used for quick in-patient dose calculation, significantly reducing the calculation time from 5 hours to 1.2 minutes (250 times faster) with only 1% error introduced. In addition, a cross-platform GUI integrating the simulation kernels and 3D visualization was developed to make the toolkit more user-friendly. After the fast MC infrastructure was established, we successfully applied it to four radiotherapy scenarios: (1) validating the vendor-provided Co-60 radiation head model by comparing the dose calculated by gPENELOPE to experimental data; (2) quantitatively studying the effect of the magnetic field on the dose distribution and proposing a strategy to improve treatment planning efficiency; (3) evaluating the accuracy of the built-in MC algorithm of MRIdian's treatment planning system; and (4) performing quick quality assurance (QA) for online adaptive radiation therapy, which does not leave enough time for experimental QA. Many other time-sensitive applications (e.g. motional dose accumulation) will also benefit greatly from our fast MC infrastructure.
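
    Since the fastest of the three packages is organized around DVH constraints, the cumulative dose-volume histogram is the natural quantity to illustrate. The sketch below is the generic textbook computation, not code from gDVH: it builds a cumulative DVH over a toy 3D dose grid and reads off an approximate D95; the phantom, region of interest, and dose model are invented for the example.

```python
import numpy as np

def cumulative_dvh(dose, roi_mask, n_bins=100):
    """Cumulative DVH: fraction of ROI volume receiving at least dose d,
    evaluated on a grid of dose levels from 0 to the ROI maximum."""
    d = dose[roi_mask]
    levels = np.linspace(0.0, d.max(), n_bins)
    volume_fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_fraction

# Toy dose grid: a Gaussian 'hot' sphere inside a 32^3 phantom (values ~Gy).
rng = np.random.default_rng(2)
z, y, x = np.mgrid[:32, :32, :32]
r2 = (x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2
dose = 60.0 * np.exp(-r2 / 60.0) + rng.normal(0.0, 0.5, (32, 32, 32))
target = r2 < 36                      # spherical region of interest

levels, vf = cumulative_dvh(dose, target)
# Approximate D95: the highest dose level still covering 95% of the ROI.
d95 = levels[np.searchsorted(-vf, -0.95)]
print(f"approximate D95 of target: {d95:.1f} Gy")
```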

    The Data Science Design Manual
