437 research outputs found

    Tube-forming device design for the creation of cell-integrated alginate tubes

    Get PDF
    This thesis describes a generic system capable of forming cell-populated alginate tubes either by seeding cells within the lumen or by integrating cells within the tube walls. The applications of an alginate tube are as diverse as those of alginate beads, and such tubes could be used with many types of cells for many different purposes, from cell therapy to tissue engineering. The aim of this work has therefore been: to reproducibly create alginate tubes with uniform walls of predictable thickness; to monitor and quality-control these tubes and a generic cell suspension, including automating aspects of mammalian anchorage-dependent cell culture to improve reliability; and to integrate the cells into an alginate tube without compromising wall thickness, cell viability or the spatial distribution of cells within the tube. This work describes experimental verification of a novel fluid dynamics model predicting that, for any two fluids used in this reverse dip-coating device, the tube wall thickness approaches approximately 2/3 of the gap width as the gap width becomes negligible. Robustness testing of the tube-forming device prompted two base unit designs and a protocol to achieve coefficient of variation (CV) values in tube length under 5% for infusion rates up to 100 ml/min and alginate concentrations ranging from 0.50 to 1.00%. Tubes with wall thicknesses between 143.4 and 277.3 µm can be reliably reproduced at any tube length. Optical coherence tomography (OCT) at ±10 µm accuracy was adapted to directly monitor alginate wall thickness and rate of shrinkage in real time through air. Tube walls were found to stabilise in ~12 minutes, and high-speed camera footage showed no spin of the spherical regulator as the tube formed, indicating that monitoring at a single point is sufficient to determine the overall consistency of the tube wall. Cell sample homogeneity, monitored with a particle sizer, revealed two distinct single-cell populations and a smaller peak of cytoplasmic residue. A capillary cytometer was determined to be the most reliable and consistent way to enumerate cells. Holding times above 3 hours can cause significant aggregation, but this can be controlled by filtration through a known pore size. Kenics static mixers were used to integrate cells into alginate prior to tube formation and gave control of wall thickness comparable to pure alginate tubes (~7% CV). Cell viability above 90% after processing through the static mixer and the mark 2 tube-forming device was achievable using Pronova SLG 100 pre-liquefied alginate. With 12 elements, the Kenics mixers showed a 49.6% improvement in the CV of the spatial distribution of cells across the alginate; this could be improved further by increasing the number of static mixing elements, at the cost of increased dead volume.
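    The two headline figures above are easy to make concrete. The sketch below, using made-up numbers rather than data from the thesis, shows the thin-gap wall-thickness prediction (approximately 2/3 of the gap width) and how the coefficient of variation used in the robustness testing is computed.

```python
# A minimal sketch, with made-up numbers rather than thesis data: the thin-gap
# prediction (wall thickness ~ 2/3 of the gap width) and the coefficient of
# variation (CV) used for robustness testing.
import numpy as np

def predicted_wall_thickness(gap_width_um):
    """Thin-gap limit of the reverse dip-coating model: ~2/3 of the gap width."""
    return 2.0 / 3.0 * gap_width_um

def coefficient_of_variation(measurements):
    """CV = sample standard deviation / mean, as a percentage."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

print(predicted_wall_thickness(300.0))                       # ~200 um for a 300 um gap
print(coefficient_of_variation([98.5, 101.2, 99.8, 100.4]))  # hypothetical tube lengths, mm
```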

    Sparsity based methods for target localization in multi-sensor radar

    Get PDF
    In this dissertation, several sparsity-based methods for ground moving target indicator (GMTI) radar with multiple-input multiple-output (MIMO) random arrays are proposed. MIMO random arrays are large arrays that employ multiple transmitters and receivers whose positions are randomly chosen. Since the resolution of an array depends on its size, MIMO random arrays achieve high resolution. However, because the sensor positions are random, the array suffers from large sidelobes, which may lead to an increased false alarm probability. The number of sensors a MIMO random array requires to maintain a given level of peak sidelobes is studied. It is shown that the number of sensors scales with the logarithm of the array aperture, in contrast to a uniform linear array (ULA), where the number of elements scales linearly with the aperture. The problem of sparse target detection given space-time observations from MIMO random arrays is then presented. The observations are obtained in the presence of Gaussian colored noise of unknown covariance matrix, for which secondary data are available for estimation. To solve the detection problem, two sparsity-based algorithms, MP-STAP and MBMP-STAP, are proposed that utilize knowledge of an upper bound on the number of targets. Constant false alarm rate (CFAR) sparsity-based detectors that do not use any information on the number of targets, referred to as MP-CFAR and MBMP-CFAR, are also developed. A performance analysis of the new CFAR detectors is derived; the metrics used are the probability of false alarm and the probability of detection. A grid refinement procedure is also proposed to eliminate the need for a dense grid, which would otherwise increase the computational complexity significantly. Expressions for the computational complexity of the proposed CFAR detectors are derived. It is shown that the proposed CFAR detectors outperform the popular adaptive beamformer at a modest increase in computational complexity.
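    The matching-pursuit idea underlying these detectors can be sketched as a greedy search over a dictionary of array steering vectors with an assumed upper bound on the number of targets. The sketch below is a generic illustration, not the dissertation's MP-STAP/MBMP-STAP detectors or their CFAR thresholding.

```python
# An illustrative greedy matching pursuit over a dictionary of steering vectors,
# using an assumed upper bound on the number of targets. This is a generic sketch,
# not the dissertation's MP-STAP/MBMP-STAP detectors or their CFAR thresholds.
import numpy as np

def matching_pursuit(y, A, max_targets):
    """y: observation vector; A: matrix whose columns are candidate steering vectors."""
    residual = y.astype(complex).copy()
    support = []
    for _ in range(max_targets):
        # pick the dictionary column most correlated with the current residual
        k = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(k)
        # least-squares amplitudes on the selected columns, then update the residual
        A_s = A[:, support]
        amps, *_ = np.linalg.lstsq(A_s, y, rcond=None)
        residual = y - A_s @ amps
    return support, residual
```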

    Dynamic Data Encoding for Page-Oriented Memories

    Get PDF
    This dissertation presents a key portion of the system architecture for a high-performance page-oriented memory. The focus of this research is the development of new dynamic encoding algorithms that provide high data reliability with code density higher than that of conventional static modulation schemes. It also presents an intelligent read/write head architecture capable of implementing the most promising of these algorithms in real time. Data encoding techniques for page-oriented mass storage devices are typically conservative in order to overcome the destructive effects of inter-symbol interference and noise due to the physical characteristics of the media. Therefore, significantly more bits are required in the encoded version of data than in the original information. This penalty in code density, usually referred to as code rate, keeps the utilization of the media relatively low, often less than 50% of the capacity of a maximally dense code. This is partially because encoding techniques are static and assume the worst case for the information surrounding the data block being encoded. However, in the context of page-oriented data transfers it is possible to evaluate the surrounding information for each code block location and thus to apply a custom code set for each code block. Since evaluating each possible code at runtime leads to very high time complexity for encoding and decoding, alternative algorithms are also presented that successfully trade time complexity for code density and compete strongly with traditional static modulation schemes. To verify that the encoding algorithms are both efficient and applicable, they were analyzed using a two-photon optical memory model. The analysis focused on how well the algorithms performed as a trade-off between complexity and code density. Full enumeration of codes yielded code densities as high as 83%, although the time complexity of the enumeration approach was exponential. In another study, a linear-time algorithm was analyzed; its code density was just over 54%. Finally, a novel quasi-dynamic encoding algorithm was created, which yielded 76% code density with constant time complexity.
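    To make the code-density metric concrete, the toy sketch below counts the code blocks that satisfy an assumed inter-symbol-interference rule and reports the resulting density; the rule and block size are illustrative, not the two-photon memory model used in the work.

```python
# A toy illustration of code density under a constraint: count the n-bit blocks that
# satisfy a made-up inter-symbol-interference rule (no two adjacent ON bits). The
# rule and block size are assumptions, not the two-photon memory model of the work.
from itertools import product
from math import log2

def valid_codewords(n):
    """All n-bit blocks with no two adjacent 1s (toy ISI constraint)."""
    return [w for w in product((0, 1), repeat=n)
            if all(not (a and b) for a, b in zip(w, w[1:]))]

n = 8
words = valid_codewords(n)
density = log2(len(words)) / n   # information bits representable per stored bit
print(f"{len(words)} valid {n}-bit blocks -> code density {density:.2f}")
```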

    NASA Tech Briefs, September 2008

    Get PDF
    Topics covered include: Nanotip Carpets as Antireflection Surfaces; Nano-Engineered Catalysts for Direct Methanol Fuel Cells; Capillography of Mats of Nanofibers; Directed Growth of Carbon Nanotubes Across Gaps; High-Voltage, Asymmetric-Waveform Generator; Magic-T Junction Using Microstrip/Slotline Transitions; On-Wafer Measurement of a Silicon-Based CMOS VCO at 324 GHz; Group-III Nitride Field Emitters; HEMT Amplifiers and Equipment for their On-Wafer Testing; Thermal Spray Formation of Polymer Coatings; Improved Gas Filling and Sealing of an HC-PCF; Making More-Complex Molecules Using Superthermal Atom/Molecule Collisions; Nematic Cells for Digital Light Deflection; Improved Silica Aerogel Composite Materials; Microgravity, Mesh-Crawling Legged Robots; Advanced Active-Magnetic-Bearing Thrust-Measurement System; Thermally Actuated Hydraulic Pumps; A New, Highly Improved Two-Cycle Engine; Flexible Structural-Health-Monitoring Sheets; Alignment Pins for Assembling and Disassembling Structures; Purifying Nucleic Acids from Samples of Extremely Low Biomass; Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery; UV-Resistant Non-Spore-Forming Bacteria From Spacecraft-Assembly Facilities; Hard-X-Ray/Soft-Gamma-Ray Imaging Sensor Assembly for Astronomy; Simplified Modeling of Oxidation of Hydrocarbons; Near-Field Spectroscopy with Nanoparticles Deposited by AFM; Light Collimator and Monitor for a Spectroradiometer; Hyperspectral Fluorescence and Reflectance Imaging Instrument; Improving the Optical Quality Factor of the WGM Resonator; Ultra-Stable Beacon Source for Laboratory Testing of Optical Tracking; Transmissive Diffractive Optical Element Solar Concentrators; Delaying Trains of Short Light Pulses in WGM Resonators; Toward Better Modeling of Supercritical Turbulent Mixing; JPEG 2000 Encoding with Perceptual Distortion Control; Intelligent Integrated Health Management for a System of Systems; Delay Banking for Managing Air Traffic; and Spline-Based Smoothing of Airfoil Curvatures.

    A Sequential MUSIC algorithm for Scatterers Detection in SAR Tomography Enhanced by a Robust Covariance Estimator

    Full text link
    Synthetic aperture radar (SAR) tomography (TomoSAR) is an appealing tool for the extraction of height information of urban infrastructure. Due to the widespread application of the MUSIC algorithm in source localization, it is a suitable solution in TomoSAR when multiple snapshots (looks) are available. While the classical MUSIC algorithm aims to estimate the whole reflectivity profile of scatterers, sequential MUSIC algorithms are suited to the detection of sparse point-like scatterers. In this class of methods, successive cancellation is performed through orthogonal complement projections on the MUSIC power spectrum. In this work, a new sequential MUSIC algorithm named recursive covariance canceled MUSIC (RCC-MUSIC) is proposed. This method brings higher accuracy than previous sequential methods at a negligible increase in computational cost. Furthermore, to improve the performance of RCC-MUSIC, it is combined with a recent method of covariance matrix estimation called correlation subspace. Utilizing the correlation subspace method results in a denoised covariance matrix, which in turn increases the accuracy of subspace-based methods. Several numerical examples are presented to compare the performance of the proposed method with relevant state-of-the-art methods. Simulation results demonstrate the efficiency of the proposed subspace method in terms of estimation accuracy and computational load.
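    For context, the sketch below shows a generic classical MUSIC pseudospectrum, the building block that sequential variants such as RCC-MUSIC extend; it is not the RCC-MUSIC recursion itself.

```python
# A generic classical MUSIC pseudospectrum, the building block that sequential
# variants such as RCC-MUSIC extend; this is not the RCC-MUSIC recursion itself.
import numpy as np

def music_spectrum(R, steering, n_scatterers):
    """R: MxM sample covariance; steering: M x G matrix of candidate steering vectors."""
    eigvals, eigvecs = np.linalg.eigh(R)                  # eigenvalues in ascending order
    noise_subspace = eigvecs[:, : R.shape[0] - n_scatterers]
    # project each candidate steering vector onto the noise subspace
    proj = np.sum(np.abs(noise_subspace.conj().T @ steering) ** 2, axis=0)
    return 1.0 / proj                                     # peaks mark estimated positions
```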

    Sparse Binary Features for Image Classification

    Get PDF
    In this work a new method for automatic image classification is proposed. It relies on a compact representation of images using sets of sparse binary features. The work first evaluates the Fast Retina Keypoint (FREAK) binary descriptor and proposes improvements based on an efficient descriptor representation. The efficient representation is created using dimensionality reduction techniques, entropy analysis and decorrelated sampling. In the second part, the problem of image classification is tackled. The traditional approach uses machine learning algorithms to create classifiers, and some works already propose compact image representations based on feature extraction as preprocessing. The second contribution of this work is to show that binary features, while being very compact and low-dimensional compared with traditional image representations, still provide very high discriminant power. This is demonstrated using various learning algorithms and binary descriptors. In recent years, a common scheme for object recognition on images, or equivalently image classification, has been based on the concept of Bag of Visual Words: an image is described using an unordered set of visual words, generally represented by feature descriptors. The last contribution of this work is to use binary features with a simple Bag of Visual Words classifier. Classification performance is evaluated on a large database of images.
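    A minimal version of such a Bag of Visual Words pipeline with a binary descriptor might look like the sketch below; ORB stands in for FREAK, and the vocabulary size, classifier and data handling are illustrative assumptions rather than the thesis pipeline.

```python
# A minimal Bag-of-Visual-Words sketch with a binary descriptor. ORB is used here as a
# stand-in for FREAK, and the vocabulary size, classifier and data handling are
# illustrative assumptions rather than the thesis pipeline.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def extract_descriptors(images):
    """Binary descriptors (as float32) for each grayscale image."""
    orb = cv2.ORB_create()
    out = []
    for img in images:
        _, d = orb.detectAndCompute(img, None)
        out.append(d.astype(np.float32) if d is not None else np.zeros((1, 32), np.float32))
    return out

def bovw_histograms(descs_per_image, vocab):
    """Normalised visual-word histogram per image."""
    hists = []
    for d in descs_per_image:
        words = vocab.predict(d)
        hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
        hists.append(hist / max(hist.sum(), 1))
    return np.array(hists)

# Hypothetical usage with user-supplied train_images (grayscale arrays) and labels:
# descs = extract_descriptors(train_images)
# vocab = KMeans(n_clusters=256).fit(np.vstack(descs))
# clf = LinearSVC().fit(bovw_histograms(descs, vocab), train_labels)
```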

    Structured antibody surfaces for bio-recognition and a label-free detection of bacteria

    Get PDF

    Quantitative Analysis of Particulate Burden in Lung Tissue

    Get PDF
    Numerous methods have been used in the preparation and analysis of the particulate matter deposited in human lungs. Preparation techniques include those for particle isolation and for in situ analysis; analytical techniques include bulk and particle-by-particle analysis. In this paper, a general discussion of many of these methods is presented along with examples of how two specific techniques have been used. In one study, individual particles from the lungs of 75 randomly selected autopsy cases were analyzed using an automated scanning electron microscopy (SEM)/energy dispersive X-ray microanalysis (EDX) system. An average of 613 million particles of exogenous origin per gram of dry lung tissue was found; the major classes of particles were silica, talc, aluminum silicates, and rutile. In the second study, lungs from 50 randomly selected autopsy cases were analyzed using gravimetric and X-ray diffraction (XRD) analysis. The median total particulate material was 0.33 grams for cases in which samples were prepared by high-temperature ashing and 0.41 grams for those in which nitric acid digestion was used. The median amount of quartz for all cases was 0.044 grams. Samples from eighteen of the 75 lungs previously analyzed by automated SEM/EDX were also analyzed using gravimetric and XRD analysis. A good correlation was seen between the results of the two procedures (r = 0.91 for number of exogenous particles versus grams of particulate matter and r = 0.97 for number of silica particles versus amount of quartz).
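    The cross-method comparison quoted above reduces to a Pearson correlation between paired per-case results; the sketch below uses placeholder values rather than the study's data.

```python
# A small sketch of the cross-method comparison reported above: Pearson correlation
# between paired per-case results from two analysis routes. The arrays are placeholder
# values, not the study's data.
from scipy.stats import pearsonr

sem_edx_particle_counts = [4.1e8, 6.2e8, 9.5e8, 2.3e8, 7.7e8]  # particles per gram (assumed)
xrd_particulate_mass_g = [0.21, 0.35, 0.52, 0.12, 0.44]        # grams per case (assumed)

r, p = pearsonr(sem_edx_particle_counts, xrd_particulate_mass_g)
print(f"r = {r:.2f}, p = {p:.3f}")
```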

    Development and Application of Integrated Silicon-in-Plastic Microfabrication in Polymer Microfluidic Systems

    Get PDF
    Polymer-based microfluidic devices can offer a number of advantages over conventional devices and have found many applications in chemical and biological analysis. To fully develop a lab-on-chip (LOC) device, functional components such as sensors and actuators typically need to be assembled into a complete working device, yet the integration of silicon chips into polymer-based microfluidic systems remains a virtually unexplored area. In this work, a novel silicon-in-plastic microfabrication technology is developed that seamlessly integrates individual microfabricated silicon chips into a larger polymer substrate, where the silicon components provide functionality and the plastic substrate provides system-level fluid handling. This technology employs low-cost polymer substrates and simple polymer processing techniques that are amenable to mass production. The fabrication and testing of two polymer microfluidic systems using the silicon-in-plastic technology are presented in this dissertation. The first integrated microsystem is a water-based chemical monitoring system built around a microhotplate gas sensor and polymer microfluidics. The chemical monitoring system is designed to sample a water source, extract solvent present within the aqueous sample into the vapor phase, and direct the solvent vapor past the integrated gas sensor for analysis. Design, fabrication, and characterization of a prototype system are described, and results from illustrative measurements performed using methanol, toluene, and 1,2-dichloroethane in water are presented. The second is an integrated UV absorbance detection system that uses the silicon-in-plastic technology to seamlessly integrate bare photodiode chips into a polymer microfluidic system. Detection platforms fabricated using this approach exhibit excellent detection limits, down to 1.5 × 10⁻⁸ M for bovine serum albumin (BSA) as a model protein. In addition to providing high sensitivity, sub-nanoliter detection volumes are enabled by direct photodetector integration. The fabrication methodology is detailed, and system performance metrics including minimum detection limit, detection volume, dynamic range, and linearity are reported.
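    The role of optical path length in such on-chip absorbance measurements can be illustrated with the Beer-Lambert law; the molar absorptivity and path length in the sketch below are assumptions, not values from the dissertation.

```python
# A Beer-Lambert sketch showing how the short optical path of a microchannel limits
# the absorbance signal at a given concentration. The molar absorptivity and path
# length below are illustrative assumptions, not values from the dissertation.
def absorbance(molar_absorptivity, path_length_cm, concentration_M):
    """Beer-Lambert law: A = epsilon * l * c."""
    return molar_absorptivity * path_length_cm * concentration_M

eps_bsa_280nm = 44000   # L/(mol*cm), approximate literature value for BSA at 280 nm
path_cm = 0.05          # assumed 500 um optical path across a microchannel
for c in (1.5e-8, 1e-6, 1e-4):
    print(f"c = {c:.1e} M -> A = {absorbance(eps_bsa_280nm, path_cm, c):.2e}")
```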