
    A Novel Seeding and Conditioning Bioreactor for Vascular Tissue Engineering

    Multiple efforts have been made to develop small-diameter tissue-engineered vascular grafts using a great variety of bioreactor systems at different steps of processing. Nevertheless, there is still an extensive need for a compact all-in-one system providing multiple, simultaneous processing steps. The aim of this project was to develop a new device that fulfills the major requirements of an ideal system, allowing simultaneous seeding, conditioning, and perfusion. The newly developed system can be operated in a common incubator and consists of six components: a rotating cylinder, a pump, a pulse generator, a control unit, a mixer, and a reservoir. Components in direct contact with cell media, cells, and/or tissue allow sterile processing. Proof-of-concept experiments were performed with polyurethane and collagen tubes. The scaffolds were seeded with fibroblasts and endothelial cells isolated from human saphenous vein segments. Scanning electron microscopy and immunohistochemistry showed better seeding success on polyurethane scaffolds than on collagen. Conditioning of polyurethane tubes at 100 dyn/cm² resulted in cell detachment, whereas a moderate conditioning program with a stepwise increase of shear stress from 10 to 40 dyn/cm² induced a stable and confluent cell layer. The new bioreactor is a powerful tool for quick and easy testing of various scaffold materials for the development of tissue-engineered vascular grafts. Combining the bioreactor with native tissue allows testing of medical devices and medicinal substances under physiological conditions, which is a step towards reducing animal testing. In the long run, the bioreactor could prove capable of producing tissue-engineered vascular grafts for human applications "at the bedside".
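    The conditioning program described above is essentially a stepwise shear-stress schedule. The sketch below illustrates how such a ramp could be expressed and translated into a pump flow rate via the Poiseuille relation for wall shear stress, tau_w = 4*mu*Q/(pi*r^3); the step durations, medium viscosity, and scaffold radius are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of a stepwise shear-stress conditioning schedule for a
# tubular scaffold. Step count, durations, viscosity, and geometry are
# hypothetical; the paper only states a ramp from 10 to 40 dyn/cm^2.
import math

MU = 0.01  # dynamic viscosity of culture medium in poise (dyn*s/cm^2), assumed

def flow_rate_for_shear(tau_wall: float, radius_cm: float) -> float:
    """Poiseuille flow: tau_w = 4*mu*Q/(pi*r^3)  =>  Q = tau_w*pi*r^3/(4*mu).

    tau_wall in dyn/cm^2, radius in cm; returns Q in cm^3/s (mL/s).
    """
    return tau_wall * math.pi * radius_cm ** 3 / (4.0 * MU)

def conditioning_schedule(start=10.0, stop=40.0, steps=4, hours_per_step=12):
    """Yield (shear stress, duration) pairs for a stepwise ramp."""
    for i in range(steps):
        tau = start + i * (stop - start) / (steps - 1)
        yield tau, hours_per_step

if __name__ == "__main__":
    r = 0.25  # inner scaffold radius in cm (5 mm inner diameter), assumed
    for tau, hours in conditioning_schedule():
        q_ml_min = flow_rate_for_shear(tau, r) * 60.0
        print(f"{tau:5.1f} dyn/cm^2 for {hours} h -> pump ~{q_ml_min:.0f} mL/min")
```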

    Visualizing test diversity to support test optimisation

    Diversity has been used as an effective criterion to optimise test suites for cost-effective testing. In particular, diversity-based (alternatively referred to as similarity-based) techniques have the benefit of being generic and applicable across different Systems Under Test (SUT), and have been used to automatically select or prioritise large sets of test cases. However, it is a challenge to feed diversity information back to developers and testers, since the results are typically many-dimensional. Furthermore, the generality of diversity-based approaches makes it harder to choose when and where to apply them. In this paper we address these challenges by investigating: i) the trade-offs in using different sources of diversity (e.g., diversity of test requirements or test scripts) to optimise large test suites, and ii) how visualisation of test diversity data can assist testers in test optimisation and improvement. We perform a case study on three industrial projects and present quantitative results on the fault detection capabilities and redundancy levels of different sets of test cases. Our key result is that test similarity maps, based on pair-wise diversity calculations, helped industrial practitioners identify issues with their test repositories and decide on improvement actions. We conclude that the visualisation of diversity information can assist testers in their maintenance and optimisation activities.
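    As a rough illustration of what a pair-wise diversity calculation looks like, the sketch below computes Jaccard distances over token sets of toy test scripts; the tokenisation and the suggestion of MDS for the 2D similarity map are illustrative choices, not necessarily the paper's exact pipeline.

```python
# Minimal sketch of pair-wise, diversity-based test comparison: Jaccard
# distance over token sets of test scripts. Low-distance pairs are redundancy
# candidates; the full matrix can feed a 2D projection (the "similarity map").
from itertools import combinations

def jaccard_distance(a: set, b: set) -> float:
    """1 - |A intersect B| / |A union B|; 1.0 means maximally diverse tests."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

# Hypothetical test scripts reduced to sets of action tokens.
tests = {
    "test_login":  {"open", "type_user", "type_pass", "click_login", "assert_home"},
    "test_logout": {"open", "click_login", "click_logout", "assert_login_page"},
    "test_search": {"open", "type_query", "click_search", "assert_results"},
}

names = sorted(tests)
dist = {(x, y): jaccard_distance(tests[x], tests[y]) for x, y in combinations(names, 2)}
for (x, y), d in sorted(dist.items(), key=lambda kv: kv[1]):
    print(f"{x} vs {y}: distance {d:.2f}")
# Feeding the full distance matrix to e.g. sklearn.manifold.MDS yields the
# 2D map in which clusters of near-identical tests become visible.
```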

    A High Rate Testbeam Data Acquisition System and Characterization of High Voltage Monolithic Active Pixel Sensors

    New experiments, designed to test the Standard Model of particle physics with unprecedented precision and to search for physics beyond it, push detector technologies to their limits. The Mu3e experiment searches for the charged lepton flavor violating decay μ⁺ → e⁺e⁻e⁺ with a branching ratio sensitivity of better than 1·10⁻¹⁶. This decay is suppressed in the Standard Model to unobservable levels but can be sizable in models beyond the Standard Model. The Mu3e detector consists of a thin pixel spectrometer combined with scintillating detectors to measure the vertex, momentum, and time of the decay particles. Requirements on rate and material budget cannot be fulfilled by classical pixel sensors and demand the development of a novel pixel technology: high-voltage monolithic active pixel sensors (HV-MAPS). Two important steps towards a final pixel detector are discussed within the scope of this thesis: the characterization of two HV-MAPS prototypes from the MuPix family, and the development of a tracking telescope based on HV-MAPS with online monitoring, tracking, and efficiency calculation for particle rates above 10 MHz. Using the telescope, it is shown that the transition from the small-scale MuPix7 to the full-scale MuPix8 has been successful. Sensor characterization studies of the MuPix8 show efficiencies above 99% at noise rates below 0.4 Hz/pixel over a large threshold range, as well as a time resolution of 6.5 ns after time-walk corrections, thus fulfilling all Mu3e sensor requirements. Additionally, the radiation tolerance of the MuPix7 has been demonstrated up to a fluence of 1.5·10¹⁵ 24 GeV p/cm².
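    The quoted 6.5 ns time resolution is reached only after correcting time walk, i.e. the extra threshold-crossing delay of small signals. The sketch below shows the generic shape of such a correction using time-over-threshold as a charge proxy; the power-law model and its constants are assumptions for illustration, not the calibration used in the thesis.

```python
# Generic sketch of a time-walk correction of the kind applied to pixel hits:
# small signals cross the comparator threshold later, so each timestamp is
# shifted back by an amount parametrised by the measured time-over-threshold.
import numpy as np

def time_walk(tot_ns: np.ndarray, a=40.0, b=0.8, c=2.0) -> np.ndarray:
    """Assumed power-law delay model: small ToT (small charge) -> large delay."""
    return a / (tot_ns ** b) + c

def correct_timestamps(t_raw_ns: np.ndarray, tot_ns: np.ndarray) -> np.ndarray:
    """Subtract the per-hit walk estimated from the measured ToT."""
    return t_raw_ns - time_walk(tot_ns)

# Toy check: hits from the same particle with different deposited charge.
tot = np.array([30.0, 80.0, 300.0])    # ns, small to large signals
t_raw = 100.0 + time_walk(tot)         # walk smears the raw timestamps
print("raw spread      :", np.ptp(t_raw), "ns")
print("corrected spread:", np.ptp(correct_timestamps(t_raw, tot)), "ns")  # ~0
```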

    Information Systems Development Methodologies Transitions: An Analysis of Waterfall to Agile Methodology


    Guide to Streamlining Series: Making Streamlining Stick

    You have decided to streamline your grantmaking process -- congratulations! Your organization could be just beginning to explore ways to make your application and reporting requirements less burdensome to grantees, or you might have a team deeply engaged in a change process already. This framework illustrates the four basic phases that many grantmakers move through as they streamline, and suggests activities and questions that can propel your process forward.

    SEED: efficient clustering of next-generation sequences.

    Motivation: Similarity clustering of next-generation sequences (NGS) is an important computational problem for studying the population sizes of DNA/RNA molecules and for reducing the redundancies in NGS data. Currently, most sequence clustering algorithms are limited by their speed and scalability, and thus cannot handle data with tens of millions of reads.
    Results: Here, we introduce SEED, an efficient algorithm for clustering very large NGS sets. It joins sequences into clusters that can differ by up to three mismatches and three overhanging residues from their virtual center. It is based on a modified spaced seed method, called block spaced seeds. Its clustering component operates on hash tables by first identifying virtual center sequences and then finding all their neighboring sequences that meet the similarity parameters. SEED can cluster 100 million short read sequences in <4 h with linear time and memory performance. When using SEED as a preprocessing tool on genome/transcriptome assembly data, it reduced the time and memory requirements of the Velvet/Oases assembler for the datasets used in this study by 60-85% and 21-41%, respectively. In addition, the assemblies contained longer contigs than non-preprocessed data, as indicated by 12-27% larger N50 values. Compared with other clustering tools, SEED showed the best performance in generating clusters of NGS data similar to true cluster results, with 2- to 10-fold better time performance. While most of SEED's utilities fall into the preprocessing area of NGS data, our tests also demonstrate its efficiency as a stand-alone tool for discovering clusters of small RNA sequences in NGS data from unsequenced organisms.
    Availability: The SEED software can be downloaded for free from http://manuals.bioinformatics.ucr.edu/home/
    Contact: [email protected]
    Supplementary information: Supplementary data are available at Bioinformatics online.
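    The core trick, hashing on the fixed positions of a spaced seed so that mismatches at the don't-care positions cannot change a sequence's bucket, can be sketched in a few lines. The mask, mismatch budget, and center selection below are simplified illustrations; SEED's block spaced seeds and its handling of overhanging residues are more elaborate.

```python
# Minimal sketch of spaced-seed bucketing: sequences are keyed on the "care"
# positions of a mask, then each bucket's first sequence acts as a virtual
# center and collects neighbours within the mismatch budget.
from collections import defaultdict

MASK = "110101101011"   # 1 = position used in the hash key, 0 = ignored
MAX_MISMATCHES = 3      # SEED additionally allows up to 3 overhanging residues

def seed_key(seq: str) -> str:
    """Project a read onto the care positions of the spaced seed."""
    return "".join(c for c, m in zip(seq, MASK) if m == "1")

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def cluster(reads):
    buckets = defaultdict(list)
    for r in reads:
        buckets[seed_key(r)].append(r)
    clusters = []
    for members in buckets.values():
        center, rest = members[0], members[1:]
        group = [center] + [r for r in rest if hamming(center, r) <= MAX_MISMATCHES]
        clusters.append((center, group))
    return clusters

reads = ["ACGTACGTACGT", "ACGTACGAACGT", "TTTTACGTACGT"]
for center, group in cluster(reads):
    print(center, "->", group)
# The first two reads differ only at a don't-care position, so they share a
# bucket and end up in one cluster; the third lands in its own bucket.
```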

    Employer ownership of skills: building the momentum


    The Coyote Universe I: Precision Determination of the Nonlinear Matter Power Spectrum

    Near-future cosmological observations targeted at investigations of dark energy pose stringent requirements on the accuracy of theoretical predictions for the clustering of matter. Currently, N-body simulations are the only viable approach to this problem. In this paper we demonstrate that N-body simulations can indeed be controlled well enough to fulfill these requirements for the needs of ongoing and near-future weak lensing surveys. By performing a large suite of cosmological simulation comparison and convergence tests, we show that results for the nonlinear matter power spectrum can be obtained at 1% accuracy out to k ~ 1 h/Mpc. The key components of these high-accuracy simulations are precise initial conditions, very large simulation volumes, sufficient mass resolution, and accurate time stepping. This paper is the first in a series of three, with the final aim of providing a high-accuracy prediction scheme for the nonlinear matter power spectrum.
    Comment: 18 pages, 22 figures, minor changes to address referee report
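    For readers unfamiliar with the quantity being converged on: the matter power spectrum is measured from a simulation snapshot by gridding the overdensity field, Fourier transforming it, and shell-averaging |delta_k|^2. The sketch below shows that pipeline on a toy Gaussian field; the grid size and box length are arbitrary, and real analyses also deconvolve the mass-assignment window and subtract shot noise.

```python
# Minimal sketch of a power-spectrum estimator: FFT a 3D overdensity grid and
# average the mode power in spherical shells of |k|.
import numpy as np

def power_spectrum(delta, box_size, n_bins=16):
    """delta: 3D overdensity grid; box_size in Mpc/h. Returns (k, P(k))."""
    n = delta.shape[0]
    dk = np.fft.fftn(delta) / n**3                         # normalised modes
    power = np.abs(dk) ** 2 * box_size**3                  # P(k) per mode
    freqs = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)  # k in h/Mpc
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    bins = np.linspace(k_mag[k_mag > 0].min(), k_mag.max(), n_bins + 1)
    idx = np.digitize(k_mag, bins)
    p_of_k = np.array([power.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                       for i in range(1, n_bins + 1)])
    k_mid = 0.5 * (bins[1:] + bins[:-1])
    return k_mid, p_of_k

# Toy usage on a random Gaussian field in a (100 Mpc/h)^3 box:
rng = np.random.default_rng(0)
delta = rng.normal(size=(32, 32, 32))
k, pk = power_spectrum(delta, box_size=100.0)
print(np.c_[k, pk][:4])  # first few (k, P(k)) pairs
```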