
    The application of plastic compression to modulate fibrin hydrogel mechanical properties.

    The inherent biocompatibility of fibrin hydrogels makes them an attractive material for use in a wide range of tissue engineering applications. Despite this, their relatively low stiffness and high compliance limit their potential for certain orthopaedic applications. Enhanced mechanical properties are desirable to withstand surgical handling and in vivo loading after implantation and, additionally, can provide important cues to cells seeded within the hydrogel. Standard methods used to enhance the mechanical properties of biological scaffolds, such as chemical or thermal crosslinking, cannot be used with fibrin hydrogels because cell seeding and gel formation occur simultaneously. The objective of this study was to investigate the use of plastic compression as a means to improve the mechanical properties of chondrocyte-seeded fibrin hydrogels and to determine the influence of such compression on cell viability within these constructs. It was found that the application of 80% strain to fibrin hydrogels for 30 min (which resulted in a permanent strain of 47.4%) produced a 2.1-fold increase in the subsequent compressive modulus. Additionally, chondrocyte viability was maintained in the plastically compressed gels, with significant cellular proliferation and extracellular matrix accumulation observed over 28 days of culture. In conclusion, plastic compression can be used to modulate the density and mechanical properties of cell-seeded fibrin hydrogels and represents a useful tool for both in theatre and in vitro tissue engineering applications.

    Parametric study of EEG sensitivity to phase noise during face processing

    Background: The present paper examines the visual processing speed of complex objects, here faces, by mapping the relationship between object physical properties and single-trial brain responses. Measuring visual processing speed is challenging because uncontrolled physical differences that co-vary with object categories might affect brain measurements, thus biasing our speed estimates. Recently, we demonstrated that early event-related potential (ERP) differences between faces and objects are preserved even when images differ only in phase information, and amplitude spectra are equated across image categories. Here, we use a parametric design to study how early ERPs to faces are shaped by phase information. Subjects performed a two-alternative forced-choice discrimination between two faces (Experiment 1) or textures (two control experiments). All stimuli had the same amplitude spectrum and were presented at 11 phase noise levels, varying from 0% to 100% in 10% increments, using a linear phase interpolation technique. Single-trial ERP data from each subject were analysed using a multiple linear regression model. Results: Our results show that sensitivity to phase noise in faces emerges progressively in a short time window between the P1 and the N170 ERP visual components. The sensitivity to phase noise starts at about 120–130 ms after stimulus onset and continues for another 25–40 ms. This result was robust both within and across subjects. A control experiment using pink noise textures, which had the same second-order statistics as the faces used in Experiment 1, demonstrated that the sensitivity to phase noise observed for faces cannot be explained by the presence of global image structure alone. A second control experiment used wavelet textures that were matched to the face stimuli in terms of second- and higher-order image statistics. Results from this experiment suggest that higher-order statistics of faces are necessary but not sufficient to obtain the sensitivity to phase noise function observed in response to faces. Conclusion: Our results constitute the first quantitative assessment of the time course of phase information processing by the human visual brain. We interpret our results in a framework that focuses on image statistics and single-trial analyses.
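The linear phase interpolation named in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the study's actual stimulus-generation code: the function name `phase_noise_stimulus` and the choice of a uniform random phase field are ours. The key idea is that the amplitude spectrum is held fixed while the phase spectrum is linearly blended with random phase.

```python
import numpy as np

def phase_noise_stimulus(img, noise_frac, rng=None):
    """Blend an image's phase spectrum with random phase while keeping
    the amplitude spectrum fixed (linear phase interpolation).
    noise_frac: 0.0 (original image) .. 1.0 (pure phase noise)."""
    rng = np.random.default_rng(rng)
    spec = np.fft.fft2(img)
    amp = np.abs(spec)                       # amplitude spectrum, held constant
    phase = np.angle(spec)
    rand_phase = rng.uniform(-np.pi, np.pi, size=img.shape)
    mixed = (1.0 - noise_frac) * phase + noise_frac * rand_phase
    return np.fft.ifft2(amp * np.exp(1j * mixed)).real

# one stimulus per noise level: 0% to 100% in 10% increments, as in the study
img = np.random.default_rng(0).standard_normal((64, 64))
levels = [phase_noise_stimulus(img, w / 10, rng=1) for w in range(11)]
```

At 0% noise the original image is recovered exactly; at 100% only the amplitude spectrum (second-order statistics) survives, which is what makes the pink-noise control comparison meaningful.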

    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event related potential related to stimulus encoding and the parietal P300 involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion and report two main findings: (1) Over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) Concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, to leave only the detailed information important for perceptual decisions over the P300.

    Modulating gradients in regulatory signals within mesenchymal stem cell seeded hydrogels: a novel strategy to engineer zonal articular cartilage.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Engineering organs and tissues with the spatial composition and organisation of their native equivalents remains a major challenge. One approach to engineer such spatial complexity is to recapitulate the gradients in regulatory signals that during development and maturation are believed to drive spatial changes in stem cell differentiation. Mesenchymal stem cell (MSC) differentiation is known to be influenced by both soluble factors and mechanical cues present in the local microenvironment. The objective of this study was to engineer a cartilaginous tissue with a native zonal composition by modulating both the oxygen tension and mechanical environment through the depth of MSC seeded hydrogels. To this end, constructs were radially confined to half their thickness and subjected to dynamic compression (DC). Confinement reduced oxygen levels in the bottom of the construct and, with the application of DC, increased strains across the top of the construct. These spatial changes correlated with increased glycosaminoglycan accumulation in the bottom of constructs, increased collagen accumulation in the top of constructs, and a suppression of hypertrophy and calcification throughout the construct. Matrix accumulation increased for higher hydrogel cell seeding densities, with DC further enhancing both glycosaminoglycan accumulation and construct stiffness. The combination of spatial confinement and DC was also found to increase proteoglycan-4 (lubricin) deposition toward the top surface of these tissues. In conclusion, by modulating the environment through the depth of developing constructs, it is possible to suppress MSC endochondral progression and to engineer tissues with zonal gradients mimicking certain aspects of articular cartilage. Funding was provided by Science Foundation Ireland (President of Ireland Young Researcher Award: 08/Y15/B1336) and the European Research Council (StemRepair – Project number 258463).

    A re-randomisation design for clinical trials

    Background: Recruitment to clinical trials is often problematic, with many trials failing to recruit to their target sample size. As a result, patient care may be based on suboptimal evidence from underpowered trials or non-randomised studies. Methods: For many conditions patients will require treatment on several occasions, for example, to treat symptoms of an underlying chronic condition (such as migraines, where treatment is required each time a new episode occurs), or until they achieve treatment success (such as fertility, where patients undergo treatment on multiple occasions until they become pregnant). We describe a re-randomisation design for these scenarios, which allows each patient to be independently randomised on multiple occasions. We discuss the circumstances in which this design can be used. Results: The re-randomisation design will give asymptotically unbiased estimates of treatment effect and correct type I error rates under the following conditions: (a) patients are only re-randomised after the follow-up period from their previous randomisation is complete; (b) randomisations for the same patient are performed independently; and (c) the treatment effect is constant across all randomisations. Provided the analysis accounts for correlation between observations from the same patient, this design will typically have higher power than a parallel group trial with an equivalent number of observations. Conclusions: If used appropriately, the re-randomisation design can increase the recruitment rate for clinical trials while still providing an unbiased estimate of treatment effect and correct type I error rates. In many situations, it can increase the power compared to a parallel group design with an equivalent number of observations.
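Conditions (a)–(c) above can be illustrated with a small simulation. This is a hypothetical sketch, not code from the paper: `simulate_trial` and its parameters are invented for illustration. It checks only the unbiasedness claim, i.e. that a simple difference-in-means estimate recovers the true effect when each episode is independently randomised and the treatment effect is constant, even though observations from the same patient are correlated.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_trial(n_patients=500, max_episodes=3, effect=1.0):
    """One re-randomised trial: each patient is randomised independently
    on each of several episodes (condition b), with a constant treatment
    effect across randomisations (condition c)."""
    arms, outcomes = [], []
    for _ in range(n_patients):
        u = rng.normal(0.0, 1.0)              # patient-level random effect
        n_ep = rng.integers(1, max_episodes + 1)
        for _ in range(n_ep):                 # prior follow-up complete (a)
            t = rng.integers(0, 2)            # fresh 1:1 randomisation
            y = u + effect * t + rng.normal(0.0, 1.0)
            arms.append(t)
            outcomes.append(y)
    arms, outcomes = np.array(arms), np.array(outcomes)
    return outcomes[arms == 1].mean() - outcomes[arms == 0].mean()

# average estimate over replicate trials approaches the true effect of 1.0
est = np.mean([simulate_trial() for _ in range(200)])
```

Note that the correlation induced by the shared patient effect `u` inflates the variance of a naive analysis, not its bias; a real analysis would use cluster-robust standard errors or a mixed model, as the abstract indicates.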

    Biospecimen Reporting for Improved Study Quality

    Human biospecimens are subject to a number of different collection, processing, and storage factors that can significantly alter their molecular composition and consistency. These biospecimen preanalytical factors, in turn, influence experimental outcomes and the ability to reproduce scientific results. Currently, the extent and type of information specific to the biospecimen preanalytical conditions reported in scientific publications and regulatory submissions varies widely. To improve the quality of research utilizing human tissues, it is critical that information regarding the handling of biospecimens be reported in a thorough, accurate, and standardized manner. The Biospecimen Reporting for Improved Study Quality recommendations outlined herein are intended to apply to any study in which human biospecimens are used. The purpose of reporting these details is to supply others, from researchers to regulators, with more consistent and standardized information to better evaluate, interpret, compare, and reproduce the experimental results. The Biospecimen Reporting for Improved Study Quality guidelines are proposed as an important and timely resource tool to strengthen communication and publications around biospecimen-related research and help reassure patient contributors and the advocacy community that the contributions are valued and respected.
    Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/90474/1/bio-2E2010-2E0036.pd

    A Long Baseline Neutrino Oscillation Experiment Using J-PARC Neutrino Beam and Hyper-Kamiokande

    Document submitted to 18th J-PARC PAC meeting in May 2014. 50 pages, 41 figures. Hyper-Kamiokande will be a next generation underground water Cherenkov detector with a total (fiducial) mass of 0.99 (0.56) million metric tons, approximately 20 (25) times larger than that of Super-Kamiokande. One of the main goals of Hyper-Kamiokande is the study of CP asymmetry in the lepton sector using accelerator neutrino and anti-neutrino beams. In this document, the physics potential of a long baseline neutrino experiment using the Hyper-Kamiokande detector and a neutrino beam from the J-PARC proton synchrotron is presented. The analysis has been updated from the previous Letter of Intent [K. Abe et al., arXiv:1109.3262 [hep-ex]], based on the experience gained from the ongoing T2K experiment. With a total exposure of 7.5 MW × 10^7 sec integrated proton beam power (corresponding to 1.56×10^22 protons on target with a 30 GeV proton beam) to a 2.5-degree off-axis neutrino beam produced by the J-PARC proton synchrotron, it is expected that the CP phase δ_CP can be determined to better than 19 degrees for all possible values of δ_CP, and CP violation can be established with a statistical significance of more than 3σ (5σ) for 76% (58%) of the δ_CP parameter space.
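The quoted protons-on-target figure follows directly from the stated beam power, integrated run time, and proton energy. A quick arithmetic check (illustrative only; variable names are ours):

```python
# Check the quoted exposure: 7.5 MW of beam power over 10^7 s of 30 GeV protons.
beam_power_w = 7.5e6                          # 7.5 MW
run_time_s = 1.0e7                            # 10^7 s integrated
proton_energy_j = 30e9 * 1.602176634e-19      # 30 GeV converted to joules

protons_on_target = beam_power_w * run_time_s / proton_energy_j
print(f"{protons_on_target:.2e}")             # prints 1.56e+22, matching the abstract
```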

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading operative difficulty of laparoscopic cholecystectomy would standardise description of findings and reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and the single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall’s tau for dichotomous variables, or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6 to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was found to be most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903, 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was found to be a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
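The AUROC values reported above summarise how well the ordinal difficulty grade separates binary outcomes: an AUROC of 0.903 means a converted case has a higher grade than an unconverted one about 90% of the time. As an illustration on toy data (not from either cohort), the rank-based AUROC can be computed as:

```python
import numpy as np

def auroc(grades, outcome):
    """Rank-based AUROC: the probability that a randomly chosen case with
    the outcome carries a higher grade than one without (ties count half)."""
    grades = np.asarray(grades, dtype=float)
    outcome = np.asarray(outcome, dtype=bool)
    pos, neg = grades[outcome], grades[~outcome]
    wins = (pos[:, None] > neg[None, :]).sum()   # concordant pairs
    ties = (pos[:, None] == neg[None, :]).sum()  # tied grades
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# toy data: conversion to open surgery becomes likelier at higher grades
grades  = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
convert = [0, 0, 0, 0, 0, 1, 0, 1, 1, 1]
print(round(auroc(grades, convert), 3))          # prints 0.917
```

This pairwise definition is equivalent to the area under the empirical ROC curve and handles an ordinal predictor with ties directly, which is why it suits a 1–5 difficulty scale.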