
    Nanometer scale thermal response of polymers to fast thermal perturbations

    The nanometer scale thermal response of polymers to fast thermal perturbations is described by linear integro-differential equations with dynamic heat capacity. An exact analytical solution for the non-equilibrium thermal response of polymers in plane and spherical geometry is obtained without numerical (finite element) calculations. The solution differs from the iterative method presented in a previous publication and provides analytical relationships for the fast thermal response of polymers even in the limit t → 0, where applying the iterative procedure is very problematic; both methods, however, give the same result. It was found that even fast (ca. 1 ns) components of the dynamic heat capacity greatly enhance the thermal response to local thermal perturbations. The non-equilibrium and non-linear thermal response of typical polymers under pulse heating is determined with relaxation parameters corresponding to polystyrene and poly(methyl methacrylate). The obtained results can be used to analyze the heat transfer process at the early stages of crystallization with fast formation of nanometer scale crystals.
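
    The abstract does not reproduce the governing equations. As a purely illustrative sketch, a one-dimensional heat conduction equation with the dynamic heat capacity entering through a single-relaxation-time memory kernel (an assumed form, not necessarily the one used in the paper) can be written in LaTeX as:

        % Heat conduction with dynamic heat capacity as a memory integral (assumed form).
        % c_0: instantaneous heat capacity, \Delta c: relaxing part, \tau: relaxation time,
        % \rho: density, \kappa: thermal conductivity.
        \rho c_0 \frac{\partial T(x,t)}{\partial t}
          + \rho \int_{0}^{t} \frac{\Delta c}{\tau}\, e^{-(t-t')/\tau}\,
            \frac{\partial T(x,t')}{\partial t'}\, \mathrm{d}t'
          = \kappa \frac{\partial^{2} T(x,t)}{\partial x^{2}}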

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale using Kendall’s tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6% to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be used in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
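
    As a minimal sketch of the type of analysis described, the snippet below, using entirely hypothetical data rather than the CholeS or single-surgeon cohorts, correlates an ordinal difficulty grade with a binary outcome via Kendall's tau and quantifies predictive accuracy with the area under the ROC curve:

        # Illustrative sketch only: hypothetical data, not the study datasets.
        import numpy as np
        from scipy.stats import kendalltau
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        grade = rng.integers(1, 6, size=500)          # difficulty grade 1-5
        p_event = np.clip(0.05 * grade, 0, 1)         # assumed risk rising with grade
        conversion = rng.binomial(1, p_event)         # hypothetical conversion to open

        tau, p_value = kendalltau(grade, conversion)  # association with the ordinal grade
        auroc = roc_auc_score(conversion, grade)      # predictive accuracy of the grade
        print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3g}), AUROC = {auroc:.2f}")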

    Clinicopathologic study associated with long-term survival in Japanese patients with node-negative breast cancer

    This study was undertaken to determine the absolute and relative value of blood vessel invasion (BVI), assessed using both factor VIII-related antigen and elastica van Gieson staining, proliferating cell nuclear antigen (PCNA), p53, c-erbB-2 and conventional prognostic factors in predicting relapse-free survival (RFS) and overall survival (OS) associated with long-term survival in Japanese patients with node-negative breast cancer. Two hundred patients with histologically node-negative breast cancer were studied. We investigated nine clinicopathological factors, including PCNA, p53 and c-erbB-2 assessed by permanent-section immunohistochemistry, clinical tumour size (T), histological grade (HG), mitotic index (MI), tumour necrosis (TN), lymphatic vessel invasion (LVI) and BVI, with follow-up for a median of 10 years (range 1–20). Twenty-one patients (10.5%) had recurrence and 15 patients (7.5%) died of breast cancer. Univariate analysis showed that BVI, PCNA, T, HG, MI, p53, c-erbB-2 and LVI were significantly predictive of 20-year RFS or OS. Multivariate analysis showed that BVI (P = 0.0159, P = 0.0368), PCNA (P = 0.0165, P = 0.0001) and T (P = 0.0190, P = 0.0399) were significant independent prognostic factors for RFS and OS, respectively. BVI, PCNA and T were independent prognostic indicators for RFS or OS in Japanese patients with node-negative breast cancer and are useful in selecting high-risk patients who may be eligible to receive strong adjuvant therapies.
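
    The abstract does not name the survival model used for the multivariable analysis. Assuming a Cox proportional hazards fit, a common choice for RFS/OS analysis, a minimal sketch with fabricated example data (using the lifelines package) might look like:

        # Illustrative sketch with fabricated data; the Cox model is an assumption,
        # not confirmed by the abstract. Columns mirror factors listed in the study.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(1)
        n = 200
        df = pd.DataFrame({
            "time_years": rng.uniform(1, 20, n),       # follow-up time
            "event": rng.binomial(1, 0.1, n),          # recurrence or death indicator
            "BVI": rng.binomial(1, 0.3, n),            # blood vessel invasion (0/1)
            "PCNA": rng.uniform(0, 100, n),            # PCNA labelling index
            "tumour_size_cm": rng.uniform(0.5, 5, n),  # clinical tumour size (T)
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time_years", event_col="event")
        cph.print_summary()                            # hazard ratios and p-values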

    The Stakes in Bayh-Dole: Public Values Beyond the Pace of Innovation

    Evaluation studies of the Bayh-Dole Act are generally concerned with the pace of innovation or with transgressions against the independence of research. While these concerns are important, I propose here to expand the range of public values considered in assessing Bayh-Dole and formulating future reforms. To this end, I first examine the changes in the terms of the Bayh-Dole debate and the drift in its design. Neoliberal ideas have had a definitive influence on U.S. innovation policy for the last thirty years, including legislation to strengthen patent protection. Moreover, the neoliberal policy agenda is articulated and justified in the interest of “competitiveness.” Rhetorically, this agenda equates competitiveness with economic growth, and economic growth with the public interest. Against that backdrop, I use Public Value Failure criteria to show that values such as political equality, transparency and fairness in the distribution of the benefits of innovation are worth considering to counter the “policy drift” of Bayh-Dole.

    The Role of Regional Knowledge Production in University Technology Transfer: Isolating Coevolutionary Effects

    The rate and magnitude of university-to-industry technology transfer (UITT) is a function not only of university characteristics but also of regional factors: a university's embeddedness in an innovative regional milieu moderates UITT. This necessary balance of the supply side (technology push) and demand side (market pull) of technology transfer has so far neither been systematically addressed in the technology transfer literature nor been acknowledged by policy makers. We investigate UITT as a function of the interrelation between the industrial innovative milieu of a region and the characteristics of regional universities in order to identify the impact of industry on UITT. We thereby aim not only to reduce the existing empirical gap in the academic entrepreneurship literature but also to inform policy in its attempt to foster UITT in European regions.

    Graph Theoretical Analysis of Functional Brain Networks: Test-Retest Evaluation on Short- and Long-Term Resting-State Functional MRI Data

    Graph-based computational network analysis has proven to be a powerful tool for quantitatively characterizing functional architectures of the brain. However, the test-retest (TRT) reliability of graph metrics of functional networks has not been systematically examined. Here, we investigated the TRT reliability of topological metrics of functional brain networks derived from resting-state functional magnetic resonance imaging data. Specifically, we evaluated both short-term (<1 hour apart) and long-term (>5 months apart) TRT reliability for 12 global and 6 local nodal network metrics. We found that the reliability of global network metrics was overall low, threshold-sensitive and dependent on several factors: scanning time interval (TI, long-term > short-term), network membership (NM, networks excluding negative correlations > networks including negative correlations) and network type (NT, binarized networks > weighted networks). This dependence was modulated by a further factor, the node definition (ND) strategy. Local nodal reliability exhibited large variability across nodal metrics and a spatially heterogeneous distribution. Nodal degree was the most reliable metric and varied the least across the factors above. Hub regions in association and limbic/paralimbic cortices showed moderate TRT reliability. Importantly, nodal reliability was robust to the four factors mentioned above. Simulation analysis revealed that global network metrics were extremely sensitive, though to varying degrees, to noise in functional connectivity, and that weighted networks generated more reliable results than binarized networks. Nodal network metrics, in contrast, showed high resistance to noise in functional connectivity, with no NT-related differences in resistance. These findings provide important guidance on how to choose reliable analytical schemes and network metrics of interest.
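
    As a minimal sketch of one step in such a pipeline, using random numbers in place of resting-state fMRI time series, the snippet below builds a functional connectivity matrix, binarizes it at an arbitrary threshold and computes nodal degree with networkx:

        # Illustrative sketch only: random data stand in for fMRI time series.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(2)
        timeseries = rng.standard_normal((180, 90))  # 180 volumes x 90 hypothetical nodes
        fc = np.corrcoef(timeseries.T)               # node-by-node correlation matrix
        np.fill_diagonal(fc, 0)

        threshold = 0.3                              # arbitrary threshold for illustration
        adjacency = (fc > threshold).astype(int)     # binarized, positive-correlation network
        G = nx.from_numpy_array(adjacency)

        nodal_degree = dict(G.degree())              # the most reliable nodal metric per the study
        print(sorted(nodal_degree.items(), key=lambda kv: kv[1], reverse=True)[:5])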

    Cytokine preconditioning of engineered cartilage provides protection against interleukin-1 insult

    Research reported in this publication was supported in part by the National Institute of Arthritis and Musculoskeletal and Skin Diseases and the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under Award Numbers R01AR60361, R01AR061988 and P41EB002520. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. ART was supported by a National Science Foundation Graduate Fellowship.

    The Effects of Technology-as-Knowledge on the Economic Performance of Developing Countries: An Econometric Analysis using Annual Publications Data for Botswana, Namibia, and South Africa, 1976-2004

    Extant literature indicates that technology, and by implication its underlying knowledge base, determines long-run economic performance. Absent from the literature with respect to developing countries are quantitative assessments of the nexus between technology as knowledge and economic performance. This paper imposes a simple production function on annual pooled observations for Botswana, Namibia, and South Africa over the 1976-2004 period to estimate the marginal impacts of technology as knowledge on economic performance. It finds that capital (k), openness to trade (τ), and even the share of government expenditure in GDP (G), among other factors, influence economic performance. However, the economic performance of countries like Botswana, Namibia, and South Africa depends largely on technology, technological change, and the basic knowledge that forms the foundation for both. For instance, measured as a homogeneous “manna from heaven”, technology is the strongest determinant of real per capita income in the three nations. The strength of technology as a determinant of performance depends on the knowledge underpinnings of technology, measured as the number of publications (Q, q). Both Q and q are strongly correlated with the countries’ performance. This suggests that the “social capability” and “technological congruence” of these countries are improving, and that developing countries like Botswana, Namibia, and South Africa gain from increased investment in knowledge-building activities, including publishing. There is clearly room for strengthening these results, but the analysis has succeeded in producing a testable hypothesis.
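
    The abstract does not state the exact specification; a Cobb-Douglas-style panel regression with a publications-based knowledge term is one plausible (assumed) form:

        % Assumed log-linear specification; symbols follow the abstract:
        % y: real per capita income, k: capital, \tau: openness to trade,
        % G: government expenditure share of GDP, Q: publication count, A: technology level.
        \ln y_{it} = \ln A + \alpha \ln k_{it} + \beta \tau_{it} + \gamma G_{it}
                   + \delta \ln Q_{it} + \varepsilon_{it}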