
    The combined immunohistochemical expression of AMBRA1 and SQSTM1 identifies patients with poorly differentiated cutaneous squamous cell carcinoma at risk of metastasis: A proof of concept study

    © 2024 AMLo Biosciences Ltd. Journal of Cutaneous Pathology published by John Wiley & Sons Ltd.
    Background: Cutaneous squamous cell carcinoma (cSCC) incidence continues to increase globally with, as yet, an unmet need for reliable prognostic biomarkers to identify patients at increased risk of metastasis. The aim of the present study was to test the prognostic potential of the combined immunohistochemical expression of the autophagy regulatory biomarkers AMBRA1 and SQSTM1 to identify high-risk patient subsets. Methods: A retrospective cohort of 68 formalin-fixed, paraffin-embedded primary cSCCs with known 5-year metastatic outcomes was subjected to automated immunohistochemical staining for AMBRA1 and SQSTM1. Digital images of stained slides were annotated to define four regions of interest: the normal epidermis, the peritumoral epidermis, the tumor mass, and the tumor growth front. H-score analysis was used to semi-quantify AMBRA1 or SQSTM1 expression in each region of interest using Aperio ImageScope software, with receiver operating characteristic and Kaplan–Meier analyses used to assess prognostic potential. Results: The combined loss of AMBRA1 expression in the tumor growth front and SQSTM1 expression in the peritumoral epidermis identified patients with poorly differentiated cSCCs at risk of metastasis (p < 0.05). Conclusions: Collectively, these proof-of-concept data suggest combined loss of AMBRA1 expression in the cSCC growth front and SQSTM1 expression in the peritumoral epidermis as a putative prognostic biomarker for poorly differentiated cSCC.
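    The H-score used above is conventionally computed as the percentage of cells at each staining intensity (0 = negative to 3 = strong) weighted by that intensity, giving a value between 0 and 300. A minimal sketch of that formula, assuming manually supplied percentages (not the authors' actual Aperio ImageScope pipeline):

```python
def h_score(pct_by_intensity):
    """Compute an immunohistochemistry H-score.

    pct_by_intensity maps staining intensity (0=negative, 1=weak,
    2=moderate, 3=strong) to the percentage of cells at that
    intensity. Percentages must sum to 100; the score ranges 0-300.
    """
    if abs(sum(pct_by_intensity.values()) - 100) > 1e-6:
        raise ValueError("percentages must sum to 100")
    return sum(intensity * pct for intensity, pct in pct_by_intensity.items())

# Example: 10% negative, 40% weak, 30% moderate, 20% strong
score = h_score({0: 10, 1: 40, 2: 30, 3: 20})  # 0*10 + 1*40 + 2*30 + 3*20 = 160
```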

    Plugging a hole and lightening the burden: A process evaluation of a practice education team

    Aim: To investigate the perceptions of clinical and senior managers about the role of Practice Educators employed in one acute hospital in the UK. Background: Producing nurses who are fit for practice, purpose and academic award is a key issue for nurse education partnership providers in the UK. Various new models for practice learning support structures and new roles within health care institutions have been established. To sustain funding and policy support for these models, there is a need for evaluation research. Design: A process evaluation methodology was employed to determine the current value of a practice education team and to provide information to guide future direction. Methods: Data were collected through semi-structured telephone interviews using a previously designed schedule. All senior nurse managers (N=5) and a purposive sample of clinical managers (n=13) who had personal experience of and perceptions about the role of practice educators provided the data. Interview notes were transcribed, coded and a thematic framework devised to present the results. Results: A number of key themes emerged, including: qualities needed for being a successful practice educator; visibility and presence of practice educators; providing a link with the university; ‘plugging a hole’ in supporting learning needs; providing relief to practitioners in dealing with ‘the burden of students’; alleviating the ‘plight of students’; and effects on student attrition. Conclusions: Findings provided evidence for the continued funding of the practice educator role, with improvements to be made in dealing with stakeholder expectations and outcomes. Relevance to clinical practice: In the UK, concerns still remain about the fitness for practice of newly registered nurses, prompting a recent national consultation by the professional regulating body. Despite fiscal pressures, recommendations for further strengthening of all systems that support the quality of practice learning may continue to sustain practice learning support roles.

    Trans-ethnic study design approaches for fine-mapping.

    Studies that traverse ancestrally diverse populations may increase power to detect novel loci and improve fine-mapping resolution of causal variants by leveraging linkage disequilibrium differences between ethnic groups. The inclusion of African ancestry samples may yield further improvements because of low linkage disequilibrium and high genetic heterogeneity. We investigate the fine-mapping resolution of trans-ethnic fixed-effects meta-analysis for five type II diabetes loci, under various settings of ancestral composition (European, East Asian, African), allelic heterogeneity, and causal variant minor allele frequency. In particular, three settings of ancestral composition were compared: (1) single ancestry (European), (2) moderate ancestral diversity (European and East Asian), and (3) high ancestral diversity (European, East Asian, and African). Our simulations suggest that the European/East Asian and European-only meta-analyses consistently attain similar fine-mapping resolution. The inclusion of African ancestry samples in the meta-analysis leads to a marked improvement in fine-mapping resolution.
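    The fixed-effects meta-analysis described above pools per-ancestry effect estimates by inverse-variance weighting. A minimal sketch of that standard calculation (the function name and the example effect sizes are illustrative, not taken from the study):

```python
import math

def fixed_effects_meta(betas, ses):
    """Inverse-variance-weighted fixed-effects meta-analysis.

    betas: per-study effect-size estimates for one variant.
    ses:   corresponding standard errors.
    Returns the pooled effect, its standard error, and the z-score.
    """
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se, pooled / pooled_se

# Combine hypothetical European, East Asian, and African ancestry estimates
beta, se, z = fixed_effects_meta([0.12, 0.15, 0.10], [0.03, 0.05, 0.04])
```

    The pooled estimate is dominated by the most precise (largest) study, which is why adding African ancestry samples mainly helps fine-mapping through their distinct linkage disequilibrium rather than through the pooled effect size itself.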

    Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses 20.3 fb−1 of √s = 8 TeV data collected in 2012 with the ATLAS detector at the LHC. Events are required to have at least one jet with pT > 120 GeV and no leptons. Nine signal regions are considered with increasing missing transverse momentum requirements between E_T^miss > 150 GeV and E_T^miss > 700 GeV. Good agreement is observed between the number of events in data and Standard Model expectations. The results are translated into exclusion limits on models with either large extra spatial dimensions, pair production of weakly interacting dark matter candidates, or production of very light gravitinos in a gauge-mediated supersymmetric model. In addition, limits on the production of an invisibly decaying Higgs-like boson leading to similar topologies in the final state are presented.

    Observation of associated near-side and away-side long-range correlations in √sNN=5.02  TeV proton-lead collisions with the ATLAS detector

    Two-particle correlations in relative azimuthal angle (Δϕ) and pseudorapidity (Δη) are measured in √sNN = 5.02 TeV p+Pb collisions using the ATLAS detector at the LHC. The measurements are performed using approximately 1 μb−1 of data as a function of transverse momentum (pT) and the transverse energy (ΣE_T^Pb) summed over 3.1 < η < 4.9 in the direction of the Pb beam. The correlation function, constructed from charged particles, exhibits a long-range (2 < |Δη| < 5) “near-side” (Δϕ ∼ 0) correlation that grows rapidly with increasing ΣE_T^Pb. A long-range “away-side” (Δϕ ∼ π) correlation, obtained by subtracting the expected contributions from recoiling dijets and other sources estimated using events with small ΣE_T^Pb, is found to match the near-side correlation in magnitude, shape (in Δη and Δϕ) and ΣE_T^Pb dependence. The resultant Δϕ correlation is approximately symmetric about π/2, and is consistent with a dominant cos 2Δϕ modulation for all ΣE_T^Pb ranges and particle pT.

    Measurement of the cross-section of high transverse momentum vector bosons reconstructed as single jets and studies of jet substructure in pp collisions at √s = 7 TeV with the ATLAS detector

    This paper presents a measurement of the cross-section for high transverse momentum W and Z bosons produced in pp collisions and decaying to all-hadronic final states. The data used in the analysis were recorded by the ATLAS detector at the CERN Large Hadron Collider at a centre-of-mass energy of √s = 7 TeV and correspond to an integrated luminosity of 4.6 fb−1. The measurement is performed by reconstructing the boosted W or Z bosons in single jets. The reconstructed jet mass is used to identify the W and Z bosons, and a jet substructure method based on energy cluster information in the jet centre-of-mass frame is used to suppress the large multi-jet background. The cross-section for events with a hadronically decaying W or Z boson, with transverse momentum pT > 320 GeV and pseudorapidity |η| < 1.9, is measured to be σ_{W+Z} = 8.5 ± 1.7 pb and is compared to next-to-leading-order calculations. The selected events are further used to study jet grooming techniques.

    Search for pair-produced long-lived neutral particles decaying to jets in the ATLAS hadronic calorimeter in pp collisions at √s = 8 TeV

    The ATLAS detector at the Large Hadron Collider at CERN is used to search for the decay of a scalar boson to a pair of long-lived particles, neutral under the Standard Model gauge group, in 20.3 fb−1 of data collected in proton–proton collisions at √s = 8 TeV. This search is sensitive to long-lived particles that decay to Standard Model particles producing jets at the outer edge of the ATLAS electromagnetic calorimeter or inside the hadronic calorimeter. No significant excess of events is observed. Limits are reported on the product of the scalar boson production cross section times branching ratio into long-lived neutral particles as a function of the proper lifetime of the particles. Limits are reported for boson masses from 100 GeV to 900 GeV, and a long-lived neutral particle mass from 10 GeV to 150 GeV.

    Preparation of name and address data for record linkage using hidden Markov models

    BACKGROUND: Record linkage refers to the process of joining records that relate to the same entity or event in one or more data collections. In the absence of a shared, unique key, record linkage involves the comparison of ensembles of partially identifying, non-unique data items between pairs of records. Data items with variable formats, such as names and addresses, need to be transformed and normalised in order to validly carry out these comparisons. Traditionally, deterministic rule-based data processing systems have been used to carry out this pre-processing, which is commonly referred to as "standardisation". This paper describes an alternative approach to standardisation, using a combination of lexicon-based tokenisation and probabilistic hidden Markov models (HMMs). METHODS: HMMs were trained to standardise typical Australian name and address data drawn from a range of health data collections. The accuracy of the results was compared to that produced by rule-based systems. RESULTS: Training of HMMs was found to be quick and did not require any specialised skills. For addresses, HMMs produced equal or better standardisation accuracy than a widely-used rule-based system. However, accuracy was worse when used with simpler name data. Possible reasons for this poorer performance are discussed. CONCLUSION: Lexicon-based tokenisation and HMMs provide a viable and effort-effective alternative to rule-based systems for pre-processing more complex variably formatted data such as addresses. Further work is required to improve the performance of this approach with simpler data such as names. Software which implements the methods described in this paper is freely available under an open source license for other researchers to use and improve.
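    The approach described above has two stages: a lexicon maps each raw token to an observation class, and an HMM (decoded with the Viterbi algorithm) assigns each token its most likely hidden segment label. A toy sketch of that pipeline, where the lexicon, states, and all probabilities are made-up illustrations rather than the paper's trained model:

```python
# Hypothetical lexicon: map raw tokens to observation classes
LEXICON = {"42": "number", "main": "word", "street": "street_type",
           "st": "street_type", "rd": "street_type"}

def tokenise(address):
    """Lexicon-based tokenisation: raw string -> observation classes."""
    return [LEXICON.get(tok.strip(",.").lower(), "word")
            for tok in address.split()]

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for the observations."""
    best = {s: (start_p.get(s, 1e-9) * emit_p[s].get(obs[0], 1e-9), [s])
            for s in states}
    for o in obs[1:]:
        best = {s: max(((p * trans_p[prev].get(s, 1e-9)
                         * emit_p[s].get(o, 1e-9), path + [s])
                        for prev, (p, path) in best.items()),
                       key=lambda t: t[0])
                for s in states}
    return max(best.values(), key=lambda t: t[0])[1]

# Illustrative three-segment address HMM (probabilities are invented)
states = ["house_no", "street_name", "street_type"]
start_p = {"house_no": 0.8, "street_name": 0.15, "street_type": 0.05}
trans_p = {"house_no": {"street_name": 0.9, "street_type": 0.1},
           "street_name": {"street_name": 0.3, "street_type": 0.7},
           "street_type": {"street_type": 1.0}}
emit_p = {"house_no": {"number": 0.9, "word": 0.1},
          "street_name": {"word": 0.9, "number": 0.1},
          "street_type": {"street_type": 0.95, "word": 0.05}}

labels = viterbi(tokenise("42 Main Street"), states, start_p, trans_p, emit_p)
# labels -> ["house_no", "street_name", "street_type"]
```

    In the paper's setting the transition and emission probabilities are learned from hand-labelled training sequences by counting, which is why training was reported to be quick and to need no specialised skills.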

    Recommendations for increasing the use of HIV/AIDS resource allocation models

    The article of record as published may be found at: http://dx.doi.org/10.1186/1471-2458-9-S1-S8
    Background: Resource allocation models have not had a substantial impact on HIV/AIDS resource allocation decisions in spite of the important, additional insights they may provide. In this paper, we highlight six difficulties often encountered in attempts to implement such models in policy settings; these are: model complexity, data requirements, multiple stakeholders, funding issues, political considerations, and ethical considerations. We then make recommendations as to how each of these difficulties may be overcome. Results: To ensure that models can inform the actual decision, modellers should understand the environment in which decision-makers operate, including full knowledge of the stakeholders' key issues and requirements. HIV/AIDS resource allocation model formulations should be contextualized and sensitive to societal concerns and decision-makers' realities. Modellers should provide the required education and training materials in order for decision-makers to be reasonably well versed in understanding the capabilities, power and limitations of the model. Conclusion: This paper addresses the issue of knowledge translation from the established resource allocation modelling expertise in the academic realm to that of policymaking.