    Characterisation of the carboxypeptidase G2 catalytic site and design of new inhibitors for cancer therapy

    The enzyme carboxypeptidase G2 (CPG2) is used in antibody‐directed enzyme prodrug therapy (ADEPT) to catalyse the formation of an active drug from an inert prodrug. Free CPG2 in the bloodstream must be inhibited before administration of the prodrug in order to avoid a systemic reaction in the patient. Although a few small‐molecule CPG2 inhibitors have been reported, none has been taken forward thus far. This lack of progress is due in part to a limited structural understanding of the CPG2 active site, as well as the absence of small molecules that can block the active site whilst targeting the complex for clearance. The work described here aimed to address both areas. We report the structural/functional impact of extensive point mutation across the putative CPG2 catalytic site and adjacent regions for the first time, revealing that residues outside the catalytic region (K208A, S210A and T357A) are crucial to enzyme activity. We also describe novel molecules that inhibit CPG2 whilst maintaining the accessibility of galactosylated moieties aimed at targeting the enzyme for clearance. This work acts as a platform for the future development of high‐affinity CPG2 inhibitors that occupy new chemical space and will advance the safe application of ADEPT in cancer treatment.

    Review of the AMLAS Methodology for Application in Healthcare

    In recent years, the number of machine learning (ML) technologies gaining regulatory approval for healthcare has increased significantly, allowing them to be placed on the market. However, the regulatory frameworks applied to them were originally devised for traditional software, which has largely rule-based behaviour, in contrast to the data-driven, learnt behaviour of ML. As these frameworks undergo reform, there is a need to proactively assure the safety of ML to prevent patient safety from being compromised. The Assurance of Machine Learning for use in Autonomous Systems (AMLAS) methodology was developed by the Assuring Autonomy International Programme based on well-established concepts in system safety. This review has appraised the methodology by consulting ML manufacturers to understand whether it converges with or diverges from their current safety assurance practices, whether there are gaps and limitations in its structure, and whether it is fit for purpose when applied to the healthcare domain. Through this work we offer the view that there is clear utility for AMLAS as a safety assurance methodology when applied to healthcare machine learning technologies, although the development of healthcare-specific supplementary guidance would benefit those implementing the methodology.