40 research outputs found

    Invariant preservation in machine learned PDE solvers via error correction

    Machine learned partial differential equation (PDE) solvers trade the reliability of standard numerical methods for potential gains in accuracy and/or speed. The only way for a solver to guarantee that it outputs the exact solution is to use a convergent method in the limit that the grid spacing Δx and timestep Δt approach zero. Machine learned solvers, which learn to update the solution at large Δx and/or Δt, can never guarantee perfect accuracy. Some amount of error is inevitable, so the question becomes: how do we constrain machine learned solvers to give us the sorts of errors that we are willing to tolerate? In this paper, we design more reliable machine learned PDE solvers by preserving discrete analogues of the continuous invariants of the underlying PDE. Examples of such invariants include conservation of mass, conservation of energy, the second law of thermodynamics, and non-negative density. Our key insight is simple: to preserve invariants, at each timestep apply an error-correcting algorithm to the update rule. Though this strategy differs from how standard solvers preserve invariants, it is necessary to retain the flexibility that allows machine learned solvers to be accurate at large Δx and/or Δt. This strategy can be applied to any autoregressive solver for any time-dependent PDE in arbitrary geometries with arbitrary boundary conditions. Although the strategy is very general, the specific error-correcting algorithms must be tailored to the invariants of the underlying equations as well as to the solution representation and time-stepping scheme of the solver. The error-correcting algorithms we introduce have two key properties. First, by preserving the right invariants they guarantee numerical stability. Second, in closed or periodic systems they do so without degrading the accuracy of an already-accurate solver. Comment: 41 pages, 10 figures
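    To illustrate the abstract's key idea, applying an error-correcting step after each learned update, here is a minimal sketch of our own (not one of the paper's algorithms): restore discrete mass conservation by subtracting the uniform field that cancels the mass defect, which is the smallest L2 correction that does so.

```python
import numpy as np

def correct_mass(u_new, u_old, dx=1.0):
    """Project a learned update onto the mass-conserving set by
    subtracting a uniform constant, the smallest L2 correction that
    makes the discrete total mass of u_new match that of u_old."""
    mass_error = (u_new.sum() - u_old.sum()) * dx
    return u_new - mass_error / (u_new.size * dx)

# One step of a hypothetical learned solver whose update drifted:
u_old = np.array([1.0, 2.0, 3.0, 4.0])
u_new = np.array([1.1, 2.2, 2.9, 4.1])   # total mass 10.3 instead of 10.0
u_corr = correct_mass(u_new, u_old)       # total mass restored to 10.0
```

    Analogous corrections for other invariants (energy, entropy, positivity) would each need their own projection, as the abstract notes.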

    Data Augmentation for Neutron Spectrum Unfolding with Neural Networks

    Neural networks require a large quantity of training spectra and detector responses in order to learn to solve the inverse problem of neutron spectrum unfolding. In addition, due to the under-determined nature of unfolding, non-physical spectra that would not be encountered in practice should not be included in the training set. While physically realistic training spectra are commonly determined experimentally or generated through Monte Carlo simulation, this can become prohibitively expensive given the quantity of spectra needed to effectively train an unfolding network. In this paper, we present three algorithms for the generation of large quantities of realistic and physically motivated neutron energy spectra. Using an IAEA compendium of 251 spectra, we compare the unfolding performance of neural networks trained on spectra from these algorithms, when unfolding real-world spectra, to two baselines. We also investigate general methods for evaluating the performance of, and optimizing, feature engineering algorithms.
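    The abstract does not describe the three generation algorithms, but the flavour of "physically motivated" augmentation can be sketched with standard spectral shapes. The mixture below (a thermal Maxwellian component plus a Watt fission component with textbook U-235 parameters) is our own illustrative assumption, not one of the paper's algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spectrum(n_bins=60, e_max=20.0):
    """Draw one synthetic neutron energy spectrum as a random mixture of
    a thermal Maxwellian and a Watt fission component, normalised to
    unit area. Illustrative only; not one of the paper's algorithms."""
    e = np.linspace(0.01, e_max, n_bins)       # energy grid (MeV)
    kt = rng.uniform(0.5, 2.0)                 # random Maxwellian temperature
    maxwellian = e * np.exp(-e / kt)
    # Watt fission spectrum, a=0.988 MeV, b=2.249 /MeV (U-235 thermal)
    watt = np.exp(-e / 0.988) * np.sinh(np.sqrt(2.249 * e))
    w = rng.uniform(0.0, 1.0)                  # random mixing weight
    phi = w * maxwellian / maxwellian.sum() + (1 - w) * watt / watt.sum()
    return phi / phi.sum()

# A small synthetic training set of non-negative, unit-area spectra:
spectra = np.stack([sample_spectrum() for _ in range(100)])
```

    A real generator along these lines would also need to cover moderated, shielded, and workplace-like shapes so the training distribution matches the spectra encountered in usage.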

    Meta-PDE: Learning to Solve PDEs Quickly Without a Mesh

    Partial differential equations (PDEs) are often computationally challenging to solve, and in many settings many related PDEs must be solved either at every timestep or for a variety of candidate boundary conditions, parameters, or geometric domains. We present a meta-learning based method which learns to rapidly solve problems from a distribution of related PDEs. We use meta-learning (MAML and LEAP) to identify initializations for a neural network representation of the PDE solution such that a residual of the PDE can be quickly minimized on a novel task. We apply our meta-solving approach to a nonlinear Poisson's equation, the 1D Burgers' equation, and hyperelasticity equations with varying parameters, geometries, and boundary conditions. The resulting Meta-PDE method finds qualitatively accurate solutions to most problems within a few gradient steps; for the nonlinear Poisson and hyperelasticity equations this yields an intermediate-accuracy approximation up to an order of magnitude faster than a baseline finite element analysis (FEA) solver of equivalent accuracy. In comparison to other learned solvers and surrogate models, this meta-learning approach can be trained without supervision from expensive ground-truth data, does not require a mesh, and can even be used when the geometry and topology vary between tasks.
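    The inner adaptation loop, minimizing a PDE residual over the parameters of a solution representation starting from a (meta-learned) initialization, can be sketched in one dimension. The one-parameter ansatz and the toy problem below are our own stand-ins for the paper's neural network representation and task distribution.

```python
# Minimal sketch of residual-based adaptation for the 1D Poisson
# problem u''(x) = f with u(0) = u(1) = 0. The ansatz
# u(x) = a * x * (1 - x) satisfies the boundary conditions, and has
# constant second derivative u'' = -2a, so the residual is r = -2a - f.
# In Meta-PDE the ansatz is a neural network and a0 would come from
# MAML/LEAP; here a0 simply plays the role of that initialization.

def adapt(a0, f, lr=0.05, steps=20):
    a = a0
    for _ in range(steps):
        residual = -2.0 * a - f          # PDE residual of the ansatz
        grad = -2.0 * 2.0 * residual     # d(residual**2)/da
        a -= lr * grad                   # one gradient step on the loss
    return a

f = 2.0                                  # u'' = 2 has solution a = -1
a_adapted = adapt(a0=0.0, f=f)           # converges close to -1
```

    The point of meta-learning is that a well-chosen a0 makes the number of such steps on a new task very small.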

    Safety and Pharmacokinetics of Motesanib in Combination with Panitumumab and Gemcitabine-Cisplatin in Patients with Advanced Cancer

    Purpose. The aim of this study was to assess the safety and tolerability of motesanib (an orally administered small-molecule antagonist of vascular endothelial growth factor receptors 1, 2, and 3, platelet-derived growth factor receptor, and Kit) when administered in combination with panitumumab, gemcitabine, and cisplatin. Methods. This was an open-label, multicenter phase 1b study in patients with advanced solid tumors with an ECOG performance status ≤1 and for whom a gemcitabine/cisplatin regimen was indicated. Patients received motesanib (0 mg [control], 50 mg once daily [QD], 75 mg QD, 100 mg QD, 125 mg QD, or 75 mg twice daily [BID]) with panitumumab (9 mg/kg), gemcitabine (1250 mg/m²), and cisplatin (75 mg/m²) in 21-day cycles. The primary endpoint was the incidence of dose-limiting toxicities (DLTs). Results. Forty-one patients were enrolled and received treatment (including 8 control patients). One of 8 patients in the 50 mg QD cohort and 5 of 11 patients in the 125 mg QD cohort experienced DLTs. The maximum tolerated dose was established as 100 mg QD. Among patients who received motesanib (n = 33), 29 had motesanib-related adverse events. Fourteen patients had serious motesanib-related events. Ten patients had motesanib-related venous thromboembolic events and three had motesanib-related arterial thromboembolic events, two of which were considered serious. One patient had a complete response and nine had partial responses as their best objective response. Conclusions. The combination of motesanib, panitumumab, and gemcitabine/cisplatin could not be administered consistently and, at the described doses and schedule, may be intolerable. However, encouraging antitumor activity was noted in some cases.

    Safety and pharmacokinetics of motesanib in combination with gemcitabine for the treatment of patients with solid tumours

    The aim of this open-label phase 1b study was to assess the safety and pharmacokinetics of motesanib in combination with gemcitabine in patients with advanced solid tumours. Eligible patients with histologically or cytologically documented solid tumours or lymphoma were enrolled in three sequential, dose-escalating cohorts to receive motesanib 50 mg once daily (QD), 75 mg twice daily (BID), or 125 mg QD in combination with gemcitabine (1000 mg m−2). The primary end point was the incidence of dose-limiting toxicities (DLTs). Twenty-six patients were enrolled and received motesanib and gemcitabine. No DLTs occurred. The 75 mg BID cohort was discontinued early; therefore, 125 mg QD was the maximum target dose. Sixteen patients (62%) experienced motesanib-related adverse events, most commonly lethargy (n=6), diarrhoea (n=4), fatigue (n=3), headache (n=3), and nausea (n=3). The pharmacokinetics of motesanib and of gemcitabine were not markedly affected by combination therapy. The objective response rate was 4% (1 of 26), and 27% (7 of 26) of patients achieved stable disease. In conclusion, treatment with motesanib plus gemcitabine was well tolerated, with adverse event and pharmacokinetic profiles similar to those observed in monotherapy studies.

    Targeting BTK with Ibrutinib in Relapsed or Refractory Mantle-Cell Lymphoma – Results of an International, Multicenter, Phase 2 Study of Ibrutinib (PCI-32765) – EHA Encore

    Bruton's tyrosine kinase (BTK) is a central mediator of B-cell receptor (BCR) signaling essential for normal B-cell development. Ibrutinib is an oral BTK inhibitor that induces apoptosis and inhibits migration and adhesion of malignant B cells. Updated results of this international, multicenter, phase 2 study of single-agent ibrutinib in relapsed or refractory MCL will be presented. Ibrutinib 560 mg PO QD was administered continuously until disease progression. Tumor response was assessed every 2 cycles (one cycle = 28 days). The study enrolled 115 patients (65 bortezomib-naïve, 50 bortezomib-exposed); 111 patients were treated; 110 were evaluable for response. Baseline characteristics included: median age 68 years, time since diagnosis 42 months, number of prior treatments 3; bulky disease (>10 cm) 13%, prior stem cell transplant 10%, high-risk MIPI 49%. Median time on treatment was 9.2 months; 53% of patients remain on therapy. Median PFS was 13.9 months, and the median DOR has not yet been reached. Responses increased with longer treatment: compared with previous data presented at ASH 2011, the CR rate increased from 16% to 39%, and the ORR increased from 69% to 75%.

    Convolutional Layers are Equivariant to Discrete Shifts But Not Continuous Translations

    The purpose of this short and simple note is to clarify a common misconception about convolutional neural networks (CNNs). CNNs are made up of convolutional layers, which are shift equivariant due to weight sharing. However, convolutional layers are not translation equivariant, even when boundary effects are ignored and pooling and subsampling are absent. This is because shift equivariance is a discrete symmetry while translation equivariance is a continuous symmetry. This fact is well known among researchers in equivariant machine learning but is usually overlooked by non-experts. To minimize confusion, we suggest using the term 'shift equivariance' to refer to discrete shifts in pixels and 'translation equivariance' to refer to continuous translations.
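    A small numerical check makes the distinction concrete. Assuming a periodic 1D convolution followed by a ReLU as the "layer", integer shifts commute with it exactly, while a continuous translation, approximated here by half-pixel linear interpolation, does not. The code and the interpolation scheme are our own illustration, not taken from the note.

```python
import numpy as np

def conv_periodic(x, k):
    """Circular correlation of 1D signal x with an odd-length kernel k."""
    r = len(k) // 2
    return sum(k[j] * np.roll(x, r - j) for j in range(len(k)))

def layer(x, k):
    """A convolutional layer: periodic convolution followed by ReLU."""
    return np.maximum(conv_periodic(x, k), 0.0)

x = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 1.0])
k = np.array([-1.0, 2.0, -1.0])

# Integer shifts commute with the layer exactly (shift equivariance):
assert np.allclose(layer(np.roll(x, 2), k), np.roll(layer(x, k), 2))

def translate_half(x):
    """Translate by half a pixel via linear interpolation (approximate)."""
    return 0.5 * (x + np.roll(x, 1))

# A half-pixel translation does not commute with the layer once the
# nonlinearity is included (no translation equivariance):
assert not np.allclose(layer(translate_half(x), k),
                       translate_half(layer(x, k)))
```

    The first assertion holds for any signal and kernel because rolling commutes with both circular convolution and pointwise ReLU; the second fails because the ReLU does not commute with the resampling that a continuous translation requires.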

    VLSI technologies through the 80s and beyond
