
    Two-photon imaging and photothermal therapy of cancer cells using biofunctional gold nanorods

    Transferrin-conjugated gold nanorods were used for targeting, two-photon imaging, and photothermal therapy of cancer cells. The presence of the nanorods significantly reduced the laser power required for effective therapy.

    Nonlinear macromodel based on Krylov subspace for micromixer of the microfluidic chip

    Simulations of MEMS (micro-electro-mechanical systems) devices containing fluid fields are difficult to perform with conventional numerical analysis methods. The micro flow-field characteristics can instead be simulated with a macromodel that supports nonlinear analysis. This paper builds a macromodel of the micromixer of a microfluidic chip using a Krylov subspace projection method. The system matrices were assembled through finite element analysis in COMSOL, and a coupled flow field-concentration field analysis was carried out on the micromixer finite element model. The order of the finite element system is reduced by a second-order Krylov subspace projection method based on the Lanczos algorithm. The simulation results obtained with the macromodel are highly consistent with those of the finite element analysis, while the macromodel computation is two orders of magnitude faster. This macromodel should facilitate the design of microfluidic devices with sophisticated channel networks.
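    The abstract describes Krylov subspace model order reduction of finite element matrices. As a rough illustration of the idea (not the paper's method: this sketch uses a simpler first-order Arnoldi-based moment-matching projection rather than the second-order Lanczos procedure, and a random stable test system in place of the COMSOL-assembled micromixer model), a minimal reduction routine might look like this:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def krylov_reduce(A, B, C, r):
    """Reduce dx/dt = A x + B u, y = C x to order r by projecting onto the
    Krylov subspace span{A^{-1}B, A^{-2}B, ...} (moment matching at s = 0),
    built with an Arnoldi iteration for numerical stability."""
    n = A.shape[0]
    lu = lu_factor(A)
    V = np.zeros((n, r))
    v = lu_solve(lu, B.ravel())
    V[:, 0] = v / np.linalg.norm(v)
    for k in range(1, r):
        w = lu_solve(lu, V[:, k - 1])
        # Modified Gram-Schmidt orthogonalization against the previous basis vectors
        for j in range(k):
            w -= (V[:, j] @ w) * V[:, j]
        V[:, k] = w / np.linalg.norm(w)
    # Galerkin projection onto the reduced basis
    Ar = V.T @ A @ V
    Br = V.T @ B
    Cr = C @ V
    return Ar, Br, Cr, V

# Toy example (purely illustrative): a 200-state random stable system reduced to 10 states
rng = np.random.default_rng(0)
n = 200
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
Ar, Br, Cr, V = krylov_reduce(A, B, C, r=10)
```

    The reduced triple (Ar, Br, Cr) approximates the input-output behaviour of the full model near s = 0 at a fraction of the simulation cost, which is the same trade-off the paper reports for its micromixer macromodel.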

    Enhanced photothermal therapy assisted with gold nanorods using a radially polarized beam

    We report on the use of a radially polarized beam for photothermal therapy of cancer cells labeled with gold nanorods. Due to a three-dimensionally distributed electromagnetic field in the focal volume, the radially polarized beam is proven to be a highly efficient laser mode to excite gold nanorods randomly oriented in cancer cells. As a result, the energy fluence for effective cancer cell damage is reduced to one fifth of that required for a linearly polarized beam, which is only 9.3% of the medical safety level.

    Optimization Landscape of Policy Gradient Methods for Discrete-time Static Output Feedback

    Significant advances have recently been made in characterizing the optimization landscape of policy gradient methods for optimal control of linear time-invariant (LTI) systems. Compared with state-feedback control, output-feedback control is more prevalent, since the underlying state of the system may not be fully observed in many practical settings. This paper analyzes the optimization landscape of policy gradient methods applied to static output feedback (SOF) control of discrete-time LTI systems with quadratic cost. We begin by establishing key properties of the SOF cost: coercivity, L-smoothness, and an M-Lipschitz continuous Hessian. Despite the absence of convexity, we leverage these properties to derive novel convergence guarantees (with nearly dimension-free rates) to stationary points for three policy gradient methods: the vanilla policy gradient method, the natural policy gradient method, and the Gauss-Newton method. Moreover, we prove that the vanilla policy gradient method exhibits linear convergence toward local minima when initialized near such minima. The paper concludes with numerical examples that validate the theoretical findings. These results characterize the performance of gradient descent for the SOF problem and provide insight into the effectiveness of general policy gradient methods in reinforcement learning.
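    To make the setup concrete, here is a minimal sketch of the vanilla policy gradient method on a toy discrete-time SOF problem with quadratic cost. The gradient expression is the standard one from the model-based LQR policy-gradient literature, adapted to an output-feedback gain via the chain rule; the system matrices, step size, and iteration count are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def sof_cost_and_grad(K, A, B, C, Q, R, Sigma0):
    """Cost J(K) = tr(P_K Sigma0) for u_k = -K y_k, y_k = C x_k, and its
    gradient, obtained from two discrete Lyapunov equations."""
    Acl = A - B @ K @ C
    # Value matrix: P = Q + C'K'RKC + Acl' P Acl
    P = solve_discrete_lyapunov(Acl.T, Q + C.T @ K.T @ R @ K @ C)
    # Aggregate state covariance: Sigma = Sigma0 + Acl Sigma Acl'
    Sigma = solve_discrete_lyapunov(Acl, Sigma0)
    J = np.trace(P @ Sigma0)
    grad = 2 * ((R + B.T @ P @ B) @ K @ C - B.T @ P @ A) @ Sigma @ C.T
    return J, grad

def vanilla_pg(K0, A, B, C, Q, R, Sigma0, step=1e-3, iters=500):
    """Plain gradient descent on J(K): the 'vanilla' policy gradient method."""
    K = K0.copy()
    for _ in range(iters):
        J, g = sof_cost_and_grad(K, A, B, C, Q, R, Sigma0)
        K = K - step * g
    return K, J

# Toy 3-state, 1-input, 2-output example; K0 = 0 is stabilizing because A is stable
A = np.diag([0.9, 0.8, 0.7]); B = np.array([[1.0], [0.5], [0.2]])
C = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
Q = np.eye(3); R = np.eye(1); Sigma0 = np.eye(3)
K_final, J_final = vanilla_pg(np.zeros((1, 2)), A, B, C, Q, R, Sigma0)
```

    The natural policy gradient and Gauss-Newton variants analyzed in the paper differ only in how the raw gradient is preconditioned before the update step.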

    Generalized Policy Iteration for Optimal Control in Continuous Time

    This paper proposes the Deep Generalized Policy Iteration (DGPI) algorithm to find the infinite-horizon optimal control policy for general nonlinear continuous-time systems with known dynamics. Unlike existing adaptive dynamic programming algorithms for continuous-time systems, DGPI requires neither an admissible initial policy nor input-affine system dynamics for convergence. The algorithm employs an actor-critic architecture in which both the policy and the value function are approximated by deep neural networks, with the goal of iteratively solving the Hamilton-Jacobi-Bellman equation. Given an arbitrary initial policy, DGPI eventually converges to an admissible, and subsequently optimal, policy for an arbitrary nonlinear system. We also relax the update termination conditions of both the policy evaluation and policy improvement steps, which yields faster convergence than conventional policy iteration (PI) methods for the same function-approximator architecture. We prove the convergence and optimality of the algorithm with a thorough Lyapunov analysis and demonstrate its generality and efficacy on two detailed numerical examples.
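    As a rough picture of the actor-critic loop described here, the sketch below alternates a policy evaluation step (driving the HJB residual of the current policy toward zero) with a policy improvement step (making the actor minimize the Hamiltonian). The two-state dynamics, running cost, network sizes, and sampling domain are hypothetical illustrations, and the sketch omits the paper's admissibility analysis and relaxed termination conditions.

```python
import torch
import torch.nn as nn

# Hypothetical dynamics and quadratic running cost for a 2-state example (not from the paper)
def f(x, u):                      # x: (N, 2), u: (N, 1)
    return torch.cat([x[:, 1:2], u - 0.1 * x[:, 0:1]], dim=1)

def running_cost(x, u):
    return (x ** 2).sum(dim=1, keepdim=True) + (u ** 2).sum(dim=1, keepdim=True)

value = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))   # critic V(x)
policy = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))  # actor u(x)
opt_v = torch.optim.Adam(value.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-3)

def hamiltonian(x, u):
    """h(x, u) = dV/dx . f(x, u) + r(x, u); the HJB residual of a policy."""
    x = x.requires_grad_(True)
    V = value(x)
    dVdx = torch.autograd.grad(V.sum(), x, create_graph=True)[0]
    return (dVdx * f(x, u)).sum(dim=1, keepdim=True) + running_cost(x, u)

for it in range(2000):
    x = 4 * torch.rand(256, 2) - 2          # sample states from the training domain
    # Policy evaluation: drive the HJB residual of the current policy to zero,
    # anchoring V(0) = 0 to pin down the undiscounted value function
    loss_v = hamiltonian(x, policy(x).detach()).pow(2).mean() \
             + value(torch.zeros(1, 2)).pow(2).mean()
    opt_v.zero_grad(); loss_v.backward(); opt_v.step()
    # Policy improvement: the actor minimizes the Hamiltonian under the current critic
    loss_p = hamiltonian(x, policy(x)).mean()
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()
```

    Note that, in the spirit of the abstract, nothing in this loop assumes the initial policy is admissible or that the dynamics are affine in the control input.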