
    On Neuroscience-Inspired Statistical and Computational Problems

    Recent years have witnessed a surge of problems lying at the intersection of statistics and neuroscience. In this thesis, we explore various statistical and computational problems that are inspired by neuroscience. This thesis consists of two main parts, each inspired by a different system in the brain. In the first part, we study problems related to the visual system. In Chapter 2, we investigate the problem of estimating the collision time of a looming object using a theoretical formulation based on statistical hypothesis testing. In Chapter 3, we build computational models for the compound eye of Drosophila, and analyze how the models recover features of actual visual loom-selective neurons. In the second part, we study problems related to the memory system. In Chapter 4, we consider approaches for accelerating and reducing memory requirements for reinforcement learning algorithms, with provable guarantees on the performance of the algorithm.
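    The abstract does not spell out the estimator, but a standard starting point for looming-object collision-time estimation is the "tau" approximation: for a small visual angle θ(t), the time to contact is roughly θ / (dθ/dt). The sketch below illustrates that relationship on simulated data; it is not taken from the thesis, and the constant-speed approach geometry is an assumption for illustration.

    ```python
    import numpy as np

    def time_to_collision(theta, dt):
        """Estimate time-to-collision from the angular size theta(t) of an
        approaching object via the tau approximation tau ~ theta / (d theta/dt),
        valid while the visual angle is small."""
        dtheta = np.gradient(theta, dt)  # finite-difference angular velocity
        return theta / dtheta

    # Simulated looming stimulus: object of radius r approaching at constant
    # speed v, colliding at time T, so theta(t) = 2*arctan(r / (v*(T - t))).
    r, v, T = 0.5, 10.0, 2.0      # radius (m), speed (m/s), collision time (s)
    dt = 0.01
    t = np.arange(0.0, 1.5, dt)
    theta = 2.0 * np.arctan(r / (v * (T - t)))

    tau = time_to_collision(theta, dt)
    # tau should closely track the true remaining time T - t
    print(tau[10], T - t[10])
    ```

    In this small-angle regime the estimate tracks the true remaining time closely; the thesis's hypothesis-testing formulation addresses the harder question of deciding collision under noisy angular measurements.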

    Perturbation Algorithms for Adversarial Online Learning

    Honors (Bachelor's) thesis, Statistics, University of Michigan. https://deepblue.lib.umich.edu/bitstream/2027.42/139668/1/zifanli.pd

    Local Convergence of Approximate Newton Method for Two Layer Nonlinear Regression

    There have been significant advancements made by large language models (LLMs) in various aspects of our daily lives. LLMs serve as a transformative force in natural language processing, finding applications in text generation, translation, sentiment analysis, and question-answering. The accomplishments of LLMs have led to a substantial increase in research efforts in this domain. One specific two-layer regression problem has been well-studied in prior works, where the first layer is activated by a ReLU unit, and the second layer is activated by a softmax unit. While previous works provide a solid analysis of building a two-layer regression, there is still a gap in the analysis of constructing regression problems with more than two layers. In this paper, we take a crucial step toward addressing this problem: we provide an analysis of a two-layer regression problem. In contrast to previous works, our first layer is activated by a softmax unit. This sets the stage for future analyses of creating more activation functions based on the softmax function. Rearranging the softmax function leads to significantly different analyses. Our main results involve analyzing the convergence properties of an approximate Newton method used to minimize the regularized training loss. We prove that the Hessian of the loss function is positive definite and Lipschitz continuous under certain assumptions. This enables us to establish local convergence guarantees for the proposed training algorithm. Specifically, with an appropriate initialization and after O(log(1/ε)) iterations, our algorithm can find an ε-approximate minimizer of the training loss with high probability. Each iteration requires approximately O(nnz(C) + d^ω) time, where d is the model size, C is the input matrix, and ω < 2.374 is the matrix multiplication exponent.
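    The local convergence behavior described above can be illustrated with a minimal Newton-style iteration. The sketch below uses regularized logistic regression as a stand-in objective (the paper's softmax-activated regression is more involved), and shows the key mechanics: form the Hessian of the loss, solve a d-by-d linear system per step, and reach a near-exact minimizer in a handful of iterations.

    ```python
    import numpy as np

    def grad(x, A, b, lam):
        """Gradient of the regularized logistic loss (illustrative objective)."""
        p = 1.0 / (1.0 + np.exp(-(A @ x)))
        return A.T @ (p - b) + lam * x

    def hessian(x, A, lam):
        """Hessian of the same loss; positive definite thanks to the
        lam * I regularization term, mirroring the abstract's assumption."""
        p = 1.0 / (1.0 + np.exp(-(A @ x)))
        w = p * (1.0 - p)
        return A.T @ (w[:, None] * A) + lam * np.eye(A.shape[1])

    rng = np.random.default_rng(0)
    n, d, lam = 200, 5, 0.1
    A = rng.standard_normal((n, d))
    b = (A @ rng.standard_normal(d) > 0).astype(float)

    x = np.zeros(d)
    for _ in range(20):  # each iteration: one d x d linear solve
        x = x - np.linalg.solve(hessian(x, A, lam), grad(x, A, b, lam))

    print(np.linalg.norm(grad(x, A, b, lam)))  # gradient norm near zero
    ```

    On a strongly convex objective like this, Newton steps contract quadratically near the optimum, which is why only O(log(1/ε)) iterations are needed; the paper's contribution is proving this holds for its softmax-first architecture with an approximate (faster-to-apply) Hessian.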

    PropNet: Propagating 2D Annotation to 3D Segmentation for Gastric Tumors on CT Scans

    **Background:** Accurate 3D CT scan segmentation of gastric tumors is pivotal for diagnosis and treatment. The challenges lie in the irregular shapes and blurred boundaries of tumors, and the inefficiency of existing methods. **Purpose:** We conducted a study to introduce a model, utilizing human-guided knowledge and unique modules, to address the challenges of 3D tumor segmentation. **Methods:** We developed the PropNet framework, propagating radiologists' knowledge from 2D annotations to the entire 3D space. This model consists of a proposing stage for coarse segmentation and a refining stage for improved segmentation, using two-way branches for enhanced performance and an up-down strategy for efficiency. **Results:** With 98 patient scans for training and 30 for validation, our method achieves a significant agreement with manual annotation (Dice of 0.803) and improves efficiency. The performance is comparable in different scenarios and with various radiologists' annotations (Dice between 0.785 and 0.803). Moreover, the model shows improved prognostic prediction performance (C-index of 0.620 vs. 0.576) on an independent validation set of 42 patients with advanced gastric cancer. **Conclusions:** Our model generates accurate tumor segmentation efficiently and stably, improving prognostic performance and reducing high-throughput image reading workload. This model can accelerate the quantitative analysis of gastric tumors and enhance downstream task performance.
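    The Dice scores reported above are the standard overlap metric for volumetric segmentation: 2|P ∩ G| / (|P| + |G|) for a predicted mask P and ground-truth mask G. A minimal sketch of the computation on toy 3D masks (the masks below are illustrative, not from the study):

    ```python
    import numpy as np

    def dice(pred, gt):
        """Dice similarity coefficient between two binary masks:
        2 * |P intersect G| / (|P| + |G|). Returns 1.0 if both are empty."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        denom = pred.sum() + gt.sum()
        return 2.0 * inter / denom if denom else 1.0

    # Toy 3D volumes standing in for a tumor mask: the prediction is the
    # ground-truth box shifted by one voxel along one axis.
    gt = np.zeros((4, 8, 8), dtype=bool)
    gt[1:3, 2:6, 2:6] = True
    pred = np.zeros_like(gt)
    pred[1:3, 3:7, 2:6] = True

    print(dice(pred, gt))  # prints 0.75
    ```

    A Dice of 0.803 against manual annotation, as reported, indicates substantial voxel-level overlap; the metric penalizes both over- and under-segmentation symmetrically.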