
    Counting Process Based Dimension Reduction Methods for Censored Outcomes

    We propose a class of dimension reduction methods for right-censored survival data using a counting process representation of the failure process. Semiparametric estimating equations are constructed to estimate the dimension reduction subspace for the failure time model. The proposed method addresses two fundamental limitations of existing approaches. First, owing to the counting process formulation, it does not require any estimation of the censoring distribution to compensate for the bias in estimating the dimension reduction subspace. Second, the nonparametric part of the estimating equations is adaptive to the structural dimension, so the approach circumvents the curse of dimensionality. Asymptotic normality is established for the resulting estimators. We further propose a computationally efficient approach that simplifies the estimating equation formulation and requires only a singular value decomposition to estimate the dimension reduction subspace. Numerical studies suggest that our new approaches exhibit significantly improved performance in estimating the true dimension reduction subspace. We further conduct a real data analysis on a skin cutaneous melanoma dataset from The Cancer Genome Atlas. The proposed method is implemented in the R package "orthoDr".
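
As a rough illustration of the singular value decomposition step mentioned in the abstract, the sketch below extracts an orthonormal basis of a d-dimensional dimension reduction subspace from the leading left singular vectors of a candidate matrix. How that candidate matrix is built from the counting-process estimating equations is specific to the paper and is not reproduced here; the function name and the toy input are purely illustrative.

```python
# Minimal sketch of the generic SVD step: recover an orthonormal basis of a
# d-dimensional subspace from a candidate matrix M whose column space is
# assumed to approximate the target dimension reduction subspace. The
# construction of M from the paper's estimating equations is NOT shown here.
import numpy as np

def subspace_from_svd(M: np.ndarray, d: int) -> np.ndarray:
    """Return a p x d orthonormal basis spanning the leading left singular subspace of M."""
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :d]

# Toy usage with a random p x p candidate matrix (p = 10 covariates).
rng = np.random.default_rng(0)
M = rng.normal(size=(10, 10))
B_hat = subspace_from_svd(M, d=2)   # estimated basis of the reduction subspace
print(B_hat.shape)                  # (10, 2)
```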

    Two-color soliton meta-atoms and molecules

    We present a detailed overview of the physics of two-color soliton molecules in nonlinear waveguides, i.e., bound states of localized optical pulses that are held together by an incoherent interaction mechanism. The mutual confinement, or trapping, of the subpulses, which leads to stable propagation of the pulse compound, is enabled by the nonlinear Kerr effect. Special attention is paid to describing the binding mechanism in terms of attractive potential wells, induced by the refractive index changes of the subpulses and exerted on one another through cross-phase modulation. Specifically, we discuss nonlinear-photonics meta-atoms, given by pulse compounds consisting of a strong trapping pulse and a weak trapped pulse, for which trapped states of low intensity are determined by a Schrödinger-type eigenproblem. We discuss the rich dynamical behavior of such meta-atoms, demonstrating that an increase of the group-velocity mismatch between the two subpulses leads to an ionization-like trapping-to-escape transition. We further demonstrate that if both constituent pulses are of similar amplitude, molecule-like bound states are formed. We show that z-periodic amplitude variations permit a coupling of these pulse compounds to dispersive waves, resulting in the resonant emission of Kushi-comb-like multi-frequency radiation.
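
The trapped low-intensity states mentioned above are characterized by a Schrödinger-type eigenproblem. The sketch below solves a generic one-dimensional problem of that type with a finite-difference Hamiltonian; the sech²-shaped potential well and all numerical parameters are illustrative stand-ins for the XPM-induced potential, not values taken from the paper.

```python
# Hedged sketch: bound states of a 1D Schrödinger-type eigenproblem
#   -(1/2) psi''(t) + V(t) psi(t) = lambda psi(t)
# solved with a finite-difference Hamiltonian. Potential shape and parameters
# are illustrative only.
import numpy as np

n, L = 1000, 20.0                      # grid points, domain half-width
t = np.linspace(-L, L, n)
h = t[1] - t[0]
V = -1.0 / np.cosh(t) ** 2             # attractive sech^2 well (stand-in for the XPM-induced potential)

# Tridiagonal finite-difference Hamiltonian H = -(1/2) d^2/dt^2 + V
main = 1.0 / h**2 + V
off = -0.5 / h**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

evals, evecs = np.linalg.eigh(H)
bound = evals[evals < 0]               # trapped (bound) states have negative eigenvalues
print("bound-state eigenvalues:", bound)   # a single level near -0.5 for this well
```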

    Beyond Worst-Case Analysis in Stochastic Approximation: Moment Estimation Improves Instance Complexity

    We study the oracle complexity of gradient-based methods for stochastic approximation problems. Although optimal algorithms and tight lower bounds are known for such problems in many settings, these optimal algorithms do not achieve the best performance when used in practice. We address this theory-practice gap by focusing on instance-dependent complexity instead of worst-case complexity. In particular, we first summarize known instance-dependent complexity results and categorize them into three levels. We identify the domination relation between the levels and propose a fourth instance-dependent bound that dominates existing ones. We then provide a sufficient condition under which an adaptive algorithm with moment estimation can achieve the proposed bound without knowledge of the noise levels. Our proposed algorithm and its analysis provide a theoretical justification for the success of moment estimation, as the algorithm achieves this improved instance complexity.
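
To make the role of moment estimation concrete, here is a generic sketch of an adaptive stochastic-gradient loop that scales its step by a running estimate of the gradient's second moment (RMSProp/Adam-style), so that no noise level has to be supplied in advance. It illustrates the general idea only and is not the paper's algorithm or analysis.

```python
# Generic adaptivity-via-moment-estimation sketch (not the paper's method).
import numpy as np

def adaptive_sgd(grad_oracle, x0, steps=2000, lr=0.1, beta=0.99, eps=1e-8):
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                      # running second-moment estimate
    for _ in range(steps):
        g = grad_oracle(x)                    # stochastic gradient sample
        v = beta * v + (1 - beta) * g**2      # update the moment estimate
        x = x - lr * g / (np.sqrt(v) + eps)   # step scaled by the estimated noise level
    return x

# Toy oracle: gradient of f(x) = 0.5 * ||x||^2 plus Gaussian noise.
rng = np.random.default_rng(0)
oracle = lambda x: x + 0.5 * rng.normal(size=x.shape)
print(adaptive_sgd(oracle, x0=np.ones(5)))    # ends up near the minimizer at 0, up to noise
```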

    Event-based Backpropagation for Analog Neuromorphic Hardware

    Neuromorphic computing aims to incorporate lessons from studying biological nervous systems into the design of computer architectures. While existing approaches have successfully implemented aspects of those computational principles, such as sparse spike-based computation, event-based scalable learning has remained an elusive goal in large-scale systems. Yet only then can the potential energy-efficiency advantages of neuromorphic systems relative to other hardware architectures be realized during learning. We present our progress in implementing the EventProp algorithm using the example of the BrainScaleS-2 analog neuromorphic hardware. Previous gradient-based approaches to learning used "surrogate gradients" and dense sampling of observables, or were limited by assumptions on the underlying dynamics and loss functions. In contrast, our approach only needs spike time observations from the system, while being able to incorporate other system observables, such as membrane voltage measurements, in a principled way. This leads to a one-order-of-magnitude improvement in the information efficiency of the gradient estimate, which would translate directly into corresponding energy-efficiency improvements in an optimized hardware implementation. We present the theoretical framework for estimating gradients and results verifying the correctness of the estimation, as well as results on a low-dimensional classification task using the BrainScaleS-2 system. Building on this work has the potential to enable scalable gradient estimation in large-scale neuromorphic hardware, where continuous measurement of the system state would be prohibitive and energy-inefficient. It also suggests the feasibility of a full on-device implementation of the algorithm that would enable scalable, energy-efficient, event-based learning in large-scale analog neuromorphic hardware.
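
To convey why spike-time observations alone can carry gradient information, the toy example below uses a non-leaky integrate-and-fire neuron with constant input, for which the spike time and its derivative with respect to the input weight have closed forms. This is a deliberately simplified illustration, not the EventProp algorithm or its BrainScaleS-2 implementation.

```python
# Toy spike-time gradient (NOT EventProp): a non-leaky integrate-and-fire
# neuron driven by constant current I = w * x has membrane potential
# V(t) = w * x * t, so it crosses threshold theta at t_spike = theta / (w * x)
# and d t_spike / d w = -theta / (w**2 * x). The gradient is thus a function of
# event (spike-time) quantities rather than of densely sampled state.
theta, w, x = 1.0, 0.8, 2.0

t_spike = theta / (w * x)
grad_analytic = -theta / (w**2 * x)

# Finite-difference check of the analytic spike-time derivative.
dw = 1e-6
grad_numeric = (theta / ((w + dw) * x) - t_spike) / dw
print(t_spike, grad_analytic, grad_numeric)
```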

    Distributed Deep Learning in the Cloud and Energy-efficient Real-time Image Processing at the Edge for Fish Segmentation in Underwater Videos

    Using big marine data to train deep learning models is not efficient, and sometimes not even possible, on local computers. In this paper, we show how distributed learning in the cloud can help process big data more efficiently and train more accurate deep learning models. In addition, marine big data is usually communicated over wired networks, which, if they can be deployed in the first place, are costly to maintain. Therefore, wireless communication, conducted predominantly by acoustic waves in underwater sensor networks, may be considered. However, wireless communication is not feasible for big marine data due to the narrow frequency bandwidth of acoustic waves and the ambient noise. To address this problem, we propose an optimized deep learning design for low-energy and real-time image processing at the underwater edge. This trades the need to transmit the large image data for transmitting only the low-volume results, which can be sent over wireless sensor networks. To demonstrate the benefits of our approach in a real-world application, we perform fish segmentation in underwater videos and draw comparisons against conventional techniques. We show that, when underwater captured images are processed at the collection edge, a 4x speedup can be achieved compared to using a landside server. Furthermore, we demonstrate that deploying a compressed DNN at the edge can save 60% of power compared to a full DNN model. These results promise improved applications of affordable deep learning in underwater exploration, monitoring, navigation, tracking, disaster prevention, and scientific data collection projects.
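
As a back-of-envelope companion to the bandwidth argument above, the sketch below compares the time needed to send a raw video frame versus a compact segmentation mask over a narrow acoustic link. The link rate, frame resolution, and mask encoding are assumed, illustrative values, not measurements from the paper.

```python
# Illustrative bandwidth comparison: raw frame vs. segmentation mask over an
# acoustic link. All numbers below are assumptions for illustration only.
def tx_time_seconds(payload_bytes: float, link_bps: float) -> float:
    """Transmission time for a payload at a given link rate (ignoring protocol overhead)."""
    return payload_bytes * 8 / link_bps

acoustic_link_bps = 10_000             # assumed acoustic modem rate (~10 kbit/s)
raw_frame_bytes = 640 * 480 * 3        # uncompressed RGB frame at an assumed resolution
mask_bytes = 640 * 480 / 8             # 1-bit-per-pixel fish/background mask

print("raw frame:", tx_time_seconds(raw_frame_bytes, acoustic_link_bps), "s")
print("mask only:", tx_time_seconds(mask_bytes, acoustic_link_bps), "s")
```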