
    Decomposition numbers for Hecke algebras of type G(r,p,n): the (ϵ,q)-separated case

    The paper studies the modular representation theory of the cyclotomic Hecke algebras of type G(r,p,n) with (ϵ,q)-separated parameters. We show that the decomposition numbers of these algebras are completely determined by the decomposition matrices of related cyclotomic Hecke algebras of type G(s,1,m), where 1 ≤ s ≤ r and 1 ≤ m ≤ n. Furthermore, the proof gives an explicit algorithm for computing these decomposition numbers. Consequently, in principle, the decomposition matrices of these algebras are now known in characteristic zero. In proving these results, we develop a Specht module theory for these algebras, explicitly construct their simple modules, and introduce and study analogues of the cyclotomic Schur algebras of type G(r,p,n) when the parameters are (ϵ,q)-separated. The main results of the paper rest upon two Morita equivalences: the first reduces the calculation of all decomposition numbers to the case of the \textit{ℓ-splittable decomposition numbers}, and the second allows us to compute these decomposition numbers using an analogue of the cyclotomic Schur algebras for the Hecke algebras of type G(r,p,n). Comment: Final version.
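For context, a minimal LaTeX sketch of what a decomposition number records; the notation S^λ, D^μ is the standard one for cellular algebras such as cyclotomic Hecke algebras and is an assumption here, not fixed by the abstract itself.

```latex
% Hedged sketch: generic meaning of a decomposition number for a cellular
% algebra such as a cyclotomic Hecke algebra.
\[
  d_{\lambda\mu} \;=\; [\, S^{\lambda} : D^{\mu} \,]
\]
% i.e. the multiplicity of the simple module $D^{\mu}$ as a composition
% factor of the Specht module $S^{\lambda}$.  The abstract's main result
% expresses these numbers for type $G(r,p,n)$ in terms of the corresponding
% matrices for algebras of type $G(s,1,m)$ with $1 \le s \le r$ and $1 \le m \le n$.
```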

    Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing

    This work considers the trade-off between accuracy and test-time computational cost of deep neural networks (DNNs) via \emph{anytime} predictions from auxiliary predictors. Specifically, we optimize the auxiliary losses jointly in an \emph{adaptive} weighted sum, where each weight is inversely proportional to the running average of its loss. Intuitively, this balances the losses so that they have the same scale. We present theoretical considerations that motivate this approach from multiple viewpoints, including a connection to optimizing the geometric mean of the expectation of each loss, an objective that ignores the scale of the losses. Experimentally, the adaptive weights induce more competitive anytime predictions on multiple recognition datasets and models than non-adaptive approaches, including weighting all losses equally. In particular, anytime neural networks (ANNs) can achieve the same accuracy faster using adaptive weights on a small network than using static constant weights on a large one. For problems with high performance saturation, we also show that a sequence of exponentially deepening ANNs can achieve near-optimal anytime results at any budget, at the cost of a constant fraction of extra computation.
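A minimal sketch of the adaptive weighting idea described above, not the authors' released code: each auxiliary loss is divided by a running average of itself, so all weighted terms stay on roughly the same scale. The class name, interface, and momentum value are illustrative assumptions.

```python
# Hedged sketch of adaptive loss balancing for anytime predictions, assuming a
# model that returns one prediction (and hence one loss) per early-exit head.
import torch

class AdaptiveLossBalancer:
    def __init__(self, num_heads, momentum=0.99, eps=1e-8):
        self.avg = torch.ones(num_heads)   # running average of each auxiliary loss
        self.momentum = momentum
        self.eps = eps

    def combine(self, losses):
        """losses: list of scalar loss tensors, one per auxiliary head."""
        total = 0.0
        for i, loss in enumerate(losses):
            # update the running average with the current (detached) loss value
            self.avg[i] = self.momentum * self.avg[i] + (1 - self.momentum) * loss.detach()
            # weight inversely proportional to the average loss of this head
            total = total + loss / (self.avg[i] + self.eps)
        return total

# usage (hypothetical interface): outputs = model(x) gives a list of logits,
# losses = [F.cross_entropy(o, y) for o in outputs], then
# balancer.combine(losses).backward()
```

Dividing each loss by its own expected magnitude is what connects this weighting to a scale-free objective such as the geometric mean of the expected losses, since the gradient of a sum of log-expectations weights each term by the reciprocal of its expectation.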

    Detection and Diagnosis of Motor Stator Faults using Electric Signals from Variable Speed Drives

    Motor current signature analysis has been investigated widely for diagnosing faults of induction motors. However, most of these studies are based on open-loop drives. This paper examines the performance of diagnosing motor stator faults under both open- and closed-loop operation modes. It evaluates the effectiveness of conventional diagnostic features in both motor current and voltage signals using spectrum analysis. Evaluation results show that the stator fault causes an increase in the sideband amplitude of the motor current signature only when the motor is under open-loop control. However, the increase in sidebands can be observed in both the current and voltage signals under the sensorless control mode, showing that diagnosing stator faults is more promising under sensorless control operation.
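A minimal sketch of the kind of sideband check described above: compute the spectrum of a sampled phase current and read off the amplitudes at the fundamental and at two sideband frequencies. The 50 Hz supply frequency and the ±25 Hz sideband offset are illustrative assumptions; the actual fault-related frequencies depend on the machine, the slip, and the fault type, and are not specified in this abstract.

```python
# Hedged sketch: sideband amplitude extraction for motor current signature
# analysis using an FFT.  Frequencies used here are assumptions for illustration.
import numpy as np

def sideband_amplitudes(current, fs, f_supply=50.0, f_offset=25.0):
    """Return spectral amplitudes at the supply frequency and its two sidebands."""
    n = len(current)
    window = np.hanning(n)                              # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(current * window)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def amp_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]   # nearest FFT bin

    return {
        "fundamental": amp_at(f_supply),
        "lower_sideband": amp_at(f_supply - f_offset),
        "upper_sideband": amp_at(f_supply + f_offset),
    }

# usage with a synthetic current trace; in practice, healthy and faulty signals
# would be compared by the relative growth of the sideband amplitudes under
# open-loop versus sensorless control.
fs = 5000.0
t = np.arange(0, 2.0, 1.0 / fs)
i_a = np.sin(2 * np.pi * 50 * t) + 0.02 * np.sin(2 * np.pi * 75 * t)
print(sideband_amplitudes(i_a, fs))
```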