
    On continuation and convex Lyapunov functions

    Given any two continuous dynamical systems on Euclidean space such that the origin is globally asymptotically stable, and assuming that both systems come equipped with -- possibly different -- convex smooth Lyapunov functions asserting that the origin is indeed globally asymptotically stable, we show that the two dynamical systems are homotopic through qualitatively equivalent dynamical systems. It turns out that relaxing the assumption on the origin to any compact convex set, or relaxing the convexity assumption to geodesic convexity, does not alter the conclusion. Imposing the same convexity assumptions on control Lyapunov functions leads to a Hautus-like stabilizability test. These results ought to find applications in optimal control and reinforcement learning. Comment: 16 pages, comments are welcome. V2: fixed 1 typo.
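    For context, the Lyapunov conditions being invoked are the standard ones (a textbook background sketch, not a construction from this paper): for $\dot{x} = f(x)$ on $\mathbb{R}^n$ with $f(0) = 0$, a smooth function $V$ certifies global asymptotic stability of the origin when
    \[
        V(0) = 0, \qquad V(x) > 0 \ \text{for } x \neq 0, \qquad \langle \nabla V(x), f(x) \rangle < 0 \ \text{for } x \neq 0,
    \]
    together with radial unboundedness ($V(x) \to \infty$ as $\|x\| \to \infty$). A minimal convex example is $V(x) = \|x\|^2$ for the system $\dot{x} = -x$, where $\langle \nabla V(x), -x \rangle = -2\|x\|^2 < 0$ away from the origin.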

    E2GC: Energy-efficient Group Convolution in Deep Neural Networks

    The number of groups (g) in group convolution (GConv) is selected to boost the predictive performance of deep neural networks (DNNs) in a compute- and parameter-efficient manner. However, we show that naive selection of g in GConv creates an imbalance between the computational complexity and the degree of data reuse, which leads to suboptimal energy efficiency in DNNs. We devise an optimum group size model, which enables a balance between computational cost and data movement cost, thus optimizing the energy efficiency of DNNs. Based on the insights from this model, we propose an "energy-efficient group convolution" (E2GC) module where, unlike previous implementations of GConv, the group size (G) remains constant. Further, to demonstrate the efficacy of the E2GC module, we incorporate this module in the design of MobileNet-V1 and ResNeXt-50 and perform experiments on two GPUs, P100 and P4000. We show that, at comparable computational complexity, DNNs with a constant group size (E2GC) are more energy-efficient than DNNs with a fixed number of groups (FgGC). For example, on the P100 GPU, the energy efficiency of MobileNet-V1 and ResNeXt-50 increases by 10.8% and 4.73%, respectively, when E2GC modules substitute the FgGC modules in both DNNs. Furthermore, through extensive experimentation with the ImageNet-1K and Food-101 image classification datasets, we show that the E2GC module enables a trade-off between the generalization ability and the representational power of a DNN. Thus, the predictive performance of DNNs can be optimized by selecting an appropriate G. The code and trained models are available at https://github.com/iithcandle/E2GC-release. Comment: Accepted as a conference paper in the 2020 33rd International Conference on VLSI Design and 2020 19th International Conference on Embedded Systems (VLSID).
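    A minimal cost-model sketch of the fixed-g versus fixed-G distinction (illustrative Python only; the function names are ours and nothing here is taken from the E2GC release): with g groups, a layer's weight count shrinks by a factor of g, while each weight is reused across fewer input channels, which is the compute/data-reuse imbalance the paper targets.

        # Illustrative sketch (not from the E2GC code release).
        def gconv_params(c_in: int, c_out: int, k: int, g: int) -> int:
            """Weight count of a k x k group convolution with g groups:
            g * (c_in/g) * (c_out/g) * k * k = c_in * c_out * k * k / g."""
            assert c_in % g == 0 and c_out % g == 0, "channels must divide by g"
            return (c_in // g) * c_out * k * k

        def groups_for_fixed_group_size(c_in: int, group_size: int) -> int:
            """E2GC-style: hold the group size G constant, so the number of
            groups scales with the layer width instead of being fixed."""
            assert c_in % group_size == 0, "width must divide by G"
            return c_in // group_size

        # Example: a 256 -> 256 layer with 3x3 kernels, comparing a fixed
        # number of groups (g = 1, 4) against a fixed group size (G = 32).
        for g in (1, 4, groups_for_fixed_group_size(256, group_size=32)):
            print(f"g={g:3d}  params={gconv_params(256, 256, 3, g):,}")

    Under a fixed group size, wider layers automatically get more groups, so the per-group working set (and hence data movement per FLOP) stays roughly constant across the network instead of varying layer by layer.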