
    Cassini Ring Seismology as a Probe of Saturn's Interior I: Rigid Rotation

    Seismology of the gas giants holds the potential to resolve long-standing questions about their internal structure and rotation state. We construct a family of Saturn interior models constrained by the gravity field and compute their adiabatic mode eigenfrequencies and corresponding Lindblad and vertical resonances in Saturn's C ring, where more than twenty waves with pattern speeds faster than the ring mean motion have been detected and characterized using high-resolution Cassini Visual and Infrared Mapping Spectrometer (VIMS) stellar occultation data. We present identifications of the fundamental modes of Saturn that appear to be the origin of these observed ring waves, and use their observed pattern speeds and azimuthal wavenumbers to estimate the bulk rotation period of Saturn's interior to be $10{\rm h}\,33{\rm m}\,38{\rm s}^{+1{\rm m}\,52{\rm s}}_{-1{\rm m}\,19{\rm s}}$ (median and 5%/95% quantiles), significantly faster than Voyager and Cassini measurements of periods in Saturn's kilometric radiation, the traditional proxy for Saturn's bulk rotation period. The global fit does not exhibit any clear systematics indicating strong differential rotation in Saturn's outer envelope.
    Comment: 19 pages, 6 figures, 3 tables, accepted to ApJ; a bug fix improves the fit, predicts faster bulk spin periods (Figure 4), and virtually eliminates evidence for strong radial differential rotation (Figure 5).
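
    The mode identifications above rest on a simple kinematic relation: a mode with azimuthal wavenumber m and inertial-frame frequency sigma drives a ring wave with pattern speed Omega_p = sigma/m, and an inferred period such as 10h33m38s converts directly to an angular rotation rate. The Python sketch below shows only that bookkeeping; the example pattern speed and the function names are illustrative and are not taken from the paper.

```python
import numpy as np

def period_to_omega(hours, minutes, seconds):
    """Convert a rotation period given as (h, m, s) to an angular frequency in rad/s."""
    period_s = hours * 3600 + minutes * 60 + seconds
    return 2.0 * np.pi / period_s

def mode_frequency(pattern_speed, m):
    """Inertial-frame mode frequency sigma = m * Omega_p for a wave with
    azimuthal wavenumber m and pattern speed Omega_p (rad/s)."""
    return m * pattern_speed

# Bulk rotation period inferred in the abstract: 10h 33m 38s.
omega_saturn = period_to_omega(10, 33, 38)
print(f"Omega_Saturn ~ {omega_saturn:.3e} rad/s")

# Hypothetical ring wave: m = 3 with a pattern speed of 1807 deg/day
# (the numerical value is purely illustrative).
omega_p = np.deg2rad(1807.0) / 86400.0   # rad/s
print(f"sigma ~ {mode_frequency(omega_p, 3):.3e} rad/s")
```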

    Weight compression for deep networks using Kronecker products

    Deep networks have shown success in many challenging applications, e.g., image understanding, natural language processing, etc. The success of deep networks is traced to the large number of neurons deployed, each with weighted interconnections to other neurons. The large number of weights yields high classification accuracy but also consumes significant memory. This disclosure describes techniques to reduce the number of weights used in deep networks by representing the matrices of deep network weights as the Kronecker product of two or more smaller matrices. The reduction in weights is made possible by the observation that deep networks do not always use a majority of their weights. Training procedures are described for the resulting compressed network. The techniques of this disclosure enable deep networks to be deployed in small-footprint applications, e.g., mobile or wearable devices. Applications with no immediate memory constraint, e.g., servers, also benefit from the greater speed of deployment enabled by the techniques herein.
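
    A minimal numpy sketch of the core idea, under the assumption that a dense weight matrix of shape (m·p) × (n·q) is replaced by the Kronecker product of an m × n factor and a p × q factor; the shapes, seed, and variable names are illustrative rather than part of the disclosure.

```python
import numpy as np

# Factor shapes: the full layer behaves like a 1024 x 1024 weight matrix.
m, n = 32, 32
p, q = 32, 32
rng = np.random.default_rng(0)

# Two small factors whose Kronecker product stands in for the dense matrix.
A = rng.standard_normal((m, n))   # 32 x 32 = 1,024 parameters
B = rng.standard_normal((p, q))   # 32 x 32 = 1,024 parameters

W = np.kron(A, B)                 # equivalent dense matrix, (1024, 1024)
print(A.size + B.size, "stored parameters instead of", W.size)

# The Kronecker structure also gives a fast matrix-vector product:
# (A ⊗ B) x = flatten(A · X · B^T), where X is x reshaped (row-major) to (n, q).
x = rng.standard_normal(n * q)
y_full = W @ x
y_fast = (A @ x.reshape(n, q) @ B.T).reshape(-1)
print(np.allclose(y_full, y_fast))  # True
```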

    Structural and dynamical properties of random walk clusters

    Abstract. We study the structural and dynamical properties of the clusters generated by a nearest-neighbour random walk embedded in a d-dimensional space. We have focused on the non-trivial case in which the cluster is generated in d = 3. The structure of this cluster is characterised by loops on all length scales on the one hand, and by the fact that dead ends are negligible (upon scaling) on the other hand. The cluster is very dilute and is characterised by fractal dimension d_f = 2 and chemical dimension d_ℓ = 1.29 ± 0.04. From these results it follows that ν̃ = d_ℓ/d_f = 2/3, which is consistent with the formula ν̃ = 2/d (2 ≤ d ≤ 4), obtained using a Flory-type argument. The dynamical diffusion exponents d_w and d_w^ℓ were calculated using the exact enumeration method and found to be d_w = 3.45 ± 0.10 and d_w^ℓ = 2.28 ± 0.05. Our results suggest that the effect of loops is small but not negligible. We also calculated the fracton dimensionality of the cluster and obtained d_s = 1.14 ± 0.02. A scaling function is presented for the end-to-end mean-square displacement of a random walk performed on a random walk cluster. This scaling function is supported by our numerical results.
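
    A small Python sketch of the object under study, assuming the cluster is simply the set of distinct sites visited by a nearest-neighbour random walk on the simple cubic lattice (d = 3); the walk lengths and the crude mass-radius estimate of d_f are illustrative and are not the exact-enumeration analysis used in the paper.

```python
import numpy as np

def random_walk_cluster(n_steps, d=3, seed=0):
    """Set of distinct lattice sites visited by a nearest-neighbour random
    walk of n_steps steps on the d-dimensional hypercubic lattice."""
    rng = np.random.default_rng(seed)
    pos = np.zeros(d, dtype=int)
    visited = {tuple(pos)}
    for _ in range(n_steps):
        axis = rng.integers(d)
        pos[axis] += rng.choice((-1, 1))
        visited.add(tuple(pos))
    return visited

def gyration_radius(cluster):
    sites = np.array(sorted(cluster), dtype=float)
    return np.sqrt(((sites - sites.mean(axis=0)) ** 2).sum(axis=1).mean())

# Crude mass-radius estimate of the fractal dimension d_f: compare cluster
# mass S and radius of gyration R at two walk lengths; the slope
# log(S2/S1) / log(R2/R1) should come out close to 2 in d = 3.
c1, c2 = random_walk_cluster(50_000), random_walk_cluster(200_000, seed=1)
S1, S2 = len(c1), len(c2)
R1, R2 = gyration_radius(c1), gyration_radius(c2)
print(f"d_f estimate: {np.log(S2 / S1) / np.log(R2 / R1):.2f}")
```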

    Few-shot learning using generative modeling

    In many machine learning tasks, the available training data has a skewed distribution: a small set of training classes for which a large number of examples are available (“base classes”), and many classes for which only a limited number of examples are available (“few-shot classes”). This is known as the long-tail distribution problem. Few-shot learning refers to understanding new concepts from only a few examples. Training a classifier on these few-example classes is known as the few-shot classification task. Techniques disclosed herein improve classification accuracy for few-shot classes by leveraging examples from the base classes. A generative machine-learning model is trained using the base class examples and learns essential properties of the base classes. These essential properties, representing the intersection between base and few-shot classes, are applied to few-shot classes to generate additional few-shot examples. The generated few-shot examples are used to train a machine classifier to achieve better classification of inputs from few-shot classes.
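
    A rough illustration of the data flow described above, not the disclosed system: a simple class-conditional Gaussian "generative model" is fit to base-class features, its pooled covariance is reused to sample synthetic examples around each few-shot class, and a classifier is trained on the real plus generated data. The feature dimension, class counts, and the use of scikit-learn are all assumptions made for this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 16

# Base classes: many examples per class (synthetic stand-in features).
base_means = rng.standard_normal((5, dim)) * 3.0
base_data = [m + rng.standard_normal((500, dim)) for m in base_means]

# "Generative model" learned from the base classes: here, just the pooled
# within-class covariance, treated as a property shared with few-shot classes.
deviations = np.concatenate([x - x.mean(axis=0) for x in base_data])
pooled_cov = np.cov(deviations.T)

# Few-shot classes: only 5 real examples each.
fs_means = rng.standard_normal((2, dim)) * 3.0
fs_real = [m + rng.standard_normal((5, dim)) for m in fs_means]

# Generate extra few-shot examples by sampling around each class's empirical
# mean with the base-class covariance, then train on real + generated data.
X_parts, y_parts = [], []
for label, real in enumerate(fs_real):
    generated = rng.multivariate_normal(real.mean(axis=0), pooled_cov, size=200)
    X_parts.append(np.concatenate([real, generated]))
    y_parts.append(np.full(len(real) + 200, label))
clf = LogisticRegression(max_iter=1000).fit(np.concatenate(X_parts), np.concatenate(y_parts))

# Evaluate on fresh samples drawn from the true few-shot distributions.
X_test = np.concatenate([m + rng.standard_normal((100, dim)) for m in fs_means])
y_test = np.repeat(np.arange(len(fs_means)), 100)
print("held-out accuracy:", clf.score(X_test, y_test))
```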