
    Star-forming cores embedded in a massive cold clump: Fragmentation, collapse and energetic outflows

    The fate of massive cold clumps, their internal structure and collapse need to be characterised to understand the initial conditions for the formation of high-mass stars, stellar systems, and the origin of associations and clusters. We explore the onset of star formation in the 75 M_sun SMM1 clump in the region ISOSS J18364-0221 using infrared and (sub-)millimetre observations, including interferometry. This contracting clump has fragmented into two compact cores, SMM1 North and South, each of 0.05 pc radius, with masses of 15 and 10 M_sun and luminosities of 20 and 180 L_sun. SMM1 South harbours a source detected at 24 and 70 μm, drives an energetic molecular outflow, and appears supersonically turbulent at the core centre. SMM1 North has no infrared counterpart and shows lower levels of turbulence, but also drives an outflow. Both outflows appear collimated, and parsec-scale near-infrared features probably trace the outflow-powering jets. We derived mass outflow rates of at least 4E-5 M_sun/yr and outflow timescales of less than 1E4 yr. Our HCN(1-0) modelling for SMM1 South yielded an infall velocity of 0.14 km/s and an estimated mass infall rate of 3E-5 M_sun/yr. Both cores may harbour seeds of intermediate- or high-mass stars. We compare the derived core properties with recent simulations of massive core collapse; they are consistent with the very early stages, which are dominated by accretion luminosity. Comment: Accepted for publication in ApJ, 14 pages, 7 figures.
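
    The quoted infall rate can be cross-checked with a simple back-of-envelope estimate: for spherically symmetric infall onto a uniform-density core, Mdot_in ≈ 4 pi r^2 rho_mean v_in. The sketch below evaluates this with the stated radius, mass and infall velocity of SMM1 South. It is only an order-of-magnitude consistency check under assumed spherical symmetry and uniform density, not the HCN(1-0) line modelling the authors actually use, and it lands within a factor of a few of the quoted 3E-5 M_sun/yr.

```python
# Order-of-magnitude check of the mass infall rate for SMM1 South,
# assuming a uniform-density sphere and spherically symmetric infall.
# This is NOT the paper's method (HCN(1-0) modelling); it is a rough sketch.
import math

M_SUN = 1.989e33          # g
PC    = 3.086e18          # cm
YEAR  = 3.156e7           # s

m_core = 10 * M_SUN       # core mass quoted for SMM1 South
r_core = 0.05 * PC        # core radius quoted in the abstract
v_in   = 0.14e5           # infall velocity in cm/s (0.14 km/s)

rho_mean = m_core / (4.0 / 3.0 * math.pi * r_core**3)   # mean mass density
mdot_in  = 4.0 * math.pi * r_core**2 * rho_mean * v_in  # infall rate in g/s

print(f"mean density: {rho_mean:.2e} g/cm^3")
print(f"infall rate : {mdot_in * YEAR / M_SUN:.1e} M_sun/yr")
# ~9e-5 M_sun/yr, within a factor of a few of the quoted 3E-5 M_sun/yr.
```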

    Long Short-Term Memory Learns Context Free and Context Sensitive Languages

    Previous work on learning regular languages from exemplary training sequences showed that Long Short-Term Memory (LSTM) outperforms traditional recurrent neural networks (RNNs). Here we demonstrate LSTM's superior performance on context-free language (CFL) benchmarks, and show that it works even better than previous hardwired or highly specialized architectures. To the best of our knowledge, LSTM variants are also the first RNNs to learn a context-sensitive language (CSL), namely a^n b^n c^n.
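
    As a concrete illustration of what such a benchmark looks like (not the authors' exact experimental setup), the sketch below trains a small PyTorch LSTM on next-symbol prediction over strings of the canonical CFL a^n b^n. The symbol encoding, network size and training schedule are illustrative assumptions; testing on strings longer than those seen in training is what probes genuine generalisation.

```python
# Minimal next-symbol-prediction sketch on the CFL a^n b^n with an LSTM.
# Illustrative only: encoding, sizes and schedule are assumptions, not the
# configuration used in the paper.
import torch
import torch.nn as nn

SYMBOLS = ["S", "a", "b", "T"]          # start, a, b, terminator
IDX = {s: i for i, s in enumerate(SYMBOLS)}

def make_sequence(n):
    """Return input indices and next-symbol targets for S a^n b^n T."""
    seq = ["S"] + ["a"] * n + ["b"] * n + ["T"]
    x = torch.tensor([IDX[s] for s in seq[:-1]])
    y = torch.tensor([IDX[s] for s in seq[1:]])
    return x, y

class NextSymbolLSTM(nn.Module):
    def __init__(self, vocab=4, hidden=16):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x).unsqueeze(0))  # add batch dimension
        return self.out(h).squeeze(0)                 # (seq_len, vocab) logits

model = NextSymbolLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    n = torch.randint(1, 11, (1,)).item()   # train on n = 1..10
    x, y = make_sequence(n)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# Positions where the next symbol is not uniquely determined (e.g. whether
# another 'a' follows) cap the achievable accuracy below 1; the deterministic
# part is the length of the b-run and the final terminator. Evaluating with
# n = 15 (longer than any training string) probes generalisation.
with torch.no_grad():
    x, y = make_sequence(15)
    pred = model(x).argmax(dim=1)
    print("next-symbol accuracy on n=15:", (pred == y).float().mean().item())
```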

    Multiple Network Systems (Minos) Modules: Task Division and Module Discrimination

    It is widely considered an ultimate connectionist objective to incorporate neural networks into intelligent systems. These systems are intended to possess a varied repertoire of functions enabling adaptable interaction with a non-static environment. The first step in this direction is to develop various neural network algorithms and models; the second is to combine such networks into a modular structure that might be incorporated into a workable system. In this paper we consider one aspect of the second step, namely processing reliability and the hiding of wetware details. We present an architecture for a type of neural expert module, called an Authority. An Authority consists of a number of Minos modules. Each Minos module in an Authority has the same processing capabilities, but differs in its particular specialization to aspects of the problem domain. The Authority employs the collection of Minoses like a panel of experts. The expert with the highest confidence ...
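
    A minimal sketch of the organisation described above, assuming each Minos module returns an answer together with a scalar confidence and that the Authority simply forwards the answer of the most confident module. The class names, confidence convention and toy experts are illustrative assumptions, not an interface taken from the paper.

```python
# Sketch of an Authority dispatching queries to a panel of Minos modules.
# Names and the confidence convention are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

@dataclass
class MinosModule:
    """One specialised expert: maps an input to (answer, confidence in [0, 1])."""
    name: str
    evaluate: Callable[[object], Tuple[object, float]]

class Authority:
    """Queries every Minos module and returns the most confident answer."""
    def __init__(self, modules: Sequence[MinosModule]):
        self.modules = list(modules)

    def __call__(self, x):
        answers = [(m.name, *m.evaluate(x)) for m in self.modules]
        name, answer, confidence = max(answers, key=lambda t: t[2])
        return {"module": name, "answer": answer, "confidence": confidence}

# Toy usage: two "experts" specialised on different parts of the input range.
low  = MinosModule("low-range",  lambda x: ("small", 1.0 - min(abs(x) / 10.0, 1.0)))
high = MinosModule("high-range", lambda x: ("large", min(abs(x) / 10.0, 1.0)))
panel = Authority([low, high])
print(panel(2))    # the low-range module is most confident
print(panel(25))   # the high-range module is most confident
```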

    Online learning with adaptive local step sizes

    Almeida et al. have recently proposed online algorithms for local step size adaptation in nonlinear systems trained by gradient descent. Here we develop an alternative to their approach by extending Sutton’s work on linear systems to the general, nonlinear case. The resulting algorithms are computationally little more expensive than other acceleration techniques, do not assume statistical independence between successive training patterns, and do not require an arbitrary smoothing parameter. In our benchmark experiments, they consistently outperform other acceleration methods as well as stochastic gradient descent with fixed learning rate and momentum.
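
    The abstract does not spell out the update rule, so the sketch below only illustrates the general idea of per-parameter ("local") step sizes adapted online: each step size grows while successive gradient components agree in sign and shrinks when they oscillate, a generic rule in the spirit of Sutton-style adaptation. It is an assumption for illustration, not the authors' algorithm, and the toy problem, meta step size and clipping bounds are likewise assumed.

```python
# Illustrative sketch of online gradient descent with local (per-parameter)
# step sizes. The multiplicative sign-agreement rule below is a generic
# stand-in, NOT the algorithm proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

w_true = np.array([2.0, -3.0, 0.5])   # toy online regression target
w = np.zeros(3)
eta = np.full(3, 0.01)                # one local step size per weight
prev_grad = np.zeros(3)
mu = 0.05                             # meta step size for adapting eta

for t in range(5000):
    x = rng.normal(size=3)
    y = w_true @ x + 0.1 * rng.normal()
    err = w @ x - y
    grad = err * x                    # gradient of the squared error 0.5*err**2

    # Agreement of successive gradients => step size too small, grow it;
    # disagreement => overshooting, shrink it. Clipping keeps eta stable.
    eta *= np.exp(mu * np.sign(grad * prev_grad))
    eta = np.clip(eta, 1e-5, 0.2)

    w -= eta * grad
    prev_grad = grad

print("learned w :", np.round(w, 2))
print("true w    :", w_true)
print("final eta :", np.round(eta, 3))
```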