    Combating catastrophic forgetting with developmental compression

    Generally intelligent agents exhibit successful behavior across problems in several settings. Endemic in approaches to realize such intelligence in machines is catastrophic forgetting: sequential learning corrupts knowledge obtained earlier in the sequence, or tasks antagonistically compete for system resources. Methods for obviating catastrophic forgetting have sought either to identify and preserve features of the system necessary to solve one problem when learning to solve another, or to enforce modularity such that minimally overlapping sub-functions contain task-specific knowledge. While successful, both approaches scale poorly because they require larger architectures as the number of training instances grows, causing different parts of the system to specialize for separate subsets of the data. Here we present a method for addressing catastrophic forgetting called developmental compression. It exploits the mild impacts of developmental mutations to lessen adverse changes to previously evolved capabilities and "compresses" specialized neural networks into a generalized one. In the absence of domain knowledge, developmental compression produces systems that avoid overt specialization, alleviating the need to engineer a bespoke system for every task permutation and suggesting better scalability than existing approaches. We validate this method on a robot control problem and hope to extend the approach to other machine learning domains in the future.
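
    As a rough illustration of the compression idea (not the authors' algorithm), the Python toy sketch below represents two specialized controllers as weight vectors and expresses behaviour as a linear developmental interpolation between them over the agent's lifetime; all names, sizes, and the interpolation scheme are assumptions made for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        # Two "specialized" controllers, represented here simply as weight vectors.
        w_task_a = rng.normal(size=16)
        w_task_b = rng.normal(size=16)

        def developmental_weights(w_early, w_late, t, lifetime):
            """Weights expressed at time t of the agent's lifetime.

            Early in life the agent behaves like w_early, late in life like w_late,
            so a mutation that nudges w_late changes behaviour gradually rather
            than abruptly; this is the "mild impact" the method relies on.
            """
            s = t / lifetime                      # developmental clock in [0, 1]
            return (1.0 - s) * w_early + s * w_late

        # "Compression" would correspond to evolution driving the two endpoints
        # together, so that one generalized controller solves both tasks.
        print(f"distance between specialists: {np.linalg.norm(w_task_a - w_task_b):.3f}")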

    Evolving higher-order synergies reveals a trade-off between stability and information integration capacity in complex systems

    There has recently been an explosion of interest in how "higher-order" structures emerge in complex systems. This "emergent" organization has been found in a variety of natural and artificial systems, although at present the field lacks a unified understanding of what the consequences of higher-order synergies and redundancies are for systems. Typical studies treat the presence (or absence) of synergistic information as a dependent variable and report changes in the level of synergy in response to some change in the system. Here, we attempt to flip the script: rather than treating higher-order information as a dependent variable, we use evolutionary optimization to evolve Boolean networks with significant higher-order redundancies, synergies, or statistical complexity. We then analyse these evolved populations of networks using established tools for characterizing discrete dynamics: the number of attractors, the average transient length, and the Derrida coefficient. We also assess the capacity of the systems to integrate information. We find that high-synergy systems are unstable and chaotic, but with a high capacity to integrate information. In contrast, evolved redundant systems are extremely stable, but have negligible capacity to integrate information. Finally, the complex systems that balance integration and segregation (known as Tononi-Sporns-Edelman complexity) show features of both chaoticity and stability, with a greater capacity to integrate information than the redundant systems while being more stable than the random and synergistic systems. We conclude that there may be a fundamental trade-off between the robustness of a system's dynamics and its capacity to integrate information (which inherently requires flexibility and sensitivity), and that certain kinds of complexity naturally balance this trade-off.
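
    For readers unfamiliar with one of the diagnostics named above, the Python sketch below estimates the Derrida coefficient of a small random Boolean network; the network construction and sizes are illustrative assumptions, not the evolved networks studied in the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        N, K = 10, 2                     # nodes and in-degree (illustrative sizes)

        # Random Boolean network: each node reads K random inputs through a
        # random truth table.
        inputs = [rng.choice(N, size=K, replace=False) for _ in range(N)]
        tables = [rng.integers(0, 2, size=2 ** K) for _ in range(N)]

        def step(state):
            """Synchronously update every node from its truth table."""
            new = np.empty_like(state)
            for i in range(N):
                idx = int("".join(str(b) for b in state[inputs[i]]), 2)
                new[i] = tables[i][idx]
            return new

        def derrida_coefficient(samples=2000):
            """Mean Hamming distance after one step, starting one bit apart.

            Values above 1 indicate chaotic, perturbation-amplifying dynamics;
            values below 1 indicate ordered, stable dynamics.
            """
            total = 0
            for _ in range(samples):
                a = rng.integers(0, 2, size=N)
                b = a.copy()
                b[rng.integers(N)] ^= 1          # flip a single node
                total += int(np.sum(step(a) != step(b)))
            return total / samples

        print(f"Derrida coefficient: {derrida_coefficient():.2f}")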

    Spectral Modeling of SNe Ia Near Maximum Light: Probing the Characteristics of Hydro Models

    We have performed detailed NLTE spectral synthesis modeling of two types of 1-D hydro models: the very highly parameterized deflagration model W7, and two delayed detonation models. We find that overall both types of model do about equally well at fitting well-observed SNe Ia near maximum light. However, the Si II 6150 feature of W7 is systematically too fast, and while it is also somewhat too fast in the delayed detonation models, they reproduce it significantly better than W7 does. We find that a parameterized mixed model does the best job of reproducing the Si II 6150 line near maximum light, and we study the differences in the models that lead to better fits to normal SNe Ia. We discuss what is required of a hydro model to fit the spectra of observed SNe Ia near maximum light.
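
    To make the notion of a "too fast" Si II 6150 feature concrete, the sketch below measures the blueshift velocity of the absorption trough with the classical Doppler formula. The rest wavelength of the Si II doublet (6355 Å) is standard, but the search window and the synthetic spectrum are illustrative assumptions, not the NLTE modeling code used in the paper.

        import numpy as np

        C_KM_S = 299792.458        # speed of light [km/s]
        SI_II_REST = 6355.0        # rest wavelength of the Si II doublet [Angstrom]

        def si_ii_velocity(wavelength, flux, window=(5900.0, 6300.0)):
            """Blueshift velocity of the Si II "6150" absorption trough [km/s].

            The feature is the blueshifted absorption of Si II 6355; a "faster"
            (more blueshifted) trough means a higher inferred ejecta velocity.
            """
            m = (wavelength >= window[0]) & (wavelength <= window[1])
            trough = wavelength[m][np.argmin(flux[m])]
            return C_KM_S * (SI_II_REST - trough) / SI_II_REST

        # Illustrative usage with a synthetic Gaussian absorption near 6100 Angstrom.
        wl = np.linspace(5500.0, 7000.0, 1500)
        fl = 1.0 - 0.6 * np.exp(-0.5 * ((wl - 6100.0) / 60.0) ** 2)
        print(f"Si II velocity: {si_ii_velocity(wl, fl):.0f} km/s")   # about 12,000 km/s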

    The DICE calibration project: design, characterization, and first results

    We describe the design, operation, and first results of a photometric calibration project, called DICE (Direct Illumination Calibration Experiment), aimed at achieving precise instrumental calibration of optical telescopes. The heart of DICE is an illumination device composed of 24 narrow-spectrum, high-intensity light-emitting diodes (LEDs) chosen to cover the ultraviolet-to-near-infrared spectral range. It implements a point-like source placed at a finite distance from the telescope entrance pupil, yielding a flat-field illumination that covers the entire field of view of the imager. The purpose of this system is to perform lightweight routine monitoring of the imager passbands with a precision better than 5 per mil on the relative passband normalisations and about 3 Å on the filter cutoff positions. The light source is calibrated on a spectrophotometric bench. As our fundamental metrology standard, we use a photodiode calibrated at NIST. The radiant intensity of each beam is mapped, and spectra are measured for each LED. All measurements are conducted at temperatures ranging from 0 °C to 25 °C in order to study the temperature dependence of the system. The photometric and spectroscopic measurements are combined into a model that predicts the spectral intensity of the source as a function of temperature. We find that the calibration beams are stable at the 10⁻⁴ level once the slight temperature dependence of the LED emission properties is taken into account. We show that the spectral intensity of the source can be characterised with a precision of 3 Å in wavelength. In flux, we reach an accuracy of about 0.2-0.5%, depending on how we understand the off-diagonal terms of the error budget affecting the calibration of the NIST photodiode. With a routine 60-minute calibration program, the apparatus is able to constrain the passbands at the targeted precision levels.
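
    As a sketch of the kind of first-order temperature model described, the Python snippet below fits a linear temperature coefficient to hypothetical bench measurements of one LED channel and uses it to predict the relative intensity at another temperature. The numbers are invented for illustration and are not DICE data; a full model would also track the wavelength shift of each LED with temperature.

        import numpy as np

        # Hypothetical bench data for one LED channel: temperature [deg C] and
        # relative radiant intensity (arbitrary units). Invented for illustration.
        temperature = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
        intensity = np.array([1.010, 1.006, 1.002, 0.998, 0.994, 0.990])

        # First-order model I(T) = I(T_ref) * (1 + alpha * (T - T_ref)),
        # fitted by linear least squares.
        T_REF = 12.5
        slope, i_ref = np.polyfit(temperature - T_REF, intensity, 1)
        alpha = slope / i_ref
        print(f"temperature coefficient: {100 * alpha:.3f} %/degC")

        def predict_intensity(T):
            """Predicted relative intensity at temperature T [deg C]."""
            return i_ref * (1.0 + alpha * (T - T_REF))

        print(f"predicted intensity at 18 degC: {predict_intensity(18.0):.4f}")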

    Acoustic transmission line metamaterial with negative/zero/positive refractive index

    A one-dimensional acoustic negative refractive index metamaterial based on the transmission line approach is presented. This structure implements the dual transmission line concept extensively investigated in microwave engineering. It consists of an acoustic waveguide periodically loaded with membranes realizing the function of series "capacitances" and transversally connected open channels realizing shunt "inductances." Transmission-line-based metamaterials can exhibit a negative refractive index without relying on resonance phenomena, which results in a bandwidth of operation much broader than that observed in resonant devices. In the present case, the negative refractive index band extends over almost one octave, from 0.6 to 1 kHz. The developed structure also exhibits a seamless transition between the negative and positive refractive index bands, with a zero index at the transition frequency of 1 kHz. At this frequency, the unit cell is only one-tenth of the wavelength. Simple acoustic circuit models are introduced, which allow efficient designs both in terms of dispersion and impedance while accurately describing all the physical phenomena. Using this approach, good matching at the structure terminations is achieved. Full-wave simulations, made for a 10-cell-long structure, confirm the good performance in terms of dispersion diagram, Bloch impedance, and reflection and transmission coefficients.
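
    A minimal numerical sketch of this kind of composite transmission-line analysis is given below: it builds the series and shunt branches of one unit cell from lumped acoustic elements and evaluates the Bloch phase q*d from the cell's ABCD matrix, using cos(q*d) = 1 + Z*Y/2. The element values are illustrative assumptions chosen so that the series and shunt resonances coincide at 1 kHz; they are not the paper's design parameters.

        import numpy as np

        # Illustrative lumped acoustic elements (consistent but arbitrary units),
        # chosen so that the series and shunt resonances coincide at 1 kHz,
        # i.e. a "balanced" cell with a seamless negative-to-positive transition.
        F0 = 1000.0                          # transition frequency [Hz]
        W0 = 2.0 * np.pi * F0
        L_R, C_R = 100.0, 2.5e-9             # host waveguide: series mass, shunt compliance
        C_L = 1.0 / (W0 ** 2 * L_R)          # series "capacitance" (membrane)
        L_L = 1.0 / (W0 ** 2 * C_R)          # shunt "inductance" (open channel)

        def bloch_phase(f):
            """Bloch phase q*d of one unit cell: cos(q*d) = 1 + Z*Y/2.

            A real result means a pass band; a nonzero imaginary part means a
            stop band. Below the transition frequency the pass band supports a
            backward (negative-index) wave, above it a forward wave.
            """
            w = 2.0 * np.pi * f
            Z = 1j * w * L_R + 1.0 / (1j * w * C_L)     # series branch impedance
            Y = 1j * w * C_R + 1.0 / (1j * w * L_L)     # shunt branch admittance
            return np.arccos(1.0 + Z * Y / 2.0 + 0j)

        for f in (800.0, 1000.0, 1300.0):
            qd = complex(bloch_phase(f))
            print(f"{f:7.1f} Hz   q*d = {qd.real:.3f} {qd.imag:+.3f}j")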

    Type Ia Supernova Spectral Line Ratios as Luminosity Indicators

    Type Ia supernovae have played a crucial role in the discovery of dark energy, via the measurement of their light curves and the determination of their peak brightness by fitting templates to the observed light-curve shape. Two spectroscopic indicators are also known to be well correlated with peak luminosity. Since these spectroscopic luminosity indicators are obtained directly from observed spectra, they have different systematic errors than measurements based on photometry. Additionally, they may be useful for studies of the effects of evolution or age of the SN Ia progenitor population. We present several new variants of such spectroscopic indicators that are easy to automate and that minimize the effects of noise. We show that these spectroscopic indicators can be measured by proposed JDEM missions such as SNAP and JEDI.
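
    The following sketch shows one simple way such an indicator can be automated: the fractional depth of an absorption feature is measured against a straight pseudo-continuum drawn between the flux peaks on either side, and two depths are combined into a ratio in the spirit of the classic R(Si II) indicator. The window limits are rough illustrative values, not the specific variants defined in the paper.

        import numpy as np

        def feature_depth(wl, flux, blue, red):
            """Fractional depth of an absorption trough between two anchor peaks.

            The pseudo-continuum is a straight line joining the flux maxima found
            in the `blue` and `red` wavelength windows on either side of the trough.
            """
            def peak(lo, hi):
                m = (wl >= lo) & (wl <= hi)
                return wl[m][np.argmax(flux[m])], flux[m].max()
            (w1, f1), (w2, f2) = peak(*blue), peak(*red)
            m = (wl >= w1) & (wl <= w2)
            continuum = np.interp(wl[m], [w1, w2], [f1, f2])
            return float(np.max(1.0 - flux[m] / continuum))

        def depth_ratio(wl, flux):
            """Depth ratio of the Si II ~5750 and ~6150 features (R(Si II)-like)."""
            d_5750 = feature_depth(wl, flux, blue=(5500., 5650.), red=(5850., 6000.))
            d_6150 = feature_depth(wl, flux, blue=(5850., 6000.), red=(6200., 6500.))
            return d_5750 / d_6150

    In the classic R(Si II) relation, a larger depth ratio at maximum light corresponds to a dimmer, faster-declining supernova.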

    Constraining Type Ia supernova models: SN 2011fe as a test case

    The nearby supernova SN 2011fe can be observed in unprecedented detail. Therefore, it is an important test case for Type Ia supernova (SN Ia) models, which may bring us closer to understanding the physical nature of these objects. Here, we explore how available and expected future observations of SN 2011fe can be used to constrain SN Ia explosion scenarios. We base our discussion on three-dimensional simulations of a delayed detonation in a Chandrasekhar-mass white dwarf and of a violent merger of two white dwarfs, which are realizations of explosion models appropriate for two of the most widely discussed progenitor channels that may give rise to SNe Ia. Although both models have their shortcomings in reproducing details of the early and near-maximum spectra of SN 2011fe obtained by the Nearby Supernova Factory (SNfactory), the overall match with the observations is reasonable. The level of agreement is slightly better for the merger, in particular around maximum, but a clear preference for one model over the other is still not justified. Observations at late epochs, however, hold promise for discriminating the explosion scenarios in a straightforward way, as a nucleosynthesis effect leads to differences in the 55Co production. SN 2011fe is close enough to be followed sufficiently long to study this effect.
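
    To see why late epochs probe the 55Co yield, the sketch below evaluates the standard two-step Bateman solution for the chain 55Co -> 55Fe -> 55Mn: the short-lived 55Co feeds the long-lived 55Fe, whose slow decay can contribute to the energy input years after the explosion. The half-lives are approximate textbook values and the normalization is arbitrary; this is an illustration, not the paper's radiative-transfer calculation.

        import numpy as np

        LN2 = np.log(2.0)
        # Approximate half-lives: 55Co about 17.5 h, its daughter 55Fe about 2.74 yr.
        LAM_CO55 = LN2 / (17.5 / 24.0)          # decay constants in 1/day
        LAM_FE55 = LN2 / (2.74 * 365.25)

        def fe55_activity(t_days, n_co55_0=1.0):
            """Relative 55Fe decay rate at time t for an initial amount of 55Co.

            Two-step Bateman solution: N_Fe(t) = N_Co(0) * lam_Co / (lam_Fe - lam_Co)
            * (exp(-lam_Co * t) - exp(-lam_Fe * t)); the activity is lam_Fe * N_Fe.
            """
            n_fe55 = (n_co55_0 * LAM_CO55 / (LAM_FE55 - LAM_CO55)
                      * (np.exp(-LAM_CO55 * t_days) - np.exp(-LAM_FE55 * t_days)))
            return LAM_FE55 * n_fe55

        for t in (100.0, 500.0, 1000.0):
            print(f"t = {t:6.0f} d   relative 55Fe activity = {fe55_activity(t):.3e}")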