
    The birth rate of subluminous and overluminous type Ia supernovae

    Based on the results of Chen & Li (2009) and Pakmor et al. (2010), we carried out a series of binary population synthesis calculations and considered two treatments of common envelope (CE) evolution, i.e. the α-formalism and the γ-algorithm. We found that the evolution of the birth rate of these peculiar SNe Ia depends heavily on how the CE evolution is treated. The over-luminous SNe Ia may only occur for the α-formalism with low CE ejection efficiency, and the delay time of these SNe Ia is between 0.4 and 0.8 Gyr. The upper limit of the contribution of these supernovae to all SNe Ia is less than 0.3%. The delay time of sub-luminous SNe Ia from equal-mass double-degenerate (DD) systems is between 0.1 and 0.3 Gyr for the α-formalism with α = 3.0, but longer than 9 Gyr for α = 1.0. The range of delay times for the γ-algorithm is very wide, i.e. longer than 0.22 Gyr and even as long as 15 Gyr. Sub-luminous SNe Ia from equal-mass DD systems may account for no more than 1% of all observed SNe Ia. The super-Chandrasekhar-mass model of Chen & Li (2009) may account for a part of the 2003fg-like supernovae, and the equal-mass DD model developed by Pakmor et al. (2010) may explain some 1991bg-like events, too. In addition, based on the comparison between theory and observations, including the birth rate and delay time of the 1991bg-like events, we found that the γ-algorithm is more likely than the α-formalism to be an appropriate prescription of the CE evolution of DD systems if equal-mass DD systems are the progenitors of 1991bg-like SNe Ia.
    Comment: 8 pages, 2 figures, accepted for publication in A&

    Deep Learning Based on Orthogonal Approximate Message Passing for CP-Free OFDM

    Channel estimation and signal detection are very challenging for an orthogonal frequency division multiplexing (OFDM) system without a cyclic prefix (CP). In this article, deep learning based on orthogonal approximate message passing (DL-OAMP) is used to address these problems. The DL-OAMP receiver includes a channel estimation neural network (CE-Net) and a signal detection neural network based on OAMP, called OAMP-Net. The CE-Net is initialized by the least-squares channel estimation algorithm and refined by a minimum mean-squared error (MMSE) neural network. The OAMP-Net is established by unfolding the iterative OAMP algorithm and adding some trainable parameters to improve detection performance. The DL-OAMP receiver has low complexity and can estimate time-varying channels with only a single training. Simulation results demonstrate that the bit-error rate (BER) of the proposed scheme is lower than those of competing algorithms for high-order modulation.
    Comment: 5 pages, 4 figures, updated manuscript, International Conference on Acoustics, Speech and Signal Processing (ICASSP 2019). arXiv admin note: substantial text overlap with arXiv:1903.0476
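    The abstract's central idea, unfolding a fixed number of iterations of an iterative algorithm into network layers and giving each layer its own trainable parameters, can be illustrated with a toy sketch. This is not the OAMP-Net architecture: it unfolds plain gradient-descent detection of BPSK symbols over a hypothetical random linear channel, with per-layer step sizes standing in for the trainable parameters (left at their initial values here, whereas the paper learns them by back-propagation).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear model y = Hx + n, standing in for the OFDM detection problem.
    n_tx, n_rx = 4, 8
    H = rng.standard_normal((n_rx, n_tx))
    x_true = rng.choice([-1.0, 1.0], size=n_tx)        # BPSK symbols
    y = H @ x_true + 0.01 * rng.standard_normal(n_rx)

    # "Unfolded" detector: T fixed iterations, each treated as one network
    # layer with its own step-size parameter. Training would tune step[t];
    # here the parameters are simply initialized to a stable value.
    T = 10
    step = np.full(T, 0.05)

    x_hat = np.zeros(n_tx)
    for t in range(T):
        # One layer = one gradient step on ||y - H x||^2.
        x_hat = x_hat + step[t] * H.T @ (y - H @ x_hat)

    symbols = np.sign(x_hat)   # hard BPSK decision
    print(symbols)
    ```

    The point of the unfolding is that the loop has a fixed depth, so the whole detector is a feed-forward computation graph and the per-layer scalars can be optimized end to end like any other network weights.
    
    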

    Identifying and evaluating parallel design activities using the design structure matrix

    Concurrent Engineering (CE) places emphasis on the management of the product development process, and one of its major benefits is the reduction in lead-time and product cost [1]. One approach that CE promotes for reducing lead-time is the simultaneous enactment of activities, otherwise known as Simultaneous Engineering. This paper describes an approach based upon the Design Structure Matrix (DSM) for identifying, evaluating and optimising this aspect of CE: activity parallelism. Whilst activity parallelism may contribute to reductions in lead-time and product cost, iteration is also recognised as a contributing factor to lead-time, and hence was combined within the investigation. The paper describes how parallel activities may be identified within the DSM, before detailing how a process may be evaluated with respect to parallelism and iteration using the DSM. An optimisation algorithm is then utilised to establish a near-optimal sequence for the activities with respect to parallelism and iteration. DSM-based processes from previously published research are used to describe the development of the approach.
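    The idea of reading parallelism off a DSM can be sketched briefly. In a binary DSM, entry [i, j] = 1 means activity i needs information from activity j; activities whose inputs are all complete can be enacted simultaneously. The sketch below uses a hypothetical five-activity process (the labels and dependencies are illustrative, not taken from the paper) and groups activities into stages of parallel work:

    ```python
    import numpy as np

    # Binary DSM: entry [i, j] = 1 means activity i needs output from j.
    # A hypothetical 5-activity process for illustration.
    labels = ["A", "B", "C", "D", "E"]
    dsm = np.array([
        [0, 0, 0, 0, 0],   # A depends on nothing
        [1, 0, 0, 0, 0],   # B depends on A
        [1, 0, 0, 0, 0],   # C depends on A
        [0, 1, 1, 0, 0],   # D depends on B and C
        [0, 0, 0, 1, 0],   # E depends on D
    ])

    n = len(labels)
    done = np.zeros(n, dtype=bool)
    stages = []
    while not done.all():
        # An activity is ready when none of its unfinished predecessors
        # still feed it information.
        ready = [i for i in range(n)
                 if not done[i] and not (dsm[i] & ~done).any()]
        if not ready:
            # Entries above the diagonal that never clear indicate an
            # iteration loop, which the paper handles separately.
            raise ValueError("iteration loop detected in DSM")
        stages.append([labels[i] for i in ready])
        done[ready] = True

    print(stages)   # [['A'], ['B', 'C'], ['D'], ['E']]
    ```

    Here B and C share a stage because neither depends on the other, which is exactly the activity parallelism the DSM exposes; the number of stages is a simple proxy for lead-time that a sequencing algorithm can then minimise.
    
    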

    Marginal Likelihood Estimation with the Cross-Entropy Method

    We consider an adaptive importance sampling approach to estimating the marginal likelihood, a quantity that is fundamental in Bayesian model comparison and Bayesian model averaging. This approach is motivated by the difficulty of obtaining an accurate estimate through existing algorithms that use Markov chain Monte Carlo (MCMC) draws, which are typically costly to obtain and highly correlated in high-dimensional settings. In contrast, we use the cross-entropy (CE) method, a versatile adaptive Monte Carlo algorithm originally developed for rare-event simulation. The main advantage of the importance sampling approach is that random samples can be obtained from some convenient density at little additional cost. As we are generating independent draws instead of correlated MCMC draws, the increase in simulation effort is much smaller should one wish to reduce the numerical standard error of the estimator. Moreover, the importance density derived via the CE method is optimal in a well-defined sense. We demonstrate the utility of the proposed approach with two empirical applications involving women's labor market participation and U.S. macroeconomic time series. In both applications the proposed CE method compares favorably to existing estimators.
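    The mechanics of the method can be sketched on a toy problem. This is a minimal illustration, not the paper's applications: a conjugate model (theta ~ N(0, 1), y_i | theta ~ N(theta, 1)) whose marginal likelihood is available in closed form, so the CE-based estimate can be checked. A Gaussian importance density is adapted by weighted moment matching, which minimizes the cross-entropy to the optimal (zero-variance) importance density proportional to p(y | theta) p(theta), and the fitted density is then used for a final independent-sample importance sampling estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy conjugate model: theta ~ N(0, 1), y_i | theta ~ N(theta, 1).
    y = rng.normal(0.5, 1.0, size=20)
    n = len(y)

    def log_joint(theta):
        # log p(y | theta) + log p(theta), vectorized over a theta array.
        ll = -0.5 * n * np.log(2 * np.pi) \
             - 0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1)
        lp = -0.5 * np.log(2 * np.pi) - 0.5 * theta ** 2
        return ll + lp

    def log_q(theta, mu, sigma):
        # log density of the Gaussian importance distribution.
        return -0.5 * np.log(2 * np.pi * sigma ** 2) \
               - 0.5 * ((theta - mu) / sigma) ** 2

    # CE iterations: fit the importance density by weighted moment matching.
    mu, sigma = 0.0, 2.0
    for _ in range(5):
        theta = rng.normal(mu, sigma, size=2000)
        logw = log_joint(theta) - log_q(theta, mu, sigma)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        mu = (w * theta).sum()
        sigma = np.sqrt((w * (theta - mu) ** 2).sum())

    # Final importance sampling estimate of log p(y), using independent draws.
    theta = rng.normal(mu, sigma, size=20000)
    logw = log_joint(theta) - log_q(theta, mu, sigma)
    log_ml = np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()

    # Closed-form log marginal likelihood for this conjugate model.
    S = y.sum()
    log_ml_true = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
                   - 0.5 * (y ** 2).sum() + 0.5 * S ** 2 / (n + 1))
    print(log_ml, log_ml_true)
    ```

    Because the draws in the final step are independent, halving the numerical standard error only requires quadrupling the sample size, which is the practical advantage the abstract highlights over correlated MCMC draws.
    
    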