
    Photodissociation and photoionisation of atoms and molecules of astrophysical interest

    A new collection of photodissociation and photoionisation cross sections for 102 atoms and molecules of astrochemical interest has been assembled, along with a brief review of the basic processes involved. These cross sections have been used to calculate dissociation and ionisation rates, with uncertainties, in a standard ultraviolet interstellar radiation field (ISRF) and in wavelength-dependent radiation fields. The new ISRF rates generally agree within 30% with our previous compilations, with a few notable exceptions. The reduction of rates in shielded regions was calculated as a function of dust, molecular and atomic hydrogen, atomic C, and self-shielding column densities. The relative importance of these shielding types depends on the species in question and on the dust optical properties. The new data are publicly available from the Leiden photodissociation and ionisation database. The sensitivity of the rates to variations in temperature and isotope, and to cross-section uncertainties, is tested. Tests conducted with an interstellar-cloud chemical model find general agreement (within a factor of two) with the previous iteration of the Leiden database for the ISRF, and order-of-magnitude variations when various kinds of stellar radiation are assumed. The newly parameterised dust-shielding factors make a factor-of-two difference to many atomic and molecular abundances relative to the parameters currently in the UDfA and KIDA astrochemical reaction databases. The newly calculated cosmic-ray-induced photodissociation and ionisation rates differ from current standard values by up to a factor of 5. Under high-temperature and high-cosmic-ray-flux conditions the new rates alter the equilibrium abundances of abundant dark-cloud species by up to a factor of two. The partial cross sections for H2O and NH3 photodissociation forming OH, O, NH2 and NH are also evaluated and lead to radiation-field-dependent branching ratios.
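    As a rough illustration of how such rates follow from tabulated cross sections, the sketch below integrates a toy wavelength-dependent cross section against a toy UV photon flux, k = ∫ σ(λ) I(λ) dλ, and applies an exp(−γ A_V) dust-shielding factor of the kind used in the UDfA/KIDA parameterisation. All numerical values, the Gaussian cross-section shape, and the constant flux are placeholders for illustration, not data from the Leiden database.

```python
import numpy as np

# Minimal sketch: photodissociation rate as the integral of the cross
# section sigma(lambda) times the photon flux I(lambda),
#   k = integral of sigma(lambda) * I(lambda) d(lambda)   [s^-1]
# All numbers below are illustrative placeholders, not values from the
# Leiden photodissociation and ionisation database.

wavelength = np.linspace(912.0, 2400.0, 500)                    # Angstrom, UV range
sigma = 1e-17 * np.exp(-((wavelength - 1400.0) / 200.0) ** 2)   # cm^2, toy cross section
photon_flux = 1e8 * np.ones_like(wavelength)                    # photons cm^-2 s^-1 A^-1, toy field

rate_unshielded = np.trapz(sigma * photon_flux, wavelength)     # s^-1

# Dust shielding is often parameterised as exp(-gamma * Av);
# gamma and Av here are placeholder values.
gamma, Av = 2.0, 1.5
rate_shielded = rate_unshielded * np.exp(-gamma * Av)

print(f"unshielded rate ~ {rate_unshielded:.2e} s^-1, shielded ~ {rate_shielded:.2e} s^-1")
```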

    The treatment of coordinators and subordinators in Afrikaans dictionaries

    The discrepancy between the need for lexicographic guidance on conjunctions and the relative indifference to this need in lexicographic research and practice gave rise to this article, in which the unsatisfactory treatment of conjunctions in Afrikaans dictionaries is pointed out and a few constructive lexicographic solutions for the treatment of this word-class category are proposed. A first recommendation is that the lemmata voegwoord, verbindingswoord, neweskikker, onderskikker and voegende bywoord provide more in-depth syntactic information, with enough examples (also across sentence boundaries). There should be cross-references from the specific conjunction lemmata to these lemmata. The examples offered should also indicate typical lexical and grammatical patterns, as well as whether hypotactic linking is possible or only embedding. In learners' dictionaries the typical lexical patterns can appear in bold print. In comprehensive dictionaries, such as the WAT, care must be taken to enable faster information retrieval, and lexicographers should not equate functions with polysemous sense distinctions. For example, two lemmas are needed for of, since it is a homonym that clearly requires separate lemmas. Keywords: Lexicography, Coordinator, Correlative Coordinator, Subordinator, Monolingual Dictionary, Hypotactic Linking, Embedding, Complement Clauses, Grammatical Guidance, Linguistic Foundation, Word Order, Clause Integration, Function Words

    Scalable genetic programming by gene-pool optimal mixing and input-space entropy-based building-block learning

    The Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) is a recently introduced model-based EA that has been shown to be capable of outperforming state-of-the-art alternative EAs in terms of scalability when solving discrete optimization problems. One of the key aspects of GOMEA's success is a variation operator that is designed to extensively exploit linkage models by effectively combining partial solutions. Here, we bring the strengths of GOMEA to Genetic Programming (GP), introducing GP-GOMEA. Under the hypothesis of having little problem-specific knowledge, and in an effort to design easy-to-use EAs, GP-GOMEA requires no parameter specification. On a set of well-known benchmark problems we find that GP-GOMEA outperforms standard GP while being on par with more recently introduced, state-of-the-art EAs. We furthermore introduce Input-space Entropy-based Building-block Learning (IEBL), a novel approach to identifying and encapsulating relevant building blocks (subroutines) into new terminals and functions. On problems with an inherent degree of modularity, IEBL can contribute to compact solution representations, providing a large potential for knock-on effects in performance. On the difficult, but highly modular, Even Parity problem, GP-GOMEA+IEBL obtains excellent scalability, solving the 14-bit instance in less than 1 hour.
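    The optimal-mixing idea behind GOMEA can be illustrated on plain fixed-length genotypes: for each solution and each linkage set, the genes in that set are copied from a random donor and the change is kept only if fitness does not deteriorate. The sketch below is a generic illustration under that simplification; the function names, the univariate linkage model, and the OneMax fitness are assumptions for the example, not the tree-based operator actually used in GP-GOMEA.

```python
import random

def gene_pool_optimal_mixing(population, linkage_sets, fitness):
    """Sketch of GOMEA-style variation on fixed-length genotypes (lists).

    For each solution and each linkage set (a list of gene positions),
    the genes at those positions are copied from a random donor; the copy
    is kept only if fitness does not get worse.  This is a simplified
    illustration, not the tree-based operator of GP-GOMEA.
    """
    new_population = []
    for genome in population:
        offspring = list(genome)
        best_fit = fitness(offspring)
        for subset in linkage_sets:
            donor = random.choice(population)
            backup = [offspring[i] for i in subset]
            for i in subset:
                offspring[i] = donor[i]
            trial_fit = fitness(offspring)
            if trial_fit >= best_fit:          # assuming maximisation
                best_fit = trial_fit
            else:                              # revert the change
                for i, value in zip(subset, backup):
                    offspring[i] = value
        new_population.append(offspring)
    return new_population

# Toy usage: OneMax with a univariate linkage model (each gene its own set).
if __name__ == "__main__":
    n = 10
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(20)]
    fos = [[i] for i in range(n)]
    pop = gene_pool_optimal_mixing(pop, fos, fitness=sum)
    print(max(sum(g) for g in pop))
```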

    A Probabilistic Linear Genetic Programming with Stochastic Context-Free Grammar for solving Symbolic Regression problems

    Traditional Linear Genetic Programming (LGP) algorithms rely only on the selection mechanism to guide the search. Genetic operators combine or mutate random portions of the individuals, without knowing whether the result will lead to a fitter individual. Probabilistic Model Building Genetic Programming (PMB-GP) methods were proposed to overcome this issue through a probability model that captures the structure of fit individuals and is used to sample new individuals. This work proposes the use of LGP with a Stochastic Context-Free Grammar (SCFG) whose probability distribution is updated according to the selected individuals. We propose a method for adapting the grammar to the linear representation of LGP. Tests performed with the proposed probabilistic method, and with two hybrid approaches, on several symbolic regression benchmark problems show that the results are statistically better than those obtained by traditional LGP.
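    To make the mechanism concrete, the sketch below shows a toy stochastic context-free grammar whose rule probabilities are re-estimated from the rules used by selected individuals and then used to sample new expressions. The grammar, the learning-rate update, and all function names are assumptions for illustration; they do not reproduce the paper's mapping of the SCFG onto the linear LGP representation.

```python
import random
from collections import defaultdict

# Toy stochastic context-free grammar for arithmetic expressions.
# Rule probabilities start uniform and are shifted towards the rules used
# by selected individuals.  Grammar and update rule are illustrative only.
GRAMMAR = {
    "expr":  [["expr", "op", "expr"], ["var"], ["const"]],
    "op":    [["+"], ["-"], ["*"]],
    "var":   [["x"]],
    "const": [["1"], ["2"]],
}

probs = {nt: [1.0 / len(rules)] * len(rules) for nt, rules in GRAMMAR.items()}

def sample(symbol, used, depth=0, max_depth=4):
    """Expand a symbol, recording which rule of each non-terminal was used."""
    if symbol not in GRAMMAR:
        return [symbol]
    rules, weights = GRAMMAR[symbol], probs[symbol]
    if depth >= max_depth:                      # force a short expansion
        idx = min(range(len(rules)), key=lambda i: len(rules[i]))
    else:
        idx = random.choices(range(len(rules)), weights=weights)[0]
    used[symbol].append(idx)
    out = []
    for sym in rules[idx]:
        out.extend(sample(sym, used, depth + 1, max_depth))
    return out

def update_probs(selected_usages, learning_rate=0.5):
    """Shift rule probabilities towards the usage counts of selected individuals."""
    for nt, rules in GRAMMAR.items():
        counts = [0] * len(rules)
        for used in selected_usages:
            for idx in used[nt]:
                counts[idx] += 1
        total = sum(counts)
        if total == 0:
            continue
        for i in range(len(rules)):
            target = counts[i] / total
            probs[nt][i] = (1 - learning_rate) * probs[nt][i] + learning_rate * target

# Minimal usage: sample a few expressions and bias the grammar towards them.
usages = []
for _ in range(5):
    used = defaultdict(list)
    print(" ".join(sample("expr", used)))
    usages.append(used)
update_probs(usages)   # in a real EA, only the selected individuals would be passed
```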