
    Molecular mechanisms of mitochondrial de novo [2Fe-2S] cluster formation and lipoyl biosynthesis

    Iron-sulfur (Fe/S) clusters are small inorganic protein cofactors found in almost all known organisms. They enable various protein functions including electron transfer and catalysis, and are integral to numerous essential biological processes such as cellular respiration, translation, and DNA synthesis and repair. Fe/S clusters typically exhibit simple structures, with the rhombic [2Fe-2S] and cubic [4Fe-4S] types being the most common. Nevertheless, complex protein machineries are required for their biosynthesis and insertion into target apo-proteins. Mitochondrial Fe/S protein biogenesis requires the Fe/S cluster assembly (ISC) machinery, consisting of up to 18 different proteins. The early ISC machinery assembles the [2Fe-2S] clusters de novo, and the late ISC machinery uses these clusters to produce and insert [4Fe-4S] clusters. Although the functions of many proteins of the ISC machinery are well characterized, the molecular mechanisms underlying mitochondrial Fe/S protein biogenesis, in particular de novo [2Fe-2S] cluster assembly, are not fully understood. The overarching aim of the first of the two projects in this work was to decipher at the molecular level how Fe and S are assembled on the scaffold protein ISCU2 to form [2Fe-2S] clusters de novo. In this process, one Fe ion and one persulfide moiety are delivered in a stepwise manner to the ISCU2 assembly site, which exhibits five conserved residues (Cys69, Asp71, Cys95, His137 and Cys138) believed to be critical for assembly. Efficient persulfidation of one of the three conserved ISCU2 Cys residues requires the heterodimeric cysteine desulfurase complex NFS1-ISD11-ACP (termed (NIA)2) to bind to both ISCU2 (U) and FXN (X), forming (NIAUX)2. The ISCU2-bound persulfide is reduced to sulfide via electron flow from the ferredoxin FDX2, and finally dimerization of two [1Fe-1S] ISCU2 units enables [2Fe-2S] cluster formation.
It was shown in this work that NFS1 persulfidates ISCU2 Cys138 efficiently and with high specificity, and no detectable sulfur relay via other ISCU2 Cys residues was observed. Importantly, ISCU2 had to be preloaded with one Fe(II) ion to enable physiologically relevant persulfidation. Furthermore, the ISCU2 residues Cys69, Cys95, Cys138 and likely Asp71 were identified as ligands of the mature [2Fe-2S] cluster. A combined structural, spectroscopic and biochemical approach revealed the hitherto ill-defined Fe coordination by ISCU2 at various intermediate stages of [2Fe-2S] cluster synthesis. Initially, Fe(II) is coordinated by free ISCU2 in a tetrahedral fashion (via Cys69, Asp71, Cys95 and His137). Binding of ISCU2 to (NIA)2 was found to induce an equilibrium between the tetrahedral and a distinct octahedral coordination (via Asp71, Cys95, Cys138 and water ligands). The tetrahedral coordination was favored in (Fe-NIAU)2, but the binding of FXN, leading to the formation of (Fe-NIAUX)2, shifted the equilibrium towards the octahedral species. Specific intermolecular interactions between FXN and ISCU2 assembly site residues support the formation of the octahedral species and are required for efficient [2Fe-2S] cluster synthesis. Furthermore, the 3D structure of the (Fe-NIAUX)2 complex with persulfidated ISCU2 Cys138 was obtained by electron cryo-microscopy at 2.4 Å resolution, the first (NIAUX)2 structure resolved below 3 Å. The Cys138 persulfide moiety participated in an octahedral Fe coordination similar to that in non-persulfidated complexes. Together, these studies enabled the delineation of a detailed mechanistic route to physiological ISCU2 persulfidation as a decisive intermediate of [2Fe-2S] cluster synthesis. The second project of this work focused on the function of human lipoyl synthase (LIAS), a mitochondrial radical S-adenosyl methionine (SAM) [4Fe-4S] enzyme.
Lipoyl is a cofactor of α-ketoacid dehydrogenases as well as the glycine cleavage system, and thus integral to mitochondrial carbon metabolism. LIAS possesses a catalytic and an auxiliary [4Fe-4S] cluster. The catalytic cluster receives electrons to initiate a radical SAM-based reaction mechanism in which two sulfur atoms from the auxiliary cluster are incorporated into an octanoyl substrate. Despite extensive characterisation of the molecular mechanism of lipoylation, the physiological electron donor for the catalytic cluster of human LIAS has remained unknown. To address this issue, an in vitro assay closely mimicking human lipoyl biosynthesis was developed. It was found that only the mitochondrial ferredoxin FDX1, but not the structurally similar FDX2, serves as an efficient electron donor for LIAS catalysis. This finding was corroborated by AlphaFold-based in silico analyses of LIAS-FDX interactions. FDX1 supported in vitro lipoylation much more efficiently than the commonly employed artificial reductant dithionite. The high specificity of lipoylation for FDX1 was found to be connected to the C-terminus, because removal of the conserved FDX2 C-terminus largely enhanced the residual FDX2 function in lipoylation. The in vitro lipoylation assay was also employed to investigate the toxic effect of elesclomol (Ele), an anticancer agent and copper ionophore. It was shown that both Cu and the Ele:Cu complex, but not Ele alone, inhibit lipoylation, thus identifying the major cellular target of Ele toxicity. In summary, this work structurally defined the cooperative action of five ISCU2 residues critical for consecutive states of de novo [2Fe-2S] cluster synthesis, thereby providing valuable insights into the molecular dynamics of this process. Furthermore, the work contributes towards a better understanding of human lipoyl biosynthesis and the highly distinct functions of the two human FDXs.
FDX1, in addition to its long-known role in steroidogenesis, was revealed as the physiological electron donor for LIAS catalysis.

    Why Firms Grow: The Roles of Institutions, Trade, and Technology during Swedish Industrialization

    Industrialization and the emergence of a manufacturing sector are generally perceived as key drivers of economic growth and rising living standards. Only 200 years ago, most countries were relatively poor and had similarly low living standards. With industrialization and the growth of manufacturing, primarily Western countries pulled ahead and experienced sustained increases in living standards. Eventually, this process led to a divergence in economic performance. While today's high-income economies are characterized by relatively larger firms that use novel production techniques based on the latest scientific advances, firms in low-income countries generally remain small and are less efficient. How did today's high-income countries initially manage to start growing and industrializing? While existing explanations focus on the roles of, for example, institutions, trade, and technology, such aspects have generally not been analyzed at the level where economic growth occurred: the industrial firm. Consequently, understanding how (Western) firms managed to increase in size and productivity may also inform current debates. This thesis analyzes the causes of industrialization at the firm level. It studies how (some) manufacturing establishments managed to start growing, adopted new technologies, and learned to organize themselves more efficiently in late nineteenth-century Sweden. As such, the thesis focuses on the formative years of the Swedish economy, when the country developed from one of the poorest on Europe's periphery into one of the fastest-growing economies worldwide.
To do so, the study leverages newly digitized data that cover in unique detail the yearly performance of Swedish manufacturing firms. In four papers, the thesis shows how policies that have generally been perceived as key drivers of the industrialization process (e.g., general incorporation laws or tariff protection) enabled marginal establishments to grow, organize as factories, and adopt new technologies, such as steam power. Yet state policy was no panacea, as it (sometimes) negatively affected leading establishments. Using individual census data on the employment of individuals in Sweden, the USA, and Great Britain, the study also documents how industrialization led to further growth dynamics, primarily in the service sector. More broadly, this thesis shows how firm-level growth in manufacturing created an economic dynamism that would ultimately better the lives of people.

    More animals than markers: a study into the application of the single step T-BLUP model in large-scale multi-trait Australian Angus beef cattle genetic evaluation

    Multi-trait single-step genetic evaluation is increasingly facing the situation of having more genotyped individuals than markers within each genotype. This creates a situation where the genomic relationship matrix (G) is not of full rank and its inversion is algebraically impossible. Recently, the SS-T-BLUP method was proposed as a modified version of the single-step equations, providing an elegant way to circumvent the inversion of G and therefore accommodate the situation described. SS-T-BLUP uses the Woodbury matrix identity and thus requires an add-on matrix, which is usually the covariance matrix of the residual polygenic effect. In this paper, we examine the application of SS-T-BLUP to a large-scale multi-trait Australian Angus beef cattle dataset using the full BREEDPLAN single-step genetic evaluation model, and compare the results to two different methods of using G in a single-step model. Results clearly show that SS-T-BLUP outperforms the other single-step formulations in terms of computational speed and avoids approximation of the inverse of G.
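The computational trick behind SS-T-BLUP can be illustrated with a small numerical sketch. This is an illustration of the Woodbury matrix identity only, not the BREEDPLAN implementation: with more animals (n) than markers (m), the marker-based relationship matrix is rank-deficient, but adding a full-rank matrix (standing in for the residual polygenic term) lets the inverse be computed with only an m x m inversion:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 10          # more animals (n) than markers (m)
Z = rng.standard_normal((n, m))
G = Z @ Z.T            # rank <= m < n, so G itself has no inverse
D = 0.3 * np.eye(n)    # full-rank add-on (stand-in for the residual polygenic term)

# Woodbury: (D + Z Z')^-1 = D^-1 - D^-1 Z (I_m + Z' D^-1 Z)^-1 Z' D^-1
Dinv = np.linalg.inv(D)
small = np.linalg.inv(np.eye(m) + Z.T @ Dinv @ Z)   # only an m x m inversion
woodbury = Dinv - Dinv @ Z @ small @ Z.T @ Dinv

direct = np.linalg.inv(D + G)                        # n x n inversion, for checking
print(np.allclose(woodbury, direct))
```

In practice D is a sparse structured matrix whose inverse is cheap, so the identity replaces an impossible (or very expensive) animal-dimension inversion with one in the much smaller marker dimension.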

    Interaction of reactive gases with platinum aerosol particles at room temperature: effects on morphology and surface properties

    Nanoparticles produced in technical aerosol processes often exhibit dendritic structures composed of primary particles. Surprisingly, a small but consistent discrepancy was observed between the results of common aggregation models and in situ measurements of structural parameters, such as fractal dimension or mass-mobility exponent. A phenomenon that has received little attention so far, and which might be responsible for this discrepancy, is the interaction of agglomerates with admixed gases. In this work, we present a series of analyses showing how agglomerate morphology depends on the reducing or oxidizing nature of the carrier gas for platinum particles. When hydrogen is added to openly structured particles, as investigated by tandem differential mobility analysis (DMA) and transmission electron microscopy (TEM), Pt particles compact even at room temperature, resulting in an increased fractal dimension. Aerosol Photoemission Spectroscopy (APES) also demonstrated the interaction of a gas with a nanoscale platinum surface, resulting in a changed sintering behavior for reducing and oxidizing atmospheres in comparison to nitrogen. The main message of this work concerns the structural change of particles exposed to a new environment after complete particle formation. We suspect significant implications for the interpretation of agglomerate formation, as many aerosol processes involve reactive gases, or gases slightly contaminated with trace amounts of unintended species.

    Methodological Contributions to Breeding Planning (Methodische Beiträge zur Züchtungsplanung)

    The goal of breeding activities in commercial livestock populations is to increase the mean of the genetically based performance for one or several traits, summarised in the aggregate genotype via weighting factors, when a given breeding scheme is applied. Given the breeding scheme, the extent of this increase depends on the accuracy of breeding value estimation, which is a function of (a) the amount of information available about the selection candidates and its correlation structure with the aggregate genotype, and (b) the suitability of the statistical model used to regress the genotype of the selection candidate on this available information. Recent molecular-genetic findings concern both the amount of available information and the statistical model. The latter is affected by a mechanism called genomic imprinting, in which a gene's effect on the phenotype of the offspring is altered by sex-specific DNA methylation during gametogenesis in the parents. Genomic imprinting can be accounted for in breeding value estimation by calculating two breeding values for each individual, one for its role as a sire and one for its role as a dam. Weighting factors to combine these breeding values can be derived by an extension of the gene flow method. This extension is developed in the first part of this thesis and allows tracing the flow of genes of a certain founder, or group of founders, within a population across tiers (e.g. nucleus, multiplier, production) and generations, with special regard to the sex of the direct parent of an individual carrying these genes. Thus, it allows assessing the probability that a gene of a founder is passed on to its descendants via their direct sires or dams. The discounted and summarised trait realisations arising from the genes inherited via the sire and the dam can then be used as weighting coefficients for combining the breeding values of an individual as a sire and as a dam.
The extended gene flow method is applied to a hypothetical pig breeding program, showing that the weights for the breeding values as a dam and as a sire can differ according to the chosen breeding scheme and the planning horizon. Furthermore, it is shown that, depending on the breeding scheme, the breeding value of a dam when acting as a sire might be weighted higher than when acting as a dam. Additionally, a possibility inherent in the method to predict the increase in inbreeding due to one round of selection is presented. The above-mentioned amount of available information about a selection candidate is affected by the discovery of hundreds of thousands of DNA markers in the form of single nucleotide polymorphisms (SNP markers) that are in strong linkage disequilibrium with neighbouring trait-affecting genes or quantitative trait loci (QTL). The sum over all estimated marker effects on the phenotype, the genomically estimated breeding value (GEBV), explains a certain proportion of the additive genetic variance depending on the trait, and can be used as additional information about the selection candidate when estimating breeding values. The application of genomic selection (GS), i.e. selection on the basis of GEBVs, may lead to multistage selection schemes, especially in dairy cattle, using GS as a preselection stage in order to reduce the number of test bulls in breeding schemes using progeny testing, or to replace this information source altogether. A major problem of multistage selection is choosing the combination of stages and selection intensities that maximises the genetic gain. Approaches from optimisation research may be applied, but since the selection indices of successive stages are correlated, multidimensional integration is necessary to derive the selection intensities at the selection stages, which might be unstable and time-consuming depending on the correlation structure and the number of stages.
The second part of this thesis compares the optimisation results of multistage breeding schemes involving genomic selection, where two different approaches for deriving the selection intensity and the genetic gain are used and the accuracy and cost of GEBVs are varied. The first approach derives the stage-dependent breeding values such that the correlation between stages is zero, allowing the stage selection intensity to be calculated via one-dimensional integration and therefore enabling fast optimisation of breeding schemes containing even an unlimited number of selection stages. A disadvantage of this approach is a loss in the variance of the stage breeding values and in the genetic gain. The second approach uses new developments in the integration of multivariate normal distributions and calculates an exact solution for the selection intensity and the genetic gain after a certain number of selection stages. The results clearly show that the integration algorithm is fast and stable enough to compare even a large number of possible breeding schemes. Furthermore, the loss in breeding value variance is unpredictable when using the decorrelated selection indices, and a proper consideration of the interaction between selection paths, arising from cost limitations and path-specific selection strategies, can lead to illogical suggestions concerning the breeding scheme structure. As the accuracies and costs of GEBVs were varied within a certain range, the results also show that GS is competitive with conventional progeny testing in dairy cattle breeding even if the accuracy of GEBVs is decreased to 0.45. GS increases breeding costs linearly with the number of genotyped individuals. Thus, genotyping large proportions of a population might lead to uneconomical breeding schemes. This is especially the case for bull dam selection in dairy cattle breeding, because the number of potential selection candidates equals the size of the cow population.
Additionally, the number of selected bull dams is dictated by the demand for potential sires. Therefore, decreasing the number of genotyped selection candidates in order to stay within economic limits might lead to a very small selection intensity, making the financial effort difficult to justify in terms of genetic gain. A possible way out is the use of inexpensive SNP chips containing only a small number of SNPs to genotype large proportions of the selection-candidate population and to estimate less accurate GEBVs on this basis, supported by imputation algorithms. The third part of this thesis investigates multistage dairy cattle breeding schemes regarding the possibility of using a low-density and a high-density SNP chip in each selection path. The costs of each chip and the accuracy of the subsequently estimated GEBVs were varied within a certain parameter space, while ensuring that the cost of the low-density SNP chip and the resulting GEBV accuracy were always lower than those of the high-density SNP chip. The results underline the potential of low-density SNP chips for selecting bull dams from large cow populations, but also draw attention to the non-linearity of the genetic gain as a function of the selection intensity. Thus, there exist combinations of costs and accuracies where it was found to be economical to limit the number of low-density genotyped bull dams and to include a further selection stage using high-density SNP chips in that path.
Furthermore, the results also show that the genetic gain is influenced much more by the cost and accuracy of the GEBV obtained from the high-density chip, whereas the structure of the breeding scheme reacts more sensitively to changes of these parameters for the low-density chip.
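The exact approach compared in the second part rests on numerical integration of multivariate normal distributions, because the selected fraction under multistage selection with correlated indices is an orthant probability. A minimal two-stage sketch (illustrative truncation points and correlation, not the thesis's breeding schemes) using SciPy:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Two-stage selection on correlated standard-normal indices I1, I2 (correlation r).
r = 0.6
t1 = norm.ppf(1 - 0.20)   # stage-1 truncation point: keep the top 20 %
t2 = norm.ppf(1 - 0.05)   # stage-2 truncation point: top 5 % on the stage-2 index

# Overall selected fraction p = P(I1 > t1, I2 > t2); by the symmetry of the
# zero-mean bivariate normal this equals P(I1 < -t1, I2 < -t2), i.e. a CDF value.
cov = [[1.0, r], [r, 1.0]]
p = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([-t1, -t2])

# For comparison: the selection intensity of a single stage retaining the same
# overall fraction p, via the classical i = phi(z) / p.
z = norm.ppf(1 - p)
i_single = norm.pdf(z) / p
print(f"selected fraction p = {p:.4f}, equivalent single-stage intensity = {i_single:.3f}")
```

Note that p is larger than the naive product 0.20 x 0.05 because positively correlated indices tend to pass both stages together; with more stages the CDF call becomes a higher-dimensional integral, which is exactly where the stability and runtime issues discussed above arise.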

    Deep-ELA: Deep Exploratory Landscape Analysis with Self-Supervised Pretrained Transformers for Single- and Multi-Objective Continuous Optimization Problems

    Many recent works have demonstrated the potential of Exploratory Landscape Analysis (ELA) features to numerically characterize, in particular, single-objective continuous optimization problems. These numerical features provide the input for all kinds of machine learning tasks on continuous optimization problems, ranging from High-level Property Prediction to Automated Algorithm Selection and Automated Algorithm Configuration. Without ELA features, analyzing and understanding the characteristics of single-objective continuous optimization problems would be impossible. Yet, despite their undisputed usefulness, ELA features suffer from several drawbacks, in particular (1) strong correlations between multiple features and (2) their very limited applicability to multi-objective continuous optimization problems. As a remedy, recent works proposed deep learning-based approaches as alternatives to ELA; for example, point-cloud transformers were used to characterize an optimization problem's fitness landscape. However, these approaches require a large amount of labeled training data. In this work, we propose a hybrid approach, Deep-ELA, which combines the benefits of deep learning and ELA features. Specifically, we pre-trained four transformers on millions of randomly generated optimization problems to learn deep representations of the landscapes of continuous single- and multi-objective optimization problems. Our proposed framework can either be used out of the box for analyzing single- and multi-objective continuous optimization problems, or subsequently be fine-tuned to various tasks focusing on algorithm behavior and problem understanding.
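To make the idea of ELA features concrete, the sketch below computes two classical landscape features from a random sample of an objective function: the skewness of the objective-value distribution and the fitness-distance correlation. This is a simplified illustration of the general feature-extraction recipe (sample, evaluate, summarize), not the feature set or the transformer pipeline used in Deep-ELA:

```python
import numpy as np

def ela_sketch(f, dim=5, n=500, seed=1):
    """Estimate two simple landscape features from a random sample of f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n, dim))      # design of experiments in the box [-5, 5]^dim
    y = np.apply_along_axis(f, 1, X)           # objective values at the sample points

    # Feature 1: skewness of the y-distribution (shape of the value histogram).
    yc = y - y.mean()
    skew = (yc**3).mean() / (yc**2).mean() ** 1.5

    # Feature 2: fitness-distance correlation w.r.t. the best sampled point;
    # values near 1 indicate a globally convex, single-funnel structure.
    best = X[np.argmin(y)]
    d = np.linalg.norm(X - best, axis=1)
    fdc = np.corrcoef(y, d)[0, 1]
    return skew, fdc

sphere = lambda x: float(np.sum(x**2))         # unimodal test problem
skew, fdc = ela_sketch(sphere)
print(f"skewness = {skew:.2f}, FDC = {fdc:.2f}")
```

Feature vectors of this kind (in practice dozens of features over several feature sets) are what downstream models consume for algorithm selection, and their mutual correlation is drawback (1) that Deep-ELA's learned representations are meant to address.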

    The Didactic Potential of Podcasts in Sachunterricht (Das didaktische Potenzial von Podcasts im Sachunterricht)

    While podcasts have by now established themselves as a mass medium, their didactic potential in learning situations is also becoming apparent. Podcasts are digital audio or video files that can easily be created and distributed with a tablet or smartphone. There are two options for classroom use: either podcasts are listened to and analysed as objects of learning, or the learners produce their own podcasts. Self-produced podcasts can be integrated into every step of the learning process or accompany an entire learning unit in order to reflect on it and foster metacognition. Beyond that, podcasts also lend themselves to fostering learners' digitalization-related competencies. (DIPF/Orig.)