1,202 research outputs found

    Constructing lattice points for numerical integration by a reduced fast successive coordinate search algorithm

    In this paper, we study an efficient algorithm for constructing node sets of high-quality quasi-Monte Carlo integration rules for weighted Korobov, Walsh, and Sobolev spaces. The algorithm presented is a reduced fast successive coordinate search (SCS) algorithm, which is adapted to situations where the weights in the function space show a sufficiently fast decay. The new SCS algorithm is designed to work for the construction of lattice points and, in a modified version, of polynomial lattice points, and the corresponding integration rules can be used to treat functions in different kinds of function spaces. We show that the integration rules constructed by our algorithms satisfy error bounds of optimal convergence order. Furthermore, we give details on an efficient implementation such that we obtain a considerable speed-up over previously known SCS algorithms. This improvement is illustrated by numerical results. The speed-up obtained by our results may be of particular interest in the context of QMC for PDEs with random coefficients, where both the dimension and the required number of points are usually very large. Furthermore, our main theorems yield previously unknown generalizations of earlier results. (Comment: 33 pages, 2 figures)
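The quality criterion behind such constructions can be illustrated with a plain component-by-component (CBC) search, a simpler relative of the paper's reduced fast SCS algorithm. The sketch below is not the paper's method: it greedily minimizes the squared worst-case error of a rank-1 lattice rule in a weighted Korobov space with smoothness alpha = 2, and all parameter choices (n, d, weights) are illustrative assumptions.

```python
import math

def worst_case_error_sq(z, n, gamma):
    """Squared worst-case error e^2(z) for smoothness alpha = 2, using
    sum_{h != 0} exp(2*pi*i*h*x)/h^2 = 2*pi^2 * (x^2 - x + 1/6)."""
    total = 0.0
    for k in range(n):
        prod = 1.0
        for j, zj in enumerate(z):
            x = (k * zj % n) / n
            prod *= 1.0 + gamma[j] * 2.0 * math.pi ** 2 * (x * x - x + 1.0 / 6.0)
        total += prod
    return -1.0 + total / n

def cbc(n, d, gamma):
    """Greedily pick each component of z from {1, ..., n-1} (n prime)."""
    z = []
    for _ in range(d):
        best = min(range(1, n), key=lambda c: worst_case_error_sq(z + [c], n, gamma))
        z.append(best)
    return z

n, d = 31, 4                         # small prime n, just for the sketch
gamma = [0.9 ** j for j in range(d)]  # decaying product weights
z = cbc(n, d, gamma)
```

The reduced SCS approach of the paper revisits and shrinks the candidate sets per coordinate when the weights decay fast; this brute-force version only shows what is being minimized.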

    Horizontal Equity and Progression when Equivalence Scales are not Constant.

    Household needs must be taken into account when designing an equitable income tax. If the equivalence scale is income dependent, it is not transparent how to achieve equity. In this paper we explore the question of horizontal equity and the implications for progression (vertical equity) when the equivalence scale depends on income level. In particular, an 'equal progression among equals' criterion is articulated and shown to be achievable along with horizontal equity under specified conditions.
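A toy calculation can illustrate why an income-dependent equivalence scale makes equity comparisons opaque: the relative needs adjustment between household types then changes along the income distribution. The scale function below is entirely hypothetical and only stands in for the general phenomenon the abstract describes.

```python
def scale(income, household_size):
    """Hypothetical equivalence scale whose economies of scale
    shrink as income grows (purely illustrative)."""
    return household_size ** (0.5 + 0.2 * min(income / 100_000, 1.0))

def equivalent_income(income, household_size):
    return income / scale(income, household_size)

# A single person and a couple at two nominal income levels: the
# single/couple equivalent-income ratio differs by income level,
# so "equals" cannot be identified by nominal income alone.
for y in (30_000, 120_000):
    ratio = equivalent_income(y, 1) / equivalent_income(y, 2)
    print(f"income {y}: single/couple equivalent-income ratio = {ratio:.3f}")
```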

    Native defects in the Co2TiZ (Z = Si, Ge, Sn) full Heusler alloys: formation and influence on the thermoelectric properties

    We have performed first-principles investigations on the native defects in the full Heusler alloys Co2TiZ (Z one of the group IV elements Si, Ge, Sn), determining their formation energies and how they influence the transport properties. We find that Co vacancies (Vc) in all compounds and the Ti_Sn anti-site exhibit negative formation energies. The smallest positive values occur for Co in excess on anti-sites (Co_Z or Co_Ti) and for Ti_Z. The most abundant native defects were modeled as dilute alloys, treated with the coherent potential approximation in combination with the multiple-scattering theory Green function approach. The self-consistent potentials determined this way were used to calculate the residual resistivity via the Kubo-Greenwood formula and, based on its energy dependence, the Seebeck coefficient of the systems. The latter is shown to depend significantly on the type of defect, leading to variations that are related to subtle, spin-orbit coupling induced, changes in the electronic structure above the half-metallic gap. Two of the systems, Vc_Co and Co_Z, are found to exhibit a negative Seebeck coefficient. This observation, together with their low formation energy, offers an explanation for the experimentally observed negative Seebeck coefficient of the Co2TiZ compounds as being due to unintentionally created native defects.
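For context, the link between the energy dependence of the conductivity and the Seebeck coefficient is the standard linear-response expression S = -(1/(e*T)) * L1/L0 with Lm = ∫ sigma(E) (E-mu)^m (-df/dE) dE. The sketch below evaluates this numerically for an invented model sigma(E), not the paper's first-principles data; it only shows why a conductivity rising above the chemical potential (electron-like) yields a negative Seebeck coefficient.

```python
import math

KB = 8.617333262e-5  # Boltzmann constant in eV/K

def seebeck(sigma, mu, T, emin=-1.0, emax=1.0, npts=20001):
    """S = -(1/(e*T)) * L1/L0 with Lm = int sigma(E) (E-mu)^m (-df/dE) dE.
    Energies in eV and e = 1, so S comes out in volts per kelvin."""
    dE = (emax - emin) / (npts - 1)
    l0 = l1 = 0.0
    for i in range(npts):
        E = emin + i * dE
        x = (E - mu) / (KB * T)
        c = math.cosh(min(abs(x) / 2.0, 350.0))  # overflow guard
        w = 1.0 / (4.0 * KB * T * c * c)         # -df/dE of the Fermi function
        s = sigma(E)
        l0 += s * w * dE
        l1 += s * (E - mu) * w * dE
    return -(1.0 / T) * l1 / l0

# Electron-like transport (sigma rising with E above mu) gives S < 0,
# hole-like transport gives S > 0. Model sigma(E) is purely illustrative.
S_electron = seebeck(lambda E: math.exp(5.0 * E), mu=0.0, T=300.0)
S_hole = seebeck(lambda E: math.exp(-5.0 * E), mu=0.0, T=300.0)
```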

    All Fingers Are Not the Same: Handling Variable-Length Sequences in a Discriminative Setting Using Conformal Multi-Instance Kernels

    Most string kernels for comparison of genomic sequences are generally tied to using (absolute) positional information of the features in the individual sequences. This poses limitations when comparing variable-length sequences using such string kernels. For example, profiling chromatin interactions by 3C-based experiments results in variable-length genomic sequences (restriction fragments). Here, exact position-wise occurrence of signals in sequences may not be as important as in the analysis of promoter sequences, which typically have a transcription start site as reference. Existing position-aware string kernels have been shown to be useful for the latter scenario. In this work, we propose a novel approach for sequence comparison that enables larger positional freedom than most existing approaches, can identify a possibly dispersed set of features when comparing variable-length sequences, and can handle both of the aforementioned scenarios. Our approach, CoMIK, identifies not just the features useful towards classification but also their locations in the variable-length sequences, as evidenced by the results of three binary classification experiments, aided by recently introduced visualization techniques. Furthermore, we show that we are able to efficiently retrieve and interpret the weight vector for the complex setting of multiple multi-instance kernels.
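The bag-of-instances idea underlying multi-instance kernels can be sketched as follows: split each variable-length sequence into fixed-length windows (instances), compare instances with a base string kernel, and aggregate over all instance pairs. This toy version uses a plain k-mer spectrum kernel and uniform instance weighting; CoMIK's conformal multi-instance kernels are more elaborate, so nothing here reproduces the paper's exact method.

```python
from collections import Counter
from itertools import product

def spectrum(seq, k=3):
    """k-mer count vector of a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def instance_kernel(a, b, k=3):
    """Spectrum kernel: inner product of the two k-mer count vectors."""
    sa, sb = spectrum(a, k), spectrum(b, k)
    return sum(sa[m] * sb[m] for m in sa)

def windows(seq, width=20, step=10):
    """Overlapping fixed-length instances of one variable-length sequence."""
    return [seq[i:i + width] for i in range(0, max(len(seq) - width, 0) + 1, step)]

def mi_kernel(x, y, k=3):
    """Uniformly weighted sum over all instance pairs of the two bags."""
    bx, by = windows(x), windows(y)
    total = sum(instance_kernel(a, b, k) for a, b in product(bx, by))
    return total / (len(bx) * len(by))

x = "ACGTACGTTGCAACGTACGTACGTTGCA"  # toy sequences of different lengths
y = "TTGCAACGTACGTA"
```

Because the aggregation runs over instances rather than absolute positions, sequences of different lengths remain directly comparable, which is the positional freedom the abstract refers to.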

    What we leave behind: reproducibility in chromatin analysis within and across species

    Epigenetics is the field of biology that investigates heritable factors regulating gene expression without being directly encoded in the genome of an organism. The human genome is densely packed inside a cell's nucleus in the form of chromatin. Certain constituents of chromatin play a vital role as epigenetic factors in the dynamic regulation of gene expression. Epigenetic changes on the chromatin level are thus an integral part of the mechanisms governing the development of the functionally diverse cell types in multicellular species such as human. Studying these mechanisms is not only important to understand the biology of healthy cells, but also necessary to comprehend the epigenetic component in the formation of many complex diseases. Modern wet lab technology enables scientists to probe the epigenome with high throughput and in extensive detail. The fast generation of epigenetic datasets burdens computational researchers with the challenge of rapidly performing elaborate analyses without compromising on the scientific reproducibility of the reported findings. To facilitate reproducible computational research in epigenomics, this thesis proposes a task-oriented metadata model, relying on web technology and supported by database engineering, that aims at consistent and human-readable documentation of standardized computational workflows. The suggested approach features, e.g., computational validation of metadata records, automatic error detection, and progress monitoring of multi-step analyses, and was successfully field-tested as part of a large epigenome research consortium. This work leaves aside theoretical considerations, and intentionally emphasizes the realistic need of providing scientists with tools that assist them in performing reproducible research. Irrespective of the technological progress, the dynamic and cell-type specific nature of the epigenome commonly requires restricting the number of analyzed samples due to resource limitations. 
The second project of this thesis introduces the software tool SCIDDO, which has been developed for the differential chromatin analysis of cellular samples with potentially limited availability. By combining statistics, algorithmics, and best practices for robust software development, SCIDDO can quickly identify biologically meaningful regions of differential chromatin marking between cell types. We demonstrate SCIDDO's usefulness in an exemplary study in which we identify regions that establish a link between chromatin and gene expression changes. SCIDDO's quantitative approach to differential chromatin analysis is user-customizable, providing the necessary flexibility to adapt SCIDDO to specific research tasks. Given the functional diversity of cell types and the dynamics of the epigenome in response to environmental changes, it is hardly realistic to map the complete epigenome even for a single organism like human or mouse. For non-model organisms, e.g., cow, pig, or dog, epigenome data is particularly scarce. The third project of this thesis investigates to what extent bioinformatics methods can compensate for the comparatively little effort that is invested in charting the epigenome of non-model species. This study implements a large integrative analysis pipeline, including state-of-the-art machine learning, to transfer chromatin data for predictive modeling between 13 species. The evidence presented here indicates that a partial regulatory epigenetic signal is stably retained even over millions of years of evolutionary distance between the considered species. This finding suggests complementary and cost-effective ways for bioinformatics to contribute to comparative epigenome analysis across species boundaries.
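The rule-based metadata validation and automatic error detection described in the first project can be sketched as follows. The field names, patterns, and error messages below are invented for illustration; the actual model is web- and database-backed and considerably richer.

```python
import re

# Hypothetical required fields and validity patterns for one record
# of a standardized computational workflow (illustrative only).
RULES = {
    "sample_id": re.compile(r"^[A-Za-z0-9_-]+$"),
    "genome_assembly": re.compile(r"^(GRCh3[78]|GRCm3[89])$"),
    "analysis_step": re.compile(r"^\d{2}_[a-z_]+$"),
}

def validate(record):
    """Return a list of human-readable errors for one metadata record."""
    errors = []
    for field, pattern in RULES.items():
        value = record.get(field)
        if value is None:
            errors.append(f"missing required field: {field}")
        elif not pattern.fullmatch(str(value)):
            errors.append(f"invalid value for {field!r}: {value!r}")
    return errors

record = {"sample_id": "HepG2_rep1", "genome_assembly": "GRCh38",
          "analysis_step": "01_alignment"}
```

Checking every record of a multi-step analysis against such rules is also what enables progress monitoring: a step counts as complete only once its record validates.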

    Visualization and Evolution of Software Architectures

    Software systems are an integral component of our everyday life, as we find them in tools and embedded in equipment all around us. In order to ensure smooth, predictable, and accurate operation of these systems, it is crucial to produce and maintain systems that are highly reliable. A well-designed and well-maintained architecture goes a long way toward achieving this goal. However, due to the intangible and often complex nature of software architecture, this task can be quite complicated. The field of software architecture visualization aims to ease this task by providing tools and techniques to examine the hierarchy, relationships, evolution, and quality of architecture components. In this paper, we present a discourse on the state of the art of software architecture visualization techniques. Further, we highlight the importance of developing solutions tailored to meet the needs and requirements of the stakeholders involved in the analysis process.

    The Impact of Structural Pattern Types on the Electrochemical Performance of Ultra-Thick NMC 622 Electrodes for Lithium-Ion Batteries

    An increase in the energy density on the cell level while maintaining a high power density can be realized by combining thick-film electrodes and the 3D battery concept. The effect of laser structuring using different pattern types on the electrochemical performance was studied. For this purpose, LiNi0.6Mn0.2Co0.2O2 (NMC 622) thick-film cathodes were prepared with a PVDF binder and were afterward structured using ultrafast laser ablation. Eight different pattern types were realized: lines, grids, holes, hexagonal structures, and their respective combinations. In addition, the mass loss caused by laser ablation was kept the same regardless of the pattern type. The laser-structured electrodes were assembled in coin cells and subsequently characterized electrochemically. It was found that when discharging the cells for durations of less than 2 h, laser patterning had a significant, positive impact on the electrochemical cell performance. For example, when discharging was performed for one hour, cells containing laser-patterned electrodes with different structure types exhibited a specific capacity increase of up to 70 mAh/g compared to the reference cells. Although cells with a hole-patterned electrode exhibited the smallest capacity increase in the rate capability analysis, the combination of holes with lines, grids, or hexagons led to further capacity increases. In addition, long-term cycling analyses demonstrated the benefits of laser patterning for cell lifetime, while cyclic voltammetry revealed an increase in the Li-ion diffusion kinetics in cells containing hexagonal-patterned electrodes.

    Regional Cultures and the Psychological Geography of Switzerland: Person-Environment-Fit in Personality Predicts Subjective Wellbeing.

    The present study extended traditional nation-based research on person-culture-fit to the regional level. First, we examined the geographical distribution of Big Five personality traits in Switzerland. Across the 26 Swiss cantons, unique patterns were observed for all traits. For Extraversion and Neuroticism, clear language divides emerged between the French- and Italian-speaking South-West and the German-speaking North-East. Second, multilevel modeling demonstrated that person-environment-fit in the Big Five, composed of elevation (i.e., mean differences between the individual profile and the cantonal profile), scatter (differences in mean variances), and shape (Pearson correlations between individual and cantonal profiles across all traits; Furr, 2008, 2010), predicted the development of subjective wellbeing (i.e., life satisfaction, satisfaction with personal relationships, positive affect, negative affect) over a period of 4 years. Unexpectedly, while the effects of shape were in line with the person-environment-fit hypothesis (better fit predicted higher subjective wellbeing), the effects of scatter showed the opposite pattern, and null findings were observed for elevation. Across a series of robustness checks, the patterns for shape and elevation were consistently replicated. This was mostly the case for scatter as well, but the effects of scatter appeared to be somewhat less robust and more sensitive to the specific way fit was modeled when predicting certain outcomes (negative affect, positive affect). Distinguishing between supplementary and complementary fit may help to reconcile these findings, and future research should explore whether, and if so under which conditions, these concepts may be applicable to the respective facets of person-culture-fit.
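The three profile-fit components named in the abstract (elevation, scatter, and shape, following Furr's decomposition as the abstract describes it) can be computed directly from two trait profiles. The Big Five scores below are invented for illustration.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def profile_fit(person, canton):
    """Elevation, scatter, and shape of one person's trait profile
    relative to a regional (cantonal) mean profile."""
    m_p, m_c = mean(person), mean(canton)
    elevation = m_p - m_c                              # mean-level difference
    dev_p = [x - m_p for x in person]
    dev_c = [x - m_c for x in canton]
    var_p = mean([d * d for d in dev_p])
    var_c = mean([d * d for d in dev_c])
    scatter = var_p - var_c                            # difference in spread
    shape = (mean([a * b for a, b in zip(dev_p, dev_c)])
             / math.sqrt(var_p * var_c))               # Pearson r across traits
    return elevation, scatter, shape

# Hypothetical O, C, E, A, N scores on a 1-5 scale:
person = [3.8, 3.2, 4.1, 3.5, 2.4]
canton = [3.5, 3.4, 3.6, 3.6, 2.9]
elev, scat, shape = profile_fit(person, canton)
```

A perfectly fitting profile gives elevation 0, scatter 0, and shape 1; the study's multilevel models use such components as predictors of subjective wellbeing.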