Royal adultery, biblical history and political conflict in Tenth-Century Francia: the Lothar Crystal reconsidered
The Lothar Crystal, sometimes called the Susanna Crystal, is one of the most famous artworks produced in Western Europe during the early Middle Ages. Much study has been devoted to its manufacture and the symbolism of its artistic scheme, which depicts accusations of adultery against the wealthy woman Susanna as narrated in chapter 13 of the Book of Daniel from the Vulgate version of the Bible. A central inscription tells the viewer that the crystal was made on the instruction of a certain Lothar, king of the Franks. This king has long been identified as the Carolingian Lothar II (855-869), whose turbulent marriage and attempts at divorce caused a major political crisis during the 860s. In this article I re-examine the arguments for this attribution and suggest that it is worth considering an alternative context for the creation of the crystal: the reign of Lothar of West Francia (954-986). By reading it as a product of the later tenth century, I argue that the crystal may cast light on accusations of adultery made at that time against Queen Emma II, and on the struggle to control the important territory of Lotharingia in the crisis that followed the death of Emperor Otto II (973-983).
Computational Analyses of Metagenomic Data
Metagenomics studies the collective microbial genomes extracted from a particular environment without requiring the culturing or isolation of individual genomes, addressing questions revolving around the composition, functionality, and dynamics of microbial communities. The intrinsic complexity of metagenomic data and the diversity of applications call for efficient and accurate computational methods in data handling. In this thesis, I present three primary projects that collectively focus on the computational analysis of metagenomic data, each addressing a distinct topic.
In the first project, I designed and implemented an algorithm named Mapbin for reference-free genomic binning of metagenomic assemblies. Binning aims to group a mixture of genomic fragments based on their genome origin. Mapbin enhances binning results by building a multilayer network that combines the initial binning, assembly graph, and read-pairing information from paired-end sequencing data. The network is further partitioned by the community-detection algorithm, Infomap, to yield a new binning result. Mapbin was tested on multiple simulated and real datasets. The results indicated an overall improvement in the common binning quality metrics.
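The Mapbin implementation itself is not reproduced in this summary, so the following is only a minimal sketch of the general idea under stated assumptions: the initial binning, the assembly graph, and read-pair links are merged into one weighted contig graph, which is then re-partitioned. Mapbin partitions with Infomap; this sketch substitutes networkx's greedy modularity partitioner, and the layer weights and function names are illustrative placeholders.

```python
# Illustrative sketch (not the Mapbin implementation): merge three evidence
# layers into one weighted contig graph, then re-partition it into bins.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def build_multilayer_graph(initial_bins, assembly_edges, read_pair_edges,
                           w_bin=1.0, w_asm=1.0, w_pair=0.5):
    """initial_bins: dict contig -> bin label (None if unbinned);
    assembly_edges / read_pair_edges: iterables of (contig_u, contig_v) pairs.
    The layer weights are free parameters chosen for illustration only."""
    g = nx.Graph()
    # Layer 1: contigs sharing an initial bin attract each other
    # (quadratic per bin, acceptable only for a small sketch).
    by_bin = {}
    for contig, label in initial_bins.items():
        g.add_node(contig)
        if label is not None:
            by_bin.setdefault(label, []).append(contig)
    for members in by_bin.values():
        for i, u in enumerate(members):
            for v in members[i + 1:]:
                w = g.get_edge_data(u, v, {}).get("weight", 0.0)
                g.add_edge(u, v, weight=w + w_bin)
    # Layers 2 and 3: assembly-graph adjacency and read-pair linkage.
    for edges, w_layer in ((assembly_edges, w_asm), (read_pair_edges, w_pair)):
        for u, v in edges:
            w = g.get_edge_data(u, v, {}).get("weight", 0.0)
            g.add_edge(u, v, weight=w + w_layer)
    return g

def repartition(graph):
    # Mapbin uses Infomap; a generic modularity-based community
    # detector stands in for it in this sketch.
    communities = greedy_modularity_communities(graph, weight="weight")
    return {contig: idx for idx, members in enumerate(communities) for contig in members}
```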
The second and third projects are both derived from ImMiGeNe, a collaborative and multidisciplinary study investigating the interplay between gut microbiota, host genetics, and immunity in stem-cell transplantation (SCT) patients. In the second project, I conducted microbiome analyses of the metagenomic data. The workflow included the removal of contaminant reads and multiple taxonomic and functional profiling analyses. The results revealed that the SCT recipients' samples yielded significantly fewer reads and were heavily contaminated with host DNA, and that their microbiomes displayed evident signs of dysbiosis. Finally, I discussed several inherent challenges posed by extremely low levels of target DNA and high levels of contamination in the recipient samples, which cannot be rectified solely through bioinformatics approaches.
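The summary above does not name the tools used for decontamination, so the snippet below is only a generic sketch of one common host-read-removal step, assuming the reads have already been aligned to a human reference and written to a BAM file; the file names are hypothetical.

```python
# Generic host-decontamination sketch: keep only read pairs where neither mate
# aligned to the host reference, then pass them on to microbiome profiling.
import pysam

def keep_non_host_reads(in_bam="reads_vs_host.bam", out_bam="non_host.bam"):
    with pysam.AlignmentFile(in_bam, "rb") as src, \
         pysam.AlignmentFile(out_bam, "wb", template=src) as dst:
        kept = total = 0
        for read in src.fetch(until_eof=True):   # assumes paired-end reads
            total += 1
            if read.is_unmapped and read.mate_is_unmapped:
                dst.write(read)
                kept += 1
    print(f"kept {kept}/{total} reads as putatively non-host")
```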
The primary goal of the third project is to design a set of primers that can be used to cover bacterial flagellin genes present in the human gut microbiota. Considering the notable diversity of flagellins, I incorporated a method to select representative bacterial flagellin gene sequences, a heuristic approach based on established primer design methods to generate a degenerate primer set, and a selection method to filter out genes unlikely to occur in the human gut microbiome. As a result, I successfully curated a reduced yet representative set of primers that would be practical for experimental implementation.
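The heuristic itself is not spelled out above; as a toy illustration of the degenerate-primer idea, the snippet below collapses each column of an aligned primer window into its IUPAC ambiguity code and reports the resulting degeneracy. The example sequences are made up, and the representative-selection and gut-specific filtering steps are not reproduced.

```python
# Toy illustration of degenerate-primer construction: collapse each column of a
# gap-free alignment window into its IUPAC ambiguity code.
IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C", frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AG"): "R", frozenset("CT"): "Y", frozenset("GC"): "S", frozenset("AT"): "W",
    frozenset("GT"): "K", frozenset("AC"): "M", frozenset("CGT"): "B", frozenset("AGT"): "D",
    frozenset("ACT"): "H", frozenset("ACG"): "V", frozenset("ACGT"): "N",
}

def degenerate_primer(aligned_window):
    """aligned_window: equal-length, gap-free sequences covering the primer site."""
    primer = []
    degeneracy = 1
    for column in zip(*aligned_window):
        bases = frozenset(column)
        primer.append(IUPAC[bases])
        degeneracy *= len(bases)   # number of distinct oligos the code encodes
    return "".join(primer), degeneracy

# Example with made-up flagellin-like fragments:
window = ["ATGGCAGTTC", "ATGGCGGTTC", "ATGACAGTCC"]
print(degenerate_primer(window))   # ('ATGRCRGTYC', 8)
```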
Gabriel Harvey and the History of Reading: Essays by Lisa Jardine and others
Few articles in the humanities have had the impact of Lisa Jardine and Anthony Grafton’s seminal ‘Studied for Action’ (1990), a study of the reading practices of Elizabethan polymath and prolific annotator Gabriel Harvey. Their excavation of the setting, methods and ambitions of Harvey’s encounters with his books ignited the History of Reading, an interdisciplinary field which quickly became one of the most exciting corners of the scholarly cosmos. A generation inspired by the model of Harvey fanned out across the world’s libraries and archives, seeking to reveal the many creative, unexpected and curious ways that individuals throughout history responded to texts, and how these interpretations in turn illuminate past worlds.
Three decades on, Harvey’s example and Jardine’s work remain central to cutting-edge scholarship in the History of Reading. By uniting ‘Studied for Action’ with published and unpublished studies on Harvey by Jardine, Grafton and the scholars they have influenced, this collection provides a unique lens on the place of marginalia in textual, intellectual and cultural history. The chapters capture subsequent work on Harvey and map the fields opened by Jardine and Grafton’s original article, collectively offering a posthumous tribute to Lisa Jardine and an authoritative overview of the History of Reading.
Self-supervised learning for transferable representations
Machine learning has undeniably achieved remarkable advances thanks to large labelled datasets and supervised learning. However, this progress is constrained by the labour-intensive annotation process. It is not feasible to generate extensive labelled datasets for every problem we aim to address. Consequently, there has been a notable shift in recent times toward approaches that solely leverage raw data. Among these, self-supervised learning has emerged as a particularly powerful approach, offering scalability to massive datasets and showcasing considerable potential for effective knowledge transfer. This thesis investigates self-supervised representation learning with a strong focus on computer vision applications. We provide a comprehensive survey of self-supervised methods across various modalities, introducing a taxonomy that categorises them into four distinct families while also highlighting practical considerations for real-world implementation. Our focus thereafter is on the computer vision modality, where we perform a comprehensive benchmark evaluation of state-of-the-art self-supervised models on a diverse set of downstream transfer tasks. Our findings reveal that self-supervised models often outperform supervised learning across a spectrum of tasks, albeit with correlations weakening as tasks transition beyond classification, particularly for datasets with distribution shifts. Digging deeper, we investigate the influence of data augmentation on the transferability of contrastive learners, uncovering a trade-off between spatial and appearance-based invariances that generalise to real-world transformations. This begins to explain the differing empirical performances achieved by self-supervised learners on different downstream tasks, and it showcases the advantages of specialised representations produced with tailored augmentation. Finally, we introduce a novel self-supervised pre-training algorithm for object detection, aligning pre-training with downstream architecture and objectives, leading to reduced localisation errors and improved label efficiency. In conclusion, this thesis contributes a comprehensive understanding of self-supervised representation learning and its role in enabling effective transfer across computer vision tasks.
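The benchmarking protocol is not detailed in the summary above; one common way to measure transferability is a linear probe on frozen features, sketched below under the assumption that features have already been extracted from a frozen pre-trained encoder (all array names and sizes are placeholders).

```python
# Minimal linear-probe sketch: freeze the pre-trained encoder, extract features
# once, then fit a linear classifier on the downstream task.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def linear_probe(train_feats, train_labels, test_feats, test_labels):
    """train_feats/test_feats: (n_samples, feat_dim) arrays from a frozen encoder."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))
    clf.fit(train_feats, train_labels)
    return clf.score(test_feats, test_labels)   # downstream top-1 accuracy

# Usage with random placeholder features (stand-ins for encoder outputs):
rng = np.random.default_rng(0)
acc = linear_probe(rng.normal(size=(200, 128)), rng.integers(0, 5, 200),
                   rng.normal(size=(50, 128)), rng.integers(0, 5, 50))
print(f"probe accuracy: {acc:.2f}")
```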
The development of liquid crystal lasers for application in fluorescence microscopy
Lasers can be found in many areas of optical medical imaging and their properties have enabled the rapid advancement of many imaging techniques and modalities. Their narrow linewidth, relative brightness and coherence are advantageous in obtaining high quality images of biological samples. This is particularly beneficial in fluorescence microscopy. However, commercial imaging systems depend on the combination of multiple independent laser sources or use tuneable sources, both of which are expensive and have large footprints. This thesis demonstrates the use of liquid crystal (LC) laser technology, a compact and portable alternative, as an exciting candidate to provide a tailorable light source for fluorescence microscopy.
Firstly, to improve laser performance parameters such that high-power, high-specification lasers could be realised, improvements to device fabrication were presented. Studies exploring the effect of alignment-layer rubbing depth and device cell-gap spacing on laser performance were conducted. The results were the first of their kind and produced advances in fabrication that were critical to repeatedly realising stable, single-mode LC laser outputs with sufficient power to conduct microscopy. These investigations also aided the realisation of laser-diode pumping of LC lasers. Secondly, optimum dye concentrations for single- and multi-dye systems were identified and used to optimise the LC laser mixtures. These investigations yielded novel findings relating to the gain media in LC laser systems. Collectively, these advancements yielded lasers of extremely low threshold, comparable to the lowest thresholds reported in the literature.
A portable LC laser system was integrated into a microscope and used to perform fluorescence microscopy. Successful two-colour imaging and the multi-wavelength switching ability of LC lasers were exhibited for the first time. The wavelength selectivity of LC lasers was shown to allow lower incident average powers to be used for comparable image quality. Lastly, wavelength selectivity enabled the LC laser fluorescence microscope to achieve sufficient sensitivity to conduct quantitative fluorescence measurements. It is hoped that the development of LC lasers and their suitability for fluorescence microscopy demonstrated in this thesis will push the technology towards commercialisation and practical application.
Reconstruction and Synthesis of Human-Scene Interaction
In this thesis, we argue that the 3D scene is vital for understanding, reconstructing, and synthesizing human motion. We present several approaches which take the scene into consideration in reconstructing and synthesizing Human-Scene Interaction (HSI). We first observe that state-of-the-art pose estimation methods ignore the 3D scene and hence reconstruct poses that are inconsistent with the scene. We address this by proposing a pose estimation method that takes the 3D scene explicitly into account. We call our method PROX for Proximal Relationships with Object eXclusion. We leverage the data generated using PROX and build a method to automatically place 3D scans of people with clothing in scenes. The core novelty of our method is encoding the proximal relationships between the human and the scene in a novel HSI model, called POSA for Pose with prOximitieS and contActs. POSA is limited to static HSI, however. We propose a real-time method for synthesizing dynamic HSI, which we call SAMP for Scene-Aware Motion Prediction. SAMP enables virtual humans to navigate cluttered indoor scenes and naturally interact with objects. Data-driven kinematic models, like SAMP, can produce high-quality motion when applied in environments similar to those seen in the training data. However, when applied to new scenarios, kinematic models can struggle to generate realistic behaviors that respect scene constraints. In contrast, we present InterPhys, which uses adversarial imitation learning and reinforcement learning to train physically simulated characters that perform scene interaction tasks in a physically plausible and life-like manner.
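The actual PROX/POSA objectives are not given above; purely as an illustration of how a scene can be taken "explicitly into account", the sketch below adds penetration and contact terms to a pose-fitting loss, assuming a callable that returns the signed distance from each body vertex to the scene surface. The function names, weights, and contact band are invented for this sketch, not taken from the thesis.

```python
# Illustrative scene-aware penalty (not the PROX objective): penalise body
# vertices that penetrate scene geometry and encourage near-surface contact.
import torch

def scene_terms(vertices, scene_sdf, contact_band=0.02):
    """vertices: (N, 3) body-mesh vertices in the scene frame.
    scene_sdf: callable returning the signed distance of each point to the scene
    surface (negative = inside geometry). Both are assumptions of this sketch."""
    d = scene_sdf(vertices)                          # (N,) signed distances
    penetration = torch.relu(-d).sum()               # push vertices out of objects
    contact = torch.relu(d[d < contact_band]).sum()  # pull near-surface vertices onto surfaces
    return penetration, contact

def scene_aware_loss(data_term, vertices, scene_sdf, w_pen=10.0, w_con=1.0):
    penetration, contact = scene_terms(vertices, scene_sdf)
    return data_term + w_pen * penetration + w_con * contact
```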
Hydroecological connectivity as a normative framework for aquatic ecosystem regulation: lessons from the USA
Very little has been achieved during the first five decades of development and application of what is now known as environmental law, in terms of slowing the global rate of biodiversity loss and ecosystem degradation. A major factor in this lack of effectiveness has been, perhaps, too narrow a focus on individual elements that exist within ecosystems, rather than on the health of the ecosystems themselves. Additionally, very little attention has been paid to maintaining the integrity of the many types of connections that exist between the different components of ecosystems, notably aquatic ecosystems. These components are connected not only by water but also by a variety of ecological connections and pathways, here termed 'hydroecological connectivity' (HEC). These connections are not only important in providing abiotic and biotic corridors between components, but they also act as conduits which can translocate pollutants from one location, over vast distances, throughout a fluvial ecosystem, consequently impacting virtually all areas of human life and nature. This thesis outlines the science underpinning the first connectivity-based water law regulation, the American Clean Water Rule (CWR), and analyzes a set of legal challenges to this Rule. Barring one instance, no substantive merit was found for any of the disputed claims, and it was possible to empirically substantiate the merit of the single instance in which the CWR lacked appropriate qualification. Furthermore, this thesis examines the transferability of the Rule to South Africa. The importance of HEC is elucidated in this work using the example of headwater streams which, in aggregate, comprise 79 per cent of the length of the mapped rivers in South Africa. Also provisionally evaluated is a brightline distance, lateral to fluvial watercourses, within which water resource components that are likely to be connected to the mainstem will be found. This provides a guideline for HEC-directed administrative decision making. A connectivity-based approach to water resource governance will require limitations on some land uses on portions of land that are likely to be perceived as terrestrial but which, in fact, form part of an aquatic ecosystem. This requirement raises obvious implications for property ownership and expropriation. Here the principles of the public trust, already legislatively expressed in South African water law, provide an institutional legal framework that renders 'public' any lands which form part and parcel of the integrity of an aquatic ecosystem. The public trust doctrine anchored the reform of the post-apartheid water law of South Africa. It was introduced as part of a transformative and emancipatory approach to the democratisation of the nation's water resources and the restoration of water equity. This work provides the first comprehensive historico-legal perspective on the genealogy of, and intentions for, the public trust in South Africa, and distils the principles which the trust embodies. An example protocol is developed which shows how the trust principles underpin the formulation of guidance for determinations of beneficial water uses. Recommendations are made regarding the operationalization of the currently moribund South African public trust in water, highlighting the role of the public trust as an effective and reformatory tool of water law. In summary, this work is a translational and transdisciplinary example of carrying aquatic science into environmental law.
The complex and challenging concept of HEC is communicated in plain language, and its perceived weak point, the need to isolate areas of land which form part of the aquatic resource and incorporate these within the trust res, is then construed using the principles of the public trust doctrine. Simultaneously, the potential of the public trust to offset obstacles to environmental protection, such as the need for reformed guidance for administrative decision making, is highlighted. On this model, the public trust enfolds an ecosystem-directed HEC approach into a transformative and normative governance package which is integrative, adaptive, multi-disciplinary and proactive.
Dissecting regional heterogeneity and modeling transcriptional cascades in brain organoids
Over the past decade, there has been a rapid expansion in the development and utilization of brain organoid models, enabling three-dimensional in vivo-like views of fundamental neurodevelopmental features of corticogenesis in health and disease. Nonetheless, the methods used for generating cortical organoid fates exhibit widespread heterogeneity across different cell lines. Here, we show that a combination of dual SMAD and WNT inhibition (Triple-i protocol) establishes a robust cortical identity in brain organoids, while other widely used derivation protocols are inconsistent with respect to regional specification. In order to measure this heterogeneity, we employ single-cell RNA-sequencing (scRNA-Seq), enabling the sampling of the gene expression profiles of thousands of cells in an individual sample. However, in order to draw meaningful conclusions from scRNA-Seq data, technical artifacts must be identified and removed. In this thesis, we present a method to detect one such artifact: empty droplets that do not contain a cell and consist mainly of free-floating mRNA in the sample. Furthermore, from their expression profiles, cells can be ordered along a developmental trajectory which recapitulates the progression of cells as they differentiate. Based on this ordering, we model gene expression using a Bayesian inference approach in order to measure transcriptional dynamics within differentiating cells. This enables the ordering of genes along transcriptional cascades, statistical testing for differences in gene expression changes, and measuring potential regulatory gene interactions. We apply this approach to cortical neural stem cells differentiating into cortical neurons via an intermediate progenitor cell type in brain organoids, providing a detailed characterization of the endogenous molecular processes underlying neurogenesis.
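The Bayesian model itself is not specified in the abstract above; as a minimal illustration of characterising a gene's transcriptional dynamics along a trajectory, the sketch below fits a sigmoid (switch-like) response to one gene's expression over pseudotime by maximising a posterior with Gaussian noise and weakly informative priors. The priors, the noise model, and the synthetic data are assumptions made for this sketch, not details from the thesis.

```python
# Minimal MAP sketch: fit a sigmoid response for one gene along pseudotime.
import numpy as np
from scipy.optimize import minimize

def sigmoid(t, lo, hi, t0, k):
    return lo + (hi - lo) / (1.0 + np.exp(-k * (t - t0)))

def neg_log_posterior(params, t, y):
    lo, hi, t0, k, log_sigma = params
    sigma = np.exp(log_sigma)
    resid = y - sigmoid(t, lo, hi, t0, k)
    nll = 0.5 * np.sum(resid**2) / sigma**2 + len(y) * log_sigma
    # Weakly informative Gaussian priors on switch time and steepness (assumed).
    prior = (t0 - 0.5) ** 2 / (2 * 0.5**2) + k**2 / (2 * 10.0**2)
    return nll + prior

def fit_gene(t, y):
    """t: pseudotime in [0, 1]; y: (log-normalised) expression of one gene."""
    x0 = np.array([y.min(), y.max(), 0.5, 5.0, np.log(y.std() + 1e-6)])
    res = minimize(neg_log_posterior, x0, args=(t, y), method="Nelder-Mead")
    lo, hi, t0, k, _ = res.x
    return {"low": lo, "high": hi, "switch_time": t0, "steepness": k}

# Example with synthetic data: a gene switching on mid-trajectory.
rng = np.random.default_rng(1)
t = rng.uniform(0, 1, 300)
y = sigmoid(t, 0.2, 2.0, 0.6, 12.0) + rng.normal(0, 0.15, t.size)
print(fit_gene(t, y))   # switch_time should recover a value near 0.6
```

Ranking genes by their fitted switch times is one simple way to lay them out along a transcriptional cascade of the kind described above.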