
    Adaptive Background Music for a Fighting Game: A Multi-Instrument Volume Modulation Approach

    This paper presents our work to enhance the background music (BGM) in DareFightingICE by adding an adaptive BGM. The adaptive BGM consists of five different instruments playing a classical music piece called "Air on the G String." The BGM adapts by changing the volume of the instruments, each of which is connected to a different element of the game. We then run experiments to evaluate the adaptive BGM using a deep reinforcement learning AI that uses only audio as input (Blind DL AI). The results show that the performance of the Blind DL AI improves when playing with the adaptive BGM compared to playing without it.
    Comment: This paper, under review, is made available for participants of the DareFightingICE Competition (https://tinyurl.com/DareFightingICE) and readers interested in relevant areas.
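The abstract does not specify which game element drives which instrument, so the mapping below is purely hypothetical; it only sketches the volume-modulation idea of tying each of five instrument volumes to one element of the game state (all field and instrument names are ours, not the paper's):

```python
def instrument_volumes(state):
    """Map game-state elements to the volumes (0.0-1.0) of five instruments.

    The element-to-instrument assignment here is illustrative only.
    """
    clamp = lambda v: max(0.0, min(1.0, v))
    return {
        "violin": clamp(state["player_hp"] / state["max_hp"]),             # player health
        "cello":  clamp(state["opponent_hp"] / state["max_hp"]),           # opponent health
        "flute":  clamp(1.0 - state["distance"] / state["stage_width"]),   # proximity
        "harp":   clamp(state["player_energy"] / state["max_energy"]),     # energy meter
        "organ":  clamp(state["time_left"] / state["round_time"]),         # round timer
    }
```

Because the Blind DL AI hears only audio, any such mapping effectively sonifies the game state, which is consistent with the reported performance gain.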

    Development of Quantitative Bone SPECT Analysis Methods for Metastatic Bone Disease

    Prostate cancer is one of the most prevalent types of cancer in males in the United States. Bone is a common site of metastases for metastatic prostate cancer. However, bone metastases are often considered “unmeasurable” using standard anatomic imaging and the RECIST 1.1 criteria. As a result, response to therapy is often suboptimally evaluated by visual interpretation of planar bone scintigraphy, with response criteria related to the presence or absence of new lesions. With the commercial availability of quantitative single-photon emission computed tomography (SPECT) methods, it is now feasible to establish quantitative metrics of the therapy response of skeletal metastases. Quantitative bone SPECT (QBSPECT) may provide the ability to estimate bone lesion uptake, volume, and the number of lesions more accurately than planar imaging. However, the accuracy of activity quantification in QBSPECT relies heavily on the precision with which bone metastases and bone structures are delineated. In this research, we aim to develop automated image segmentation methods for fast and accurate delineation of bone and bone metastases in QBSPECT. To begin, we developed registration methods to generate a dataset of realistic and anatomically varying computerized phantoms for use in QBSPECT simulations. Using these simulations, we developed supervised, computer-automated segmentation methods to minimize intra- and inter-observer variations in delineating bone metastases. This project provides accurate segmentation techniques for QBSPECT and paves the way for the development of QBSPECT methods for assessing the therapy response of bone metastases.

    The Logic and Limits of the Federal Reserve Act

    The Federal Reserve is a monetary authority subject to minimal executive and judicial oversight. It also has the power to create money, which permits it to disburse funds without drawing on the U.S. Treasury. Since 2008, it has leveraged this power to an unprecedented extent. It has rescued teetering financial conglomerates, purchased trillions of dollars of mortgage-backed securities, and opened numerous ad hoc lending facilities to support ordinary businesses, nonprofits, and municipalities. This Article identifies the causes and consequences of the Federal Reserve's expanded footprint by recovering the logic and limits of its enabling act. It argues that to understand the Federal Reserve — including its independence, expansion, and capacity — it is necessary first to understand the statutory scheme for money and banking. Congress chartered investor-owned banks to issue most of the money supply and established the Federal Reserve for a limited purpose: to administer the banking system. Congress equipped the Federal Reserve with an interrelated set of tools to achieve a specific objective: ensure that the banking system creates enough money to keep economic resources productively employed nationwide. The rise of shadow banks — firms that issue alternative forms of money without a bank charter — has impaired the Federal Reserve’s tools. As the Federal Reserve has scrambled to adapt, it has taken on tasks it was not built to handle. This evolution has prompted calls for the Federal Reserve to tackle even more policy challenges. It has also undermined the Federal Reserve’s ability to effectively achieve its core goals. An overloaded Federal Reserve is understandable, but not desirable. Congress should modernize the Federal Reserve Act, and the banking laws on which it depends, to improve monetary administration in the United States.

    Deep Multimodality Image-Guided System for Assisting Neurosurgery

    Intracranial brain tumors are among the ten most common malignant cancers and account for substantial morbidity and mortality. The largest histological category of primary brain tumors is the gliomas, which have an extremely heterogeneous appearance and are difficult to distinguish radiologically from other brain lesions. Neurosurgery is usually the standard treatment for newly diagnosed glioma patients and may be followed by radiotherapy and adjuvant temozolomide chemotherapy. Brain tumor surgery, however, faces major challenges in achieving maximal tumor removal while avoiding postoperative neurological deficits. Two of these neurosurgical challenges are addressed here. First, manual delineation of a glioma, including its subregions, is difficult because of its infiltrative character and the presence of heterogeneous contrast enhancement. Second, the brain deforms in shape (so-called "brain shift") in response to surgical manipulation, swelling caused by osmotic drugs, and anesthesia, which limits the usefulness of preoperative image data for guiding the procedure. Image-guided systems offer physicians invaluable insight into anatomical or pathological targets based on modern imaging modalities such as magnetic resonance imaging (MRI) and ultrasound (US). Image-guided instruments are mainly computer-assisted systems that use computer-vision methods to facilitate perioperative surgical procedures. Surgeons, however, must still mentally fuse the surgical plan derived from preoperative images with real-time information while manipulating the surgical instruments inside the body and monitoring progress toward the target.
Hence, the need for image guidance during neurosurgical procedures has always been a major concern of physicians. The aim of this research is to develop a novel system for perioperative image-guided neurosurgery (IGN), namely DeepIGN, with which the expected outcomes of brain tumor surgery can be achieved, maximizing overall survival and minimizing postoperative neurological morbidity. In this thesis, novel methods are first proposed for the core components of the DeepIGN system: brain tumor segmentation in MRI and multimodal registration of preoperative MRI to intraoperative ultrasound (iUS) images, using recent developments in deep learning. The outcome predictions of the deep-learning networks used are then further interpreted and examined by generating human-understandable, explainable maps. Finally, open-source packages were developed and integrated into widely recognized software, which is responsible for integrating information from tracking systems, image visualization and fusion, and the display of real-time updates of the instruments relative to the patient space. The components of DeepIGN were validated in the laboratory and evaluated in a simulated operating room. For the segmentation module, DeepSeg, a generic decoupled deep-learning framework for automatic glioma delineation in brain MRI, achieved an accuracy of 0.84 in terms of the Dice coefficient for the gross tumor volume. Performance improvements were observed when applying advanced deep-learning approaches such as 3D convolutions across all layers, region-based training, on-the-fly data-augmentation techniques, and ensemble methods.
To compensate for brain shift, an automated, fast, and accurate deformable approach, iRegNet, is proposed for registering preoperative MRI to iUS volumes as part of the multimodal registration module. Extensive experiments were conducted on two multi-location databases: BITE and RESECT. Two experienced neurosurgeons carried out an additional qualitative validation of this study by overlaying MRI-iUS pairs before and after deformable registration. The experimental results show that the proposed iRegNet is fast and achieves the best accuracies. Moreover, the proposed iRegNet delivers competitive results even for untrained images, demonstrating its generality, and it can therefore be useful for intraoperative neurosurgical guidance. For the explainability module, the NeuroXAI framework is proposed to increase the trust of medical experts in the application of AI techniques and deep neural networks. NeuroXAI comprises seven explanation methods that provide visualization maps to make deep-learning models transparent. The experimental results show that the proposed XAI framework performs well in extracting local and global contexts and in generating explainable saliency maps for understanding the prediction of the deep network. In addition, visualization maps are generated to reveal the flow of information through the internal layers of the encoder-decoder network and to understand the contribution of the MRI modalities to the final prediction. The explanation process could provide medical professionals with additional information about the tumor segmentation results and thus help them understand how the deep-learning model can process MRI data successfully.
Furthermore, an interactive neurosurgical display for procedural guidance was developed that supports available commercial hardware such as iUS navigation devices and instrument-tracking systems. The clinical environment and the technical requirements of the integrated multimodal DeepIGN system were established with the ability to integrate (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time iUS data, and (3) positional instrument tracking. The accuracy of this system was tested using a custom agar phantom model, and its use in a preclinical operating room was simulated. The results of the clinical simulation confirmed that assembly of the system is straightforward, can be carried out in a clinically acceptable time of 15 minutes, and achieves clinically acceptable accuracy. In this work, a multimodal IGN system was developed that leverages recent advances in deep learning to guide neurosurgeons precisely and to incorporate pre- and intraoperative patient image data as well as interventional devices into the surgical procedure. DeepIGN was developed as open-source research software to accelerate research in this field, to facilitate sharing among multiple research groups, and to enable continuous further development by the community. The experimental results are very promising for the application of deep-learning models to support interventional procedures, a crucial step toward improving the surgical treatment of brain tumors and the corresponding long-term postoperative outcomes.
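The segmentation accuracy reported for DeepSeg (0.84) is a Dice coefficient, a standard overlap score between a predicted and a reference binary mask. A minimal sketch (ours, not the DeepSeg code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|).

    Returns 1.0 when both masks are empty (perfect agreement by convention).
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A Dice of 1.0 means perfect overlap and 0.0 means none, so 0.84 indicates substantial but imperfect agreement with the manual gross-tumor-volume delineation.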

    Early English Monstrosity: Francis Bacon and His Interlocutors

    This dissertation charts the distinct but related epistemological, ontological, and aesthetic frameworks defining approaches to monstrosity for three early modern English authors: Francis Bacon, Benjamin Jonson, and Thomas Browne. A central claim is that threads of Bacon’s approach to monstrosity, and natural philosophy broadly, are evident and influential in the approaches of Jonson and Browne, particularly in the way that Bacon extricates monstrosity from the supernatural, figurative, and divine, instead rooting it in the natural and explicable. The dissertation also locates each author’s conception of monstrosity in relation to art and nature. Because monstrosity is pervasive—engaged in aesthetics, morality, nature and natural order, social mores, and other aspects of culture and knowledge making—the point on the spectrum between art and nature at which each author positions monstrosity offers insight into broader early modern considerations of the relationships between art, nature, and monstrosity. Related to these considerations is each author’s concern for establishing criteria to determine kinds or types of things, including the boundaries for genres, species, normal and abnormal, truth and error, natural and artificial, natural and monstrous, and monstrous and artificial. The first chapter excavates Bacon’s terminologically labyrinthine definition of monstrosity, illustrating Bacon’s struggle to describe monstrosity as he simultaneously works to sever its ties with supernatural perceptions of it. It argues that monstrosity is central to Bacon’s goals of reforming natural philosophy, presenting it as a tool that creates clarity in the natural world and provides opportunities for human improvement. The second chapter examines the natural-historical texts influencing Jonson’s Epicoene to contextualize his satirical critiques of characters whose bodies cross boundaries between genders and species.
It argues that Jonson strategically utilizes but also satirizes monstrosity to rebuke what he views as sociocultural dysfunction in the form of monstrous mixture. The third chapter examines Browne’s Religio Medici and Pseudodoxia Epidemica to establish Browne’s distinctive acceptance of monstrosity’s inherent resistance to categorization. It argues that despite Browne’s unwavering faith and piety, his conception of monstrosity is not rooted in divine punishment or warning; rather, monstrosity represents God’s unknowable wisdom, which is worthy of our wonder.
    Doctor of Philosophy

    The Forgetting of Fire: An Archaeology of Technics

    This dissertation applies the methods of Bachelard and Foucault to key moments in the development of science. By analyzing the attitudes of four figures from four different centuries, it shows how epistemic attitudes have shifted from a participation in non-human, natural realities to a construction of human-centred technologies. The idea of an epistemic attitude is situated in reference to Foucault’s concept of the episteme and his method of archaeology; an attitude is the institutionally-situated and personally-enacted comportment of an epistemic agent toward an object of knowledge. This line of thought is pursued under the theme of elemental fire, which begins as a substance for early alchemical knowledge and ends up as a quantifiable branch of functions in technics. We call the attitude of Paracelsus, an alchemist of the sixteenth century, “participation,” which sheds light on the intimate goal of his alchemical practice. In the seventeenth century, Robert Boyle inaugurates the evolution of technics with the attitude of instrumentalization. Building on this, Lavoisier participates in the development of technics through his effort to construct the countable, using measuring instruments and chemical techniques. This attitude of accounting, rather than his theory of oxygen or his basic observations in the laboratory, determines his decisive role in the development of chemistry. Finally, we discuss the attitude of employment as we find it in Sadi Carnot and the engineers of the steam engine, watching as fire for these epistemic agents becomes nothing but an employed instant of combustion.

    Generative Adversarial Network (GAN) for Medical Image Synthesis and Augmentation

    Medical image processing aided by artificial intelligence (AI) and machine learning (ML) significantly improves medical diagnosis and decision making. However, the difficulty of accessing well-annotated medical images has become one of the main constraints on further improving this technology. The generative adversarial network (GAN) is a deep neural network (DNN) framework for data synthesis, which provides a practical solution for medical image augmentation and translation. In this study, we first perform a quantitative survey of the published studies on GANs for medical image processing since 2017. Then a novel adaptive cycle-consistent adversarial network (Ad CycleGAN) is proposed. We use a malaria blood cell dataset (19,578 images) and a COVID-19 chest X-ray dataset (2,347 images), respectively, to test the new Ad CycleGAN. The quantitative metrics include mean squared error (MSE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), universal image quality index (UIQI), spatial correlation coefficient (SCC), spectral angle mapper (SAM), visual information fidelity (VIF), Fréchet inception distance (FID), and the classification accuracy of the synthetic images. The CycleGAN and variational autoencoder (VAE) are also implemented and evaluated for comparison. The experimental results on malaria blood cell images indicate that the Ad CycleGAN generates more valid images than CycleGAN or VAE. The synthetic images by Ad CycleGAN or CycleGAN have better quality than those by VAE. The synthetic images by Ad CycleGAN have the highest accuracy, 99.61%. In the experiment on COVID-19 chest X-rays, the synthetic images by Ad CycleGAN or CycleGAN have higher quality than those generated by the VAE. However, the synthetic images generated through the homogeneous image augmentation process have better quality than those synthesized through the image translation process.
The synthetic images by Ad CycleGAN achieve a higher accuracy (95.31%) than those by CycleGAN (93.75%). In conclusion, the proposed Ad CycleGAN provides a new path to synthesize medical images with desired diagnostic or pathological patterns. It can be considered a new form of conditional GAN with effective control over the synthetic image domain. The findings offer a new path to improving deep neural network performance in medical image processing.
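Several of the listed fidelity metrics are simple pixel-wise statistics. As a sketch of how MSE, RMSE, and PSNR relate (ours, not the study's evaluation code):

```python
import numpy as np

def image_metrics(ref: np.ndarray, syn: np.ndarray, max_val: float = 255.0):
    """Compute MSE, RMSE, and PSNR between a reference and a synthetic image.

    PSNR = 10 * log10(max_val^2 / MSE); it is infinite for identical images.
    """
    ref = ref.astype(np.float64)
    syn = syn.astype(np.float64)
    mse = float(np.mean((ref - syn) ** 2))
    rmse = float(np.sqrt(mse))
    psnr = float("inf") if mse == 0.0 else 10.0 * np.log10(max_val ** 2 / mse)
    return mse, rmse, psnr
```

Lower MSE/RMSE and higher PSNR indicate closer agreement with the reference; the perceptual and distributional metrics (UIQI, VIF, FID) capture aspects these pixel-wise scores miss.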

    USLR: an open-source tool for unbiased and smooth longitudinal registration of brain MR

    We present USLR, a computational framework for longitudinal registration of brain MRI scans that estimates nonlinear image trajectories which are smooth across time, unbiased toward any timepoint, and robust to imaging artefacts. It operates on the Lie algebra parameterisation of spatial transforms (which is compatible with rigid transforms and stationary velocity fields for nonlinear deformation) and takes advantage of log-domain properties to solve the problem using Bayesian inference. USLR estimates rigid and nonlinear registrations that (i) bring all timepoints to an unbiased subject-specific space and (ii) yield a smooth trajectory across the imaging time series. We capitalise on learning-based registration algorithms and closed-form expressions for fast inference. An Alzheimer's disease use-case study showcases the benefits of the pipeline on multiple fronts, such as time-consistent image segmentation to reduce intra-subject variability, subject-specific prediction, and population analysis using tensor-based morphometry. We demonstrate that such an approach improves upon cross-sectional methods in identifying group differences, which can be helpful in detecting more subtle atrophy levels or in reducing sample sizes in clinical trials. The code is publicly available at https://github.com/acasamitjana/uslr
    Comment: Submitted to Medical Image Analysis
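The key log-domain property the abstract mentions is that transforms can be averaged in the Lie algebra, giving a mid-space that favours no single timepoint. A minimal illustration of the principle (not the USLR code; all helper names are ours, pure translations stand in for full rigid transforms, and the series-based matrix exp/log stand in for library routines such as scipy.linalg.expm/logm):

```python
import numpy as np

def matexp(A, terms=20):
    """Matrix exponential via a truncated power series (adequate for small inputs)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def matlog(M, terms=20):
    """Matrix logarithm via the series for log(I + X); valid when X = M - I is small."""
    X = M - np.eye(M.shape[0])
    out = np.zeros_like(X)
    power = np.eye(M.shape[0])
    for k in range(1, terms):
        power = power @ X
        out = out + ((-1) ** (k + 1)) * power / k
    return out

def unbiased_mean_transform(transforms):
    """Average transforms in the log-domain (the Lie algebra), so the resulting
    mid-space is not biased toward any single timepoint."""
    mean_log = np.mean([matlog(T) for T in transforms], axis=0)
    return matexp(mean_log)

def translation(tx, ty):
    """Homogeneous 2D translation matrix, standing in for a full rigid transform."""
    T = np.eye(3)
    T[:2, 2] = [tx, ty]
    return T
```

Averaging in the algebra rather than the group is what keeps the construction symmetric in the timepoints; the same idea extends to stationary velocity fields for the nonlinear part.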

    Metamorphoses: seventeenth-century ideas on fossils and Earth history

    Metamorphoses is broadly about how fossils regained their historicity in the seventeenth century, and how this changed history as fossils were simultaneously transformed into instruments of science in the hands, hearts, and minds of savants of the organic origin opinion: the opinion that fossils are either the petrified remains of once-living beings themselves, or their imprints. In studying the past with fossils, intertwined sacred, civil and natural histories became hypothetical, subjected to new, instrument-mediated investigative methods; in turn, fossils were investigated historically; and novel epistemological practices – outcomes of ontological anxieties – produced historicities, and a common experience of Earth history. More specifically, Metamorphoses examines the work of Robert Hooke, John Ray, Nicolaus Steno, Thomas Burnet, William Dugdale, Bernardino Ramazzini, Gottfried Wilhelm Leibniz, and others, to discuss how and why they broke from traditional history in idiosyncratic yet overlapping ways. Their shared idea about what a fossil is fostered a shift in visuality belonging to the seventeenth century with its instrument-mediated vision, and novel investigative methods; but it also represented their new attitudes to history, for interest in fossils was not only about phenomena. Rather, by amalgamating new ways of observing and imagining the earth with ancient wisdom, alchemical ideas, and humanist textual practices, these Earth historians fashioned historiographical approaches that could scarcely have been imagined a century before. Leibniz’s struggle to make a scientific history, mixing helpings of the work of Burnet, Ramazzini, and others into his own ideas, handed new tools to eighteenth-century historians, not only tools for doing and thinking about Earth history but also tools for witnessing and understanding its metamorphoses.