Numerical Simulations of Cavitating Bubbles in Elastic and Viscoelastic Materials for Biomedical Applications
The interactions of cavitating bubbles with elastic and viscoelastic materials play a central role in many biomedical applications. This thesis makes use of numerical modeling and data-driven approaches to characterize soft biomaterials at high strain rates via observation of bubble dynamics, and to model burst-wave lithotripsy, a focused ultrasound therapy to break kidney stones.
In the first part of the thesis, a data assimilation framework is developed for cavitation rheometry, a technique that uses bubble dynamics to characterize soft, viscoelastic materials at high strain rates. This framework aims to determine the material properties that best fit observed cavitating-bubble dynamics. We propose ensemble-based data assimilation methods to solve this inverse problem. The approach is validated with surrogate data generated by adding random noise to simulated bubble-radius time histories, and we show that the parameters of interest can be estimated confidently and efficiently to within 5% using an iterative Kalman smoother and an ensemble-based 4D-Var hybrid technique. The framework is then applied to experimental data in three distinct settings, with varying bubble nucleation methods, cavitation media, and material constitutive models. We demonstrate that the mechanical properties of the gels used in each experiment can be estimated quickly and accurately despite experimental inconsistencies, model error, and noisy data. Finally, the framework is used to further our understanding of the underlying physics and to identify limitations of our bubble dynamics model for violent bubble collapse.
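The ensemble-based inversion described above can be illustrated on a toy problem. The sketch below is a minimal ensemble Kalman inversion under invented assumptions: a hypothetical exponential-relaxation stand-in for the bubble-radius model (the thesis uses full viscoelastic bubble-dynamics models), and arbitrary parameter values, ensemble size, and noise level. It only shows the general pattern of matching surrogate noisy radius histories with an ensemble of parameter guesses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the bubble-dynamics model: relaxation of the
# normalized bubble radius R(t) toward equilibrium, parameterized by an
# amplitude and a rate. This is NOT the thesis model, just a sketch.
def forward(theta, t):
    amp, rate = theta
    return 1.0 + amp * np.exp(-rate * t)

t = np.linspace(0.0, 3.0, 150)
theta_true = np.array([0.5, 2.0])
noise = 0.005
y_obs = forward(theta_true, t) + rng.normal(0.0, noise, t.size)  # surrogate data

# Ensemble Kalman inversion: iteratively nudge an ensemble of parameter
# guesses toward consistency with the observed radius time history.
ens = rng.normal([0.3, 1.0], [0.1, 0.4], size=(50, 2))
for _ in range(20):
    ens = np.clip(ens, 0.01, None)           # keep parameters physical
    preds = np.array([forward(th, t) for th in ens])
    dth = ens - ens.mean(axis=0)
    dy = preds - preds.mean(axis=0)
    C_ty = dth.T @ dy / (len(ens) - 1)       # parameter-observation cross-cov
    C_yy = dy.T @ dy / (len(ens) - 1)        # predicted-observation covariance
    K = C_ty @ np.linalg.inv(C_yy + noise**2 * np.eye(t.size))  # Kalman gain
    ens = ens + (y_obs + rng.normal(0.0, noise, preds.shape) - preds) @ K.T

theta_est = ens.mean(axis=0)                 # ensemble-mean parameter estimate
```

Each iteration performs one Kalman-style analysis step with freshly perturbed observations, so the ensemble contracts onto parameter values whose predicted radius histories match the data.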
In the second part of the thesis, we simulate burst-wave lithotripsy (BWL), a non-invasive treatment for kidney stones that relies on repeated short bursts of focused ultrasound. Numerical approaches to BWL require simulating acoustic waves interacting with solid stones as well as with the bubble clouds that can nucleate ahead of the stone. We implement and validate a hypoelastic material model which, together with a continuum damage model and calibration of a spherically-focused transducer array, enables us to assess the effectiveness of various treatment strategies on arbitrary stones. We present a preliminary investigation of the bubble dynamics occurring during treatment and their impact on damage to the stone. Finally, we propose a strategy to reduce shielding by collapsing bubbles ahead of the stone via the introduction of a secondary, low-frequency ultrasound pulse during treatment.
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulations employing the actual properties of a commercial LC medium.
Tradition and Transformation: Analysis of the Trajectory of Change in Artwork by Artists With an Educational Background in Oriental Painting After Graduation
This qualitative multiple case study tracks the work of six artists whose BFA major was in oriental painting, the traditional Korean painting genre. The study reveals a change in their artwork from their original training to their current manner of artistic expression after graduation. This transformation occurs as they develop their artwork in a more contemporary direction within a South Korean art world in which Western/global art has become the center. Although oriental painting has been influenced by Western art since liberation from Japanese colonial rule in the 1940s, this situation presents conceptual conflicts between traditional and contemporary approaches to the genre. This study examines how the six artists find their artistic position between these conflicting values through the examination of the trajectory of the changes in their artwork since their graduation from undergraduate school.
The participants of this study were six artists (three men and three women) who earned a BFA degree in oriental painting in South Korea. Semi-structured interviews, visual data from the artists’ artworks, and written notes were the sources for data analysis. The qualitative case study was grounded in a constructivist philosophical worldview. The evolution of the participants’ artwork is examined through theories such as Jack Mezirow’s transformative learning and Robert Kegan’s adult development.
This study presents the transformation from two perspectives: sociocultural factors and personal motivations. Each perspective reflects changes in materials and techniques as well as changes in imagery. Furthermore, the enduring values of oriental painting within the transformation are examined, including Eastern philosophy and aesthetics and visual elements such as three-distance perspective, blank space, and the expression of line. Ultimately, this study argues that various avenues of transformation exist based on oriental painting, with tradition persisting in novel forms of contemporary art.
Did you just make that up? An auto-ethnographic investigation into the emergence of images in painting, as situated within the framework of C20th and C21st British Art
I am a painter. My paintings depict figures in groups or alone, enacting narrative in illusionistic space. The paintings are produced without much explicit preparation in terms of their content, relying on improvisation in the studio for their realisation. I do not have a clear idea of when they are finished, either, and I often alter paintings long after their first conclusion. I set out to examine where the images and spaces I depict come from, how their form develops and how they might continue to emerge; how I make things up, in other words. In doing this, I hope to make the paintings better by increasing the complexity of my understanding of them, to shed light on creative practice in general, and to offer insight to other painters like me, and to researchers into creative practice.
I have subjected the emergent and shifting nature of my paintings to academic study by combining a close attention to the work and its processes with a self-reflective journal of the activity and ongoing theoretical writing. This process generates a virtuous spiral of activity in the studio, as writing about the painting produces insight, which is fed into the painting, making it better, and producing more insight, which is fed into the painting and so on.
In subjecting my studio practice to study, I hope to open it up in a way that might be useful to others. The analysis of reflections on my own painting - developing the concept of the intersubjective object - is an attempt to make sense of interrelationships between the material, social and theoretical territories of painting. This is where the originality of my study lies. In presenting it, I offer insights into my creative practice that will be useful for other creative practitioners, and for academic study of creative practice.
I address questions about improvisation and narrative development in my paintings. First, I introduce the thesis and lay out its terms. In chapter 1 I set out the literature that informs the thesis, and in chapter 2 the methodologies I have drawn on in working out my own method. In chapter 3 self-reflection and reflexivity are discussed in relation to improvisation and narrative. In chapter 4 I examine how meaning is realised in relation to the surface of the painting. In chapter 5 the positioning of my studio practice within its wider contexts is examined in relation to painting as an intersubjective object, and in chapter 6 I look at continuity in my studio practice, proposing cloth as a metaphor for the work and as an articulation of development within individual paintings and within the practice. In chapter 7 I discuss the problem of finishing paintings.
This research has brought my painting into sharper focus, examining the relationship of painting to the improvisation of content. It has allowed me to re-examine elements of my practice that I had either taken for granted or overlooked, revealing historical parallels that would otherwise have remained invisible. It develops an understanding of the significance of narrative and improvisation in any creative practice, elucidating ideas about the self in creativity. In differentiating painting from other fine art practices and creative forms, it produces a powerful sense of the significance of the painting in making meaning. The research leads me to identify a painting as an intersubjective object, in which my own subjectivity and those of others meet and operate to generate and develop meaning. This theoretical construction can be employed in discussing other artworks as well as my own.
The muon content of atmospheric air showers and the mass composition of cosmic rays
Cosmic rays are messengers from outer space holding answers to some of the Universe's deepest mysteries: What are they exactly? Where do they come from? How do they accelerate to such energies, and what are the laws governing their interactions? At the highest energies, these rare particles can only be detected through the showers of secondary particles that they generate in their interactions with the Earth's atmosphere. Large ground-based observatories, like the Pierre Auger Observatory, detect these extensive air showers and attempt to reconstruct as much information about the primary cosmic ray as possible. In particular, the number of secondary muons is a key observable because it is directly related to the atomic mass number of the primary that generated them. Understanding the mass composition as a function of energy would shed light on various open questions strongly linked to the origin of cosmic rays. To directly detect muons, the Pierre Auger Observatory has dedicated scintillators buried underground: the Underground Muon Detector (UMD).
This thesis is devoted to the accurate determination of the muon content of air showers and to the study of its composition implications. We analyze direct muon measurements of air showers with energies between eV and eV from two experiments. At lower energies we extensively analyze UMD data, which we complement at higher energies with measurements from the Akeno Giant Air Shower Array (AGASA). For the UMD data, we develop new methods that significantly improve the estimation of the muon number. These methods also allow for the reconstruction of the muon signal as a function of time with an unprecedented time resolution, opening the door to the reconstruction of new composition-sensitive observables. As a direct application, we study the measured lateral distribution of muons and the models that attempt to describe it.
Furthermore, we analyze the mass-composition implications of the UMD and AGASA data. The composition interpretation of the data can only be inferred by comparison against air-shower simulations. We therefore simulate single-proton, single-iron, and mixed-composition scenarios based on the three newest-generation high-energy hadronic interaction models. To better compare the results against those of other experiments, we compute the so-called z-values, a scale of the muon content in data relative to that of proton and iron simulations. The combined results offer a picture consistent with other experiments: the unexpectedly heavy composition constitutes evidence of a muon deficit in air-shower simulations that increases with energy. These results can help improve the high-energy hadronic interaction models, which in turn would improve the precision of the inferred mass composition.
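The relative muon-content scale described above, commonly denoted z in muon-deficit studies, interpolates logarithmically between the proton and iron predictions. A minimal sketch of this definition (the exact normalization used in the thesis may differ, and the numbers below are purely illustrative):

```python
import math

def z_value(n_mu_data, n_mu_proton, n_mu_iron):
    """Relative muon-content scale: 0 if the data match proton simulations,
    1 if they match iron. z > 1 signals more muons in data than even iron
    simulations predict, i.e., a muon deficit in the simulations."""
    return ((math.log(n_mu_data) - math.log(n_mu_proton))
            / (math.log(n_mu_iron) - math.log(n_mu_proton)))

# Illustrative numbers only: a measured muon number between the simulated
# proton and iron predictions maps to a z-value between 0 and 1.
z = z_value(12.0, 10.0, 14.0)
```

Because the scale is defined relative to the same simulations for every experiment, z-values let measurements with different detectors and energies be placed on one common axis.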
Shape-based IMU/Camera Tightly Coupled Object-level SLAM using Rao-Blackwellized Particle Filtering
Simultaneous Localization and Mapping (SLAM) is a decades-old problem. Classical solutions rely on entities such as feature points that cannot facilitate interactions between a robot and its environment (e.g., grabbing objects). Recent advances in deep learning have made it possible to accurately detect objects in images under various illumination conditions and occlusions, leading to the emergence of object-level solutions to the SLAM problem. Current object-level methods depend on an initial solution from classical approaches and assume that errors are Gaussian. This research develops a standalone solution to object-level SLAM that integrates data from a monocular camera and an IMU (available in low-end devices) using a Rao-Blackwellized Particle Filter (RBPF). The RBPF does not assume a Gaussian error distribution and can therefore handle a variety of scenarios, such as encountering a symmetrical object with pose ambiguities. The developed method utilizes shape instead of texture, so texture-less objects can be incorporated into the solution. In the particle weighting process, a new method is developed that utilizes the Intersection over Union (IoU) of the observed and projected boundaries of the object and does not require point-to-point correspondence; thus, it is not prone to false data correspondences. Landmark initialization is another important challenge for object-level SLAM. In state-of-the-art delayed initialization, the trajectory estimate relies only on the motion model provided by IMU mechanization during initialization, leading to large errors. In this thesis, two novel undelayed initializations are developed: one relies only on a monocular camera and an IMU, and the other utilizes an ultrasonic rangefinder as well.
The developed object-level SLAM is tested using wheeled robots and handheld devices, and position errors of 4.1 to 13.1 cm (0.005 to 0.028 of the total path length) have been obtained through extensive experiments using only a single object. These experiments were conducted in different indoor environments under different conditions (e.g., illumination). Further, it is shown that undelayed initialization using an ultrasonic sensor can reduce the algorithm's runtime by half.
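The IoU-based particle weighting described above can be sketched with axis-aligned bounding boxes. This is a simplified illustration under invented assumptions: the thesis compares observed and projected object boundaries, not necessarily rectangles, and the floor value and box coordinates here are arbitrary.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x0, y0, x1, y1)."""
    x0 = max(box_a[0], box_b[0])
    y0 = max(box_a[1], box_b[1])
    x1 = min(box_a[2], box_b[2])
    y1 = min(box_a[3], box_b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def reweight(projected_boxes, observed_box, weights):
    """Weight each particle by the overlap between the object boundary
    projected from that particle's state and the observed boundary.
    No point-to-point correspondence is needed; a small floor keeps
    non-overlapping particles alive for resampling."""
    new = [w * max(iou(b, observed_box), 1e-9)
           for w, b in zip(weights, projected_boxes)]
    total = sum(new)
    return [w / total for w in new]
```

Because the weight depends only on region overlap, a particle whose pose hypothesis projects the object close to where it is observed gains weight, without ever matching individual boundary points.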
INSAM Journal of Contemporary Music, Art and Technology 10 (I/2023)
Having in mind the foundational idea not only of our Journal but also the INSAM Institute itself, the main theme of this issue is titled “Technological Aspects of Contemporary Artistic and Scientific Research”. This theme was recognized as important, timely, and necessary by a number of authors coming from various disciplines.
The (Inter)Views section brings us three diverse pieces; the issue opens with Aida Adžović’s interview with the legendary Slovene act Laibach regarding their performance of the Wir sind das Volk project at the Sarajevo National Theater on May 9, 2023. Following this, Marija Mitrović prepared an interview with media artist Leon Eckard concerning the artist’s views on contemporary art and the interaction between technology and human sensitivity. An essay by Alexander Liebermann on the early 20th-century composer Erwin Schulhoff, whose search for a unique personal voice can be encouraging in any period, closes this rubric.
The Main theme section contains seven scientific articles. In the first one, Filipa Magalhães, Inês Filipe, Mariana Silva and Henrique Carvalho explore the process and details of the technological and artistic challenges of reviving the music theater work FE...DE...RI...CO... (1987) by Constança Capdeville. The second article, written by Milan Milojković, is dedicated to the analysis of the historical composer Vojislav Vučković and his ChatGPT-generated doppelganger and opera; the fictional narrative woven around the actual historical figure serves as an example of the current possibilities of AI in the domain of musicological work. In the next paper, Luís Arandas, Miguel Carvalhais and Mick Grierson expand on their work on the film Irreplaceable Biography, which was created via language-guided generative models in audiovisual production. Thomas Moore focuses on the Belgium-based Nadar Ensemble and discusses the ways in which the performers of the ensemble understand the concept of the integrated concert and distinguish themselves from it, specifying the broadening of performers’ competencies and responsibilities. In her paper, Dana Papachristou contributes to the discussion on the politics of connectivity based on the examination of three projects: the online project Xenakis Networked Performance Marathon 2022, 2023Eleusis Mystery 91_Magnetic Dance in Elefsina European Capital of Culture, and the Spaces of Reflection offline PirateBox network at the 10th Berlin Biennale. The penultimate article in the section is written by Kenrick Ho and presents the author’s composition Flou for solo violin through the prism of the relationship between (historically present) algorithmic processes, the composer, and the performer. Finally, Rijad Kaniža adds to the critical discourse on the reshaping of the musical experience via technology and the understanding of said technology, using the example of musique concrète.
In the final Review section, Bakir Memišević gives an overview of the 13th International Symposium “Music in Society”, held in Sarajevo in December 2022.
Molecular clouds and stellar feedback: an investigation of synthetic line and continuum emission maps
Molecular clouds are complex systems, and the search for adequate observational measures to trace their evolution is still an open problem. In this thesis, we produce synthetic emission maps of the 12CO 1-0, 13CO 1-0, [CI] 1-0, and [CII] lines, as well as of the FIR continuum emission, to test to what extent these measurements can be used as tracers of the evolutionary stage of molecular clouds. We use numerical simulations of molecular clouds performed within the SILCC-Zoom project. These simulations include detailed stellar feedback due to ionizing radiation, external magnetic fields, and a chemical network evolved on-the-fly. We compare two different chemical networks, NL97 and NL99, and find that NL97, even though it does not include neutral carbon, more accurately reproduces the abundances of CO and C+; we therefore use NL97 in the rest of the work. We introduce a novel post-processing procedure for the C+ abundance using CLOUDY, essential in HII regions to account for the higher ionization states caused by stellar radiation. Furthermore, we show that assuming chemical equilibrium results in H and H2 being underestimated and overestimated, respectively, by up to a factor of 2; the abundances of C+ and CO are likewise underestimated and overestimated, respectively. This is reflected, and amplified, in the estimated CO and [CII] luminosities as well. We also investigate the capability of the L_CO/L_[CII] luminosity ratio to trace the H2 mass fraction in the clouds, but find no clear trend. We then investigate the [CII]/FIR ratio in HII regions and in entire clouds with stellar feedback. In young HII regions, the drop of the [CII]/FIR intensity ratio is mainly due to the strong FIR emission produced by hot, dense dust and the simultaneous saturation of the [CII] line. In more evolved HII regions, the second ionization of carbon is the main reason for the low [CII]/FIR ratio.
The evolution of this ratio is reflected in the evolution of the L_[CII]/L_FIR luminosity ratio of the entire clouds, which can be schematized in three phases. Overall, L_[CII]/L_FIR is well correlated with the total stellar luminosity L_*tot, and the relation between L_[CII]/L_FIR and L_*tot can be fitted with a power law. When L_*tot is large, i.e., in evolved clouds that have formed many massive stars, L_[CII]/L_FIR is particularly low, producing an observable [CII] deficit in these clouds. However, this relation breaks down when the total FIR luminosity starts decreasing as a consequence of the cloud dispersal caused by stellar feedback. The appearance of HII regions in molecular clouds strongly depends on the geometry of the cloud and on the line of sight. Indeed, a given HII region can show different properties when observed along different lines of sight, and apparent HII regions, which are in fact only the result of projection effects, can be observed.
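A power-law relation between a luminosity ratio and L_*tot, as described above, is a straight line in log-log space and can be recovered with a linear least-squares fit. The sketch below uses entirely invented data: the slope, normalization, scatter, and luminosity range are arbitrary and are not the thesis results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: total stellar luminosity L_*tot and the [CII]-to-FIR
# luminosity ratio for a set of simulated clouds (illustrative values only).
L_star = np.logspace(2.0, 6.0, 30)                             # [L_sun]
ratio_true = 0.02 * (L_star / 1e4) ** -0.5                     # assumed power law
ratio = ratio_true * 10 ** rng.normal(0.0, 0.05, L_star.size)  # log-normal scatter

# ratio = A * L_*tot**alpha  =>  log10(ratio) = alpha*log10(L_*tot) + log10(A),
# so an ordinary linear fit in log-log space recovers the exponent alpha.
alpha, logA = np.polyfit(np.log10(L_star), np.log10(ratio), 1)
```

Fitting in log space rather than directly in linear space keeps the relative (multiplicative) scatter weighted evenly across the several decades that L_*tot spans.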