Life After Stroke: Adapting and Overcoming
Presented on March 27, 2025 at 3:30 p.m. in the Krone Engineered Biosystems Building, CHOA Seminar Room. Runtime: 71:21 minutes. Discover the inspiring stories of stroke survivors who have adapted to life after their strokes. These panelists will discuss their paths to recovery, the obstacles they've overcome, and the ongoing journey of adaptation and resilience.
High-efficiency multi-drug functional profiling in a microfluidic device towards personalized predictions in pediatric leukemia
Acute lymphoblastic leukemia is the most common form of pediatric cancer, and while treatment outcomes have improved dramatically over the past several decades for these patients, this same progress has not been achieved for those who experience relapse or refractory disease, particularly in T-ALL. To further improve outcomes for these patients, new strategies need to be devised both to identify patients most at risk of relapse prior to treatment exposure and to optimally match them with emerging new therapeutic options. To that end, in this thesis, we describe the development of an efficient microfluidics-based assay for functional evaluation of candidate drug combinations in pediatric ALL. In Aim 1, we first describe the development and characterization of the system and demonstrate its capabilities using leukemia cell lines. Next, in Aim 2, we evaluate response to standard induction therapy across a set of diagnostic ALL patient samples and investigate the predictive value of the assay through correlation to clinical outcome metrics. Lastly, as a final proof of concept, we use the developed and validated system to evaluate an experimental treatment regimen in a set of patients who did not respond well to standard regimens. In sum, this work has provided a new methodology for higher-order combination drug screening in limited sample settings, investigated the predictive value of combination response profiling in the context of available clinical data, and evaluated an experimental drug combination across a set of samples from clinical non-responders. Ph.D. Biomedical Engineering
Automated Root Tracing Using Deep Learning
Roots play a crucial role in plant development by anchoring plants, absorbing nutrients, and maintaining soil structure. Understanding root structures and dynamics is vital for ecological research and assessing soil health. However, tracing roots in images obtained with a minirhizotron is a time-consuming task, and applying deep learning techniques can facilitate this process. This thesis applies the DeepLabV3+ model with a confidence-weighted approach to segment root structures in soil images. The methodology involves classifying images based on root visibility, cropping images to focus on root regions, and training the DeepLabV3+ model, which employs atrous convolutions and an Atrous Spatial Pyramid Pooling (ASPP) module to capture multi-scale contextual information. The confidence method modulates the loss function based on pixel confidence scores to handle ambiguous boundaries and low-resolution images; the confidence function decreases with distance from root boundaries and adapts to varying scales. This method was tested on multiple datasets from natural environments with varying soil types, including Mepibdeath, Ban Harol, Champenoux, and Hesse, which allowed for an assessment of the robustness and generalization ability of the tested models. Evaluated using metrics such as Cohen's kappa and R² for surface and length, the results show that the confidence-weighted approach improves segmentation quality by reducing false positives but may miss weakly expressed roots. Future work should focus on enhancing model robustness and improving training data quality to better handle complex root structures and environmental noise. M.S. Computer Science
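To make the confidence-weighting idea above concrete, here is a minimal sketch of a per-pixel confidence map that decays with distance from the annotated root boundary and modulates a standard cross-entropy loss. The exponential decay, the `scale` parameter, and the helper names are illustrative assumptions, not the thesis's exact formulation.

```python
# Illustrative sketch of a confidence-weighted segmentation loss.
# The decay shape and scale are assumptions, not the thesis's formulation.
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def confidence_map(mask: np.ndarray, scale: float = 5.0) -> np.ndarray:
    """Per-pixel confidence that decreases with distance from root boundaries.

    `mask` is a binary ground-truth root mask (H, W); `scale` controls how
    quickly confidence falls off (in pixels) and is a tunable assumption.
    """
    dist_inside = distance_transform_edt(mask)        # distance to background
    dist_outside = distance_transform_edt(1 - mask)   # distance to roots
    dist_to_boundary = np.where(mask > 0, dist_inside, dist_outside)
    # High confidence near root boundaries, decaying smoothly away from them.
    return np.exp(-dist_to_boundary / scale).astype(np.float32)

def confidence_weighted_loss(logits: torch.Tensor,
                             target: torch.Tensor,
                             conf: torch.Tensor) -> torch.Tensor:
    """Cross-entropy modulated by the per-pixel confidence map."""
    per_pixel = F.cross_entropy(logits, target, reduction="none")  # (N, H, W)
    return (conf * per_pixel).mean()

# Example usage with a dummy prediction and mask.
mask = np.zeros((64, 64), dtype=np.uint8); mask[20:40, 30:34] = 1
conf = torch.from_numpy(confidence_map(mask))
logits = torch.randn(1, 2, 64, 64)
target = torch.from_numpy(mask[None].astype(np.int64))
loss = confidence_weighted_loss(logits, target, conf)
```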
An immune-competent microvascularized human lung-on-chip device for studying immunopathologies of the lung
Severe influenza affects 3-5 million people worldwide each year, resulting in >300,000 deaths. Standard-of-care antiviral therapeutics have limited effectiveness in these patients, where infection severity is driven by an aberrant immune response. In severe influenza, the hyperactive immune system causes acute cytokine storm, cytopenia, and local tissue damage. Current preclinical models of severe influenza, in small animal models and in vitro, fail to accurately recapitulate the human immune response to severe viral infection. Here, we bioengineered a human lung tissue model that represents small airway structures with tissue-resident and circulatory immune cells. The immune-competent lung tissue model comprises a 3D, perfusable microvascular network underneath a mature, differentiated epithelium at an air-liquid interface.
With this model, we demonstrate that a conventional lung-on-chip (LOC) lacking immune cells induces a limited cytokine response to severe influenza infection; while a LOC with tissue-resident macrophages induces a significant response in the airway, the presence of both tissue-resident and circulatory immune cells was necessary to elicit a significant airway and interstitial cytokine storm. We demonstrate through extensive microscopy, secretome, and single-cell RNA sequencing analyses that severe flu infection results in significant lymphopenia, extracellular matrix remodeling, and transcriptional shutdown in fully immune-competent lung tissues. Lastly, we highlight the prominent role of stromal-immune interactions in the response to severe influenza infection, with stromal cells participating in both cytokine signaling and ECM remodeling. The introduction of both tissue-resident and circulatory immune cells into this lung-on-chip model allows for investigation into the distinct role of each immune cell type in the initiation and progression of influenza and may shed light on potential therapeutic avenues targeting immune dysregulation. Ph.D. Bioengineering
Hazard Detection and Avoidance for Autonomous Spacecraft Landing
As the feasibility of autonomous spacecraft landing is increasingly demonstrated by space agencies and private space companies, the demand for even more advanced capabilities grows, with the aim of achieving safe, global landing capabilities anytime and anywhere across the solar system, a milestone articulated by NASA's Precision Landing and Hazard Avoidance (PL&HA) program. A notable challenge in this endeavor is the need for real-time terrain mapping and processing, along with the appropriate guidance, navigation, and control (GNC) technologies to perform autonomous hazard detection and avoidance (AHDA).
This dissertation presents novel mathematical formulations and methods for the design and development of next-generation AHDA technology. Specifically, this study focuses on terrain mapping and processing algorithms for onboard hazard detection (HD) and the guidance problem under safe landing site uncertainty. We begin by improving the state-of-the-art (SOTA) model-based HD algorithm by applying deep learning techniques. Given the safety-critical application of planetary landings, an uncertainty-aware learning-based HD algorithm is developed for improved reliability. Next, in Chapters 3 and 4, we address the landing safety evaluation under topographic uncertainty. Topographic uncertainty is often inevitable due to the hardware limitations of sensing instruments and challenging operational conditions, but in-depth studies and effective algorithmic solutions remain limited. Chapter 3 investigates the uncertainty quantification of landing safety under topographic uncertainty and proposes a semi-analytical evaluation method. In Chapter 4, novel real-time stochastic terrain mapping and processing algorithms are developed. We develop a real-time Gaussian digital elevation map (DEM) construction algorithm and a real-time stochastic HD algorithm compatible with the Gaussian DEM input. The novel stochastic terrain mapping and processing algorithm is shown to outperform the SOTA algorithm in terms of both prediction reliability and computational cost.
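As a rough illustration of per-cell landing-safety evaluation under topographic uncertainty, the sketch below treats each Gaussian DEM cell as an independent normal variable and computes the probability that the local slope stays within a threshold. The independence assumption, the slope test, and the parameter values are illustrative only and are not the dissertation's semi-analytical method.

```python
# Illustrative sketch: probabilistic slope check on a Gaussian DEM.
# Independent-cell Gaussians and the slope/threshold choices are assumptions.
import numpy as np
from scipy.stats import norm

def prob_safe_slope(mean_dem: np.ndarray,
                    var_dem: np.ndarray,
                    cell_size: float,
                    max_slope_deg: float = 10.0) -> np.ndarray:
    """P(|local x-slope| < threshold) for each pair of adjacent cells."""
    # Finite-difference slope between horizontally adjacent cells.
    dz_mean = (mean_dem[:, 1:] - mean_dem[:, :-1]) / cell_size
    # Variances of independent Gaussian elevations add under subtraction.
    dz_std = np.sqrt(var_dem[:, 1:] + var_dem[:, :-1]) / cell_size
    limit = np.tan(np.radians(max_slope_deg))
    # P(-limit < slope < limit) for a Gaussian slope estimate.
    return norm.cdf(limit, dz_mean, dz_std) - norm.cdf(-limit, dz_mean, dz_std)

# Example: a 100 x 100 map with 0.5 m cells and 5 cm elevation sigma.
mean_dem = np.random.default_rng(0).normal(0.0, 0.2, (100, 100))
safe_prob = prob_safe_slope(mean_dem, np.full((100, 100), 0.05**2), 0.5)
```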
Finally, Chapter 5 formulates the hazard detection and avoidance (HDA) guidance problem and presents novel solution approaches. The HDA guidance problem is unique in that the objective is to maximize the probability of a safe landing, with the major uncertainty being the landing safety over the terrain. This dissertation presents several illustrative examples as proof of concept to demonstrate the value of the proposed work. Ph.D. Aerospace Engineering
Atomistic Modeling of Dislocation Plasticity in Metals and Alloys
Dislocations play a crucial role in the plasticity of crystalline solids. Recent studies on dislocation processes in compositionally complex alloys, also called high-entropy alloys, have sparked interest in quantitatively determining energy barriers of dislocation movements in metals and alloys.
This thesis focuses on the robust and efficient quantification of energy barriers to dislocation motion in metals and alloys. The work combines advanced computational methods, such as the nudged elastic band (NEB) method and molecular dynamics (MD) simulations, to investigate rate-controlling mechanisms at the atomic scale.
The research begins with the development of an NEB-based approach to calculate Peierls barriers for dislocation glide in face-centered cubic (FCC) nickel, identifying how these barriers decrease with increasing shear stress. The study then expands to model dislocation-obstacle interactions, exploring mechanisms such as vacancy cluster cutting and cross-slip, using stress-controlled and strain-controlled simulations to reveal activation energies and rate-limiting processes.
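For readers unfamiliar with the NEB method mentioned above, the following toy sketch relaxes a band of images on a simple two-dimensional double-well potential using spring forces along the path and the true force perpendicular to it. The potential, spring constant, and step size are arbitrary illustrative choices unrelated to the atomistic systems studied in the thesis.

```python
# Toy nudged elastic band (NEB) relaxation on a 2D double-well potential.
# Purely illustrative; all parameters are assumptions, not thesis values.
import numpy as np

def potential(p):                       # toy double-well in x, harmonic in y
    x, y = p
    return (x**2 - 1.0)**2 + 0.5 * y**2

def gradient(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), y])

def neb(start, end, n_images=11, k=5.0, step=0.01, iters=2000):
    band = np.linspace(start, end, n_images)   # straight-line initial guess
    band[1:-1, 1] += 0.3                       # perturb so relaxation is visible
    for _ in range(iters):
        for i in range(1, n_images - 1):
            tau = band[i + 1] - band[i - 1]
            tau /= np.linalg.norm(tau)                         # local tangent
            g = gradient(band[i])
            g_perp = g - np.dot(g, tau) * tau                  # true force ⟂ path
            spring = k * (np.linalg.norm(band[i + 1] - band[i])
                          - np.linalg.norm(band[i] - band[i - 1]))
            band[i] -= step * (g_perp - spring * tau)          # NEB force update
    return band

band = neb(np.array([-1.0, 0.0]), np.array([1.0, 0.0]))
barrier = max(potential(p) for p in band) - potential(band[0])
print(f"estimated barrier on the toy surface: {barrier:.3f}")
```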
In addition to atomistic modeling, the thesis presents a statistical analysis of short-range order (SRO) and short-range clustering (SRC) in binary and ternary alloy systems. This analysis demonstrates how alloy structures, such as NiCr and NiCrCo, exhibit SRO and SRC, which in turn affect dislocation glide and alloy strength.
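A common way to quantify SRO of the kind discussed above is the Warren-Cowley parameter, alpha = 1 - P(j neighbor | i site) / c_j, which is zero for a random solution, negative for ordering, and positive for clustering. The toy sketch below evaluates it on an invented one-dimensional binary chain; the lattice, neighbor shell, and composition are assumptions for illustration only and are unrelated to the NiCr/NiCrCo systems in the thesis.

```python
# Illustrative Warren-Cowley short-range order parameter on a toy 1D ring.
import numpy as np

def warren_cowley(occ: np.ndarray, i_type: int, j_type: int) -> float:
    """alpha = 1 - P(j neighbor | i site) / c_j for first neighbors on a ring."""
    c_j = np.mean(occ == j_type)
    i_sites = np.where(occ == i_type)[0]
    neighbors = np.concatenate([(i_sites + 1) % occ.size,
                                (i_sites - 1) % occ.size])
    p_j_given_i = np.mean(occ[neighbors] == j_type)
    return 1.0 - p_j_given_i / c_j

rng = np.random.default_rng(0)
occ = rng.choice([0, 1], size=10_000, p=[0.7, 0.3])   # random 70/30 A-B chain
print(warren_cowley(occ, i_type=0, j_type=1))          # ~0 for a random solution
```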
Further, the thesis evaluates dislocation glide barriers in A600 alloys, providing detailed insights into the mechanical behavior of these complex alloys under shear loads. Finally, MD simulations of dislocation mobility in nickel and A600 reveal how mobility and threshold stress decrease with increasing temperature, highlighting the thermal effects on alloy performance.
To conclude, this thesis has developed and applied advanced computational techniques to quantify the energy barriers and dislocation mobility in Ni and Ni-based alloys. These results offer a mechanistic understanding of dislocation motion and interaction with defects, and also provide quantitative input and mechanistic support for dislocation dynamics and crystal plasticity modeling. This work establishes a solid foundation for future research into the rate-controlling mechanisms of dislocation motion in metals and alloys, contributing to the broader field of materials science. Ph.D. Mechanical Engineering
Topology optimization with natural frequency and structural stability criteria using eigenvector aggregates
Topology optimization is a powerful tool for designing lightweight, high-performance structures by optimizing material distribution to meet specific performance objectives. One of the most challenging applications of topology optimization involves ensuring structural stability and achieving the desired natural frequencies of the design by formulating problems as generalized eigenvalue problems. Eigenvectors provide essential information on the mode shapes of a structure, offering insights into tailoring its behavior to specific requirements. However, imposing eigenvector constraints in topology optimization is challenging due to the non-differentiability of repeated eigenvalues, the complexity of balancing competing objectives, and the high computational cost of calculating eigenvector derivatives. This thesis addresses these challenges by introducing an innovative eigenvector aggregation approach to handle eigenvector constraints in topology optimization. It presents a comprehensive study of the eigenvector aggregate, including its applications in natural frequency and buckling optimization, as well as its ability to handle repeated eigenvalues. Additionally, this work presents efficient methods for computing eigenvector-based derivatives and validates these methods on thermal, natural frequency, and buckling optimization problems. Furthermore, this thesis investigates nonlinear initial post-buckling problems and introduces efficient optimization criteria based on Koiter asymptotic theory for post-buckling performance optimization. It presents a novel two-layer adjoint-based sensitivity analysis for Koiter-based optimization, significantly reducing the computational cost. Ph.D. Aerospace Engineering
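One way to read the eigenvector-aggregate idea is as a smoothly weighted combination of a quadratic eigenvector measure over the lowest modes, which remains well behaved when eigenvalues are repeated because coalescing modes receive near-equal weights. The sketch below uses an exponential softmax weighting; this specific form, the smoothness parameter `rho`, and the selector matrix `D` are assumptions and not necessarily the thesis's exact definition.

```python
# Illustrative eigenvector aggregate for a small generalized eigenproblem
# K q = lambda M q.  The softmax weighting below is an assumed form.
import numpy as np
from scipy.linalg import eigh

def eigenvector_aggregate(K, M, D, rho=1.0, n_modes=6):
    """Softmax-weighted quadratic measure q_i^T D q_i over the lowest modes.

    Modes with (near-)equal eigenvalues receive (near-)equal weights, so the
    aggregate varies smoothly even when eigenvalues are repeated.
    """
    lam, Q = eigh(K, M, subset_by_index=[0, n_modes - 1])
    eta = np.exp(-rho * (lam - lam[0]))
    eta /= eta.sum()
    return sum(eta[i] * Q[:, i] @ D @ Q[:, i] for i in range(n_modes))

# Tiny example: random SPD stiffness, identity mass, selector of a few DOFs.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 20))
K = A @ A.T + 20 * np.eye(20)
M = np.eye(20)
D = np.diag((np.arange(20) < 5).astype(float))   # measure motion of DOFs 0-4
print(eigenvector_aggregate(K, M, D))
```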
Physics-based Modeling of Emerging Ferroelectric Devices and Performance Benchmarking of Memory Circuits
A comprehensive framework for the performance analysis of ferroelectric-based memory systems, spanning device modeling to system-level analysis, is proposed. Initially, a computationally efficient phase-field physics-based compact model for ferroelectric capacitors is developed. The model self-consistently solves the time-dependent Landau-Ginzburg and Poisson's equations to capture polarization dynamics. Analytical equations for the time-dependent kinetic coefficient and voltage-dependent gradient energy coefficient are derived, which are crucial for accurately modeling the transient characteristics of ferroelectric capacitors. This framework is then extended to mixed-phase capacitors combining ferroelectric, antiferroelectric, and dielectric phases, based on Kittel's two-sublattice theory. The extension allows the model to capture endurance effects due to phase evolution during cycling and the effect of the depolarization field due to the presence of dielectric phases. The developed models are calibrated with experimental results for low-switching-voltage ferroelectric materials for circuit-level analysis. A detailed analysis is conducted on ferroelectric random access memory (FERAM) circuit arrays, examining the impact of various design parameters. The performance of these memory arrays is compared to that of other competing memory technologies, particularly magnetic memories, in terms of read/write latency and energy consumption. Finally, the dissertation discusses a framework for system-level analysis under real workloads, exploring the potential of using FERAM as main memory. Ph.D. Electrical and Computer Engineering
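The polarization dynamics mentioned above can be illustrated with a single-domain Landau-Khalatnikov sketch, in which dP/dt is proportional to the negative derivative of a Landau free energy plus the applied field. The coefficients, kinetic constant, and drive waveform below are arbitrary illustrative values; the dissertation's compact model additionally includes gradient-energy terms and a self-consistent Poisson solution.

```python
# Single-domain Landau-Khalatnikov sketch: dP/dt = -Gamma * dF/dP with
# F = alpha*P^2 + beta*P^4 - E*P.  All values are illustrative assumptions.
import numpy as np

alpha, beta = -1.0e9, 5.0e9      # Landau coefficients (assumed units)
gamma_k = 1.0                    # kinetic coefficient (assumed)
dt, steps = 1.0e-12, 100_000     # 100 ns of simulated time

P, loop = 0.01, []
for n in range(steps):
    t = n * dt
    E = 3.0e8 * np.sin(2 * np.pi * 1.0e7 * t)      # one full field cycle
    dF_dP = 2 * alpha * P + 4 * beta * P**3 - E     # slope of the Landau well
    P -= gamma_k * dF_dP * dt                       # explicit Euler LK step
    loop.append((E, P))                             # traces a hysteresis loop
```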
Capability-Aware Shared Hypernetworks for Heterogeneous Multi-Agent Coordination
Cooperative heterogeneous multi-agent tasks require agents to behave in a flexible and complementary manner that best leverages their diverse capabilities. Learning-based approaches to this challenge span a spectrum between two endpoints: i) shared-parameter methods, which assign an ID to each agent to encode agent-specific behaviors within a single architecture for sample efficiency, but are limited in the behavioral diversity they can express; and ii) independent methods, which learn a separate policy for each agent, enabling greater diversity at the cost of sample- and parameter-efficiency. Prior work on learning for heterogeneous multi-agent teams has explored the middle ground of this spectrum by learning shared-parameter or independent policies for classes of agents, allowing for a compromise between diversity and efficiency. However, these approaches still do not reason over the impact of agent capabilities on behavior, and thus cannot generalize to unseen agents or team compositions.
In this work, we aim to enable flexible and heterogeneous coordination without sacrificing diversity, sample efficiency, or generalization to unseen agents and teams. First, inspired by work on trait-based heterogeneous task allocation, we explore how capability awareness enables generalization to unseen agents and teams. We thoroughly evaluate our GNN-based capability-aware policy architecture, showing that it generalizes more effectively than existing approaches.
Then, inspired by recent work in transfer learning and meta-RL, we propose Capability-Aware Shared Hypernetworks (CASH), a new soft weight-sharing architecture for heterogeneous coordination that uses hypernetworks to explicitly reason about continuous agent capabilities in addition to local observations. Intuitively, CASH allows the team to learn shared decision-making strategies (captured by a shared encoder) that are readily adapted according to the team's individual and collective capabilities (by a shared hypernetwork). Our design is agnostic to the underlying learning paradigm. We conducted detailed experiments across two heterogeneous coordination tasks and three standard learning paradigms (imitation learning, value-based reinforcement learning, and policy-gradient reinforcement learning). Results reveal that CASH generates appropriately diverse behaviors that consistently outperform baseline architectures in terms of task performance and sample efficiency during both training and zero-shot generalization. Notably, CASH provides these improvements with only 20% to 40% of the learnable parameters used by the baselines. M.S. Computer Science
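A minimal sketch of the hypernetwork idea behind CASH is shown below: a shared encoder processes local observations while a shared hypernetwork maps an agent's capability vector to the weights of a small per-agent adapter that produces action logits. The layer sizes, module names, and single-linear-adapter design are illustrative assumptions rather than the architecture reported in the thesis.

```python
# Minimal hypernetwork-style policy sketch: a shared hypernetwork generates
# per-agent adapter weights from a capability vector.  Sizes are assumptions.
import torch
import torch.nn as nn

class CapabilityConditionedPolicy(nn.Module):
    def __init__(self, obs_dim, cap_dim, hidden, n_actions):
        super().__init__()
        # Shared observation encoder (common decision-making strategy).
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Shared hypernetwork: capability vector -> adapter weights and bias.
        self.hyper = nn.Linear(cap_dim, hidden * n_actions + n_actions)
        self.hidden, self.n_actions = hidden, n_actions

    def forward(self, obs, capabilities):
        h = self.encoder(obs)                                   # (B, hidden)
        params = self.hyper(capabilities)                       # (B, h*a + a)
        W = params[:, : self.hidden * self.n_actions]
        W = W.view(-1, self.n_actions, self.hidden)             # (B, a, hidden)
        b = params[:, self.hidden * self.n_actions:]            # (B, a)
        # Per-agent adapter generated by the hypernetwork.
        return torch.bmm(W, h.unsqueeze(-1)).squeeze(-1) + b    # action logits

policy = CapabilityConditionedPolicy(obs_dim=16, cap_dim=4, hidden=32, n_actions=5)
logits = policy(torch.randn(8, 16), torch.rand(8, 4))
```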
Towards a Methodology for the Definition and Evaluation of Enterprises through Modeling and Simulation: Application to Product Development
Enterprises today have to navigate a rapidly evolving landscape driven by technological advancements, intensifying global competition, and shifting market dynamics. To remain competitive, organizations must adopt agile and digital solutions that enhance operational efficiency, optimize processes, and accelerate time-to-market. This research introduces a comprehensive methodology designed to evaluate enterprise performance across a variety of scenarios, with a specific focus on product development processes. The proposed methodology rests on three pillars: modeling, simulation, and their integration. Leveraging Model-based Systems Engineering (MBSE) and the Unified Architecture Framework (UAF), the methodology provides a structured approach to model organizational elements such as processes, resources (tools, personnel), and goals, establishing a cohesive enterprise architecture. A holistic agent-based simulation facilitates a granular analysis of enterprise operations. The simulation captures dynamic interactions and emergent behaviors, enabling quantitative evaluations of the individual impacts of digital tools, processes, and human resources on organizational outcomes. Seamless integration between the modeling and simulation environments is ensured through the use of a centralized Authoritative Source of Truth (ASOT), maintaining consistency and traceability. This end-to-end approach enables scenario-based assessments and quantitative evaluations of alternative configurations, addressing the limitations of traditional enterprise architecture (EA) frameworks by capturing dynamic interactions and emergent behaviors. The methodology therefore lays the groundwork for the further development of Digital Twins of Organizations (DTOs), facilitating strategic decision-making and optimization in complex enterprise environments.
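As a toy illustration of the kind of enterprise simulation described above, the sketch below models a simple product development flow in which design tasks compete for a limited pool of engineers and a shared simulation-tool licence. It uses SimPy's process/resource abstractions rather than the agent-based environment employed in the research, and all durations, resource counts, and names are invented for illustration.

```python
# Toy process/resource simulation of a product development flow (SimPy);
# durations and capacities are invented, not drawn from the research.
import random
import simpy

def design_task(env, name, engineers, sim_tool, log):
    with engineers.request() as eng:               # wait for an available engineer
        yield eng
        yield env.timeout(random.uniform(2, 5))    # concept design work
        with sim_tool.request() as tool:           # wait for the shared tool licence
            yield tool
            yield env.timeout(random.uniform(1, 3))  # simulation and analysis
    log.append((name, env.now))                    # record task completion time

env = simpy.Environment()
engineers = simpy.Resource(env, capacity=3)
sim_tool = simpy.Resource(env, capacity=1)
log = []
for i in range(10):
    env.process(design_task(env, f"task-{i}", engineers, sim_tool, log))
env.run()
print(f"All tasks done at t = {env.now:.1f}; completions: {log}")
```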