Report to the President for year ended June 30, 2025, Singapore-MIT Alliance for Research and Technology (SMART)
This report contains the following sections: Introduction, Research Highlights, Innovation and Entrepreneurship, and Outreach and Engagement.
Properties of Low-T_C AlMn TES
Low-T_C AlMn transition-edge sensors (TESs) have been developed as sensitive thermometers for the Q-Array, which will use superconducting targets to measure the coherent elastic neutrino-nucleus scattering spectrum in the RICOCHET experiment. The TESs are made of manganese-doped aluminum with a titanium and gold antioxidation layer. A prototype TES thermometer consists of two TESs in parallel, an input gold pad in metallic contact with the TESs, an output gold pad, and gold thermal-link meanders, each designed to control the flow of heat through the TESs. We have fabricated and measured low-T_C AlMn TES chips with and without thermal flow control structures. We present T_C measurements of the TESs after the initial fabrication and after further T_C tuning by re-heating, and summarize the thermal property studies of the prototype TES thermometer by measuring I-V curves and complex impedance.
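For context, a transition temperature like T_C is commonly extracted by fitting the measured resistance-temperature curve through the superconducting transition. Below is a minimal sketch of such a fit, assuming a generic logistic transition shape and synthetic data in place of a real cryostat readout; it is an illustration, not code from the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def transition(T, Tc, width, Rn):
        # Logistic model of a superconducting transition: resistance rises
        # from zero to the normal-state resistance Rn around Tc.
        return Rn / (1.0 + np.exp(-(T - Tc) / width))

    # Synthetic R(T) sweep standing in for measured data (temperatures in kelvin).
    T = np.linspace(0.010, 0.030, 200)
    R = transition(T, Tc=0.020, width=0.0005, Rn=0.1)
    R += np.random.normal(0.0, 1e-4, T.size)  # measurement noise

    popt, _ = curve_fit(transition, T, R, p0=[0.02, 0.001, 0.1])
    print(f"fitted Tc = {popt[0] * 1e3:.2f} mK")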
Against Character Constraints
This paper defends the following principle: For any visually perceptible set of objects and any visual phenomenal character, there could be a veridical perception of exactly those objects with that character. This principle is rejected by almost all contemporary theories of perception, yet rarely addressed directly. Many have taken the apparent inconceivability of a certain sort of ‘shape inversion’, as compared to the more plausible, frequently discussed ‘colour inversion’, as evidence that the spatial characters of our perceptions are uniquely suited to and/or revelatory of the structure of their objects, such that alleged perceptions of those objects that differed radically in spatial character could not be veridical. I argue that these conclusions are unjustified: I claim that the difficulty involved in constructing coherent ‘shape inversion’ scenarios is attributable to the complex relations among visual and tactile shape experiences, as opposed to relations between shape experiences and worldly shape properties.
Electrostatic adsorption of polyanions onto lipid nanoparticles controls uptake, trafficking, and transfection of RNA and DNA therapies
Rapid advances in nucleic acid therapies highlight the immense therapeutic potential of genetic therapeutics. Lipid nanoparticles (LNPs) are highly potent nonviral transfection agents that can encapsulate and deliver various nucleic acid therapeutics, including but not limited to messenger RNA (mRNA), small interfering RNA (siRNA), and plasmid DNA (pDNA). However, a major challenge of targeted LNP-mediated systemic delivery is the nanoparticles’ nonspecific uptake by the liver and the mononuclear phagocytic system, due partly to the adsorption of endogenous serum proteins onto LNP surfaces. Tunable LNP surface chemistries may enable efficacious delivery across a range of organs and cell types. Here, we describe a method to electrostatically adsorb bioactive polyelectrolytes onto LNPs to create layered LNPs (LLNPs). LNP cores varying in nucleic acid cargo and component lipids were stably layered with four biologically relevant polyanions: hyaluronate (HA), poly-L-aspartate (PLD), poly-L-glutamate (PLE), and polyacrylate (PAA). We further investigated the impact of the four surface polyanions on the transfection and uptake of mRNA- and pDNA-loaded LNPs in cell cultures. PLD- and PLE-LLNPs increased mRNA transfection twofold over unlayered LNPs in immune cells. HA-LLNPs increased pDNA transfection rates by more than twofold in epithelial and immune cells. In a healthy C57BL/6 murine model, PLE- and HA-LLNPs increased transfection by 1.8- to 2.5-fold over unlayered LNPs in the liver and spleen. These results suggest that layer-by-layer (LbL) assembly is a generalizable, highly tunable platform for modifying the targeting specificity, stability, and transfection efficacy of LNPs, as well as for incorporating other charged targeting and therapeutic molecules into these systems.
Algorithmic Advances for Fair and Efficient Decision-Making in Online Platforms
Modern online platforms—such as recommendation systems, advertising markets, and e-commerce sites—operate in dynamic and complex environments where efficient algorithmic decision-making is essential. These platforms must continuously adapt to rapidly changing user behaviors, market fluctuations, and data uncertainties while optimizing for both learning efficacy and revenue generation. However, focusing solely on performance can lead to biased outcomes and inequitable treatment of users and items, raising concerns about fairness. Balancing efficiency and fairness is therefore crucial for sustainable platform growth. In this thesis, we tackle these challenges by developing novel algorithmic frameworks and methods that integrate fairness considerations with robust learning and optimization techniques. We explore these problems from three distinct perspectives, each contributing to improved decision quality and fairness in online decision-making.
In Chapter 2, we first focus on the topic of efficiency, by addressing the challenge of performing online learning in a highly non-stationary environment. User behaviors and preferences often change over time, making it difficult for traditional algorithms to maintain good performance. This issue is particularly prevalent in real-world applications such as recommendation systems and advertising platforms, where shifts in user dynamics can undermine decision-making efficacy. To tackle this, we propose a novel algorithm for the widely adopted multi-armed bandit framework that enables platforms to adaptively learn in a fast-changing environment characterized by auto-regressive temporal dependencies.
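As a point of reference for this chapter's setting: a standard baseline for non-stationary bandits is the sliding-window UCB, which bases its estimates only on recent observations so they can track a drifting environment. A minimal sketch of that baseline follows (hypothetical code, not the thesis's auto-regressive algorithm).

    import math
    import random
    from collections import deque

    class SlidingWindowUCB:
        # UCB computed over a fixed-length window of recent (arm, reward)
        # pairs, so estimates adapt as the environment drifts.
        def __init__(self, n_arms, window=500, c=2.0):
            self.n_arms, self.c = n_arms, c
            self.history = deque(maxlen=window)

        def select(self, t):
            sums = [0.0] * self.n_arms
            counts = [0] * self.n_arms
            for arm, r in self.history:
                sums[arm] += r
                counts[arm] += 1
            def ucb(a):
                if counts[a] == 0:
                    return math.inf  # force exploration of unseen arms
                bonus = math.sqrt(self.c * math.log(t + 1) / counts[a])
                return sums[a] / counts[a] + bonus
            return max(range(self.n_arms), key=ucb)

        def update(self, arm, reward):
            self.history.append((arm, reward))

    # Toy run against a slowly drifting reward landscape.
    bandit = SlidingWindowUCB(n_arms=3)
    for t in range(2000):
        arm = bandit.select(t)
        drift = 0.3 * math.sin(t / 300.0)
        reward = random.gauss(0.5 + (drift if arm == 0 else 0.0), 0.1)
        bandit.update(arm, reward)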
In Chapter 3, we shift our focus to the realm of fairness and explore how fairness considerations can be effectively integrated into the context of assortment planning. As algorithmic recommendations become integral to platform operations, a purely revenue-driven approach can result in highly imbalanced outcomes, leading to certain items receiving minimal exposure and exiting the platform in the long run. To address this, we develop a combinatorial optimization framework that incorporates fairness constraints, ensuring equitable exposure and opportunities for all items on the platform. We design a series of polynomial-time approximation algorithms to solve the fair assortment problem. Through numerical studies on both synthetic data and real-world MovieLens data, we showcase the effectiveness of our algorithms and provide insights into the platform's price of fairness.
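The ‘price of fairness’ mentioned above has a standard definition in this literature: with R* the platform's optimal revenue without fairness constraints and R^fair the optimal revenue under them (generic notation, which may differ from the thesis's),

    \mathrm{PoF} = \frac{R^{*} - R^{\mathrm{fair}}}{R^{*}}

so PoF measures the fraction of revenue the platform gives up to guarantee equitable exposure.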
In Chapter 4, we bridge the topics of fairness and learning efficiency by examining how to achieve multi-stakeholder fairness in a multi-sided recommendation system. Here, the challenge is multifaceted, including ensuring high platform revenue, maintaining fair outcomes for diverse stakeholders, and enabling robust learning amidst data uncertainty. We propose a novel optimization framework that maximizes platform revenue while enforcing fairness constraints for both items and users, accommodating various fairness notions and outcome metrics. Building on this, we introduce a low-regret online learning and optimization algorithm that dynamically balances learning and fairness—two objectives that are often at odds. Finally, we demonstrate the efficacy of our approach via a real-world case study on Amazon review data and offer actionable guidelines for implementing fair policies in practice.
Half-Space Intersection Properties for Minimal Hypersurfaces
We prove “half-space” intersection properties in three settings: the hemisphere, half-geodesic balls in space forms, and certain subsets of Gaussian space. For instance, any two embedded minimal hypersurfaces in the sphere must intersect in every closed hemisphere. Two approaches are developed: one using classifications of stable minimal hypersurfaces, and the second using conformal change and comparison geometry for α-Bakry-Émery-Ricci curvature. Our methods yield the analogous intersection properties for free boundary minimal hypersurfaces in space form balls, even when the interior or boundary curvature may be negative. Finally, Colding and Minicozzi recently showed that any two embedded shrinkers of dimension n must intersect in a large enough Euclidean ball of radius R(n). We show that R(n) ≤ 2n.
Report to the President for year ended June 30, 2025, MIT-IBM Watson AI Lab
This report contains the following sections: Goals and Priorities, Industry Research Collaborations, Selected Research Overview, Student and Young Researcher Engagement, Community Outreach and Events, Communications, and Administration and Governance.
Exploring Learning Engineering Design Decision Tracking: Emergent Themes from Practitioners’ Work
This paper examines design decisions that were written down and enacted by learning design practitioners across 18 projects at a postsecondary institution. Through emergent coding of decisions recorded in a Learning Engineering Evidence and Decision (LEED) tracker in situ, this research answers three questions: (1) how do practitioners track and cite sources of influence on design decisions; (2) how do practitioners communicate, revisit, and iterate these decisions throughout cycles of their design; and (3) when revisions were made to decisions, what sources of influence led to these changes? Findings indicate that practitioners record new and revised decisions while also tracking influences on these decisions that stem from their own experiences and from the specific project context. This work contributes to the support of learning design practitioners by offering a tool to capture thinking and reasoning in complex contexts, while offering researchers a way to collect evidence of this decision-making.
Aligning Machine Learning and Robust Decision-Making
Machine learning (ML) has become increasingly ubiquitous across applications worldwide, ranging from supply chain management to personalized pricing and recommendations. These predictive models are often used as tools to inform operations, with the potential to revolutionize decision-making. The key question this thesis aims to address is: How can we make ML methods aware of their downstream impact on the full decision-making process? To this end, we focus on developing methods that align AI with real-world objectives in order to build efficient, safe, and robust systems.
This thesis is split into three chapters, each focusing on a different aspect of this problem. In Chapter I, we address the heavy computational cost of existing methods. We present a meta-optimization machine learning framework that learns fast approximations to general convex problems. We further apply this within an end-to-end learning framework that trains ML models with an optimization-based loss function to minimize the decision cost directly. This meta-optimization approach allows us to tackle problem sizes that were intractable for approaches in the previous literature. Furthermore, this work establishes analytically that the learning approach guarantees fast convergence to near-optimal solutions. We show that the proposed approach consistently scales better in runtime as problem size increases, running 2 to 10 times faster on various problems while retaining nearly the same accuracy.
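To make the end-to-end objective concrete, here is a minimal PyTorch-style sketch for a newsvendor-type problem, in which the model is trained directly on the realized decision cost rather than on prediction error. This is an illustration of the general idea under simplifying assumptions (a closed-form decision rule and synthetic data), not the thesis's meta-optimization framework.

    import torch

    # Newsvendor costs: underage (lost sales) vs. overage (holding).
    # Minimizing the cost below drives the model toward the
    # cu / (cu + co) quantile of demand, here 0.8.
    cu, co = 4.0, 1.0

    model = torch.nn.Linear(5, 1)  # predicts demand from 5 features
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    def decision_cost(order, demand):
        # Cost incurred by an order quantity once demand is realized.
        return cu * torch.relu(demand - order) + co * torch.relu(order - demand)

    X = torch.randn(256, 5)                      # synthetic features
    d = (X @ torch.ones(5) + 10.0).clamp(min=0)  # synthetic demand

    for _ in range(200):
        order = model(X).squeeze(-1)  # prediction used directly as the decision
        loss = decision_cost(order, d).mean()  # train on decision cost, not MSE
        opt.zero_grad()
        loss.backward()
        opt.step()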
In Chapter II, we focus on the robustness problem: making decisions that protect against worst-case scenarios as well as against noise in the data. Traditional robust optimization methods tackle this issue by creating uncertainty sets for each observation, aiming to minimize costs in worst-case scenarios. However, these methods assume the worst-case scenario happens at every observation, which can be too pessimistic. We propose a new approach that avoids constructing uncertainty sets and instead links uncertainties across the entire feature space. This allows for robust decision-making without assuming worst-case scenarios at every observation. Our approach integrates robustness with a concept of learning stability, proving that algorithms with a stability property inherently produce robust solutions without explicitly solving the robust optimization problem. Finally, this chapter tests the framework on a variety of problems, including portfolio optimization using historical stock data and inventory allocation and electricity generation using real-world data, showing significant improvements in robustness and competitive average error relative to the existing literature.
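For reference, the classical per-observation robust counterpart that this chapter's approach avoids solving can be written, in generic notation (θ the decision parameters, ℓ the per-observation cost, and U_i an uncertainty set around observation x_i), as

    \min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \max_{u_i \in \mathcal{U}_i} \ell\bigl(\theta;\, x_i + u_i\bigr)

The stability result replaces the inner maximization: if the learning algorithm is suitably stable, its solutions are certified robust without this min-max ever being solved explicitly.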
Finally, in Chapter III, we consider the endogenous setting, in which the decisions we take affect outcomes, as in pricing and assortment optimization, where the chosen price affects demand. In the end-to-end spirit, this research introduces an approach to jointly predict and optimize in this setting, learning a prediction aligned with expected cost. We further introduce a robust optimization decision-making method that accounts for uncertainty in ML models, specifically by constructing uncertainty sets over the space of ML models and optimizing actions to protect against worst-case predictions. We prove guarantees that our method captures near-optimal decisions with high probability as a function of the data. We also bring a new class of two-stage stochastic optimization problems into the end-to-end learning framework: here, the first stage is an information-gathering problem that decides which random variable to "poll" and gain information about before making a second-stage decision based on it. We present several computational experiments on pricing and inventory assortment/recommendation problems, comparing against existing methods in bandits and offline reinforcement learning and showing consistently improved performance.
Single-Phase Heat Transfer Effects of Mixing Vane Geometries in a Narrow Rectangular Channel
Mixing vane geometries enhance the fuel-to-coolant heat transfer within nuclear reactors, allowing power reactors to be used more efficiently. At the same time, their presence affects the critical heat flux (CHF), the upper limit on the power the reactor can produce. Numerical simulations do not accurately reflect the changes to CHF when mixing vanes are included in nuclear fuel assemblies, suggesting that the CHF models are not resolving the boiling phenomena that occur with mixing vane geometries. This thesis aims to address this gap by designing an experiment capable of directly resolving the local single- and two-phase heat transfer processes that occur when mixing vane geometries are introduced into flow channels, building on previously developed high-spatial- and high-temporal-resolution optical and infrared imaging techniques. A high-resolution experimental database would allow researchers to understand the boiling physics at the smallest scales, enabling the creation of more advanced numerical tools for the design and safety analysis of nuclear power reactors. Single-phase heat transfer simulations using the commercial computational fluid dynamics code STAR-CCM+ were performed to aid in the design process, and a preliminary analysis of the results was conducted to identify key single-phase heat transfer phenomena. Modifications to an existing experiment were made to include flow obstacles analogous to mixing vane geometries in a flow boiling experiment. Obstacle geometries were 3D-printed using a high-temperature-resistant resin, allowing the creation of complex three-dimensional geometries within the experiment. Experimental validation of the simulations is still needed; however, the preliminary analysis identified single-phase heat transfer phenomena of interest for further investigation. These include the relationship of fluid velocity and turbulent kinetic energy to heat transfer, the effects of impinging flows on heat transfer, and heat transport within the changing geometries.
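For context on the single-phase quantities discussed above, a common engineering baseline for turbulent single-phase heat transfer in channels is the Dittus-Boelter correlation. The sketch below applies that textbook correlation with illustrative numbers; it is not a method or result from the thesis, which relies on STAR-CCM+ simulations and experiments instead.

    def dittus_boelter_nusselt(re: float, pr: float, heating: bool = True) -> float:
        # Dittus-Boelter correlation for fully developed turbulent flow:
        # Nu = 0.023 * Re^0.8 * Pr^n, with n = 0.4 for heating, 0.3 for cooling.
        # Valid roughly for Re > 1e4 and 0.7 < Pr < 160.
        n = 0.4 if heating else 0.3
        return 0.023 * re**0.8 * pr**n

    def heat_transfer_coefficient(nu: float, k: float, d_h: float) -> float:
        # h = Nu * k / D_h, with k the fluid thermal conductivity [W/(m K)]
        # and D_h the hydraulic diameter [m].
        return nu * k / d_h

    # Illustrative water-like conditions in a narrow rectangular channel.
    nu = dittus_boelter_nusselt(re=5.0e4, pr=6.0)
    print(heat_transfer_coefficient(nu, k=0.6, d_h=0.005))  # W/(m^2 K)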