
DSpace@MIT
    147,843 research outputs found

    Research and development of time resolution and time reference adjustment for CMS improved resistive plate chambers (iRPCs)

    No full text
    Purpose: Improved resistive plate chambers (iRPCs) will be installed in the challenging forward region of the Compact Muon Solenoid (CMS) during its Phase-2 upgrade. The design target for the iRPC time resolution is 1.5 ns, which will help the Level-1 trigger system distinguish muons from high backgrounds and improve the trigger efficiency. Studying the time resolution after integrating the new backend electronics boards (BEB) is essential for ensuring the timing performance. In this system, a time reference (Tref) signal is distributed by the BEB to several frontend electronics boards (FEB) to reset their time-to-digital converters (TDC). In the CMS experiment, the iRPC chambers and their on-chamber FEBs are located at different positions, resulting in varying Tref arrival times on the FEB side. This paper describes the measures taken to ensure the time resolution of a single path and to adjust the time base across multiple paths. Method: Unique designs were implemented in the chamber, FEB, and BEB to ensure a satisfactory time resolution. Tref adjustments for different paths were performed in bunch-crossing steps (24.950 ns) in the BEB using shift registers, and sub-bunch-crossing adjustments were performed in the FEB using the TDC correction module. After adjustment, the Tref arrival-time differences between FEBs were less than 1.25 ns. Results: The time resolution of the FEB–BEB system was observed to be 32 ps. The time resolution of the chamber–FEB–BEB system was measured for the first time and is 554 ps at an iRPC working point of 7200 V. In addition, the Tref arrival-time differences of different paths were adjusted from −99.923 (−90.113) ns to 0.073 (−0.141) ns. Conclusion: The test results show that the system time resolution and the Tref adjustment performed by the BEB meet the Phase-2 upgrade goals.
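    The coarse-plus-fine adjustment described above can be illustrated with a minimal sketch (Python): split each path delay into an integer number of 24.950 ns bunch-crossing steps, applied via the BEB shift registers, and a sub-bunch-crossing residual, handled by the FEB TDC correction module. This is not the actual BEB/FEB firmware, and all delay values below are hypothetical.

        # A minimal sketch (not the actual BEB/FEB firmware) of the two-stage Tref
        # adjustment: remove an integer number of bunch-crossing (BX) periods with
        # the BEB shift registers, then hand the small residual to the FEB-side
        # TDC correction module. All delay values below are hypothetical.

        BX_NS = 24.950  # bunch-crossing period used for the coarse steps (ns)

        def split_delay(path_delay_ns: float) -> tuple[int, float]:
            """Split a Tref path delay into coarse BX steps and a fine residual."""
            coarse_steps = round(path_delay_ns / BX_NS)          # applied in the BEB
            residual_ns = path_delay_ns - coarse_steps * BX_NS   # corrected in the FEB TDC
            return coarse_steps, residual_ns

        # Hypothetical Tref path delays in ns (one entry per FEB).
        paths = {"FEB_1": 37.40, "FEB_2": 61.90, "FEB_3": 110.25}
        for name, delay in paths.items():
            steps, residual = split_delay(delay)
            print(f"{name}: remove {steps} BX steps in BEB, fine-correct {residual:+.3f} ns in FEB")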

    Electrostatic adsorption of polyanions onto lipid nanoparticles controls uptake, trafficking, and transfection of RNA and DNA therapies

    No full text
    Rapid advances in nucleic acid therapies highlight the immense therapeutic potential of genetic therapeutics. Lipid nanoparticles (LNPs) are highly potent nonviral transfection agents that can encapsulate and deliver various nucleic acid therapeutics, including but not limited to messenger RNA (mRNA), silencing RNA (siRNA), and plasmid DNA (pDNA). However, a major challenge of targeted LNP-mediated systemic delivery is the nanoparticles’ nonspecific uptake by the liver and the mononuclear phagocytic system, due partly to the adsorption of endogenous serum proteins onto LNP surfaces. Tunable LNP surface chemistries may enable efficacious delivery across a range of organs and cell types. Here, we describe a method to electrostatically adsorb bioactive polyelectrolytes onto LNPs to create layered LNPs (LLNPs). LNP cores varying in nucleic acid cargo and component lipids were stably layered with four biologically relevant polyanions: hyaluronate (HA), poly-L-aspartate (PLD), poly-L-glutamate (PLE), and polyacrylate (PAA). We further investigated the impact of the four surface polyanions on the transfection and uptake of mRNA- and pDNA-loaded LNPs in cell cultures. PLD- and PLE-LLNPs increased mRNA transfection twofold over unlayered LNPs in immune cells. HA-LLNPs increased pDNA transfection rates by more than twofold in epithelial and immune cells. In a healthy C57BL/6 murine model, PLE- and HA-LLNPs increased transfection by 1.8-fold to 2.5-fold over unlayered LNPs in the liver and spleen. These results suggest that layer-by-layer (LbL) assembly is a generalizable, highly tunable platform to modify the targeting specificity, stability, and transfection efficacy of LNPs, and to incorporate other charged targeting and therapeutic molecules into these systems.

    Algorithmic Advances for Fair and Efficient Decision-Making in Online Platforms

    No full text
    Modern online platforms—such as recommendation systems, advertising markets, and e-commerce sites—operate in dynamic and complex environments where efficient algorithmic decision-making is essential. These platforms must continuously adapt to rapidly changing user behaviors, market fluctuations, and data uncertainties while optimizing for both learning efficacy and revenue generation. However, focusing solely on performance can lead to biased outcomes and inequitable treatment of users and items, raising concerns about fairness. Balancing efficiency and fairness is therefore crucial for sustainable platform growth. In this thesis, we tackle these challenges by developing novel algorithmic frameworks and methods that integrate fairness considerations with robust learning and optimization techniques. We explore these problems from three distinct perspectives, each contributing to improved decision quality and fairness in online decision-making. In Chapter 2, we first focus on efficiency, addressing the challenge of performing online learning in a highly non-stationary environment. User behaviors and preferences often change over time, making it difficult for traditional algorithms to maintain good performance. This issue is particularly prevalent in real-world applications such as recommendation systems and advertising platforms, where shifts in user dynamics can undermine decision-making efficacy. To tackle this, we propose a novel algorithm for the widely adopted multi-armed bandit framework that enables platforms to adaptively learn in a fast-changing environment characterized by auto-regressive temporal dependencies. In Chapter 3, we shift our focus to the realm of fairness and explore how fairness considerations can be effectively integrated into the context of assortment planning. As algorithmic recommendations become integral to platform operations, a purely revenue-driven approach can result in highly imbalanced outcomes, leading to certain items receiving minimal exposure and exiting the platform in the long run. To address this, we develop a combinatorial optimization framework that incorporates fairness constraints, ensuring equitable exposure and opportunities for all items on the platform. We design a series of polynomial-time approximation algorithms to solve the fair assortment problem. Through numerical studies on both synthetic data and real-world MovieLens data, we showcase the effectiveness of our algorithms and provide insights into the platform's price of fairness. In Chapter 4, we bridge the topics of fairness and learning efficiency by examining how to achieve multi-stakeholder fairness in a multi-sided recommendation system. Here, the challenge is multifaceted, including ensuring high platform revenue, maintaining fair outcomes for diverse stakeholders, and enabling robust learning amidst data uncertainty. We propose a novel optimization framework that maximizes platform revenue while enforcing fairness constraints for both items and users, accommodating various fairness notions and outcome metrics. Building on this, we introduce a low-regret online learning and optimization algorithm that dynamically balances learning and fairness—two objectives that are often at odds. Finally, we demonstrate the efficacy of our approach via a real-world case study on Amazon review data and offer actionable guidelines for implementing fair policies in practice. (Ph.D. thesis)
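    The thesis's bandit algorithm for auto-regressive non-stationarity is not reproduced in this abstract; as a rough point of reference only, the sketch below shows a standard discounted-UCB heuristic for non-stationary multi-armed bandits, which illustrates the general idea of down-weighting stale observations. The drifting reward model and all parameter values are arbitrary.

        import numpy as np

        # A standard discounted-UCB style heuristic for non-stationary bandits.
        # This is a generic illustration, not the algorithm developed in Chapter 2.

        rng = np.random.default_rng(0)
        n_arms, horizon = 3, 5000
        gamma, c = 0.99, 2.0             # discount factor and exploration weight (arbitrary)

        disc_counts = np.zeros(n_arms)   # discounted pull counts
        disc_rewards = np.zeros(n_arms)  # discounted reward sums

        def true_means(t):
            """Slowly drifting arm means, standing in for a changing environment."""
            return 0.5 + 0.4 * np.sin(2 * np.pi * (t / 2000.0 + np.arange(n_arms) / n_arms))

        for t in range(horizon):
            if t < n_arms:
                arm = t                                   # pull each arm once to initialize
            else:
                means = disc_rewards / np.maximum(disc_counts, 1e-12)
                bonus = c * np.sqrt(np.log(disc_counts.sum()) / np.maximum(disc_counts, 1e-12))
                arm = int(np.argmax(means + bonus))
            reward = rng.normal(true_means(t)[arm], 0.1)  # noisy reward from a drifting mean
            disc_counts *= gamma                          # discount all past observations
            disc_rewards *= gamma
            disc_counts[arm] += 1.0
            disc_rewards[arm] += reward

        print("final discounted estimates:", np.round(disc_rewards / disc_counts, 3))
        print("current true means:        ", np.round(true_means(horizon), 3))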

    Half-Space Intersection Properties for Minimal Hypersurfaces

    No full text
    We prove “half-space” intersection properties in three settings: the hemisphere, half-geodesic balls in space forms, and certain subsets of Gaussian space. For instance, any two embedded minimal hypersurfaces in the sphere must intersect in every closed hemisphere. Two approaches are developed: one using classifications of stable minimal hypersurfaces, and the second using conformal change and comparison geometry for α-Bakry-Émery-Ricci curvature. Our methods yield the analogous intersection properties for free boundary minimal hypersurfaces in space form balls, even when the interior or boundary curvature may be negative. Finally, Colding and Minicozzi recently showed that any two embedded shrinkers of dimension n must intersect in a large enough Euclidean ball of radius R(n). We show that R(n) ≤ 2n.

    Report to the President for year ended June 30, 2025, MIT-IBM Watson AI Lab

    No full text
    This report contains the following sections: Goals and Priorities, Industry Research Collaborations, Selected Research Overview, Student and Young Researcher Engagement, Community Outreach and Events, Communications, and Administration and Governance

    Exploring Learning Engineering Design Decision Tracking: Emergent Themes from Practitioners’ Work

    No full text
    This paper examines design decisions that were written down and enacted by learning design practitioners across 18 projects at a postsecondary institution. Through emergent coding of decisions recorded in a Learning Engineering Evidence and Decision (LEED) tracker in situ, this research answers three questions: (1) how do practitioners track and cite sources of influence on design decisions, (2) how do practitioners communicate, revisit, and iterate these decisions throughout cycles of their design, and (3) when revisions were made to decisions, what sources of influence led to these changes? Findings indicate that practitioners record new and revised decisions while also tracking influences on these decisions that stem from their own experiences and from the specific project context. This work contributes to the support of learning design practitioners by offering a tool to capture thinking and reasoning in complex contexts, while offering researchers a way to collect evidence of this decision-making.

    Aligning Machine Learning and Robust Decision-Making

    No full text
    Machine learning (ML) has become increasingly ubiquitous across many applications worldwide, ranging from supply chain management to personalized pricing, recommendations, and more. These predictive models are often used as tools to inform operations, with the potential to revolutionize decision-making. The key question this thesis aims to address is: How can we make ML methods aware of their downstream impact on the full decision-making process? As a result, we focus on developing methods to align AI with real-world objectives in order to make efficient, safe, and robust systems. This thesis is split into three chapters focusing on different aspects of this problem. In Chapter I, we address the heavy computational complexity of existing methods. We present a meta-optimization machine learning framework to learn fast approximations to general convex problems. We further apply this within an end-to-end learning framework which trains ML models with an optimization-based loss function to minimize the decision cost directly. This meta-optimization approach allows us to tackle problem sizes that were intractable using approaches from the previous literature. Furthermore, this work establishes analytically that this learning approach guarantees fast convergence to nearly-optimal solutions. This chapter shows that the proposed approach consistently scales better in terms of runtime as problem size increases, being 2 to 10 times faster for various problems while retaining nearly the same accuracy. In Chapter II, we focus on robustness: making decisions that protect against worst-case scenarios as well as against noise in the data. Traditional robust optimization methods tackle this issue by creating uncertainty sets for each observation, aiming to minimize costs in worst-case scenarios. However, these methods assume the worst-case scenario happens at every observation, which can be too pessimistic. We propose a new approach that avoids constructing uncertainty sets and links uncertainties across the entire feature space. This allows for robust decision-making without assuming worst-case scenarios at every observation. Our approach integrates robustness with a concept of learning stability, proving that algorithms with a stability property inherently produce robust solutions without explicitly solving the robust optimization problem. Finally, this chapter tests the framework on a variety of problems, such as portfolio optimization using historical stock data and inventory allocation and electricity generation using real-world data, showing significant improvements in robustness and competitive average error relative to the existing literature. Finally, in Chapter III, we consider the endogenous setting where decisions affect outcomes, as in pricing and assortment optimization, where the chosen price affects demand. In the end-to-end spirit, this research introduces an approach to jointly predict and optimize in this setting, which learns a prediction aligned with expected cost. We further introduce a robust optimization decision-making method that can account for uncertainty in ML models, specifically by constructing uncertainty sets over the space of ML models and optimizing actions to protect against worst-case predictions. We further prove guarantees that our method captures near-optimal decisions with high probability as a function of the amount of data.
    We also introduce a new class of two-stage stochastic optimization problems that can now be addressed through the end-to-end learning framework. Here, the first stage is an information-gathering problem: deciding which random variable to "poll" and gain information about before making a second-stage decision based on it. We present several computational experiments for pricing and inventory assortment/recommendation problems, comparing against existing methods from bandits and offline reinforcement learning and showing that our approach consistently improves performance over these baselines. (Ph.D. thesis)
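    To make the "optimization-based loss" idea concrete, here is a toy decision-focused training loop for a newsvendor-style problem, written with PyTorch: the predictor is trained directly on the downstream decision cost rather than on prediction error. The linear model, cost parameters, and synthetic data are all hypothetical, and this is not the meta-optimization framework described in Chapter I.

        import torch

        # Toy decision-focused learning: predict demand, order that amount, and train
        # on the resulting newsvendor cost instead of squared prediction error.
        # All data, costs, and the linear model are hypothetical illustrations.

        torch.manual_seed(0)
        n, d = 2000, 5
        X = torch.randn(n, d)
        true_w = torch.tensor([1.5, -0.7, 0.3, 0.0, 2.0])
        demand = (X @ true_w + 10.0 + 0.5 * torch.randn(n)).clamp(min=0.0)

        holding_cost, backorder_cost = 1.0, 4.0   # asymmetric costs make plain MSE suboptimal

        model = torch.nn.Linear(d, 1)
        opt = torch.optim.Adam(model.parameters(), lr=0.05)

        def newsvendor_cost(order, realized):
            over = torch.clamp(order - realized, min=0.0)    # leftover inventory
            under = torch.clamp(realized - order, min=0.0)   # unmet demand
            return (holding_cost * over + backorder_cost * under).mean()

        for epoch in range(200):
            opt.zero_grad()
            order = model(X).squeeze(-1)          # decision = predicted order quantity
            loss = newsvendor_cost(order, demand) # downstream decision cost as the loss
            loss.backward()
            opt.step()

        print(f"average decision cost after training: {loss.item():.3f}")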

    Single-Phase Heat Transfer Effects of Mixing Vane Geometries in a Narrow Rectangular Channel

    No full text
    Mixing vane geometries enhance the fuel-to-coolant heat transfer within nuclear reactors, allowing power reactors to be operated more efficiently. At the same time, their presence affects the critical heat flux (CHF), the upper limit on the power produced by the reactor. Numerical simulations do not accurately reflect the changes to CHF when mixing vanes are included in nuclear fuel assemblies, suggesting that the CHF models are not resolving the boiling phenomena that occur around mixing vane geometries. This thesis aims to address this gap by designing an experiment capable of directly resolving the local single- and two-phase heat transfer processes that occur when mixing vane geometries are introduced into flow channels, building on previously developed high-spatial- and temporal-resolution optical and infrared imaging techniques. A high-resolution experimental database would allow researchers to understand the boiling physics at the smallest scales, enabling the creation of more advanced numerical tools for the design and safety analysis of nuclear power reactors. Single-phase heat transfer simulations using the commercial computational fluid dynamics code STAR-CCM+ were performed to aid in the design process, and a preliminary analysis of the results was conducted to identify key single-phase heat transfer phenomena. Modifications were made to an existing experiment to include flow obstacles analogous to mixing vane geometries in a flow boiling experiment. Obstacle geometries were 3D printed using high-temperature-resistant resin, allowing the creation of complex three-dimensional geometries within the experiment. Experimental validation of the simulations is still needed; however, the preliminary analysis identified single-phase heat transfer phenomena of interest for further investigation: the relationship of fluid velocity and turbulent kinetic energy to heat transfer, the effects of impinging flows on heat transfer, and the heat transport within and around the changing geometries. (S.B. thesis)
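    For context on the single-phase baseline that such an experiment characterizes, the sketch below evaluates the standard Dittus–Boelter correlation for a narrow rectangular channel using its hydraulic diameter. The channel dimensions, coolant velocity, and water properties are hypothetical placeholders and are not taken from the thesis.

        # Single-phase heat transfer estimate for a narrow rectangular channel using
        # the standard Dittus-Boelter correlation (Nu = 0.023 Re^0.8 Pr^0.4, heating).
        # Channel dimensions and fluid properties below are hypothetical placeholders.

        width, gap = 0.03, 0.002        # channel cross-section (m): 30 mm x 2 mm
        velocity = 3.0                  # bulk coolant velocity (m/s)

        # Approximate properties of water near room temperature.
        rho, mu = 998.0, 1.0e-3         # density (kg/m^3), dynamic viscosity (Pa*s)
        k, cp = 0.6, 4182.0             # thermal conductivity (W/m-K), specific heat (J/kg-K)

        area = width * gap
        perimeter = 2.0 * (width + gap)
        d_h = 4.0 * area / perimeter    # hydraulic diameter of the rectangular channel

        re = rho * velocity * d_h / mu  # Reynolds number
        pr = cp * mu / k                # Prandtl number
        nu = 0.023 * re**0.8 * pr**0.4  # Dittus-Boelter (turbulent flow, heating)
        h = nu * k / d_h                # heat transfer coefficient (W/m^2-K)

        print(f"D_h = {d_h*1000:.2f} mm, Re = {re:.0f}, Pr = {pr:.2f}")
        print(f"Nusselt number ~ {nu:.1f}, h ~ {h:.0f} W/m^2-K (bare channel, no mixing vanes)")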

    Data Interpretation and Management for an Atmospheric Probe Mission to Venus

    No full text
    After nearly 40 years without a dedicated U.S. mission to Venus, the Rocket Lab Mission to Venus is planning to launch a small probe to analyze the composition of Venus’ cloud layers. As the probe descends through the atmosphere, it will spend around five minutes in the cloud deck, from 66 km to 48 km above the surface, and roughly 20 minutes total in the atmosphere [French et al., 2022]. The probe’s primary scientific instrument, the Autofluorescence Nephelometer (AFN), will gather data by measuring the light scattering off particles, providing insight into their chemical composition based on refractive index and particle size [Baumgardner et al., 2022]. Unfortunately, Mie scattering [Mie, 1908], the physical theory underpinning the AFN, holds that light scattering over a small solid angle is fundamentally degenerate: different combinations of refractive index and particle size can lead to identical light scattering. This degeneracy limits scientists’ ability to uniquely determine the physical parameters of interest, leading some previous authors to rely upon helpful, but perhaps limiting, assumptions that mitigate this degeneracy. Complicating matters further, the probe’s communication with Earth is subject to a strict data budget, limiting the number of AFN measurements available for analysis in the first place. This thesis addresses two important problems associated with the Rocket Lab Mission to Venus: 1) how to mitigate the light-scattering degeneracy with minimal assumptions and 2) how to transmit valuable information within the limited data budget. To address the first problem, I introduce a data retrieval algorithm, based upon Bayesian statistical inference [Lindley, 1965], which combines a physical model of the instrument with a prior probability distribution describing each physical property. In some cases, this method can estimate the correct particle size and refractive index of a particle as the maximum-likelihood value from a single measurement, even while relaxing certain assumptions that were previously standard in the field, such as a small refractive index range. Using my data retrieval algorithm, I reanalyze the data collected by the Pioneer MultiProbe Mission to Venus’ nephelometers without the need for supplementary data from a different instrument [Ragent and Blamont, 1980]. I also provide new insight into the particle size and refractive index distributions seen by the Pioneer Mission’s small probes, which had not been possible with previous techniques. To address the second problem, I propose a data strategy for limited-data missions like the Rocket Lab Mission to Venus. The method developed in this work relies upon Gaussian Mixture Models, which can efficiently represent multiple measurements. (Ph.D. thesis)
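    As a schematic illustration of grid-based Bayesian retrieval (not the thesis's actual instrument model), the sketch below places a prior over particle radius and refractive index, scores each grid point with a Gaussian likelihood against a single measured signal, and reads off the maximum a posteriori estimate. The forward model, prior, and noise level here are made-up stand-ins for the real Mie-scattering calculation and AFN characteristics.

        import numpy as np

        # Schematic Bayesian retrieval over a (radius, refractive index) grid.
        # forward_model is a toy stand-in for a real Mie-scattering calculation,
        # and the prior and noise values are hypothetical.

        def forward_model(radius_um, n_real):
            """Toy scattered-intensity model; the real AFN analysis would use Mie theory."""
            return (radius_um ** 2) * (n_real - 1.0) ** 2

        radii = np.linspace(0.1, 5.0, 200)        # particle radius grid (micrometers)
        indices = np.linspace(1.3, 1.7, 200)      # real refractive index grid
        R, N = np.meshgrid(radii, indices, indexing="ij")

        prior = np.exp(-0.5 * ((R - 2.0) / 1.0) ** 2)   # broad Gaussian prior on radius
        prior /= prior.sum()                            # flat in refractive index

        measured = forward_model(2.0, 1.45) + 0.05      # one noisy synthetic measurement
        sigma = 0.1                                     # assumed measurement noise
        likelihood = np.exp(-0.5 * ((forward_model(R, N) - measured) / sigma) ** 2)

        posterior = likelihood * prior
        posterior /= posterior.sum()

        i, j = np.unravel_index(np.argmax(posterior), posterior.shape)
        print(f"MAP estimate: radius ~ {radii[i]:.2f} um, n ~ {indices[j]:.3f}")
        print("(the ridge of near-equal posterior values reflects the size/index degeneracy)")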

    Report to the President for year ended June 30, 2025, Arts Initiatives

    No full text
    This report contains the following sections: Artfinity arts festival, Center for Art, Science & Technology (CAST), Council for the Arts (CAMIT), Faculty Arts Grants and Visiting Artists, Eugene McDermott Award in the Arts at MIT, Personnel, and Student Arts Programs (Arts Incubator, Arts Scholars, Student Center Arts Studios, Wiesner Student Art Gallery)

    58,643 full texts and 147,857 metadata records, updated in the last 30 days.