
    Towards in vivo g-ratio mapping using MRI: unifying myelin and diffusion imaging

    The g-ratio, quantifying the relative thickness of the myelin sheath encasing an axon, is a geometrical invariant with high functional relevance because of its role in determining neuronal conduction velocity. Advances in MRI data acquisition and signal modelling have put in vivo mapping of the g-ratio, across the entire white matter, within our reach. This capacity would greatly increase our knowledge of the nervous system: how it functions, and how it is impacted by disease. This is the second review on the topic of g-ratio mapping using MRI. As such, it summarizes the most recent developments in the field, while also providing methodological background pertinent to aggregate g-ratio weighted mapping and discussing pitfalls associated with these approaches. Using simulations based on recently published data, this review demonstrates the relevance of the calibration step for three myelin markers (macromolecular tissue volume, myelin water fraction, and bound pool fraction). It highlights the need to estimate both the slope and offset of the relationship between these MRI-based markers and the true myelin volume fraction if we are to achieve the goal of precise, high-sensitivity g-ratio mapping in vivo. Other challenges discussed in this review further underscore the need for gold-standard measurements of human brain tissue from ex vivo histology. We conclude that the quest to find the most appropriate MRI biomarkers to enable in vivo g-ratio mapping is ongoing, with the potential of many novel techniques yet to be investigated.
    Comment: Will be published as a review article in the Journal of Neuroscience Methods as part of the Special Issue with Hu Cheng and Vince Calhoun as Guest Editors
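    To make the calibration issue concrete, the sketch below computes a voxel-wise aggregate g-ratio from an MRI myelin marker and an axonal volume fraction (AVF), assuming a linear marker-to-MVF calibration with a slope and an offset. The marker, AVF, and calibration values are purely illustrative and are not taken from the review.

```python
import numpy as np

def aggregate_g_ratio(marker, avf, slope=1.0, offset=0.0):
    """Aggregate g-ratio from an MRI myelin marker and an axonal volume fraction.

    The marker (e.g. MTV, MWF, or bound pool fraction) is assumed to relate
    linearly to the true myelin volume fraction: MVF = slope * marker + offset.
    The slope and offset play the role of the calibration terms discussed above.
    """
    mvf = slope * np.asarray(marker) + offset                  # calibrated myelin volume fraction
    return np.sqrt(np.asarray(avf) / (np.asarray(avf) + mvf))  # g = sqrt(AVF / (AVF + MVF))

# Hypothetical voxel: marker value 0.25, AVF 0.45, calibration slope 1.2, offset -0.05
print(aggregate_g_ratio(0.25, 0.45, slope=1.2, offset=-0.05))  # ~0.80
```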

    Scaling of transverse nuclear magnetic relaxation due to magnetic nanoparticle aggregation

    The aggregation of superparamagnetic iron oxide (SPIO) nanoparticles decreases the transverse nuclear magnetic resonance (NMR) relaxation time T2 of adjacent water molecules measured by a Carr-Purcell-Meiboom-Gill (CPMG) pulse-echo sequence. This effect is commonly used to measure the concentrations of a variety of small molecules. We perform extensive Monte Carlo simulations of water diffusing around SPIO nanoparticle aggregates to determine the relationship between T2 and details of the aggregate. We find that in the motional averaging regime T2 scales as a power law with the number N of nanoparticles in an aggregate. The specific scaling depends on the fractal dimension d of the aggregates. We find T2 ∝ N^{-0.44} for aggregates with d = 2.2, a value typical of diffusion-limited aggregation. We also find that in two-nanoparticle systems, T2 is strongly dependent on the orientation of the two nanoparticles relative to the external magnetic field, which implies that it may be possible to sense the orientation of a two-nanoparticle aggregate. To optimize the sensitivity of SPIO nanoparticle sensors, we propose that it is best to have aggregates with few nanoparticles, close together, measured with long pulse-echo times.
    Comment: 20 pages, 3 figures, submitted to Journal of Magnetism and Magnetic Materials
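    As a quick illustration of the reported scaling, the snippet below evaluates the power law T2 ∝ N^{-0.44} for a few aggregate sizes. The single-particle reference T2 and the exponent default are placeholders for illustration; only the form of the scaling comes from the abstract.

```python
import numpy as np

def t2_of_aggregate(n_particles, t2_single=100e-3, exponent=-0.44):
    """Illustrative T2 (seconds) of an N-particle aggregate, scaled from a single particle.

    Implements the motional-averaging power law T2 ~ N^exponent; the reference
    value t2_single is a hypothetical single-particle relaxation time.
    """
    return t2_single * np.asarray(n_particles, dtype=float) ** exponent

for n in (1, 2, 5, 10, 50):
    print(f"N = {n:3d}  ->  T2 ~ {1e3 * t2_of_aggregate(n):.1f} ms")
```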

    Cosmic Dust Aggregation with Stochastic Charging

    The coagulation of cosmic dust grains is a fundamental process which takes place in astrophysical environments, such as presolar nebulae and circumstellar and protoplanetary disks. Cosmic dust grains can become charged through interaction with their plasma environment or other processes, and the resultant electrostatic force between dust grains can strongly affect their coagulation rate. Since ions and electrons are collected on the surface of the dust grain at random time intervals, the electrical charge of a dust grain experiences stochastic fluctuations. In this study, a set of stochastic differential equations is developed to model these fluctuations over the surface of an irregularly shaped aggregate. Then, employing the data produced, the influence of the charge fluctuations on the coagulation process and the physical characteristics of the aggregates formed is examined. It is shown that dust grains with small charges (due to their small size or a tenuous plasma environment) are affected most strongly by these charge fluctuations.
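    To give a sense of what a stochastic charge model looks like in practice, the sketch below integrates a reduced Ornstein-Uhlenbeck-type SDE for the total grain charge with an Euler-Maruyama step. This is a generic toy model under assumed parameters, not the per-surface-patch equations developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: equilibrium charge (in elementary charges), charging
# relaxation time, fluctuation amplitude, and time step. Values are illustrative only.
q_eq, tau, sigma = -50.0, 1.0, 5.0
dt, n_steps = 0.01, 1000

q = np.empty(n_steps)
q[0] = q_eq
for k in range(1, n_steps):
    # Euler-Maruyama step of dq = -(q - q_eq)/tau dt + sigma dW:
    # random ion/electron collection pushes the charge around its equilibrium value.
    q[k] = q[k - 1] - (q[k - 1] - q_eq) / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal()

print(f"mean charge ~ {q.mean():.1f} e, standard deviation ~ {q.std():.1f} e")
```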

    Differentiated Predictive Fair Service for TCP Flows

    The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets so as to satisfy delivery requirements, such as predictable (consistent) and fair service. In this paper, we investigate the interaction among short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other, as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both tail-drop and Random Early Drop (RED) packet dropping policies. The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows.
    National Science Foundation (CAREER ANI-0096045, MRI EIA-9871022)
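    A minimal sketch of the access-router classification step described above: each flow is marked "short" until it has sent more than a size threshold, after which its packets are marked "long" and would be queued separately in the core. The threshold value and the flow-table layout are assumptions for illustration, not parameters from the paper.

```python
# Per-flow byte counters kept at the access router (flow id -> bytes seen so far).
bytes_seen: dict[tuple, int] = {}

SHORT_FLOW_THRESHOLD = 20 * 1024  # bytes; hypothetical cutoff between "short" and "long" flows

def classify(flow_id: tuple, packet_len: int) -> str:
    """Return the class ("short" or "long") to mark on this packet."""
    total = bytes_seen.get(flow_id, 0) + packet_len
    bytes_seen[flow_id] = total
    return "short" if total <= SHORT_FLOW_THRESHOLD else "long"

# Example: the first few packets of a transfer stay "short"; a bulk transfer crosses into "long".
flow = ("10.0.0.1", 40312, "10.0.0.2", 80)
for length in (1500, 1500, 1500, 60000):
    print(length, classify(flow, length))
```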

    On the viability of the shearing box approximation for numerical studies of MHD turbulence in accretion disks

    Most of our knowledge on the nonlinear development of the magneto-rotational instability (MRI) relies on the results of numerical simulations employing the shearing box (SB) approximation. A number of difficulties arising from this approach have recently been pointed out in the literature. We thoroughly examine the effects of the assumptions made and the numerical techniques employed in SB simulations, in order to clarify and gain a better understanding of those difficulties, as well as of a number of additional serious problems raised here for the first time, and of their impact on the results. Analytical derivations and estimates, as well as comparisons with methods used in the numerical study of turbulence, are employed, and numerical experiments are performed to support some of our claims and conjectures. The following problems, arising from the (virtually exclusive) use of SB simulations as a tool for understanding and quantifying the nonlinear development of the MRI in disks, are analyzed and discussed: (i) inconsistencies in the application of the SB approximation itself; (ii) the limited spatial scale of the SB; (iii) the lack of convergence of most ideal MHD simulations; (iv) side effects of the SB symmetry and the non-trivial nature of the linear MRI; (v) physical artifacts arising on the too-small box scale due to periodic boundary conditions. The computational and theoretical challenge posed by the MHD turbulence problem in accretion disks cannot be met by the SB approximation as it has been used to date. A new strategy to confront this challenge is proposed, based on techniques widely used in numerical studies of turbulent flows: developing (e.g., with the help of local numerical studies) a sub-grid turbulence model and implementing it in global calculations.
    Comment: Accepted for publication in Astronomy and Astrophysics
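    For readers unfamiliar with the approximation, the shearing box replaces the global disk with a small Cartesian patch co-rotating at a fiducial radius. A commonly used unstratified ideal-MHD form of the local momentum equation and the radial boundary condition is quoted below from standard references rather than from this paper.

```latex
% Local (shearing box) momentum equation about a fiducial radius r_0, with
% Omega = Omega(r_0) and shear parameter q = -d ln(Omega)/d ln(r) (q = 3/2 for Keplerian rotation):
\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v}
  = -\frac{\nabla p}{\rho}
    + \frac{(\nabla\times\mathbf{B})\times\mathbf{B}}{4\pi\rho}
    - 2\,\boldsymbol{\Omega}\times\mathbf{v}
    + 2\,q\,\Omega^{2} x\,\hat{\mathbf{x}}
% (vertical gravity is omitted in the unstratified box). The radial boundaries are
% "shearing periodic":
f(x, y, z, t) = f(x + L_x,\; y - q\,\Omega\,L_x\,t,\; z,\; t)
```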

    The Coordination and Design of Point-Nonpoint Trading Programs and Agri-Environmental Policies

    Agricultural agencies have long offered agri-environmental payments that are inadequate to achieve water quality goals, and many state water quality agencies are considering point-nonpoint trading to achieve the needed pollution reductions. This analysis considers both targeted and nontargeted agri-environmental payment schemes, along with a trading program which is not spatially targeted. The degree of improved performance among these policies is found to depend on whether the programs are coordinated, whether double-dipping (i.e., when farmers are paid twice, once by each program, to undertake particular pollution control actions) is allowed, and whether the agri-environmental payments are targeted. Under coordination, efficiency gains only occur with double-dipping, so that both programs jointly influence farmers' marginal decisions. Without coordination, double-dipping may increase or decrease efficiency, depending on how the agri-environmental policy is targeted. Finally, double-dipping may not solely benefit farmers, but can result in a transfer of agricultural subsidies to point sources.
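    As a back-of-the-envelope illustration of why double-dipping changes marginal incentives, the snippet below compares a farmer's incentive to abate one more unit of pollution with and without being able to stack the two payments. All prices and costs are hypothetical and are not drawn from the paper.

```python
# Hypothetical per-unit values for one additional unit of abatement by a farmer.
credit_price = 8.0    # $ per unit of abatement sold to a point source under trading
agri_payment = 5.0    # $ per unit of abatement from the agri-environmental program
marginal_cost = 10.0  # farmer's marginal cost of one more unit of abatement

incentive_trading_only = credit_price
incentive_double_dipping = credit_price + agri_payment  # both programs paid for the same action

print("abates under trading only?  ", incentive_trading_only >= marginal_cost)    # False
print("abates with double-dipping? ", incentive_double_dipping >= marginal_cost)  # True
```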

    Taxonomic classification of planning decisions in health care: a review of the state of the art in OR/MS

    We provide a structured overview of the typical decisions to be made in resource capacity planning and control in health care, and a review of relevant OR/MS articles for each planning decision. The contribution of this paper is twofold. First, to position the planning decisions, a taxonomy is presented. This taxonomy provides health care managers and OR/MS researchers with a method to identify, break down, and classify planning and control decisions. Second, following the taxonomy, for six health care services we provide an exhaustive specification of planning and control decisions in resource capacity planning and control. For each planning and control decision, we structurally review the key OR/MS articles and the OR/MS methods and techniques that are applied in the literature to support decision making.
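    A minimal sketch of how such a taxonomy can be represented and queried is shown below. The level names and example entries are assumptions chosen for illustration; they are not the paper's exact classification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlanningDecision:
    service: str             # e.g. "surgical care" (one of the six health care services)
    hierarchical_level: str  # e.g. "strategic", "tactical", "operational" (illustrative labels)
    decision: str            # the planning/control decision being classified

# Illustrative entries only; the paper provides an exhaustive specification per service.
taxonomy = [
    PlanningDecision("surgical care", "tactical", "allocate operating-room time to specialties"),
    PlanningDecision("inpatient care", "strategic", "dimension the number of beds per ward"),
]

# Group decisions the way the structured overview does: by service and level.
for d in taxonomy:
    print(f"[{d.service} / {d.hierarchical_level}] {d.decision}")
```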

    Systematic Review and Regression Modeling of the Effects of Age, Body Size, and Exercise on Cardiovascular Parameters in Healthy Adults

    Purpose: Blood pressure, cardiac output, and ventricular volumes correlate with various subject features such as age, body size, and exercise intensity. The purpose of this study is to quantify this correlation through regression modeling. Methods: We conducted a systematic review to compile reference data of healthy subjects for several cardiovascular parameters and subject features. Regression algorithms used these aggregate data to formulate predictive models for the outputs (systolic and diastolic blood pressure, ventricular volumes, cardiac output, and heart rate) against the features (age, height, weight, and exercise intensity). A simulation-based procedure generated data of virtual subjects to test whether these regression models built using aggregate data can perform well for subject-level predictions and to provide an estimate of the expected error. The blood pressure and heart rate models were also validated using real-world subject-level data. Results: The directions of the trends between model outputs and the input subject features in our study agree with those in the current literature. Conclusion: Although other studies observe exponential predictor-output relations, the linear regression algorithms performed best for the data in this study. The use of subject-level data and more predictors may provide regression models with higher fidelity. Significance: Models developed in this study can be useful to clinicians for personalized patient assessment and to researchers for tuning computational models.
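    The sketch below shows the kind of linear regression the study describes: predicting one cardiovascular output (here, systolic blood pressure) from the subject features via ordinary least squares. The data are synthetic placeholders generated for the example, not the aggregate data compiled in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
age = rng.uniform(20, 80, n)       # years
height = rng.uniform(150, 200, n)  # cm
weight = rng.uniform(50, 110, n)   # kg
exercise = rng.uniform(0, 1, n)    # normalized exercise intensity

# Synthetic systolic blood pressure with an assumed linear dependence plus noise
# (height is kept as a predictor even though it has no effect in this toy data).
sbp = 90 + 0.5 * age + 0.1 * weight + 25 * exercise + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), age, height, weight, exercise])  # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, sbp, rcond=None)                    # ordinary least squares fit
print("intercept, age, height, weight, exercise:", np.round(coef, 2))

# Subject-level prediction for a new (illustrative) subject: 45 y, 175 cm, 80 kg, light exercise.
new = np.array([1.0, 45, 175, 80, 0.3])
print("predicted SBP:", round(float(new @ coef), 1), "mmHg")
```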

    Resource Modelling: The Missing Piece of the HTA Jigsaw?

    Within health technology assessment (HTA), cost-effectiveness analysis and budget impact analysis have been broadly accepted as important components of decision making. However, whilst they address efficiency and affordability, the issue of implementation and feasibility has been largely ignored. HTA commonly takes place within a deliberative framework that captures issues of implementation and feasibility in a qualitative manner. We argue that only through a formal quantitative assessment of resource constraints can these issues be fully addressed. This paper argues the need for resource modelling to be considered explicitly in HTA. First, economic evaluation and budget impact models are described, along with their limitations in evaluating feasibility. Next, resource modelling is defined and its usefulness is described, along with examples of resource modelling from the literature. Then, the important issues that need to be considered when undertaking resource modelling are described, before setting out recommendations for the use of resource modelling in HTA.
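    To illustrate what a quantitative resource model adds beyond a budget impact model, the sketch below checks whether the capacity needed to deliver a new technology (here, nurse hours per year) is actually available, rather than only costing it. All numbers are hypothetical.

```python
# Hypothetical adoption scenario for a new technology.
eligible_patients = 1200        # patients per year expected to receive the technology
nurse_hours_per_patient = 2.5   # delivery time per treatment course
available_nurse_hours = 2500    # spare nursing capacity in the service per year

required_hours = eligible_patients * nurse_hours_per_patient
shortfall = max(0.0, required_hours - available_nurse_hours)

print(f"required: {required_hours:.0f} h, available: {available_nurse_hours} h, shortfall: {shortfall:.0f} h")
if shortfall > 0:
    # Affordability alone would not reveal this constraint; the resource model does.
    print("Adoption is not feasible at full scale without additional capacity.")
```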

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
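    A minimal sketch of the MTC structure described above: an application expressed as a graph of discrete tasks whose input/output dependencies form the edges, with a dispatch order computed so that every task runs after its inputs are ready. The task names and dependency structure are illustrative.

```python
from collections import deque

# Each task maps to the list of tasks it depends on (its inputs).
tasks = {
    "preprocess": [],
    "simulate_a": ["preprocess"],
    "simulate_b": ["preprocess"],
    "aggregate":  ["simulate_a", "simulate_b"],
}

def topological_order(graph):
    """Dispatch order that respects dependencies (Kahn's algorithm)."""
    indegree = {t: len(deps) for t, deps in graph.items()}
    dependents = {t: [u for u, deps in graph.items() if t in deps] for t in graph}
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for u in dependents[t]:
            indegree[u] -= 1
            if indegree[u] == 0:
                ready.append(u)
    return order

print(topological_order(tasks))  # e.g. ['preprocess', 'simulate_a', 'simulate_b', 'aggregate']
```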