237 research outputs found

    Atomic power - its significance to the management of a relief valve company

    Thesis (M.B.A.)--Boston University

    Effects of multiple-dose ponesimod, a selective S1P1 receptor modulator, on lymphocyte subsets in healthy humans

    This study investigated the effects of ponesimod, a selective S1P1 receptor modulator, on T lymphocyte subsets in 16 healthy subjects. Lymphocyte subset proportions and absolute numbers were determined at baseline and on Day 10, after once-daily administration of ponesimod (10 mg, 20 mg, and 40 mg, each consecutively for 3 days) or placebo (ratio 3:1). The overall change from baseline in lymphocyte count was −1292 ± 340 × 10⁶ cells/L and 275 ± 486 × 10⁶ cells/L in ponesimod- and placebo-treated subjects, respectively. This included a decrease in both T and B lymphocytes following ponesimod treatment. A decrease in naive CD4+ T cells (CD45RA+CCR7+) from baseline was observed only after ponesimod treatment (−113 ± 98 × 10⁶ cells/L; placebo: 0 ± 18 × 10⁶ cells/L). The numbers of T-cytotoxic (CD3+CD8+) and T-helper (CD3+CD4+) cells were significantly altered following ponesimod treatment compared with placebo. Furthermore, ponesimod treatment resulted in marked decreases in CD4+ T-central memory (CD45RA−CCR7+) cells (−437 ± 164 × 10⁶ cells/L) and CD4+ T-effector memory (CD45RA−CCR7−) cells (−131 ± 57 × 10⁶ cells/L). In addition, ponesimod treatment led to a decrease of 228 ± 90 × 10⁶ cells/L in gut-homing T cells (CLA−integrin β7+). In contrast, when compared with placebo, CD8+ T-effector memory and natural killer (NK) cells were not significantly reduced following multiple-dose administration of ponesimod. In summary, ponesimod treatment led to a marked reduction in overall T and B cells. Further investigations revealed that the number of CD4+ cells was dramatically reduced, whereas CD8+ and NK cells were less affected, allowing the body to preserve critical viral-clearing functions.
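    As a rough illustration of the arithmetic behind these summaries, the sketch below computes a mean ± SD change from baseline for one lymphocyte subset. The subject counts are hypothetical placeholders, not trial data.

        # Illustrative only: change from baseline (Day 10 - baseline) for one
        # lymphocyte subset. All values are made-up placeholders, not trial data.
        import statistics

        def change_from_baseline(baseline, day10):
            """Per-subject change (Day 10 - baseline), in 10^6 cells/L."""
            return [d - b for b, d in zip(baseline, day10)]

        # Hypothetical counts (10^6 cells/L) for four ponesimod-treated subjects
        ponesimod_baseline = [1800, 2100, 1650, 1900]
        ponesimod_day10    = [600, 750, 500, 640]

        deltas = change_from_baseline(ponesimod_baseline, ponesimod_day10)
        print(f"mean change: {statistics.mean(deltas):.0f} "
              f"+/- {statistics.stdev(deltas):.0f} x10^6 cells/L")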

    A Comprehensive Economic Stimulus for our Failing Economy

    This paper presents a comprehensive plan to fix the ailing American economy through a five-step approach. First, the Federal Reserve must continue to broaden the scope of monetary policy by purchasing and selling long-term securities; manipulating expectations through FOMC statements is another tool at the Federal Reserve’s disposal. Second, the government must enact fiscal stimulus to stabilize the economy in the short and medium runs, through investment in infrastructure projects, green technology, fusion technology, and science education. Additionally, the new fiscal policy must tackle the mortgage meltdown, which is weighing down the entire economy. Third, the regulatory system must be changed to reduce the likelihood of another financial collapse, starting with the nationalization of the ratings agencies; ratings should be updated faster, with a numeric grading system rather than the existing letter grades. Fourth, our globalized economy ensures that a coordinated global response is necessary to recover; global cooperation to reduce inflation and avoid protectionist policies is vital. Finally, the American bailout policy must be made clear, giving bailouts only to companies that are sound but financially strapped and to those that are too big to fail.
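    As a minimal sketch of the proposed numeric grading idea, the snippet below maps legacy letter ratings onto a finer numeric scale. The specific scale values are illustrative assumptions, not taken from the paper.

        # Hypothetical sketch of the numeric grading idea: map legacy letter
        # ratings onto a 0-100 scale so ratings can move in fine increments.
        # The scores below are illustrative assumptions, not from the paper.
        LETTER_TO_SCORE = {
            "AAA": 95, "AA": 85, "A": 75,
            "BBB": 65, "BB": 55, "B": 45,
            "CCC": 30, "CC": 20, "C": 10, "D": 0,
        }

        def to_numeric(letter: str) -> int:
            return LETTER_TO_SCORE[letter.upper()]

        print(to_numeric("BBB"))  # 65: a numeric score can then drift point by point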

    Multi-static, multi-frequency scattering from zooplankton

    Inversion of multi-frequency acoustic backscattering can be used to estimate size-abundances of zooplankton, given a valid model for backscattering by the zooplankters. The physical properties of the scatterers, density and compressibility (or compressional-wave sound speed), are usually assigned fixed values in the scattering model. These properties would be of interest if they could be measured in situ, e.g. to examine changes in lipid contents over seasons. Extension of currently favored backscattering models to multi-static configurations looks promising as a method to directly measure these relevant physical properties simultaneously with size-abundance estimation.
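    In the simplest linear setting, this kind of multi-frequency inversion can be posed as a nonnegative least-squares problem: the measured backscatter at each frequency is modeled as a weighted sum of contributions from each size class. The sketch below illustrates that formulation with a synthetic kernel; it is not the scattering model used in the paper.

        # Minimal sketch of size-abundance inversion under a linear forward
        # model. The kernel K is synthetic, not a real scattering model.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        n_freq, n_size = 6, 4
        K = rng.uniform(0.1, 1.0, size=(n_freq, n_size))  # model cross-sections
        true_abundance = np.array([5.0, 2.0, 0.0, 1.0])   # animals per m^3
        backscatter = K @ true_abundance                  # simulated measurements

        est, resid = nnls(K, backscatter)  # enforce physically meaningful N >= 0
        print("estimated size-abundance:", np.round(est, 2))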

    Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable

    There has been significant recent interest in parallel graph processing due to the need to quickly analyze the large graphs available today. Many graph codes have been designed for distributed memory or external memory. However, today even the largest publicly-available real-world graph (the Hyperlink Web graph with over 3.5 billion vertices and 128 billion edges) can fit in the memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller graphs, and the studies that do use the Hyperlink graph rely on distributed or external memory. Therefore, it is natural to ask whether we can efficiently solve a broad class of graph problems on this graph in memory. This paper shows that theoretically-efficient parallel graph algorithms can scale to the largest publicly-available graphs using a single machine with a terabyte of RAM, processing them in minutes. We give implementations of theoretically-efficient parallel algorithms for 20 important graph problems. We also present the optimizations and techniques that we used in our implementations, which were crucial in enabling us to process these large graphs quickly. We show that the running times of our implementations outperform existing state-of-the-art implementations on the largest real-world graphs. For many of the problems that we consider, this is the first time they have been solved on graphs at this scale. We have made the implementations developed in this work publicly available as the Graph-Based Benchmark Suite (GBBS). Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2018
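    Many algorithms in this line of work build on a frontier-based, level-synchronous pattern in which each frontier is processed in parallel. The sketch below shows that pattern as a sequential Python BFS for illustration only; GBBS itself is a C++ library, and this is not its code.

        # Illustration of the level-synchronous frontier pattern underlying
        # many parallel graph algorithms, written as sequential Python BFS.
        # Real implementations process each frontier (and each vertex's
        # edges) in parallel.
        from collections import defaultdict

        def bfs_levels(adj, source):
            dist = {source: 0}
            frontier = [source]
            level = 0
            while frontier:
                level += 1
                next_frontier = []
                for u in frontier:          # parallel over the frontier in practice
                    for v in adj[u]:
                        if v not in dist:
                            dist[v] = level
                            next_frontier.append(v)
                frontier = next_frontier
            return dist

        adj = defaultdict(list, {0: [1, 2], 1: [3], 2: [3], 3: []})
        print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}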

    Pair-correlation Kinetics and the Reversible Diffusion-controlled Reaction

    It has long been known that the time course of a bimolecular reaction occurring in a condensed host depends on the behavior of the nonequilibrium pair-correlation function for reactant pairs. The classical analysis of such reactions has led to a kind of standard rule: the association rate constant for a diffusion-controlled reaction is 4πDR, and this rate constant produces the fastest possible kinetics. This result is only (approximately) true for the case of an irreversible reaction, however. Here, we reexamine this old problem, looking closely at the reversible case. We report a result that challenges the standard wisdom: when the reaction is highly reversible, the relaxation of the reaction kinetics to equilibrium can be much faster than in the model in which 4πDR is the association rate constant. We suggest that our work provides a natural resolution to a well-known, long-standing controversy in the study of electrically active impurities in silicon grown by the Czochralski method.
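    For concreteness, the classical diffusion-controlled rate constant k = 4πDR can be evaluated for typical molecular values. The D and R below are order-of-magnitude assumptions for illustration, not values from the paper.

        # Worked example of the classical association rate constant k = 4*pi*D*R.
        # D and R are typical order-of-magnitude choices, not from the paper.
        import math

        D = 1e-9        # relative diffusion coefficient, m^2/s
        R = 1e-9        # reaction (contact) radius, m
        N_A = 6.022e23  # Avogadro's number, 1/mol

        k_si = 4 * math.pi * D * R             # m^3 per molecule pair per second
        k_molar = k_si * N_A * 1000.0          # convert to L / (mol * s)
        print(f"k = {k_molar:.2e} M^-1 s^-1")  # ~7.6e9, near the diffusion limit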

    The Computational Complexity of the Lorentz Lattice Gas

    The Lorentz lattice gas is studied from the perspective of computational complexity theory. It is shown that using massive parallelism, particle trajectories can be simulated in a time that scales logarithmically in the length of the trajectory. This result characterizes the "logical depth" of the Lorentz lattice gas and allows us to compare it to other models in statistical physics. Comment: 9 pages, LaTeX, to appear in J. Stat. Phys.
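    One standard way such logarithmic scaling can arise: a deterministic single-step map over a finite state space can be composed with itself by repeated squaring, so advancing T steps takes only O(log T) composition rounds, each parallelizable across states. The toy sketch below illustrates repeated squaring on a small state space; it is a generic device, not the paper's specific construction.

        # Toy illustration: advance a deterministic trajectory T steps using
        # O(log T) map compositions (repeated squaring). Each composition is
        # a data-parallel operation over the state space.

        def compose(f, g):
            """(f after g), with maps stored as explicit tables over states."""
            return [f[g[s]] for s in range(len(f))]

        def advance(step, state, t):
            """Apply `step` t times using binary exponentiation of the map."""
            result = list(range(len(step)))  # identity map
            power = step
            while t:
                if t & 1:
                    result = compose(power, result)
                power = compose(power, power)
                t >>= 1
            return result[state]

        # 6-state toy dynamics: a cyclic shift standing in for one time step.
        step = [1, 2, 3, 4, 5, 0]
        print(advance(step, 0, 1000))  # state after 1000 steps: 1000 % 6 = 4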

    Aliskiren, enalapril, or aliskiren and enalapril in heart failure

    BACKGROUND Among patients with chronic heart failure, angiotensin-converting-enzyme (ACE) inhibitors reduce mortality and hospitalization, but the role of a renin inhibitor in such patients is unknown. We compared the ACE inhibitor enalapril with the renin inhibitor aliskiren (to test superiority or at least noninferiority) and with the combination of the two treatments (to test superiority) in patients with heart failure and a reduced ejection fraction. METHODS After a single-blind run-in period, we assigned patients, in a double-blind fashion, to one of three groups: 2336 patients were assigned to receive enalapril at a dose of 5 or 10 mg twice daily, 2340 to receive aliskiren at a dose of 300 mg once daily, and 2340 to receive both treatments (combination therapy). The primary composite outcome was death from cardiovascular causes or hospitalization for heart failure. RESULTS After a median follow-up of 36.6 months, the primary outcome occurred in 770 patients (32.9%) in the combination-therapy group and in 808 (34.6%) in the enalapril group (hazard ratio, 0.93; 95% confidence interval [CI], 0.85 to 1.03). The primary outcome occurred in 791 patients (33.8%) in the aliskiren group (hazard ratio vs. enalapril, 0.99; 95% CI, 0.90 to 1.10); the prespecified test for noninferiority was not met. There was a higher risk of hypotensive symptoms in the combination-therapy group than in the enalapril group (13.8% vs. 11.0%, P=0.005), as well as higher risks of an elevated serum creatinine level (4.1% vs. 2.7%, P=0.009) and an elevated potassium level (17.1% vs. 12.5%, P<0.001). CONCLUSIONS In patients with chronic heart failure, the addition of aliskiren to enalapril led to more adverse events without an increase in benefit. Noninferiority was not shown for aliskiren as compared with enalapril.
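    For readers unfamiliar with noninferiority testing, the conclusion is read off the hazard-ratio confidence interval: the new drug is declared noninferior only if the upper bound of the CI stays below a prespecified margin. The sketch below illustrates that comparison; the margin value (1.05) is made up for illustration and is not the trial's prespecified margin.

        # Illustrative noninferiority check on a hazard-ratio CI. The margin
        # here (1.05) is a made-up value, not the trial's prespecified margin.

        def noninferior(ci_upper: float, margin: float) -> bool:
            """Noninferior only if the CI upper bound is below the margin."""
            return ci_upper < margin

        hr, ci = 0.99, (0.90, 1.10)  # aliskiren vs. enalapril, from the abstract
        print(noninferior(ci[1], margin=1.05))  # False: noninferiority not shown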