Modular Workflow Engine for Distributed Services using Lightweight Java Clients
In this article we introduce the concept and the first implementation of a
lightweight client-server framework as middleware for distributed computing. On
the client side, an installation that requires neither administrative rights nor
privileged ports can turn any computer into a worker node; only a Java runtime
environment and the JAR files comprising the workflow client are needed. A
single open server port is sufficient to connect all clients to the engine. The
engine submits data to the clients and orchestrates their work using workflow
descriptions held in a central database. Clients request new task descriptions
periodically, so the system is robust against network failures. In the basic
set-up, data uploads and downloads are handled via HTTP communication with the
server. The performance of the modular system can be further improved by using
dedicated file servers or distributed network file systems.
We demonstrate the design features of the proposed engine in real-world
applications from mechanical engineering. We have used the system on a compute
cluster in design-of-experiments studies, parameter optimisations and robustness
validations of finite element structures. Comment: 14 pages, 8 figures
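As a rough illustration of the polling protocol sketched above, the following Python snippet shows how a worker might periodically request a task over HTTP, fetch its input data, and upload the result. The endpoint names, task fields, and poll interval are hypothetical, not the engine's actual Java API.

```python
# Minimal sketch of the polling worker pattern described above. The endpoint
# paths, task format and poll interval are illustrative assumptions, not the
# engine's actual (Java-based) API.
import time
import requests

ENGINE_URL = "http://engine.example.org:8080"   # single open server port (hypothetical)

def execute(task, data):
    # Placeholder for the task-specific computation (e.g. a finite element run).
    return b"result"

def run_worker(poll_seconds=30):
    while True:
        try:
            # Ask the engine for the next task description (hypothetical endpoint).
            resp = requests.get(f"{ENGINE_URL}/task/next", timeout=10)
            if resp.status_code == 200:
                task = resp.json()
                # Download the input data via plain HTTP, as in the basic set-up.
                data = requests.get(f"{ENGINE_URL}/data/{task['input_id']}",
                                    timeout=60).content
                result = execute(task, data)
                # Upload the result back to the server.
                requests.post(f"{ENGINE_URL}/result/{task['id']}",
                              data=result, timeout=60)
        except (requests.RequestException, ValueError, KeyError):
            # Network failures are tolerated: the worker simply retries later.
            pass
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_worker()
```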
Normal values of blood pressure self-measurement in view of the 1999 World Health Organization-International Society of Hypertension guidelines
New guidelines for the management of hypertension were published in 1999 by the World Health Organization (WHO) and the International Society of Hypertension (ISH). The WHO/ISH committee adopted in principle the definition and classification of hypertension provided by JNC VI (1997). The new classification defines a blood pressure of 120/80 mm Hg as optimal and 130/85 mm Hg as the limit between normal and high-normal blood pressure. It is unclear which self-measured home blood pressure values correspond to these office blood pressure limits. In this study we reevaluated data from our Dübendorf study to determine the self-measured blood pressure values corresponding to optimal and normal office blood pressure, using the percentiles of the office and home blood pressure distributions of 503 individuals (age, 20 to 90 years; mean age, 46.5 years; 265 men, 238 women). The self-measured blood pressure values corresponding to office values of 130/85 mm Hg and 120/80 mm Hg were 124.1/79.9 mm Hg and 114.3/75.1 mm Hg, respectively. Thus, we propose 125/80 mm Hg as the home blood pressure corresponding to an office blood pressure of 130/85 mm Hg (WHO 1999: normal) and 115/75 mm Hg as that corresponding to 120/80 mm Hg (optimal). Am J Hypertens 2000;13:940–943. © 2000 American Journal of Hypertension, Ltd.
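The percentile-matching step can be illustrated with a short sketch: compute the percentile rank of an office threshold within the office distribution and read the home distribution at the same percentile. The data below are synthetic placeholders, not the study's measurements.

```python
# Sketch of the percentile-matching idea described in the abstract: locate an
# office blood pressure threshold within the office distribution, then read
# off the home (self-measured) value at the same percentile. The data below
# are randomly generated placeholders, not the Dübendorf study measurements.
import numpy as np

rng = np.random.default_rng(0)
office_sys = rng.normal(130, 18, 503)            # placeholder office systolic values
home_sys = office_sys - rng.normal(6, 5, 503)    # placeholder self-measured values

def home_equivalent(office_threshold, office_values, home_values):
    # Percentile rank of the threshold within the office distribution ...
    pct = 100.0 * np.mean(office_values <= office_threshold)
    # ... mapped onto the home distribution at that same percentile.
    return float(np.percentile(home_values, pct))

print(home_equivalent(130.0, office_sys, home_sys))  # home value matching office 130 mm Hg
```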
The role of S100 proteins and their receptor RAGE in pancreatic cancer
Pancreatic ductal adenocarcinoma (PDAC) is a devastating disease with low survival rates. Current therapeutic treatments have very poor response rates due to the high inherent chemoresistance of pancreatic cancer cells. Recent studies have suggested that the receptor for advanced glycation end products (RAGE) and its S100 protein ligands play important roles in the progression of PDAC. We discuss the potential role of S100 proteins and their receptor, RAGE, in the development and progression of pancreatic cancer.
Planetary/DOD entry technology flight experiments. Volume 3: Planetary entry flight experiments handbook
The environments produced by entry into the Jupiter and Saturn atmospheres are summarized. Worst-case design environments are identified, and the effects of entry angle, atmosphere type, and ballistic coefficient variations are presented. The range of environments experienced during Earth entry is described parametrically as a function of initial entry conditions. The sensitivity of these environments to vehicle ballistic coefficient and nose radius is also shown. An elliptical deorbit maneuver strategy is defined in terms of the velocity increment required versus initial entry conditions and apoapsis altitude. Mission time, ground track, and out-of-plane velocity penalties are also presented. Performance capabilities of typical shuttle-launched boosters are described, including the initial entry conditions attainable as a function of payload mass and apoapsis altitude.
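As a worked illustration of the kind of velocity-increment relationship the handbook tabulates, the sketch below computes the retrograde burn needed at a circular orbit altitude to lower periapsis to an entry interface, using the vis-viva equation; the numbers are generic Earth values, not taken from the report.

```python
# Illustrative sketch of a deorbit velocity-increment calculation of the kind
# the handbook tabulates: an impulsive retrograde burn from a circular orbit
# that lowers periapsis to the entry interface. Generic Earth values only.
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378137e6     # equatorial radius, m

def deorbit_delta_v(orbit_alt_m, entry_alt_m):
    r_a = R_EARTH + orbit_alt_m      # burn radius (apoapsis of the transfer ellipse)
    r_p = R_EARTH + entry_alt_m      # target periapsis at the entry interface
    a = 0.5 * (r_a + r_p)            # semi-major axis of the transfer ellipse
    v_circ = math.sqrt(MU / r_a)                         # speed on the circular orbit
    v_transfer = math.sqrt(MU * (2.0 / r_a - 1.0 / a))   # vis-viva speed after the burn
    return v_circ - v_transfer       # retrograde delta-v, m/s

# Example: deorbit from a 300 km circular orbit to a 122 km entry interface.
print(deorbit_delta_v(300e3, 122e3))   # roughly 50 m/s
```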
Half-Life of ¹⁴O
We have measured the half-life of ¹⁴O, a superallowed-decay isotope. The ¹⁴O
was produced by the ¹²C(³He,n)¹⁴O reaction using a carbon aerogel target. A
low-energy ion beam of ¹⁴O was mass separated and implanted in a thin
beryllium foil. The beta particles were counted with plastic scintillator
detectors. The resulting half-life is higher than the average value from six
earlier experiments, but agrees more closely with the most recent previous
measurement. Comment: 10 pages, 5 figures
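The abstract does not detail the analysis, but a standard way to extract a half-life from scintillator counting data is to fit an exponential decay plus a constant background to the time-binned counts; the sketch below uses purely synthetic data as an illustration, not the authors' method.

```python
# Illustrative sketch (not the authors' analysis): a half-life is commonly
# extracted from counting data by fitting an exponential decay plus a constant
# background to time-binned counts. The counts below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, n0, half_life, bkg):
    return n0 * np.exp(-np.log(2.0) * t / half_life) + bkg

rng = np.random.default_rng(1)
t = np.arange(0.0, 600.0, 5.0)                       # 5 s bins over 10 minutes
counts = rng.poisson(decay(t, 1.0e4, 100.0, 50.0))   # synthetic decay curve with noise

popt, pcov = curve_fit(decay, t, counts, p0=(1e4, 80.0, 10.0),
                       sigma=np.sqrt(np.maximum(counts, 1)))
print("fitted half-life: %.2f +/- %.2f s" % (popt[1], np.sqrt(pcov[1, 1])))
```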
Planetary/DOD entry technology flight experiments. Volume 4: DOD entry flight experiments
For abstract, see vol. 1
Planetary/DOD entry technology flight experiments. Volume 1: Executive summary
The feasibility of using the space shuttle to launch planetary and DoD entry flight experiments was examined. The results of the program are presented in two parts: (1) simulating outer planet environments during an earth entry test, the prediction of Jovian and earth radiative heating dominated environments, mission strategy, booster performance and entry vehicle design, and (2) the DoD entry test needs for the 1980's, the use of the space shuttle to meet these DoD test needs, modifications of test procedures as pertaining to the space shuttle, modifications to the space shuttle to accommodate DoD test missions and the unique capabilities of the space shuttle. The major findings of this program are summarized
TensorFlow Doing HPC
TensorFlow is a popular emerging open-source programming framework supporting
the execution of distributed applications on heterogeneous hardware. While
TensorFlow has been initially designed for developing Machine Learning (ML)
applications, in fact TensorFlow aims at supporting the development of a much
broader range of application kinds that are outside the ML domain and can
possibly include HPC applications. However, very few experiments have been
conducted to evaluate TensorFlow performance when running HPC workloads on
supercomputers. This work addresses this lack by designing four traditional HPC
benchmark applications: STREAM, matrix-matrix multiply, Conjugate Gradient (CG)
solver and Fast Fourier Transform (FFT). We analyze their performance on two
supercomputers with accelerators and evaluate the potential of TensorFlow for
developing HPC applications. Our tests show that TensorFlow can fully take
advantage of high performance networks and accelerators on supercomputers.
Running our TensorFlow STREAM benchmark, we obtain over 50% of theoretical
communication bandwidth on our testing platform. We find an approximately 2x,
1.7x and 1.8x performance improvement when increasing the number of GPUs from
two to four in the matrix-matrix multiply, CG and FFT applications
respectively. All our performance results demonstrate that TensorFlow has high
potential of emerging also as HPC programming framework for heterogeneous
supercomputers.Comment: Accepted for publication at The Ninth International Workshop on
Accelerators and Hybrid Exascale Systems (AsHES'19
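A minimal sketch of one such benchmark kernel, a matrix-matrix multiply timed in TensorFlow on a GPU, is shown below; the matrix size, device placement, and timing approach are illustrative assumptions rather than the paper's benchmark code.

```python
# Minimal sketch of a TensorFlow matrix-matrix multiply benchmark of the kind
# evaluated in the paper. Matrix size, device placement and timing are
# illustrative assumptions, not the paper's benchmark code.
import time
import tensorflow as tf

N = 8192
device = "/GPU:0" if tf.config.list_physical_devices("GPU") else "/CPU:0"

with tf.device(device):
    a = tf.random.uniform((N, N), dtype=tf.float32)
    b = tf.random.uniform((N, N), dtype=tf.float32)

    # Warm-up run so kernel compilation and data placement are not timed.
    tf.matmul(a, b)

    start = time.perf_counter()
    c = tf.matmul(a, b)
    tf.reduce_sum(c).numpy()   # force the multiply to complete; only a scalar is copied back
    elapsed = time.perf_counter() - start

flops = 2.0 * N ** 3           # multiply-adds in a dense N x N matmul
print(f"{device}: {flops / elapsed / 1e12:.2f} TFLOP/s")
```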