Malpositioned endoscopically inserted biliary stent causing massive hematemesis managed with vascular plug and stenting
A 46-year-old man with a history of hepatitis B cirrhosis and hepatocellular carcinoma (HCC), status post liver transplantation two years earlier complicated by HCC recurrence and biliary stenosis, presented with hypovolemic shock and melena one month after endoscopic exchange of plastic biliary stents. During endoscopic retrograde cholangiopancreatography, the patient was found to have hemobilia and developed uncontrollable bleeding after a common bile duct (CBD) sweep, which was managed by insertion of a stent-graft across the major papilla into the presumed CBD. The bleeding continued despite subsequently negative angiography, and computed tomography angiography showed the stent-graft malpositioned between the major papilla and the inferior vena cava (IVC). This was successfully managed by deploying a vascular plug inside the stent-graft and excluding it with a stent placed across the affected segment of the IVC.
Mapping analysis and planning system for the John F. Kennedy Space Center
Environmental management, impact assessment, research, and monitoring are multidisciplinary activities that are ideally suited to a multi-media approach to environmental problem solving. Geographic information systems (GIS), simulation models, neural networks, and expert-system software are some of the advancing technologies being used for data management, query, analysis, and display. At the 140,000-acre John F. Kennedy Space Center, the Advanced Software Technology group has been supporting the development and implementation of a program that integrates these and other rapidly evolving hardware and software capabilities into a comprehensive Mapping, Analysis and Planning System (MAPS) based in a workstation/local area network environment. An expert-system shell is being developed to link the various databases and guide users through the numerous stages of facility siting and environmental assessment. The expert-system shell approach is appealing for its ease of data access by management-level decision makers while maintaining the involvement of the data specialists. This, as well as increased efficiency and accuracy in data analysis and report preparation, can benefit any organization involved in natural resources management.
Methods to estimate equipment and materials that are candidates for removal during the decontamination of fuel processing facilities
The methodology presented in this report provides a model for estimating the volume and types of waste expected from the removal of equipment and other materials during Decontamination and Decommissioning (D and D) of canyon-type fuel reprocessing facilities. This methodology offers a rough estimation technique based on comparative analysis with a similar, previously studied reprocessing facility. The approach is especially useful as a planning tool to save time and money while preparing for final D and D. The basic methodology described here can be extended for use at other types of facilities, such as glovebox or reactor facilities.
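The comparative-analysis idea above can be sketched as a simple scaling of a previously characterized reference facility. This is a minimal illustrative sketch, not the report's model; all category names, areas, and volumes below are invented assumptions.

```python
# Hypothetical comparative-scaling sketch: estimate D and D waste volumes for a
# target facility from a previously studied reference facility, scaled by a
# simple size ratio. Every number here is illustrative, not from the report.

# Reference facility: waste volume (m^3) by category, from a prior D and D study.
reference_waste = {"process equipment": 1200.0, "piping": 450.0, "structural steel": 800.0}

reference_canyon_area = 10000.0  # m^2, assumed size of the reference canyon
target_canyon_area = 7500.0      # m^2, assumed size of the facility being planned

# Rough estimate: scale each waste category by the ratio of facility sizes.
scale = target_canyon_area / reference_canyon_area
estimated_waste = {cat: vol * scale for cat, vol in reference_waste.items()}
```

In practice each category would be scaled by a category-appropriate metric (equipment count, piping length, structural tonnage) rather than a single area ratio.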
Discrete structure of ultrathin dielectric films and their surface optical properties
The boundary problem of linear classical optics concerning the interaction of electromagnetic radiation with a thin dielectric film has been solved under explicit consideration of its discrete structure. The main attention has been paid to the investigation of the near-zone optical response of dielectrics. The laws of reflection and refraction for discrete structures in the case of a regular atomic distribution are studied, and the structure of the evanescent harmonics induced by an external plane wave near the surface is investigated in detail. It is shown by means of analytical and numerical calculations that, due to the existence of the evanescent harmonics, the laws of reflection and refraction at distances from the surface of less than two interatomic distances differ fundamentally from the Fresnel laws. From a practical point of view, the results of this work might be useful for near-field optical microscopy of ultrahigh resolution.
Comment: 25 pages, 16 figures, LaTeX 2.09, to be published in Phys. Rev.
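The length scale quoted above can be made concrete with a back-of-the-envelope calculation: a spatial harmonic of the atomic lattice with transverse wavenumber k_x > k_0 decays away from the surface as exp(-z*sqrt(k_x^2 - k_0^2)). A minimal sketch, assuming an illustrative optical wavelength and interatomic spacing (not values from the paper):

```python
import math

# Assumed parameters (illustrative only): decay of an evanescent harmonic
# generated by a regular atomic lattice of period a, illuminated at
# vacuum wavelength lam.
lam = 500e-9   # optical wavelength, m (assumption)
a = 0.3e-9     # interatomic distance, m (assumption)

k0 = 2 * math.pi / lam            # free-space wavenumber
kx = 2 * math.pi / a              # lowest lattice spatial harmonic, kx >> k0
kappa = math.sqrt(kx**2 - k0**2)  # decay constant of the evanescent wave

def relative_amplitude(z):
    """Evanescent amplitude exp(-kappa*z), relative to its surface value."""
    return math.exp(-kappa * z)

# The decay length is ~a/(2*pi): by z = 2a the lattice harmonic has fallen by
# exp(-4*pi) ~ 3e-6, so beyond a few interatomic distances only the ordinary
# (Fresnel) propagating response survives.
amp_2a = relative_amplitude(2 * a)
```

This matches the abstract's statement that deviations from the Fresnel laws are confined to distances below roughly two interatomic spacings.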
Polarization state of the optical near-field
The polarization state of the optical electromagnetic field lying several nanometers above complex dielectric structures reveals the intricate light-matter interaction that occurs in this near-field zone. This information can only be extracted from an analysis of the polarization state of the light detected in the near-field. These polarization states can be calculated by different numerical methods well suited to near-field optics. In this paper, we apply two different techniques (the Localized Green Function Method and the Differential Theory of Gratings) to separate each polarization component associated with both the electric and magnetic optical near-fields produced by nanometer-sized objects. The analysis is carried out in two stages: in the first stage, we use a simple dipolar model to gain insight into the physical origin of the near-field polarization state; in the second stage, we calculate accurate numerical field maps, simulating experimental near-field light detection, to supplement the data produced by the analytical models. We conclude this study by demonstrating the role played by the near-field polarization in the formation of the local density of states.
Comment: 9 pages, 11 figures, accepted for publication in Phys. Rev.
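The dipolar model mentioned in the first stage can be illustrated with the standard decomposition of an oscillating dipole's field into quasi-static, intermediate, and radiative terms. The sketch below (illustrative wavelength and distance, not the paper's parameters) shows why the polarization state a few nanometers above an object differs from the far field: the quasi-static 1/(kr)^3 term overwhelmingly dominates there.

```python
import math

# Dipolar-model sketch (assumed parameters): relative sizes of the three
# radial dependences in an oscillating dipole's field, in units of the
# common prefactor.
lam = 633e-9              # illumination wavelength, m (assumption)
k = 2 * math.pi / lam     # free-space wavenumber

def field_terms(r):
    """Quasi-static, intermediate, and radiative term magnitudes at distance r."""
    kr = k * r
    near = 1.0 / kr**3          # quasi-static term, dominates for kr << 1
    intermediate = 1.0 / kr**2  # induction term
    radiative = 1.0 / kr        # far-field term, dominates for kr >> 1
    return near, intermediate, radiative

# A few nanometers above a nanometer-sized object, kr << 1: the quasi-static
# term dominates, so the local polarization state (including longitudinal
# field components) differs strongly from the detected far field.
near, mid_term, rad = field_terms(5e-9)
```

At r = 5 nm and lam = 633 nm, kr is about 0.05, so the quasi-static term exceeds the radiative one by a factor of roughly 1/(kr)^2, i.e. several hundred.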
Buckling and force propagation along intracellular microtubules
Motivated by recent experiments showing the compressive buckling of microtubules in cells, we study theoretically the mechanical response of, and force propagation along, elastic filaments embedded in a non-linear elastic medium. We find that embedded microtubules buckle when their compressive load exceeds a critical value.
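For orientation, the critical load in the simpler linear benchmark problem (a filament of bending rigidity \(\kappa\) on a linear elastic foundation of modulus \(\alpha\), force per unit length per unit deflection) follows from minimizing the mode-dependent buckling threshold over the wavenumber \(q\) of the mode \(\sim\sin(qx)\). This is a standard textbook result, not the paper's non-linear calculation:

```latex
% Filament of bending rigidity \kappa on a linear elastic foundation of
% modulus \alpha; minimize the mode-dependent threshold over wavenumber q:
f(q) = \kappa q^{2} + \frac{\alpha}{q^{2}},
\qquad
q_{*} = \left(\frac{\alpha}{\kappa}\right)^{1/4},
\qquad
f_{c} = f(q_{*}) = 2\sqrt{\kappa\alpha},
\qquad
\lambda_{c} = \frac{2\pi}{q_{*}} = 2\pi\left(\frac{\kappa}{\alpha}\right)^{1/4}.
```

Unlike a free Euler column, the surrounding medium selects a finite buckling wavelength \(\lambda_c\) independent of filament length, which is why embedded microtubules buckle in short wavelengths at loads set by both \(\kappa\) and the medium stiffness.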
Reproducing the Stellar Mass/Halo Mass Relation in Simulated LCDM Galaxies: Theory vs Observational Estimates
We examine the present-day total stellar-to-halo mass (SHM) ratio as a function of halo mass for a new sample of simulated field galaxies, using fully cosmological, LCDM, high-resolution SPH + N-body simulations. These simulations include an explicit treatment of metal-line cooling, dust and self-shielding, H2-based star formation, and supernova-driven gas outflows. The 18 simulated halos have masses ranging from a few times 10^8 to nearly 10^12 solar masses. At z=0 our simulated galaxies have a baryon content and morphology typical of field galaxies. Over a stellar mass range of 2.2 x 10^3 to 4.5 x 10^10 solar masses, we find extremely good agreement between the SHM ratio in the simulations and the present-day predictions from the statistical Abundance Matching Technique presented in Moster et al. (2012). This improvement over past simulations is due to a number of systematic factors, each decreasing the SHM ratios: 1) gas outflows that reduce the overall star formation efficiency but allow for the formation of a cold gas component; 2) estimating the stellar masses of simulated galaxies using artificial observations and photometric techniques similar to those used in observations; and 3) accounting for a systematic overestimate, of up to 30 percent, in the total halo masses from DM-only simulations, due to the neglect of baryon loss over cosmic time. Our analysis suggests that stellar mass estimates based on photometric magnitudes can underestimate the contribution of old stellar populations to the total stellar mass, leading to stellar mass errors of up to 50 percent for individual galaxies. These results highlight the importance of using proper techniques to compare simulations with observations, and reduce the perceived tension between the star formation efficiency in galaxy formation models and in real galaxies.
Comment: Submitted to ApJ; 9 pages, 5 figures
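The abundance-matching baseline referred to above rests on a simple principle: rank-order halos by mass and assign stellar masses so that the cumulative number densities of the two populations agree. A toy sketch of that principle (the actual Moster et al. 2012 relation is a fitted parametric form; all distributions below are invented for illustration):

```python
import numpy as np

# Illustrative rank-ordering sketch of abundance matching; toy, invented
# mass distributions, not the Moster et al. (2012) fit.
rng = np.random.default_rng(0)

n = 10_000
log_mhalo = rng.uniform(8.0, 12.0, n)       # toy halo masses, log10(Msun)
log_mstar_pool = rng.uniform(3.0, 10.7, n)  # toy stellar masses, log10(Msun)

# Rank-order matching: the i-th most massive halo receives the i-th largest
# stellar mass, so cumulative number densities agree by construction.
halo_order = np.argsort(log_mhalo)
log_mstar = np.empty(n)
log_mstar[halo_order] = np.sort(log_mstar_pool)

# Stellar-to-halo mass ratio, the quantity compared against the simulations.
shm_ratio = 10.0 ** (log_mstar - log_mhalo)
```

The systematic corrections listed in the abstract (outflows, photometric stellar masses, and the up-to-30-percent halo-mass overestimate in DM-only runs) all act on the two axes of exactly this ratio.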
Orientation bias of optically selected galaxy clusters and its impact on stacked weak-lensing analyses
Weak-lensing measurements of the averaged shear profiles of galaxy clusters binned by some proxy for cluster mass are commonly converted to cluster mass estimates under the assumption that these cluster stacks are spherically symmetric. In this paper, we test whether this assumption holds for optically selected clusters binned by estimated optical richness. Using mock catalogues created from N-body simulations realistically populated with galaxies, we ran a suite of optical cluster finders and estimated their optical richness. We binned the galaxy clusters by true cluster mass and by estimated optical richness, and measured the ellipticity of these stacks. We find that the processes of optical cluster selection and richness estimation are biased, leading to stacked structures that are elongated along the line of sight. We show that weak lensing alone cannot measure the size of this orientation bias. Weak-lensing masses of stacked optically selected clusters are overestimated by up to 3–6 per cent when clusters can be uniquely associated with haloes. This effect is large enough to lead to significant biases in the cosmological parameters derived from large surveys like the Dark Energy Survey, if not calibrated via simulations or fitted simultaneously. This bias probably also contributes to the observed discrepancy between the observed and predicted Sunyaev–Zel’dovich signal of optically selected clusters.
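The stacking procedure under test can be sketched in a few lines: bin clusters by estimated richness and average their tangential-shear profiles within each bin. The sketch below is purely illustrative (invented profile shapes, richness range, and bin edges); the paper's point is that the resulting stacks are preferentially elongated along the line of sight, which this toy averaging cannot reveal.

```python
import numpy as np

# Toy stacking sketch: average tangential-shear profiles of clusters binned by
# estimated optical richness. All shapes and values are invented assumptions.
rng = np.random.default_rng(1)

n_clusters, n_radii = 200, 15
richness = rng.integers(10, 100, n_clusters)  # estimated optical richness
radii = np.linspace(0.1, 3.0, n_radii)        # projected radius, Mpc (toy)

# Toy individual profiles: amplitude grows with richness, plus shape noise.
profiles = (richness[:, None] / 50.0) / radii[None, :] \
           + rng.normal(0.0, 0.05, (n_clusters, n_radii))

bins = [(10, 30), (30, 60), (60, 100)]
stacked = {}
for lo, hi in bins:
    sel = (richness >= lo) & (richness < hi)
    stacked[(lo, hi)] = profiles[sel].mean(axis=0)  # stacked shear profile
```

Each stacked profile would then be fitted with a spherically symmetric halo model; the orientation bias enters because that spherical assumption fails for optically selected stacks.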
Angular Momentum of Early- and Late-type Galaxies: Nature or Nurture?
We investigate the origin, the shape, the scatter, and the cosmic evolution of the observed relationship between specific angular momentum and stellar mass in early-type (ETGs) and late-type galaxies (LTGs). Specifically, we exploit the observed star-formation efficiency and chemical abundance to infer the fraction f_inf of baryons that infall toward the central regions of galaxies, where star formation can occur. We find f_inf ≈ 1 for LTGs and for ETGs, consistent with a biased collapse. By comparing with the locally observed specific angular momentum vs. stellar mass relations for LTGs and ETGs, we estimate the fraction of the initial specific angular momentum of the infalling gas that is retained in the stellar component: for LTGs this fraction is large, in line with the classic disc formation picture; for ETGs we infer a substantially smaller value, which can be traced back to evolution via dry mergers. We also show that the observed scatter in the specific angular momentum vs. stellar mass relation for both galaxy types is mainly contributed by the intrinsic dispersion in the spin parameters of the host dark matter halos. The biased-collapse-plus-mergers scenario implies that the specific angular momentum in the stellar components of ETG progenitors is already close to the local values at high redshift, in pleasing agreement with observations. All in all, we argue that such behavior is imprinted by nature and not substantially nurtured by the environment.
Automated ventricular systems segmentation in brain CT images by combining low-level segmentation and high-level template matching
Background: Accurate analysis of CT brain scans is vital for the diagnosis and treatment of Traumatic Brain Injuries (TBI). Automatic processing of these CT brain scans could speed up the decision-making process, lower the cost of healthcare, and reduce the chance of human error. In this paper, we focus on automatic processing of CT brain images to segment and identify the ventricular systems. The segmentation of the ventricles provides quantitative measures of their changes in the brain, which form vital diagnostic information.
Methods: First, all CT slices are aligned by detecting the ideal midline in every image. An initial estimate of the ideal midline of the brain is found from skull symmetry and is then refined using detected anatomical features. A two-step method is then used for ventricle segmentation. First, a low-level, per-pixel segmentation is applied to the CT images; for this step, both Iterated Conditional Mode (ICM) and Maximum A Posteriori Spatial Probability (MASP) are evaluated and compared. The second step applies a template-matching algorithm to identify objects in the initial low-level segmentation as ventricles. Experiments on ventricle segmentation are conducted using a relatively large CT dataset containing mild and severe TBI cases.
Results: Experiments show that the rate of acceptable ideal midline detection is over 95%. Two measures are defined to evaluate the ventricle recognition results: the first is a sensitivity-like measure and the second is a false-positive-like measure. For the first measure, the rate is 100%, indicating that all ventricles are identified in all slices. The false-positive-like measure is 8.59%. We also point out the similarities and differences between the ICM and MASP algorithms, through both their mathematical relationships and their segmentation results on CT images.
Conclusion: The experiments show the reliability of the proposed algorithms. The novelty of the proposed method lies in its incorporation of anatomical features for ideal midline detection and in the two-step ventricle segmentation method. Our method offers the following improvements over existing approaches: accurate detection of the ideal midline, and accurate recognition of the ventricles using both anatomical features and spatial templates derived from Magnetic Resonance Images.
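The high-level template-matching step can be sketched as scoring each candidate region from the low-level segmentation against a spatial ventricle template by overlap. This is a minimal illustrative sketch, not the paper's algorithm: the Dice-overlap score, the toy masks, and the 0.5 threshold are all assumptions.

```python
import numpy as np

# Sketch of the high-level step: score a connected region of the binary
# low-level segmentation against a spatial template. Template, candidate
# region, and threshold are invented for illustration.

def template_match_score(region_mask, template_mask):
    """Dice overlap between a candidate region and a ventricle template."""
    inter = np.logical_and(region_mask, template_mask).sum()
    total = region_mask.sum() + template_mask.sum()
    return 2.0 * inter / total if total else 0.0

# Toy 2D example: a candidate region that largely overlaps the template.
template = np.zeros((10, 10), bool)
template[3:7, 3:7] = True     # where a ventricle is expected to lie (toy)
candidate = np.zeros((10, 10), bool)
candidate[4:8, 3:7] = True    # a region from the low-level segmentation (toy)

score = template_match_score(candidate, template)
is_ventricle = score > 0.5    # acceptance threshold is an assumption
```

In the paper's pipeline the templates are derived from Magnetic Resonance Images and the candidate regions come from the ICM or MASP segmentation, but the accept/reject logic follows this overlap-scoring pattern.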