Macro- and micro-strain in GaN nanowires on Si(111)
We analyze the strain state of GaN nanowire ensembles by x-ray diffraction.
The nanowires are grown by molecular beam epitaxy on a Si(111) substrate in a
self-organized manner. On a macroscopic scale, the nanowires are found to be
free of strain. However, coalescence of the nanowires results in micro-strain
with a magnitude from ±0.015% to ±0.03%. This micro-strain contributes to the
linewidth observed in low-temperature photoluminescence spectra.
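X-ray diffraction separates macro-strain (peak shifts) from micro-strain (peak broadening). As a rough illustration of how a micro-strain of this order can be extracted from peak widths, a standard Williamson-Hall fit can be sketched as follows; the Bragg angles and integral breadths below are hypothetical numbers, not data from this study:

```python
import numpy as np

# Williamson-Hall analysis: beta*cos(theta) = K*lambda/D + 4*epsilon*sin(theta).
# Illustrative reflections (angles and breadths are made up for this sketch).
theta_deg = np.array([17.3, 36.5, 57.9])        # hypothetical Bragg angles
beta_rad = np.array([0.0010, 0.0016, 0.0028])   # hypothetical integral breadths (rad)
wavelength = 1.5406e-10                         # Cu K-alpha, metres

theta = np.radians(theta_deg)
x = 4 * np.sin(theta)          # abscissa: 4 sin(theta)
y = beta_rad * np.cos(theta)   # ordinate: beta cos(theta)

# Linear fit: slope = micro-strain epsilon, intercept = K*lambda/D.
epsilon, intercept = np.polyfit(x, y, 1)
size = 0.9 * wavelength / intercept   # crystallite size in metres, K ~ 0.9
```

With these placeholder inputs the fitted slope lands in the few-times-10^-4 range, i.e. a few hundredths of a percent, the same order as the micro-strain reported above.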
High temperature corrosion of Cr-W alloys in simulated syngas
The search for new high-temperature materials for energy applications continues. This presentation will focus on the degradation of Cr alloys containing 0-30% W by weight in a flowing gas mixture containing 30% CO, 8% CO2, 20% H2, 2% CH4, 0.8% H2S, 0.02% HCl, and 40% N2 by volume at temperatures up to 1000°C. A pseudo-cyclic test involving heating the specimens, holding them at temperature for varying periods, and cooling them to room temperature was employed. The mass change of the specimens was determined after each cycle. The corrosion scale on the specimens was characterized using SEM, WDX, and XRD. Various sulfides, oxides, carbides, and nitrides were identified in different layers of the scale.
Development and benchmarking of a dose rate engine for raster-scanned FLASH helium ions
Background: Radiotherapy with charged particles at high dose and ultra-high dose rate (uHDR) is a promising technique to further increase the therapeutic index of patient treatments. Dose rate is a key quantity to predict the so-called FLASH effect at uHDR settings. However, recent works introduced varying calculation models to report dose rate, which is susceptible to the delivery method, scanning path (in active beam delivery), and beam intensity. Purpose: This work introduces an analytical dose rate calculation engine for raster-scanned charged particle beams that is able to predict dose rate from the irradiation plan and recorded beam intensity. The importance of standardized dose rate calculation methods is explored here. Methods: Dose is obtained with an analytical pencil beam algorithm, using pre-calculated databases for integrated depth dose distributions and lateral penumbra. Dose rate is then calculated by combining dose information with the respective particle fluence (i.e., time information) using three dose-rate-calculation models (mean, instantaneous, and threshold-based). Dose rate predictions for all three models are compared to uHDR helium ion beam (145.7 MeV/u, range in water of approximately 14.6 cm) measurements performed at the Heidelberg Ion Beam Therapy Center (HIT) with a diamond-detector prototype. Three scanning patterns (scanned or snake-like) and four field sizes are used to investigate the dose rate differences. Results: Dose rate measurements were in good agreement with in-silico generated distributions using the engine introduced here. Relative differences in dose rate were below 10% for varying depths in water, from 2.3 to 14.8 cm, as well as laterally in a near-Bragg-peak area. In the entrance channel of the helium ion beam, dose rates were predicted within 7% on average for varying irradiated field sizes and scanning patterns. Large differences in absolute dose rate values were observed for varying calculation methods.
For raster-scanned irradiations, the deviation between the mean and threshold-based dose rate at the investigated point was found to increase with the field size, up to 63% for a 10 mm × 10 mm field, while no significant differences were observed for snake-like scanning paths. Conclusions: This work introduces the first dose rate calculation engine benchmarked to instantaneous dose rate, enabling dose rate predictions for physical and biophysical experiments. Dose rate is greatly affected by varying particle fluence, scanning path, and calculation method, highlighting the need for a consensus among the FLASH community on how to calculate and report dose rate in the future. The engine introduced here could help provide the necessary details for the analysis of the sparing effect and uHDR conditions.
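The gap between the mean and threshold-based models can be made concrete with a toy calculation at a single point. The sketch below assumes the common convention that the threshold-based window excludes a small leading and trailing accumulated dose; the function name, threshold value, and example numbers are illustrative, not taken from the paper:

```python
import numpy as np

def dose_rate_models(t, d, dose_threshold=0.01):
    """Two dose-rate metrics for one point in a scanned delivery.

    t -- delivery time stamps in seconds (sorted ascending)
    d -- dose increments in Gy deposited at each time stamp
    The threshold-based window runs from the first to the last
    `dose_threshold` Gy of accumulated dose (an assumed convention;
    the paper's exact definitions may differ).
    """
    t, d = np.asarray(t, float), np.asarray(d, float)
    total = d.sum()

    # Mean dose rate: total dose over the full delivery time.
    mean_dr = total / (t[-1] - t[0])

    # Threshold-based dose rate: ignore the low-dose tails of the delivery.
    cum = np.cumsum(d)
    start = t[np.searchsorted(cum, dose_threshold)]
    end = t[np.searchsorted(cum, total - dose_threshold)]
    threshold_dr = (total - 2 * dose_threshold) / (end - start)
    return mean_dr, threshold_dr

# A point receiving tiny doses at the start and end of a 10 s delivery:
mean_dr, thr_dr = dose_rate_models([0, 1, 2, 3, 10], [0.005, 1, 1, 1, 0.005])
```

For a delivery with long low-dose tails, as in this example, the threshold-based value can be several times the mean, consistent with the large model-dependent differences reported above.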
A new modelling approach of evaluating preventive and reactive strategies for mitigating supply chain risks
Supply chains are becoming more complex and vulnerable due to globalization and the interdependency between different risks. Existing studies have focused on identifying different preventive and reactive strategies for mitigating supply chain risks and advocating the need to adopt a specific strategy in a particular situation. However, current research has not addressed the issue of evaluating an optimal mix of preventive and reactive strategies taking into account their relative costs and benefits within the supply network setting of interconnected firms and organizations. We propose a new modelling approach for evaluating different combinations of such strategies using Bayesian belief networks. This technique helps in determining an optimal solution on the basis of the maximum improvement in the network expected loss. We demonstrate our approach through a simulation study and discuss practical and managerial implications.
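While the paper's model is built on Bayesian belief networks, the cost-benefit trade-off it optimizes can be illustrated with a toy enumeration over strategy mixes. Every risk, probability, cost, and reduction factor below is hypothetical, and the exhaustive search stands in for the paper's network-based evaluation:

```python
from itertools import combinations

# Hypothetical risks: (baseline probability, loss impact in currency units).
risks = {"supplier_failure": (0.20, 500_000),
         "transport_delay":  (0.30, 120_000)}

# Hypothetical strategies: preventive ones cut a risk's probability,
# reactive ones cut its impact; each has an adoption cost.
strategies = {
    "dual_sourcing":  {"cost": 40_000, "prob_cut":   {"supplier_failure": 0.5}},
    "safety_stock":   {"cost": 25_000, "impact_cut": {"transport_delay": 0.6}},
    "backup_carrier": {"cost": 15_000, "prob_cut":   {"transport_delay": 0.5}},
}

def expected_loss(chosen):
    """Network expected loss plus mitigation cost for a strategy mix."""
    total = 0.0
    for risk, (p, impact) in risks.items():
        for name in chosen:
            s = strategies[name]
            p *= s.get("prob_cut", {}).get(risk, 1.0)
            impact *= s.get("impact_cut", {}).get(risk, 1.0)
        total += p * impact
    return total + sum(strategies[n]["cost"] for n in chosen)

# Evaluate every combination (including doing nothing) and keep the mix
# with the lowest expected loss plus cost.
best = min((c for r in range(len(strategies) + 1)
            for c in combinations(strategies, r)),
           key=expected_loss)
```

With these placeholder numbers, the optimum pairs a preventive sourcing strategy with a cheap reactive one rather than buying every option, which is the kind of mix-selection question the paper's approach answers.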
Point Interaction in two and three dimensional Riemannian Manifolds
We present a non-perturbative renormalization of the bound state problem of n
bosons interacting with finitely many Dirac delta interactions on two and three
dimensional Riemannian manifolds using the heat kernel. We formulate the
problem in terms of a new operator called the principal or characteristic
operator. In order to investigate the problem in more detail, we then restrict
the problem to the one-particle sector. The lower bound of the ground state energy
is found for a general class of manifolds, e.g., for compact and Cartan-Hadamard
manifolds. The estimate of the bound state energies in the tunneling regime is
calculated by perturbation theory. Non-degeneracy and uniqueness of the ground
state are proven by the Perron-Frobenius theorem. Moreover, pointwise bounds on
the wave function are given, and all these results are consistent with those
given in standard quantum mechanics. The renormalization procedure does not lead to
any radical change in these cases. Finally, renormalization group equations are
derived and the beta-function is exactly calculated. This work is a natural
continuation of our previous work based on a novel approach to the
renormalization of point interactions, developed by S. G. Rajeev. Comment: 43 pages.
Study of KIC 8561221 observed by Kepler: an early red giant showing depressed dipolar modes
The continuous high-precision photometric observations provided by the CoRoT
and Kepler space missions have allowed us to better understand the structure
and dynamics of red giants using asteroseismic techniques. A small fraction of
these stars shows dipole modes with unexpectedly low amplitudes. The reduction
in amplitude is more pronounced for stars with higher frequency of maximum
power. In this work we want to characterize KIC 8561221 in order to confirm
that it is currently the least evolved star among this peculiar subset and to
discuss several hypotheses that could help explain the reduction of the dipole
mode amplitudes. We used Kepler short- and long-cadence data combined with
spectroscopic observations to infer the stellar structure and dynamics of KIC
8561221. We then discussed different scenarios that could contribute to the
reduction of the dipole amplitudes such as a fast rotating interior or the
effect of a magnetic field on the properties of the modes. We also performed a
detailed study of the inertia and damping of the modes. We have been able to
characterize 37 oscillation modes, in particular a few dipole modes above
nu_max that exhibit nearly normal amplitudes. We have inferred a surface
rotation period of around 91 days and uncovered the existence of a variation in
the surface magnetic activity during the last 4 years. As expected, the
internal regions of the star probed by the l = 2 and 3 modes spin 4 to 8 times
faster than the surface. With our grid of standard models we are able to
properly fit the observed frequencies. Our model calculation of mode inertia
and damping give no explanation for the depressed dipole modes. A fast rotating
core is also ruled out as a possible explanation. Finally, we do not have any
observational evidence of the presence of a strong deep magnetic field inside
the star. Comment: Accepted in A&A. 17 pages, 16 figures.
Asteroseismology of the solar analogs 16 Cyg A & B from Kepler observations
The evolved solar-type stars 16 Cyg A & B have long been studied as solar
analogs, yielding a glimpse into the future of our own Sun. The orbital period
of the binary system is too long to provide meaningful dynamical constraints on
the stellar properties, but asteroseismology can help because the stars are
among the brightest in the Kepler field. We present an analysis of three months
of nearly uninterrupted photometry of 16 Cyg A & B from the Kepler space
telescope. We extract a total of 46 and 41 oscillation frequencies for the two
components respectively, including a clear detection of octupole (l=3) modes in
both stars. We derive the properties of each star independently using the
Asteroseismic Modeling Portal, fitting the individual oscillation frequencies
and other observational constraints simultaneously. We evaluate the systematic
uncertainties from an ensemble of results generated by a variety of stellar
evolution codes and fitting methods. The optimal models derived by fitting each
component individually yield a common age (t=6.8+/-0.4 Gyr) and initial
composition (Z_i=0.024+/-0.002, Y_i=0.25+/-0.01) within the uncertainties, as
expected for the components of a binary system, bolstering our confidence in
the reliability of asteroseismic techniques. The longer data sets that will
ultimately become available will allow future studies of differential rotation,
convection zone depths, and long-term changes due to stellar activity cycles. Comment: 6 pages, 2 figures, 2 tables, ApJ Letters (accepted).
Automatic Detection of User Abilities through the SmartAbility Framework
This paper presents a proposed smartphone application for the unique SmartAbility Framework that
supports interaction with technology for people with reduced physical ability, through focusing on
the actions that they can perform independently. The Framework is a culmination of knowledge
obtained through previously conducted technology feasibility trials and controlled usability
evaluations involving the user community. The Framework is an example of ability-based design that
focuses on the abilities of users instead of their disabilities. The paper includes a summary of
Versions 1 and 2 of the Framework, including the results of a two-phased validation approach,
conducted at the UK Mobility Roadshow and via a focus group of domain experts. A holistic model
developed by adapting the House of Quality (HoQ) matrix of the Quality Function Deployment (QFD)
approach is also described. A systematic literature review of sensor technologies built into smart
devices establishes the capabilities of sensors in the Android and iOS operating systems. The review
defines a set of inclusion and exclusion criteria, as well as search terms used to elicit literature from
online repositories. The key contribution is the mapping of ability-based sensor technologies onto
the Framework, to enable the future implementation of a smartphone application. Through the
exploitation of the SmartAbility application, the Framework will increase technology adoption amongst people
with reduced physical ability and provide a promotional tool for assistive technology manufacturers.
Improving Phase Change Memory Performance with Data Content Aware Access
A prominent characteristic of write operation in Phase-Change Memory (PCM) is
that its latency and energy are sensitive to the data to be written as well as
the content that is overwritten. We observe that overwriting unknown memory
content can incur significantly higher latency and energy compared to
overwriting known all-zeros or all-ones content. This is because all-zeros or
all-ones content is overwritten by programming the PCM cells only in one
direction, i.e., using either SET or RESET operations, not both. In this paper,
we propose data content aware PCM writes (DATACON), a new mechanism that
reduces the latency and energy of PCM writes by redirecting these requests to
overwrite memory locations containing all-zeros or all-ones. DATACON operates
in three steps. First, it estimates how much a PCM write access would benefit
from overwriting known content (e.g., all-zeros, or all-ones) by
comprehensively considering the number of set bits in the data to be written,
and the energy-latency trade-offs for SET and RESET operations in PCM. Second,
it translates the write address to a physical address within memory that
contains the best type of content to overwrite, and records this translation in
a table for future accesses. We exploit data access locality in workloads to
minimize the address translation overhead. Third, it re-initializes unused
memory locations with known all-zeros or all-ones content in a manner that does
not interfere with regular read and write accesses. DATACON overwrites unknown
content only when it is absolutely necessary to do so. We evaluate DATACON with
workloads from state-of-the-art machine learning applications, SPEC CPU2017,
and NAS Parallel Benchmarks. Results demonstrate that DATACON significantly
improves system performance and memory system energy consumption compared to
the best of performance-oriented state-of-the-art techniques. Comment: 18 pages, 21 figures, accepted at ACM SIGPLAN International Symposium
on Memory Management (ISMM).
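The benefit estimation in DATACON's first step can be sketched as a cost comparison driven by the number of set bits in the data to be written. The per-bit costs below are placeholders, not the paper's measured latencies or energies:

```python
# Hypothetical per-bit write costs (arbitrary units). Overwriting known
# all-zeros needs only SET pulses; known all-ones needs only RESET pulses;
# unknown old content may require both, the most expensive case.
SET_COST, RESET_COST, UNKNOWN_COST = 1.0, 2.5, 3.5

def write_cost(data, width, known_content):
    """Estimated cost of writing `data` over the given content type."""
    ones = bin(data & ((1 << width) - 1)).count("1")
    zeros = width - ones
    if known_content == "all_zeros":
        return ones * SET_COST        # program only the 1-bits
    if known_content == "all_ones":
        return zeros * RESET_COST     # program only the 0-bits
    return width * UNKNOWN_COST       # worst case: unknown old content

def best_target(data, width=64):
    """Step 1 of DATACON: pick the known content cheapest to overwrite."""
    return min(("all_zeros", "all_ones"),
               key=lambda c: write_cost(data, width, c))
```

Steps 2 and 3 of the mechanism, the address-translation table and the background re-initialization of unused locations, would sit on top of an estimate like this one.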