Laser-Induced Damage Studies in Borosilicate Glass Using Nanosecond and Sub-nanosecond Pulses
The damage mechanisms induced by laser pulses of different durations in
borosilicate glass, which is widely used for making confinement-geometry
targets important for laser-driven shock multiplication and elongation of the
pressure pulse, are studied. We measured the front- and rear-surface damage
thresholds of borosilicate glass and their dependence on the laser parameters.
In this paper, we also study the thermal effects on the damage diameters
generated at the time of plasma formation. The induced damage widths,
geometries, and microstructural changes are measured and analyzed with optical
microscopy, scanning electron microscopy, and Raman spectroscopy. The results
show that at low energies the damage is symmetric and its width increases
nonlinearly with laser intensity. The optical spectrum emitted during
breakdown is also investigated and used to characterize the plasma, in
particular its temperature and free electron density. Optical emission lines
from Si I at 500 nm, Si II at 385 nm, and Si III at 455 nm are used for the
temperature calculations.
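
As a concrete illustration of the kind of temperature calculation mentioned
above, the sketch below shows a standard Boltzmann-plot estimate from relative
line intensities. The line data (intensities, statistical weights, transition
probabilities, upper-level energies) are placeholders for illustration only,
not values from the paper.

    # Hedged sketch: Boltzmann-plot estimate of the plasma (excitation)
    # temperature from relative line intensities.  All line data below are
    # illustrative placeholders, not the values used in the paper.
    import numpy as np

    K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

    # For each line: measured intensity I, wavelength (nm), statistical weight
    # g_k, transition probability A_ki (s^-1), upper-level energy E_k (eV).
    lines = [
        # (I,    lam_nm, g_k, A_ki,   E_k)
        (1.00,  500.0,  5,  1.0e7,  5.0),
        (0.40,  385.0,  3,  2.0e7,  7.0),
        (0.15,  455.0,  1,  3.0e7,  9.0),
    ]

    I, lam, g, A, E = map(np.array, zip(*lines))

    # Boltzmann plot: ln(I * lambda / (g * A)) = const - E_k / (k_B * T)
    y = np.log(I * lam / (g * A))
    slope, intercept = np.polyfit(E, y, 1)
    T = -1.0 / (K_B_EV * slope)
    print(f"Estimated excitation temperature: {T:.0f} K")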
Almost Budget Balanced Mechanisms with Scalar Bids For Allocation of a Divisible Good
This paper is about allocation of an infinitely divisible good to several
rational and strategic agents. The allocation is done by a social planner who
has limited information because the agents' valuation functions are taken to be
private information known only to the respective agents. We allow only a scalar
signal, called a bid, from each agent to the social planner. Yang and Hajek
[Jour. on Selected Areas in Comm., 2007] as well as Johari and Tsitsiklis
[Jour. of Oper. Res., 2009] proposed a scalar strategy Vickrey-Clarke-Groves
(SSVCG) mechanism with efficient Nash equilibria. We consider a setting where
the social planner desires minimal budget surplus. Example situations include
fair sharing of Internet resources and auctioning of certain public goods where
revenue maximization is not a consideration. Under the SSVCG framework, we
propose a mechanism that is efficient and comes close to budget balance by
returning much of the payments back to the agents in the form of rebates. We
identify a design criterion for almost budget balance, impose feasibility
and voluntary participation constraints, simplify the constraints, and arrive
at a convex optimization problem to identify the parameters of the rebate
functions. The convex optimization problem has a linear objective function and
a continuum of linear constraints. We propose a solution method that involves a
finite number of constraints, and identify the number of samples sufficient for
a good approximation.Comment: Accepted for publication in the European Journal of Operational
Research (EJOR
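
To make the last step concrete, here is a minimal sketch of replacing a
continuum of linear constraints by a finite sample and solving the resulting
LP. The objective and the constraint family below are invented stand-ins, not
the paper's actual rebate-design problem.

    # Hedged sketch: approximating an LP with a continuum of linear constraints
    # by sampling finitely many of them.  The objective and constraint family
    # are placeholders, not the paper's rebate-design formulation.
    import numpy as np
    from scipy.optimize import linprog

    def solve_sampled_lp(n_vars=4, n_samples=200):
        c = np.ones(n_vars)                    # placeholder linear objective
        t = np.linspace(0.0, 1.0, n_samples)   # samples of the constraint index
        # One linear constraint a(t) . x <= b(t) per sampled t.
        A_ub = np.vander(t, n_vars, increasing=True)  # a(t) = (1, t, t^2, ...)
        b_ub = 1.0 + t                                # b(t) = 1 + t
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(-1.0, 1.0)] * n_vars, method="highs")
        return res.x, res.fun

    x_opt, val = solve_sampled_lp()
    print("sampled-LP solution:", x_opt, "objective:", val)

A finer sampling grid tightens the approximation; the paper's contribution is
identifying how many samples suffice for a good approximation.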
Multiple Description Vector Quantization with Lattice Codebooks: Design and Analysis
The problem of designing a multiple description vector quantizer with lattice
codebook Lambda is considered. A general solution is given to a labeling
problem which plays a crucial role in the design of such quantizers. Numerical
performance results are obtained for quantizers based on the lattices A_2 and
Z^i, i=1,2,4,8, that make use of this labeling algorithm. The high-rate
squared-error distortions for this family of L-dimensional vector quantizers
are then analyzed for a memoryless source with probability density function p
and differential entropy h(p) < infty. For any a in (0,1) and rate pair (R,R),
it is shown that the two-channel distortion d_0 and the channel 1 (or channel
2) distortions d_s satisfy lim_{R -> infty} d_0 2^{2R(1+a)} = (1/4) G(Lambda)
2^{2h(p)} and lim_{R -> infty} d_s 2^{2R(1-a)} = G(S_L) 2^{2h(p)}, where
G(Lambda) is the normalized second moment of a Voronoi cell of the lattice
Lambda and G(S_L) is the normalized second moment of a sphere in L dimensions.
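
For reference, the constant G(S_L) in the side-distortion limit has a standard
closed form from high-rate quantization theory, G(S_L) = Gamma(L/2 + 1)^{2/L} /
((L + 2) * pi). The short sketch below evaluates it for the lattice dimensions
quoted in the abstract; this formula is textbook material, not a result of the
paper.

    # Hedged sketch: normalized second moment of an L-dimensional ball, the
    # constant G(S_L) in the side-distortion asymptotics quoted above.
    import math

    def g_sphere(L: int) -> float:
        """Normalized second moment of an L-dimensional ball (standard formula)."""
        return math.gamma(L / 2 + 1) ** (2 / L) / ((L + 2) * math.pi)

    for L in (1, 2, 4, 8):  # the dimensions of the Z^i quantizers in the abstract
        print(f"L = {L}:  G(S_L) = {g_sphere(L):.5f}   (cubic lattice G(Z^L) = {1/12:.5f})")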
Assessment of Musculoskeletal Disorder Risk in the Watch Assembly Industry
Watch assembly is a repetitive, monotonous, and highly visually demanding task, and it has been identified as a likely contributor to the development of work-related musculoskeletal disorders (WMSDs). A large number of young workers are engaged in assembly units where the design of the workstation and work environment does not adequately fulfill the ergonomic requirements for correct manual assembly. Because of poorly designed workstations, workers have to adopt awkward work postures, which leads to musculoskeletal disorders (MSDs) and occupational health hazards. Therefore, a survey of the watch assembly industry was conducted to assess the prevalence of work-related musculoskeletal disorders among female workers. A total of 120 respondents were selected from two units, one carrying out manual assembly and the other running an automatic assembly line. RULA (Rapid Upper Limb Assessment) was used to assess the working postures and to recommend the changes to be made. A body map was used to examine the prevalence of body pain. Frequencies and percentages were used for the analysis of the data. This study has shown that women workers involved in assembly work were confronted with WMSDs. Watch assembly workers carrying out repetitive tasks with their hands and fingers and working in awkward postures had a high prevalence of pain in the neck, upper back, and lower back. Thus it is clear that, due to the adoption of awkward postures at work for prolonged periods of time, female assembly workers suffer from a high rate of work-related musculoskeletal disorders.
Exploring the high-pressure materials genome
A thorough in situ characterization of materials at extreme conditions is
challenging, and computational tools such as crystal structural search methods
in combination with ab initio calculations are widely used to guide experiments
by predicting the composition, structure, and properties of high-pressure
compounds. However, such techniques are usually computationally expensive and
not suitable for large-scale combinatorial exploration. On the other hand,
data-driven computational approaches using large materials databases are useful
for the analysis of energetics and stability of hundreds of thousands of
compounds, but their utility for materials discovery is largely limited to
idealized conditions of zero temperature and pressure. Here, we present a novel
framework combining the two computational approaches, using a simple linear
approximation to the enthalpy of a compound in conjunction with
ambient-conditions data currently available in high-throughput databases of
calculated materials properties. We demonstrate its utility by explaining the
occurrence of phases in nature that are not ground states at ambient conditions
and estimating the pressures at which such ambient-metastable phases become
thermodynamically accessible, as well as guiding the exploration of
ambient-immiscible binary systems via sophisticated structural search methods
to discover new stable high-pressure phases.
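
The "simple linear approximation to the enthalpy" mentioned above can be
illustrated by the sketch below, which estimates the pressure at which a
denser, ambient-metastable polymorph overtakes the ground state using only
ambient-pressure energies and volumes of the kind stored in high-throughput
DFT databases. The numbers are placeholders and the paper's actual workflow
may differ in its details.

    # Hedged sketch: linearized enthalpy H(P) ~= E0 + P * V0, evaluated with
    # ambient-pressure data.  Placeholder values; not the paper's dataset.

    def crossover_pressure(e_a, v_a, e_b, v_b):
        """Pressure (GPa) at which phase B's linearized enthalpy drops below phase A's.

        e_*: ambient-pressure energy per atom (eV); v_*: volume per atom (A^3).
        Returns None if B never overtakes A within this linear model.
        """
        EV_A3_TO_GPA = 160.21766  # 1 eV/A^3 = 160.2 GPa
        dv = v_b - v_a
        de = e_b - e_a
        if dv >= 0:            # B must be denser than A to be stabilized by pressure
            return None
        p_star = -de / dv      # E_a + P*V_a = E_b + P*V_b  =>  P = -(E_b - E_a)/(V_b - V_a)
        return p_star * EV_A3_TO_GPA if p_star > 0 else 0.0

    # Placeholder example: a metastable, denser polymorph B vs. the ground state A.
    print(crossover_pressure(e_a=0.00, v_a=20.0, e_b=0.15, v_b=17.5))  # ~9.6 GPa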
Influence of Coronal Abundance Variations
The PI of this project was Jeff Scargle of NASA/Ames. Co-I's were Alma Connors of Eureka Scientific/Wellesley, and myself. Part of the work was subcontracted to Eureka Scientific via SAO, with Vinay Kashyap as PI. This project was originally assigned grant number NCC2-1206, and was later changed to NCC2-1350 for administrative reasons. The goal of the project was to obtain, derive, and develop statistical and data analysis tools that would be of use in the analyses of the high-resolution, high-sensitivity data that are becoming available with new instruments. This is envisioned as a cross-disciplinary effort with a number of "collaborators", including some at SAO (Aneta Siemiginowska, Peter Freeman) and at the Harvard Statistics department (David van Dyk, Rostislav Protassov, Xiao-li Meng, Epaminondas Sourlas, et al.). We have developed a new tool to reliably measure the metallicities of thermal plasma. It is infeasible to obtain high-resolution grating spectra for most stars, and one must make the best possible determination based on lower-resolution, CCD-type spectra. It has been noticed that most analyses of such spectra have resulted in measured metallicities that were significantly lower than those from analyses of high-resolution grating data where available (see, e.g., Brickhouse et al., 2000, ApJ, 530, 387). Such results have led to the proposal of the existence of so-called Metal Abundance Deficient, or "MAD", stars (e.g., Drake, J.J., 1996, Cool Stars 9, ASP Conf. Ser. 109, 203). We however find that many of these analyses may be systematically underestimating the metallicities, and, using a newly developed method to correctly treat the low-counts regime at the high-energy tail of the stellar spectra (van Dyk et al. 2001, ApJ, 548, 224), we have found that the metallicities of these stars are generally comparable to their photospheric values. The results were reported at the AAS (Sourlas, Yu, van Dyk, Kashyap, and Drake, 2000, BAAS 196, v32, #54.02) and at the conference on Statistical Challenges in Modern Astronomy (Sourlas, van Dyk, Kashyap, Drake, and Pease, 2003, SCMA III, Eds. E.D. Feigelson, G.J. Babu, New York: Springer, pp. 489-490). We also described the limitations of one of the most egregiously misused and misapplied statistical tests in the astrophysical literature, the F-test for verifying model components (Protassov, van Dyk, Connors, Kashyap, and Siemiginowska, 2002, ApJ, 571, 545). A search through the ApJ archives turned up 170 papers in the 5 previous years that used the F-test explicitly in some form or other, with the vast majority of them not using it correctly. Indeed, looking at just 4 issues of the ApJ in 2001, we found 13 instances of its use, of which nine were demonstrably incorrect. Clearly, it is difficult to overstate the importance of this issue. We also worked on speeding up the Bayes Blocks and Sparse Bayes Blocks algorithms to make them more tractable for large searches. We also supported statistics students and postdocs in both explicit physics-model-based algorithms (spectra with tens of thousands of atomic lines) and "model-free" -- i.e., non-parametric or semi-parametric -- algorithms. Work on using more of the latter is just beginning, while the use of multi-scale methods for Poisson imaging has come to fruition. In fact, "An Image Restoration Technique with Error Estimates", by D. Esch, A. Connors, M. Karovska, and D. van Dyk, was published in ApJ (Esch et al. 2004, ApJ, 610, 1213). The code has been delivered to M. Karovska for CXC and is available for beta-testing upon request.
The other large project we worked on was the self-consistent modeling of logN-logS curves in the Poisson limit. logN-logS curves are a fundamental tool in the study of source populations, luminosity functions, and cosmological parameters. However, their determination is hampered by statistical effects such as the Eddington bias, incompleteness due to detection efficiency, faint-source flux fluctuations, etc. We have developed a new and powerful method using the full Poisson machinery that allows us to model the logN-logS distribution of X-ray sources in a self-consistent manner. Because we properly account for all the above statistical effects, our modeling is valid over the full range of the data, and not just for strong sources, as is normally done. Using a Bayesian approach, modeling the fluxes with known functional forms such as simple or broken power laws, and conditioning the expected photon counts on the fluxes, the background contamination, the effective area, detector vignetting, and the detection probability, we can delve deeply into the low-counts regime and extend the usefulness of medium-sensitivity surveys such as ChaMP by orders of magnitude. The built-in flexibility of the algorithm also allows a simultaneous analysis of multiple datasets. We have applied this analysis to a set of Chandra observations (Sourlas, Kashyap, Zezas, van Dyk, 2004, HEAD #8, #16.32).
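
A minimal sketch of the kind of Poisson-likelihood building block described
above is given below. It is a toy, single-power-law version with made-up
variable names and placeholder values; the actual method also folds in the
effective area, vignetting, and detection probability.

    # Hedged sketch: Poisson likelihood for one source's counts plus a
    # power-law logN-logS prior on its count rate.  Toy values throughout.
    import numpy as np
    from scipy.stats import poisson

    def log_likelihood(counts, rate, bkg_rate, exposure):
        """Poisson log-likelihood of observed counts for a given source count rate."""
        mu = (rate + bkg_rate) * exposure      # expected counts: source + background
        return poisson.logpmf(counts, mu)

    def log_prior_powerlaw(rate, alpha, s_min):
        """Log density of a single power-law logN-logS: dN/dS ~ S^-(alpha+1), S > s_min."""
        rate = np.asarray(rate, dtype=float)
        return np.where(rate > s_min,
                        np.log(alpha) + alpha * np.log(s_min) - (alpha + 1) * np.log(rate),
                        -np.inf)

    # Toy posterior over a grid of candidate rates for one source (12 counts seen).
    rates = np.logspace(-5, -2, 300)           # counts per second (placeholder)
    logpost = (log_likelihood(counts=12, rate=rates, bkg_rate=1e-4, exposure=5e4)
               + log_prior_powerlaw(rates, alpha=1.5, s_min=1e-5))
    print("MAP count rate ~", rates[np.argmax(logpost)])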
RoboRun: A gamification approach to control flow learning for young students with TouchDevelop
This demo paper introduces young students to writing code in a touch-enabled interactive maze game. Problem-based learning is combined with a gamified approach, while the TouchDevelop platform is simultaneously introduced to build first basic control-flow algorithms and to learn about ordering, loops, and conditional statements.
Andreev Scattering and the Kondo Effect
We examine the properties of an infinite-U Anderson impurity coupled to
both normal and superconducting metals. Both the cases of a quantum dot and a
quantum point contact containing an impurity are considered; for the latter, we
study both one and two-channel impurities. Using a generalization of the
noncrossing approximation which incorporates multiple Andreev reflection, we
compute the impurity spectral function and the linear-response conductance of
these devices. We find generically that the Kondo resonance develops structure
at energies corresponding to the superconducting gap, and that the magnitude of
the resonance at the Fermi energy is altered. This leads to observable changes
in the zero-bias conductance as compared to the case with no superconductivity.
