Polymorphism, subtyping, and type inference in MLsub
We present a type system combining subtyping and ML-style parametric
polymorphism. Unlike previous work, our system supports type inference and has compact principal types. We demonstrate this system in the minimal language MLsub, which types a strict superset of core ML programs.
This is made possible by keeping a strict separation between the types used to describe inputs and those used to describe outputs, and extending the classical unification algorithm to handle subtyping constraints between these input and output types. Principal types are kept compact by type simplification, which exploits deep connections between subtyping and the algebra of regular languages. An implementation is available online
Type-Inference Based Short Cut Deforestation (nearly) without Inlining
Deforestation optimises a functional program by transforming it into one that does not create certain intermediate data structures. In [ICFP'99] we presented a type-inference-based deforestation algorithm which performs extensive inlining. However, across module boundaries only limited inlining is practically feasible. Furthermore, inlining is a non-trivial transformation which is therefore best implemented as a separate optimisation pass. To perform short-cut deforestation (nearly) without inlining, Gill suggested splitting definitions into workers and wrappers and inlining only the small wrappers, which transfer the information needed for deforestation. We show that Gill's use of the function build limits deforestation and note that his reasons for using build do not apply to our approach. Hence we develop a more general worker/wrapper scheme without build. We give a type-inference-based algorithm which splits definitions into workers and wrappers. Finally, we show that we can deforest more expressions with the worker/wrapper scheme than with the algorithm with inlining
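The short-cut idea behind this abstract can be sketched outside Haskell as well. Below is a minimal Python illustration (our sketch, not the paper's algorithm, which targets Haskell-style foldr/build fusion): a producer abstracted over how it combines its results fuses with its consumer, so the intermediate list is never allocated.

```python
# Illustrative sketch of short-cut deforestation (names are ours).

def upto_list(n):
    """Producer that allocates an explicit intermediate list."""
    return list(range(1, n + 1))

def sum_via_list(n):
    return sum(upto_list(n))  # builds a list, then folds over it

def upto_fold(n, step, acc):
    """The same producer abstracted over 'cons'/'nil' (a fold),
    analogous to writing the producer with build so that a foldr
    consumer can fuse with it."""
    for i in range(1, n + 1):
        acc = step(acc, i)
    return acc

def sum_fused(n):
    return upto_fold(n, lambda a, i: a + i, 0)  # no intermediate list

assert sum_via_list(10) == sum_fused(10) == 55
```

Both versions compute the same fold; the fused one simply threads the accumulator through the producer directly.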
An accurate description of quantum size effects in InP nanocrystallites over a wide range of sizes
We obtain an effective parametrization of the bulk electronic structure of
InP within the Tight Binding scheme. Using these parameters, we calculate the
electronic structure of InP clusters with sizes ranging up to 7.5 nm. The
calculated variations in the electronic structure as a function of the cluster
size are found to be in excellent agreement with experimental results over the
entire range of sizes, establishing the effectiveness and transferability of
the obtained parameter strengths.
Comment: 9 pages, 3 figures, pdf file available at
http://sscu.iisc.ernet.in/~sampan/publications.htm
Well-Typed Programs Can't Be Blamed
We introduce the blame calculus, which adds the notion of blame from Findler and Felleisen's contracts to a system similar to Siek and Taha's gradual types and Flanagan's hybrid types. We characterise where positive and negative blame can arise by decomposing the usual notion of subtype into positive and negative subtypes, and show that these recombine to yield naive subtypes. Naive subtypes previously appeared in type systems that are unsound, but we believe this is the first time naive subtypes play a role in establishing type soundness
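The positive/negative split can be made concrete with a toy higher-order contract wrapper in Python (our sketch; the `Blame` exception and all names are illustrative, not the calculus itself): a failing argument blames the negative party (the caller), a failing result blames the positive party (the wrapped function).

```python
# Toy Findler/Felleisen-style contract checking with blame labels.
class Blame(Exception):
    pass

def contract(pre, post, f, pos, neg):
    """Wrap f so that violations raise Blame naming the guilty party."""
    def wrapped(x):
        if not pre(x):
            raise Blame(neg)   # caller supplied a bad argument: negative blame
        y = f(x)
        if not post(y):
            raise Blame(pos)   # function produced a bad result: positive blame
        return y
    return wrapped

is_pos = lambda n: isinstance(n, int) and n > 0

# A correct function is never blamed.
inc = contract(is_pos, is_pos, lambda n: n + 1, pos="inc", neg="caller")
assert inc(1) == 2

# A broken function draws positive blame.
bad = contract(is_pos, is_pos, lambda n: -n, pos="bad", neg="caller")
try:
    bad(1)
except Blame as e:
    assert e.args[0] == "bad"
```

The paper's soundness result is, roughly, that the statically well-typed side of a boundary can never be the party blamed at runtime.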
Irradiation-induced Ag nanocluster nucleation in silicate glasses: analogy with photography
The synthesis of Ag nanoclusters in sodalime silicate glasses and silica was
studied by optical absorption (OA) and electron spin resonance (ESR)
experiments under both low (gamma-ray) and high (MeV ion) deposited energy
density irradiation conditions. Both types of irradiation create electrons and
holes whose density and thermal evolution - notably via their interaction with
defects - are shown to determine the clustering and growth rates of Ag
nanocrystals. We thus establish the influence of redox interactions of defects
and silver (poly)ions. The mechanisms are similar to the latent image formation
in photography: irradiation-induced photoelectrons are trapped within the glass
matrix, notably on dissolved noble metal ions and defects, which are thus
neutralized (reverse oxidation reactions are also shown to exist). Annealing
promotes metal atom diffusion, which in turn leads to cluster nuclei formation.
The cluster density depends not only on the irradiation fluence, but also - and
primarily - on the density of deposited energy and the redox properties of the
glass. Ion irradiation (i.e., large deposited energy density) is far more
effective in cluster formation, despite its lower neutralization efficiency
(from Ag+ to Ag0) as compared to gamma photon irradiation.
Comment: 48 pages, 18 figures, revised version published in Phys. Rev. B
Photochemically reduced polyoxometalate assisted generation of silver and gold nanoparticles in composite films: a single step route
A simple single-step method to embed noble metal (Ag, Au) nanoparticles in organic-inorganic nanocomposite films is described. This is accomplished with the assistance of Keggin ions present in the composite film. The photochemically reduced composite film serves both as the reducing agent and as the host for the metal nanoparticles in a single process. The embedded metal nanoparticles have been characterized by UV-Visible spectroscopy, TEM, EDAX, and XPS. Particles of less than 20 nm were readily embedded using the described approach, and monodisperse nanoparticles were obtained under optimized conditions. Fluorescence experiments showed that the embedded Ag and Au nanoparticles are responsible for the fluorescence emissions. The described method is facile and provides a potential route to fabricate self-standing, noble-metal-embedded composite films
Synthesis of CdS and CdSe nanocrystallites using a novel single-molecule precursors approach
The synthesis of CdS and CdSe nanocrystallites using the thermolysis of several dithio- or
diselenocarbamato complexes of cadmium in trioctylphosphine oxide (TOPO) is reported.
The nanodispersed materials obtained show quantum size effects in their optical spectra
and exhibit near band-edge luminescence. The influence of experimental parameters on
the properties of the nanocrystallites is discussed. HRTEM images of these materials show
well-defined, crystalline nanosized particles. Standard size fractionation procedures can
be performed in order to narrow the size dispersion of the samples. The TOPO-capped CdS
and CdSe nanocrystallites and simple organic bridging ligands, such as 2,2′-bipyrimidine,
are used as the starting materials for the preparation of novel nanocomposites. The optical
properties shown by these new nanocomposites are compared with those of the starting
nanodispersed materials
Shape-controlled continuous synthesis of metal nanostructures
A segmented flow-based microreactor is used for the continuous production of faceted nanocrystals.
Flow segmentation is proposed as a versatile tool to manipulate the reduction kinetics and control the
growth of faceted nanostructures, tuning their size and shape. Switching the gas from oxygen to carbon
monoxide shifts nanostructure growth from 1D (nanorods) to 2D (nanosheets). CO is
a key factor in the formation of Pd nanosheets and Pt nanocubes; operating as a second phase, a reductant,
and a capping agent. This combination confines the growth to specific structures. In addition, the
segmented flow microfluidic reactor inherently has the ability to operate in a reproducible manner at
elevated temperatures and pressures whilst confining potentially toxic reactants, such as CO, in nanoliter
slugs. This continuous system successfully synthesised Pd nanorods with an aspect ratio of 6; thin palladium
nanosheets with a thickness of 1.5 nm; and Pt nanocubes with a 5.6 nm edge length, all in a synthesis
time as low as 150 s
Flow Analysis, Linearity, and PTIME
Flow analysis is a ubiquitous and much-studied component of compiler technology, and its variations abound. Amongst the most well known is Shivers' 0CFA; however, the best known algorithm for 0CFA requires time cubic in the size of the analyzed program and is unlikely to be improved. Consequently, several analyses have been designed to approximate 0CFA by trading precision for faster computation. Henglein's simple closure analysis, for example, forfeits the notion of directionality in flows and enjoys an "almost linear" time algorithm. But in making trade-offs between precision and complexity, what has been given up and what has been gained? Where do these analyses differ and where do they coincide? We identify a core language, the linear λ-calculus, where 0CFA, simple closure analysis, and many other known approximations or restrictions to 0CFA are rendered identical. Moreover, for this core language, analysis corresponds with (instrumented) evaluation. Because analysis faithfully captures evaluation, and because the linear λ-calculus is complete for PTIME, we derive PTIME-completeness results for all of these analyses.
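The claim that analysis coincides with evaluation on linear terms can be illustrated with a toy sketch (ours, not the paper's formulation): in a linear term every λ-bound variable is bound exactly once during evaluation, so a flow "cache" recording which λ-value reaches each variable holds only singleton sets.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', t, u)
# Call-by-value evaluation over closed linear terms, logging every
# binding x -> λ-value. (No closures are needed for this example, since
# each environment entry is itself a λ-term; a real interpreter would
# pair each λ with its environment.)

def eval_log(term, env, log):
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':
        return term
    f = eval_log(term[1], env, log)            # evaluate the operator
    v = eval_log(term[2], env, log)            # evaluate the operand
    _, x, body = f
    log.setdefault(x, []).append(v)            # record the flow x <- v
    return eval_log(body, {**env, x: v}, log)

# Linear term: (λf. f (λx. x)) (λy. y) -- each variable occurs once.
ident_x = ('lam', 'x', ('var', 'x'))
ident_y = ('lam', 'y', ('var', 'y'))
term = ('app', ('lam', 'f', ('app', ('var', 'f'), ident_x)), ident_y)

log = {}
result = eval_log(term, {}, log)
assert result == ident_x
# Every variable received exactly one value: the flow sets a 0CFA-style
# analysis would compute for this term are singletons.
assert all(len(vs) == 1 for vs in log.values())
```

Because each flow set is exact, running the analysis does the same work as running the program, which is what drives the PTIME-hardness argument.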