
    Oxidation mechanism in metal nanoclusters: Zn nanoclusters to ZnO hollow nanoclusters

    Zn nanoclusters (NCs) are deposited by the low-energy cluster beam deposition technique. The oxidation mechanism is studied by analysing the compositional and morphological evolution of the NCs over a long span of time (three years) of exposure to the ambient atmosphere. It is concluded that the mechanism proceeds in two steps. In the first step, a ZnO shell forms rapidly over the Zn NCs up to a certain limiting thickness: within a few days, depending on their size, Zn NCs are converted to Zn-ZnO (core-shell), Zn-void-ZnO, or hollow ZnO NCs. NCs bigger than ~15 nm become Zn-ZnO (core-shell) type: among them, NCs above ~25 nm are able to retain their initial geometrical shapes (namely triangular, hexagonal, rectangular, and rhombohedral), whereas NCs of ~25 to 15 nm take on irregular or distorted shapes. NCs between ~15 and 5 nm become Zn-void-ZnO type, and those smaller than ~5 nm become hollow ZnO spheres, i.e. ZnO hollow NCs. In the second step, all Zn-void-ZnO and Zn-ZnO (core-shell) structures are converted to hollow ZnO NCs in a slow, gradual process; this conversion proceeds through expansion in size by incorporation of ZnO monomers inside the shell. The observed oxidation behaviour of the NCs is compared with the Cabrera-Mott theory of low-temperature oxidation of metals.
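
    The size regimes above amount to a simple piecewise classification. A minimal Python sketch of that mapping, using the ~5, ~15 and ~25 nm thresholds quoted in the abstract (function and label names are illustrative, not from the paper):

        # Hypothetical sketch: the reported size-to-morphology mapping after
        # the first (fast) oxidation step. Thresholds are the approximate
        # values quoted in the abstract; all names are illustrative.
        def first_step_morphology(diameter_nm: float) -> str:
            """Structure a Zn nanocluster adopts within days of ambient exposure."""
            if diameter_nm < 5:
                return "hollow ZnO NC"                         # fully oxidised
            if diameter_nm < 15:
                return "Zn-void-ZnO"                           # void forms inside shell
            if diameter_nm < 25:
                return "Zn-ZnO core-shell (distorted shape)"
            return "Zn-ZnO core-shell (initial shape retained)"

        for d in (3, 10, 20, 40):
            print(f"{d:>3} nm -> {first_step_morphology(d)}")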

    Data approximation strategies between generalized line scales and the influence of labels and spacing

    Comparing sensory data gathered using different line scales is challenging. We tested whether adding internal labels to a generalized visual analog scale (gVAS) would improve comparability to a typical generalized labeled magnitude scale (gLMS). Untrained participants evaluated cheeses using one of four randomly assigned scales. Normalization to a cross-modal standard and/or two gLMS transformations were applied to the data. Response means and distributions were lower for the gLMS than for the gVAS, but no difference in resolving power was detected. The presence of labels, with or without line markings, caused categorical-like lumping of responses. The closer spacing of low-end labels on the gLMS influenced participants to mark near higher-intensity labels when evaluating low-intensity samples. Although normalization reduced differences between scales, neither transformation nor normalization was supported as an appropriate gLMS/gVAS approximation strategy. This study supports previous observations that neither scale offers a systematic advantage and that differences in how participants use the scales limit direct scale comparisons.
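
    The normalization step lends itself to a small illustration. The abstract does not spell out the exact procedure, so the sketch below assumes a common form of cross-modal normalization, rescaling each participant's ratings by their own rating of a shared reference stimulus; all names and numbers are invented:

        import numpy as np

        # Assumed form of normalization to a cross-modal standard: divide each
        # participant's ratings by their rating of a shared reference stimulus.
        def normalize_to_standard(ratings: np.ndarray, standard: np.ndarray) -> np.ndarray:
            """ratings: (participants, samples); standard: (participants,)
            ratings of the cross-modal reference. Returns a common scale."""
            return ratings / standard[:, None]

        ratings = np.array([[12.0, 30.0], [40.0, 90.0]])  # two participants, two cheeses
        standard = np.array([20.0, 80.0])                 # each participant's reference rating
        print(normalize_to_standard(ratings, standard))   # [[0.6 1.5], [0.5 1.125]]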

    A Universal Lifetime Distribution for Multi-Species Systems

    Lifetime distributions of social entities, such as enterprises, products, and media contents, are among the fundamental statistics characterizing social dynamics. To investigate the lifetime distribution of mutually interacting systems, we study simple models with rules for the addition and deletion of entities. We find a remarkably universal lifetime distribution across various kinds of inter-entity interactions, well fitted by a stretched-exponential function with an exponent close to 1/2. We propose a "modified Red-Queen" hypothesis to explain this distribution. We also review empirical studies on the lifetime distributions of social entities and discuss the applicability of the model.
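
    The reported fit is easy to reproduce in outline. A sketch with synthetic data, assuming the stretched-exponential survival form S(t) = exp(-(t/tau0)^beta) with beta near 1/2 (scipy and all variable names are my own choices, not the paper's):

        import numpy as np
        from scipy.optimize import curve_fit

        # Fit a stretched exponential to an empirical survival curve; the
        # synthetic lifetimes are drawn with beta = 1/2, so the fit should
        # recover an exponent close to 0.5, as the paper reports.
        def stretched_exp(t, tau0, beta):
            return np.exp(-(t / tau0) ** beta)

        rng = np.random.default_rng(0)
        lifetimes = 10.0 * rng.weibull(0.5, size=20_000)   # synthetic data
        t = np.linspace(0.1, 50, 200)
        survival = (lifetimes[None, :] > t[:, None]).mean(axis=1)

        (tau0, beta), _ = curve_fit(stretched_exp, t, survival, p0=(10.0, 0.5))
        print(f"tau0 ~ {tau0:.2f}, beta ~ {beta:.2f}")     # beta near 0.5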

    Investigating the health implications of social policy initiatives at the local level: study design and methods

    Background: In this paper we present the research design and methods of a study that seeks to capture local-level responses to an Australian national social policy initiative aimed at reducing inequalities in the social determinants of health.

    Methods/Design: The study takes a policy-to-practice approach and combines policy and stakeholder interviewing with a comparative case study analysis of two not-for-profit organisations involved in the delivery of federal government policy.

    Discussion: Before the health impacts of broad-scale policies, such as the one described in this study, can be assessed at the population level, we need to understand the implementation process. This is consistent with current thinking in political science and social policy, which has emphasised the importance of investigating how, and if, policies are translated into operational realities.

    DOSCATs: Double standards for protein quantification

    The two most common techniques for absolute protein quantification are based either on mass spectrometry (MS) or on immunochemical techniques such as western blotting (WB). Western blotting is most often used for protein identification or relative quantification, but it can also be deployed for absolute quantification if appropriate calibration standards are used. MS-based techniques offer superior data quality and reproducibility, but WB offers greater sensitivity and accessibility to most researchers. It would be advantageous to apply both techniques for orthogonal quantification, but their workflows rarely overlap. We describe DOSCATs (DOuble Standard conCATamers), novel calibration standards based on QconCAT technology, to unite these platforms. DOSCATs combine a series of epitope sequences concatenated with tryptic peptides in a single artificial protein, creating internal tryptic peptide standards for MS as well as an intact protein bearing multiple linear epitopes. A DOSCAT protein was designed and constructed to quantify five proteins of the NF-κB pathway. For three target proteins, protein fold changes and absolute copies-per-cell values measured by MS and WB were in excellent agreement. This demonstrates that DOSCATs can be used as multiplexed, dual-purpose standards, readily deployed in a single workflow, supporting a seamless quantitative transition from MS to WB.
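
    The arithmetic behind absolute quantification with a spiked standard is compact. A hedged sketch, assuming the usual scheme for isotope-labelled standards of this kind (a known amount of heavy standard spiked into a lysate of counted cells, scaled by the MS light/heavy peptide ratio); all numbers are invented, not values from the DOSCAT study:

        # Copies-per-cell arithmetic for a spiked heavy standard; numbers
        # and names are illustrative only.
        AVOGADRO = 6.02214076e23

        def copies_per_cell(std_fmol: float, light_heavy_ratio: float,
                            n_cells: float) -> float:
            """Absolute copy number of the endogenous (light) protein."""
            endogenous_mol = std_fmol * 1e-15 * light_heavy_ratio
            return endogenous_mol * AVOGADRO / n_cells

        # e.g. 50 fmol standard, L/H ratio 0.8, lysate from 1e6 cells:
        print(f"{copies_per_cell(50, 0.8, 1e6):.3g} copies/cell")  # ~2.41e4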

    The pre-concept design of the DEMO tritium, matter injection and vacuum systems

    In the Pre-Concept Design Phase of EU-DEMO, the work package TFV (Tritium – Matter Injection – Vacuum) has developed a tritium self-sufficient three-loop fuel cycle architecture. Driven by the need to reduce the tritium inventory in the systems to an absolute minimum, this architecture requires the continual recirculation of gases in loops without storage, avoiding hold-ups of tritium at each process stage by giving preference to continuous over batch technologies, and the immediate use of tritium extracted from the tritium breeding blankets. To achieve this goal, a number of novel concepts and technologies had to be found and their feasibility shown in principle. This paper starts from a functional analysis of the fuel cycle and introduces the results of a technology survey and ranking exercise, which provided the prime technology candidates for all system blocks. The main boundary conditions for the TFV systems are described; based on these, the fuel cycle architecture was developed and the required operational windows of all subsystems were defined. To validate this, various R&D lines were established, selected results of which are reported together with the key technology developments. Finally, an outlook towards the Concept Design Phase is given.
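
    The inventory argument driving this architecture can be made concrete with a steady-state hold-up estimate: hold-up is roughly throughput times residence time (Little's law), so continuous loops with direct recycling carry less tritium than batch stages that wait on storage. A back-of-envelope sketch with invented figures, not EU-DEMO design values:

        # Steady-state hold-up ~ throughput x residence time (Little's law).
        # All figures below are invented for illustration only.
        def holdup_g(throughput_g_per_h: float, residence_h: float) -> float:
            return throughput_g_per_h * residence_h

        flow = 10.0                                           # hypothetical tritium throughput, g/h
        print("continuous stage:", holdup_g(flow, 0.5), "g")  # short residence time
        print("batch stage:     ", holdup_g(flow, 24.0), "g") # residence includes storage wait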

    Reports of the AAAI 2019 spring symposium series

    Applications of machine learning combined with AI algorithms have propelled unprecedented economic disruptions across diverse fields, including industry, the military, medicine, and finance. With even larger impacts forecast, the present economic impact of machine learning is estimated in the trillions of dollars. But as autonomous machines become ubiquitous, problems have surfaced. Early on, and again in 2018, Judea Pearl warned AI scientists that they must "build machines that make sense of what goes on in their environment," a warning that remains unheeded and may impede future development. For example, self-driving vehicles often rely on sparse data; self-driving cars have already been involved in fatalities, including that of a pedestrian; and yet machine learning is unable to explain the contexts within which it operates.