Damage function for historic paper. Part I: Fitness for use
Background: In heritage science literature and in preventive conservation practice, damage functions are used to model material behaviour, and specifically damage (unacceptable change), as a result of the presence of a stressor over time. For such functions to be of use in the context of collection management, it is important to define a range of parameters, such as who the stakeholders are (e.g. the public, curators, researchers), the mode of use (e.g. display, storage, manual handling), the long-term planning horizon (i.e. when in the future it is deemed acceptable for an item to become damaged or unfit for use), and what the threshold of damage is, i.e. the extent of physical change assessed as damage.

Results: In this paper, we explore the threshold of fitness for use for archival and library paper documents used for display or reading in the context of access in reading rooms by the general public. Change is considered in the context of discolouration and mechanical deterioration such as tears and missing pieces: forms of physical deterioration that accumulate with time in libraries and archives. We also explore whether the threshold of fitness for use is defined differently for objects perceived to be of different value, and for different modes of use. The data were collected in a series of fitness-for-use workshops carried out with readers/visitors in heritage institutions using principles of Design of Experiments.

Conclusions: The results show that when no particular value is pre-assigned to an archival or library document, missing pieces influenced readers/visitors' subjective judgements of fitness for use to a greater extent than did discolouration and tears (which had little or no influence). This finding was most apparent in the display context in comparison to the reading-room context. The finding also applied most clearly when readers/visitors were not given a value scenario (in comparison to when they were asked to think about the document having personal or historic value). It can be estimated that, in general, items become unfit when text is evidently missing. However, if the visitor/reader is prompted to think of a document in terms of its historic value, then change in a document has little impact on fitness for use.
Behavioural resistance against a protozoan parasite in the monarch butterfly
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/89483/1/j.1365-2656.2011.01901.x.pd
Antimagic Labeling of Generalized Edge Corona Graphs
An antimagic labeling of a graph $G$ is a one-to-one correspondence between the edge set $E(G)$ and $\{1, 2, \dots, |E(G)|\}$ in which the sums of the edge labels incident on distinct vertices are distinct. Let $G$ and $H_1, H_2, \dots, H_m$ be simple graphs, where $m = |E(G)|$. A generalized edge corona of the graph $G$ and $(H_1, H_2, \dots, H_m)$, denoted by $G \diamond (H_1, H_2, \dots, H_m)$, is a graph obtained by taking a copy of $G, H_1, H_2, \dots, H_m$ and joining the end vertices of the edge $e_i$ of $G$ to every vertex of $H_i$, for $1 \le i \le m$. In this paper, we consider $G$ as a connected graph with exactly one vertex of maximum degree 3 (excluding the spider graph with exactly one vertex of maximum degree 3 containing uneven legs) and each $H_i$, $1 \le i \le m$, as a connected graph on at least two vertices. We provide an algorithmic approach to prove that $G \diamond (H_1, H_2, \dots, H_m)$ is antimagic under certain conditions.
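The antimagic condition above can be checked by brute force on small graphs; the following is a minimal Python sketch (function names are illustrative, not from the paper):

```python
from itertools import permutations

def is_antimagic_labeling(vertices, edges, labels):
    """Check that labels is a bijection edges -> {1..|E|} whose
    vertex sums (sum of labels on incident edges) are pairwise distinct."""
    assert sorted(labels.values()) == list(range(1, len(edges) + 1))
    sums = {v: 0 for v in vertices}
    for (u, w), lab in labels.items():
        sums[u] += lab
        sums[w] += lab
    vals = list(sums.values())
    return len(vals) == len(set(vals))

def find_antimagic_labeling(vertices, edges):
    """Brute-force search over all bijections edge -> {1..|E|}."""
    for perm in permutations(range(1, len(edges) + 1)):
        labels = dict(zip(edges, perm))
        if is_antimagic_labeling(vertices, edges, labels):
            return labels
    return None

# The path on 4 vertices is antimagic: labeling the middle edge 3
# gives vertex sums 1, 4, 5, 2.
verts = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
print(find_antimagic_labeling(verts, edges))
```

The factorial search is only feasible for very small edge sets; the paper's contribution is an algorithmic construction that avoids such exhaustive search for the generalized edge corona.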
An Analysis of Graceful Coloring in Specific r-Regular Graphs
A graceful $k$-coloring of a graph $G$ is a proper vertex coloring with $k$ colors which induces a proper edge coloring with at most $k-1$ colors, where the color for an edge $uv$ is the absolute difference between the colors assigned to the vertices $u$ and $v$. The graceful chromatic number $\chi_g(G)$ is the smallest $k$ for which $G$ permits a graceful $k$-coloring. The problem of computing the graceful chromatic number of regular graphs is still open, though the existence of a lower bound was proved in \cite{3}. Hence, we pay attention to the computation of the graceful chromatic number of a special class of regular graphs, namely complete graphs, using a set-theoretic approach. Also, a few characterizations of graphs based on their graceful chromatic number are examined.

Comment: 8 pages, 4 figures
Laser Induced Desorption Time of Flight Mass Spectrometer Analysis of Adsorbed Contaminants on Vacuum Ultraviolet Lithography Optic Materials
Adsorbed surface contaminants on optical elements absorb light energy in an optical lithography system and, if left unclean, will result in reduced wafer yield. In order to nondestructively analyze the surface adsorbate of different CaF2 samples, a laser induced desorption-Time of Flight Mass Spectrometer (LID-TOFMS) technique was developed. The main objective of this technique is to investigate the surface composition of adsorbed contaminants as a function of position on the sample. An Er:YAG laser at 2.94 μm was used as the light source to induce desorption. Electron impact ionization was used to ionize the desorbed molecules. The detection of ionized species was accomplished by a TOFMS operated in Angular Reflectron (AREF) mode to obtain better resolution.
The data reported here can be used in semiconductor industries either to modify conventional processing or to design a new, efficient laser cleaning process for optical elements.
Analysis of Adsorbed Contaminants of CaF/sub 2/ Surfaces by Infrared Laser Induced Desorption
157 nm photolithography technologies are currently under development and have been accepted as the leading candidate for fabrication of the next generation semiconductor devices after 193 nm. At this and shorter wavelengths, molecular contamination of surfaces becomes a serious problem as almost all molecules absorb at 157 nm and below. The light transmitted by a photolithographic tool can be significantly decreased by the presence of a few monolayers adsorbed on its many optical surfaces. We have developed a laser induced desorption, electron impact ionization, time-of-flight mass spectrometer (LID TOFMS) to study contaminants on 157 nm and other ultraviolet optics, e.g., polished CaF2. The LID TOFMS of CaF2(100) samples showed water ions, hydrocarbon ions, oxygen-containing hydrocarbon ions, as well as alkali metal ions (Na+, K+). For multiple irradiations of one site at fixed laser fluence, the ion intensities decreased as the number of pulses increased, suggesting that surface contaminants were being removed. A degenerate threshold model that assumes preferential adsorption at surface defects was employed to quantitatively analyze the LID data. Desorption thresholds for water and hydrocarbons were obtained from this model.
© 2004 American Vacuum Society
Graceful coloring is computationally hard
Given a (proper) vertex coloring $c$ of a graph $G$, the difference edge labelling induced by $c$ is a function $c' : E(G) \to \mathbb{Z}$ defined as $c'(uv) = |c(u) - c(v)|$ for every edge $uv$ of $G$. A graceful coloring of $G$ is a vertex coloring $c$ of $G$ such that the difference edge labelling $c'$ induced by $c$ is a (proper) edge coloring of $G$. A graceful coloring with range $\{1, 2, \dots, k\}$ is called a graceful $k$-coloring. The least integer $k$ such that $G$ admits a graceful $k$-coloring is called the graceful chromatic number of $G$, denoted by $\chi_g(G)$.
We prove a lower bound on $\chi_g(G)$ for every graph $G$ in terms of $a(n)$, the $n$th term of the integer sequence A065825 in OEIS. We also prove that the graceful coloring problem is NP-hard for planar bipartite graphs, regular graphs and 2-degenerate graphs. In particular, we show that for each $k \geq 5$, it is NP-complete to check whether a planar bipartite graph of bounded maximum degree is graceful $k$-colorable. The complexity of checking whether a planar graph is graceful 4-colorable remains open.
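The definitions above translate directly into a brute-force check for small graphs; the following is a minimal Python sketch (function names are illustrative, and the exhaustive search is exponential, consistent with the hardness results):

```python
from itertools import product

def is_graceful_coloring(vertices, edges, c):
    """Check that c is a proper vertex coloring whose induced difference
    edge labelling |c(u) - c(v)| is a proper edge coloring."""
    if any(c[u] == c[v] for u, v in edges):
        return False  # not a proper vertex coloring
    label = {(u, v): abs(c[u] - c[v]) for u, v in edges}
    for i, e in enumerate(edges):
        for f in edges[i + 1:]:
            # edges sharing a vertex must receive distinct labels
            if set(e) & set(f) and label[e] == label[f]:
                return False
    return True

def graceful_chromatic_number(vertices, edges):
    """Least k such that some coloring c : V -> {1..k} is graceful."""
    k = 1
    while True:
        for colors in product(range(1, k + 1), repeat=len(vertices)):
            c = dict(zip(vertices, colors))
            if is_graceful_coloring(vertices, edges, c):
                return k
        k += 1

# On a complete graph every pair of vertices is adjacent, so all pairwise
# color differences must be distinct: the colors form a Sidon set.
# For K3 the set {1, 2, 4} works, and no 3 colors within {1, 2, 3} do.
print(graceful_chromatic_number([0, 1, 2], [(0, 1), (0, 2), (1, 2)]))  # → 4
```

The Sidon-set observation in the comment is what connects graceful colorings of complete graphs to the sequence A065825 mentioned above.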
Prediction of thermo-physiological properties of plated knits by different neural network architectures
Thermo-physiological properties of polyester-cotton plated knits have been predicted using two different network architectures (NA1 and NA2). NA1 consists of four individual networks working in tandem with a common set of inputs, while NA2 consists of one network giving four outputs. It is found that architecture NA1 predicts the thermo-physiological properties of plated fabrics better than NA2. Sensitivity analysis is performed to judge the sensitivity, or importance, of each input parameter in determining the thermo-physiological properties of plated fabrics. The most sensitive parameter is total yarn linear density for the prediction of thermal resistance, filament fineness for thermal absorptivity, and loop length for air permeability and moisture vapour transmission rate.
Novel approaches in development of cell penetrating peptides
Therapeutic cargos that are impermeable to the cell can be delivered by cell penetrating peptides (CPPs). CPP-cargo complexes accumulate inside cells by endocytosis, but they often fail to reach the cytosolic space because they remain trapped in the endocytic organelles. Here, CPP-mediated endosomal escape, and some strategies used to increase the endosomal escape of CPP-cargo conjugates, are discussed with supporting evidence. Peptide carriers offer potential benefits such as reduced side effects, biocompatibility, easier synthesis and efficacy at lower administered doses. Cell penetrating peptides are able to translocate themselves, together with carrier drugs, across membranes by different mechanisms. This is of prime importance in drug delivery systems, as they have the capability to cross physiological membranes. This review describes various mechanisms for effective drug delivery and the associated challenges.
Minnorm training: an algorithm for training over-parameterized deep neural networks
In this work, we propose a new training method for finding minimum weight
norm solutions in over-parameterized neural networks (NNs). This method seeks
to improve training speed and generalization performance by framing NN training
as a constrained optimization problem wherein the sum of the norm of the
weights in each layer of the network is minimized, under the constraint of
exactly fitting training data. It draws inspiration from support vector
machines (SVMs), which are able to generalize well, despite often having an
infinite number of free parameters in their primal form, and from recent
theoretical generalization bounds on NNs which suggest that lower norm
solutions generalize better. To solve this constrained optimization problem,
our method employs Lagrange multipliers that act as integrators of error over
training and identify `support vector'-like examples. The method can be
implemented as a wrapper around gradient based methods and uses standard
back-propagation of gradients from the NN for both regression and
classification versions of the algorithm. We provide theoretical justifications
for the effectiveness of this algorithm in comparison to early stopping and
$\ell_2$-regularization using simple, analytically tractable settings. In
particular, we show faster convergence to the max-margin hyperplane in a
shallow network (compared to vanilla gradient descent); faster convergence to
the minimum-norm solution in a linear chain (compared to $\ell_2$-regularization);
and initialization-independent generalization performance in a deep linear
network. Finally, using the MNIST dataset, we demonstrate that this algorithm
can boost test accuracy and identify difficult examples in real-world datasets
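The constrained formulation can be illustrated in the kind of analytically tractable setting the abstract mentions. Below is a primal-dual numpy sketch for an over-parameterized linear model, minimizing the squared weight norm subject to exactly fitting the data; it is an illustration of the Lagrange-multiplier idea, not the paper's exact algorithm, and the step sizes and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Over-parameterized linear regression: more weights than data points,
# so many weight vectors fit the data exactly.
n_samples, n_features = 5, 20
X = rng.normal(size=(n_samples, n_features))
y = rng.normal(size=n_samples)

w = np.zeros(n_features)
lam = np.zeros(n_samples)   # one Lagrange multiplier per training example
eta = 0.01

for _ in range(20000):
    err = X @ w - y
    # Gradient descent on the Lagrangian ||w||^2/2 + lam.(Xw - y) in w,
    # gradient ascent in lam: each multiplier integrates the error on
    # its example over training, as described in the abstract.
    w -= eta * (w + X.T @ lam)
    lam += eta * err

# Closed-form minimum-norm interpolating solution for comparison.
w_minnorm = np.linalg.pinv(X) @ y
print(np.max(np.abs(w - w_minnorm)))  # gap shrinks toward 0 over training
```

Examples whose multipliers remain large at convergence play the role of the `support vector'-like examples the abstract refers to.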
