Performance Evaluation of Two Palm Kernel Nut Cracker Machines
In this study, a performance evaluation of a centrifugal-impact palm kernel nut cracker and a vertical centrifugal palm kernel nut cracker was carried out. The results show that the vertical centrifugal cracker is more efficient than the centrifugal-impact cracker: the efficiency of the vertical centrifugal cracker is 71.3%, against 50.38% for the centrifugal-impact machine. In addition, although the vertical centrifugal machine runs at a lower speed, it produces a clean, neatly cracked nut output
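The efficiency figures above can be reproduced from raw counts. A minimal sketch, with invented sample counts (the study's actual tallies are not given in the abstract):

```python
# Hypothetical sketch: cracking efficiency computed as whole kernels
# recovered over total nuts fed, expressed as a percentage.
# The counts below are invented for illustration only.

def cracking_efficiency(whole_kernels: int, nuts_fed: int) -> float:
    """Return cracking efficiency as a percentage."""
    if nuts_fed <= 0:
        raise ValueError("nuts_fed must be positive")
    return 100.0 * whole_kernels / nuts_fed

# e.g. 713 clean kernels recovered from 1000 nuts fed gives 71.3%,
# the same order as the vertical machine's reported efficiency
print(cracking_efficiency(713, 1000))
```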
Reference-free evaluation of thin films mass thickness and composition through energy dispersive x-ray spectroscopy
In this paper we report the development of a new method for the evaluation of
thin-film mass thickness and composition based on Energy Dispersive X-ray
Spectroscopy (EDS). The method exploits the theoretical calculation of the
in-depth characteristic X-ray generation distribution function, phi(rho z),
in multilayer samples, obtained by numerical solution of the electron
transport equation, to achieve reliable measurements without the need for a
reference sample or acquisitions at multiple voltages. The electron transport
model is derived from the Boltzmann transport equation and exploits the most
up-to-date and reliable physical parameters in order to obtain an accurate
description of the phenomenon. The method for the calculation of film mass
thickness and composition is validated with benchmarks from standard
techniques. In addition, a model uncertainty and sensitivity analysis is
carried out and it indicates that the mass thickness accuracy is in the order
of 10 g/cm, which is comparable to the nuclear standard techniques
resolution. We show the technique peculiarities in one example measurement:
two-dimensional mass thickness and composition profiles are obtained for a
ultra-low density, high roughness, nanostructured film.Comment: This project has received funding from the European Research Council
(ERC) under the European Union's Horizon 2020 research and innovation
programme (ENSURE grant agreement No. 647554
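The core inversion can be illustrated numerically: the emitted characteristic intensity from a film of mass thickness rho_t is proportional to the integral of phi(rho z) from 0 to rho_t, so a measured intensity can be inverted for rho_t. A minimal sketch, where the exponential phi is an invented stand-in for the paper's Boltzmann-transport solution:

```python
# Toy sketch of the phi(rho z) inversion idea. The depth distribution
# below is a simple invented exponential, not the paper's physical model.
import math

def phi(rz: float, surface: float = 1.4, decay: float = 3.0e3) -> float:
    # toy depth distribution phi(rho z); rz is mass depth in g/cm^2
    return surface * math.exp(-decay * rz)

def film_intensity(rho_t: float, n: int = 2000) -> float:
    # trapezoidal integral of phi from 0 to rho_t (emitted intensity, a.u.)
    h = rho_t / n
    s = 0.5 * (phi(0.0) + phi(rho_t))
    for i in range(1, n):
        s += phi(i * h)
    return s * h

def invert_mass_thickness(measured: float, hi: float = 1e-2) -> float:
    # bisection on the monotone map rho_t -> intensity
    lo = 0.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if film_intensity(mid) < measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rt = 2.0e-4  # "true" mass thickness in g/cm^2, for the toy only
print(invert_mass_thickness(film_intensity(rt)))
```

Because the intensity is strictly increasing in rho_t, the inversion is unique; the real method additionally accounts for absorption and multilayer structure.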
Weight function method for precise determination of top quark mass at Large Hadron Collider
We propose a new method to measure a theoretically well-defined top quark
mass at the LHC. The method is based on the "weight function method" that we
proposed in a preceding paper. It requires only the lepton energy distribution
and is largely independent of the production process of the top quark. We
perform a simulation analysis of top quark mass reconstruction with
t tbar pair production in the lepton+jets decay channel at leading order.
The estimated statistical error of the top quark mass is about GeV with
an integrated luminosity of fb at TeV. We also
estimate some of the major systematic uncertainties and find that they are
under good control.
Comment: 8 pages, 7 figures, version to appear in PL
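The essence of the method is a zero-crossing condition: integrate the observed lepton energy distribution D(E) against a mass-dependent weight W(E; m) and read off the mass where the integral vanishes. A toy sketch, in which the Gaussian D and mean-subtracting W are invented stand-ins for the paper's QCD predictions:

```python
# Toy zero-crossing illustration of the weight function method.
# D(E) and W(E; m) below are invented; only the logic is the point.
import math

M_TRUE = 173.0  # assumed "true" top mass in GeV, for the toy only

def mean_energy(m: float) -> float:
    # toy model: mean lepton energy rises linearly with the top mass
    return 0.35 * m

def D(E: float, m: float = M_TRUE) -> float:
    # toy lepton energy distribution: Gaussian around mean_energy(m)
    mu, sigma = mean_energy(m), 15.0
    return math.exp(-0.5 * ((E - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def weighted_integral(m: float) -> float:
    # I(m) = integral of D(E) * (E - mean_energy(m)) dE, trapezoidal rule
    E_lo, E_hi, n = 0.0, 200.0, 4000
    h = (E_hi - E_lo) / n
    total = 0.0
    for i in range(n + 1):
        E = E_lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * D(E) * (E - mean_energy(m))
    return total * h

def reconstruct_mass(lo: float = 150.0, hi: float = 200.0) -> float:
    # I(m) decreases with m, so bisect for its zero crossing
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if weighted_integral(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"reconstructed top mass ~ {reconstruct_mass():.1f} GeV")
```

The appeal of the construction is that W depends only on the lepton spectrum, which is why the method is largely insensitive to the top production mechanism.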
Ranking efficient DMUs using cooperative game theory
The problem of ranking Decision Making Units (DMUs) in Data Envelopment Analysis (DEA) has been widely studied in the literature. Some of the proposed approaches use cooperative game theory as a tool to perform the ranking. In this paper, we use the Shapley value of two different cooperative games in which the players are the efficient DMUs and the characteristic function represents the increase in the discriminant power of DEA contributed by each efficient DMU. The idea is that if some efficient DMUs are not included in the modified reference sample, then the efficiency scores of some inefficient DMUs would be higher. The characteristic function therefore represents the change in the efficiency scores of the inefficient DMUs that occurs when a given coalition of efficient units is dropped from the sample. Alternatively, the characteristic function can be defined as the change in the efficiency scores of the inefficient DMUs that occurs when a given coalition of efficient DMUs are the only efficient DMUs included in the sample. Since the two proposed cooperative games are dual games, their Shapley values coincide and thus lead to the same ranking. The more an efficient DMU shapes the efficient frontier, the higher the increase in the efficiency scores of the inefficient DMUs its removal brings about and, hence, the higher its contribution to the overall discriminant power of the method. The proposed approach is illustrated on a number of datasets from the literature and compared with existing methods
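The ranking step can be sketched directly. Below, the three efficient DMUs and the characteristic function values are invented; in practice each v(S) would come from re-solving the DEA models with the coalition S removed from the sample:

```python
# Hedged sketch: players are efficient DMUs; v(S) is the total rise in
# the inefficient units' efficiency scores when coalition S is dropped.
# All v values below are invented for illustration.
from itertools import permutations

players = ("A", "B", "C")  # hypothetical efficient DMUs
v = {  # invented characteristic function, keyed by sorted coalitions
    (): 0.0,
    ("A",): 0.10, ("B",): 0.05, ("C",): 0.02,
    ("A", "B"): 0.22, ("A", "C"): 0.13, ("B", "C"): 0.08,
    ("A", "B", "C"): 0.30,
}

def shapley(players, v):
    # Shapley value: average marginal contribution over all arrival orders
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        seen = []
        for p in order:
            before = v[tuple(sorted(seen))]
            seen.append(p)
            after = v[tuple(sorted(seen))]
            phi[p] += (after - before) / len(perms)
    return phi

scores = shapley(players, v)
ranking = sorted(players, key=lambda p: -scores[p])
print(ranking)  # DMUs ordered by contribution to discriminant power
```

By efficiency (the Shapley axiom), the values sum to v of the grand coalition, so the total discriminant power is fully apportioned among the efficient DMUs.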
An evaluation methodology for ergonomic design of electronic consumer products based on fuzzy axiomatic design
This article is posted with permission of the OCP Science imprint. Copyright © 2008 Old City Publishing Group.
The development life cycle of software and electronic products has been shortened by the growth of rapid prototyping techniques. The evaluation of electronic consumer products should consider hardware and software as well as the ergonomic usability, emotional appeal and aesthetic integrity of the design. This research follows a systematic approach to develop a methodology for evaluating the ergonomic design of electronic mobile products. The proposed methodology is based on fuzzy multi-attribute decision making and fuzzy axiomatic design, realized in three steps: determination of ergonomic attributes for electronic consumer products, determination of a representative set of alternatives, and selection of the best alternative in terms of ergonomic design by utilizing fuzzy axiomatic design. A case study is also provided to support the proposed methodology
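The selection step of fuzzy axiomatic design rests on the information axiom: for each attribute, the information content is I = log2(area of the system range / common area with the design range), and the alternative with the smallest total I is preferred. A minimal sketch with invented triangular fuzzy numbers and invented alternatives:

```python
# Hedged sketch of fuzzy axiomatic design selection. The design range,
# system ranges and scale (0..10) below are invented for illustration.
import math

def tri(a, b, c):
    # triangular fuzzy membership function: support [a, c], peak at b
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

def area(mu, lo, hi, n=20000):
    # midpoint-rule area under a membership function
    h = (hi - lo) / n
    return sum(mu(lo + (i + 0.5) * h) for i in range(n)) * h

def info_content(system, design, lo=0.0, hi=10.0):
    # information axiom: I = log2(system area / common area)
    sys_area = area(system, lo, hi)
    common = area(lambda x: min(system(x), design(x)), lo, hi)
    return math.inf if common == 0 else math.log2(sys_area / common)

design_range = tri(6, 8, 10)   # e.g. "usability judged at least good"
alternatives = {               # invented system ranges per alternative
    "phone_A": tri(5, 7, 9),
    "phone_B": tri(3, 5, 7),
}
totals = {k: info_content(mu, design_range) for k, mu in alternatives.items()}
best = min(totals, key=totals.get)
print(best)
```

In a full application, I is computed per ergonomic attribute and summed per alternative; an alternative whose system range misses a design range entirely gets infinite information content and is eliminated.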
Controlled release properties and final macroporosity of a pectin microspheres–calcium phosphate composite bone cement
The use of calcium phosphate cements (CPC) is restricted by their lack of macroporosity and poor drug release properties. To overcome these two limitations, incorporating degradable polymer microparticles into CPC is an attractive option, as polymer microparticles could help to control drug release and induce macroporosity after degradation. Although a few authors have tested synthetic polymers for this purpose, the potential of polysaccharides to assume this role has never been explored. Low-methoxy amidated pectins (LMAP) constitute valuable candidates because of their biocompatibility and their ionic and pH sensitivity. In this study, an LMAP with a degree of esterification (DE) of 30 and a degree of amidation (DA) of 19 was explored. The aim was to assess the influence of LMAP microspheres on the cement properties of the composite, its drug release ability, and the final macroporosity after microsphere degradation. Three LMAP incorporation ratios, 2%, 4% and 6% w/w, were tested, and ibuprofen was chosen as the model drug. In comparison with the CPC reference, the resulting composites presented shorter setting times and lower mechanical properties, which nevertheless remained acceptable for implantation in moderate-stress-bearing locations. Sustained release of ibuprofen was obtained over at least 45 days, and release rates were controlled by the LMAP ratio, which modulated drug diffusion. After a 4-month degradation study, the resulting CPC appeared macroporous, with a maximum macroporosity of nearly 30% for the highest LMAP incorporation ratio, and interconnectivity between pores could be observed. In conclusion, LMAP appear to be interesting candidates for generating macroporous bone cements whose release properties and macroporosity can be tailored by adjusting the pectin content of the composites
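Diffusion-controlled profiles like the 45-day ibuprofen release described above are commonly characterized by fitting the Korsmeyer-Peppas model M_t/M_inf = k * t^n on its log-log form (this model is a standard choice, not one named in the abstract; the data points below are synthetic):

```python
# Hedged sketch: least-squares fit of the Korsmeyer-Peppas release model
# on synthetic data. An exponent n near 0.5 indicates Fickian diffusion.
import math

days = [1, 2, 5, 10, 20, 45]                     # invented time points
released = [0.09, 0.13, 0.20, 0.28, 0.40, 0.60]  # invented fractions released

# line through (ln t, ln M_t/M_inf): slope = n, intercept = ln k
xs = [math.log(t) for t in days]
ys = [math.log(f) for f in released]
n_pts = len(xs)
mx, my = sum(xs) / n_pts, sum(ys) / n_pts
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
k = math.exp(my - slope * mx)
print(f"n = {slope:.2f}, k = {k:.3f}")
```

In such a fit, a higher LMAP ratio would show up mainly as a change in k (the release rate constant) while n reflects the transport mechanism.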
The impact of priors and observables on parameter inferences in the Constrained MSSM
We use a newly released version of the SuperBayeS code to analyze the impact
of the choice of priors and the influence of various constraints on the
statistical conclusions for the preferred values of the parameters of the
Constrained MSSM. We assess the effect in a Bayesian framework and compare it
with an alternative likelihood-based measure of a profile likelihood. We employ
a new scanning algorithm (MultiNest) which increases the computational
efficiency by a factor ~200 with respect to previously used techniques. We
demonstrate that the currently available data are not yet sufficiently
constraining to allow one to determine the preferred values of CMSSM parameters
in a way that is completely independent of the choice of priors and statistical
measures. While b->s gamma generally favors large m_0, this is in some contrast
with the preference for low values of m_0 and m_1/2 that is almost entirely a
consequence of a combination of prior effects and a single constraint coming
from the anomalous magnetic moment of the muon, which remains somewhat
controversial. Using an information-theoretical measure, we find that the
cosmological dark matter abundance determination provides at least 80% of the
total constraining power of all available observables. Despite the remaining
uncertainties, prospects for direct detection in the CMSSM remain excellent,
with the spin-independent neutralino-proton cross section almost guaranteed
above sigma_SI ~ 10^{-10} pb, independently of the choice of priors or
statistics. Likewise, gluino and lightest Higgs discovery at the LHC remain
highly encouraging. While in this work we have used the CMSSM as the particle
physics model, our formalism and scanning technique can be readily applied to a
wider class of models with several free parameters.
Comment: Minor changes, extended discussion of profile likelihood. Matches JHEP accepted version. SuperBayeS code with MultiNest algorithm available at http://www.superbayes.or
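The prior-dependence discussed above has a simple numerical analogue: with a flat prior, a marginal posterior can peak away from the profile likelihood purely through volume effects picked up by marginalization. A toy illustration (the degenerate two-parameter likelihood below is invented, not the CMSSM):

```python
# Toy volume-effect demo: the data constrain only the product m * t,
# so the profile likelihood in m is flat, yet the flat-prior marginal
# posterior peaks at low m because more t-volume fits there.
import math

def like(m: float, t: float) -> float:
    # invented degenerate likelihood in the product m * t
    return math.exp(-0.5 * ((m * t - 1.0) / 0.1) ** 2)

ms = [0.3 + 0.01 * i for i in range(271)]   # parameter of interest, 0.3 .. 3.0
ts = [0.2 + 0.01 * j for j in range(381)]   # nuisance parameter, 0.2 .. 4.0

marginal = [sum(like(m, t) for t in ts) for m in ms]   # flat prior, sum over t
profile = [max(like(m, t) for t in ts) for m in ms]    # profile out t

m_marg = ms[marginal.index(max(marginal))]             # marginal peaks at low m
m_prof_flat = all(p > 0.98 for p in profile)           # profile shows no preference
print(m_marg, m_prof_flat)
```

This is the qualitative effect the abstract describes: where the data leave flat directions, Bayesian conclusions inherit structure from the prior measure that a profile-likelihood analysis does not see.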
Study of the q^2-Dependence of B --> pi ell nu and B --> rho(omega)ell nu Decay and Extraction of |V_ub|
We report on determinations of |Vub| resulting from studies of the branching
fraction and q^2 distributions in exclusive semileptonic B decays that proceed
via the b->u transition. Our data set consists of the 9.7x10^6 BBbar meson
pairs collected at the Y(4S) resonance with the CLEO II detector. We measure
B(B0 -> pi- l+ nu) = (1.33 +- 0.18 +- 0.11 +- 0.01 +- 0.07)x10^{-4} and B(B0 ->
rho- l+ nu) = (2.17 +- 0.34 +0.47/-0.54 +- 0.41 +- 0.01)x10^{-4}, where the
errors are statistical, experimental systematic, systematic due to residual
form-factor uncertainties in the signal, and systematic due to residual
form-factor uncertainties in the cross-feed modes, respectively. We also find
B(B+ -> eta l+ nu) = (0.84 +- 0.31 +- 0.16 +- 0.09)x10^{-4}, consistent with
what is expected from the B -> pi l nu mode and quark model symmetries. We
extract |Vub| using Light-Cone Sum Rules (LCSR) for 0<= q^2<16 GeV^2 and
Lattice QCD (LQCD) for 16 GeV^2 <= q^2 < q^2_max. Combining both intervals
yields |Vub| = (3.24 +- 0.22 +- 0.13 +0.55/-0.39 +- 0.09)x10^{-3} for pi l nu,
and |Vub| = (3.00 +- 0.21 +0.29/-0.35 +0.49/-0.38 +- 0.28)x10^{-3} for rho l nu,
where the errors are statistical, experimental systematic, theoretical, and
signal form-factor shape, respectively. Our combined value from both decay
modes is |Vub| = (3.17 +- 0.17 +0.16/-0.17 +0.53/-0.39 +- 0.03)x10^{-3}.
Comment: 45 pages postscript, also available through http://w4.lns.cornell.edu/public/CLNS, submitted to PR
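The extraction step follows the standard relation |Vub| = sqrt(B / (tau_B * Gamma_tilde)), where Gamma_tilde = Gamma / |Vub|^2 is the theory-predicted reduced rate. A hedged sketch using the measured branching fraction from the abstract; the reduced-rate value and lifetime below are illustrative stand-ins, not the paper's LCSR/LQCD inputs:

```python
# Hedged sketch of the |Vub| extraction from a measured branching fraction.
# gamma_tilde is a hypothetical reduced-rate input chosen for illustration.
import math

B_pi = 1.33e-4      # measured B(B0 -> pi- l+ nu), from the abstract
dB_stat = 0.18e-4   # its statistical error, from the abstract
tau_B0 = 1.54       # B0 lifetime in ps (illustrative, world-average-like)
gamma_tilde = 8.2   # hypothetical reduced rate Gamma/|Vub|^2 in ps^-1

v_ub = math.sqrt(B_pi / (tau_B0 * gamma_tilde))
# the statistical error propagates as half the relative error on B
dv_stat = 0.5 * (dB_stat / B_pi) * v_ub
print(f"|Vub| = {v_ub:.2e} +- {dv_stat:.1e} (stat)")
```

The square root is why the relative error on |Vub| is half that on the branching fraction; the dominant theory uncertainty enters through gamma_tilde, which is why the paper quotes it separately per q^2 interval.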