
    Novel Technique for Ultra-sensitive Determination of Trace Elements in Organic Scintillators

    A technique based on neutron activation has been developed for extremely high-sensitivity analysis of trace elements in organic materials. Organic materials are sealed in plastic or high-purity quartz and irradiated at the High Flux Isotope Reactor (HFIR) and the MIT Research Reactor (MITR). Volatile materials, such as liquid scintillator (LS), are first preconcentrated by clean vacuum evaporation. Activities of interest are separated from side activities by acid digestion and ion exchange. The technique has been applied to study the liquid scintillator used in the KamLAND neutrino experiment. Detection limits of <2.4×10⁻¹⁵ g ⁴⁰K/g LS, <5.5×10⁻¹⁵ g Th/g LS, and <8×10⁻¹⁵ g U/g LS have been achieved.
    Comment: 16 pages, 3 figures; accepted for publication in Nuclear Instruments and Methods.
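    These mass-based limits are often translated into decay rates for low-background work. Below is a minimal back-of-the-envelope sketch, not from the paper, converting the quoted limits into Bq per kg of scintillator; the elemental Th and U limits are approximated by their dominant natural isotopes, and all nuclide data are standard textbook values rather than figures from the authors.

```python
# Hypothetical conversion of the quoted mass-based detection limits into
# decay rates per kilogram of liquid scintillator (LS). Nuclide data are
# standard textbook values, not taken from the paper.
import math

N_A = 6.022e23    # Avogadro's number [1/mol]
YEAR = 3.156e7    # seconds per year

def specific_activity(half_life_yr, molar_mass):
    """Decays per second per gram of a pure nuclide [Bq/g]."""
    decay_const = math.log(2) / (half_life_yr * YEAR)   # [1/s]
    return decay_const * N_A / molar_mass

# (nuclide, half-life [yr], molar mass [g/mol], detection limit [g/g LS])
limits = [
    ("K-40",   1.248e9,  40.0,  2.4e-15),
    ("Th-232", 1.405e10, 232.0, 5.5e-15),   # ~all natural Th by mass
    ("U-238",  4.468e9,  238.0, 8.0e-15),   # ~99.3% of natural U by mass
]

for name, t_half, mass, g_per_g in limits:
    bq_per_kg = specific_activity(t_half, mass) * g_per_g * 1e3
    print(f"{name}: <{g_per_g:.1e} g/g LS  ->  <{bq_per_kg:.1e} Bq/kg LS")
```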

    Combinatorial Bounds and Characterizations of Splitting Authentication Codes

    We present several generalizations of results for splitting authentication codes by studying the aspect of multi-fold security. As the two primary results, we prove a combinatorial lower bound on the number of encoding rules and a combinatorial characterization of optimal splitting authentication codes that are multi-fold secure against spoofing attacks. The characterization is based on a new type of combinatorial design, which we introduce and for which we give basic necessary existence conditions.
    Comment: 13 pages; to appear in "Cryptography and Communications".
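    The objects being counted can be made concrete. In the standard model, an encoding rule (key) maps each source state to a set of messages, and "splitting" means some of these sets have more than one element. The sketch below is a hypothetical toy code, not a construction from the paper; it brute-forces the impersonation (lowest-order spoofing) probability under uniformly chosen keys.

```python
# Toy splitting authentication code (hypothetical, not from the paper).
# Keys map each source state to a *set* of valid messages; a set of
# size > 1 is what makes the code "splitting".
from itertools import chain

toy_code = {
    "e1": {0: {0, 1}, 1: {2}},
    "e2": {0: {2, 3}, 1: {4}},
    "e3": {0: {4, 5}, 1: {0}},
}

messages = set(chain.from_iterable(
    msgs for rule in toy_code.values() for msgs in rule.values()))

def impersonation_prob(code, msgs):
    """Best chance that a single injected message is accepted,
    assuming the encoding rule is chosen uniformly at random."""
    def n_accepting(m):
        return sum(1 for rule in code.values()
                   if any(m in valid for valid in rule.values()))
    return max(n_accepting(m) for m in msgs) / len(code)

print(f"P_I = {impersonation_prob(toy_code, messages):.3f}")   # 2/3 here
```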

    A survey on feature weighting based K-Means algorithms

    This is a pre-copyedited, author-produced PDF of an article accepted for publication in the Journal of Classification [de Amorim, R. C., 'A survey on feature weighting based K-Means algorithms', Journal of Classification, Vol. 33(2): 210-242, August 25, 2016]. Subject to embargo until 25 August 2017. The final publication is available at Springer via http://dx.doi.org/10.1007/s00357-016-9208-4. © Classification Society of North America 2016.
    In a real-world data set there is always the possibility, rather high in our opinion, that different features may have different degrees of relevance. Most machine learning algorithms deal with this fact by either selecting or deselecting features in the data-preprocessing phase. However, we maintain that even among relevant features there may be different degrees of relevance, and this should be taken into account during the clustering process. With over 50 years of history, K-Means is arguably the most popular partitional clustering algorithm there is. The first K-Means-based clustering algorithm to compute feature weights was designed just over 30 years ago. Various such algorithms have been designed since, but there has not been, to our knowledge, a survey integrating empirical evidence of cluster-recovery ability, common flaws, and possible directions for future research. This paper elaborates on the concept of feature weighting and addresses these issues by critically analysing some of the most popular, or innovative, feature-weighting mechanisms based on K-Means.
    Peer reviewed. Final Accepted Version.
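    As a concrete illustration of the family of algorithms surveyed, the sketch below implements one widely cited weighting scheme in the spirit of Huang et al.'s Weighted K-Means, where each feature's weight is re-estimated every iteration from its within-cluster dispersion. The exponent β, the toy data, and all parameter choices are illustrative assumptions, not prescriptions from the survey.

```python
# Minimal sketch of a feature-weighting K-Means, in the spirit of
# Huang et al.'s W-k-means. Hyperparameters and data are illustrative.
import numpy as np

def weighted_kmeans(X, k, beta=2.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, k, replace=False)]
    w = np.full(d, 1.0 / d)                      # feature weights, sum to 1
    for _ in range(n_iter):
        # 1. assign points using weighted squared Euclidean distance
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2 * w**beta).sum(-1)
        labels = dist.argmin(1)
        # 2. recompute centroids as cluster means
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
        # 3. weights from per-feature within-cluster dispersion:
        #    w_j proportional to D_j^(-1/(beta-1)), normalized (beta > 1)
        D = np.array([((X[:, f] - centers[labels, f]) ** 2).sum()
                      for f in range(d)]) + 1e-12
        w = (1.0 / D) ** (1.0 / (beta - 1.0))
        w /= w.sum()
    return labels, centers, w

# toy data: feature 0 carries the cluster structure, feature 1 is noise
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], [0.3, 3.0], (50, 2)),
               rng.normal([5, 0], [0.3, 3.0], (50, 2))])
labels, centers, w = weighted_kmeans(X, k=2)
print("learned feature weights:", w)   # weight on feature 0 should dominate
```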

    Recognizing Treelike k-Dissimilarities

    A k-dissimilarity D on a finite set X, |X| >= k, is a map from the set of size-k subsets of X to the real numbers. Such maps naturally arise from edge-weighted trees T with leaf-set X: given a subset Y of X of size k, D(Y) is defined to be the total length of the smallest subtree of T with leaf-set Y. In the case k = 2, it is well known that 2-dissimilarities arising in this way can be characterized by the so-called "4-point condition". However, for k > 2, Pachter and Speyer recently posed the following question: given an arbitrary k-dissimilarity, how do we test whether this map comes from a tree? In this paper, we provide an answer to this question, showing that for k >= 3 a k-dissimilarity on a set X arises from a tree if and only if its restriction to every 2k-element subset of X arises from some tree, and that 2k is the least possible subset size to ensure that this is the case. As a corollary, we show that there exists a polynomial-time algorithm to determine when a k-dissimilarity arises from a tree. We also give a 6-point condition for determining when a 3-dissimilarity arises from a tree, similar to the aforementioned 4-point condition.
    Comment: 18 pages, 4 figures.
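    For the k = 2 case mentioned above, the 4-point condition is easy to check by brute force: for every quadruple of points, the two largest of the three pairwise sums must coincide. A minimal sketch, with a toy metric assumed purely for illustration:

```python
# Brute-force check of the classical 4-point condition: d comes from an
# edge-weighted tree iff, for every quadruple {x, y, z, w}, the two
# largest of d(x,y)+d(z,w), d(x,z)+d(y,w), d(x,w)+d(y,z) are equal.
from itertools import combinations

def is_treelike(points, d, tol=1e-9):
    for x, y, z, w in combinations(points, 4):
        sums = sorted([d[x, y] + d[z, w],
                       d[x, z] + d[y, w],
                       d[x, w] + d[y, z]])
        if abs(sums[2] - sums[1]) > tol:   # two largest must coincide
            return False
    return True

# toy example: distances along the path a - b - c - e with unit edges
pts = ["a", "b", "c", "e"]
coords = {"a": 0, "b": 1, "c": 2, "e": 3}
dd = {}
for p, q in combinations(pts, 2):
    dd[p, q] = dd[q, p] = abs(coords[p] - coords[q])
print(is_treelike(pts, dd))   # True: path metrics are tree metrics
```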

    E-commerce transactions in a virtual environment: Virtual transactions

    E-commerce is now a fundamental method of doing business: for a firm to be trading at all in the modern marketplace, it must have some element of on-line presence. Coupled with this is the explosion in the "population" of Massively Multiplayer On-line Role Playing Games and other shared virtual environments. Many suggest this will lead to a further dimension of commerce: virtual commerce. We discuss here the issues, current roadblocks, and present state of an e-commerce transaction carried out completely within a virtual environment: a virtual transaction. Although technically such transactions are in a sense trivial, they raise many other issues in complex ways, making v-transactions a highly interesting cross-disciplinary subject. We also discuss the social, ethical, and regulatory implications of such v-transactions for the virtual communities in these environments, how their implementation affects the nature and management of a virtual environment, and how they represent a fundamental merging of the real and virtual worlds for the purpose of commerce. We highlight the minimal set of features a v-transaction-capable virtual environment requires, suggest a model of how, in the medium term, such transactions could be carried out via a methodology we call click-through, and argue that the developers of such environments will need to take account of the multi-modal behavior of their users, as well as elements of economics and political science, in order to fully realize the commercial potential of the v-transaction.
    © 2012 Springer Science+Business Media, LLC

    A simulated annealing methodology for clusterwise linear regression

    In many regression applications, users are often faced with difficulties due to nonlinear relationships, heterogeneous subjects, or time series which are best represented by splines. In such applications, two or more regression functions are often necessary to best summarize the underlying structure of the data. Unfortunately, in most cases it is not known a priori which subset of observations should be approximated with which specific regression function. This paper presents a methodology which simultaneously clusters observations into a preset number of groups and estimates the corresponding regression functions' coefficients, all to optimize a common objective function. We describe the problem and discuss related procedures. A new simulated annealing-based methodology is described, as well as program options to accommodate overlapping or nonoverlapping clustering, replications per subject, univariate or multivariate dependent variables, and constraints imposed on cluster membership. Extensive Monte Carlo analyses are reported which investigate the overall performance of the methodology. A consumer psychology application is provided concerning a conjoint analysis investigation of consumer satisfaction determinants. Finally, other applications and extensions of the methodology are discussed.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/45745/1/11336_2005_Article_BF02296405.pdf
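    The core loop of such a procedure is compact. The sketch below is a generic simulated annealing pass for clusterwise linear regression: reassign one observation at a time, refit per-cluster least squares, and accept or reject with the Metropolis rule. The move set, cooling schedule, and toy data are illustrative assumptions, not the authors' exact options (e.g., overlapping clusters and membership constraints are not handled here).

```python
# Generic simulated annealing for clusterwise linear regression:
# search over cluster assignments while fitting per-cluster OLS.
import numpy as np

def sse(X, y, idx):
    """Total squared error of per-cluster least-squares fits."""
    total = 0.0
    for j in np.unique(idx):
        Xi, yi = X[idx == j], y[idx == j]
        beta = np.linalg.lstsq(Xi, yi, rcond=None)[0]
        total += ((yi - Xi @ beta) ** 2).sum()
    return total

def anneal(X, y, k, T0=1.0, cool=0.995, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.integers(0, k, n)            # random initial partition
    cost, T = sse(X, y, idx), T0
    for _ in range(steps):
        i = rng.integers(n)                # move: reassign one observation
        old = idx[i]
        idx[i] = rng.integers(k)
        new_cost = sse(X, y, idx)
        # Metropolis rule: accept improvements, sometimes accept worse
        if new_cost <= cost or rng.random() < np.exp((cost - new_cost) / T):
            cost = new_cost
        else:
            idx[i] = old                   # reject: undo the move
        T *= cool                          # geometric cooling
    return idx, cost

# toy data: two linear regimes, y = 2x and y = -2x + 3, plus noise
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 120)
X = np.column_stack([np.ones_like(x), x])
y = np.where(np.arange(120) < 60, 2 * x, -2 * x + 3) + rng.normal(0, 0.1, 120)
idx, cost = anneal(X, y, k=2)
print("final SSE:", round(cost, 3))
```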

    On plexus representation of dissimilarities

    Correspondence analysis has found widespread application in analysing vegetation gradients. However, it is not clear how robust it is in situations where structures other than a simple gradient exist. The introduction of instrumental variables in canonical correspondence analysis does not avoid these difficulties. In this paper I propose to examine some simple methods based on the notion of the plexus (sensu McIntosh), where graphs or networks are used to display some of the structure of the data so that an informed choice of models is possible. I show that two different classes of plexus model are available. These classes are distinguished by the use in one case of a global Euclidean model to obtain a well-separated pair decomposition (WSPD) of a set of points, which implicitly involves all dissimilarities, while in the other a Riemannian view is taken and emphasis is placed locally, i.e., on small dissimilarities. I show an example of each of these classes applied to vegetation data.
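    The basic plexus device is simply a graph on the sites whose edges join the most similar pairs. The sketch below thresholds a dissimilarity matrix to produce such a graph, loosely corresponding to the local emphasis of the second class described above; the matrix and threshold are illustrative assumptions, not data from the paper.

```python
# Build a plexus-style graph from a dissimilarity matrix by keeping
# only edges with small dissimilarities (illustrative sketch).
import numpy as np

def plexus_edges(D, threshold):
    """Edges (i, j, d_ij) for all pairs with dissimilarity <= threshold."""
    n = D.shape[0]
    return [(i, j, D[i, j])
            for i in range(n) for j in range(i + 1, n)
            if D[i, j] <= threshold]

# toy dissimilarity matrix for 5 sites (e.g., 1 - Jaccard similarity)
D = np.array([[0.0, 0.2, 0.8, 0.9, 0.7],
              [0.2, 0.0, 0.7, 0.8, 0.9],
              [0.8, 0.7, 0.0, 0.1, 0.3],
              [0.9, 0.8, 0.1, 0.0, 0.2],
              [0.7, 0.9, 0.3, 0.2, 0.0]])

for i, j, d in plexus_edges(D, threshold=0.3):
    print(f"site {i} -- site {j}  (d = {d})")
```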

    A spatial interaction model for deriving joint space maps of bundle compositions and market segments from pick-any/J data: An application to new product options

    We propose an approach for deriving joint space maps of bundle compositions and market segments from three-way (e.g., consumers × product options/benefits/features × usage situations/scenarios/time periods) pick-any/J data. The proposed latent structure multidimensional scaling procedure simultaneously extracts market segment and product option positions in a joint space map such that the closer a product option is to a particular segment, the higher the likelihood of its being chosen by that segment. A segment-level threshold parameter is estimated that spatially delineates the bundle of product options predicted to be chosen by each segment. Estimates of the probability of each consumer belonging to the derived segments are obtained simultaneously. Explicit treatment of product and consumer characteristics is allowed via optional model reparameterizations of the product option locations and segment memberships. We illustrate the use of the proposed approach with an actual commercial application involving pick-any/J data gathered by a major hi-tech firm for some 23 advanced technological options for new automobiles.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/47207/1/11002_2004_Article_BF00434905.pdf
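    The geometric rule at the heart of the model is easy to state: once segments and options are embedded in a joint space, a segment's predicted bundle is the set of options within its threshold radius. The sketch below illustrates only that decision rule with made-up coordinates; in the paper the positions, thresholds, and segment memberships are all estimated from the pick-any/J data.

```python
# Threshold rule for bundle prediction in a joint space map.
# All positions and radii below are hypothetical, for illustration only.
import numpy as np

segments = {"segment A": (0.0, 0.0), "segment B": (4.0, 1.0)}
options = {"opt1": (0.5, 0.2), "opt2": (3.8, 1.1),
           "opt3": (0.2, -0.4), "opt4": (2.0, 2.0)}
radius = {"segment A": 1.0, "segment B": 1.0}   # segment-level thresholds

for seg, s_pos in segments.items():
    # an option is in the segment's bundle iff it lies within the radius
    bundle = [name for name, o_pos in options.items()
              if np.linalg.norm(np.subtract(s_pos, o_pos)) <= radius[seg]]
    print(f"{seg}: predicted bundle = {bundle}")
```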
