
    Proof of a local antimagic conjecture

    An antimagic labelling of a graph G is a bijection f : E(G) → {1, …, |E(G)|} such that the vertex sums S_v = Σ_{e ∋ v} f(e) are pairwise distinct. A well-known conjecture of Hartsfield and Ringel (1994) states that every connected graph other than K_2 admits an antimagic labelling. Recently, two sets of authors (Arumugam, Premalatha, Bača & Semaničová-Feňovčíková (2017), and Bensmail, Senhaji & Lyngsie (2017)) independently introduced the weaker notion of a local antimagic labelling, in which only adjacent vertices must receive distinct sums. Both sets of authors conjectured that every connected graph other than K_2 admits a local antimagic labelling. We prove this latter conjecture using the probabilistic method. Thus the local antimagic chromatic number, introduced by Arumugam et al., is well-defined for every connected graph other than K_2.
    Comment: Final version for publication in DMTCS. Changes from the previous version are formatting to journal style and correction of two minor typographical errors.
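    As a concrete illustration of the definition above, a labelling can be verified directly by computing the vertex sums. The sketch below (function and variable names are hypothetical helpers, not code from the paper) brute-forces a local antimagic labelling of the triangle K_3.

```python
from itertools import permutations

def is_local_antimagic(vertices, edges, labels):
    """Check whether `labels` (one label per edge, a bijection onto
    {1, ..., |E|}) yields distinct sums S_v on every pair of
    adjacent vertices."""
    s = {v: 0 for v in vertices}
    for (u, v), lab in zip(edges, labels):
        s[u] += lab
        s[v] += lab
    return all(s[u] != s[v] for u, v in edges)

# Brute-force search over all labellings of the triangle K_3.
vertices = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]
witness = next(p for p in permutations(range(1, len(edges) + 1))
               if is_local_antimagic(vertices, edges, p))
```

    For the triangle, the labelling (1, 2, 3) already gives vertex sums 4, 3, 5, so the condition holds; the theorem proved here guarantees such a witness exists for every connected graph other than K_2.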

    Investigation of interference models for RFID systems

    The reader-to-reader collision in an RFID system is a challenging problem for communications technology. In order to model the interference between RFID readers, different interference models have been proposed, mainly based on two approaches: single and additive interference. The former only considers the interference from one reader within a certain range, whereas the latter takes into account the sum of all simultaneous interference contributions in order to emulate a more realistic behavior. Although the difference between the two approaches has been theoretically analyzed in previous research, their effects on the estimated performance of reader-to-reader anti-collision protocols have not yet been investigated. In this paper, the influence of the interference model on the anti-collision protocols is studied by simulating a representative state-of-the-art protocol. The results presented in this paper highlight that the use of additive models, although more computationally intensive, is mandatory to improve the performance of anti-collision protocols.
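    The distinction between the two approaches can be made concrete with a toy signal-to-interference ratio (SIR) computation; the function names and power values below are illustrative assumptions, not from the paper.

```python
def sir_single(signal_power, interferer_powers):
    """Single-interference model: only the strongest interferer
    within range is considered."""
    return signal_power / max(interferer_powers)

def sir_additive(signal_power, interferer_powers):
    """Additive model: all simultaneous interferers are summed,
    emulating a more realistic channel."""
    return signal_power / sum(interferer_powers)

# Three simultaneous interfering readers.
powers = [0.10, 0.05, 0.05]
single = sir_single(1.0, powers)      # ~10.0
additive = sir_additive(1.0, powers)  # ~5.0
```

    Since the additive model never reports a higher SIR than the single model, a protocol evaluated only under the single model can overestimate its own robustness, consistent with the paper's conclusion that additive models are needed.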

    Decoding of EEG signals reveals non-uniformities in the neural geometry of colour

    The idea of colour opponency maintains that colour vision arises through the comparison of two chromatic mechanisms, red versus green and yellow versus blue. The four unique hues, red, green, blue, and yellow, are assumed to appear at the null points of these two chromatic systems. Here we hypothesise that, if unique hues represent a tractable cortical state, they should elicit more robust activity compared to other, non-unique hues. We use a spatiotemporal decoding approach to report that electroencephalographic (EEG) responses carry robust information about the tested isoluminant unique hues within a 100-350 ms window from stimulus onset. Decoding is possible in both passive and active viewing tasks, but is compromised when concurrent high luminance contrast is added to the colour signals. For large hue differences, the efficiency of hue decoding can be predicted by mutual distance in a nominally uniform perceptual colour space. However, for small perceptual neighbourhoods around unique hues, the encoding space shows pivotal non-uniformities, suggesting that anisotropies in neurometric hue spaces may reflect perceptual unique hues.
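    The "mutual distance in a nominally uniform perceptual colour space" used here as a predictor can be illustrated with the simplest such metric, Euclidean distance in CIELAB (ΔE*ab, CIE 1976); this is a generic example, not necessarily the exact space used in the study.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 colour difference: Euclidean distance between two
    (L*, a*, b*) triples in the nominally uniform CIELAB space."""
    return math.dist(lab1, lab2)

# Two isoluminant hues (equal L*) differing only in chromaticity:
# the (3, 4) offset forms a 3-4-5 triangle, so the distance is 5.0.
d = delta_e_ab((50.0, 0.0, 0.0), (50.0, 3.0, 4.0))
```

    For large hue separations such a distance predicts decoding efficiency well, whereas near the unique hues the study finds the relationship breaks down.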

    SparsePak: A Formatted Fiber Field-Unit for The WIYN Telescope Bench Spectrograph. II. On-Sky Performance

    We present a performance analysis of SparsePak and the WIYN Bench Spectrograph for precision studies of stellar and ionized gas kinematics of external galaxies. We focus on spectrograph configurations with echelle and low-order gratings yielding spectral resolutions of ~10,000 between 500-900 nm. These configurations are of general relevance to the spectrograph performance. Benchmarks include spectral resolution, sampling, vignetting, scattered light, and an estimate of the system absolute throughput. Comparisons are made to other, existing fiber feeds on the WIYN Bench Spectrograph. Vignetting and relative throughput are found to agree with a geometric model of the optical system. An aperture-correction protocol for spectrophotometric standard-star calibrations has been established using independent WIYN imaging data and the unique capabilities of the SparsePak fiber array. The WIYN point-spread function is well fit by a Moffat profile with a constant power-law outer slope of index -4.4. We use SparsePak commissioning data to debunk a long-standing myth concerning sky subtraction with fibers: by properly treating the multi-fiber data as a "long-slit" it is possible to achieve precision sky subtraction with signal-to-noise performance as good as or better than conventional long-slit spectroscopy. No beam-switching is required, and hence the method is efficient. Finally, we give several examples of science measurements which SparsePak now makes routine. These include Hα velocity fields of low surface-brightness disks, gas and stellar velocity fields of nearly face-on disks, and stellar absorption-line profiles of galaxy disks at spectral resolutions of ~24,000.
    Comment: To appear in ApJ Supp (Feb 2005); 19 pages text; 7 tables; 27 figures (embedded); high-resolution version at http://www.astro.wisc.edu/~mab/publications/spkII_pre.pd
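    The quoted PSF fit can be written down explicitly: a Moffat profile I(r) = I0 [1 + (r/α)²]^(−β) falls off as r^(−2β) at large radius, so the reported outer power-law index of −4.4 corresponds to β ≈ 2.2. The scale α below is an arbitrary illustrative choice, not a fitted value from the paper.

```python
import math

def moffat(r, alpha, beta=2.2, i0=1.0):
    """Moffat profile I(r) = i0 * (1 + (r/alpha)**2) ** (-beta).
    The outer wings fall as r**(-2*beta), so beta = 2.2 reproduces
    the -4.4 power-law outer slope quoted for the WIYN PSF."""
    return i0 * (1.0 + (r / alpha) ** 2) ** (-beta)

# Measure the log-log slope of the far wings (r >> alpha).
slope = (math.log(moffat(1000.0, 1.0)) - math.log(moffat(100.0, 1.0))) \
        / (math.log(1000.0) - math.log(100.0))  # approaches -2*beta = -4.4
```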

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    Combining shape and color. A bottom-up approach to evaluate object similarities

    The objective of the present work is to develop a bottom-up approach to estimate the similarity between two unknown objects. Given a set of digital images, we want to identify the main objects and to determine whether they are similar or not. In the last decades many object recognition and classification strategies, driven by higher-level activities, have been successfully developed. The peculiarity of this work, instead, is the attempt to work without any training phase or a priori knowledge about the objects or their context. Indeed, if we suppose to be in an unstructured and completely unknown environment, we usually have to deal with novel objects never seen before; under these hypotheses, it would be very useful to define some kind of similarity among the instances under analysis (even if we do not know which category they belong to). To obtain this result, we start by observing that human beings use a lot of information and analyze very different aspects to achieve object recognition: shape, position, color and so on. Hence we try to reproduce part of this process, combining different methodologies (each working on a specific characteristic) to obtain a more meaningful idea of similarity. Mainly inspired by the human conception of representation, we identify two main characteristics, which we call the implicit and explicit models. The term "explicit" is used to account for the main traits of what, in the human representation, connotes a principal source of information regarding a category, a sort of visual synecdoche (corresponding to the shape); the term "implicit", on the other hand, accounts for the object rendered by shadows and lights, colors and volumetric impression, a sort of visual metonymy (corresponding to the chromatic characteristics). During the work, we had to face several problems and we tried to define specific solutions.
    In particular, our contributions are about:
    - defining a bottom-up approach to image segmentation (which does not rely on any a priori knowledge);
    - combining different features to evaluate object similarity (focusing in particular on shape and color);
    - defining a generic distance (similarity) measure between objects (without any attempt to identify the category they belong to);
    - analyzing the consequences of using the number of modes as an estimate of the number of mixture components (in the Expectation-Maximization algorithm).
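    The last contribution, using the number of modes to choose the number of mixture components for EM, can be sketched with a simple histogram-based mode counter. The bin count, smoothing window, and toy data below are illustrative assumptions, not the authors' exact procedure.

```python
def count_modes(samples, bins=20, bandwidth=1):
    """Count local maxima of a smoothed histogram, as a cheap proxy
    for the number of mixture components to hand to EM."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for x in samples:
        hist[min(int((x - lo) / width), bins - 1)] += 1
    # Moving-average smoothing to suppress spurious single-bin peaks.
    sm = []
    for i in range(bins):
        window = hist[max(0, i - bandwidth):i + bandwidth + 1]
        sm.append(sum(window) / len(window))
    # A mode is a bin that rises from the left and does not rise to the right.
    return sum(
        1 for i in range(bins)
        if (i == 0 or sm[i - 1] < sm[i]) and (i == bins - 1 or sm[i] >= sm[i + 1])
    )

# Two well-separated clusters -> two modes -> fit a 2-component mixture.
k = count_modes([1.0, 1.1, 1.2, 0.9, 0.8, 5.0, 5.1, 4.9, 5.2, 4.8])
```

    The resulting count k would then be passed to the EM algorithm as the number of components of the mixture.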

    Minors in expanding graphs

    Extending several previous results, we obtain nearly tight estimates on the maximum size of a clique minor in various classes of expanding graphs. These results can be used to show that graphs without short cycles and other H-free graphs contain large clique minors, resolving some open questions in this area.

    Study of Target Enhancement Algorithms to Counter the Hostile Nuclear Environment

    A necessary requirement of strategic defense is the detection of incoming nuclear warheads in an environment that may include nuclear detonations of undetected or missed target warheads. A computer model is described which simulates incoming warheads as distant endoatmospheric targets. A model of the expected electromagnetic noise present in a nuclear environment is developed using estimates of the probability distributions. Predicted atmospheric effects are also included. Various image enhancement algorithms, both linear and nonlinear, are discussed with regard to their anticipated ability to suppress the noise and atmospheric effects of the nuclear environment. These algorithms are then tested, using the combined target and noise models, and evaluated in terms of the stated figures of merit.