
    Balancing Conflict and Cost in the Selection of Negotiation Opponents

    Within the context of agent-to-agent purchase negotiations, a problem that has received little attention is that of identifying negotiation opponents in situations where the consequences of conflict and the ability to access resources vary dynamically. Such dynamism poses a number of problems that make it difficult to automate the identification of appropriate opponents. To that end, this paper describes a motivation-based opponent selection mechanism used by a buyer-agent to evaluate and select between an already identified set of seller-agents. Sellers are evaluated in terms of the amount of conflict they are expected to bring to a negotiation and the cost a negotiation with them is expected to entail. The mechanism allows trade-offs to be made between conflict and cost minimisation, and experimental results show the effectiveness of the approach.
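    The conflict/cost trade-off described above can be sketched as a weighted scoring rule. This is an illustrative sketch only: the seller attributes, normalisation, and linear weighting are assumptions, not the paper's actual motivation-based mechanism.

```python
# Hypothetical sketch: a buyer-agent picks the seller minimising a weighted
# combination of expected conflict and expected cost. The weighting scheme
# and the example values are invented for illustration.

def select_opponent(sellers, conflict_weight=0.5):
    """Return the id of the seller with the lowest weighted score.

    `sellers` maps seller id -> (expected_conflict, expected_cost),
    both assumed normalised to [0, 1]. `conflict_weight` in [0, 1]
    controls the trade-off between conflict and cost minimisation.
    """
    cost_weight = 1.0 - conflict_weight

    def score(item):
        _, (conflict, cost) = item
        return conflict_weight * conflict + cost_weight * cost

    best_id, _ = min(sellers.items(), key=score)
    return best_id

sellers = {
    "s1": (0.2, 0.9),  # low conflict, high cost
    "s2": (0.7, 0.1),  # high conflict, low cost
    "s3": (0.3, 0.4),  # balanced
}
print(select_opponent(sellers, conflict_weight=0.5))  # -> s3
```

Shifting `conflict_weight` towards 1 favours low-conflict sellers such as `s1`; towards 0 it favours cheap negotiations such as `s2`.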

    Motivation-based selection of negotiation partners

    Negotiation is key to resolving conflicts, allocating resources and establishing cooperation in systems of self-interested agents. Often, an agent must select between different potential negotiation partners, and identifying which offers the best chance of a successful negotiation is a challenging task; poor selection of partners can result in failure or in inefficient outcomes. To that end, this paper describes a motivation-based mechanism to evaluate and select between negotiation candidates. This is achieved by a twofold process: first, acceptable candidates are identified using motivation-based thresholds on objective scoring measures; second, the importance of issues is considered, and expected performance measures are evaluated accordingly. The mechanism is described and empirically evaluated.
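    The twofold process above (threshold filtering, then importance-weighted evaluation) can be sketched as follows. The issues, thresholds, and weights here are hypothetical placeholders; the paper derives its thresholds and measures from agent motivations.

```python
# Illustrative two-stage partner selection: stage 1 filters out candidates
# below a per-issue threshold; stage 2 ranks the survivors by an
# importance-weighted score. All values are invented for illustration.

def select_partner(candidates, thresholds, weights):
    """candidates: id -> {issue: score in [0, 1]}."""
    # Stage 1: keep only candidates meeting every threshold
    acceptable = {
        cid: scores for cid, scores in candidates.items()
        if all(scores[i] >= t for i, t in thresholds.items())
    }
    if not acceptable:
        return None  # no acceptable candidate

    # Stage 2: rank by issue-importance-weighted score
    def weighted(item):
        _, scores = item
        return sum(weights[i] * s for i, s in scores.items())

    best, _ = max(acceptable.items(), key=weighted)
    return best

candidates = {
    "a": {"price": 0.9, "reliability": 0.3},
    "b": {"price": 0.6, "reliability": 0.8},
    "c": {"price": 0.7, "reliability": 0.7},
}
thresholds = {"reliability": 0.5}              # "a" is filtered out in stage 1
weights = {"price": 0.3, "reliability": 0.7}   # reliability matters more
print(select_partner(candidates, thresholds, weights))  # -> b
```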

    How much baseline correction do we need in ERP research? Extended GLM model can replace baseline correction while lifting its limits

    Baseline correction plays an important role in past and current methodological debates in ERP research (e.g. the Tanner vs. Maess debate in the Journal of Neuroscience Methods), serving as a potential alternative to strong highpass filtering. However, the very assumptions that underlie traditional baseline correction also undermine it, making it statistically unnecessary, even undesirable, and reducing the signal-to-noise ratio. Including the baseline interval as a predictor in a GLM-based statistical approach allows the data to determine how much baseline correction is needed, including both full traditional correction and no correction as subcases, while reducing the variance in the residual error term and thus potentially increasing statistical power.
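    The core idea can be sketched with simulated data: instead of subtracting the baseline-window mean from each trial, the baseline mean enters the design matrix as a regressor, and its fitted coefficient tells you how much correction the data support. The simulation below is a minimal sketch under assumed parameters, not the paper's actual analysis.

```python
import numpy as np

# Sketch: fit a GLM with (intercept, condition, baseline) as predictors.
# A baseline coefficient near 1 corresponds to full traditional baseline
# correction; near 0 corresponds to none. All quantities are simulated.

rng = np.random.default_rng(0)
n_trials = 200
baseline = rng.normal(0.0, 1.0, n_trials)    # mean amplitude in baseline window
condition = rng.integers(0, 2, n_trials)     # two experimental conditions
drift = 0.8 * baseline                       # slow noise shared with baseline
erp = 1.5 * condition + drift + rng.normal(0.0, 0.5, n_trials)

# Design matrix: intercept, condition effect, baseline predictor
X = np.column_stack([np.ones(n_trials), condition, baseline])
beta, *_ = np.linalg.lstsq(X, erp, rcond=None)

# beta[1] recovers the condition effect (~1.5); beta[2] recovers the
# drift coupling (~0.8), i.e. the data-driven degree of baseline correction.
print(beta)
```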

    Object Segmentation in Images using EEG Signals

    This paper explores the potential of brain-computer interfaces in segmenting objects from images. Our approach is centered around designing an effective method for displaying the image parts to the users such that they generate measurable brain reactions. When an image region, specifically a block of pixels, is displayed, we estimate the probability of the block containing the object of interest using a score based on EEG activity. After several such blocks are displayed, the resulting probability map is binarized and combined with the GrabCut algorithm to segment the image into object and background regions. This study shows that BCI and simple EEG analysis are useful in locating object boundaries in images.
    Comment: preprint version, prior to submission for peer review, of the paper accepted to the 22nd ACM International Conference on Multimedia (November 3-7, 2014, Orlando, Florida, USA), High Risk High Reward session. 10 pages.
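    The binarization step can be sketched as thresholding a block-wise probability map. This toy version omits both the EEG scoring and the GrabCut refinement (in practice `cv2.grabCut` would be seeded with such a mask); the block scores below are invented for illustration.

```python
import numpy as np

# Toy sketch: each displayed pixel block has an EEG-derived probability of
# containing the object; thresholding the reassembled map yields a seed
# mask for the final segmentation. Scores here are invented.

def binarize_probability_map(scores, grid_shape, threshold=0.5):
    """scores: one probability per block, in row-major block order."""
    prob_map = np.asarray(scores, dtype=float).reshape(grid_shape)
    return prob_map >= threshold

scores = [0.1, 0.2, 0.1,
          0.2, 0.9, 0.8,
          0.1, 0.7, 0.2]
mask = binarize_probability_map(scores, (3, 3))
print(mask.astype(int))  # object blocks cluster in the image centre
```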

    Competition and cooperation: aspects of dynamics in sandpiles

    In this article, we review some of our approaches to granular dynamics, now well known to consist of both fast and slow relaxational processes. In the first case, grains typically compete with each other, while in the second, they cooperate. A typical result of cooperation is the formation of stable bridges, signatures of spatiotemporal inhomogeneities; we review their geometrical characteristics and compare theoretical results with those of independent simulations. Cooperative excitations due to local density fluctuations are also responsible for relaxation at the angle of repose; the competition between these fluctuations and external driving forces can, on the other hand, result in a (rare) collapse of the sandpile to the horizontal. Both these features are present in a theory reviewed here. An arena where the effects of cooperation versus competition are felt most keenly is granular compaction; we review here a random graph model in which three-spin interactions are used to model compaction under tapping. The compaction curve shows distinct regions where 'fast' and 'slow' dynamics apply, separated by what we have called the single-particle relaxation threshold. In the final section of this paper, we explore the effect of shape -- jagged vs. regular -- on the compaction of packings near their jamming limit. One of our major results is an entropic landscape that, while microscopically rough, manifests Edwards' flatness at a macroscopic level. Another major result is that of surface intermittency under low-intensity shaking.
    Comment: 36 pages, 23 figures, minor corrections.

    Statistics of quantum transmission in one dimension with broad disorder

    We study the statistics of quantum transmission through a one-dimensional disordered system modelled by a sequence of independent scattering units. Each unit is characterized by its length and by its action, which is proportional to the logarithm of the transmission probability through this unit. Unit actions and lengths are independent random variables, with a common distribution that is either narrow or broad. This investigation is motivated by results on disordered systems with non-stationary random potentials whose fluctuations grow with distance. In the statistical ensemble at fixed total sample length, four phases can be distinguished, according to the values of the indices characterizing the distribution of the unit actions and lengths. The sample action, which is proportional to the logarithm of the conductance across the sample, is found to obey a fluctuating scaling law, and therefore to be non-self-averaging, in three of the four phases. According to the values of the two above-mentioned indices, the sample action may typically grow less rapidly than linearly with the sample length (underlocalization), more rapidly than linearly (superlocalization), or linearly but with non-trivial sample-to-sample fluctuations (fluctuating localization).
    Comment: 26 pages, 4 figures, 1 table.
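    The ensemble described above can be simulated in a few lines: a sample is a chain of units with random lengths and actions, and the sample action (proportional to minus the log-conductance) is the sum of unit actions accumulated until the total length reaches L. The distributions below (shifted Pareto tails with adjustable indices) are a sketch chosen for illustration, not the paper's exact model.

```python
import numpy as np

# Sketch: draw independent (length, action) pairs per unit and accumulate
# actions until the chain reaches total length L. With narrow (finite-mean,
# finite-variance) distributions the sample action grows linearly in L and
# self-averages; broad tails would change that scaling.

def sample_action(L, length_index, action_index, rng):
    total_len = 0.0
    action = 0.0
    while total_len < L:
        total_len += rng.pareto(length_index) + 1.0   # unit length >= 1
        action += rng.pareto(action_index) + 1.0      # unit action >= 1
    return action

rng = np.random.default_rng(1)
# Narrow case (tail index 3: finite mean and variance for both variables):
actions = [sample_action(1000.0, 3.0, 3.0, rng) for _ in range(50)]
print(np.mean(actions))  # close to L, reflecting linear, self-averaging growth
```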

    Spectral properties of zero temperature dynamics in a model of a compacting granular column

    The compacting of a column of grains has been studied using a one-dimensional Ising model with long-range directed interactions, in which down and up spins represent orientations of the grain having or not having an associated void. When the column is not shaken (zero 'temperature'), the motion becomes highly constrained, and under most circumstances we find that the generator of the stochastic dynamics assumes an unusual form: many eigenvalues become degenerate, but the associated multi-dimensional invariant spaces have but a single eigenvector. There is no spectral expansion, and a Jordan form must be used. Many properties of the dynamics are established here analytically; some are not. General issues associated with the Jordan form are also taken up.
    Comment: 34 pages, 4 figures, 3 tables.
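    The spectral peculiarity described above (a degenerate eigenvalue with only one eigenvector, so no spectral expansion) is the defining property of a defective matrix. The smallest example is a 2x2 Jordan block; the matrix below is purely illustrative, not the model's actual stochastic generator.

```python
import numpy as np

# Toy illustration of a defective generator: a 2x2 Jordan block has a
# doubly degenerate eigenvalue but only one linearly independent
# eigenvector, so it cannot be diagonalised and admits no spectral
# (eigen-)expansion. The value of lam is arbitrary.

lam = -1.0
J = np.array([[lam, 1.0],
              [0.0, lam]])

eigvals, eigvecs = np.linalg.eig(J)
print(eigvals)  # both eigenvalues equal lam

# The returned "eigenvector" matrix is (numerically) singular:
rank = np.linalg.matrix_rank(eigvecs, tol=1e-8)
print(rank)     # 1 -- only one independent eigenvector
```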