6,463 research outputs found

    Spin-injection Hall effect in a planar photovoltaic cell

    Successful incorporation of the spin degree of freedom in semiconductor technology requires the development of a new paradigm allowing for a scalable, non-destructive electrical detection of the spin polarization of injected charge carriers as they propagate along the semiconducting channel. In this paper we report the observation of a spin-injection Hall effect (SIHE), which exploits the quantum-relativistic nature of spin-charge transport and which meets all of these key requirements for spin detection. The two-dimensional electron-hole gas photovoltaic cell we designed to observe the SIHE allows us to develop a quantitative microscopic theory of the phenomenon and to demonstrate its direct application in optoelectronics. We report an experimental realization of a non-magnetic spin-photovoltaic effect via the SIHE, rendering our device an electrical polarimeter which directly converts the degree of circular polarization of light to a voltage signal. Comment: 14 pages, 4 figures.

    First-order transition in small-world networks

    The small-world transition is a first-order transition at zero density $p$ of shortcuts, whereby the normalized shortest-path distance undergoes a discontinuity in the thermodynamic limit. On finite systems the apparent transition is shifted by $\Delta p \sim L^{-d}$. Equivalently, a "persistence size" $L^* \sim p^{-1/d}$ can be defined in connection with finite-size effects. Assuming $L^* \sim p^{-\tau}$, simple rescaling arguments imply that $\tau = 1/d$. We confirm this result by extensive numerical simulation in one to four dimensions, and argue that $\tau = 1/d$ implies that this transition is first-order. Comment: 4 pages, 3 figures. To appear in Europhysics Letters.
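
    The scaling form lends itself to a quick numerical check. Below is a minimal sketch (mine, not the authors' code) that estimates the normalized shortest-path distance on one-dimensional Newman-Watts rings using networkx; if $L^* \sim p^{-1/d}$ with $d = 1$, rows with equal $pL$ should show comparable values.

        # Sketch: data collapse for the 1-d small-world crossover.
        # Assumes networkx and numpy; sizes and densities are illustrative.
        import networkx as nx
        import numpy as np

        def normalized_distance(L, p, samples=10):
            """Mean shortest-path length divided by system size L."""
            vals = []
            for _ in range(samples):
                # Ring of L nodes, each joined to its 2 nearest neighbors,
                # plus random shortcuts added with probability p per edge.
                G = nx.newman_watts_strogatz_graph(L, 2, p)
                vals.append(nx.average_shortest_path_length(G) / L)
            return np.mean(vals)

        # If the persistence size scales as p**(-1/d) with d = 1, curves of
        # the normalized distance against p*L should collapse across sizes.
        for L in (200, 400, 800):
            for p in (0.005, 0.01, 0.02):
                print(L, p, p * L, round(normalized_distance(L, p), 4))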

    Settling Some Open Problems on 2-Player Symmetric Nash Equilibria

    Over the years, researchers have studied the complexity of several decision versions of Nash equilibrium in (symmetric) two-player games (bimatrix games). To the best of our knowledge, the last remaining open problem of this sort, stated by Papadimitriou in 2007, is the following: find a non-symmetric Nash equilibrium (NE) in a symmetric game. We show that this problem is NP-complete and that the problem of counting the number of non-symmetric NE in a symmetric game is #P-complete. In 2005, Kannan and Theobald defined the "rank of a bimatrix game" represented by matrices (A, B) to be rank(A+B) and asked whether a NE can be computed in rank 1 games in polynomial time. Observe that the rank 0 case is precisely the zero-sum case, for which a polynomial time algorithm follows from von Neumann's reduction of such games to linear programming. In 2011, Adsul et al. obtained an algorithm for rank 1 games; however, it does not solve the case of symmetric rank 1 games. We resolve this problem.
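
    The rank 0 reduction mentioned in the abstract is easy to make concrete. The sketch below (mine, not the paper's) solves a zero-sum bimatrix game (B = -A) by linear programming with scipy, following von Neumann's max-min formulation.

        # Sketch: the rank-0 (zero-sum) special case via linear programming.
        import numpy as np
        from scipy.optimize import linprog

        def zero_sum_value(A):
            """Max-min mixed strategy and game value for row-payoff matrix A."""
            m, n = A.shape
            # Variables: x_1..x_m (row player's mixed strategy) and v (value).
            c = np.zeros(m + 1)
            c[-1] = -1.0                              # maximize v == minimize -v
            # For every column j:  v - sum_i A[i, j] * x_i <= 0
            A_ub = np.hstack([-A.T, np.ones((n, 1))])
            b_ub = np.zeros(n)
            A_eq = np.ones((1, m + 1))
            A_eq[0, -1] = 0.0                         # sum_i x_i = 1
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0, None)] * m + [(None, None)])
            return res.x[:m], res.x[-1]

        # Rock-paper-scissors: uniform strategy, game value 0.
        x, v = zero_sum_value(np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]]))
        print(x, v)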

    Exactly solvable models of adaptive networks

    A satisfiability (SAT-UNSAT) transition takes place for many optimization problems when the number of constraints, graphically represented by links between variable nodes, is brought above some threshold. If the network of constraints is allowed to adapt by redistributing its links, the SAT-UNSAT transition may be delayed and preceded by an intermediate phase where the structure self-organizes to satisfy the constraints. We present an analytic approach, based on the recently introduced cavity method for large deviations, which exactly describes the two phase transitions delimiting this adaptive intermediate phase. We give explicit results for random bond models subject to the connectivity or rigidity percolation transitions, and compare them with numerical simulations. Comment: 4 pages, 4 figures.
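
    As a purely numerical counterpart to the cavity analysis (this sketch is mine, not the paper's method), the non-adaptive connectivity-percolation threshold of a random bond network can be located by watching the giant-component fraction grow past mean degree c = 1:

        # Sketch: giant-component fraction of Erdos-Renyi random bond networks.
        import networkx as nx

        def giant_fraction(n, c, trials=10):
            """Average fraction of nodes in the largest component at mean degree c."""
            total = 0.0
            for _ in range(trials):
                G = nx.gnm_random_graph(n, int(c * n / 2))  # m = c*n/2 edges
                total += max(len(cc) for cc in nx.connected_components(G)) / n
            return total / trials

        for c in (0.5, 0.9, 1.0, 1.1, 1.5, 2.0):
            print(c, round(giant_fraction(20000, c), 3))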

    University Staff Teaching Allocation: Formulating and Optimising a Many-Objective Problem

    This is the author accepted manuscript. The final version is available from ACM via the DOI in this record. The codebase for this paper is available at https://github.com/fieldsend/gecco_2017_staff_teaching_allocation. The allocation of university staff to teaching exhibits a range of often competing objectives. We illustrate the use of an augmented version of NSGA-III to undertake the seven-objective optimisation of this problem, to find a trade-off front for a university department using real world data. We highlight its use in decision-making, and compare solutions identified to an actual allocation made prior to the availability of the optimisation tool. The criteria we consider include minimising the imbalance in workload distribution among staff; minimising the average load; minimising the maximum peak load; minimising the staff per module; minimising staff dissatisfaction with teaching allocations; and minimising the variation from the previous year’s allocation (allocation churn). We derive mathematical forms for these various criteria, and show we can determine the maximum possible values for all criteria and the minimum values for most exactly (with lower bounds on the remaining criteria). For many of the objectives, when considered in isolation, an optimal solution may be obtained rapidly. We demonstrate the advantage of utilising such extreme solutions to drastically improve the optimisation efficiency in this many-objective optimisation problem. We also identify issues that NSGA-III can experience due to selection between generations.
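
    The paper derives exact mathematical forms for these criteria; as a purely hypothetical stand-in (the names and forms below are my assumptions, not the paper's derivations), the kind of objective vector involved can be sketched from an allocation matrix:

        # Sketch: plausible stand-ins for the listed criteria.
        # X[i, j] = teaching hours of module j assigned to staff member i.
        import numpy as np

        def objectives(X, X_prev, dissat):
            """Tuple of minimisation objectives for allocation X."""
            load = X.sum(axis=1)                 # total load per staff member
            return (
                load.std(),                      # workload imbalance
                load.mean(),                     # average load
                load.max(),                      # maximum peak load
                (X > 0).sum(axis=0).mean(),      # mean staff count per module
                (dissat * (X > 0)).sum(),        # dissatisfaction with assignments
                np.abs(X - X_prev).sum(),        # churn vs. previous year
            )

        rng = np.random.default_rng(0)
        X = rng.integers(0, 3, size=(5, 8)).astype(float)       # 5 staff, 8 modules
        X_prev = rng.integers(0, 3, size=(5, 8)).astype(float)
        print(objectives(X, X_prev, dissat=rng.random((5, 8))))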

    Landscape of solutions in constraint satisfaction problems

    We present a theoretical framework for characterizing the geometrical properties of the space of solutions in constraint satisfaction problems, together with practical algorithms for studying this structure on particular instances. We apply our method to the coloring problem, for which we obtain the total number of solutions and analyze in detail the distribution of distances between solutions. Comment: 4 pages, 4 figures. Replaced with published version.
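
    On instances small enough for brute force, the two quantities mentioned, the number of solutions and the distribution of pairwise distances, can be tabulated directly (a toy illustration, not the paper's method, which is built to scale):

        # Sketch: solution count and Hamming-distance histogram for
        # 3-coloring a 7-cycle by exhaustive enumeration.
        from itertools import product, combinations
        from collections import Counter
        import networkx as nx

        G = nx.cycle_graph(7)
        q, n = 3, G.number_of_nodes()
        solutions = [s for s in product(range(q), repeat=n)
                     if all(s[u] != s[v] for u, v in G.edges())]
        dists = Counter(sum(a != b for a, b in zip(s, t))
                        for s, t in combinations(solutions, 2))
        print(len(solutions), "proper colorings")
        print(sorted(dists.items()))   # (distance, count) pairs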

    Unfolding the procedure of characterizing recorded ultra low frequency, kHz and MHz electromagnetic anomalies prior to the L'Aquila earthquake as pre-seismic ones. Part I

    Ultra low frequency, kHz and MHz electromagnetic anomalies were recorded prior to the L'Aquila catastrophic earthquake that occurred on April 6, 2009. The main aims of this contribution are: (i) To suggest a procedure for the designation of detected EM anomalies as seismogenic ones. We do not expect it to be possible to provide a succinct and solid definition of a pre-seismic EM emission. Instead, we attempt, through a multidisciplinary analysis, to provide elements of a definition. (ii) To link the detected MHz and kHz EM anomalies with equivalent last stages of the L'Aquila earthquake preparation process. (iii) To put forward physically meaningful arguments to support a way of quantifying the time to global failure and the identification of distinguishing features beyond which the evolution towards global failure becomes irreversible. The whole effort is unfolded in two consecutive parts. We clarify that we try to specify not only whether or not a single EM anomaly is pre-seismic in itself, but mainly whether a combination of kHz, MHz, and ULF EM anomalies can be characterized as pre-seismic ones.

    The Computational Power of Minkowski Spacetime

    The Lorentzian length of a timelike curve connecting both endpoints of a classical computation is a function of the path taken through Minkowski spacetime. The associated runtime difference is due to time-dilation: the phenomenon whereby an observer finds that another's physically identical ideal clock has ticked at a different rate than their own clock. Using ideas appearing in the framework of computational complexity theory, time-dilation is quantified as an algorithmic resource by relating relativistic energy to an $n$th-order polynomial-time reduction at the completion of an observer's journey. These results enable a comparison between the optimal quadratic "Grover speedup" from quantum computing and an $n = 2$ speedup using classical computers and relativistic effects. The goal is not to propose a practical model of computation, but to probe the ultimate limits physics places on computation. Comment: 6 pages, LaTeX, feedback welcome.
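
    A back-of-the-envelope sketch of the kinematics (the framing here is my own assumption, not the paper's formulas): an observer moving at Lorentz factor gamma ages tau = T/gamma while a stationary computer runs for coordinate time T, so demanding an effective quadratic (n = 2) speedup tau = sqrt(T) fixes gamma = sqrt(T), and the relativistic energy E = gamma*m*c^2 grows with the speedup.

        # Sketch: gamma and v/c needed for an n-th root runtime, tau = T**(1/n).
        import math

        def speedup_requirements(T, n=2):
            tau = T ** (1.0 / n)          # proper time experienced by the observer
            gamma = T / tau               # gamma = T**(1 - 1/n)
            beta = math.sqrt(1.0 - 1.0 / gamma**2)   # required v/c
            return tau, gamma, beta

        for T in (1e2, 1e6, 1e12):        # runtimes in arbitrary clock ticks
            tau, gamma, beta = speedup_requirements(T)
            print(f"T={T:.0e}  tau={tau:.0e}  gamma={gamma:.0e}  v/c={beta:.12f}")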

    Phase transition for cutting-plane approach to vertex-cover problem

    We study the vertex-cover problem, which is an NP-hard optimization problem and a prototypical model exhibiting phase transitions on random graphs, e.g., Erdős-Rényi (ER) random graphs. These phase transitions coincide with changes of the solution-space structure, e.g., for the ER ensemble at connectivity $c = e \approx 2.7183$, from replica-symmetric to replica-symmetry-broken. For the vertex-cover problem, the typical complexity of exact branch-and-bound algorithms, which proceed by exploring the landscape of feasible configurations, also changes close to this phase transition from "easy" to "hard". In this work, we consider an algorithm which has a completely different strategy: the problem is mapped onto a linear programming problem augmented by a cutting-plane approach, hence the algorithm operates in a space outside the space of feasible configurations until the final step, where a solution is found. Here we show that this type of algorithm also exhibits an "easy-hard" transition around $c = e$, which strongly indicates that the typical hardness of a problem is fundamental to the problem and not due to a specific representation of the problem. Comment: 4 pages, 3 figures.
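
    The LP-plus-cutting-plane strategy can be illustrated in a few lines (a minimal sketch of the general idea, not the authors' implementation): relax vertex cover to a linear program and, when the relaxation comes back fractional, add a violated odd-cycle inequality sum_{v in C} x_v >= (|C|+1)/2.

        # Sketch: LP relaxation of vertex cover with an optional odd-cycle cut.
        import numpy as np
        from scipy.optimize import linprog

        def vc_lp(n, edges, cycles=()):
            """Minimize sum(x) s.t. x_u + x_v >= 1 per edge, plus odd-cycle cuts."""
            rows, rhs = [], []
            for u, v in edges:                     # edge constraints
                r = np.zeros(n); r[u] = r[v] = 1.0
                rows.append(-r); rhs.append(-1.0)  # linprog uses A_ub @ x <= b_ub
            for C in cycles:                       # odd-cycle cutting planes
                r = np.zeros(n); r[list(C)] = 1.0
                rows.append(-r); rhs.append(-(len(C) + 1) / 2)
            res = linprog(np.ones(n), A_ub=np.array(rows), b_ub=np.array(rhs),
                          bounds=[(0, 1)] * n)
            return res.x, res.fun

        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]       # a 5-cycle
        print(vc_lp(5, edges))          # half-integral: every x_v = 0.5, value 2.5
        print(vc_lp(5, edges, cycles=[(0, 1, 2, 3, 4)]))       # cut raises value to 3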