305 research outputs found

    On behavior strategy solutions in finite extended decision processes

    Get PDF
Techniques for finding best behavior strategies on an arbitrary information collection scheme

On behavior strategy solutions of finite two-person constant-sum extended games

    Get PDF
Recall-sensitivity and behavior strategy solutions in finite two-person constant-sum extended games

    Essays on Matching Markets and Their Equilibria

    Get PDF
Matching theory and matching markets are a core component of modern economic theory and market design. This dissertation presents three original contributions to this area. The first essay constructs a matching mechanism in an incomplete-information matching market in which the positive assortative match is the unique efficient and unique stable match. The mechanism asks each agent in the matching market to reveal her privately known type. Through its novel payment rule, truthful revelation forms an ex post Nash equilibrium in this setting. This mechanism works in one-, two- and many-sided matching markets, thus offering the first mechanism to unify these matching markets under a single mechanism design framework. The second essay confronts a problem of matching in an environment in which no efficient and incentive-compatible matching mechanism exists due to matching externalities. I develop a two-stage matching game in which a contracting stage facilitates a subsequent, conditionally efficient and incentive-compatible Vickrey auction stage. Infinite repetition of this two-stage matching game enforces the contract in every period. This mechanism produces inequitably distributed social improvement: parties to the contract receive all of the gains and then some. The final essay demonstrates the existence of prices which stably and efficiently partition a single set of agents into firms and workers, and match those two sets to each other. This pricing system extends Kelso and Crawford's general equilibrium results in a labor market matching model and links one- and two-sided matching markets as well.
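For concreteness, the positive assortative match referenced in the first essay pairs agents rank by rank: the highest type on one side matches the highest type on the other, and so on down. A minimal Python sketch of this matching rule only (the essay's payment rule and incomplete-information machinery are not reproduced here):

```python
# Illustrative sketch: positive assortative matching (PAM) pairs the
# highest types on each side with each other, rank by rank.

def positive_assortative_match(left_types, right_types):
    """Match agents by rank: the k-th highest left type is paired
    with the k-th highest right type."""
    left = sorted(enumerate(left_types), key=lambda p: p[1], reverse=True)
    right = sorted(enumerate(right_types), key=lambda p: p[1], reverse=True)
    return [(i, j) for (i, _), (j, _) in zip(left, right)]

# Example: a two-sided market with (here, already revealed) types.
print(positive_assortative_match([0.3, 0.9, 0.5], [0.7, 0.2, 0.8]))
# -> [(1, 2), (2, 0), (0, 1)]: highest matches highest, and so on.
```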

    Algorithm Selection Framework for Cyber Attack Detection

    Full text link
The number of cyber threats against both wired and wireless computer systems and other components of the Internet of Things continues to increase annually. In this work, an algorithm selection framework is employed on the NSL-KDD data set and a novel paradigm of machine learning taxonomy is presented. The framework uses a combination of user input and meta-features to select the best algorithm to detect cyber attacks on a network. Performance is compared between a rule-of-thumb strategy and a meta-learning strategy. The framework removes the guesswork of the common trial-and-error approach to algorithm selection. The framework recommends five algorithms from the taxonomy. Both strategies recommend a high-performing algorithm, though not the best-performing one. The work demonstrates the close connection between algorithm selection and the taxonomy on which it is premised.

    Comment: 6 pages, 7 figures, 1 table, accepted to WiseML '2
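As a rough illustration of the meta-learning idea described above, the sketch below computes a few dataset-level meta-features and keys a selector on them. All names, thresholds, and candidate algorithms here are hypothetical; the paper's actual meta-features, taxonomy, and NSL-KDD pipeline are not reproduced.

```python
import math
from collections import Counter

def meta_features(X, y):
    """A few dataset-level meta-features commonly used in meta-learning."""
    n_samples, n_features = len(X), len(X[0])
    counts = Counter(y)
    class_entropy = -sum(
        (c / n_samples) * math.log2(c / n_samples) for c in counts.values()
    )
    return {"n_samples": n_samples,
            "n_features": n_features,
            "class_entropy": class_entropy}

def select_algorithm(mf):
    """Toy rule-of-thumb selector keyed on the meta-features above."""
    if mf["n_samples"] > 100_000:
        return "linear-SVM"           # scale favours linear models
    if mf["class_entropy"] > 1.5:
        return "random-forest"        # many mixed classes
    return "k-nearest-neighbours"

X = [[0.1, 1.2], [0.4, 0.9], [0.8, 0.1], [0.2, 1.1]]
y = ["normal", "attack", "attack", "normal"]
print(select_algorithm(meta_features(X, y)))  # -> k-nearest-neighbours
```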

    Squeezing Bottlenecks: Exploring the Limits of Autoencoder Semantic Representation Capabilities

    Full text link
This is the author's version of a work that was accepted for publication in Neurocomputing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Neurocomputing 175 (2016) 1001–1008. DOI 10.1016/j.neucom.2015.06.091.

    We present a comprehensive study on the use of autoencoders for modelling text data, in which, differently from previous studies, we focus our attention on the following issues. We explore the suitability of two different models, binary deep autoencoders (bDA) and replicated-softmax deep autoencoders (rsDA), for constructing deep autoencoders for text data at the sentence level. We propose and evaluate two novel metrics for better assessing the text-reconstruction capabilities of autoencoders. We propose an automatic method to find the critical bottleneck dimensionality for text representations (below which structural information is lost); and finally we conduct a comparative evaluation across different languages, exploring the regions of critical bottleneck dimensionality and its relationship to language perplexity. © 2015 Elsevier B.V. All rights reserved.

    A significant part of this research work was conducted during the first author's attachment to the HLT department of I2R in Singapore. The work of the first and third authors was carried out in the framework of the WIQ-EI IRSES project (Grant no. 269180) within the FP 7 Marie Curie programme, the DIANA APPLICATIONS "Finding Hidden Knowledge in Texts: Applications" (TIN2012-38603-C02-01) project and the VLC/CAMPUS Microcluster on Multimodal Interaction in Intelligent Systems.

    Gupta, P. A.; Banchs, R.; Rosso, P. (2016). Squeezing Bottlenecks: Exploring the Limits of Autoencoder Semantic Representation Capabilities. Neurocomputing 175:1001–1008. https://doi.org/10.1016/j.neucom.2015.06.091
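As a rough illustration of the bottleneck-dimensionality question, the sketch below (assuming PyTorch and binary bag-of-words inputs, and a single-hidden-layer autoencoder rather than the paper's deep bDA/rsDA models) trains autoencoders with shrinking code sizes and watches where reconstruction error jumps:

```python
import torch
import torch.nn as nn

def train_autoencoder(X, code_dim, epochs=200):
    """Fit a one-hidden-layer autoencoder with the given bottleneck size
    and return its final binary cross-entropy reconstruction loss."""
    vocab = X.shape[1]
    model = nn.Sequential(
        nn.Linear(vocab, code_dim), nn.Sigmoid(),   # encoder
        nn.Linear(code_dim, vocab),                 # decoder (logits)
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), X)
        loss.backward()
        opt.step()
    return loss.item()

torch.manual_seed(0)
X = (torch.rand(64, 50) < 0.1).float()   # 64 fake binary "sentences"
for d in (32, 8, 2):
    print(f"code_dim={d:2d}  final BCE={train_autoencoder(X, d):.3f}")
# Reconstruction error rises sharply once the code size drops below the
# data's intrinsic dimensionality -- the "critical bottleneck" region.
```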

    A Compromise between Neutrino Masses and Collider Signatures in the Type-II Seesaw Model

    Full text link
A natural extension of the standard $SU(2)_{\rm L} \times U(1)_{\rm Y}$ gauge model to accommodate massive neutrinos is to introduce one Higgs triplet and three right-handed Majorana neutrinos, leading to a $6\times 6$ neutrino mass matrix which contains three $3\times 3$ sub-matrices $M_{\rm L}$, $M_{\rm D}$ and $M_{\rm R}$. We show that three light Majorana neutrinos (i.e., the mass eigenstates of $\nu_e$, $\nu_\mu$ and $\nu_\tau$) are exactly massless in this model, if and only if $M_{\rm L} = M_{\rm D} M_{\rm R}^{-1} M_{\rm D}^T$ exactly holds. This no-go theorem implies that small but non-vanishing neutrino masses may result from a significant but incomplete cancellation between the $M_{\rm L}$ and $M_{\rm D} M_{\rm R}^{-1} M_{\rm D}^T$ terms in the Type-II seesaw formula, provided the three right-handed Majorana neutrinos are of ${\cal O}(1)$ TeV and experimentally detectable at the LHC. We propose three simple Type-II seesaw scenarios with the $A_4 \times U(1)_{\rm X}$ flavor symmetry to interpret the observed neutrino mass spectrum and neutrino mixing pattern. Such a TeV-scale neutrino model can be tested in two complementary ways: (1) searching for possible collider signatures of lepton number violation induced by the right-handed Majorana neutrinos and doubly-charged Higgs particles; and (2) searching for possible consequences of unitarity violation of the $3\times 3$ neutrino mixing matrix in future long-baseline neutrino oscillation experiments.

    Comment: RevTeX, 19 pages, no figures
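The no-go theorem admits a quick numerical illustration. In the $6\times 6$ mass matrix $M = \begin{pmatrix} M_{\rm L} & M_{\rm D} \\ M_{\rm D}^T & M_{\rm R} \end{pmatrix}$, setting $M_{\rm L} = M_{\rm D} M_{\rm R}^{-1} M_{\rm D}^T$ makes the top three rows equal to $M_{\rm D} M_{\rm R}^{-1}$ times the bottom three, so the rank drops to three and three masses vanish. A toy check (not from the paper; random real matrices stand in for the physical blocks):

```python
import numpy as np

rng = np.random.default_rng(1)
MD = rng.normal(size=(3, 3))                   # Dirac block (arbitrary)
MR = rng.normal(size=(3, 3)); MR = MR + MR.T   # heavy Majorana block (symmetric)
ML = MD @ np.linalg.inv(MR) @ MD.T             # exact cancellation case

M = np.block([[ML, MD], [MD.T, MR]])           # full 6x6 mass matrix
masses = np.linalg.svd(M, compute_uv=False)    # Majorana masses = singular values
print(np.round(masses, 10))
# Three singular values vanish (to machine precision): rank(M) <= 3.
# Perturbing M_L slightly away from M_D M_R^{-1} M_D^T lifts them to
# small nonzero masses, the incomplete cancellation the abstract invokes.
```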

Explanation-Based Generalisation = Partial Evaluation

    Get PDF
We argue that explanation-based generalisation as recently proposed in the machine learning literature is essentially equivalent to partial evaluation, a well-known technique in the functional and logic programming literature. We show this equivalence by analysing the definitions and underlying algorithms of both techniques, and by giving a Prolog program which can be interpreted as doing either explanation-based generalisation or partial evaluation.
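The paper's unifying program is in Prolog; the Python sketch below is only an analogue (an assumption, not the authors' code) of the core idea the equivalence rests on: fixing a known input ahead of time yields a specialized residual program, much as explanation-based generalisation extracts a generalized residue from a worked example.

```python
def power(x, n):
    """General program: x ** n by repeated multiplication."""
    result = 1
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    """Partial evaluator for `power` with the exponent n known statically:
    the loop is unrolled into a residual program containing only
    multiplications over the still-unknown input x."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    print("residual program: lambda x:", body)
    return eval("lambda x: " + body)

cube = specialize_power(3)       # residual program: lambda x: x * x * x
print(cube(5), power(5, 3))      # -> 125 125
```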

    1994-95 Advisory Council on Social Security Technical Panel on Assumptions and Methods Final Report

    Get PDF
The Panel's major conclusions are:

    - The intermediate projections of the Trustees Report for the Old-Age, Survivors, and Disability Insurance (OASDI) program provide a reasonable evaluation of the financial status. Although the Panel suggests that modifications be considered in various specific assumptions, the overall effect of those suggestions would not significantly change the financial status evaluation.

    - There should be evolutionary implementation of procedures to indicate more adequately the uncertainties involved in the projections. Even though such uncertainties are unavoidable, stochastic analysis should be used to examine more explicitly the probabilities of alternative projections (a toy illustration of such analysis is sketched after this list). There should be an extended period during which the new procedures supplement, rather than replace, the current methods of considering high-cost and low-cost projections and individual-assumption sensitivity analysis.

    - Evaluation of the long-range financial status should put less emphasis on the 75-year actuarial balance and the test of long-range close actuarial balance. Prior to enactment of legislation reforming the program, primary emphasis should be on the projected date at which the Trust Fund Ratio would fall below 100 percent; once definitive legislative revisions are adopted, subsequent long-range evaluation should compare updated projections with the intended results of the legislation.

    - There should be a substantial expansion of SSA's resources and its interaction with experts in related areas; increased recognition should be given to the interrelationships between OASDI and many public and private programs, as well as other aspects of the economy. Social Security Administration (SSA) staff does high-quality work, but is relatively small and works with inadequate resources. In addition to internal expansion, there should be greater use of outside consultants and contractual research; periodic comprehensive review by technical panels should be supplemented by ongoing arrangements for advice on specific matters.
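To make the stochastic-analysis recommendation concrete, here is a toy Monte Carlo sketch of Trust Fund Ratio paths. Every number below (starting ratio, drift, volatility, horizon) is purely hypothetical, not an OASDI projection.

```python
import random

def first_year_below_100(start_ratio=300.0, years=75, drift=-4.0, vol=3.0):
    """Simulate one path of the Trust Fund Ratio (fund assets as a percent
    of annual outgo) and return the first year it falls below 100, if any."""
    ratio = start_ratio
    for year in range(1, years + 1):
        ratio += drift + random.gauss(0.0, vol)  # stochastic annual change
        if ratio < 100.0:
            return year
    return None  # ratio stayed above 100 over the whole horizon

random.seed(42)
hits = [first_year_below_100() for _ in range(10_000)]
crossed = [y for y in hits if y is not None]
print(f"P(ratio < 100 within 75 yrs) = {len(crossed) / len(hits):.2%}")
print(f"median first crossing year  = {sorted(crossed)[len(crossed) // 2]}")
```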

    A novel Big Data analytics and intelligent technique to predict driver's intent

    Get PDF
The modern age offers great potential for automatically predicting the driver's intent, through the increasing miniaturization of computing technologies, rapid advancements in communication technologies and the continuous connectivity of heterogeneous smart objects. Inside the cabin and engine of modern cars, dedicated computer systems need to possess the ability to exploit the wealth of information generated by heterogeneous data sources with different contextual and conceptual representations. Processing and utilizing this diverse and voluminous data involves many challenges concerning the design of the computational technique used to perform this task. In this paper, we investigate the various data sources available in the car and the surrounding environment, which can be utilized as inputs in order to predict the driver's intent and behavior. As part of investigating these potential data sources, we conducted experiments on the e-calendars of a large number of employees, and reviewed a number of available geo-referencing systems. Through the results of a statistical analysis and by computing location recognition accuracy, we explored in detail the potential utilization of calendar location data to detect the driver's intentions. In order to exploit the numerous diverse data inputs available in modern vehicles, we investigate the suitability of different Computational Intelligence (CI) techniques, and propose a novel fuzzy computational modelling methodology. Finally, we outline the impact of applying advanced CI and Big Data analytics techniques in modern vehicles on the driver and society in general, and discuss ethical and legal issues arising from the deployment of intelligent self-learning cars.
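As a hedged illustration of how a fuzzy technique might combine two of the data sources discussed above (calendar timing and geo-referenced distance), consider the toy rule below. The paper's actual fuzzy modelling methodology, rule base, and membership functions are not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def intent_heading_to_meeting(mins_until, km_away):
    """Toy Mamdani-style rule: IF the appointment is soon AND its location
    is near THEN the driver is likely heading there (min acts as AND)."""
    soon = tri(mins_until, 0, 15, 60)   # appointment is "soon"
    near = tri(km_away, 0, 2, 20)       # location is "near"
    return min(soon, near)

print(intent_heading_to_meeting(mins_until=20, km_away=5))  # -> ~0.83
```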