3 research outputs found

    American graduate admissions: both sides of the table

    This is a comprehensive study of the graduate admissions process in American universities. Multiple entities are involved in the process, of which the most significant are: • the candidate applying for admission to a department in a school, and • the decision-makers acting upon the candidate's application. The goal of this study is to understand the admissions process from each of these entities' perspectives and to provide them with decision-support models for their respective tasks. Although both entities interact through a common set of data points, i.e., the candidate's admission application, each works toward a very different goal. The juxtaposition of these two tasks poses an interesting challenge that is hard to resolve deterministically. Solving such a problem requires learning techniques that can find patterns, adapt to the dynamic nature of the problem, and produce results in a probabilistic fashion. We study and model the graduate admissions process from a machine learning perspective, based on analysis of large amounts of data. The analysis considers factors such as standardized test scores and GPA, as well as world knowledge such as university similarity, reputation, and constraints. Depending on the targeted entity, the learning problem is formulated as either a classification problem or a ranking problem. During learning and inference, we consider not only the features directly available from the data but also hidden features that must be incorporated generatively. Our experimental study reveals some key factors in the decision process and, consequently, allows us to propose a recommendation algorithm that gives applicants the ability to make an informed decision about where to apply, and that guides decision-makers toward a more efficient process.
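The two formulations mentioned in the abstract can be sketched side by side: the candidate-side task as binary classification (admit vs. reject), the decision-maker-side task as ranking applicants by a learned score. This is only an illustrative sketch; the feature set, weights, and data below are invented for the example and are not taken from the study.

```python
def classify_admit(weights, bias, features):
    """Candidate-side view as binary classification: admit (1) vs. reject (0)."""
    score = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def rank_candidates(weights, candidates):
    """Decision-maker view as ranking: order applicants by a learned linear score."""
    scored = [(sum(w * x for w, x in zip(weights, feats)), name)
              for name, feats in candidates]
    return [name for _, name in sorted(scored, reverse=True)]

# Hypothetical features: (normalized test score, GPA / 4.0)
w = [1.0, 2.0]
print(classify_admit(w, -2.2, (0.8, 0.9)))                        # 1
print(rank_candidates(w, [("A", (0.5, 0.8)), ("B", (0.9, 0.9))])) # ['B', 'A']
```

The same linear scorer serves both entities; only the loss used to train it (classification vs. pairwise ranking) differs, which mirrors how the two problems share data points but pursue different goals.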

    Minimal supervision for language learning: bootstrapping global patterns from local knowledge

    A fundamental step in sentence comprehension involves assigning semantic roles to sentence constituents. To accomplish this, the listener must parse the sentence, find constituents that are candidate arguments, and assign semantic roles to those constituents. Each step depends on prior lexical and syntactic knowledge. Where do children begin in solving this problem when learning their first languages? To experiment with the different representations children may use to begin understanding language, we have built a computational model of this early point in language acquisition. This system, BabySRL, learns from transcriptions of natural child-directed speech and makes use of psycholinguistically plausible background knowledge and realistically noisy semantic feedback to begin to classify sentences at the level of "who does what to whom." Starting with simple, psycholinguistically motivated representations of sentence structure, the BabySRL is able to learn from full semantic feedback, as well as from a supervision signal derived from partial semantic background knowledge. In addition, we combine the BabySRL with an unsupervised Hidden Markov Model part-of-speech tagger, linking clusters to syntactic categories using background noun knowledge so that they can be used to parse input for the SRL system. The results show that the proposed shallow representations of sentence structure are robust to reductions in parsing accuracy, and that the contribution of alternative representations of sentence structure to successful semantic role labeling varies with the integrity of the parsing and argument-identification stages. Finally, we enable the BabySRL to improve both an intermediate syntactic representation and its final semantic role classification. Using this system, we show that it is possible for a simple learner in a plausible (noisy) setup to begin comprehending simple semantics when initialized with a small amount of concrete noun knowledge and some simple syntax-semantics mapping biases, before acquiring any specific verb knowledge.
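The "small amount of concrete noun knowledge plus simple syntax-semantics mapping biases" can be illustrated with a toy heuristic: given a seed lexicon of known nouns and the bias that the first of two nouns is the agent, a learner can propose "who does what to whom" labels without any verb knowledge. The lexicon, sentence, and role labels below are illustrative inventions, not the BabySRL implementation.

```python
# A hypothetical seed lexicon of concrete nouns the learner already knows.
SEED_NOUNS = {"girl", "boy", "dog", "ball"}

def label_roles(sentence):
    """Assign agent (A0) / patient (A1) to known nouns purely by order:
    first known noun -> agent, last known noun -> patient."""
    nouns = [w for w in sentence.split() if w in SEED_NOUNS]
    roles = {}
    if nouns:
        roles[nouns[0]] = "A0"       # first-noun-as-agent bias
    if len(nouns) > 1:
        roles[nouns[-1]] = "A1"      # later noun as patient
    return roles

# Works even with a novel (nonsense) verb, since no verb knowledge is used.
print(label_roles("the girl kradded the dog"))  # {'girl': 'A0', 'dog': 'A1'}
```

Note that such an order-based bias systematically mislabels passives and other non-canonical structures, which is part of why the robustness of shallow representations to noisy parsing matters in this line of work.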

    Linear concepts and hidden variables: An empirical study

    Some learning techniques for classification tasks work indirectly, by first trying to fit a full probabilistic model to the observed data. Whether this is a good idea depends on the robustness of the approach with respect to deviations from the postulated model. We study this question experimentally in a restricted yet non-trivial and interesting case: we consider a conditionally independent attribute (CIA) model, which postulates a single binary-valued hidden variable z on which all other attributes (i.e., the target and the observables) depend. In this model, finding the most likely value of any one variable (given known values for the others) reduces to testing a linear function of the observed values. We learn CIA models with two techniques: the standard EM algorithm, and a new algorithm we develop based on covariances. We compare these, in a controlled fashion, against an algorithm (a version of Winnow) that attempts to find a good linear classifier directly. Our conclusions help delimit the frag..
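The claim that inference in the CIA model reduces to testing a linear function can be seen directly from the log-odds: with conditionally independent binary attributes, the log posterior odds of the hidden variable is a sum of per-attribute terms, i.e., linear in the observations. The parameter values below are arbitrary illustrative numbers, not fitted from data.

```python
import math

# Illustrative CIA parameters: one binary hidden variable z,
# two binary observables x_i conditionally independent given z.
prior_z1 = 0.6                 # P(z = 1)
p_x_given_z1 = [0.9, 0.7]      # P(x_i = 1 | z = 1)
p_x_given_z0 = [0.2, 0.4]      # P(x_i = 1 | z = 0)

def log_odds_z(x):
    """log P(z=1|x) - log P(z=0|x); note this is linear in the binary x."""
    s = math.log(prior_z1 / (1 - prior_z1))
    for xi, p1, p0 in zip(x, p_x_given_z1, p_x_given_z0):
        if xi:
            s += math.log(p1 / p0)
        else:
            s += math.log((1 - p1) / (1 - p0))
    return s

print(log_odds_z([1, 1]) > 0)   # True: both attributes on favors z = 1
```

Thresholding this sum at zero is exactly the linear test the abstract describes, which is why a fitted CIA model can be compared head-to-head with a directly learned linear classifier such as Winnow.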