11 research outputs found

    Strategy and indicator management system using business intelligence methodologies at a private university

    The world is continually evolving in the search for new ideas; changes arise over time that affect every element of society. An organization, as an element of society, must include its environment among the aspects it evaluates in order to survive. One change that has emerged strongly is the focus of information systems: they no longer support only operational activities, as in recent years a line of systems has emerged that is also concerned with the management of the enterprise itself. Such is the case of decision-support tools and of the not-so-recent concept of Business Intelligence. The objective of this project is to develop a solution of this type applied to strategic and operational management within an organization. The tool supports the development of Dashboards and Balanced Scorecards to achieve this purpose. A Dashboard (also known as a management dashboard) aims to make it simple to analyze the state of the organization in its most important aspects, and from there to seek further information that reveals strengths and weaknesses on which decisions can be based. The Balanced Scorecard builds on the Dashboard but adds a more elaborate concept: aligning all of the organization's activities toward the achievement of its strategic objectives. To do so, it views the organization from four perspectives: customer, financial, internal processes, and learning and growth. For each perspective, objectives measurable over time are defined, and they interact with one another through cause-and-effect relationships. Incorporating this type of tool into the business brings clear benefits in overall performance: the more information is available, and the more reliable and precise it is, the more accurately the decision-making process will aim at success.
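
    As a rough illustration of how such a tool might represent a Balanced Scorecard (a minimal Python sketch under our own assumptions; the class and indicator names are hypothetical, not taken from the thesis), objectives can be grouped under the four perspectives, each carrying measurable indicators and cause-and-effect links:

    # Minimal sketch of a Balanced Scorecard data model (hypothetical names,
    # not the system described in the thesis).
    from dataclasses import dataclass, field

    PERSPECTIVES = ("financial", "customer", "internal_processes", "learning_and_growth")

    @dataclass
    class Indicator:
        name: str
        target: float
        actual: float

        def status(self) -> str:
            # Simple traffic-light rule: >= 100% of target is green, >= 80% yellow.
            ratio = self.actual / self.target if self.target else 0.0
            return "green" if ratio >= 1.0 else "yellow" if ratio >= 0.8 else "red"

    @dataclass
    class Objective:
        name: str
        perspective: str                              # one of PERSPECTIVES
        indicators: list = field(default_factory=list)
        causes: list = field(default_factory=list)    # objectives this one drives

    scorecard = [
        Objective("Increase revenue", "financial",
                  [Indicator("quarterly revenue (MUSD)", target=2.0, actual=1.7)]),
        Objective("Improve student satisfaction", "customer",
                  [Indicator("survey score (1-5)", target=4.0, actual=4.2)],
                  causes=["Increase revenue"]),        # cause-effect link across perspectives
    ]

    for obj in scorecard:
        for ind in obj.indicators:
            print(f"{obj.perspective:22s} {obj.name:30s} {ind.name}: {ind.status()}")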

    Pronoun Translation and Prediction with or without Coreference Links

    The Idiap NLP Group has participated in both DiscoMT 2015 sub-tasks: pronoun-focused translation and pronoun prediction. The system for the first sub-task combines two knowledge sources: grammatical constraints from the hypothesized coreference links, and candidate translations from an SMT decoder. The system for the second sub-task avoids hypothesizing a coreference link and instead uses a large set of source-side and target-side features from the noun phrases surrounding the pronoun to train a pronoun predictor.
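
    The abstract does not give implementation details, so as a hedged illustration only: a pronoun predictor of this kind can be sketched as a standard multiclass classifier over features of the surrounding noun phrases. The features and labels below are invented placeholders, not the Idiap system's actual feature set.

    # Illustrative pronoun predictor: features from the noun phrases around a
    # pronoun -> multiclass label (the target-language pronoun). Hypothetical
    # toy features; the DiscoMT 2015 system used a much richer set.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_examples = [
        ({"src_pron": "it", "prev_np_gender": "fem", "next_np_number": "sg"}, "elle"),
        ({"src_pron": "it", "prev_np_gender": "masc", "next_np_number": "sg"}, "il"),
        ({"src_pron": "they", "prev_np_gender": "masc", "next_np_number": "pl"}, "ils"),
    ]
    X, y = zip(*train_examples)

    model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(list(X), list(y))

    print(model.predict([{"src_pron": "it", "prev_np_gender": "fem", "next_np_number": "sg"}]))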

    Evaluating Attention Networks for Anaphora Resolution

    In this paper, we evaluate the results of using inter- and intra-attention mechanisms from two architectures, a Deep Attention Long Short-Term Memory-Network (LSTM-N) (Cheng et al., 2016) and a Decomposable Attention model (Parikh et al., 2016), for anaphora resolution, i.e. detecting coreference relations between a pronoun and a noun (its antecedent). The models are adapted from an entailment task to address the pronominal coreference resolution task by comparing two pairs of sentences: one with the original sentences containing the antecedent and the pronoun, and another with the pronoun replaced by a correct or an incorrect antecedent. The goal is thus to detect the correct replacements, assuming the original sentence pair entails the one with the correct replacement, but not the one with an incorrect replacement. We use the CoNLL-2012 English dataset (Pradhan et al., 2012) to train the models and evaluate their ability to recognize correct and incorrect pronoun replacements in sentence pairs. We find that the Decomposable Attention model performs better, while using a much simpler architecture. Furthermore, we focus on two previous studies that use intra- and inter-attention mechanisms, discuss how they relate to each other, and examine how these advances work to identify correct antecedent replacements.
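
    To make the attention machinery concrete, here is a toy numpy sketch of the inter-attention ("attend") step of a decomposable attention model in the spirit of Parikh et al. (2016): each sentence's words are soft-aligned to the other sentence via dot-product attention. Dimensions and inputs are made up, and the feed-forward comparison and scoring layers of the full model are omitted.

    # Toy sketch of the inter-attention step of a decomposable attention model:
    # soft-align the words of two sentences via dot products of their embeddings.
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attend(a, b):
        """a: (len_a, d), b: (len_b, d) word embeddings of the two sentences."""
        scores = a @ b.T                       # unnormalized alignment scores
        beta = softmax(scores, axis=1) @ b     # for each word of a: soft summary of b
        alpha = softmax(scores.T, axis=1) @ a  # for each word of b: soft summary of a
        return beta, alpha

    rng = np.random.default_rng(0)
    sent_a = rng.normal(size=(5, 8))   # e.g., the sentence containing the pronoun
    sent_b = rng.normal(size=(7, 8))   # e.g., the sentence with a candidate antecedent
    beta, alpha = attend(sent_a, sent_b)
    print(beta.shape, alpha.shape)     # (5, 8) (7, 8)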

    The SUMMA Platform Prototype

    We present the first prototype of the SUMMA Platform: an integrated platform for multilingual media monitoring. The platform contains a rich suite of low-level and high-level natural language processing technologies: automatic speech recognition of broadcast media, machine translation, automated tagging and classification of named entities, semantic parsing to detect relationships between entities, and automatic construction and augmentation of factual knowledge bases. Implemented on the Docker platform, it can easily be deployed, customised, and scaled to large volumes of incoming media streams.
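
    As a purely hypothetical sketch of how such a pipeline composes (the real SUMMA components run as separate Docker services; none of the functions below are platform code), each stage can be seen as a stand-in service chained over incoming media:

    # Toy media-monitoring pipeline in the spirit of SUMMA. Every function body
    # is a placeholder for a real service (ASR, MT, NER, knowledge base).
    def asr(audio: bytes) -> str:
        return "transcribed text"             # placeholder for speech recognition

    def translate(text: str, tgt: str = "en") -> str:
        return text                           # placeholder for machine translation

    def tag_entities(text: str) -> list:
        return [("SUMMA", "ORG")]             # placeholder for entity tagging

    def update_knowledge_base(kb: dict, entities: list) -> None:
        for name, etype in entities:
            kb.setdefault(etype, set()).add(name)   # naive KB "augmentation"

    kb: dict = {}
    for stream_chunk in [b"...audio bytes..."]:      # stand-in for a media stream
        text = translate(asr(stream_chunk))
        update_knowledge_base(kb, tag_entities(text))
    print(kb)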

    Statistical Learning Methods for Profiling Analysis Notebook for PAN at CLEF 2015

    Author profiling is the task of inferring information about an author by analyzing his/her writing style. Its applications in forensics, business intelligence and psychology make this topic interesting for research. In this notebook, we present our baseline approach using SVM and Linear Discriminant Analysis (LDA) classifiers. We analyze features obtained from LIWC dictionaries: frequencies of word use by category, which give a general view of how the author writes and what he/she is talking about. According to the experimental results, these are significant features for differentiating gender, age group and personality. Although they are relatively few (no more than 100), they allow discrimination with acceptable accuracy.
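
    A minimal scikit-learn sketch of this baseline setup, assuming LIWC-style category frequencies as features (the feature values below are fabricated for illustration; the notebook's actual data and parameters are not reproduced here):

    # LIWC-style category frequencies (< 100 features per document) fed to an
    # SVM and an LDA classifier, as in the baseline described above.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import LinearSVC

    # rows: documents; columns: made-up LIWC category frequencies
    # (e.g. pronouns, affect words, social words)
    X = np.array([[0.12, 0.03, 0.08],
                  [0.02, 0.10, 0.01],
                  [0.11, 0.04, 0.07],
                  [0.03, 0.09, 0.02]])
    y = ["female", "male", "female", "male"]   # e.g. the gender sub-task

    for clf in (LinearSVC(), LinearDiscriminantAnalysis()):
        clf.fit(X, y)
        print(type(clf).__name__, clf.predict([[0.10, 0.05, 0.06]]))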

    Discourse Phenomena in Machine Translation

    Machine Translation (MT) has made considerable progress in the past two decades, particularly after the introduction of neural network models (NMT). During this time, the research community has mostly focused on modeling and evaluating MT systems at the sentence level. MT models learn to translate from large amounts of parallel sentences in different languages. The focus on sentences brings a practical simplification for the task that favors efficiency but has the disadvantage of missing relevant contextual information. Several studies showed that the negative impact of this simplification is significant. One key point is that the discourse dependencies among distant words are ignored, resulting in a lack of coherence and cohesion in the text. The main objective of this thesis is to improve MT by including discourse-level constraints. In particular, we focus on the translation of entity mentions. We summarize our contributions in four points. First, we define the evaluation process to assess entity translations (i.e., nouns and pronouns) and propose an automatic metric to measure this phenomenon. Second, we perform a proof of concept and analyze how effective it is to include entity coreference resolution (CR) in translation. We conclude that CR significantly helps pronoun translation and boosts the whole translation quality according to human judgment. Third, we focus on the discourse connections at the sentence level. We propose enhancing the sequential model to infer long-term connections by incorporating a "self-attention" mechanism. This mechanism gives direct and selective access to the context. Experiments on different language pairs show that our method outperforms various baselines, and the analysis confirms that the model emphasizes a broader context and captures syntactic-like structures. Fourth, we formulate the problem of document-level NMT and model inter-sentential connections among words with a hierarchical attention mechanism. Experiments on multiple datasets show significant improvement over two strong baselines and confirm that the source- and target-side contexts are mutually complementary. This set of results confirms that discourse significantly enhances translation quality, verifying our main thesis objective. Our secondary objective is to improve the CR task by modeling the underlying connections among entities at the document level. This task is particularly challenging for current neural network models because it requires understanding and reasoning. First, we propose a method to detect entity mentions from partially annotated data. We then propose to model coreference with a graph of entities encoded in a pre-trained language model as an internal structure. The experiments show that these methods outperform various baselines. CR has the potential to help MT and other text generation tasks by maintaining coherence between entity mentions.
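
    As an illustration of the hierarchical attention idea in the fourth contribution (a toy numpy sketch under our own simplifications, not the thesis's model): attention is applied first over the words of each previous sentence, then over the resulting sentence summaries, yielding a single document-context vector for the current prediction.

    # Toy two-level attention over document context: word level, then sentence level.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attend(query, keys):
        """query: (d,), keys: (n, d) -> attention-weighted sum of keys."""
        weights = softmax(keys @ query)
        return weights @ keys

    rng = np.random.default_rng(0)
    d = 16
    context = [rng.normal(size=(n, d)) for n in (6, 9, 4)]  # 3 previous sentences
    query = rng.normal(size=d)                               # current decoder state

    # word level: one summary vector per context sentence
    sent_summaries = np.stack([attend(query, sent) for sent in context])
    # sentence level: one document-context vector
    doc_context = attend(query, sent_summaries)
    print(doc_context.shape)   # (16,)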

    Self-attentive residual decoder for neural machine translation

    Neural sequence-to-sequence networks with attention have achieved remarkable performance for machine translation. One of the reasons for their effectiveness is their ability to capture relevant source-side contextual information at each time-step prediction through an attention mechanism. However, the target-side context is based solely on the sequence model, which, in practice, is prone to a recency bias and lacks the ability to effectively capture non-sequential dependencies among words. To address this limitation, we propose a target-side attentive residual recurrent network for decoding, where attention over previous words contributes directly to the prediction of the next word. The residual learning facilitates the flow of information from the distant past and is able to emphasize any of the previously translated words, hence it gains access to a wider context. The proposed model outperforms a neural MT baseline as well as a memory and self-attention network on three language pairs. The analysis of the attention learned by the decoder confirms that it emphasizes a wider context and that it captures syntactic-like structures.
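
    A toy numpy sketch of the core idea (simplified; not the paper's exact architecture): the decoder state attends over the embeddings of previously generated target words, and the attention output is added residually to the sequential state before predicting the next word.

    # Target-side attention with a residual connection at one decoding step.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    d = 16
    rng = np.random.default_rng(0)
    prev_word_embs = rng.normal(size=(5, d))  # embeddings of words translated so far
    rnn_state = rng.normal(size=d)            # current decoder hidden state

    # target-side attention: the decoder state queries all previous words,
    # so distant words are directly accessible (mitigating the recency bias)
    weights = softmax(prev_word_embs @ rnn_state)
    target_context = weights @ prev_word_embs

    # residual connection: the attention output is added to the sequential state
    output = rnn_state + target_context       # then fed to the output softmax
    print(output.shape)                       # (16,)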