
    Reading projects

    "By reading only six hours a day", says Marianne Dashwood, outlining her plan of future application to her sister Elinor in Sense and Sensibility, "I shall gain in the course of a twelve-month a great deal of instruction which I now feel myself to want." She adds: "Our own library is too well known to me, to be resorted to for any thing beyond mere amusement. But there are many works well worth reading at the Park; and there are others of more modern production which I know I can borrow of Colonel Brandon" (301). We know, to some extent, what was in the Dashwoods' own library – volumes of Cowper, Scott and Thomson are mentioned. But what might Marianne have borrowed at Barton Park and Delaford? Which publications would Colonel Brandon have considered most appropriate for her project of self-improvement? Elinor considers Marianne's plan excessive, but what would have been a more realistic amount of time for her to spend reading each day, and where might she have done it

    Learning Colour Representations of Search Queries

    Image search engines rely on appropriately designed ranking features that capture various aspects of the content semantics as well as the historic popularity. In this work, we consider the role of colour in this relevance matching process. Our work is motivated by the observation that a significant fraction of user queries have an inherent colour associated with them. While some queries contain explicit colour mentions (such as 'black car' and 'yellow daisies'), other queries have implicit notions of colour (such as 'sky' and 'grass'). Furthermore, grounding queries in colour is not a mapping to a single colour, but a distribution in colour space. For instance, a search for 'trees' tends to have a bimodal distribution around the colours green and brown. We leverage historical clickthrough data to produce a colour representation for search queries and propose a recurrent neural network architecture to encode unseen queries into colour space. We also show how this embedding can be learnt alongside a cross-modal relevance ranker from impression logs where a subset of the result images were clicked. We demonstrate that the use of a query-image colour distance feature leads to an improvement in the ranker performance as measured by users' preferences of clicked versus skipped images.
    Comment: Accepted as a full paper at SIGIR 2020.
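
    The abstract gives no implementation details, so the following is a minimal sketch of the general pattern it describes: a recurrent encoder that maps a tokenised query to a distribution over a quantised colour palette, trained against click-derived colour histograms. The vocabulary size, palette size, dimensions and KL-divergence objective are illustrative assumptions, not the authors' architecture.

    # Hypothetical sketch, not the paper's code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class QueryColourEncoder(nn.Module):
        def __init__(self, vocab_size=10000, embed_dim=64, hidden_dim=128, n_colours=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, n_colours)   # logits over palette bins

        def forward(self, token_ids):
            _, h = self.gru(self.embed(token_ids))         # h: (1, batch, hidden)
            return F.log_softmax(self.head(h[-1]), dim=-1) # log-probs over palette

    # One training step against a click-derived target colour histogram.
    model = QueryColourEncoder()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    queries = torch.randint(0, 10000, (32, 6))             # stand-in tokenised queries
    targets = F.softmax(torch.randn(32, 256), dim=-1)      # stand-in colour histograms
    loss = F.kl_div(model(queries), targets, reduction='batchmean')
    loss.backward()
    optimiser.step()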

    Fluxes of dissolved organic carbon in stand throughfall and percolation water in 12 boreal coniferous stands on mineral soils in Finland

    Predictors of glucose intolerance postpartum were evaluated in women with gestational diabetes mellitus (GDM) based on the 2013 World Health Organization (WHO) criteria. 1841 women were tested for GDM in a prospective cohort study. A postpartum 75 g oral glucose tolerance test (OGTT) was performed in women with GDM at 14 ± 4.1 weeks. Of all 231 mothers with GDM, 83.1% (192) had a postpartum OGTT, of whom 18.2% (35) had glucose intolerance. Women with glucose intolerance were more often of Asian origin [15.1% vs. 3.7%, OR 4.64 (1.26–17.12)], more often had a recurrent history of GDM [41.7% vs. 26.7%, OR 3.68 (1.37–9.87)], and had higher fasting glycaemia (FPG) [5.1 (4.5–5.3) vs. 4.6 (4.3–5.1) mmol/L, OR 1.05 (1.01–1.09)], higher HbA1c [33 (31–36) vs. 32 (30–33) mmol/mol, OR 4.89 (1.61–14.82)], and higher triglycerides [2.2 (1.9–2.8) vs. 2.0 (1.6–2.5) mmol/L, OR 1.00 (1.00–1.01)]. The sensitivity of a glucose challenge test (GCT) ≥7.2 mmol/L for glucose intolerance postpartum was 80% (63.1%–91.6%). The area under the curve to predict glucose intolerance was 0.76 (0.65–0.87) for FPG, 0.54 (0.43–0.65) for HbA1c and 0.75 (0.64–0.86) for both combined. In conclusion, nearly one-fifth of women with GDM have glucose intolerance postpartum. A GCT ≥7.2 mmol/L identifies a population at high risk of glucose intolerance postpartum.
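
    As a worked illustration of the screening metrics reported above, the sketch below computes a cut-off sensitivity and an AUC on synthetic stand-in data; the simulated values and effect sizes are assumptions, not the study's records.

    # Hypothetical sketch on synthetic data, not the study's dataset.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    intolerant = rng.random(192) < 0.18                    # stand-in outcome labels
    gct = rng.normal(6.8, 0.8, 192) + 0.8 * intolerant     # stand-in GCT values (mmol/L)
    fpg = rng.normal(4.6, 0.3, 192) + 0.4 * intolerant     # stand-in fasting glycaemia

    # Sensitivity of the >=7.2 mmol/L cut-off: share of true cases flagged.
    sensitivity = (gct[intolerant] >= 7.2).mean()
    # Discrimination of fasting glycaemia, as in the reported AUC of 0.76.
    auc = roc_auc_score(intolerant, fpg)
    print(f"sensitivity: {sensitivity:.2f}, FPG AUC: {auc:.2f}")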

    Normal glucose tolerant women with low glycemia during the oral glucose tolerance test have a higher risk to deliver a low birth weight infant

    Background: Data are limited on pregnancy outcomes of normal glucose tolerant (NGT) women with a low glycemic value measured during the 75 g oral glucose tolerance test (OGTT). Our aim was to evaluate maternal characteristics and pregnancy outcomes of NGT women with low glycemia measured at the fasting, 1-hour or 2-hour OGTT sample. Methods: The Belgian Diabetes in Pregnancy-N study was a multicentric prospective cohort study of 1841 pregnant women receiving an OGTT to screen for gestational diabetes (GDM). We compared characteristics and pregnancy outcomes of NGT women across groups defined by the lowest glycemia measured during the OGTT [<3.9 mmol/L, 3.9–4.2 mmol/L, 4.25–4.4 mmol/L and >4.4 mmol/L]. Pregnancy outcomes were adjusted for confounding factors such as body mass index (BMI) and gestational weight gain. Results: Of all NGT women, 10.7% (172) had low glycemia (<3.9 mmol/L) during the OGTT. Compared to women in the highest glycemic group (>4.4 mmol/L; 29.9%, n=482), women in the lowest glycemic group (<3.9 mmol/L) had a better metabolic profile, with a lower BMI, less insulin resistance and better beta-cell function. However, women in the lowest glycemic group more often had inadequate gestational weight gain [51.1% (67) vs. 29.5% (123); p<0.001] and more often had a birth weight <2.5 kg [adjusted OR 3.41, 95% CI (1.17–9.92); p=0.025]. Conclusion: Women with a glycemic value <3.9 mmol/L during the OGTT have a higher risk for a neonate with a birth weight <2.5 kg, which remained significant after adjustment for BMI and gestational weight gain.
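
    The adjusted odds ratio above is typically the exponentiated coefficient of a logistic regression that includes the confounders. The sketch below shows that pattern on synthetic stand-in data; the sample size, column names and simulated effect are assumptions, not the study's data.

    # Hypothetical sketch on synthetic data, not the study's dataset.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1600                                               # illustrative sample size
    df = pd.DataFrame({
        "low_glycemia": rng.integers(0, 2, n),             # <3.9 mmol/L group indicator
        "bmi": rng.normal(25.0, 4.0, n),
        "weight_gain": rng.normal(13.0, 5.0, n),           # gestational weight gain (kg)
    })
    # Simulate the outcome (birth weight < 2.5 kg) with an assumed effect.
    logit = -3.0 + 1.2 * df["low_glycemia"]
    df["lbw"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

    # Exponentiated coefficient of the exposure = odds ratio adjusted for
    # BMI and gestational weight gain, mirroring the abstract's analysis.
    exog = sm.add_constant(df[["low_glycemia", "bmi", "weight_gain"]])
    fit = sm.Logit(df["lbw"], exog).fit(disp=0)
    print(np.exp(fit.params["low_glycemia"]))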

    Learning Explainable Disentangled Representations of E-Commerce Data by Aligning Their Visual and Textual Attributes

    Understanding multimedia content remains a challenging problem in e-commerce search and recommendation applications. It is difficult to obtain item representations that capture the relevant product attributes, since these attributes are fine-grained and scattered across product images with huge visual variations and product descriptions that are noisy and incomplete. In addition, the interpretability and explainability of item representations have become more important in order to make e-commerce applications more intelligible to humans. Multimodal disentangled representation learning, where the independent generative factors of multimodal data are identified and encoded in separate subsets of features in the feature space, is an interesting research area to explore in an e-commerce context, given the benefits of the resulting disentangled representations such as generalizability, robustness and interpretability. However, the characteristics of real-world e-commerce data, such as the extensive visual variation, noisy and incomplete product descriptions, and complex cross-modal relations between vision and language, together with the lack of an automatic interpretation method to explain the contents of disentangled representations, mean that current approaches for multimodal disentangled representation learning do not suffice for e-commerce data. Therefore, in this work, we design an explainable variational autoencoder framework (E-VAE) which leverages visual and textual item data to obtain disentangled item representations by jointly learning to disentangle the visual item data and to infer a two-level alignment of the visual and textual item data in a multimodal disentangled space. As such, E-VAE tackles the main challenges in disentangling multimodal e-commerce data. Firstly, with the weak supervision of the two-level alignment, our E-VAE learns to steer the disentanglement process towards discovering the relevant factors of variation in the multimodal data and to ignore irrelevant visual variations, which are abundant in e-commerce data. Secondly, to the best of our knowledge, our E-VAE is the first VAE-based framework with an automatic interpretation mechanism that explains the components of the disentangled item representations with text. With our textual explanations we provide insight into the quality of the disentanglement. Furthermore, we demonstrate that with our explainable disentangled item representations we achieve state-of-the-art outfit recommendation results on the Polyvore Outfits dataset and report new state-of-the-art cross-modal search results on the Amazon Dresses dataset.
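
    The abstract describes the framework only at a high level, so the sketch below shows the generic pattern it builds on: a VAE whose latent vector is split into blocks, with an auxiliary loss that aligns each block to an embedding of one textual attribute. All dimensions, names and the alignment objective are illustrative assumptions, not the E-VAE architecture.

    # Hypothetical sketch, not the E-VAE code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BlockAlignedVAE(nn.Module):
        def __init__(self, img_dim=2048, text_dim=300, n_blocks=8, block_dim=16):
            super().__init__()
            z_dim = n_blocks * block_dim
            self.n_blocks, self.block_dim = n_blocks, block_dim
            self.enc = nn.Linear(img_dim, 2 * z_dim)        # mean and log-variance
            self.dec = nn.Linear(z_dim, img_dim)
            self.text_proj = nn.Linear(text_dim, block_dim) # attribute text -> block space

        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
            return self.dec(z), mu, logvar, z

    def loss(model, x, attr_emb, attr_block):
        """attr_emb: text embedding of one attribute per item;
        attr_block: index of the latent block that attribute should occupy."""
        recon, mu, logvar, z = model(x)
        rec = F.mse_loss(recon, x)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        blocks = z.view(-1, model.n_blocks, model.block_dim)
        idx = torch.arange(len(x))
        align = F.mse_loss(blocks[idx, attr_block], model.text_proj(attr_emb))
        return rec + kl + align                             # loss weights omitted for brevity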

    Attention-based Fusion for Outfit Recommendation

    status: accepted

    A Comparative Study of Outfit Recommendation Methods with a Focus on Attention-based Fusion

    https://authors.elsevier.com/a/1bI5R15hYdjpsK
    status: Published online

    Can image captioning help passage retrieval in multimodal question answering?

    status: published

    Causal Factor Disentanglement for Few-Shot Domain Adaptation in Video Prediction

    An important challenge in machine learning is performing accurately when few training samples are available from the target distribution. If a large number of training samples from a related distribution are available, transfer learning can be used to improve the performance. This paper investigates how to do transfer learning more effectively when the source and target distributions are related through a Sparse Mechanism Shift, for the application of next-frame prediction. We create Sparse Mechanism Shift-TempoRal Intervened Sequences (SMS-TRIS), a benchmark derived from the TRIS datasets to evaluate transfer learning for next-frame prediction. We then propose to exploit the Sparse Mechanism Shift property of the distribution shift by disentangling the model parameters with respect to the true causal mechanisms underlying the data. We use the Causal Identifiability from TempoRal Intervened Sequences (CITRIS) model to achieve this disentanglement via causal representation learning. We show that encouraging disentanglement with the CITRIS extensions can improve performance, but their effectiveness varies depending on the dataset and backbone used: it helps only when encouraging disentanglement actually succeeds in increasing disentanglement. We also show that an alternative method designed for domain adaptation does not help, indicating the challenging nature of the SMS-TRIS benchmark.
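
    To make the Sparse Mechanism Shift idea concrete, the sketch below freezes a shared backbone and fine-tunes only the per-factor transition modules assumed to have shifted between source and target. The module layout and the choice of shifted factors are illustrative assumptions, not the paper's implementation.

    # Hypothetical sketch, not the paper's code.
    import torch
    import torch.nn as nn

    class FactoredPredictor(nn.Module):
        """Next-frame predictor with one transition module per causal factor."""
        def __init__(self, obs_dim=64, n_factors=6, factor_dim=8):
            super().__init__()
            self.encoder = nn.Linear(obs_dim, n_factors * factor_dim)  # shared backbone
            self.mechanisms = nn.ModuleList(
                nn.Linear(factor_dim, factor_dim) for _ in range(n_factors))
            self.decoder = nn.Linear(n_factors * factor_dim, obs_dim)

        def forward(self, frame):
            z = self.encoder(frame).chunk(len(self.mechanisms), dim=-1)
            z_next = torch.cat([m(zi) for m, zi in zip(self.mechanisms, z)], dim=-1)
            return self.decoder(z_next)

    model = FactoredPredictor()
    shifted = {2, 4}                       # factors assumed to differ in the target domain
    for p in model.parameters():
        p.requires_grad = False            # freeze everything...
    for i in shifted:
        for p in model.mechanisms[i].parameters():
            p.requires_grad = True         # ...except the shifted mechanisms

    # Few-shot adaptation then optimises only the unfrozen parameters.
    optimiser = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3)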