
    Neural Embeddings of Graphs in Hyperbolic Space

    Neural embeddings have been used with great success in Natural Language Processing (NLP). They provide compact representations that encapsulate word similarity and attain state-of-the-art performance in a range of linguistic tasks. The success of neural embeddings has prompted significant research into applications in domains other than language. One such domain is graph-structured data, where embeddings of vertices can be learned that encapsulate vertex similarity and improve performance on tasks including edge prediction and vertex labelling. For both NLP and graph-based tasks, embeddings have been learned in high-dimensional Euclidean spaces. However, recent work has shown that the appropriate isometric space for embedding complex networks is not flat Euclidean space, but negatively curved hyperbolic space. We present a new concept that exploits these recent insights and propose learning neural embeddings of graphs in hyperbolic space. We provide experimental evidence that embedding graphs in their natural geometry significantly improves performance on downstream tasks for several real-world public datasets. Comment: 7 pages, 5 figures
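
    To make concrete what "embedding in hyperbolic space" entails computationally, the sketch below (not taken from the paper) evaluates the standard Poincaré-ball distance that hyperbolic embedding methods typically optimise in place of Euclidean distance. The function name and sample points are illustrative only.

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Hyperbolic distance between two points inside the unit Poincare ball.

    d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    """
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq_diff / denom))

# Illustrative points: two vertices embedded near the boundary of the ball
# are far apart in hyperbolic distance even when their Euclidean distance
# is modest, which is what lets hyperbolic space fit tree-like networks.
u = np.array([0.70, 0.70])   # ||u|| is approximately 0.99
v = np.array([0.70, -0.70])
print(poincare_distance(u, v))
```

    Distances grow rapidly toward the boundary, so the ball behaves like a continuous tree; this is the geometric property the abstract appeals to when it calls hyperbolic space the "natural geometry" of complex networks.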

    Customer Lifetime Value Prediction Using Embeddings

    We describe the Customer Lifetime Value (CLTV) prediction system deployed at ASOS.com, a global online fashion retailer. CLTV prediction is an important problem in e-commerce, where an accurate estimate of future value allows retailers to effectively allocate marketing spend, identify and nurture high-value customers and mitigate exposure to losses. The system at ASOS provides daily estimates of the future value of every customer and is one of the cornerstones of the personalised shopping experience. The state of the art in this domain uses large numbers of handcrafted features and ensemble regressors to forecast value, predict churn and evaluate customer loyalty. We describe our system, which adopts this approach, and our ongoing efforts to further improve it. Recently, domains including language, vision and speech have shown dramatic advances by replacing handcrafted features with features that are learned automatically from data. We show that learning feature representations is a promising extension to the state of the art in CLTV modeling. We propose a novel way to generate embeddings of customers which addresses the issue of the ever-changing product catalogue, and obtain a significant improvement over an exhaustive set of handcrafted features.
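
    The abstract does not spell out the embedding construction, but as a rough illustration of how customer embeddings might sidestep a changing product catalogue, here is a minimal sketch using gensim's word2vec: each product's list of viewers is treated as a "sentence" over customer tokens, so the learned vocabulary consists of customers rather than products, and adding or retiring products does not change the vocabulary. This is an assumption for illustration, not the authors' published pipeline; the data and identifiers (product_view_sessions, cust_01, ...) are hypothetical.

```python
from gensim.models import Word2Vec

# Hypothetical toy data: for each product, the ordered list of customer IDs
# who interacted with it. Because the tokens are customers, the embedding
# vocabulary is unaffected when products enter or leave the catalogue.
product_view_sessions = [
    ["cust_01", "cust_17", "cust_03", "cust_17"],
    ["cust_03", "cust_42", "cust_01"],
    ["cust_17", "cust_42", "cust_42", "cust_99"],
]

# Skip-gram word2vec over customer tokens (sg=1 selects skip-gram).
model = Word2Vec(
    sentences=product_view_sessions,
    vector_size=16,   # embedding dimension
    window=5,
    min_count=1,
    sg=1,
    epochs=50,
    seed=0,
)

# The learned vector for a customer could then be fed to a downstream
# CLTV regressor alongside (or instead of) handcrafted features.
print(model.wv["cust_17"])
```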