4 research outputs found
Prediction is very hard, especially about conversion. Predicting user purchases from clickstream data in fashion e-commerce
Knowing whether a user is a buyer or a window shopper solely from clickstream
data is of crucial importance for e-commerce platforms seeking to implement
accurate real-time next-best-action (NBA) policies. However, due to the low
frequency of conversion events and the noisiness of browsing data, classifying
user sessions is very challenging. In this paper, we address the clickstream
classification problem in the fashion industry and present three major
contributions to the burgeoning field of AI in fashion: first, we collected,
normalized and prepared a novel dataset of live shopping sessions from a major
European e-commerce fashion website; second, we use the dataset to benchmark,
in a controlled environment, strong baselines and SOTA models from the
literature; finally, we propose a new discriminative neural model that
outperforms neural architectures recently proposed at Rakuten labs.
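Neither the dataset nor the paper's model is reproduced here, but the task setup the abstract describes (scoring a noisy event sequence as "buyer" vs. "window shopper") can be illustrated with a minimal, hypothetical baseline: a bag-of-events count vector passed through a hand-weighted logistic scorer. The event vocabulary, weights, and function names are all illustrative assumptions, not the paper's method.

```python
# Minimal sketch: classify a clickstream session as "buyer" vs "browser".
# The event vocabulary and weights below are illustrative, not from the paper.
from collections import Counter
import math

VOCAB = ["view", "detail", "add_to_cart", "remove", "purchase_intent"]
# Hypothetical weights: cart/intent events push the score towards "buyer".
WEIGHTS = {"view": -0.1, "detail": 0.2, "add_to_cart": 1.5,
           "remove": -0.5, "purchase_intent": 2.0}
BIAS = -1.0  # negative bias reflects how rare conversion events are

def featurize(session):
    """Bag-of-events count vector over the fixed vocabulary."""
    counts = Counter(e for e in session if e in VOCAB)
    return [counts[v] for v in VOCAB]

def buyer_probability(session):
    """Logistic score: sigmoid of a weighted event-count sum plus bias."""
    z = BIAS + sum(w * c for w, c in zip(
        (WEIGHTS[v] for v in VOCAB), featurize(session)))
    return 1.0 / (1.0 + math.exp(-z))

buyer = ["view", "detail", "add_to_cart", "purchase_intent"]
browser = ["view", "view", "detail", "remove"]
```

A learned model would fit the weights (or a neural encoder) from labeled sessions; the point of the sketch is only the session-to-feature-to-score pipeline that any such classifier shares.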
Event sequence metric learning
In this paper we consider the challenging problem of learning discriminative
vector representations for event sequences generated by real-world users.
Vector representations map raw behavioral client data to low-dimensional,
fixed-length vectors in a latent space. We propose a novel method of learning
these vector embeddings based on a metric-learning approach, together with a
strategy for generating subsequences of the raw data that lets the
metric-learning approach be applied in a fully self-supervised way. We
evaluated the method on several public bank-transaction datasets and showed
that the self-supervised embeddings outperform other methods on downstream
classification tasks. Moreover, the embeddings are compact and provide
additional user privacy protection.
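The core self-supervision signal the abstract describes can be sketched as follows: two random subsequences of the same user's event stream form a positive pair, while subsequences drawn from different users form negatives, and the encoder is trained so positives land closer in the latent space than negatives. The snippet below shows only the subsequence-generation step plus a toy normalized event-frequency "encoder" standing in for a trained network; the vocabulary and all names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the self-supervised pairing strategy: two random contiguous
# subsequences of one user's event stream are a positive pair; subsequences
# drawn from different users are negatives. The "encoder" here is a toy
# L2-normalized event-frequency vector, standing in for a trained model.
import math
import random

VOCAB = ["login", "payment", "transfer", "topup"]

def sample_subsequence(seq, rng, min_len=3):
    """Draw a random contiguous subsequence (the generation step)."""
    length = rng.randint(min_len, len(seq))
    start = rng.randint(0, len(seq) - length)
    return seq[start:start + length]

def embed(seq):
    """Toy encoder: L2-normalized event-frequency vector."""
    v = [seq.count(tok) / len(seq) for tok in VOCAB]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def dist(a, b):
    """Euclidean distance between two embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

rng = random.Random(0)
user_a = ["login", "payment", "payment", "transfer"] * 5
user_b = ["login", "topup", "topup", "topup"] * 5

# Positive pair: two subsequences of the same user; negative: across users.
pos = dist(embed(sample_subsequence(user_a, rng)),
           embed(sample_subsequence(user_a, rng)))
neg = dist(embed(sample_subsequence(user_a, rng)),
           embed(sample_subsequence(user_b, rng)))
```

In the trained setting a metric-learning loss (e.g. contrastive or triplet) would push `pos` down and `neg` up through a learned encoder; here the behavioral difference between the two toy users already makes the gap visible.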
Deconstructing the right to privacy considering the impact of fashion recommender systems on an individual's autonomy and identity
Computing 'fashion' into a system of algorithms that personalise an individual's shopping journey is not without risks to the way we express, assess, and develop aspects of our identity. This study uses an interdisciplinary research approach to examine how an individual's interaction with algorithms in the fashion domain shapes our understanding of an individual's privacy, autonomy, and identity. Using fashion theory and psychology, I make two contributions to the meaning of privacy to protect notions of identity and autonomy, and develop a more nuanced perspective on this concept using 'fashion identity'. One, a more varied outlook on privacy allows us to examine how algorithmic constructions impose inherent reductions on individual sense-making in developing and reinventing personal fashion choices. A 'right to not be reduced' allows us to focus on the individual's practice of identity and choice with regard to the algorithmic entities incorporating imperfect semblances on the personal and social aspects of fashion. Second, I submit that we need a new perspective on the right to privacy to address the risks of algorithmic personalisation systems in fashion. There are gaps in the law regarding capturing the impact of algorithmic personalisation systems on an individual's inference of knowledge about fashion, as well as the associations of fashion applied to individual circumstances. Focusing on the case law of the European Court of Human Rights (ECtHR) and the General Data Protection Regulation (GDPR), as well as aspects of EU non-discrimination and consumer law, I underline that we need to develop a proactive approach to the right to privacy entailing the incorporation of new values. I define these values to include an individual's perception and self-relationality, describing the impact of algorithmic personalisation systems on an individual's inference of knowledge about fashion, as well as the associations of fashion applied to individual circumstances.
The study concludes with recommendations regarding the use of AI techniques in fashion using an international human rights approach. I argue that the 'right to not be reduced' requires new interpretative guidance informing international human rights standards, including Article 17 of the International Covenant on Civil and Political Rights (ICCPR). Moreover, I consider that the 'right to not be reduced' requires us to consider novel choices that inform the design and deployment of algorithmic personalisation systems in fashion, considering the UN Guiding Principles on Business and Human Rights and the EU Commission's Proposal for an AI Act.