625 research outputs found

    Information technology for intellectual analysis of item descriptions in e-commerce

    Get PDF
    E-commerce is growing rapidly, driven by the worldwide digital transformation and the mutual benefits it brings to consumers and merchants. The integration of information technologies has markedly improved the efficiency of digital business, opening new opportunities and shaping innovative business models. Adopting information technology, however, carries risks, most notably for the protection of personal data. This motivates research on artificial intelligence for e-commerce, with particular emphasis on recommender systems. This paper discusses the construction of an information technology for processing textual descriptions of goods in e-commerce. Through a qualitative analysis, we identify factors that mitigate the risks of unauthorized data access. The central insight is that appropriate use of product-matching technology makes it possible to form recommendations without involving customers' personal data or vendors' proprietary information. We propose a structural model of this information technology that delineates the principal functional components for processing textual data on electronic trading platforms. Central to our exposition is product comparison based on textual descriptions: solving this problem improves the efficiency of product search and facilitates product comparison and categorization. Implementing the proposed information technology, in whole or in part, can help sellers improve their pricing strategy and respond more quickly to market sales trends, while streamlining the purchasing journey for buyers by speeding up the identification of the required goods on complex e-commerce platforms.
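The product-matching step the abstract centers on can be sketched with a plain TF-IDF bag-of-words comparison of descriptions. This is an assumption on our part, since the abstract fixes no concrete similarity measure; the function names and sample descriptions are illustrative only:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build smoothed TF-IDF weight maps for whitespace-tokenized documents."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    df = Counter()                     # document frequency per term
    for tokens in tokenized:
        df.update(set(tokens))
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({t: (tf[t] / len(tokens)) * (math.log((1 + n) / (1 + df[t])) + 1)
                        for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight maps."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

descriptions = [
    "apple iphone 13 smartphone 128gb blue",        # seller A
    "iphone 13 by apple smartphone 128gb blue",     # seller B, same product
    "oak dining table with six chairs",             # unrelated product
]
vecs = tfidf_vectors(descriptions)
same_product = cosine(vecs[0], vecs[1])
different_product = cosine(vecs[0], vecs[2])
```

Note that the comparison uses only the public product texts, which is exactly why no customer or vendor data needs to enter the recommendation step.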

    Mechanistic Mode Connectivity

    Full text link
    We study neural network loss landscapes through the lens of mode connectivity, the observation that minimizers of neural networks retrieved via training on a dataset are connected via simple paths of low loss. Specifically, we ask the following question: are minimizers that rely on different mechanisms for making their predictions connected via simple paths of low loss? We provide a definition of mechanistic similarity as shared invariances to input transformations and demonstrate that lack of linear connectivity between two models implies they use dissimilar mechanisms for making their predictions. Relevant to practice, this result helps us demonstrate that naive fine-tuning on a downstream dataset can fail to alter a model's mechanisms, e.g., fine-tuning can fail to eliminate a model's reliance on spurious attributes. Our analysis also motivates a method for targeted alteration of a model's mechanisms, named connectivity-based fine-tuning (CBFT), which we analyze using several synthetic datasets for the task of reducing a model's reliance on spurious attributes.
    Comment: Accepted at ICML, 202
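The linear-connectivity check at the heart of this analysis — evaluate the loss along the straight segment between two trained weight vectors and measure the barrier above the endpoint losses — can be sketched on a toy convex model. (On a convex loss a zero barrier is guaranteed; the paper's interest is nonconvex networks, where it is not. The data and training setup below are illustrative.)

```python
import numpy as np

# Toy data: two features, label decided by their sum.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def loss(w):
    """Mean logistic (cross-entropy) loss of weights w on (X, y)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def train(seed, steps=500, lr=0.5):
    """Full-batch gradient descent from a seed-dependent initialization."""
    w = np.random.default_rng(seed).normal(size=2)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y)) / len(y)
    return w

w1, w2 = train(seed=1), train(seed=2)

# Loss along the straight line (1 - a) * w1 + a * w2 between the two minimizers.
alphas = np.linspace(0.0, 1.0, 21)
path_losses = [loss((1 - a) * w1 + a * w2) for a in alphas]
barrier = max(path_losses) - max(loss(w1), loss(w2))
```

A large `barrier` for two real networks would indicate, per the paper's result, that they rely on mechanistically dissimilar prediction strategies.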

    Neural Methods for Effective, Efficient, and Exposure-Aware Information Retrieval

    Get PDF
    Neural networks with deep architectures have demonstrated significant performance improvements in computer vision, speech recognition, and natural language processing. The challenges in information retrieval (IR), however, are different from these other application areas. A common form of IR involves ranking documents--or short passages--in response to keyword-based queries. Effective IR systems must deal with the query-document vocabulary mismatch problem by modeling relationships between different query and document terms and how they indicate relevance. Models should also consider lexical matches when the query contains rare terms--such as a person's name or a product model number--not seen during training, and must avoid retrieving semantically related but irrelevant results. In many real-life IR tasks, retrieval involves extremely large collections--such as the document index of a commercial Web search engine--containing billions of documents. Efficient IR methods should take advantage of specialized IR data structures, such as the inverted index, to retrieve efficiently from large collections. Given an information need, the IR system also mediates how much exposure an information artifact receives by deciding whether it should be displayed, and where it should be positioned, among other results. Exposure-aware IR systems may optimize for additional objectives, besides relevance, such as parity of exposure for retrieved items and content publishers. In this thesis, we present novel neural architectures and methods motivated by the specific needs and challenges of IR tasks.
    Comment: PhD thesis, University College London (2020)
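The inverted index the abstract refers to can be sketched minimally: map each term to the set of documents containing it, and answer a conjunctive keyword query by intersecting posting sets. The class and sample documents below are illustrative, not the thesis's implementation:

```python
from collections import defaultdict

class InvertedIndex:
    """Toy inverted index: term -> set of ids of documents containing it."""

    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id, text):
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        """Conjunctive keyword search: intersect the query terms' posting sets."""
        terms = query.lower().split()
        if not terms:
            return set()
        result = set(self.postings.get(terms[0], set()))
        for term in terms[1:]:
            result &= self.postings.get(term, set())
        return result

idx = InvertedIndex()
idx.add(1, "neural ranking models for web search")
idx.add(2, "classical keyword search with an inverted index")
idx.add(3, "neural networks for speech recognition")
```

The efficiency gain is that a query only touches the posting lists of its own terms, rather than scanning every document in the collection.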

    An informatics based approach to respiratory healthcare.

    Get PDF
    By 2005, one person in every five UK households suffered from asthma. Research has shown that episodes of poor air quality can have a negative effect on respiratory health, a growing concern for asthmatics. To better inform clinical staff and patients about the contribution of poor air quality to patient health, this thesis defines an IT architecture that can be used by systems to identify environmental predictors leading to a decline in the respiratory health of an individual patient. Personal environmental predictors of asthma exacerbation are identified by validating the delay between environmental predictors and decline in respiratory health. The concept is demonstrated using prototype software, and indicates that the analytical methods provide a mechanism to produce an early warning of impending asthma exacerbation due to poor air quality. The author has introduced the term enviromedics to describe this new field of research. Pattern recognition techniques are used to analyse patient-specific environments and extract meaningful health predictors from the large quantities of data involved (often in the region of half a million data points). This research proposes a suitable architecture that defines processes and techniques enabling the validation of patient-specific environmental predictors of respiratory decline. The design of the architecture was validated by implementing prototype applications that demonstrate, through hospital admissions data and personal lung function monitoring, that air quality can be used as a predictor of patient-specific health. The refined techniques developed during the research (such as Feature Detection Analysis) were also validated by the application prototypes.
    This thesis makes several contributions to knowledge, including: the process architecture; Feature Detection Analysis (FDA), which automates the detection of trend reversals within time-series data; validation of the delay characteristic using a Self-Organising Map (SOM), an unsupervised method of pattern recognition; and Frequency, Boundary and Cluster Analysis (FBCA), an additional technique developed by this research to refine the SOM.
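The abstract does not spell out how Feature Detection Analysis works, but its stated purpose — automated detection of trend reversals in time-series data — can be illustrated with a simple first-difference sign-change sketch. This is a stand-in for illustration, not the author's algorithm:

```python
def trend_reversals(series):
    """Return indices of points where a rising run turns falling or vice versa.

    A sign change in the first difference marks the turning point; zero
    differences (flat stretches) are skipped rather than treated as a direction.
    """
    reversals = []
    prev_dir = 0
    for i in range(1, len(series)):
        diff = series[i] - series[i - 1]
        direction = (diff > 0) - (diff < 0)      # +1 rising, -1 falling, 0 flat
        if direction != 0:
            if prev_dir != 0 and direction != prev_dir:
                reversals.append(i - 1)          # the peak or trough itself
            prev_dir = direction
    return reversals

# Toy peak-flow-style readings: a peak at index 2 and a trough at index 4.
peaks_and_troughs = trend_reversals([410, 430, 450, 430, 400, 420, 440])
```

Applied to lung-function readings, a detected downward reversal is the kind of event that could be correlated against earlier air-quality predictors.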

    WinnER: A Winner-Take-All Hashing-Based Unsupervised Model for Entity Resolution Problems

    Get PDF
    In this study, we propose an end-to-end unsupervised learning model that can be used for Entity Resolution problems on string data sets, where multiple strings describe the same physical object while differing as strings. An innovative prototype selection algorithm is utilized to create a rich euclidean, and at the same time, dissimilarity space. Part of this work is a presentation of the theoretical benefits of such a euclidean dissimilarity space. We then present an embedding scheme based on rank-ordered vectors that circumvents the Curse of Dimensionality problem. The core of our framework is a locality hashing algorithm named Winner-Take-All, which drastically accelerates our model's run time while maintaining excellent scores in the similarity-checking phase. For the similarity-checking phase, we adopt the Kendall Tau rank correlation coefficient, a widely accepted metric for comparing rankings. Finally, we use two state-of-the-art frameworks to make a consistent evaluation of our methodology on a well-known Entity Resolution data set.
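The two core ingredients — Winner-Take-All hashing, which records for each random permutation the position of the largest value in a fixed-size window, and the Kendall Tau rank correlation — can be sketched as follows. The toy vectors and parameter choices are illustrative; this is not the WinnER implementation:

```python
import random
from itertools import combinations

def wta_hash(vec, permutations, window):
    """Winner-Take-All code: for each permutation, the position (within the
    first `window` permuted coordinates) holding the largest value."""
    code = []
    for perm in permutations:
        prefix = perm[:window]
        code.append(max(range(window), key=lambda j: vec[prefix[j]]))
    return code

def kendall_tau(a, b):
    """Kendall rank correlation of two equal-length score vectors (no ties)."""
    pairs = list(combinations(range(len(a)), 2))
    concordant = sum(1 for i, j in pairs if (a[i] - a[j]) * (b[i] - b[j]) > 0)
    discordant = sum(1 for i, j in pairs if (a[i] - a[j]) * (b[i] - b[j]) < 0)
    return (concordant - discordant) / len(pairs)

rng = random.Random(0)
dim = 8
perms = [rng.sample(range(dim), dim) for _ in range(16)]   # 16 random permutations

u = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4]
v = [0.85, 0.15, 0.75, 0.25, 0.65, 0.35, 0.55, 0.45]   # same rank order as u
w = [-x for x in u]                                    # exactly reversed ranks

matches_uv = sum(x == y for x, y in zip(wta_hash(u, perms, 4), wta_hash(v, perms, 4)))
matches_uw = sum(x == y for x, y in zip(wta_hash(u, perms, 4), wta_hash(w, perms, 4)))
```

Because the codes depend only on the rank order of the coordinates, vectors with identical orderings hash identically, which is what makes the scheme fast for the candidate-filtering step before the Kendall Tau comparison.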

    Unsupervised Structural Embedding Methods for Efficient Collective Network Mining

    Full text link
    How can we align accounts of the same user across social networks? Can we identify the professional role of an email user from their patterns of communication? Can we predict the medical effects of chemical compounds from their atomic network structure? Many problems in graph data mining, including all of the above, are defined on multiple networks. The central element to all of these problems is cross-network comparison, whether at the level of individual nodes or entities in the network or at the level of entire networks themselves. To perform this comparison meaningfully, we must describe the entities in each network expressively in terms of patterns that generalize across the networks. Moreover, because the networks in question are often very large, our techniques must be computationally efficient. In this thesis, we propose scalable unsupervised methods that embed nodes in vector space by mapping nodes with similar structural roles in their respective networks, even if they come from different networks, to similar parts of the embedding space. We perform network alignment by matching nodes across two or more networks based on the similarity of their embeddings, and refine this process by reinforcing the consistency of each node’s alignment with those of its neighbors. By characterizing the distribution of node embeddings in a graph, we develop graph-level feature vectors that are highly effective for graph classification. With principled sparsification and randomized approximation techniques, we make all our methods computationally efficient and able to scale to graphs with millions of nodes or edges. We demonstrate the effectiveness of structural node embeddings on industry-scale applications, and propose an extensive set of embedding evaluation techniques that lay the groundwork for further methodological development and application.
    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162895/1/mheimann_1.pd
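The idea of structural node embeddings — describing a node by identity-free local patterns so that nodes from different networks can be compared — can be sketched with two crude features, degree and mean neighbor degree, plus a greedy nearest-neighbor alignment. This is far simpler than the thesis's methods; all names are illustrative:

```python
from collections import defaultdict

def structural_embedding(edges):
    """Embed each node as (degree, mean neighbor degree): features that
    depend only on local structure, never on node identity."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return {node: (len(nbrs), sum(len(adj[n]) for n in nbrs) / len(nbrs))
            for node, nbrs in adj.items()}

def align(emb_a, emb_b):
    """Greedy alignment: map each node of graph A to the closest node of B."""
    def dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return {a: min(emb_b, key=lambda b: dist(va, emb_b[b]))
            for a, va in emb_a.items()}

# Two star graphs with disjoint node names: the hubs should align to each other
# even though no name is shared between the networks.
emb_a = structural_embedding([("hub", "a1"), ("hub", "a2"), ("hub", "a3")])
emb_b = structural_embedding([("ctr", "b1"), ("ctr", "b2"), ("ctr", "b3")])
mapping = align(emb_a, emb_b)
```

Because the features carry no node identities, the same embedding space serves both networks, which is the property that makes cross-network matching possible at all.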