    Measuring customer loyalty using an extended RFM and clustering technique

    Today, the ability to identify profitable customers, build long-term loyalty among them, and expand existing relationships is considered a key competitive factor for a customer-oriented organization. The prerequisite for such competitiveness is a powerful customer relationship management (CRM) system, and accurate evaluation of customer profitability is one of the foundations of successful CRM. RFM is a method that examines three properties for each customer, namely recency, frequency, and monetary value, and scores customers based on these properties. This paper introduces a method that derives customers' behavioral traits from an organization's customer data using an extended RFM approach, classifies the customers using the K-means algorithm, and finally scores the customers in each cluster in terms of their loyalty. In the proposed approach, the customer records are first clustered; the items of the RFM model are then specified by selecting the properties that affect the customers' loyalty rate using a multi-objective genetic algorithm, and these properties are scored in each cluster according to their effect on loyalty. The influence of each property on loyalty is calculated using Spearman's correlation coefficient.
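
    As a rough illustration of the pipeline described above, the sketch below computes RFM properties from transaction records and clusters customers with K-means. The input table and its column names (customer_id, date, amount) are illustrative assumptions, not the paper's data, and the genetic-algorithm property selection is omitted.

        # Minimal sketch: RFM scoring followed by K-means clustering.
        # The input schema (customer_id, date, amount) is a hypothetical example.
        import pandas as pd
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        transactions = pd.read_csv("transactions.csv", parse_dates=["date"])
        now = transactions["date"].max()

        # Recency: days since last purchase; frequency: purchase count;
        # monetary: total spend per customer.
        rfm = transactions.groupby("customer_id").agg(
            recency=("date", lambda d: (now - d.max()).days),
            frequency=("date", "count"),
            monetary=("amount", "sum"),
        )

        # Standardize so K-means is not dominated by the monetary scale.
        features = StandardScaler().fit_transform(rfm)
        rfm["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

        # With a per-customer loyalty score available (hypothetical), each
        # property's influence could be estimated as in the paper:
        #   from scipy.stats import spearmanr
        #   rho, _ = spearmanr(rfm["frequency"], loyalty)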

    On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law

    Out-of-distribution (OOD) testing is increasingly popular for evaluating a machine learning system's ability to generalize beyond the biases of a training set. OOD benchmarks are designed to present a different joint distribution of data and labels between training and test time. VQA-CP has become the standard OOD benchmark for visual question answering, but we discovered three troubling practices in its current use. First, most published methods rely on explicit knowledge of the construction of the OOD splits. They often rely on "inverting" the distribution of labels, e.g. answering mostly 'yes' when the common training answer is 'no'. Second, the OOD test set is used for model selection. Third, a model's in-domain performance is assessed after retraining it on in-domain splits (VQA v2) that exhibit a more balanced distribution of labels. These three practices defeat the objective of evaluating generalization and call into question the value of methods specifically designed for this dataset. We show that embarrassingly simple methods, including one that generates answers at random, surpass the state of the art on some question types. We provide short- and long-term solutions to avoid these pitfalls and realize the benefits of OOD evaluation.
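
    To make the "answers at random" finding concrete, here is a minimal sketch of such a baseline: it pools the training answers per question type and guesses uniformly among them at test time. The record format (dicts with question_type and answer keys) is an assumption for illustration, not the actual VQA-CP schema.

        # Minimal sketch of an "embarrassingly simple" random-answer baseline.
        import random
        from collections import defaultdict

        def random_baseline_accuracy(train_examples, test_examples, seed=0):
            """Accuracy of guessing uniformly among training answers of the same type."""
            rng = random.Random(seed)
            answers_by_type = defaultdict(list)
            for ex in train_examples:
                answers_by_type[ex["question_type"]].append(ex["answer"])
            correct = 0
            for ex in test_examples:
                candidates = answers_by_type.get(ex["question_type"])
                guess = rng.choice(candidates) if candidates else None
                correct += guess == ex["answer"]
            return correct / len(test_examples)

        # Hypothetical usage with the assumed record format: when the common
        # training answer is 'no' but the test labels are inverted, the random
        # guesser's accuracy tracks the training answer distribution.
        train = [{"question_type": "yes/no", "answer": "no"}] * 9 + \
                [{"question_type": "yes/no", "answer": "yes"}]
        test = [{"question_type": "yes/no", "answer": "yes"}] * 5
        print(random_baseline_accuracy(train, test))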

    Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup

    Mixup is a highly successful technique for improving the generalization of neural networks by augmenting the training data with combinations of random pairs. Selective mixup is a family of methods that apply mixup only to specific pairs, e.g. combining examples across classes or across domains. These methods have claimed remarkable improvements on benchmarks with distribution shifts, but their mechanisms and limitations remain poorly understood. We examine an overlooked aspect of selective mixup that casts its success in a completely new light: the non-random selection of pairs changes the training distribution and improves generalization by means completely unrelated to the mixing. For example, in binary classification, mixup across classes implicitly resamples the data toward a uniform class distribution, a classical solution to label shift. We show empirically that this implicit resampling explains much of the improvements in prior work. Theoretically, these results rely on a regression toward the mean, an accidental property that we identify in several datasets. In sum, we have found a new equivalence between two successful methods, selective mixup and resampling; we identify limits of the former, confirm the effectiveness of the latter, and find better combinations of their respective benefits.
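
    The implicit-resampling effect is easy to see in a sketch: when mixup pairs are required to span both classes, each pair contributes exactly one example per class, so the endpoints feeding training are class-balanced regardless of the original imbalance. The synthetic data below is purely illustrative.

        # Sketch of selective mixup across classes on imbalanced binary data.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000
        y = (rng.random(n) < 0.9).astype(int)   # imbalanced labels: ~90% class 1
        X = rng.normal(size=(n, 8))
        pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)

        def cross_class_mixup_batch(batch_size=256, alpha=0.2):
            """One batch under selective mixup restricted to cross-class pairs."""
            i = rng.choice(pos, batch_size)      # one endpoint from class 1
            j = rng.choice(neg, batch_size)      # one endpoint from class 0
            lam = rng.beta(alpha, alpha, size=(batch_size, 1))
            x_mix = lam * X[i] + (1 - lam) * X[j]
            y_mix = lam[:, 0] * y[i] + (1 - lam[:, 0]) * y[j]
            return x_mix, y_mix

        # Endpoints entering each batch are split 50/50 between the classes,
        # regardless of the 90/10 training imbalance: the classical resampling
        # fix for label shift, obtained before any mixing happens.
        endpoint_labels = np.concatenate([y[rng.choice(pos, 128)], y[rng.choice(neg, 128)]])
        print(endpoint_labels.mean())  # 0.5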

    Soccer event detection via collaborative multimodal feature analysis and candidate ranking

    This paper presents a framework for soccer event detection through collaborative analysis of the textual, visual, and aural modalities. The basic notion is to decompose a match video into smaller segments until the desired eventful segment is ultimately identified. Simple features are considered, namely minute-by-minute reports from sports websites (text), the semantic shot classes of far and close-up views (visual), and the low-level features of pitch and log-energy (audio). The framework demonstrates that, despite the simple features and without the use of labeled training examples, event detection can be achieved with very high accuracy. Experiments conducted on ~30 hours of soccer video show very promising results for the detection of goals, penalties, yellow cards, and red cards.
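
    The coarse-to-fine idea can be sketched as follows: a minute-by-minute report localizes an event to a one-minute window, and candidate segments inside that window are ranked by the simple visual and audio cues. All field names and the equal weighting of cues are assumptions for illustration, not the paper's exact scoring.

        # Minimal sketch: text report narrows the search to one minute,
        # then candidates are ranked by close-up ratio and audio energy.
        def rank_candidates(segments, event_minute):
            """Score segments within the reported minute; highest score first."""
            window = [s for s in segments
                      if event_minute * 60 <= s["start"] < (event_minute + 1) * 60]
            # Eventful moments tend to show close-up views and loud audio, so
            # combine the two cues into one score (equal weights are an assumption).
            return sorted(window,
                          key=lambda s: s["closeup_ratio"] + s["log_energy"],
                          reverse=True)

        # Hypothetical candidate segments (start time in seconds, cues in [0, 1]).
        segments = [
            {"start": 2700, "closeup_ratio": 0.8, "log_energy": 0.9},  # 45:00
            {"start": 2715, "closeup_ratio": 0.2, "log_energy": 0.3},
        ]
        best = rank_candidates(segments, event_minute=45)[0]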

    An evaluation of the software architecture efficiency using the Clichés and behavioral diagrams pertaining to the unified modeling language

    Software architecture plays an essential role in the development of complicated software systems, so it is important to evaluate the efficiency of the architecture. One way to evaluate an architecture is to create an executable model from it. Unified Modeling Language (UML) diagrams are commonly used to describe software architecture and make it easy to express the necessary requirements at the architecture level; however, because UML is a standard semi-formal language, the architecture cannot be evaluated directly from its diagrams. To evaluate the architecture, the semi-formal model must therefore be turned into a formal one. In this study, we first describe the architecture using UML; properties of the software architecture are expressed using the UML sequence diagram, deployment diagram, use case diagram, and component diagram. The information needed for the quality attribute of efficiency is attached to these diagrams as clichés and labels, and the independent and dependent components are extracted from the component diagram. Finally, the resulting semi-formal model is mapped onto a formal model based on colored Petri nets, on which the evaluation takes place.
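
    As a toy illustration of the mapping step, the sketch below turns the messages of a UML sequence diagram into transitions of a Petri-net-like structure, with places for sender and receiver states. The representation is a deliberately simplified assumption; real colored Petri nets additionally carry typed ("colored") tokens and would be built with a dedicated CPN tool rather than plain dataclasses.

        # Toy sketch: sequence-diagram messages -> Petri-net transitions.
        from dataclasses import dataclass, field

        @dataclass
        class PetriNet:
            places: set = field(default_factory=set)
            # Each transition is (input_place, message_name, output_place).
            transitions: list = field(default_factory=list)

        def sequence_diagram_to_net(messages):
            """messages: list of (sender, message_name, receiver) tuples."""
            net = PetriNet()
            for sender, msg, receiver in messages:
                p_in = f"{sender}_ready"
                p_out = f"{receiver}_received_{msg}"
                net.places.update({p_in, p_out})
                net.transitions.append((p_in, msg, p_out))
            return net

        # Hypothetical two-message interaction between a client and a server.
        net = sequence_diagram_to_net([("Client", "request", "Server"),
                                       ("Server", "reply", "Client")])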