    THE STUDY OF ADOPTING PROBLEM BASED LEARNING IN NORMAL SCALE CLASS COURSE DESIGN

    This study adopts Problem Based Learning (PBL) for pre-service teachers in a teacher education program. PBL was adopted because the class is not small, the course content is too extensive to cover by lecture alone, and the necessary technologies are ready for classroom use. The study used a movie as an intermediary: the movie provided the scenario from which students defined problems and searched for information in order to report their findings. Although this was not a required course, more than ten students took it; adapting PBL, with its group work, to a normal-scale class in higher education is therefore a further modification of the method. The purposes of this study are to evaluate this PBL adoption process and to identify opportunities to improve the PBL course design. The methodology is text mining with the KeyGraph technique. Feedback from thirty-seven pre-service teachers who completed three cycles of the PBL process was analyzed. In the first two cycles, the students studied special topics in education by themselves; the final cycle trained the pre-service teachers to run PBL courses of their own in the future. The results indicate that the important factor in the adopted PBL course is "discussion", and the rare but important factor is "movie" (the intermediary). Presented at ICEduTech 2014 (international conference, 10-12 December 2014, Tamsui, Taiwan).
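
    The reported KeyGraph analysis rests on term frequency and co-occurrence statistics over the students' feedback texts. A minimal Python sketch of that first step follows (the feedback strings are hypothetical, and the full KeyGraph algorithm, which extracts hub terms linking frequent "foundation" clusters, is not reproduced here):

        # Simplified KeyGraph-style statistics: frequent terms form the graph's
        # foundations; rarer terms that co-occur strongly with them (e.g. "movie")
        # are candidate key factors.
        from collections import Counter
        from itertools import combinations

        feedbacks = [                                   # hypothetical feedback strings
            "group discussion helped us define the problem",
            "the movie made the scenario easy to discuss in the group",
        ]

        term_freq, cooccur = Counter(), Counter()
        for doc in feedbacks:
            terms = set(doc.lower().split())
            term_freq.update(terms)
            cooccur.update(combinations(sorted(terms), 2))

        print(term_freq.most_common(5))                 # candidate foundation terms
        print(cooccur.most_common(5))                   # strongest co-occurrence links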

    Leveraging Multimedia to Advance Science by Disseminating a Greater Variety of Scholarly Contributions in More Accessible Formats

    For the welfare of the scientific community, we intentionally "rock the boat" about the way we conduct, recognize, and disseminate scholarly contributions. As a scientific community, we are doing ourselves a great disservice by ignoring the insights, artifacts, discoveries, and conversations that naturally occur in the scientific process of advancing knowledge but do not fit into the narrowly defined form of print-style papers. By failing to recognize, reward, and publish the wide variety of scholarly contributions that do not suit print-style papers, we hinder scientific progress, devalue important and necessary contributions to science, and demotivate these vital contributions. Although more than three centuries of scientific publishing have demonstrated the effectiveness of the print medium for conveying scholarly knowledge, the print-style paper captures only a single form of scholarly contribution in a highly limited media format. Unfortunately, the current tenure and promotion process recognizes only this one form of scientific contribution; as a result, science at large inevitably advances only through this single type of contribution. Given the radical advances in audiovisual technologies, storage and bandwidth capacities, public virtual infrastructure, and the global acceptance of user-generated open content, the time is ripe to explore publishing more forms of scholarly contribution in publicly available multimedia formats (e.g., video). In this paper, we examine the feasibility of this proposal, develop a model to demonstrate the sustainability of the approach, and discuss potential limitations.

    Investigating human-perceptual properties of "shapes" using 3D shapes and 2D fonts

    Shapes are generally used to convey meaning. They are used in video games, films, and other multimedia, in diverse ways. 3D shapes may be destined for virtual scenes or represent objects to be constructed in the real world. Fonts add character to an otherwise plain block of text, allowing the writer to make important points more visually prominent or distinct from other text; they can also indicate the structure of a document at a glance. Rather than studying shapes through traditional geometric shape descriptors, we provide alternative methods to describe and analyse shapes through the lens of human perception, via the concepts of Schelling Points and Image Specificity. Schelling Points are the choices people make when they aim to match what they expect others to choose but cannot communicate with them to agree on an answer. We study whole-mesh selections in this setting, where Schelling Meshes are the most frequently selected shapes. The key idea behind Image Specificity is that different images evoke different descriptions, but "specific" images yield more consistent descriptions than others; we apply Specificity to 2D fonts. We show that each concept can be learned and predicted, for fonts and 3D shapes respectively, using a depth-image-based convolutional neural network. Results are shown for a range of fonts and 3D shapes, and we demonstrate that font Specificity and the Schelling Meshes concept are useful for visualisation, clustering, and search applications. Overall, we find that each concept captures similarities between shapes of its respective type, even when the shape geometries themselves differ sharply; the "context" of these similarities is an abstract or subjective meaning that is consistent across different people.
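
    Since Image Specificity is at heart a consistency score over free-text descriptions of the same shape, a minimal sketch of the underlying quantity follows, assuming simple Jaccard word overlap as the similarity measure (the work described above instead predicts such scores from depth images with a convolutional neural network, and the description strings here are hypothetical):

        # Specificity of one font = mean pairwise similarity of its descriptions;
        # higher values mean people describe the shape more consistently.
        from itertools import combinations

        def jaccard(a, b):
            a, b = set(a.lower().split()), set(b.lower().split())
            return len(a & b) / len(a | b)

        descriptions = [                    # hypothetical descriptions of one font
            "thin elegant serif",
            "elegant thin serif with long strokes",
            "delicate old-fashioned serif",
        ]

        pairs = list(combinations(descriptions, 2))
        specificity = sum(jaccard(a, b) for a, b in pairs) / len(pairs)
        print(f"Specificity: {specificity:.2f}")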

    A product-centric data mining algorithm for targeted promotions

    Targeted promotions in retail are becoming increasingly popular, particularly in the UK grocery retail sector, where competition is stiff and consumers remain price sensitive. Given this, a targeted promotion algorithm is proposed to enhance the effectiveness of retailers' promotions. The algorithm leverages a mathematical model for optimizing the items to target and fuzzy c-means clustering for finding the best customers to target. Tests using simulations with real-life consumer scanner-panel data from the UK grocery retail sector show that the algorithm performs well in finding the best items and customers to target, eliminating "false positives" (targeting customers who do not buy a product) and reducing "false negatives" (not targeting customers who could buy). The algorithm also performs better than a similar published framework, particularly in handling false positives and false negatives. The paper concludes by discussing managerial and research implications, and highlights applications of the model to other fields.
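
    To make the customer-clustering step concrete, below is a minimal fuzzy c-means sketch in Python/NumPy (a textbook formulation with hypothetical customer features, not the paper's exact model); the soft memberships are what allow customers to be ranked by how strongly they belong to a target segment:

        # Fuzzy c-means: each customer gets a membership degree in every segment.
        import numpy as np

        def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((X.shape[0], c))
            U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per customer
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                inv = d ** (-2.0 / (m - 1))
                U = inv / inv.sum(axis=1, keepdims=True) # standard FCM membership update
            return centers, U

        X = np.random.default_rng(1).random((100, 3))    # hypothetical features, e.g.
        centers, U = fuzzy_cmeans(X)                     # frequency, spend, recency
        top = np.argsort(U[:, 0])[::-1][:10]             # customers most in segment 0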

    A hybrid approach for item collection recommendations: an application to automatic playlist continuation

    Current recommender systems aim mainly to generate accurate item recommendations, without properly evaluating the multiple dimensions of the recommendation problem. However, in many domains, such as music, where items are rarely consumed in isolation, users rather need a set of items designed to work well together, one with certain cognitive properties as a whole, related to their perception of quality and satisfaction. In this thesis, a hybrid case-based recommendation approach for item collections is proposed; in particular, an application to automatic playlist continuation is presented, addressing similar cognitive concepts rather than similar users. Playlists, which are sets of music items designed to be consumed as a sequence, with a specific purpose and within a specific context, are treated as cases. The proposed recommender system is based on a meta-level hybridization. First, Latent Dirichlet Allocation is applied to the set of past playlists, described as distributions over music styles, to identify their underlying concepts. Then, for a started playlist, its semantic characteristics, such as its latent concept and the styles of the included items, are inferred, and Case-Based Reasoning is applied to the set of past playlists addressing the same concept to construct and recommend a relevant continuation. A graph-based item model is used to overcome the semantic gap between songs' signal-based descriptions and users' high-level preferences, and to efficiently capture the playlists' structures and the similarity of the music items within them. As the proposed method bases its reasoning on previous playlists, it does not require the construction of complex user profiles to generate accurate recommendations. Furthermore, beyond relevance, parameters such as increased coherence and support for diverse items are taken into account to deliver a more complete user experience. Experiments on real music datasets show improved results compared to other state-of-the-art techniques, achieving a good trade-off between the relevance, diversity, and coherence of the recommendations. Finally, although focused on playlist continuation, the designed approach could easily be adapted to serve other recommendation domains with similar characteristics.
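
    A minimal sketch of the concept-identification step with scikit-learn follows, treating each playlist as a "document" of music-style tokens (the style names and the two-concept setting are hypothetical; the thesis couples this step with Case-Based Reasoning over the past playlists sharing the inferred concept):

        # LDA over playlists-as-style-distributions: topics play the role of
        # latent playlist concepts.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        playlists = [
            "indie_rock indie_rock dream_pop shoegaze",
            "techno house deep_house techno",
            "shoegaze dream_pop ambient",
        ]

        vec = CountVectorizer()
        counts = vec.fit_transform(playlists)
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

        # For a started playlist, the dominant topic selects which past cases
        # the case-based reasoning step retrieves.
        started = vec.transform(["dream_pop shoegaze"])
        concept = lda.transform(started).argmax()
        print(concept)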

    Identification of Data Structure with Machine Learning: From Fisher to Bayesian networks

    This thesis proposes a theoretical framework to thoroughly analyse the structure of a dataset in terms of (a) metric, (b) density, and (c) feature associations. For the first aspect, Fisher's metric learning algorithms are the foundation of a novel manifold based on the information and complexity of a classification model. For the density aspect, Probabilistic Quantum Clustering, a Bayesian version of the original Quantum Clustering, is proposed; its clustering results depend on local density variations, which is a desired feature when dealing with heteroscedastic data. For the third aspect, the constraint-based PC algorithm, the starting point of many structure learning algorithms, is used to find feature associations by means of conditional independence tests; this is then used to select Bayesian networks based on a regularized likelihood score. These three aspects of data structure analysis were fully tested on synthetic examples and real cases, which allowed us to unravel and discuss the advantages and limitations of these algorithms. One of the biggest challenges encountered was the application of these methods to a Big Data set analysed within a collaboration with a large UK retailer, where the interest was in identifying the data structure underlying customer shopping baskets.
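
    The PC algorithm's core primitive is a conditional independence test between two features given a conditioning set. Below is a self-contained sketch of the standard Gaussian version, via partial correlation and Fisher's z-transform (illustrative only; the thesis builds full structure learning and Bayesian network selection on top of such tests):

        # Conditional independence test: returns True when features i and j are
        # judged independent given the conditioning set, at significance alpha.
        import numpy as np
        from scipy import stats

        def ci_test(data, i, j, cond, alpha=0.05):
            idx = [i, j] + list(cond)
            corr = np.corrcoef(data[:, idx], rowvar=False)
            prec = np.linalg.pinv(corr)
            r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])  # partial correlation
            z = 0.5 * np.log((1 + r) / (1 - r))                 # Fisher transform
            stat = np.sqrt(data.shape[0] - len(cond) - 3) * abs(z)
            return stat < stats.norm.ppf(1 - alpha / 2)

        rng = np.random.default_rng(0)
        x = rng.normal(size=1000)
        y = x + rng.normal(size=1000)
        w = y + rng.normal(size=1000)            # x -> y -> w chain
        data = np.column_stack([x, y, w])
        print(ci_test(data, 0, 2, []))           # False: x and w are dependent
        print(ci_test(data, 0, 2, [1]))          # True: independent given y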

    A framework for personalized dynamic cross-selling in e-commerce retailing

    Cross-selling and product bundling are prevalent strategies in the retail sector. Instead of static bundling offers, i.e. giving the same offer to everyone, personalized dynamic cross-selling generates targeted bundle offers and can help maximize revenues and profits. In resolving the two basic problems of dynamic cross-selling, selecting the right complementary products and optimizing the discount, computational complexity becomes central as the customer base and product list grow. Traditional recommender systems are built upon collaborative filtering techniques that exploit informational cues gained from users in the form of product ratings and rating differences across users. The retail setting differs in that there are only records of transactions (in period X, customer Y purchased product Z). Instead of a range of explicit rating scores, transactions form binary datasets: 1 = purchased, 0 = not purchased. This makes it a one-class collaborative filtering (OCCF) problem. Notwithstanding the wider application domains of the OCCF problem, very little work has been done in the retail setting. This research addresses this gap by developing an effective framework for dynamic cross-selling in online retailing. In the first part of the research, we propose an effective yet intuitive approach that integrates temporal information about a product's lifecycle (i.e., the non-stationary nature of the sales history) as a weight component in latent-factor-based OCCF models, improving the quality of personalized product recommendations. To improve scalability for the large product catalogs and transaction sparsity typical of online retailing, the approach relies on the product catalog hierarchy and segments (rather than individual SKUs) for collaborative filtering. In the second part of the work, we propose effective bundle discount policies, which estimate a specific customer's interest in potential cross-selling products (identified using the proposed OCCF methods) and calibrate the discount to strike an effective balance between the probability of offer acceptance and the size of the discount. We also developed a highly effective simulation platform for generating e-retailer transactions under various settings, and used it to test and validate the proposed methods. To the best of our knowledge, this is the first study to address real-time personalized dynamic cross-selling with discounting. The proposed techniques are applicable to cross-selling, up-selling, and personalized and targeted selling within the e-retail business domain. Through extensive analysis of various market scenarios, we also provide a number of managerial insights on the performance of cross-selling strategies.
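
    A minimal sketch of the first idea follows, assuming a simple exponential recency weight folded into a weighted latent-factor model on a binary purchase matrix (all names and parameters here are hypothetical; the paper's actual weighting scheme and hierarchy-based segmentation are not reproduced):

        # Time-weighted one-class matrix factorization on implicit purchase data.
        import numpy as np

        rng = np.random.default_rng(0)
        n_users, n_items, k = 50, 30, 8
        R = (rng.random((n_users, n_items)) < 0.05).astype(float)  # 1 = purchased
        age_weeks = rng.integers(0, 52, n_items)                   # product age (hypothetical)
        W = 1.0 + 4.0 * np.exp(-age_weeks / 26.0)                  # higher confidence for new items

        P = 0.1 * rng.standard_normal((n_users, k))                # user latent factors
        Q = 0.1 * rng.standard_normal((n_items, k))                # item latent factors
        lr, reg = 0.02, 0.05
        for _ in range(50):
            E = W * (R - P @ Q.T)           # weighted residuals; zeros act as weak negatives
            P += lr * (E @ Q - reg * P)     # gradient steps on the weighted squared loss
            Q += lr * (E.T @ P - reg * Q)

        scores = P @ Q.T                    # rank a user's unpurchased items for cross-selling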

    Export-Led Growth after Covid-19: The Case of Portugal

    The COVID-19 pandemic has disrupted trade and global value chains, and small open economies such as Portugal are particularly vulnerable. In this paper we consider the impact of the pandemic on the country's exports, arguing that an export-led recovery is possible. The challenge is to identify viable export opportunities, since one consequence of the pandemic has been to close off and shrink export opportunities globally. Despite this, we show that there are still significant under-utilized export opportunities for Portugal. We use the large UN-COMTRADE and CEPII BACI data sets, to which we apply four sets of filters to identify 42,593 realistic export opportunities. These opportunities are worth €286.6 billion in untapped revenue potential. The major markets for these products are countries such as the United States, Germany, China, the United Kingdom, France, and Japan. We discuss the trade facilitation and industrial policy implications of exploiting these opportunities in the context of the relevant literature on trade and development.
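
    To illustrate the filtering idea on BACI-style trade records, a hedged pandas sketch follows (the file name, column names, and thresholds are hypothetical; the paper's four filter sets, which also cover aspects such as market growth and accessibility, are not reproduced):

        # Screen (importer, product) pairs for large markets where Portugal's
        # current share is still negligible, i.e. under-utilized opportunities.
        import pandas as pd

        trade = pd.read_csv("baci_hs17.csv")   # hypothetical extract with columns:
                                               # exporter, importer, product, value
        demand = trade.groupby(["importer", "product"])["value"].sum().rename("import_demand")
        pt = (trade[trade.exporter == "PRT"]
              .groupby(["importer", "product"])["value"].sum().rename("pt_exports"))

        df = pd.concat([demand, pt], axis=1).fillna(0.0)
        opportunities = df[(df.import_demand > 50e6) &                 # large market
                           (df.pt_exports / df.import_demand < 0.01)]  # tiny PT share
        print(len(opportunities))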

    Data Clustering And Visualization Through Matrix Factorization

    Clustering is traditionally an unsupervised task that finds natural groupings, or clusters, in multidimensional data based on perceived similarities among the patterns. The purpose of clustering is to extract useful information from unlabeled data. In order to present the knowledge extracted by clustering in a meaningful way, data visualization has become a popular and growing research field; visualization can provide a qualitative overview of large and complex data sets, helping us gain the insight needed to truly understand the phenomena of interest in the data. The contribution of this dissertation is two-fold: Semi-Supervised Non-negative Matrix Factorization (SS-NMF) for data clustering/co-clustering, and Exemplar-based Visualization (EV) through matrix factorization. Compared to traditional data mining models, matrix-based methods are fast and easy to understand and implement, and are especially suitable for large-scale challenging problems in text mining, image grouping, medical diagnosis, and bioinformatics. In this dissertation, we present two effective matrix-based solutions in the new directions of data clustering and visualization. First, in many practical learning domains there is a large supply of unlabeled data but limited labeled data, and in most cases it is expensive to generate large amounts of labeled data. Traditional clustering algorithms completely ignore these valuable labeled data and are thus inapplicable to such problems. Consequently, semi-supervised clustering, which can incorporate domain knowledge to guide a clustering algorithm, has become a topic of significant recent interest. We therefore develop a Non-negative Matrix Factorization (NMF) based framework to incorporate prior knowledge into data clustering. Moreover, with the fast growth of the Internet and of computational technologies over the past decade, many data mining applications have advanced swiftly from the simple clustering of one data type to the co-clustering of multiple, usually highly heterogeneous, data types. To this end, we extend SS-NMF to perform heterogeneous data co-clustering. From a theoretical perspective, SS-NMF for data clustering/co-clustering is mathematically rigorous: the convergence and correctness of our algorithms are proved. In addition, we discuss the relationship between SS-NMF and other well-known clustering and co-clustering models. Second, most current clustering models provide only the centroids (e.g., the mathematical means of the clusters) without inferring representative exemplars from the real data, and are thus unable to summarize or visualize the raw data well. A new method, Exemplar-based Visualization (EV), is proposed to cluster and visualize extremely large-scale data. Capitalizing on recent advances in matrix approximation and factorization, EV provides a means to visualize large-scale data with high accuracy (in retaining neighbor relations), high efficiency (in computation), and high flexibility (through the use of exemplars). Empirically, we demonstrate the superior performance of our matrix-based data clustering and visualization models through extensive experiments on publicly available large-scale data sets.
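
    As a minimal illustration of the matrix-factorization view of clustering, the sketch below runs plain NMF with scikit-learn and reads cluster assignments off the factor matrices (the dissertation's SS-NMF additionally folds pairwise prior-knowledge constraints into the factorization, which is not reproduced here; the data is synthetic):

        # NMF-based clustering: X ~ W H, rows of W give soft cluster weights.
        import numpy as np
        from sklearn.decomposition import NMF

        X = np.abs(np.random.default_rng(0).standard_normal((100, 20)))  # nonnegative data
        model = NMF(n_components=3, init="nndsvd", random_state=0)
        W = model.fit_transform(X)          # sample-to-cluster weights
        H = model.components_               # cluster-to-feature profiles
        labels = W.argmax(axis=1)           # hard assignment from soft weights
        print(labels[:10])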