
    Listener Modeling and Context-aware Music Recommendation Based on Country Archetypes

    Music preferences are strongly shaped by the cultural and socio-economic background of the listener, which is reflected, to a considerable extent, in country-specific music listening profiles. Previous work has already identified several country-specific differences in the popularity distribution of music artists listened to. In particular, what constitutes the "music mainstream" strongly varies between countries. To complement and extend these results, the article at hand delivers the following major contributions: First, using state-of-the-art unsupervised learning techniques, we identify and thoroughly investigate (1) country profiles of music preferences on the fine-grained level of music tracks (in contrast to earlier work that relied on music preferences on the artist level) and (2) country archetypes that subsume countries sharing similar patterns of listening preferences. Second, we formulate four user models that leverage the user's country information on music preferences. Among others, we propose a user modeling approach that describes a music listener as a vector of similarities over the identified country clusters or archetypes. Third, we propose a context-aware music recommendation system that leverages implicit user feedback, where context is defined via the four user models. More precisely, it is a multi-layer generative model based on a variational autoencoder, in which contextual features can influence recommendations through a gating mechanism. Fourth, we thoroughly evaluate the proposed recommendation system and user models on a real-world corpus of more than one billion listening records of users around the world (out of which we use 369 million in our experiments) and show its merits vis-à-vis state-of-the-art algorithms that do not exploit this type of context information. (Comment: 30 pages, 3 tables, 12 figures)
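    As a rough illustration of the modeling idea described above, a listener represented as a vector of similarities over country archetypes can gate the latent code of a recommendation VAE. The sketch below is a minimal, assumption-laden version: the layer sizes, gating formula, and all names are illustrative, not the paper's reference implementation.

```python
# Hedged sketch: a context-gated VAE for implicit feedback, where the context is a
# user's similarity to country archetypes. Architecture details are assumptions.
import torch
import torch.nn as nn

class ContextGatedVAE(nn.Module):
    def __init__(self, n_items: int, n_archetypes: int, latent_dim: int = 200):
        super().__init__()
        self.encoder = nn.Linear(n_items, 2 * latent_dim)       # outputs mean and log-variance
        self.gate = nn.Sequential(                               # gate driven by country-similarity vector
            nn.Linear(n_archetypes, latent_dim), nn.Sigmoid()
        )
        self.decoder = nn.Linear(latent_dim, n_items)

    def forward(self, interactions, country_similarity):
        mu, logvar = self.encoder(interactions).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterization trick
        z = z * self.gate(country_similarity)                    # context modulates the latent code
        return self.decoder(z), mu, logvar

# Usage: one user, 1000 items, similarity to 5 hypothetical country archetypes.
model = ContextGatedVAE(n_items=1000, n_archetypes=5)
x = torch.rand(1, 1000)                                # implicit feedback (e.g., normalized play counts)
c = torch.tensor([[0.7, 0.1, 0.1, 0.05, 0.05]])        # user as a similarity vector over archetypes
scores, mu, logvar = model(x, c)                       # scores rank all items for recommendation
```

    A multiplicative sigmoid gate lets the country context attenuate or emphasize latent dimensions without adding item-specific parameters, which is one simple way a gating mechanism can inject context into the generative path.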

    Exception handling in the development of fault-tolerant component-based systems

    Advisor: Cecilia Mary Fischer Rubira. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.
    Exception handling mechanisms were conceived as a means to help manage the complexity of fault-tolerant software. They promote an explicit textual separation between normal code and the code that deals with abnormal situations, in order to support the construction of programs that are more concise, evolvable, and reliable. Several mainstream programming languages and most of the existing component models implement exception handling mechanisms. In spite of its many benefits, exception handling can be a source of many design faults if used in an ad hoc fashion. Recent studies show that developers of large-scale software systems based on component infrastructures have habits concerning the use of exception handling that make applications vulnerable to faults and hard to maintain. Software components introduce new challenges which are not addressed by traditional exception handling mechanisms and which increase the chances of problems occurring. Examples include unavailability of source code and architectural mismatches. In this work, we propose two complementary techniques centered on exception handling for the construction of fault-tolerant component-based systems. Both of them emphasize system structure as a means to reduce the impact of fault tolerance mechanisms on the overall complexity of a software system and the number of design faults that stem from that complexity. The first one is an approach for the architectural design of a system's error handling capabilities. It addresses the problem of verifying whether a software architecture satisfies certain properties of interest pertaining to the flow of exceptions between architectural components, e.g., whether all the exceptions signaled at the architectural level are eventually handled. The proposed approach is based on a set of existing tools that automate this process as much as possible. The second one consists of applying aspect-oriented programming (AOP) to better modularize exception handling code. We have conducted a thorough study aimed at improving our understanding of the effects of AOP on exception handling code and at identifying the situations where its use is advantageous and the ones where it is not.
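    The second technique applies AOP to component code (in the thesis, AspectJ-style aspects on Java components). As a loose, language-shifted illustration of the same modularization idea, a decorator can play the role of an around advice that keeps handling policy out of the normal-path code; everything below is a hypothetical sketch, not code from the thesis.

```python
# Hedged illustration: a decorator that concentrates exception-handling policy in one
# place, so each component operation keeps only its normal-path code. Names and the
# policy itself are invented for the example.
import functools
import logging

def exception_policy(fallback=None, reraise_as=None):
    """Wrap a component operation with a centrally defined handling policy."""
    def decorator(operation):
        @functools.wraps(operation)
        def wrapper(*args, **kwargs):
            try:
                return operation(*args, **kwargs)            # normal code stays clean
            except Exception as exc:
                logging.error("operation %s failed: %s", operation.__name__, exc)
                if reraise_as is not None:
                    raise reraise_as(str(exc)) from exc       # map to an architecture-level exception
                return fallback                               # or degrade gracefully
        return wrapper
    return decorator

class ServiceUnavailable(Exception):
    """Exception type exposed at the architectural level (illustrative)."""

@exception_policy(reraise_as=ServiceUnavailable)
def fetch_component_data(component_id):
    raise ConnectionError("backend not reachable")            # simulated internal failure

# Calling fetch_component_data("billing") now raises ServiceUnavailable, not ConnectionError,
# so the exceptions that cross component boundaries are the ones the architecture declares.
```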

    Highlighting model elements to improve OCL comprehension

    Models, metamodels, and model transformations play a central role in Model-Driven Development (MDD). Object Constraint Language (OCL) was initially proposed as part of the Unified Modeling Language (UML) standard to add the precision and validation capabilities lacking in its diagrams, and to express well-formedness rules in its metamodel. OCL has several other applications, such as defining design metrics, code-generation templates, or validation rules for model transformations, all required in MDD. Learning OCL as part of a UML course at the university would seem natural but is still the exception rather than the rule. We believe that this is mainly due to a widespread perception that OCL is hard to learn, as gleaned from claims made in the literature. Based on data gathered over the past school years from numerous undergraduate students of different Software Engineering courses, we analyzed how learning design-by-contract clauses with UML+OCL compares with several other Software Engineering Body of Knowledge (SWEBOK) topics. The outcome of the learning process was collected in a rigorous setup, supported by an e-learning platform. We performed inferential statistics on that data to support our conclusions and identify the relevant explanatory variables for students' success or failure. The obtained findings led us to extend an existing OCL tool with two novel features: one is aimed at OCL apprentices and goes straight to the heart of the matter by allowing them to visualize how OCL expressions traverse UML class diagrams; the other is intended for researchers and allows them to compute OCL complexity metrics, making it possible to replicate a research study like the one we are presenting.
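    For readers unfamiliar with what such complexity metrics might look like, here is a deliberately simplified sketch that counts a few structural features of an OCL constraint as crude complexity proxies; the metric definitions are assumptions and not necessarily those implemented in the extended tool.

```python
# Hedged sketch: counting navigations, collection operations, and logical operators
# in an OCL expression as rough complexity indicators. Illustrative only.
import re

def ocl_feature_counts(expression: str) -> dict:
    return {
        "navigations": len(re.findall(r"\.\w+", expression)),         # attribute/association hops
        "collection_ops": expression.count("->"),                     # e.g. ->select, ->forAll, ->size
        "logical_ops": len(re.findall(r"\b(and|or|implies|not)\b", expression)),
    }

# Example invariant: every account of an adult customer has a positive balance.
inv = "self.accounts->forAll(a | a.balance > 0) and self.age >= 18"
print(ocl_feature_counts(inv))   # {'navigations': 3, 'collection_ops': 1, 'logical_ops': 1}
```

    The navigation count is also the information a traversal-visualization feature would surface: each `.attribute` or `.association` hop corresponds to an edge walked on the UML class diagram.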

    Performance of management solutions and cooperation approaches for vehicular delay-tolerant networks

    A wide range of daily-life applications supported by vehicular networks has attracted interest not only from the research community, but also from governments and the automotive industry. For example, such networks can be used to enable services that assist drivers on the road (e.g., road safety, traffic monitoring), to spread commercial and entertainment content (e.g., publicity), or to enable communications in remote or rural regions where a common network infrastructure is not available. Nonetheless, the unique properties of vehicular networks raise several challenges that greatly impact the deployment of these networks. Most of the challenges faced by vehicular networks arise from the highly dynamic network topology, which leads to short and sporadic contact opportunities, disruption, variable node density, and intermittent connectivity. This situation makes data dissemination an interesting research topic within the vehicular networking area, and it is the one addressed by this study. The work described throughout this thesis is motivated by the need to propose new solutions to deal with data dissemination problems in vehicular networking, focusing on vehicular delay-tolerant networks (VDTNs). To guarantee the success of data dissemination in vehicular network scenarios, it is important to ensure that network nodes cooperate with each other. However, a fully cooperative scenario cannot be guaranteed. This makes vehicular networks susceptible to the presence of selfish and misbehaving nodes, which may result in a significant decrease of the overall network performance. In addition, cooperative nodes may suffer from the overwhelming load of services requested by other nodes, which compromises their performance. To address some of these problems, this thesis presents several proposals and studies on the impact of cooperation, monitoring, and management strategies on the network performance of the VDTN architecture. The main goal of these proposals is to enhance network performance. In particular, cooperation and management approaches are exploited to improve and optimize the use of network resources. The performance gains attainable in a VDTN through both types of approaches are demonstrated, not only in terms of bundle delivery probability, but also in terms of wasted resources. The results and achievements of this research work are intended to contribute to the advance of the state of the art on methods and strategies for overcoming the challenges that arise from the unique characteristics and conceptual design of vehicular networks.
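    The evaluation is reported in terms of bundle delivery probability and wasted resources. A minimal sketch of how such metrics are typically derived from simulation counters follows; the counter names and the notion of a "useful" transmission are assumptions, not the thesis's exact accounting.

```python
# Hedged sketch: common DTN-style accounting for the two headline metrics above.
def delivery_probability(bundles_created: int, bundles_delivered: int) -> float:
    """Fraction of created bundles that reached their destination."""
    return bundles_delivered / bundles_created if bundles_created else 0.0

def wasted_resource_ratio(total_transmissions: int, useful_transmissions: int) -> float:
    """Share of relayed bundle copies that never contributed to a delivery."""
    return (1.0 - useful_transmissions / total_transmissions) if total_transmissions else 0.0

print(delivery_probability(1000, 640))        # e.g. 0.64
print(wasted_resource_ratio(5200, 1800))      # e.g. ~0.654
```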

    Un Procedimiento de Medición de Tamaño Funcional para Modelos Conceptuales en entornos MDA (A Functional Size Measurement Procedure for Conceptual Models in MDA Environments)

    This thesis presents the design, application, and automation of a functional size measurement procedure based on the COSMIC method, which makes it possible to measure the functional size of applications generated in MDA environments from their conceptual models. Marín Campusano, BM. (2008). Un Procedimiento de Medición de Tamaño Funcional para Modelos Conceptuales en entornos MDA. http://hdl.handle.net/10251/12305
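    For context, the COSMIC method sizes each functional process by counting its data movements (Entry, Exit, Read, Write), one COSMIC Function Point (CFP) per movement. The sketch below illustrates only that counting rule; the mapping from MDA conceptual models to functional processes, which is the thesis's actual contribution, is replaced here by a hand-written toy dictionary.

```python
# Hedged sketch of the COSMIC counting rule: 1 CFP per identified data movement.
# The toy model below is an illustrative assumption, not the thesis's model mapping.
from collections import Counter

def cosmic_size(functional_processes: dict) -> int:
    """Total size in COSMIC Function Points for a set of functional processes."""
    valid = {"Entry", "Exit", "Read", "Write"}
    counts = Counter(m for moves in functional_processes.values() for m in moves if m in valid)
    return sum(counts.values())

model = {
    "create_customer": ["Entry", "Read", "Write", "Exit"],   # receive, validate, persist, confirm
    "list_customers": ["Entry", "Read", "Exit"],
}
print(cosmic_size(model))   # 7 CFP
```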

    Performance assessment of mobility solutions for IPv6-based healthcare wireless sensor networks

    This thesis focuses on the study of mobile wireless sensor networks applied to healthcare scenarios. The promotion of a better quality of life for hospitalized patients is addressed in this research work with a solution that can help these patients keep their mobility (where possible). The proposed solution allows remote monitoring and control of patients' health in real time and without interruptions. Small sensor nodes able to collect health parameters and send them wirelessly allow the patients' health condition to be monitored. A network infrastructure, composed of several access points, connects the sensor nodes (carried by the patients) to remote healthcare providers. To ensure continuous access to sensor nodes, special attention must be dedicated to managing the transition of these sensor nodes between the coverage areas of different access points. The process of changing the access point to which a sensor node is attached is called handover. In that context, this thesis proposes a new handover mechanism that can ensure continuous connectivity for mobile sensor nodes in a healthcare wireless sensor network. Due to the limited resources of sensor nodes, namely the available energy (these sensor nodes are typically powered by small batteries), the proposed mechanism pays special attention to the optimization of energy consumption. To support this optimization, part of this work is dedicated to the construction of a small sensor node. The handover mechanism proposed in this work is called Hand4MAC (handover mechanism for the MAC layer). This mechanism is compared with other mechanisms commonly used in handover management. The Hand4MAC mechanism is deployed and validated both through simulation and in a real testbed. The scenarios used for the validation reproduce a hospital ward. The performance evaluation focuses on the percentage of time that sensor nodes are accessible to the network while traveling across the coverage areas of several access points, and on the energy expended in handover processes. The experiments take into account the following parameters: number of sent messages, number of received messages, multicast message usage, energy consumption, number of sensor nodes present in the scenario, velocity of sensor nodes, and time-to-live value. In both simulation and the real testbed, the Hand4MAC mechanism is shown to perform better than all the other handover mechanisms tested; only the most promising handover mechanisms proposed in the literature were considered in this comparison. Fundação para a Ciência e a Tecnologia (FCT)
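    The evaluation centers on the share of time a moving node stays reachable and on the energy spent on handovers. A minimal sketch of that accounting follows, assuming a hypothetical event-log format and per-message energy figure (neither is taken from the thesis's instrumentation).

```python
# Hedged sketch: computing node accessibility and handover energy cost from a
# hypothetical log of (timestamp_seconds, event) pairs. Event names and the
# per-message energy value are illustrative assumptions.
ENERGY_PER_CONTROL_MSG_MJ = 0.3   # assumed cost of one handover control message, in millijoules

def accessibility_percentage(events, total_time):
    """Percentage of total_time during which the node was associated with some access point."""
    connected_since, connected_time = None, 0.0
    for t, event in events:
        if event == "associated" and connected_since is None:
            connected_since = t
        elif event == "disassociated" and connected_since is not None:
            connected_time += t - connected_since
            connected_since = None
    if connected_since is not None:               # still connected at the end of the run
        connected_time += total_time - connected_since
    return 100.0 * connected_time / total_time

log = [(0, "associated"), (40, "disassociated"), (42, "associated")]
print(accessibility_percentage(log, total_time=60))   # ~96.7: reachable for 58 of 60 seconds
print(3 * ENERGY_PER_CONTROL_MSG_MJ)                  # energy for a three-message handover
```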

    Languages of games and play: A systematic mapping study

    Digital games are a powerful means for creating enticing, beautiful, educational, and often highly addictive interactive experiences that impact the lives of billions of players worldwide. We explore what informs the design and construction of good games in order to learn how to speed up game development. In particular, we study to what extent languages, notations, patterns, and tools can offer experts the theoretical foundations, systematic techniques, and practical solutions they need to raise their productivity and improve the quality of games and play. Despite the growing number of publications on this topic, there is currently no overview describing the state of the art that relates research areas, goals, and applications. As a result, efforts and successes are often one-off, lessons learned go overlooked, language reuse remains minimal, and opportunities for collaboration and synergy are lost. We present a systematic map that identifies relevant publications and gives an overview of research areas and publication venues. In addition, we categorize research perspectives along common objectives, techniques, and approaches, illustrated by summaries of selected languages. Finally, we distill challenges and opportunities for future research and development.

    Predicting software project effort: A grey relational analysis based method

    The inherent uncertainty of the software development process presents particular challenges for software effort prediction. We need to systematically address missing data values, outlier detection, feature subset selection, and the continuous evolution of predictions as the project unfolds, all of this in the context of data starvation and noisy data. However, in this paper we particularly focus on outlier detection, feature subset selection, and effort prediction at an early stage of a project. We propose a novel approach that uses grey relational analysis (GRA) from grey system theory (GST), a recently developed systems engineering theory based on the uncertainty of small samples. In this work we address some of the theoretical challenges in applying GRA to outlier detection, feature subset selection, and effort prediction, and then evaluate our approach on five publicly available industrial data sets using both stepwise regression and Analogy as benchmarks. The results are very encouraging in the sense of being comparable to or better than other machine learning techniques, and thus indicate that the method has considerable potential. National Natural Science Foundation of China
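    Grey relational analysis ranks historical projects by their grey relational grade with respect to the target project; a grade-weighted combination of known efforts then yields an analogy-style estimate. The sketch below uses common textbook defaults (min-max normalization, distinguishing coefficient 0.5) and toy data; it is not the paper's exact procedure.

```python
# Hedged sketch of GRA applied to analogy-style effort prediction. The normalization,
# the distinguishing coefficient (zeta = 0.5), and the final weighting are standard
# defaults, not necessarily the settings used in the paper.
import numpy as np

def grey_relational_grades(target, historical, zeta=0.5):
    data = np.vstack([target, historical]).astype(float)
    norm = (data - data.min(0)) / (data.max(0) - data.min(0) + 1e-12)   # min-max per feature
    delta = np.abs(norm[1:] - norm[0])                                  # deviation sequences
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + zeta * dmax) / (delta + zeta * dmax)                # grey relational coefficients
    return coeff.mean(axis=1)                                           # one grade per historical project

# Toy data: two project features (e.g., size, team experience) and known efforts.
historical = np.array([[100, 3], [250, 5], [120, 4]])
efforts = np.array([400.0, 1100.0, 520.0])
grades = grey_relational_grades(np.array([130, 4]), historical)
prediction = np.average(efforts, weights=grades)       # grade-weighted analogy estimate
print(grades.round(3), round(prediction, 1))
```

    The same grades can double as an outlier signal (projects with uniformly low grades against all others) and, computed per feature, as a relevance score for feature subset selection, which is how the three uses mentioned in the abstract fit one mechanism.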