
    A Collaborative Kalman Filter for Time-Evolving Dyadic Processes

    We present the collaborative Kalman filter (CKF), a dynamic model for collaborative filtering and related factorization models. Using the matrix factorization approach to collaborative filtering, the CKF accounts for time evolution by modeling each low-dimensional latent embedding as a multidimensional Brownian motion. Each observation is a random variable whose distribution is parameterized by the dot product of the relevant Brownian motions at that moment in time. This is naturally interpreted as a Kalman filter with multiple interacting state space vectors. We also present a method for learning a dynamically evolving drift parameter for each location by modeling it as a geometric Brownian motion. We handle posterior intractability via a mean-field variational approximation, which also preserves tractability for downstream calculations in a manner similar to the Kalman filter. We evaluate the model on several large datasets, providing quantitative evaluation on the 10 million Movielens and 100 million Netflix datasets and qualitative evaluation on a set of 39 million stock returns divided across roughly 6,500 companies from the years 1962-2014. Comment: Appeared at the 2014 IEEE International Conference on Data Mining (ICDM).
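
    A minimal sketch may make the model concrete: each user and item embedding follows a discretized Brownian motion (a Gaussian random walk whose covariance grows with elapsed time), and each rating is modeled as the dot product of the two relevant embeddings plus Gaussian noise. Because the observation is bilinear in two state vectors, the sketch linearizes by conditioning on the other side's current mean, loosely mimicking one mean-field update; the dimensions, noise levels, and names below are illustrative assumptions, not the authors' code.

        import numpy as np

        D = 5             # latent dimension (assumed)
        drift_var = 0.01  # Brownian-motion increment variance per unit time (assumed)
        obs_var = 0.5     # rating noise variance (assumed)

        def predict(mean, cov, dt):
            # Brownian-motion predict step: mean is unchanged, uncertainty grows.
            return mean, cov + drift_var * dt * np.eye(D)

        def update(mean, cov, other_mean, rating):
            # Kalman update of one embedding, linearized at the other side's mean.
            h = other_mean                  # effective observation vector
            s = h @ cov @ h + obs_var       # innovation variance
            k = cov @ h / s                 # Kalman gain
            return mean + k * (rating - h @ mean), cov - np.outer(k, h @ cov)

        # Toy stream of (user, item, elapsed time, rating) observations.
        users = {0: (np.zeros(D), np.eye(D))}
        items = {0: (np.zeros(D), np.eye(D))}
        for u, i, dt, r in [(0, 0, 1.0, 4.0), (0, 0, 2.0, 3.5)]:
            um, uc = predict(*users[u], dt)
            im, ic = predict(*items[i], dt)
            users[u] = update(um, uc, im, r)             # update user side
            items[i] = update(im, ic, users[u][0], r)    # then item side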

    Collaborative method to maintain business process models updated

    Business process models are often forgotten after their creation, and their representations are not usually kept up to date. This is problematic, as processes evolve over time. This paper addresses the maintenance of business process models through the definition of a collaborative method that creates interaction contexts in which business actors can discuss business processes and share business knowledge. The collaborative method extends the discussion of existing process representations to all stakeholders, promoting their update. It contributes to improving business process models by allowing updates based on change proposals and discussions, supported by a groupware tool that was developed. Four case studies were carried out in real organizational environments. We conclude that the defined method and the developed tool can help organizations keep a business process model updated based on the inputs and subsequent discussions of the organizational actors who participate in the processes.

    Collaborative Verification-Driven Engineering of Hybrid Systems

    Hybrid systems with both discrete and continuous dynamics are an important model for real-world cyber-physical systems. The key challenge is to ensure their correct functioning w.r.t. safety requirements. Promising techniques for ensuring safety are model-driven engineering, to develop hybrid systems in a well-defined and traceable manner, and formal verification, to prove their correctness. Their combination forms the vision of verification-driven engineering. Hybrid systems are often rather complex in that they require expertise from many domains (e.g., robotics, control systems, computer science, software engineering, and mechanical engineering). Moreover, despite the remarkable progress in automating formal verification of hybrid systems, the construction of proofs for complex systems often requires nontrivial human guidance, since hybrid systems verification tools solve undecidable problems. It is thus not uncommon for development and verification teams to consist of many players with diverse expertise. This paper introduces a verification-driven engineering toolset that extends our previous work on hybrid and arithmetic verification with tools for (i) graphical (UML) and textual modeling of hybrid systems, (ii) exchanging and comparing models and proofs, and (iii) managing verification tasks. This toolset makes it easier to tackle large-scale verification tasks.

    Supporting collaborative grid application development within the escience community

    The systemic representation and organisation of software artefacts, e.g. specifications, designs, interfaces, and implementations, resulting from the development of large distributed systems from software components have been addressed by our research within the Practitioner and AMES projects [1,2,3,4]. Without appropriate representations and organisations, large collections of existing software are not amenable to the activities of software reuse and software maintenance, as these activities are likely to be severely hindered by the difficulty of understanding the software applications and their associated components. In both of these projects, static analysis of source code and other development artefacts, where available, and the subsequent application of reverse engineering techniques were successfully used to develop a more comprehensive understanding of the software applications under study [5,6]. Later research addressed the maintenance of a component library in the context of component-based software product line development and maintenance [7]. The classic software decompositions, horizontal and vertical, proposed by Goguen [8] influenced all of this research. While they are adequate for static composition, they fail to address the dynamic aspects of composing large distributed software applications from components, especially where these include software services. The separation of component co-ordination concerns from component functionality proposed in [9] offers a partial solution.

    Personalization in cultural heritage: the road travelled and the one ahead

    Over the last 20 years, cultural heritage has been a favored domain for personalization research. For years, researchers have experimented with the cutting-edge technology of the day; now, with the convergence of internet and wireless technology, and the increasing adoption of the Web as a platform for the publication of information, the visitor is able to exploit cultural heritage material before, during and after the visit, having different goals and requirements in each phase. However, cultural heritage sites have a huge amount of information to present, which must be filtered and personalized in order to enable the individual user to easily access it. Personalization of cultural heritage information requires a system that is able to model the user (e.g., interests, knowledge and other personal characteristics) as well as contextual aspects, select the most appropriate content, and deliver it in the most suitable way. Achieving this is extremely challenging in the case of first-time users, such as tourists who visit a cultural heritage site for the first time (and maybe the only time in their life). In addition, as tourism is a social activity, adapting to the individual is not enough: groups and communities have to be modeled and supported as well, taking into account their mutual interests, previous shared experience, and requirements. How to model and represent the user(s) and the context of the visit, and how to reason about the available information, are the challenges faced by researchers in the personalization of cultural heritage. Notwithstanding the effort invested so far, a definitive solution is far from being reached, mainly because new technology and new aspects of personalization are constantly being introduced. This article surveys the research in this area. Starting from the earlier systems, which presented cultural heritage information in kiosks, it summarizes the evolution of personalization techniques in museum web sites, virtual collections and mobile guides, up to the recent extension of cultural heritage toward the semantic and social web. The paper concludes with current challenges and points out areas where future research is needed.
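
    As a minimal illustration (not from the survey) of the pipeline described in this abstract, the sketch below scores content items against a user model and visit context and selects the best match; the profile keys, weights, and items are assumptions.

        # Assumed user model and visit context.
        user = {"interests": {"sculpture": 0.9, "painting": 0.2}, "expertise": "novice"}
        context = {"phase": "during_visit", "time_budget_min": 30}

        items = [
            {"title": "Renaissance sculpture tour", "topic": "sculpture", "minutes": 20},
            {"title": "Baroque painting overview", "topic": "painting", "minutes": 25},
        ]

        def score(item):
            # Rank by interest match; penalize items over the time budget.
            s = user["interests"].get(item["topic"], 0.0)
            if item["minutes"] > context["time_budget_min"]:
                s -= 0.5
            return s

        print("recommend:", max(items, key=score)["title"])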

    Ontology-based collaborative framework for disaster recovery scenarios

    This paper aims at designing an adaptive framework for supporting the collaborative work of different actors in public safety and disaster recovery missions. In such scenarios, firemen and robots interact with each other to reach a common goal: the firemen team is equipped with smart devices, the robot team is supplied with communication technologies, and both must carry out specific tasks. Reliable connectivity is mandatory to ensure the interaction between actors, yet the wireless access network and communication resources are vulnerable to sudden, unexpected changes in the environment. Continuous changes in mission requirements, such as the inclusion or exclusion of an actor, changes in an actor's priority, and the limitations of smart devices, also need to be monitored. To adapt dynamically in such cases, the presented framework is based on a generic multi-level modeling approach in which adaptation is handled by semantic modeling. Automated self-configuration is driven by rule-based reconfiguration policies expressed over an ontology.
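
    To illustrate the flavor of rule-based reconfiguration, the sketch below matches mission facts against policy rules whose actions adapt the configuration; the fact keys, thresholds, and actions are illustrative assumptions and are not drawn from the paper's ontology.

        # Mission facts as they might be reported by monitoring (assumed keys).
        facts = {"link_quality": 0.2, "actor_priority": "high", "battery": 0.9}

        # Each policy rule pairs a condition over the facts with a
        # reconfiguration action; both are illustrative.
        rules = [
            ("reroute", lambda f: f["link_quality"] < 0.3,
             lambda f: print("switch actor to backup relay")),
            ("shed_load", lambda f: f["battery"] < 0.2,
             lambda f: print("offload tasks from constrained device")),
        ]

        for name, condition, action in rules:
            if condition(facts):   # fire every applicable policy rule
                action(facts)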

    From Amateurs to Connoisseurs: Modeling the Evolution of User Expertise through Online Reviews

    Recommending products to consumers means not only understanding their tastes, but also understanding their level of experience. For example, it would be a mistake to recommend the iconic film Seven Samurai simply because a user enjoys other action movies; rather, we might conclude that they will eventually enjoy it -- once they are ready. The same is true for beers, wines, gourmet foods -- or any products where users have acquired tastes: the `best' products may not be the most `accessible'. Thus our goal in this paper is to recommend products that a user will enjoy now, while acknowledging that their tastes may have changed over time, and may change again in the future. We model how tastes change due to the very act of consuming more products -- in other words, as users become more experienced. We develop a latent factor recommendation system that explicitly accounts for each user's level of experience. We find that such a model not only leads to better recommendations, but also allows us to study the role of user experience and expertise on a novel dataset of fifteen million beer, wine, food, and movie reviews. Comment: 11 pages, 7 figures.
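
    The core idea, a latent-factor predictor whose parameters depend on a user's discrete experience level, can be sketched as follows; the array shapes, names, and random initialization are illustrative assumptions rather than the authors' exact model.

        import numpy as np

        E, U, I, K = 3, 100, 50, 5   # experience levels, users, items, factors (assumed)
        rng = np.random.default_rng(0)
        alpha = rng.normal(size=E)           # global bias per experience level
        b_user = rng.normal(size=(E, U))     # user bias per level
        b_item = rng.normal(size=(E, I))     # item bias per level
        g_user = rng.normal(size=(E, U, K))  # user factors per level
        g_item = rng.normal(size=(E, I, K))  # item factors per level

        def predict(u, i, e):
            # Predicted rating for user u on item i at experience level e.
            return (alpha[e] + b_user[e, u] + b_item[e, i]
                    + g_user[e, u] @ g_item[e, i])

        # Experience levels are constrained to be non-decreasing across a
        # user's review history, so later reviews are explained by more
        # "expert" parameter sets.
        print(predict(0, 0, 0), predict(0, 0, 2))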

    Social Information Processing in Social News Aggregation

    The rise of social media sites, such as blogs, wikis, Digg and Flickr, among others, underscores the transformation of the Web into a participatory medium in which users are collaboratively creating, evaluating and distributing information. The innovations introduced by social media have led to a new paradigm for interacting with information, which we call 'social information processing'. In this paper, we study how the social news aggregator Digg exploits social information processing to solve the problems of document recommendation and rating. First, we show, by tracking stories over time, that social networks play an important role in document recommendation. The second contribution of this paper consists of two mathematical models. The first model describes how collaborative rating and promotion of stories emerges from the independent decisions made by many users. The second model describes how a user's influence (the number of promoted stories and the user's social network) changes over time. We find qualitative agreement between the predictions of the models and user data gathered from Digg. Comment: Extended version of the paper submitted to IEEE Internet Computing's special issue on Social Search.
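
    A toy simulation in the spirit of such models shows how promotion can emerge from independent votes driven by general visibility and by the fans of earlier voters; all rates and thresholds below are assumed for illustration, not the paper's fitted equations.

        visibility_rate = 0.5   # expected votes per step from general browsing (assumed)
        fan_rate = 0.05         # expected votes per step per exposed fan (assumed)
        fans_per_voter = 4      # average number of fans a new voter exposes (assumed)
        threshold = 40          # votes needed for front-page promotion (assumed)

        votes, exposed_fans = 1.0, 0.0
        for step in range(100):
            new_votes = visibility_rate + fan_rate * exposed_fans
            exposed_fans += new_votes * fans_per_voter   # voters expose their fans
            votes += new_votes
            if votes >= threshold:
                print(f"promoted at step {step} with about {votes:.0f} votes")
                break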