
    Survey on social reputation mechanisms: Someone told me I can trust you

    Nowadays, most business and social interactions have moved to the internet, highlighting the relevance of creating online trust. One way to obtain a measure of trust is through reputation mechanisms, which record one's past performance and interactions to generate a reputational value. We observe that numerous existing reputation mechanisms share similarities with actual social phenomena; we call such mechanisms 'social reputation mechanisms'. The aim of this paper is to discuss several social phenomena and map these to existing social reputation mechanisms in a variety of scopes. First, we focus on reputation mechanisms in the individual scope, in which everyone is responsible for their own reputation. Subjective reputational values may be communicated to different entities in the form of recommendations. Secondly, we discuss social reputation mechanisms in the acquaintances scope, where one's reputation can be tied to another's through vouching or invite-only networks. Finally, we present existing social reputation mechanisms in the neighbourhood scope. In such systems, one's reputation can be heavily affected by the behaviour of others in their neighbourhood or social group. Comment: 10 pages, 3 figures, 1 table
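The acquaintances-scope idea described above can be made concrete with a toy sketch (not taken from the survey itself): a newcomer enters a vouching network with a share of their voucher's reputation, and later misbehaviour by the vouchee spills back onto the voucher. All parameter values and names here are illustrative assumptions.

```python
class VouchNetwork:
    """Toy vouching-based reputation mechanism (illustrative only)."""

    def __init__(self):
        self.reputation = {}
        self.voucher_of = {}

    def join(self, newcomer, voucher, share=0.5):
        # A newcomer starts with a fraction of the voucher's reputation.
        self.reputation[newcomer] = share * self.reputation[voucher]
        self.voucher_of[newcomer] = voucher

    def report_misbehaviour(self, member, penalty=1.0, spillover=0.5):
        # The member loses reputation, and part of the penalty propagates
        # to whoever vouched for them, tying their reputations together.
        self.reputation[member] -= penalty
        voucher = self.voucher_of.get(member)
        if voucher is not None:
            self.reputation[voucher] -= spillover * penalty

net = VouchNetwork()
net.reputation["alice"] = 4.0
net.join("bob", "alice")          # bob enters with half of alice's reputation
net.report_misbehaviour("bob")    # bob is penalized; alice absorbs spillover
print(net.reputation)
```

The spillover term is what distinguishes the acquaintances scope from the individual scope: a voucher has a stake in the vouchee's future behaviour.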

    A Hybrid Social Network-based Collaborative Filtering Method for Personalized Manufacturing Service Recommendation

    Nowadays, social network-based collaborative filtering (CF) methods are widely applied to recommend suitable products to consumers by combining trust relationships with similarities in the preference ratings of past users. However, these methods are rarely used for recommending manufacturing services. Hence, this study developed a hybrid social network-based CF method for recommending personalized manufacturing services. Trustworthy enterprises and three types of similar enterprises with different features were considered as the four influential components for calculating predicted ratings of candidate services. The stochastic approach for link structure analysis (SALSA) was adopted to select the top-K trustworthy enterprises while also considering their reputation propagation on the enterprise social network. The predicted ratings of candidate services were computed using an extended user-based CF method in which the particle swarm optimization (PSO) algorithm was leveraged to optimize the weights of the four components, thus making service recommendation more objective. Finally, an evaluation experiment illustrated that the proposed method is more accurate than the traditional user-based CF method.
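The core prediction step described above can be sketched as a user-based CF formula that blends several neighbour groups (e.g. trusted enterprises and similar enterprises) with per-group weights, which in the paper would be tuned by PSO. This is a minimal illustration, not the authors' exact formulation; all data and names are invented.

```python
def predict_rating(target, service, groups, weights, ratings, mean_rating):
    """Predict target's rating for a service from weighted neighbour groups.

    groups  : dict group name -> list of (neighbour, affinity) pairs
    weights : dict group name -> group weight (PSO would optimize these)
    ratings : dict (user, service) -> observed rating
    """
    num, den = 0.0, 0.0
    for name, members in groups.items():
        w = weights[name]
        for neighbour, affinity in members:
            r = ratings.get((neighbour, service))
            if r is None:
                continue  # neighbour has not rated this service
            # Mean-centred deviation weighted by group weight and affinity.
            num += w * affinity * (r - mean_rating[neighbour])
            den += w * affinity
    base = mean_rating[target]
    return (base + num / den) if den else base

ratings = {("e1", "s1"): 4.0, ("e2", "s1"): 5.0}
mean_rating = {"e1": 3.5, "e2": 4.0, "me": 3.0}
groups = {"trusted": [("e1", 0.9)], "similar": [("e2", 0.7)]}
weights = {"trusted": 0.6, "similar": 0.4}
print(round(predict_rating("me", "s1", groups, weights, ratings, mean_rating), 3))
```

In the paper there are four such components rather than two; the structure of the weighted aggregation is the same.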

    A PageRank-based collaborative filtering recommendation approach in digital libraries

    In the current era of big data, the explosive growth of digital resources in Digital Libraries (DLs) has led to a serious information overload problem. This trend demands personalized recommendation approaches that provide DL users with digital resources specific to their individual needs. In this paper we present a personalized digital resource recommendation approach which combines PageRank and Collaborative Filtering (CF) techniques in a unified framework for recommending the right digital resources to an active user, by generating and analyzing a time-aware network of both user relationships and resource relationships from historical usage data. To address the existing issues in DL deployment, including unstable user profiles, unstable digital resource features, data sparsity, and the cold-start problem, this work adapts the personalized PageRank algorithm to rank time-aware resource importance for more effective CF, by searching for associative links connecting the active user and his/her initially preferred resources. We further evaluate the performance of the proposed methodology through a case study comparing it with the traditional CF technique operating on the same historical usage data from a DL.
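A personalized PageRank of the kind adapted here restricts the teleport step to the active user's seed nodes, so scores reflect proximity to that user rather than global popularity. The sketch below is a plain power-iteration version over a tiny mixed user-resource graph; the paper's time-aware edge weighting is not reproduced, and the graph is invented for illustration.

```python
def personalized_pagerank(graph, seeds, damping=0.85, iters=100):
    """Power iteration with teleportation restricted to `seeds`."""
    nodes = list(graph)
    teleport = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(teleport)
    for _ in range(iters):
        # Teleport mass goes only to the personalization (seed) nodes.
        new = {n: (1 - damping) * teleport[n] for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:
                continue
            share = damping * rank[n] / len(out)
            for m in out:
                new[m] += share
        rank = new
    return rank

# Tiny illustrative graph mixing a user node and resource nodes.
graph = {
    "user_a": ["res_1", "res_2"],
    "res_1": ["user_a", "res_2"],
    "res_2": ["res_1"],
}
scores = personalized_pagerank(graph, seeds={"user_a"})
# Rank candidate resources by their personalized score for user_a.
print(sorted(["res_1", "res_2"], key=scores.get, reverse=True))
```

Because all random-walk restarts land on the active user, resources reachable through many associative links from that user (or from their initially preferred resources) accumulate higher scores, which is what makes the ranking user-specific.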

    A Graph-Based Model Reduction Method for Digital Twins

    Digital twin technology is a central talking point in academia and industry. Defining a digital twin calls for new modeling paradigms and computational methods. Developments in the Internet of Things and advanced simulation and modeling techniques have provided new strategies for building complex digital twins. The digital twin is a virtual entity representation of a physical entity, such as a product or a process. This virtual entity is a collection of computationally complex knowledge models that embeds all the information of the physical world. To that end, this article proposes a graph-based representation of the virtual entity. This graph-based representation provides a way to visualize the parameters and their interactions across different modeling domains. However, the virtual entity graph becomes inherently complex when a multidimensional physical system involves many parameters. This research contributes to the body of knowledge with a novel graph-based model reduction method that simplifies virtual entity analysis. The method uses graph-structure-preserving algorithms and Dempster–Shafer theory to quantify the importance of the parameters in the virtual entity. The graph-based model reduction method is validated by benchmarking it against the random forest regressor method, and is tested on a turbo compressor case study. In the future, a method such as graph-based model reduction should be integrated with digital twin frameworks so that the twin can provide digital services efficiently. Peer reviewed
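To give a feel for graph-based reduction of a parameter graph, the sketch below ranks parameters by a simple centrality and keeps only the top ones plus the edges among them. This is a stand-in illustration: the paper combines structure-preserving algorithms with Dempster–Shafer theory for the importance measure, whereas here plain degree centrality is used, and the turbo-compressor-style parameter names are invented.

```python
def reduce_parameter_graph(edges, keep):
    """Keep the `keep` most connected parameters of an undirected graph.

    Degree centrality is used here purely for illustration; it is not
    the Dempster-Shafer-based importance measure from the paper.
    """
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    # Rank by descending degree, breaking ties alphabetically.
    ranked = sorted(degree, key=lambda n: (-degree[n], n))
    kept = set(ranked[:keep])
    # Retain only edges between kept parameters: the reduced subgraph.
    return kept, [(a, b) for a, b in edges if a in kept and b in kept]

# Invented parameter-interaction edges for a compressor-like system.
edges = [("rpm", "pressure"), ("rpm", "flow"), ("pressure", "flow"),
         ("flow", "temp"), ("temp", "ambient")]
kept, reduced = reduce_parameter_graph(edges, keep=3)
print(sorted(kept))
```

The point of the exercise is the same as in the paper: analysis of the virtual entity proceeds on the reduced subgraph of important parameters instead of the full, inherently complex one.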

    Social impact retrieval: measuring author influence on information retrieval

    The increased presence of technologies collectively referred to as Web 2.0 means that the entire process of new media production and dissemination has moved away from an author-centric approach. Casual web users and browsers are increasingly able to play a more active role in the information creation process. This means that the traditional ways in which information sources are validated and scored must adapt accordingly. In this thesis we propose a new way to look at a user's contributions to the network in which they are present, using these interactions to provide a measure of the user's authority and centrality. This measure is then used to attribute a query-independent interest score to each of the contributions the author makes, enabling us to provide other users with relevant information that has been of greatest interest to a community of like-minded users. This is done through the development of two algorithms: AuthorRank and MessageRank. We present two real-world user experiments centred on multimedia annotation and browsing systems that we built; these systems were novel in themselves, bringing together video and text browsing as well as free-text annotation. Using these systems as examples of real-world applications of our approaches, we then look at a larger-scale experiment based on the author and citation networks of a ten-year period (1997-2007) of the ACM SIGIR conference on information retrieval. We use the citation context of SIGIR publications as a proxy for annotations, constructing large social networks between authors. Against these networks we show the effectiveness of incorporating user-generated content, or annotations, to improve information retrieval.
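The general idea, an author-level authority measure feeding a query-independent score for each message, can be sketched as follows. The thesis's actual AuthorRank and MessageRank definitions are not reproduced here; this toy version simply counts incoming interactions as authority and scales each message's score by it, with all names and weightings being assumptions.

```python
def author_authority(interactions):
    """Score each author by how often others interact with their content."""
    score = {}
    for source, target in interactions:  # source engaged with target's content
        if source != target:             # ignore self-interactions
            score[target] = score.get(target, 0) + 1
    return score

def message_interest(messages, authority):
    """Query-independent interest: author authority scaled by replies received."""
    return {mid: authority.get(author, 0) * (1 + replies)
            for mid, (author, replies) in messages.items()}

# Invented interaction log and message table.
interactions = [("u1", "u2"), ("u3", "u2"), ("u1", "u3")]
messages = {"m1": ("u2", 2), "m2": ("u3", 0)}
print(message_interest(messages, author_authority(interactions)))
```

Because the score is query-independent, it can be precomputed and combined with a retrieval score at query time, which is how such authority signals are typically folded into ranking.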

    Trustworthiness in Social Big Data Incorporating Semantic Analysis, Machine Learning and Distributed Data Processing

    This thesis presents several state-of-the-art approaches constructed for the purpose of (i) studying the trustworthiness of users in Online Social Network platforms, (ii) deriving concealed knowledge from their textual content, and (iii) classifying and predicting the domain knowledge of users and their content. The developed approaches are refined through proof-of-concept experiments, several benchmark comparisons, and rigorous evaluation metrics to verify and validate their effectiveness and efficiency, and hence those of the frameworks in which they are applied.

    A governance framework for algorithmic accountability and transparency

    Algorithmic systems are increasingly being used as part of decision-making processes in both the public and private sectors, with potentially significant consequences for individuals, organisations and societies as a whole. Algorithmic systems in this context refer to the combination of algorithms, data and the interface process that together determine the outcomes that affect end users. Many types of decisions can be made faster and more efficiently using algorithms. A significant factor in the adoption of algorithmic systems for decision-making is their capacity to process large amounts of varied data sets (i.e. big data), which can be paired with machine learning methods in order to infer statistical models directly from the data. The same properties of scale, complexity and autonomous model inference, however, are linked to increasing concerns that many of these systems are opaque to the people affected by their use and lack clear explanations for the decisions they make. This lack of transparency risks undermining meaningful scrutiny and accountability, which is a significant concern when these systems are applied as part of decision-making processes that can have a considerable impact on people's human rights (e.g. critical safety decisions in autonomous vehicles; allocation of health and social service resources, etc.). This study develops policy options for the governance of algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems. Based on a review and analysis of existing proposals for governance of algorithmic systems, a set of four policy options is proposed, each of which addresses a different aspect of algorithmic transparency and accountability: 1. awareness raising: education, watchdogs and whistleblowers; 2. accountability in public-sector use of algorithmic decision-making; 3. regulatory oversight and legal liability; and 4. global coordination for algorithmic governance.

    Localizing the media, locating ourselves: a critical comparative analysis of socio-spatial sorting in locative media platforms (Google AND Flickr 2009-2011)

    In this thesis I explore media geocoding (i.e., geotagging or georeferencing), the process of inscribing media with geographic information, which enables distinct forms of producing, storing, and distributing information based on location. Historically, geographic information technologies have served a biopolitical function, producing knowledge of populations. In their current guise as locative media platforms, these systems build rich databases of places facilitated by user-generated geocoded media. These geoindexes render places, and users of these services, this thesis argues, subject to novel forms of computational modelling and economic capture. Thus, the possibility of tying information, people and objects to location sets the conditions for the emergence of new communicative practices as well as new forms of governmentality (management of populations). This project is an attempt to develop an understanding of the socio-economic forces and media regimes structuring contemporary forms of location-aware communication, by carrying out a comparative analysis of two of the main current location-enabled platforms: Google and Flickr. Drawing from the medium-specific approach to media analysis characteristic of the subfield of Software Studies, together with the methodological apparatus of Cultural Analytics (data mining and visualization methods), the thesis focuses on examining how social space is coded and computed in these systems. In particular, it looks at the databases' underlying ontologies supporting the platforms' geocoding capabilities and their respective algorithmic logics. In the final analysis, the thesis argues that the way social space is translated in the form of POIs (Points of Interest) and business-biased categorizations, as well as the geodemographical ordering underpinning the way it is computed, are pivotal to understanding what kind of socio-spatial relations are actualized in these systems, and what modalities of governing urban mobility are enabled.