
    Interest-based filtering of social data in Decentralized Online Social Networks

    In Online Social Networks (OSNs), users are overwhelmed with a huge amount of social data, most of which is irrelevant to their interests. Because most current OSNs are centralized, people are forced to share their data with the site in order to share it with their friends, and thus they lose control over it. Decentralized Online Social Networks have been proposed as an alternative to traditional centralized ones (such as Facebook, Twitter, and Google+) to address privacy problems and allow users to retain control over their data. This thesis presents a novel peer-to-peer architecture for a decentralized OSN and a mechanism that allows each node to filter out irrelevant social data while ensuring a level of serendipity (serendipitous social data are unexpected, since they fall outside the user's areas of interest, but desirable, since they are important or popular). The approach uses feedback from recipient users to build a model of the areas of interest along each relationship between sender and receiver, and this model acts as a filter when social data are propagated along that relationship. An evaluation using an Erlang simulation shows that the approach works according to its design specification: as more social data pass through the network, the nodes learn to filter out irrelevant data, while serendipitous important data still pass through the network.
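    As an illustration of the mechanism described above, the sketch below shows one way a per-relationship filter with a serendipity bypass could look. It is an assumed interpretation, not the thesis implementation: the class name InterestFilter, the parameters learning_rate, relevance_threshold and serendipity_threshold, and the feedback rule are all hypothetical, and the sketch is written in Python rather than the Erlang used for the thesis simulation.

    from collections import defaultdict

    class InterestFilter:
        """Per-relationship filter: learns which topics a recipient cares about
        from explicit feedback, but lets highly popular items pass regardless."""

        def __init__(self, learning_rate=0.1, relevance_threshold=0.3,
                     serendipity_threshold=0.9):
            self.weights = defaultdict(lambda: 0.5)   # topic -> learned interest
            self.lr = learning_rate
            self.relevance_threshold = relevance_threshold
            self.serendipity_threshold = serendipity_threshold

        def should_forward(self, topic, popularity):
            """Forward an item if the recipient seems interested in its topic,
            or if it is popular/important enough to pass as serendipitous."""
            return (self.weights[topic] >= self.relevance_threshold
                    or popularity >= self.serendipity_threshold)

        def feedback(self, topic, liked):
            """Recipient feedback nudges the learned interest for that topic."""
            target = 1.0 if liked else 0.0
            self.weights[topic] += self.lr * (target - self.weights[topic])

    # Example: after repeated negative feedback on "sports", such items stop
    # being forwarded unless they are very popular.
    f = InterestFilter()
    for _ in range(10):
        f.feedback("sports", liked=False)
    print(f.should_forward("sports", popularity=0.2))   # False: filtered out
    print(f.should_forward("sports", popularity=0.95))  # True: serendipitous pass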

    Providing awareness, explanation and control of personalized stream filtering in a P2P social network

    In Online Social Networks (OSNs), users are often overwhelmed with a huge amount of social data, most of which is irrelevant to their interests. Filtering the social data stream is the common way to deal with this problem, and it has already been applied by OSNs such as Facebook and Google+. Unfortunately, personalized filtering leads to “the filter bubble” problem, where the user is trapped within the limited boundaries of her interests and is never exposed to surprising, desirable information. Moreover, these OSNs are black boxes that give the user no transparency about how the filtering mechanism decides what is shown in the activity stream. As a result, the user's trust in the system can decline. This thesis presents an interactive method to visualize personalized stream filtering in OSNs. The proposed visualization helps to create awareness, explanation, and control of personalized stream filtering, in order to alleviate “the filter bubble” problem and increase users' trust in the system. The visualization is implemented in MADMICA, a new privacy-aware decentralized OSN based on the Friendica P2P protocol, which filters the users' social update stream based on their interests. The results of three user evaluations are presented in this thesis: a small-scale pilot study, a qualitative study, and a large-scale quantitative study with 326 participants. The small-scale study shows that the filter bubble visualization makes users aware of the filtering mechanism, engages them in actions to correct and change it, and as a result increases their trust in the system. The qualitative study reveals a generally high proportion of desirable user perceptions of the awareness, explanation and control of the filter bubble provided by the visualization. Moreover, the quantitative study demonstrates that the visualization leads to increased awareness of the filter bubble, better understandability of the filtering mechanism, and a feeling of control over the data stream users are seeing.

    Leveraging Enterprise 2.0 for Knowledge Sharing

    Enterprise 2.0 draws on the full benefits of Web 2.0 services and has great potential for delivering business value. Many organizations have invested in this platform, yet many remain hesitant to adopt it. This research paper explores the use of Enterprise 2.0 and how it can be incorporated into the changing business environment. The paper delineates the principles of knowledge management (KM) and draws inferences about where the appropriate use of Enterprise 2.0 will improve knowledge sharing. The underlying principles of KM strategy and the desired transformation are discussed, illustrating challenges and gaps in knowledge sharing. The subsequent discussion explores the identified gaps and proposes appropriate uses of Enterprise 2.0 based on social capital theory. The study contributes to a deeper understanding of the relationship between Enterprise 2.0 and knowledge sharing and identifies potential areas of improvement through the appropriate use of Enterprise 2.0.

    Providing awareness, explanation and control of personalized filtering in a social networking site

    Social networking sites (SNSs) apply personalized filtering to deal with the overwhelming amount of irrelevant social data. However, due to its focus on accuracy, personalized filtering often leads to “the filter bubble” problem, where users receive only information that matches their pre-stated preferences and are never exposed to new topics. Moreover, these SNSs are black boxes that give the user no transparency about how the filtering mechanism decides what is shown in the activity stream. As a result, the user's experience and trust in the system can decline. This paper presents an interactive method to visualize personalized filtering in SNSs. The proposed visualization helps to create awareness, explanation, and control of personalized filtering, in order to alleviate the “filter bubble” problem and increase users' trust in the system. Three user evaluations are presented. The results show that users gain a good understanding of the filter bubble visualization, and that the visualization increases users' awareness of the filter bubble, improves the understandability of the filtering mechanism, and gives them a feeling of control over the data stream they are seeing. The intuitiveness of the design is good overall, although context-sensitive help is also preferred. Moreover, the visualization provides users with a better usage experience and increases their trust in the system.
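    A minimal sketch of the awareness-and-control loop such a visualization supports is given below, assuming a hypothetical topic-weight profile and relevance threshold; the function names explain and adjust, and the data layout, are illustrative and not taken from the paper.

    def explain(item, profile, threshold=0.4):
        """Awareness/explanation: give a human-readable reason for showing or hiding an item."""
        score = profile.get(item["topic"], 0.0)
        verdict = "shown" if score >= threshold else "hidden"
        return f'{item["title"]!r} is {verdict}: interest in {item["topic"]} = {score:.2f} (threshold {threshold})'

    def adjust(profile, topic, new_weight):
        """Control: let the user directly override the learned interest for a topic."""
        profile[topic] = max(0.0, min(1.0, new_weight))

    profile = {"politics": 0.8, "gardening": 0.1}
    item = {"title": "Ten easy herbs to grow", "topic": "gardening"}
    print(explain(item, profile))          # explains why the item is hidden
    adjust(profile, "gardening", 0.7)      # user opens the bubble for this topic
    print(explain(item, profile))          # the same item is now shown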

    A Distributed, Architecture-Centric Approach to Computing Accurate Recommendations from Very Large and Sparse Datasets

    The use of recommender systems is an emerging trend today, when user behavior information is abundant. Many large datasets are available for analysis because many businesses are interested in future user opinions. Sophisticated algorithms that predict such opinions can simplify decision-making, improve customer satisfaction, and increase sales. However, modern datasets contain millions of records, which represent only a small fraction of all possible data, and much of the information in such sparse datasets may be irrelevant for making individual recommendations. As a result, there is a demand for a way to make personalized suggestions from large amounts of noisy data. Current recommender systems are usually all-in-one applications that provide one type of recommendation, and their inflexible architectures prevent detailed examination of recommendation accuracy and its causes. We introduce a novel architecture model that supports scalable, distributed suggestions from multiple independent nodes. Our model consists of two components: an input matrix generation algorithm and multiple platform-independent combination algorithms. A dedicated input generation component provides the necessary data to the combination algorithms, reduces their size, and eliminates redundant data processing. Likewise, because simple combination algorithms can produce recommendations from the same input, we can more easily distinguish between the benefits of a particular combination algorithm and the quality of the data it receives. Such a flexible architecture is more conducive to a comprehensive examination of our system. We believe that a user's future opinion may be inferred from a small amount of data, provided that this data is the most relevant. We propose a novel algorithm that generates a better recommender input. Unlike existing approaches, our method sorts the relevant data twice. Doing this is slower, but the quality of the resulting input is considerably better. Furthermore, the modular nature of our approach may improve its performance, especially in the cloud-computing context. We implement and validate our proposed model via mathematical modeling, by appealing to statistical theories, and through extensive experiments, data analysis, and empirical studies. Our empirical study examines the effectiveness of accuracy-improvement techniques for collaborative filtering recommender systems. We evaluate our proposed architecture model on the Netflix dataset, a popular (over 130,000 solutions), large (over 100,000,000 records), and extremely sparse (1.1%) collection of movie ratings. The results show that combination algorithm tuning has little effect on recommendation accuracy; however, all algorithms produce better results when supplied with a more relevant input, and our input generation algorithm accounts for a considerable accuracy improvement.
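    The two-component architecture can be pictured with the following sketch, which is an assumed reading rather than the dissertation's actual algorithms: the similarity measure, the interpretation of "sorting the relevant data twice" (here, first by the number of co-rated items and then by similarity), and the weighted-average combination step are placeholders for illustration.

    def generate_input(ratings, target_user, target_item, k=20):
        """Input-generation component (illustrative): pick the k ratings of the
        target item whose raters look most similar to the target user."""
        candidates = []
        for user, user_ratings in ratings.items():
            if user == target_user or target_item not in user_ratings:
                continue
            common = set(user_ratings) & set(ratings[target_user])
            if not common:
                continue
            diffs = [abs(ratings[target_user][i] - user_ratings[i]) for i in common]
            sim = 1.0 / (1.0 + sum(diffs) / len(diffs))      # simple similarity proxy
            candidates.append((len(common), sim, user_ratings[target_item]))
        candidates.sort(key=lambda c: c[0], reverse=True)     # first sort: co-rated count
        candidates = candidates[: 5 * k]
        candidates.sort(key=lambda c: c[1], reverse=True)     # second sort: similarity
        return candidates[:k]

    def combine(selected):
        """Combination component: similarity-weighted average of the selected ratings."""
        if not selected:
            return None
        total_weight = sum(sim for _, sim, _ in selected)
        return sum(sim * rating for _, sim, rating in selected) / total_weight

    ratings = {
        "u1": {"m1": 5, "m2": 3},
        "u2": {"m1": 4, "m2": 3, "m3": 5},
        "u3": {"m1": 1, "m2": 5, "m3": 2},
    }
    print(combine(generate_input(ratings, "u1", "m3")))  # predicted rating for u1 on m3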

    Re-examining and re-conceptualising enterprise search and discovery capability: towards a model for the factors and generative mechanisms for search task outcomes.

    Many organizations are trying to re-create the Google experience in order to find and exploit their own corporate information. However, there is evidence that finding information in the workplace using search engine technology has remained difficult, with socio-technical elements largely neglected in the literature. Explicating the factors and generative mechanisms (ultimate causes) behind effective search task outcomes (user satisfaction, search task performance and serendipitous encountering) may provide a first step towards making improvements. A transdisciplinary (holistic) lens, combining critical realism and activity theory with complexity theories, was applied to Enterprise Search and Discovery capability in one of the world's largest corporations. Data collection included an in-situ exploratory search experiment with 26 participants, focus groups with 53 participants and interviews with 87 business professionals, and thousands of user feedback comments and search transactions were analysed. Transferability of the findings was assessed through interviews with eight industry informants and ten organizations from a range of industries. A wide range of informational needs was identified for search filters, including a need to be intrigued. Search-term word co-occurrence algorithms facilitated serendipity to a greater extent than the existing methods deployed in the organization surveyed. No association was found between user satisfaction (or self-assessed search expertise) and search task performance, and overall performance was poor, although most participants were satisfied with their performance. Eighteen factors were identified that influence search task outcomes, ranging from user and task factors and informational and technological artefacts through to a wide range of organizational norms. Modality Theory (Cybersearch culture, Simplicity and Loss Aversion bias) was developed to explain the study observations. It proposes that, at all organizational levels, there are tendencies towards reductionist (unimodal) mind-sets about search capability, leading to fixes that fail. The factors and mechanisms were also identified in other industry organizations, suggesting some theory generalizability. This is the first socio-technical analysis of Enterprise Search and Discovery capability. The findings challenge existing orthodoxy, such as the criticality of search literacy (agency), which has been neglected in the practitioner literature in favour of structure. The resulting multifactorial causal model and strategic framework for improvement present opportunities to update existing academic models in the IR, LIS and IS literature, such as the DeLone and McLean model for information system success. There are encouraging signs that Modality Theory may enable a reconfiguration of organizational mind-sets that could transform search task outcomes and ultimately business performance.
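    The abstract notes that search-term word co-occurrence facilitated serendipitous encountering; the fragment below is a generic illustration of that idea, not the algorithm deployed in the organization studied. The function name cooccurrence_suggestions and the toy document collection are made up for the example.

    from collections import Counter

    def cooccurrence_suggestions(documents, query_term, top_n=5):
        """Count how often other terms appear in the same document as the query
        term and suggest the most frequent co-occurring terms as extra filters."""
        counts = Counter()
        for doc in documents:
            terms = set(doc.lower().split())
            if query_term in terms:
                counts.update(terms - {query_term})
        return [term for term, _ in counts.most_common(top_n)]

    docs = [
        "corrosion inspection report pipeline",
        "pipeline maintenance schedule corrosion",
        "safety audit offshore pipeline",
    ]
    print(cooccurrence_suggestions(docs, "pipeline"))  # e.g. ['corrosion', ...]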

    A survey of recommender systems for energy efficiency in buildings: Principles, challenges and prospects

    Recommender systems have developed significantly in recent years, in parallel with advances in both internet of things (IoT) and artificial intelligence (AI) technologies. As a consequence of IoT and AI, multiple forms of data are incorporated in these systems, e.g. social, implicit, local and personal information, which can improve recommender systems' performance and widen their applicability across different disciplines. At the same time, energy efficiency in the building sector is becoming a hot research topic, in which recommender systems play a major role by promoting energy-saving behavior and reducing carbon emissions. However, the deployment of recommendation frameworks in buildings still needs more investigation to identify the current challenges and issues, whose solutions are key to enabling the pervasiveness of research findings and, therefore, a large-scale adoption of this technology. Accordingly, this paper presents, to the best of the authors' knowledge, the first timely and comprehensive reference for energy-efficiency recommendation systems by (i) surveying existing recommender systems for energy saving in buildings; (ii) discussing their evolution; (iii) providing an original taxonomy of these systems based on specified criteria, including the nature of the recommender engine, its objective, computing platforms, evaluation metrics and incentive measures; and (iv) conducting an in-depth, critical analysis to identify their limitations and unsolved issues. The derived challenges and areas of future work could effectively guide the energy research community towards improving energy efficiency in buildings and reducing the cost of recommender-system-based solutions.
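    The taxonomy criteria listed in point (iii) can be read as a simple record structure. The sketch below is only an illustration of those criteria as fields; the class name EnergyRecommenderEntry and the example values are invented, not drawn from the survey.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class EnergyRecommenderEntry:
        """One row of a taxonomy using the criteria named in the abstract."""
        name: str
        engine: str                 # nature of the recommender engine
        objective: str              # e.g. promote energy-saving behavior
        platform: str               # computing platform
        metrics: List[str] = field(default_factory=list)      # evaluation metrics
        incentives: List[str] = field(default_factory=list)   # incentive measures

    example = EnergyRecommenderEntry(
        name="HypotheticalHomeSaver",
        engine="rule-based + collaborative filtering",
        objective="promote energy-saving behaviour",
        platform="edge gateway with cloud back end",
        metrics=["precision", "energy saved (kWh)"],
        incentives=["personalized tips", "gamified badges"],
    )
    print(example)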

    At the crossroads of big science, open science, and technology transfer

    Big science infrastructures are confronting increasing demands for public accountability, not only for their contribution to scientific discovery but also for their capacity to generate secondary economic value. To build and operate their sophisticated infrastructures, big science centres often generate frontier technologies by designing and building technical solutions to complex and unprecedented engineering problems. In parallel, the previous decade has seen the disruption of rapid technological changes affecting the way science is done and shared, which has led to the coining of the concept of Open Science (OS). Governments are quickly moving towards the OS paradigm and asking big science centres to "open up" the scientific process. Yet these two forces run in opposition, as the commercialization of scientific outputs usually requires significant financial investments, and companies are willing to bear this cost only if they can protect the innovation from imitation or unfair competition. This PhD dissertation aims to understand how new applications of ICT are affecting primary research outcomes and the resulting technology transfer in the context of big science and OS. It attempts to uncover the tensions between these two normative forces and to identify the mechanisms employed to overcome them. The dissertation comprises four separate studies: 1) a mixed-method study combining two large-scale global online surveys of research scientists (2016, 2018) with two case studies of scientific communities in high energy physics and molecular biology, assessing the explanatory factors behind scientific data-sharing practices; 2) a case study of Open Targets, an information infrastructure based upon data commons, where the European Molecular Biology Laboratory-EBI and pharmaceutical companies collaborate and share scientific data and technological tools to accelerate drug discovery; 3) a study of a unique dataset of 170 projects funded under ATTRACT, a novel policy instrument of the European Commission led by European big science infrastructures, which aims to understand the nature of the serendipitous process behind transitioning big science technologies to previously unanticipated commercial applications; and 4) a case study of White Rabbit technology, sophisticated open-source hardware developed at the European Council for Nuclear Research (CERN) in collaboration with an extensive ecosystem of companies.