
    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of Chorus and establishing the existing landscape of multimedia search engines, we identified and analysed gaps within the European research effort during our second year. In this period we focused on three directions, notably technological issues, user-centred issues and use-cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the requirements posed by technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as coordinators of national initiatives. Based on the feedback obtained, we identified two types of gaps: core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.

    Discussing the digital age and youth work

    https://www.ester.ee/record=b5288538*es

    Data Privacy, What Still Need Consideration in Online Application System?

    This paper aims to analyse and explore the matters that still need to be considered in relation to data privacy in online application systems. This research is still a preliminary study. We conducted the research on data privacy using a systematic literature review (SLR) approach. Following the SLR stages, we synthesised 44 publications from the Scopus online database released between 2015 and 2019. Based on this study, we found six points to consider in data privacy, namely security and data protection, user awareness, risk management, control setting, ethics, and transparency. A tallying sketch illustrating these six themes follows below.
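
    A minimal sketch of how the six privacy concerns named in this abstract could be tallied across a set of reviewed publications. The paper titles and their theme tags below are invented for illustration; only the six theme names come from the abstract, and this is not the authors' actual synthesis procedure.

        # Tally which of the six data-privacy themes each surveyed publication addresses.
        from collections import Counter

        themes = ["security and data protection", "user awareness", "risk management",
                  "control setting", "ethics", "transparency"]

        # Each reviewed paper mapped to the themes it discusses (illustrative data only).
        reviewed = {
            "Paper A": ["security and data protection", "transparency"],
            "Paper B": ["user awareness", "ethics"],
            "Paper C": ["risk management", "security and data protection"],
        }

        coverage = Counter(tag for tags in reviewed.values() for tag in tags)
        for theme in themes:
            print(f"{theme}: {coverage[theme]} of {len(reviewed)} papers")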

    Data doxa: The affective consequences of data practices

    This paper explores the embedding of data-producing technologies in people's everyday lives and practices. It traces how repeated encounters with digital data operate to naturalise these entities, while often concealing their agentive properties and the ways they become implicated in processes of exploitation and governance. I propose and develop the notion of ‘data doxa’ to conceptualise the way in which digital data – and the devices and platforms that stage data – have come to be perceived in Western societies as normal, necessary and enabling. The ‘data doxa’ concept also accentuates the enculturation of many individuals into a data-sharing habitus which frames digital technologies in simplistic terms as (a) panaceas for the problems associated with contemporary life, (b) figures of progress and convenience, and (c) mediums of knowledge, pleasure and identity. I suggest that three types of data-based relations contribute to the formation of this doxic sensibility: fetishisation, habit and enchantment. Each of these relations comes to mediate public understandings of digital devices and the data they generate, obscuring the multifaceted nature and hidden depths of data and their propensity to double up as technologies of exposure and discipline. As a result, imaginative educational programs and revamped regulatory frameworks are urgently needed to inform individuals about the contribution of data to the leveraging of value and power in today's digital economies, and also to protect them from experiencing data-based harms.

    State of the art 2015: a literature review of social media intelligence capabilities for counter-terrorism

    Overview: This paper is a review of how information and insight can be drawn from open social media sources. It focuses on the specific research techniques that have emerged, the capabilities they provide, the possible insights they offer, and the ethical and legal questions they raise. These techniques are considered relevant and valuable in so far as they can help to maintain public safety by preventing terrorism, preparing for it, protecting the public from it and pursuing its perpetrators. The report also considers how far this can be achieved against the backdrop of radically changing technology and public attitudes towards surveillance. This is an updated version of a 2013 report on the same subject, State of the Art. Since 2013, there have been significant changes in social media, in how it is used by terrorist groups, and in the methods being developed to make sense of it. The paper is structured as follows: Part 1 is an overview of social media use, focused on how it is used by groups of interest to those involved in counter-terrorism; it includes a new section on trends in social media platforms and a new section on Islamic State (IS). Part 2 provides an introduction to the key approaches of social media intelligence (henceforth ‘SOCMINT’) for counter-terrorism. Part 3 sets out a series of SOCMINT techniques; for each technique, the capabilities and insights it offers are considered, its validity and reliability are assessed, and its possible application to counter-terrorism work is explored. Part 4 outlines a number of important legal, ethical and practical considerations that arise when undertaking SOCMINT work.

    Addressing the new generation of spam (Spam 2.0) through Web usage models

    New Internet collaborative media introduce new ways of communicating that are not immune to abuse. A fake eye-catching profile on a social networking website, a promotional review, a response to a thread in an online forum containing unsolicited content, or a manipulated wiki page are examples of the new generation of spam on the web, referred to as Web 2.0 Spam or Spam 2.0. Spam 2.0 is defined as the propagation of unsolicited, anonymous, mass content to infiltrate legitimate Web 2.0 applications. The current literature does not address Spam 2.0 in depth, and the outcome of efforts to date is inadequate. The aim of this research is to formalise a definition of Spam 2.0 and provide Spam 2.0 filtering solutions. Early detection, extendibility, robustness and adaptability are key factors in the design of the proposed method. The dissertation provides a comprehensive survey of state-of-the-art web spam and Spam 2.0 filtering methods to highlight unresolved issues and open problems, while at the same time effectively capturing the knowledge in the domain of spam filtering. It proposes three contributions in the area of Spam 2.0 filtering: (1) characterising and profiling Spam 2.0, (2) an Early-Detection-based Spam 2.0 Filtering (EDSF) approach, and (3) an On-the-Fly Spam 2.0 Filtering (OFSF) approach; a web-usage sketch of this general idea follows below. All the proposed solutions are tested against real-world datasets and their performance is compared with that of existing Spam 2.0 filtering methods. This work has coined the term ‘Spam 2.0’, provided insight into the nature of Spam 2.0, and proposed filtering mechanisms to address this new and rapidly evolving problem.
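
    A minimal sketch of the general idea of filtering Spam 2.0 from web usage behaviour, as the abstract describes. The session features, thresholds and rule below are invented for illustration; they are not the dissertation's actual EDSF or OFSF models.

        # Flag sessions whose usage pattern resembles automated content posting.
        from dataclasses import dataclass

        @dataclass
        class Session:
            requests: int            # number of page requests in the session
            duration_s: float        # total session length in seconds
            form_submits: int        # number of form submissions (posts, comments)
            form_fill_s: float       # time spent on a form before submitting

        def usage_features(s: Session) -> dict:
            """Derive simple per-session usage features from raw counters."""
            return {
                "req_rate": s.requests / max(s.duration_s, 1.0),   # requests per second
                "submit_ratio": s.form_submits / max(s.requests, 1),
                "form_speed": s.form_fill_s,                       # very low => likely automated
            }

        def looks_like_spam(s: Session) -> bool:
            """Illustrative rule: high request rate, many submissions, or instant form fills."""
            f = usage_features(s)
            return f["req_rate"] > 2.0 or f["submit_ratio"] > 0.5 or f["form_speed"] < 2.0

        # Example: a bot-like session that posts comments almost instantly.
        print(looks_like_spam(Session(requests=4, duration_s=3, form_submits=3, form_fill_s=0.5)))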

    Informational Privacy and Self-Disclosure Online: A Critical Mixed-Methods Approach to Social Media

    This thesis investigates the multifaceted processes that have contributed to normalising identifiable self-disclosure in online environments, and how perceptions of informational privacy and patterns of self-disclosure behaviour have evolved over the relatively brief history of online communication. Its investigative mixed-methods approach critically examines a wide and diverse variety of primary and secondary sources to bring together aspects of the social dynamics that have contributed to generalised identifiable self-disclosure. The research also draws on exploratory statistical and qualitative analysis of an extensive online survey completed by UCL students as a snapshot in time; this is combined with arguments developed from an analysis of existing published sources, and it looks ahead to possible future developments. The study examines the period when people online proved to be more trusting, and how Internet users responded to the growing societal need to share personal information online. It addresses how the ethics of privacy evolved over time to allow a persistent association of online self-disclosure with real-life identity that had not been seen before the emergence of social network sites. The resistance to identifiable self-disclosure that preceded the widespread use of social network sites was largely overcome by a combination of elements and circumstances. Some of these result from the demographics of young users, users' attitudes to deception, ideology and trust-building processes. Social and psychological factors, such as gaining social capital, peer pressure and the overall rewarding and seductive nature of social media, have led users to waive significant parts of their privacy in order to receive the perceived benefits. The sociohistorical context allows this research to relate evolving phenomena such as the privacy paradox, lateral surveillance and self-censorship to the revamped ethics of online privacy and self-disclosure.

    Anglicisms in The National Corpus of Polish : assets and limitations of corpus tools

    Despite promising research, the automatic extraction of anglicisms using the tools available in electronic language corpora is still not possible. Nevertheless, corpus search engines are an indispensable tool for the systematic verification of the use of anglicisms identified by traditional methods. The article discusses both the functionality and the shortcomings of the tools available in the National Corpus of Polish with respect to the study of anglicisms of various types and their predefined features. The shortcomings of the tools, related mainly to the semantics of borrowings, are illustrated with specific examples of anglicisms.
    While electronic corpora may not seem adequate sources for anglicism retrieval, since despite promising attempts they still lack readily available and efficient tools for identifying foreign loans, they are indispensable for the systematic verification of the use of preidentified loans. The article offers an assessment of an electronic corpus of Polish with reference to its usefulness for the study of English loans. Though we test a selected corpus and its tools, and use Polish anglicisms as exemplifications, the findings presented in the article pertain to other large corpora and to anglicisms in other languages. Corpus tools allow for a multidimensional analysis of loans, yet they fail to meet the requirements of more in-depth analyses of anglicisms related to their semantics and structure. The limitations of corpus tools are illustrated with authentic attempted-but-failed corpus searches, and a small verification sketch follows below.
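
    A minimal sketch of what the systematic verification of preidentified loans might look like in code: counting occurrences of a word list and printing a small concordance over raw text. The loanword list and the sample sentences are invented for illustration, and this stands in for manual queries against a corpus search interface such as the one in the National Corpus of Polish, not for its actual tools.

        # Count preidentified anglicisms in a tiny sample corpus and show where they occur.
        import re
        from collections import Counter

        anglicisms = ["weekend", "mail", "lunch"]          # preidentified loanwords
        corpus = [
            "W weekend wysłałem mail do szefa.",
            "Na lunch poszliśmy po spotkaniu.",
            "Ten weekend był wyjątkowo długi.",
        ]

        counts = Counter()
        for sentence in corpus:
            tokens = re.findall(r"\w+", sentence.lower())  # naive tokenisation
            for loan in anglicisms:
                hits = tokens.count(loan)
                if hits:
                    counts[loan] += hits
                    print(f"{loan!r}: {sentence}")         # minimal concordance line

        print(counts)                                      # frequency of each loan in the sample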