1,873 research outputs found

    Essential Speech and Language Technology for Dutch: Results by the STEVIN-programme

    Computational Linguistics; Germanic Languages; Artificial Intelligence (incl. Robotics); Computing Methodologies

    A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging


    Technoconsen(t)sus

    This Article proposes to ease doctrinal noise in consent through creating an objective “reasonable digital consumer” standard based on empirical testing of real consumers. In a manner similar to the way in which courts assess actual consumer confusion in trademark law, digital user agreements can be tested for legal usability. Specifically, a particular digital agreement would be deemed to withstand an unconscionability challenge only to the extent that a drafter can demonstrate a “reasonable digital consumer” is capable of meaningfully understanding its terms and presentation. Part I of this Article introduces the challenges computer code presents to consent in the intellectual property space using the example of security-invasive DRM. It briefly describes DRM as a common business strategy for preemptively enforcing intellectual property rights. It then explains the negative consequences of this strategy for the information security of businesses, governments, and consumers. One of these negative consequences is industry confusion regarding the ethical norms of acceptable technology business conduct. Part II examines legal code and consent, placing the norm confusion described in Part I in legal context. This section describes the strain that the emergence of security-invasive DRM has placed on copyright law, computer intrusion law, and contract law in the United States. This tension forces us to come to terms with the preexisting problems of contractual consent and form contracts in a digital context. Current doctrinal construction of digital consent has analyzed user agreements only on grounds related to procedural unconscionability. This approach is flawed as a matter of contract doctrine: procedural and substantive unconscionability must be analyzed simultaneously under either Williston’s or Corbin’s standard of unconscionability. Either of these two approaches would correctly assess as unconscionable many current user agreements. 
Finally, Part III discusses the organizational code emerging at the intersection of computer code and legal code in digital contracting. It posits one possible legal approach to reconstructing meaningful consent in digital contracts in order to solve the problems of unconscionability discussed in Part II: generating an empirical, objective “reasonable digital consumer” standard by looking to trademark law. Trademark case law offers well-established methods for determining whether a “reasonable” consumer is confused by a particular trademark or practice; these cases employ empirical testing by experts using real consumers. Importing this “legal usability testing” into digital contracting would benefit both users and content owners by creating predictability of legal outcome. Similarly, a reasonable digital consumer standard leverages the naturally occurring “hubs” of understanding that both courts and content owners seek to generate through form contracts. The proposed method strikes a successful balance between customization and standardization by using the real understandings of users. It also allows these understandings to evolve over time as users’ familiarity with technology, and technology itself, advances.

    Text–to–Video: Image Semantics and NLP

    When aiming at automatically translating an arbitrary text into a visual story, the main challenge consists in finding a semantically close visual representation whose displayed meaning remains the same as in the given text. Moreover, the appearance of an image itself largely influences how its meaning is conveyed to an observer. This thesis demonstrates that investigating both image semantics and the semantic relatedness between visual and textual sources enables us to tackle the challenging semantic gap and to find a semantically close translation from natural language to a corresponding visual representation. In recent years, social networking has attracted great interest, leading to an enormous and still growing amount of data available online. Photo sharing sites like Flickr allow users to associate textual information with their uploaded imagery. This thesis exploits this huge source of user-generated data, which provides initial links between images, words, and other meaningful data. In order to approach visual semantics, this work presents various methods to analyze the visual structure as well as the appearance of images in terms of meaningful similarities, aesthetic appeal, and emotional effect on an observer. In detail, our GPU-based approach efficiently finds visual similarities between images in large datasets across visual domains and identifies various meanings for ambiguous words by exploring similarity in online search results. Further, we investigate the highly subjective aesthetic appeal of images and make use of deep learning to directly learn aesthetic rankings from a broad diversity of user reactions in social online behavior. To gain even deeper insights into the influence of visual appearance on an observer, we explore how simple image processing can actually change emotional perception, and we derive a simple but effective image filter.
To identify meaningful connections between written text and visual representations, we employ methods from Natural Language Processing (NLP). Extensive textual processing allows us to create semantically relevant illustrations for simple text elements as well as complete storylines. More precisely, we present an approach that resolves dependencies in textual descriptions to arrange 3D models correctly. Further, we develop a method that finds semantically relevant illustrations for texts of different types based on a novel hierarchical querying algorithm. Finally, we present an optimization-based framework that is capable of generating not only semantically relevant but also visually coherent picture stories in different styles.
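As a rough illustration of the text-to-image matching the abstract describes, the simplest baseline is a bag-of-words similarity between the text and the user tags attached to candidate images. The data and function names below are hypothetical, and this sketch stands in for, not reproduces, the thesis's hierarchical querying algorithm:

```python
from collections import Counter
import math

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_illustration(text_tokens, tagged_images):
    """Pick the image whose user tags are closest to the text."""
    return max(tagged_images, key=lambda img: cosine_similarity(text_tokens, img["tags"]))

# Hypothetical Flickr-style tagged images.
images = [
    {"id": "img1", "tags": ["beach", "sunset", "sea"]},
    {"id": "img2", "tags": ["city", "night", "lights"]},
]
story = ["walking", "along", "the", "sea", "at", "sunset"]
print(best_illustration(story, images)["id"])  # img1
```

Real systems replace the word-overlap vectors with learned image and text embeddings, but the ranking-by-similarity skeleton stays the same.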

    Argumentation Mining in User-Generated Web Discourse

    The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people's argumentation. In this article, we go beyond the state of the art in several ways. (i) We deal with actual Web data and take up the challenges posed by the variety of registers, multiple domains, and unrestricted, noisy, user-generated Web discourse. (ii) We bridge the gap between normative argumentation theories and the argumentation phenomena encountered in actual data by adapting an argumentation model tested in an extensive annotation study. (iii) We create a new gold-standard corpus (90k tokens in 340 documents) and experiment with several machine learning methods to identify argument components. We offer the data, source code, and annotation guidelines to the community under free licenses. Our findings show that argumentation mining in user-generated Web discourse is a feasible but challenging task. (Cite as: Habernal, I. & Gurevych, I. (2017). Argumentation Mining in User-Generated Web Discourse. Computational Linguistics, 43(1), pp. 125–179.)
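A toy sketch of what "identifying argument components" means in practice: the simplest conceivable baseline labels sentences by hand-picked discourse markers. The marker lists here are hypothetical and far weaker than the trained models the article actually evaluates; this is only to make the task concrete:

```python
# Hypothetical discourse-marker lists; the paper itself trains ML models
# on an annotated corpus rather than using fixed keyword rules.
PREMISE_MARKERS = {"because", "since", "for example"}
CLAIM_MARKERS = {"therefore", "thus", "should", "must"}

def label_component(sentence):
    """Naive rule-based labeling of an argument component."""
    s = sentence.lower()
    if any(m in s for m in PREMISE_MARKERS):
        return "premise"
    if any(m in s for m in CLAIM_MARKERS):
        return "claim"
    return "none"

print(label_component("We should ban junk food ads."))             # claim
print(label_component("Because children are easily influenced."))  # premise
```

The gap between such rules and noisy Web discourse (sarcasm, implicit premises, mixed registers) is exactly what makes the task challenging.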

    Histogram via entropy reduction (HER): an information-theoretic alternative for geostatistics

    Interpolation of spatial data has been approached in many different forms, varying from deterministic to stochastic, parametric to nonparametric, and purely data-driven to geostatistical methods. In this study, we propose a nonparametric interpolator which combines information theory with probability aggregation methods in a geostatistical framework for the stochastic estimation of unsampled points. Histogram via entropy reduction (HER) predicts conditional distributions based on empirical probabilities, relaxing parameterizations and therefore avoiding the risk of adding information not present in the data. By construction, it provides a proper framework for uncertainty estimation, since it accounts for both spatial configuration and data values while allowing one to introduce or infer properties of the field through the aggregation method. We investigate the framework using synthetically generated data sets and demonstrate its efficacy in ascertaining the underlying field with varying sample densities and data properties. HER shows performance comparable to popular benchmark models, with the additional advantage of higher generality. The novel method brings a new perspective on spatial interpolation and uncertainty analysis to geostatistics and statistical learning, through the lens of information theory.
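The quantity being reduced in HER is the Shannon entropy of an empirical histogram; a minimal sketch of that building block (the bin width and data are illustrative, and this is not the full HER interpolator):

```python
import math
from collections import Counter

def entropy(samples, bin_width=1.0):
    """Shannon entropy (in bits) of the empirical histogram of `samples`."""
    bins = Counter(int(x // bin_width) for x in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# A peaked sample carries less uncertainty than a spread-out one,
# so conditioning that concentrates the histogram reduces entropy.
print(entropy([1, 1, 1, 1]))  # 0.0 bits
print(entropy([1, 2, 3, 4]))  # 2.0 bits
```

HER conditions such histograms on the values and positions of nearby observations, picking the configuration that reduces entropy most.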

    Talking at Cross Purposes? A Computational Analysis of the Debate on Informational Duties in the Digital Services and the Digital Markets Acts

    Since the opaqueness of algorithms used for rankings, recommender systems, personalized advertisements, and content moderation on online platforms opens the door to discriminatory and anti-competitive behavior, increasing transparency has become a key objective of EU lawmakers. In the latest Commission proposals, the Digital Markets Act and the Digital Services Act, transparency obligations for online intermediaries, platforms, and ‘gatekeepers’ figure prominently. This paper investigates whether key concepts of competition law and transparency on digital markets are used in the same way by different stakeholders. Leveraging the power of computational text analysis, we find significant differences in the use of terms like ‘gatekeepers’, ‘simple’, and ‘precise’ in the position papers that informed the drafting of the two latest Commission proposals. This finding is not only informative for the Commission and legal scholars; it might also affect the effectiveness of transparency duties, for which it is often simply assumed that phrases like ‘precise information’ are understood the same way by those implementing said obligations. Hence, it may explain why such duties so often fail to reach their goal. We conclude by sketching out how different computational text analysis tools, like topic modeling, sentiment analysis, and text similarity, could be combined to provide helpful insights for both rulemakers and legal scholarship.
    Di Porto, Fabiana; Grote, Tatjana; Volpi, Gabriele; Invernizzi, Riccardo
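The simplest form of the term-usage comparison described above is a relative-frequency count across corpora. The snippets below are hypothetical stand-ins for stakeholder position papers, not data from the study:

```python
import re
from collections import Counter

def term_rate(text, term):
    """Occurrences of `term` per 1,000 tokens of `text`."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return 1000 * Counter(tokens)[term] / len(tokens)

# Hypothetical snippets standing in for two stakeholder groups' papers.
platform_paper = "precise rules burden platforms and gatekeepers alike"
ngo_paper = "gatekeepers must give precise precise information to users"

# Different stakeholders lean on the same term at different rates.
print(term_rate(platform_paper, "precise") < term_rate(ngo_paper, "precise"))  # True
```

Topic modeling and text-similarity measures extend this idea from single terms to the contexts in which the terms are used.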

    Fine Art Pattern Extraction and Recognition

    This is a reprint of articles from the Special Issue published online in the open access journal Journal of Imaging (ISSN 2313-433X) (available at: https://www.mdpi.com/journal/jimaging/special_issues/faper2020).

    Automating user privacy policy recommendations in social media

    Most Social Media Platforms (SMPs) implement privacy policies that enable users to protect their sensitive information against privacy violations. However, observations indicate that users find these privacy policies cumbersome and difficult to configure. Consequently, various approaches have been proposed to assist users with privacy policy configuration. These approaches are, however, limited to protecting either only profile attributes or only user-generated content. This is problematic because both profile attributes and user-generated content can contain sensitive information, so protecting one without the other can still result in privacy violations. A further drawback of existing approaches is that most require considerable user input, which is time-consuming and inefficient in terms of privacy policy configuration. In order to address these problems, we propose an automated privacy policy recommender system. The system relies on the expertise of existing social media users, as well as the user's privacy policy history, to provide personalized privacy policy suggestions for both profile attributes and user-generated content. Results from our prototype implementation indicate that the proposed recommender system provides accurate privacy policy suggestions with minimal user input.
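A naive sketch of the nearest-neighbour voting idea behind such a recommender, with hypothetical policy names and data; the actual system combines peer expertise with the user's own policy history in a more elaborate way:

```python
from collections import Counter

def recommend_policy(target_history, peers):
    """Suggest the setting most common among the peers whose past
    privacy choices overlap most with the target user's history."""
    def overlap(peer):
        return len(set(peer["history"]) & set(target_history))
    nearest = sorted(peers, key=overlap, reverse=True)[:2]
    votes = Counter(p["photo_policy"] for p in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical peer users and their configured policies.
peers = [
    {"history": ["hide_email", "hide_phone"], "photo_policy": "friends_only"},
    {"history": ["hide_email"], "photo_policy": "friends_only"},
    {"history": [], "photo_policy": "public"},
]
print(recommend_policy(["hide_email", "hide_phone"], peers))  # friends_only
```

The appeal of this family of methods is exactly what the abstract claims: the user supplies no new input beyond choices they have already made.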