
    Analyzing fluctuation of topics and public sentiment through social media data

    Over the past decade, the number of Internet users worldwide has expanded rapidly. They form various online social networks through Internet platforms such as Twitter, Facebook, and Instagram. These platforms provide a fast way for their users to receive and disseminate information and to express personal opinions in virtual space. When dealing with massive and chaotic social media data, how to accurately determine what events or concepts users are discussing is an interesting and important problem. This dissertation work mainly consists of two parts. First, this research focuses on mining hidden topics and user interest trends by analyzing real-world social media activities. Topic modeling and sentiment analysis methods are proposed to classify social media posts into different sentiment classes and then discover the trend of sentiment for different topics over time. The presented case study focuses on the COVID-19 pandemic that started in 2019. A large amount of Twitter data is collected and used to discover vaccine-related topics during the pre- and post-vaccine emergency use periods. Using the proposed framework, 11 vaccine-related trend topics are discovered. Ultimately, the discovered topics can be used to improve the readability of confusing messages about vaccines on social media and to support policymakers in making informed decisions about public health. Second, conventional topic models cannot deal with the sparsity problem of short text. A novel topic model, named Topic Noise based Biterm Topic Model with FastText embeddings (TN-BTMF), is proposed to deal with this problem. Word co-occurrence patterns (i.e., biterms) are directly generated in BTM. A scoring method based on word co-occurrence and semantic similarity is proposed to detect noise biterms.
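
    The abstract only names the ingredients of the noise-detection step. As a rough illustration of that idea, the Python sketch below combines a normalized biterm co-occurrence count with FastText cosine similarity into a single score. The function names, the weighting scheme, the threshold, and the use of gensim are assumptions made for illustration, not the TN-BTMF implementation.

```python
# Minimal sketch of biterm noise scoring, assuming a FastText model trained
# with gensim on a tokenized short-text corpus. Weights and threshold are
# illustrative assumptions, not values from the TN-BTMF paper.
from itertools import combinations
from collections import Counter
from gensim.models import FastText

def extract_biterms(docs):
    """Generate unordered word pairs (biterms) from each short document."""
    biterms = Counter()
    for tokens in docs:
        for w1, w2 in combinations(sorted(set(tokens)), 2):
            biterms[(w1, w2)] += 1
    return biterms

def noise_score(biterm, biterms, ft_model, alpha=0.5):
    """Score a biterm by co-occurrence frequency and embedding similarity.
    Low scores suggest noise: rare pairs of semantically unrelated words."""
    w1, w2 = biterm
    cooc = biterms[biterm] / max(biterms.values())   # normalized count in (0, 1]
    sim = ft_model.wv.similarity(w1, w2)             # cosine similarity
    return alpha * cooc + (1 - alpha) * max(sim, 0.0)

docs = [["vaccine", "dose", "safe"], ["vaccine", "appointment"],
        ["dose", "safe", "effective"]]
ft = FastText(sentences=docs, vector_size=32, min_count=1, epochs=10)
biterms = extract_biterms(docs)
scores = {b: noise_score(b, biterms, ft) for b in biterms}
denoised = {b for b, s in scores.items() if s > 0.3}  # assumed cutoff
```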

    Software Takes Command

    This book is available as open access through the Bloomsbury Open Access programme and is available on www.bloomsburycollections.com. Software has replaced a diverse array of physical, mechanical, and electronic technologies used before the 21st century to create, store, distribute, and interact with cultural artifacts. It has become our interface to the world, to others, to our memory and our imagination - a universal language through which the world speaks, and a universal engine on which the world runs. What electricity and the combustion engine were to the early 20th century, software is to the early 21st century. Offering the first theoretical and historical account of software for media authoring and its effects on the practice and the very concept of 'media,' the author of The Language of New Media (2001) develops his own theory for this rapidly growing, always-changing field. What were the thinking and motivations of the people who in the 1960s and 1970s created the concepts and practical techniques that underlie contemporary media software such as Photoshop, Illustrator, Maya, Final Cut, and After Effects? How do their interfaces and tools shape the visual aesthetics of contemporary media and design? What happens to the idea of a 'medium' after previously media-specific tools have been simulated and extended in software? Is it still meaningful to talk about different mediums at all? Lev Manovich answers these questions and supports his theoretical arguments with detailed analysis of key media applications such as Photoshop and After Effects, popular web services such as Google Earth, and projects in motion graphics, interactive environments, graphic design, and architecture. Software Takes Command is a must for all practicing designers and media artists, and for scholars concerned with contemporary media.

    The New Reflexivity: Puzzle Films, Found Footage, and Cinematic Narration in the Digital Age

    “The New Reflexivity” tracks two narrative styles of contemporary Hollywood production that have yet to be studied in tandem: the puzzle film and the found footage horror film. In early August 1999, near the end of what D.N. Rodowick refers to as “the summer of digital paranoia,” two films entered the wide-release U.S. theatrical marketplace and enjoyed surprisingly massive financial success, just as news of the “death of film” circulated widely. Though each might typically be classified as belonging to the horror genre, both the unreliable “puzzle film” The Sixth Sense and the fake-documentary “found footage film” The Blair Witch Project stood as harbingers of new narrative currents in global cinema. This dissertation looks closely at these two films, reading them as illustrative of two decidedly millennial narrative styles, styles that stepped out strikingly from the computer-generated shadows cast by big-budget Hollywood. The industrial shift to digital media that coincided with the rise of these films in the late 90s reframed the cinematic image as inherently manipulable, no longer a necessary index of physical reality. Directors become image-writers, constructing photorealistic imagery from scratch. Meanwhile, DVDs and online paratexts encourage cinephiles to digitize, to attain and interact with cinema in novel ways. “The New Reflexivity” reads The Sixth Sense and The Blair Witch Project as reflexive allegories of cinema’s and society’s encounters with new digital media. The most basic narrative tricks and conceits of puzzle films and found footage films produce an unusually intense and ludic engagement with narrative boundaries and limits, thus undermining the naturalized practices of classical Hollywood narration. Writers and directors of these films treat recorded events and narrative worlds as reviewable, remixable, and upgradeable, just as Hollywood digitizes and tries to keep up with new media. Though a great deal of critical attention has been paid to puzzle and found footage films separately, no lengthy critical survey has yet been undertaken that considers these movies in terms of their shared formal and thematic concerns. Rewriting the rules of popular cinematic narration, these films encourage viewers to be suspicious of what they see onscreen and to be aware of the possibility of unreliable narration, of CGI, and of the “Photoshopped.” Urgently relevant to film and cultural studies, “The New Reflexivity” suggests that these genres’ complicitous critique of new media is decidedly instructive for a networked society struggling with what it means to be digital.

    Text–to–Video: Image Semantics and NLP

    When aiming at automatically translating an arbitrary text into a visual story, the main challenge consists in finding a semantically close visual representation whereby the displayed meaning should remain the same as in the given text. Moreover, the appearance of an image itself largely influences how its meaningful information is conveyed to an observer. This thesis demonstrates that investigating both image semantics and the semantic relatedness between visual and textual sources enables us to tackle the challenging semantic gap and to find a semantically close translation from natural language to a corresponding visual representation. In recent years, social networking has attracted great interest, leading to an enormous and still increasing amount of online-available data. Photo sharing sites like Flickr allow users to associate textual information with their uploaded imagery. Thus, this thesis exploits this huge knowledge source of user-generated data, which provides initial links between images, words, and other meaningful data. In order to approach visual semantics, this work presents various methods to analyze the visual structure as well as the appearance of images in terms of meaningful similarities, aesthetic appeal, and emotional effect on an observer. In detail, our GPU-based approach efficiently finds visual similarities between images in large datasets across visual domains and identifies various meanings for ambiguous words by exploring similarity in online search results. Further, we investigate the highly subjective aesthetic appeal of images and use deep learning to learn aesthetic rankings directly from a broad diversity of user reactions in online social behavior. To gain even deeper insights into the influence of visual appearance on an observer, we explore how simple image processing can actually change emotional perception, and we derive a simple but effective image filter. To identify meaningful connections between written text and visual representations, we employ methods from Natural Language Processing (NLP). Extensive textual processing allows us to create semantically relevant illustrations for simple text elements as well as complete storylines. More precisely, we present an approach that resolves dependencies in textual descriptions to arrange 3D models correctly. Further, we develop a method that finds semantically relevant illustrations for texts of different types based on a novel hierarchical querying algorithm. Finally, we present an optimization-based framework that is capable of generating not only semantically relevant but also visually coherent picture stories in different styles.
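
    The thesis's hierarchical querying algorithm is not detailed in the abstract. The minimal Python sketch below only illustrates the general idea it builds on: ranking user-tagged images by embedding-based semantic relatedness to a query text. The spaCy model, the averaging scheme, and the toy data are assumptions, not the author's method.

```python
# Minimal sketch of ranking tagged images by semantic relatedness to a text,
# using spaCy word vectors (en_core_web_md must be downloaded beforehand).
# This is an illustrative stand-in for the thesis's querying algorithm.
import spacy

nlp = spacy.load("en_core_web_md")  # medium model ships with word vectors

def relatedness(text, tags):
    """Average vector similarity between a query text and an image's tags."""
    doc = nlp(text)
    tag_docs = [nlp(tag) for tag in tags]
    return sum(doc.similarity(t) for t in tag_docs) / len(tag_docs)

# Toy stand-in for Flickr-style user-tagged imagery.
images = {
    "img_001.jpg": ["beach", "sunset", "ocean"],
    "img_002.jpg": ["city", "night", "traffic"],
}
query = "a calm evening by the sea"
ranked = sorted(images, key=lambda k: relatedness(query, images[k]),
                reverse=True)
print(ranked)  # most semantically related image first
```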

    Advances in Artificial Intelligence: Models, Optimization, and Machine Learning

    The present book contains all the articles accepted and published in the Special Issue “Advances in Artificial Intelligence: Models, Optimization, and Machine Learning” of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms, neural modelling, architectures and learning algorithms, biologically inspired optimization algorithms, algorithms for autonomous driving, probabilistic models and Bayesian reasoning, and intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone wishing to pursue research in artificial intelligence, machine learning, and their widespread applications.

    Webfilm theory

    Since its inception in 1989, the World Wide Web has grown as a medium for publishing first text, then images, audio, and finally moving images, including short films. While most new media forms, in particular hypertext, have received scholarly attention, research into moving images on the Internet had been limited. The thesis therefore set out to investigate webfilms, a form of short film on the WWW and the Internet, over a period of 9 years (1997-2005). The thesis was theoretically embedded in questions regarding new media as a new field of research, since the increasing visibility of new media had resulted in the emergence of the discipline of 'new media studies'. This context raised issues regarding the configuration of new media studies within the existing academic disciplines of media and cultural studies, which were explored in depth in the literature review. The case studies of the thesis explored and analysed webfilms from the vantage point of actor-network theory, since this was arguably the most appropriate methodology for a research object considerably influenced by technological factors. The focus was on the conditions of webfilm production, distribution, and exhibition, and the evolution of webfilm discourse and culture. The aim was to seek answers to the question 'How did webfilm arise as a (new) form of film?' In the process of research, a number of issues were raised, including the changing definition and changing forms of webfilms, the convergence of media, and the complex interdependency of humans and their computers. The research re-evaluates the relationship between human and non-human factors in media production and presents a fresh approach by focusing on the network as the unit of analysis. The thesis as a whole not only provides new information on the evolution of webfilm as a form of film, but also illustrates how the network interaction of humans and nonhumans lies at the heart of contemporary new media and convergence culture.

    Data-centric Design and Training of Deep Neural Networks with Multiple Data Modalities for Vision-based Perception Systems

    Advances in computer vision and machine learning have revolutionized the ability to build systems that process and interpret digital data, allowing them to imitate human perception and paving the way for a wide range of applications. In recent years, both disciplines have made significant progress, driven by advances in deep learning techniques. Deep learning is a discipline that uses deep neural networks (DNNs) to teach machines to recognize patterns and make predictions based on data. Perception systems based on deep learning are increasingly common in a variety of fields in which humans and machines collaborate to combine their strengths. These fields include the automotive sector, industry, and medicine, where improving safety, supporting diagnosis, and automating repetitive tasks are among the goals pursued. However, data is one of the key factors behind the success of deep learning algorithms. This dependence on data strongly limits the creation and success of new DNNs. The availability of quality data for solving a specific problem is essential but hard to obtain, and even impracticable, in most developments. Data-centric artificial intelligence emphasizes the importance of using high-quality data that effectively conveys what a model must learn. Motivated by these challenges and by the need for data, this thesis formulates and validates five hypotheses about the acquisition of data and its impact on the design and training of DNNs. Specifically, we investigate and propose different methodologies for obtaining data suitable for training DNNs in problems with limited access to large-scale data sources. We explore two possible solutions for obtaining training data, both based on the generation of synthetic data. First, we investigate the generation of synthetic data using 3D graphics and the impact of different design choices on the accuracy of the resulting DNNs. In addition, we propose a methodology to automate the data generation process and produce varied annotated data by replicating a custom 3D environment from an input configuration file. Second, we propose a generative adversarial network (GAN) that generates annotated images using limited annotated datasets together with unannotated data captured in uncontrolled environments.
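
    The abstract describes replicating a 3D environment from an input configuration file to produce varied annotated data. The Python sketch below illustrates that general pattern only: scene parameters are sampled within configured ranges and paired with their own annotations. The config schema and the render_scene() stub are hypothetical placeholders, not the thesis's actual pipeline.

```python
# Minimal sketch of config-driven synthetic data generation: sample scene
# variants from a JSON config and emit (image, annotation) records. The
# schema and render_scene() stub are hypothetical; a real pipeline would
# call a 3D engine here.
import json
import random

def load_config(path):
    with open(path) as f:
        return json.load(f)

def sample_scene(cfg, rng):
    """Draw one randomized scene variant within the configured ranges."""
    return {
        "object": rng.choice(cfg["objects"]),
        "x": rng.uniform(*cfg["x_range"]),
        "y": rng.uniform(*cfg["y_range"]),
        "light": rng.uniform(*cfg["light_range"]),
    }

def render_scene(scene, idx):
    """Placeholder for a 3D engine call; would return a rendered image path."""
    return f"renders/{scene['object']}_{idx:05d}.png"

def generate_dataset(cfg_path, n, seed=0):
    """Produce n annotated samples; the sampled parameters double as labels."""
    cfg, rng = load_config(cfg_path), random.Random(seed)
    samples = []
    for i in range(n):
        scene = sample_scene(cfg, rng)
        samples.append({"image": render_scene(scene, i), "annotation": scene})
    return samples

# Example config file contents (assumed schema):
# {"objects": ["car", "pedestrian"], "x_range": [0, 10],
#  "y_range": [0, 5], "light_range": [0.2, 1.0]}
```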