4,756 research outputs found

    A New Reality: Deepfake Technology and the World Around Us

    Get PDF

    Discovering Image Usage Online: A Case Study With "Flatten the Curve''

    Full text link
    Understanding the spread of images across the web helps us understand the reuse of scientific visualizations and their relationship with the public. The "Flatten the Curve" graphic was heavily used during the COVID-19 pandemic to convey a complex concept in a simple form. It displays two curves comparing the impact on case loads for medical facilities if the populace either adopts or fails to adopt protective measures during a pandemic. We use five variants of the "Flatten the Curve" image as a case study for viewing the spread of an image online. To evaluate its spread, we leverage three information channels: reverse image search engines, social media, and web archives. Reverse image searches give us a current view into image reuse. Social media helps us understand a variant's popularity over time. Web archives help us see when it was preserved, highlighting a view of popularity for future researchers. Our case study demonstrates that document URLs can be used as a proxy for images when studying the spread of images online. Comment: 6 pages, 5 figures, Presented as poster at JCDL 202

    A literature analysis examining the potential suitability of terahertz imaging to detect friction ridge detail preserved in the imprimatura layer of oil-based, painted artwork

    Full text link
    This literature analysis examines terahertz (THz) imaging as a non-invasive tool for the imaging of friction ridge detail from the first painted layer (imprimatura) in multilayered painted works of art. The paintings of interest are those created utilizing techniques developed during the Renaissance and still in use today. The analysis aims to answer two questions. First, can THz radiation penetrate paint layers covering the imprimatura to reveal friction ridge information? Second, can this technology recover friction ridge detail such that the fine details are sufficiently resolved to provide images suitable for comparison and identification purposes? If a comparison standard exists, recovered friction ridge detail from this layer can be used to establish linkages to an artist or between works of art. Further, it can be added to other scientific methods currently employed to assist with the authentication efforts of unattributed paintings. Flanked by the microwave and far-infrared edges, THz straddles the electronic and optic perspectives of the electromagnetic spectrum. As a consequence, this range is imparted with unique and useful properties. Able to penetrate and image through many opaque materials, its non-ionizing radiation is an ideal non-destructive technique that provides visual information from a painting's sub-strata. Imaging is possible where refractive index differences exist between different paint layers. Though it is impossible, at present, to determine when a fingerprint was deposited, one can infer approximately when a print was created if it is recovered from the imprimatura layer of a painting and can be subsequently attributed to a known source. Fingerprints are unique, and a person can only deposit prints while their physical body is intact; thus, in some cases, the multiple-layer process some artists use in their work may be used to the examiner's advantage. 
Impressions of friction ridge detail have been recorded on receiving surfaces from human hands throughout time (and have also been discovered in works of art). Yet, the potential to associate those recorded impressions to a specific individual was only realized just over one hundred years ago. Much like the use of friction ridge skin, the relatively recently discovered THz range is now better understood; its tremendous potential unlocked by growing research and technology designed to exploit its unique properties

    Abstract Images Have Different Levels of Retrievability Per Reverse Image Search Engine

    Full text link
    Much computer vision research has focused on natural images, but technical documents typically consist of abstract images, such as charts, drawings, diagrams, and schematics. How well do general web search engines discover abstract images? Recent advancements in computer vision and machine learning have led to the rise of reverse image search engines. Where conventional search engines accept a text query and return a set of document results, including images, a reverse image search accepts an image as a query and returns a set of images as results. This paper evaluates how well common reverse image search engines discover abstract images. We conducted an experiment leveraging images from Wikimedia Commons, a website known to be well indexed by Baidu, Bing, Google, and Yandex. We measure how difficult an image is to find again (retrievability), what percentage of images returned are relevant (precision), and the average number of results a visitor must review before finding the submitted image (mean reciprocal rank). When trying to discover the same image again among similar images, Yandex performs best. When searching for pages containing a specific image, Google and Yandex outperform the others when discovering photographs, with precision scores ranging from 0.8191 to 0.8297, respectively. In both of these cases, Google and Yandex perform better with natural images than with abstract ones, achieving a difference in retrievability as high as 54% between images in these categories. These results affect anyone applying common web search engines to search for technical documents that use abstract images. Comment: 20 pages; 7 figures; to be published in the proceedings of the Drawings and abstract Imagery: Representation and Analysis (DIRA) Workshop from ECCV 202
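    The two headline metrics in this abstract, precision and mean reciprocal rank, have standard definitions that can be sketched in a few lines. This is an illustrative sketch, not the paper's actual evaluation code; the function names and the example result lists are invented for demonstration.

    ```python
    # Illustrative sketch of the retrieval metrics named in the abstract.
    # All result lists and query identifiers below are hypothetical.

    def precision(results, relevant):
        """Fraction of returned results that are relevant to the query."""
        if not results:
            return 0.0
        return sum(1 for r in results if r in relevant) / len(results)

    def reciprocal_rank(results, target):
        """1/rank of the first result matching the submitted image; 0 if absent."""
        for rank, r in enumerate(results, start=1):
            if r == target:
                return 1.0 / rank
        return 0.0

    def mean_reciprocal_rank(result_lists, targets):
        """Average reciprocal rank over a set of queries."""
        rrs = [reciprocal_rank(res, t) for res, t in zip(result_lists, targets)]
        return sum(rrs) / len(rrs)

    # Two hypothetical reverse-image-search queries:
    results_a = ["img_7", "img_3", "query_a"]   # submitted image found at rank 3
    results_b = ["query_b", "img_9"]            # submitted image found at rank 1
    mrr = mean_reciprocal_rank([results_a, results_b], ["query_a", "query_b"])
    print(round(mrr, 3))  # (1/3 + 1) / 2 ≈ 0.667
    ```

    A higher MRR means a visitor reviews fewer results before finding the submitted image, which is how the abstract frames the comparison between engines.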

    Cybersecurity: Past, Present and Future

    Full text link
    The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of the individual. With these improvements come new challenges, and one of the main challenges is security. The security of the new cyberspace is called cybersecurity. Cyberspace has created new technologies and environments such as cloud computing, smart devices, the IoT, and several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines the upcoming areas of research in cyber intelligence, such as hybrid augmented and explainable artificial intelligence (AI). Human and AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study and has a lot of potential to improve the role of AI in cybersecurity. Comment: Author's copy of the book published under ISBN: 978-620-4-74421-

    Digital media and terrorism: analysis of online visual images of Kenyan security forces in the Battle of El Adde

    Get PDF
    This study sought to analyse online visual images of the Kenya security forces in the Battle of El Adde that occurred on January 15, 2016 in Gedo, Somalia. The study specifically examined the framing of these visual images, the journalistic and ethical practices employed by online platforms when selecting and publishing images of terror involving Kenyan security forces, and the extent to which the framing of these online visual images of terror manifested elements of social responsibility as captured in The Code of Conduct for the Practice of Journalism as entrenched in the Second Schedule of the Media Act 2013. The study analysed a total of 48 visual images purposively selected from five major news websites and adopted the descriptive content analysis design to quantitatively describe manifest features. The findings indicated that 80.0 percent of images published on the news websites projected the Kenyan government and its security forces as losing the war against terrorism, as most images published showed more casualties suffered on the Kenyan side. The research used an interview guide to address other elements of the study that could not be analysed quantitatively. The findings from the interviews showed that digital news websites indeed framed visual images of the Kenya security forces in the Battle of El Adde, and adhered to journalistic and ethical practices in sourcing, selecting, and publishing images of terror from the battle. Some of the journalistic principles that came into play included Professional Accountability as captured in Article 3 (1), where the journalists were required to be independent and free from those seeking influence or control over news content. Further, when publishing images, the journalists pointed out that they would adhere to Article 15: Intrusion into grief and shock. In such incidences, the journalists were required to use the images with sensitivity and discretion