3,874 research outputs found

    Dynamic Deep Multi-modal Fusion for Image Privacy Prediction

    With millions of images shared on social networking sites, effective methods for image privacy prediction are urgently needed. In this paper, we propose an approach that fuses object, scene-context, and image-tag modalities derived from convolutional neural networks to accurately predict the privacy of images shared online. Specifically, our approach identifies the set of most competent modalities on the fly, according to each new target image whose privacy has to be predicted. The approach predicts the privacy of a target image in three stages: first, we identify the neighborhood of images that are visually similar to and/or have similar sensitive content as the target image; then, we estimate the competence of each modality based on the neighborhood images; finally, we fuse the decisions of the most competent modalities and predict the privacy label for the target image. Experimental results show that our approach predicts sensitive (or private) content more accurately than models trained on the individual modalities (object, scene, and tags) and than prior privacy prediction works. Our approach also outperforms strong baselines that train meta-classifiers to obtain an optimal combination of modalities. Comment: Accepted by The Web Conference (WWW) 201
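    The three-stage procedure the abstract describes (neighborhood retrieval, per-modality competence estimation, decision fusion) can be sketched roughly as follows. This is an illustrative sketch only: the feature representation, Euclidean neighborhood, accuracy-based competence score, competence margin, and majority-vote fusion are assumptions, not the paper's exact design.

```python
import numpy as np

def predict_privacy(target_feat, train_feats, train_labels, modality_preds, k=5):
    """Illustrative dynamic-fusion sketch (not the paper's exact method).

    target_feat:    feature vector of the target image
    train_feats:    (n, d) feature matrix of reference images
    train_labels:   (n,) privacy labels of reference images (0=public, 1=private)
    modality_preds: dict modality -> (n+1,) predictions; entries 0..n-1 are
                    predictions on the reference images, entry n (the last)
                    is that modality's prediction for the target image
    """
    # Stage 1: find the k reference images most similar to the target.
    dists = np.linalg.norm(train_feats - target_feat, axis=1)
    neighbors = np.argsort(dists)[:k]

    # Stage 2: estimate each modality's competence as its accuracy
    # on that neighborhood.
    competence = {m: np.mean(p[neighbors] == train_labels[neighbors])
                  for m, p in modality_preds.items()}

    # Stage 3: fuse decisions of the most competent modalities
    # (here: all modalities within 0.1 of the best, majority vote).
    best = max(competence.values())
    votes = [modality_preds[m][-1]
             for m in competence if competence[m] >= best - 0.1]
    return int(np.mean(votes) >= 0.5)
```

    The key property this sketch preserves is that the fused subset of modalities changes per target image, driven by local (neighborhood) rather than global accuracy.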

    Why We Fear Genetic Informants: Using Genetic Genealogy to Catch Serial Killers

    Consumer genetics has exploded, driven by the second-most popular hobby in the United States: genealogy. This hobby has been co-opted by law enforcement to solve cold cases by linking crime-scene DNA with the DNA of a suspect's relative, which is contained in a direct-to-consumer (DTC) genetic database. The relative's genetic data acts as a silent witness, or genetic informant, wordlessly guiding law enforcement to a handful of potential suspects. At least thirty murderers and rapists have been arrested in this way, a process which I describe in careful detail in this article. Legal scholars have sounded many alarms and have called for immediate bans on this methodology, which is referred to as long-range familial searching (LRFS) or forensic genetic genealogy (FGG). The opponents' concerns are many, but generally boil down to fears that FGG will invade the privacy and autonomy of presumptively innocent individuals. These concerns, I argue, are considerably overblown. Indeed, many aspects of the methodology implicate nothing new, legally or ethically, and might even better protect privacy while exonerating the innocent. Law enforcement's use of FGG to solve cold cases is a bogeyman. The real threat to genetic privacy comes from shoddy consumer consent procedures, poor data security standards, and user agreements that permit rampant secondary uses of data. So why do so many legal scholars fear a world where law enforcement uses this methodology? I submit that our fear of so-called genetic informants stems from the sticky and long-standing traps of genetic essentialism and genetic determinism, where we incorrectly attribute intentional action to our genes and fear a world where humans are controlled by our biology. Rather than banning the use of genetic genealogy to catch serial killers and rapists, I call for improved DTC consent processes and more transparent privacy and security measures. This will better protect genetic privacy in line with consumer expectations, while still permitting the use of LRFS to deliver justice to victims and punish those who commit society's most heinous acts.

    State of the art 2015: a literature review of social media intelligence capabilities for counter-terrorism

    Overview This paper is a review of how information and insight can be drawn from open social media sources. It focuses on the specific research techniques that have emerged, the capabilities they provide, the possible insights they offer, and the ethical and legal questions they raise. These techniques are considered relevant and valuable in so far as they can help to maintain public safety by preventing terrorism, preparing for it, protecting the public from it and pursuing its perpetrators. The report also considers how far this can be achieved against the backdrop of radically changing technology and public attitudes towards surveillance. This is an updated version of a 2013 report on the same subject, State of the Art. Since 2013, there have been significant changes in social media, how it is used by terrorist groups, and the methods being developed to make sense of it. The paper is structured as follows: Part 1 is an overview of social media use, focused on how it is used by groups of interest to those involved in counter-terrorism. This includes new sections on trends of social media platforms and a new section on Islamic State (IS). Part 2 provides an introduction to the key approaches of social media intelligence (henceforth 'SOCMINT') for counter-terrorism. Part 3 sets out a series of SOCMINT techniques. For each technique, the capabilities and insights it offers are described, its validity and reliability are assessed, and its possible application to counter-terrorism work is explored. Part 4 outlines a number of important legal, ethical and practical considerations when undertaking SOCMINT work.

    Measurement and Evaluation of Deep Learning Based 3D Reconstruction

    The performance of Deep Learning (DL) based methods for 3D reconstruction is becoming on par with, or better than, classical computer vision techniques. Learning requires data with proper annotations. While images have a standardized representation, there is currently no widely accepted format for efficiently representing 3D output shapes. The challenge lies in finding a format that can handle the high-resolution geometry of any shape while also being memory- and computationally efficient. As a result, most advanced learning-based 3D reconstruction methods are restricted to a certain domain. In this work, we compare the performance of different output representations for 3D reconstruction in different contexts, including objects and natural scenes, and full-human-body and body-part reconstruction. Despite substantial progress in the semantic understanding of the visual world, few methods can reconstruct a large set of objects from a single view. Our objective is to investigate methods that reconstruct a wider variety of object categories in 3D, and to achieve accurate 3D reconstruction at both the object and scene levels. We compare output representations that yield implicit, smooth representations of complex 3D geometry, taking as input RGB images, DICOM (Digital Imaging and Communications in Medicine) formatted MRI breast images, and images from a wild environment, using Deep Learning methods and available 3D processing applications (MeshLab, 3D Slicer, and Mayavi).
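    The memory-versus-resolution tradeoff behind the choice of output representation can be made concrete with a back-of-the-envelope calculation: a dense voxel grid grows cubically with resolution, while point-cloud and mesh footprints scale with surface complexity. The byte sizes below are illustrative assumptions (1 byte per occupancy voxel, 12 bytes per float32 triple), not figures from this work.

```python
def voxel_bytes(res):
    # Dense binary occupancy grid: one byte per voxel, res^3 voxels total.
    return res ** 3

def pointcloud_bytes(n_points):
    # One float32 (x, y, z) triple per point: 12 bytes each.
    return 12 * n_points

def mesh_bytes(n_verts, n_faces):
    # float32 vertices (12 B each) plus int32 triangle indices (12 B each).
    return 12 * n_verts + 12 * n_faces

# Doubling voxel resolution multiplies memory by 8, which is why
# high-resolution dense grids quickly become impractical:
print(voxel_bytes(64))            # 262144    (~256 KB)
print(voxel_bytes(512))           # 134217728 (~128 MB)
print(pointcloud_bytes(100_000))  # 1200000   (~1.2 MB for 100k points)
```

    Implicit representations (e.g. signed distance or occupancy functions) sidestep this tradeoff by encoding the surface in network weights and querying it at arbitrary resolution, which is one reason the abstract emphasizes smooth, implicit output representations.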

    INQUIRIES IN INTELLIGENT INFORMATION SYSTEMS: NEW TRAJECTORIES AND PARADIGMS

    Rapid digital transformation drives organizations to continually revitalize their business models so that they can excel in aggressive global competition. Intelligent Information Systems (IIS) have enabled organizations to achieve many strategic and market advantages. Despite the increasing intelligence competencies offered by IIS, they are still limited in many cognitive functions. Elevating the cognitive competencies offered by IIS would strengthen organizations' strategic positions. With the advent of Deep Learning (DL), IoT, and Edge Computing, IIS have witnessed a leap in their intelligence competencies. DL has been applied to many business areas and industries, such as real estate and manufacturing. Moreover, despite the complexity of DL models, much research has been dedicated to applying DL on computationally limited devices, such as IoT nodes. Applying deep learning to IoT will turn everyday devices into intelligent interactive assistants. IIS suffer from many challenges that affect their service quality, process quality, and information quality. These challenges affect, in turn, user acceptance in terms of satisfaction, use, and trust. Moreover, Information Systems (IS) research has devoted very little attention to IIS development and to the foreseeable contribution of new paradigms to addressing IIS challenges. Therefore, this research investigates how the employment of new AI paradigms would enhance the overall quality, and consequently the user acceptance, of IIS. This research employs different AI paradigms to develop two different IIS. The first system uses deep learning, edge computing, and IoT to develop scene-aware ridesharing monitoring; it improves the efficiency, privacy, and responsiveness of current ridesharing monitoring solutions. The second system aims to enhance the real-estate search process by formulating the search problem as a multi-criteria decision.
The system also allows users to filter properties by their degree of damage, where a deep learning network locates damage in each real-estate image. The system enhances real-estate website service quality by improving flexibility, relevancy, and efficiency. The research contributes to Information Systems research by developing two Design Science artifacts. Both artifacts add to the IS knowledge base by integrating different components, measurements, and techniques coherently and logically to address important issues in IIS. The research also adds to the IS environment by addressing important business requirements that current methodologies and paradigms have not fulfilled. Finally, the research highlights that most IIS overlook important design guidelines due to the lack of relevant evaluation metrics for different business problems.
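    The multi-criteria formulation of property search described above can be sketched with a simple weighted-sum model. The criteria names, weights, sign convention for "lower is better" criteria, and min-max normalization below are illustrative assumptions, not the dissertation's actual design.

```python
def rank_properties(properties, weights):
    """Weighted-sum multi-criteria ranking (illustrative sketch).

    properties: list of dicts with numeric values for each criterion
    weights:    dict criterion -> weight; a negative weight means
                "lower is better" (e.g. price, damage score)
    """
    # Min-max normalize each criterion to [0, 1] across all properties.
    norm = {}
    for c in weights:
        vals = [p[c] for p in properties]
        lo, hi = min(vals), max(vals)
        norm[c] = [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in vals]

    # Score each property; for "lower is better" criteria, reward
    # distance from the maximum instead of the value itself.
    scores = []
    for i, p in enumerate(properties):
        s = sum(w * norm[c][i] if w >= 0 else -w * (1 - norm[c][i])
                for c, w in weights.items())
        scores.append((s, p["id"]))
    return sorted(scores, reverse=True)
```

    A damage criterion produced by an image model would simply enter as another weighted column, e.g. `weights = {"price": -1.0, "area": 1.0, "damage": -2.0}`, letting users down-rank heavily damaged listings.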

    Leveraging analytics to produce compelling and profitable film content

    Producing compelling film content profitably is a top priority for the long-term prosperity of the film industry. Advances in digital technologies, the increasing availability of granular big data, the rapid diffusion of analytic techniques, and intensified competition from user-generated content and original content produced by Subscription Video on Demand (SVOD) platforms have created unparalleled needs and opportunities for film producers to leverage analytics in content production. Building on theories of value creation and film production, this article proposes a conceptual framework of key analytic techniques that film producers may employ throughout the production process, such as script analytics, talent analytics, and audience analytics. The article further synthesizes the state-of-the-art research on and applications of these analytics, discusses the prospects of leveraging analytics in film production, and suggests fruitful avenues for future research with important managerial implications.
