
    Optical and X-ray properties of the RIXOS AGN: II - Emission lines

    We present the optical and UV emission line properties of 160 X-ray selected AGN taken from the RIXOS survey (including Halpha, Hbeta, [OIII]5007, MgII2798 and CIII]1909). The sample is believed to contain a mixture of absorbed and unabsorbed objects, with column densities up to 4e21 cm^-2. Although the distribution of the [OIII] EW for the RIXOS AGN is typical of optically selected samples, the Balmer line EWs are relatively low. This is consistent with the presence of a dust absorber between the broad and narrow line regions (e.g. a molecular torus) and with intrinsically weak optical line emission. We find Baldwin effects in CIII] and MgII, and a positive response of the MgII line to its ionizing continuum. There is a strong correlation between the EW and FWHM of MgII, which may be similar to that seen in other samples for Hbeta. We demonstrate that this is consistent with models which suggest two line-emitting zones, a `very broad line region' (VBLR) and an `intermediate line region' (ILR). The correlation between EW and FWHM in MgII may be a physical characteristic of the ILR, or it may reflect a geometric dependence. We found no correlation between the Hbeta FWHM and the slope of the X-ray spectrum; however, this may be due to dust absorption, which suppresses the broad Hbeta component and masks any relationship. The Halpha FWHM does tend to be narrow when alpha_X is soft and to broaden as alpha_X hardens, although the formal probability for this correlation is low (91 per cent). If the distribution of alpha_X in the RIXOS sample reflects the level of intrinsic absorption in these AGN, the data suggest a possible link between the velocity of the Balmer line-emitting region and the amount of absorbing material beyond it.
    Comment: 29 pages, 14 figures, to be published in Monthly Notices of the Royal Astronomical Society. Also available from http://www.mssl.ucl.ac.uk/www_astro/preprints/preprints.htm
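    The Baldwin effect mentioned above is an anticorrelation between a line's equivalent width (EW) and the continuum luminosity, usually quantified as a negative slope in log-log space. A minimal sketch with synthetic data (the slope value and sample sizes are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic sample: EW proportional to L^beta with beta < 0 (a Baldwin
# effect), plus log-normal scatter. beta_true is an illustrative value.
log_L = rng.uniform(44.0, 47.0, 200)            # log continuum luminosity
beta_true = -0.2
log_EW = 10.0 + beta_true * log_L + rng.normal(0, 0.1, 200)

# Recover the slope with a least-squares fit in log-log space;
# a negative fitted slope indicates a Baldwin effect.
beta_fit, intercept = np.polyfit(log_L, log_EW, 1)
print(f"fitted Baldwin slope: {beta_fit:.3f}")
```

    With enough dynamic range in luminosity, the fitted slope recovers the input anticorrelation; real samples additionally require care with selection effects and measurement errors.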

    Methodological developments in violence research

    For decades, violence research has relied on interviews with victims and perpetrators, on participant observation, and on survey methods, and most studies have focused on either qualitative or quantitative analytic strategies. Since the turn of the millennium, researchers have been able to draw on a range of new approaches: there are increasing amounts of video data of violent incidents, triangulation and mixed methods approaches are becoming ever more sophisticated, and the computational social sciences are introducing big data analysis to more and more research fields. These three developments hold great potential for quantitative and qualitative violence research. This paper discusses video data analysis, mixed methods, and big data in the context of current and future violence research. Specific focus lies on (1) the potentials and challenges of new video data for studying violence; (2) the role of triangulation and mixed methods in enabling more comprehensive violence research from multiple theoretical perspectives; and (3) what potential uses of big data and computational social science in violence research may look like.

    Exploiting Prediction Error Inconsistencies through LSTM-based Classifiers to Detect Deepfake Videos

    The ability of artificial intelligence techniques to synthesize brand new videos or to alter the facial expressions in existing ones has been convincingly demonstrated in the literature. Identifying this new threat, generally known as Deepfake but encompassing different techniques, is fundamental in multimedia forensics: such manipulated content could undermine and easily distort public opinion about a certain person or a specific event. In this paper, a new technique able to distinguish synthetically generated portrait videos from natural ones is introduced by exploiting inconsistencies due to the prediction error in the re-encoding phase. In particular, features based on the inter-frame prediction error are investigated jointly with a Long Short-Term Memory (LSTM) network able to learn the temporal correlation among consecutive frames. Preliminary results demonstrate that this sequence-based approach, used to distinguish between original and manipulated videos, achieves promising performance.
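    The inter-frame prediction-error features can be sketched in a much simplified form. Here "prediction" is just the previous frame (a zero-motion predictor) rather than the re-encoding step the paper uses, and all names and the statistics chosen are illustrative; such a per-frame feature sequence is what a downstream LSTM would consume:

```python
import numpy as np

def prediction_error_features(frames):
    """Per-frame features from a zero-motion predictor: each frame is
    'predicted' by the previous one, and summary statistics of the
    residual form the feature vector fed to a sequence model."""
    feats = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        err = cur.astype(np.float64) - prev.astype(np.float64)
        feats.append([np.mean(np.abs(err)),   # mean absolute error
                      np.std(err),            # residual spread
                      np.max(np.abs(err))])   # peak error
    return np.array(feats)

rng = np.random.default_rng(1)
# Synthetic 'video': smooth slow drift, with one temporally
# inconsistent frame standing in for a manipulation artifact.
frames = [rng.normal(0.5, 0.01, (32, 32)) + 0.001 * t for t in range(10)]
frames[6] = rng.normal(0.5, 0.2, (32, 32))

F = prediction_error_features(frames)
# The residuals adjacent to the inconsistent frame dominate the sequence.
print("largest residual at pair index:", int(np.argmax(F[:, 0])))
```

    A temporally inconsistent frame inflates the residual statistics of the frame pairs it participates in, which is the kind of pattern a sequence model can learn to flag.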

    Fooling PRNU-Based Detectors Through Convolutional Neural Networks

    In the last few years, forensic researchers have developed a wide set of techniques to blindly attribute an image to the device used to shoot it. Among these, techniques based on photo response non-uniformity (PRNU) have shown remarkably accurate results and are often considered the reference baseline solution. The rationale behind them is that each camera sensor leaves a characteristic noise pattern on the images it acquires. This pattern can be estimated and uniquely mapped to a specific acquisition device through a cross-correlation test. In this paper, we study the possibility of leveraging recent findings in the deep learning field to attack PRNU-based detectors. Specifically, we focus on editing an image through convolutional neural networks in a visually imperceptible way that still hinders PRNU noise estimation. Results show that such an attack is possible, even though an informed forensic analyst can reduce its impact through a smart test.
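    The PRNU cross-correlation test that such attacks target can be sketched on synthetic data. Real pipelines extract the noise residual with a wavelet-based denoiser; here a crude box-blur difference stands in for it, and the fingerprint, noise levels, and function names are all illustrative assumptions:

```python
import numpy as np

def noise_residual(img):
    """Crude noise residual: image minus a 3x3 box-blurred copy
    (a stand-in for the wavelet denoisers used in real PRNU work)."""
    padded = np.pad(img, 1, mode="edge")
    smooth = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    return img - smooth

def estimate_prnu(images):
    """Average the residuals of many images from one camera so scene
    content cancels and the sensor pattern remains."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two residual patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
fingerprint = rng.normal(0, 0.03, (64, 64))   # synthetic sensor pattern

# Many shots of different scenes from the same camera.
shots = [rng.normal(0.5, 0.1, (64, 64)) + fingerprint
         + rng.normal(0, 0.05, (64, 64)) for _ in range(50)]
K = estimate_prnu(shots)

same_cam = noise_residual(rng.normal(0.5, 0.1, (64, 64)) + fingerprint
                          + rng.normal(0, 0.05, (64, 64)))
other_cam = noise_residual(rng.normal(0.5, 0.1, (64, 64))
                           + rng.normal(0, 0.05, (64, 64)))

print("match score:", ncc(K, same_cam), "non-match score:", ncc(K, other_cam))
```

    The image from the matching camera should score clearly above the non-matching one; an attack of the kind described above aims to suppress the fingerprint so the match score drops toward the non-match level.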

    Shadow Removal Detection and Localization for Forensics Analysis

    Recent advancements in image processing and computer vision allow realistic photo manipulations. To limit the distribution of fake imagery, the image forensics community is working towards the development of image authenticity verification tools. Methods based on shadow analysis are particularly reliable, since shadows are part of the physical integrity of the scene: a forgery can be detected whenever inconsistencies are found (e.g., shadows not coherent with the light direction). However, an attacker can easily delete inconsistent shadows and replace them with correctly cast ones in order to fool forensic detectors based on physical analysis. In this paper, we propose a method to detect shadow removal performed with state-of-the-art tools. The proposed method is based on a conditional generative adversarial network (cGAN) specifically trained for shadow removal detection.

    Satellite image forgery detection and localization using GAN and One-Class classifier

    Current satellite imaging technology enables shooting high-resolution pictures of the ground. Like any other kind of digital image, overhead pictures can easily be forged. However, common image forensic techniques are often developed for consumer camera images, which strongly differ in nature from satellite ones (e.g., in compression schemes, post-processing, and sensors). Therefore, many accurate state-of-the-art forensic algorithms are bound to fail if blindly applied to overhead image analysis. The development of novel forensic tools for satellite images is paramount to assessing their authenticity and integrity. In this paper, we propose an algorithm for satellite image forgery detection and localization. Specifically, we consider the scenario in which pixels within a region of a satellite image are replaced to add or remove an object from the scene. Our algorithm works under the assumption that no forged images are available for training. Using a generative adversarial network (GAN), we learn a feature representation of pristine satellite images. A one-class support vector machine (SVM) is then trained on these features to model their distribution, and image forgeries are detected as anomalies. The proposed algorithm is validated on different kinds of satellite images containing forgeries of different sizes and shapes.
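    The anomaly-detection step described above (fit a one-class SVM on pristine features only, flag outliers as forgeries) can be sketched as follows. Hypothetical Gaussian feature vectors stand in for the GAN-learned representation, and all dimensions and thresholds are illustrative:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)

# Stand-in for the GAN feature extractor: pristine patches yield
# features clustered around one mode; forged patches fall elsewhere.
pristine_train = rng.normal(0.0, 1.0, (500, 8))
pristine_test = rng.normal(0.0, 1.0, (100, 8))
forged_test = rng.normal(4.0, 1.0, (40, 8))

# Fit the one-class SVM on pristine features only -- no forged
# examples are needed at training time, matching the paper's setting.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(pristine_train)

# predict() returns +1 for inliers (pristine) and -1 for outliers (forged).
print("pristine flagged:", (clf.predict(pristine_test) == -1).mean())
print("forged flagged:  ", (clf.predict(forged_test) == -1).mean())
```

    The `nu` parameter bounds the fraction of training points treated as outliers, which sets the false-alarm rate on pristine data; features far from the learned distribution are flagged as anomalies.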

    Splicing Detection and Localization in Satellite Imagery Using Conditional GANs

    The widespread availability of image editing tools and improvements in image processing techniques make image manipulation very easy. Oftentimes, easy-to-use yet sophisticated tools yield distortions and changes imperceptible to the human observer. The distribution of forged images can have drastic ramifications, especially when coupled with the speed and reach of the Internet. Therefore, verifying image integrity poses an immense and important challenge to the digital forensics community. Satellite images in particular can be modified in a number of ways, including the insertion of objects that hide existing scenes and structures. In this paper, we describe the use of a Conditional Generative Adversarial Network (cGAN) to identify the presence of such spliced forgeries within satellite images, and additionally to identify their locations and shapes. Trained on pristine and falsified images, our method achieves high success on these detection and localization objectives.