
    Shrinking the Semantic Gap: Spatial Pooling of Local Moment Invariants for Copy-Move Forgery Detection

    Copy-move forgery is a manipulation in which specific patches of an image are copied and pasted within the same image, with potentially illegal or unethical uses. Recent advances in forensic methods for copy-move forgery have shown increasing success in detection accuracy and robustness. However, for images with high self-similarity or strong signal corruption, existing algorithms often exhibit inefficient processing and unreliable results. This is mainly due to the inherent semantic gap between low-level visual representation and high-level semantic concepts. In this paper, we present a first study that attempts to mitigate the semantic-gap problem in copy-move forgery detection, using spatial pooling of local moment invariants as a mid-level image representation. Our detection method extends traditional work in two respects: 1) we introduce the bag-of-visual-words model into this field for the first time, which may offer a new perspective for forensic study; 2) we propose a word-to-phrase feature description and matching pipeline that covers the spatial structure and visual saliency information of digital images. Extensive experimental results show the superior performance of our framework over state-of-the-art algorithms in overcoming the problems caused by the semantic gap. (13 pages, 11 figures)
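
As a rough illustration of the bag-of-visual-words idea described in this abstract (not the paper's actual pipeline), the sketch below quantises block-wise Hu moment invariants into visual words and flags spatially distant block pairs with near-identical descriptors. Block size, vocabulary size, and thresholds are arbitrary assumptions.

```python
# Minimal sketch, assuming a single-channel uint8 image `gray`.
# Not the paper's algorithm: block-wise Hu moment invariants quantised into a
# visual-word vocabulary, then crude duplicate matching within each word.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def block_moment_descriptors(gray, block=32, stride=16):
    """Log-scaled Hu moment invariants for overlapping blocks."""
    descs, coords = [], []
    for y in range(0, gray.shape[0] - block + 1, stride):
        for x in range(0, gray.shape[1] - block + 1, stride):
            hu = cv2.HuMoments(cv2.moments(gray[y:y + block, x:x + block])).ravel()
            descs.append(-np.sign(hu) * np.log10(np.abs(hu) + 1e-30))
            coords.append((x, y))
    return np.array(descs), np.array(coords)

def candidate_copy_move_pairs(gray, n_words=64, min_dist=48, sim_tol=0.05):
    descs, coords = block_moment_descriptors(gray)
    # Assign each block descriptor to a visual word (illustrative vocabulary size).
    words = MiniBatchKMeans(n_clusters=n_words, random_state=0).fit_predict(descs)
    pairs = []
    for w in range(n_words):                       # only compare blocks sharing a word
        idx = np.flatnonzero(words == w)
        for i in range(len(idx)):
            for j in range(i + 1, len(idx)):
                a, b = idx[i], idx[j]
                far = np.linalg.norm(coords[a] - coords[b]) > min_dist
                similar = np.linalg.norm(descs[a] - descs[b]) < sim_tol
                if far and similar:                # distant yet near-identical blocks
                    pairs.append((tuple(coords[a]), tuple(coords[b])))
    return pairs
```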

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.

    Document legalisation (a new approach to the document legalisation process using enterprise network technology)

    Documents issued in one country often have to be legalised (authenticated) before they can be used in another country. Different types of documents (legal papers) such as birth, death and marriage records, deeds of assignment, powers of attorney, commercial invoices, etc. need to be legalised by the destination country before they can be assumed legal. Legalising a document simply means confirming that official documents are sealed and signed, either with an Apostille Certificate for countries that are party to the Hague Convention of 1961, or with a Certificate of Authentication for countries that are not. Legalising (authenticating) documents is therefore a process of verifying signatures. The aim of this research is to critically examine the current processes of document legalisation, by analysing and establishing the opportunities that lie before the organisation to implement a new document legalisation process to replace the prolonged historical process currently used in some countries, specifically the United Arab Emirates (UAE). Using enterprise network technology, this research will also produce a solution addressing the risks involved, the implementation and the security, and it will analyse the impact of such an implementation on the organisation. Because the project explores a very sensitive area of the organisation and involves a major change to the organisation's business process, the authenticity of data must be given high priority. Therefore, an online survey may not always be a legitimate approach. A paper survey may well fit the purpose but, on the other hand, a detailed interview and/or telephone survey will be even more accurate. Hence I made use of a mixed-method (qualitative/quantitative) approach. The business of document legalisation goes back more than two thousand years and therefore needs to be explored historically, establishing how the document legalisation process has evolved alongside the established professions in government today, and defining the areas of concern such as security, availability, traceability and mobility. This paves the way for an investigation to evaluate a new process that can utilise available technology to address those concerns. The current process of document legalisation has been used for many years, and changing it may take some time. There are many possible pitfalls that the programme may encounter, one of which is the change to a process that has not yet been established anywhere else in the world, so there are no comparable cases to draw on. A clear and informative document explaining the project, i.e. a Specific, Measurable, Achievable, Realistic and Time-Limited (SMART) description of the project, will help resolve any conflict. Considering that research in this complex topic spans a history of more than two thousand years, a mixed-method approach should be used; indeed, given the history and composition of knowledge accumulated in this topic, the term "mixed methodology" better captures the underlying philosophical assumptions taken by the researcher. Hence clarification is needed to establish reasons and define a new approach to the document legalisation process.
In addition to the historical literature, the main groups taken into consideration to form the data are the decision makers, interviews with senior staff, and a survey of employees working in the field of document legalisation. To establish the reasons for every step in document legalisation, experiments should not be ignored; their purpose is to clarify areas of data mismatch. The scope of the project covers the risks involved in the current process of legalising documents, identifying its weaknesses, and the needs and requirements of the newly proposed process, with recommendations to establish a solution utilising state-of-the-art technology to provide a new secure, mobile and traceable process that is available 24/7.
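
As a purely illustrative aside (not part of the thesis), the sketch below shows what the signature-verification step at the core of legalisation could look like in an electronic process, using an Ed25519 key pair held by a hypothetical issuing authority. The document payload and key handling are assumptions.

```python
# Hypothetical example: verify that a document's digital signature matches the
# issuing authority's public key. Payload and key management are illustrative only.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

document = b"Birth certificate no. 12345, issued 2020-01-01"  # assumed payload

issuing_key = ed25519.Ed25519PrivateKey.generate()  # held by the issuing authority
signature = issuing_key.sign(document)              # attached when the document is issued

def is_legalised(doc: bytes, sig: bytes, public_key) -> bool:
    """Return True if the signature verifies against the issuer's public key."""
    try:
        public_key.verify(sig, doc)
        return True
    except InvalidSignature:
        return False

print(is_legalised(document, signature, issuing_key.public_key()))          # True
print(is_legalised(b"tampered text", signature, issuing_key.public_key()))  # False
```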

    Human and Artificial Intelligence

    Although tremendous advances have been made in recent years, many real-world problems still cannot be solved by machines alone. Hence, the integration of Human Intelligence and Artificial Intelligence is needed. However, several challenges make this integration complex. The aim of this Special Issue was to provide a large and varied collection of high-level contributions presenting novel approaches and solutions to address these challenges. This Special Issue contains 14 papers (13 research papers and 1 review paper) that deal with various topics related to human–machine interaction and cooperation. Most of these works concern different aspects of recommender systems, which are among the most widespread decision support systems. The domains covered range from healthcare to movies and from biometrics to cultural heritage. There are also contributions on vocal assistants and smart interactive technologies. In summary, each paper included in this Special Issue represents a step towards a future of human–machine interaction and cooperation. We hope the readers enjoy these articles and find inspiration for their research activities.

    Publishing and Culture


    Discovering a Domain Knowledge Representation for Image Grouping: Multimodal Data Modeling, Fusion, and Interactive Learning

    In visually-oriented specialized medical domains such as dermatology and radiology, physicians explore interesting image cases from medical image repositories for comparative case studies to aid clinical diagnoses, educate medical trainees, and support medical research. However, general image classification and retrieval approaches fail at grouping medical images from the physicians' viewpoint. This is because fully automated learning techniques cannot yet bridge the gap between image features and domain-specific content, due to the absence of expert knowledge. Understanding how experts get information from medical images is therefore an important research topic. As a prior study, we conducted data elicitation experiments in which physicians were instructed to inspect each medical image towards a diagnosis while describing the image content to a student seated nearby. Experts' eye movements and their verbal descriptions of the image content were recorded to capture various aspects of expert image understanding. This dissertation aims at an intuitive approach to extracting expert knowledge, which is to find patterns in expert data elicited from image-based diagnoses. These patterns are useful for understanding both the characteristics of the medical images and the experts' cognitive reasoning processes. The transformation from the viewed raw image features to their interpretation as domain-specific concepts requires experts' domain knowledge and cognitive reasoning. This dissertation also approximates this transformation using a matrix factorization-based framework, which helps project multiple expert-derived data modalities onto high-level abstractions. To combine additional expert interventions with computational processing capabilities, an interactive machine learning paradigm is developed that treats experts as an integral part of the learning process. Specifically, experts locally refine the medical image groups presented by the learned model, which is then incrementally re-learned globally. This paradigm avoids onerous expert annotations for model training, while aligning the learned model with experts' sense-making.
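
As a hedged illustration of the general idea, not the dissertation's actual framework, the sketch below factorises two synthetic expert-derived modalities (for example, gaze statistics and verbal-description term counts) into one shared non-negative latent space and groups images there. All feature names, dimensions, and data are assumptions.

```python
# Illustrative sketch only: shared non-negative factorisation of two modalities,
# followed by grouping in the latent space. Data here are random placeholders.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
n_images = 40
gaze_feats = rng.random((n_images, 12))    # e.g. fixation statistics per image (assumed)
verbal_feats = rng.random((n_images, 50))  # e.g. term counts from descriptions (assumed)

# Stack modalities column-wise so both contribute to one latent representation.
X = np.hstack([normalize(gaze_feats), normalize(verbal_feats)])

nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
latent = nmf.fit_transform(X)              # images x latent "concepts"

# Group images in the shared latent space; an expert could then refine these
# groups locally before the model is re-fit, mimicking the interactive loop.
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(latent)
print(groups)
```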

    Integrating passive ubiquitous surfaces into human-computer interaction

    Mobile technologies enable people to interact with computers ubiquitously. This dissertation investigates how ordinary, ubiquitous surfaces can be integrated into human-computer interaction to extend the interaction space beyond the edge of the display. It turns out that acoustic and tactile features generated during an interaction can be combined to identify input events, the user, and the surface. In addition, it is shown that a heterogeneous distribution of different surfaces is particularly suitable for realizing versatile interaction modalities. However, privacy concerns must be considered when selecting sensors, and context can be crucial in determining whether and what interaction to perform.
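
As a minimal illustration of the fusion idea, not the dissertation's pipeline, the sketch below concatenates assumed acoustic and inertial features of a surface tap and trains an off-the-shelf classifier to identify the surface. All shapes, labels, and data are placeholders.

```python
# Minimal sketch under assumed features: early fusion of acoustic and tactile
# descriptors, then surface identification with a standard classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_events = 300
acoustic = rng.random((n_events, 16))    # e.g. band energies of the tap sound (assumed)
tactile = rng.random((n_events, 6))      # e.g. accelerometer/gyroscope statistics (assumed)
surface = rng.integers(0, 3, n_events)   # e.g. 0=table, 1=wall, 2=notebook cover (assumed)

X = np.hstack([acoustic, tactile])       # simple early fusion by concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, surface, test_size=0.25, random_state=1)

clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print(f"held-out surface accuracy: {clf.score(X_te, y_te):.2f}")
```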

    Elasticity mapping for breast cancer diagnosis using tactile imaging and auxiliary sensor fusion

    Tactile Imaging (TI) is a technology utilising capacitive pressure sensors to image elasticity distributions within soft tissues such as the breast for cancer screening. TI aims to solve critical problems in the cancer screening pathway, particularly: low sensitivity of manual palpation, patient discomfort during X-ray mammography, and the poor quality of breast cancer referral forms between primary and secondary care facilities. TI is effective in identifying ‘non-palpable’, early-stage tumours, with basic differential ability that reduced unnecessary biopsies by 21% in repeated clinical studies. TI has its limitations, particularly: the measured hardness of a lesion is relative to the background hardness, and lesion location estimates are subjective and prone to operator error. TI can achieve more than simple visualisation of lesions and can act as an accurate differentiator and material analysis tool with further metric development and acknowledgement of error sensitivities when transferring from phantom to clinical trials. This thesis explores and develops two methods, specifically inertial measurement and IR vein imaging, for determining the breast background elasticity and registering tactile maps for lesion localisation, based on fusion of tactile and auxiliary sensors. These sensors enhance the capabilities of TI, with background tissue elasticity determined with MAE < 4% over tissues in the range 9 kPa – 90 kPa and probe trajectory across the breast measured with an error ratio < 0.3%, independent of applied load, validated on silicone phantoms. A basic TI error model is also proposed, maintaining tactile sensor stability and accuracy with 1% settling times < 1.5 s over a range of realistic operating conditions. These developments are designed to be easily implemented into commercial systems, through appropriate design, to maximise impact, providing a stable platform for accurate tissue measurements. This will allow clinical TI to further reduce benign referral rates in a cost-effective manner, by elasticity differentiation and lesion classification in future works.
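
As a small worked example of the kind of metric quoted in this abstract, the sketch below computes a percentage mean absolute error of estimated background elasticity against assumed silicone-phantom ground truth. The numbers are placeholders, not thesis data.

```python
# Hedged sketch: percentage MAE of elasticity estimates against phantom ground
# truth. All values below are made-up placeholders for illustration only.
import numpy as np

true_kpa = np.array([9.0, 25.0, 45.0, 70.0, 90.0])       # phantom elasticities (assumed)
estimated_kpa = np.array([9.3, 24.1, 46.8, 68.5, 92.1])  # hypothetical sensor estimates

# Percentage MAE: mean absolute error relative to each phantom's true elasticity.
pct_mae = np.mean(np.abs(estimated_kpa - true_kpa) / true_kpa) * 100
print(f"percentage MAE over phantoms: {pct_mae:.1f}%")   # thesis target: < 4%
```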