
    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    Get PDF
    This Special Issue is a book that collects peer-reviewed papers on advanced technologies related to the application and theory of signal processing for multimedia systems using machine learning or other advanced methods. The multimedia signals considered include image, video, and audio, along with character recognition and the optimization of communication channels for networks. The specific topics covered in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues interested in these topics will find it a worthwhile read.

    Privacy-preserving information hiding and its applications

    Get PDF
    The phenomenal advances in cloud computing technology have raised concerns about data privacy. Aided by modern cryptographic techniques such as homomorphic encryption, it has become possible to carry out computations in the encrypted domain and process data without compromising information privacy. In this thesis, we study various classes of privacy-preserving information hiding schemes and their real-world applications in cyber security, cloud computing, and the Internet of Things. A data breach is recognised as one of the most dreadful cyber security threats, in which private data is copied, transmitted, viewed, stolen, or used by unauthorised parties. Although encryption can obfuscate private information against unauthorised viewing, it may not stop data from illegitimate exportation. Privacy-preserving information hiding can serve as a potential solution to this issue: a permission code is embedded into the encrypted data and can be detected when transmissions occur. Digital watermarking is a technique that has been used for a wide range of intriguing applications such as data authentication and ownership identification. However, some of the algorithms are proprietary intellectual property, and thus their availability to the general public is rather limited. A possible solution is to outsource the task of watermarking to an authorised cloud service provider that has the legitimate right to execute the algorithms as well as high computational capacity. Privacy-preserving information hiding is well suited to this scenario, since it operates in the encrypted domain and hence prevents private data from being collected by the cloud. The Internet of Things is a promising technology for the healthcare industry. A common framework consists of wearable equipment for monitoring the health status of an individual, a local gateway device for aggregating the data, and a cloud server for storing and analysing the data. However, there are risks that an adversary may attempt to eavesdrop on the wireless communication, attack the gateway device, or even gain access to the cloud server. Hence, it is desirable to produce and encrypt the data simultaneously and to incorporate secret sharing schemes to realise access control. Privacy-preserving secret sharing is a novel line of research for fulfilling this function. In summary, this thesis presents novel schemes and algorithms, including:
    • two privacy-preserving reversible information hiding schemes based upon symmetric cryptography, using arithmetic of quadratic residues and lexicographic permutations, respectively;
    • two privacy-preserving reversible information hiding schemes based upon asymmetric cryptography, using multiplicative and additive privacy homomorphisms, respectively;
    • four predictive models for assisting the removal of distortions inflicted by information hiding, based respectively upon the projection theorem, image gradient, total variation denoising, and Bayesian inference;
    • three privacy-preserving secret sharing algorithms with different levels of generality.
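    The abstract above points to secret sharing as a way to realise access control in the IoT healthcare setting and lists three privacy-preserving secret sharing algorithms among the contributions. As background only, the following is a minimal sketch of classic Shamir (t, n) threshold secret sharing over a prime field; the prime, threshold, and share count are illustrative assumptions, and this is not the thesis's own construction.

```python
# Minimal Shamir (t, n) threshold secret sharing sketch (illustrative only).
# Assumptions: secret and shares are integers modulo a fixed prime.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for short secrets

def split_secret(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them reconstruct it."""
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for power, c in enumerate(coeffs):
            y = (y + c * pow(x, power, PRIME)) % PRIME
        shares.append((x, y))
    return shares

def recover_secret(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 using any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % PRIME      # factor (0 - xj)
                den = (den * (xi - xj)) % PRIME  # factor (xi - xj)
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    shares = split_secret(secret=123456789, t=3, n=5)
    print(recover_secret(shares[:3]))  # any 3 of the 5 shares recover 123456789
```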

    DCT-Based Image Feature Extraction and Its Application in Image Self-Recovery and Image Watermarking

    Get PDF
    Feature extraction is a critical element in the design of image self-recovery and watermarking algorithms, and its quality can have a strong influence on the performance of these processes. The objective of the work presented in this thesis is to develop an effective methodology for feature extraction in the discrete cosine transform (DCT) domain and to apply it in the design of adaptive image self-recovery and image watermarking algorithms. The methodology is to use the most significant DCT coefficients, which can lie at any frequency range, to detect and classify gray level patterns. In this way, gray level variations with a wider range of spatial frequencies can be examined without increasing computational complexity, and the methodology is able to distinguish gray level patterns rather than only the orientations of simple edges, as in many existing DCT-based methods. The proposed image self-recovery algorithm uses the developed feature extraction methodology to detect and classify blocks that contain significant gray level variations. According to the profile of each block, the critical frequency components representing the specific gray level pattern of the block are chosen for encoding. The code lengths are made variable depending on the importance of these components in defining the block's features, which makes the encoding of critical frequency components more precise while keeping the total length of the reference code short. The proposed image self-recovery algorithm has resulted in remarkably shorter reference codes that are only 1/5 to 3/5 the length of those produced by existing methods, and consequently a superior visual quality in the embedded images. As the shorter codes contain the critical image information, the proposed algorithm has also achieved above-average reconstruction quality for various tampering rates. The proposed image watermarking algorithm is computationally simple and designed for blind extraction of the watermark. The principle of the algorithm is to embed the watermark in the locations where image data alterations are least visible. To this end, the properties of the human visual system (HVS) are used to identify the gray level image features of such locations. The characteristics of the frequency components representing these features are identified by applying the DCT-based feature extraction methodology developed in this thesis. The strength with which the watermark is embedded is made adaptive to the local gray level characteristics. Simulation results have shown that the proposed watermarking algorithm yields significantly higher visual quality in the watermarked images than the reported methods, with a difference in PSNR of about 2.7 dB, while the embedded watermark is highly robust against JPEG compression, even at low quality factors, and against some other common image processing operations. The good performance of the proposed image self-recovery and watermarking algorithms is an indication of the effectiveness of the developed feature extraction methodology. This methodology can be applied in a wide range of applications and is suitable for any process where DCT data are available.
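    The methodology described above characterises a block by its most significant DCT coefficients, wherever in the frequency range they occur. As an illustration only, the sketch below computes the blockwise 2D DCT and selects the largest-magnitude AC coefficients; the 8x8 block size and the number of retained coefficients are assumptions for the example, not parameters taken from the thesis.

```python
# Blockwise 2D DCT and selection of the most significant AC coefficients.
# Illustrative sketch only; block size and k are assumed values.
import numpy as np
from scipy.fftpack import dct

def dct2(block: np.ndarray) -> np.ndarray:
    """Orthonormal 2D DCT of a square block."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def significant_coefficients(block: np.ndarray, k: int = 4):
    """Return (row, col, value) of the k largest-magnitude AC coefficients."""
    coeffs = dct2(block.astype(np.float64))
    ac = coeffs.copy()
    ac[0, 0] = 0.0                                   # ignore the DC term
    order = np.argsort(np.abs(ac), axis=None)[::-1][:k]
    rows, cols = np.unravel_index(order, ac.shape)
    return [(int(r), int(c), float(coeffs[r, c])) for r, c in zip(rows, cols)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8))        # stand-in for an 8x8 image block
    for r, c, v in significant_coefficients(block):
        print(f"coefficient ({r},{c}) = {v:.1f}")
```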

    In-house indexing of periodical literature : a study of university libraries in Kenya

    Get PDF
    The present study investigated the identification, access, and usage of periodicals in university libraries in Kenya, with a view to recommending a tool for assisting users in identifying information. Using questionnaires completed by 316 university library users and 27 librarians, supported by participant observation, document analysis, and interviews, it was found that usage of periodicals was low, as most users browse through periodicals to identify information, a method that is not effective. In-house indexing was investigated and found to be an effective tool for facilitating access to relevant information. The study recommends the establishment of in-house indexing programmes and databases in university libraries; the formulation of consistent indexing policies to achieve quality indexing; and that indexing should focus on both content and user requirements by specifying points of view and study methodologies to enhance the retrieval of relevant information. Information Science. M. A. (Information Science)

    Unsupervised quantification of entity consistency between photos and text in real-world news

    Get PDF
    In today's information age, the World Wide Web and social media are important sources for news and information. Different modalities (in the sense of information encoding) such as photos and text are typically used to communicate news more effectively or to attract attention. Communication scientists, linguists, and semioticians have studied the complex interplay between modalities for decades and investigated, e.g., how their combination can carry additional information or add a new level of meaning. The number of shared concepts or entities (e.g., persons, locations, and events) between photos and text is an important aspect to evaluate the overall message and meaning of an article. Computational models for the quantification of image-text relations can enable many applications. For example, they allow for more efficient exploration of news, facilitate semantic search and multimedia retrieval in large (web) archives, or assist human assessors in evaluating news for credibility. To date, only a few approaches have been suggested that quantify relations between photos and text. However, they either do not explicitly consider the cross-modal relations of entities, which are important in the news, or rely on supervised deep learning approaches that can only detect the cross-modal presence of entities covered in the labeled training data.
To address this research gap, this thesis proposes an unsupervised approach that can quantify entity consistency between photos and text in multimodal real-world news articles. The first part of this thesis presents novel approaches based on deep learning for information extraction from photos to recognize events, locations, dates, and persons. These approaches are an important prerequisite for measuring the cross-modal presence of entities in text and photos. First, an ontology-driven event classification approach that leverages new loss functions and weighting schemes is presented. It is trained on a novel dataset of 570,540 photos and an ontology with 148 event types. The proposed system outperforms approaches that do not use structured ontology information. Second, a novel deep learning approach for geolocation estimation is proposed that uses additional contextual information on the environmental setting (indoor, urban, natural) and from earth partitions of different granularity. The proposed solution outperforms state-of-the-art approaches, which are trained with significantly more photos. Third, we introduce the first large-scale dataset for date estimation with more than one million photos taken between 1930 and 1999, along with two deep learning approaches that treat date estimation as a classification and a regression problem, respectively. Both approaches achieve very good results that are superior to human annotations. Finally, a novel approach is presented that identifies public persons and their co-occurrences in news photos extracted from the Internet Archive, which collects time-versioned snapshots of web pages that are rarely enriched with metadata relevant to multimedia retrieval. Experimental results confirm the effectiveness of the deep learning approach for person identification. The second part of this thesis introduces an unsupervised approach capable of quantifying image-text relations in real-world news. Unlike related work, the proposed solution automatically provides novel measures of cross-modal consistency for different entity types (persons, locations, and events) as well as the overall context. The approach does not rely on any predefined datasets to cope with the large amount and diversity of entities and topics covered in the news. State-of-the-art tools for natural language processing are applied to extract named entities from the text. Example photos for these entities are automatically crawled from the Web. The proposed methods for information extraction from photos are applied to both news images and example photos to quantify the cross-modal consistency of entities. Two tasks are introduced to assess the quality of the proposed approach in real-world applications. Experimental results for document verification and retrieval of news with either low (potential misinformation) or high cross-modal similarities demonstrate the feasibility of the approach and its potential to support human assessors in studying news.
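    As a purely illustrative companion to the abstract above, the sketch below computes a simple set-overlap consistency score between named entities found in an article's text and a set of entities assumed to have been verified in the accompanying photo. The spaCy model name, the `photo_entities` input, and the overlap measure are all assumptions for the example; the thesis's actual system relies on deep visual models and web-crawled example photos rather than a precomputed entity set.

```python
# Toy cross-modal entity consistency score (illustrative assumptions only):
# compare named entities in the text with entities assumed verified in the photo.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed small English NER model is installed

def entity_consistency(text: str, photo_entities: set[str],
                       labels=("PERSON", "GPE", "LOC", "EVENT")) -> float:
    """Fraction of textual entities (of the given types) also found in the photo."""
    doc = nlp(text)
    text_entities = {ent.text.lower() for ent in doc.ents if ent.label_ in labels}
    if not text_entities:
        return 0.0  # no entities to check; treat consistency as zero here
    matched = {e for e in text_entities if e in photo_entities}
    return len(matched) / len(text_entities)

if __name__ == "__main__":
    article = "Angela Merkel met Emmanuel Macron in Berlin to discuss the summit."
    verified_in_photo = {"angela merkel", "berlin"}  # assumed output of a visual model
    print(f"consistency: {entity_consistency(article, verified_in_photo):.2f}")
```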

    Recover the tampered image based on VQ indexing

    No full text
    In this paper, a tampered image recovery scheme is proposed that creates an index table of the original image via vector quantization (VQ) and embeds it into the original image as a basis for recovery. To accomplish both image authentication and recovery, Wong's watermarking scheme [P. W. Wong, N. Memon, Secret and public key image watermarking schemes for image authentication and ownership verification, IEEE Trans. Image Process. 10(10) (2001) 1593-1601] is employed to perform tamper detection. Wong's scheme can accurately locate tampered regions. If an image has been tampered with, the index table can be used to recover the tampered regions. The experimental results indicate that our scheme has a higher probability of successful image recovery. Moreover, compared with Lee and Lin's scheme [Dual watermark for image tamper detection and recovery, Pattern Recognition 41 (2008) 3497-3506], our scheme provides not only better quality in the recovered images but also better results in edge regions.
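    As background for the abstract above, the following is a minimal sketch of how a VQ index table can be built for an image and used to approximate blocks from a codebook, which is the kind of information such a scheme embeds as a basis for recovering tampered regions. The 4x4 block size, the 256-entry codebook, the use of k-means for codebook training, and training on the image's own blocks are illustrative assumptions, not details taken from the paper; the tamper detection step (Wong's watermarking) is not shown.

```python
# Toy VQ index table for a grayscale image (illustrative assumptions: 4x4 blocks,
# 256-entry codebook trained with k-means on the image's own blocks).
import numpy as np
from sklearn.cluster import KMeans

BLOCK = 4

def to_blocks(img: np.ndarray) -> np.ndarray:
    """Split a grayscale image (H, W) into flattened BLOCK x BLOCK blocks."""
    h, w = img.shape
    blocks = (img[:h - h % BLOCK, :w - w % BLOCK]
              .reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK)
              .swapaxes(1, 2)
              .reshape(-1, BLOCK * BLOCK))
    return blocks.astype(np.float64)

def build_vq_index(img: np.ndarray, codebook_size: int = 256):
    """Return (codebook, index_table): each block is represented by one index."""
    blocks = to_blocks(img)
    km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(blocks)
    return km.cluster_centers_, km.labels_

def reconstruct_block(codebook: np.ndarray, index: int) -> np.ndarray:
    """Approximate a block from its codebook entry (used to repair tampering)."""
    return codebook[index].reshape(BLOCK, BLOCK)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(128, 128))    # stand-in grayscale image
    codebook, index_table = build_vq_index(image)
    print(index_table[:10])                          # first few block indices
    print(reconstruct_block(codebook, index_table[0]).round(1))
```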

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Advances in knowledge discovery and data mining Part II

    Get PDF
    19th Pacific-Asia Conference, PAKDD 2015, Ho Chi Minh City, Vietnam, May 19-22, 2015, Proceedings, Part II

    Actas de la XIII Reunión Española sobre Criptología y Seguridad de la Información RECSI XIII : Alicante, 2-5 de septiembre de 2014

    Get PDF
    If we had to choose a set of keywords to define today's society, the term information would undoubtedly be among the most representative. We live in a world characterised by a continuous flow of information, in which Information and Communication Technologies (ICT) and social networks play a prominent role. In the Information Society, a great variety of data is generated in digital format, and protecting it against unauthorised access and use is the main objective of what we know as Information Security. While Cryptology is a basic technological tool, devoted to the development and analysis of systems and protocols that guarantee data security, the spectrum of technologies involved in protecting information is broad and spans several disciplines. One characteristic of this science is its rapid and constant evolution, driven in part by the continuous advances in computing, especially in recent decades. Systems, protocols, and tools generally considered secure today will cease to be so in the more or less near future, which makes it essential to develop new tools that efficiently guarantee the necessary levels of security. The Reunión Española sobre Criptología y Seguridad de la Información (RECSI) is the leading Spanish scientific conference on Cryptology and ICT Security, where the principal Spanish researchers in this discipline, together with researchers of other nationalities, meet periodically to share their most recent results. The thirteenth edition will be held in the city of Alicante from 2 to 5 September 2014, organised by the Cryptology and Computational Security group of the Universidad de Alicante. Previous editions took place in Palma de Mallorca (1991), Madrid (1992), Barcelona (1994), Valladolid (1996), Torremolinos (1998), Santa Cruz de Tenerife (2000), Oviedo (2002), Leganés (2004), Barcelona (2006), Salamanca (2008), Tarragona (2010), and San Sebastián (2012).