9 research outputs found

    Collusion Resistive Framework for Multimedia Security

    Get PDF
    Recent advances in multimedia and Internet technology have raised the need for multimedia security. The widespread distribution of multimedia content can lead to security breaches and violations of copyright law. Legitimate users can come together to generate an illegitimate copy for unintended purposes. The most effective attack of this kind is collusion, in which a group of users combine their copies of the content to generate a new copy. Fingerprinting, in which a unique mark with a one-to-one correspondence to each user is embedded in the content, is the standard solution to the collusion attack problem. A colluder taking part in a collusion leaves a trace in the altered copy, so the effectiveness of the attack depends on how well the colluders can alter the content while leaving a minimal trace. A collusion-resistant framework, a step-by-step procedure for tackling collusion attacks, involves fingerprint generation and embedding, and various generation and embedding techniques are used to make it effective. Spread-spectrum embedding with coded modulation is among the most effective frameworks for tackling collusion. The spread-spectrum framework shows high collusion resistance and traceability, but it can be defeated by special collusion attacks such as the interleaving attack and combinations of the averaging attack. Different attacks have different after-effects on multimedia in different domains. This thesis provides a detailed analysis of various collusion attacks in different domains, which serves as a basis for designing a collusion-resistant framework. Statistical and experimental results are presented to show the behaviour of collusion attacks. The thesis also proposes a framework that uses a modified ECC-coded fingerprint for generation and robust watermark embedding using wave atoms. Experiments show that the system achieves high collusion resistance against various attacks and performs considerably better than systems in the literature.
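    As a concrete illustration of the averaging attack the abstract describes, the sketch below fingerprints several copies of a signal with Gaussian spread-spectrum marks, averages the colluders' copies, and shows that a simple correlation detector still scores the colluders above innocent users. All sizes, strengths, and the detector itself are illustrative assumptions, not the thesis's actual system:

```python
import random
random.seed(42)

N = 4000            # samples in the host signal
USERS = 10          # fingerprinted copies issued
COLLUDERS = [0, 1, 2]
SIGMA = 1.0         # fingerprint strength

host = [random.gauss(0.0, 5.0) for _ in range(N)]
# one Gaussian spread-spectrum fingerprint per user, added to the host
fps = [[random.gauss(0.0, SIGMA) for _ in range(N)] for _ in range(USERS)]
copies = [[h + w for h, w in zip(host, fp)] for fp in fps]

# average collusion: colluders blend their copies sample by sample
forged = [sum(copies[u][i] for u in COLLUDERS) / len(COLLUDERS)
          for i in range(N)]

# non-blind detector: subtract the known host, correlate with each fingerprint
residual = [f - h for f, h in zip(forged, host)]
scores = [sum(r * w for r, w in zip(residual, fp)) / N for fp in fps]
print(scores)  # colluders score near 1/3, innocents near 0
```

    Averaging by K colluders attenuates each colluder's mark to roughly 1/K of its strength, which is why large coalitions are harder to trace.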

    Steganalysis of video sequences using collusion sensitivity

    Get PDF
    In this thesis we present an effective steganalysis technique for digital video sequences based on the collusion attack. Steganalysis is the process of detecting, with high probability, the presence of covert data in multimedia. Existing steganalysis algorithms target covert information in still images, and when applied directly to video sequences these approaches are suboptimal. In this thesis we present methods that overcome this limitation by using the redundant information present in the temporal domain to detect covert messages in the form of Gaussian watermarks. In particular we target the spread-spectrum steganography method because of its widespread use. Our gains are achieved by exploiting the collusion attack, recently studied in the field of digital video watermarking, together with more sophisticated pattern recognition tools. Through analysis and simulations we evaluate the effectiveness of a video steganalysis method based on an averaging-based collusion scheme. Other forms of collusion attack, in the form of weighted linear collusion and block-based collusion schemes, are proposed to improve detection performance. The proposed steganalysis methods were successful in detecting, with high accuracy, hidden watermarks bearing low SNR. The simulation results also show the improved performance of the proposed temporal methods over spatial methods. We conclude that the essence of future video steganalysis techniques lies in the exploitation of temporal redundancy.
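    The averaging-based collusion idea behind this steganalysis can be sketched as follows: the temporal mean of near-identical frames approximates the cover, so the residual energy of a watermarked sequence is noticeably higher than that of a clean one. The static-scene model and all parameters below are illustrative assumptions, not the thesis's experimental setup:

```python
import random
random.seed(7)

N = 2500       # pixels per (flattened) frame
FRAMES = 20
WM_STD = 1.0   # per-frame Gaussian watermark strength
NOISE = 0.3    # sensor noise

base = [random.gauss(128.0, 20.0) for _ in range(N)]  # static scene

def sequence(stego):
    frames = []
    for _ in range(FRAMES):
        frames.append([b
                       + (random.gauss(0.0, WM_STD) if stego else 0.0)
                       + random.gauss(0.0, NOISE)
                       for b in base])
    return frames

def residual_energy(frames):
    # averaging collusion: the temporal mean approximates the cover frame
    mean = [sum(f[i] for f in frames) / FRAMES for i in range(N)]
    return sum((f[i] - mean[i]) ** 2
               for f in frames for i in range(N)) / (FRAMES * N)

clean = residual_energy(sequence(False))
stego = residual_energy(sequence(True))
print(clean, stego)  # the hidden watermark inflates the residual energy
```

    A threshold on the residual energy then separates clean from stego sequences; real video needs motion compensation before averaging, which this sketch omits.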

    Multimedia content supervision: content adaptation, optimal prefetching policies, and causal coordination of streams

    Get PDF
    The quality of distributed information systems depends on service responsiveness, data consistency, and the relevance of the content made available according to user interests. This thesis aims to improve these three performance criteria by taking into account user characteristics, available resources, or, more generally, the execution context. Accordingly, the document is organized in three main parts. The first part discusses adaptation policies for information systems that are subject to dynamic and stochastic contexts. In our approach, adaptation agents apply sequential decision policies under uncertainty. We focus on modelling such decision processes depending on whether the context is fully or partially observable, using Markov Decision Processes (MDPs) and Partially Observable MDPs (POMDPs) to model a movie-browsing service in a mobile environment. Our model derives adaptation policies for this service that take the limited (and observable) resources into account. These policies are further refined according to the (partially observable) user's interest level, estimated from implicit feedback, and the theoretical models are validated through extensive simulations. The second part deals with hypermedia content delivery and aims to reduce navigation latencies by means of prefetching. As before, we build on an MDP model able to derive optimal prefetching policies that integrate both user behaviour and resource availability. First, we extend this model and propose more complex and aggressive policies. Second, the extended model is enriched by taking the user's profile into account and therefore provides finer prefetching policies; notably, it issues personalized policies without explicitly manipulating user profiles. The proposed extensions and the associated policies are validated through comparison with the original model and several heuristic approaches. Finally, the third part considers multimedia applications in distributed contexts, where highly interactive collaborative applications need to offer each user a consistent view of the interactions represented by the streams exchanged between dispersed groups of users. At the coordination level, strong ordering protocols for capturing and delivering stream interactions (e.g. CAUSAL, TOTAL order) may be too expensive given the variability of network conditions. We build on previous work on expressing stream causality and propose a flexible coordination middleware that integrates different delivery modes (e.g. FIFO, CAUSAL, TOTAL) into a single channel while respecting each of these protocols; the proposed abstract channel can handle a mix of any partial- or total-order protocols. Integrating perceptual tolerance of short causal inconsistencies into this middleware yields a coordination toolkit that performs better than Δ-causality, usually considered the best solution.
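    The optimal-prefetching idea can be illustrated with a toy MDP solved by value iteration: states are pages, actions choose which linked page (if any) to prefetch, and costs trade speculative bandwidth against on-demand latency. The page graph, costs, and probabilities below are made-up assumptions, not the thesis's model:

```python
# toy page graph: from each page the user follows link A w.p. p, else link B
LATENCY = 4.0     # cost of fetching a page on demand
BANDWIDTH = 1.0   # cost of one speculative prefetch
GAMMA = 0.9       # discount factor

pages = {  # page -> (link_A, link_B, prob_of_A)
    "home": ("news", "mail", 0.8),
    "news": ("home", "mail", 0.5),
    "mail": ("home", "news", 0.3),
}

def action_cost(s, pre, V):
    a, b, p = pages[s]
    miss_a = 0.0 if pre == a else LATENCY   # pay latency only on a cache miss
    miss_b = 0.0 if pre == b else LATENCY
    return ((BANDWIDTH if pre else 0.0)
            + p * (miss_a + GAMMA * V[a])
            + (1 - p) * (miss_b + GAMMA * V[b]))

V = {s: 0.0 for s in pages}
for _ in range(200):  # value iteration to a (near) fixed point
    V = {s: min(action_cost(s, pre, V) for pre in (None,) + pages[s][:2])
         for s in pages}

# greedy policy w.r.t. the converged values: what to prefetch at each page
policy = {s: min((None,) + pages[s][:2], key=lambda pre: action_cost(s, pre, V))
          for s in pages}
print(policy)
```

    With these numbers the policy prefetches the likely next page whenever the expected saved latency exceeds the prefetch bandwidth cost, which is exactly the trade-off the MDP formalizes.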

    Multibiometric security in wireless communication systems

    Get PDF
    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 05/08/2010. This thesis explores an application of multibiometrics to secured wireless communications. The media of study were Wi-Fi, 3G, and WiMAX, over which simulations and experimental studies were carried out to assess performance. Specifically, restriction of access to authorized users only is provided by a technique referred to hereafter as a multibiometric cryptosystem. In brief, the system is built upon a complete challenge/response methodology in order to obtain a high level of security, on the basis of user identification by fingerprint and further confirmation through text-dependent speaker recognition. First is the enrolment phase, in which a database of fingerprints watermarked with memorable texts, along with voice features based on the same texts, is created by sending them to the server over a wireless channel. Next is the verification stage, at which claimed users are verified against the database; it consists of five steps. At the identification level, a user is asked to present a fingerprint and a memorable word, the former watermarked with the latter, so that the system can authenticate the fingerprint, verify its validity, and retrieve the challenge for an accepted user. The following three steps involve speaker recognition: the user responds to the challenge with text-dependent voice, the server authenticates the response, and finally the server accepts or rejects the user. To implement fingerprint watermarking, i.e. incorporating the memorable word as a watermark message into the fingerprint image, a five-step algorithm has been developed.
    The first three novel steps concern fingerprint image enhancement (CLAHE with a clip limit, standard-deviation analysis, and sliding-neighborhood processing) and are followed by two further steps for embedding and extracting the watermark in the enhanced fingerprint image using the Discrete Wavelet Transform (DWT). In the speaker recognition stage, the limitations of this technique over wireless channels have been addressed by sending voice features (cepstral coefficients) instead of raw samples. This scheme reduces transmission time and the dependency of the data on the communication channel, with no packet loss. Finally, the obtained results have verified these claims.
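    The embed/extract step can be sketched with a single-level 1-D Haar DWT: watermark bits replace selected detail coefficients and are recovered from their signs. This is a minimal illustration only; the thesis's scheme uses the enhancement pipeline above and a full 2-D DWT, and the message bits below are hypothetical:

```python
import random

def haar(x):
    # single-level 1-D Haar DWT: pairwise averages and differences
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def ihaar(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]   # exact inverse of the forward step
    return out

def embed(signal, bits, strength=4.0):
    a, d = haar(signal)
    for i, bit in enumerate(bits):      # one detail coefficient per bit
        d[i] = strength if bit else -strength
    return ihaar(a, d)

def extract(signal, nbits):
    _, d = haar(signal)
    return [1 if d[i] > 0 else 0 for i in range(nbits)]

random.seed(1)
row = [random.uniform(0.0, 255.0) for _ in range(64)]   # a fingerprint-image row
msg = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical bits of the memorable word
marked = embed(row, msg)
print(extract(marked, len(msg)))  # recovers msg
```

    Embedding in detail coefficients keeps the low-frequency approximation, and hence the visible fingerprint ridges, largely intact, which is why DWT-domain watermarking suits this application.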

    Robust density modelling using the student's t-distribution for human action recognition

    Full text link
    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
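    The robustness argument can be illustrated numerically: the Gaussian negative log-likelihood of an outlier grows quadratically with its distance from the mean, while the Student's t penalty grows only logarithmically, so a single outlier dominates a Gaussian fit but barely moves a t fit. The parameters below (ν = 3, unit scale) are illustrative assumptions:

```python
import math

def gauss_nll(x, mu=0.0, sigma=1.0):
    # Gaussian negative log-density: quadratic penalty in (x - mu)
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (x - mu) ** 2 / (2 * sigma ** 2)

def student_t_nll(x, mu=0.0, sigma=1.0, nu=3.0):
    # Student's t negative log-density: logarithmic penalty in (x - mu)
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(nu * math.pi * sigma ** 2))
    return -c + (nu + 1) / 2 * math.log(1 + (x - mu) ** 2 / (nu * sigma ** 2))

inlier, outlier = 0.5, 20.0
# near the mode the two densities penalise similarly...
print(gauss_nll(inlier), student_t_nll(inlier))
# ...but the Gaussian penalty explodes on the outlier while t stays moderate
print(gauss_nll(outlier), student_t_nll(outlier))
```

    In an HMM with t-mixture observation densities, this bounded penalty keeps a few corrupted feature vectors from dragging the component parameters away from the bulk of the data.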

    New intra-video collusion attack using mosaicing

    No full text

    11th International Coral Reef Symposium Proceedings

    Get PDF
    A defining theme of the 11th International Coral Reef Symposium was that the news for coral reef ecosystems is far from encouraging. Climate change is now happening much faster than in an ice-age transition, and coral reefs continue to suffer fever-high temperatures as well as sour ocean conditions. Corals may be falling behind, and there appears to be no silver-bullet remedy. Nevertheless, there are hopeful signs, and we should not despair. Reef ecosystems respond vigorously to protective measures and the alleviation of stress. For concerned scientists, managers, conservationists, stakeholders, students, and citizens, there is a great role to play in continuing to report on the extreme threat that climate change represents to earth's natural systems. Urgent action is needed to reduce CO2 emissions. In the interim, we can and must buy time for coral reefs through increased protection from sewage, sediment, pollutants, overfishing, development, and other stressors, all of which we know can damage coral health. The time to act is now. The canary in the coral coal mine is dead, but we still have time to save the miners. We need effective management rooted in solid interdisciplinary science and coupled with stakeholder buy-in, working at local, regional, and international scales alongside global efforts to give reefs a chance.