14 research outputs found

    A new approach for enhancing LSB steganography using bidirectional coding scheme

    This paper proposes a new algorithm for embedding private information within a cover image. Unlike existing algorithms, it exploits the data of the carrier image more efficiently so that the image appears less distorted; as a consequence, the private data remains imperceptible and the transmitted information stays unsuspicious. This is achieved by dividing the least significant bit plane of the cover image into fixed-size blocks and then embedding the secret message within each block in one of two opposite ways, depending on how similar the block is to the private information to be hidden. This technique reduces the number of bits that must be changed in the cover image to accommodate the private data, and hence substantially reduces the amount of distortion in the stego-image compared with classic LSB image steganography algorithms.
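
    The abstract does not spell out the bidirectional coding rule; the minimal Python sketch below assumes the "two opposite ways" mean storing each message block either as-is or bit-inverted, with a one-bit flag per block, choosing whichever differs from the cover block's LSBs in fewer positions.

    import numpy as np

    def embed_block(cover_lsb_block, msg_block):
        """Embed msg_block either as-is or bit-inverted, whichever requires
        fewer LSB changes; the returned flag records the choice (assumption,
        not the paper's exact scheme)."""
        direct_changes = np.count_nonzero(cover_lsb_block != msg_block)
        inverted_block = 1 - msg_block
        inverted_changes = np.count_nonzero(cover_lsb_block != inverted_block)
        if inverted_changes < direct_changes:
            return inverted_block, 1   # flag = 1: recover message by inverting
        return msg_block, 0            # flag = 0: message stored directly

    # Example: a block that mostly disagrees with the message is stored inverted,
    # so only one LSB of the cover block needs to change instead of seven.
    cover = np.array([1, 1, 1, 0, 1, 1, 1, 1])
    msg = np.array([0, 0, 0, 1, 0, 0, 0, 1])
    coded, flag = embed_block(cover, msg)   # flag == 1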

    Analysis of Loading Rate, Fiber Orientation and Material Composition through Image Processing and Digital Volume Correlation in High Performance Concrete

    Ultra High Performance Concrete (UHPC) and High Performance Concrete (HPC) are characterized by high compressive strength and high toughness. This is achieved by maximizing the particle packing density in the matrix and by reinforcing the matrix with fibers, which increases the material's toughness. The interactions between fibers and the matrix during loading are complex and involve several different energy dissipation mechanisms. The goal of this thesis is to investigate these interactions and identify changes in material response, in the hope that these changes may be useful for the future design of UHPC. Two reinforcement types, steel wool plus steel fiber and steel fiber alone, are tested in a split cylinder tensile test at a quasi-static loading rate and at a high loading rate to investigate changes in the material response. The fracture response is evaluated separately for loading rate, fiber orientation, and reinforcement type, using Digital Volume Correlation and image processing analysis to determine the surface area generation and the volumetric strain produced by both macro cracks and micro cracking. The study found that as the loading rate increases, the amount of micro cracking in the specimen also increases; this trend was seen for 75% of the data. It was also found that 75% of the optimum fiber-oriented specimens produced less volumetric strain per joule of absorbed energy than the pessimum fiber-oriented specimens, meaning the optimum fiber-oriented specimens performed better. The optimum fiber-oriented specimens also showed a larger share of volumetric strain made up of micro cracking per joule of absorbed energy than the pessimum fiber-oriented specimens, again for 75% of the data. The steel wool and fiber reinforced specimens performed best, with lower volumetric strain per joule of absorbed energy than the purely fiber reinforced specimens; this too was seen for 75% of the data.
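
    The volumetric strain quantity referred to above can be illustrated with a short sketch. Assuming a DVC output of per-voxel displacement components on a regular grid (the array names, grid spacing and synthetic data are illustrative, not the thesis's actual pipeline), the small-strain volumetric strain is the trace of the strain tensor:

    import numpy as np

    def volumetric_strain(ux, uy, uz, spacing=1.0):
        """Small-strain volumetric strain from a displacement field:
        eps_v = d(ux)/dx + d(uy)/dy + d(uz)/dz (trace of the strain tensor)."""
        eps_xx = np.gradient(ux, spacing, axis=0)
        eps_yy = np.gradient(uy, spacing, axis=1)
        eps_zz = np.gradient(uz, spacing, axis=2)
        return eps_xx + eps_yy + eps_zz

    # Synthetic check: a uniform 1% expansion in each direction gives eps_v ~ 0.03.
    x, y, z = np.meshgrid(np.arange(20), np.arange(20), np.arange(20), indexing='ij')
    eps_v = volumetric_strain(0.01 * x, 0.01 * y, 0.01 * z)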

    Multibiometric security in wireless communication systems

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 05/08/2010. The thesis explores an application of multibiometrics to secured wireless communications. The media studied for this purpose were Wi-Fi, 3G, and WiMAX, over which simulations and experimental studies were carried out to assess performance. Specifically, restriction of access to authorized users only is provided by a technique referred to hereafter as a multibiometric cryptosystem. In brief, the system is built upon a complete challenge/response methodology in order to obtain a high level of security, based on user identification by fingerprint and further confirmation by verifying the user through text-dependent speaker recognition. First is the enrolment phase, in which a database of fingerprints watermarked with memorable texts, together with voice features based on the same texts, is created by sending them to the server through the wireless channel. Next is the verification stage, in which claimed users (those who claim to be genuine) are verified against the database; it consists of five steps. In the identification step, the user first presents a fingerprint and a memorable word, with the word watermarked into the fingerprint, so that the system can authenticate the fingerprint, verify its validity, and retrieve the challenge for an accepted user. The following three steps involve speaker recognition: the user responds to the challenge with text-dependent voice, the server authenticates the response, and the server finally accepts or rejects the user. To implement fingerprint watermarking, i.e., incorporating the memorable word as a watermark message into the fingerprint image, a five-step algorithm has been developed. The first three novel steps deal with fingerprint image enhancement (CLAHE with 'Clip Limit', standard deviation analysis and sliding neighborhood) and are followed by two further steps for embedding and extracting the watermark in the enhanced fingerprint image using the Discrete Wavelet Transform (DWT). In the speaker recognition stage, the limitations of this technique in wireless communication have been addressed by sending voice features (cepstral coefficients) instead of raw samples. This scheme reaps the advantages of reduced transmission time and reduced dependency of the data on the communication channel, together with no packet loss. Finally, the obtained results have verified these claims.
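
    The DWT embedding step is not detailed in the abstract; the sketch below (sub-band choice, wavelet, embedding strength and function names are assumptions) shows one common additive way to hide a bit string in the detail coefficients of a single-level 2-D DWT using PyWavelets.

    import numpy as np
    import pywt  # PyWavelets

    def embed_watermark(fingerprint, watermark_bits, alpha=8.0):
        """Additively embed a bit string into the diagonal-detail (HH) sub-band
        of a single-level Haar DWT, then reconstruct the watermarked image.
        Extraction in this non-blind variant would compare the stego HH
        coefficients against the originals."""
        LL, (LH, HL, HH) = pywt.dwt2(fingerprint.astype(float), 'haar')
        flat = HH.ravel().copy()
        n = min(len(watermark_bits), flat.size)
        flat[:n] += np.where(np.asarray(watermark_bits[:n]) > 0, alpha, -alpha)
        HH = flat.reshape(HH.shape)
        return pywt.idwt2((LL, (LH, HL, HH)), 'haar')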

    Multi-Class Classification for Identifying JPEG Steganography Embedding Methods

    Over 725 steganography tools are available over the Internet, each providing a method for covert transmission of secret messages. This research presents four steganalysis advancements that result in an algorithm that identifies the steganography tool used to embed a secret message in a JPEG image file. The algorithm includes feature generation, feature preprocessing, multi-class classification and classifier fusion. The first contribution is a new feature generation method based on the decomposition of the discrete cosine transform (DCT) coefficients used in the JPEG image encoder; the generated features are better suited to identifying discrepancies in each area of the decomposed DCT coefficients. Second, classification accuracy is further improved by a feature ranking technique developed in the preprocessing stage for the kernel Fisher's discriminant (KFD) and support vector machine (SVM) classifiers, applied in the kernel space during training. Third, for the KFD and SVM two-class classifiers, a classification tree is designed from the kernel space to provide a multi-class classification solution for both methods. Fourth, by analyzing a set of classifiers, signature detectors, and multi-class classification methods, a classifier fusion system is developed to increase the accuracy of identifying the embedding method used to generate the steganography images. Based on classifying stego images created with research and commercial JPEG steganography techniques (the F5, JP Hide, JSteg, Model-based, Model-based Version 1.2, OutGuess, Steganos, StegHide and UTSA embedding methods), the system shows a statistically significant increase in classification accuracy of 5%. In addition, the system provides a solution for identifying steganographic fingerprints as well as the ability to include future multi-class classification tools.
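
    The work builds its own classification tree in kernel space from two-class KFD/SVM classifiers; as a rough, generic stand-in, the sketch below shows how binary SVMs can be combined into a multi-class embedding-method identifier with scikit-learn. The features are synthetic placeholders and the method labels are only illustrative.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Placeholder rows stand in for DCT-decomposition feature vectors;
    # labels name a few of the embedding methods mentioned above.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 40))
    y = rng.choice(['F5', 'JSteg', 'OutGuess'], size=300)

    # One-vs-rest combination of binary RBF-kernel SVMs (a generic substitute
    # for the paper's kernel-space classification tree).
    clf = make_pipeline(StandardScaler(),
                        OneVsRestClassifier(SVC(kernel='rbf')))
    clf.fit(X, y)
    print(clf.predict(X[:5]))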

    Preventing Accidental Privacy Leakage in Ubiquitous Visual Sensing

    Digital cameras are ubiquitous in our daily life and have become an essential part of everyday devices; for example, there are cameras in cellphones, tablets, and many other wearable devices. This makes it easy to capture and store information around us through images and videos. Despite the ease and usefulness of cameras, they also bring serious privacy problems, stemming either from malicious intent or from accidental user ignorance, for example malicious secret filming or unwanted recognition from a photo. The camera cannot distinguish whether part of the captured data is private or sensitive, so once a picture is exposed, accidental privacy leakage may happen. In many cases malicious picture capture cannot be avoided, but information leakage can be avoided proactively. The goal of this thesis work is to prevent or minimize accidental privacy leakage. The basic idea is to mark sensitive areas or objects with a QR code that encodes the privacy information for those targets, so that the areas and objects can be filtered out before publishing. Faces are a common example of sensitive information and are the case used in this thesis: in our implementation, recognized faces were blurred out of the original picture before publishing. Proactive protection of information is discussed from three aspects: a real-time non-object-based approach, an offline non-object-based approach and an object-based approach. In the real-time approach, a QR code is used to enable real-time processing; with this method the user has fine-grained control over what is revealed and what is kept private. In the offline method, a range selector is implemented to eliminate a specific area, although this approach still has some defects: it is not real time, and the list of sensitive areas differs from picture to picture. The research also utilizes face recognition for the object-based method; compared with non-object-based sensing, this approach has lower feasibility, lower speed, and lower accuracy. The thesis categorizes existing approaches into two types (permission requirement and predefined sensitive areas) and compares them from different perspectives, showing the pros and cons of each. Master of Science, Computer and Information Science, College of Engineering and Computer Science, University of Michigan-Dearborn. https://deepblue.lib.umich.edu/bitstream/2027.42/136612/1/YingZou_thesis.pdf
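
    The thesis's own pipeline combines QR-code marking with face recognition; the minimal sketch below shows only the blur-before-publishing step, using a stock OpenCV Haar cascade face detector rather than recognition. The file paths, detector parameters and blur strength are illustrative assumptions (requires OpenCV's bundled cascade data).

    import cv2

    def blur_faces(image_path, out_path):
        """Detect faces with a Haar cascade and Gaussian-blur them so the
        published image no longer exposes them."""
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            roi = img[y:y + h, x:x + w]
            img[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 30)
        cv2.imwrite(out_path, img)

    blur_faces('input.jpg', 'published.jpg')  # hypothetical file names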

    Marine Debris Detection in Satellite Surveillance using Attention Mechanisms

    Marine debris poses a critical threat to environmental ecosystems, necessitating effective methods for its detection and localization. This study addresses existing limitations in the literature by proposing an approach that combines the instance segmentation capabilities of YOLOv7 with various attention mechanisms to enhance efficiency and broaden applicability. The primary contribution lies in the exploration and comparison of three attentional models: lightweight coordinate attention, CBAM (combining spatial and channel focus), and a bottleneck transformer based on self-attention. Leveraging a meticulously labeled dataset of satellite images containing ocean debris, the study conducts a comprehensive assessment of box detection and mask evaluation. The results show that CBAM is the standout performer, achieving the highest F1 score (77%) in box detection, surpassing coordinate attention (71%) and YOLOv7/bottleneck transformer (both around 66%). In mask evaluation, CBAM again leads with an F1 score of 73%, while coordinate attention and YOLOv7 perform comparably (F1 scores of around 68% and 69%, respectively) and the bottleneck transformer lags behind at an F1 score of 56%. This evidence underscores CBAM's superior suitability for detecting marine debris compared with the other methods. Notably, the study reveals an intriguing aspect of the bottleneck transformer, which, despite lower overall performance, successfully detected areas overlooked by manual annotation and demonstrated better mask precision for larger debris pieces, hinting at potentially superior practical performance in certain scenarios. This nuanced finding underscores the importance of considering specific application requirements when selecting a detection model, as the bottleneck transformer may offer unique advantages in certain contexts.
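
    CBAM follows a published design of channel attention followed by spatial attention; the PyTorch sketch below is a minimal stand-alone version of such a module, not the exact block wired into YOLOv7 in this study (the reduction ratio and spatial kernel size are assumed defaults).

    import torch
    import torch.nn as nn

    class CBAM(nn.Module):
        """Convolutional Block Attention Module: channel attention, then
        spatial attention, applied to a feature map of shape (B, C, H, W)."""
        def __init__(self, channels, reduction=16, spatial_kernel=7):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )
            self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                     padding=spatial_kernel // 2)

        def forward(self, x):
            b, c, _, _ = x.shape
            # Channel attention: shared MLP over average- and max-pooled descriptors.
            avg = self.mlp(x.mean(dim=(2, 3)))
            mx = self.mlp(x.amax(dim=(2, 3)))
            x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
            # Spatial attention: conv over channel-wise mean and max maps.
            attn = torch.cat([x.mean(dim=1, keepdim=True),
                              x.amax(dim=1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.spatial(attn))

    out = CBAM(64)(torch.randn(2, 64, 32, 32))  # same shape in, same shape out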

    Regression Based Gaze Estimation with Natural Head Movement

    This thesis presents a non-contact, video-based gaze tracking system using novel eye detection and gaze estimation techniques. The objective of the work is to develop a real-time gaze tracking system that can estimate gaze accurately under natural head movement. The system contains both hardware and software components. The hardware is responsible for illuminating the scene and capturing facial images for further computer analysis, while the software implements the core gaze tracking technique, which consists of two main modules: an eye detection subsystem and a gaze estimation subsystem. The proposed gaze tracking technique uses image plane features, namely the inter-pupil vector (IPV) and the image center-inter pupil center vector (IC-IPCV), to improve gaze estimation precision under natural head movement. A support vector regression (SVR) based estimation method using the image plane features along with the traditional pupil center-cornea reflection (PC-CR) vector is also proposed to estimate the gaze. The designed gaze tracking system works in real time and achieves an overall estimation accuracy of 0.84° with a still head and 2.26° under natural head movement. Using the SVR method for off-line processing, the estimation accuracy with head movement can be improved to 1.12° while tolerating 10 cm × 8 cm × 5 cm of head movement.
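
    As a rough illustration of the SVR-based estimation, the sketch below maps a feature vector built from the PC-CR, IPV and IC-IPCV vectors to screen coordinates. The feature layout, kernel, hyperparameters and data are assumptions (synthetic), not the thesis's calibrated setup.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.multioutput import MultiOutputRegressor

    # Hypothetical training set: each row concatenates three 2-D image-plane
    # features [PC-CR, IPV, IC-IPCV]; targets are normalized screen coordinates.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 6))
    Y = rng.uniform(0.0, 1.0, size=(500, 2))

    # One RBF-kernel SVR per output coordinate.
    model = MultiOutputRegressor(SVR(kernel='rbf', C=10.0, epsilon=0.01))
    model.fit(X, Y)
    gaze_xy = model.predict(X[:1])  # predicted (x, y) gaze point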

    Automatic recognition of vertical traffic signs from video data of a mobile mapping system

    Master's thesis in Geographic Engineering, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2013. This work develops a method for automatic recognition of vertical traffic signs (RASVT, from its Portuguese name), namely danger signs and some regulation signs (giving way, prohibition and obligation signs, all of them red or blue). The starting point for developing the method is the video data of a Mobile Mapping System, obtained in real (uncontrolled) conditions. The method works in post-acquisition mode, i.e., after acquisition and processing of the raw video data and the positioning data (GPS and IMU), and comprises three stages. The detection stage consists in detecting the image Regions of Interest (ROI) that may contain signs. This detection is performed using a colour segmentation process (red and blue) and five selection criteria: the ROI dimensions, the height-to-length ratio of the ROI, the proximity of the ROI to the image borders, the relative positioning between the ROI centroid and the segmented area's centroid, and the fill ratio of the ROI. The classification stage is based on the ROI obtained in the previous stage and consists in recognizing each region's geometric shape and aggregating the analysed ROI into classes based on colour and geometric shape. The recognition stage consists in identifying the signs that have been detected and classified, and is based on matching the pictograms present on the signs: a colour segmentation process extracts the pictogram, which is then recognized by matching it against template pictograms using simple correlation. The detection stage showed a success rate of 32% (89% if false positives are excluded), the classification stage a success rate of 93%, and the recognition stage a success rate of 91%. The overall success rate obtained by the implemented method is 81%, i.e., of the traffic signs present in the analysed sample, 81% were correctly detected, classified and recognized.
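
    The detection and recognition stages described above map naturally onto standard OpenCV operations. The sketch below (for OpenCV 4; the HSV thresholds and criterion values are illustrative, not the thesis's calibrated ones) segments red regions, filters candidate ROIs by simple geometric criteria, and scores a candidate pictogram against a template with normalized correlation.

    import cv2
    import numpy as np

    def detect_red_rois(bgr, min_area=400, max_aspect=1.5, min_fill=0.3):
        """Segment red regions in HSV and keep ROIs that satisfy simple
        size, aspect-ratio and fill-ratio criteria."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
               cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        rois = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            area = cv2.contourArea(c)
            if w * h < min_area:
                continue
            if max(w, h) / max(min(w, h), 1) > max_aspect:
                continue
            if area / (w * h) < min_fill:
                continue
            rois.append((x, y, w, h))
        return rois

    def match_pictogram(roi_gray, template_gray):
        """Score an extracted pictogram against a template by normalized correlation."""
        t = cv2.resize(template_gray, roi_gray.shape[::-1])
        return cv2.matchTemplate(roi_gray, t, cv2.TM_CCOEFF_NORMED).max()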

    Automatic classification of rice development stages from RADARSAT-2 images

    Radar images are used for crop identification, for monitoring crop growth, and for measuring the areas devoted to different crops. In this work, a radar image captured by the RADARSAT-2 satellite in February 2009 is used to implement an automatic classification system for the development stages of rice crops. Because the images exhibit characteristic speckle noise, the first part of the project is a comparative study of several methods proposed in the literature for filtering images with speckle noise. In addition, an original adaptive-window filtering method is proposed and a filter combining the arithmetic mean, the mode and the median is developed, which gave good results. Three classifiers are then implemented, a Bayesian classifier, a fuzzy c-means classifier and a multilayer perceptron, to segment and classify the filtered images. A mixed classifier built from the results of the three classifiers is developed; after evaluation it gave an accuracy greater than 94%. The system was implemented using the OpenCV library on an Ubuntu platform. Master's thesis.
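
    The abstract names a filter that combines the local mean, mode and median but does not give the combination rule; the sketch below assumes a simple equal-weight average over a sliding window, purely as an illustration of the idea.

    import numpy as np
    from scipy import ndimage

    def _window_mode(values):
        """Most frequent value in a filter window."""
        vals, counts = np.unique(values, return_counts=True)
        return vals[counts.argmax()]

    def mean_median_mode_filter(img, size=5):
        """Sliding-window despeckle filter that averages the local mean,
        median and mode (equal weights are an assumption)."""
        img = img.astype(float)
        mean = ndimage.uniform_filter(img, size)
        median = ndimage.median_filter(img, size)
        mode = ndimage.generic_filter(img, _window_mode, size)
        return (mean + median + mode) / 3.0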