
    Rapid intelligent watermarking system for high-resolution grayscale facial images

    Get PDF
    Facial captures are widely used in many access control applications to authenticate individuals and grant access to protected information and locations. For instance, in passport or smart card applications, facial images must be secured during the enrollment process, prior to exchange and storage. Digital watermarking may be used to assure the integrity and authenticity of these facial images against unauthorized manipulation, through fragile and robust watermarking, respectively. Other biometric traits can also be embedded as invisible watermarks in these facial captures to improve individual verification. Evolutionary Computation (EC) techniques have been proposed in the Intelligent Watermarking (IW) literature to optimize watermark embedding parameters. The goal of such an optimization problem is to find the trade-off between the conflicting objectives of watermark quality and robustness. Securing streams of high-resolution biometric facial captures results in a large number of optimization problems with high-dimensional search spaces. For homogeneous image streams, the optimal solutions for one image block can be reused for other image blocks having the same texture features. The computational complexity of handling a stream of high-resolution facial captures can therefore be significantly reduced by recalling such solutions from an associative memory instead of re-optimizing the whole facial capture. In this thesis, an associative memory is proposed that stores the previously calculated solutions for different categories of texture, using the optimization results of the whole image for a few training facial images. A multi-hypothesis approach is adopted: the associative memory stores solutions for different clustering resolutions (numbers of block clusters based on texture features), and the optimal clustering resolution is selected based on the watermarking metrics for each facial image during generalization. This approach was verified using streams of facial captures from the PUT database (Kasinski et al., 2008) and compared against a baseline system representing traditional IW methods with full optimization for all stream images. The proposed and baseline systems are compared with respect to the quality of the solutions produced and the computational complexity, measured in fitness evaluations. The proposed approach resulted in a 95.5% decrease in computational burden with little impact on watermarking performance for a stream of 198 facial images. The proposed framework, Blockwise Multi-Resolution Clustering (BMRC), has been published in Machine Vision and Applications (Rabil et al., 2013a). With BMRC, the stream of high-dimensional optimization problems is replaced by a few training optimizations followed by recalls from an associative memory storing the training artifacts. Optimization problems with high-dimensional search spaces are challenging and complex, reaching a dimensionality of up to 49k variables, represented using 293k bits, for high-resolution facial images. In this thesis, this high-dimensional problem is decomposed into smaller problems representing image blocks, which resolves the convergence problems encountered when handling the larger problem. Local watermarking metrics are used in cooperative coevolution at the block level to reach the overall solution. The elitism mechanism is modified such that, for each position, the blocks with the highest local watermarking metrics are fetched across all candidate solutions and concatenated to form the elite candidate solutions, as sketched below.
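
    A minimal sketch of this modified elitism, assuming candidate solutions are arrays of per-block embedding parameters and that a higher local metric is better (names, shapes, and the NumPy representation are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def blockwise_elite(population, local_metric):
    # population   : (n_candidates, n_blocks, block_dim) per-block embedding parameters
    # local_metric : (n_candidates, n_blocks) local watermarking metric, higher is better
    best = np.argmax(local_metric, axis=0)            # best candidate index per block position
    elite = population[best, np.arange(population.shape[1])]
    return elite                                      # (n_blocks, block_dim) elite candidate

# toy usage: 10 candidates, 24 blocks, 3 embedding parameters per block
pop = np.random.rand(10, 24, 3)
met = np.random.rand(10, 24)
elite = blockwise_elite(pop, met)
```
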
    The proposed approach resolves the premature convergence of traditional EC methods, yielding a 17% improvement in watermarking fitness for facial images of resolution 2048×1536. This improved fitness is reached in few iterations, implying an optimization speedup. The proposed algorithm, Blockwise Coevolutionary Genetic Algorithm (BCGA), has been published in Expert Systems with Applications (Rabil et al., 2013c). The concepts and frameworks presented in this thesis can be generalized to any stream of optimization problems with a large search space, where candidate solutions are composed of solutions to smaller-granularity subproblems that affect the overall solution. The challenge in applying this approach is finding the significant feature of this smaller granularity that affects the overall optimization problem. In this thesis, the texture features of the smaller-granularity blocks represented in the candidate solutions affect the watermarking fitness optimization of the whole image, and the local metrics of these subproblems indicate the fitness produced for the larger problem. Another application proposed in this thesis is to embed offline signature features as an invisible watermark in passport facial captures, to be used for individual verification during border crossing. The offline signature is captured from forms signed at the border and verified against the embedded features. Individual verification thus relies on one physical biometric trait, the facial capture, and one behavioral trait, the offline signature. A sketch of the associative-memory recall underlying BMRC follows.
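
    The following sketch illustrates the recall idea behind BMRC: block solutions optimized on a few training images are stored per texture cluster and recalled for new images instead of re-optimizing. The class and method names are hypothetical, k-means stands in for the texture clustering, and BMRC's multi-resolution selection step is omitted:

```python
import numpy as np
from sklearn.cluster import KMeans

class TextureMemory:
    def __init__(self, n_clusters):
        self.kmeans = KMeans(n_clusters=n_clusters, n_init=10)
        self.solutions = {}

    def train(self, block_features, optimized_params):
        # cluster training blocks by texture and store one representative
        # embedding solution per texture cluster
        labels = self.kmeans.fit_predict(block_features)
        for k in range(self.kmeans.n_clusters):
            self.solutions[k] = optimized_params[labels == k].mean(axis=0)

    def recall(self, block_features):
        # recall stored solutions for the blocks of a new image,
        # avoiding a full re-optimization of the facial capture
        labels = self.kmeans.predict(block_features)
        return np.array([self.solutions[k] for k in labels])
```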

    Palmprint Recognition by using Bandlet, Ridgelet, Wavelet and Neural Network

    Get PDF
    Palmprint recognition has emerged as a substantial biometric-based method of personal identification. There are two types of palmprint biometric features: high-resolution features, including minutiae points, ridges, and singular points, which can be extracted for forensic applications; and low-resolution features, such as wrinkles and principal lines, which can be extracted for commercial applications. This paper uses the 700 nm spectral band of the PolyU hyperspectral palmprint database. Multiscale image transforms (bandlet, ridgelet, and 2D discrete wavelet) are applied to extract features. The dimensionality of the features is reduced using principal component analysis and linear discriminant analysis. A feed-forward back-propagation neural network is used as the classifier. The recognition accuracy shows that the bandlet transform outperforms the others.
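
    A minimal sketch of such a pipeline, assuming scikit-learn and PyWavelets are available; a 2D discrete wavelet transform stands in for the bandlet and ridgelet transforms, and scikit-learn's MLPClassifier stands in for the feed-forward back-propagation network:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def dwt_features(img, level=2):
    # flatten the coarsest approximation coefficients as the feature vector
    coeffs = pywt.wavedec2(img, 'db2', level=level)
    return coeffs[0].ravel()

# X_imgs: list of palmprint images, y: identity labels (assumed available)
# X = np.array([dwt_features(im) for im in X_imgs])
clf = make_pipeline(PCA(n_components=50),
                    LinearDiscriminantAnalysis(),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```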

    Intelligent watermarking of long streams of document images

    Get PDF
    Digital watermarking has numerous applications in the imaging domain, including (but not limited to) fingerprinting, authentication, and tampering detection. Because of the trade-off between watermark robustness and image quality, the heuristic parameters associated with digital watermarking systems need to be optimized. A common strategy to tackle this optimization formulation of digital watermarking, known as intelligent watermarking (IW), is to employ evolutionary computing (EC) to optimize these parameters for each image, at a computational cost that is infeasible for practical applications. However, in industrial applications involving streams of document images, one can expect instances of problems to reappear over time. Computational cost can therefore be saved by preserving the knowledge of previous optimization problems in a separate archive (memory) and employing that memory to speed up, or even replace, optimization for future similar problems. That is the basic principle behind the research presented in this thesis. Although similarity in the image space can lead to similarity in the problem space, there is no guarantee of this; for that reason, knowledge about the image space is not employed at all. Instead, strategies to appropriately represent, compare, store, and sample from problem instances are investigated. The objective behind these strategies is to allow for a comprehensive representation of a stream of optimization problems, in a way that avoids re-optimization whenever a previously seen problem provides solutions as good as those that would be obtained by re-optimization, but at a fraction of its cost. Another objective is to provide IW systems with a predictive capability that allows replacing costly fitness evaluations with cheaper regression models whenever re-optimization cannot be avoided. To this end, IW of streams of document images is first formulated as the problem of optimizing a stream of recurring problems, and a Dynamic Particle Swarm Optimization (DPSO) technique is proposed to tackle it. This technique is based on a two-tiered memory of static solutions. Memory solutions are re-evaluated for every new image, and the re-evaluated fitness distribution is compared with the stored fitness distribution as a means of measuring the similarity between the two problem instances (change detection), as sketched below. In simulations involving homogeneous streams of bi-tonal document images, the proposed approach resulted in a 95% decrease in computational burden with little impact on watermarking performance. Optimization cost was severely decreased by replacing re-optimizations with recalls of previously seen solutions. After that, the problem of representing the stream of optimization problems in a compact manner is addressed, so that new optimization concepts can be incorporated into previously learned concepts in an incremental fashion. The proposed strategy is based on a Gaussian Mixture Model (GMM) representation, trained with the parameter and fitness data of all intermediate (candidate) solutions of a given problem instance. GMM sampling replaces the selection of individual memory solutions during change detection. Simulation results demonstrate that such a memory of GMMs is more adaptive and can thus better tackle the optimization of embedding parameters for heterogeneous streams of document images, compared to the approach based on a memory of static solutions.
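
    A minimal sketch of the change-detection step, assuming a statistical two-sample comparison of fitness distributions; the Kolmogorov-Smirnov test used here is an assumed, plausible choice, and all names are illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

def change_detected(stored_fitness, reevaluated_fitness, alpha=0.05):
    # re-evaluated memory solutions whose fitness distribution differs
    # significantly from the stored one indicate a new problem instance
    stat, p_value = ks_2samp(stored_fitness, reevaluated_fitness)
    return p_value < alpha    # True -> distributions differ -> re-optimize

# f_new = np.array([fitness(sol, new_image) for sol in memory.solutions])
# if change_detected(memory.fitness, f_new):
#     run_dpso(new_image)            # full (re-)optimization
# else:
#     reuse_best(memory, new_image)  # recall, at a fraction of the cost
```
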
    Finally, the knowledge provided by the memory of GMMs is employed to decrease the computational cost of re-optimization. To this end, the GMM is employed in regression mode during re-optimization, replacing part of the costly fitness evaluations in a strategy known as surrogate-based optimization. Optimization is split into two levels: the first relies primarily on regression, while the second relies primarily on exact fitness values and provides a safeguard for the whole system. Simulation results demonstrate that the use of surrogates allows for better adaptation in situations involving significant variations in problem representation, as when the set of attacks employed in the fitness function changes. In general lines, the intelligent watermarking system proposed in this thesis is well adapted to the optimization of streams of recurring optimization problems. The quality of the resulting solutions for both homogeneous and heterogeneous image streams is comparable to that obtained through full optimization, but at a fraction of its computational cost. More specifically, the number of fitness evaluations is 97% smaller than that of full optimization for homogeneous streams and 95% smaller for highly heterogeneous streams of document images. The proposed method is general and can be easily adapted to other applications involving streams of recurring problems.
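
    The regression use of the GMM can be sketched as follows: a mixture is fitted on the joint (parameters, fitness) data, and the surrogate prediction is the conditional mean E[f | x]. This is standard GMM regression, offered here as an illustrative sketch (class and variable names are assumptions; the two-level safeguard described above is omitted):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMSurrogate:
    def __init__(self, n_components=5):
        self.gmm = GaussianMixture(n_components=n_components,
                                   covariance_type='full')

    def fit(self, X, f):
        # train on joint (parameters, fitness) samples from past optimizations
        self.d = X.shape[1]
        self.gmm.fit(np.hstack([X, f[:, None]]))
        return self

    def predict(self, X):
        # surrogate fitness: conditional mean of f given x under the mixture
        d, preds = self.d, []
        for x in X:
            num = den = 0.0
            for w, mu, S in zip(self.gmm.weights_, self.gmm.means_,
                                self.gmm.covariances_):
                inv = np.linalg.inv(S[:d, :d])
                diff = x - mu[:d]
                # component responsibility (shared Gaussian constant cancels)
                resp = w * np.exp(-0.5 * diff @ inv @ diff) \
                       / np.sqrt(np.linalg.det(S[:d, :d]))
                cond = mu[d] + S[d, :d] @ inv @ diff   # component conditional mean
                num += resp * cond
                den += resp
            preds.append(num / den)
        return np.array(preds)
```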

    Dynamical Systems

    Get PDF
    Complex systems are pervasive in many areas of science and are integrated into our daily lives. Examples include financial markets, highway transportation networks, telecommunication networks, world and country economies, social networks, immunological systems, living organisms, computational systems, and electrical and mechanical structures. Complex systems are often composed of a large number of interconnected and interacting entities, exhibiting much richer global-scale dynamics than the properties and behavior of their individual entities. Complex systems are studied in many areas of the natural sciences, social sciences, engineering, and mathematical sciences. This special issue therefore intends to contribute towards the dissemination of the multifaceted concepts in accepted use by the scientific community. We hope readers enjoy this pertinent selection of papers, which represents relevant examples of the state of the art in present-day research. [...]

    An Efficient Hybrid Fuzzy-Clustering Driven 3D-Modeling of Magnetic Resonance Imagery for Enhanced Brain Tumor Diagnosis

    Get PDF
    Brain tumor detection and analysis are essential in medical diagnosis. The proposed work focuses on segmenting abnormalities in axial brain MR DICOM slices, as this format holds the advantage of conserving extensive metadata. The axial slices presume that the left and right parts of the brain are symmetric about a Line of Symmetry (LOS). A semi-automated system is designed to mine normal and abnormal structures from each brain MR slice in a DICOM study. In this work, fuzzy clustering (FC) is applied to the DICOM slices to extract clusters for different values of k. The best-segmented image, the one with the highest inter-class rigidity, is then obtained using the silhouette fitness function, as sketched below. The clustered boundaries of the tissue classes are further enhanced by morphological operations. The FC technique is hybridized with standard image post-processing techniques such as marker-controlled watershed segmentation (MCW), region growing (RG), and distance-regularized level sets (DRLS). This procedure is applied to the renowned BRATS challenge dataset of different modalities and to a clinical dataset containing axial T2-weighted MR images of a patient. The sequential analysis of the slices is performed using the metadata present in the DICOM header. Validation of the segmentation procedures against ground-truth images confirms that the objects segmented by DRLS on FC-enhanced brain images attain the maximum Jaccard and Dice similarity coefficients. The average Jaccard and Dice scores for segmenting the tumor are 0.79 and 0.88 for ten patient studies of the BRATS dataset, and 0.78 and 0.86 for the clinical study, respectively. Finally, 3D visualization and tumor volume estimation are performed using the accessible DICOM information. [Funding: Ministry of Human Resource Development, India, SPARC/2018-2019/P145/SL; Tomsk Polytechnic University, Russia, RRSG/19/500]
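
    The cluster-selection step can be sketched as follows: run fuzzy c-means for several values of k, harden the memberships, and keep the k with the best silhouette. This is a self-contained illustration; the plain FCM loop and the intensity-only features are simplifying assumptions:

```python
import numpy as np
from sklearn.metrics import silhouette_score

def fcm(X, k, m=2.0, n_iter=100, eps=1e-5, seed=0):
    # plain fuzzy c-means: returns centers C and membership matrix U
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]          # update centers
        D = np.linalg.norm(X[:, None] - C[None], axis=2) + 1e-12
        W = D ** (-2.0 / (m - 1))
        U_new = W / W.sum(axis=1, keepdims=True)          # update memberships
        if np.abs(U_new - U).max() < eps:
            return C, U_new
        U = U_new
    return C, U

def best_k(X, k_range=range(2, 7)):
    # select the k whose hardened FCM partition maximizes the silhouette
    scores = {k: silhouette_score(X, fcm(X, k)[1].argmax(axis=1))
              for k in k_range}
    return max(scores, key=scores.get)

# X = mr_slice.reshape(-1, 1).astype(float)   # pixel intensities as features
# k = best_k(X)
```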

    The 5th Conference of PhD Students in Computer Science

    Get PDF

    Wireless multimedia sensor networks, security and key management

    Get PDF
    Wireless Multimedia Sensor Networks (WMSNs) have emerged and shifted the focus from typical scalar wireless sensor networks to networks with multimedia devices that are capable of retrieving video, audio, and images, as well as scalar sensor data. WMSNs are able to deliver multimedia content due to the availability of inexpensive CMOS cameras and microphones, coupled with significant progress in distributed signal processing and multimedia source coding techniques. These characteristics, challenges, and requirements of designing WMSNs open many research issues and future research directions for developing protocols, algorithms, architectures, devices, and testbeds that maximize the network lifetime while satisfying the quality-of-service requirements of the various applications. In this thesis, we outline the design challenges of WMSNs and give a comprehensive discussion of the proposed architectures and protocols for the different layers of the WMSN communication protocol stack, along with their open research issues. We also compare the existing WMSN hardware and testbeds based on their specifications and features, with a complete classification based on their functionalities and capabilities. In addition, we introduce a complete classification for content security and contextual privacy in WSNs. After conducting a complete survey of WMSNs and event privacy in sensor networks, and acquiring the necessary knowledge of programming sensor motes such as Micaz and Stargate and running simulations using NS2, our focus in this field is to: design suitable protocols that meet the challenging requirements of WMSNs, targeting especially the routing and MAC layers; secure the wireless exchange of data against external attacks using proper security algorithms for key management and secure routing; defend the network from internal attacks using a lightweight intrusion detection technique; protect contextual information from being leaked to unauthorized parties by adopting an event unobservability scheme; and evaluate the performance efficiency and energy consumption of employing these security algorithms over WMSNs.

    A Search-Based Testing Approach for Deep Reinforcement Learning Agents

    Full text link
    Deep Reinforcement Learning (DRL) algorithms have been increasingly employed during the last decade to solve various decision-making problems such as autonomous driving and robotics. However, these algorithms have faced great challenges when deployed in safety-critical environments, since they often exhibit erroneous behaviors that can lead to potentially critical errors. One way to assess the safety of DRL agents is to test them to detect possible faults leading to critical failures during their execution. This raises the question of how we can efficiently test DRL policies to ensure their correctness and adherence to safety requirements. Most existing works on testing DRL agents use adversarial attacks that perturb the states or actions of the agent. However, such attacks often lead to unrealistic states of the environment, and their main goal is to test the robustness of DRL agents rather than the compliance of agents' policies with requirements. Due to the huge state space of DRL environments, the high cost of test execution, and the black-box nature of DRL algorithms, exhaustive testing of DRL agents is impossible. In this paper, we propose a Search-based Testing Approach of Reinforcement Learning Agents (STARLA) to test the policy of a DRL agent by effectively searching for failing executions of the agent within a limited testing budget. We use machine learning models and a dedicated genetic algorithm to narrow the search towards faulty episodes. We apply STARLA on Deep Q-Learning agents, which are widely used as benchmarks, and show that it significantly outperforms Random Testing by detecting more faults related to the agent's policy. We also investigate how to extract rules that characterize faulty episodes of the DRL agent using our search results. Such rules can be used to understand the conditions under which the agent fails and thus assess its deployment risks.
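
    The search strategy can be pictured with the following hypothetical sketch: a genetic algorithm evolves candidate episodes (action sequences), an ML model's predicted fault probability serves as a cheap surrogate fitness to steer the search, and only the most promising episodes are executed against the real agent. Function names, operators, and the mutation scheme are illustrative assumptions, not STARLA's exact design:

```python
import random

def ga_search(seed_episodes, predict_fault_prob, execute,
              generations=50, pop_size=20, mut_rate=0.1):
    # seed_episodes      : list of episodes (lists of actions) to start from
    # predict_fault_prob : ML model scoring how likely an episode is faulty
    # execute            : runs an episode on the real agent, True if it fails
    pop = random.sample(seed_episodes, pop_size)
    failures = []
    for _ in range(generations):
        pop.sort(key=predict_fault_prob, reverse=True)    # surrogate fitness
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, min(len(p1), len(p2)))
            child = p1[:cut] + p2[cut:]                   # one-point crossover
            if random.random() < mut_rate:                # placeholder mutation:
                i = random.randrange(len(child))          # replace one action with
                child[i] = random.choice(child)           # another from the episode
            children.append(child)
        pop = parents + children
        if execute(pop[0]):                               # costly real execution,
            failures.append(pop[0])                       # reserved for the best episode
    return failures
```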

    Applied Metaheuristic Computing

    Get PDF
    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that impairs the capability of traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
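
    As a concrete illustration of how a metaheuristic guides low-level moves past local optimality, here is a minimal, generic simulated-annealing skeleton, one common member of the family surveyed here (the functions init, neighbor, and cost are placeholders for a problem-specific encoding):

```python
import math
import random

def simulated_annealing(init, neighbor, cost, t0=1.0, cooling=0.995,
                        n_iter=10000):
    # occasionally accept a worse neighbor with probability exp(-delta/t),
    # which lets the search escape the local optima that trap greedy methods
    x = best = init()
    t = t0
    for _ in range(n_iter):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= cooling                                   # geometric cooling schedule
    return best
```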

    Biomedical Image Processing and Classification

    Get PDF
    Biomedical image processing is an interdisciplinary field involving a variety of disciplines, e.g., electronics, computer science, physics, mathematics, physiology, and medicine. Several imaging techniques have been developed, providing many approaches to the study of the human body. Biomedical image processing is finding an increasing number of important applications in, for example, the study of the internal structure or function of an organ and the diagnosis or treatment of a disease. When associated with classification methods, it can support the development of computer-aided diagnosis (CAD) systems, which could help medical doctors refine their clinical picture.