
    Early and accurate detection of melanoma skin cancer using hybrid level set approach

    Digital dermoscopy is used to identify cancer in skin lesions, and sun exposure is one of the leading causes of melanoma. Distinguishing healthy skin from malignant lesions is crucial in computerised lesion detection and classification, since lesion segmentation influences classification accuracy and precision. This study introduces a novel way of classifying lesions. Dermoscopic images are often degraded by artefacts such as hairs, gel, bubbles and specular reflection; an improved level set method is employed in an innovative procedure for detecting and removing hairs. The lesion is distinguished from the surrounding skin by an adaptive sigmoidal function that takes the severity of localised lesions into account. The article proposes an improved technique for separating a lesion from the surrounding tissue, followed by feature selection and classification, which achieved 94.40% accuracy and a 93% success rate. The findings indicate that choosing the best feature selection and classification methods can produce more accurate predictions before and during treatment. When the recommended strategy is tested on the Melanoma Skin Cancer Dataset, it outperforms the alternatives.
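
    As a rough illustration of the adaptive sigmoidal separation mentioned above, the sketch below maps each pixel through a sigmoid centred on its local mean intensity; the window size, gain k, and the use of the local mean as the midpoint are illustrative assumptions, not the paper's exact formulation.

        # Minimal sketch of an adaptive sigmoidal lesion/skin separation (illustrative only).
        # Assumes a grayscale dermoscopy image scaled to [0, 1]; window size and gain k
        # are placeholder parameters, not those of the paper.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def adaptive_sigmoid_map(image, window=31, k=12.0):
            """Sigmoid response centred on each pixel's local mean intensity."""
            local_mean = uniform_filter(image, size=window)            # adaptive midpoint
            return 1.0 / (1.0 + np.exp(-k * (local_mean - image)))     # darker than surroundings -> ~1

        def lesion_mask(image, threshold=0.5):
            """Binary lesion mask from the sigmoid response."""
            return adaptive_sigmoid_map(image) > threshold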

    Application of advanced fluorescence microscopy and spectroscopy in live-cell imaging

    Since its inception, fluorescence microscopy has been a key source of discoveries in cell biology. Advancements in fluorophores, labeling techniques and instrumentation have made fluorescence microscopy a versatile quantitative tool for studying dynamic processes and interactions both in vitro and in live cells. In this thesis, I apply quantitative fluorescence microscopy techniques in live-cell environments to investigate several biological processes. To study Gag processing in HIV-1 particles, fluorescence lifetime imaging microscopy and single particle tracking are combined to follow nascent HIV-1 virus particles during assembly and release on the plasma membrane of living cells. Proteolytic release of eCFP embedded in the Gag lattice of immature HIV-1 virus particles results in a characteristic increase in its fluorescence lifetime. Gag processing and rearrangement can be detected in individual virus particles using this approach. In another project, a robust method for quantifying Förster resonance energy transfer (FRET) in live cells is developed to allow direct comparison of live-cell FRET experiments between laboratories. Finally, I apply image fluctuation spectroscopy to study protein behavior in a variety of cellular environments. Image cross-correlation spectroscopy is used to study the oligomerization of CXCR4, a G-protein coupled receptor, on the plasma membrane. With raster image correlation spectroscopy, I measure the diffusion of histones in the nucleoplasm and heterochromatin domains of the nuclei of early mouse embryos. The lower diffusion coefficient of histones in the heterochromatin domain supports the conclusion that heterochromatin forms a liquid phase-separated domain. The wide range of topics covered in this thesis demonstrates that fluorescence microscopy is not just an imaging tool but also a powerful instrument for the quantification and elucidation of dynamic cellular processes.
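
    The lifetime changes described above are conventionally interpreted through the standard lifetime-based FRET relation E = 1 - tau_DA / tau_D; the sketch below applies it to a FLIM image, assuming single-exponential donor decays and a separately measured unquenched eCFP lifetime (both assumptions for illustration, not the thesis's exact analysis pipeline).

        # Minimal sketch of lifetime-based FRET quantification (standard relation only).
        # Assumes single-exponential donor decays; tau_donor_only is the unquenched
        # eCFP lifetime measured in a donor-only control (illustrative values below).
        import numpy as np

        def fret_efficiency(tau_da, tau_donor_only):
            """E = 1 - tau_DA / tau_D: shorter (quenched) donor lifetimes give higher FRET."""
            return 1.0 - np.asarray(tau_da) / tau_donor_only

        tau_image = np.array([[2.1, 2.4], [3.0, 3.1]])   # lifetimes in ns (made-up values)
        print(fret_efficiency(tau_image, tau_donor_only=3.2))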

    An improved image steganography scheme based on distinction grade value and secret message encryption

    Steganography is an emerging and greatly demanded technique for secure information communication over the internet using a secret cover object. It can be used for a wide range of applications such as the safe circulation of secret data in intelligence, industry, health care, habitat, online voting, mobile banking and the military. Commonly, digital images are used as covers for steganography owing to the redundancy in their representation, which keeps the embedded data hidden from intruders, hackers, adversaries and unauthorized users. Still, any steganography system launched over the Internet can be cracked once the stego cover is recognised. Thus, undetectability, which involves data imperceptibility or concealment and security, is the most significant trait of any steganography system. Presently, the design and development of an effective image steganography system face several challenges, including low capacity, poor robustness and poor imperceptibility. To surmount such limitations, it is important to improve the capacity and security of the steganography system while maintaining a high peak signal-to-noise ratio (PSNR). Based on these factors, this study aimed to design and develop a distinction grade value (DGV) method to effectively embed secret data into a cover image and achieve a robust steganography scheme. The design and implementation of the proposed scheme involved three phases. First, a new encryption method called shuffle the segments of secret message (SSSM) was incorporated with an enhanced Huffman compression algorithm to improve the text security and payload capacity of the scheme. Second, a Fibonacci-based image transformation decomposition method was used to extend the pixel bit-depth from 8 to 12 bits to improve the robustness of the scheme. Third, an improved embedding method was utilized, integrating random block/pixel selection with the DGV and implicit secret key generation, to enhance the imperceptibility of the scheme. The performance of the proposed scheme was assessed experimentally in terms of imperceptibility, security, robustness and capacity. The standard USC-SIPI image dataset was used as the benchmark for the performance evaluation and for comparison of the proposed scheme with previous works. The resistance of the proposed scheme was tested against statistical, chi-square (X2), histogram and non-structural steganalysis detection attacks. The obtained PSNR values revealed that the proposed DGV scheme achieved higher imperceptibility and security, as well as higher capacity, compared to previous works. In short, the proposed steganography scheme outperformed the commercially available data hiding schemes, thereby resolving the existing issues.
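
    Imperceptibility in schemes like this is commonly reported with PSNR; the sketch below is the standard definition of PSNR between a cover and a stego image, not the DGV implementation itself.

        # Minimal sketch of the PSNR imperceptibility measure (standard definition).
        import numpy as np

        def psnr(cover, stego, max_value=255.0):
            """Peak signal-to-noise ratio in dB between cover and stego images."""
            mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
            if mse == 0:
                return float("inf")                        # identical images
            return 10.0 * np.log10((max_value ** 2) / mse)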

    Triple scheme based on image steganography to improve imperceptibility and security

    A foremost priority in the information technology and communication era is achieving an effective and secure steganography scheme for information hiding. Commonly, digital images are used as the cover for steganography owing to the redundancy in their representation, which keeps the embedded data hidden from intruders. Nevertheless, any steganography system launched over the internet can be attacked once the stego cover is recognised. Presently, the design and development of an effective image steganography system face several challenging issues, including low capacity, poor security, and poor imperceptibility. Towards overcoming these issues, a new decomposition scheme was proposed for image steganography with a new approach known as the Triple Number Approach (TNA). In this study, three main stages were used to achieve the objectives and overcome the issues of image steganography, beginning with image and text preparation, followed by embedding and culminating in extraction. Finally, the evaluation stage employed several evaluations in order to benchmark the results. This study presented several contributions. The first contribution was a Triple Text Coding Method (TTCM), related to the preparation of secret messages prior to the embedding process. The second contribution was a Triple Embedding Method (TEM), related to the embedding process. The third contribution was related to security criteria and was based on a new partitioning of an image known as the Image Partitioning Method (IPM). The IPM proposed random pixel selection based on partitioning the image into three phases with three iterations of the Hénon map function. An enhanced Huffman coding algorithm was utilized to compress the secret message before the TTCM process. A standard dataset from the Signal and Image Processing Institute (SIPI) containing color and grayscale images of 512 x 512 pixels was utilised in this study. Different parameters were used to test the performance of the proposed scheme in terms of security and imperceptibility (image quality). For image quality, four important measurements were used: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Mean Square Error (MSE) and histogram analysis. For security, two measurements were used: Human Visual System (HVS) and chi-square (X2) attacks. In terms of PSNR and SSIM, the results obtained for the Lena grayscale image were 78.09 dB and 1, respectively. Meanwhile, the HVS and X2 attack evaluations yielded strong results compared to the existing schemes in the literature. Based on the findings, the proposed scheme gives evidence of increased capacity, imperceptibility, and security, overcoming the existing issues.
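
    The Hénon-map-driven pixel selection mentioned above can be illustrated with the sketch below; the classical parameters a = 1.4 and b = 0.3, the initial values, and the folding of chaotic values into pixel indices are assumptions for illustration, not the scheme's actual key schedule.

        # Minimal sketch of pseudo-random pixel ordering driven by the Hénon map
        # (illustrative; parameters and index mapping are placeholder choices).
        def henon_pixel_order(n_pixels, x0=0.1, y0=0.3, a=1.4, b=0.3):
            """Generate a key-dependent ordering of pixel indices for embedding."""
            x, y = x0, y0
            order, seen = [], set()
            while len(order) < n_pixels:
                x, y = 1.0 - a * x * x + y, b * x       # Hénon map iteration
                idx = int(abs(x) * 1e6) % n_pixels      # fold the chaotic value into an index
                if idx not in seen:                     # skip repeated indices
                    seen.add(idx)
                    order.append(idx)
            return order

        print(henon_pixel_order(10))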

    Characterization of digital film scanner systems for use with digital scene algorithms

    Digital film scanners have been used in the photographic industry for more than a decade. The existence of digital image data has made possible the use of computer-based scene enhancement algorithms to improve image quality. These algorithms are usually device-dependent, functioning properly only for data generated by one scanner system. The complexity of most enhancement algorithms makes them costly to develop, so device-independent scene enhancement algorithms would be valuable. It is possible to compute mathematical transformations that convert scanner data to a device-independent space, and the data produced by these transformations should serve as the input for device-independent enhancement algorithms. A study to determine scanner data space transformations was performed. This study evaluated a subset of operational characteristics for three film scanners, and these characteristics were used to determine transformations for converting between scanner data spaces. The results were used as part of a system prototype to test the performance of the scanner data space transformations.
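
    One simple way to realise such a transformation is a linear model fitted to measured calibration patches; the sketch below assumes a 3x3 matrix mapping scanner RGB to a device-independent space (e.g., CIE XYZ) fitted by least squares, an illustrative choice rather than the method actually used in the thesis.

        # Minimal sketch: fit a 3x3 scanner-to-device-independent transform by least squares.
        # scanner_rgb and reference_xyz are (N, 3) arrays of calibration patch measurements.
        import numpy as np

        def fit_scanner_transform(scanner_rgb, reference_xyz):
            """Fit M so that reference_xyz ~= scanner_rgb @ M.T."""
            M_T, *_ = np.linalg.lstsq(scanner_rgb, reference_xyz, rcond=None)
            return M_T.T

        def to_device_independent(scanner_rgb, M):
            """Apply the fitted transform to new scanner data."""
            return scanner_rgb @ M.T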

    Eigenvalues and low energy eigenvectors of quantum many-body systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mathematics, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 211-221). I first give an overview of the thesis and the Matrix Product State (MPS) representation of quantum spin systems on a line, with an improvement on the notation. The rest of this thesis is divided into two parts. The first part is devoted to eigenvalues of quantum many-body systems (QMBS). I introduce Isotropic Entanglement (IE) and show that the eigenvalue distribution of QMBS with generic interactions can be accurately obtained using IE. Next, I discuss the eigenvalue distribution of a one-particle hopping random Schrödinger operator in one dimension from free probability theory in the context of the Anderson model. The second part is devoted to ground states and gaps of QMBS. I first give the necessary background on frustration-free Hamiltonians, real and imaginary time evolution of quantum spin systems on a line within the MPS representation, and the numerical implementation. I then prove the degeneracy and unfrustration condition for quantum spin chains with generic local interactions. Following this, I summarize my efforts in proving lower bounds for the entanglement of the ground states, which include partial results, with the hope that they will inspire future work resulting in a solution of the given conjecture. Next I discuss two interesting measure-zero examples where the Hamiltonians are carefully constructed to give unique ground states with high entanglement. This includes exact calculations of Schmidt numbers and entanglement entropies, and a novel technique for calculating the gap. The last chapter elaborates on one of the measure-zero examples (d = 3), which is the first example of a frustration-free translation-invariant spin-1 chain that has a unique highly entangled ground state and exhibits signatures of critical behavior. By Ramis Movassagh, Ph.D.
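
    For reference, the Matrix Product State representation mentioned above has, in its standard textbook form for a chain of N spins with open boundaries (the thesis uses an improved notation), the expression below, where each A^{[i]}_{s_i} is a matrix of dimension at most D (the bond dimension) and the boundary tensors are vectors:

        |\psi\rangle = \sum_{s_1,\dots,s_N} A^{[1]}_{s_1} A^{[2]}_{s_2} \cdots A^{[N]}_{s_N} \, |s_1 s_2 \cdots s_N\rangle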

    Probing star formation and radio activity using faint galaxy redshift surveys

    In this thesis, we study the evolution of the radio luminosity functions (RLFs) of AGN and star-forming galaxies (SFGs), the colour-magnitude distributions of radio and X-ray sources at redshift z ~ 1, and the star formation rate density in dwarf galaxies at z ~ 1, and we investigate downsizing. In chapter 1 we give the background to our studies. We describe the Big Bang model before going on to examine different types of galaxies, looking at their star formation rates and the variation of their properties with their environments. We summarise the elements of modern astronomical methodology used throughout this thesis in chapter 2. In this chapter we describe the methods of measuring star formation rates, galaxy environments and luminosity functions. In chapter 3 we match the AEGIS20 radio survey to the DEEP2 optical spectroscopic survey in the Extended Groth Strip (EGS) to create a sample of radio-emitting galaxies that we separate into AGN and SFGs. We derive the RLFs of each of these in two redshift intervals and measure their evolution out to z ~ 1. We also compare the colour-magnitude distribution of the radio sources to that of the general galaxy population at this redshift and compare both to their local-Universe equivalents. We find the evolution of the RLFs to be consistent with pure luminosity evolution of the form L ∝ (1 + z)^a, where a = 1.0 ± 0.9 for the AGN and a = 3.7 ± 0.3 for the SFGs. We analyse the variation of these radio sources' properties with their environments in chapter 4. Using the projected n-th nearest neighbour method to estimate the density of the environments, we find a strong trend of SFG numbers dropping with density. The final science chapter is chapter 5, in which we describe the Redshift One LDSS3 Emission-line Survey (ROLES). This survey targets the [OII] emission line in dwarf galaxies with log(M_*/M_⊙) < 9.5. We convert the [OII] luminosity to a star formation rate (SFR) and then proceed to analyse the mass dependence of the global star formation rate at redshift z ~ 1. We find that the SFR turns over with stellar mass at this redshift. By also comparing to similar studies in the local Universe, we investigate the empirical "downsizing" picture of galaxy evolution. Finally, we present our conclusions and suggestions for future work in chapter 6.
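
    The projected n-th nearest-neighbour estimator referred to in chapter 4 is conventionally Sigma_n = n / (pi * d_n^2); the sketch below evaluates it on a small, flat patch of sky, assuming projected positions in consistent angular units and ignoring survey edges and redshift slicing (simplifications not made in the thesis).

        # Minimal sketch of the projected n-th nearest-neighbour surface density,
        # Sigma_n = n / (pi * d_n^2); positions are assumed to lie on a flat patch.
        import numpy as np

        def nth_neighbour_density(x, y, n=3):
            """Surface density at each galaxy from its n-th nearest-neighbour distance."""
            pts = np.column_stack([x, y])
            d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))  # pairwise distances
            d.sort(axis=1)                  # column 0 is the self-distance (0)
            d_n = d[:, n]                   # distance to the n-th nearest neighbour
            return n / (np.pi * d_n ** 2)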

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data travelling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem serving various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is designed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.
