    Improved steganalysis technique based on least significant bit using artificial neural network for MP3 files

    MP3 files are one of the most widely used digital audio formats, providing a high compression ratio with reliable quality. Their widespread use has made MP3 audio files excellent covers for carrying hidden information in audio steganography on the Internet. Emerging interest in uncovering such hidden information has opened up a field of research called steganalysis, which looks at the detection of hidden messages in a given medium. Detection accuracy in steganalysis is affected by the bit rate, sampling rate, compression rate, track size and standard of the MP3 files, as well as by the benchmark dataset. This thesis therefore proposes an effective technique for the steganalysis of MP3 audio files by deriving a combination of features from MP3 file properties. Several trials were run to select relevant features of MP3 files, such as total harmonic distortion, power spectral density, and peak signal-to-noise ratio (PSNR), for investigating the correlation between different channels of MP3 signals. The least significant bit (LSB) technique was used to detect embedded secret files in stego-objects. This involved reading the stego-objects, statistically evaluating possible locations of secret messages, and classifying these locations as having a high or low tendency of containing secret messages. A three-layer feed-forward neural network, trained with the traingdx function and with an activation function for each layer, was used. The network vector contains information about all features and is used to create a network for the given learning process. Finally, an evaluation was performed in which the ANN results were compared with previous techniques. A 97.92% accuracy rate was recorded when detecting MP3 files under 96 kbps compression. These experimental results show that the proposed approach is effective in detecting embedded information in MP3 files, with a significant improvement in detection accuracy at low embedding rates compared with previous work.
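    A minimal sketch of the classification stage described above, assuming scikit-learn rather than the thesis's MATLAB toolchain: the traingdx trainer (gradient descent with momentum and an adaptive learning rate) is approximated by scikit-learn's SGD solver, and the feature extraction is a placeholder standing in for the THD, PSD, and PSNR features. All data here are synthetic.

```python
# Hedged sketch, not the thesis code: a three-layer feed-forward
# classifier over hand-crafted MP3 features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Placeholder for THD / PSD / PSNR feature extraction."""
    return np.array([signal.mean(), signal.std(), np.abs(np.diff(signal)).mean()])

# Hypothetical dataset: one feature row per file; label 1 = stego, 0 = clean.
X = np.vstack([extract_features(rng.normal(size=4096)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(
    hidden_layer_sizes=(16,),        # input + one hidden + output = 3 layers
    solver="sgd", momentum=0.9,      # gradient descent with momentum,
    learning_rate="adaptive",        # adaptive learning rate (traingdx-like)
    learning_rate_init=0.01, max_iter=2000, random_state=0,
)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```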

    3D Reconstruction of Small Solar System Bodies using Rendered and Compressed Images

    Synthetic image generation and reconstruction of Small Solar System Bodies, and the influence of compression on both, are becoming important study topics because of the advent of small spacecraft in deep space missions. Most of these missions are fly-by scenarios, for example the Comet Interceptor mission. Due to the limited data budgets of small satellite missions, maximising scientific return requires investigating the effects of lossy compression. A preliminary simulation pipeline had been developed that uses physics-based rendering in combination with procedural terrain generation to overcome the limitations of currently used image rendering methods such as the Hapke model. The rendered Small Solar System Body images are combined with a star background and photometrically calibrated to represent realistic imagery. Subsequently, a Structure-from-Motion pipeline reconstructs three-dimensional models from the rendered images. In this work, the preliminary simulation pipeline was developed further into the Space Imaging Simulator for Proximity Operations software package, and a compression package was added. The compression package was used to investigate the effects of lossy compression on the reconstructed models and the data reduction achievable with lossy compression relative to lossless compression. Several scenarios, with fly-by distances ranging from 50 km to 400 km and body sizes of 1 km and 10 km, were simulated and compressed losslessly and at several quality levels of lossy compression, using PNG and JPEG 2000 respectively. It was found that low compression ratios introduce artefacts resembling random noise, while high compression ratios remove surface features. The random-noise artefacts introduced by low compression ratios frequently increased the number of vertices and faces of the reconstructed three-dimensional model.
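    A minimal sketch of the lossless-versus-lossy size comparison, under stated assumptions: Pillow with the OpenJPEG codec for JPEG 2000 output, a synthetic stand-in for a rendered body image, and invented target rates. The actual pipeline compresses photometrically calibrated renderings and then feeds them to Structure-from-Motion.

```python
# Compare PNG (lossless) against JPEG 2000 at several target
# compression ratios and report the encoded sizes.
import io
import numpy as np
from PIL import Image

yy, xx = np.mgrid[0:512, 0:512]
surface = 127 + 100 * np.sin(xx / 40.0) * np.cos(yy / 40.0)  # smooth fake terrain
frame = Image.fromarray(surface.astype(np.uint8))

def encoded_size(image: Image.Image, fmt: str, **params) -> int:
    buf = io.BytesIO()
    image.save(buf, format=fmt, **params)
    return buf.tell()

raw = 512 * 512  # bytes of uncompressed 8-bit pixels
png = encoded_size(frame, "PNG", optimize=True)
print(f"PNG (lossless): {png} B, ratio {raw / png:.1f}:1")

for rate in (10, 40, 160):  # hypothetical quality levels
    j2k = encoded_size(frame, "JPEG2000", quality_mode="rates", quality_layers=[rate])
    print(f"JPEG 2000 @ {rate}:1 target: {j2k} B, ratio {raw / j2k:.1f}:1")
```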

    The Little Photometer That Could: Technical Challenges and Science Results from the Kepler Mission

    The Kepler spacecraft launched on March 7, 2009, initiating NASA's first search for Earth-size planets orbiting Sun-like stars. Since launch, Kepler has announced the discovery of 17 exoplanets, including a system of six transiting a Sun-like star, Kepler-11, and the first confirmed rocky planet, Kepler-10b, with a radius 1.4 times that of Earth. Kepler is proving to be a cornucopia of discoveries: it has identified over 1200 candidate planets based on the first 120 days of observations, including 54 that are in or near the habitable zone of their stars and 68 that are 1.2 Earth radii or smaller. An astounding 408 of these planetary candidates are found in 170 multiple systems, demonstrating the compactness and flatness of planetary systems composed of small planets. Never before has there been a photometer capable of reaching a precision near 20 ppm in 6.5 hours while conducting nearly continuous, uninterrupted observations for months to years. In addition to exoplanets, Kepler is providing a wealth of astrophysics and is revolutionizing the field of asteroseismology. Designing and building the Kepler photometer, and the software systems that process and analyze the resulting data to make the discoveries, presented a daunting set of challenges, including how to manage the large data volume. The challenges continue into flight operations, as the photometer is sensitive to its thermal environment, complicating the task of detecting the 84 ppm drops in brightness corresponding to Earth-size planets transiting Sun-like stars.
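    The 84 ppm figure follows directly from transit geometry: the fractional drop in stellar flux equals the squared ratio of planetary to stellar radius. A quick check:

```python
# Transit depth of an Earth-size planet crossing a Sun-like star.
R_earth = 6.371e6  # m
R_sun = 6.957e8    # m

depth = (R_earth / R_sun) ** 2
print(f"transit depth: {depth * 1e6:.0f} ppm")  # ~84 ppm
```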

    Efficient implementations of machine vision algorithms using a dynamically typed programming language

    Current machine vision systems (or at least their performance-critical parts) are predominantly implemented using statically typed programming languages such as C, C++, or Java. Statically typed languages, however, are unsuitable for the development and maintenance of large-scale systems. When choosing a programming language, dynamically typed languages are usually not considered, due to their lack of support for high-performance array operations. This thesis presents efficient implementations of machine vision algorithms in the (dynamically typed) Ruby programming language. Ruby was chosen because it has the best support for meta-programming among the currently popular programming languages. Nevertheless, the approach presented in this thesis could be applied to any programming language with equal or stronger support for meta-programming (e.g. Racket, formerly PLT Scheme). A Ruby library for performing I/O and array operations was developed as part of this thesis. It is demonstrated how the library facilitates concise implementations of machine vision algorithms commonly used in industrial automation. That is, this thesis is about a different way of implementing machine vision systems. The work could be applied to prototype, and in some cases implement, machine vision systems in industrial automation and robotics. The development of real-time machine vision software is facilitated as follows: 1. A JIT compiler is used to achieve real-time performance; it is demonstrated that the Ruby syntax is sufficient to integrate the JIT compiler transparently. 2. Various I/O devices are integrated for seamless acquisition, display, and storage of video and audio data. In combination, these two developments preserve the expressiveness of the Ruby programming language while providing good run-time performance of the resulting implementation. To validate this approach, the performance of different operations is compared with the performance of equivalent C/C++ programs.
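    An illustrative analogue, not the thesis's Ruby library: the same strategy of pairing a dynamically typed language with a JIT compiler for real-time array operations, sketched in Python with Numba. The 3x3 grey-scale erosion stands in for the industrial machine-vision primitives the library targets.

```python
# Naive per-pixel loop, compiled to native code on first call.
import numpy as np
from numba import njit

@njit(cache=True)
def erode3x3(img):
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            m = img[y, x]
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy < h and 0 <= xx and xx < w and img[yy, xx] < m:
                        m = img[yy, xx]
            out[y, x] = m
    return out

frame = np.random.randint(0, 256, (480, 640)).astype(np.uint8)
eroded = erode3x3(frame)  # compiled once, then runs at near-C speed
```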

    Design and characterization of downconversion mixers and the on-chip calibration techniques for monolithic direct conversion radio receivers

    This thesis consists of eight publications and an overview of the research topic, which also serves as a summary of the work. The research described in this thesis focuses on the design of downconversion mixers and direct conversion radio receivers for the UTRA/FDD WCDMA and GSM standards. The main interest of the work is in the 1-3 GHz frequency range and in silicon and silicon-germanium BiCMOS technologies. The RF front-end, and especially the mixer, limits the performance of the direct conversion architecture. The most stringent problems involve second-order distortion in the mixers, to which special attention has been given. The work introduces calibration techniques to overcome these problems. Some design considerations for front-end radio receivers are also given through a mixer-centric approach. The work summarizes the design of several downconversion mixers. Three of the implemented mixers are integrated as the downconversion stages of larger direct conversion receiver chips. One is realized together with the LNA as an RF front-end. Some stand-alone structures have also been characterized. Two of the mixers integrated into complete analog receivers include calibration structures to improve the second-order intermodulation rejection. A theoretical mismatch analysis of second-order distortion in the mixers is also presented in this thesis. It gives a comprehensive illustration of second-order distortion in mixers, as well as the relationship between dc-offsets and high IIP2. In addition, circuit and layout techniques to improve the LO-to-RF isolation are discussed. The presented work provides insight into how mixer immunity against second-order distortion can be improved. The implemented calibration structures show promising performance. On the basis of these results, several methods of detecting the distortion on-chip, and the possibilities of integrating automatic on-chip calibration procedures to produce a repeatable and well-predictable receiver IIP2, are presented.
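    For context on the IIP2 figure of merit: in a standard two-tone test the second-order intermodulation product rises 2 dB per 1 dB of input power, so the intercept point can be extrapolated from a single measurement. A back-of-envelope sketch with invented numbers:

```python
# IIP2 from a two-tone test: with per-tone input power Pin (dBm) and
# the input-referred IM2 product at Pim2 (dBm), IIP2 = 2*Pin - Pim2.
def iip2_dbm(p_in_dbm: float, p_im2_in_dbm: float) -> float:
    return 2 * p_in_dbm - p_im2_in_dbm

# Example: -30 dBm tones, IM2 product at -95 dBm referred to the input.
print(iip2_dbm(-30.0, -95.0), "dBm")  # -> 35.0 dBm
```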

    Engineering systematic musicology : methods and services for computational and empirical music research

    One of the main research questions of systematic musicology is concerned with how people make sense of their musical environment. It is concerned with signification and meaning-formation, and relates musical structures to effects of music. These fundamental aspects can be approached from many different directions. One could take a cultural perspective, where music is considered a phenomenon of human expression, firmly embedded in tradition. Another approach would be a cognitive perspective, where music is considered an acoustical signal whose perception involves categorizations linked to representations and learning. A performance perspective, where music is the outcome of human interaction, is an equally valid view. To understand a phenomenon, combining multiple perspectives often makes sense. The methods employed within each of these approaches turn questions into concrete musicological research projects. It is safe to say that today many of these methods draw upon digital data and tools. Some of these general methods are feature extraction from audio and movement signals, machine learning, classification, and statistics. The problem, however, is that the empirical and computational methods very often require technical solutions beyond the skills of researchers who typically have a humanities background. At that point, these researchers need access to specialized technical knowledge to advance their research. My PhD work should be seen within the context of that tradition. In many respects I adopt a problem-solving attitude to problems posed by research in systematic musicology. This work explores solutions that are relevant for systematic musicology. It does this by engineering solutions for measurement problems in empirical research and by developing research software which facilitates computational research. These solutions are placed in an engineering-humanities plane. The first axis of the plane contrasts services with methods. Methods in systematic musicology propose ways to generate new insights into music-related phenomena, or contribute to how research can be done. Services for systematic musicology, on the other hand, support or automate research tasks, which allows the scope of research to change. A shift in scope allows researchers to cope with larger data sets, which offers a broader view of the phenomenon. The second axis indicates how important Music Information Retrieval (MIR) techniques are in a solution. MIR techniques are contrasted with various techniques to support empirical research. My research resulted in a total of thirteen solutions, which are placed in this plane. Descriptions of seven of these are bundled in this dissertation. Three fall into the methods category and four into the services category. For example, Tarsos presents a method to compare performance practice with theoretical scales on a large scale, while SyncSink is an example of a service.
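    A minimal sketch of the idea behind a synchronisation service such as SyncSink, under simplifying assumptions: the time offset between two recordings of the same event is estimated from the peak of their cross-correlation. The real service uses acoustic fingerprints for robustness; plain correlation on synthetic audio suffices to show the principle.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 8000                          # sample rate (Hz)
rng = np.random.default_rng(2)
ref = rng.normal(size=fs)          # one second of "reference" audio
lag_true = 1234                    # the second stream starts 1234 samples later
other = np.concatenate([np.zeros(lag_true), ref])[:fs]

corr = correlate(other, ref, mode="full")
lags = correlation_lags(len(other), len(ref), mode="full")
offset = lags[np.argmax(corr)]
print(f"estimated offset: {offset / fs * 1000:.1f} ms")  # ~154.3 ms
```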

    Markov bidirectional transfer matrix for detecting LSB speech steganography with low embedding rates

    Steganalysis at low embedding rates is still a challenge in the field of information hiding. Speech signals are typically processed by wavelet packet decomposition, which is capable of depicting the details of signals with high accuracy. A steganography detection algorithm is proposed, based on the Markov bidirectional transition matrix (MBTM) of the wavelet packet coefficients (WPC) of the second-order derivative of the speech signal. On the basis of the MBTM feature, which better expresses the correlation of the WPC, a Support Vector Machine (SVM) classifier is trained on a large amount of Least Significant Bit (LSB) hidden data with embedding rates of 1%, 3%, 5%, 8%, 10%, 30%, 50%, and 80%. LSB matching steganalysis of speech signals at low embedding rates is thereby achieved. The experimental results show that the proposed method is clearly superior for steganalysis at low embedding rates compared with the classic method using histogram moment features in the frequency domain (HMIFD) of the second-order derivative-based WPC and second-order derivative-based Mel-frequency cepstral coefficients (MFCC). In particular, when the embedding rate is only 3%, the accuracy improves by 17.8%, reaching 68.5%, in comparison with the method using HMIFD features of the second-derivative WPC. The detection accuracy improves as the embedding rate increases.
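    A simplified sketch of the feature-plus-classifier pipeline, with loudly labelled substitutions: a plain forward Markov transition matrix over quantised differences of a raw synthetic signal stands in for the bidirectional matrix over second-derivative wavelet packet coefficients, and the stego/clean labels are random placeholders.

```python
import numpy as np
from sklearn.svm import SVC

T = 3  # clip quantised differences to [-T, T]

def markov_features(signal: np.ndarray) -> np.ndarray:
    d = np.clip(np.round(np.diff(signal)).astype(int), -T, T) + T
    m = np.zeros((2 * T + 1, 2 * T + 1))
    for a, b in zip(d[:-1], d[1:]):   # count transitions a -> b
        m[a, b] += 1
    m /= max(m.sum(), 1.0)            # normalise to transition probabilities
    return m.ravel()

rng = np.random.default_rng(3)
X = np.vstack([markov_features(rng.normal(scale=2, size=2048)) for _ in range(100)])
y = rng.integers(0, 2, size=100)      # hypothetical stego/clean labels

clf = SVC(kernel="rbf").fit(X[:80], y[:80])
print("held-out accuracy:", clf.score(X[80:], y[80:]))
```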

    A novel concept for a fully digital particle detector

    Silicon sensors are the most widespread position-sensitive devices in particle physics experiments and in countless applications in science and technology. They have shown spectacular progress in performance over the almost 40 years since their first introduction, but their evolution is now slowing down. The position resolution for single particle hits is no better than a few microns in the most advanced sensors, a value already reached over 30 years ago [1]. The minimum ionising path length a sensor can detect is several tens of microns. There are fundamental reasons why these limits will not be substantially improved by further refinements of the current technology. This makes silicon sensors unsuitable for applications where the physics signature is the short path of a recoiling atom, and constrains the layout of physics experiments where they represent by far the best option, such as high-energy physics collider experiments. Looking ahead, the availability of sensors with sub-micron spatial resolution, on the order of a few tens of nanometres, would be a disruptive change for sensor technology, with a foreseeably huge impact on experiment layouts and on the various applications of these devices. To provide such a leap in resolution, we propose a novel design based on a purely digital circuit. This disruptive concept potentially enables pixel sizes much smaller than 1 μm² and brings a number of advantages in terms of power consumption, readout speed, and reduced thickness (for low-mass sensors).

    A Scintillator-Based Range Telescope for Particle Beam Radiotherapy

    Particle beam therapy (PBT) is a form of radiation therapy used for cancer treatment. Recently, interest in scintillator-based detectors for the measurement of depth-dose curves of therapeutic particle beams has been growing. In this work, a novel range telescope based on plastic scintillator and read out by a large-scale CMOS image sensor is presented. The detector is made of a stack of 49 plastic scintillator sheets with a thickness of 2–3 mm and a transverse area of 100 × 100 mm². A novel Bragg curve model that incorporates scintillator quenching effects was developed for beam range reconstruction from depth-light curves with low depth resolution. Measurements to characterise the performance of the detector were carried out at three different PBT centres across Europe. The maximum difference between the measured proton range and the reference range was found to be 0.46 mm. An evaluation of the radiation hardness proved that the range reconstruction algorithm remains robust after the deposition of a 6,300 Gy peak dose in the detector. Variations in the beam spot size, the transverse beam position, and the beam intensity were shown to have a negligible effect on the range reconstruction accuracy. Range measurements of helium, carbon, and oxygen ion beams were also performed. A novel technique for online range verification based on a mixed helium/carbon ion beam and the range telescope was investigated. The modulation of the helium beam range by a 1 mm air gap in a plastic phantom, affecting less than a quarter of the beam particles, was detected, demonstrating the outstanding sensitivity of the mixed-beam technique. Using two anthropomorphic pelvis phantoms, it was shown that small rotations of the phantom, as well as simulated bowel gas movements, cause detectable range changes. The future prospects and limitations of helium-carbon mixing, as well as its technical feasibility, are discussed.
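    A minimal sketch of the kind of model involved, with assumptions labelled: a proton Bragg curve from the Bragg-Kleeman rule combined with Birks-law light quenching, sampled at the coarse sheet pitch of the stack. The paper's actual model and fit procedure are not reproduced; the constants below are typical literature values, and the stopping power is left in arbitrary units.

```python
import numpy as np

p = 1.77     # Bragg-Kleeman exponent for protons in water (approx.)
kB = 0.13    # Birks constant for plastic scintillator (schematic units)
R = 155.0    # beam range in mm (hypothetical)

z = np.arange(0.0, R, 2.5)                  # sheet positions, ~2.5 mm pitch
dEdz = 1.0 / (p * (R - z) ** (1 - 1 / p))   # Bragg curve shape, arb. units
light = dEdz / (1 + kB * dEdz)              # quenched scintillation light

print("light peaks in sheet", int(np.argmax(light)), "of", len(z))
```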