320 research outputs found

    Comparison of file sanitization techniques in USB based on average file entropy values

    Get PDF
    Technology has advanced to the point that digital devices such as smartphones and laptops offer large amounts of storage, letting people keep contact lists, photos, videos, and other personal information. When this information is no longer useful, users delete it. However, advances in technology also make it possible to recover data that has been deleted, and users often do not realise that their deleted data can be recovered and then used by an unauthorized party: the deleted data is invisible but not gone. This is where file sanitization plays its role. File sanitization is the process of overwriting a file's contents with different characters so that the original data cannot be recovered. In this research, the techniques chosen to sanitize files are Write Zero, Write Zero Randomly, and Write Zero Alternately, all of which overwrite data with zeros. The best technique is chosen by comparing the average entropy values of the files after they have been overwritten. Write Zero is the only technique provided by existing software such as WipeFile and BitKiller; no software provides the Write Zero Randomly technique, except for disk sanitization using dd. Therefore, Write Zero Randomly and the proposed technique, Write Zero Alternately, were developed in the C programming language using Dev-C++. In this research, sanitization with Write Zero gave the lowest average entropy value for text documents (TXT), Microsoft Word documents (DOCX), and images (JPG), with 100% of the data in the files zero-filled, compared with Write Zero Randomly and Write Zero Alternately. Next, Write Zero Alternately was more efficient in terms of average entropy, at 4.64 bpB, than its closest competitor, Write Zero Randomly, at 5.02 bpB. This shows that Write Zero is the best sanitization technique. These file sanitization techniques are important for maintaining confidentiality against unauthorized users
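    The comparison above rests on Shannon entropy measured in bits per byte (bpB): a fully zero-filled file scores 0 bpB, while near-random content approaches the maximum of 8 bpB. As a minimal sketch (the research's own C implementation is not reproduced here), the metric and the basic Write Zero overwrite can be expressed as:

```python
import math
from collections import Counter

def average_entropy_bpb(data: bytes) -> float:
    """Shannon entropy of a byte sequence, in bits per byte (bpB)."""
    if not data:
        return 0.0
    n = len(data)
    # Sum -p * log2(p) over the empirical byte distribution.
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def write_zero(data: bytes) -> bytes:
    """Write Zero: overwrite every byte with zero."""
    return bytes(len(data))
```

A zero-filled buffer has a single byte value, so its entropy is exactly 0 bpB; a buffer containing all 256 byte values equally often reaches 8 bpB, which is why lower post-overwrite entropy indicates more thorough sanitization.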

    Investigation and analysis of time codes: Final report

    Get PDF
    Optimal time code system using correlation detection technique

    Design and Evaluation of Stochastic Processes as Physical Radar Waveforms

    Get PDF
    Recent advances in waveform generation and in computational power have enabled the design and implementation of new complex radar waveforms. Still, despite these advances, in a waveform-agile mode, where the radar transmits a unique waveform for every pulse or a nonrepeating signal continuously, effective operation can be difficult due to the waveform design requirements. In general, for radar waveforms to be both useful and physically robust, they must achieve good autocorrelation sidelobes, be spectrally contained, and possess a constant amplitude envelope for high-power operation. Meeting these design goals represents a tremendous computational overhead that can easily impede real-time operation and the overall effectiveness of the radar. This work addresses this concern in the context of random FM (RFM) waveforms, which have been demonstrated in recent years, in both simulation and experiment, to achieve low autocorrelation sidelobes through the high dimensionality of coherent integration when operating in a waveform-agile mode. However, while they are effective, the approaches used to design these waveforms require optimization of each individual waveform, making them subject to costly computational requirements. This dissertation takes a different approach. Since RFM waveforms are meant to be noise-like in the first place, the waveforms here are instantiated as the sample functions of an underlying stochastic process called a waveform generating function (WGF). This approach enables the convenient generation of spectrally contained RFM waveforms for little more computational cost than pulling numbers from a random number generator (RNG).
To do so, this work translates the traditional mathematical treatment of random variables and random processes to a more radar-centric perspective, such that the WGFs can be analytically evaluated as a function of the usefulness of the radar waveforms they produce, via metrics such as the expected matched filter response and the expected power spectral density (PSD). Further, two WGF models, denoted pulsed stochastic waveform generation (Pulsed StoWGe) and continuous-wave stochastic waveform generation (CW-StoWGe), are devised as means to optimize WGFs to produce RFM waveforms with good spectral containment and with design flexibility between the degree of spectral containment and the autocorrelation sidelobe level, for both pulsed and CW modes. This goal is achieved by leveraging gradient descent optimization methods to reduce the expected frequency template error (EFTE) cost function. The EFTE optimization is shown, analytically using the metrics above as well as others defined in this work, and through simulation, to produce WGFs whose sample functions achieve these goals and thus produce useful random FM waveforms. To complete the theory-modeling-experimentation design life cycle, the resultant StoWGe waveforms are implemented in a loop-back configuration and are shown to be amenable to physical implementation
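    The core idea, drawing waveforms as sample functions of a stochastic process rather than optimizing each one, can be illustrated with a toy sketch. The moving-average smoother and unit sample rate below are my own stand-ins for the optimized WGF, not the dissertation's models: filtered Gaussian frequency noise is integrated into phase, so every draw from the RNG yields a constant-envelope, roughly band-limited waveform.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rfm_waveform(n_samples=1024, bw_frac=0.25):
    """Draw one random FM waveform as a sample function of a simple
    stochastic process: smoothed Gaussian frequency noise integrated
    into phase, giving a constant-amplitude complex signal."""
    freq_noise = rng.standard_normal(n_samples)
    # Crude spectral containment: a moving average narrows the spread of
    # the instantaneous frequency (a stand-in for the optimized WGF).
    k = max(1, int(1 / bw_frac))
    freq = np.convolve(freq_noise, np.ones(k) / k, mode="same")
    phase = 2 * np.pi * bw_frac * np.cumsum(freq)
    return np.exp(1j * phase)  # unit modulus: high-power-amplifier friendly

s = sample_rfm_waveform()
```

Each call costs little more than the RNG draws themselves, which is the computational advantage the abstract claims over per-waveform optimization.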

    Speech enhancement with spectral magnitude side information

    Get PDF
    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 43-44). By Charles Kasimer Sestok IV, S.M.

    Ultrasound imaging using coded signals

    Get PDF

    An Analysis of Mutually Dispersive Brown Symbols for Non-Linear Ambiguity Suppression

    Get PDF
    This thesis significantly advances research towards the implementation of optimal Non-linear Ambiguity Suppression (NLS) waveforms by analyzing the Brown theorem. The Brown theorem is reintroduced using simplified linear-algebraic notation. A methodology for Brown symbol design and digitization is provided, and the concept of dispersive gain is introduced. Numerical methods are used to design, synthesize, and analyze Brown symbol performance. The theoretical performance of Brown symbols in compression and dispersion is demonstrated and shown to exhibit significant improvement over discrete codes. As a result of this research, a process is derived for the design of optimal mutually dispersive symbols for a family of any size. In other words, the limitations imposed by conjugate LFM are overcome using NLS waveforms that provide an effective N-fold increase in radar unambiguous range. This research effort has taken a theorem from its infancy, validated it analytically, simplified it algebraically, tested it for realizability, and now provides a means for the synthesis and digitization of pulse-coded waveforms that generate an N-fold increase in radar effective unambiguous range. Peripherally, this effort has motivated many avenues of future research

    In pursuit of high resolution radar using pursuit algorithms

    Get PDF
    Radar receivers typically employ matched filters designed to maximize the signal-to-noise ratio (SNR) in a single-target environment. In a multi-target environment, however, matched filter estimates of the target environment often contain spurious targets because of radar signal sidelobes. As a result, matched filters are not suitable for use in high-resolution radars operating in multi-target environments. Assuming a point-target model, we show that the radar problem can be formulated as a linear under-determined system with a sparse solution. This suggests that radar can be treated as a sparse signal recovery problem. However, it is shown that the sensing matrix obtained using common radar signals does not usually satisfy the mutual coherence condition, which implies that using recovery techniques from the compressed sensing literature may not yield the optimal solution. In this thesis, we focus on the greedy algorithm approach to the problem and show that it naturally yields a quantitative measure of radar resolution. In addition, we show that the limitations of greedy algorithms can be attributed to the close relation between greedy matching pursuit algorithms and the matched filter. This suggests that the resolution capability of greedy pursuit algorithms can be improved by using a mismatched signal dictionary. In some cases, unlike the mismatched filter, the proposed mismatched pursuit algorithm is shown to offer improved resolution and stability without any noticeable difference in detection performance. Further improvements in resolution are proposed by using greedy algorithms in a radar system with multiple transmit waveforms. It is shown that while using greedy algorithms together with linear channel combining can yield significant resolution improvement, a greedy approach using nonlinear channel combining also shows some promise.
Finally, a forward-backward greedy algorithm is proposed for target environments comprising point targets as well as extended targets
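    The close relation between greedy pursuit and the matched filter that the abstract points to is visible in a minimal matching pursuit sketch (my own simplification, not the mismatched or forward-backward variants the thesis develops): each iteration correlates the residual with every dictionary column, which is exactly a bank of matched filters, then peels off the strongest response.

```python
import numpy as np

def matching_pursuit(A, y, n_iter=10, tol=1e-8):
    """Greedy matching pursuit over dictionary A: repeatedly matched-filter
    the residual against all columns and subtract the strongest response."""
    x = np.zeros(A.shape[1], dtype=complex)
    r = y.astype(complex)
    for _ in range(n_iter):
        corr = A.conj().T @ r          # matched-filter outputs on the residual
        k = int(np.argmax(np.abs(corr)))
        if np.abs(corr[k]) < tol:      # nothing left above the threshold
            break
        a_k = A[:, k]
        coef = corr[k] / np.vdot(a_k, a_k)
        x[k] += coef                   # accumulate the recovered amplitude
        r = r - coef * a_k             # subtract the explained response
    return x
```

With an orthonormal dictionary the recovery is exact in one pass per target; with a coherent dictionary, the sidelobe leakage between columns is what limits resolution, motivating the mismatched-dictionary idea in the thesis.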

    Nonlinear Suppression of Range Ambiguity in Pulse Doppler Radar

    Get PDF
    Coherent pulse train processing is most commonly used in airborne pulse Doppler radar, achieving adequate transmitter/receiver isolation and excellent resolution properties while inherently inducing ambiguities in Doppler and range. First introduced by Palermo in 1962 using two conjugate LFM pulses, the primary nonlinear suppression objective is to reduce range ambiguity, given a waveform that is nominally unambiguous in Doppler, by using interpulse and intrapulse coding (pulse compression) to discriminate received ambiguous pulse responses. By introducing a nonlinear operation on compressed (undesired) pulse responses within individual channels, ambiguous energy levels are reduced in the channel outputs. This research expands the NLS concept using discrete coding and processing. A general theory is developed showing how NLS accomplishes ambiguity surface volume removal without requiring orthogonal coding. Useful NLS code sets are generated using combinatorial, simulated annealing optimization techniques, and a general algorithm is developed to extend family size, code length, and number of phases (polyphase coding). An adaptive reserved-code thresholding scheme is introduced to efficiently and effectively track the matched filter response of a target field over a wide dynamic range, such as is normally experienced in airborne radar systems. An evaluation model for characterizing NLS clutter suppression performance is developed; NLS performance is characterized using measured clutter data, with analysis indicating the proposed technique performs relatively well even when large clutter cells exist
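    The simulated-annealing search for code sets mentioned above can be sketched in toy form. The cost function (peak aperiodic autocorrelation sidelobe of a single code) and the cooling schedule here are my own illustrative choices, not the thesis's algorithm, which optimizes whole mutually dispersive families: perturb one chip's phase at a time and accept occasional uphill moves to escape local minima.

```python
import numpy as np

rng = np.random.default_rng(1)

def peak_sidelobe(code):
    """Peak aperiodic autocorrelation sidelobe of a unit-modulus code."""
    full = np.abs(np.correlate(code, code, mode="full"))
    return np.delete(full, len(code) - 1).max()  # drop the mainlobe

def anneal_code(n=32, n_phases=4, steps=2000, t0=1.0, cooling=0.995):
    """Toy simulated annealing over polyphase codes: perturb one chip's
    phase per step, accepting worse moves with Boltzmann probability."""
    phases = rng.integers(0, n_phases, n)
    code = np.exp(2j * np.pi * phases / n_phases)
    cost = peak_sidelobe(code)
    t = t0
    for _ in range(steps):
        trial = code.copy()
        trial[rng.integers(n)] = np.exp(2j * np.pi * rng.integers(n_phases) / n_phases)
        c = peak_sidelobe(trial)
        if c < cost or rng.random() < np.exp((cost - c) / t):
            code, cost = trial, c  # accept: downhill always, uphill sometimes
        t *= cooling               # geometric cooling schedule
    return code, cost
```

Keeping every chip at unit modulus preserves the constant-envelope property that pulse-compression waveforms need for high-power transmission.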