110 research outputs found
Image processing algorithms employing two-dimensional Karhunen-Loeve Transform
In the fields of image processing and pattern recognition, acquiring, gathering, storing and processing large volumes of data is an important problem. The most frequently used way of reducing these data is compression, which in many cases also speeds up further computation. One of the most frequently employed approaches handles images by means of Principal Component Analysis (PCA) and the Karhunen-Loeve Transform (KLT), well-known statistical tools used in many areas of applied science. Their main property is the ability to reduce the volume of data required for its optimal representation while preserving its specific characteristics. The paper presents selected image processing algorithms such as compression, scrambling (coding) and information embedding (steganography), and their realizations employing the two-dimensional Karhunen-Loeve Transform (2DKLT), which is superior to the standard, one-dimensional KLT since it represents images with respect to their spatial properties. The principles of KLT and 2DKLT, as well as sample implementations and experiments performed on standard benchmark datasets, are presented. The results show that the 2DKLT employed in the above applications gives clear advantages over certain standard transforms, such as the DCT, FFT and wavelets.
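As an illustration of the underlying idea (a minimal sketch, not the paper's implementation), the following NumPy code applies a one-dimensional KLT to an image: project the rows onto the top-k eigenvectors of their covariance matrix and reconstruct. The 2DKLT extends this by transforming along both spatial axes; all function names here are illustrative.

```python
import numpy as np

def klt_compress(image, k):
    """Compress an image by projecting its rows onto the top-k
    eigenvectors of their covariance matrix (a 1D KLT sketch)."""
    mean = image.mean(axis=0)
    centered = image - mean
    # Covariance of the row vectors, then its eigendecomposition.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; keep the k largest.
    basis = eigvecs[:, -k:]
    coeffs = centered @ basis            # reduced representation
    reconstruction = coeffs @ basis.T + mean
    return coeffs, reconstruction
```

With k equal to the full dimension the reconstruction is exact; smaller k trades fidelity for compression, with the error governed by the discarded eigenvalues.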
Fast Automatic Bayesian Cubature Using Matching Kernels and Designs
Automatic cubatures approximate integrals to user-specified error tolerances.
For high dimensional problems, it is difficult to adaptively change the
sampling pattern to focus on peaks because peaks can hide more easily in high
dimensional space. But one can automatically determine the sample size,
given a reasonable, fixed sampling pattern. This approach is pursued in
Jagadeeswaran and Hickernell, Stat. Comput., 29:1214-1229, 2019, where a
Bayesian perspective is used to construct a credible interval for the integral,
and the computation is terminated when the half-width of the interval is no
greater than the required error tolerance. Our earlier work employs integration
lattice sampling, and the computations are expedited by the fast Fourier
transform because the covariance kernels for the Gaussian process prior on the
integrand are chosen to be shift-invariant. In this chapter, we extend our fast
automatic Bayesian cubature to digital net sampling via \emph{digitally}
shift-invariant covariance kernels and fast Walsh transforms.
Our algorithm is implemented in the MATLAB Guaranteed Automatic Integration
Library (GAIL) and the QMCPy Python library. Comment: PhD thesis
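The stopping rule described above can be sketched in a few lines. This is a deliberately simplified stand-in: it uses plain Monte Carlo with a normal-approximation confidence interval rather than the thesis's Bayesian credible interval over lattice or digital-net designs, but the termination logic (stop once the half-width falls below the tolerance) is the same.

```python
import numpy as np

def auto_cubature(f, dim, tol, n0=1024, max_n=2**20, z=2.58, seed=0):
    """Approximate the integral of f over [0,1]^dim, doubling the
    sample size until the half-width of a ~99% confidence interval
    is no greater than tol.  A Monte Carlo stand-in for the
    Bayesian lattice/digital-net cubature described above."""
    rng = np.random.default_rng(seed)
    n = n0
    while True:
        x = rng.random((n, dim))        # fixed, non-adaptive sampling pattern
        y = f(x)
        est = y.mean()
        half_width = z * y.std(ddof=1) / np.sqrt(n)
        if half_width <= tol or n >= max_n:
            return est, half_width
        n *= 2                          # only the sample size adapts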
FAST AUTOMATIC BAYESIAN CUBATURE USING MATCHING KERNELS AND DESIGNS
Automatic cubatures approximate multidimensional integrals to user-specified
error tolerances. In many real-world integration problems, the analytical solution is
either unavailable or difficult to compute. To overcome this, one can use numerical
algorithms that approximately estimate the value of the integral.
For high dimensional integrals, quasi-Monte Carlo (QMC) methods are very
popular. QMC methods are equal-weight quadrature rules where the quadrature
points are chosen deterministically, unlike Monte Carlo (MC) methods where the
points are chosen randomly. The families of integration lattice nodes and digital nets
are the most popular quadrature points used. These methods consider the integrand
to be a deterministic function. An alternate approach, called Bayesian cubature,
postulates the integrand to be an instance of a Gaussian stochastic process
Intelligent Processing in Wireless Communications Using Particle Swarm Based Methods
There are a lot of optimization needs in the research and design of wireless communica- tion systems. Many of these optimization problems are Nondeterministic Polynomial (NP) hard problems and could not be solved well. Many of other non-NP-hard optimization problems are combinatorial and do not have satisfying solutions either. This dissertation presents a series of Particle Swarm Optimization (PSO) based search and optimization algorithms that solve open research and design problems in wireless communications. These problems are either avoided or solved approximately before.
PSO is a bottom-up approach for optimization problems. It imposes no conditions on the underlying problem. Its simple formulation makes it easy to implement, apply, extend and hybridize. The algorithm uses simple operators like adders, and multipliers to travel through the search space and the process requires just five simple steps. PSO is also easy to control because it has limited number of parameters and is less sensitive to parameters than other swarm intelligence algorithms. It is not dependent on initial points and converges very fast.
Four types of PSO based approaches are proposed targeting four different kinds of problems in wireless communications. First, we use binary PSO and continuous PSO together to find optimal compositions of Gaussian derivative pulses to form several UWB pulses that not only comply with the FCC spectrum mask, but also best exploit the avail- able spectrum and power. Second, three different PSO based algorithms are developed to solve the NLOS/LOS channel differentiation, NLOS range error mitigation and multilateration problems respectively. Third, a PSO based search method is proposed to find optimal orthogonal code sets to reduce the inter carrier interference effects in an frequency redundant OFDM system. Fourth, a PSO based phase optimization technique is proposed in reducing the PAPR of an frequency redundant OFDM system. The PSO based approaches are compared with other canonical solutions for these communication problems and showed superior performance in many aspects. which are confirmed by analysis and simulation results provided respectively. Open questions and future
Open questions and future works for the dissertation are proposed to serve as a guide for the future research efforts
Semi-Supervised GNSS Scintillations Detection Based on DeepInfomax
This work focuses on a machine learning based detection of iono-spheric scintillation events affecting Global Navigation Satellite System (GNSS) signals. We here extend the recent detection results based on Decision Trees, designing a semi-supervised detection system based on the DeepInfomax approach recently proposed. The paper shows that it is possible to achieve good classification accuracy while reducing the amount of time that human experts must spend manually labelling the datasets for the training of supervised algorithms. The proposed method is scalable and reduces the required percentage of annotated samples to achieve a given performance, making it a viable candidate for a realistic deployment of scintillation detection in software defined GNSS receivers
Asymmetrical digital subscriber line (ADSL) an in-depth study
Asymmetrical Digital Subscriber Line (ADSL) is one member of a group of broadband access technologies that uses the existing copper-based local loop of the analog PSTN for high-speed digital data transmission. One feature of ADSL is that it permits analog voice POTS transmissions to continue uninterrupted over the same wiring. Specifically, POTS continues to use the 0 to 4 KHz frequency range of the copper wiring, while ADSL uses bandwidth starting at 25 KHz and extending up to approximately 1.1 MHz for data transmission. The term asymmetrical refers to the fact that data rates downstream (to the user) and upstream (from the user) are not the same. Typical ADSL data rates range from 1.536 to 6.144 Mbps downstream and from 16 to 640 Kbps upstream. Local loop length, wire size, and the presence of devices to improve voice communication such as bridged taps and loading coils all affect ADSL data rates. Digital data is coded by one of two methods: Discrete Multitone Modulation (DMT) or Carrierless Amplitude and Phase Modulation (CAP). Echo control is also accomplished by one of two methods: Frequency Division Multiplexing (FDM) or echo cancellation. This paper consists of four sections: 1) A technical review and comparison of the CAP and DMT line encoding technologies. 2) A market review of the presence of CAP and DMT technologies in customer premise equipment (CPE) such as modems and routers. 3) A review of the POTS physical layer that exists between the ADSL subscriber and the Telco CO, and its impact on ADSL availability and quality of service (QOS). 4) A technical review of the newer, splitterless, G.Lite technolog
Research on digital image watermark encryption based on hyperchaos
The digital watermarking technique embeds meaningful information into one or more watermark images hidden in one image, in which it is known as a secret carrier. It is difficult for a hacker to extract or remove any hidden watermark from an image, and especially to crack so called digital watermark. The combination of digital watermarking technique and traditional image encryption technique is able to greatly improve anti-hacking capability, which suggests it is a good method for keeping the integrity of the original image. The research works contained in this thesis include: (1)A literature review the hyperchaotic watermarking technique is relatively more advantageous, and becomes the main subject in this programme. (2)The theoretical foundation of watermarking technologies, including the human visual system (HVS), the colour space transform, discrete wavelet transform (DWT), the main watermark embedding algorithms, and the mainstream methods for improving watermark robustness and for evaluating watermark embedding performance. (3) The devised hyperchaotic scrambling technique it has been applied to colour image watermark that helps to improve the image encryption and anti-cracking capabilities. The experiments in this research prove the robustness and some other advantages of the invented technique. This thesis focuses on combining the chaotic scrambling and wavelet watermark embedding to achieve a hyperchaotic digital watermark to encrypt digital products, with the human visual system (HVS) and other factors taken into account. This research is of significant importance and has industrial application value
Multi-signal Anomaly Detection for Real-Time Embedded Systems
This thesis presents MuSADET, an anomaly detection framework targeting timing anomalies found in event traces from real-time embedded systems. The method leverages stationary event generators, signal processing, and distance metrics to classify inter-arrival time sequences as normal/anomalous. Experimental evaluation of traces collected from two real-time embedded systems provides empirical evidence of MuSADET’s anomaly detection performance.
MuSADET is appropriate for embedded systems, where many event generators are intrinsically recurrent and generate stationary sequences of timestamp. To find timinganomalies, MuSADET compares the frequency domain features of an unknown trace to a normal model trained from well-behaved executions of the system. Each signal in the analysis trace receives a normal/anomalous score, which can help engineers isolate the source of the anomaly.
Empirical evidence of anomaly detection performed on traces collected from an industrygrade hexacopter and the Controller Area Network (CAN) bus deployed in a real vehicle demonstrates the feasibility of the proposed method. In all case studies, anomaly detection did not require an anomaly model while achieving high detection rates. For some of the studied scenarios, the true positive detection rate goes above 99 %, with false-positive rates below one %. The visualization of classification scores shows that some timing anomalies can propagate to multiple signals within the system. Comparison to the similar method, Signal Processing for Trace Analysis (SiPTA), indicates that MuSADET is superior in detection performance and provides complementary information that can help link anomalies to the process where they occurred
- …