92 research outputs found

    Outlier detection from ETL Execution trace

    Extract, Transform, Load (ETL) is an integral part of Data Warehousing (DW) implementation. The commercial tools used for this purpose capture a large amount of execution trace in the form of various log files containing a plethora of information. However, there has hardly been any initiative to proactively analyze ETL logs to improve process efficiency. In this paper we use an outlier detection technique to find the processes that vary most from the group in terms of execution trace. As our experiment was carried out on actual production processes, we consider any outlier a signal rather than noise. To identify the input parameters for the outlier detection algorithm we conducted a survey among a developer community with a varied mix of experience and expertise. We used simple text parsing to extract from the logs the features shortlisted in the survey. Subsequently we applied a clustering-based outlier detection technique to the logs. By this process we reduced our domain of detailed analysis from 500 logs to 44 logs (8.8%). Among the 5 outlier clusters, 2 are of genuine concern, while the other 3 stand out because of the huge number of rows involved.
    Comment: 2011 3rd International Conference on Electronics Computer Technology (ICECT 2011)
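    The clustering-based reduction described above can be sketched in miniature. This is a hypothetical sketch, not the paper's pipeline: the two log features (rows processed, run time), the fixed initial centroids, and the small-cluster threshold are all illustrative assumptions.

```python
# Hypothetical sketch: cluster ETL-log feature vectors, then flag logs in
# unusually small clusters as outliers. Features and values are invented.

def kmeans(points, centroids, iters=20):
    """Tiny k-means with fixed initial centroids (deterministic)."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Each log: (rows processed in millions, run time in minutes)
logs = [(1.0, 10), (1.1, 11), (0.9, 9), (1.0, 10.5),
        (1.2, 10), (0.95, 9.5), (50.0, 300)]        # last log is anomalous
clusters = kmeans(logs, centroids=[(1.0, 10.0), (50.0, 300.0)])
outliers = [p for cl in clusters if len(cl) < len(logs) * 0.2 for p in cl]
print(outliers)   # the single anomalous log
```

    In the paper's setting, each flagged log would then be examined in detail rather than discarded.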

    Hardware Implementation of four byte per clock RC4 algorithm

    In the field of cryptography, the 2-bytes-in-1-clock design [1] is to date the best known RC4 hardware design, while the 1-byte-in-1-clock [2] and 1-byte-in-3-clocks [3][4] designs are the best known earlier implementations. The algorithm in [2] considers two consecutive bytes together and processes them in 2 clocks; the design in [1] is a pipelined architecture of [2]. The 1-byte-in-3-clocks design is overly modular and clock-hungry. In this paper, taking the RC4 algorithm as it is, a simpler RC4 hardware design providing higher throughput is proposed, comprising 6 different architectures. In Design 1, 1 byte is processed in 1 clock; Design 2 is a dynamic KSA-PRGA architecture of Design 1. Design 3 can process 2 bytes in a single clock, whereas Design 4 is a dynamic KSA-PRGA architecture of Design 3. Design 5 and Design 6 are parallelized architectures of Design 2 and Design 4 respectively, which can compute 4 bytes in a single clock. Progressive maturity in terms of throughput, power consumption and resource usage has been achieved from Design 1 to Design 6. The RC4 encryption and decryption designs are embedded on two FPGA boards as co-processor hardware, with communication between the two boards performed over Ethernet.
    Comment: This is an unpublished draft version
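    As a point of reference for the designs above, the RC4 algorithm itself (KSA followed by PRGA) is small enough to state in full; a software model like this is what a hardware implementation is typically verified against. This is the standard published algorithm, not the paper's architecture.

```python
# Reference software model of RC4, usable as a golden model for
# hardware verification.

def rc4(key: bytes, n: int) -> bytes:
    """Generate n keystream bytes from key."""
    # Key Scheduling Algorithm (KSA): permute S under the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    # Pseudo-Random Generation Algorithm (PRGA): emit keystream bytes
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

# Known test vector: key "Key" -> keystream EB 9F 77 81 B7 ...
print(rc4(b"Key", 5).hex())   # eb9f7781b7
```

    A "2 bytes per clock" or "4 bytes per clock" hardware design unrolls iterations of the PRGA loop above into a single cycle.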

    A Brief Survey of Recent Edge-Preserving Smoothing Algorithms on Digital Images

    Edge-preserving filters preserve edges and their information while blurring an image. In other words, they are used to smooth an image while reducing blurring artifacts across edges such as halos and phantoms. They are nonlinear in nature; examples include the bilateral filter, anisotropic diffusion filter, guided filter and trilateral filter. Hence this family of filters is very useful for reducing noise in an image, making it highly sought after in computer vision and computational photography applications such as denoising, video abstraction, demosaicing, optical-flow estimation, stereo matching, tone mapping, style transfer and relighting. This paper provides a concrete introduction to edge-preserving filters, starting from the heat diffusion equation and tracing developments from early to recent eras, along with an overview of their numerous applications, a mathematical analysis, and various efficient and optimized implementations and their interrelationships, keeping the focus on preserving boundaries, spikes and canyons in the presence of noise. Furthermore, it outlines practical considerations for efficient implementation, with research scope for hardware realization for further acceleration.
    Comment: Manuscript
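    The core mechanism shared by these filters can be illustrated with a minimal 1-D bilateral filter: each sample becomes a weighted mean of its neighbours, where the weight combines spatial closeness with intensity similarity, so a large step (an edge) is smoothed far less than small-amplitude noise. The signal and parameter values below are illustrative.

```python
# Minimal 1-D bilateral filter sketch (illustrative, not from the survey).
import math

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for k in range(-radius, radius + 1):
            j = i + k
            if 0 <= j < len(signal):
                w = (math.exp(-k * k / (2 * sigma_s ** 2)) *              # spatial
                     math.exp(-(signal[j] - v) ** 2 / (2 * sigma_r ** 2)))  # range
                num += w * signal[j]
                den += w
        out.append(num / den)
    return out

noisy_step = [0, 1, 0, 1, 0, 100, 101, 100, 101, 100]
smoothed = bilateral_1d(noisy_step)
# The step between ~0 and ~100 survives, while the +/-1 noise is damped.
```

    Dropping the range term turns this into an ordinary Gaussian blur, which is exactly what smears edges; the range term is what makes the filter nonlinear and edge-preserving.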

    Multi Core SSL/TLS Security Processor Architecture Prototype Design with automated Preferential Algorithm in FPGA

    In this paper a pipelined architecture of a high-speed network security processor (NSP) for the SSL/TLS protocol is implemented on a system on chip (SoC), where the hardware information of all encryption, hashing and key exchange algorithms is stored in flash memory as bit files, in contrast to related works where all of them are actually implemented in hardware. The NSP finds applications in e-commerce, virtual private networks (VPN) and other fields that require data confidentiality. The motivation of the present work is to dynamically execute applications with a stipulated throughput within a budgeted hardware resource and power. A preferential algorithm for choosing an appropriate cipher suite is proposed, based on an Efficient System Index (ESI) budget comprising power, throughput and resource limits given by the user. The bit files of the chosen security algorithms are downloaded from the flash memory to the partially reconfigurable region of a field programmable gate array (FPGA). The proposed SoC controls data communication between an application running on a system through PCI and the Ethernet interface of a network. The partial reconfiguration feature is used in the ISE 14.4 suite with a ZYNQ 7z020-clg484 FPGA platform. The performances ...
    Comment: This is a manuscript
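    A preferential selection of this kind might look like the following sketch; the cipher-suite names, throughput/power/LUT figures and the "maximize throughput within budget" policy are all illustrative assumptions, not the paper's actual ESI formulation.

```python
# Hypothetical sketch of a preferential algorithm: pick the cipher suite
# whose characteristics fit a user-supplied budget of power, throughput
# and resource. All names and figures below are invented for illustration.

suites = {
    # name: (throughput Mbps, power mW, LUTs)
    "AES128-SHA1":   (800, 120, 4000),
    "AES256-SHA256": (600, 150, 5200),
    "RC4-MD5":       (950,  90, 2500),
}

def pick_suite(min_throughput, max_power, max_luts):
    """Return the highest-throughput suite satisfying the budget, else None."""
    feasible = [(tp, name) for name, (tp, pw, lut) in suites.items()
                if tp >= min_throughput and pw <= max_power and lut <= max_luts]
    return max(feasible)[1] if feasible else None

print(pick_suite(min_throughput=700, max_power=130, max_luts=4500))  # RC4-MD5
```

    The chosen suite's bit file would then be loaded into the partially reconfigurable FPGA region, as the abstract describes.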

    A Novel Approach for Human Action Recognition from Silhouette Images

    In this paper, a novel human action recognition technique from video is presented. Any human action is a combination of several micro action sequences performed by one or more body parts. The proposed approach uses spatio-temporal body parts movement (STBPM) features extracted from the foreground silhouettes of the human objects. The newly proposed STBPM feature estimates the movements of different body parts over any given time segment to classify actions. We also propose a rule-based logic named rule action classifier (RAC), which uses a series of condition-action rules based on prior knowledge and hence does not require training to classify an action. Since no training is required, the proposed approach is view independent. The experimental results on the publicly available Weizmann and MuHAVi datasets are compared with those of related research works in terms of human action detection accuracy, and the proposed technique outperforms the others.
    Comment: Manuscript
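    A condition-action rule classifier in the spirit of RAC can be sketched as below; the feature names, thresholds and action labels are illustrative assumptions rather than the paper's actual rules.

```python
# Hypothetical rule-based action classifier: condition-action rules over
# per-body-part movement features. Thresholds and features are invented.

def classify(features):
    """features: dict of normalized movement magnitudes per body part."""
    legs = features["legs"]
    arms = features["arms"]
    torso_dy = features["torso_dy"]       # vertical torso displacement
    if legs > 0.6 and torso_dy > 0.5:
        return "jump"
    if legs > 0.6 and arms > 0.4:
        return "run"
    if legs > 0.3:
        return "walk"
    return "stand"

print(classify({"legs": 0.7, "arms": 0.5, "torso_dy": 0.1}))  # run
```

    Because the rules encode prior knowledge directly, no labeled training set is needed, which is the property the abstract ties to view independence.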

    Linear Nearest Neighbor Synthesis of Reversible Circuits by Graph Partitioning

    Linear Nearest Neighbor (LNN) synthesis in reversible circuits has emerged as an important issue for the technological implementation of quantum computation. The objective is to obtain an LNN architecture with minimum gate cost. As achieving optimal synthesis is a hard problem, heuristic methods have been proposed in recent literature. In this work we present a graph-partitioning-based approach for LNN synthesis with reduced circuit cost. In particular, the number of SWAP gates required to convert a given gate-level quantum circuit to its equivalent LNN configuration is minimized. Our algorithm determines the reordering of the qubit line indices for both single-control and multiple-controlled gates. Experimental results for placing the target qubits of the Multiple Controlled Toffoli (MCT) library of benchmark circuits show a significant reduction in gate count and quantum gate cost compared to those of related research works.
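    The quantity being minimized can be made concrete with a naive baseline: on a linear layout, a 2-qubit gate acting on line positions c and t needs |c - t| - 1 SWAPs to bring its qubits adjacent, and as many again to restore the order, so the total SWAP count depends on the chosen qubit-line ordering. The gate list below is illustrative.

```python
# Illustrative baseline for the LNN cost: per 2-qubit gate, 2*(|c-t|-1)
# SWAPs under a naive "move adjacent, then move back" policy. Reordering
# the qubit lines changes the total, which is what the heuristic exploits.

def naive_swap_cost(gates, order):
    """gates: list of (control, target) qubit labels.
    order: dict mapping qubit label -> line position."""
    return sum(2 * (abs(order[c] - order[t]) - 1) for c, t in gates)

gates = [("a", "c"), ("a", "c"), ("a", "b")]
print(naive_swap_cost(gates, {"a": 0, "b": 1, "c": 2}))  # 4: a-c not adjacent
print(naive_swap_cost(gates, {"b": 0, "a": 1, "c": 2}))  # 0: a adjacent to both
```

    A graph-partitioning heuristic searches the space of such orderings, weighting qubit pairs by how often they interact.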

    Cobb Angle Measurement of Scoliosis with Reduced Variability

    The Cobb angle, a measure of spinal curvature, is the standard method in orthopedics for quantifying the magnitude of scoliosis-related spinal deformity. Determining the Cobb angle through a manual process is subject to human error. In this work, we propose a methodology for measuring the Cobb angle that appreciably reduces measurement variability compared to related works. The proposed methodology uses a new, improved version of Non-Local Means (NLM) for image denoising and Otsu's automatic threshold selection for Canny edge detection. We selected NLM for preprocessing as it is one of the state-of-the-art methods for image denoising and helps retain image quality. The trimmed mean and median are more robust to outliers than the mean, and following this observation we found that NLM denoising quality can be enhanced by replacing the mean with a Euclidean trimmed mean. To demonstrate the better performance of the Non-Local Euclidean Trimmed-mean denoising filter, we provide comparative results against traditional NLM and Non-Local Euclidean Medians. The experimental results for Cobb angle measurement over intra-observer and inter-observer data reveal the better performance and superiority of the proposed approach compared to related works. The MATLAB 2009b image processing toolbox was used for simulation and verification of the proposed methodology.
    Comment: MedImage201
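    The robustness argument above can be seen in miniature: a trimmed mean (drop the k smallest and k largest values before averaging) is far less affected by a single outlier than the plain mean. The pixel values below are illustrative.

```python
# Trimmed mean vs. plain mean on a patch containing one outlier pixel.

def trimmed_mean(xs, k=1):
    """Mean after discarding the k smallest and k largest values."""
    xs = sorted(xs)
    core = xs[k:len(xs) - k]
    return sum(core) / len(core)

patch = [10, 11, 9, 10, 12, 500]            # 500 is an outlier pixel
mean = sum(patch) / len(patch)
print(round(mean, 1), trimmed_mean(patch))  # 92.0 10.75
```

    Swapping this estimator into the NLM patch-averaging step is the modification the abstract describes.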

    Performance Evaluation of ECC in Single and Multi Processor Architectures on FPGA Based Embedded System

    Cryptographic algorithms are computationally costly, and the challenge is greater if we need to execute them in resource-constrained embedded systems. Field Programmable Gate Arrays (FPGAs), having programmable logic devices and processing cores, have proven to be highly feasible implementation platforms for embedded systems, providing shorter design time and reconfigurability. Design parameters like throughput, resource utilization and power requirements are the key issues. The popular Elliptic Curve Cryptography (ECC), which is superior to other public-key cryptosystems like RSA in many ways, such as providing greater security for a smaller key size, is chosen in this work, and the possibilities of its implementation in FPGA-based embedded systems for both single and dual processor core architectures involving task parallelization have been explored. This exploration, the first of its kind among existing works, is a needed activity for evaluating the best possible architectural environment for ECC implementation on an FPGA (Virtex4 XC4VFX12, FF668, -10) based embedded platform.
    Comment: Published Book Title: Elsevier Science and Technology, ICCN 2013, Bangalore, Page(s): 140 - 147, Volume 3, 03.elsevierst.2013.3.ICCN16, ISBN: 9789351071044, Paper link: http://searchdl.org/index.php/book_series/view/91
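    The ECC primitive whose single- and dual-core mappings are evaluated above is scalar multiplication; a minimal software model over a toy curve (a standard textbook example, not a secure curve and not the paper's implementation) is:

```python
# Scalar multiplication on a short Weierstrass curve y^2 = x^3 + ax + b
# over GF(p), via affine point addition and double-and-add.

def ec_add(P, Q, a, p):
    if P is None: return Q                  # None = point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                         # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):                     # double-and-add
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Toy curve y^2 = x^3 + 2x + 2 over GF(17); the point (5, 1) has order 19.
print(ec_mul(19, (5, 1), a=2, p=17))   # None (point at infinity)
```

    The modular multiplications and inversions inside ec_add are the operations a hardware design parallelizes, and the double-and-add loop is where dual-core task parallelization applies.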

    A Novel Reconfigurable Hardware Design for Speech Enhancement Based on Multi-Band Spectral Subtraction Involving Magnitude and Phase Components

    This paper proposes an efficient reconfigurable hardware design for speech enhancement based on a multi-band spectral subtraction algorithm involving both magnitude and phase components. Our proposed design is novel in that it estimates environmental noise from speech adaptively, utilizing both the magnitude and phase components of the speech spectrum. We perform multi-band spectral subtraction by dividing the noisy speech spectrum into non-uniform frequency bands with varying signal-to-noise ratios (SNR) and subtracting the estimated noise from each of these bands. This results in the elimination of noise from both high-SNR and low-SNR signal components across all frequency bands. We have named our proposed speech enhancement technique Multi Band Magnitude Phase Spectral Subtraction (MBMPSS). The magnitude and phase operations are executed concurrently, exploiting the parallel logic blocks of a Field Programmable Gate Array (FPGA) and thus greatly increasing the throughput of the system. We have implemented our design on a Spartan6 LX45 FPGA and present the implementation results in terms of resource utilization and delay for the different blocks of our design. To the best of our knowledge, this is a new type of hardware design for speech enhancement and a first-of-its-kind implementation on reconfigurable hardware. We used benchmark audio data to evaluate the proposed hardware, and the experimental results show that it achieves a better SNR than existing state-of-the-art research works.
    Comment: Yet to be published (manuscript)
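    The per-band subtraction step can be sketched as follows, for the magnitude component only; the band edges, over-subtraction factors and spectral floor are illustrative assumptions, not the paper's parameters.

```python
# Sketch of multi-band spectral subtraction on magnitude spectra: per
# frequency band, subtract a scaled noise estimate, flooring the result
# so magnitudes never go negative. Low-SNR bands get a larger factor.

def multiband_subtract(noisy_mag, noise_mag, bands, alphas, floor=0.01):
    """noisy_mag / noise_mag: per-bin magnitude spectra.
    bands: (lo, hi) bin ranges; alphas: per-band over-subtraction factors."""
    out = list(noisy_mag)
    for (lo, hi), alpha in zip(bands, alphas):
        for k in range(lo, hi):
            out[k] = max(noisy_mag[k] - alpha * noise_mag[k],
                         floor * noisy_mag[k])
    return out

noisy = [1.0, 0.9, 0.5, 0.4]
noise = [0.8, 0.8, 0.1, 0.1]
clean = multiband_subtract(noisy, noise, bands=[(0, 2), (2, 4)],
                           alphas=[1.0, 2.0])
```

    In the full MBMPSS scheme the phase spectrum is processed alongside the magnitudes, which is where the FPGA's concurrent logic blocks are exploited.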

    A Novel Method for Soft Error Mitigation in FPGA using Adaptive Cross Parity Code

    Field Programmable Gate Arrays (FPGAs) are more prone to transient faults in the presence of radiation and other environmental hazards than Application Specific Integrated Circuits (ASICs). Hence, error mitigation and recovery techniques are absolutely necessary to protect FPGA hardware from the soft errors arising from such transient faults. In this paper, a new, efficient multi-bit error correcting method for FPGAs is proposed using an adaptive cross parity check (ACPC) code. ACPC is easy to implement and the required decoding circuit is simple. In the proposed scheme the total configuration memory is partitioned into two parts. One part contains the ACPC hardware, which is static and assumed to be unaffected by any kind of error. The other portion stores the binary file for the logic to be protected from transient errors and is assumed to be dynamically reconfigurable (the partially reconfigurable area). The binary file from secondary memory passes through the ACPC hardware, and the bits for the forward error correction (FEC) field are calculated before it enters the reconfigurable portion. At runtime, the data from the dynamically reconfigurable portion of the configuration memory is read back and passed through the ACPC hardware, which corrects any errors before the data re-enters the dynamic configuration memory. We propose a first-of-its-kind methodology for transient fault correction using an ACPC code for FPGAs. To validate the design we tested the proposed methodology on a Kintex FPGA. We also measured parameters such as critical path, power consumption, resource overhead and error correction efficiency to estimate the performance of our proposed method.
    Comment: Manuscript
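    The basic two-dimensional cross-parity mechanism underlying codes like ACPC can be illustrated as below: with row and column parities stored for a bit matrix, a single flipped bit is located by the one row parity and the one column parity that fail. ACPC itself is an adaptive, more capable variant; this sketch shows only the core idea on an invented frame.

```python
# Two-dimensional (cross) parity: locate and correct a single bit flip
# in a bit matrix using stored row and column parities.

def parities(bits):
    rows = [sum(r) % 2 for r in bits]
    cols = [sum(c) % 2 for c in zip(*bits)]
    return rows, cols

def correct_single_error(bits, rows, cols):
    """Fix a single bit flip using previously stored row/column parities."""
    r2, c2 = parities(bits)
    bad_r = [i for i, (a, b) in enumerate(zip(rows, r2)) if a != b]
    bad_c = [j for j, (a, b) in enumerate(zip(cols, c2)) if a != b]
    if bad_r and bad_c:
        bits[bad_r[0]][bad_c[0]] ^= 1     # flip the located bit back
    return bits

frame = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
rows, cols = parities(frame)              # computed before storage
frame[1][2] ^= 1                          # inject a transient fault
fixed = correct_single_error(frame, rows, cols)
print(fixed)   # original frame restored
```

    In the proposed scheme the parity (FEC) bits play the role of `rows`/`cols` here, computed as the bitstream enters the reconfigurable region and checked on read-back.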