
    Continuous Time Identification in Laplace Domain

    We give a simple and accurate method for estimating the parameters of continuous-time systems under the constraint that all the poles of the system lie to the left of the line s = -1. The method relies on the simple solution of a linear system of equations in the complex domain. We demonstrate by simulation that the proposed method gives accurate estimates when compared to existing methods. Methods for obtaining sparse solutions, which help in determining the order of the system, are also given.
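A minimal numerical sketch of the idea described above (the second-order test system, sample points, and function name are invented for illustration, not taken from the paper): writing G(s)·A(s) = B(s) at sampled points s_k makes the unknown coefficients enter linearly, so an ordinary least-squares solve recovers them.

```python
import numpy as np

def estimate_second_order(s_pts, G_vals):
    # Model: G(s) = b0 / (s^2 + a1*s + a0)
    # Rearranged: -G*s*a1 - G*a0 + b0 = G*s^2, linear in (a1, a0, b0)
    A = np.column_stack([-G_vals * s_pts, -G_vals, np.ones_like(G_vals)])
    rhs = G_vals * s_pts**2
    # Stack real and imaginary parts so the unknowns stay real
    A_ri = np.vstack([A.real, A.imag])
    rhs_ri = np.concatenate([rhs.real, rhs.imag])
    coeffs, *_ = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)
    return coeffs                                # (a1, a0, b0)

# Test system with poles at -2 and -3, i.e. to the left of the line s = -1
s_pts = np.array([0.5 + 1j * k for k in range(1, 8)])
G_vals = 6.0 / (s_pts**2 + 5 * s_pts + 6)
a1, a0, b0 = estimate_second_order(s_pts, G_vals)
```

With noise-free samples the recovery is exact; with noisy samples the same least-squares system yields the usual best-fit estimate.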

    Enhanced Antihypertensive Activity of Candesartan Cilexetil Nanosuspension: Formulation, Characterization and Pharmacodynamic Study

    The objective of the present investigation was to enhance the oral bioavailability of practically insoluble Candesartan cilexetil [CC] by preparing a nanosuspension. The nanosuspension was prepared by media milling using zirconium oxide beads and converted to the solid state by spray drying. The spray-dried nanosuspension of CC [SDCN] was evaluated for particle size, zeta potential, saturation solubility, crystallinity, surface morphology and dissolution behavior. SDCN showed a particle size of 223.5±5.4 nm and a zeta potential of −32.2±0.6 mV, while the saturation solubilities of bulk CC and SDCN were 125±6.9 μg/ml and 2805±29.5 μg/ml respectively, a more than 20-fold increase in solubility. Differential Scanning Calorimetry [DSC] and X-ray diffraction [XRD] analysis showed that the crystalline state of CC remained unchanged in SDCN. Dissolution studies in phosphate buffer pH 6.5 containing 0.7% Tween 20 showed that 53±5% of the bulk drug dissolved in 15 min, whereas SDCN was almost completely dissolved, exhibiting higher dissolution velocity and solubility. Transmission electron microscopy [TEM] revealed that the nanocrystals were not of uniform size and were approximately oval in shape. A pharmacodynamic study based on the deoxycorticosterone acetate [DOCA] salt model was performed in rats to evaluate in-vivo performance, which showed a 26.75±0.33% decrease in systolic blood pressure for the nanosuspension while the plain drug suspension showed a 16.0±0.38% reduction, indicating that the increase in dissolution velocity and saturation solubility leads to enhanced bioavailability of SDCN when compared to the bulk CC suspension. Thus, the results conclusively demonstrated a significant enhancement in the antihypertensive activity of candesartan when formulated as a nanosuspension.

    P-AODV Routing Protocol for Better Performance in MANET

    MANET (Mobile Ad-Hoc Network) is an independent collection of mobile nodes that communicate over quite bandwidth-constrained wireless links. In Mobile Ad hoc Networks (MANETs), the performance of the various on-demand routing protocols is significantly affected by the changing network topology. AODV (Ad-hoc On-Demand Distance Vector) is the most widely studied on-demand routing protocol; in the route discovery process it uses a single route reply packet, sent along the reverse path, to answer the source node. As the variability of the network topology increases, the possibility of route reply packet loss increases and degrades the performance of the routing protocol. This paper includes related material and details of other modified AODV protocols, such as R-AODV and the Multipath Routing Protocol. These protocols perform better than AODV, but further modification is needed for efficiency. We then focus on end-to-end delay, throughput and overhead for the performance improvement. Accordingly, we propose a new AODV routing protocol that uses R-AODV for route discovery and the Multipath Routing Protocol for sending data packets from the source to the destination. Our proposed protocol (P-AODV) would improve performance in terms of average end-to-end delay, throughput and routing overhead. DOI: 10.17762/ijritcc2321-8169.15058
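The multipath idea above, sending data over several node-disjoint routes so the loss of a single path is less damaging, can be illustrated with a toy graph search (the topology, names, and BFS-based procedure below are invented for this sketch, not taken from the protocol specification):

```python
from collections import deque

def shortest_path(adj, src, dst, banned=frozenset()):
    # Plain BFS over an adjacency dict, skipping banned intermediate nodes
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and (v == dst or v not in banned):
                prev[v] = u
                q.append(v)
    return None

def disjoint_paths(adj, src, dst, k=2):
    # Greedily collect up to k node-disjoint routes from src to dst
    paths, used = [], set()
    for _ in range(k):
        p = shortest_path(adj, src, dst, banned=used)
        if p is None:
            break
        paths.append(p)
        used.update(p[1:-1])      # block intermediate nodes for the next round
    return paths

adj = {'S': ['A', 'C'], 'A': ['S', 'B'], 'B': ['A', 'D'],
       'C': ['S', 'D'], 'D': ['B', 'C']}
routes = disjoint_paths(adj, 'S', 'D')
```

A sender can then alternate packets across `routes`, so a broken link on one route does not force a full route rediscovery.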

    Fast Algorithm Development for SVD: Applications in Pattern Matching and Fault Diagnosis

    The project aims for fast detection and diagnosis of faults occurring in process plants by designing a low-cost FPGA module for the computation. Fast detection and diagnosis, while the process is still operating in a controllable region, helps avoid further advancement of the fault and reduces the productivity loss. Model-based methods are not popular in the domain of process control, as obtaining an accurate model is expensive and requires expertise. Data-driven methods like Principal Component Analysis (PCA) are quite popular diagnostic methods for process plants, as they do not require any model. PCA is a widely used tool for dimensionality reduction, which reduces the computational effort. The trends are captured in the principal components, as it is difficult to have the same amount of disturbance as simulated in the historical database. The historical database has multiple instances of various kinds of faults and disturbances along with normal operation. A moving-window approach has been employed to detect similar instances in the historical database based on the Standard PCA similarity factor. The measurements of the variables of interest over a certain period of time form the snapshot dataset, S. At each instant, a window of the same size as the snapshot dataset is picked from the historical database, forming the historical window, H. The two datasets are then compared using similarity factors like the Standard PCA similarity factor, which signifies the angular difference between the principal components of the two datasets. Since many of the operating conditions are quite similar to each other and a significant number of misclassifications have been observed, a candidate pool which orders the historical data windows by the values of the similarity factor is formed. Based on the operation most frequently detected among the top-most windows, the operating personnel take the necessary action. The Tennessee Eastman Challenge process has been chosen as an initial case study for evaluating the performance.
The measurements are sampled every minute, and the fault having the smallest maximum duration is 8 hours. Hence the snapshot window size, m, has been chosen to consist of 500 samples, i.e., 8.33 hours of the most recent data of all 52 variables. Ideally, the moving window should replace the oldest sample with a new one; it would then take approximately the same number of comparisons as the size of the historical database. The size of the historical database is 4.32 million measurements (the past 8 years of data) for each of the 52 variables. With software simulation in Matlab, it takes around 80-100 minutes to sweep through the whole 4.32-million-sample historical database. Since most of the computation is spent in finding the principal components of the two datasets using SVD, a hardware design has to be incorporated to accelerate the pattern matching approach. The thesis is organized as follows: Chapter 1 describes the moving-window approach and the various similarity factors and metrics used for pattern matching. The previous work, proposed by Ashish Singhal, is based on skipping a few samples to reduce the computational effort and also employs windows as large as 5761 samples, which is four days of snapshot. Instead, a new method which skips samples when the similarity factor is quite low has been proposed. A simplified form of the Standard PCA similarity factor has also been proposed without any trade-off in accuracy. Pre-computation of the historical database can also be done, as the data is available a priori, but this entails a large memory requirement, since most of the time is then spent in read/write operations. The large memory requirement is due to the fact that every sample gives rise to a 52×35 matrix, assuming the top 35 PCs are sufficient to capture the variance of the dataset. Chapter 2 describes various popular algorithms for SVD. Algorithms apart from Jacobi methods, like the Golub-Kahan and divide-and-conquer SVD algorithms, are briefly discussed.
While bi-diagonal methods are very accurate, they suffer from large latency and are computationally intensive. On the other hand, Jacobi methods are computationally inexpensive and parallelizable, thus reducing the latency. We also evaluated the performance of the proposed hybrid Golub-Kahan Jacobi algorithm on our application. Chapter 3 describes the basic building block, CORDIC, which is used for performing the rotations required for Jacobi methods or for the n-D Householder reflections of Golub-Kahan SVD. CORDIC is widely employed in hardware design for computing trigonometric, exponential or logarithmic functions, as it makes use of simple shift and add/subtract operations. Two modes of CORDIC, namely rotation mode and vectoring mode, are discussed, which are used in the derivation of the two-sided Jacobi SVD. Chapter 4 describes the Jacobi methods of SVD, which are quite popular in hardware implementation as they are quite amenable to parallel computation. Two variants of Jacobi methods, namely the one-sided and two-sided Jacobi methods, are briefly discussed. The two-sided Jacobi method making use of CORDIC has been derived. The systolic array implementation, which has been quite popular in hardware implementation for the past three decades, is also discussed. Chapter 5 deals with the hardware implementation of pattern matching and reports a literature survey of the various architectures developed for computing SVD. The Xilinx ZC7020 has been chosen as the target device for FPGA implementation, as it is an inexpensive device with many built-in peripherals. The latency reports with both Vivado HLS and Vivado SDSoC are also reported for the application of interest.
Evaluation of other case studies and other data-driven methods similar to PCA, like Correspondence Analysis (CA) and Independent Component Analysis (ICA), development of an efficient hybrid method for computing SVD in hardware and of a highly discriminating similarity factor, and extension of CORDIC to n dimensions for Householder reflections have been considered for future research.
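The Standard PCA similarity factor at the heart of the moving-window comparison can be sketched as follows (window sizes follow the abstract; the random data and function names are invented for illustration). It is the mean squared cosine of the angles between the two k-dimensional principal subspaces: 1.0 for identical subspaces, smaller otherwise.

```python
import numpy as np

def pca_loadings(X, k):
    # Top-k principal directions of a mean-centered data window via SVD
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                       # columns = loading vectors

def spca_similarity(S, H, k):
    # Standard PCA similarity factor: trace(L^T M M^T L) / k
    L, M = pca_loadings(S, k), pca_loadings(H, k)
    return np.trace(L.T @ M @ M.T @ L) / k

rng = np.random.default_rng(0)
S = rng.normal(size=(500, 52))     # snapshot window: 500 samples x 52 variables
H = rng.normal(size=(500, 52))     # one historical window of the same size
same = spca_similarity(S, S, k=5)  # identical windows give exactly 1.0
diff = spca_similarity(S, H, k=5)  # unrelated windows score lower
```

Sliding H across the historical database and ranking windows by this score yields the candidate pool described above.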

    Fault Tolerant Power Balancing Strategy in an Isolated Microgrid via Optimization

    The increasing penetration of renewable energy generation (REG) in the microgrid paradigm has brought with it larger uncertainty in the scheduled generation. This, along with the inevitable variation between actual and forecasted load, has further accentuated the issue of real-time power balancing. With the advent of smart loads and meters supported by advanced communication technologies, several new possibilities for demand-side management have opened up. In this paper, a real-time optimization strategy for a load-side energy management system (EMS) and for power balance is proposed. The proposed strategy achieves power balance by optimizing load reduction. The objective is to ensure uninterrupted power to critical loads and to reduce non-critical loads depending on the priorities of the various loads. To further enhance the flexibility of the system, the addition of a battery to the management model is also discussed. The proposed algorithm also makes the system tolerant to possible generator failures if a battery is added to the system. The effectiveness of the proposed online power balancing strategy via optimization is demonstrated through various simulation case studies.
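A toy sketch of the priority-based load-reduction idea (the loads, priorities, and numbers below are invented for illustration; the paper formulates this as an optimization, and the greedy order shown solves only this simple separable case): the power deficit is shed from the least critical loads first, leaving critical loads untouched.

```python
import numpy as np

loads      = np.array([40.0, 25.0, 20.0, 15.0])   # demanded power (kW)
priority   = np.array([3,    2,    1,    0])      # higher = more critical
generation = 80.0                                 # available generation (kW)
deficit = loads.sum() - generation                # power that must be shed

curtail = np.zeros_like(loads)
for i in np.argsort(priority):                    # least critical load first
    cut = min(loads[i], deficit - curtail.sum())  # shed no more than needed
    curtail[i] = cut
    if curtail.sum() >= deficit:
        break

served = loads - curtail                          # power delivered per load
```

Here the two lowest-priority loads absorb the whole 20 kW deficit while the critical 40 kW and 25 kW loads remain fully served, matching the stated objective.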

    Fault Diagnosis In Batch Process Monitoring

    Every process plant nowadays is highly complex, so as to produce high-quality products and satisfy demands on time. Beyond that, plant safety is a crucial concern that must be addressed to increase plant efficiency. Poor monitoring strategies lead to large losses of income and of the valuable time needed to regain normal behavior. So, when any fault occurs in the plant, it should be detected, and supervisory action must be taken before it propagates to new locations and causes further equipment failures leading to a plant halt. Process monitoring is therefore a crucial activity that must be carried out effectively. Chapter 1 discusses the importance of fault detection and diagnosis (FDD) in plant monitoring, the typical situations that lead to faults, and their causes. It also briefly discusses how data are transformed in the different stages of a diagnostic system before action is taken, and the desirable characteristics of a good diagnostic system. The final part of the chapter discusses the basic classification of FDD methods. Principal component analysis is a multivariate statistical technique that helps extract the major information with few dimensions; the dimensionality of the reduced space is very low compared to the original dimension of the dataset. The number of principal components (PCs) selected depends on the variability, or information, required in the lower-dimensional space, so PCA is an effective dimensionality-reduction technique. For process monitoring, however, both the PC and residual spaces are important. Chapter 2 mainly discusses PCA and its theory. A batch process is relatively harder to monitor than a continuous process because of its dynamic nature and the nonlinearity in the data. Methods such as MPCA (multi-way principal component analysis), MCA (multi-way correspondence analysis), kernel PCA, and dissimilarity-index-based (DISSIM) methods exist to monitor batch processes. Kernel-based methods need the right kernel to be chosen based on the nonlinearity in the data.
Dissimilarity-index-based methods suit continuous process monitoring well, since they can detect changes in the distribution of the data. The extension of the DISSIM method to batch process monitoring is EDISSIM, which is discussed in Chapter 3. MPCA is a very traditional method that can detect abnormal samples but cannot detect small mean deviations in the measurements. Multi-way PCA is applied after unfolding the data; batch-data unfolding is discussed in Section 3.2 and the selection of control limits in Section 3.2.3. Apart from these methods there is another strategy, called pattern matching, introduced by Johannesmeyer, which helps to quickly locate similar patterns in a historical database. Process industries collect data frequently, so a lot of data is available, but it contains little information, and PCA is used to extract the main information. In the pattern matching strategy, detecting similar patterns in the historical database requires quantitative measures of the similarity between two datasets: the similarity factors. Using the PCA method, highly informative data are extracted in a lower-dimensional space and the similarity factors are calculated from it. The different similarity factors and their calculation are shown in Chapter 4. On-line monitoring of the acetone-butanol batch process using the pattern matching strategy is then discussed. A mathematical model of the acetone-butanol fermentation process is simulated at different nominal values under different operating conditions to develop a historical database. In this case study there are 500 batches over five operating conditions, one NOC and four different faulty conditions, with 100 batches per operating condition. After calculating the similarity factors, instead of going directly to candidate pool selection, we try to detect the batches that are similar to the snapshot data.
The performance of on-line monitoring using the pattern matching strategy is discussed. The on-line monitoring strategy changes the way the unfilled data are anticipated: here we fill them with reference batch data, where the reference is the average of the NOC batches. The performance of this method is verified in MATLAB, as shown in Section 4.3. Chapter 5 describes the average-PC (principal components) model. This method helps to decrease the effort in candidate pool selection and evaluation to find the snapshot data in the historical database. Incremental average model building and model updating also ultimately improve the quality of the model: in incremental average model building, if a snapshot dataset is classified as belonging to an already existing operating-condition dataset, it is used in building the average model; if it does not belong to any existing operating-condition dataset, it is used to update the average model. This method is applied to the acetone-butanol fermentation process data and verified. Because batch data are highly nonlinear in nature, PCA cannot handle the nonlinear correlations, and the pattern matching approach using the PCA average model does not give good discrimination. Better discrimination ability and self-aggregation are possible using correspondence analysis because of its nonlinear scaling. Chapter 6 briefly discusses the pattern matching approach using correspondence analysis, and the results obtained using the CA-based similarity factor are displayed for the acetone-butanol fermentation process case study.
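The batch-wise unfolding step referenced in Section 3.2, which turns the three-way batch array into a matrix so that ordinary PCA and control limits apply, can be sketched as follows (the array layout, batches × variables × time, and the sizes are assumptions for illustration):

```python
import numpy as np

def unfold_batchwise(X):
    # Batch-wise unfolding: each row becomes one full batch trajectory
    I, J, K = X.shape                     # batches, variables, time samples
    return X.reshape(I, J * K)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5, 60))         # 100 batches, 5 variables, 60 samples
Xu = unfold_batchwise(X)                  # 100 x 300 matrix
Xc = Xu - Xu.mean(axis=0)                 # mean-center across batches
_, svals, _ = np.linalg.svd(Xc, full_matrices=False)  # ordinary PCA applies
```

After unfolding, each batch is a single observation, so PCA control limits and the similarity factors of Chapter 4 can be computed over batches directly.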

    Computational analysis of sense-antisense chimeric transcripts reveals their potential regulatory features and the landscape of expression in human cells

    Get PDF
    Many human genes are transcribed from both strands and produce sense-antisense gene pairs. Sense-antisense (SAS) chimeric transcripts are produced upon the coalescing of exons/introns from both sense and antisense transcripts of the same gene. A SAS chimera was first reported in prostate cancer cells. Subsequently, numerous SAS chimeras have been reported in the ChiTaRS-2.1 database. However, the landscape of their expression in human cells and their functional aspects are still unknown. We found that longer palindromic sequences are a unique feature of SAS chimeras. Structural analysis indicates that a long hairpin-like structure formed by many consecutive Watson-Crick base pairs appears because of these long palindromic sequences, which possibly play a role similar to double-stranded RNA (dsRNA), interfering with gene expression. RNA–RNA interaction analysis suggested that SAS chimeras could significantly interact with their parental mRNAs, indicating their potential regulatory features. Here, 267 SAS chimeras were mapped in RNA-seq data from 16 healthy human tissues, revealing their expression in normal cells. Evolutionary analysis suggested positive selection favoring sense-antisense fusions that significantly impacted the evolution of their function and structure. Overall, our study provides detailed insight into the expression landscape of SAS chimeras in human cells and identifies potential regulatory features. Funding: Israeli Council for Higher Education [PBC Fellowship for Outstanding Post-Doctoral Fellows, 2019-2021 to S.M.]; Israel Innovation Authority [66824, 2019-2021 to M.F-M.]; RSF [18-14-00240 to Y.A.M. (in part)].

    tRNA methylation resolves codon usage bias at the limit of cell viability.

    Codon usage of each genome is closely correlated with the abundance of tRNA isoacceptors. How codon usage bias is resolved by tRNA post-transcriptional modifications is largely unknown. Here we demonstrate that the N1-methylation of guanosine at position 37 (m1G37) on the 3'-side of the anticodon, while not directly responsible for reading of codons, is a neutralizer that resolves differential decoding of proline codons. A genome-wide suppressor screen of a non-viable Escherichia coli strain, lacking m1G37, identifies proS suppressor mutations, indicating a coupling of methylation with tRNA prolyl-aminoacylation that sets the limit of cell viability. Using these suppressors, where prolyl-aminoacylation is decoupled from tRNA methylation, we show that m1G37 neutralizes differential translation of proline codons by the major isoacceptor. Lack of m1G37 inactivates this neutralization and exposes the need for a minor isoacceptor for cell viability. This work has medical implications for bacterial species that exclusively use the major isoacceptor for survival.