
    Fuzzy system model for gene expression

    Background: The theoretical information of a gene is contained in the cell's genetic material, namely DNA, mRNA and proteins. In the synthesis of functional gene products, this information can be expressed mathematically. Aim: In this paper, a fuzzy approach is used to analyse the behaviour of gene expression in a cell. The main aim of the present study is to unravel the complexity of gene expression and to develop a mathematical model that gives better insight into functional gene products. Subjects and methods: The model for gene expression is obtained as a system of fuzzy differential equations, assuming that the transcription and translation processes take place in the cell. The Michaelis–Menten mechanism is incorporated into the model. Results: The analytic solution is derived for the crisp case as well as for the fuzzy case. A sensitivity analysis is also performed, and the model is observed to be highly stable. Conclusion: The model for gene expression is obtained as a system of differential equations with fuzzy initial values using a geometric approach. Numerical results have been obtained for the TJK16 strain of E. coli. The semi-temporal concentration profiles of DNA, mRNA and protein are obtained, and a sensitivity analysis is performed to study the variation in the concentrations of DNA, mRNA and protein with respect to variation in the transcription and translation rates. Keywords: Fuzzy linear differential equation model, DNA, mRNA, Protein, TJK16
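
    The transcription/translation structure described above can be made concrete with a small numerical sketch. The fragment below, offered only as an illustration, integrates a crisp DNA -> mRNA -> protein system with a Michaelis–Menten transcription term and propagates a triangular fuzzy initial DNA concentration through alpha-cuts; all rate constants, the Michaelis constant and the fuzzy initial value are placeholder assumptions, not the paper's parameters for the TJK16 strain.

        # Illustrative sketch only: crisp transcription/translation ODEs with a
        # Michaelis-Menten transcription term; the fuzzy initial DNA concentration
        # is propagated through alpha-cuts (interval endpoints). All constants are
        # assumed placeholders, not the paper's TJK16 values.
        import numpy as np
        from scipy.integrate import odeint

        k_tr, k_tl = 0.8, 0.5      # transcription / translation rates (assumed)
        d_m, d_p = 0.3, 0.1        # mRNA / protein degradation rates (assumed)
        K_m = 2.0                  # Michaelis constant (assumed)

        def model(y, t):
            dna, mrna, prot = y
            transcription = k_tr * dna / (K_m + dna)   # Michaelis-Menten term
            return [0.0,                               # DNA pool kept constant
                    transcription - d_m * mrna,
                    k_tl * mrna - d_p * prot]

        def alpha_cut(tri, alpha):
            # interval [lower, upper] of a triangular fuzzy number (a, b, c)
            a, b, c = tri
            return a + alpha * (b - a), c - alpha * (c - b)

        t = np.linspace(0.0, 50.0, 500)
        dna0 = (0.8, 1.0, 1.2)     # fuzzy initial DNA concentration (assumed)
        for alpha in (0.0, 0.5, 1.0):
            lo, hi = alpha_cut(dna0, alpha)
            low_band = odeint(model, [lo, 0.0, 0.0], t)
            high_band = odeint(model, [hi, 0.0, 0.0], t)
            print(alpha, low_band[-1, 2], high_band[-1, 2])   # protein envelope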

    On Boundary Value Problems for Second-order Fuzzy Linear Differential Equations with Constant Coefficients

    In this paper we investigate the solutions of boundary value problems for second-order fuzzy linear differential equations with constant coefficients. Using generalized differentiability, four different solutions are obtained for these problems. The solutions and several comparison results are presented, and examples for which the solutions are found are provided.

    Generalized intuitionistic fuzzy Laplace transform and its application in electrical circuit

    In this paper we describe the generalized intuitionistic fuzzy Laplace transform method for solving first-order generalized intuitionistic fuzzy differential equations. The procedure is applied to an imprecise electrical circuit problem. Here the initial conditions of these applications are taken as generalized intuitionistic triangular fuzzy numbers (GITFNs).
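
    A minimal sketch of the underlying idea, assuming a simple R-L circuit and an interval-valued (alpha-cut) initial current in place of a full generalized intuitionistic triangular fuzzy number: the circuit equation is solved in the Laplace domain separately for the lower and upper endpoints. Component values and membership bounds are illustrative assumptions, not taken from the paper.

        # Illustrative sketch only: an R-L circuit L*di/dt + R*i = V solved via
        # the Laplace transform, with the initial current given as an interval
        # (one alpha-cut of a fuzzy number). Values are assumed, not the paper's.
        import sympy as sp

        t, s = sp.symbols('t s', positive=True)
        R, L, V = 2.0, 1.0, 10.0          # ohms, henries, volts (assumed)

        def solve_branch(i0):
            # Laplace domain: L*(s*I(s) - i0) + R*I(s) = V/s, solved for I(s)
            I = sp.symbols('I')
            Is = sp.solve(sp.Eq(L * (s * I - i0) + R * I, V / s), I)[0]
            return sp.inverse_laplace_transform(Is, s, t)

        # lower/upper endpoints of the fuzzy initial current at some alpha level
        i_lower, i_upper = 0.8, 1.2       # assumed membership bounds
        print(sp.simplify(solve_branch(i_lower)))
        print(sp.simplify(solve_branch(i_upper)))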

    Spray behavior on compression ignition internal combustion engines: a computational analysis using CFD

    Undergraduate thesis (TCC) - Universidade Federal de Santa Catarina, Campus Joinville, Automotive Engineering. Drawing on a comparison of experiments, we analyze the diesel injector nozzle and its main characteristics. With the aid of the engineering software AVL FIRE, we simulate different operating conditions and analyze the development of cavitation and erosion in the injector channel. By varying the initial conditions - initial pressure, initial temperature and injection needle lift - it is possible to identify variations in the injected fuel spray cone, penetration and the phases involved.

    An innovative information fusion method with adaptive Kalman filter for integrated INS/GPS navigation of autonomous vehicles

    Information fusion for INS/GPS navigation systems based on filtering techniques is a current research focus. In order to improve the precision of the navigation information, this paper proposes a navigation technology based on an Adaptive Kalman Filter with an attenuation factor to restrain noise. The algorithm continuously updates the measurement noise variance and the process noise variance of the system from the estimated and measured values, which suppresses white noise. Because a measurement taken closer to the current time reflects the noise characteristics more accurately, an attenuation factor is introduced to increase the weight of the current value and thereby handle the noise variance caused by environmental disturbances. To validate the effectiveness of the proposed algorithm, a series of road tests was carried out in an urban environment. The GPS and IMU data from the experiments were collected and processed with dSPACE and MATLAB/Simulink. Based on the test results, the accuracy of the proposed algorithm is 20% higher than that of a traditional Adaptive Kalman Filter. The results also show that the precision of the integrated navigation is improved because the influence of environmental noise is reduced.
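
    The adaptive update described above can be sketched as follows, assuming a Sage–Husa-style re-estimation of the measurement noise covariance with a forgetting (attenuation) factor that weights the most recent innovation more heavily; the 1-D constant-velocity model, the factor value and all covariances are illustrative, not the paper's vehicle setup.

        # Illustrative sketch only: a Sage-Husa-style adaptive Kalman filter in
        # which the measurement noise covariance R is re-estimated from the
        # innovation sequence, with an attenuation (forgetting) factor b that
        # weights the newest residual more heavily. Model and tuning are assumed.
        import numpy as np

        dt = 0.1
        F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
        H = np.array([[1.0, 0.0]])              # position-only measurement
        Q = np.diag([1e-4, 1e-3])               # process noise (assumed)

        x = np.zeros((2, 1))
        P = np.eye(2)
        R = np.array([[1.0]])                   # initial measurement noise
        b = 0.96                                # attenuation factor, 0 < b < 1
        k = 0

        def step(z):
            global x, P, R, k
            x_pred = F @ x                      # predict
            P_pred = F @ P @ F.T + Q
            y = np.array([[z]]) - H @ x_pred    # innovation
            k += 1
            d_k = (1 - b) / (1 - b ** k)        # weight of the newest residual
            R = (1 - d_k) * R + d_k * (y @ y.T - H @ P_pred @ H.T)
            R = np.maximum(R, 1e-6)             # keep R positive
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x = x_pred + K @ y                  # correct
            P = (np.eye(2) - K @ H) @ P_pred
            return x

        for z in np.cumsum(0.3 + 0.5 * np.random.randn(100)):   # synthetic track
            step(z)
        print(x.ravel())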

    Color Image Processing based on Graph Theory

    Computer vision is one of the fastest-growing fields at present and, along with other technologies such as Biometrics or Big Data, has become the focus of numerous research projects; it is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays images are affected by many factors that hinder optimal image quality, which makes digital image (pre-)processing a fundamental step prior to any other processing task. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition under poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these problems: smoothing aims to reduce noise, while sharpening aims to improve or recover imprecise or damaged information in image details and edges whose insufficient sharpness or blurred content prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process blurs the edges and details of the image. Likewise, there is a large number of sharpening techniques that try to combat this loss of information, but they generally do not take the presence of noise into account: applied to a noisy image, any sharpening technique will also amplify the noise. Although the intuitive solution would be to filter first and sharpen afterwards, this two-stage approach has proved not to be optimal: the filtering step may remove information that cannot be recovered in the subsequent sharpening step. In this PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify that pixel. As we will show, the proposed model is robust and versatile, and potentially able to adapt to a wide variety of applications. In particular, we apply the model to create new solutions to the two fundamental problems of image processing: smoothing and sharpening. The model has been studied in depth as a function of the threshold, the key parameter that ensures the correct classification of the image pixels. To achieve high-performance image smoothing, we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need for a high-precision classification even in the presence of noise. Thus, we build an adaptive soft-switching filter that employs this pixel classification to combine the outputs of a filter with high smoothing capability and a milder one for edge/detail regions. Furthermore, another application of the model uses the pixel characterization to perform simultaneous smoothing and sharpening of color images, combining two operations that are opposed by definition and thereby overcoming the drawbacks of the two-stage approach. We compare all the proposed techniques with state-of-the-art methods and show that they are competitive from both an objective (numerical) and a visual evaluation point of view. Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
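
    As a rough illustration of the per-pixel graph idea (not the thesis' actual model), the sketch below connects each pixel to the neighbours of its 3x3 window whose colour distance falls under a threshold, uses the resulting degree to judge whether the pixel lies in a flat region, and soft-switches between a stronger and a milder smoother accordingly; the threshold, the two filters and the mixing rule are all assumptions.

        # Illustrative sketch of the general idea only (not the thesis' model):
        # for each pixel, connect it to the neighbours of its 3x3 window whose
        # colour distance is under a threshold; the resulting degree classifies
        # the pixel as flat or edge/detail and soft-switches between a stronger
        # and a milder smoother. Threshold, filters and mixing rule are assumed.
        import numpy as np

        def classify_and_smooth(img, threshold=20.0):
            h, w, _ = img.shape
            out = img.astype(float).copy()
            for i in range(1, h - 1):
                for j in range(1, w - 1):
                    patch = img[i - 1:i + 2, j - 1:j + 2].reshape(9, 3).astype(float)
                    centre = patch[4]
                    dists = np.linalg.norm(patch - centre, axis=1)
                    degree = int(np.sum(dists < threshold)) - 1   # drop self-loop
                    strong = patch.mean(axis=0)                   # strong smoother
                    mild = 0.5 * centre + 0.5 * strong            # milder smoother
                    alpha = degree / 8.0            # high degree -> flat region
                    out[i, j] = alpha * strong + (1 - alpha) * mild
            return out.astype(np.uint8)

        noisy = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
        print(classify_and_smooth(noisy).shape)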

    Studies on SI engine simulation and air/fuel ratio control systems design

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The more stringent Euro 6 and LEV III emission standards take effect in 2014 and 2015 respectively. Accurate air/fuel ratio control can effectively reduce vehicle emissions. Simulation of the engine dynamic system is a very powerful method for developing and analysing engines and engine controllers. Currently, most engine air/fuel ratio control uses a look-up table combined with proportional and integral (PI) control, which is not robust to system uncertainty and time-varying effects. This thesis first develops a simulation package for a port-injection spark-ignition engine; the package includes engine dynamics, vehicle dynamics and a driving-cycle selection module. The simulation results are very close to data obtained from laboratory experiments. New controllers have been proposed to control the air/fuel ratio in spark-ignition engines so as to maximize fuel economy while minimizing exhaust emissions. PID control and fuzzy control have been combined into a fuzzy PID controller, and the effectiveness of this new controller has been demonstrated by simulation tests. A new neural-network-based predictive controller is then designed for further performance improvements. It is based on the combination of inverse control and predictive control methods. The network is trained offline, with the control output modified to compensate for control errors. The simulation evaluations have shown that the new neural controller can greatly improve air/fuel ratio control performance. The tests also revealed that the improved AFR control performance can effectively restrict harmful engine emissions into the atmosphere; these reduced emissions are important for satisfying the more stringent emission standards.
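
    A minimal sketch of a fuzzy PID air/fuel ratio loop in the spirit of the controller described above: the error and its derivative are fuzzified with triangular memberships and a tiny rule base rescales the PID gains before the fuel trim is computed. The base gains, membership breakpoints and the toy first-order engine response are illustrative assumptions, not the thesis design.

        # Illustrative sketch only: a PID air/fuel ratio loop whose gains are
        # rescaled by a tiny triangular fuzzy rule base on the error and its
        # derivative. Base gains, memberships and the toy first-order engine
        # response are assumptions, not the thesis design.
        AFR_TARGET = 14.7
        Kp0, Ki0, Kd0 = 0.05, 0.01, 0.002        # base PID gains (assumed)

        def tri(x, a, b, c):
            # triangular membership on [a, c] with peak at b
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_gains(err, derr):
            big = tri(abs(err), 0.5, 2.0, 4.0)       # "error is large"
            small = tri(abs(err), -0.5, 0.0, 1.0)    # "error is small"
            steady = tri(abs(derr), -0.1, 0.0, 0.5)  # "error barely changing"
            kp = Kp0 * (1.0 + big)                   # push harder on large error
            ki = Ki0 * (1.0 + small * steady)        # integrate near the target
            kd = Kd0 * (1.0 + big * (1.0 - steady))  # damp fast large swings
            return kp, ki, kd

        integral, prev_err, afr, dt = 0.0, 0.0, 13.0, 0.01
        for _ in range(1000):
            err = AFR_TARGET - afr
            derr = (err - prev_err) / dt
            kp, ki, kd = fuzzy_gains(err, derr)
            integral += err * dt
            fuel_trim = kp * err + ki * integral + kd * derr
            afr += (10.0 * fuel_trim - 0.5 * (afr - 13.0)) * dt   # toy engine
            prev_err = err
        print(round(afr, 2))                         # AFR after 10 s of the loop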
