Garment smoothness appearance evaluation through computer vision
The measurement and evaluation of the appearance of wrinkling in textile products after domestic washing and drying is currently performed by comparing the fabric with standard replicas. This kind of evaluation has certain drawbacks, the most significant of which are its subjectivity and its limitations when used with garments. In this paper we present an automated wrinkling evaluation system. The system can process fabrics as well as any type of garment, independent of the size or pattern of the material. It allows different parts of the garment to be labeled; since different garment parts have different influence on human perception, this labeling enables the use of weighting to improve the correlation with the human visual system. The system has been tested with different garments, showing good performance and correlation with human perception. © The Author(s) 2012.
Silvestre-Blanes, J.; Berenguer Sebastiá, J. R.; Pérez Llorens, R.; Miralles, I.; Moreno Canton, J. (2012). Garment smoothness appearance evaluation through computer vision. Textile Research Journal, 82(3), 299-309. doi:10.1177/0040517511424530
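The part-weighting idea can be sketched as a weighted average of per-part wrinkle scores. The function, part names, and weight values below are hypothetical illustrations, not the paper's actual scheme:

```python
def weighted_smoothness(part_scores, part_weights):
    """Combine per-part wrinkle scores into one garment grade.

    part_scores / part_weights: dicts keyed by garment part.
    The weights stand in for the perceptual importance of each
    part; the paper fits such weights against human ratings.
    """
    total_weight = sum(part_weights[p] for p in part_scores)
    weighted_sum = sum(part_scores[p] * part_weights[p] for p in part_scores)
    return weighted_sum / total_weight

# Hypothetical example: the front panel influences perception
# twice as much as a sleeve.
scores = {"front": 4.0, "sleeve": 2.0}   # replica-style grades per part
weights = {"front": 2.0, "sleeve": 1.0}  # illustrative perceptual weights
grade = weighted_smoothness(scores, weights)
```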
Two-Dimensional EspEn: A New Approach to Analyze Image Texture by Irregularity
Image processing plays a relevant role in many industries, where the main challenge is to extract specific features from images. Texture, in particular, characterizes the occurrence of a pattern over the spatial distribution of pixel intensities, and has been applied in classification and segmentation tasks. Several feature extraction methods have therefore been proposed in recent decades, but few of them rely on entropy, a measure of uncertainty, and entropy algorithms have been little explored for two-dimensional data. There is nevertheless growing interest in developing algorithms that overcome current limits, since Shannon entropy does not consider spatial information and SampEn2D produces unreliable values for small image sizes. We introduce a new algorithm, EspEn (Espinosa Entropy), to measure the irregularity present in two-dimensional data; the calculation requires setting three parameters: m (length of the square window), r (tolerance threshold), and ρ (percentage of similarity). Three experiments were performed: the first two used simulated images contaminated with different noise levels, and the last used grayscale images from the Normalized Brodatz Texture (NBT) database. First, we compared the performance of EspEn against Shannon entropy and SampEn2D. Second, we evaluated the dependence of EspEn on variations of the parameters m, r, and ρ. Third, we evaluated the EspEn algorithm on NBT images. The results revealed that EspEn can discriminate images of different sizes and degrees of noise. Finally, EspEn provides an alternative algorithm for quantifying irregularity in 2D data; the recommended parameters for best performance are m = 3, r = 20, and ρ = 0.7.
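The role of the three parameters can be illustrated with a toy implementation. This is not the authors' exact formulation, only a hypothetical sketch of the general idea: count pairs of m x m windows whose pixel values agree within tolerance r on at least a fraction ρ of positions, and report the negative log of that proportion.

```python
import math

def espen_sketch(img, m=3, r=20, rho=0.7):
    """Illustrative 2D irregularity measure (NOT the published EspEn).

    img: 2D list of grayscale values. Two m x m windows count as
    "similar" when at least rho of their pixels differ by <= r.
    """
    h, w = len(img), len(img[0])
    # Collect every m x m window, flattened to a list of pixels.
    windows = [
        [img[i + a][j + b] for a in range(m) for b in range(m)]
        for i in range(h - m + 1) for j in range(w - m + 1)
    ]
    similar = total = 0
    for p in range(len(windows)):
        for q in range(p + 1, len(windows)):
            total += 1
            close = sum(1 for x, y in zip(windows[p], windows[q])
                        if abs(x - y) <= r)
            if close / (m * m) >= rho:
                similar += 1
    if similar == 0:
        return float("inf")  # no similar pairs: maximal irregularity
    return -math.log(similar / total)
```

A perfectly flat image yields 0 (every window pair is similar), while a high-contrast pattern such as a checkerboard yields a strictly positive value.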
Weighted splicing systems
In this paper we introduce a new variant of splicing systems, called weighted splicing systems, and establish some basic properties of the language families generated by this type of splicing system. We show that a simple extension of splicing systems with weights can increase the computational power of splicing systems with finite components.
Evolved transistor array robot controllers
For the first time, a field-programmable transistor array (FPTA) was used to evolve robot control circuits directly in analog hardware. Controllers were successfully incrementally evolved for a physical robot engaged in a series of visually guided behaviours, including finding a target in a complex environment where the goal was hidden from most locations. Circuits for recognising spoken commands were also evolved, and these were used in conjunction with the controllers to enable voice control of the robot, triggering behavioural switching. Poor-quality visual sensors were deliberately used to test the ability of evolved analog circuits to deal with noisy, uncertain data in real time. Visual features were coevolved with the controllers to achieve dimensionality reduction and feature extraction and selection automatically, in an integrated way. An efficient new method was developed for simulating the robot in its visual environment, which allowed controllers to be evaluated in a simulation connected to the FPTA; the controllers then transferred seamlessly to the real world. The circuit replication issue was also addressed in experiments where circuits were evolved to function correctly in multiple areas of the FPTA. A methodology was developed to analyse the evolved circuits, which provided insights into their operation. Comparative experiments demonstrated the superior evolvability of the transistor array medium.
Genetic programming applied to morphological image processing
This thesis presents three approaches to the automatic design of algorithms for processing binary images, based on the Genetic Programming (GP) paradigm. In the first approach the algorithms are designed using the basic Mathematical Morphology (MM) operators, i.e. erosion and dilation, with a variety of Structuring Elements (SEs). GP is used to design algorithms that convert a binary image into another containing just a particular characteristic of interest. In the study we tested two similarity fitness functions, training sets with different numbers of elements, and different sizes of training images, over three different objectives. The results of the first approach showed some success in the evolution of MM algorithms but also identified problems with the amount of computational resources the method required. The second approach uses Sub-Machine-Code GP (SMCGP) and bitwise operators in an attempt to speed up the evolution of the algorithms and to make them both feasible and effective. The SMCGP approach succeeded in speeding up the computation but did not improve the quality of the obtained algorithms. The third approach combines logical and morphological operators in an attempt to improve the quality of the automatically designed algorithms. The results obtained provide empirical evidence that the evolution of high-quality MM algorithms using GP is possible and that this technique has a broad potential that should be explored further. The thesis also includes an analysis of the potential of GP and other Machine Learning techniques for solving the general problem of Signal Understanding by means of exploring Mathematical Morphology.
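The two basic MM operators that the evolved algorithms are built from can be sketched on binary images stored as nested lists. The representation and the cross-shaped SE are illustrative choices, not the thesis's implementation:

```python
# Erosion: a pixel survives only if the whole SE fits inside the foreground.
def erode(img, se):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = 1 if all(
                0 <= i + di < h and 0 <= j + dj < w and img[i + di][j + dj]
                for di, dj in se) else 0
    return out

# Dilation: every foreground pixel stamps the SE onto the output.
# (For a symmetric SE like the cross below, reflecting the SE,
# as the formal definition requires, changes nothing.)
def dilate(img, se):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if img[i][j]:
                for di, dj in se:
                    if 0 <= i + di < h and 0 <= j + dj < w:
                        out[i + di][j + dj] = 1
    return out

CROSS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # 3x3 cross SE
```

Composing them gives the classic derived operators: erode-then-dilate is an opening, which removes isolated foreground pixels.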
Evolutionary Optimization Techniques for 3D Simultaneous Localization and Mapping
Mención Internacional en el título de doctor.
Mobile robots are increasingly used to move through indoor and outdoor environments, passing from teleoperated applications to autonomous ones such as exploration or navigation. For a robot to move through a particular location, it needs to gather information about the scenario using sensors. These sensors let the robot observe, depending on the sensor data type. Cameras mostly give information in two dimensions, with colors and pixels representing an image. Range sensors give distances from the robot to obstacles. Depth cameras mix both technologies to extend their information to three dimensions. Light Detection and Ranging (LiDAR) also provides distance information, but extends its range to planes and three dimensions with high precision. Mobile robots therefore use these sensors to scan the scenario while moving. If the robot already has a map, the sensors measure and the robot matches observed features to features on the map in order to localize itself. Humans have used maps as a specialized form of representing the environment for more than 5000 years, and maps remain an important piece of information in daily life. Maps are used to navigate from one place to another, to localize something inside some boundaries, or as a form of documentation of essential features. So, naturally, an intuitive way of making an autonomous mobile robot is to use geometrical information maps to represent the environment. On the other hand, if the robot does not have a previous map, it must build one while moving around. To achieve this, the robot combines its sensor information with the odometry information. However, sensors have their own flaws due to precision, calibration, or accuracy. Furthermore, moving a robot has physical constraints, and faults may occur randomly, like wheel drift or mechanical miscalibration, which can make the odometers fail in their measurements and cause misalignment during map building. A novel technique was presented in the mid-90s to solve this problem and overcome sensor uncertainty while the robot builds the map: the Simultaneous Localization and Mapping (SLAM) algorithm. Its goal is to build a map while the robot's position is corrected based on the information of two or more consecutive scans matched together, i.e. by finding the rigid registration vector between them. This algorithm has been broadly studied and developed for almost 25 years. Nonetheless, it remains highly relevant for innovation, modification, and adaptation due to advances in new sensors and the complexity of the scenarios in emerging mobile robotics applications. The scan matching algorithm aims to find a pose vector representing the transformation, or movement, between two robot observations by finding the best possible value of an equation that scores the quality of a transformation; that is, it searches for a solution in an optimal way. Typically this optimization has been solved with classical algorithms, like Newton's method or gradient and second-derivative formulations, but these require an initial guess that points the algorithm in the right direction, most of the time obtained from odometers or inertial sensors. However, it is not always possible to have or to trust this information, as some scenarios are complex and sensors fail. To solve this problem, this research presents the use of evolutionary optimization algorithms: meta-heuristics based on iterative evolution that mimic natural optimization processes and need no prior information to search a limited range of solutions against a fitness function. The main goal of this dissertation is to study, develop, and prove the benefits of evolutionary optimization algorithms in simultaneous localization and mapping for mobile robots in six-degrees-of-freedom scenarios using LiDAR sensor information. This work introduces several evolutionary algorithms for scan matching, proposes a mixed fitness function for registration, solves simultaneous localization and mapping in different scenarios, implements loop closure and error relaxation, and proves its performance at indoors,
outdoors and underground mapping applications.
Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática, Universidad Carlos III de Madrid. Thesis committee: Gerardo Fernández López (chair), María Dolores Blanco Rojas (secretary), David Álvarez Sánche (member)
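The idea of scan matching by evolutionary search, with no initial guess from odometry, can be sketched in 2D (the dissertation itself works in six degrees of freedom with LiDAR data). The differential-evolution scheme, the nearest-neighbour fitness, and all parameter values below are illustrative assumptions, not the dissertation's algorithms:

```python
import math
import random

def transform(scan, pose):
    """Apply a 2D rigid transform (x, y, theta) to a list of points."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return [(c * px - s * py + x, s * px + c * py + y) for px, py in scan]

def fitness(scan, ref, pose):
    """Mean nearest-neighbour distance: lower means better alignment."""
    moved = transform(scan, pose)
    return sum(min(math.dist(p, q) for q in ref) for p in moved) / len(moved)

def de_scan_match(scan, ref, pop=30, gens=120, f=0.7, cr=0.9, seed=1):
    """Differential evolution over (x, y, theta); needs no initial guess."""
    rng = random.Random(seed)
    bounds = [(-2.0, 2.0), (-2.0, 2.0), (-math.pi, math.pi)]
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    cost = [fitness(scan, ref, ind) for ind in P]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            # DE/rand/1 mutation with binomial crossover.
            trial = [P[i][d] if rng.random() > cr
                     else P[a][d] + f * (P[b][d] - P[c][d])
                     for d in range(3)]
            tc = fitness(scan, ref, trial)
            if tc < cost[i]:  # greedy selection
                P[i], cost[i] = trial, tc
    best = min(range(pop), key=cost.__getitem__)
    return P[best], cost[best]
```

Because the population samples the whole pose range, the search does not depend on an odometry seed, which is the property the dissertation exploits.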
BAYESIAN MODELLING OF ULTRA HIGH-FREQUENCY FINANCIAL DATA
The availability of ultra high-frequency (UHF) transaction data has revolutionised data processing and statistical modelling techniques in finance. The unique characteristics of such data, e.g. the discrete structure of price changes, unequally spaced time intervals, and multiple transactions, have introduced new theoretical and computational challenges.
In this study, we develop a Bayesian framework for modelling integer-valued variables
to capture the fundamental properties of price change. We propose the application of the
zero inflated Poisson difference (ZPD) distribution for modelling UHF data and assess
the effect of covariates on the behaviour of price change. For this purpose, we present
two modelling schemes; the first one is based on the analysis of the data after the market
closes for the day and is referred to as off-line data processing. In this case, the Bayesian
interpretation and analysis are undertaken using Markov chain Monte Carlo methods.
The second modelling scheme introduces the dynamic ZPD model which is implemented
through Sequential Monte Carlo methods (also known as particle filters). This procedure
enables us to update our inference from data as new transactions take place and is known
as online data processing. We apply our models to a set of FTSE100 index changes. Based on the probability integral transform, modified for the case of integer-valued random variables, we show that our models explain the observed distribution of price changes well. We then apply the deviance information criterion and introduce a sequential version of it for model comparison in the off-line and online settings, respectively. Moreover, in order to add more flexibility to the tails of the ZPD distribution, we introduce the zero-inflated generalised Poisson difference distribution and outline its possible application to modelling UHF data.
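A zero-inflated Poisson difference variable is the difference of two independent Poisson counts (a Skellam distribution) with extra probability mass at zero. A minimal sketch of the pmf, using a truncated series for the Skellam term (the truncation length is an implementation choice, not part of the model):

```python
import math

def pois(lam, n):
    """Poisson pmf P(N = n) for rate lam."""
    return math.exp(-lam) * lam ** n / math.factorial(n)

def skellam_pmf(k, lam1, lam2, terms=60):
    """P(N1 - N2 = k) for independent N1 ~ Poi(lam1), N2 ~ Poi(lam2).

    Exact value is an infinite series; here it is truncated at `terms`.
    """
    lo = max(0, -k)
    return sum(pois(lam1, n + k) * pois(lam2, n)
               for n in range(lo, lo + terms))

def zpd_pmf(k, omega, lam1, lam2):
    """Zero-inflated Poisson difference: mix weight omega on {0}."""
    base = (1.0 - omega) * skellam_pmf(k, lam1, lam2)
    return base + (omega if k == 0 else 0.0)
```

The inflation parameter omega captures the excess of zero price changes in UHF data, while lam1 and lam2 govern the upward and downward jump intensities.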
Fast Detection of Application Protocols
This master's thesis focuses on the classification of application protocols based on application data, i.e. data from layer L7 of the ISO/OSI model. The aim is to design a classifier for the SDM (Software Defined Monitoring) system that can be deployed on links with throughput up to 100 Gb/s while classifying with the fewest possible errors. The proposed classifier consists of two parts. The first part comprises encoders for encoding selected characters. The second part is an evaluation circuit that detects, in the output of the first part, strings characteristic of the individual application protocols. The characters considered for the encoders and the strings characterizing the protocols are derived from a statistical analysis of application-protocol data. The classifier itself is designed so that it can be implemented in an FPGA and allows the set of application protocols targeted for classification to be modified. The quality of the classifier is tested on real network data, and the classification results are compared with current methods for the classification of application protocols.
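A software analogue of the two-stage idea (select characters, then detect characteristic strings) can be sketched as prefix matching against a signature table. The signatures below are common textbook examples, not the thesis's statistically derived set, and the FPGA design would evaluate them in parallel rather than in a loop:

```python
# Hypothetical characteristic strings; the thesis derives its own set
# from a statistical analysis of application-protocol payloads.
SIGNATURES = {
    b"GET ":  "HTTP",
    b"POST":  "HTTP",
    b"SSH-":  "SSH",
    b"220 ":  "FTP/SMTP greeting",
}

def classify(payload: bytes) -> str:
    """Return the protocol whose characteristic prefix matches, if any."""
    for signature, protocol in SIGNATURES.items():
        if payload.startswith(signature):
            return protocol
    return "unknown"
```

Editing `SIGNATURES` mirrors the reconfigurability requirement: the set of detected protocols can change without redesigning the matching logic.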