1,665 research outputs found
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
A Survey on Evolutionary Computation for Computer Vision and Image Analysis: Past, Present, and Future Trends
Computer vision (CV) is a large and important field in artificial intelligence covering a wide range of applications. Image analysis is a major task in CV aiming to extract, analyse and understand the visual content of images. However, image-related tasks are very challenging due to many factors, e.g., high variations across images, high dimensionality, the need for domain expertise, and image distortions. Evolutionary computation (EC) approaches have been widely used for image analysis with significant success. However, there is no comprehensive survey of existing EC approaches to image analysis. To fill this gap, this paper provides a comprehensive survey covering all essential EC approaches to important image analysis tasks, including edge detection, image segmentation, image feature analysis, image classification, object detection, and others. This survey aims to provide a better understanding of evolutionary computer vision (ECV) by discussing the contributions of different approaches and exploring how and why EC is used for CV and image analysis. The applications, challenges, issues, and trends associated with this research field are also discussed and summarised to provide further guidelines and opportunities for future research.
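As an illustration of how EC can be applied to one of the image analysis tasks the survey covers, the sketch below uses a simple (mu + lambda) evolution strategy to search for a binary segmentation threshold that maximises Otsu's between-class variance. This is a minimal illustrative example on synthetic pixel data, not a method taken from the survey itself; the population size, mutation scale, and synthetic intensities are all assumptions.

```python
import random

def between_class_variance(pixels, t):
    """Otsu's between-class variance for threshold t (higher is better)."""
    lo = [p for p in pixels if p < t]
    hi = [p for p in pixels if p >= t]
    if not lo or not hi:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
    return w0 * w1 * (m0 - m1) ** 2

def evolve_threshold(pixels, pop_size=20, generations=40, seed=0):
    """(mu + lambda) evolution strategy over a single threshold value."""
    rng = random.Random(seed)
    pop = [rng.uniform(0, 255) for _ in range(pop_size)]
    for _ in range(generations):
        # Gaussian mutation of each parent; keep the best pop_size individuals.
        children = [min(255.0, max(0.0, t + rng.gauss(0, 10))) for t in pop]
        pop = sorted(pop + children,
                     key=lambda t: between_class_variance(pixels, t),
                     reverse=True)[:pop_size]
    return pop[0]

# Synthetic bimodal "image": dark background around 50, bright object around 200.
rng = random.Random(1)
pixels = ([rng.gauss(50, 10) for _ in range(500)]
          + [rng.gauss(200, 10) for _ in range(500)])
best_t = evolve_threshold(pixels)  # lands in the gap between the two modes
```

For a single threshold an exhaustive scan over 256 grey levels would of course suffice; the point of the sketch is the EC search loop itself, which scales to the higher-dimensional encodings (filter parameters, feature subsets, classifier structures) discussed in the survey.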
Stretching the Rules: Monocentric Chromosomes with Multiple Centromere Domains
The centromere is a functional chromosome domain that is essential for faithful chromosome segregation during cell division and that can be reliably identified by the presence of the centromere-specific histone H3 variant CenH3. In monocentric chromosomes, the centromere is characterized by a single CenH3-containing region within a morphologically distinct primary constriction. This region usually spans up to a few Mbp and is composed mainly of centromere-specific satellite DNA common to all chromosomes of a given species. In holocentric chromosomes, there is no primary constriction; the centromere is composed of many CenH3 loci distributed along the entire length of a chromosome. Using correlative fluorescence light microscopy and high-resolution electron microscopy, we show that pea (Pisum sativum) chromosomes exhibit remarkably long primary constrictions that contain 3-5 distinct CenH3-containing regions, a novelty in centromere organization. In addition, we estimate that the size of the chromosome segment delimited by the two outermost domains varies between 69 Mbp and 107 Mbp, several times larger than any known centromere length. These domains are almost entirely composed of repetitive DNA sequences belonging to 13 distinct families of satellite DNA and one family of centromeric retrotransposons, all of which are unevenly distributed among pea chromosomes. We present the centromeres of Pisum as novel "meta-polycentric" functional domains. Our results demonstrate that the organization and DNA composition of functional centromere domains can be far more complex than previously thought, do not require single repetitive elements, and do not require single centromere domains in order to segregate properly. Based on these findings, we propose Pisum as a useful model for investigating centromere architecture and the still poorly understood role of repetitive DNA in centromere evolution, determination, and function.
Evolutionary Optimization Techniques for 3D Simultaneous Localization and Mapping
Mención Internacional en el título de doctor (International Doctorate mention).
Mobile robots are increasingly used in indoor and outdoor environments, moving from teleoperated applications to autonomous ones such as exploration and navigation. For a robot to move through a particular location, it needs to gather information about the scenario using sensors. What these sensors let the robot observe depends on the sensor data type. Cameras mostly give two-dimensional information, with colours and pixels representing an image. Range sensors give distances from the robot to obstacles. Depth cameras mix both technologies to extend this information to three dimensions. Light Detection and Ranging (LiDAR) also measures distances to the sensor, but extends its range to planes and three dimensions with high precision. Mobile robots therefore use these sensors to scan the scenario while moving. If the robot already has a map, the sensors measure, and the robot matches observed features to features on the map to localize itself. Humans have used maps as a specialized form of representing the environment for more than 5000 years, and they have become an essential piece of everyday information. Maps are used to navigate from one place to another, to localize something within some boundaries, or as a form of documentation of essential features. So, naturally, an intuitive way of building an autonomous mobile robot is to use maps of geometric information to represent the environment. On the other hand, if the robot does not have a previous map, it must build one while moving around. To achieve this task, the robot combines the range-sensor information with the odometry information. However, sensors have their own flaws due to precision, calibration, or accuracy. Furthermore, moving a robot has physical constraints, and faults may occur randomly, such as wheel drift or mechanical miscalibration, which can make the odometry fail and cause misalignment during map building. A technique introduced in the mid-90s addresses this problem and overcomes sensor uncertainty while the robot builds the map: the Simultaneous Localization and Mapping (SLAM) algorithm. Its goal is to build a map while correcting the robot's position based on the information of two or more consecutive scans matched together, i.e., by finding the rigid registration vector between them. This algorithm has been broadly studied and developed for almost 25 years. Nonetheless, it remains highly relevant, with ongoing innovations, modifications, and adaptations driven by advances in new sensors and the complexity of the scenarios in emerging mobile robotics applications. The scan matching algorithm aims to find a pose vector representing the transformation, or movement, between two robot observations by finding the best possible value of an objective function that scores the quality of a transformation; that is, it searches for a solution in an optimal way. Typically, this optimization has been solved using classical algorithms, such as Newton's method or gradient- and second-derivative-based formulations, yet these require an initial guess or initial state that points the algorithm in the right direction, most of the time obtained from odometers or inertial sensors. However, it is not always possible to have or trust this information, as some scenarios are complex and sensors fail. To solve this problem, this research presents the use of evolutionary optimization algorithms: meta-heuristics based on iterative evolution that mimic natural optimization processes and need no prior information to search a bounded range of solutions for the optimum of a fitness function. The main goal of this dissertation is to study, develop, and prove the benefits of evolutionary optimization algorithms in simultaneous localization and mapping for mobile robots in six-degrees-of-freedom scenarios using LiDAR sensor information. This work introduces several evolutionary algorithms for scan matching, proposes a mixed fitness function for registration, solves simultaneous localization and mapping in different scenarios, implements loop closure and error relaxation, and proves its performance in indoor, outdoor, and underground mapping applications.
Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática, Universidad Carlos III de Madrid. Thesis committee: President: Gerardo Fernández López; Secretary: María Dolores Blanco Rojas; Member: David Álvarez Sánche
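The scan matching step described in the abstract, finding a rigid registration vector between two observations without an initial guess, can be sketched with a differential-evolution search over a 2-D pose (tx, ty, theta). This is a minimal illustrative example, not the dissertation's actual 6-DoF LiDAR method; the L-shaped point set, the nearest-neighbour fitness function, and the DE parameters are all assumptions.

```python
import math
import random

def transform(points, tx, ty, theta):
    """Apply a 2-D rigid transform (rotation theta, then translation tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def fitness(pose, scan, reference):
    """Sum of squared nearest-neighbour distances after transforming `scan`."""
    moved = transform(scan, *pose)
    return sum(min((px - qx) ** 2 + (py - qy) ** 2 for qx, qy in reference)
               for px, py in moved)

def de_scan_match(scan, reference, pop_size=30, generations=60, seed=0):
    """Differential-evolution search for the rigid registration vector."""
    rng = random.Random(seed)
    bounds = [(-2.0, 2.0), (-2.0, 2.0), (-math.pi / 4, math.pi / 4)]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [fitness(p, scan, reference) for p in pop]
    F, CR = 0.7, 0.9  # mutation weight and crossover rate
    for _ in range(generations):
        for i in range(pop_size):
            # Mutate three distinct individuals, cross with the target, and
            # keep the trial only if it improves the fitness (greedy selection).
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[k] + F * (b[k] - c[k]) if rng.random() < CR else pop[i][k]
                     for k in range(3)]
            tc = fitness(trial, scan, reference)
            if tc < cost[i]:
                pop[i], cost[i] = trial, tc
    return pop[min(range(pop_size), key=cost.__getitem__)]

# An L-shaped "scan" and the same points moved by a known pose; DE recovers it.
scan = ([(0.1 * i, 0.0) for i in range(10)]
        + [(0.0, 0.1 * i) for i in range(1, 10)])
true_pose = (0.5, -0.3, 0.2)
reference = transform(scan, *true_pose)
est = de_scan_match(scan, reference)  # estimated (tx, ty, theta)
```

Note that, unlike Newton-type solvers, the search starts from a random population over the whole bounded pose space, which is exactly the property the abstract motivates: no odometry prior is needed.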
Advanced Brain Tumour Segmentation from MRI Images
Magnetic resonance imaging (MRI) is a widely used medical technology for the diagnosis of various tissue abnormalities and the detection of tumours. Active development in computerized medical image segmentation has played a vital role in scientific research, helping doctors to deliver the necessary treatment more easily and with faster decision making. Brain tumour segmentation is a hot topic at the intersection of information technology and biomedical engineering. It is motivated by the need to assess tumour growth and treatment responses, support computer-based surgery and radiation therapy planning, and develop tumour growth models. Therefore, computer-aided diagnostic systems are valuable in medical treatment, reducing the workload of doctors and giving accurate results. This chapter explains the causes of brain tumours, awareness of brain tumour segmentation and its classification, the MRI scanning process and its operation, brain tumour classifications, and different segmentation methodologies.
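The simplest of the segmentation methodologies such a chapter typically surveys is intensity thresholding. The sketch below applies it to a synthetic 2-D slice; the intensity values, threshold, and blob layout are illustrative assumptions, not real MRI data.

```python
def segment(slice_, threshold):
    """Return a binary mask: 1 where intensity exceeds the threshold."""
    return [[1 if v > threshold else 0 for v in row] for row in slice_]

# Synthetic 5x5 "slice": a bright 2x2 blob (intensity 200) on a dark
# background (intensity 30), standing in for a hyperintense lesion.
slice_ = [[30] * 5 for _ in range(5)]
for r in (1, 2):
    for c in (2, 3):
        slice_[r][c] = 200

mask = segment(slice_, threshold=100)
lesion_area = sum(sum(row) for row in mask)  # number of segmented pixels
```

Real pipelines add bias-field correction, adaptive threshold selection, and connected-component or deformable-model post-processing on top of this basic masking step.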
Unmanned Aerial Vehicles (UAVs) in environmental biology: A Review
Acquiring information about the environment is a key step in every study in the field of environmental biology, at levels from an individual species to communities and biomes. However, obtaining information about the environment is frequently difficult because of, for example, phenological timing, the spatial distribution of a species, or the limited accessibility of a particular area for field survey. Moreover, remote sensing technology, which enables observation of the Earth's surface and is currently very common in environmental research, has many limitations, such as insufficient spatial, spectral and temporal resolution and a high cost of data acquisition. Since the 1990s, researchers have been exploring the potential of different types of unmanned aerial vehicles (UAVs) for monitoring the Earth's surface. The present study reviews recent scientific literature dealing with the use of UAVs in environmental biology. Amongst numerous papers, short communications and conference abstracts, we selected 110 original studies of how UAVs can be used in environmental biology and which organisms can be studied in this manner. Most of these studies concerned the use of UAVs to measure vegetation parameters such as crown height, volume and number of individuals (14 studies) and to quantify the spatio-temporal dynamics of vegetation changes (12 studies). UAVs were also frequently applied to count birds and mammals, especially those living in the water. The analytical part of the present study was divided into the following sections: (1) detecting, assessing and predicting threats to vegetation, (2) measuring the biophysical parameters of vegetation, (3) quantifying the dynamics of changes in plants and habitats and (4) population and behaviour studies of animals. Finally, we synthesised all this information to show, amongst other things, the advances in environmental biology made possible by UAV applications.
Considering that 33% of the studies found and included in this review were published in 2017 and 2018, the number and variety of applications of UAVs in environmental biology are expected to increase in the future.
Modeling and Simulation in Engineering
This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world about various applications of modeling and simulation in the product design process across engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to enable real-time simulation) without altering the precision of the results.