Proceedings of the ECCS 2005 satellite workshop: embracing complexity in design - Paris 17 November 2005
Embracing complexity in design is one of the critical issues and challenges of the 21st century. As the realization grows that design activities and artefacts display properties associated with complex adaptive systems, so grows the need to use complexity concepts and methods to understand these properties and inform the design of better artefacts. It is a great challenge because complexity science represents an epistemological and methodological shift that promises a holistic approach to the understanding and operational support of design. But design is also a major contributor to complexity research. Design science is concerned with problems that are fundamental in the sciences in general and the complexity sciences in particular. For instance, design has been perceived and studied as a ubiquitous activity inherent in every human activity, as the art of generating hypotheses, as a type of experiment, or as a creative co-evolutionary process. Design science and its established approaches and practices can be a great source of advancement and innovation in complexity science. These proceedings are the result of a workshop organized as part of the activities of a UK government AHRB/EPSRC-funded research cluster called Embracing Complexity in Design (www.complexityanddesign.net) and the European Conference on Complex Systems (complexsystems.lri.fr).
Affective Communication for Socially Assistive Robots (SARs) for Children with Autism Spectrum Disorder: A Systematic Review
Research on affective communication for socially assistive robots has been conducted to
enable physical robots to perceive, express, and respond emotionally. However, the use of affective
computing in social robots has been limited, especially when social robots are designed for children,
and especially those with autism spectrum disorder (ASD). Social robots are based on cognitive-affective models, which allow them to communicate with people following social behaviors and
rules. However, interactions between a child and a robot may change or be different compared to
those with an adult or when the child has an emotional deficit. In this study, we systematically
reviewed studies related to computational models of emotions for children with ASD. We used the
Scopus, WoS, Springer, and IEEE-Xplore databases to answer different research questions related to
the definition, interaction, and design of computational models supported by theoretical psychology
approaches from 1997 to 2021. Our review found 46 articles; not all of the studies considered children, or children with ASD. This research was funded by VRIEA-PUCV, grant number 039.358/202
Multivariate Approaches to Classification in Extragalactic Astronomy
Clustering objects into synthetic groups is a natural activity of any
science. Astrophysics is not an exception and is now facing a deluge of data.
For galaxies, the century-old Hubble classification and the Hubble tuning
fork are still largely in use, together with numerous mono- or bivariate
classifications most often made by eye. However, a classification must be
driven by the data, and sophisticated multivariate statistical tools are used
more and more often. In this paper we review these different approaches in
order to situate them in the general context of unsupervised and supervised
learning. We insist on the astrophysical outcomes of these studies to show that
multivariate analyses provide an obvious path toward a renewal of our
classification of galaxies and are invaluable tools to investigate the physics
and evolution of galaxies.

Comment: Open Access paper. http://www.frontiersin.org/milky_way_and_galaxies/10.3389/fspas.2015.00003/abstract. DOI: 10.3389/fspas.2015.00003
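As an illustrative sketch only (not code from the paper), the unsupervised side of such multivariate classification can be demonstrated with a minimal k-means clustering over a hypothetical two-dimensional feature space of colour index and log stellar mass; the feature names and synthetic data below are invented for the example:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: repeatedly assign points to the nearest centroid,
    then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster went empty
                centroids[i] = tuple(sum(dim) / len(cl) for dim in zip(*cl))
    return centroids, clusters

# Hypothetical galaxy features: (colour index, log stellar mass) for two populations.
rng = random.Random(42)
blue_cloud = [(rng.gauss(0.4, 0.05), rng.gauss(9.5, 0.2)) for _ in range(30)]
red_sequence = [(rng.gauss(0.9, 0.05), rng.gauss(11.0, 0.2)) for _ in range(30)]
centroids, clusters = kmeans(blue_cloud + red_sequence, k=2)
```

Real analyses of the kind the paper reviews work in many more dimensions and use more robust methods (Gaussian mixtures, hierarchical clustering, supervised learning); this sketch only shows the basic mechanics of data-driven grouping.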
A layered control architecture for mobile robot navigation
A Thesis submitted to the University Research Degree Committee in fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY in Robotics.

This thesis addresses the problem of how to control an autonomous mobile robot navigating in indoor environments, in the face of sensor noise, imprecise information, uncertainty and limited response time. The thesis argues that the effective control of autonomous mobile robots can be achieved by organising low-level and higher-level control activities into a layered architecture. The low-level reactive control allows the robot to respond to contingencies quickly, while the higher-level control allows the robot to make longer-term decisions and arrange appropriate sequences for task execution.

The thesis describes the design and implementation of a two-layer control architecture: a task-template-based sequencing layer and a fuzzy-behaviour-based low-level control layer. The sequencing layer works at the higher level of abstraction, interprets a task plan, and mediates and monitors the controlling activities, while the low level performs fast computation in response to dynamic changes in the real world and carries out robust control under uncertainty. The organisation and fusion of fuzzy behaviours are described extensively for the construction of the low-level control system. A learning methodology is also developed to systematically learn fuzzy behaviours and the behaviour-selection network, and therefore solve the difficulties in configuring the low-level control layer.

A two-layer control system has been implemented and used to control a simulated mobile robot performing two tasks in simulated indoor environments. The effectiveness of the layered control and learning methodology is demonstrated through the traces of controlling activities at the two different levels. The results also show a general design methodology: the high level should be used to guide the robot's actions, while the low level takes care of detailed control in the face of sensor noise and environment uncertainty in real time.
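A minimal sketch of the kind of fuzzy-behaviour fusion described above (an assumed illustration, not the thesis's actual implementation; the behaviour names, activation functions and command fields are invented):

```python
def avoid_obstacle(front_dist):
    """Behaviour: slow down and turn away; fuzzy activation rises as
    the obstacle gets closer (fires within 2 m in this toy model)."""
    activation = max(0.0, min(1.0, 1.0 - front_dist / 2.0))
    return activation, {"speed": 0.1, "turn": 0.8}

def go_to_goal(goal_bearing):
    """Behaviour: drive toward the goal, steering by its bearing (rad)."""
    activation = 1.0
    return activation, {"speed": 0.5, "turn": 0.5 * goal_bearing}

def fuse(behaviours):
    """Fuse behaviour commands by a weighted average,
    using each behaviour's fuzzy activation as its weight."""
    total = sum(a for a, _ in behaviours) or 1.0
    keys = behaviours[0][1].keys()
    return {k: sum(a * cmd[k] for a, cmd in behaviours) / total for k in keys}

# Obstacle 0.5 m ahead, goal slightly to the right: the fused command
# blends cautious avoidance with goal seeking.
cmd = fuse([avoid_obstacle(0.5), go_to_goal(0.2)])
```

The design choice this illustrates is that no single behaviour "wins" outright: conflicting recommendations are merged in proportion to how strongly each behaviour's fuzzy precondition holds.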
Decision tree learning for intelligent mobile robot navigation
The replication of human intelligence, learning and reasoning by means of computer
algorithms is termed Artificial Intelligence (AI) and the interaction of such
algorithms with the physical world can be achieved using robotics. The work described in
this thesis investigates the applications of concept learning (an approach which takes its
inspiration from biological motivations and from survival instincts in particular) to robot
control and path planning. The methodology of concept learning has been applied using
learning decision trees (DTs), which induce domain knowledge from a finite set of training
vectors that systematically describe a physical entity and are used to train a robot
to learn new concepts and to adapt its behaviour.
To achieve behaviour learning, this work introduces the novel approach of hierarchical
learning and knowledge decomposition to the frame of the reactive robot architecture.
Following the analogy with survival instincts, the robot is first taught how to survive in
very simple and homogeneous environments, namely a world without any disturbances or
any kind of "hostility". Once this simple behaviour, named a primitive, has been established, the robot is trained to acquire new knowledge to cope with increasingly complex
environments by adding further worlds to its existing knowledge. The repertoire of the
robot behaviours in the form of symbolic knowledge is retained in a hierarchy of clustered
decision trees (DTs) accommodating a number of primitives. To classify robot perceptions,
control rules are synthesised using symbolic knowledge derived from searching the
hierarchy of DTs.
A second novel concept is introduced, namely that of multi-dimensional fuzzy associative
memories (MDFAMs). These are clustered fuzzy decision trees (FDTs) which are trained
locally and accommodate specific perceptual knowledge. Fuzzy logic is incorporated to
deal with inherent noise in sensory data and to merge conflicting behaviours of the DTs.
In this thesis, the feasibility of the developed techniques is illustrated in robot
applications, and their benefits and drawbacks are discussed.
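A minimal illustration of the core idea, inducing a decision tree from training vectors and using it to classify perceptions into actions, might look like the following ID3-style sketch (the sensor features, actions and training data are invented for the example, not taken from the thesis):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def build_tree(rows, labels, features):
    """ID3: recursively split on the feature with the highest information gain."""
    if len(set(labels)) == 1:
        return labels[0]
    if not features:
        return Counter(labels).most_common(1)[0][0]
    def gain(f):
        parts = {}
        for r, l in zip(rows, labels):
            parts.setdefault(r[f], []).append(l)
        return entropy(labels) - sum(len(p) / len(labels) * entropy(p)
                                     for p in parts.values())
    best = max(features, key=gain)
    node = {"feature": best, "children": {}}
    for value in set(r[best] for r in rows):
        sub = [(r, l) for r, l in zip(rows, labels) if r[best] == value]
        srows, slabels = zip(*sub)
        node["children"][value] = build_tree(list(srows), list(slabels),
                                             [f for f in features if f != best])
    return node

def classify(tree, row):
    """Walk the tree from the root until a leaf (an action label) is reached."""
    while isinstance(tree, dict):
        tree = tree["children"][row[tree["feature"]]]
    return tree

# Hypothetical training vectors: binary obstacle sensors -> motor action.
rows = [
    {"left": 0, "front": 0, "right": 0},
    {"left": 0, "front": 1, "right": 0},
    {"left": 1, "front": 1, "right": 0},
    {"left": 0, "front": 1, "right": 1},
]
labels = ["forward", "turn_left", "turn_right", "turn_left"]
tree = build_tree(rows, labels, ["left", "front", "right"])
```

The symbolic tree that results can be read off as control rules (e.g. "if no obstacle ahead, go forward"), which is the property that makes DT-based behaviour knowledge inspectable.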
Computing with Granular Words
Computational linguistics is a sub-field of artificial intelligence; it is an interdisciplinary field dealing with statistical and/or rule-based modeling of natural language from a computational perspective. Traditionally, fuzzy logic is used to deal with fuzziness among single linguistic terms in documents. However, linguistic terms may be related to other types of uncertainty. For instance, when different users search for "cheap hotel" in a search engine, they may need distinct pieces of relevant hidden information such as shopping, transportation, weather, etc. Therefore, this research work focuses on studying granular words and developing new algorithms to process them in order to deal with uncertainty globally. To precisely describe the granular words, a new structure called the Granular Information Hyper Tree (GIHT) is constructed. Furthermore, several technologies are developed to support computing with granular words in spam filtering and query recommendation. Based on simulation results, the GIHT-Bayesian algorithm achieves a more accurate spam-filtering rate than the conventional Naive Bayesian and SVM methods; computing with granular words also generates better recommendation results, based on users' assessments, when applied to a search engine.
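For reference, the conventional Naive Bayesian baseline mentioned above can be sketched in a few lines (the tokenized training messages are invented; the GIHT-Bayesian algorithm itself is not reproduced here):

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (tokens, label). Returns per-class word counts
    and per-class document counts."""
    counts = {"spam": Counter(), "ham": Counter()}
    n = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        n[label] += 1
    return counts, n

def classify(counts, n, tokens):
    """Pick the class maximizing log P(class) + sum log P(word | class),
    with Laplace smoothing so unseen words don't zero the probability."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in counts:
        total = sum(counts[label].values())
        score = math.log(n[label] / sum(n.values()))
        for t in tokens:
            score += math.log((counts[label][t] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Invented toy corpus.
docs = [
    ("cheap hotel deal win".split(), "spam"),
    ("win money now".split(), "spam"),
    ("meeting schedule tomorrow".split(), "ham"),
    ("project meeting notes".split(), "ham"),
]
counts, n = train(docs)
```

The granular-words approach goes beyond this word-independence assumption by grouping related terms; the sketch above only shows the baseline it is compared against.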
Formalized Conceptual Spaces with a Geometric Representation of Correlations
The highly influential framework of conceptual spaces provides a geometric
way of representing knowledge. Instances are represented by points in a
similarity space and concepts are represented by convex regions in this space.
After pointing out a problem with the convexity requirement, we propose a
formalization of conceptual spaces based on fuzzy star-shaped sets. Our
formalization uses a parametric definition of concepts and extends the original
framework by adding means to represent correlations between different domains
in a geometric way. Moreover, we define various operations for our
formalization, both for creating new concepts from old ones and for measuring
relations between concepts. We present an illustrative toy example and sketch a
research project on concept formation that is based on both our formalization
and its implementation.

Comment: Published in the edited volume "Conceptual Spaces: Elaborations and
Applications". arXiv admin note: text overlap with arXiv:1706.06366,
arXiv:1707.02292, arXiv:1707.0516
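A highly simplified sketch of the geometric intuition, fuzzy concept membership decaying with weighted distance from a prototype point, is shown below; the concept, domains and weights are invented for illustration, and this does not implement the paper's fuzzy star-shaped formalization:

```python
import math

def membership(point, prototype, weights, sensitivity=1.0):
    """Fuzzy membership of a point in a concept: exponential decay with
    the weighted Euclidean distance to the concept's prototype (a common
    choice in the conceptual-spaces literature)."""
    dist = math.sqrt(sum(w * (p - q) ** 2
                         for p, q, w in zip(point, prototype, weights)))
    return math.exp(-sensitivity * dist)

# Hypothetical "apple" concept in a (hue, sweetness) similarity space;
# the hue domain is weighted more heavily than sweetness.
apple = {"prototype": (0.8, 0.6), "weights": (1.0, 0.5)}
m_close = membership((0.75, 0.55), **apple)   # near the prototype
m_far = membership((0.1, 0.1), **apple)       # far from the prototype
```

Per-domain weights are what let such models encode that some dimensions matter more for a given concept, one ingredient of the correlation representation the paper develops further.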
On the Stability of Region Count in the Parameter Space of Image Analysis Methods
In this dissertation a novel bottom-up computer vision approach is proposed. This approach is based upon quantifying the stability of the number of regions, or region count, in a multi-dimensional parameter scale-space. The stability analysis comes from the properties of flat areas in the region-count space generated through bottom-up algorithms: thresholding and region growing, hysteresis thresholding, and variance-based region growing. The parameters used can be threshold, region growth, intensity statistics and other low-level parameters.

The advantages and disadvantages of top-down, bottom-up and hybrid computational models are discussed. The approaches of scale-space, perceptual organization and clustering methods in computer vision are also analyzed, and the difference between our approach and these approaches is clarified. An overview of our stable-count idea and the implementation of three algorithms derived from this idea are presented. The algorithms are applied to real-world images as well as simulated signals.

We have developed three experiments based upon our framework of stable region count. The experiments use a flower detector, a peak detector and a retinal-image lesion detector, respectively, to process images and signals. The results from these experiments all suggest that our computer vision framework can solve different image and signal problems and provide satisfactory solutions. In the end, future research directions and improvements are proposed.
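The stable-count idea can be sketched in one dimension of the parameter space: sweep a threshold, count connected regions at each setting, and report the count that persists over the widest flat run. The toy image and code below are invented for illustration, not the dissertation's implementation:

```python
from itertools import groupby

def count_regions(image, threshold):
    """Count 4-connected components of pixels >= threshold (flood fill)."""
    h, w = len(image), len(image[0])
    seen = set()
    count = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and (y, x) not in seen:
                count += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if (cy, cx) in seen:
                        continue
                    seen.add((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            stack.append((ny, nx))
    return count

def stable_count(image, thresholds):
    """Return the region count that persists over the widest run
    (flat area) of consecutive threshold settings."""
    counts = [count_regions(image, t) for t in thresholds]
    runs = [(value, len(list(group))) for value, group in groupby(counts)]
    return max(runs, key=lambda vr: vr[1])[0]

# Toy image: two bright blobs on a dark background.
img = [
    [0, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 0],
    [0, 9, 9, 0, 8, 8],
    [0, 0, 0, 0, 8, 8],
    [0, 0, 0, 0, 0, 0],
]
best = stable_count(img, list(range(1, 10)))  # thresholds 1..9
```

Here the count stays at 2 for thresholds 1 through 8 and drops to 1 only at 9, so the widest plateau identifies 2 as the stable region count; the dissertation generalizes this to multi-dimensional parameter spaces and richer region-growing algorithms.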