87 research outputs found

    Intervention in the social population space of Cultural Algorithm

    Get PDF
    Cultural Algorithms (CA) offer a better way to simulate social and culture-driven agents by introducing the notion of culture into an artificial population. When it comes to mimicking intelligent social beings such as humans, the search for a better fit or global optimum becomes multi-dimensional because of the complexity produced by the relevant system parameters and intricate social behaviour. This research presents an extended CA framework whose architecture provides extensions to the basic CA framework. The major extension is a mechanism for influencing selected individuals in the population space by means of an existing social network, and consequently altering the cultural belief favourably. Another extension was made in the population space by introducing the concept of a social network: the agents in the population are placed into one (or more) networks through which they can communicate and propagate knowledge. Identification and exploitation of such networks is necessary since it may lead to a quicker shift of the cultural norm.
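
    The extended framework above is described only architecturally. As a purely illustrative, hypothetical sketch of the baseline mechanism it builds on, the following Python fragment implements a minimal Cultural Algorithm loop: a belief space is updated from the elite of the population (acceptance) and then biases the next generation (influence), with the social-network extension reduced to agents being nudged toward their best ring neighbour. All names, parameters and the toy objective are assumptions, not taken from the thesis.

    import random

    POP_SIZE, DIMS, GENERATIONS = 30, 2, 50

    def fitness(x):
        # Toy objective (negated sphere function); higher is better.
        return -sum(v * v for v in x)

    population = [[random.uniform(-5, 5) for _ in range(DIMS)] for _ in range(POP_SIZE)]
    # Belief space: per-dimension normative intervals learned from the elite.
    belief = [(-5.0, 5.0)] * DIMS
    # Ring "social network": each agent communicates with its immediate neighbours.
    neighbours = {i: [(i - 1) % POP_SIZE, (i + 1) % POP_SIZE] for i in range(POP_SIZE)}

    for _ in range(GENERATIONS):
        ranked = sorted(range(POP_SIZE), key=lambda i: fitness(population[i]), reverse=True)
        elite = [population[i] for i in ranked[:POP_SIZE // 5]]

        # Acceptance: the elite update the normative component of the belief space.
        belief = [(min(ind[d] for ind in elite), max(ind[d] for ind in elite))
                  for d in range(DIMS)]

        # Influence: offspring are sampled inside the accepted intervals, then
        # pulled toward the best neighbour in their social network.
        new_pop = []
        for i in range(POP_SIZE):
            child = [random.uniform(lo, hi) for lo, hi in belief]
            best_nb = max(neighbours[i], key=lambda j: fitness(population[j]))
            child = [c + 0.5 * (population[best_nb][d] - c) for d, c in enumerate(child)]
            new_pop.append(child)
        population = new_pop

    print("best fitness:", max(fitness(ind) for ind in population))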

    Pre-processing, classification and semantic querying of large-scale Earth observation spaceborne/airborne/terrestrial image databases: Process and product innovations.

    Get PDF
    By the definition of Wikipedia, “big data is the term adopted for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The big data challenges typically include capture, curation, storage, search, sharing, transfer, analysis and visualization”. Proposed by the intergovernmental Group on Earth Observations (GEO), the visionary goal of the Global Earth Observation System of Systems (GEOSS) implementation plan for the years 2005-2015 is the systematic transformation of multi-source Earth Observation (EO) “big data” into timely, comprehensive and operational EO value-adding products and services, submitted to the GEO Quality Assurance Framework for Earth Observation (QA4EO) calibration/validation (Cal/Val) requirements. To date, the GEOSS mission cannot be considered fulfilled by the remote sensing (RS) community. This is tantamount to saying that past and existing EO image understanding systems (EO-IUSs) have been outpaced by the rate of collection of EO sensory big data, whose quality and quantity are ever-increasing. This fact is supported by several observations. For example, no European Space Agency (ESA) EO Level 2 product has ever been systematically generated at the ground segment. By definition, an ESA EO Level 2 product comprises a single-date multi-spectral (MS) image radiometrically calibrated into surface reflectance (SURF) values corrected for geometric, atmospheric, adjacency and topographic effects, stacked with its data-derived scene classification map (SCM), whose thematic legend is general-purpose, user- and application-independent and includes quality layers, such as cloud and cloud-shadow. Since no GEOSS exists to date, present EO content-based image retrieval (CBIR) systems lack EO image understanding capabilities. Hence, no semantic CBIR (SCBIR) system exists to date either, where semantic querying is a synonym of semantics-enabled knowledge/information discovery in multi-source big image databases. In set theory, if set A is a strict superset of (or strictly includes) set B, then A ⊃ B. This doctoral project moved from the working hypothesis that SCBIR ⊃ computer vision (CV), where vision is a synonym of scene-from-image reconstruction and understanding, ⊃ EO image understanding (EO-IU) in operating mode, synonym of GEOSS, ⊃ ESA EO Level 2 product ⊃ human vision. Meaning that a necessary but not sufficient pre-condition for SCBIR is CV in operating mode, this working hypothesis has two corollaries. First, human visual perception, encompassing well-known visual illusions such as the Mach bands illusion, acts as a lower bound of CV within the multi-disciplinary domain of cognitive science, i.e., CV is conditioned to include a computational model of human vision. Second, a necessary but not sufficient pre-condition for a yet-unfulfilled GEOSS development is the systematic generation at the ground segment of ESA EO Level 2 products. Starting from this working hypothesis, the overarching goal of this doctoral project was to contribute to research and technological development (R&D) toward filling the analytic and pragmatic information gap from EO big sensory data to EO value-adding information products and services. This R&D objective was conceived to be twofold. First, to develop an original EO-IUS in operating mode, synonym of GEOSS, capable of systematic ESA EO Level 2 product generation from multi-source EO imagery.
EO imaging sources vary in terms of: (i) platform, either spaceborne, airborne or terrestrial, and (ii) imaging sensor, either (a) optical, encompassing radiometrically calibrated or uncalibrated images, panchromatic or color images, either true- or false-color red-green-blue (RGB), multi-spectral (MS), super-spectral (SS) or hyper-spectral (HS) images, featuring spatial resolution from low (> 1 km) to very high (< 1 m), or (b) synthetic aperture radar (SAR), specifically bi-temporal RGB SAR imagery. The second R&D objective was to design and develop a prototypical implementation of an integrated closed-loop EO-IU for semantic querying (EO-IU4SQ) system as a GEOSS proof of concept in support of SCBIR. The proposed closed-loop EO-IU4SQ system prototype consists of two subsystems for incremental learning. A primary (dominant, necessary but not sufficient) hybrid (combined deductive/top-down/physical model-based and inductive/bottom-up/statistical model-based) feedback EO-IU subsystem in operating mode requires no human-machine interaction to automatically transform, in linear time, a single-date MS image into an ESA EO Level 2 product as initial condition. A secondary (dependent) hybrid feedback EO Semantic Querying (EO-SQ) subsystem is provided with a graphical user interface (GUI) to streamline human-machine interaction in support of spatiotemporal EO big data analytics and SCBIR operations. EO information products generated as output by the closed-loop EO-IU4SQ system monotonically increase their value-added with closed-loop iterations.
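
    The ESA EO Level 2 product referred to above is, by definition, a calibrated surface-reflectance image stacked with a scene classification map and quality layers. As a purely hypothetical illustration of that data structure (not the author's hybrid EO-IU subsystem), the following Python sketch assembles a toy Level-2-like product from a synthetic four-band image; the spectral rules, thresholds and class labels are assumptions.

    import numpy as np

    # Hypothetical single-date multispectral image with 4 bands (B, G, R, NIR),
    # assumed already radiometrically calibrated to surface reflectance in [0, 1].
    H, W = 256, 256
    surf = np.random.rand(4, H, W).astype(np.float32)

    # Toy prior-knowledge classification: simple per-pixel spectral rules stand in
    # for the scene classification map (SCM); labels are illustrative only.
    ndvi = (surf[3] - surf[2]) / (surf[3] + surf[2] + 1e-6)
    scm = np.zeros((H, W), dtype=np.uint8)      # 0 = "other"
    scm[ndvi > 0.4] = 1                         # 1 = "vegetation"
    scm[surf[:3].mean(axis=0) > 0.8] = 2        # 2 = "cloud" (very bright pixels)

    # A Level 2 product stacks the reflectance image, the SCM and quality layers.
    level2_product = {
        "surface_reflectance": surf,
        "scene_classification_map": scm,
        "quality_layers": {"cloud": scm == 2, "vegetation": scm == 1},
    }
    print(level2_product["scene_classification_map"].shape)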

    Development of an autonomous distributed multiagent monitoring system for the automatic classification of end users

    Get PDF
    The purpose of this study is to investigate the feasibility of constructing a software multi-agent based monitoring and classification system and utilizing it to provide automated and accurate classification of end users developing applications in the spreadsheet domain. The result is the Multi-Agent Classification System (MACS). Microsoft .NET Windows Service based agents were utilized to develop the Monitoring Agents of MACS. These agents function autonomously to provide continuous and periodic monitoring of spreadsheet workbooks by content. .NET Windows Communication Foundation (WCF) Services technology was used together with the Service Oriented Architecture (SOA) approach to distribute the agents over the World Wide Web in order to support the monitoring and classification of multiple developers. The Prometheus agent-oriented design methodology and its accompanying Prometheus Design Tool (PDT) were employed for specifying and designing the agents of MACS, and Visual Studio .NET 2008 was used for creating the agency in the Visual C# programming language. MACS was evaluated against classification criteria from the literature, supported by real-time data collected from a target group of Excel spreadsheet developers over a network. The Monitoring Agents were configured to execute automatically, without any user intervention, as Windows service processes in the .NET web server application of the system. These distributed agents listen to and read the contents of Excel spreadsheet development activities in terms of file and author properties, functions and formulas used, and Visual Basic for Applications (VBA) macro code constructs. Data gathered by the Monitoring Agents from various resources over a period of time was collected and filtered by a Database Updater Agent residing in the .NET client application of the system. This agent then transfers and stores the data in an Oracle server database via Oracle stored procedures for further processing that leads to the classification of the end-user developers. Oracle Data Mining classification algorithms (Naive Bayes, Adaptive Naive Bayes, Decision Trees, and Support Vector Machine) were utilized to analyse the results of the data gathering process in order to automate the classification of Excel spreadsheet developers. The accuracy of the predictions achieved by the models was compared; the Naive Bayes classifier achieved the best result, with an accuracy of 0.978. MACS can therefore be utilized to provide a multi-agent based automated classification solution for spreadsheet developers with a high degree of accuracy.
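
    The classifier-comparison step described above used Oracle Data Mining models on features harvested from the monitored spreadsheets. As a rough, hypothetical analogue of that step (using scikit-learn instead of Oracle Data Mining, and synthetic features in place of the data gathered by the Monitoring Agents), a minimal Python sketch might look like this:

    # Hypothetical stand-in for the MACS classifier comparison; the feature
    # matrix is synthetic, not the thesis' collected spreadsheet data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC

    # Toy features, e.g. counts of formulas, functions and VBA constructs per workbook.
    X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                               n_classes=3, random_state=0)

    models = {
        "Naive Bayes": GaussianNB(),
        "Decision Tree": DecisionTreeClassifier(random_state=0),
        "SVM": SVC(),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: mean cross-validated accuracy = {acc:.3f}")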

    Model-Based Environmental Visual Perception for Humanoid Robots

    Get PDF
    The visual perception of a robot should answer two fundamental questions: What? and Where? In order to reply to these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models through sensor transformation, recognition, matching and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.

    How sketches work: a cognitive theory for improved system design

    Get PDF
    Evidence is presented that in the early stages of design or composition the mental processes used by artists for visual invention require a different type of support from those used for visualising a nearly complete object. Most research into machine visualisation has as its goal the production of realistic images which simulate the light pattern presented to the retina by real objects. In contrast, sketch attributes preserve the results of cognitive processing which can be used interactively to amplify visual thought. The traditional attributes of sketches include many types of indeterminacy which may reflect the artist's need to be "vague". Drawing on contemporary theories of visual cognition and neuroscience, this study discusses in detail the evidence for the following functions, which are better served by rough sketches than by the very realistic imagery favoured in machine visualising systems. 1. Sketches are intermediate representational types which facilitate the mental translation between descriptive and depictive modes of representing visual thought. 2. Sketch attributes exploit automatic processes of perceptual retrieval and object recognition to improve the availability of tacit knowledge for visual invention. 3. Sketches are percept-image hybrids. The incomplete physical attributes of sketches elicit and stabilise a stream of super-imposed mental images which amplify inventive thought. 4. By segregating and isolating meaningful components of visual experience, sketches may assist the user to attend selectively to a limited part of a visual task, freeing otherwise over-loaded cognitive resources for visual thought. 5. Sequences of sketches and sketching acts support the short-term episodic memory for cognitive actions. This assists creativity, providing voluntary control over highly practised mental processes which can otherwise become stereotyped. An attempt is made to unite the five hypothetical functions. Drawing on the Baddeley and Hitch model of working memory, it is speculated that the five functions may be related to a limited-capacity monitoring mechanism which makes tacit visual knowledge explicitly available for conscious control and manipulation. It is suggested that the resources available to the human brain for imagining nonexistent objects are a cultural adaptation of visual mechanisms which evolved in early hominids for responding to confusing or incomplete stimuli from immediately present objects and events. Sketches are cultural inventions which artificially mimic aspects of such stimuli in order to capture these shared resources for the different purpose of imagining objects which do not yet exist. Finally, the implications of the theory for the design of improved machine systems are discussed. The untidy attributes of traditional sketches are revealed to include cultural inventions which serve subtle cognitive functions. However, traditional media have many shortcomings which it should be possible to correct with new technology. Existing machine systems for sketching tend to imitate nonselectively the media-bound properties of sketches without regard to the functions they serve. This may prove to be a mistake. It is concluded that new system designs are needed in which meaningfully structured data and specialised imagery amplify, without interference or replacement, the impressive but limited creative resources of the visual brain.

    Uma comparação entre arquiteturas cognitivas : análise teórica e prática

    Get PDF
    Advisor: Ricardo Ribeiro Gudwin. Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: This work presents a theoretical and practical comparison of three of the most popular cognitive architectures: SOAR, CLARION, and LIDA. The theoretical comparison is performed on the basis of a set of cognitive functions supposed to exist in the human cognitive cycle. The practical comparison is performed by applying the same experiment to all architectures, collecting data and comparing the architectures using a set of software quality metrics as a basis. The aim is to emphasize similarities and differences among the models and implementations, with the purpose of advising a newcomer on how to choose the most appropriate architecture for a given application. Master's program in Computer Engineering; Master's degree in Electrical Engineering.

    Transforming structured descriptions to visual representations. An automated visualization of historical bookbinding structures.

    Full text link
    In cultural heritage, the documentation of artefacts can be both iconographic and textual, i.e. both pictures and drawings on the one hand, and text and words on the other, are used for documentation purposes. This research project aims to produce a methodology to automatically transform verbal descriptions of material objects, with a focus on bookbinding structures, into standardized and scholarly sound visual representations. In the last few decades, the recording and management of documentation data about material objects, including bookbindings, has switched from paper-based archives to databases, but sketches and diagrams are a form of documentation still carried out mostly by hand. Diagrams hold some unique information, but often also redundant information already secured through verbal means within the databases. This project proposes a methodology to harness verbal information stored within a database and automatically generate visual representations. A number of projects within the cultural heritage sector have applied semantic modelling to generate graphic outputs from verbal inputs. None of these has considered bookbindings, and none of these relies on information already recorded within databases. Instead, they develop an extra layer of modelling and typically gather more data, specifically for the purpose of generating a pictorial output. In these projects, qualitative data (verbal input) is often mixed with quantitative data (measurements, scans, or other direct acquisition methods) to solve the problems of indeterminateness found in verbal descriptions. Also, none of these projects has attempted to develop a general methodology to ascertain the minimum amount of information that is required for successful verbal-to-visual transformations for material objects in other fields. This research has addressed these issues. The novel contributions of this research include: (i) a series of methodological recommendations for successful automated verbal-to-visual intersemiotic translations for material objects — and bookbinding structures in particular — which are possible when whole/part relationships, spatial configurations, the object's logical form, and its prototypical shapes are communicated; (ii) the production of intersemiotic transformations for the domain of bookbinding structures; (iii) design recommendations for the generation of standardized automated prototypical drawings of bookbinding structures; (iv) the application — never considered before — of uncertainty visualization to the field of the archaeology of the book. This research also proposes the use of automatically generated diagrams as data verification tools to help identify meaningless or wrong data, thus increasing data accuracy within databases.
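
    As a purely hypothetical illustration of the verbal-to-visual idea (not the project's actual schema, modelling layer, or drawing conventions), the following Python sketch turns a small structured description of a sewing structure into a minimal prototypical SVG diagram; where the record gives only a count, the drawing falls back on an evenly spaced, explicitly prototypical layout.

    # Hypothetical record -> prototypical SVG diagram; field names are illustrative.
    record = {
        "spine_height_mm": 180,
        "sewing_stations": 4,   # only the count is recorded, not positions
        "endbands": True,
    }

    def spine_diagram_svg(rec, scale=2):
        h = rec["spine_height_mm"] * scale
        parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="120" height="{h + 40}">',
                 f'<line x1="60" y1="20" x2="60" y2="{h + 20}" stroke="black" stroke-width="2"/>']
        # Sewing stations: spacing is unknown, so a prototypical even spacing is drawn;
        # this is where uncertainty visualization conventions would apply.
        n = rec["sewing_stations"]
        for i in range(1, n + 1):
            y = 20 + i * h / (n + 1)
            parts.append(f'<circle cx="60" cy="{y:.1f}" r="5" fill="none" stroke="black"/>')
        if rec.get("endbands"):
            parts.append('<rect x="50" y="12" width="20" height="8" fill="black"/>')
            parts.append(f'<rect x="50" y="{h + 20}" width="20" height="8" fill="black"/>')
        parts.append("</svg>")
        return "\n".join(parts)

    with open("spine_diagram.svg", "w") as f:
        f.write(spine_diagram_svg(record))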

    Implementing Industry 4.0 in SMEs

    Get PDF
    This open access book addresses the practical challenges that Industry 4.0 presents for SMEs. While large companies are already responding to the changes resulting from the fourth industrial revolution, small businesses are in danger of falling behind due to the lack of examples, best practices and established methods and tools. Following on from the publication of the previous book ‘Industry 4.0 for SMEs: Challenges, Opportunities and Requirements’, the authors offer in this new book innovative results from research on smart manufacturing, smart logistics and managerial models for SMEs. Based on a large-scale EU-funded research project involving seven academic institutions from three continents and a network of over fifty small and medium-sized enterprises, the book reveals the methods and tools required to support the successful implementation of Industry 4.0, along with practical examples.

    Human-Centered Content-Based Image Retrieval

    Get PDF
    Retrieval of images that lack a (suitable) annotation cannot be achieved through (traditional) Information Retrieval (IR) techniques. Access to such collections can be achieved through the application of computer vision techniques to the IR problem, an approach baptized Content-Based Image Retrieval (CBIR). In contrast with most purely technological approaches, the thesis Human-Centered Content-Based Image Retrieval approaches the problem from a human/user-centered perspective. Psychophysical experiments were conducted in which people were asked to categorize colors. The data gathered from these experiments were fed to a Fast Exact Euclidean Distance (FEED) transform (Schouten & Van den Broek, 2004), which enabled the segmentation of color space based on human perception (Van den Broek et al., 2008). This unique color space segmentation was exploited for texture analysis and image segmentation, and subsequently for full-featured CBIR. In addition, a unique CBIR benchmark was developed (Van den Broek et al., 2004, 2005). This benchmark was used to explore what and how several parameters (e.g., color and distance measures) of the CBIR process influence retrieval results. In contrast with other research, users' judgements were used as the metric. The online IR and CBIR system Multimedia for Art Retrieval (M4ART) (URL: http://www.m4art.org) has been (partly) founded on the techniques discussed in this thesis. References: - Broek, E.L. van den, Kisters, P.M.F., and Vuurpijl, L.G. (2004). The utilization of human color categorization for content-based image retrieval. Proceedings of SPIE (Human Vision and Electronic Imaging), 5292, 351-362. [see also Chapter 7] - Broek, E.L. van den, Kisters, P.M.F., and Vuurpijl, L.G. (2005). Content-Based Image Retrieval Benchmarking: Utilizing Color Categories and Color Distributions. Journal of Imaging Science and Technology, 49(3), 293-301. [see also Chapter 8] - Broek, E.L. van den, Schouten, Th.E., and Kisters, P.M.F. (2008). Modeling Human Color Categorization. Pattern Recognition Letters, 29(8), 1136-1144. [see also Chapter 5] - Schouten, Th.E. and Broek, E.L. van den (2004). Fast Exact Euclidean Distance (FEED) transformation. In J. Kittler, M. Petrou, and M. Nixon (Eds.), Proceedings of the 17th IEEE International Conference on Pattern Recognition (ICPR 2004), Vol. 3, pp. 594-597. August 23-26, Cambridge, United Kingdom. [see also Appendix C]
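
    The thesis grounds retrieval in a human-perception-based segmentation of color space. As a much simpler, hypothetical illustration of the general CBIR idea (a coarse fixed RGB quantization standing in for the learned color categories, and synthetic images standing in for a real collection), a Python sketch:

    # Hypothetical CBIR sketch: images are described by coarse color histograms
    # and ranked by histogram intersection; data and bin counts are illustrative.
    import numpy as np

    def color_histogram(image, bins=4):
        """Coarse 3-D RGB histogram, L1-normalized."""
        hist, _ = np.histogramdd(image.reshape(-1, 3),
                                 bins=(bins, bins, bins), range=[(0, 256)] * 3)
        return hist.ravel() / hist.sum()

    def intersection(h1, h2):
        return np.minimum(h1, h2).sum()   # 1.0 means identical color distributions

    rng = np.random.default_rng(0)
    database = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(20)]
    query = np.clip(database[7] + rng.integers(-10, 10, size=(64, 64, 3)), 0, 255)

    q_hist = color_histogram(query)
    ranking = sorted(((intersection(q_hist, color_histogram(img)), i)
                      for i, img in enumerate(database)), reverse=True)
    print("top matches (score, index):", ranking[:3])   # index 7 should rank first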