
    Agent-oriented domain-specific language for the development of intelligent distributed non-axiomatic reasoning agents

    The dissertation presents a prototype of ALAS, an agent-oriented, domain-specific language. The main motives for developing ALAS are support for distributed non-axiomatic reasoning and enabling the interoperability and heterogeneous mobility of Siebog agents, since an analysis of existing agent-oriented, domain-specific languages found that no language supports these requirements. The improvement over similar existing agent-oriented, domain-specific languages is also reflected in the program constructs offered by ALAS, whose main purpose is to enable writing concise agents that execute in specific domains.

    Experimental Evaluation of Growing and Pruning Hyper Basis Function Neural Networks Trained with Extended Information Filter

    In this paper we test the Extended Information Filter (EIF) for sequential training of Hyper Basis Function neural networks with growing and pruning ability (HBF-GP). The HBF neuron allows different scaling of the input dimensions to provide better generalization when dealing with complex nonlinear problems in engineering practice. The main intuition behind HBF is a generalization of the Gaussian type of neuron that applies a Mahalanobis-like distance as the distance metric between the input training sample and the prototype vector. We exploit the concept of a neuron's significance and allow growing and pruning of HBF neurons during the sequential learning process. From an engineer's perspective, EIF is attractive for training neural networks because it allows a designer to have only scarce initial knowledge of the system or problem. An extensive experimental study shows that an HBF neural network trained with EIF achieves the same prediction error and compactness of network topology as one trained with the Extended Kalman Filter (EKF), but without the need to know the initial state uncertainty, which is its main advantage over EKF.
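The HBF neuron's Mahalanobis-like activation described in the abstract can be sketched in a few lines. This is a minimal illustration, assuming a diagonal scaling matrix for simplicity; the function name and the example values are illustrative, not taken from the paper:

```python
import numpy as np

def hbf_activation(x, c, s):
    """Hyper Basis Function neuron: a Gaussian-type activation using a
    Mahalanobis-like distance with a separate scaling weight per input
    dimension (diagonal scaling matrix assumed here).

    x : input sample, c : prototype (center) vector,
    s : per-dimension scaling weights (diagonal of the scaling matrix).
    """
    d = x - c
    # Mahalanobis-like squared distance: d^T diag(s) d
    dist_sq = np.sum(s * d * d)
    return np.exp(-dist_sq)

# With all scales equal, the neuron reduces to an ordinary Gaussian RBF;
# unequal scales let the neuron stretch along individual input dimensions.
x = np.array([1.0, 2.0])
c = np.array([0.0, 0.0])
print(hbf_activation(x, c, np.array([1.0, 1.0])))  # isotropic Gaussian
print(hbf_activation(x, c, np.array([1.0, 0.1])))  # anisotropic scaling
```

Down-weighting a dimension (the `0.1` scale above) shrinks the effective distance along it, so the neuron responds over a wider region of that input, which is what gives HBF its extra flexibility over a plain Gaussian RBF.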

    Bioinspired metaheuristic algorithms for global optimization

    Get PDF
    This paper presents a concise comparative study of recently developed bioinspired algorithms for global optimization problems. Three metaheuristic techniques, namely Accelerated Particle Swarm Optimization (APSO), the Firefly Algorithm (FA), and the Grey Wolf Optimizer (GWO), are investigated and implemented in the Matlab environment. The methods are compared on four unimodal and multimodal nonlinear functions in order to find global optimum values. Computational results indicate that GWO outperforms the other intelligent techniques, and that all of the aforementioned algorithms can be successfully used for the optimization of continuous functions.
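The paper's strongest performer, GWO, follows a simple update rule: the three best wolves (alpha, beta, delta) guide every other wolf toward a weighted average of their positions while an exploration parameter decays from 2 to 0. The sketch below shows that standard rule on a unimodal benchmark; the parameter names, defaults, and the `sphere` test function are illustrative assumptions, not the paper's Matlab implementation:

```python
import numpy as np

def sphere(x):
    """Unimodal benchmark function; global minimum f(0) = 0."""
    return float(np.sum(x ** 2))

def gwo(f, dim=2, n_wolves=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Minimal sketch of the standard Grey Wolf Optimizer update rule."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    best_x, best_f = None, np.inf
    for t in range(iters):
        fitness = np.array([f(w) for w in wolves])
        order = np.argsort(fitness)
        if fitness[order[0]] < best_f:  # remember the best wolf seen so far
            best_x, best_f = wolves[order[0]].copy(), fitness[order[0]]
        # Alpha, beta, delta: the three best wolves guide the pack.
        leaders = [wolves[j].copy() for j in order[:3]]
        a = 2.0 - 2.0 * t / iters  # exploration coefficient decays 2 -> 0
        for i in range(n_wolves):
            guides = []
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a
                C = 2.0 * r2
                D = np.abs(C * leader - wolves[i])
                guides.append(leader - A * D)
            # New position: average of the three guided moves, kept in bounds.
            wolves[i] = np.clip(np.mean(guides, axis=0), lb, ub)
    return best_x, best_f

best_x, best_f = gwo(sphere)
print(best_f)  # converges close to the global minimum 0
```

Early on, `|A|` can exceed 1, pushing wolves away from the leaders (exploration); as `a` decays, moves contract toward the three best positions (exploitation), which is the balance the paper's benchmarks exercise.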