Computing and Informatics (E-Journal - Institute of Informatics, SAS, Bratislava)
    1,486 research outputs found

    Hybrid approach to task allocation in distributed systems

    No full text
    This paper describes a hybrid approach to task allocation in distributed systems using problem-solving methods from artificial intelligence. For static mapping, an objective function is used to evaluate the optimality of allocating a task graph onto a processor graph. Alongside our own optimization method, augmented simulated annealing and heuristic move-exchange methods are implemented in distributed form. For dynamic task allocation, a semi-distributed approach was designed, based on dividing the processor network topology into independent and symmetric spheres. The distributed static mapping (DSM) and dynamic load balancing (DLB) tools are controlled through a windowed user interface and are integrated, together with a software monitor (PG_PVM), in the graphical GRAPNEL environment.
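
    The abstract does not give the authors' objective function or annealing schedule; purely as an illustration of static mapping by simulated annealing, the Python sketch below minimizes a hypothetical communication cost (edge traffic times distance between the assigned processors). All names, the cost model, and the parameters are assumptions, not the paper's method.

```python
import math
import random

def mapping_cost(task_graph, proc_dist, assign):
    """Hypothetical objective: sum over task-graph edges of
    (traffic on edge) * (distance between assigned processors)."""
    return sum(w * proc_dist[assign[a]][assign[b]]
               for (a, b), w in task_graph.items())

def anneal(task_graph, proc_dist, n_tasks, n_procs,
           t0=10.0, cooling=0.995, steps=20000):
    """Plain simulated annealing over task-to-processor assignments."""
    assign = [random.randrange(n_procs) for _ in range(n_tasks)]
    cost = mapping_cost(task_graph, proc_dist, assign)
    t = t0
    for _ in range(steps):
        task = random.randrange(n_tasks)
        old = assign[task]
        assign[task] = random.randrange(n_procs)   # random move
        new_cost = mapping_cost(task_graph, proc_dist, assign)
        # accept improvements always, worse moves with Boltzmann probability
        if new_cost > cost and random.random() >= math.exp((cost - new_cost) / t):
            assign[task] = old                     # reject: undo the move
        else:
            cost = new_cost
        t *= cooling
    return assign, cost

# toy example: 4 tasks in a chain, 2 processors at distance 1
tg = {(0, 1): 3.0, (1, 2): 1.0, (2, 3): 3.0}
dist = [[0, 1], [1, 0]]
print(anneal(tg, dist, n_tasks=4, n_procs=2))
```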

    Use of Autoregressive Predictor in Echo State Neural Networks

    Get PDF
    "Echo State" neural networks (ESN), which are a special case of recurrent neural networks, are studied with the goal to achieve their greater predictive ability by the correction of their output signal. In this paper standard ESN was supplemented by a new correcting neural network which has served as an autoregressive predictor. The main task of this special neural network was output signal correction and therefore also a decrease of the prediction error. The goal of this paper was to compare the results achieved by this new approach with those achieved by original one-step learning algorithm. This approach was tested in laser fluctuations and air temperature prediction. Its prediction error decreased substantially in comparison to the standard approach

    Modelling Web Service Composition for Deductive Web Mining

    Get PDF
    Composition of simpler web services into custom applications is seen as a promising technique for serving information requests in a heterogeneous and changing environment. This is also relevant for applications characterised as deductive web mining (DWM). We suggest using problem-solving methods (PSMs) as templates for composed services. We developed a multi-dimensional, ontology-based framework and a collection of PSMs that make it possible to characterise DWM applications at an abstract level, and we describe several existing applications in this framework. We show that the heterogeneity and unboundedness of the web demand some modifications of the PSM paradigm as used in traditional artificial intelligence. Finally, as a simple proof of concept, we simulate automated DWM service composition on a small collection of services, PSM-based templates, data objects and ontological knowledge, all implemented in Prolog.
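
    The authors' proof of concept is written in Prolog; the Python sketch below merely illustrates the general flavour of type-driven service composition, backward-chaining from a goal data type to the available inputs. The services, type names, and matching rule are invented for the example and are not the paper's PSM framework.

```python
# Hypothetical services as (name, required input types, output type).
SERVICES = [
    ("crawl_site",      {"url"},                    "page_set"),
    ("extract_records", {"page_set"},               "record_set"),
    ("classify",        {"record_set", "ontology"}, "labelled_records"),
]

def compose(goal, available, services):
    """Backward-chain from the goal data type to the available inputs;
    returns an ordered list of service names, or None if no chain exists.
    (No cycle detection -- illustration only.)"""
    if goal in available:
        return []
    for name, needs, out in services:
        if out == goal:
            plan = []
            for need in needs:
                sub = compose(need, available, services)
                if sub is None:
                    break
                plan += sub
            else:
                return plan + [name]
    return None

print(compose("labelled_records", {"url", "ontology"}, SERVICES))
# -> ['crawl_site', 'extract_records', 'classify']
```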

    Object Replication Algorithms for World Wide Web

    Get PDF
    Object replication is a well-known technique for improving the accessibility of Web sites. It generally reduces client latency and increases a site's availability. However, applying replication techniques is not trivial, and a large number of heuristics have been proposed to decide the number of replicas of an object and their placement in a distributed web server system. This paper presents three object placement and replication algorithms. The first two heuristics are centralized, in the sense that a central site determines the number of replicas and their placement. Because of the dynamic nature of Internet traffic and the rapidly changing access patterns of the World Wide Web, we also propose a distributed algorithm in which each site relies on locally collected information to decide which objects should be replicated at that site. The performance of the proposed algorithms is evaluated through a simulation study and compared with that of three other well-known algorithms. The simulation results demonstrate the effectiveness and superiority of the proposed algorithms.
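
    The three algorithms themselves are not reproduced in the abstract; as a generic stand-in for a centralized heuristic, the sketch below greedily adds the replica with the largest drop in total client access cost, subject to a per-site capacity limit. The cost model and all names here are assumptions, not the paper's algorithms.

```python
def access_cost(replicas, dist, demand):
    """Total cost if every site fetches each object from its nearest replica."""
    return sum(rate * min(dist[s][r] for r in replicas[o])
               for s, per_obj in demand.items()
               for o, rate in per_obj.items())

def greedy_placement(origin, sites, dist, demand, capacity):
    """Centralized greedy heuristic: start with one origin copy per object,
    then repeatedly add the replica giving the largest cost reduction."""
    replicas = {o: {home} for o, home in origin.items()}
    load = {s: 0 for s in sites}
    for home in origin.values():
        load[home] += 1
    while True:
        base = access_cost(replicas, dist, demand)
        best = None
        for o in origin:
            for s in sites:
                if s in replicas[o] or load[s] >= capacity[s]:
                    continue
                replicas[o].add(s)                    # try candidate replica
                gain = base - access_cost(replicas, dist, demand)
                replicas[o].remove(s)
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, o, s)
        if best is None:
            return replicas
        _, o, s = best
        replicas[o].add(s)
        load[s] += 1

# toy example: one object "x" originating at site A
sites = ["A", "B", "C"]
dist = {"A": {"A": 0, "B": 5, "C": 9},
        "B": {"A": 5, "B": 0, "C": 4},
        "C": {"A": 9, "B": 4, "C": 0}}
demand = {"A": {"x": 10}, "B": {"x": 1}, "C": {"x": 8}}
print(greedy_placement({"x": "A"}, sites, dist, demand,
                       {"A": 1, "B": 1, "C": 1}))
```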

    The Use of Genetic Algorithms and Neural Networks to Approximate Missing Data in Database

    Get PDF
    Missing data creates various problems in analysing and processing data in databases. In this paper we introduce a new method for approximating missing data in a database using a combination of genetic algorithms and neural networks. The proposed method uses a genetic algorithm to minimise an error function derived from an auto-associative neural network. Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) networks are employed to train the neural networks. We also investigate how accurately the proposed method predicts missing data as the number of missing values within a single record increases. We observe no significant reduction in accuracy as the number of missing cases in a single record increases, and we find that the results obtained using RBF networks are superior to those obtained using MLP networks.
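
    As a minimal sketch of the idea, the code below uses a linear PCA reconstruction as a stand-in for the trained auto-associative network, and a toy genetic algorithm (truncation selection plus Gaussian mutation) to search for the missing entries that minimize the reconstruction error. Everything here is illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_autoassociator(X, k=1):
    """Linear PCA reconstruction standing in for the trained
    auto-associative network: x -> mean + W W^T (x - mean)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    W = vt[:k].T
    return lambda x: mu + (x - mu) @ W @ W.T

def impute_ga(record, missing, net, pop=60, gens=120, sigma=0.3):
    """Toy GA over the missing entries, minimizing the auto-associator's
    reconstruction error on the completed record."""
    def err(vals):
        x = record.copy()
        x[missing] = vals
        return np.sum((net(x) - x) ** 2)
    P = rng.normal(0.0, 1.0, (pop, len(missing)))    # initial population
    for _ in range(gens):
        order = np.argsort([err(p) for p in P])
        parents = P[order[: pop // 2]]               # truncation selection
        children = parents + rng.normal(0.0, sigma, parents.shape)
        P = np.vstack([parents, children])           # mutation-only GA
    return min(P, key=err)

# toy data: three strongly correlated features; hide one and recover it
X = (rng.normal(size=(300, 1)) @ np.array([[1.0, 2.0, -1.0]])
     + 0.05 * rng.normal(size=(300, 3)))
net = fit_autoassociator(X)
record = X[0].copy()
print("true value:", record[2])
record[2] = 0.0                                      # pretend it is missing
print("GA imputation:", impute_ga(record, [2], net))
```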

    Exploiting type analysis for unification in a distributed environment

    No full text
    Unification in distributed implementations of logic programming involves sending and receiving messages to access data structures spread among different nodes. In traditional implementations, processes access remote data structures by exchanging messages that carry either the entire data structure or only a remote reference to it; intermediate but fixed solutions are also possible. These fixed policies can be far from optimal on various classes of programs and may induce substantial overhead. This paper presents an implementation scheme for distributed logic programming that tailors the copying level to each procedure argument. The scheme is based on a consumption specification, which describes the way each procedure "consumes" its arguments locally. The consumption specification avoids unnecessary copying and allows data structures to be requested globally; it (or an approximation of it) can be obtained through a static analysis inspired by traditional type analyses. The paper presents two implementations that exploit the consumption specification. The low-level implementation extends the Warren Abstract Machine with instructions and data structures for exploiting the consumption specification during compilation. The high-level implementation uses attributed variables to capture and implement distributed unification at a higher level. Experimental results of the high-level implementation on a network of workstations show the potential of the approach.
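
    Neither the WAM instructions nor the attributed-variable machinery is shown in the abstract; the Python sketch below only illustrates the core policy decision, marshalling each argument either by full copy or by a lazily fetched remote reference according to a per-argument consumption specification. The policy names and the RemoteRef mechanics are assumptions made for the example.

```python
# Hypothetical per-argument "consumption" policies for a distributed call:
#   COPY - the callee traverses the whole term, so ship it eagerly
#   REF  - the callee barely touches the term, so ship a remote reference
# (Policy names and RemoteRef mechanics are illustrative assumptions.)
COPY, REF = "copy", "ref"

class RemoteRef:
    """Placeholder for a term left on its home node; dereferencing it
    later costs an extra message round-trip."""
    def __init__(self, node, term):
        self.node, self._term = node, term
        self.fetches = 0
    def fetch(self):
        self.fetches += 1     # models the extra message exchange
        return self._term

def marshal_args(args, consumption, node="n0"):
    """Apply the consumption specification: copy only the arguments the
    static analysis says the callee consumes locally."""
    return [arg if policy == COPY else RemoteRef(node, arg)
            for arg, policy in zip(args, consumption)]

# a call whose first argument is fully consumed, second only passed along
big_term = list(range(10_000))
sent = marshal_args([big_term, big_term], [COPY, REF])
print(type(sent[0]).__name__, type(sent[1]).__name__)   # list RemoteRef
```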

    1,242 full texts and 1,486 metadata records; updated in the last 30 days.
    Computing and Informatics (E-Journal - Institute of Informatics, SAS, Bratislava) is based in Slovakia.