50 research outputs found

    A Novel Technique for Task Re-Allocation in Distributed Computing System

    Get PDF
    A distributed computing system is a software system whose components, located on different networked computers, communicate and coordinate their actions by exchanging messages. A task executed on such a system must be both reliable and feasible. Distributed systems such as grid networks, robotics and air traffic control systems are highly time-dependent: if a single error in a real-time distributed system is not detected accurately and recovered from at the proper time, it can cause failure of the whole system. Fault tolerance is the key method used to provide continuous reliability in these systems. Distributed computing also poses challenges such as resource sharing, transparency, dependability, complex mappings and concurrency. In this paper, we focus on the faults that degrade the system, and propose a novel reliability-based technique to tolerate them by re-allocating tasks from failed nodes. DOI: 10.17762/ijritcc2321-8169.15080
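
    The abstract leaves the technique unspecified, but the idea of reliability-driven re-allocation can be illustrated with a minimal sketch in C; the node model, reliability scores and function names below are hypothetical, not the paper's actual method.

        #include <stdio.h>

        /* Hypothetical node model: each node carries a reliability score in
         * [0,1], e.g. derived from its failure history. */
        struct node {
            int id;
            double reliability;
            int failed; /* non-zero once a fault is detected on this node */
        };

        /* Re-allocate a task from a failed node to the most reliable healthy
         * node still available. */
        static int reallocate_task(const struct node *nodes, int n, int failed_id)
        {
            int best = -1;
            for (int i = 0; i < n; i++) {
                if (nodes[i].failed || nodes[i].id == failed_id)
                    continue;
                if (best < 0 || nodes[i].reliability > nodes[best].reliability)
                    best = i;
            }
            return best < 0 ? -1 : nodes[best].id; /* -1: no healthy node left */
        }

        int main(void)
        {
            struct node cluster[] = {
                {0, 0.92, 1},  /* failed node */
                {1, 0.85, 0},
                {2, 0.97, 0},
            };
            printf("re-assign task to node %d\n", reallocate_task(cluster, 3, 0));
            return 0;
        }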

    A JADE Implemented Mobile Agent Based Host Platform Security

    Get PDF
    The mobile agent paradigm relies heavily on the security of both the agent and its host platform. Both entities are prone to security threats and attacks such as masquerading, denial of service and unauthorized access, and security fissures on the platform can result in significant losses. This paper presents a Robust Series Check-pointing Algorithm (SCpA), implemented in the JADE environment, which extends our previous work on the security of mobile host platforms. The algorithm is a series check-pointing scheme in the sense that layers are placed in series, one after the other, to provide a two-level guard system: if a malevolent agent somehow cracks the security at the first level and manages to enter the platform, it may still be trapped at the next level, which blocks the threat. The work also evaluates the performance of the agents' execution through graphical analysis. Our previous work proposed a platform security framework (PSF) to secure the host platform from various security threats, but deliberately left out the algorithmic realization and its implementation, which has now been completed.   Keywords: Mobile Agent, Security, Reputation Score, Threshold Value, Check-points, Algorithm
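
    As a rough illustration of the two-level guard idea (not the actual SCpA), the C sketch below filters an agent by reputation score against a threshold at the first level and traps policy violations at a second check-point; all names, scores and thresholds here are invented for illustration.

        #include <stdio.h>

        /* Hypothetical two-level guard in the spirit of series check-pointing:
         * level one screens agents by reputation score against a threshold;
         * level two re-validates behaviour at a check-point inside the
         * platform, trapping agents that slipped past level one. */

        #define REPUTATION_THRESHOLD 0.6

        struct agent {
            const char *name;
            double reputation;   /* accumulated trust score in [0,1] */
            int violates_policy; /* set by run-time monitoring at level two */
        };

        static int admit_level_one(const struct agent *a)
        {
            return a->reputation >= REPUTATION_THRESHOLD;
        }

        static int admit_level_two(const struct agent *a)
        {
            return !a->violates_policy;
        }

        int main(void)
        {
            struct agent a = {"agent-42", 0.75, 1};
            if (!admit_level_one(&a))
                printf("%s blocked at level one\n", a.name);
            else if (!admit_level_two(&a))
                printf("%s trapped at level-two check-point\n", a.name);
            else
                printf("%s admitted\n", a.name);
            return 0;
        }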

    Fault Tolerance for Stream Programs on Parallel Platforms

    Get PDF
    A distributed system is a collection of autonomous computers connected by a network, together with distributed software that lets users see the system as a single entity providing computing facilities. Distributed systems with centralised control have a distinguished control node, called the leader node, whose main role is to distribute and manage shared resources in a resource-efficient manner. Such a system can use stream processing networks for communication: in a stream processing system, applications typically act as continuous queries, ingesting data continuously, analysing and correlating it, and generating a stream of results. Fault tolerance is the ability of a system to keep processing information even when failures or anomalies occur. It has become an important requirement for distributed systems, because the possibility of failure has risen with the growing number of nodes and the longer runtimes of applications. It is therefore important to add fault tolerance mechanisms that provide the internal capacity to preserve the execution of tasks despite the occurrence of faults. If the leader of a centralised control system fails, a new leader must be elected. While leader election has received a lot of attention in message-passing systems, very few solutions have been proposed for shared-memory systems such as the one we target. In addition, rollback-recovery strategies are important fault tolerance mechanisms for distributed systems: information is saved to stable storage during failure-free operation, and when a failure affects a node, the stored information is used to recover the node's state from before the failure. In this thesis we create two fault tolerance mechanisms for distributed systems with centralised control that use stream processing for communication: leader election and log-based rollback recovery, both implemented using LPEL. The proposed leader election method is based on the atomic Compare-And-Swap (CAS) instruction, which is directly available on many processors. It works with idle nodes, meaning that only the non-busy nodes compete to become the new leader, while busy nodes continue with their tasks and later update their leader reference; the method also has short completion time and low space complexity. The proposed log-based rollback-recovery method for distributed systems with stream processing networks is a novel approach that is free from the domino effect and generates no orphan messages, satisfying the always-no-orphans consistency condition. It also imposes lower overhead on the system than other approaches and scales well, because it is insensitive to the number of nodes in the system.
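
    A minimal C11 sketch of the CAS-based election idea described above, assuming a single shared leader word that idle nodes race on; the node ids and the NO_LEADER sentinel are illustrative, not LPEL's actual interface.

        #include <stdatomic.h>
        #include <stdio.h>

        #define NO_LEADER -1

        /* Shared word all nodes can see; NO_LEADER means the old leader has
         * failed and the position is open. */
        static atomic_int leader = NO_LEADER;

        /* An idle node calls this after the old leader is seen to have failed.
         * Exactly one CAS succeeds; every other node just observes the winner
         * and updates its leader reference later. */
        static int try_become_leader(int my_id)
        {
            int expected = NO_LEADER;
            return atomic_compare_exchange_strong(&leader, &expected, my_id);
        }

        int main(void)
        {
            if (try_become_leader(3))
                printf("node 3 is the new leader\n");
            if (!try_become_leader(7)) /* loses the race: leader already set */
                printf("node 7 defers to leader %d\n", atomic_load(&leader));
            return 0;
        }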

    Fault Tolerance for High-Performance Applications Using Structured Parallelism Models

    Get PDF
    In recent years parallel computing has increasingly exploited high-level models of structured parallel programming, an example of which is algorithmic skeletons. This trend has been motivated by the properties of structured parallelism models, which can be used to derive several (static and dynamic) optimizations at various implementation levels. In this thesis we study the properties of structured parallel models that are useful for providing fault tolerance support oriented towards high-performance applications. This issue has traditionally been faced in two ways: (i) in the context of unstructured parallelism models (e.g. MPI), whose computation model is essentially a distributed set of processes communicating through message passing, with an approach based on checkpointing and rollback recovery or on software replication; (ii) in the context of high-level models based on a specific parallelism model (e.g. data-flow) and/or an implementation model (e.g. master-slave), by introducing specific techniques based on the properties of the programming and computation models themselves. In this thesis we take a step towards a more abstract viewpoint and highlight the properties of structured parallel models that are interesting for fault tolerance purposes. We consider two classes of parallel programs (namely task parallel and data parallel) and introduce a fault tolerance support based on checkpointing and rollback recovery. The support is derived according to the high-level properties of the parallel models: we call this derivation the specialization of fault tolerance techniques, highlighting the difference from classical solutions that support structure-unaware computations. As a consequence of this specialization, the introduced fault tolerance techniques can be configured and optimized to meet specific needs at different implementation levels. That is, the supports we present do not target a single computing platform or a specific class of platforms; rather, the specializations are the mechanism for targeting specific issues of the exploited environment and of the implemented applications, through proper choices of the protocols and their configurations.
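
    As a generic illustration of checkpointing and rollback recovery (not the specialized supports developed in the thesis), the C sketch below periodically saves the state of a task loop to stable storage and resumes from the last checkpoint after a restart; the file name and state layout are invented for the example.

        #include <stdio.h>

        struct state { long next_task; double partial_result; };

        /* Save the current state to stable storage. */
        static void checkpoint(const struct state *s)
        {
            FILE *f = fopen("checkpoint.bin", "wb");
            if (f) { fwrite(s, sizeof *s, 1, f); fclose(f); }
        }

        /* Restore the last saved state, if any; returns 1 on success. */
        static int restore(struct state *s)
        {
            FILE *f = fopen("checkpoint.bin", "rb");
            if (!f) return 0;
            int ok = fread(s, sizeof *s, 1, f) == 1;
            fclose(f);
            return ok;
        }

        int main(void)
        {
            struct state s = {0, 0.0};
            restore(&s); /* rollback: resume from the last checkpoint, if any */
            for (; s.next_task < 1000000; s.next_task++) {
                if (s.next_task % 100000 == 0)
                    checkpoint(&s); /* save before the work, so replay is consistent */
                s.partial_result += (double)s.next_task; /* stand-in for real work */
            }
            printf("result: %.0f\n", s.partial_result);
            return 0;
        }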

    Indirect impact of landslide hazards on transportation infrastructure

    Get PDF
    This thesis examines the indirect impact of natural hazards on infrastructure networks. It addresses several key themes and issues in hazard assessment, network modelling and risk assessment using the case study of landslides affecting the national road network in Scotland, United Kingdom. The research follows four distinct stages. First, a landslide susceptibility model is developed using a database of landslide occurrences, spatial data sets and logistic regression. The model outputs indicate the terrain characteristics associated with increased landslide potential, including critical slope angles and south-westerly aspects associated with increased solar irradiance and precipitation. The results identify the hillslopes and road segments most prone to disruption by landslides, and indicate that 40% (1,700 of 4,300 km) of Scotland's motorways and arterial roads (i.e. the strategic road network) are susceptible to landslides, a higher proportion than in previous assessments. Second, a novel user-equilibrium traffic model is developed using UK Census origin-destination tables. The traffic model calculates the additional travel time and cost (i.e. the indirect impacts) caused by network disruptions due to landslide events. The model is applied to historic scenarios and to sets of plausible landslide events generated using the landslide susceptibility model. Impact assessments for historic scenarios are 29 to 83% greater than previous estimates, including £1.2 million of indirect impacts over 15 days of disruption at the A83 Rest and Be Thankful landslide in October 2007. The model results indicate that the average impact of landslides is £64k per day of disruption, and up to £130k per day on the most critical road segments in Scotland. In addition to identifying critical road segments with both high impact and high susceptibility to landslides, the study indicates that the impact of landslides is concentrated away from urban centres, in the central and north-west regions of Scotland that rely heavily on roads and on haulage-based industries such as seasonal tourism, agriculture and craft distilling. The third research element is the development of landslide initiation thresholds using weather radar data. The thresholds classify the rainfall conditions most commonly associated with landslide occurrence in Scotland, improving knowledge of the physical initiation processes and their likelihood. They are developed using a novel optimal-point threshold selection technique, high-resolution radar and new rain variables that provide spatio-temporally normalised thresholds. The thresholds highlight the role of the 12-day antecedent hydrological condition of soils as a precursory factor controlling the rain conditions that trigger landslides. The new results also support the observation that landslides occur more frequently in the UK during the early autumn and winter seasons, when sequences or clusters of multiple cyclonic storm systems are common over periods lasting 5 to 15 days. Fourth, the three previous elements are combined to evaluate the landslide hazard of the strategic road segments, and a prototype risk assessment model, a catastrophe model, is produced. The catastrophe model calculates the annual average loss and the aggregated exceedance probability of losses due to the indirect impact of landslides in Scotland. Beyond its application to cost-benefit analyses for landslide mitigation efforts, the catastrophe model framework is applicable to the study of other natural hazards (e.g. flooding), combinations of hazards, and other infrastructure networks.
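
    The annual-average-loss calculation at the heart of a catastrophe model can be illustrated with a minimal C sketch: each simulated event carries an annual occurrence rate and a loss, and the AAL is the rate-weighted sum of losses. The event set below is invented for illustration and is not data from the thesis.

        #include <stdio.h>

        /* One simulated hazard event: how often it occurs per year on
         * average, and the indirect-impact loss it causes when it does. */
        struct event { double annual_rate; double loss_gbp; };

        static double annual_average_loss(const struct event *ev, int n)
        {
            double aal = 0.0;
            for (int i = 0; i < n; i++)
                aal += ev[i].annual_rate * ev[i].loss_gbp;
            return aal;
        }

        int main(void)
        {
            struct event set[] = {
                {0.20,   50000.0},  /* frequent, moderate disruption */
                {0.05,  200000.0},  /* rarer event on a critical segment */
                {0.01, 1000000.0},  /* extreme multi-day closure */
            };
            printf("AAL: %.0f GBP/year\n",
                   annual_average_loss(set, sizeof set / sizeof set[0]));
            return 0;
        }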

    Construction of a software subsystem that enables mobility of artificial agents according to the role they play in the TLÖN system, whose resources are shared over an ad hoc network

    Get PDF
    This document presents the design, development and implementation of BEAMS (Better Environment for Agent Mobility Subsystem), a mobility subsystem for artificial agents, as a component of the TLÖN system, a virtualized computing system based on agent behaviours. A model of agent, environment and multi-agent system is created in order to achieve and test the mobility provided by BEAMS. As a test scenario, a mobile agent was deployed whose goal was to capture information about the computational resources of the nodes it migrated across, both physically and virtually. In the virtual scenario, mobility times were measured for a linear increase in the size of the agent, showing the response of the subsystem to different agent sizes.
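
    As a rough illustration of the virtual-mobility measurement (not the BEAMS implementation), the C sketch below times a simulated agent transfer, with a buffer copy standing in for serialization and transport, while the payload grows linearly.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        static double elapsed_ms(struct timespec a, struct timespec b)
        {
            return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
        }

        int main(void)
        {
            /* Grow the simulated agent payload linearly and time each transfer. */
            for (size_t kb = 64; kb <= 1024; kb += 64) {
                size_t bytes = kb * 1024;
                char *agent = malloc(bytes), *host = malloc(bytes);
                if (!agent || !host) return 1;
                memset(agent, 0xA5, bytes);

                struct timespec t0, t1;
                clock_gettime(CLOCK_MONOTONIC, &t0);
                memcpy(host, agent, bytes); /* stand-in for the migration */
                clock_gettime(CLOCK_MONOTONIC, &t1);

                printf("%4zu KB -> %.3f ms\n", kb, elapsed_ms(t0, t1));
                free(agent); free(host);
            }
            return 0;
        }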

    Ecological Causal Assessment

    Get PDF
    Edited by experts at the leading edge of the development of causal assessment methods for more than two decades, Ecological Causal Assessment gives insight and expert guidance on how to identify cause-effect relationships in environmental systems. The book discusses the importance of asking the fundamental question "Why did this effect happen?"

    Second CLIPS Conference Proceedings, volume 2

    Get PDF
    Papers presented at the 2nd C Language Integrated Production System (CLIPS) Conference, held at the Lyndon B. Johnson Space Center (JSC) on 23-25 September 1991, are documented in these proceedings. CLIPS is an expert system tool developed by the Software Technology Branch at NASA JSC and is used at over 4,000 sites by government, industry, and business. During the three days of the conference, over 40 papers were presented by experts from NASA, the Department of Defense, other government agencies, universities, and industry.

    Social context of creativity

    Get PDF
    This thesis analyses the long-distance control of the environmentally-situated imagination, in both spatial and temporal dimensions. Central to the project is what I call the extended social brain hypothesis. Grounded in the Peircean conception of 'pragmaticism', this re-introduces technical intelligence to Dunbar's social brain, conceptually through Clark's 'extended mind' philosophy, and materially through Callon's 'actor–network theory'. I claim that: There is no subjectivity without intersubjectivity. That is to say: as an evolutionary matter, it was necessary for the empathic capacities to evolve before the sense of self we identify as human could emerge. Intersubjectivity is critical to human communication, because of its role in interpreting intention. While the idea that human communication requires three levels of intentionality carries analytical weight, I argue that the inflationary trajectory is wrong as an evolutionary matter. The trend is instead towards increasing powers of individuation. The capacity for tool-use is emphasized less under the social brain hypothesis, but the importance of digital manipulation needs to be reasserted as part of a mature ontology. These claims are modulated to substantiate the work-maker, a socially situated (and embodied) creative agent who draws together Peircean notions of epistemology, phenomenology and oral performance.

    Artificial intelligence and its application in architectural design

    Get PDF
    No abstract available.