5,420 research outputs found

    Multi-agent Adaptive Architecture for Flexible Distributed Real-time Systems

    Critical embedded systems are becoming more and more complex and usually react to their environment, which requires them to amend their behaviors by applying run-time reconfiguration scenarios. A system is defined in this paper as a set of networked devices, each of which has its own operating system, a processor to execute related periodic software tasks, and a local battery. A reconfiguration is any operation allowing the addition, removal, or update of tasks to adapt the device and the whole system to its environment. It may be a reaction to a fault or an optimization of the system's functional behavior. Nevertheless, such a scenario can cause the violation of real-time or energy constraints, which is a critical run-time problem. We propose a multi-agent adaptive architecture to handle dynamic reconfigurations and ensure the correct execution of the concurrent real-time distributed tasks under energy constraints. The proposed architecture integrates a centralized scheduler agent (ScA), the common decision-making element for the scheduling problem, which is able to carry out the required run-time solutions based on operations research techniques and mathematical tools for the system's feasibility. The architecture also assigns a reconfiguration agent (RA_p) to each device p to control and handle the local reconfiguration scenarios under the instructions of the ScA. A token-based protocol is defined for the coordination between the different distributed agents in order to guarantee the whole system's feasibility under energy constraints.
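
    A minimal sketch of how the ScA/RA_p split and the token rule could look is given below, assuming a utilization-based feasibility test and a per-device energy budget; the Task and Device fields, the EDF-style bound, and the trigger conditions are illustrative assumptions, not the paper's actual protocol.

    ```python
    # Hedged sketch of ScA / RA_p coordination; names and checks are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        wcet: float    # worst-case execution time
        period: float  # activation period
        energy: float  # energy drawn per period

    @dataclass
    class Device:
        battery: float
        tasks: list = field(default_factory=list)

    class SchedulerAgent:                        # ScA: central decision maker
        def feasible(self, dev: Device, task: Task) -> bool:
            u = sum(t.wcet / t.period for t in dev.tasks) + task.wcet / task.period
            e = sum(t.energy for t in dev.tasks) + task.energy
            return u <= 1.0 and e <= dev.battery  # EDF utilization bound + energy budget

    class ReconfigurationAgent:                  # RA_p: one per device p
        def __init__(self, device: Device, sca: SchedulerAgent):
            self.device, self.sca = device, sca

        def add_task(self, task: Task, has_token: bool) -> bool:
            # Only the current token holder may reconfigure, and only with
            # ScA approval, so concurrent scenarios cannot jointly break feasibility.
            if has_token and self.sca.feasible(self.device, task):
                self.device.tasks.append(task)
                return True
            return False
    ```

    Passing the token around the RAs serializes reconfigurations, which is one simple way to realize the coordination the abstract describes.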

    Developing the scales on evaluation beliefs of student teachers

    The purpose of the study reported in this paper was to investigate the validity and reliability of a newly developed questionnaire named ‘Teacher Evaluation Beliefs’ (TEB). The framework for developing items was provided by two models. The first model focuses on Student-Centered and Teacher-Centered beliefs about evaluation, while the other centers on five dimensions (what/who/when/why/how). The validity and reliability of the new instrument were investigated using both exploratory and confirmatory factor analyses (n = 446). Overall, the results indicate that the two-factor structure is more reasonable than the five-factor one. Further research needs additional items about the latent dimensions “what”, “who”, “when”, “why”, and “how” for each existing factor based on Student-Centered and Teacher-Centered approaches.
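
    As a hedged illustration of the reported model comparison, the sketch below scores a two-factor against a five-factor solution by cross-validated log-likelihood, a standard probabilistic way to choose the number of factors; the synthetic 20-item response matrix is a placeholder for the actual TEB data (n = 446).

    ```python
    # Hedged sketch: two-factor vs. five-factor fit on placeholder data.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(446, 20))  # stand-in for 20 z-scored questionnaire items

    for k in (2, 5):
        fa = FactorAnalysis(n_components=k)
        ll = cross_val_score(fa, X).mean()  # mean held-out log-likelihood
        print(f"{k}-factor model: average log-likelihood = {ll:.3f}")
    ```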

    Medical analysis and diagnosis by neural networks

    In its first part, this contribution briefly reviews the application of neural network methods to medical problems and characterizes their advantages and problems in the context of the medical background. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic systems. Then, the paradigm of neural networks is briefly introduced, and the main problems of medical databases and the basic approaches for training and testing a network with medical data are described. Additionally, the problem of interfacing the network and its results is discussed, and the neuro-fuzzy approach is presented. Finally, as a case study of neural rule-based diagnosis, septic shock diagnosis is described, on the one hand by a growing neural network and on the other hand by a rule-based system.
    Keywords: Statistical Classification, Adaptive Prediction, Neural Networks, Neuro-fuzzy, Medical Systems
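
    As an illustration only, the sketch below trains a small feed-forward classifier on a public clinical dataset, standing in for the kinds of systems the contribution reviews; the chapter's actual case study uses a growing neural network and a rule-based system for septic shock, which are not reproduced here.

    ```python
    # Illustrative stand-in: a small MLP on a public clinical dataset.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                      random_state=0))
    clf.fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
    ```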

    Integrated Scenario-based Design Methodology for Collaborative Technology Innovation

    The paper presents a scenario-based methodology developed and tested throughout cooperative research and development projects. It is aimed at supporting information technology innovation with end-to-end human and social sciences assistance. This methodology provides an integrated approach combining a vision of the potential users, business aspects, and technological challenges throughout the design process. An original combination of different methods is proposed and experimented with: user-centred design, scenario-based design, user and functional requirements analysis, business value analysis, user acceptance studies, and visualization methods. This methodology has been implemented in three European R&D projects in the domain of telecommunications and Internet infrastructure. The key contribution of this approach is that it brings together visions of the users, potential business value, and technology challenges through scenario construction.
    Keywords: scenario-based design; user requirements; business economics; functional requirements; visualization

    Multi-Behavior Agent Model for Supply Chain Management

    Recent economic and international threats to occidental industries have encouraged companies to rethink their planning systems. Due to consolidation, the development of integrated supply chains and the use of inter-organizational information systems have increased business interdependencies and the need for collaboration. Thus, agility and the ability to deal quickly with disturbances in supply chains are critical to maintaining overall performance. In order to develop tools that increase the agility of the supply chain and promote the collaborative management of such disturbances, agent-based technology takes advantage of the ability of agents to make autonomous decisions in a distributed network. This paper proposes a multi-behavior agent model using different decision-making approaches in a context where planning decisions are supported by a distributed advanced planning system (d-APS). The implementation of this solution is realized through the FOR@C experimental agent-based platform, dedicated to supply chain planning for the forest products industry.
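
    A minimal sketch of the multi-behavior idea follows, assuming an agent that switches between a cheap reactive behavior and a costly deliberative one depending on the size of the perceived disturbance; the behavior names, the trigger threshold, and the planning stubs are illustrative assumptions, not the FOR@C design.

    ```python
    # Hedged sketch: an agent selecting among decision-making behaviors.
    from abc import ABC, abstractmethod

    class Behavior(ABC):
        @abstractmethod
        def plan(self, demand: list[float]) -> list[float]: ...

    class Reactive(Behavior):       # cheap: replan only the next period
        def plan(self, demand):
            return demand[:1]

    class Deliberative(Behavior):   # costly: replan the whole horizon
        def plan(self, demand):
            return demand

    class PlanningAgent:
        def __init__(self):
            self.behaviors = {"reactive": Reactive(),
                              "deliberative": Deliberative()}

        def react(self, demand, disturbance: float):
            # Large disturbances justify the cost of full deliberative replanning.
            mode = "deliberative" if disturbance > 0.2 else "reactive"
            return mode, self.behaviors[mode].plan(demand)

    agent = PlanningAgent()
    print(agent.react([100, 120, 90], disturbance=0.35))
    ```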

    Service Quality Assessment for Cloud-based Distributed Data Services

    The issue of less-than-100% reliability and trustworthiness of third-party controlled cloud components (e.g., IaaS and SaaS components from different vendors) may lead to laxity in the QoS guarantees offered by a service-support system S to various applications. An example of S is a replicated data service that handles customer queries with fault-tolerance and performance goals. QoS laxity (i.e., SLA violations) may be inadvertent: say, due to the inability of system designers to model the impact of sub-system behaviors on a deliverable QoS. Sometimes, QoS laxity may even be intentional: say, to reap revenue-oriented benefits by cheating on resource allocations and/or excessive statistical sharing of system resources (e.g., VM cycles, number of servers). Our goal is to assess how well the internal mechanisms of S are geared to offer a required level of service to the applications. We use computational models of S to determine the optimal feasible resource schedules and verify how close the actual system behavior is to a model-computed 'gold standard'. Our QoS assessment methods allow comparing different service vendors (possibly with different business policies) in terms of canonical properties such as elasticity, linearity, isolation, and fairness (analogous to a comparative rating of restaurants). Case studies of cloud-based distributed applications are described to illustrate our QoS assessment methods. Specific systems studied in the thesis are: i) replicated data services, where the servers may be hosted on multiple data centers for fault-tolerance and performance reasons; and ii) content delivery networks serving geographically distributed clients, where the content data caches may reside on different data centers. The methods studied in the thesis are useful in various contexts of QoS management and self-configuration in large-scale cloud-based distributed systems that are inherently complex due to size, diversity, and environment dynamicity.
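
    In the spirit of the thesis's gold-standard comparison, the sketch below scores a vendor by the mean relative shortfall of measured throughput against a model-computed optimum; the metric, the workload levels, and the vendor figures are hypothetical placeholders, not the thesis's actual models.

    ```python
    # Hedged sketch: measured behavior vs. a model-computed 'gold standard'.
    def qos_laxity(measured: list[float], optimal: list[float]) -> float:
        """Mean relative shortfall of measured throughput vs. the model optimum."""
        return sum(max(0.0, (o - m) / o)
                   for m, o in zip(measured, optimal)) / len(optimal)

    # e.g., replicated-data service throughput (requests/s) at three load levels
    optimal  = [1000.0, 2000.0, 4000.0]   # model-computed feasible schedule
    vendor_a = [ 980.0, 1900.0, 3300.0]   # measured under the same workload
    print(f"SLA laxity for vendor A: {qos_laxity(vendor_a, optimal):.1%}")
    ```

    The same scalar can be computed per property (elasticity, linearity, isolation, fairness) to rank vendors on a common scale, which is the comparative-rating idea the abstract describes.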

    The University Defence Research Collaboration In Signal Processing

    This chapter describes the development of algorithms for the automatic detection of anomalies in multi-dimensional, undersampled and incomplete datasets. The challenge in this work is to identify and classify behaviours as normal or abnormal, safe or threatening, from an irregular and often heterogeneous sensor network. Many defence and civilian applications can be modelled as complex networks of interconnected nodes with unknown or uncertain spatio-temporal relations. The behaviour of such heterogeneous networks can exhibit dynamic properties, reflecting evolution in both network structure (new nodes appearing and existing nodes disappearing) and inter-node relations. The UDRC work has addressed not only the detection of anomalies, but also the identification of their nature and their statistical characteristics. Normal patterns and changes in behaviour have been incorporated to provide an acceptable balance between true positive rate, false positive rate, performance and computational cost. Data quality measures have been used to ensure the models of normality are not corrupted by unreliable and ambiguous data. The context of each node's activity in complex networks offers an even more efficient anomaly detection mechanism. This has allowed the development of efficient approaches which not only detect anomalies but also go on to classify their behaviour.
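
    As a hedged illustration of this style of anomaly detection on incomplete data, the sketch below imputes missing entries and flags outlying rows with an isolation forest, thresholding the score to trade true positives against false positives; it is a generic stand-in, not one of the UDRC algorithms.

    ```python
    # Generic stand-in: anomaly detection on incomplete multi-dimensional data.
    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))
    X[rng.random(X.shape) < 0.1] = np.nan      # undersampled / missing entries
    X[:10] += 4.0                              # a few injected anomalies

    model = make_pipeline(SimpleImputer(strategy="median"),
                          IsolationForest(contamination=0.02, random_state=0))
    model.fit(X)
    scores = model.decision_function(X)        # lower = more anomalous
    # Moving this threshold trades true-positive rate against false-positive rate.
    print("flagged rows:", np.where(scores < np.quantile(scores, 0.02))[0])
    ```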