A game engine to make games as multi-agent systems
Video games are applications whose design patterns resemble multi-agent systems: game objects, or actors, behave like autonomous agents that interact with each other to form complex systems. The purpose of this work is to develop a game engine for building games as multi-agent systems. The actors, or game engine agents, have a set of properties and behaviour rules through which they interact with the game environment. The behaviour definition is established through a formal semantics based on predicate logic. The proposed engine tries to fulfil the basic requirements of multi-agent systems by adjusting the characteristics of the system without affecting its potential. Finally, a set of games is introduced to validate the operation and possibilities of the engine.
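The agent model this abstract describes can be pictured with a minimal sketch (an illustration only, not the paper's engine; all names here are invented): each agent carries properties plus a list of (predicate, action) behaviour rules, and a rule fires whenever its predicate holds over the environment.

```python
# Toy sketch: game actors as autonomous agents whose behaviour is a list of
# (predicate, action) rules, echoing the predicate-logic style of the paper.

class Agent:
    def __init__(self, name, **props):
        self.name = name
        self.props = props      # agent properties, e.g. position, health
        self.rules = []         # list of (predicate, action) pairs

    def rule(self, predicate, action):
        """Register a behaviour rule: if predicate(agent, env) holds, run action."""
        self.rules.append((predicate, action))

    def step(self, env):
        for predicate, action in self.rules:
            if predicate(self, env):
                action(self, env)

def run(agents, ticks):
    """The environment is simply the set of all agents."""
    for _ in range(ticks):
        for a in agents:
            a.step(agents)

# Example: a 'chaser' moves toward a 'target' while they are apart.
target = Agent("target", x=5)
chaser = Agent("chaser", x=0)
chaser.rule(
    lambda a, env: a.props["x"] < target.props["x"],            # predicate
    lambda a, env: a.props.__setitem__("x", a.props["x"] + 1),  # action
)
run([target, chaser], ticks=10)
print(chaser.props["x"])  # prints 5: the chaser stops once it reaches the target
```

The rule fires only while its predicate holds, so the chaser halts at the target's position even though the simulation keeps running.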
A game engine designed to simplify 2D video game development
In recent years, the increasing popularity of casual games for mobile and web has promoted the development of new editors that make video games easier to create. The development of these interactive applications is on its way to becoming democratized, so that anyone who is interested, without advanced programming knowledge, can create them for devices such as mobile phones or consoles. Nevertheless, most game development environments still rely on the traditional way of programming and require advanced technical skills, despite recent improvements. This paper presents a new 2D game engine that reduces the complexity of the video game development process. The game specification has been simplified, decreasing the complexity of the engine architecture and introducing a very easy-to-use editing environment for game creation. The engine presented here allows the behaviour of game objects to be defined using a very small set of conditions and actions, without the need for complex data structures. Experiments have been designed to validate its ease of use and its capacity for creating a wide variety of games. To test it, users with little programming experience developed arcade games using the presented environment, as a proof of its ease of use with respect to comparable software. The results obtained endorse the concept, support the hypothesis of its ease of use, and demonstrate the engine's potential.
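The "small set of conditions and actions" idea can be sketched as follows (a hypothetical illustration, not the engine's actual API or vocabulary): behaviours are just pairs of names drawn from two fixed menus, so a non-programmer never writes code or builds data structures.

```python
# Illustrative condition/action vocabulary (invented for this sketch).
CONDITIONS = {
    "always":      lambda obj, world: True,
    "off_screen":  lambda obj, world: not (0 <= obj["x"] <= world["width"]),
    "hit_player":  lambda obj, world: obj["x"] == world["player_x"],
}

ACTIONS = {
    "move_left":  lambda obj, world: obj.update(x=obj["x"] - 1),
    "destroy":    lambda obj, world: obj.update(alive=False),
    "add_score":  lambda obj, world: world.update(score=world["score"] + 1),
}

def tick(objects, world):
    """Run every live object's rules once; rules are just (condition, action) name pairs."""
    for obj in objects:
        if not obj["alive"]:
            continue
        for cond, act in obj["rules"]:
            if CONDITIONS[cond](obj, world):
                ACTIONS[act](obj, world)

world = {"width": 10, "player_x": 0, "score": 0}
enemy = {"x": 3, "alive": True,
         "rules": [("always", "move_left"),
                   ("hit_player", "destroy"),
                   ("hit_player", "add_score")]}
for _ in range(5):
    tick([enemy], world)
print(enemy["alive"], world["score"])  # prints: False 1
```

The enemy drifts left each tick and, on reaching the player, is destroyed and the score increments; the whole behaviour was declared as three name pairs.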
Specification and automatic verification of trust-based multi-agent systems
We present a new logic-based framework for modeling and automatically verifying trust in Multi-Agent Systems (MASs). We start by refining TCTL, a temporal logic of trust that extends the Computation Tree Logic (CTL) to enable reasoning about trust with preconditions. A new vector-based version of interpreted systems is defined to capture the trust relationship between the interacting parties. We introduce a set of reasoning postulates, along with formal proofs, to support our logic. Moreover, we present new symbolic model checking algorithms to formally and automatically verify the system under consideration against desirable properties expressed in the proposed logic. We fully implemented our algorithms as a model checker tool called MCMAS-T, built on top of the MCMAS model checker for MASs, along with its new input language VISPL (Vector-extended ISPL). We evaluated the tool and report experimental results using a real-life scenario in the healthcare field.
Model Checking Trust-based Multi-Agent Systems
Trust has been the focus of many research projects, both theoretical and practical, in recent years, particularly in domains where open multi-agent technologies are applied (e.g., Internet-based markets, information retrieval, etc.). The importance of trust in such domains arises mainly because it provides a social control that regulates the relationships and interactions among agents. Despite the growing number of multi-agent applications, they still face many challenges in the formal modeling and verification of agents' behaviors. Many formalisms and approaches that facilitate the specification of trust in Multi-Agent Systems (MASs) can be found in the literature. However, most of these approaches focus on the cognitive side of trust, where the trusting entity is normally capable of exhibiting properties about beliefs, desires, and intentions. Hence, trust is considered a belief of an agent (the truster) involving the ability and willingness of the trustee to perform some actions for the truster. Nevertheless, in open MASs, entities can join and leave interactions at any time. This means MASs provide no guarantee about the behavior of their agents, which makes the capability of reasoning about trust and checking for the existence of untrusted computations highly desirable.
This thesis aims to address the problem of modeling and verifying trust in MASs at design time by (1) considering a cognitive-independent view of trust, where trust ingredients are seen from a non-epistemic angle; (2) introducing a logical language named Trust Computation Tree Logic (TCTL), which extends CTL with preconditional, conditional, and graded trust operators, along with a set of reasoning postulates that explore its capabilities; (3) proposing a new accessibility relation needed to define the semantics of the trust modal operators, defined so that it captures the intuition of trust while being easily computable; (4) investigating the most intuitive and efficient algorithm for computing the trust set by developing, implementing, and experimenting with different model checking techniques in order to compare them in terms of memory consumption, efficiency, and scalability with respect to the number of agents considered; and (5) evaluating the performance of the model checking techniques by analyzing their time and space complexity.
The approach has been applied to different application domains to evaluate its computational performance and scalability. The results obtained reveal the effectiveness of the proposed approach, making it a promising methodology in practice.
Automatic Transformation-Based Model Checking of Multi-agent Systems
Multi-Agent Systems (MASs) are highly useful constructs in the context of real-world software applications. Built upon communication and interaction between autonomous agents, these systems are suitable for modeling and implementing intelligent applications. Yet these desirable features are precisely what makes such systems very challenging to design and their compliance with requirements extremely difficult to verify. This explains the need to develop techniques and tools to model, understand, and implement interacting MASs. Among the different methods developed, design-time verification techniques for MASs based on model checking offer the advantage of being formal and fully automated. We can distinguish between two approaches to model checking MASs: the direct verification approach and the transformation-based approach. This thesis focuses on the latter, which relies on formal reduction techniques to transform the problem of model checking a source logic into the equivalent problem of model checking a target logic.
In this thesis, we propose a new transformation framework that leverages model checking of the Computation Tree Logic (CTL) and its NuSMV model checker to design and implement the process of transformation-based model checking for CTL-extension logics of MASs. The approach provides an integrated system with a rich set of features, designed to support the transformation process while simplifying its most challenging and error-prone tasks. The thesis presents and describes the tool built upon this framework and its different applications. A performance comparison with MCMAS, the model checker for MASs, is also discussed.
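The core of a transformation-based approach is a syntactic rewrite from the extended logic down to the target logic. The toy sketch below shows only the mechanics of such a bottom-up rewrite; the operator "T" and its reduction rule are invented for illustration and are not the thesis's actual reduction.

```python
# Formulas of a hypothetical CTL extension are nested tuples; transform()
# rewrites them bottom-up into plain CTL that a checker such as NuSMV could
# process. The rule for the made-up operator "T" is purely illustrative.

def transform(f):
    if isinstance(f, str):                  # atomic proposition: unchanged
        return f
    op, *args = f
    args = [transform(a) for a in args]     # recurse into subformulas first
    if op == "T":                           # hypothetical trust operator T(i, j, phi)
        i, j, phi = args
        # invented reduction: introduce a fresh atom marking the trust
        # relation between i and j, then state phi under it in pure CTL
        return ("AG", ("->", f"trust_{i}_{j}", phi))
    return (op, *args)

src = ("AND", ("T", "a", "b", ("EF", "goal")), "init")
print(transform(src))
# prints: ('AND', ('AG', ('->', 'trust_a_b', ('EF', 'goal'))), 'init')
```

Because the rewrite is purely syntactic and compositional, the same traversal handles any nesting of the extended operators; the model side of the reduction (adding the fresh atoms to the model) is omitted here.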
Modeling and Verifying Probabilistic Social Commitments in Multi-Agent Systems
Interaction among autonomous agents in Multi-Agent Systems (MASs) is the key to solving complex problems that an individual agent cannot handle alone. In this context, social approaches, as opposed to mental approaches, have recently received considerable attention in the area of agent communication. They exploit observable social commitments to develop a verifiable formal semantics by which communication protocols can be specified. However, existing approaches for defining social commitments tend to assume an absolute guarantee of correctness, so that systems run in a certain manner; that is, social commitments have always been modeled under the assumption of certainty. Moreover, the widespread use of MASs increases the interest in exploring the interactions between different aspects of the participating agents, such as the interaction between agents' knowledge and social commitments in the presence of uncertainty. This leaves a gap in the literature of agent communication on modeling and verifying social commitments in probabilistic settings.
In this thesis, we aim to address the above-mentioned problems by presenting a practical formal framework capable of handling the problem of uncertainty in social commitments. First, we develop an approach for representing, reasoning about, and verifying probabilistic social commitments in MASs. This includes defining a new logic called the probabilistic logic of commitments (PCTLC) and a reduction-based model checking procedure for verifying it. In the reduction technique, the problem of model checking PCTLC is transformed into the problem of model checking PCTL, so that the use of PRISM (the Probabilistic Symbolic Model Checker) is made possible. Formulae of PCTLC are interpreted over an extended version of the probabilistic interpreted systems formalism. Second, we extend the work proposed for probabilistic social commitments so that it can capture and verify the interactions between knowledge and commitments. Properties representing the interactions between the two aspects are expressed in a newly developed logic called the probabilistic logic of knowledge and commitment (PCTLkc). Third, we develop an adequate semantics for group social commitments, for the first time in the literature, and integrate it into the framework. We then introduce an improved version of PCTLkc and extend it with operators for group knowledge and group social commitments; the new refined logic is called PCTLkc+. In each of the latter stages, we respectively develop a new version of the probabilistic interpreted systems over which the presented logic is interpreted, and introduce a new reduction-based verification technique for the proposed logic. To evaluate our work, we implement the proposed verification techniques on top of the PRISM model checker and apply them to several case studies. The results demonstrate the usefulness and effectiveness of the proposed work.
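To make the reduction target concrete: after such a reduction, what PRISM ultimately answers are PCTL queries such as probabilistic reachability. The sketch below is a hedged illustration of that kind of computation on a toy discrete-time Markov chain; it is plain PCTL reachability, not the PCTLC semantics itself, and the model is invented.

```python
# Tiny DTMC as a dict of transition probabilities P[state][successor].
P = {
    "s0":   {"s1": 0.6, "s2": 0.4},
    "s1":   {"goal": 0.5, "s0": 0.5},
    "s2":   {"s2": 1.0},            # absorbing failure state
    "goal": {"goal": 1.0},          # absorbing target state
}

def reach_prob(P, target, iters=200):
    """Compute Pr[F target] per state by fixed-point iteration,
    i.e. the quantity behind a PCTL query like P=? [ F "goal" ]."""
    x = {s: (1.0 if s == target else 0.0) for s in P}
    for _ in range(iters):
        x = {s: (1.0 if s == target else
                 sum(p * x[t] for t, p in P[s].items()))
             for s in P}
    return x

probs = reach_prob(P, "goal")
print(round(probs["s0"], 3))  # prints 0.429, i.e. 3/7
```

Solving the balance equations by hand gives Pr(s0) = 0.6 · Pr(s1) and Pr(s1) = 0.5 + 0.5 · Pr(s0), hence Pr(s0) = 3/7, matching the iteration; PRISM performs this kind of computation symbolically and at scale.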
Multimodal social media product reviews and ratings in e-commerce: an empirical approach
Since the boom of the internet and e-commerce in the 1990s, everything has changed. This development opened up new areas for researchers to investigate, especially in the fields of human-computer interaction and social media. This technological revolution has dramatically changed the way we interact with computers, buy, communicate, and share information. This thesis investigates multimodal presentations of social media review and rating messages within an e-commerce interface. Multimodality refers to a communication pattern that goes beyond text to include images, audio, and other media; it provides a new way of communicating, as images, for example, can deliver additional information that might be difficult or impossible to convey using text alone. Social media can be defined as two-way interaction using the internet as the communication medium. The overall hypothesis is that the use of multimodal metaphors (sound and avatars) to present social media product reviews will improve the usability of the e-commerce interface, increase user understanding, and reduce the time needed to make a decision when compared to non-multimodal presentations. E-commerce usability refers to the presentation, accessibility, and clarity of information. An experimental e-commerce platform was developed to investigate the particular interactive circumstances in which multimodal metaphors may benefit the social media communication of product reviews to users. The first experiment, using three conditions (text with emojis, earcons, and facially expressive avatars), measured user comprehension, understanding of information, user satisfaction with the way information was communicated, and social media preference in e-commerce.
The second experiment investigated the time taken by users to understand information, correctness of understanding, user satisfaction, and user enjoyment using three conditions (emojis, a facially expressive avatar, and animation clips) in the e-commerce platform. The results of the first experiment showed that the text-with-emojis and facially expressive avatar conditions improved users' performance, enabling them to understand information effectively and make decisions more quickly compared to the earcons condition. In the second experiment, the results showed that users performed better (understanding information, and understanding it faster) using the emoji and facially expressive avatar presentations compared to the animation clip condition. A set of empirically derived guidelines for implementing these metaphors to communicate social media product reviews in an e-commerce interface is presented.