6 research outputs found

    How to Extend the Abstraction Refinement Model for Systems with Emergent Behavior?

    The Abstraction Refinement Model has been widely adopted since it was first proposed decades ago. This powerful model of the software evolution process brings important properties to the system under development, such as the guarantee that no extra behavior (in particular, no harmful behavior) will be observed once the system is deployed. However, perfect systems with such a guarantee are rare in real-world cases: anomalies and unspecified behaviors always find a way to manifest in our systems. This paper addresses such behaviors under the name "emergent behavior". We extend the Abstraction Refinement Model to include the concept of emergent behavior. Ultimately, this should enable system developers to (i) concretely define what an emergent behavior is, and (ii) reason about the potential sources of emergent behavior along the development process, which in turn helps in controlling emergent behavior at early steps of the development process.

    Identifying Authorship Style in Malicious Binaries: Techniques, Challenges & Datasets

    Attributing a piece of malware to its creator typically requires threat intelligence. Binary attribution raises the level of difficulty, as it mostly relies on the ability to disassemble binaries to identify authorship style. Our survey explores malicious authors' styles and the adversarial techniques they use to remain anonymous. We examine the adversarial impact on state-of-the-art methods, identify key findings, and explore the open research challenges. To mitigate the lack of ground-truth datasets in this domain, we publish alongside this survey the largest and most diverse meta-information dataset: 15,660 malware samples labeled with 164 threat actor groups.

    Hybrid deep learning: a picture fuzzy set model for monitoring human behaviour in forest protection

    In conventional forest-protection monitoring, detection methods use optical sensors or RGB cameras and combine features such as smoke, fire, and human-caused forest destruction in national forests. This paper presents a new approach that integrates deep learning with Picture Fuzzy Sets in a surveillance monitoring system, activated to confirm human behaviour in real time for forest protection. Picture Fuzzy Graphs (PFGs) are applied to solve many complex real-world problems. The paper presents a novel approach that uses deep learning with knowledge graphs to find a human profile, including the detection of humans in large data. In the proposed model, digital human profiles are collected from conventional databases combined with social networks in real time, and a knowledge graph is created to represent the complex relational user attributes of human profiles in large datasets. The PFG is applied to quantify the degree centrality of nodes. The proposed model has been tested with datasets through case studies of a forest, and experimental results validate it on real-world datasets, demonstrating the method's effectiveness.
    Introduction: this article is a product of the research "Monitoring human behaviour in forest protection" carried out at the Hanoi University of Science and Technology in 2021. Results: the dataset includes 93,979 identities from a total of 2,830,146 processed images used for face detection. In a video case study of forest protection, a human is judged by the proposed system to be behaving normally. Conclusion: the theoretical basis of deep learning integrated with a graph database has been shown to be effective at demonstrating human behaviours by tracking profiles for the purpose of forest protection.
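    The one formal ingredient the abstract names is degree centrality over a Picture Fuzzy Graph, where each edge carries three membership degrees (positive mu, neutral eta, negative nu, with mu + eta + nu <= 1). A minimal sketch of that idea follows; the toy graph, its memberships, and the mu - nu scoring rule are all illustrative assumptions, not the paper's data or formula:

    ```python
    # Sketch: degree centrality on a graph whose edges carry picture-fuzzy
    # memberships (positive mu, neutral eta, negative nu; mu + eta + nu <= 1).
    # The graph, the memberships, and the mu - nu scoring rule are
    # illustrative assumptions, not taken from the paper.
    from collections import defaultdict

    # Edges of a toy profile graph: (node_a, node_b, (mu, eta, nu))
    edges = [
        ("alice", "bob",   (0.7, 0.1, 0.1)),
        ("alice", "carol", (0.5, 0.3, 0.1)),
        ("bob",   "carol", (0.2, 0.2, 0.5)),
    ]

    def degree_centrality(edges):
        """Sum a scalar score (mu - nu) over each node's incident edges."""
        score = defaultdict(float)
        for a, b, (mu, eta, nu) in edges:
            w = mu - nu  # collapse the picture-fuzzy value to one number
            score[a] += w
            score[b] += w
        return dict(score)

    print(degree_centrality(edges))  # alice scores highest: two strongly positive ties
    ```

    A richer scoring rule could weight the neutral degree eta as well; the point is only that the fuzzy triple on each edge is reduced to a per-node centrality score.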

    Investigating the process of process modeling and its relation to modeling quality: the role of structured serialization

    Lately, the focus of organizations has been changing fundamentally. Where they used to pay attention almost exclusively to results, in terms of goods, services, revenue, and costs, they are now also concerned about the efficiency of their business processes. Each step of a business process needs to be known, controlled, and optimized. This explains the huge effort that many organizations currently put into mapping their processes in so-called (business) process models. Unfortunately, these models sometimes do not (completely) reflect the business reality, or the reader of the model does not interpret the represented information as intended. Hence, whereas on the one hand we observe organizations attaching increasing importance to these models, on the other hand we notice that the quality of process models in companies often proves insufficient. This doctoral research makes a significant contribution in this context. It investigates in detail how people create process models, and why and when this goes wrong. A better understanding of current process-modeling practice forms the basis for the development of concrete guidelines that will result in the construction of better process models in the future. The first study investigated how the approach of different modelers can be represented in a cognitively effective way, in order to facilitate knowledge building. For this purpose the PPMChart was developed. It represents the operations a modeler performs in a modeling tool in such a way that patterns in their way of working can be detected easily. Through the collection of 704 unique modeling executions (a joint contribution of several authors in the research domain) and the development of a concrete implementation of the visualization, it became possible to gather many insights into how different people work in different situations while modeling a concrete process.
    The second study explored, based on the modeling patterns discovered in the first study, potential relations between how process models are constructed and the quality that is delivered. To be precise, three modeling patterns from the previous study were investigated further in their relation to the understandability of the produced process model. By comparing the PPMCharts that show these patterns with the corresponding process models, a connection was found in each case. It was noticed that when a process model was constructed in consecutive blocks (i.e., in a structured way), a more understandable process model was produced. A second relation was that modelers who (frequently) moved (many) model elements during modeling usually created a less understandable model. The third connection was found between the amount of time spent constructing the model and a declining understandability of the resulting model. These relations were established graphically on paper, but were also confirmed by a simple statistical analysis. The third study selected one of these relations, i.e., the relation between structured modeling and model quality, and investigated it in more detail. Again, the PPMChart was used, which has led to the identification of different ways of structured process modeling. When a task is difficult, people spontaneously split it up into sub-tasks that are executed consecutively (instead of simultaneously). Structuring is the way in which this splitting of tasks is handled. It was found that when this happens consistently and according to a certain logic, modeling becomes more effective and more efficient: effective because the process model is created with fewer syntactic and semantic errors, and efficient because it takes less time and fewer modeling operations. Still, we noticed that splitting up the modeling into sub-tasks in a structured way did not always lead to a positive result.
    This can be explained by some people structuring the modeling in the wrong way. Our brain has cognitive preferences that cause certain ways of working not to fit. The study identified three important cognitive preferences: whether one has a sequential or a global learning style, how context-dependent one is, and how strong one's desire and need for structure are. The Structured Process Modeling Theory was developed, which captures these relations and can form the basis for the development of an optimal individual approach to process modeling. In our opinion the theory also has the potential to be applicable in a broader context and to help solve various types of problems effectively and efficiently.
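    The "simple statistical analysis" used to confirm the graphical relations can be pictured as a rank correlation between a process metric and an understandability score. Below is a minimal, self-contained sketch using Spearman's rho; the durations, the scores, and the perfectly inverse relation are hypothetical illustrations, not the study's actual measurements or method:

    ```python
    # Sketch: Spearman rank correlation between modeling time and model
    # understandability. All data below are hypothetical, chosen only to
    # illustrate the kind of negative relation the study reports.

    def ranks(values):
        """Return 1-based ranks of the values (ties not handled, for brevity)."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        result = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            result[i] = rank
        return result

    def spearman(x, y):
        """Spearman's rho via the classic formula 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
        rx, ry = ranks(x), ranks(y)
        n = len(x)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n * n - 1))

    minutes_spent = [12, 25, 31, 44, 58]            # hypothetical modeling durations
    understandability = [0.9, 0.8, 0.7, 0.5, 0.4]   # hypothetical quality scores

    print(spearman(minutes_spent, understandability))  # inverse ranks -> -1.0
    ```

    A rank-based measure suits this setting because understandability is typically an ordinal judgment rather than an interval-scaled quantity.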

    Understanding specific gaming experiences: the case of open world games

    Digital games offer players a variety of experiences. Open world games allow players to choose what to engage with, and subsequently what experiences they want to have. However, this means it is not always clear what players are doing or why, even within the same game. This lack of commonality calls into question what it means to have 'a' gaming experience if there is little overlap in player behaviour. This thesis explored what it means to experience an open world game, and how experiences are unique to specific games or types of games. The first two studies showed that despite differences in what players do, there is an overarching experience: self-pacing gameplay by choosing what to engage with. Studies three and four explored whether motivation could explain which experiences players pursue, but current measurement tools were not statistically or conceptually dependable enough to provide robust findings. Study five conversely explored whether goals can explain player behaviour, and found that players consider their actions goal-directed. Finally, study six explored how to map goals onto the actions taken in a specific gaming session. This revealed that the game also provides goals for players to consider, meaning gameplay is not driven by player intent alone. Overall, open world games are a series of contextually situated experiences; players purposefully engage with in-game content, but remain flexible to what the game may offer in the moment. Whilst individual experiences vary greatly, players had the same unifying experience of navigating goal pursuit. Goals can be related to gameplay data to reveal what player-game interactions take place, and how players report them. Therefore, this thesis shows that players can have little overlap in the specific experiences they have within games, yet still have the same overarching experience. Understanding such experiences requires data from a player's perspective, as gameplay data alone cannot reveal player intent.