
    On Preserving the Behavior in Software Refactoring: A Systematic Mapping Study

    Context: Refactoring is the art of modifying the design of a system without altering its behavior. The idea is to reorganize variables, classes and methods to facilitate their future adaptation and comprehension. As the concept of behavior preservation is fundamental for refactoring, several studies, using formal verification, language transformation and dynamic analysis, have been proposed to monitor the execution of refactoring operations and their impact on the program semantics. However, no existing study examines the available behavior preservation strategies for each refactoring operation. Objective: This paper identifies behavior preservation approaches in the research literature. Method: We conduct a systematic mapping study to capture all existing behavior preservation approaches, which we classify based on several criteria including their methodology, applicability, and degree of automation. Results: The results indicate that several behavior preservation approaches have been proposed in the literature. The approaches range from formalisms and verification techniques to automatic refactoring safety tools and manual analysis of the source code. Conclusion: Our taxonomy reveals that behavior preservation is under-researched for some types of refactoring operations. Our classification also indicates that several strategies can be combined to better detect violations of the program semantics.
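One of the strategy families the study classifies, dynamic analysis, can be illustrated with a minimal sketch: run the original and the refactored code on the same inputs and check that the observable results agree. All names below are hypothetical, and real approaches use far richer test oracles than simple output comparison.

```python
# Behavior-preservation check via dynamic analysis: a hypothetical
# minimal sketch comparing a function before and after an
# extract-method refactoring.

def total_price_before(items):
    # Original version: inline computation.
    total = 0.0
    for price, qty in items:
        total += price * qty
    return round(total, 2)

def line_total(price, qty):
    # Method extracted by the refactoring.
    return price * qty

def total_price_after(items):
    # Refactored version: delegates to the extracted method.
    return round(sum(line_total(p, q) for p, q in items), 2)

def behavior_preserved(f, g, inputs):
    # Dynamic check: both versions must agree on every sampled input.
    return all(f(x) == g(x) for x in inputs)

samples = [[(1.5, 2), (3.0, 1)], [], [(0.1, 3)]]
assert behavior_preserved(total_price_before, total_price_after, samples)
```

A dynamic check like this can only refute behavior preservation on the sampled inputs, which is why the surveyed literature also covers formal verification of refactoring operations.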

    Semantical Correctness of Simulation-to-Animation Model and Rule Transformation

    In the framework of graph transformation, simulation rules are well known to define the operational behavior of visual models. Moreover, it has already been shown how to construct animation rules in a domain-specific layout from simulation rules. An important requirement of this construction is semantical correctness, which has not yet been considered. In this paper we give a precise definition of simulation-to-animation (S2A) model and rule transformations. Our main results show under which conditions semantical correctness can be obtained. The results are applied to analyze the S2A transformation of a Radio Clock model. Keywords: graph transformation, model and rule transformation, semantical correctness, simulation, animation.

    Agent-oriented constructivist knowledge management

    In ancient times, when written language was introduced, books and manuscripts were often considered sacred. Only a few persons were able to read and interpret them, while most people had little choice but to accept those interpretations. Then, along with the industrial revolution of the eighteenth and nineteenth centuries, and boosted especially by the development of the printing press, knowledge slowly became available to all people. Simultaneously, people started to apply machines in their work, usually characterized by repetitive processes and focused especially on the production of consumer goods, such as furniture, clocks, and clothes. Following the needs of this new society, it was finally through science that new processes emerged to enable the transmission of knowledge from books and instructors to learners. Still today, people gain knowledge through these processes, which were created to fulfill the needs of a society in its early stages of industrialization and are thus not compatible with the needs of the information society. In the information society, people must deal with an overwhelming amount of information delivered by the media, books, and various telecommunication and information systems. Furthermore, people's relation to work has undergone profound changes; for instance, knowledge itself is now regarded as a valuable work product, and the workplace has thus become an environment of knowledge creation and learning. Changes in the world's economic, political and social scenarios have led to the conclusion that knowledge is the differential that can lead to innovation and, consequently, save organizations, societies, and even countries from failing to achieve their main goals. Focusing on these matters is the Knowledge Management (KM) research area, which deals with the creation, integration and use of knowledge, aiming at improving the performance of individuals and organizations.
Advances in this field are mainly motivated by the assumption that organizations should focus on knowledge assets (generally maintained by the members of an organization) to remain competitive in the information society's market. This thesis argues that KM initiatives should be designed from a constructivist perspective. In general, a constructivist view on KM focuses on how knowledge emerges, giving great importance to the knowledge holders and their natural practices. The paragraph above may already give the reader an intuition of how this work approaches Knowledge Management, but let us be more precise. Research in Knowledge Management has evolved substantially in the past 30 years, moving from a centralized view of KM processes to a distributed view, grounded in studies from the organizational and cognitive sciences that point out the social, distributed, and subjective nature of knowledge. The first Knowledge Management Systems (KMSs) were centrally based and followed a top-down design approach. The organization's managers, supported by knowledge engineers, collected and structured the contents of an organizational memory as a finished product at design time (before the organizational memory was deployed) and then disseminated the product, expecting employees to use and update it. However, employees often claimed that the knowledge stored in the repository was detached from their real working practices. This led to the development of evolutionary methods, which prescribe that a basic KM system is initially developed and then evolves proactively in an ongoing fashion. However, most initiatives are still based on building central repositories and portals, which assume standardized vocabularies, languages, and classification schemes. Consequently, employees' lack of trust and motivation often leads to dissatisfaction.
In other words, workers resist sharing knowledge, since they do not know who is going to access it and what is going to be done with it. Moreover, the importance attributed to knowledge may give the impression that these central systems take a valuable asset away from its owner without giving appreciable benefits in return. The problems highlighted above may be attenuated or even solved if a combined top-down/bottom-up strategy is applied when proposing a KM solution. This means that the solution should be sought with organizational goals in mind (top-down), while at the same time more attention is given to the knowledge holders and the natural processes they already use to share knowledge (bottom-up). Since active agency is such an important principle of Constructivism, this work recognizes the agent paradigm (first defined by Artificial Intelligence and more recently adopted by Software Engineering) as the best approach to Knowledge Management from both a technological and a social perspective. Capable of modeling and supporting social environments, agents are here recognized as a suitable solution for Knowledge Management, especially by providing a suitable metaphor for modeling KM domains (i.e., representing humans and organizations) and systems. Applying agents as metaphors in KM is mainly motivated by the definition of agents as cognitive beings with characteristics that resemble human cognition, such as autonomy, reactivity, goals, beliefs, desires, and social ability. Using agents as human abstractions is motivated by the fact that, for specific problems such as software engineering and knowledge management process modeling, agents may help the analyst abstract away from some of the problems related to human complexity and focus on the important issues that impact the specific goals, beliefs and tasks of the agents of the domain.
This often leads to a clear understanding of the current situation, which is essential for proposing an appropriate solution. The current situation may be understood by modeling, at the same time, the overall goals of the organization and the needs and wants of the knowledge holders. To facilitate the analysis of KM scenarios and the development of adequate solutions, this work proposes ARKnowD (Agent-oriented Recipe for Knowledge Management Systems Development). Systems here have a broad definition, comprehending both technology-based systems (e.g., information systems, groupware, repositories) and/or human systems, i.e., human processes supporting KM using non-computational artifacts (e.g., brainstorming sessions, creativity workshops). The basic philosophical assumptions behind ARKnowD are: a) the interactions between humans and systems should be understood according to the constructivist principle of self-construction, which claims that humans and communities are self-organizing entities that constantly construct their identities and evolve through endless interaction cycles; as a result of such interactions, humans shape systems and, at the same time, systems constrain the ways humans act and change; b) KM-enabling systems should be built in a bottom-up approach, aiming at the organizational goals but understanding that, in order to fulfill these goals, some personal needs and wants of the knowledge holders (i.e., the organizational members) need to be targeted; and c) there is no "silver bullet" when pursuing a KM tailoring methodology, and the best approach is to combine existing agent-oriented approaches according to the given domain or situation. This work shows how the principles above may be achieved through the integration of two existing works on agent-oriented software engineering, which are combined to guide KM analysts and system developers in conceiving KM solutions.
Innovation in our work is achieved by supporting the top-down/bottom-up approach to KM mentioned above. The proposed methodology does this by strongly emphasizing the earlier phases of software development, the so-called requirements analysis activity. In this way, we consider all stakeholders (organizations and humans) as agents in our analysis model, and start by understanding their relations before actually thinking of developing a system. Perhaps the problem may be more effectively solved by proposing changes in the business processes rather than by adopting new technology. Besides humans and organizations, existing systems are also included in the model from the start, helping the analyst and designer understand which functionalities are delegated to these so-called artificial agents. Further benefits of applying ARKnowD may be attributed to our choice of using the appropriate agent cognitive characteristics in the different phases of the development cycle. With the main purpose of exemplifying the use of the proposed methodology, this work presents a socially aware recommender agent named KARe (Knowledgeable Agent for Recommendations). Recommender systems may be defined as systems that support users in selecting items of their need from a large set of items, helping them overcome the overwhelming feeling of facing a vast information source, such as the web, an organizational repository or the like. Besides serving as a case study for our methodology, this work also explores the suitability of the KARe system for supporting KM processes. Our choice to support knowledge sharing through questioning and answering processes is again supported by proponents of Constructivism, who understand that social interaction is vital for active knowledge building.
This assumption is also defended by some KM theories, which claim that knowledge is created through cycles of transformation between two types of knowledge: tacit and explicit. Up to now, research on KM has paid much attention to the formalization and exchange of explicit knowledge, in the form of documents or other physical artifacts, often annotated with metadata and classified by taxonomies or ontologies. Investigations of tacit knowledge have so far been scarce, perhaps because of the complexity of capturing and integrating such knowledge, defined as knowledge about personal experience and values, usually confined to people's minds. Taking a flexible approach to supporting this kind of knowledge conversion, KARe relies on the potential of the social interaction underlying organizational practices to support knowledge creation and sharing. The global objective of this work is to support knowledge creation and sharing within an organization, according to its own natural processes and social behaviors. In other words, this work is based on the assumption that KM is better supported if knowledge is viewed from a constructivist perspective. To sum up, this thesis aims at: 1) providing an agent-oriented approach to guide the creation and evolution of KM initiatives, by analyzing the organizational potential, behaviors and processes concerning knowledge sharing; and 2) developing the KARe recommender system, based on a semantically enriched Information Retrieval technique for recommending knowledge artifacts, supporting users in asking and answering each other's questions. These objectives are achieved as follows: - defining the principles that characterize a constructivist KM-supporting environment and understanding how they may be used to support the creation of more effective KM solutions; - providing an agent-oriented approach to develop KM systems.
This approach is based on the integration of two different agent-oriented software engineering works, profiting from their strengths to provide a comprehensive methodology that targets both analysis and design activities; - proposing and designing a socially aware agent-oriented recommender system, both to exemplify the application of the proposed approach and to explore its potential for supporting knowledge creation and sharing; and - implementing an Information Retrieval algorithm to support the previously mentioned system in generating recommendations. Besides describing the algorithm, this thesis presents experimental results demonstrating its effectiveness.
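The abstract does not detail the semantically enriched Information Retrieval technique behind KARe. As a rough illustration of the underlying IR idea only, a plain TF-IDF cosine-similarity matcher for routing a new question to similar past questions might look like the following sketch (all names and data are hypothetical):

```python
# Hypothetical sketch of matching a new question against past Q&A pairs
# using plain TF-IDF cosine similarity. KARe's actual algorithm is
# semantically enriched; this only illustrates the generic IR baseline.
import math
from collections import Counter

def tfidf_vectors(docs):
    # docs: list of token lists; returns one tf-idf weight dict per doc.
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(d).items()}
            for d in docs]

def cosine(u, v):
    # Cosine similarity between two sparse vectors stored as dicts.
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

past = ["how to configure backup schedule".split(),
        "where to store design documents".split(),
        "how to configure mail client".split()]
query = "configure backup".split()

vecs = tfidf_vectors(past + [query])          # query vectorized jointly
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]
best = max(range(len(past)), key=scores.__getitem__)  # most similar question
```

In a recommender like the one described, the author of the best-matching past question would be a candidate answerer for the new one.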

    Scaling Size and Parameter Spaces in Variability-Aware Software Performance Models (T)

    In software performance engineering, what-if scenarios, architecture optimization, capacity planning, run-time adaptation, and uncertainty management of realistic models typically require the evaluation of many model instances. Effective analysis is, however, hindered by two orthogonal sources of complexity. The first is the infamous problem of state-space explosion: the analysis of a single model becomes intractable as its size grows. The second is due to massive parameter spaces that must be explored, across which computations cannot be reused between model instances. In this paper, we efficiently analyze many queuing models with the distinctive feature of more accurately capturing variability and uncertainty of execution rates by incorporating general (i.e., non-exponential) distributions. Applying product-line engineering methods, we consider a family of models generated by a core that evolves into concrete instances through simple delta operations affecting both the topology and the model's parameters. State explosion is tackled by turning to a scalable approximation based on ordinary differential equations. The entire model space is analyzed in a family-based fashion, i.e., at once, using an efficient symbolic solution of a super-model that subsumes every concrete instance. Extensive numerical tests show that this is orders of magnitude faster than a naive instance-by-instance analysis.
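The two ingredients of the abstract, an ODE approximation of a queue and a delta-generated family of parameter instances, can be pictured with a toy sketch. The fluid model below (dx/dt = lam - mu * min(x, servers)) and the instance-by-instance loop are hypothetical illustrations of the naive baseline, not the paper's symbolic family-based solution.

```python
# Fluid (ODE) sketch of a multi-server queue, evaluated naively over a
# small family of parameter instances. Hypothetical illustration only.

def fluid_queue_length(lam, mu, servers, x0=0.0, t_end=50.0, dt=0.01):
    # dx/dt = lam - mu * min(x, servers): arrival rate minus the
    # service capacity currently in use; integrated with forward Euler.
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (lam - mu * min(x, servers))
        x = max(x, 0.0)  # queue length cannot go negative
    return x

# A "family" of instances obtained by simple delta operations on the
# parameters (here: changing arrival rate and server count).
family = [(4.0, 1.0, 5),   # stable: settles near x = lam/mu = 4
          (4.0, 1.0, 3),   # overloaded: backlog grows without bound
          (2.0, 1.0, 3)]   # stable: settles near x = 2
results = {params: fluid_queue_length(*params) for params in family}
```

An instance-by-instance sweep like this repeats the whole integration for every member of the family, which is exactly the redundancy the paper's family-based symbolic analysis avoids.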

    A Generic Framework for Enforcing Security in Distributed Systems

    A large share of today's computer programs is distributed. For instance, services for backups, file storage, and cooperative work are now typically managed by distributed programs. The last two decades have also brought a variety of services establishing social networks, from exchanging short messages to sharing personal information to dating. In each of these services, distributed programs process and store sensitive information about their users or the corporations their users work for. Secure processing of this sensitive information is essential for service providers. For instance, businesses are bound by law to take security measures against conflicts of interest. Beyond legal regulations, service providers are also pressed by users to satisfy their demands for security, such as the privacy of their profiles and messages in online social networks. In both cases, the prospect of security violations by a service provider constitutes a serious disadvantage and deters potential users from using the service. The focus of this thesis is on enabling service providers to secure their distributed programs by means of run-time enforcement mechanisms. Run-time enforcement mechanisms enforce security in a given program by monitoring, at run time, the behavior of the program and by intervening when security violations are about to occur. Enforcing security in a distributed program includes securing the behavior of the individual agents of the distributed program as well as securing the joint behavior of all the agents. We present a framework for enforcing security in distributed programs. The framework combines tools and techniques for the specification, enforcement, and verification of security policies for distributed programs. For the specification of security policies, the framework provides the policy language CoDSPL.
For generating run-time enforcement mechanisms from given security policies and applying these mechanisms to given distributed programs, the framework includes the tool CliSeAu. For the verification of generated enforcement mechanisms, the framework provides a formal model in the process algebra CSP. The policy language, the tool, and the formal model all allow the distributed units of enforcement mechanisms to cooperate with each other. For supporting the specification of cooperating units, the framework provides two techniques as extensions of CoDSPL: a technique for specifying cooperation in a modular fashion and a technique for cooperating effectively in the presence of race conditions. Finally, with the framework's cross-lining technique, we devise a general approach for instrumenting distributed programs to apply an enforcement mechanism whose units can cooperate. The particular novelty of the presented framework is that the cooperation to be performed can be specified by the security policies and can take place even when the agents of the distributed program do not interact. This distinguishing feature enables one to specify and enforce security policies that employ a form of cooperation suited to the application scenario: cooperation can be used when one's security requirements cannot be enforced in a fully decentralized fashion, but its overhead can be avoided when no cooperation is needed. The case studies described in this thesis provide evidence that our framework is suited for enforcing custom security requirements in services based on third-party programs. In the case studies, we use the framework to develop two run-time enforcement mechanisms: one enforcing a policy against conflicts of interest in a storage service and one enforcing users' privacy policies in online social networks with respect to the sharing and re-sharing of messages.
In both case studies, we experimentally confirm that the enforcement mechanisms are effective and efficient, with overheads in the range of milliseconds.
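The first case study's policy, protection against conflicts of interest in a storage service, can be sketched as a minimal run-time monitor in the style of a Chinese Wall policy: observe each access and suppress those that would violate the policy. This is a hypothetical illustration of the general enforcement idea, not CoDSPL's policy language or CliSeAu's actual interface.

```python
# Minimal run-time enforcement sketch: a monitor that suppresses storage
# accesses violating a conflict-of-interest (Chinese Wall style) policy.
# All dataset and class names below are hypothetical.

CONFLICT_CLASSES = {"bankA": "banks", "bankB": "banks", "oilC": "oil"}

class ConflictOfInterestMonitor:
    def __init__(self):
        # user -> {conflict class -> dataset already accessed}
        self.accessed = {}

    def check(self, user, dataset):
        # Permit the access iff the user has not previously accessed a
        # *different* dataset in the same conflict class.
        cls = CONFLICT_CLASSES[dataset]
        prior = self.accessed.setdefault(user, {})
        if cls in prior and prior[cls] != dataset:
            return False  # intervene: suppress the violating access
        prior[cls] = dataset
        return True

mon = ConflictOfInterestMonitor()
assert mon.check("alice", "bankA")      # first access in class: allowed
assert not mon.check("alice", "bankB")  # conflicting bank: blocked
assert mon.check("alice", "oilC")       # different conflict class: allowed
```

A distributed version of such a policy is where the framework's cooperating enforcement units come in: each agent's unit monitors local accesses, and units exchange state when a decision depends on accesses observed elsewhere.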

    Stochastic hybrid systems: modelling and verification

    Hybrid systems now form a classical computational paradigm unifying discrete and continuous system aspects. The modelling, analysis and verification of these systems are very difficult. One way to reduce the complexity of hybrid system models is to consider randomization. The need for stochastic models has, in fact, multiple motivations. Usually, complete information is not available when building models, and we have to consider stochastic versions. Moreover, non-determinism and uncertainty are inherent to complex systems. The stochastic approach can be thought of as a way of quantifying non-determinism (by assigning a probability to each possible execution branch) and managing uncertainty. It builds upon the now-classical approach in algorithmics that provides polynomial-complexity algorithms via randomization. In this thesis we investigate stochastic hybrid systems, focusing on modelling and analysis. We propose a powerful unifying paradigm that combines analytical and formal methods. Its applications range from air traffic control to communication networks and healthcare systems. The stochastic hybrid system paradigm is developing explosively, because of its very powerful expressivity and the great variety of possible applications. Each hybrid system model can be randomized in different ways, giving rise to many classes of stochastic hybrid systems. Moreover, randomization can profoundly change the mathematical properties of the discrete and continuous aspects and can also influence their interaction. Beyond the profound foundational and semantic issues, there is the possibility to combine and cross-fertilize techniques from analytic mathematics (like optimization, control, adaptivity, stability, existence and uniqueness of trajectories, and sensitivity analysis) and formal methods (like bisimulation, specification, reachability analysis, and model checking). These constitute the major motivations of our research.
We investigate new models of stochastic hybrid systems and their associated problems. The main difference from existing approaches is that we do not follow a single path (based only on continuous or on discrete mathematics), but rather their cross-fertilization. For stochastic hybrid systems we introduce concepts that had previously been defined only for discrete transition systems; techniques that have been used for discrete automata thus reappear in a new, analytical fashion. This is partly explained by the fact that popular verification methods (like theorem proving) can hardly cope even with probabilistic extensions of discrete systems. When the continuous dimension is added, the idea of using methods from continuous mathematics for verification purposes arises naturally. The concrete contribution of this thesis has four major milestones: 1. a new and very general model for stochastic hybrid systems; 2. stochastic reachability for stochastic hybrid systems, introduced together with an approximation method to compute reach-set probabilities; 3. bisimulation for stochastic hybrid systems, introduced along with an investigation of its relationship with reachability analysis; and 4. an extension of the modelling paradigm to address the communication issue.
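Stochastic reachability, the second milestone, asks for the probability that a system's trajectories enter a given set within a time horizon. As a hedged illustration only (the thesis develops analytical approximation methods, not plain simulation), a Monte Carlo estimate for a toy stochastic hybrid system, a noisy thermostat with two discrete modes, might look like this; all dynamics and parameters are hypothetical.

```python
# Monte Carlo sketch of stochastic reachability for a toy stochastic
# hybrid system: a thermostat with on/off modes, Brownian noise on the
# temperature, and an unsafe set {x >= unsafe}. Hypothetical example.
import random

def reach_probability(unsafe=25.0, runs=2000, t_end=10.0, dt=0.01, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(runs):
        x, heating = 20.0, True  # continuous state and discrete mode
        t = 0.0
        while t < t_end:
            drift = 2.0 if heating else -1.5
            # Euler-Maruyama step: mode-dependent drift plus noise.
            x += drift * dt + 0.5 * random.gauss(0.0, dt ** 0.5)
            # Guard-triggered discrete transitions (hybrid behavior).
            if heating and x >= 22.0:
                heating = False
            elif not heating and x <= 19.0:
                heating = True
            if x >= unsafe:  # trajectory reached the unsafe set
                hits += 1
                break
            t += dt
    return hits / runs  # empirical reach-set probability

p = reach_probability()
```

Plain simulation like this scales poorly for rare events and gives no guarantees, which motivates the analytical reach-probability approximations and the bisimulation-based reductions developed in the thesis.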