
    Source code independent reverse engineering of dynamic web sites

    This paper describes source code independent reverse engineering of dynamic web sites. The tool Revangie builds a form-oriented analysis model solely from the usage of a web application. The recovered models can be exploited, for example, for requirements engineering and load test development. Revangie can explore a given web application fully automatically or can passively record its usages. The collected data, i.e., data about screens, server-side programs, and system responsiveness, are analyzed in order to build a user interface model. The paper presents several adequate screen classifications, which are utilized to yield significant models.
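    A minimal Python sketch of how such usage recording and screen classification might look (the names and the form-signature heuristic are assumptions for illustration, not Revangie's actual design): each observed page is reduced to the structure of the forms it contains, and pages with the same structure fall into one screen class of the recovered model.

        from html.parser import HTMLParser

        class FormExtractor(HTMLParser):
            """Collect an (action, sorted field names) signature per <form>."""
            def __init__(self):
                super().__init__()
                self.forms, self._action, self._fields = [], None, []

            def handle_starttag(self, tag, attrs):
                a = dict(attrs)
                if tag == "form":
                    self._action, self._fields = a.get("action", ""), []
                elif tag in ("input", "select", "textarea") and self._action is not None:
                    self._fields.append(a.get("name", ""))

            def handle_endtag(self, tag):
                if tag == "form" and self._action is not None:
                    self.forms.append((self._action, tuple(sorted(self._fields))))
                    self._action = None

        def screen_class(html):
            """Classify a screen by the set of form signatures it presents."""
            p = FormExtractor()
            p.feed(html)
            return frozenset(p.forms)

        # Passive recording: screen class -> observed (url, response time) pairs.
        recorded = {}
        def record(url, html, response_time):
            recorded.setdefault(screen_class(html), []).append((url, response_time))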

    Supporting collaborative grid application development within the escience community

    The systemic representation and organisation of software artefacts, e.g. specifications, designs, interfaces, and implementations, resulting from the development of large distributed systems from software components have been addressed by our research within the Practitioner and AMES projects [1,2,3,4]. Without appropriate representations and organisations, large collections of existing software are not amenable to the activities of software reuse and software maintenance, as these activities are likely to be severely hindered by the difficulty of understanding the software applications and their associated components. In both of these projects, static analysis of source code and other development artefacts, where available, and subsequent application of reverse engineering techniques were successfully used to develop a more comprehensive understanding of the software applications under study [5,6]. Later research addressed the maintenance of a component library in the context of component-based software product line development and maintenance [7]. The classic software decompositions, horizontal and vertical, proposed by Goguen [8] influenced all of this research. While they are adequate for static composition, they fail to address the dynamic aspects of composing large distributed software applications from components, especially where these include software services. The separation of component co-ordination concerns from component functionality proposed in [9] offers a partial solution.
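    The co-ordination/functionality split can be illustrated with a short, hypothetical Python sketch (the names are invented for illustration and do not come from the cited work [9]): components expose only functionality, while a separate coordinator wires them together, so the composition can be rearranged without touching any component.

        from typing import Callable, Dict, List

        class Coordinator:
            """Routes named events to registered components; components never
            call each other directly, so composition stays a separate concern."""
            def __init__(self):
                self._handlers: Dict[str, List[Callable]] = {}

            def connect(self, event: str, handler: Callable) -> None:
                self._handlers.setdefault(event, []).append(handler)

            def emit(self, event: str, payload) -> None:
                for handler in self._handlers.get(event, []):
                    handler(payload)

        # Functionality-only components, unaware of each other:
        def parse(data):
            return data.split(",")

        def store(items):
            print("stored", items)

        c = Coordinator()
        c.connect("raw", lambda d: c.emit("parsed", parse(d)))
        c.connect("parsed", store)
        c.emit("raw", "a,b,c")  # prints: stored ['a', 'b', 'c']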

    Model and system learners, optimal process constructors and kinetic theory-based goal-oriented design: a new paradigm in materials and processes informatics

    Traditionally, Simulation-Based Engineering Sciences (SBES) has relied on the use of static data inputs (model parameters, initial or boundary conditions, ... obtained from adequate experiments) to perform simulations. A new paradigm in the field of Applied Sciences and Engineering has emerged in the last decade. Dynamic Data-Driven Application Systems [9, 10, 11, 12, 22] allow the linkage of simulation tools with measurement devices for real-time control of simulations and applications, entailing the ability to dynamically incorporate additional data into an executing application and, in reverse, the ability of an application to dynamically steer the measurement process. It is in that context that traditional "digital twins" are giving rise to a new generation of goal-oriented data-driven application systems, also known as "hybrid twins", embracing models based on physics and models exclusively based on data adequately collected and assimilated to fill the gap between usual model predictions and measurements. Within this framework, new methodologies based on model learners, machine learning and kinetic goal-oriented design are defining a new paradigm in materials, processes and systems engineering.
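    A minimal Python sketch of the hybrid-twin idea described here (the models, gain, and noise level are illustrative assumptions, not taken from the cited systems): a crude physics model predicts the state, each incoming measurement is assimilated to update a data-driven correction term, and that term learns the gap between model prediction and reality.

        import random

        def physics_model(x, dt=0.1):
            return x - 0.5 * x * dt  # assumed first-order decay model

        def measure(true_x):
            return true_x + random.gauss(0.0, 0.01)  # stand-in for a sensor

        x_model, x_true, gap = 1.0, 1.2, 0.0
        gain = 0.2  # assimilation gain (an assumption)
        for _ in range(50):
            x_true = x_true - 0.6 * x_true * 0.1  # "reality" decays faster than the model
            x_model = physics_model(x_model)
            residual = measure(x_true) - (x_model + gap)
            gap += gain * residual  # data-driven term fills the model/reality gap
        print(f"model {x_model:.3f} + learned gap {gap:.3f} vs reality {x_true:.3f}")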

    Facing human rights attributes of copyright in Europe in the context of the EU Digital Single Market

    The principle of equality as a fundamental norm in law and political philosophy, Jurysprudencja 8, Wojciechowski B., Bekrycht T., Cern K.M. (eds.), Wydawnictwo Uniwersytetu Ɓódzkiego, ƁódĆș 2017. The project was financed by the National Science Centre, Poland (decision no. DEC-2012/05/B/HS5/01111).

    On Reverse Engineering in the Cognitive and Brain Sciences

    Various research initiatives try to utilize the operational principles of organisms and brains to develop alternative, biologically inspired computing paradigms and artificial cognitive systems. This paper reviews key features of the standard method applied to complexity in the cognitive and brain sciences, i.e. decompositional analysis or reverse engineering. The indisputable complexity of brain and mind raises the issue of whether they can be understood by applying the standard method. Indeed, recent findings in the experimental and theoretical fields question central assumptions and hypotheses made for reverse engineering. Using the modeling relation as analyzed by Robert Rosen, the scientific analysis method itself is made a subject of discussion. It is concluded that the fundamental assumption of cognitive science, i.e. that complex cognitive systems can be analyzed, understood and duplicated by reverse engineering, must be abandoned. Implications for investigations of organisms and behavior, as well as for engineering artificial cognitive systems, are discussed. Comment: 19 pages, 5 figures.

    A comparative evaluation of dynamic visualisation tools

    Despite their potential applications in software comprehension, it appears that dynamic visualisation tools are seldom used outside the research laboratory. This paper presents an empirical evaluation of five dynamic visualisation tools - AVID, Jinsight, jRMTool, Together ControlCenter diagrams and Together ControlCenter debugger. The tools were evaluated on a number of general software comprehension and specific reverse engineering tasks using the HotDraw object-oriented framework. The tasks considered typical comprehension issues, including identification of software structure and behaviour, design pattern extraction, extensibility potential, maintenance issues, functionality location, and runtime load. The results revealed that the level of abstraction employed by a tool affects its success in different tasks, and that tools were more successful in addressing specific reverse engineering tasks than general software comprehension activities. It was found that no one tool performs well in all tasks, and some tasks were beyond the capabilities of all five tools. The paper concludes with suggestions for improving the efficacy of such tools.

    Instantaneous modelling and reverse engineering of data-consistent prime models in seconds!

    A theoretical framework that supports automated construction of dynamic prime models purely from experimental time series data has been invented and developed; it can automatically generate (construct) data-driven models of any time series data in seconds. This has resulted in the formulation and formalisation of new reverse engineering and dynamic methods for automated systems modelling of complex systems, including complex biological, financial, control, and artificial neural network systems. The systems/model theory behind the invention has been formalised as a new, effective and robust system identification strategy complementary to process-based modelling. The proposed dynamic modelling and network inference solutions often involve tackling extremely difficult parameter estimation challenges, inferring unknown underlying network structures, and unsupervised formulation and construction of smart and intelligent ODE models of complex systems. In underdetermined conditions, i.e., when data-consistent prime models of an unknown (or well-studied) complex system must be constructed instantaneously from small-sized time series data, inference of the unknown underlying network of interactions is even more challenging. This article reports a robust step-by-step mathematical and computational analysis of the entire prime model construction process that determines a model from data in less than a minute.
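    A minimal Python sketch of one common way to reverse-engineer an ODE model from time series data (the candidate model form and least-squares approach are generic illustrations, not the article's method): derivatives are estimated from the data and the parameters of an assumed right-hand side are fitted to them.

        import numpy as np

        # Synthetic time series generated by dx/dt = -1.5*x + 3.0
        t = np.linspace(0.0, 2.0, 21)
        x = 3.0 * np.exp(-1.5 * t) + 2.0

        dxdt = np.gradient(x, t)                   # derivative estimate from the data
        A = np.column_stack([x, np.ones_like(x)])  # candidate terms: x and a constant
        (a, b), *_ = np.linalg.lstsq(A, dxdt, rcond=None)
        print(f"dx/dt is approximately {a:.3f}*x + {b:.3f}")  # expect roughly -1.5 and 3.0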

    What conceptual graph workbenches need for natural language processing

    An important capability of the conceptual graph knowledge engineering tools now under development will be the transformation of natural language texts into graphs (conceptual parsing) and its reverse, the production of text from graphs (conceptual generation). Are the existing basic designs adequate for these tasks? Experience developing the BEELINE system's natural language capabilities suggests that good entry/editing tools, a generous but not unlimited storage capacity, and efficient, bidirectional lexical access techniques are needed to support the supply of data structures at both the linguistic and conceptual knowledge levels. An active formalism capable of supporting declarative and procedural programs containing both linguistic and knowledge-level terms is also important. If these requirements are satisfied, future text readers can be included as part of a conceptual knowledge workbench without unexpected problems.
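    A minimal Python sketch of the data structures such a workbench needs (the tiny lexicon and graph layout are invented for illustration, not BEELINE's design): a conceptual graph stored as relation triples, plus a lexicon indexed in both directions so that parsing (word to concept) and generation (concept to word) share one resource.

        from dataclasses import dataclass, field

        @dataclass
        class Concept:
            type: str
            referent: str = "*"

        @dataclass
        class ConceptualGraph:
            relations: list = field(default_factory=list)  # (relation, source, target)
            def add(self, rel, src, dst):
                self.relations.append((rel, src, dst))

        # Bidirectional lexical access from a single table:
        word_to_concept = {"cat": "Cat", "sits": "Sit", "mat": "Mat"}
        concept_to_word = {c: w for w, c in word_to_concept.items()}

        # Conceptual parsing, crudely: encode "cat sits on mat" as relation triples.
        g = ConceptualGraph()
        sit, cat, mat = Concept("Sit"), Concept("Cat"), Concept("Mat")
        g.add("Agnt", sit, cat)
        g.add("Loc", sit, mat)

        # Conceptual generation, crudely: read the graph back out through the lexicon.
        for rel, s, d in g.relations:
            print(f"[{concept_to_word[s.type]}] -({rel})-> [{concept_to_word[d.type]}]")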