
    Program slicing by calculation

    Program slicing is a well-known family of techniques used to identify code fragments that depend on, or are depended upon by, specific program entities. They are particularly useful in the areas of reverse engineering, program understanding, testing and software maintenance. Most slicing methods, usually oriented towards the imperative or object paradigms, are based on some sort of graph structure representing program dependencies. Slicing techniques amount, therefore, to (sophisticated) graph traversal algorithms. This paper proposes a completely different approach to the slicing problem for functional programs. Instead of extracting program information to build an underlying dependency structure, we resort to standard program calculation strategies based on the so-called Bird-Meertens formalism. The slicing criterion is specified as either a projection or a hiding function which, once composed with the original program, leads to the identification of the intended slice. Going through a number of examples, the paper suggests this approach may be an interesting, even if not completely general, alternative for slicing functional programs. Funding: Fundação para a Ciência e a Tecnologia (FCT).
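
    The calculational idea can be sketched outside the Bird-Meertens formalism. Below is a minimal Python illustration (the example program and names are invented): the slicing criterion is the projection fst, and composing it with the program identifies the intended slice; the simplification step that discards the dead product computation is done by hand here, where the paper would derive it by equational reasoning.

        # A program returning a pair of results; we slice with respect to
        # the first component by composing with the projection fst.
        def program(xs):
            total, prod = 0, 1
            for x in xs:
                total += x      # needed by the slice
                prod *= x       # dead code under the criterion fst
            return total, prod

        def fst(pair):
            return pair[0]

        # The slice is the composition fst . program; calculating with it
        # shows the product computation can be removed, leaving `sum`.
        def sliced(xs):
            return fst(program(xs))

        assert sliced([1, 2, 3]) == sum([1, 2, 3])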

    Modeling Taxi Drivers' Behaviour for the Next Destination Prediction

    In this paper, we study how to model taxi drivers' behaviour and geographical information for an interesting and challenging task: next-destination prediction in a taxi journey. Predicting the next location is a well-studied problem in human mobility, with several applications in real-world scenarios, from optimizing the efficiency of electronic dispatching systems to predicting and reducing traffic jams. This task is normally modeled as a multiclass classification problem, where the goal is to select, among a set of already known locations, the next taxi destination. We present a Recurrent Neural Network (RNN) approach that models the taxi drivers' behaviour and encodes the semantics of visited locations by using geographical information from Location-Based Social Networks (LBSNs). In particular, the RNNs are trained to predict the exact coordinates of the next destination, overcoming the problem of producing as output only a limited set of locations seen during the training phase. The proposed approach was tested on the ECML/PKDD Discovery Challenge 2015 dataset (based on the city of Porto), obtaining better results than the competition winner whilst using less information, and on Manhattan and San Francisco datasets. Comment: preprint version of a paper submitted to IEEE Transactions on Intelligent Transportation Systems.
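
    A minimal sketch of the regression formulation, assuming PyTorch; the architecture, feature set, and sizes below are invented and far simpler than the paper's model. The point is the output layer: two real coordinates rather than a softmax over the locations seen in training.

        import torch
        import torch.nn as nn

        class NextDestinationRNN(nn.Module):
            def __init__(self, n_locations=1000, emb_dim=32, hidden=64):
                super().__init__()
                self.embed = nn.Embedding(n_locations, emb_dim)  # location semantics
                self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 2)   # (latitude, longitude)

            def forward(self, visited):     # visited: (batch, seq_len) location ids
                h, _ = self.rnn(self.embed(visited))
                return self.head(h[:, -1])  # coordinates of the next destination

        model = NextDestinationRNN()
        trips = torch.randint(0, 1000, (8, 10))    # 8 journeys, 10 visited locations
        target = torch.rand(8, 2)                  # true next coordinates
        loss = nn.MSELoss()(model(trips), target)  # regression, not classification
        loss.backward()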

    The exploitation of pipeline parallelism by compile time dataflow analysis

    The subject of this paper is the automatic and implicit transformation of sequential instruction streams into streams that execute efficiently on pipelined architectures. It proposes a method that maximizes the parallel performance of an instruction pipeline by detecting and eliminating specific pipeline hazards known as resource conflicts. The detection of resource conflicts is accomplished with data dependence analysis, while their elimination is accomplished by transforming the instruction stream. The transformation is guided by data dependence analysis and dependence graphs. The work is based on the premise that eliminating resource conflicts is synonymous with eliminating specific arcs in the dependence graph. Examples are given showing how detection and elimination of resource conflicts is possible through compiler optimization.
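
    The detection step can be illustrated with a toy dependence analysis. In the Python sketch below (the instruction format, register names, and the two-slot latency are invented), read-after-write arcs are computed and those short enough to stall the pipeline are flagged; the elimination phase would then move independent instructions into the gap.

        # Each instruction is (name, destination register, source registers).
        def raw_arcs(stream):
            arcs = []
            for i, (_, dest, _) in enumerate(stream):
                for j in range(i + 1, len(stream)):
                    if dest in stream[j][2]:   # later read of dest: RAW arc
                        arcs.append((i, j))
                    if stream[j][1] == dest:   # dest redefined; stop scanning
                        break
            return arcs

        def conflicts(stream, latency=2):
            # arcs spanning fewer than `latency` slots force a pipeline stall
            return [(i, j) for i, j in raw_arcs(stream) if j - i < latency]

        stream = [("load", "r1", ("mem",)),
                  ("add",  "r2", ("r1", "r3")),
                  ("mul",  "r4", ("r5", "r6"))]
        print(conflicts(stream))  # [(0, 1)]: the add needs r1 one slot too soon;
                                  # moving the independent mul between them helps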

    Replicode: A Constructivist Programming Paradigm and Language

    Replicode is a language designed to encode short parallel programs and executable models, and is centered on the notions of extensive pattern matching and dynamic code production. The language is domain-independent and has been designed to build systems that are model-based and model-driven, as production systems that can modify their own code. Moreover, Replicode supports the distribution of knowledge and computation across clusters of computing nodes. This document describes Replicode and its executive, i.e. the system that executes Replicode constructions. The Replicode executive is meant to run on 64-bit Linux and 32/64-bit Windows 7 platforms and to interoperate with custom C++ code. The motivations for the Replicode language, the constructivist paradigm it rests on, and the higher-level AI goals targeted by its construction are described by Thórisson (2012), Nivel and Thórisson (2009), and Thórisson and Nivel (2009a, 2009b). An overview presents the main concepts of the language. Section 3 describes the general structure of Replicode objects and describes pattern matching. Section 4 describes the execution model of Replicode, and section 5 describes how computation and knowledge are structured and controlled. Section 6 describes the high-level reasoning facilities offered by the system. Finally, section 7 describes how computation is distributed over a cluster of computing nodes. Consult Annex 1 for a formal definition of Replicode, Annex 2 for a specification of the executive, Annex 3 for the specification of the executable code format (r-code) and its C++ API, and Annex 4 for the definition of the Replicode Extension C++ API.

    Estabelecendo um processo de customização livre de retrocessos para famílias de produtos (Establishing a backtrack-free customisation process for product families)

    Advisor: Yuzo Iano. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: Product family is a key concept in the area of mass customisation. Although the design of a product family is a difficult and challenging task, deriving members of the product family to meet the requirements of individual customers can be a routine design task. In this work, we propose a formal approach to model the customisation of product families that achieves this goal. In effect, we set up a theory for the customisation of product families. This approach is based on a knowledge framework for the representation of product families, which combines a generic product structure with a constraint network extended with design functions. The method for deriving members of the product family is a two-stage instantiation process. First, a solution to the constraint network model consistent with the customer requirements is found. Next, this solution is used to transform the generic product structure into a specific structure that corresponds to a member of the product family. In this work, we prove that if the constraint network model extended with design functions satisfies a few modelling conditions, then finding solutions becomes a backtrack-free process. Although there are other works in the literature that also claim to be backtrack-free, a remarkable fact about our approach is that we achieve this by introducing knowledge about the product family, instead of resorting to computational power and pre-processing as those approaches do. Another remarkable aspect of our approach is that components can be designed as part of the customisation process using the design functions. This implies that it is possible to have an efficient customisation process without compromising the flexibility of the product family. In the conclusion of this work, we argue that our approach can deal with customisation problems outside the product configuration area. Two appendices are also added to the thesis. One is a complete modelling of the Automatic Transfer Switch (ATS) product family using our approach; this example is used in the main body of the thesis to illustrate the concepts being introduced. The other is a computational implementation of the first-stage customisation process of the ATS product family. Doctorate in Telecommunications and Telematics; degree: Doctor of Electrical Engineering.
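
    A toy Python sketch of the first, constraint-solving stage, assuming the variable order and constraints already satisfy the modelling conditions proved in the thesis; the product family and the single constraint are invented. Each variable is instantiated once, in order, and never revisited.

        def customise(order, domains, constraints, requirements):
            solution = dict(requirements)          # customer choices come first
            for v in order:
                if v in solution:
                    continue
                for value in domains[v]:
                    trial = {**solution, v: value}
                    if all(pred(trial) for scope, pred in constraints
                           if scope <= trial.keys()):
                        solution[v] = value        # first consistent value is kept
                        break
                else:
                    raise ValueError("modelling conditions violated: would backtrack")
            return solution

        # Invented example: the enclosure must be wide enough for the switch.
        domains = {"switch": ["small", "large"], "enclosure": ["compact", "wide"]}
        constraints = [({"switch", "enclosure"},
                        lambda s: (s["switch"], s["enclosure"]) != ("large", "compact"))]
        print(customise(["switch", "enclosure"], domains, constraints,
                        {"switch": "large"}))  # {'switch': 'large', 'enclosure': 'wide'}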

    Deadlock-Free Typestate-Oriented Programming

    Context. TypeState-Oriented Programming (TSOP) is a paradigm intended to help developers in the implementation and use of mutable objects whose public interface depends on their private state. Under this paradigm, well-typed programs are guaranteed to conform with the protocol of the objects they use. Inquiry. Previous works have investigated TSOP for both sequential and concurrent objects. However, an important difference between the two settings remains. In a sequential setting, a well-typed program either progresses indefinitely or eventually terminates. In a concurrent setting, protocol conformance is no longer enough to avoid deadlocks, a situation in which the execution of the program halts because two or more objects are involved in mutual dependencies that prevent any further progress. Approach. In this work, we put forward a refinement of TSOP for concurrent objects guaranteeing that well-typed programs not only conform with the protocol of the objects they use, but are also deadlock free. The key ingredients of the type system are behavioral types, used to specify and enforce object protocols, and dependency relations, used to represent abstract descriptions of the dependencies between objects and detect circularities that might cause deadlocks. Knowledge. The proposed approach stands out for two features. First, the approach is fully compositional and therefore scalable: the objects of a large program can be type checked in isolation; deadlock freedom of an object composition depends solely on the types of the objects being composed; and any modification or refactoring of an object that does not affect its public interface does not affect other objects either. Second, we provide the first deadlock analysis technique for join patterns, a high-level concurrency abstraction with which programmers can express complex synchronizations in a succinct and declarative form. Grounding. We detail the proposed typing discipline for a core programming language blending concurrent objects, asynchronous message passing and join patterns. We prove that the type system is sound and give non-trivial examples of programs that can be successfully analyzed. A Haskell implementation of the type system that demonstrates the feasibility of the approach is publicly available. Importance. The static analysis technique described in this work can be used to certify programs written in a core language for concurrent TSOP with proven correctness guarantees. This is an essential first step towards the integration and application of the technique in a real-world developer toolchain, making programming of such systems more productive and less frustrating.
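
    The dependency-relation ingredient can be illustrated with a toy check; this is not the paper's type system, which works compositionally on behavioral types and join patterns, only the circularity-detection idea behind it. Each object contributes the objects its protocol waits on, and a cycle in the combined relation rejects the composition before anything runs.

        def has_cycle(deps):
            seen, on_path = set(), set()
            def visit(node):
                if node in on_path:      # back edge: circular dependency
                    return True
                if node in seen:
                    return False
                seen.add(node)
                on_path.add(node)
                if any(visit(m) for m in deps.get(node, ())):
                    return True
                on_path.discard(node)
                return False
            return any(visit(n) for n in deps)

        # lock_a's protocol waits on lock_b and vice versa: a deadlock shape,
        # so this composition would be rejected at type-checking time.
        assert has_cycle({"lock_a": ["lock_b"], "lock_b": ["lock_a"]})
        assert not has_cycle({"a": ["b"], "b": []})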

    A cost estimate maturity benchmark method to support early concept design decision-making: a case study application to the small modular nuclear reactor

    Constructing large Nuclear Power Plants (NPPs) is synonymous with significant cost and schedule uncertainty. Innovative Small Modular Reactors (SMRs) have been identified as a way of increasing certainty of delivery, whilst also maintaining a competitive Life Cycle Cost (LCC). Previous research into the cost of SMRs has focused on the economics of a design from the perspective of an owner or investor. There is a significant gap in the literature associated with cost estimating SMRs at the early concept development stage from the perspective of a reactor developer. Early design stage cost estimates are inherently uncertain. Design teams, therefore, need to make decisions that will achieve a cost competitive product by considering uncertainty. Existing cost uncertainty analysis methods lack standardisation in their application, often relying on the subjective assessment of experts. The central argument presented in this research is that the SMR vendor can make more effective decisions related to achieving cost certainty by understanding the drivers of knowledge uncertainty associated with early design stage cost estimates. This thesis describes research spanning the concept design phase of the UK SMR development programme. The research investigation is divided into two distinct phases. The first phase identifies the requirements for cost information from the perspective of the SMR vendor through interviews, a participatory case study investigation and surveys. Limited access to cost information means that early design cost assessment is highly subjective. Cost uncertainty analysis should provide decision makers with an understanding of the level of confidence associated with the estimate. A survey investigating how cost information is interpreted revealed that providing more granular detail about cost uncertainty would support the design team with additional rationale for selecting a design option. The main requirement identified from phase 1 of the research is the need for a standardised method to identify how sources of cost uncertainty influence the maturity of the estimate at each stage of the design development process. The second phase of the research involved a participatory research approach where the Acceptable Cost Uncertainty Benchmark Assessment (ACUBA) method was developed and then implemented retrospectively on the case study cost data. The ACUBA method uses a qualitative measure to assess the quality and impact of engineering definition, manufacturing process knowledge and supply chain knowledge on the cost estimate confidence. The maturity rating is then assessed against a benchmark to determine the acceptability of the estimate uncertainty range. Focus groups were carried out in the vendor organisation to investigate whether the design team could clarify their reasoning for decisions related to reducing cost uncertainty when given insight into the sources of cost uncertainty. The rationale for a decision is found to be clearer using the ACUBA method compared with existing cost uncertainty analysis methods used by the case study organisation. This research has led to the development of a novel method which standardises and improves the communication of cost information across different functions within a design team. By establishing a benchmark acceptable level of cost maturity for a decision, the cost maturity metric can be employed to measure the performance of the SMR development programme towards achieving product cost maturity. 
In addition, the ACUBA method supports the more effective allocation of the limited resources available at the early design stage by identifying design activities that could lead to an acceptable cost maturity.
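
    An illustrative sketch of the benchmark comparison only; the rating scale, the equal weighting of the three knowledge sources, and the threshold are invented, since the thesis develops these qualitatively with the design team.

        # Rate the three knowledge sources ACUBA assesses, combine them into a
        # maturity rating, and compare against the stage benchmark.
        RATINGS = {"low": 1, "medium": 2, "high": 3}

        def maturity(engineering_definition, manufacturing_knowledge, supply_chain):
            scores = [RATINGS[r] for r in (engineering_definition,
                                           manufacturing_knowledge, supply_chain)]
            return sum(scores) / (3 * RATINGS["high"])   # rating in 0..1

        def acceptable(rating, stage_benchmark):
            # the estimate's uncertainty range supports a decision only if
            # the maturity rating meets the benchmark for that design stage
            return rating >= stage_benchmark

        rating = maturity("high", "medium", "low")       # 6/9 ~ 0.67
        print(acceptable(rating, stage_benchmark=0.6))   # True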

    Estimation under group actions: recovering orbits from invariants

    Motivated by geometric problems in signal processing, computer vision, and structural biology, we study a class of orbit recovery problems where we observe very noisy copies of an unknown signal, each acted upon by a random element of some group (such as Z/p or SO(3)). The goal is to recover the orbit of the signal under the group action in the high-noise regime. This generalizes problems of interest such as multi-reference alignment (MRA) and the reconstruction problem in cryo-electron microscopy (cryo-EM). We obtain matching lower and upper bounds on the sample complexity of these problems in high generality, showing that the statistical difficulty is intricately determined by the invariant theory of the underlying symmetry group. In particular, we determine that for cryo-EM with noise variance $\sigma^2$ and uniform viewing directions, the number of samples required scales as $\sigma^6$. We match this bound with a novel algorithm for ab initio reconstruction in cryo-EM, based on invariant features of degree at most 3. We further discuss how to recover multiple molecular structures from heterogeneous cryo-EM samples. Comment: 54 pages; this version contains a number of new results.
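
    For the Z/p case (multi-reference alignment), the invariant features of degree at most 3 are the Fourier mean, the power spectrum, and the bispectrum. The numpy sketch below verifies their invariance under a cyclic shift; in the high-noise regime these statistics are averaged over many noisy samples and debiased, which the sketch omits.

        import numpy as np

        def invariant_features(y):
            Y = np.fft.fft(y)
            p = len(y)
            mean = Y[0]                       # degree 1
            power = np.abs(Y) ** 2            # degree 2
            bispec = np.array([[Y[i] * Y[j] * np.conj(Y[(i + j) % p])
                                for j in range(p)] for i in range(p)])  # degree 3
            return mean, power, bispec

        rng = np.random.default_rng(0)
        x = rng.standard_normal(16)           # unknown signal
        shifted = np.roll(x, 5)               # acted on by a group element of Z/16
        for a, b in zip(invariant_features(x), invariant_features(shifted)):
            assert np.allclose(a, b)          # features agree on the whole orbit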

    Evolving Legacy System's Features into Fine-grained Components Using Regression Test-Cases

    Because many software systems used for business today are considered legacy systems, the need for software evolution techniques has never been greater. We propose a novel evolution methodology for legacy systems that integrates the concepts of features, regression testing, and Component-Based Software Engineering (CBSE). Regression test suites are untapped resources that contain important information about the features of a software system. By exercising each feature with its associated test cases using code profilers and similar tools, code can be located and refactored to create components. The unique combination of Feature Engineering and CBSE makes it possible for a legacy system to be modernized quickly and affordably. We develop a new framework to evolve legacy software that maps features to software components refactored from their feature implementations. In this dissertation, we make the following contributions. First, a new methodology to evolve legacy code is developed that improves the maintainability of evolved legacy systems. Second, the technique establishes a clear understanding of the relationship between features and functionality, and of the relationships among features, using our feature model. Third, the methodology provides guidelines to construct feature-based reusable components using our fine-grained component model. Fourth, we bridge the complexity gap by identifying feature-based test cases and developing feature-based reusable components. We show how to reuse existing tools to aid the evolution of legacy systems rather than re-writing special-purpose tools for program slicing and requirement management. We have validated our approach on the evolution of a real-world legacy system: by applying this methodology, American Financial Systems, Inc. (AFS) has successfully restructured its enterprise legacy system and reduced the costs of future maintenance.
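
    A minimal sketch of the profiling step, using Python's built-in tracing in place of the code profilers the dissertation mentions; the two toy "features" are invented. Each feature's regression test runs under a tracer, and the functions it exercises become candidate code for a feature-based component.

        import sys
        from collections import defaultdict

        def functions_exercised(test):
            hits = set()
            def tracer(frame, event, arg):
                if event == "call":                # record every function entry
                    hits.add(frame.f_code.co_name)
                return tracer
            sys.settrace(tracer)
            try:
                test()
            finally:
                sys.settrace(None)
            return hits

        def interest(p): return p * 0.05           # "interest" feature code
        def report(p): return f"balance: {p}"      # "report" feature code

        feature_map = defaultdict(set)
        feature_map["interest"] |= functions_exercised(lambda: interest(100.0))
        feature_map["report"] |= functions_exercised(lambda: report(100.0))
        print(feature_map["interest"])  # contains 'interest' (and the test lambda):
                                        # the code to refactor into a component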