
    Operations research methods in compiler backends

    Operations research can be defined as the theory of numerically solving decision problems, and optimization problems are a central concern of the field. Code generation is performed by the backend phase of a compiler, the program that transforms the source code of an application into optimized machine code. At its core, code generation is an optimization problem and can be modelled much like typical problems in operations research. This article demonstrates that similarity by contrasting integer linear programming models for classical operations research problems with models for code generation. Since the time available for solving the generated integer linear programs (ILPs) as part of the compilation process is small, well-structured ILP formulations and ILP-based approximations are necessary. The second part of the paper gives a brief survey of guidelines and techniques for both issues.
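
    To make the analogy concrete, here is a minimal sketch of local instruction scheduling cast as an ILP, in the spirit of the models the article contrasts. It assumes the PuLP modelling library; the instruction list, dependence edges, unit latencies and single-issue machine are hypothetical illustration data, not taken from the article.

```python
# Sketch: local instruction scheduling as an ILP (hypothetical data).
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, LpInteger, lpSum

insts = ["load_a", "load_b", "add", "store"]
deps = [("load_a", "add"), ("load_b", "add"), ("add", "store")]  # i before j
T = range(len(insts))  # candidate issue cycles 0..3

prob = LpProblem("instruction_scheduling", LpMinimize)

# x[i][t] == 1 iff instruction i is issued in cycle t
x = {i: {t: LpVariable(f"x_{i}_{t}", cat=LpBinary) for t in T} for i in insts}
makespan = LpVariable("makespan", lowBound=0, cat=LpInteger)
prob += makespan  # objective: shortest schedule

for i in insts:
    prob += lpSum(x[i][t] for t in T) == 1            # issue each instruction once
    prob += lpSum(t * x[i][t] for t in T) <= makespan # makespan bounds every issue cycle

for t in T:                                           # single-issue machine: one inst/cycle
    prob += lpSum(x[i][t] for i in insts) <= 1

for i, j in deps:                                     # j strictly after i (unit latency)
    prob += lpSum(t * x[j][t] for t in T) >= lpSum(t * x[i][t] for t in T) + 1

prob.solve()
for i in insts:
    cycle = next(t for t in T if x[i][t].value() == 1)
    print(f"{i}: cycle {cycle}")
```

    The same issue-cycle encoding extends to register and multi-issue resource constraints, which is one reason the structure of the formulation matters so much for solver time within a compiler's budget.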

    Towards Comparative Web Content Mining using Object Oriented Model

    Web content data are heterogeneous in nature, usually composed of different types of content and data structures; their extraction and mining is therefore a challenging branch of data mining. Traditional web content extraction and mining techniques fall into three categories: programming-language-based wrappers, wrapper (data extraction program) induction techniques, and automatic wrapper generation techniques. The first category builds data extraction systems with specialized pattern specification languages, the second uses supervised learning to learn data extraction rules, and the third performs fully automatic extraction. All of these techniques rely on the presentation structure of web documents: they need complicated matching and tree alignment algorithms, require routine maintenance, are hard to unify across the vast variety of websites, and fail to capture heterogeneous data together. To handle more diverse web documents, a feasible implementation of an automatic data extraction technique based on an object-oriented data model, OOWeb, was proposed in Annoni and Ezeife (2009). This thesis implements, materializes and extends that structured automatic data extraction technique. We developed a system (called WebOMiner) for extraction and mining of structured web content based on the object-oriented data model. The thesis extends the extraction algorithms proposed by Annoni and Ezeife (2009) and develops an automaton-based automatic wrapper generation algorithm for extracting and mining structured web content data. Our algorithm identifies data blocks in a flat array data structure and generates a Non-Deterministic Finite Automaton (NFA) pattern for each type of content data to drive extraction. The objective of this thesis is to extract and mine heterogeneous web content while relieving the hard effort of matching, tree alignment and routine maintenance. Experimental results show that our system is highly effective, performing the mining task with 100% precision and 96.22% recall.
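
    As an illustration of the NFA idea (a sketch of the concept, not the thesis's actual algorithm), the snippet below treats a content block as a sequence of typed tokens and uses a small nondeterministic automaton over token types to decide whether the block matches a product-record pattern. The token types, states and pattern are hypothetical.

```python
# Sketch: NFA over content-token types matching a "product record" block.
# Transitions map (state, token type) to a set of successor states.
delta = {
    ("q0", "title"): {"q1"},
    ("q1", "image"): {"q1"},   # zero or more images after the title
    ("q1", "price"): {"q2"},
    ("q2", "desc"):  {"q2"},   # trailing description lines
}
start, accepting = "q0", {"q2"}

def matches(tokens):
    current = {start}
    for tok in tokens:
        nxt = set()
        for q in current:                      # follow all nondeterministic choices
            nxt |= delta.get((q, tok), set())
        if not nxt:                            # no state can consume this token
            return False
        current = nxt
    return bool(current & accepting)

print(matches(["title", "image", "price", "desc"]))  # True: a product block
print(matches(["price", "title"]))                   # False: wrong order
```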

    Branch-coverage testability transformation for unstructured programs

    Test data generation by hand is a tedious, expensive and error-prone activity, yet testing is a vital part of the development process. Several techniques have been proposed to automate the generation of test data, but all of them are hindered by the presence of unstructured control flow. This paper addresses the problem using testability transformation. A testability transformation does not preserve the traditional meaning of the program; rather, it preserves test-adequate sets of input data. This requires new equivalence relations which, in turn, entail novel proof obligations. The paper illustrates this using the branch coverage adequacy criterion, develops a branch-adequacy equivalence relation and a testability transformation for restructuring, and then presents a proof that the transformation preserves branch adequacy.
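
    For readers unfamiliar with the adequacy criterion the proof is built around, the following sketch shows branch-coverage instrumentation in miniature: a test set is branch-adequate when it exercises every branch outcome at least once. The function, probe names and test set are illustrative, not from the paper.

```python
# Sketch: branch-coverage adequacy. Each branch outcome gets a probe.
covered = set()

def classify(x):
    if x < 0:                      # branch b1
        covered.add("b1_true")
        return "negative"
    covered.add("b1_false")
    if x == 0:                     # branch b2
        covered.add("b2_true")
        return "zero"
    covered.add("b2_false")
    return "positive"

ALL_BRANCHES = {"b1_true", "b1_false", "b2_true", "b2_false"}

tests = [-1, 0, 5]                 # candidate test set
for t in tests:
    classify(t)

print("branch-adequate:", covered == ALL_BRANCHES)  # True
```

    A testability transformation in the paper's sense must keep exactly such sets adequate: a test set that achieves branch coverage on the transformed, restructured program must still achieve it on the original.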

    Instructional strategies and tactics for the design of introductory computer programming courses in high school

    This article offers an examination of instructional strategies and tactics for the design of introductory computer programming courses in high school. We distinguish the Expert, Spiral and Reading approaches as groups of instructional strategies that differ mainly in their general design plan for controlling students' processing load; respectively, they emphasize top-down program design, incremental learning, and program modification and amplification. Tactics, in contrast, are specific design plans that prescribe methods for reaching desired learning outcomes under given circumstances. Based on ACT* (Anderson, 1983) and relevant research, we distinguish between declarative and procedural instruction and present six tactics which can be used both to design courses and to evaluate strategies. Three tactics for declarative instruction involve concrete computer models, programming plans and design diagrams; three tactics for procedural instruction involve worked-out examples, practice of basic cognitive skills and task variation. In our evaluation of the groups of instructional strategies, the Reading approach was found to be superior to the Expert and Spiral approaches.
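
    To give a flavour of the worked-out-example tactic for procedural instruction, here is the kind of fully solved, step-annotated program a course might present for students to study before writing their own; the example itself is ours, not drawn from the article.

```python
# Sketch: a worked-out example, every step annotated for novices.
def average(numbers):
    # Step 1: handle the empty list so we never divide by zero.
    if not numbers:
        return 0.0
    # Step 2: accumulate the sum of all elements.
    total = 0
    for n in numbers:
        total += n
    # Step 3: divide by the count to obtain the mean.
    return total / len(numbers)

print(average([2, 4, 6]))  # 4.0
```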