78 research outputs found

    Cost-effective HPC clustering for computer vision applications

    We present a cost-effective and flexible realization of high-performance computing (HPC) clustering and its potential for solving computationally intensive problems in computer vision. The software foundation supporting the parallel programming is the GNU Parallel Knoppix package, with message passing interface (MPI) based Octave, Python and C interface capabilities. The implementation is of particular interest in applications where the main objectives are to reuse the existing hardware infrastructure and to keep the overall budget low. We present benchmark results and compare and contrast the performance of Octave and MATLAB.
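A common building block in MPI-style cluster vision pipelines of this kind is domain decomposition: each rank receives a contiguous block of image rows, padded with overlap ("halo") rows so that stencil filters can run without mid-filter communication. The sketch below is illustrative only; the function name and halo scheme are our assumptions, not details from the paper.

```python
# Hypothetical sketch of block-row decomposition for MPI-style data
# parallelism over an image. Names (split_rows, halo) are illustrative.

def split_rows(n_rows, n_workers, halo=1):
    """Partition row indices into contiguous blocks, each padded with
    `halo` overlap rows so a stencil filter needs no neighbor exchange.
    Returns (padded_lo, padded_hi, owned_lo, owned_hi) per worker."""
    base, rem = divmod(n_rows, n_workers)
    blocks, start = [], 0
    for r in range(n_workers):
        size = base + (1 if r < rem else 0)   # spread the remainder
        lo = max(0, start - halo)             # padded read extent
        hi = min(n_rows, start + size + halo)
        blocks.append((lo, hi, start, start + size))
        start += size
    return blocks

# e.g. a 10-row image over 3 workers: owned extents (0,4), (4,7), (7,10)
blocks = split_rows(10, 3)
```

The owned extents tile the image exactly once, while the padded extents overlap by one row on each interior boundary; each rank writes only its owned rows back during the gather step.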

    Requirements, design and business process reengineering as vital parts of any system development methodology

    This thesis analyzes different aspects of the system development life cycle, concentrating on the requirements and design stages. It describes various methodologies, methods and tools that have been developed over the years, evaluates them, and compares them against each other. Finally, it concludes that a very important stage is missing from the system development life cycle: the Business Process Reengineering Stage.

    A NASA-wide approach toward cost-effective, high-quality software through reuse

    NASA Langley Research Center sponsored the second Workshop on NASA Research in Software Reuse on May 5-6, 1992 at the Research Triangle Park, North Carolina. The workshop was hosted by the Research Triangle Institute. Participants came from three NASA centers, four NASA contractor companies, two research institutes and the Air Force's Rome Laboratory. The purpose of the workshop was to exchange information on software reuse tool development, particularly with respect to tool needs, requirements, and effectiveness. The participants presented the software reuse activities and tools being developed and used by their individual centers and programs. These programs address a wide range of reuse issues. The group also developed a mission and goals for software reuse within NASA. This publication summarizes the presentations and the issues discussed during the workshop.

    An Adaptive Integration Architecture for Software Reuse

    The problem of building large, reliable software systems in a controlled, cost-effective way, the so-called software crisis problem, is one of computer science's great challenges. From the very outset of computing as a science, software reuse has been touted as a means to overcome the software crisis. Over three decades later, the software community is still grappling with the problem of building large, reliable software systems in a controlled, cost-effective way; the software crisis problem is alive and well. Today, many computer scientists still regard software reuse as a very powerful vehicle for improving the practice of software engineering. The advantage of amortizing software development cost through reuse continues to be a major objective in the art of building software, even though the tools, methods, languages, and overall understanding of software engineering have changed significantly over the years. Our work is primarily focused on the development of an Adaptive Application Integration Architecture Framework. Without good integration tools and techniques, reuse is difficult and will probably not happen to any significant degree. In the development of the adaptive integration architecture framework, the primary enabling concept is object-oriented design supported by the Unified Modeling Language. The concepts of software architecture, design patterns, and abstract data views are used in a structured and disciplined manner to establish a generic framework. This framework is applied to solve the Enterprise Application Integration (EAI) problem in the telecommunications operations support system (OSS) enterprise marketplace. The proposed adaptive application integration architecture framework facilitates application reusability and flexible business process re-engineering. The architecture addresses the need for modern businesses to continuously redefine themselves to address changing market conditions in an increasingly competitive environment.
We have developed a number of Enterprise Application Integration design patterns to enable the implementation of an EAI framework in a definite and repeatable manner. The design patterns allow for integration of commercial off-the-shelf applications into a unified enterprise framework, facilitating true application portfolio interoperability. The notion of treating application services as infrastructure services and using business processes to combine them arbitrarily provides a natural way of thinking about adaptable and reusable software systems. We present a mathematical formalism for the specification of design patterns. This specification constitutes an extension of the basic concepts from many-sorted algebra. In particular, the notion of signature is extended to that of a vector, consisting of a set of linearly independent signatures. The approach can be used to reason about various properties, including efforts for component reuse, and to facilitate complex large-scale software development by providing the developer with design alternatives and support for automatic program verification.
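The idea of treating application services as uniform infrastructure services that a business process combines arbitrarily can be sketched with a simple service registry. This is our own minimal illustration of that style of integration, not code or a pattern taken from the dissertation; all names here are hypothetical.

```python
# Illustrative sketch (not from the dissertation): heterogeneous application
# services registered behind one uniform interface, so a business process is
# just an ordered list of service names composed over a shared payload.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, fn):
        """Wrap any application capability as a named infrastructure service."""
        self._services[name] = fn

    def call(self, name, payload):
        return self._services[name](payload)

def run_process(registry, steps, payload):
    """Execute a business process: each step transforms the payload."""
    for step in steps:
        payload = registry.call(step, payload)
    return payload

reg = ServiceRegistry()
# Two stand-in "applications" integrated behind the uniform interface:
reg.register("normalize", lambda order: {**order, "sku": order["sku"].upper()})
reg.register("price", lambda order: {**order, "total": 10 * order["qty"]})

result = run_process(reg, ["normalize", "price"], {"sku": "ab1", "qty": 2})
```

Because the process is data (a list of step names), re-engineering the business process means editing that list rather than rewiring application code, which is the adaptability the abstract describes.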

    Program Optimization Based on a Non-Procedural Specification

    This dissertation deals with two related problems: development of a methodology for achieving memory and computation efficiency of computer programs, and the use of this methodology in very high-level programming and associated automatic program generators. Computer efficiency of programs has many aspects. Usually additional memory saves computation by avoiding the need to recompute certain variables. Our emphasis has been on reducing memory use by having variables share memory space, without requiring recomputation. It will be shown that this also reduces computation overhead. The most significant savings are due to sharing memory in iterative steps. This is the focus of the reported research. The evaluation of memory use across the many possible alternatives for realizing a computation is highly complex and requires lengthy and expensive computations. We have developed a heuristic approach, which has been very effective in our experience, and which is practical and economical in use of the computer. Basically it consists of evaluating global memory usage alternatives on each level of nested iteration loops, starting with the outside level and moving inward. Thus we neglect the rare impact of a nested iteration loop on the memory usage calculated for an outside iteration. This has led to the principle of maximizing the size of loop scopes in a program as a means of attaining a more efficient program for present-day sequential computers. The automatic design of efficient programs is also essential in the use of very high-level languages. The use of very high-level languages offers many benefits, such as less program coding, less required proficiency in programming and analysis, and ease in understanding, maintenance, and updating of programs. All these benefits are conditioned on whether the language processor can produce a satisfactorily efficient program.
The dissertation reports the design and implementation of a new version of the MODEL language and processor which incorporates algorithms for producing more efficient programs. The dissertation briefly describes the MODEL non-procedural language and the analysis, scheduling, and code generation tasks.
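The core optimization the abstract describes, letting successive iterates share memory cells instead of keeping a full history alive, can be illustrated with a textbook recurrence. This example is ours, not from the MODEL processor; it only shows the kind of transformation such an optimizer performs.

```python
# Illustrative example (not MODEL code): a recurrence where each step needs
# only the previous two values, so two cells can share the memory that a
# naive translation would allocate for the whole iteration history.

def fib_full_history(n):
    # Naive form: keeps every intermediate value alive -> O(n) memory.
    a = [0, 1]
    for i in range(2, n + 1):
        a.append(a[i - 1] + a[i - 2])
    return a[n]

def fib_shared(n):
    # Memory-sharing form: successive iterates reuse two cells -> O(1)
    # memory, with no recomputation of earlier values.
    if n == 0:
        return 0
    prev, cur = 0, 1
    for _ in range(n - 1):
        prev, cur = cur, prev + cur
    return cur
```

Both versions compute the same values; the second is what an optimizer obtains by noticing that only the most recent iterates of the loop variable are ever referenced.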

    New inhibitor targeting human transcription factor HSF1: effects on the heat shock response and tumour cell survival.

    © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact [email protected]. Nuria Vilabo, Alba Bore, Francisco Martin-Saavedra, Melanie Bayford, Natalie Winfield, Stuart Firth-Clark, Stewart B. Kirton, and Richard Voellmy, 'New inhibitor targeting human transcription factor HSF1: effects on the heat shock response and tumor cell survival', Nucleic Acids Research, 2017, 1, doi: 10.1093/nar/gkx194.
    Comparative modeling of the DNA-binding domain of human HSF1 facilitated the prediction of possible binding pockets for small molecules and the definition of corresponding pharmacophores. In silico screening of a large library of lead-like compounds identified a set of compounds that satisfied the pharmacophoric criteria, a selection of which was purchased to populate a biased sublibrary. A discriminating cell-based screening assay identified compound 001, which was subjected to systematic analysis of structure–activity relationships, resulting in the development of compound 115 (IHSF115). IHSF115 bound to an isolated HSF1 DNA-binding domain fragment. The compound did not affect heat-induced oligomerization, nuclear localization or specific DNA binding, but inhibited the transcriptional activity of human HSF1, interfering with the assembly of ATF1-containing transcription complexes. IHSF115 was employed to probe the human heat shock response at the transcriptome level. In contrast to earlier studies of differential regulation in HSF1-naïve and -depleted cells, our results suggest that a large majority of heat-induced genes is positively regulated by HSF1. That IHSF115 effectively countermanded repression in a significant fraction of heat-repressed genes suggests that repression of these genes is mediated by transcriptionally active HSF1. IHSF115 is cytotoxic for a variety of human cancer cell lines, multiple myeloma lines consistently exhibiting high sensitivity.