An Empirical Study of a Repeatable Method for Reengineering Procedural Software Systems to Object-Oriented Systems
This paper describes a repeatable method for reengineering a procedural
system to an object-oriented system. The method uses coupling metrics to assist a domain
expert in identifying candidate objects. An application of the method to a simple program
is given, and the effectiveness of the various coupling metrics is discussed. We perform
a detailed comparison of our repeatable method with an ad hoc, manual reengineering
effort based on the same procedural program. The repeatable method was found to be
effective for identifying objects. It produced code that was much smaller, more efficient,
and passed more regression tests than the ad hoc method. Analysis of object-oriented
metrics indicated both simpler code and less variability among classes for the repeatable
method.
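As a rough illustration of the kind of coupling-based grouping such a method relies on (the metric, threshold, and procedure data below are invented for this sketch, not taken from the paper), procedures that touch the same global data can be clustered into candidate objects for a domain expert to review:

    # Hypothetical sketch: group procedures into candidate objects by data coupling.
    from itertools import combinations

    # Each procedure is mapped to the set of global data items it reads or writes.
    procedures = {
        "read_record":  {"buffer", "file_pos"},
        "write_record": {"buffer", "file_pos"},
        "parse_field":  {"buffer", "field_table"},
        "report_stats": {"stats", "field_table"},
    }

    def data_coupling(a, b):
        """Jaccard overlap of the global data touched by two procedures."""
        return len(procedures[a] & procedures[b]) / len(procedures[a] | procedures[b])

    # Greedily merge procedures whose coupling meets a threshold; each resulting
    # group of procedures (plus the data they share) is a candidate object.
    THRESHOLD = 0.3
    candidates = [{name} for name in procedures]
    for a, b in combinations(procedures, 2):
        group_a = next(g for g in candidates if a in g)
        group_b = next(g for g in candidates if b in g)
        if group_a is not group_b and data_coupling(a, b) >= THRESHOLD:
            group_a |= group_b
            candidates.remove(group_b)

    print(candidates)  # groups of procedures proposed as candidate objects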
Next generation software environments : principles, problems, and research directions
The past decade has seen a burgeoning of research and development in software environments. Conferences have been devoted to the topic of practical environments, journal papers produced, and commercial systems sold. Given all the activity, one might expect a great deal of consensus on issues, approaches, and techniques. This is not the case, however. Indeed, the term "environment" is still used in a variety of conflicting ways. Nevertheless, substantial progress has been made and we are at least nearing consensus on many critical issues. The purpose of this paper is to characterize environments, describe several important principles that have emerged in the last decade or so, note current open problems, and describe some approaches to these problems, with particular emphasis on the activities of one large-scale research program, the Arcadia project. Consideration is also given to two related topics: empirical evaluation and technology transition. That is, how can environments and their constituents be evaluated, and how can new developments be moved effectively into the production sector?
Beyond Reuse Distance Analysis: Dynamic Analysis for Characterization of Data Locality Potential
Emerging computer architectures will feature drastically decreased flops/byte
(ratio of peak processing rate to memory bandwidth) as highlighted by recent
studies on Exascale architectural trends. Further, flops are getting cheaper
while the energy cost of data movement is increasingly dominant. The
understanding and characterization of data locality properties of computations
is critical in order to guide efforts to enhance data locality. Reuse distance
analysis of memory address traces is a valuable tool to perform data locality
characterization of programs. A single reuse distance analysis can be used to
estimate the number of cache misses in a fully associative LRU cache of any
size, thereby providing estimates on the minimum bandwidth requirements at
different levels of the memory hierarchy to avoid being bandwidth bound.
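As a concrete aside (not part of the paper's text): the sketch below computes reuse distances with a deliberately naive quadratic scan over an invented trace; production analyzers use tree-based O(N log N) algorithms. A fully associative LRU cache of capacity C misses exactly on cold references and on references whose reuse distance is at least C.

    # Naive reuse-distance sketch (illustrative only; real tools are far faster).
    def reuse_distances(trace):
        """For each reference, count the distinct addresses accessed since the
        previous access to the same address (None marks a cold miss)."""
        last_pos, distances = {}, []
        for i, addr in enumerate(trace):
            if addr in last_pos:
                distances.append(len(set(trace[last_pos[addr] + 1:i])))
            else:
                distances.append(None)  # first use: compulsory miss
            last_pos[addr] = i
        return distances

    def lru_misses(distances, capacity):
        # Miss iff cold, or reuse distance >= capacity (fully associative LRU).
        return sum(1 for d in distances if d is None or d >= capacity)

    distances = reuse_distances(["a", "b", "c", "a", "b", "d", "a"])  # invented trace
    for capacity in (1, 2, 3, 4):
        print(capacity, lru_misses(distances, capacity))  # one analysis, any cache size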
However, such an analysis only holds for the particular execution order that
produced the trace. It cannot estimate potential improvement in data locality
through dependence preserving transformations that change the execution
schedule of the operations in the computation. In this article, we develop a
novel dynamic analysis approach to characterize the inherent locality
properties of a computation and thereby assess the potential for data locality
enhancement via dependence preserving transformations. The execution trace of a
code is analyzed to extract a computational directed acyclic graph (CDAG) of
the data dependences. The CDAG is then partitioned into convex subsets, and the
convex partitioning is used to reorder the operations in the execution trace to
enhance data locality. The approach enables us to go beyond reuse distance
analysis of a single specific order of execution of the operations of a
computation in characterization of its data locality properties. It can serve a
valuable role in identifying promising code regions for manual transformation,
as well as assessing the effectiveness of compiler transformations for data
locality enhancement. We demonstrate the effectiveness of the approach using a
number of benchmarks, including case studies where the potential shown by the
analysis is exploited to achieve lower data movement costs and better
performance.

Comment: ACM Transactions on Architecture and Code Optimization (2014).
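For a flavor of the convex-partitioning step (a simplified stand-in, not the authors' algorithm): contiguous chunks of a topological order of the CDAG are convex, since every operation on a path between two operations in a chunk is ordered between them and therefore falls inside the chunk. The sketch below builds a tiny invented dependence graph and replays it block by block.

    # Simplified stand-in for convex partitioning of a CDAG (assumed, not the
    # paper's algorithm): cut a topological order into contiguous chunks.
    from graphlib import TopologicalSorter  # Python 3.9+

    deps = {                      # operation -> operations it depends on (invented)
        "t1": set(), "t2": set(),
        "t3": {"t1"}, "t4": {"t1", "t2"},
        "t5": {"t3", "t4"},
    }

    order = list(TopologicalSorter(deps).static_order())

    BLOCK = 2                     # block size trades locality against parallelism
    convex_blocks = [order[i:i + BLOCK] for i in range(0, len(order), BLOCK)]
    reordered_trace = [op for block in convex_blocks for op in block]
    print(convex_blocks, reordered_trace)  # dependence-preserving reordering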
Towards an ontology-based platform-independent framework for developing KBE systems in the aerospace industry
Aerospace engineering is considered to be one of the most complex and advanced branches of engineering. The use of knowledge based engineering (KBE) technologies has played a major role in automating routine design activities in view of supporting the cost-effective and timely development of a product. However, technologies employed within KBE systems are usually platform-specific. The nature of these platform-specific models has significantly limited knowledge abstraction and reusability in KBE systems. This research paper presents a novel approach that illustrates the use of platform-independent knowledge models for the development of KBE systems in the aerospace industry. The use of semantic technologies through the definition of generic-purposed ontologies has been employed to support the notion of independent knowledge models that strengthens knowledge reusability in KBE systems. This approach has been validated qualitatively through experts’ opinion and its benefit realised in the abstraction, reusability and maintainability of KBE systems
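As a loose illustration of the platform-independent idea (the namespace and ontology terms below are invented for this sketch, not taken from the paper), a generic design-rule concept can be expressed once as RDF and then consumed by any KBE platform that reads the serialization:

    # Hypothetical sketch using rdflib: a platform-independent knowledge model
    # expressed as RDF triples. The KBE namespace and terms are invented here.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    KBE = Namespace("http://example.org/kbe#")

    g = Graph()
    g.bind("kbe", KBE)

    # A generic ontology term for a design rule, independent of any KBE platform.
    g.add((KBE.DesignRule, RDF.type, RDFS.Class))

    # One reusable piece of design knowledge: an invented spar-thickness rule.
    rule = KBE.sparThicknessRule
    g.add((rule, RDF.type, KBE.DesignRule))
    g.add((rule, RDFS.comment, Literal("Spar web thickness >= 1.5 mm")))
    g.add((rule, KBE.appliesTo, KBE.WingSpar))

    # Any platform-specific KBE system could ingest this serialization.
    print(g.serialize(format="turtle"))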
Technology Solutions for Developmental Math: An Overview of Current and Emerging Practices
Reviews current practices in and strategies for incorporating innovative technology into the teaching of remedial math at the college level. Outlines challenges, emerging trends, and ways to combine technology with new concepts of instructional strategy
A Survey of Symbolic Execution Techniques
Many security and software testing applications require checking whether
certain properties of a program hold for any possible usage scenario. For
instance, a tool for identifying software vulnerabilities may need to rule out
the existence of any backdoor to bypass a program's authentication. One
approach would be to test the program using different, possibly random inputs.
As the backdoor may only be hit for very specific program workloads, automated
exploration of the space of possible inputs is of the essence. Symbolic
execution provides an elegant solution to the problem, by systematically
exploring many possible execution paths at the same time without necessarily
requiring concrete inputs. Rather than taking on fully specified input values,
the technique abstractly represents them as symbols, resorting to constraint
solvers to construct actual instances that would cause property violations.
Symbolic execution has been incubated in dozens of tools developed over the
last four decades, leading to major practical breakthroughs in a number of
prominent software reliability applications. The goal of this survey is to
provide an overview of the main ideas, challenges, and solutions developed in
the area, distilling them for a broad audience.
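To make the contrast with random testing concrete (a generic illustration, not a tool from the survey): treat the input as a symbol, record the branch condition guarding the code of interest as a path constraint, and ask a solver for a satisfying input. The sketch below uses the Z3 solver's Python bindings (the z3-solver package) with an invented backdoor condition.

    # Symbolic-execution flavor with Z3; the guarded condition is invented.
    from z3 import Int, Solver, sat

    x = Int("x")                    # symbolic stand-in for the concrete input

    # Path constraint for the branch we want to reach:
    #   if x * 37 - 5 == 1105: unlock_backdoor()
    solver = Solver()
    solver.add(x * 37 - 5 == 1105)

    if solver.check() == sat:
        print("input reaching the backdoor:", solver.model()[x])  # prints 30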
The present survey has been accepted for publication at ACM Computing Surveys; this is the authors' pre-print copy. If you are considering citing this survey, we would appreciate it if you could use the following BibTeX entry: http://goo.gl/Hf5Fvc
Semi-Automated SVG Programming via Direct Manipulation
Direct manipulation interfaces provide intuitive and interactive features to
a broad range of users, but they often exhibit two limitations: the built-in
features cannot possibly cover all use cases, and the internal representation
of the content is not readily exposed. We believe that if direct manipulation
interfaces were to (a) use general-purpose programs as the representation
format, and (b) expose those programs to the user, then experts could customize
these systems in powerful new ways and non-experts could enjoy some of the
benefits of programmable systems.
In recent work, we presented a prototype SVG editor called Sketch-n-Sketch
that offered a step towards this vision. In that system, the user wrote a
program in a general-purpose lambda-calculus to generate a graphic design and
could then directly manipulate the output to indirectly change design
parameters (i.e. constant literals) in the program in real-time during the
manipulation. Unfortunately, the burden of programming the desired
relationships rested entirely on the user.
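A toy rendition of that earlier workflow (invented Python, standing in for Sketch-n-Sketch's lambda-calculus programs): the program computes SVG output from a constant literal, and a drag on the rendered shape is mapped back onto that literal before the program is re-run.

    # Toy stand-in for the prior workflow: directly manipulating the output
    # indirectly updates a constant literal in the generating program.
    def program(cx=100):
        # cx plays the role of a constant literal in the user's program
        return f'<svg><circle cx="{cx}" cy="60" r="20"/></svg>'

    def drag_circle_to(new_cx):
        """The editor solves for the literal that reproduces the manipulated
        output (trivial here, since the coordinate is the literal itself)."""
        return program(cx=new_cx)

    print(program())             # original design
    print(drag_circle_to(180))   # output after the user drags the circle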
In this paper, we design and implement new features for Sketch-n-Sketch that
assist in the programming process itself. Like typical direct manipulation
systems, our extended Sketch-n-Sketch now provides GUI-based tools for drawing
shapes, relating shapes to each other, and grouping shapes together. Unlike
typical systems, however, each tool carries out the user's intention by
transforming their general-purpose program. This novel, semi-automated
programming workflow allows the user to rapidly create high-level, reusable
abstractions in the program while at the same time retaining direct
manipulation capabilities. In future work, our approach may be extended with
more graphic design features or realized for other application domains.

Comment: In the 29th ACM User Interface Software and Technology Symposium (UIST 2016).
STELLAR (Semantic Technologies Enhancing the Lifecycle of Learning Resources): Jisc Final Report
[Project Summary]
As one of the earliest distance learning providers, The Open University (OU) has a rich heritage of archived learning materials. An ever-increasing amount of that material is in digital form and is being deposited with the University Archive. This growth has been driven by digitisation activity from projects such as AVA (Access to Video Assets) and the Fedora-based Open University Digital Library, ‘a place to discover digital and digitised archival content from the OU Library, from videos and images to digitised documents’. Other digital content is being captured from web archiving activities, such as work to preserve Moodle Virtual Learning Environment course websites. An evidence-based understanding is required to inform digital preservation policies, curation strategy and investment in digital library development.
Following the Pre-enhancement, Enhancement and Post-enhancement methodology set out by Jisc, STELLAR adopted the model of a balanced scorecard to ascertain the value ascribed to the non-current learning materials. Four aspects were considered: personal and professional perspectives of value; value to the Higher Education and academic communities; value to internal processes and cultures; and financial perspectives of value. The outcomes of the survey indicated that stakeholders place a high value on the materials, and that they perceived them to have value in all areas evaluated.
Three OU courses were chosen from the digital library for the transformation stage. These materials were enhanced and transformed into RDF, a process that required more extensive metadata expertise and effort than was expected. Following enhancement the RDF was accessed through a tool called DiscOU, created by a member of the project team from the OU’s Knowledge Media Institute. DiscOU uses both linked data and a semantic meaning engine to analyse the meaning of the text in a search query. This is matched against the meaning of the content derived from an index of the full-text of the digital library content.
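As a generic illustration of meaning-driven matching (DiscOU's actual engine is not reproduced here, and TF-IDF is a crude lexical stand-in for it), the sketch below ranks indexed course texts against a query by cosine similarity using scikit-learn; the documents and query are invented.

    # Crude stand-in for DiscOU-style matching: rank indexed course materials
    # against a query by cosine similarity over TF-IDF vectors.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [                              # invented indexed course texts
        "Introduction to heredity and genetics in plant biology",
        "Foundations of calculus: limits, derivatives and integrals",
    ]
    query = ["heredity in plant biology"]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform(query)

    scores = cosine_similarity(query_vector, doc_vectors)[0]
    best = scores.argmax()
    print(documents[best], round(float(scores[best]), 3))  # best match and score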
In the final stage, stakeholders were asked, through a survey and a series of workshops, to use the DiscOU proof-of-concept tool to assess their perception of the value of this transformation. This revealed that, overall, academics and other stakeholders in the university do believe that the value of the selected materials was positively impacted by the application of semantic technologies.
An EPIIC Vision to Evolve Project Integration, Innovation, and Collaboration with Broad Impact for How NASA Executes Complex Projects
Evolving Project Integration, Innovation, and Collaboration (EPIIC) is a vision defined to transform the way projects manage information to support real-time decisions, capture best practices and lessons learned, perform assessments, and manage risk across a portfolio of projects. The foundational project management needs for data and information will be revolutionized through innovations in how we manage and access that data, implement configuration control, and certify compliance. The embedded intelligence of new interactive data interfaces integrates technical and programmatic data such that near-real-time analytics can be performed to complete systems engineering and project management tasks more efficiently and accurately. System-wide data analytics integrated into customized data interfaces give the growing teams of engineers and managers required to develop and implement major NASA missions access to authoritative sources of system information while greatly reducing the labor required to complete system assessments. This would allow, for example, much of what is accomplished in a scheduled design review to take place as needed, between any team members, at any time. An intelligent data interface that rigorously integrates systems engineering and project management information in near real time can provide substantially greater insight for systems engineers, project managers, and the large, diverse teams required to complete a complex project. Systems engineers, programmatic personnel (those who focus on cost, schedule, and risk), the technical engineering disciplines, and project management can all realize immediate benefit from the shared vision described herein. Implementation of the vision also enables significant improvements in the performance of the engineered system being developed.
Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design
The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. Some general issues that cut across particular software design domains include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interfaces.