Systematic techniques for more effective fault localization and program repair
Debugging faulty code is a tedious process that is often quite expensive and can require much manual effort. Developers typically perform debugging in two key steps: (1) fault localization, i.e., identifying the location of faulty line(s) of code; and (2) program repair, i.e., modifying the code to remove the fault(s). Automating debugging to reduce its cost has been the focus of a number of research projects during the last decade, which have introduced a variety of techniques.
However, existing techniques suffer from two basic limitations. First, they lack the accuracy to handle real programs. Second, they automate only one of the two key steps, leaving the other step to the developer.
Our thesis is that an approach that integrates systematic search based on state-of-the-art constraint solvers with techniques to analyze artifacts that describe application-specific properties and behaviors provides the basis for developing more effective debugging techniques. We focus on faults in programs that operate on structurally complex inputs, such as heap-allocated data or relational databases.
Our approach lays the foundation for a unified framework for localization and repair of faults in programs. We embody our thesis in a suite of integrated techniques based on propositional satisfiability solving, correctness-specification analysis, test-spectra analysis, and rule-learning algorithms from machine learning; implement them as a prototype tool-set; and evaluate them using several subject programs.
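The test-spectra analysis mentioned above can be illustrated with a minimal spectrum-based fault-localization sketch (Tarantula-style suspiciousness scoring; the data and function names are illustrative, not the thesis's implementation):

```python
# Hedged sketch: rank program lines by how strongly their execution
# correlates with failing tests (Tarantula-style suspiciousness).

def tarantula_suspiciousness(spectra, outcomes):
    """spectra[t] = set of line numbers executed by test t;
    outcomes[t] = True if test t passed."""
    total_pass = sum(outcomes)
    total_fail = len(outcomes) - total_pass
    lines = set().union(*spectra)
    scores = {}
    for line in lines:
        ep = sum(1 for s, ok in zip(spectra, outcomes) if ok and line in s)
        ef = sum(1 for s, ok in zip(spectra, outcomes) if not ok and line in s)
        pass_ratio = ep / total_pass if total_pass else 0.0
        fail_ratio = ef / total_fail if total_fail else 0.0
        denom = pass_ratio + fail_ratio
        scores[line] = fail_ratio / denom if denom else 0.0
    return scores

# Line 3 is executed only by the failing test, so it ranks most suspicious.
spectra = [{1, 2}, {1, 2}, {1, 3}]
outcomes = [True, True, False]
scores = tarantula_suspiciousness(spectra, outcomes)
```

Higher scores mark better candidate fault locations for the repair step that follows.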
Program Model Checking: A Practitioner's Guide
Program model checking is a verification technology that uses state-space exploration to evaluate large numbers of potential program executions. Program model checking provides improved coverage over testing by systematically evaluating all possible test inputs and all possible interleavings of threads in a multithreaded system. Model-checking algorithms use several classes of optimizations to reduce the time and memory requirements for analysis, as well as heuristics for meaningful analysis of partial areas of the state space. Our goal in this guidebook is to assemble, distill, and demonstrate emerging best practices for applying program model checking. We offer it as a starting point and introduction for those who want to apply model checking to software verification and validation. The guidebook does not discuss any specific tool in great detail, but we provide references for specific tools.
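As a toy illustration of state-space exploration (not an example from the guidebook), the sketch below breadth-first-searches every interleaving of two unsynchronized threads incrementing a shared counter and finds the classic lost-update bug; all state encodings here are invented for the example:

```python
# Hedged sketch: explicit-state model checking as BFS over reachable
# states, checking a safety property at each state.
from collections import deque

def check(initial, successors, safe):
    """Explore all reachable states; return a counterexample trace or None."""
    frontier = deque([(initial, [initial])])
    seen = {initial}
    while frontier:
        state, trace = frontier.popleft()
        if not safe(state):
            return trace
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None

# State: (counter, pc1, tmp1, pc2, tmp2); each thread does
#   pc 0: tmp = counter;  pc 1: counter = tmp + 1;  pc 2: done.
def successors(state):
    counter, pc1, tmp1, pc2, tmp2 = state
    out = []
    if pc1 == 0:                                  # thread 1 reads
        out.append((counter, 1, counter, pc2, tmp2))
    elif pc1 == 1:                                # thread 1 writes
        out.append((tmp1 + 1, 2, tmp1, pc2, tmp2))
    if pc2 == 0:                                  # thread 2 reads
        out.append((counter, pc1, tmp1, 1, counter))
    elif pc2 == 1:                                # thread 2 writes
        out.append((tmp2 + 1, pc1, tmp1, 2, tmp2))
    return out

def safe(state):
    counter, pc1, _, pc2, _ = state
    return not (pc1 == 2 and pc2 == 2 and counter != 2)

trace = check((0, 0, 0, 0, 0), successors, safe)  # counterexample found
```

The returned trace is the interleaving in which both threads read the counter before either writes, leaving it at 1 instead of 2.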
Automated application robustification based on outlier detection
University of Minnesota Master of Science thesis. August 2013. Major: Electrical/Computer Engineering. Advisor: John Sartori. 1 computer file (PDF); viii, 63 pages.

In this thesis, we propose automated algorithmic error resilience based on outlier detection. Our approach employs metric functions that normally produce values following a designed distribution or behavior, and produce outlier values (i.e., values that do not conform to that distribution or behavior) when computations are affected by errors. Thus, for our robust algorithms, error detection becomes equivalent to outlier detection. Our error-resilient algorithms use outlier detection not only to detect errors but also to reduce the amount of redundancy required to produce correct results when errors are detected. They incur significantly lower overhead than traditional hardware and software error resilience techniques. Also, compared to previous approaches to application-based error resilience, our approach parameterizes the robustification process, making it easy to transform large classes of applications into robust applications automatically, with parser-based tools and minimal programmer effort. We demonstrate automated error resilience based on outlier detection for two important classes of applications, namely structured grid and dynamic programming problems, leveraging the flexibility of algorithmic error resilience to achieve improved application robustness and lower overhead than previous error resilience approaches.
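A minimal sketch of the core idea, using an algorithm-based checksum metric for a matrix-vector product (everything here is illustrative; the thesis targets structured grid and dynamic programming codes, not this particular metric):

```python
# Hedged sketch: attach a cheap metric function whose value is ~0 on
# correct runs, so an error shows up as an outlier of the metric
# instead of requiring full duplicated execution.
# For y = A @ x, the metric is dot(colsum(A), x) - sum(y).

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def checksum_vector(A):
    # Column sums: dot(colsum(A), x) == sum(A @ x) for a correct result.
    return [sum(col) for col in zip(*A)]

def metric(c, x, y):
    # In-distribution value is ~0; a large value is an outlier (error).
    return abs(sum(ci * xi for ci, xi in zip(c, x)) - sum(y))

A = [[1.0, 2.0], [3.0, 4.0]]
x = [5.0, 6.0]
c = checksum_vector(A)            # computed once, reused across inputs
y = matvec(A, x)
ok = metric(c, x, y)              # ~0: no error detected
bad = metric(c, x, [y[0] + 0.5, y[1]])   # large: injected fault flagged
```

The metric costs O(n) per check versus O(n^2) for recomputation, which is where the overhead savings over duplication come from.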
SAVCBS 2004 Specification and Verification of Component-Based Systems: Workshop Proceedings
This is the proceedings of the 2004 SAVCBS workshop. The workshop is concerned with how formal (i.e., mathematical) techniques can be or should be used to establish a suitable foundation for the specification and verification of component-based systems. Component-based systems are a growing concern for the software engineering community. Specification and reasoning techniques are urgently needed to permit composition of systems from components. Component-based specification and verification is also vital for scaling advanced verification techniques such as extended static analysis and model checking to the size of real systems. The workshop considers formalization of both functional and non-functional behavior, such as performance or reliability.
A microcomputer-based vision system to recognize and locate partially occluded parts in binary and gray level images
This paper presents a microcomputer-based machine vision system to recognize and locate partially occluded parts in binary or gray-level images. The recognition process is restricted to untilted, two-dimensional objects.

A new edge-tracking technique, in conjunction with a straight-line approximation algorithm, is used to identify the local features in an image. Corners and holes serve as local features. The local features identified in an image are matched against all the compatible features stored for the model parts. The algorithm computes, for all image-to-model feature matches, a coordinate transformation that maps a model feature onto an image feature. A new clustering algorithm has been developed to identify consistent coordinate-transformation clusters that serve as initial match hypotheses. A hypothesis verification process eliminates the match hypotheses that are not compatible with the image information.

The system's performance was compared to that of a vision system restricted to recognizing nonoverlapping parts. Both systems require the same hardware configuration and share the basic image-processing routines.
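The coordinate-transformation clustering step can be sketched as translation voting, assuming untilted parts so that the model-to-image transform reduces to a pure translation (the feature data, names, and quantization are invented for illustration):

```python
# Hedged sketch: each compatible model/image feature pair votes for the
# translation it implies; pairs belonging to the real part pile up in
# one cluster even when some features are occluded or spurious.
from collections import Counter

def translation_hypotheses(model_feats, image_feats, quantum=2):
    votes = Counter()
    for (mx, my, mtype) in model_feats:
        for (ix, iy, itype) in image_feats:
            if mtype != itype:        # only compatible features may match
                continue
            dx, dy = ix - mx, iy - my
            bucket = (round(dx / quantum), round(dy / quantum))
            votes[bucket] += 1
    return votes

model = [(0, 0, "corner"), (10, 0, "corner"), (10, 5, "hole")]
# Part shifted by (20, 30); its hole is occluded, and one spurious
# hole feature appears elsewhere in the image.
image = [(20, 30, "corner"), (30, 30, "corner"), (7, 7, "hole")]
votes = translation_hypotheses(model, image)
best, count = votes.most_common(1)[0]
# best * quantum recovers the translation (20, 30) as the top hypothesis
```

The dominant bucket becomes the initial match hypothesis handed to the verification stage.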
Developing a distributed electronic health-record store for India
The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.
AI for IT Operations (AIOps) on Cloud Platforms: Reviews, Opportunities and Challenges
Artificial Intelligence for IT Operations (AIOps) aims to combine the power of AI with the big data generated by IT Operations processes, particularly in cloud infrastructures, to provide actionable insights with the primary goal of maximizing availability. There are a wide variety of problems to address, and multiple use cases, where AI capabilities can be leveraged to enhance operational efficiency. Here we provide a review of the AIOps vision, trends, challenges, and opportunities, specifically focusing on the underlying AI techniques. We discuss in depth the key types of data emitted by IT Operations activities, the scale of and challenges in analyzing them, and where they can be helpful. We categorize the key AIOps tasks as incident detection, failure prediction, root cause analysis, and automated actions. We discuss the problem formulation for each task and then present a taxonomy of techniques to solve these problems. We also identify relatively underexplored topics, especially those that could significantly benefit from advances in the AI literature, and provide insights into the trends in this field and the key investment opportunities.
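As a toy stand-in for the incident-detection task (not a technique taken from the survey), a rolling z-score detector can flag points in an operational metric stream that deviate from their recent baseline; the data and threshold below are illustrative:

```python
# Hedged sketch: flag metric samples (e.g., request latency) whose
# z-score against a rolling baseline window exceeds a threshold.
import statistics

def detect_incidents(series, window=10, threshold=3.0):
    incidents = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1e-9  # avoid divide-by-zero
        if abs(series[i] - mu) / sigma > threshold:
            incidents.append(i)
    return incidents

# A latency spike at index 10 stands out against the stable baseline.
latency = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 450, 100]
incidents = detect_incidents(latency)
```

Production AIOps detectors replace this with learned models, but the formulation — score each point against a baseline, alert on outliers — is the same.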
Satisfiability-Based Methods for Digital Circuit Design, Debug, and Optimization
Designing digital circuits well is notoriously difficult. This difficulty stems in part from the very many degrees of freedom inherent in circuit design, typically coupled with the need to satisfy various constraints. In this thesis, we demonstrate how formulations of satisfiability problems can be used to automatically complete a design, or to find a specific design architecture that satisfies certain constraints, and how these formulations can be used to create, debug, and optimize designs; we also introduce a domain-specific language particularly well suited for satisfiability-assisted design, debug, and optimization.
In the first application, we show how explicit uncertainties called "holes" can both be natural to use and conducive to the creation of formal satisfiability problems useful for designing circuits. We further develop a Scala-hosted Domain-Specific Language (DSL) with appropriate syntactic sugar to make design with holes easy and effective.
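A rough sketch of design-with-holes, with exhaustive search over a tiny space standing in for the SAT solver (the actual work uses a Scala-hosted DSL and a real solver; the circuit, hole, and spec here are invented):

```python
# Hedged sketch: a "hole" is an unknown gate in an otherwise fixed
# design; the solver (here, brute force) finds a filling that makes
# the circuit match its specification on all inputs.
from itertools import product

def circuit(a, b, hole):
    # Fixed design with one unknown gate: out = NOT(hole(a, b)).
    return 1 - hole[(a, b)]

def fill_hole(spec, n_inputs=2):
    """Search truth tables for the hole until circuit == spec everywhere."""
    inputs = list(product([0, 1], repeat=n_inputs))
    for table in product([0, 1], repeat=2 ** n_inputs):
        hole = dict(zip(inputs, table))
        if all(circuit(a, b, hole) == spec(a, b) for a, b in inputs):
            return hole
    return None

xor_spec = lambda a, b: a ^ b
hole = fill_hole(xor_spec)        # solver discovers the hole must be XNOR
```

A real SAT formulation encodes the same search symbolically, which is what makes the approach scale past toy truth tables.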
We then show how, using the same kind of satisfiability formulation, we can automatically instrument a given buggy design to replace suspicious syntax fragments with potentially correct alternatives. The satisfiability solver then determines whether any set of alternative fragments fixes the bug. We also demonstrate that this approach is reasonably scalable, in part because there is less need for a fully precise specification in the formulation of the satisfiability problem.
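The repair loop can be sketched in miniature, again with exhaustive search standing in for the satisfiability solver (the buggy design, candidate fragments, and test cases are all invented for illustration):

```python
# Hedged sketch: replace a suspicious fragment with each candidate
# alternative until the design passes every test.
from itertools import product
import operator

def run(choice, x, y):
    op1, = choice                 # one suspicious fragment in this toy design
    return op1(x, y)

def repair(n_slots, candidates, tests):
    """Return a fragment assignment that passes all tests, or None."""
    for choice in product(candidates, repeat=n_slots):
        if all(run(choice, x, y) == expected for (x, y), expected in tests):
            return choice
    return None

# The buggy design used AND where the tests demand XOR behavior.
candidates = [operator.and_, operator.or_, operator.xor]
tests = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
fix = repair(1, candidates, tests)
```

Note that the tests here play the role of the partial specification: the search only needs enough constraints to rule out wrong fragments, not a complete formal spec.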
We then advance beyond mere hole-filling and show how a tight integration of design elaboration with satisfiability solvers allows totally new approaches. In particular, we use this tight integration to create the first known methods to optimize Gate-Level Information Flow Tracking (GLIFT) model circuits and to make principled trade-offs in their precision.
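For context, the standard GLIFT shadow logic for a two-input AND gate (the kind of model circuit these optimizations operate over) can be written as follows; this is a textbook sketch, not the thesis's optimized circuits:

```python
# Hedged sketch: GLIFT tracks, per wire, whether a tainted input can
# actually affect the value. For AND, a trusted 0 on one input masks
# the other input, so no taint flows through it.
def glift_and(a, a_t, b, b_t):
    """Return (output, output_taint) for AND with taint bits a_t, b_t."""
    out = a & b
    out_t = (a_t & b) | (b_t & a) | (a_t & b_t)
    return out, out_t

# A trusted 0 on one input hides the tainted input: output is untainted.
assert glift_and(0, 0, 1, 1) == (0, 0)
# With the trusted input at 1, the tainted input determines the output.
assert glift_and(1, 0, 1, 1) == (1, 1)
```

This conditional propagation is what makes GLIFT more precise than naive "taint any output touching a tainted input" tracking, and also what makes its circuits expensive enough to be worth optimizing.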
Finally, integrating all the previous work, we propose a more powerful DSL specifically designed to address the shortcomings of the first "hole-filling" language. This language, which we call Nasadiya, affords more general integrations of satisfiability into circuit design and optimization, and provides built-in modeling functionality useful for optimizing extra-functional properties like critical path delay and circuit area. We demonstrate the utility of these features by implementing an automatic power optimizer for a popular type of parallel prefix adders
- …