Reverse Engineering Environment for Teaching Secure Coding in Java
Few toolsets for program analysis and Java learning systems provide an integrated console, debugger, and reverse-engineering visualizer. We present an interactive debugging environment for Java that helps students understand secure coding by detecting and visualizing data-flow anomalies. Previous research shows that the earlier students learn secure coding concepts, ideally at the same time as they first learn to write code, the more consistently they continue to use secure coding practices. This paper proposes a web-based Java programming environment for teaching secure coding practices that conveys essential and fundamental secure coding skills. The tool also helps students understand data anomalies and security leaks by detecting vulnerabilities in the given code.
MCViNE -- An object oriented Monte Carlo neutron ray tracing simulation package
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is a versatile Monte Carlo
(MC) neutron ray-tracing program that provides researchers with tools for
performing computer modeling and simulations that mirror real neutron
scattering experiments. By adopting modern software engineering practices such
as using composite and visitor design patterns for representing and accessing
neutron scatterers, and using recursive algorithms for multiple scattering,
MCViNE is flexible enough to handle sophisticated neutron scattering problems
including, for example, neutron detection by complex detector systems, and
single and multiple scattering events in a variety of samples and sample
environments. In addition, MCViNE can take advantage of simulation components
in linear-chain-based MC ray tracing packages widely used in instrument design
and optimization, as well as NumPy-based components that make prototypes useful
and easy to develop. These developments have enabled us to carry out detailed
simulations of neutron scattering experiments with non-trivial samples in
time-of-flight inelastic instruments at the Spallation Neutron Source. Examples
of such simulations for powder and single-crystal samples with various
scattering kernels, including kernels for phonon and magnon scattering, are
presented. With simulations that closely reproduce experimental results,
scattering mechanisms can be turned on and off to determine how they contribute
to the measured scattering intensities, improving our understanding of the
underlying physics.
Comment: 34 pages, 14 figures
CLPGUI: a generic graphical user interface for constraint logic programming over finite domains
CLPGUI is a graphical user interface for visualizing and interacting with
constraint logic programs over finite domains. In CLPGUI, the user can control
the execution of a CLP program through several views of constraints, of finite
domain variables and of the search tree. CLPGUI is intended to be used both for
teaching purposes, and for debugging and improving complex programs of
real-world scale. It is based on a client-server architecture for connecting the
CLP process to a Java-based GUI process. Communication by message passing
provides an open architecture which facilitates the reuse of graphical
components and the porting to different constraint programming systems.
Arbitrary constraints and goals can be posted incrementally from the GUI. We
propose several dynamic 2D and 3D visualizations of the search tree and of the
evolution of finite domain variables. We argue that the 3D representation of
search trees proposed in this paper provides the most appropriate visualization
of large search trees. We describe the current implementation of the
annotations and of the interactive execution model in GNU-Prolog, and report
some evaluation results.
Comment: 16 pages; Alexandre Tessier, editor; WLPE 2002,
http://xxx.lanl.gov/abs/cs.SE/020705
Bringing Back-in-Time Debugging Down to the Database
With back-in-time debuggers, developers can explore what happened before
observable failures by following infection chains back to their root causes.
While there are several such debuggers for object-oriented programming
languages, we do not know of any back-in-time capabilities at the
database-level. Thus, if failures are caused by SQL scripts or stored
procedures, developers have difficulties in understanding their unexpected
behavior.
In this paper, we present an approach for bringing back-in-time debugging
down to the SAP HANA in-memory database. Our TARDISP debugger allows developers
to step queries backwards and to inspect the database at arbitrary previous
points in time. With the help of a SQL extension, we can express queries
covering a period of execution time within a debugging session and handle large
amounts of data with low overhead on performance and memory. The entire
approach has been evaluated within a development project at SAP and shows
promising results with respect to the gathered developer feedback.
Comment: 24th IEEE International Conference on Software Analysis, Evolution,
and Reengineering
A Monitoring Language for Run Time and Post-Mortem Behavior Analysis and Visualization
UFO is a new implementation of FORMAN, a declarative monitoring language, in
which rules are compiled into execution monitors that run on a virtual machine
supported by the Alamo monitor architecture.
Comment: In M. Ronsse, K. De Bosschere (eds), proceedings of the Fifth
International Workshop on Automated Debugging (AADEBUG 2003), September 2003,
Ghent. cs.SE/030902
Analysis of Software Binaries for Reengineering-Driven Product Line Architecture - An Industrial Case Study
This paper describes a method for the recovering of software architectures
from a set of similar (but unrelated) software products in binary form. One
intention is to drive refactoring into software product lines and combine
architecture recovery with run time binary analysis and existing clustering
methods. Using our runtime binary analysis, we create graphs that capture the
dependencies between different software parts. These are clustered into smaller
component graphs, that group software parts with high interactions into larger
entities. The component graphs serve as a basis for further software product
line work. In this paper, we concentrate on the analysis part of the method and
the graph clustering. We apply the graph clustering method to a real
application in the context of automation/robot configuration software tools.
Comment: In Proceedings FMSPLE 2015, arXiv:1504.0301
ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization
ROOT is an object-oriented C++ framework conceived in the high-energy physics
(HEP) community, designed for storing and analyzing petabytes of data in an
efficient way. Any instance of a C++ class can be stored into a ROOT file in a
machine-independent compressed binary format. In ROOT the TTree object
container is optimized for statistical data analysis over very large data sets
by using vertical data storage techniques. These containers can span a large
number of files on local disks, the web, or a number of different shared file
systems. In order to analyze this data, the user can choose from a wide set of
mathematical and statistical functions, including linear algebra classes,
numerical algorithms such as integration and minimization, and various methods
for performing regression analysis (fitting). In particular, ROOT offers
packages for complex data modeling and fitting, as well as multivariate
classification based on machine learning techniques. A central piece in these
analysis tools are the histogram classes which provide binning of one- and
multi-dimensional data. Results can be saved in high-quality graphical formats
like PostScript and PDF or in bitmap formats like JPG or GIF. The result can
also be stored into ROOT macros that allow a full recreation and rework of the
graphics. Users typically create their analysis macros step by step, making use
of the interactive C++ interpreter CINT, while running over small data samples.
Once the development is finished, they can run these macros at full compiled
speed over large data sets, using on-the-fly compilation, or by creating a
stand-alone batch program. Finally, if processing farms are available, the user
can reduce the execution time of intrinsically parallel tasks - e.g. data
mining in HEP - by using PROOF, which will take care of optimally distributing
the work over the available resources in a transparent way.