6,798 research outputs found
An evaluation framework to drive future evolution of a research prototype
The Open Source Component Artefact Repository (OSCAR) requires
evaluation to confirm its suitability as a development environment
for distributed software engineers. The evaluation will take note of
several factors including usability of OSCAR as a stand-alone system,
scalability and maintainability of the system and novel features not
provided by existing artefact management systems. Additionally, the
evaluation design attempts to address some of the omissions (due to
time constraints) from the industrial partner evaluations.
This evaluation is intended to be a prelude to the evaluation of the
awareness support being added to OSCAR, thus establishing a baseline
against which the effects of awareness support may be compared.
Cause Clue Clauses: Error Localization using Maximum Satisfiability
Programmers spend much effort every day trying to reduce long, failing
execution traces to the cause of the error. We present a new algorithm
for error cause localization based on a reduction to the maximum satisfiability
problem (MAX-SAT), which asks for the maximum number of clauses of a
Boolean formula that can be simultaneously satisfied by an assignment. At an
intuitive level, our algorithm takes as input a program and a failing test, and
comprises the following three steps. First, using symbolic execution, we encode
a trace of a program as a Boolean trace formula which is satisfiable iff the
trace is feasible. Second, for a failing program execution (e.g., one that
violates an assertion or a post-condition), we construct an unsatisfiable
formula by taking the trace formula and additionally asserting that the input
is the failing test and that the assertion condition does hold at the end.
Third, using MAX-SAT, we find a maximal set of clauses in this formula that can
be satisfied together, and output the complement set as a potential cause of
the error. We have implemented our algorithm in a tool called bug-assist for C
programs. We demonstrate the surprising effectiveness of the tool on a set of
benchmark examples with injected faults, and show that in most cases,
bug-assist can quickly and precisely isolate the exact few lines of code whose
change eliminates the error. We also demonstrate how our algorithm can be
modified to automatically suggest fixes for common classes of errors such as
off-by-one.
Comment: The pre-alpha version of the tool can be downloaded from
http://bugassist.mpi-sws.or
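The three-step reduction above can be illustrated on a toy Boolean trace. The sketch below is not bug-assist itself: it uses a brute-force partial MAX-SAT search in place of a real solver, and the two-statement trace, its encoding, and all names are invented for illustration. Each statement of the trace contributes one soft clause; the failing input and the assertion condition are hard constraints, and the complement of a maximum satisfiable set is reported as a candidate error location.

```python
from itertools import product

# Hypothetical failing trace (statement numbering invented for illustration):
#   1: b = not a        # statement 1
#   2: c = b and a      # statement 2
#   assert c            # post-condition; fails on the input a = True

# Hard constraints: the input is the failing test, and the assertion holds.
hard = [lambda a, b, c: a is True,   # fix the failing input
        lambda a, b, c: c is True]   # assert the post-condition does hold

# Soft clauses: one per statement's transfer function in the trace formula.
soft = {1: lambda a, b, c: b == (not a),
        2: lambda a, b, c: c == (b and a)}

def max_sat_complements(hard, soft):
    """Brute-force partial MAX-SAT: over all assignments satisfying the hard
    constraints, find the maximum number of simultaneously satisfiable soft
    clauses and return the complement sets (candidate error locations)."""
    best, complements = -1, set()
    for a, b, c in product([False, True], repeat=3):
        if not all(h(a, b, c) for h in hard):
            continue
        satisfied = {i for i, f in soft.items() if f(a, b, c)}
        if len(satisfied) > best:
            best, complements = len(satisfied), set()
        if len(satisfied) == best:
            complements.add(frozenset(soft) - satisfied)
    return complements

print(sorted(sorted(s) for s in max_sat_complements(hard, soft)))
```

Here the two soft clauses conflict under the hard constraints, so at most one can be satisfied; both `{1}` and `{2}` come back as complement sets, i.e. either statement is a candidate cause. A production tool would replace the brute-force loop with a MAX-SAT solver over the symbolic trace formula.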
Neuroimaging study designs, computational analyses and data provenance using the LONI pipeline.
Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges: management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu
ATMSeer: Increasing Transparency and Controllability in Automated Machine Learning
To relieve the pain of manually selecting machine learning algorithms and
tuning hyperparameters, automated machine learning (AutoML) methods have been
developed to automatically search for good models. Due to the huge model search
space, it is impossible to try all models. Users tend to distrust automatic
results and increase the search budget as much as they can, thereby undermining
the efficiency of AutoML. To address these issues, we design and implement
ATMSeer, an interactive visualization tool that supports users in refining the
search space of AutoML and analyzing the results. To guide the design of
ATMSeer, we derive a workflow of using AutoML based on interviews with machine
learning experts. A multi-granularity visualization is proposed to enable users
to monitor the AutoML process, analyze the searched models, and refine the
search space in real time. We demonstrate the utility and usability of ATMSeer
through two case studies, expert interviews, and a user study with 13 end
users.
Comment: Published in the ACM Conference on Human Factors in Computing Systems
(CHI), 2019, Glasgow, Scotland U
- …