Supporting Defect Causal Analysis in Practice with Cross-Company Data on Causes of Requirements Engineering Problems
[Context] Defect Causal Analysis (DCA) represents an efficient practice to
improve software processes. While knowledge on cause-effect relations is
helpful to support DCA, collecting cause-effect data may require significant
effort and time. [Goal] We propose and evaluate a new DCA approach that uses
cross-company data to support the practical application of DCA. [Method] We
collected cross-company data on causes of requirements engineering problems
from 74 Brazilian organizations and built a Bayesian network. Our DCA approach
uses the diagnostic inference of the Bayesian network to support DCA sessions.
We evaluated our approach by applying a model for technology transfer to
industry and conducted three consecutive evaluations: (i) in academia, (ii)
with industry representatives of the Fraunhofer Project Center at UFBA, and
(iii) in an industrial case study at the Brazilian National Development Bank
(BNDES). [Results] We received positive feedback in all three evaluations and
the cross-company data was considered helpful for determining main causes.
[Conclusions] Our results strengthen our confidence that supporting DCA with
cross-company data is promising and should be investigated further.

Comment: 10 pages, 8 figures, accepted for the 39th International Conference
on Software Engineering (ICSE'17).
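The core of the approach above is diagnostic (backward) inference in a Bayesian network: given an observed requirements engineering problem, rank the candidate causes by their posterior probability. A minimal sketch of that idea for a single observed problem, using Bayes' rule directly; all cause names, priors, and likelihoods below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of diagnostic inference over candidate causes of
# an observed requirements engineering problem. Given prior probabilities
# of the causes and the likelihood that each cause produces the observed
# problem, Bayes' rule yields a ranking of the most probable causes.
# All names and numbers here are illustrative, not from the study.

def diagnose(priors, likelihoods):
    """Return P(cause | problem observed), highest first."""
    # Unnormalised posterior: P(cause) * P(problem | cause)
    joint = {c: priors[c] * likelihoods[c] for c in priors}
    evidence = sum(joint.values())            # P(problem)
    posterior = {c: p / evidence for c, p in joint.items()}
    return sorted(posterior.items(), key=lambda kv: -kv[1])

priors = {"unclear requirements": 0.40,       # assumed P(cause)
          "lack of domain knowledge": 0.35,
          "communication gaps": 0.25}
likelihoods = {"unclear requirements": 0.7,   # assumed P(problem | cause)
               "lack of domain knowledge": 0.4,
               "communication gaps": 0.5}

for cause, prob in diagnose(priors, likelihoods):
    print(f"{cause}: {prob:.2f}")
```

In a real DCA session the cross-company data would supply the priors and likelihoods, and a full Bayesian network would handle multiple interacting causes rather than this single-evidence simplification.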
Classifying Relations by Ranking with Convolutional Neural Networks
Relation classification is an important semantic processing task for which
state-of-the-art systems still rely on costly handcrafted features. In this work
we tackle the relation classification task using a convolutional neural network
that performs classification by ranking (CR-CNN). We propose a new pairwise
ranking loss function that makes it easy to reduce the impact of artificial
classes. We perform experiments using the SemEval-2010 Task 8 dataset,
which is designed for the task of classifying the relationship between two
nominals marked in a sentence. Using CR-CNN, we outperform the state-of-the-art
for this dataset and achieve an F1 of 84.1 without using any costly handcrafted
features. Additionally, our experimental results show that: (1) our approach is
more effective than CNN followed by a softmax classifier; (2) omitting the
representation of the artificial class Other improves both precision and
recall; and (3) using only word embeddings as input features is enough to
achieve state-of-the-art results if we consider only the text between the two
target nominals.

Comment: Accepted as a long paper in the 53rd Annual Meeting of the
Association for Computational Linguistics (ACL 2015).
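The pairwise ranking loss described above contrasts the score of the gold class with the score of the most competitive wrong class, and handles the artificial class Other by dropping the gold-class term. A minimal sketch of this kind of loss on raw class scores; the margin and scaling values are illustrative assumptions, not tuned settings from the paper:

```python
import math

# Sketch of a CR-CNN-style pairwise ranking loss. For an example with
# gold class y+ and the highest-scoring incorrect class c-, the loss
# pushes the score of y+ above a margin m_pos and the score of c-
# below -m_neg. When the gold class is the artificial class "Other",
# the gold-class term is dropped, which is how the loss reduces that
# class's impact. Margins and gamma below are illustrative values.

def ranking_loss(scores, gold, m_pos=2.5, m_neg=0.5, gamma=2.0,
                 artificial="Other"):
    # Most competitive incorrect class c-
    wrong = max((c for c in scores if c != gold), key=scores.get)
    loss = math.log1p(math.exp(gamma * (m_neg + scores[wrong])))
    if gold != artificial:
        loss += math.log1p(math.exp(gamma * (m_pos - scores[gold])))
    return loss

scores = {"Cause-Effect": 3.1, "Component-Whole": 0.8, "Other": -1.2}
print(ranking_loss(scores, gold="Cause-Effect"))
```

Note how a higher gold-class score lowers the loss, while examples labelled Other only penalise confident wrong classes instead of forcing a representation for Other itself.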
New method for calculating helicity amplitudes of jet--like QED processes for high--energy colliders I. Bremsstrahlung processes
Inelastic QED processes, the cross sections of which do not drop with
increasing energy, play an important role at high-energy colliders. Such
reactions have the form of two-jet processes with the exchange of a virtual
photon in the t-channel. We consider them in the region of small scattering
angles, which yields the dominant contribution to
their total cross sections. A new effective method is presented and applied to
QED processes with emission of real photons to calculate the helicity
amplitudes of these processes. Its basic idea is similar to the well-known
equivalent-lepton method. Compact analytical expressions for those amplitudes
are derived, omitting only terms of higher order. The helicity amplitudes are presented
in a compact form in which large compensating terms are already cancelled. Some
common properties for all jet-like processes are found and we discuss their
origin.

Comment: 17 pages, LaTeX (svjour style files included).
Implementation and complexity of the watershed-from-markers algorithm computed as a minimal cost forest
The watershed algorithm belongs to classical algorithms in mathematical
morphology. Lotufo et al. published a principle of the watershed computation by
means of an Image Foresting Transform (IFT), which computes a shortest path
forest from given markers. The algorithm itself was described for a 2D case
(image) without a detailed discussion of its computation and memory demands for
real datasets. As the IFT cleverly solves the problem of plateaus and gives
precise results when thin objects have to be segmented, it is natural to apply
this algorithm to 3D datasets, keeping the higher memory consumption of the 3D
case in check without losing the low asymptotic time complexity of O(m+C) (or
the real computation speed). The main goal of this paper is an implementation
of the IFT algorithm with a priority queue with buckets, carefully tuned to
minimize memory consumption.
The paper presents five possible modifications and implementations of the IFT
algorithm. All of them keep the time complexity of the standard priority queue
with buckets, but the best one minimizes costly memory allocation and needs
only 19-45% of the memory for typical 3D medical imaging datasets. The memory
saving was achieved by a simplification of the IFT algorithm that stores more
elements in temporary structures; these elements are simpler and thus need
less memory.
The best presented modification allows segmentation of large 3D medical
datasets (up to 512x512x680 voxels) with 12 or 16 bits per voxel on currently
available PC-based workstations.

Comment: v1: 10 pages, 6 figures, 7 tables, EUROGRAPHICS conference,
Manchester, UK, 2001. v2: 12 pages, reformatted for letter, corrected IFT to
"Image Foresting Transform".
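The combination described above, a shortest-path forest grown from markers with a bucket priority queue (one FIFO per grey level), is what yields the O(m+C) time bound. A minimal 2D sketch of that idea; the 4-connected grid and max-arc path cost are illustrative simplifications, not the paper's tuned implementation:

```python
from collections import deque

# Sketch of watershed-from-markers as an Image Foresting Transform
# with a bucket priority queue: one FIFO deque per grey level, so
# extracting the minimum-cost pixel is O(1) and the whole run is
# O(m + C). The 4-connected grid and the max-arc path cost are
# illustrative simplifications.

def ift_watershed(image, markers, max_value=255):
    h, w = len(image), len(image[0])
    cost = [[float("inf")] * w for _ in range(h)]
    label = [row[:] for row in markers]          # 0 = unlabelled
    buckets = [deque() for _ in range(max_value + 1)]
    for y in range(h):
        for x in range(w):
            if markers[y][x]:                    # seed the markers
                cost[y][x] = image[y][x]
                buckets[image[y][x]].append((y, x))
    for c in range(max_value + 1):               # increasing path cost
        q = buckets[c]
        while q:
            y, x = q.popleft()
            if cost[y][x] != c:                  # stale queue entry
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    nc = max(c, image[ny][nx])   # max-arc path cost
                    if nc < cost[ny][nx]:
                        cost[ny][nx] = nc
                        label[ny][nx] = label[y][x]
                        buckets[nc].append((ny, nx))
    return label
```

Leaving stale entries in the buckets instead of removing them is the kind of time/memory trade-off the paper's modifications tune: temporary structures hold more, but simpler, elements.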
Write Free or Die: Vol. 01, No. 02
Writing at UNH, Page 1
Upcoming Events, Page 1
Writing Committee Members, Page 2
Dangling Modifier, Page 2
Ask Patty, Page 3
Les Perelman, Page 4
Grammar Box, Page 4
Tom Newkirk and Self-Conferencing, Page 5
Notes on Oxford Comma, Page 6
Past Perfect, Page 9
Faculty Resources, Page
Optimal power harness routing for small-scale satellites
This paper presents an approach to optimal power harness design based on a modified ant colony optimisation algorithm. The optimisation of the harness routing topology is formulated as a constrained multi-objective optimisation problem in which the main objectives are to minimise the length (and therefore the mass) of the harness. The modified ant colony optimisation algorithm automatically routes different types of wiring, creating the optimal harness layout. During the optimisation the length, mass and degree of bundling of the cables are computed and used as cost functions. The optimisation algorithm works incrementally on a finite set of waypoints, forming a tree, by adding and evaluating one branch at a time, using a set of heuristics with cable length and cable bundling as criteria to select the optimal path. Constraints are introduced as forbidden waypoints through which the digital agents (hereafter called ants) cannot travel. The new algorithm is applied to the design of the harness of a small satellite, with results highlighting the capabilities and potential of the code.
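The routing idea above, growing cable routes over a finite set of waypoints while rewarding bundling and forbidding blocked waypoints, can be sketched without the full ant colony machinery. The greedy shortest-path stand-in below replaces the pheromone-guided search with Dijkstra over a grid of waypoints; the bundling discount factor and grid layout are illustrative assumptions:

```python
import heapq

# Simplified sketch of the harness-routing idea: route each cable over
# a grid of waypoints with Dijkstra, discounting edges already used by
# earlier cables so routes tend to bundle, and treating forbidden
# waypoints as untraversable. The real method uses ant colony
# optimisation; this greedy stand-in and the discount factor are
# illustrative assumptions.

def route_cables(width, height, cables, forbidden=(), bundle_discount=0.5):
    forbidden = set(forbidden)
    used_edges = set()                     # edges already carrying a cable
    routes = []
    for start, goal in cables:
        dist, prev = {start: 0.0}, {}
        pq = [(0.0, start)]
        while pq:
            d, (x, y) = heapq.heappop(pq)
            if (x, y) == goal:
                break
            if d > dist[(x, y)]:
                continue
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                    continue
                if nxt in forbidden:       # ants cannot travel here
                    continue
                edge = frozenset(((x, y), nxt))
                step = bundle_discount if edge in used_edges else 1.0
                nd = d + step
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    prev[nxt] = (x, y)
                    heapq.heappush(pq, (nd, nxt))
        path, node = [goal], goal          # reconstruct the branch
        while node != start:
            node = prev[node]
            path.append(node)
        path.reverse()
        used_edges.update(frozenset(p) for p in zip(path, path[1:]))
        routes.append(path)
    return routes
```

In the actual algorithm, ants deposit pheromone on good branches and the length/mass/bundling cost functions bias subsequent ants, so the search explores alternatives rather than committing greedily as this sketch does.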