Estimating, planning and managing Agile Web development projects under a value-based perspective
Context: The processes of estimating, planning and managing are crucial for software development projects, since the results must be aligned with several business strategies. The broad expansion of the Internet and the global, interconnected economy mean that Web development projects are often characterized by demands such as delivering as soon as possible, reducing time to market and adapting to undefined requirements. In this kind of environment, traditional methodologies based on predictive techniques sometimes do not offer very satisfactory results. The rise of Agile methodologies and practices has provided some useful tools that, combined with Web Engineering techniques, can help to establish a framework to estimate, manage and plan Web development projects.
Objective: This paper presents a proposal for estimating, planning and managing Web projects by combining existing Agile techniques with Web Engineering principles, presenting them as a unified framework that uses business value to guide the delivery of features.
Method: The proposal is analyzed by means of a case study, including a real-life project, in order to draw relevant conclusions.
Results: The results achieved after using the framework in a development project are presented, including interesting results on project planning and estimation, as well as on team productivity throughout the project.
Conclusion: It is concluded that the framework can be useful for better managing Web-based projects through a continuous value-based estimation and management process.
Funding: Ministerio de Economía y Competitividad TIN2013-46928-C3-3-
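The abstract does not give the framework's formulas, but value-guided delivery is typically operationalized by ordering the backlog by business value per unit of estimated effort and packing features into iterations by team velocity. A minimal sketch of that idea, with hypothetical feature names and numbers:

```python
# Hypothetical sketch: order a feature backlog by business value per story
# point, then greedily slice it into iterations sized by the team's velocity.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    business_value: int   # relative value assigned by stakeholders
    story_points: int     # relative effort estimated by the team

def plan_iterations(backlog, velocity):
    """Pack value-ordered features into iterations of `velocity` points."""
    ordered = sorted(backlog, key=lambda f: f.business_value / f.story_points,
                     reverse=True)
    iterations, current, remaining = [], [], velocity
    for feature in ordered:
        if feature.story_points > remaining and current:
            iterations.append(current)          # close the full iteration
            current, remaining = [], velocity
        current.append(feature)
        remaining -= feature.story_points
    if current:
        iterations.append(current)
    return iterations

backlog = [Feature("checkout", 80, 8), Feature("search", 60, 5),
           Feature("wishlist", 20, 3), Feature("reviews", 40, 5)]
for i, it in enumerate(plan_iterations(backlog, velocity=10), 1):
    print(f"Iteration {i}: {[f.name for f in it]}")
```

Highest value-per-point features land in the earliest iterations, which is the continuous value-based delivery the conclusion refers to.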
Mitigating smart card fault injection with link-time code rewriting: a feasibility study
We present a feasibility study to protect smart card software against fault-injection attacks by means of binary code rewriting. We implemented a range of protection techniques in a link-time rewriter, and we evaluate and discuss the obtained coverage, the associated overhead and engineering effort, and the approach's practical usability.
PVR: Patch-to-Volume Reconstruction for Large Area Motion Correction of Fetal MRI
In this paper we present a novel method for the correction of motion
artifacts that are present in fetal Magnetic Resonance Imaging (MRI) scans of
the whole uterus. Contrary to current slice-to-volume registration (SVR)
methods, requiring an inflexible anatomical enclosure of a single investigated
organ, the proposed patch-to-volume reconstruction (PVR) approach is able to
reconstruct a large field of view of non-rigidly deforming structures. It
relaxes rigid motion assumptions by introducing a specific amount of redundant
information that is exploited with parallelized patch-wise optimization,
super-resolution, and automatic outlier rejection. We further describe and
provide an efficient parallel implementation of PVR that allows its execution within a reasonable time on commercially available graphics processing units (GPUs), enabling its use in clinical practice. We evaluate PVR's computational overhead compared to standard methods and, in synthetic experiments, observe reconstruction accuracy improved by approximately 30% in the presence of affine motion artifacts compared to conventional SVR. Furthermore, we have
evaluated our method qualitatively and quantitatively on real fetal MRI data
subject to maternal breathing and sudden fetal movements. We evaluate
peak-signal-to-noise ratio (PSNR), structural similarity index (SSIM), and
cross correlation (CC) with respect to the originally acquired data and provide
a method for visual inspection of reconstruction uncertainty. With these
experiments we demonstrate successful application of PVR motion compensation to
the whole uterus, the human fetus, and the human placenta.
Comment: 10 pages, 13 figures, submitted to IEEE Transactions on Medical Imaging. v2: added funders' acknowledgements to preprint
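The three reported metrics are standard and straightforward to reproduce. A short sketch, assuming scikit-image is available and using synthetic volumes in place of the real MRI data:

```python
# Sketch of the reported image-quality metrics (PSNR, SSIM, and normalized
# cross-correlation) between a reconstruction and the acquired data.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_reconstruction(reconstructed, acquired):
    data_range = acquired.max() - acquired.min()
    psnr = peak_signal_noise_ratio(acquired, reconstructed, data_range=data_range)
    ssim = structural_similarity(acquired, reconstructed, data_range=data_range)
    # Normalized cross-correlation of the zero-mean, unit-variance volumes.
    a = (acquired - acquired.mean()) / acquired.std()
    r = (reconstructed - reconstructed.mean()) / reconstructed.std()
    cc = float(np.mean(a * r))
    return psnr, ssim, cc

# Synthetic stand-in data: a random volume and a noisy "reconstruction" of it.
rng = np.random.default_rng(0)
volume = rng.random((32, 32, 32))
noisy = volume + 0.05 * rng.standard_normal(volume.shape)
print(evaluate_reconstruction(noisy, volume))
```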
Using real options to select stable Middleware-induced software architectures
The requirements that force decisions towards building distributed system architectures are usually of a non-functional nature. Scalability, openness, heterogeneity, and fault-tolerance are examples of such non-functional requirements. The current trend is to build distributed systems with middleware, which provide the application developer with primitives for managing the complexity of distribution and system resources, and for realising many of the non-functional requirements. As non-functional requirements evolve, the 'coupling' between the middleware and architecture becomes the focal point for understanding the stability of the distributed software system architecture in the face of change. It is hypothesised that the choice of a stable distributed software architecture depends on the choice of the underlying middleware and its flexibility in responding to future changes in non-functional requirements. Drawing on a case study that adequately represents a medium-sized component-based distributed architecture, it is reported how a likely future change in scalability could impact the architectural structure of two versions, each induced with a distinct middleware: one with CORBA and the other with J2EE. An option-based model is derived to value the flexibility of the induced architectures and to guide the selection. The hypothesis is verified to be true for the given change. The paper concludes with some observations that could stimulate future research in the area of relating requirements to software architectures.
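The abstract does not specify the option model, but real-options analyses of this kind are commonly grounded in Black-Scholes call valuation, where a more flexible middleware lowers the cost of 'exercising' a future change. A sketch under that assumption, with made-up figures:

```python
# Illustrative only: a standard Black-Scholes call valuation, a common basis
# for real-options models of architectural flexibility. The paper's actual
# model is not given in the abstract; all inputs below are made up.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_value(S, K, T, r, sigma):
    """Value of the option to 'exercise' a future architectural change.
    S: present value of the expected payoff from accommodating the change
    K: cost of making the change under the given middleware
    T: time horizon (years), r: risk-free rate, sigma: payoff volatility
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# A more flexible middleware lowers the exercise cost K, raising option value.
print(call_value(S=100_000, K=60_000, T=2, r=0.05, sigma=0.4))
print(call_value(S=100_000, K=90_000, T=2, r=0.05, sigma=0.4))
```

Comparing the two option values quantifies how much the cheaper-to-change architecture's flexibility is worth, which is the selection criterion the paper's model is meant to support.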
Link-time smart card code hardening
This paper presents a feasibility study to protect smart card software against fault-injection attacks by means of link-time code rewriting. This approach avoids the drawbacks of source code hardening, avoids the need for manual assembly writing, and is applicable in conjunction with closed third-party compilers. We implemented a range of cookbook code hardening recipes in a prototype link-time rewriter and evaluated their coverage and associated overhead, concluding that this approach is promising. We demonstrate that the overhead of the automated link-time approach is not significantly higher than what can be obtained with compile-time hardening or with manual hardening of compiler-generated assembly code.
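The abstract does not enumerate the cookbook recipes, but a classic one is duplicating conditional checks so that a single injected fault cannot bypass a security test. A toy model of that recipe (not the paper's rewriter), using a simple instruction-skip fault model:

```python
# Toy model: count how many single instruction-skip faults defeat an
# unprotected PIN check vs. one hardened by the test-duplication recipe.
# The instruction lists are illustrative, not real smart card code.
def run(program, entered_ok, skip=None):
    """Execute a tiny branch program; `skip` simulates one skipped compare."""
    granted = False
    for i, op in enumerate(program):
        if i == skip:
            continue  # the injected fault: this instruction never executes
        if op == "check" and not entered_ok:
            return False  # compare failed: deny access
        if op == "grant":
            granted = True
    return granted

unprotected = ["check", "grant"]
hardened = ["check", "check", "grant"]  # duplicated compare

for name, prog in [("unprotected", unprotected), ("hardened", hardened)]:
    breaks = sum(run(prog, entered_ok=False, skip=i) for i in range(len(prog)))
    print(f"{name}: {breaks} of {len(prog)} single-skip faults grant access")
```

Skipping the only compare in the unprotected version grants access, while the hardened version survives every single-skip fault; a link-time rewriter can apply such duplication automatically to compiler-generated binaries.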
Fair value on commons-based intellectual property assets: Lessons of an estimation over Linux kernel.
Open source describes practices in production and development that promote access to the end product's source materials, spreading the development burden amongst individuals and companies. This model has resulted in a large and efficient ecosystem and unheralded software innovation, freely available to society. Open source methods are also increasingly being applied in other fields of endeavour, such as biotechnology or cultural production. But under the current financial reporting framework, volunteer activity is generally not reflected in financial statements. As a result, no value is recorded for volunteer contributions, and there is no single source of cost estimates for how much it has taken to develop an open source technology. This volunteer activity encompasses not only individuals but also corporations developing and contributing open source products. A standard methodology for reporting open source asset valuation is needed, and it must include value creation from the perspective of the different stakeholders.
Keywords: FLOSS, commons, accounting standards, financial reporting
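The abstract leaves the estimation method open; replacement-cost estimates for open source code bases such as the Linux kernel are often produced with the basic COCOMO model, where effort = 2.4 * KLOC^1.05 person-months in "organic" mode. A hedged sketch with placeholder inputs:

```python
# Hedged sketch: a basic-COCOMO cost estimate of the kind commonly used to
# put a replacement value on an open source code base. All inputs are
# placeholders; the paper's own estimation method is not detailed above.
def basic_cocomo_cost(sloc, avg_salary=75_000, overhead=2.4):
    kloc = sloc / 1000
    effort_pm = 2.4 * kloc ** 1.05          # person-months, "organic" mode
    schedule_m = 2.5 * effort_pm ** 0.38    # elapsed calendar months
    cost = effort_pm * (avg_salary / 12) * overhead
    return effort_pm, schedule_m, cost

effort, months, cost = basic_cocomo_cost(sloc=15_000_000)  # placeholder SLOC
print(f"{effort:,.0f} person-months, {months:,.0f} months, ${cost:,.0f}")
```

Such a figure approximates what the volunteer and corporate contributions would have cost to reproduce, which is one candidate basis for the fair-value reporting the paper calls for.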