A novel model for improving the maintainability of web-based systems
Web applications incorporate important business assets and offer a convenient way for
businesses to promote their services through the internet. Many of these web applications
have evolved from simple HTML pages into complex applications with high maintenance
costs. This is due to the inherent characteristics of web applications, to the fast
evolution of the internet, and to market pressure, which imposes short development
cycles and frequent modifications. To control the maintenance cost, quantitative
metrics and models for predicting web applications' maintainability must be used.
Maintainability metrics and models can be useful for predicting maintenance cost and
risky components, and can help in assessing and choosing between different software
artifacts. Since web applications differ from traditional software systems, models and
metrics for traditional systems cannot be applied with confidence to web applications.
Web applications have special features, such as hypertext structure, dynamic code
generation and heterogeneity, that cannot be captured by traditional and object-oriented
metrics.
This research empirically explores the relationships between maintainability and new
UML design metrics based on Conallen's extension for web applications. The UML web
design metrics are used to gauge whether the maintainability of a system can be
improved, by comparing and correlating their values with different measures of
maintainability. We studied the relationship between our UML metrics and the following
maintainability measures: Understandability Time (the time spent on understanding the
software artifact in order to complete the questionnaire), Modifiability Time (the time
spent on identifying places for modification and making those modifications on the
software artifact), LOC (absolute net value of the total number of lines added and
deleted for components in a class diagram), and nRev (total number of revisions for
components in a class diagram). Our results indicate a possible relationship between
our metrics and modifiability time. However, the results did not show a statistically
significant effect of the metrics on understandability time. Our results showed that
there is a relationship between our metrics and LOC. We found that the metrics NAssoc,
NClientScriptsComp, NServerScriptsComp, and CoupEntropy explained the effort measured
by LOC (lines of code), and that the NC and CoupEntropy metrics explained the effort
measured by nRev (number of revisions). Our results give a first indication of the
usefulness of the UML design metrics: they show that there is a reasonable chance that
useful prediction models can be built from early UML design metrics.
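Correlation studies of this kind typically rely on a rank correlation such as Spearman's rho, since metric/effort data is rarely normally distributed. The following is a minimal sketch of that analysis step; the NAssoc and modifiability-time values are invented purely for illustration and are not the study's data:

```python
def ranks(xs):
    """Assign average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # group tied values so they share an average rank
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical values: NAssoc per diagram vs. modifiability time (minutes)
nassoc = [3, 7, 5, 10, 2]
mod_time = [14, 31, 22, 45, 11]
rho = spearman(nassoc, mod_time)  # 1.0 here: perfectly monotone toy data
```

A rho near +1 or -1 on real measurements would support the kind of metric-effort relationship the study investigates; significance would still need a separate test.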
A COUPLING AND COHESION METRICS SUITE FOR
The increasing need for software quality measurements has led to extensive research
into software metrics and the development of software metric tools. To maintain high
quality software, developers need to strive for a low-coupled and highly cohesive
design. One of the many properties considered when measuring coupling and cohesion is
the type of relationships that make them up. The specific relationships involved are
widely understood and accepted by researchers and practitioners. However, different
researchers base their metrics on different subsets of these relationships.
Studies have shown that because a single coupling or cohesion measure may include
multiple subsets of relationships, the measures tend to correlate with each other.
Validation of these metrics against the maintainability index of a Java program
suggested that there is high multicollinearity among coupling and cohesion metrics.
This research introduces an approach to implementing coupling and cohesion metrics.
Every possible relationship is considered and, for each, we address whether or not it
has a significant effect on maintainability index prediction. The orthogonality of the
selected metrics is assessed by means of principal component analysis. The
investigation suggests that some of the metrics form an independent set, while others
measure a similar dimension.
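The principal-component check described above can be sketched in a few lines. The metric matrix below is invented solely to show the mechanics: two perfectly collinear "metrics" collapse onto a single component, which is exactly the multicollinearity signal PCA exposes:

```python
import numpy as np

def explained_variance(X):
    """Fraction of total variance captured by each principal component."""
    Xc = X - X.mean(axis=0)                  # centre each metric column
    cov = np.cov(Xc, rowvar=False)           # metric-by-metric covariance
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues, descending
    return eigvals / eigvals.sum()

# Hypothetical metric matrix: rows = classes, columns = two coupling
# metrics where the second is exactly twice the first (fully redundant).
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0],
              [4.0, 8.0]])
ratios = explained_variance(X)  # first component captures ~all variance
```

If the first few components explain nearly all the variance, the metrics are measuring a shared dimension; components with near-zero eigenvalues indicate redundant metrics that can be dropped from a prediction model.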
Do internal software quality tools measure validated metrics?
Internal software quality determines the maintainability of the software
product and influences the quality in use. There is a plethora of metrics which
purport to measure the internal quality of software, and these metrics are
offered by static software analysis tools. To date, a number of reports have
assessed the validity of these metrics. No data are available, however, on
whether the metrics offered by the tools have been validated in scientific
studies. The current study covers this gap by providing data on which tools
offer validated metrics, and how many. The results show that a range of the
metrics the tools provide do not seem to be validated in the literature, and
that only a small percentage of the metrics provided by the tools are
validated.
Estimation of Defect Proneness Using Design Complexity Measurements in Object-Oriented Software
Software engineering continuously faces the challenges of the growing
complexity of software packages and increasing volumes of data on defects and
drawbacks from the software production process. This calls for methods that
enable more reusable, reliable, easily maintainable and high-quality software
systems, with deeper control over the software generation process. Quality and
productivity are indeed the two most important parameters for controlling any
industrial process, and implementing a successful control system requires some
means of measurement. Software metrics play an important role in the management
aspects of the software development process, such as better planning,
assessment of improvements, resource allocation and reduction of
unpredictability. Early detection of potential problems, productivity
evaluation and evaluation of external quality factors such as reusability,
maintainability, defect proneness and complexity are of the utmost importance.
Here we discuss the application of the CK metrics and an estimation model to
predict external quality parameters, in order to optimize the design and
production processes for desired levels of quality. Defect proneness of an
object-oriented system is estimated at the design level using a novel
methodology that models the relationship between the CK metrics and a
defect-proneness index. A multifunctional estimation approach captures the
correlation between the CK metrics and the defect-proneness level of software
modules.
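Two of the CK metrics used as predictors here, WMC (Weighted Methods per Class) and DIT (Depth of Inheritance Tree), can be illustrated directly in code. This is a toy sketch with made-up classes and the common simplification of unit complexity per method; it is not the paper's estimation model:

```python
def wmc(cls):
    """Weighted Methods per Class, assigning unit complexity per method."""
    return sum(1 for _name, member in vars(cls).items() if callable(member))

def dit(cls):
    """Depth of Inheritance Tree: longest path up to the root class."""
    if cls is object:
        return 0
    return 1 + max(dit(base) for base in cls.__bases__)

# Hypothetical class hierarchy for illustration
class Shape:
    def area(self): ...
    def perimeter(self): ...

class Square(Shape):
    def area(self):
        return self.side ** 2
```

On this toy hierarchy, Shape contributes two methods (WMC = 2) and sits one level below the root (DIT = 1); Square defines one method of its own (WMC = 1) at depth two (DIT = 2). In the CK literature, higher WMC and deeper DIT values are commonly associated with greater defect proneness.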
QoS-Based Web Service Discovery and Selection Using Machine Learning
In service computing, the same target functions can be achieved by multiple
Web services from different providers. Because of these functional similarities,
the client needs to consider non-functional criteria. However, the Quality of
Service information provided by developers suffers from scarcity and a lack of
reliability. In addition, the reputation of service providers is an important
factor in selecting a service, especially for providers with little experience.
Most previous studies focused on users' feedback to justify the selection.
Unfortunately, users tend not to provide feedback unless they have had an
extremely good or bad experience with the service. In this vision paper, we
propose a novel architecture for web service discovery and selection. The core
component is a machine-learning-based methodology to predict QoS properties
using source code metrics. A credibility value and the previous usage count are
used to determine the reputation of the service.
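One simple form such a metrics-to-QoS predictor could take is an ordinary-least-squares regression. The metric names, values and target below are entirely hypothetical, and the paper's actual methodology may use a different learner; this only sketches the "predict QoS from code metrics" idea:

```python
import numpy as np

# Hypothetical training data: rows are services; columns are source code
# metrics (say, LOC and cyclomatic complexity). Target: response time (s).
X = np.array([[120.0, 4.0], [300.0, 9.0], [150.0, 5.0], [410.0, 12.0]])
y = np.array([1.0, 2.4, 1.25, 3.25])

# Fit y ~ X.w + b by ordinary least squares, using an intercept column.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_qos(metrics):
    """Predict the QoS property for a new service from its code metrics."""
    return float(np.append(metrics, 1.0) @ w)
```

A discovery component could then rank functionally equivalent candidate services by their predicted QoS, weighted by the provider's reputation score.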
Multi-Paradigm Metric and its Applicability on JAVA Projects
JAVA is one of the favorite languages amongst software developers. However, the
number of software metrics specifically designed to evaluate JAVA code is limited. In
this paper, we evaluate the applicability of a recently developed multi-paradigm metric
to JAVA projects. The experiments show that the multi-paradigm metric is an effective
measure for estimating the complexity of JAVA code and projects, and that it can
therefore be used for controlling the quality of the projects. We have also evaluated
the multi-paradigm metric against the principles of measurement theory.
Impact of Mediated Relations as a Confounding Factor on Cohesion and Coupling Metrics: Measuring Fault Proneness in OO Software Quality Assessment
Mediated class relations and method calls are evaluated as a confounding factor on the coupling and cohesion metrics used to assess the fault proneness of object-oriented software, and new cohesion and coupling metrics, labeled mediated cohesion (MCH) and mediated coupling (MCO), are proposed. These measures differ from the majority of established metrics in two respects: they reflect the degree to which entities are coupled or resemble each other, and they take account of mediated relations in couplings or similarities. An empirical comparison of the new measures with eight established metrics is described. The new measures are shown to be consistently superior at measuring fault proneness.
Software Metrics for Package Remodularisation
There is a plethora of software metrics \cite{Lore94a, Fent96a, Hend96a, Han00a, Lanz06a} and a large number of research articles. Still, there is a lack of serious, practically oriented evaluation of metrics. Metrics often lack the property that the software reengineer or quality expert can easily understand the situation they summarize. In particular, since the exact notions of coupling and cohesion are complex, a particular focus on this point is important. In the first chapter of the present document, we present a list of software metrics that are commonly used to measure object-oriented programs. In the second chapter we present our proposition for package metrics that capture package aspects such as information hiding and change impact limits.
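As a concrete example of package-level coupling measurement (one common formulation from the literature, Martin's afferent/efferent coupling and instability, not necessarily the metrics this document proposes), the values can be computed from a package dependency graph. The packages below are invented for illustration:

```python
# Hypothetical package dependency graph: package -> packages it uses
deps = {
    "ui":   {"core", "util"},
    "core": {"util"},
    "util": set(),
}

def efferent(pkg):
    """Ce: number of packages this package depends on."""
    return len(deps[pkg])

def afferent(pkg):
    """Ca: number of packages that depend on this package."""
    return sum(1 for _p, targets in deps.items() if pkg in targets)

def instability(pkg):
    """Martin's instability I = Ce / (Ca + Ce); 0 = stable, 1 = unstable."""
    ce, ca = efferent(pkg), afferent(pkg)
    return ce / (ca + ce) if (ca + ce) else 0.0
```

Here instability("util") is 0.0 (only incoming dependencies, so changes to it ripple outward) while instability("ui") is 1.0 (only outgoing dependencies); a change-impact analysis for remodularisation would flag packages that many others depend on yet are themselves unstable.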