Soft computing techniques for software effort estimation
The effort invested in a software project is probably one of the most
important and most analyzed variables in recent years in the process of project
management. The limitation of algorithmic effort prediction models is their
inability to cope with uncertainties and imprecision surrounding software
projects at the early development stage. More recently, attention has turned to
a variety of machine learning methods, and to soft computing in particular, to
predict software development effort. Soft computing is a consortium of
methodologies centered on fuzzy logic, artificial neural networks, and
evolutionary computation. It is important to note that these methodologies are
complementary and synergistic rather than competitive: they provide, in one
form or another, flexible information processing capabilities for handling
ambiguous real-life situations. These methodologies are currently used for
reliable and accurate estimation of software development effort, which has
always been a challenge for both the software industry and academia. The aim of
this study is to analyze the soft computing techniques used in existing models
and to provide an in-depth review of the software and project estimation
techniques found in industry and the literature, based on the different test
datasets, along with their strengths and weaknesses.
Regularized Fuzzy Neural Networks to Aid Effort Forecasting in the Construction and Software Development
Predicting the time to build software is a very complex task for software
engineering managers. Complex factors can directly interfere with the
productivity of the development team, and factors related to the complexity of
the system to be developed drastically change the time needed to complete the
work in software factories. This work proposes a hybrid system based on
artificial neural networks and fuzzy systems to help construct a rule-based
expert system that supports the prediction of the hours required to develop
software according to the complexity of its elements. The set of fuzzy rules
obtained by the system aids the management and control of software development
by providing a base of interpretable, fuzzy-rule-based estimates. The model was
tested on a real database, and its results were promising for building a
mechanism to aid in predicting software construction effort.
Multi criteria decision making approach for selecting effort estimation model
Effort estimation has always been a challenging task for project managers.
Many researchers have tried to help them by creating different types of models.
It has already been shown that no single model is successful for all types of
projects and every type of environment. The Analytic Hierarchy Process has been
identified as a tool for multi-criteria decision making, and researchers have
noted that it can be used to compare the effort estimates of different models
and techniques. The problem with the traditional Analytic Hierarchy Process,
however, is its inability to deal with the imprecision and subjectivity of the
pairwise comparison process. The aim of this paper is to propose the Fuzzy
Analytic Hierarchy Process, which rectifies the subjectivity and imprecision of
the Analytic Hierarchy Process and can be used to select the model best suited
to estimating the effort for a given problem type or environment. Instead of a
single crisp value, the Fuzzy Analytic Hierarchy Process uses a range of values
to incorporate the decision maker's uncertainty. From this range, the decision
maker can select the value that reflects his confidence and can also specify
his attitude (optimistic, pessimistic, or moderate). In this work, the
comparison of the Analytic Hierarchy Process and the Fuzzy Analytic Hierarchy
Process is carried out using a case study of selecting an effort estimation
model.
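The fuzzy pairwise-comparison step described above can be sketched with triangular fuzzy numbers. The following is a minimal illustration, assuming Buckley's geometric-mean method for deriving fuzzy weights and the Liou-Wang total integral value for defuzzification with an attitude index (1.0 optimistic, 0.0 pessimistic); the paper's exact Fuzzy AHP variant may differ.

```python
import math

def fuzzy_weights(matrix):
    """Buckley's geometric-mean method. matrix[i][j] is a triangular
    fuzzy number (l, m, u) comparing criterion i with criterion j."""
    n = len(matrix)
    row_means = []
    for row in matrix:
        l = math.prod(t[0] for t in row) ** (1.0 / n)
        m = math.prod(t[1] for t in row) ** (1.0 / n)
        u = math.prod(t[2] for t in row) ** (1.0 / n)
        row_means.append((l, m, u))
    # Normalize; bounds are swapped in the divisor so that l <= m <= u holds.
    sl = sum(t[0] for t in row_means)
    sm = sum(t[1] for t in row_means)
    su = sum(t[2] for t in row_means)
    return [(l / su, m / sm, u / sl) for (l, m, u) in row_means]

def defuzzify(tfn, attitude=0.5):
    """Liou-Wang total integral value; attitude=1.0 is optimistic,
    0.0 pessimistic, 0.5 moderate."""
    l, m, u = tfn
    return attitude * (m + u) / 2.0 + (1.0 - attitude) * (l + m) / 2.0

# Two criteria: the first judged "weakly more important" (2, 3, 4).
comparisons = [
    [(1, 1, 1), (2, 3, 4)],
    [(1 / 4, 1 / 3, 1 / 2), (1, 1, 1)],
]
weights = [defuzzify(w) for w in fuzzy_weights(comparisons)]
```

The attitude index realizes the optimistic/pessimistic/moderate choice the abstract mentions: an optimistic decision maker leans on the upper bounds of the fuzzy weights, a pessimistic one on the lower bounds.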
Analyzing the Relationship between Project Productivity and Environment Factors in the Use Case Points Method
Project productivity is a key factor for producing effort estimates from Use
Case Points (UCP), especially when a historical dataset is absent. The first
versions of UCP effort estimation models used a fixed or very limited number of
productivity ratios for all new projects. These approaches were not well
examined over a large number of projects, so the validity of those studies was
open to criticism. The newly available large software datasets
allow us to perform further research on the usefulness of productivity for
effort estimation of software development. Specifically, we studied the
relationship between project productivity and UCP environmental factors, as
they have a significant impact on the amount of productivity needed for a
software project. Therefore, we designed four studies, using various
classification and regression methods, to examine the usefulness of that
relationship and its impact on UCP effort estimation. The results we obtained
are encouraging and show potential improvement in effort estimation.
Furthermore, the efficiency of that relationship is better over a dataset that
comes from industry, because of the quality of its data collection. Our
conclusion from the findings is that it is better to exclude environmental
factors from the UCP calculation and use them only for computing productivity.
The study also encourages project managers to learn to better assess the
environmental factors, as they do have a significant impact on productivity.
Comment: Journal of Software: Evolution and Process, 201
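For context, the standard Use Case Points model (Karner) multiplies the unadjusted points by a technical complexity factor (TCF) and an environmental complexity factor (ECF), then by a productivity ratio in hours per UCP. Below is a minimal sketch of the standard calculation plus the variant the abstract argues for, where ECF is dropped from UCP and the environmental factors drive productivity instead; the productivity mapping in that variant is a hypothetical illustration, not the paper's fitted model.

```python
# Karner's weights for the eight environmental factors (EF).
ENV_WEIGHTS = [1.5, 0.5, 1.0, 0.5, 1.0, 2.0, -1.0, -1.0]

def ecf(env_ratings):
    """Environmental complexity factor; each rating is 0-5."""
    ef_sum = sum(w * r for w, r in zip(ENV_WEIGHTS, env_ratings))
    return 1.4 - 0.03 * ef_sum

def tcf(tech_factor_sum):
    """Technical complexity factor from the weighted technical-factor sum."""
    return 0.6 + 0.01 * tech_factor_sum

def effort_standard(uucw, uaw, tech_sum, env_ratings, hours_per_ucp=20.0):
    """Classic UCP effort: UCP = (UUCW + UAW) * TCF * ECF, times hours/UCP."""
    ucp = (uucw + uaw) * tcf(tech_sum) * ecf(env_ratings)
    return ucp * hours_per_ucp

def effort_env_driven(uucw, uaw, tech_sum, env_ratings):
    """Variant suggested by the findings: exclude ECF from UCP and let the
    environmental factors set the productivity ratio instead. The linear
    mapping below is purely illustrative."""
    ucp = (uucw + uaw) * tcf(tech_sum)
    ef_sum = sum(w * r for w, r in zip(ENV_WEIGHTS, env_ratings))
    hours_per_ucp = 20.0 + 0.5 * (13.5 - ef_sum)  # hypothetical mapping
    return ucp * hours_per_ucp
```

With all environmental ratings at 3, `ecf` gives 0.995 and both variants nearly coincide; the variants diverge as the environment departs from neutral, which is exactly where the abstract locates the productivity effect.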
Methods of Technical Prognostics Applicable to Embedded Systems
The main aim of the thesis is to provide a comprehensive overview of technical prognostics, which is applied in condition-based maintenance: continuous device monitoring and remaining-useful-life estimation, especially in the field of complex equipment and machinery.
Nowadays, technical prognostics is still an evolving discipline with a limited number of real applications, and it is not as well developed as technical diagnostics, which is fairly well mapped and deployed in real systems. The thesis provides an overview of basic methods applicable to predicting remaining useful life, along with metrics that help compare the different approaches both in terms of accuracy and in terms of computational/deployment cost. One of the research cores consists of recommendations and a guide for selecting an appropriate forecasting method with regard to prognostic criteria. The second research core describes the particle filtering framework suitable for model-based prognostics, and verifies and compares its implementations. The main research topic of the thesis is a case study on the very timely subject of Li-Ion battery health monitoring and prognostics under continuous monitoring. The case study demonstrates a model-based prognostic process and compares possible approaches for estimating both the runtime to discharge and the capacity fade. The proposed methodology is verified on real measured data.
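The particle-filtering idea at the core of model-based prognostics can be sketched on the battery case: particles represent candidate fade rates of an assumed exponential capacity model, their weights are updated from noisy capacity measurements, and the posterior fade rate yields a remaining-useful-life (RUL) estimate. This is a minimal sketch under those assumptions; the thesis's actual battery model and filter are more elaborate.

```python
import math
import random

random.seed(0)

def simulate_capacity(lam=0.004, c0=1.0, noise=0.005, steps=30):
    """Hypothetical capacity-fade data: exponential decay plus noise."""
    return [c0 * math.exp(-lam * k) + random.gauss(0, noise) for k in range(steps)]

def particle_filter_rul(measurements, threshold=0.8, n=2000, noise=0.005):
    """SIR particle filter over the unknown fade rate lambda."""
    particles = [random.uniform(0.0, 0.02) for _ in range(n)]
    weights = [1.0 / n] * n
    for k, z in enumerate(measurements):
        # Weight each particle by the likelihood of the measurement.
        weights = [w * math.exp(-((z - math.exp(-lam * k)) ** 2) / (2 * noise ** 2))
                   for w, lam in zip(weights, particles)]
        s = sum(weights)
        weights = [w / s for w in weights]
        # Resample when the effective sample size collapses, with jitter
        # to keep particle diversity.
        ess = 1.0 / sum(w * w for w in weights)
        if ess < n / 2:
            particles = random.choices(particles, weights=weights, k=n)
            particles = [max(1e-6, p + random.gauss(0, 1e-4)) for p in particles]
            weights = [1.0 / n] * n
    lam_hat = sum(w * p for w, p in zip(weights, particles))
    # Steps remaining until capacity crosses the end-of-life threshold.
    k_eol = -math.log(threshold) / lam_hat
    return max(0.0, k_eol - (len(measurements) - 1)), lam_hat
```

Running the filter on 30 simulated measurements recovers a fade rate close to the true 0.004 per step and turns it into an RUL estimate against an 80% end-of-life threshold.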
Predicting and Evaluating Software Model Growth in the Automotive Industry
The size of a software artifact influences the software quality and impacts
the development process. In industry, when software size exceeds certain
thresholds, memory errors accumulate and development tools might no longer be
able to cope, resulting in lengthy program start-up times, failing builds, or
memory problems at unpredictable times. Thus, foreseeing critical growth in
software modules meets a high demand in industrial practice. Predicting the
time when the size grows to the level where maintenance is needed prevents
unexpected efforts and helps to spot problematic artifacts before they become
critical.
Although the number of prediction approaches in the literature is vast, it is
unclear how well they fit the prerequisites and expectations of practice. In
this paper, we perform an industrial case study at an automotive manufacturer
to explore the applicability and usability of prediction approaches in
practice. In a first step, we collect the most relevant prediction approaches
from the literature, including approaches based on both statistics and machine
learning.
Furthermore, we elicit expectations towards predictions from practitioners
using a survey and stakeholder workshops. At the same time, we measure software
size of 48 software artifacts by mining four years of revision history,
resulting in 4,547 data points. In the last step, we assess the applicability
of state-of-the-art prediction approaches using the collected data by
systematically analyzing how well they fulfill the practitioners' expectations.
Our main contribution is a comparison of commonly used prediction approaches
in a real world industrial setting while considering stakeholder expectations.
We show that the approaches provide significantly different results regarding
prediction accuracy and that the statistical approaches fit our data best.
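The abstract does not name the statistical approaches that fit the data best; as a minimal illustration of the underlying task, here is a hedged sketch that fits a linear trend to a module's size history and forecasts the revision index at which a maintenance threshold would be crossed.

```python
def forecast_threshold_crossing(sizes, threshold):
    """Fit a least-squares linear trend to a size history (one value per
    revision) and return the revision index at which the trend crosses
    the given maintenance threshold, or None if the module is not growing."""
    n = len(sizes)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(sizes) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, sizes)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    if slope <= 0:
        return None  # shrinking or flat: no predicted crossing
    return (threshold - intercept) / slope
```

For a module growing linearly from 100 units by 10 per revision, a threshold of 300 is forecast at revision 20; real size histories are noisier, which is where the paper's comparison of statistical and machine learning approaches comes in.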
Fusarium Damaged Kernels Detection Using Transfer Learning on Deep Neural Network Architecture
The present work shows the application of transfer learning for a pre-trained
deep neural network (DNN), using a small image dataset (12,000) on a
single workstation with an enabled NVIDIA GPU card; training takes up to 1 hour
to complete and achieves an overall average accuracy of .
The DNN presents a score of misclassification for an external test
dataset. The accuracy of the proposed methodology is equivalent to that of
approaches using HSI methodology for the same task, but with the advantage of
not depending on special equipment to classify wheat kernels for FHB
symptoms.
A Deep Learning and Gamification Approach to Energy Conservation at Nanyang Technological University
The implementation of smart building technology in the form of smart
infrastructure applications has great potential to improve sustainability and
energy efficiency by leveraging humans-in-the-loop strategy. However, human
preference in regard to living conditions is usually unknown and heterogeneous
in its manifestation as control inputs to a building. Furthermore, the
occupants of a building typically lack the independent motivation necessary to
contribute to and play a key role in the control of smart building
infrastructure. Moreover, true human actions and their integration with
sensing/actuation platforms remains unknown to the decision maker tasked with
improving operational efficiency. By modeling user interaction as a sequential
discrete game between non-cooperative players, we introduce a gamification
approach for supporting user engagement and integration in a human-centric
cyber-physical system. We propose the design and implementation of a
large-scale network game with the goal of improving the energy efficiency of a
building through the utilization of cutting-edge Internet of Things (IoT)
sensors and cyber-physical systems sensing/actuation platforms. A benchmark
utility learning framework that employs robust estimations for classical
discrete choice models is provided for the derived high-dimensional imbalanced
data. To improve forecasting performance, we extend the benchmark utility
learning scheme by leveraging Deep Learning end-to-end training with Deep
bi-directional Recurrent Neural Networks. We apply the proposed methods to high
dimensional data from a social game experiment designed to encourage energy
efficient behavior among smart building occupants in Nanyang Technological
University (NTU) residential housing. Using occupant-retrieved actions for
resources such as lighting and A/C, we simulate the game defined by the
estimated utility functions.
Comment: 16 double pages; a shorter version was submitted to the Applied Energy Journal
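The benchmark step, estimating a classical discrete choice model from observed occupant actions, can be sketched with a binary logit fitted by gradient ascent on the log-likelihood. This is a generic illustration, not the paper's robust estimator or its bi-directional recurrent network extension.

```python
import math

def fit_logit(features, choices, lr=0.1, epochs=2000):
    """Maximum-likelihood fit of a binary logit choice model by batch
    gradient ascent. features[i] is the utility covariate vector for
    observation i; choices[i] is the observed action, 0 or 1."""
    k = len(features[0])
    beta = [0.0] * k
    for _ in range(epochs):
        grad = [0.0] * k
        for x, y in zip(features, choices):
            u = sum(b * xi for b, xi in zip(beta, x))  # latent utility
            p = 1.0 / (1.0 + math.exp(-u))             # choice probability
            for j in range(k):
                grad[j] += (y - p) * x[j]
        beta = [b + lr * g / len(features) for b, g in zip(beta, grad)]
    return beta
```

On a toy dataset where the action flips with the sign of a single covariate, the fitted utility coefficient comes out positive, recovering the preference direction; the paper's extension replaces this hand-built utility with features learned end to end.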
An Augmented Lagrangian Method for Piano Transcription using Equal Loudness Thresholding and LSTM-based Decoding
A central goal in automatic music transcription is to detect individual note
events in music recordings. An important variant is instrument-dependent music
transcription where methods can use calibration data for the instruments in
use. However, despite the additional information, results rarely exceed an
f-measure of 80%. As a potential explanation, the transcription problem can be
shown to be badly conditioned and thus relies on appropriate regularization. A
recently proposed method employs a mixture of simple, convex regularizers (to
stabilize the parameter estimation process) and more complex terms (to
encourage more meaningful structure). In this paper, we present two extensions
to this method. First, we integrate a computational loudness model to better
differentiate real from spurious note detections. Second, we employ
(Bidirectional) Long Short Term Memory networks to re-weight the likelihood of
detected note constellations. Despite their simplicity, our two extensions lead
to a drop of about 35% in note error rate compared to the state of the art.
A review of machine learning techniques in photoplethysmography for the non-invasive cuff-less measurement of blood pressure
Hypertension, or high blood pressure, is a leading cause of death throughout the world and a critical factor in increasing the risk of serious diseases, including cardiovascular diseases such as stroke and heart failure. Blood pressure is a primary vital sign that must be monitored regularly for the early detection, prevention and treatment of cardiovascular diseases. Traditional blood pressure measurement techniques are either invasive or cuff-based, which makes them impractical, intermittent, and uncomfortable for patients. Over the past few decades, several indirect approaches using the photoplethysmogram (PPG) have been investigated, namely pulse transit time, pulse wave velocity, pulse arrival time and pulse wave analysis, in an effort to utilise PPG for estimating blood pressure. Recent advancements in signal processing techniques, including machine learning and artificial intelligence, have also opened up exciting new horizons for PPG-based cuff-less and continuous monitoring of blood pressure. Such a device would have a significant and transformative impact on monitoring patients’ vital signs, especially for those at risk of cardiovascular disease. This paper provides a comprehensive review of non-invasive cuff-less blood pressure estimation using the PPG approach, along with its challenges and limitations.
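Among the PPG-based approaches listed, pulse transit time (PTT) is commonly related to blood pressure through an inverse relationship calibrated per subject. A minimal sketch, assuming a simple BP ≈ a/PTT + b calibration against paired cuff readings; models in the literature are considerably richer, adding waveform features and machine-learned regressors.

```python
def fit_ptt_bp(ptt_values, bp_values):
    """Calibrate the inverse-PTT model BP = a / PTT + b by least squares
    on paired pulse-transit-time (seconds) and cuff BP (mmHg) readings."""
    xs = [1.0 / t for t in ptt_values]
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(bp_values) / n
    a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, bp_values)) / \
        sum((x - x_mean) ** 2 for x in xs)
    b = y_mean - a * x_mean
    return a, b

def estimate_bp(ptt, a, b):
    """Cuff-less BP estimate from a new PTT measurement."""
    return a / ptt + b
```

Because the a and b coefficients drift with vascular tone, such models need periodic recalibration against a cuff, one of the practical limitations the review discusses.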