6,784 research outputs found
DeepSoft: A vision for a deep model of software
Although software analytics has experienced rapid growth as a research area,
it has not yet reached its full potential for wide industrial adoption. Most of
the existing work in software analytics still relies heavily on costly manual
feature engineering and mainly addresses traditional classification
problems, as opposed to predicting future events. We present a
vision for \emph{DeepSoft}, an \emph{end-to-end} generic framework for modeling
software and its development process to predict future risks and recommend
interventions. DeepSoft, partly inspired by human memory, is built upon the
powerful deep-learning-based Long Short-Term Memory (LSTM) architecture that is
capable of learning long-term temporal dependencies that occur in software
evolution. Such deep learned patterns of software can be used to address a
range of challenging problems such as code and task recommendation and
prediction. DeepSoft provides a new approach for research into modeling of
source code, risk prediction and mitigation, developer modeling, and
automatically generating code patches from bug reports.
Comment: FSE 201
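The abstract above rests on the LSTM recurrence and its gated long-term memory. As a minimal illustrative sketch (pure Python, toy dimensions, random weights; not the DeepSoft model itself), one LSTM cell step over a sequence of numerically encoded "software events" looks like this:

```python
import math, random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: z = W @ [h_prev; x] + b, gate order: input, forget,
    output, candidate. The cell state c carries long-term memory."""
    inp = h_prev + x
    z = [sum(w * v for w, v in zip(row, inp)) + bi for row, bi in zip(W, b)]
    H = len(h_prev)
    i = [sigmoid(v) for v in z[0:H]]          # input gate
    f = [sigmoid(v) for v in z[H:2*H]]        # forget gate
    o = [sigmoid(v) for v in z[2*H:3*H]]      # output gate
    g = [math.tanh(v) for v in z[3*H:4*H]]    # candidate cell state
    c = [fv*cv + iv*gv for fv, cv, iv, gv in zip(f, c_prev, i, g)]
    h = [ov*math.tanh(cv) for ov, cv in zip(o, c)]
    return h, c

random.seed(0)
H, X = 2, 1                                   # hidden size, input size
W = [[random.gauss(0, 0.1) for _ in range(H + X)] for _ in range(4 * H)]
b = [0.0] * (4 * H)
h, c = [0.0] * H, [0.0] * H
for _ in range(10):                           # a toy sequence of 10 "events"
    h, c = lstm_step([random.gauss(0, 1)], h, c, W, b)
```

The multiplicative forget gate `f` is what lets the cell retain or discard information across long event sequences, the property the vision paper leans on for software evolution data.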
Personal recommendations in requirements engineering: the OpenReq approach
[Context & motivation] Requirements Engineering (RE) is considered one of the most critical phases in software development, but many challenges remain open. [Problem] There is a growing trend of applying recommender systems to open RE challenges such as requirements and stakeholder discovery; however, existing proposals focus on specific RE tasks and do not cover the RE process as a whole. [Principal ideas/results] In this research preview, we present the OpenReq approach to the development of intelligent recommendation and decision technologies that support different phases of RE in software projects. Specifically, we present the OpenReq part for personal recommendations for stakeholders. [Contribution] OpenReq's aim is to improve and speed up RE processes, especially in large and distributed systems.
Peer Reviewed. Postprint (author's final draft)
Predicting Scheduling Failures in the Cloud
Cloud Computing has emerged as a key technology to deliver and manage
computing, platform, and software services over the Internet. Task scheduling
algorithms play an important role in the efficiency of cloud computing services
as they aim to reduce the turnaround time of tasks and improve resource
utilization. Several task scheduling algorithms have been proposed in the
literature for cloud computing systems, the majority relying on the
computational complexity of tasks and the distribution of resources. However,
many tasks scheduled by these algorithms still fail because of
unforeseen changes in the cloud environments. In this paper, using task
execution and resource utilization data extracted from the execution traces of
real world applications at Google, we explore the possibility of predicting the
scheduling outcome of a task using statistical models. If we can successfully
predict task failures, we may be able to reduce the execution time of jobs by
rescheduling failed tasks earlier (i.e., before their actual failing time). Our
results show that statistical models can predict task failures with a precision
up to 97.4%, and a recall up to 96.2%. We simulate the potential benefits of
such predictions using the GloudSim toolkit and find that they can increase
the number of finished tasks by up to 40%. We also perform a case study using
the Hadoop framework of Amazon Elastic MapReduce (EMR) and the jobs of a gene
expression correlations analysis study from breast cancer research. We find
that when extending the scheduler of Hadoop with our predictive models, the
percentage of failed jobs can be reduced by up to 45%, with an overhead of less
than 5 minutes.
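The precision and recall figures quoted above come from the standard confusion-matrix definitions. A small sketch (with made-up task outcomes, not Google's traces) of how a binary task-failure predictor would be scored:

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for a binary task-failure predictor
    (1 = failure, 0 = success)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy outcomes for 8 scheduled tasks (illustrative data only).
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 0, 1, 1, 0]
p, r = precision_recall(actual, predicted)
print(round(p, 2), round(r, 2))  # 0.75 0.75
```

High precision matters here because each predicted failure triggers a rescheduling action, so false positives waste scheduler work.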
DeepSolarEye: Power Loss Prediction and Weakly Supervised Soiling Localization via Fully Convolutional Networks for Solar Panels
The impact of soiling on solar panels is an important and well-studied
problem in the renewable energy sector. In this paper, we present the first
convolutional neural network (CNN) based approach for solar panel soiling and
defect analysis. Our approach takes an RGB image of a solar panel and
environmental factors as inputs to predict power loss, soiling localization,
and soiling type. In computer vision, localization is a complex task which
typically requires manually labeled training data such as bounding boxes or
segmentation masks. Our proposed approach consists of four specialized stages
that completely avoid localization ground truth and need only panel images
with power-loss labels for training. The impact regions obtained from the
predicted localization masks are classified into soiling types using webly
supervised learning. To improve the localization capabilities of CNNs, we
introduce a novel bi-directional input-aware fusion (BiDIAF) block that
reinforces the input at different levels of CNN to learn input-specific feature
maps. Our empirical study shows that BiDIAF improves the power loss prediction
accuracy by about 3% and localization accuracy by about 4%. Our end-to-end
model yields a further improvement of about 24% on localization when learned in
weakly supervised manner. Our approach is generalizable and showed promising
results on web crawled solar panel images. Our system has a frame rate of 22
fps (including all steps) on an NVIDIA TitanX GPU. Additionally, we collected
a first-of-its-kind dataset for solar panel image analysis consisting of
45,000+ images.
Comment: Accepted for publication at WACV 201
Dynamics of Internal Models in Game Players
A new approach for the study of social games and communications is proposed.
Games are simulated between cognitive players who build the opponent's internal
model and decide their next strategy from predictions based on the model. In
this paper, internal models are constructed by the recurrent neural network
(RNN), and the iterated prisoner's dilemma game is performed. The RNN allows us
to express the internal model in a geometrical shape. The complicated
transients of actions are observed before the stable mutually defecting
equilibrium is reached. During the transients, the model shape also becomes
complicated and often experiences chaotic changes. These new chaotic dynamics
of internal models reflect the dynamical and high-dimensional rugged landscape
of the internal model space.
Comment: 19 pages, 6 figure
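The iterated prisoner's dilemma underlying this study is easy to state in code. A minimal sketch with the standard payoffs (T=5 > R=3 > P=1 > S=0) and two fixed strategies standing in for the paper's RNN-based opponent models:

```python
# One-shot payoffs (row player, column player) for moves C(ooperate)/D(efect).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds):
    """Iterate the game; each strategy maps the opponent's move history to
    its next move (a stand-in for prediction from an internal model)."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: opp[-1] if opp else "C"  # copy opponent's last move
always_defect = lambda opp: "D"
print(play(tit_for_tat, always_defect, 10))  # (9, 14)
```

Mutual defection, the stable equilibrium the abstract describes, appears here from round two onward; the paper's contribution is the chaotic transient dynamics that learned RNN models exhibit before settling there.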
Framework for the Automation of SDLC Phases using Artificial Intelligence and Machine Learning Techniques
Software Engineering acts as a foundation stone for any software that is being built. It provides a common road-map for constructing software in any domain. Not following a well-defined Software Development Model has led to the failure of many software projects in the past. Agile is the Software Development Life Cycle (SDLC) model most widely used in practice in the IT industry to develop software on various technologies such as Big Data, Machine Learning, Artificial Intelligence, and Deep Learning. The focus on the Software Engineering side in recent years has been on automating the various phases of the SDLC, namely Requirements Analysis, Design, Coding, Testing, and Operations and Maintenance. Incorporating trending technologies such as Machine Learning and Artificial Intelligence into the various phases of the SDLC could facilitate better execution of each of these phases. This in turn helps cut down costs, save time, improve efficiency, and reduce the manual effort required for each of these phases. The aim of this paper is to present a framework for the application of various Artificial Intelligence and Machine Learning techniques in the different phases of the SDLC.