Story Point Estimation Using Issue Reports With Deep Attention Neural Network
Background: Estimating the effort required for software engineering tasks is notoriously difficult, yet critical for project planning. In the agile community, issue reports are frequently used to describe tasks, and story points are used to estimate task effort.
Aim: This paper proposes a machine learning regression model for estimating the number of story points needed to solve a task. The system can be trained from raw input data to predict outcomes without the need for manual feature engineering.
Method: The proposed model uses hierarchical attention networks, with attention mechanisms implemented at two levels: words and sentences. The model gradually constructs a document vector by grouping significant words into sentence vectors and then merging significant sentence vectors into a document vector. The document vector is then fed into a shallow neural network to predict the story points.
Results: The experiments show that the proposed approach outperforms the state-of-the-art technique Deep-S which uses Recurrent Highway Networks. The proposed model has improved Mean Absolute Error (MAE) by an average of 16.6% and has improved Median Absolute Error (MdAE) by an average of 53%.
Conclusion: An empirical evaluation shows that the proposed approach outperforms the previous work.
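The word-then-sentence attention pooling described in the abstract can be sketched in plain NumPy. This is an illustrative forward pass only: the random weights stand in for trained parameters, and all dimensions and names are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(vectors, context):
    """Weight each vector by its similarity to a (learned) context vector,
    then return the weighted sum: a single pooled vector."""
    scores = softmax(vectors @ context)   # one weight per input vector
    return scores @ vectors               # convex combination of the vectors

# Toy issue report: 3 sentences, each a sequence of word embeddings (dim 8).
dim = 8
sentences = [rng.normal(size=(n_words, dim)) for n_words in (5, 7, 4)]

word_context = rng.normal(size=dim)   # word-level attention parameters
sent_context = rng.normal(size=dim)   # sentence-level attention parameters

# Word-level attention: pool each sentence's words into a sentence vector.
sentence_vecs = np.stack([attention_pool(s, word_context) for s in sentences])

# Sentence-level attention: pool sentence vectors into one document vector.
doc_vec = attention_pool(sentence_vecs, sent_context)

# A shallow regressor on the document vector predicts the story points.
w, b = rng.normal(size=dim), 0.0
story_points = float(doc_vec @ w + b)
print(doc_vec.shape)
```

In the real model the pooling weights and context vectors are trained end-to-end from the raw issue text, which is what removes the need for manual feature engineering.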
A survey of comics research in computer science
Graphic novels such as comics and manga are well known all over the world.
The digital transition started to change the way people are reading comics,
more and more on smartphones and tablets and less and less on paper. In the
recent years, a wide variety of research about comics has been proposed and
might change the way comics are created, distributed and read in future years.
Early work focuses on low-level document image analysis: indeed, comic books are
complex documents; they contain text, drawings, balloons, panels, onomatopoeia, etc.
Different fields of computer science, such as multimedia, artificial
intelligence, and human-computer interaction, have covered research about user
interaction and content generation, each with a different set of values. In this
paper, we propose to review the previous research about comics in computer
science, to state what has been done, and to give some insights about the main
outlooks.
A Scalable Deep Neural Network Architecture for Multi-Building and Multi-Floor Indoor Localization Based on Wi-Fi Fingerprinting
One of the key technologies for future large-scale location-aware services
covering a complex of multi-story buildings --- e.g., a big shopping mall and a
university campus --- is a scalable indoor localization technique. In this
paper, we report the current status of our investigation on the use of deep
neural networks (DNNs) for scalable building/floor classification and
floor-level position estimation based on Wi-Fi fingerprinting. Exploiting the
hierarchical nature of the building/floor estimation and floor-level
coordinates estimation of a location, we propose a new DNN architecture
consisting of a stacked autoencoder for the reduction of feature space
dimension and a feed-forward classifier for multi-label classification of
building/floor/location, on which the multi-building and multi-floor indoor
localization system based on Wi-Fi fingerprinting is built. Experimental
results for the performance of building/floor estimation and floor-level
coordinates estimation of a given location demonstrate the feasibility of the
proposed DNN-based indoor localization system, which can provide near
state-of-the-art performance using a single DNN, for the implementation with
lower complexity and energy consumption at mobile devices.
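The encoder-plus-classifier pipeline described above can be sketched as a NumPy forward pass. The weights below are random stand-ins for parameters that would be learned by autoencoder pre-training and classifier fine-tuning; the 520-access-point input follows the public UJIIndoorLoc layout (an assumption, since the abstract does not name its dataset), and all other sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# Toy Wi-Fi fingerprints: RSSI readings from 520 access points, batch of 4 scans.
n_aps, batch = 520, 4
X = rng.normal(size=(batch, n_aps))

# Encoder half of a stacked autoencoder: 520 -> 256 -> 64 reduces the
# feature-space dimension of each fingerprint.
W_enc1 = rng.normal(size=(n_aps, 256)) * 0.05
W_enc2 = rng.normal(size=(256, 64)) * 0.05
codes = relu(relu(X @ W_enc1) @ W_enc2)

# Feed-forward classifier over the codes, with one sigmoid output per
# building/floor/location label (say 3 buildings x 5 floors = 15 labels),
# so multiple labels can be scored independently (multi-label setting).
n_labels = 15
W_clf = rng.normal(size=(64, n_labels)) * 0.05
probs = 1.0 / (1.0 + np.exp(-(codes @ W_clf)))
print(codes.shape, probs.shape)
```

The point of the hierarchy is that one network scores building, floor, and location jointly, so a single DNN serves the whole complex instead of one model per building or floor.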
500+ Times Faster Than Deep Learning (A Case Study Exploring Faster Methods for Text Mining StackOverflow)
Deep learning methods are useful for high-dimensional data and are becoming
widely used in many areas of software engineering. Deep learners utilize
extensive computational power and can take a long time to train, making it
difficult to widely validate, repeat, and improve their results. Further,
they are not the best solution in all domains. For example, recent results show
that for finding related Stack Overflow posts, a tuned SVM performs similarly
to a deep learner, but is significantly faster to train. This paper extends
that recent result by clustering the dataset, then tuning very simple learners within
each cluster. This approach is over 500 times faster than deep learning (and
over 900 times faster if we use all the cores on a standard laptop computer).
Significantly, this faster approach generates classifiers nearly as good
(within 2\% F1 Score) as the much slower deep learning method. Hence we
recommend these faster methods, since they are much easier to reproduce and
utilize far fewer CPU resources. More generally, we recommend that before
researchers release research results, they compare their supposedly
sophisticated methods against simpler alternatives (e.g., applying simpler
learners to build local models).
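The cluster-then-fit-local-learners idea can be sketched on toy data. This is a minimal illustration, not the paper's pipeline: plain least-squares classifiers stand in for the tuned SVMs, a hand-rolled k-means stands in for the clustering step, and all data and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(X, k=2, iters=20):
    """Minimal k-means sketch. Deterministic init from the two points with
    extreme coordinate sums (fine for this toy data; real k-means would
    use k-means++)."""
    centers = X[np.argsort(X.sum(axis=1))[[0, -1]]]
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([X[assign == j].mean(0) for j in range(k)])
    return centers, assign

# Toy dataset: two well-separated blobs with different label rules, standing
# in for a corpus of vectorized StackOverflow posts.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(8, 1, (100, 2))])
y = np.concatenate([(X[:100, 0] > 0).astype(float),
                    (X[100:, 1] > 8).astype(float)])

centers, assign = kmeans(X, k=2)

# "Local models": fit one cheap linear learner per cluster (least squares
# with a bias column) instead of one expensive global deep learner.
local_w = {}
for j in range(2):
    Xj = np.hstack([X[assign == j], np.ones((np.sum(assign == j), 1))])
    local_w[j] = np.linalg.lstsq(Xj, y[assign == j], rcond=None)[0]

def predict(x):
    j = int(np.argmin(((x - centers) ** 2).sum(-1)))    # route to cluster
    return float(np.append(x, 1.0) @ local_w[j] > 0.5)  # local linear decision

acc = float(np.mean([predict(x) == t for x, t in zip(X, y)]))
print(round(acc, 2))
```

Each local learner only has to model its own region of the data, which is what keeps the per-cluster models simple and the total training cost far below a single deep learner.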
Applying Deep Learning To Airbnb Search
Search ranking is one of the biggest machine learning
success stories at Airbnb. Much of the initial gains were driven by a gradient
boosted decision tree model. The gains, however, plateaued over time. This
paper discusses the work done in applying neural networks in an attempt to
break out of that plateau. We present our perspective not with the intention of
pushing the frontier of new modeling techniques. Instead, ours is a story of
the elements we found useful in applying neural networks to a real life
product. Deep learning was steep learning for us. To other teams embarking on
similar journeys, we hope an account of our struggles and triumphs will provide
some useful pointers. Bon voyage!