The Application of Minicomputers to Problems of Information Retrieval
Although minicomputers can be used in many types of information retrieval facilities, this paper deals primarily with bibliographic reference retrieval systems. There are two main reasons why it is attractive to consider using a minicomputer for on-line applications: (1) the relatively low cost and (2) the hardware and software provided.
TeamWorker: An agent-based support system for mobile task execution
Traditional workflow management systems are considered insufficiently flexible to support autonomous job management through close team working. This paper proposes a multi-agent system approach to enhancing existing workflow management systems to enable team-based job management in the field of telecommunications service provision and maintenance. The paper adopts a component-based approach and explains how applications can be developed by customising the generic components provided by a multi-agent systems framework.
A Review of Object Detection Models based on Convolutional Neural Network
Convolutional Neural Networks (CNNs) have become the state of the art for object detection in images. In this chapter, we explain different state-of-the-art CNN-based object detection models. We organise the review by categorising the detection models according to two approaches: the two-stage approach and the one-stage approach. The chapter traces advancements in object detection models from R-CNN to the recent RefineDet, discusses the model description and training details of each model, and draws a comparison among them.
Distributed Training Large-Scale Deep Architectures
Scale of data and scale of computation infrastructures together enable the
current deep learning renaissance. However, training large-scale deep
architectures demands both algorithmic improvement and careful system
configuration. In this paper, we focus on employing the system approach to
speed up large-scale training. Via lessons learned from our routine
benchmarking effort, we first identify bottlenecks and overheads that hinder
data parallelism. We then devise guidelines that help practitioners to
configure an effective system and fine-tune parameters to achieve desired
speedup. Specifically, we develop a procedure for setting minibatch size and
choosing computation algorithms. We also derive lemmas for determining the
quantity of key components such as the number of GPUs and parameter servers.
Experiments and examples show that these guidelines help effectively speed up
large-scale deep learning training.
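The minibatch-sizing guideline mentioned above can be illustrated with the widely used "linear scaling rule" for data-parallel training: as the effective minibatch grows with the number of GPUs, the learning rate is scaled proportionally. This is a minimal sketch of that common heuristic, not necessarily the exact procedure the paper derives; all names here (`base_lr`, `base_batch`, `per_gpu_batch`, `n_gpus`) are illustrative.

```python
# Hedged sketch of the linear scaling rule often used when configuring
# data-parallel training. Assumption: this heuristic stands in for the
# paper's own minibatch-sizing procedure, which may differ in detail.

def scaled_learning_rate(base_lr, base_batch, per_gpu_batch, n_gpus):
    """Scale the learning rate linearly with the effective minibatch size.

    The effective minibatch is the per-GPU batch times the number of GPUs;
    the learning rate grows by the same factor relative to the baseline.
    """
    effective_batch = per_gpu_batch * n_gpus
    return base_lr * effective_batch / base_batch

# Example: a baseline of lr 0.1 at batch 256; running on 8 GPUs with
# 64 samples each gives an effective batch of 512, so the lr doubles.
print(scaled_learning_rate(0.1, 256, 64, 8))  # → 0.2
```

In practice this rule is usually paired with a warmup phase at large batch sizes, since the scaled learning rate can destabilise the first few epochs.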