Performance Comparison of Automated Warehouses Using Simulation
The purpose of this study is to compare the performance of two types of warehouses, both of which use autonomous vehicles (AVs). One warehouse uses movable racks (MR) for storing mini-loads, whereas the other uses fixed racks (FR). In general, warehouse automation not only increases the speed of the fulfillment process but also makes the picking process more accurate. We simulate three scenarios for the MR and FR systems using Simio. Four performance measures are considered for the comparison: the average order processing time (WR), the average utilization of AVs (U), the average order processing queue length (Nq), and the average distance travelled by AVs (d). We also estimate the capital costs of both systems and use them to compare the two systems. On the basis of our assumptions and simulation results, we find that the FR system not only requires on average 56% less capital investment than the MR system, but also provides a more efficient warehousing automation option, with relatively lower utilization of AVs, lower order processing time, and a lower average number of orders waiting to be processed.
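As a rough illustration of the four performance measures named above, the sketch below computes WR, U, Nq, and d from a hypothetical log of simulated orders. The `Order` record, the example numbers, and the use of Little's law for Nq are all assumptions for illustration, not taken from the paper's Simio model.

```python
# Hypothetical sketch: computing the four performance measures (WR, U, Nq, d)
# from an assumed log of simulated orders and AV activity.
from dataclasses import dataclass

@dataclass
class Order:
    arrival: float   # time the order entered the processing queue
    start: float     # time an AV began processing it
    finish: float    # time processing completed

def performance_measures(orders, av_busy_time, n_avs, horizon, distances):
    """Return (WR, U, Nq, d) for a simulated run of length `horizon`."""
    # WR: average order processing time (finish - arrival)
    wr = sum(o.finish - o.arrival for o in orders) / len(orders)
    # U: average utilization of the AV fleet over the horizon
    u = av_busy_time / (n_avs * horizon)
    # Nq: average number of waiting orders, via Little's law Nq = lambda * Wq
    wq = sum(o.start - o.arrival for o in orders) / len(orders)
    nq = (len(orders) / horizon) * wq
    # d: average distance travelled per AV
    d = sum(distances) / n_avs
    return wr, u, nq, d

orders = [Order(0.0, 1.0, 4.0), Order(2.0, 4.0, 7.0)]
print(performance_measures(orders, av_busy_time=6.0, n_avs=2,
                           horizon=10.0, distances=[120.0, 80.0]))
# → (4.5, 0.3, 0.3, 100.0)
```

The same measures computed for both the MR and the FR configuration then give the side-by-side comparison the study describes.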
Study to design and develop remote manipulator system
The modeling of human performance in remote manipulation tasks is reported, using automated computer procedures to analyze and count motions during a manipulation task. Performance is monitored by an on-line computer capable of measuring the joint angles of both master and slave and, in some cases, the trajectory and velocity of the hand itself. In this way the operator's strategies with different transmission delays, displays, tasks, and manipulators can be analyzed in detail for comparison. Some progress is described in obtaining a set of standard tasks and difficulty measures for evaluating manipulator performance.
RePOR: Mimicking humans on refactoring tasks. Are we there yet?
Refactoring is a maintenance activity that aims to improve design quality
while preserving the behavior of a system. Several (semi)automated approaches
have been proposed to support developers in this maintenance activity, based on
the correction of anti-patterns, which are `poor' solutions to recurring design
problems. However, little quantitative evidence exists about the impact of
automatically refactored code on program comprehension, and in which context
automated refactoring can be as effective as manual refactoring. Leveraging
RePOR, an automated refactoring approach based on partial order reduction
techniques, we performed an empirical study to investigate whether
automatically refactored code affects the understandability of systems during
comprehension tasks. (1) We surveyed 80 developers, asking them to identify
from a set of 20 refactoring changes if they were generated by developers or by
a tool, and to rate the refactoring changes according to their design quality;
(2) we asked 30 developers to complete code comprehension tasks on 10 systems
that were refactored by either a freelancer or an automated refactoring tool.
To make comparison fair, for a subset of refactoring actions that introduce new
code entities, only synthetic identifiers were presented to practitioners. We
measured developers' performance using the NASA task load index for their
effort, the time that they spent performing the tasks, and their percentages of
correct answers. Our findings, despite current technology limitations, show
that it is reasonable to expect a refactoring tool to match developers' code.
Automated Detection of Non-Relevant Posts on the Russian Imageboard "2ch": Importance of the Choice of Word Representations
This study considers the problem of automated detection of non-relevant posts
on Web forums, and discusses an approach that resolves this problem by
approximating it with the task of detecting semantic relatedness between a
given post and the opening post of the forum discussion thread. The
approximated task can be resolved by training a supervised classifier on
composed word embeddings of the two posts. Considering that success in this
task can be quite sensitive to the choice of word representations, we propose
a comparison of the performance of different word embedding models. We train
7 models (Word2Vec, GloVe, Word2Vec-f, Wang2Vec, AdaGram, FastText, Swivel),
evaluate the embeddings they produce on a dataset of human judgements, and
compare their performance on the task of non-relevant post detection. To make
the comparison, we propose a dataset of semantic relatedness with posts from
one of the most popular Russian Web forums, the imageboard "2ch", which has
challenging lexical and grammatical features.
Comment: 6 pages, 1 figure, 1 table, main proceedings of AIST-2017 (Analysis
of Images, Social Networks, and Texts)
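One simple way to realize the "composed word embeddings of two posts" idea is to average each post's word vectors and concatenate the two averages as classifier features. The sketch below shows this under stated assumptions: the toy vocabulary, random embeddings, and labels are invented stand-ins for a trained Word2Vec/GloVe model, and concatenation is only one possible composition, not necessarily the paper's.

```python
# Illustrative sketch: relatedness classification from composed post embeddings.
# The vocabulary, random vectors, and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "pet", "stock", "market", "price"]
emb = {w: rng.normal(size=8) for w in vocab}  # stand-in for a trained model

def post_vector(tokens, dim=8):
    """Average the embeddings of the in-vocabulary tokens of one post."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def pair_features(opening_post, reply):
    """Compose the two posts by concatenating their averaged vectors."""
    return np.concatenate([post_vector(opening_post), post_vector(reply)])

# Toy training pairs: label 1 = relevant to the opening post, 0 = off-topic.
pairs = [(["cat", "dog"], ["pet"], 1),
         (["cat", "dog"], ["stock", "price"], 0),
         (["stock", "market"], ["price"], 1),
         (["stock", "market"], ["dog", "pet"], 0)]
X = np.array([pair_features(a, b) for a, b, _ in pairs])
y = np.array([lab for _, _, lab in pairs])

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```

Swapping `emb` for vectors produced by each of the seven models, while holding the classifier fixed, is what makes the embedding comparison possible.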
Toward improved streamflow forecasts: Value of semidistributed modeling
The focus of this study is to assess the performance improvements of semidistributed applications of the U.S. National Weather Service Sacramento Soil Moisture Accounting model on a watershed using radar-based remotely sensed precipitation data. Specifically, performance comparisons are made within an automated multicriteria calibration framework to evaluate the benefit of "spatial distribution" of the model input (precipitation), structural components (soil moisture and streamflow routing computations), and surface characteristics (parameters). A comparison of these results is made with those obtained through manual calibration. Results indicate that for the study watershed, there are performance improvements associated with semidistributed model applications when the watershed is partitioned into three subwatersheds; however, no additional benefit is gained from increasing the number of subwatersheds from three to eight. Improvements in model performance are demonstrably related to the spatial distribution of the model input and streamflow routing. Surprisingly, there is no improvement associated with the distribution of the surface characteristics (model parameters)
Completely Automated Public Physical test to tell Computers and Humans Apart: A usability study on mobile devices
A very common approach adopted to fight the increasing sophistication and danger of malware and hacking is to introduce more complex authentication mechanisms. This approach, however, introduces additional cognitive burdens for users and lowers the acceptability of the whole authentication mechanism, to the point of making it unusable. On the contrary, what is really needed to fight the onslaught of automated attacks on users' data and privacy is first to tell humans and computers apart, and then to distinguish among humans to guarantee correct authentication. Such an approach is capable of completely thwarting any automated attempt to achieve unwarranted access while keeping the mechanism dedicated to recognizing the legitimate user simple. This kind of approach is behind the concept of the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA); yet CAPTCHA leverages cognitive capabilities, so the increasing sophistication of computers calls for ever more difficult cognitive tasks, which become either very long to solve or very prone to false negatives. We argue that this problem can be overcome by substituting the cognitive component of CAPTCHA with a different property that programs cannot mimic: physical nature. In past work we introduced the Completely Automated Public Physical test to tell Computers and Humans Apart (CAPPCHA) as a way to enhance the PIN authentication method for mobile devices, and we provided a proof-of-concept implementation. Similarly to CAPTCHA, this mechanism can also be used to prevent automated programs from abusing online services. However, to evaluate the real efficacy of the proposed scheme, an extended empirical assessment of CAPPCHA is required, as well as a comparison of CAPPCHA's performance with the existing state of the art.
To this aim, in this paper we carry out an extensive experimental study of both the performance and the usability of CAPPCHA involving a large number of physical users, and we provide comparisons of CAPPCHA with existing flavors of CAPTCHA.
A multi-objective genetic algorithm for the design of pressure swing adsorption
Pressure Swing Adsorption (PSA) is a cyclic separation process that is advantageous over
other separation options for middle-scale processes. Automated tools for the design of PSA
processes would be beneficial for the development of the technology, but their development is
a difficult task due to the complexity of simulating PSA cycles and the computational
effort needed to determine the performance at cyclic steady state.
We present a preliminary investigation of the performance of a custom multi-objective genetic
algorithm (MOGA) for the optimisation of a fast cycle PSA operation, the separation of
air for N2 production. The simulation requires a detailed diffusion model, which involves coupled
nonlinear partial differential and algebraic equations (PDAEs). The efficiency of MOGA
in handling this complex problem has been assessed by comparison with direct search methods.
An analysis of the effect of the MOGA parameters on performance is also presented.
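The core operation that distinguishes a multi-objective GA from a single-objective one is Pareto (non-dominated) selection. The minimal sketch below shows that step; the two objective values are invented stand-ins for PSA design criteria (e.g. N2 purity and energy cost, both recast as quantities to minimize), and no claim is made that the paper's custom MOGA uses exactly this scheme.

```python
# Minimal sketch of non-dominated (Pareto) selection, the core of a MOGA.
# Objective tuples are invented stand-ins for PSA design criteria; both
# objectives are to be minimized.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy population of (objective1, objective2) evaluations.
pop = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
print(pareto_front(pop))
# → [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (2.5, 2.5)]
```

In a full MOGA, this front guides selection and survives into the next generation, so the algorithm returns a trade-off set of designs rather than a single optimum.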
A Simulator for Concept Detector Output
Concept-based video retrieval is a promising search paradigm because it is fully automated and it investigates the fine-grained content of a video, which is normally not captured by human annotations. Concepts are captured by so-called concept detectors. However, since these detectors do not yet show sufficient performance, the evaluation of retrieval systems built on top of the detector output is difficult. In this report we describe a software package which generates simulated detector output for a specified performance level. This output can then be used to execute a search run and ultimately to evaluate the performance of the proposed retrieval method, which is normally done through comparison to a baseline. The probabilistic model of each detector is a mixture of two Gaussians, one for the positive and one for the negative class. Thus, the parameters of the simulation are the two means and standard deviations, plus the prior probability of the concept in the dataset.
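The described model is compact enough to sketch directly: each shot is positive with the prior probability, and its detector score is drawn from the positive or negative Gaussian accordingly. The parameter values below are illustrative assumptions, not taken from the report.

```python
# Sketch of the report's simulation model: detector scores drawn from two
# Gaussians (positive / negative class) with a prior on concept occurrence.
# All parameter values here are illustrative only.
import random

random.seed(42)

def simulate_detector(n_shots, prior, mu_pos, sigma_pos, mu_neg, sigma_neg):
    """Return (label, score) pairs for n_shots simulated video shots."""
    out = []
    for _ in range(n_shots):
        positive = random.random() < prior   # concept present with prob. = prior
        if positive:
            score = random.gauss(mu_pos, sigma_pos)
        else:
            score = random.gauss(mu_neg, sigma_neg)
        out.append((positive, score))
    return out

shots = simulate_detector(1000, prior=0.1, mu_pos=0.7, sigma_pos=0.15,
                          mu_neg=0.3, sigma_neg=0.15)
pos_scores = [s for lab, s in shots if lab]
neg_scores = [s for lab, s in shots if not lab]
print(len(pos_scores), sum(pos_scores) / len(pos_scores))
```

Moving the two means closer together (or widening the deviations) lowers the simulated detector's performance level, which is exactly the knob a retrieval evaluation would sweep.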