Frequency vs. Association for Constraint Selection in Usage-Based Construction Grammar
A usage-based Construction Grammar (CxG) posits that slot-constraints
generalize from common exemplar constructions. But what is the best model of
constraint generalization? This paper evaluates competing frequency-based and
association-based models across eight languages using a metric derived from the
Minimum Description Length paradigm. The experiments show that
association-based models produce better generalizations across all languages by
a significant margin.
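The contrast between frequency-based and association-based constraint scoring can be sketched in a few lines. This is an illustrative toy (the slot names, data, and use of pointwise mutual information as the association measure are assumptions for exposition, not the paper's actual models):

```python
from collections import Counter
from math import log2

# Toy corpus of (construction_slot, filler) observations.
# All names and counts here are illustrative only.
pairs = [
    ("give_DITRANSITIVE", "give"), ("give_DITRANSITIVE", "give"),
    ("give_DITRANSITIVE", "send"), ("other", "give"),
    ("other", "walk"), ("other", "walk"), ("other", "send"),
]

slot_filler = Counter(pairs)
slots = Counter(s for s, _ in pairs)
fillers = Counter(f for _, f in pairs)
n = len(pairs)

def frequency_score(slot, filler):
    """Frequency-based constraint strength: raw co-occurrence count."""
    return slot_filler[(slot, filler)]

def pmi_score(slot, filler):
    """Association-based strength: pointwise mutual information,
    which discounts fillers that are frequent everywhere."""
    p_joint = slot_filler[(slot, filler)] / n
    p_slot = slots[slot] / n
    p_filler = fillers[filler] / n
    return log2(p_joint / (p_slot * p_filler)) if p_joint > 0 else float("-inf")
```

Two pairs with the same raw frequency can receive different association scores, which is exactly the distinction the evaluated models disagree on.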
A literature survey of methods for analysis of subjective language
Subjective language is used to express attitudes and opinions towards things, ideas, and people. While content- and topic-centred natural language processing is now part of everyday life, the analysis of subjective aspects of natural language has until recently been largely neglected by the research community. The explosive growth of personal blogs, consumer opinion sites, and social network applications in recent years has, however, created increased interest in subjective language analysis. This paper provides an overview of recent research conducted in the area.
Pairwise Covariates-adjusted Block Model for Community Detection
One of the most fundamental problems in network study is community detection.
The stochastic block model (SBM) is one widely used model for network data with
different estimation methods developed with their community detection
consistency results unveiled. However, the SBM is restricted by the strong
assumption that all nodes in the same community are stochastically equivalent,
which may not be suitable for practical applications. We introduce a pairwise
covariates-adjusted stochastic block model (PCABM), a generalization of SBM
that incorporates pairwise covariate information. We study the maximum
likelihood estimates of the coefficients for the covariates as well as the
community assignments. It is shown that both the coefficient estimates of the
covariates and the community assignments are consistent under suitable sparsity
conditions. Spectral clustering with adjustment (SCWA) is introduced to
efficiently solve PCABM. Under certain conditions, we derive the error bound of
community estimation under SCWA and show that it is community detection
consistent. PCABM compares favorably with the SBM or degree-corrected
stochastic block model (DCBM) under a wide range of simulated and real networks
when covariate information is accessible.
Comment: 41 pages, 6 figures
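The baseline that SCWA extends can be illustrated with vanilla spectral clustering on a simulated two-block SBM. This is a minimal sketch under assumed parameters (two balanced communities, chosen edge probabilities); the paper's SCWA additionally adjusts for pairwise covariates, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-community SBM (illustrative parameters; PCABM further
# modulates edge probabilities with pairwise covariates, omitted here).
n = 200
z = np.repeat([0, 1], n // 2)               # true community labels
P = np.array([[0.30, 0.05],
              [0.05, 0.30]])                 # within/between edge probabilities
probs = P[z][:, z]
A = (rng.random((n, n)) < probs).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric adjacency, no self-loops

# Vanilla spectral clustering for K = 2: split on the sign of the
# eigenvector of the second-largest eigenvalue of A.
vals, vecs = np.linalg.eigh(A)
v2 = vecs[:, -2]
labels = (v2 > 0).astype(int)

# Agreement with ground truth, up to label permutation.
acc = max(np.mean(labels == z), np.mean(labels != z))
```

With this strong signal (0.30 within vs. 0.05 between), the sign split recovers the communities almost perfectly; the consistency results in the abstract concern exactly this kind of recovery under sparsity.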
Systematic Literature Review Of Particle Swarm Optimization Implementation For Time-Dependent Vehicle Routing Problem
The time-dependent VRP (TDVRP) is one of the three VRP variants that have not been widely explored in operational research, while Particle Swarm Optimization (PSO) is an optimization algorithm in the same field that uses many variables in its application. Much research has been conducted on TDVRP, but little of it discusses PSO implementations. This article is presented as a literature review aimed at finding the research gap concerning the implementation of PSO to resolve TDVRP cases. The research was conducted in five stages. In the first stage, a review protocol was defined in the form of research questions and methods for performing the review. The second stage was reference searching; the third, screening the search results; the fourth, extracting data from the references based on the research questions; and the fifth, reporting the literature study results. The screening process yielded 37 eligible reference articles out of 172 search results. Extraction and analysis of the 37 reference articles show that research on TDVRP discusses the duration of travel time between two locations. The route optimization parameter is determined from the cost of the trip, including the total distance traveled, the total travel time, the number of routes, and the number of vehicles used. The datasets used in the research are of two types: real-world datasets and simulation datasets. The Solomon benchmark is a simulation dataset widely used for TDVRP. Research on PSO for TDVRP is dominated by discussion of modifications for determining random values of PSO variables.
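The PSO variables that the reviewed studies modify appear in the canonical velocity update. Below is a minimal sketch of that core loop on a continuous toy objective; TDVRP applications would additionally decode particle positions into vehicle routes, and the parameter values (w, c1, c2) are conventional defaults, not taken from any surveyed paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(f, dim=2, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal PSO loop (continuous form; TDVRP studies typically add a
    decoding step mapping positions to routes)."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()             # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))          # the random values the
        r2 = rng.random((n_particles, dim))          # reviewed papers modify
        # Canonical update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best_x, best_val = pso_minimize(lambda p: np.sum(p ** 2))
```

The r1/r2 draws marked in the comments are the "random values of PSO variables" whose determination dominates the surveyed modifications.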
Easy over Hard: A Case Study on Deep Learning
While deep learning is an exciting new technique, the benefits of this method
need to be assessed with respect to its computational cost. This is
particularly important for deep learning since these learners need hours (to
weeks) to train the model. Such long training times limit the ability of (a) a
researcher to test the stability of their conclusions via repeated runs with
different random seeds; and (b) other researchers to repeat, improve, or even
refute that original work.
For example, recently, deep learning was used to find which questions in the
Stack Overflow programmer discussion forum can be linked together. That deep
learning system took 14 hours to execute. We show here that applying a very
simple optimizer called differential evolution (DE) to fine-tune an SVM can
achieve similar (and sometimes better) results. The DE approach terminated in
10 minutes, i.e., 84 times faster than the deep learning method.
We offer these results as a cautionary tale to the software analytics
community and suggest that not every new innovation should be applied without
critical analysis. If researchers deploy some new and expensive process, that
work should be baselined against some simpler and faster alternatives.
Comment: 12 pages, 6 figures, accepted at FSE201
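The "simpler and faster alternative" here is a classic DE/rand/1/bin loop. The sketch below applies it to a smooth toy objective standing in for an SVM's cross-validation error over hyper-parameters such as (C, gamma); the objective, bounds, and optimum location are all hypothetical stand-ins, not the paper's actual tuning setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def de_minimize(f, bounds, pop_size=15, iters=60, F=0.8, CR=0.9):
    """Classic DE/rand/1/bin. In the paper's setting, f would be an SVM's
    cross-validation error as a function of its hyper-parameters; any
    black-box objective works here."""
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    dim = len(bounds)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    vals = np.apply_along_axis(f, 1, pop)
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: combine three distinct other population members.
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one mutant coordinate.
            cross = rng.random(dim) < CR
            if not cross.any():
                cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial only if it is no worse.
            tv = f(trial)
            if tv <= vals[i]:
                pop[i], vals[i] = trial, tv
    best = vals.argmin()
    return pop[best], vals[best]

# Hypothetical smooth error surface with its minimum at (1.0, 0.1).
obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.1) ** 2
best, val = de_minimize(obj, bounds=[(0.01, 100.0), (1e-4, 1.0)])
```

The per-candidate cost is one objective evaluation, which is why DE-based tuning can finish in minutes where retraining a deep network per trial takes hours.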
BIBLIOMETRIC ANALYSIS OF ARTIFICIAL INTELLIGENCE IN BUSINESS ECONOMICS
The invention of artificial intelligence (AI) is certainly one of the most promising
technological advancements in the modern economy. General AI reaching singularity makes
one imagine its disruptive influence. Once invented, it is expected to surpass all human
cognitive capabilities. Nevertheless, narrow AI has already been widely applied
across many technologies. This paper aims to explore the research area of
artificial intelligence with an emphasis on the field of business economics. Data were
derived from records extracted from the Web of Science, one of the most
relevant databases of scientific publications. The total number of extracted records published
in the period 1963-2019 was 1,369. The results provide a systematic overview of the most
influential authors, seminal papers, and the most important sources for AI publication.
Additionally, using multiple correspondence analysis (MCA), the results display the
intellectual map of the research field.