53,592 research outputs found
Efficient MaxCount and threshold operators of moving objects
Calculating operators of continuously moving objects presents some unique challenges, especially when the operators involve aggregation or the concept of congestion, which occurs when the number of moving objects in a changing or dynamic query space exceeds some threshold value. This paper presents the following six d-dimensional moving object operators: (1) MaxCount (or MinCount), which finds the maximum (or minimum) number of moving objects simultaneously present in the dynamic query space at any time during the query time interval; (2) CountRange, which finds a count of point objects whose trajectories intersect the dynamic query space during the query time interval; (3) ThresholdRange, which finds the set of time intervals during which the dynamic query space is congested; (4) ThresholdSum, which finds the total length of all the time intervals during which the dynamic query space is congested; (5) ThresholdCount, which finds the number of disjoint time intervals during which the dynamic query space is congested; and (6) ThresholdAverage, which finds the average length of the time intervals during which the dynamic query space is congested. For each operator, separate algorithms are given to compute either estimated or precise values. Experimental results from more than 7,500 queries indicate that the estimation algorithms produce fast, efficient results with error under 5%.
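The threshold operators reduce to interval bookkeeping over the count function. As a minimal illustration, here is a naive discretized sketch (not the paper's estimation or precise algorithms), assuming object counts sampled at unit time steps:

```python
# Naive, discretized sketch of the threshold operators (not the paper's
# algorithms): given object counts sampled at unit time steps, derive
# the congested intervals and the aggregate operators from them.

def congested_intervals(counts, threshold):
    """ThresholdRange: maximal runs of steps where the count exceeds the
    threshold. Returns a list of (start, end) pairs, end exclusive."""
    intervals, start = [], None
    for t, c in enumerate(counts):
        if c > threshold and start is None:
            start = t
        elif c <= threshold and start is not None:
            intervals.append((start, t))
            start = None
    if start is not None:
        intervals.append((start, len(counts)))
    return intervals

def threshold_sum(intervals):
    """ThresholdSum: total length of all congested intervals."""
    return sum(e - s for s, e in intervals)

def threshold_average(intervals):
    """ThresholdAverage: mean length of the congested intervals."""
    return threshold_sum(intervals) / len(intervals) if intervals else 0.0

def max_count(counts):
    """MaxCount: peak number of objects in the query space."""
    return max(counts)

counts = [1, 4, 5, 2, 6, 6, 1]        # objects in the query space per step
ivs = congested_intervals(counts, 3)  # threshold = 3
# ivs == [(1, 3), (4, 6)]; ThresholdCount is len(ivs) == 2
```

ThresholdCount is simply the length of the interval list, which is why the paper can treat the four threshold operators as views over the same congested-interval computation.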
Software tools for conducting bibliometric analysis in science: An up-to-date review
Bibliometrics has become an essential tool for assessing and analyzing the output of scientists, cooperation between
universities, the effect of state-owned science funding on national research and development performance and educational
efficiency, among other applications. Therefore, professionals and scientists need a range of theoretical and practical
tools to measure experimental data. This review aims to provide an up-to-date review of the various tools available
for conducting bibliometric and scientometric analyses, including the sources of data acquisition, performance analysis
and visualization tools. The included tools were divided into three categories: general bibliometric and performance
analysis, science mapping analysis, and libraries; a description of each is provided. A comparative analysis of
database source support, pre-processing capabilities, and analysis and visualization options is also provided
to facilitate their comparison. Although there are numerous bibliometric databases from which to obtain data for
bibliometric and scientometric analysis, each has been developed for a different purpose. The number of exportable records ranges between
500 and 50,000 and the coverage of the different science fields is unequal in each database. Concerning the analyzed
tools, Bibliometrix offers the most extensive set of techniques and is accessible to practitioners through Biblioshiny.
VOSviewer excels at visualization and can load and export information from many sources. SciMAT
is the tool with the most powerful pre-processing and export capabilities. In view of this variability of features, users need to
decide on the desired analysis output and choose the option that best fits their aims.
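The performance analyses these tools automate ultimately reduce to aggregations over exported records. A minimal sketch, assuming a hypothetical record schema (the field names are illustrative, not any tool's actual export format):

```python
from collections import defaultdict

# Toy records standing in for an export from a bibliometric database;
# the field names here are illustrative, not any tool's actual schema.
records = [
    {"year": 2019, "citations": 12},
    {"year": 2019, "citations": 3},
    {"year": 2020, "citations": 7},
]

def per_year_performance(records):
    """Publications and total citations per year: the kind of basic
    performance indicator that tools such as Bibliometrix automate."""
    stats = defaultdict(lambda: {"papers": 0, "citations": 0})
    for r in records:
        stats[r["year"]]["papers"] += 1
        stats[r["year"]]["citations"] += r["citations"]
    return dict(stats)

# per_year_performance(records)[2019] == {"papers": 2, "citations": 15}
```

The value of the reviewed tools lies less in such counting than in the pre-processing (deduplication, author disambiguation) and visualization layered on top of it.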
Combining Residual Networks with LSTMs for Lipreading
We propose an end-to-end deep learning architecture for word-level visual
speech recognition. The system is a combination of spatiotemporal
convolutional, residual and bidirectional Long Short-Term Memory networks. We
train and evaluate it on the Lipreading In-The-Wild benchmark, a challenging
database of 500 target words spoken in 1.28-second video excerpts from BBC
TV broadcasts. The proposed network attains a word accuracy of 83.0%, a
6.8% absolute improvement over the current state of the art, without
using information about word boundaries during training or testing.

Comment: Submitted to Interspeech 201
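To make the stage composition concrete, the following shape trace follows one clip through the pipeline (3D convolutional front-end, per-frame residual network, bidirectional LSTM, word classifier). All sizes are assumptions for illustration, not the paper's exact configuration:

```python
# Pure shape trace of the pipeline; every dimension below is an
# illustrative assumption, not the paper's reported configuration.

def conv3d_front_end(shape):       # (T, H, W) -> (T, H/4, W/4, 64)
    t, h, w = shape
    return (t, h // 4, w // 4, 64)

def resnet_per_frame(shape):       # collapse spatial dims to a feature vector
    t = shape[0]
    return (t, 512)

def bilstm(shape, hidden=256):     # concatenated forward/backward states
    t, _ = shape
    return (t, 2 * hidden)

def classifier(shape, vocab=500):  # pool over time, score the 500 words
    return (vocab,)

shape = (29, 112, 112)             # illustrative clip: 29 frames, 112x112 crops
for stage in (conv3d_front_end, resnet_per_frame, bilstm, classifier):
    shape = stage(shape)
# shape == (500,)
```

The point of the trace is that the temporal dimension survives until the recurrent stage, which is what lets the network classify whole words without word-boundary annotations.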
Historical collaborative geocoding
The latest developments in digital technology have provided large data sets that can
increasingly easily be accessed and used. These data sets often contain
indirect localisation information, such as historical addresses. Historical
geocoding is the process of transforming the indirect localisation information
to direct localisation that can be placed on a map, which enables spatial
analysis and cross-referencing. Many efficient geocoders exist for current
addresses, but they do not deal with the temporal aspect and are based on a
strict hierarchy (..., city, street, house number) that is hard or impossible
to use with historical data. Indeed historical data are full of uncertainties
(temporal aspect, semantic aspect, spatial precision, confidence in historical
source, ...) that can not be resolved, as there is no way to go back in time to
check. We propose an open source, open data, extensible solution for geocoding
that is based on the building of gazetteers composed of geohistorical objects
extracted from historical topographical maps. Once the gazetteers are
available, geocoding an historical address is a matter of finding the
geohistorical object in the gazetteers that is the best match to the historical
address. The matching criteria are customisable and include several dimensions
(fuzzy semantic, fuzzy temporal, scale, spatial precision ...). As the goal is
to facilitate historical work, we also propose web-based user interfaces that
help geocode (one address or batch mode) and display over current or historical
topographical maps, so that results can be checked and collaboratively edited. The
system is tested on the city of Paris for the 19th-20th centuries; it shows a high
return rate and is fast enough to be used interactively.

Comment: WORKING PAPER
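The matching step can be sketched as a weighted score over the fuzzy dimensions. The field names, weights, and scoring formula below are assumptions for illustration, not the system's actual implementation:

```python
from difflib import SequenceMatcher

# Hypothetical sketch of multi-criteria matching: score each geohistorical
# object in the gazetteer against a historical address on a fuzzy-semantic
# and a fuzzy-temporal dimension. Weights, field names, and the scoring
# formula are illustrative assumptions, not the system's actual ones.

def semantic_score(query_name, object_name):
    """Fuzzy string similarity between the address and a gazetteer entry."""
    return SequenceMatcher(None, query_name.lower(), object_name.lower()).ratio()

def temporal_score(query_year, valid_from, valid_to):
    """1.0 inside the object's validity interval, decaying outside it."""
    if valid_from <= query_year <= valid_to:
        return 1.0
    gap = min(abs(query_year - valid_from), abs(query_year - valid_to))
    return max(0.0, 1.0 - gap / 50.0)  # assumed 50-year soft tolerance

def best_match(query_name, query_year, gazetteer, w_sem=0.7, w_tmp=0.3):
    def score(obj):
        return (w_sem * semantic_score(query_name, obj["name"])
                + w_tmp * temporal_score(query_year, obj["from"], obj["to"]))
    return max(gazetteer, key=score)

gazetteer = [
    {"name": "rue de Rivoli",   "from": 1802, "to": 1900, "xy": (2.34, 48.86)},
    {"name": "rue Saint-Denis", "from": 1500, "to": 1900, "xy": (2.35, 48.86)},
]
match = best_match("Rue de Rivoly", 1860, gazetteer)
# match["name"] == "rue de Rivoli"
```

Because the score is a plain weighted sum, adding the other dimensions the abstract mentions (scale, spatial precision) is a matter of adding further weighted terms.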
Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable
to decipher and use efficiently. Moreover, rule-based expert systems are
inefficient at solving complicated medical tasks or at creating insights from
big data. Deep learning has emerged as a more accurate and effective technology
in a wide range of medical problems such as diagnosis, prediction and
intervention. Deep learning is a representation learning method that consists
of layers that transform the data non-linearly, thus, revealing hierarchical
relationships and structures. In this review we survey deep learning
application papers that use structured data, signal and imaging modalities from
cardiology. We discuss the advantages and limitations of applying deep learning
in cardiology that also apply in medicine in general, while proposing certain
directions as the most viable for clinical use.

Comment: 27 pages, 2 figures, 10 tables
Study on behavioral impedance for route planning techniques from the pedestrian's perspective: some findings and considerations
The multi-disciplinary characteristics of transportation force
a new design of geographic information systems, within which
these characteristics are considered. In this context, geographic
information systems for transportation are the result of the
integration of transportation information systems and conventional
geographic information systems. An interesting research area
in geographic information systems for transportation is constraint
management in route planning algorithms from the pedestrian's
perspective. Constraint management becomes more complex when
route planning takes into account an integrated public transportation
network (i.e. a multimodal network). A study on the theoretical
contextualization and taxonomy of a pedestrian's behavioral
impedance has been developed in order to improve constraint
management from the pedestrian's perspective. This study addresses
strategies for reducing travel by private transport (e.g. by car)
through switching to or substitution by alternative modes of
transport (e.g. walking, bus, or rail). The
grounded theory method has been used to develop the proposed
taxonomy. Using the partial results of a questionnaire administered
to a small group of people from Barcelona as a starting
point, important data are being collected to define the mathematical
model of the behavioral impedance domain. The goal of this
paper is to offer some considerations on the theoretical contextualization
of the identification and management of constraints in the
behavioral impedance domain from the pedestrian's perspective
within the urban public transportation context. The research
project where this work is included is composed of six major
phases. The first phase represents a continuous bibliographic
review. The second phase was a study on sidewalks in the university
zone of Barcelona. In this phase, an experimental application
has been proposed and the management, map and route modules
have been implemented using the ArcInfo GIS package and C++. This
paper reports the partial work of the third phase, which is
composed of two parts. The first part was a theoretical study on
behavioral impedance for route planning techniques, in which
a taxonomy was proposed. The results of the second part are partially
presented in this paper. The fourth (i.e. design and implementation),
fifth (i.e. calibration and validation) and sixth (i.e. generalization
of the results) phases are characterized by the application
of the prototype regarding the multimodal network model for
urban public transportation from the pedestrian's perspective.
The main contribution of this article is the review of a behavioral
impedance taxonomy from the pedestrian's perspective, which will allow
a mathematical model to be designed and used to implement a
constraint-management algorithm. Within this context, the proposed taxonomy
could be used to model cost functions more precisely.

Postprint (published version)
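A behavioral-impedance cost function of the kind such a taxonomy would feed can be sketched as a penalty term added to edge traversal time in a shortest-path search. The penalty categories and weights below are hypothetical, not the paper's model:

```python
import heapq

# Hypothetical sketch of a behavioral-impedance cost function in route
# planning: edge cost = travel time plus weighted penalties for pedestrian
# constraints (stairs, unsafe crossings, modal transfers). The penalty
# taxonomy and the weights are illustrative, not the paper's model.

PENALTIES = {"stairs": 3.0, "unsafe_crossing": 5.0, "transfer": 4.0}

def edge_cost(base_minutes, constraints):
    return base_minutes + sum(PENALTIES.get(c, 0.0) for c in constraints)

def shortest_path(graph, start, goal):
    """Dijkstra over edges of the form (neighbour, base_minutes, constraints)."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, minutes, cons in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(
                    queue, (cost + edge_cost(minutes, cons), nbr, path + [nbr]))
    return float("inf"), []

graph = {
    "A": [("B", 4.0, []), ("C", 2.0, ["stairs"])],
    "B": [("D", 3.0, [])],
    "C": [("D", 3.0, ["unsafe_crossing"])],
}
# A-B-D costs 7.0; A-C-D costs 2+3+3+5 = 13.0, so the planner picks A-B-D.
```

The point of the sketch is that a richer taxonomy only changes the `PENALTIES` table and per-edge constraint tags; the route planning algorithm itself is untouched, which is what makes the taxonomy a reusable modelling layer.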