Decentralized Data Fusion and Active Sensing with Mobile Sensors for Modeling and Predicting Spatiotemporal Traffic Phenomena
The problem of modeling and predicting spatiotemporal traffic phenomena over
an urban road network is important to many traffic applications such as
detecting and forecasting congestion hotspots. This paper presents a
decentralized data fusion and active sensing (D2FAS) algorithm for mobile
sensors to actively explore the road network to gather and assimilate the most
informative data for predicting the traffic phenomenon. We analyze the time and
communication complexity of D2FAS and demonstrate that it can scale well with a
large number of observations and sensors. We provide a theoretical guarantee on
its predictive performance to be equivalent to that of a sophisticated
centralized sparse approximation for the Gaussian process (GP) model: The
computation of such a sparse approximate GP model can thus be parallelized and
distributed among the mobile sensors (in a Google-like MapReduce paradigm),
thereby achieving efficient and scalable prediction. We also provide a
theoretical guarantee on its active sensing performance, which improves under
various practical environmental conditions. Empirical evaluation on real-world urban road network
data shows that our D2FAS algorithm is significantly more time-efficient and
scalable than state-of-the-art centralized algorithms while achieving
comparable predictive performance.
Comment: 28th Conference on Uncertainty in Artificial Intelligence (UAI 2012), extended version with proofs, 13 pages
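The claim that decentralized fusion matches a centralized sparse GP rests on the sparse approximation being built from additive per-sensor summary statistics, which map/reduce cleanly over sensors. A minimal sketch of that structure, using an illustrative squared-exponential kernel and hypothetical helper names (the paper's full D2FAS algorithm also covers active sensing, which is omitted here):

```python
import numpy as np

def rbf(X1, X2, ls=1.0, var=1.0):
    """Squared-exponential kernel (illustrative choice)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

def local_summary(Xi, yi, Z, noise=0.1):
    """'Map' step, run on one sensor: summary statistics of its own
    observations with respect to the shared inducing inputs Z."""
    Kzi = rbf(Z, Xi)
    return Kzi @ Kzi.T / noise**2, Kzi @ yi / noise**2

def fused_predict(summaries, Z, Xstar, noise=0.1):
    """'Reduce' step: sum the local summaries, then predict."""
    A = rbf(Z, Z)
    b = np.zeros(len(Z))
    for Si, si in summaries:
        A = A + Si
        b = b + si
    Kzs = rbf(Z, Xstar)
    return Kzs.T @ np.linalg.solve(A, b)  # sparse-GP posterior mean at Xstar
```

Because the summaries enter only as sums, fusing per-sensor summaries gives exactly the same prediction as one summary computed over all the data pooled centrally, which is the essence of the equivalence guarantee.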
Simple but Not Simplistic: Reducing the Complexity of Machine Learning Methods
Programa Oficial de Doutoramento en Computación. 5009V01
[Abstract]
The advent of Big Data and the explosion of the Internet of Things have brought
unprecedented challenges to Machine Learning researchers, making the learning task
more complex. Real-world machine learning problems usually have inherent complexities,
such as the intrinsic characteristics of the data, large numbers of instances,
high input dimensionality, dataset shift, etc. All these aspects matter, and call
for new models that can confront these situations. Thus, in this thesis, we have
addressed all these issues, simplifying the machine learning process in the current
scenario. First, we carry out a complexity analysis to see how it influences the
classification models, and whether applying feature selection beforehand might result in a
decrease of that complexity. Then, we address the process of simplifying learning
with the divide-and-conquer philosophy of the distributed approach. Later, we aim
to reduce the complexity of the feature selection preprocessing through the same
philosophy. Finally, we opt for a different approach following the current philosophy of
Edge Computing, which allows the data produced by Internet of Things devices
to be processed closer to where they were created. The proposed approaches have
demonstrated their capability to reduce the complexity of traditional machine learning
algorithms, and thus it is expected that the contribution of this thesis will open
the doors to the development of new machine learning methods that are simpler,
more robust, and more computationally efficient.
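The divide-and-conquer philosophy described above can be sketched with a toy example: shard the training data across nodes, fit a local model on each shard, and combine the shard models by majority vote. The per-class-centroid learner and all function names below are illustrative stand-ins, not the thesis's actual methods:

```python
import numpy as np

def fit_partition(X, y):
    """Local learner trained on one node's shard: per-class centroids
    (a deliberately simple stand-in for any local model)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_one(model, x):
    """Predict with one shard model: nearest class centroid."""
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

def ensemble_predict(models, x):
    """Combine the shard models by majority vote."""
    votes = [predict_one(m, x) for m in models]
    return max(set(votes), key=votes.count)
```

The design choice is that each shard is processed independently, so the expensive fitting step parallelizes trivially; only the small fitted models need to be shared for the final aggregation.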
Middleware Technologies for Cloud of Things - a survey
The next wave of communication and applications relies on the new services
provided by the Internet of Things, which is becoming an important aspect of the
future of humans and machines. IoT services are a key solution for providing smart
environments in homes, buildings, and cities. In the era of a massive number of
connected things and objects with a high growth rate, several challenges have
been raised, such as the management, aggregation, and storage of the big data produced.
To tackle some of these issues, cloud computing was brought to the IoT as the
Cloud of Things (CoT), which provides virtually unlimited cloud services to
enhance large-scale IoT platforms. Several factors must be
considered in the design and implementation of a CoT platform. One of the most
important and challenging problems is the heterogeneity of the different objects.
This problem can be addressed by deploying suitable "middleware". Middleware
sits between things and applications, providing a reliable platform for
communication among things with different interfaces, operating systems, and
architectures. The main aim of this paper is to study middleware
technologies for the CoT. Toward this end, we first present the main features and
characteristics of middleware. Next, we study different architectural styles and
service domains. Then we present several middleware platforms that are suitable for CoT-based
platforms, and lastly we discuss a list of current challenges and issues in the design of
CoT-based middleware.
Comment: http://www.sciencedirect.com/science/article/pii/S2352864817301268, Digital Communications and Networks, Elsevier (2017)
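The role the abstract assigns to middleware, one uniform API over heterogeneous device protocols, can be sketched as an adapter layer. The interface, device classes, and field names below are hypothetical, invented purely for illustration:

```python
from abc import ABC, abstractmethod

class DeviceAdapter(ABC):
    """Uniform interface the middleware exposes to applications,
    hiding each device's native protocol behind read()."""
    @abstractmethod
    def read(self) -> dict: ...

class ZigbeeSensor(DeviceAdapter):
    """Hypothetical device speaking one protocol."""
    def read(self) -> dict:
        return {"source": "zigbee", "temp_c": 21.5}

class HttpSensor(DeviceAdapter):
    """Hypothetical device speaking another protocol."""
    def read(self) -> dict:
        return {"source": "http", "temp_c": 22.0}

def collect(devices):
    """Application code sees one API regardless of device type."""
    return [d.read() for d in devices]
```

Adding a new device type then means writing one adapter, not changing any application code, which is exactly the decoupling the survey attributes to middleware.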
An emerging paradigm or just another trajectory? Understanding the nature of technological changes using engineering heuristics in the telecommunications switching industry
The theoretical literature on technological changes distinguishes between paradigmatic changes and changes in trajectories. Recently several scholars have performed empirical studies on the way technological trajectories evolve in specific industries, often by predominantly looking at the artifacts. Much less - if any - empirical work has been done on paradigmatic changes, even though these have a much more profound impact on today's industry. It follows from the theory that such studies would need to focus more on the knowledge level than on the artifact level, raising questions on how to operationalize such phenomena. This study aims to fill this gap by applying network-based methodologies to knowledge networks, represented here by patents and patent citations. The rich technological history of telecommunications switches shows how engineers in the post-war period were confronted with huge challenges to meet drastically changing demands. This historical background is a starting point for an in-depth analysis of patents, in search of information about technological direction, technical bottlenecks, and engineering heuristics. We aim to identify when such changes took place over the seven different generations of technological advances this industry has seen. In this way we can easily recognize genuine paradigmatic changes compared to more regular changes in trajectory.
Keywords: technological trajectories; patents; network analysis; telecommunication manufacturing industry
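One way to operationalize a break at the knowledge level on a patent citation network is to ask how often a generation's patents cite back to earlier generations: a sharp drop suggests the knowledge base has broken with the past. The toy metric and data below are illustrative only, not the study's actual methodology:

```python
# Toy citation graph: patent -> patents it cites (all names invented).
cites = {"p2": ["p1"], "p3": ["p1", "p2"], "p4": ["p3"], "p5": ["p4"]}
# Which technological generation each patent belongs to.
gen = {"p1": 1, "p2": 1, "p3": 1, "p4": 2, "p5": 2}

def cross_generation_share(cites, gen, g):
    """Fraction of generation-g citations that reach back to earlier
    generations; a low value hints at a paradigmatic break rather than
    a change within an ongoing trajectory."""
    back = total = 0
    for p, refs in cites.items():
        if gen[p] != g:
            continue
        for r in refs:
            total += 1
            back += gen[r] < g
    return back / total if total else 0.0
```

In the toy data, half of generation 2's citations reach back into generation 1, so by this crude proxy generation 2 still builds on the earlier knowledge base.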
CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap
After addressing the state of the art during the first year of Chorus and establishing the existing landscape in
multimedia search engines, we have identified and analyzed gaps within the European research effort during our second year.
In this period we focused on three directions, notably technological issues, user-centred issues and use cases, and socio-economic
and legal aspects. These were assessed by two central studies: firstly, a concerted vision of the functional breakdown
of a generic multimedia search engine, and secondly, representative use-case descriptions with the related discussion on
requirements for technological challenges. Both studies were carried out in cooperation and consultation with the
community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our
Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as
national initiative coordinators. Based on the feedback obtained, we identified two types of gaps, namely core
technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research
challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal
challenges.