Models of everywhere revisited: a technological perspective
The concept ‘models of everywhere’ was first introduced in the mid 2000s as a means of reasoning about the
environmental science of a place, changing the nature of the underlying modelling process, from one in which
general model structures are used to one in which modelling becomes a learning process about specific places, in
particular capturing the idiosyncrasies of that place. At one level, this is a straightforward concept, but at another
it is a rich multi-dimensional conceptual framework involving the following key dimensions: models of everywhere,
models of everything and models at all times, being constantly re-evaluated against the most current
evidence. This is a compelling approach with the potential to deal with epistemic uncertainties and nonlinearities.
However, the approach has, as yet, not been fully utilised or explored. This paper examines the
concept of models of everywhere in the light of recent advances in technology. The paper argues that, when first
proposed, technology was a limiting factor, but now, with advances in areas such as the Internet of Things, cloud
computing and data analytics, many of those barriers have been removed. Consequently, it is timely to look again
at the concept of models of everywhere in practical conditions as part of a trans-disciplinary effort to tackle the
remaining research questions. The paper concludes by identifying the key elements of a research agenda that
should underpin such experimentation and deployment.
On Identifying Disaster-Related Tweets: Matching-based or Learning-based?
Social media platforms such as Twitter are emerging as contributors to
situational awareness during disasters. Information shared on Twitter by both
the affected population (e.g., requesting assistance, issuing warnings) and
those outside the impact zone (e.g., offering assistance) can help first
responders, decision makers, and the public understand the situation first-hand.
Effective use of such information requires timely selection and analysis of
tweets that are relevant to a particular disaster. Even though abundant tweets
are a promising data source, it is challenging to automatically identify
relevant messages, since tweets are short and unstructured, which leads to
unsatisfactory classification performance from conventional learning-based
approaches. Thus, we propose a simple yet effective algorithm to identify
relevant messages based on matching keywords and hashtags, and provide a
comparison between matching-based and learning-based approaches. To evaluate
the two approaches, we put them into a framework specifically proposed for
analyzing disaster-related tweets. Analysis results on eleven datasets with
various disaster types show that our technique provides relevant tweets of
higher quality and yields more interpretable results in sentiment analysis
tasks than the learning-based approach.
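A minimal sketch of such a matching-based filter is given below; the keyword and hashtag lists and the `is_relevant` helper are illustrative assumptions, not the algorithm reported in the paper.

```python
import re

# Illustrative disaster-related keywords and hashtags (assumed, not the
# paper's actual lexicon); a real deployment would curate these per event.
KEYWORDS = {"flood", "earthquake", "evacuate", "rescue", "damage", "shelter"}
HASHTAGS = {"#flood", "#earthquake", "#prayfor", "#emergency"}

TOKEN_RE = re.compile(r"#?\w+")

def is_relevant(tweet_text: str) -> bool:
    """Return True if the tweet mentions any disaster keyword or hashtag."""
    tokens = [t.lower() for t in TOKEN_RE.findall(tweet_text)]
    words = {t.lstrip("#") for t in tokens}
    tags = {t for t in tokens if t.startswith("#")}
    return bool(words & KEYWORDS or tags & HASHTAGS)

tweets = [
    "Roads closed near the river, please evacuate now #flood",
    "Great pizza downtown tonight!",
]
print([t for t in tweets if is_relevant(t)])  # only the first tweet matches
```

The appeal of this style of filter is its transparency: every retained tweet can be traced to the exact keyword or hashtag that matched it.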
Social Sensing of Floods in the UK
"Social sensing" is a form of crowd-sourcing that involves systematic
analysis of digital communications to detect real-world events. Here we
consider the use of social sensing for observing natural hazards. In
particular, we present a case study that uses data from a popular social media
platform (Twitter) to detect and locate flood events in the UK. In order to
improve data quality, we apply a number of filters (timezone, simple text
filters and a naive Bayes 'relevance' filter) to the data. We then use place
names in the user profile and message text to infer the location of the tweets.
These two steps remove most of the irrelevant tweets and yield orders of
magnitude more located tweets than we would obtain by relying on geo-tagged
data alone. We demonstrate that high-resolution social sensing of floods is
feasible and that we can produce high-quality historical and real-time maps of
floods using Twitter.
Big data analytics: Computational intelligence techniques and application areas
Big Data has a significant impact on developing functional smart cities and supporting modern societies. In this paper, we investigate the importance of Big Data in modern life and economy, and discuss the challenges arising from Big Data utilization. Different computational intelligence techniques have been considered as tools for Big Data analytics. We also explore the powerful combination of Big Data and Computational Intelligence (CI) and identify a number of areas where novel applications in real-world smart city problems can be developed by utilizing these powerful tools and techniques. We present a case study for intelligent transportation in the context of a smart city, and a novel data modelling methodology based on a biologically inspired universal generative modelling approach called the Hierarchical Spatial-Temporal State Machine (HSTSM). We further discuss various implications of policy, protection, valuation and commercialization related to Big Data, its applications and deployment.
Using Physical and Social Sensors in Real-Time Data Streaming for Natural Hazard Monitoring and Response
Technological breakthroughs in computing over the last few decades have resulted in important advances in natural hazards analysis. In particular, integration of a wide variety of information sources, including observations from spatially-referenced physical sensors and new social media sources, enables better estimates of real-time hazard. The main goal of this work is to utilize innovative streaming algorithms for improved real-time seismic hazard analysis by integrating different data sources and processing tools into cloud applications. In streaming algorithms, a sequence of items from physical and social sensors can be processed in as little as one pass with no need to store the data locally. Massive data volumes can be analyzed in near-real time with reasonable limits on storage space, an important advantage for natural hazard analysis.
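As an illustration of the single-pass processing described above, the sketch below maintains a running mean and variance of incoming readings using Welford's online algorithm; the stream contents and class names are assumptions for illustration, not the algorithms used in the study.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class RunningStats:
    """Welford's single-pass algorithm: summary statistics without storing the stream."""
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0  # running sum of squared deviations from the current mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

def summarize(readings: Iterable[float]) -> RunningStats:
    stats = RunningStats()
    for value in readings:  # one pass; nothing is retained except the summary
        stats.update(value)
    return stats

# Hypothetical ground-motion amplitudes arriving from a sensor stream.
stats = summarize([0.01, 0.03, 0.02, 0.12, 0.08])
print(stats.mean, stats.variance)
```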
Seismic hazard maps are used by policymakers to set earthquake-resistant construction standards, by insurance companies to set insurance rates and by civil engineers to estimate stability and damage potential. This research first focuses on improving probabilistic seismic hazard map production. The result is a series of maps for different frequency bands at significantly increased resolution and with much lower latency, including a range of high-resolution sensitivity tests.
Second, a method is developed for real-time earthquake intensity estimation using joint streaming analysis from physical and social sensors. Automatically calculated intensity estimates from physical sensors such as seismometers use empirical relationships between ground motion and intensity, while those from social sensors employ questionnaires that evaluate ground shaking levels based on personal observations. Neither is always sufficiently precise and/or timely. Results demonstrate that joint processing can significantly reduce the response time to a damaging earthquake and estimate preliminary intensity levels during the first ten minutes after an event. The combination of social media and network sensor data, in conjunction with innovative computing algorithms, provides a new paradigm for real-time earthquake detection, facilitating rapid and inexpensive risk reduction. In particular, streaming algorithms are an efficient method that addresses three major problems in hazard estimation by improving resolution, decreasing processing latency to near real-time standards and providing more accurate results through the integration of multiple data sets.
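One simple way such joint processing might be sketched is a precision-weighted average of the two intensity estimates; the function, weights and numbers below are purely illustrative assumptions and do not reproduce the empirical relationships used in this work.

```python
def fuse_intensity(physical_est: float, physical_var: float,
                   social_est: float, social_var: float) -> float:
    """Precision-weighted average of two independent intensity estimates.

    Purely illustrative: the actual ground-motion and questionnaire
    relationships from the study are not reproduced here.
    """
    w_phys = 1.0 / physical_var
    w_soc = 1.0 / social_var
    return (w_phys * physical_est + w_soc * social_est) / (w_phys + w_soc)

# Hypothetical example: seismometer-derived intensity 6.2 (tight uncertainty)
# and crowd-sourced intensity 5.5 (looser uncertainty).
print(round(fuse_intensity(6.2, 0.2, 5.5, 0.8), 2))
```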