Context-aware Dynamic Discovery and Configuration of 'Things' in Smart Environments
The Internet of Things (IoT) is a dynamic global information network
consisting of Internet-connected objects, such as RFIDs, sensors, actuators, as
well as other instruments and smart appliances that are becoming an integral
component of the future Internet. Such Internet-connected objects, or
`things', already outnumber both the people and the computers connected to the
Internet, and their population is expected to reach 50 billion within the next
5 to 10 years.
To be able to develop IoT applications, such `things' must become dynamically
integrated into emerging information networks supported by architecturally
scalable and economically feasible Internet service delivery models, such as
cloud computing. Achieving such integration through discovery and configuration
of `things' is a challenging task. Towards this end, we propose a Context-Aware
Dynamic Discovery of Things (CADDOT) model. We have developed a tool, SmartLink,
that is capable of discovering sensors deployed in a particular location despite
their heterogeneity. SmartLink helps establish direct communication between
sensor hardware and cloud-based IoT middleware platforms. We address the
challenge of heterogeneity using a plug-in architecture. Our
prototype tool is developed on an Android platform. Further, we employ the
Global Sensor Network (GSN) as the IoT middleware for the proof of concept
validation. The significance of the proposed solution is validated using a
test-bed that comprises 52 Arduino-based Libelium sensors.
Comment: Big Data and Internet of Things: A Roadmap for Smart Environments,
Studies in Computational Intelligence book series, Springer Berlin Heidelberg, 201
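To make the plug-in idea concrete, here is a minimal sketch, in Python rather than the paper's Android/Java setting, of how a discovery loop might try protocol-specific plug-ins against each device it finds. All names here (SensorPlugin, probe, configure) are illustrative assumptions, not APIs from SmartLink or GSN.

```python
# Hypothetical sketch of a plug-in based discovery loop in the spirit of
# SmartLink/CADDOT. Class and method names are illustrative, not the paper's.
from abc import ABC, abstractmethod


class SensorPlugin(ABC):
    """One plug-in per sensor family; hides protocol-specific details."""

    @abstractmethod
    def probe(self, address: str) -> bool:
        """Return True if the device at `address` speaks this plug-in's protocol."""

    @abstractmethod
    def configure(self, address: str, middleware_url: str) -> dict:
        """Register the device with the IoT middleware and return its metadata."""


class Discovery:
    """Tries each registered plug-in against every address found on the network."""

    def __init__(self, plugins: list[SensorPlugin], middleware_url: str):
        self.plugins = plugins
        self.middleware_url = middleware_url

    def discover(self, addresses: list[str]) -> list[dict]:
        configured = []
        for address in addresses:
            for plugin in self.plugins:
                # First plug-in that recognises the device handles it.
                if plugin.probe(address):
                    configured.append(plugin.configure(address, self.middleware_url))
                    break
        return configured
```

A plug-in per protocol family (e.g. ZigBee, Bluetooth, Wi-Fi) can then be added without changing the discovery loop, which is the essence of handling heterogeneity this way.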
Managing Service-Heterogeneity using Osmotic Computing
Computational resource provisioning that is closer to a user is becoming
increasingly important, with a rise in the number of devices making continuous
service requests and with the significant recent take up of latency-sensitive
applications, such as streaming and real-time data processing. Fog computing
provides a solution to such types of applications by bridging the gap between
the user and public/private cloud infrastructure via the inclusion of a "fog"
layer. Such an approach can reduce overall processing latency, but issues
remain around redundancy, cost-effective use of the computing infrastructure,
and handling services that differ in their characteristics. This difference in
service characteristics, arising from variations in computational resource and
process requirements, is termed service heterogeneity. A potential solution to
these issues is the use of Osmotic Computing -- a recently introduced paradigm
that divides services according to their resource usage, using parameters such
as energy, load, and processing time on a data center versus a network edge
resource.
Service provisioning can then be divided across the layers of a computational
infrastructure, from edge devices through in-transit nodes to a data center,
supported by an Osmotic software layer. In this paper, a
fitness-based Osmosis algorithm is proposed to provide support for osmotic
computing by making more effective use of existing Fog server resources. The
proposed approach is capable of efficiently distributing and allocating
services by following the principle of osmosis. The results are presented using
numerical simulations demonstrating gains in terms of lower allocation time and
a higher probability of services being handled with high resource utilization.
Comment: 7 pages, 4 figures, International Conference on Communication,
Management and Information Technology (ICCMIT 2017), Warsaw, Poland, 3-5
April 2017, http://www.iccmit.net/ (Best Paper Award)
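As a rough illustration of the osmosis principle described above, the following Python sketch scores each service against each resource with a simple fitness function over capacity, latency, and energy, then greedily places services where fitness is highest. The attributes, weights, and greedy strategy are assumptions for illustration, not the paper's actual fitness-based Osmosis algorithm.

```python
# Hypothetical fitness-based allocation sketch: services "flow" to the layer
# (fog vs. cloud) where their fitness is highest. Weights are illustrative.
from dataclasses import dataclass


@dataclass
class Resource:
    name: str             # e.g. "fog-server-1" or "cloud-dc"
    free_capacity: float  # normalised remaining capacity, 0..1
    latency_ms: float     # round-trip latency to the requesting user
    energy_cost: float    # relative energy cost per unit of work


@dataclass
class Service:
    name: str
    demand: float         # normalised capacity it consumes
    latency_sensitive: bool


def fitness(service: Service, resource: Resource) -> float:
    """Higher is better; latency dominates for latency-sensitive services."""
    latency_weight = 2.0 if service.latency_sensitive else 0.5
    return (resource.free_capacity
            - latency_weight * resource.latency_ms / 100.0
            - 0.3 * resource.energy_cost)


def allocate(services: list[Service], resources: list[Resource]) -> dict:
    """Greedy osmosis: each service diffuses to its highest-fitness resource."""
    placement = {}
    for svc in sorted(services, key=lambda s: s.demand, reverse=True):
        feasible = [r for r in resources if r.free_capacity >= svc.demand]
        if not feasible:
            continue  # service stays unallocated in this simple sketch
        best = max(feasible, key=lambda r: fitness(svc, r))
        best.free_capacity -= svc.demand
        placement[svc.name] = best.name
    return placement
```

The greedy loop only illustrates the "flow to the highest-fitness layer" intuition; the paper's algorithm additionally evaluates allocation time and resource utilization across the Fog and cloud layers.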
Challenges of Big Data Analysis
Big Data bring new opportunities to modern society and challenges to data
scientists. On the one hand, Big Data hold great promise for discovering subtle
population patterns and heterogeneities that cannot be detected with small-scale
data. On the other hand, the massive sample size and high dimensionality of Big
Data introduce unique computational and statistical challenges, including
scalability and storage bottleneck, noise accumulation, spurious correlation,
incidental endogeneity, and measurement errors. These challenges are
distinctive and require new computational and statistical paradigms. This
article gives an overview of the salient features of Big Data and how these
features drive paradigm shifts in statistical and computational methods as
well as in computing architectures. We also provide new perspectives on Big
Data analysis and computation. In particular, we emphasize the viability of
the sparsest solution in a high-confidence set and point out that the
exogeneity assumptions in most statistical methods for Big Data cannot be
validated due to incidental endogeneity. Violations of these assumptions can
lead to wrong statistical inferences and, consequently, wrong scientific
conclusions.
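One standard way to formalize "the sparsest solution in a high-confidence set" is the Dantzig-selector form below; the notation is our assumption based on this literature, not quoted from the article.

```latex
% Sparsest solution in a high-confidence set (Dantzig-selector form):
% y is the response, X the n x p design matrix, and \gamma a threshold
% chosen so the constraint set contains the true \beta with high probability.
\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p} \|\beta\|_1
\quad \text{subject to} \quad
\Big\| \tfrac{1}{n} X^{\top} (y - X\beta) \Big\|_{\infty} \le \gamma
```

The constraint defines the high-confidence set; its coverage guarantee rests on an exogeneity assumption of exactly the kind the authors caution cannot be validated under incidental endogeneity.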
Addressing the Challenges in Federating Edge Resources
This book chapter considers how Edge deployments can be brought to bear in a
global context by federating them across multiple geographic regions to create
a global Edge-based fabric that decentralizes data center computation. This is
currently impractical not only because of technical challenges, but also
because of social, legal, and geopolitical issues. In this chapter, we discuss
two key challenges - networking and management in federating Edge deployments.
Additionally, we consider resource and modeling challenges that will need to be
addressed for a federated Edge.
Comment: Book chapter accepted to Fog and Edge Computing: Principles and
Paradigms; Editors Buyya, Sriram
An Integrated Approach for Characterizing Aerosol Climate Impacts and Environmental Interactions
Aerosols exert myriad influences on the earth's environment and climate, and on human health. The complexity of aerosol-related processes requires that information gathered to improve our understanding of climate change must originate from multiple sources, and that effective strategies for data integration need to be established. While a vast array of observed and modeled data are becoming available, the aerosol research community currently lacks the necessary tools and infrastructure to reap maximum scientific benefit from these data. Spatial and temporal sampling differences among a diverse set of sensors, nonuniform data qualities, aerosol mesoscale variabilities, and difficulties in separating cloud effects are some of the challenges that need to be addressed. Maximizing the long-term benefit from these data also requires maintaining consistently well-understood accuracies as measurement approaches evolve and improve. A comprehensive understanding of how aerosol physical, chemical, and radiative processes impact the earth system can be achieved only through a multidisciplinary, inter-agency, and international initiative capable of dealing with these issues. A systematic approach, capitalizing on modern measurement and modeling techniques, geospatial statistics methodologies, and high-performance information technologies, can provide the necessary machinery to support this objective. We outline a framework for integrating and interpreting observations and models, and establishing an accurate, consistent, and cohesive long-term record, following a strategy whereby information and tools of progressively greater sophistication are incorporated as problems of increasing complexity are tackled. This concept is named the Progressive Aerosol Retrieval and Assimilation Global Observing Network (PARAGON). To encompass the breadth of the effort required, we present a set of recommendations dealing with data interoperability; measurement and model integration; multisensor synergy; data summarization and mining; model evaluation; calibration and validation; augmentation of surface and in situ measurements; advances in passive and active remote sensing; and design of satellite missions. Without an initiative of this nature, the scientific and policy communities will continue to struggle with understanding the quantitative impact of complex aerosol processes on regional and global climate change and air quality.
Models of everywhere revisited: a technological perspective
The concept ‘models of everywhere’ was first introduced in the mid-2000s as a means of reasoning about the
environmental science of a place, changing the nature of the underlying modelling process, from one in which
general model structures are used to one in which modelling becomes a learning process about specific places, in
particular capturing the idiosyncrasies of that place. At one level, this is a straightforward concept, but at another
it is a rich multi-dimensional conceptual framework involving the following key dimensions: models of everywhere,
models of everything and models at all times, being constantly re-evaluated against the most current
evidence. This is a compelling approach with the potential to deal with epistemic uncertainties and nonlinearities.
However, the approach has, as yet, not been fully utilised or explored. This paper examines the
concept of models of everywhere in the light of recent advances in technology. The paper argues that, when first
proposed, technology was a limiting factor, but now, with advances in areas such as the Internet of Things, cloud
computing and data analytics, many of the barriers have been lowered. Consequently, it is timely to revisit the
concept of models of everywhere under practical conditions as part of a trans-disciplinary effort to tackle the
remaining research questions. The paper concludes by identifying the key elements of a research agenda that
should underpin such experimentation and deployment.