End-to-End Privacy for Open Big Data Markets
The idea of an open data market envisions the creation of a data trading
model to facilitate exchange of data between different parties in the Internet
of Things (IoT) domain. The data collected by IoT products and solutions are
expected to be traded in these markets. Data owners will collect data using IoT
products and solutions. Data consumers who are interested will negotiate with
the data owners to get access to such data. Data captured by IoT products will
allow data consumers to further understand the preferences and behaviours of
data owners and to generate additional business value using different
techniques ranging from waste reduction to personalized service offerings. In
open data markets, data consumers will be able to give back part of the
additional value generated to the data owners. However, privacy becomes a
significant issue when data that can be used to derive extremely personal
information is being traded. This paper discusses why privacy matters in the
IoT domain in general and especially in open data markets and surveys existing
privacy-preserving strategies and design techniques that can be used to
facilitate end-to-end privacy for open data markets. We also highlight some of
the major research challenges that need to be addressed in order to make the
vision of open data markets a reality by ensuring the privacy of
stakeholders.
Comment: Accepted to be published in IEEE Cloud Computing Magazine: Special Issue Cloud Computing and the La
SuperPCA: A Superpixelwise PCA Approach for Unsupervised Feature Extraction of Hyperspectral Imagery
As an unsupervised dimensionality reduction method, principal component
analysis (PCA) has been widely considered as an efficient and effective
preprocessing step for hyperspectral image (HSI) processing and analysis tasks.
It takes each band as a whole and globally extracts the most representative
bands. However, different homogeneous regions correspond to different objects,
whose spectral features are diverse. It is obviously inappropriate to carry out
dimensionality reduction through a unified projection for an entire HSI. In
this paper, a simple but very effective superpixelwise PCA approach, called
SuperPCA, is proposed to learn the intrinsic low-dimensional features of HSIs.
In contrast to classical PCA models, SuperPCA has four main properties. (1)
Unlike the traditional PCA method based on a whole image, SuperPCA takes into
account the diversity in different homogeneous regions, that is, different
regions should have different projections. (2) Most of the conventional feature
extraction models cannot directly use the spatial information of HSIs, while
SuperPCA is able to incorporate the spatial context information into the
unsupervised dimensionality reduction by superpixel segmentation. (3) Since the
regions obtained by superpixel segmentation have homogeneity, SuperPCA can
extract potential low-dimensional features even under noise. (4) Although
SuperPCA is an unsupervised method, it can achieve competitive performance when
compared with supervised approaches. The resulting features are discriminative,
compact, and noise resistant, leading to improved HSI classification
performance. Experiments on three public datasets demonstrate that the SuperPCA
model significantly outperforms the conventional PCA based dimensionality
reduction baselines for HSI classification. The Matlab source code is available
at https://github.com/junjun-jiang/SuperPCA
Comment: 13 pages, 10 figures, Accepted by IEEE TGR
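To make the region-wise idea concrete, the following is a minimal sketch, not the authors' released Matlab code, of fitting a separate PCA to the pixels of each superpixel. The array names `hsi` and `segments`, and the use of scikit-learn, are illustrative assumptions; the label map is assumed to come from any superpixel segmentation computed beforehand.

```python
# Minimal sketch of superpixel-wise PCA (illustrative, not the authors' implementation).
# Assumes `hsi` is an (H, W, B) hyperspectral cube and `segments` is an (H, W)
# superpixel label map produced beforehand.
import numpy as np
from sklearn.decomposition import PCA

def superpixelwise_pca(hsi, segments, n_components=10):
    """Learn and apply a separate PCA projection inside each superpixel."""
    H, W, B = hsi.shape
    reduced = np.zeros((H, W, n_components), dtype=np.float32)
    for label in np.unique(segments):
        mask = segments == label                      # pixels of one homogeneous region
        spectra = hsi[mask]                           # (n_pixels, B) spectra of that region
        k = min(n_components, spectra.shape[0], B)    # guard very small regions
        pca = PCA(n_components=k)
        reduced[mask, :k] = pca.fit_transform(spectra)  # region-specific projection
    return reduced
```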
An infrastructure service recommendation system for cloud applications with real-time QoS requirement constraints
The proliferation of cloud computing has revolutionized the hosting and delivery of Internet-based application services. However, with the constant launch of new cloud services and capabilities almost every month by both big (e.g., Amazon Web Services and Microsoft Azure) and small companies (e.g., Rackspace and Ninefold), decision makers (e.g., application developers and chief information officers) are likely to be overwhelmed by the choices available. The decision-making problem is further complicated by heterogeneous service configurations and application provisioning QoS constraints. To address this hard challenge, in our previous work, we developed a semiautomated, extensible, and ontology-based approach to infrastructure service discovery and selection based only on design-time constraints (e.g., the renting cost, the data center location, the service feature, etc.). In this paper, we extend our approach to include the real-time (run-time) QoS (the end-to-end message latency and the end-to-end message throughput) in the decision-making process. The hosting of next-generation applications in the domain of online interactive gaming, large-scale sensor analytics, and real-time mobile applications on cloud services necessitates the optimization of such real-time QoS constraints for meeting service-level agreements. To this end, we present a real-time QoS-aware multicriteria decision-making technique that builds on the well-known analytic hierarchy process method. The proposed technique is applicable to selecting Infrastructure as a Service (IaaS) cloud offers, and it allows users to define multiple design-time and real-time QoS constraints or requirements. These requirements are then matched against our knowledge base to compute the possible best-fit combinations of cloud services at the IaaS layer. We conducted extensive experiments to prove the feasibility of our approach.
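As a rough illustration of the analytic-hierarchy-process step described above, the sketch below derives criterion weights from a pairwise comparison matrix and ranks candidate IaaS offers. The criteria, matrix entries, and per-offer scores are invented for the example and are not taken from the paper's knowledge base.

```python
# Illustrative AHP-style ranking of IaaS offers (not the paper's system).
import numpy as np

def ahp_weights(pairwise):
    """Derive criterion weights as the normalized principal eigenvector
    of a positive reciprocal pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

# Hypothetical criteria: renting cost, data-center location fit,
# end-to-end message latency, end-to-end message throughput.
pairwise = np.array([
    [1,   3,   1/2, 2],
    [1/3, 1,   1/4, 1/2],
    [2,   4,   1,   3],
    [1/2, 2,   1/3, 1],
], dtype=float)
weights = ahp_weights(pairwise)

# Normalized per-criterion scores: one row per candidate IaaS offer (made-up values).
scores = np.array([
    [0.7, 0.9, 0.4, 0.6],
    [0.5, 0.6, 0.9, 0.8],
    [0.9, 0.3, 0.6, 0.5],
])
ranking = scores @ weights            # weighted sum; higher is better
best = int(np.argmax(ranking))
print(f"Best offer index: {best}, score {ranking[best]:.3f}")
```

In a fuller treatment, the consistency ratio of the comparison matrix would also be checked before trusting the derived weights.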
A cloud-based remote sensing data production system
The data processing capability of existing remote sensing systems has not kept pace with the amount of data typically received and needing to be processed. Existing product services are also not capable of providing users with a variety of remote sensing data sources to select from. Therefore, in this paper, we present a product generation programme that uses multisource remote sensing data across distributed data centers in a cloud environment, so as to compensate for the low production efficiency, limited product types and simple services of existing systems. The programme adopts a “master–slave” architecture. Specifically, the master center is mainly responsible for receiving and parsing production orders, as well as task and data scheduling, results feedback, and so on; the slave centers are the distributed remote sensing data centers, which store one or more types of remote sensing data and are mainly responsible for executing production tasks. In general, each production task runs on only one data center, and data scheduling among centers adopts a “minimum data transferring” strategy. The logical workflow of each production task is organized based on a knowledge base and then turned into the actual executed workflow by Kepler. In addition, the scheduling strategy of each production task mainly depends on Ganglia monitoring results, so computing resources can be allocated or expanded adaptively. Finally, we evaluated the proposed programme using test experiments performed at global, regional and local scales, and the results showed that our proposed cloud-based remote sensing production system can deal with massive remote sensing data and generate different products, as well as provide on-demand remote sensing computing and information services.
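A toy sketch of the kind of “minimum data transferring” placement decision described above is shown below. The data-center names, dataset identifiers and sizes are illustrative assumptions, and the actual scheduler additionally weighs Ganglia resource-monitoring results when allocating tasks.

```python
# Toy sketch of a "minimum data transferring" placement decision (illustrative only;
# the real scheduler also considers Ganglia resource-monitoring results).
from typing import Dict, Set

def pick_center(required: Dict[str, int], holdings: Dict[str, Set[str]]) -> str:
    """Choose the data center that minimizes the volume of remote sensing data
    (here in GB) that would have to be transferred in for a production task."""
    def transfer_cost(center: str) -> int:
        missing = set(required) - holdings[center]    # datasets the center lacks
        return sum(required[d] for d in missing)
    return min(holdings, key=transfer_cost)

# Hypothetical datasets a product needs, with their sizes in GB.
required = {"MODIS-NDVI": 120, "Landsat8-SR": 300, "GF-1-PMS": 80}
holdings = {
    "center-A": {"MODIS-NDVI", "Landsat8-SR"},
    "center-B": {"Landsat8-SR", "GF-1-PMS", "Sentinel2-L2A"},
}
print(pick_center(required, holdings))   # -> center-A (only 80 GB to move)
```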
An efficient online direction-preserving compression approach for trajectory streaming data
Online trajectory compression is an important method of efficiently managing massive volumes of trajectory streaming data. Current online trajectory methods generally do not preserve direction information and lack the high computing performance needed for fast compression. Aiming to solve these problems, this paper first proposes an online direction-preserving simplification method for trajectory streaming data, online DPTS, obtained by modifying an offline direction-preserving trajectory simplification (DPTS) method. We further propose an optimized version of online DPTS called online DPTS+, which employs a data structure called the bound quadrant system (BQS) to reduce the compression time of online DPTS. To provide an even more efficient way to reduce compression time, this paper explores the feasibility of using contemporary general-purpose computing on a graphics processing unit (GPU). The GPU-aided approach parallelizes the major computing part of online DPTS+, the SP-theo algorithm. The results show that, while maintaining a comparable compression error and compression rate, (1) online DPTS outperforms offline DPTS with up to 21% compression time, (2) the online DPTS+ algorithm is 3.95 times faster than online DPTS, and (3) the GPU-aided method can significantly reduce the time for graph construction and for finding the shortest path, with average speedups of 31.4 and 7.88, respectively. The current approach provides a new tool for fast compression of online trajectory streaming data.
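The following is a much-simplified sketch of the direction-preserving idea: a buffered point may be dropped as long as the heading of every skipped segment stays within an angular tolerance of the single segment that would replace them. The greedy policy and function names are illustrative assumptions; the actual DPTS/BQS algorithms use a tighter error bound and the bound quadrant system for speed.

```python
# Simplified illustration of direction-preserving online simplification
# (a greedy sketch, not the DPTS/BQS algorithm itself).
import math

def direction(p, q):
    """Heading of the segment p -> q, in radians."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def angular_diff(a, b):
    """Smallest absolute difference between two headings."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def simplify_stream(points, eps=math.radians(15)):
    """Extend the current approximating segment while every skipped
    segment's heading stays within eps of it; otherwise commit a point."""
    it = iter(points)
    kept = [next(it)]
    buffered = []                                  # points skipped since the last kept point
    for p in it:
        approx_dir = direction(kept[-1], p)        # heading of the candidate segment
        chain = [kept[-1]] + buffered + [p]
        if all(angular_diff(approx_dir, direction(a, b)) <= eps
               for a, b in zip(chain, chain[1:])):
            buffered.append(p)                     # p is still well represented
        else:
            kept.append(buffered[-1])              # commit the last safe point
            buffered = [p]                         # restart the segment from there
    if buffered:
        kept.append(buffered[-1])                  # always keep the final point
    return kept

traj = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
print(simplify_stream(traj))                       # -> [(0, 0), (3, 0), (3, 2)]
```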
Virtual Environments for multiphysics code validation on Computing Grids
We advocate in this paper the use of grid-based infrastructures that are
designed to offer a seamless approach to expert numerical users, i.e., the
designers of multiphysics applications. It relies on sophisticated computing
environments based on computing grids, connecting heterogeneous computing
resources: mainframes, PC-clusters and workstations running multiphysics codes
and utility software, e.g., visualization tools. The approach is based on
concepts defined by the HEAVEN* consortium. HEAVEN is a European scientific
consortium including industrial partners from the aerospace, telecommunication
and software industries, as well as academic research institutes. Currently,
the HEAVEN consortium works on a project that aims to create advanced services
platforms. It is intended to enable "virtual private grids" supporting various
environments for users manipulating a suitable high-level interface. This will
become the basis for future generalized services allowing the integration of
various services without the need to deploy specific grid infrastructures
Investigation into Impact of Ageing on Rubber Component in Used Engine Mount
A 2014 KPMG customer survey report demonstrated the increasing demand for driving comfort and sustainable development. With the longer lifespan of modern vehicles, more attention has been placed on products’ lifetime performance. Ageing of rubber components in the engine mount is known to be one of the key elements related to the compromised driving experience in used vehicles. This thesis investigates how the properties of the rubber component change, and why. Links among the mechanical properties, microstructures and chemical composition of the aged carbon-black-filled vulcanised natural rubber used in a commercial engine mount are to be revealed.
By investigating used engine mounts, the change in stiffness of the rubber was established and identified to be related to post-curing, thermal degradation, oxidative degradation, filler re-agglomeration and loss of additives. Among these ageing mechanisms, the most dominant factors were post-curing and loss of additives, which increased the stiffness of the rubber by 45% in a four-year-old car that had been driven 80 thousand kilometres.
The impact of the acting ageing mechanisms was identified through aerobic and anaerobic artificial ageing experiments. These experiments provided knowledge about how each ageing mechanism progresses in the material and how the mechanisms interact with each other. They also demonstrated the limitations of artificial ageing in simulating certain ageing mechanisms.
This is the first time such a comprehensive investigation has been made to identify the causes of different ageing mechanisms on specimens from real vehicles and to discuss how the ageing mechanisms individually impact the material. It is hoped that this work will provide useful information for industry and other academics in the area when designing rubber-related products or investigating the ageing behaviour of similar materials.
A cumulus project: design and implementation
Cloud computing is emerging as an innovative computing paradigm that aims to provide reliable, customized and QoS-guaranteed computing infrastructures for users. This paper presents our early experience with Cloud computing based on the Cumulus project for compute centers. We introduce the various aspects of the Cumulus project, such as its design pattern, infrastructure, and middleware.
