Big Data and the Internet of Things
Advances in sensing and computing capabilities are making it possible to
embed increasing computing power in small devices. This enables sensing
devices not only to passively capture data at very high resolution but also to
take sophisticated actions in response. Combined with advances in
communication, this is resulting in an ecosystem of highly interconnected
devices referred to as the Internet of Things (IoT). In conjunction, advances
in machine learning have made it possible to build models on these
ever-increasing amounts of data. Consequently, devices ranging from heavy
assets such as aircraft engines to wearables such as health monitors can now
not only generate massive amounts of data but also draw on aggregate analytics
to "improve" their performance over time. Big data analytics has been
identified as a key enabler for the IoT. In this chapter, we discuss various
avenues of the IoT where big data analytics either is already making a
significant impact or is on the cusp of doing so. We also discuss social
implications and areas of concern.
Comment: 33 pages. Draft of an upcoming book chapter in Japkowicz and
Stefanowski (eds.), Big Data Analysis: New Algorithms for a New Society,
Springer Series on Studies in Big Data, to appear.
Sixteen years of bathymetry and waves at San Diego beaches.
Sustained, quantitative observations of nearshore waves and sand levels are essential for testing beach evolution models, but comprehensive datasets are relatively rare. We document beach profiles and concurrent waves monitored at three southern California beaches during 2001-2016. The beaches include offshore reefs, lagoon mouths, hard substrates, and cobble and sandy (medium-grained) sediments. The data span two energetic El Niño winters and four beach nourishments. Quarterly surveys of 165 total cross-shore transects (all sites) at 100 m alongshore spacing were made from the backbeach to 8 m depth. Monthly surveys of the subaerial beach were obtained at alongshore-oriented transects. The resulting dataset consists of (1) raw sand elevation data, (2) gridded elevations, (3) interpolated elevation maps with error estimates, (4) beach widths and subaerial and total sand volumes, (5) locations of hard substrate and beach nourishments, (6) water levels from a NOAA tide gauge, (7) wave conditions from a buoy-driven regional wave model, and (8) time periods and reaches with alongshore-uniform bathymetry, suitable for testing 1-dimensional beach profile change models.
Dynamic Energy Management
We present a unified method, based on convex optimization, for managing the
power produced and consumed by a network of devices over time. We start with
the simple setting of optimizing power flows in a static network, and then
proceed to the case of optimizing dynamic power flows, i.e., power flows that
change with time over a horizon. We leverage this to develop a real-time
control strategy, model predictive control, which at each time step solves a
dynamic power flow optimization problem, using forecasts of future quantities
such as demands, capacities, or prices, to choose the current power flow
values. Finally, we consider a useful extension of model predictive control
that explicitly accounts for uncertainty in the forecasts. We mirror our
framework with an object-oriented software implementation, an open-source
Python library for planning and controlling power flows at any scale. We
demonstrate our method with various examples. Appendices give more detail about
the package, and describe some basic but very effective methods for
constructing forecasts from historical data.
Comment: 63 pages, 15 figures, accompanying open-source library.
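The static power-flow optimization the abstract starts from can be illustrated with a toy economic-dispatch problem: several generators with convex (quadratic) costs must jointly meet a demand. This is a minimal sketch under assumed cost parameters, not the paper's library; it exploits the KKT condition that all unsaturated generators operate at a common marginal price, found here by bisection.

```python
# Minimal sketch of static economic dispatch as a convex problem, solved
# by bisection on the shared marginal price (a KKT condition). Generators
# and costs are hypothetical, for illustration only.

def dispatch(gens, demand, tol=1e-9):
    """gens: list of (a, p_max) with cost a*p**2 and 0 <= p <= p_max."""
    def output(lam):
        # At marginal price lam, each generator produces p = lam/(2a), clipped.
        return [min(max(lam / (2 * a), 0.0), p_max) for a, p_max in gens]

    lo = 0.0
    hi = 2 * max(a for a, _ in gens) * demand + 2 * max(a * p for a, p in gens)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(output(mid)) < demand:
            lo = mid            # price too low: total output short of demand
        else:
            hi = mid
    return output(hi)

p = dispatch([(1.0, 10.0), (2.0, 10.0)], demand=6.0)
# Cheaper generator (smaller a) carries more load: p = [4.0, 2.0]
```

The dynamic and model-predictive variants in the abstract repeat this kind of solve over a time horizon, re-planning at each step with updated forecasts.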
Measurements of electron density and temperature in the H-1 heliac plasma by helium line intensity ratios
Electron density and temperature distributions in the H-1 heliac plasma are measured using the helium line intensity ratio technique based on a collisional-radiative model. An inversion approach with minimum Fisher regularization is developed to reconstruct the ratios of the local emission radiances from detected line-integrated intensities. The electron density and temperature inferred from the He I 667.8/728.1 and He I 728.1/706.5 nm line ratios are in good agreement with those from other diagnostic techniques in the inner region of the plasma. In the outer region of the plasma, however, the inferred electron density and temperature are somewhat higher. Some possible causes of this discrepancy are discussed.
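The core of the line-ratio technique is an inversion of a calibration curve: the collisional-radiative model predicts the intensity ratio as a function of electron temperature, and a measured ratio then fixes the temperature by inverting that curve. The sketch below uses an entirely hypothetical monotonic calibration table (real CR-model ratios also depend on electron density) just to show the lookup step.

```python
# Hedged sketch of the line-ratio lookup. The calibration table below is
# hypothetical, NOT from a real collisional-radiative model; real He I
# ratios depend on both electron temperature and density.

CAL = [(5.0, 0.20), (10.0, 0.35), (20.0, 0.55), (40.0, 0.70)]  # (Te [eV], ratio)

def te_from_ratio(r):
    """Invert the monotonic calibration curve by linear interpolation."""
    for (t0, r0), (t1, r1) in zip(CAL, CAL[1:]):
        if r0 <= r <= r1:
            return t0 + (t1 - t0) * (r - r0) / (r1 - r0)
    raise ValueError("ratio outside calibrated range")

te = te_from_ratio(0.45)   # measured ratio -> inferred Te
```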
Cadmium hyperaccumulation protects Thlaspi caerulescens from leaf feeding damage by thrips (Frankliniella occidentalis)
Metal hyperaccumulation has been proposed as a plant defensive strategy. Here, we investigated whether cadmium (Cd) hyperaccumulation protected Thlaspi caerulescens from leaf feeding damage by thrips (Frankliniella occidentalis). Two ecotypes differing in Cd accumulation, Ganges (high) and Prayon (low), were grown in compost amended with 0-1000 mg Cd kg^-1 in two experiments under glasshouse conditions. F2 and F3 plants from the Prayon x Ganges crosses were grown with 5 mg Cd kg^-1. Plants were naturally colonized by thrips and the leaf feeding damage index (LFDI) was assessed. The LFDI decreased significantly with increasing Cd in both ecotypes, and correlated with shoot Cd concentration in a log-linear fashion. Prayon was more attractive to thrips than Ganges, but the ecotypic difference in the LFDI was largely accounted for by the shoot Cd concentration. In the F2 and F3 plants, the LFDI correlated significantly and negatively with shoot Cd, but not with shoot zinc (Zn) or sulphur (S) concentrations. We conclude that Cd hyperaccumulation deters thrips from feeding on T. caerulescens leaves, which may offer an adaptive benefit to the plant.
A synthetic alanyl-initiator tRNA with initiator tRNA properties as determined by fluorescence measurements: comparison to synthetic alanyl-elongator tRNA
Upper bounds for the secure key rate of decoy state quantum key distribution
The use of decoy states in quantum key distribution (QKD) has provided a
method for substantially increasing the secret key rate and distance that can
be covered by QKD protocols with practical signals. The security analysis of
these schemes, however, leaves open the possibility that the development of
better proof techniques, or better classical post-processing methods, might
further improve their performance in realistic scenarios. In this paper, we
derive upper bounds on the secure key rate for decoy state QKD. These bounds
rely essentially only on the classical correlations established by the
legitimate users during the quantum communication phase of the protocol. The
only assumption about the possible post-processing methods is that double click
events are randomly assigned to single click events. Further, we consider only
secure key rates based on the uncalibrated device scenario which assigns
imperfections such as detection inefficiency to the eavesdropper. Our analysis
relies on two preconditions for secure two-way and one-way QKD: The legitimate
users need to prove that there exists no separable state (in the case of
two-way QKD), or that there exists no quantum state having a symmetric
extension (one-way QKD), that is compatible with the available measurement
results. Both criteria have been previously applied to evaluate single-photon
implementations of QKD. Here we use them to investigate a realistic source of
weak coherent pulses. The resulting upper bounds can be formulated as a convex
optimization problem known as a semidefinite program, which can be solved
efficiently. For the standard four-state QKD protocol, they are quite close to
known
lower bounds, thus showing that there are clear limits to the further
improvement of classical post-processing techniques in decoy state QKD.
Comment: 10 pages, 3 figures.
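For contrast with the upper bounds above, the standard decoy-state *lower* bound on the single-photon yield (weak + vacuum decoy method of Ma et al., PRA 72, 012326) is easy to check numerically. The sketch below simulates a lossy channel with assumed transmittance and dark-count parameters, computes the gains of a signal and a decoy intensity, and verifies that the bound stays below the true single-photon yield; it is an illustration of decoy-state estimation, not the paper's semidefinite program.

```python
import math

# Hedged illustration (not the paper's SDP): the weak+vacuum decoy-state
# lower bound on the single-photon yield Y1 (Ma et al., PRA 72, 012326),
# checked against a simulated lossy channel. Channel parameters are assumed.

eta, Y0 = 0.1, 1e-5              # channel transmittance, dark-count yield
mu, nu = 0.5, 0.1                # signal and decoy mean photon numbers

def yield_n(n):                  # n-photon yield for this toy channel model
    return Y0 + 1.0 - (1.0 - eta) ** n

def gain(m):                     # overall gain Q_m = sum_n Poisson(m, n) * Y_n
    return sum(math.exp(-m) * m**n / math.factorial(n) * yield_n(n)
               for n in range(60))

Qmu, Qnu = gain(mu), gain(nu)
Y1_lower = (mu / (mu * nu - nu**2)) * (
    Qnu * math.exp(nu)
    - Qmu * math.exp(mu) * nu**2 / mu**2
    - (mu**2 - nu**2) / mu**2 * Y0)
Y1_true = yield_n(1)
# The bound is valid (Y1_lower <= Y1_true) and reasonably tight here.
```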
Introducing PHAEDRA: a new spectral code for simulations of relativistic magnetospheres
We describe a new scheme for evolving the equations of force-free
electrodynamics, the vanishing-inertia limit of magnetohydrodynamics. This
pseudospectral code uses global orthogonal basis function expansions to take
accurate spatial derivatives, allowing the use of an unstaggered mesh and the
complete force-free current density. The method has low numerical dissipation
and diffusion outside of singular current sheets. We present a range of one-
and two-dimensional tests, and demonstrate convergence to both smooth and
discontinuous analytic solutions. As a first application, we revisit the
aligned rotator problem, obtaining a steady solution with resistivity localised
in the equatorial current sheet outside the light cylinder.
Comment: 23 pages, 18 figures, accepted for publication in MNRAS.
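The key property the abstract exploits, taking spatial derivatives via a global basis-function expansion rather than finite differences, can be shown in miniature with a Fourier basis: transform to coefficient space, multiply mode k by ik, and transform back. This toy (naive DFT, pure Python) is only an illustration of the pseudospectral idea, not the code described in the paper.

```python
import cmath
import math

# Toy pseudospectral derivative: expand a 2*pi-periodic function in a
# Fourier basis, differentiate exactly in coefficient space, return to
# grid values. Naive O(n^2) DFT, fine for a demonstration.

def spectral_derivative(f_vals):
    n = len(f_vals)
    # forward DFT: c_k = (1/n) * sum_j f_j * exp(-2*pi*i*k*j/n)
    coeffs = [sum(f_vals[j] * cmath.exp(-2j * math.pi * k * j / n)
                  for j in range(n)) / n for k in range(n)]
    # differentiate: mode k picks up factor i*k (signed wavenumbers)
    for k in range(n):
        kk = k if k <= n // 2 else k - n
        coeffs[k] *= 1j * kk
    # inverse DFT back to grid values
    return [sum(coeffs[k] * cmath.exp(2j * math.pi * k * j / n)
                for k in range(n)).real for j in range(n)]

n = 16
x = [2 * math.pi * j / n for j in range(n)]
d = spectral_derivative([math.sin(xi) for xi in x])
# d agrees with cos(x) to near machine precision (spectral accuracy)
```

For smooth functions the error decays faster than any power of n, which is why such schemes show the low numerical dissipation and diffusion the abstract highlights.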
On the Necessary Memory to Compute the Plurality in Multi-Agent Systems
We consider the Relative-Majority Problem (also known as Plurality), in
which, given a multi-agent system where each agent is initially provided an
input value out of a set of possible ones, each agent is required to
eventually compute the input value with the highest frequency in the initial
configuration. We consider the problem in the general Population Protocols
model in which, given an underlying undirected connected graph whose nodes
represent the agents, edges are selected by a globally fair scheduler.
The state complexity that is required for solving the Plurality Problem
(i.e., the minimum number of memory states that each agent needs to have in
order to solve the problem) has been a long-standing open problem. The best
protocol so far for the general multi-valued case requires polynomial memory:
Salehkaleybar et al. (2015) devised a protocol that solves the problem by
employing polynomially many states per agent, and they conjectured their upper bound
to be optimal. On the other hand, under the strong assumption that agents
initially agree on a total ordering of the initial input values, Gasieniec et
al. (2017) provided an elegant logarithmic-memory plurality protocol.
In this work, we refute Salehkaleybar et al.'s conjecture by providing a
plurality protocol which employs fewer states per agent. Central to our
result is an ordering protocol, of independent interest, which allows us to
leverage the plurality protocol by Gasieniec et al. We also provide a lower
bound on the memory necessary to solve the problem, proving that the
Plurality Problem cannot be solved within the mere memory necessary to encode
the output.
Comment: 14 pages, accepted at CIAC 201
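The flavor of constant-memory opinion dynamics that these plurality protocols generalize can be seen in the classic three-state "approximate majority" population protocol of Angluin, Aspnes, and Eisenstat for two opinions: on a conflict one agent blanks, and blanks adopt the opinion of whoever they meet. This is a hedged illustration of the model, not a protocol from the paper, run under a uniform random scheduler on the complete graph.

```python
import random

# Three-state approximate-majority population protocol (Angluin et al.),
# simulated on the complete graph with a uniform random scheduler.
# Illustrative only -- not the plurality protocols of the abstract.

def approximate_majority(states, rng, max_steps=1_000_000):
    n = len(states)
    for _ in range(max_steps):
        if len(set(states)) == 1:           # consensus reached
            break
        i, j = rng.sample(range(n), 2)      # scheduler selects an edge
        a, b = states[i], states[j]
        if {a, b} == {"A", "B"}:            # conflict: responder blanks
            states[j] = "U"
        elif a in ("A", "B") and b == "U":  # opinion recruits a blank
            states[j] = a
        elif b in ("A", "B") and a == "U":
            states[i] = b
    return states

rng = random.Random(0)
final = approximate_majority(["A"] * 70 + ["B"] * 30, rng)
# with this initial 70/30 gap, the majority opinion "A" wins w.h.p.
```

Each agent needs only three states here, but the protocol computes majority only approximately and only for two opinions; handling k input values exactly is precisely what drives up the state complexity studied in the paper.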
Systematic study of the 87Sr clock transition in an optical lattice
With ultracold 87Sr confined in a magic wavelength optical lattice, we
present the most precise study (2.8 Hz statistical uncertainty) to date of the
1S0 - 3P0 optical clock transition, with a detailed analysis of
systematic shifts (20 Hz uncertainty) in the absolute frequency measurement of
429 228 004 229 867 Hz. The high resolution permits an investigation of the
optical lattice motional sideband structure. The local oscillator for this
optical atomic clock is a stable diode laser with its Hz-level linewidth
characterized across the optical spectrum using a femtosecond frequency comb.
Comment: 4 pages, 4 figures, 1 table.
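The figures quoted in the abstract translate directly into fractional frequency uncertainties, the usual figure of merit for optical clocks; a quick arithmetic check:

```python
# Arithmetic check on the numbers quoted in the abstract: dividing the
# quoted uncertainties by the ~429 THz transition frequency gives the
# fractional frequency uncertainties.

f0 = 429_228_004_229_867     # measured transition frequency [Hz]
sigma_sys = 20.0             # systematic uncertainty [Hz]
sigma_stat = 2.8             # statistical uncertainty [Hz]

frac_sys = sigma_sys / f0    # ~4.7e-14
frac_stat = sigma_stat / f0  # ~6.5e-15
```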