Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML, elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to help readers clarify the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. (Comment: 46 pages, 22 figures)
3D video coding and transmission
The capture, transmission, and display of 3D content has gained a lot of attention in the last few years. 3D multimedia content is no longer confined to cinema theatres but is being transmitted as stereoscopic video over satellite, shared on Blu-ray(TM) discs, or sent over Internet technologies. Stereoscopic displays are needed at the receiving end, and the viewer must wear special glasses so that the two versions of the video are presented to the human visual system, which then generates the 3D illusion. To be more effective and improve the immersive experience, more views are acquired from a larger number of cameras and presented on different displays, such as autostereoscopic and light-field displays. These multiple views, combined with depth data, also allow enhanced user experiences and new forms of interaction with the 3D content from virtual viewpoints. This type of audiovisual information is represented by a huge amount of data that needs to be compressed and transmitted over bandwidth-limited channels. Part of the COST Action IC1105 "3D Content Creation, Coding and Transmission over Future Media Networks" (3DConTourNet) focuses on this research challenge.
Report of the Higgs Working Group of the Tevatron Run 2 SUSY/Higgs Workshop
This report presents the theoretical analysis relevant for Higgs physics at
the upgraded Tevatron collider and documents the Higgs Working Group
simulations to estimate the discovery reach in Run 2 for the Standard Model and
MSSM Higgs bosons. Based on a simple detector simulation, we have determined
the integrated luminosity necessary to discover the SM Higgs in the mass range
100-190 GeV. The first phase of the Run 2 Higgs search, with a total integrated
luminosity of 2 fb-1 per detector, will provide a 95% CL exclusion sensitivity
comparable to that expected at the end of the LEP2 run. With 10 fb-1 per
detector, this exclusion will extend up to Higgs masses of 180 GeV, and a
tantalizing 3 sigma effect will be visible if the Higgs mass lies below 125
GeV. With 25 fb-1 of integrated luminosity per detector, evidence for SM Higgs
production at the 3 sigma level is possible for Higgs masses up to 180 GeV.
However, the discovery reach is much less impressive for achieving a 5 sigma
Higgs boson signal. Even with 30 fb-1 per detector, only Higgs bosons with
masses up to about 130 GeV can be detected with 5 sigma significance. These
results can also be re-interpreted in the MSSM framework and yield the required
luminosities to discover at least one Higgs boson of the MSSM Higgs sector.
With 5-10 fb-1 of data per detector, it will be possible to exclude at 95% CL
nearly the entire MSSM Higgs parameter space, whereas 20-30 fb-1 is required to
obtain a 5 sigma Higgs discovery over a significant portion of the parameter
space. Moreover, in one interesting region of the MSSM parameter space (at
large tan(beta)), the associated production of a Higgs boson and a b b-bar pair
is significantly enhanced and provides potential for discovering a non-SM-like
Higgs boson in Run 2. (Comment: 185 pages, 124 figures, 55 tables)
Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media, including text, images, 3D graphics, audio and video, are produced, distributed, shared, managed and consumed on-line through various networks, such as the Internet, fibre, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted with a bewildering range of media, services and applications, and with technological innovations concerning media formats, wireless networks, and terminal types and capabilities. There is little evidence that the pace of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so regularly, searching in more than 160 exabytes of content. In the near future these numbers are expected to rise exponentially. Internet content is expected to increase by at least a factor of 6, rising to more than 990 exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged that in the near to mid term the Internet will provide the means to share and distribute (new) multimedia content and services with superior quality and striking flexibility, in a trusted and personalised way, improving citizens’ quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, as well as community networks and the use of peer-to-peer (P2P) overlays, are expected to generate new models of interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and innovative applications “on the move”, such as virtual collaboration environments, personalised services and media, virtual sport groups, on-line gaming and edutainment. In this context, interaction with content, combined with interactive multimedia search capabilities across distributed repositories, opportunistic P2P networks and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6) and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily contributed to this white paper, which aims to describe the status, the state of the art, the challenges and the way ahead in the area of content-aware media delivery platforms.
The LHCb experiment: status and recent results
The LHCb experiment is one of the major research projects at the Large Hadron
Collider. Its acceptance and instrumentation are optimised to perform
high-precision studies of flavour physics and particle production in a unique
kinematic range at unprecedented collision energies. Using large data samples
accumulated in the years 2010-2012, the LHCb collaboration has conducted a
series of measurements providing a sensitive test of the Standard Model and
strengthening our knowledge of flavour physics, QCD and electroweak processes.
The status of the experiment and some of its recent results are presented here. (Comment: 8 pages, 12 figures)