Hypersonic Research Vehicle (HRV) real-time flight test support feasibility and requirements study. Part 2: Remote computation support for flight systems functions
The requirements for the use of remote computation to support HRV flight testing are assessed. First, remote computational requirements were developed to support functions that will eventually be performed onboard operational vehicles of this type. These functions either cannot be performed onboard in the time frame of initial HRV flight test programs, because the technology of airborne computers will not be sufficiently advanced to support the required computational loads, or are not desirable to perform onboard during the flight test program for other reasons. Second, remote computational support that is either required or highly desirable for conducting the flight testing itself is addressed. The use of an Automated Flight Management System, described in conceptual detail, is proposed. Third, autonomous operations are discussed and, finally, unmanned operations.
High-speed detection of emergent market clustering via an unsupervised parallel genetic algorithm
We implement a master-slave parallel genetic algorithm (PGA) with a bespoke
log-likelihood fitness function to identify emergent clusters within price
evolutions. We use graphics processing units (GPUs) to implement a PGA and
visualise the results using disjoint minimal spanning trees (MSTs). We
demonstrate that our GPU PGA, implemented on a commercially available general
purpose GPU, is able to recover stock clusters at sub-second speed, based on a
subset of stocks in the South African market. This represents a pragmatic
choice for low-cost, scalable parallel computing and is significantly faster
than a prototype serial implementation in an optimised C-based
fourth-generation programming language, although the results are not directly
comparable due to compiler differences. Combined with fast online intraday
correlation matrix estimation from high frequency data for cluster
identification, the proposed implementation offers cost-effective,
near-real-time risk assessment for financial practitioners.

Comment: 10 pages, 5 figures, 4 tables. More thorough discussion of implementation.
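The core idea of a genetic algorithm with a log-likelihood fitness over cluster assignments can be sketched serially in Python. This is only an illustration under assumed simplifications: `genetic_cluster` and `log_likelihood` are hypothetical names, the master-slave GPU parallelism (fitness evaluations farmed out to slave threads) is replaced by a plain loop, and a toy per-cluster Gaussian likelihood stands in for the paper's bespoke fitness function.

```python
import math
import random
import statistics

def log_likelihood(assignment, data, k):
    """Toy fitness: Gaussian log-likelihood of data under per-cluster normal fits."""
    ll = 0.0
    for c in range(k):
        members = [x for x, a in zip(data, assignment) if a == c]
        if len(members) < 2:
            ll -= 10.0  # crude penalty for empty or singleton clusters
            continue
        mu = statistics.fmean(members)
        var = statistics.pvariance(members, mu) + 1e-9
        ll += sum(-0.5 * math.log(2 * math.pi * var)
                  - (x - mu) ** 2 / (2 * var) for x in members)
    return ll

def genetic_cluster(data, k=2, pop_size=30, generations=60, seed=0):
    """Evolve cluster assignments; in a master-slave PGA the fitness
    evaluations below would run in parallel on the GPU."""
    rng = random.Random(seed)
    n = len(data)
    pop = [[rng.randrange(k) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: log_likelihood(a, data, k), reverse=True)
        elite = pop[: pop_size // 2]  # elitist selection keeps the best-so-far
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.3:             # point mutation
                child[rng.randrange(n)] = rng.randrange(k)
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda a: log_likelihood(a, data, k))
```

In the GPU setting, each slave would score one chromosome per thread, so the per-generation cost is dominated by a single device-wide fitness pass rather than the serial sort-and-breed loop shown here.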
DynamO: A free O(N) general event-driven molecular-dynamics simulator
Molecular-dynamics algorithms for systems of particles interacting through
discrete or "hard" potentials are fundamentally different to the methods for
continuous or "soft" potential systems. Although many software packages have
been developed for continuous potential systems, software for discrete
potential systems based on event-driven algorithms is relatively scarce and
specialized. We present DynamO, a general event-driven simulation package which
displays the optimal O(N) asymptotic scaling of the computational cost with the
number of particles N, rather than the O(N log(N)) scaling found in most
standard algorithms. DynamO provides reference implementations of the best
available event-driven algorithms. These techniques allow the rapid simulation
of both complex and large (>10^6 particles) systems for long times. The
performance of the program is benchmarked for elastic hard sphere systems,
homogeneous cooling and sheared inelastic hard spheres, and equilibrium
Lennard-Jones fluids. This software and its documentation are distributed under
the GNU General Public license and can be freely downloaded from
http://marcusbannerman.co.uk/dynamo
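The event-driven scheme the abstract contrasts with fixed-timestep methods can be sketched for the simplest possible case: equal-mass elastic point particles on a line. This toy (with illustrative names `collision_time` and `simulate`) rescans all adjacent pairs before each event; DynamO itself reaches O(N) scaling with event queues and neighbour lists, which are not shown here.

```python
def collision_time(x1, v1, x2, v2):
    """Time until two point particles on a line (x1 < x2) meet, or None if
    they are separating."""
    rel_v = v1 - v2
    if rel_v <= 0:
        return None
    return (x2 - x1) / rel_v

def simulate(positions, velocities, t_end):
    """Minimal event-driven loop: jump from collision to collision instead of
    advancing in fixed time steps."""
    pos, vel = list(positions), list(velocities)
    t = 0.0
    while True:
        # Predict the next collision among adjacent pairs; ordering along the
        # line is preserved between events for equal-mass point particles.
        next_event = None
        for i in range(len(pos) - 1):
            dt = collision_time(pos[i], vel[i], pos[i + 1], vel[i + 1])
            if dt is not None and (next_event is None or t + dt < next_event[0]):
                next_event = (t + dt, i)
        if next_event is None or next_event[0] > t_end:
            # No more events before t_end: free-stream everyone to the end.
            pos = [x + v * (t_end - t) for x, v in zip(pos, vel)]
            return pos, vel
        t_next, i = next_event
        pos = [x + v * (t_next - t) for x, v in zip(pos, vel)]
        t = t_next
        vel[i], vel[i + 1] = vel[i + 1], vel[i]  # equal-mass elastic exchange
```

Between events the motion is exact ballistic streaming, which is why hard-potential simulators incur no integration error, only the cost of event prediction and scheduling.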
Experimental quantum key distribution with simulated ground-to-satellite photon losses and processing limitations
Quantum key distribution (QKD) has the potential to improve communications
security by offering cryptographic keys whose security relies on the
fundamental properties of quantum physics. The use of a trusted quantum
receiver on an orbiting satellite is the most practical near-term solution to
the challenge of achieving long-distance (global-scale) QKD, currently limited
to a few hundred kilometers on the ground. This scenario presents unique
challenges, such as high photon losses and restricted classical data
transmission and processing power due to the limitations of a typical satellite
platform. Here we demonstrate the feasibility of such a system by implementing
a QKD protocol, with optical transmission and full post-processing, in the
high-loss regime using minimized computing hardware at the receiver. Employing
weak coherent pulses with decoy states, we demonstrate the production of secure
key bits at up to 56.5 dB of photon loss. We further illustrate the feasibility
of a satellite uplink by generating secure key while experimentally emulating
the varying channel losses predicted for realistic low-Earth-orbit satellite
passes at 600 km altitude. With a 76 MHz source and including finite-size
analysis, we extract 3374 bits of secure key from the best pass. We also
illustrate the potential benefit of combining multiple passes together: while
one suboptimal "upper-quartile" pass produces no finite-sized key with our
source, the combination of three such passes allows us to extract 165 bits of
secure key. Alternatively, we find that by increasing the signal rate to 300
MHz it would be possible to extract 21570 bits of secure finite-sized key in
just a single upper-quartile pass.

Comment: 12 pages, 7 figures, 2 tables.
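The interplay between channel loss, sifted-key rate, and error correction can be illustrated with a toy asymptotic estimate. This is not the paper's decoy-state finite-size analysis: `sifted_rate` and `secret_fraction` are hypothetical helper names, and the formulas are textbook BB84 asymptotic expressions under assumed simplifications (no dark counts, a fixed error-correction inefficiency).

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def sifted_rate(pulse_rate_hz, mean_photon_number, loss_db, sift_factor=0.5):
    """Detected-and-sifted bits per second through a lossy channel
    (weak coherent pulses, dark counts neglected)."""
    transmittance = 10 ** (-loss_db / 10)
    detection_prob = 1 - math.exp(-mean_photon_number * transmittance)
    return pulse_rate_hz * detection_prob * sift_factor

def secret_fraction(qber, error_correction_efficiency=1.16):
    """Asymptotic BB84 secret fraction: 1 - f*H2(e) - H2(e),
    with no finite-size penalty terms."""
    r = 1 - error_correction_efficiency * h2(qber) - h2(qber)
    return max(r, 0.0)
```

Multiplying the two gives a rough secure-bit rate; the steep exponential dependence on loss is why a 56.5 dB channel leaves only a handful of secure bits per second even at a 76 MHz source rate, and why finite-size effects dominate short satellite passes.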
A Novel Deep Learning Framework for Internal Gross Target Volume Definition from 4D Computed Tomography of Lung Cancer Patients
In this paper, we study the reliability of a novel deep learning framework for internal gross target volume (IGTV) delineation from four-dimensional computed tomography (4DCT), applied to patients with lung cancer treated by Stereotactic Body Radiation Therapy (SBRT). 77 patients who underwent SBRT followed by 4DCT scans were included in a retrospective study. The IGTV_DL was delineated using a novel deep machine learning algorithm with a linear exhaustive optimal combination framework; for comparison, three other IGTVs based on common methods were also delineated. We compared the relative volume difference (RVI), matching index (MI), and encompassment index (EI) for the above IGTVs. Multiple-parameter regression analysis was then used to assess tumor volume and motion range as clinical factors influencing the variation in MI. Experimental results demonstrated that the deep learning algorithm with the linear exhaustive optimal combination framework has a higher probability of achieving an optimal MI than other currently widely used methods. For patients who, after simple breathing training, kept their respiratory frequency at 10 breaths per minute, the four-phase combination of 0%, 30%, 50%, and 90% can be considered a potential candidate for an optimal combination to synthesize the IGTV at all respiration amplitudes.
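The "linear exhaustive optimal combination" idea, searching over phase subsets for the union that best matches a reference volume, can be sketched with voxel sets. This is an illustration only: `best_phase_combination` is a hypothetical name, and a Jaccard-style overlap is assumed for the matching index, which may differ from the paper's definition.

```python
from itertools import combinations

def matching_index(candidate, reference):
    """Jaccard-style overlap between two voxel sets (assumed MI definition)."""
    if not candidate and not reference:
        return 1.0
    return len(candidate & reference) / len(candidate | reference)

def best_phase_combination(phase_gtvs, reference, n_phases):
    """Exhaustively test unions of n_phases per-phase GTVs and keep the
    combination whose union best matches the reference IGTV."""
    best_mi, best_combo = -1.0, None
    for combo in combinations(sorted(phase_gtvs), n_phases):
        union = set().union(*(phase_gtvs[p] for p in combo))
        mi = matching_index(union, reference)
        if mi > best_mi:
            best_mi, best_combo = mi, combo
    return best_combo, best_mi
```

With ten respiratory phases the number of four-phase combinations is only C(10, 4) = 210, so exhaustive search over candidate unions is cheap; the deep learning component in the paper supplies the per-phase delineations that this search then combines.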
A nonparametric Bayesian approach toward robot learning by demonstration
In past years, many authors have considered the application of machine learning methodologies to effect robot learning by demonstration. Gaussian mixture regression (GMR) is one of the most successful methodologies used for this purpose. A major limitation of GMR models concerns the automatic selection of the proper number of model states, i.e., the number of model component densities. Existing methods, including likelihood- or entropy-based criteria, usually tend to yield noisy model size estimates while imposing heavy computational requirements. Recently, Dirichlet process (infinite) mixture models have emerged as a cornerstone of nonparametric Bayesian statistics and as promising candidates for clustering applications where the number of clusters is unknown a priori. Under this motivation, to resolve the aforementioned issues of GMR-based methods for robot learning by demonstration, in this paper we introduce a nonparametric Bayesian formulation of the GMR model, the Dirichlet process GMR model. We derive an efficient variational Bayesian inference algorithm for the proposed model, and we experimentally investigate its efficacy as a robot learning by demonstration methodology, considering a number of demanding robot learning by demonstration scenarios.
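The GMR prediction step that the abstract builds on can be sketched for one-dimensional input and output. This is a minimal finite-mixture illustration with hypothetical names (`gmr_predict`, component tuples of weight, means, variance, and cross-covariance); the paper's contribution, inferring the number of components via a Dirichlet process prior, is not shown.

```python
import math

def gaussian_pdf(x, mu, var):
    """Univariate normal density."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmr_predict(x, components):
    """GMR conditional mean E[y|x] for 1-D input/output.
    Each component is (weight, mu_x, mu_y, var_x, cov_xy)."""
    # Responsibilities: how much each component explains the query input x.
    resp = [w * gaussian_pdf(x, mx, vx) for (w, mx, my, vx, cxy) in components]
    total = sum(resp)
    if total == 0:
        # Degenerate query far from all components: fall back to the prior mix.
        return sum(w * my for (w, mx, my, vx, cxy) in components)
    y = 0.0
    for r, (w, mx, my, vx, cxy) in zip(resp, components):
        # Each component contributes its local linear regression of y on x.
        y += (r / total) * (my + cxy / vx * (x - mx))
    return y
```

The model-size problem the abstract describes is visible here: the component list is fixed by hand, whereas the Dirichlet process GMR would let the data determine how many local regressions are needed.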
Optimizing memory management for optimistic simulation with reinforcement learning
Simulation is a powerful technique to explore complex scenarios and analyze systems related to a wide range of disciplines. To allow for an efficient exploitation of the available computing power, speculative Time Warp-based Parallel Discrete Event Simulation is universally recognized as a viable solution. In this context, the rollback operation is a fundamental building block to support a correct execution even when causality inconsistencies materialize a posteriori. If this operation is supported via checkpoint/restore strategies, memory management plays a fundamental role in ensuring high performance of the simulation run. With few exceptions, adaptive protocols targeting memory management for Time Warp-based simulations have been mostly based on pre-defined analytic models of the system, expressed as closed-form functions that map the system's state to control parameters. The underlying assumption is that the model itself is optimal. In this paper, we present an approach that exploits reinforcement learning techniques. Rather than assuming an optimal control strategy, we seek to find the optimal strategy through parameter exploration. A value function that captures the history of system feedback is used, and no a-priori knowledge of the system is required. An experimental assessment of the viability of our proposal is also provided for a mobile cellular system simulation.
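The learning scheme described, a value function updated from system feedback with no prior model, can be sketched as an epsilon-greedy bandit over the checkpoint interval. Everything here is illustrative: `rollback_cost` is an assumed toy cost model (frequent checkpoints cost time, sparse ones lengthen state restores), not the paper's instrumented simulator, and the function names are hypothetical.

```python
import random

def rollback_cost(interval, checkpoint_cost=8.0, restore_cost=1.0):
    """Toy cost model: per-event overhead of checkpointing every `interval`
    events plus the expected coasting-forward cost on rollback."""
    return checkpoint_cost / interval + restore_cost * interval / 2

def learn_interval(intervals=range(1, 11), episodes=500, epsilon=0.2, seed=1):
    """Epsilon-greedy value learning: estimate a value per candidate interval
    from observed rewards, with no analytic model of the system."""
    rng = random.Random(seed)
    q = {i: 0.0 for i in intervals}   # value function over control parameters
    n = {i: 0 for i in intervals}
    for _ in range(episodes):
        if rng.random() < epsilon or all(c == 0 for c in n.values()):
            arm = rng.choice(list(intervals))        # explore
        else:
            arm = max(q, key=q.get)                  # exploit current estimate
        reward = -rollback_cost(arm)  # feedback from the (simulated) system
        n[arm] += 1
        q[arm] += (reward - q[arm]) / n[arm]  # incremental mean update
    return max(q, key=q.get)
```

For this cost model the analytic optimum is interval 4 (cost 8/4 + 4/2 = 4), and the learner converges to it purely from reward feedback, which is the contrast with closed-form control functions the abstract draws.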