26 research outputs found

    Hiding in the Shadows II: Collisional Dust as Exoplanet Markers

    Observations of the youngest planets (∼1-10 Myr for a transitional disk) will increase the accuracy of our planet formation models. Unfortunately, observations of such planets are challenging and time-consuming to undertake even in ideal circumstances. Therefore, we propose the determination of a set of markers that can pre-select promising exoplanet-hosting candidate disks. To this end, N-body simulations were conducted to investigate the effect of an embedded Jupiter-mass planet on the dynamics of the surrounding planetesimal disk and the resulting creation of second-generation collisional dust. We use a new collision model that allows fragmentation and erosion of planetesimals, and dust-sized fragments are simulated in a post-processing step including non-gravitational forces due to stellar radiation and a gaseous protoplanetary disk. Synthetic images from our numerical simulations show a bright double ring at 850 μm for a low-eccentricity planet, whereas a high-eccentricity planet would produce a characteristic inner ring with asymmetries in the disk. In the presence of first-generation primordial dust these markers would be difficult to detect far from the orbit of the embedded planet, but would be detectable inside a gap of planetary origin in a transitional disk. Comment: Accepted for publication in Ap

    Forming Circumbinary Planets: N-body Simulations of Kepler-34

    Observations of circumbinary planets orbiting very close to the central stars have shown that planet formation may occur in a very hostile environment, where the gravitational pull from the binary on the primordial protoplanetary disk should be very strong. Elevated impact velocities and orbit crossings from eccentricity oscillations are the primary contributors towards high-energy, potentially destructive collisions that inhibit the growth of aspiring planets. In this work, we conduct high-resolution N-body simulations with inter-particle gravity enabled to investigate the feasibility of planetesimal growth in the Kepler-34 system. We improve upon previous work by including planetesimal disk self-gravity and an extensive collision model to accurately handle inter-planetesimal interactions. We find that super-catastrophic erosion events are the dominant mechanism up to and including the orbital radius of Kepler-34(AB)b, making in-situ growth unlikely. It is more plausible that Kepler-34(AB)b migrated from a region beyond 1.5 AU. Based on our conclusions for Kepler-34, it seems likely that all of the currently known circumbinary planets have also migrated significantly from their formation locations, with the possible exception of Kepler-47(AB)c. Comment: 6 pages, 5 figures, accepted for publication in ApJ

    Time series classification with ensembles of elastic distance measures

    Several alternative distance measures for comparing time series have recently been proposed and evaluated on time series classification (TSC) problems. These include variants of dynamic time warping (DTW), such as weighted and derivative DTW, and edit distance-based measures, including longest common subsequence, edit distance with real penalty, time warp with edit, and move–split–merge. These measures have the common characteristic that they operate in the time domain and compensate for potential localised misalignment through some elastic adjustment. Our aim is to experimentally test two hypotheses related to these distance measures. Firstly, we test whether there is any significant difference in accuracy for TSC problems between nearest neighbour classifiers using these distance measures. Secondly, we test whether combining these elastic distance measures through simple ensemble schemes gives significantly better accuracy. We test these hypotheses by carrying out one of the largest experimental studies ever conducted into time series classification. Our first key finding is that there is no significant difference between the elastic distance measures in terms of classification accuracy on our data sets. Our second finding, and the major contribution of this work, is to define an ensemble classifier that significantly outperforms the individual classifiers. We also demonstrate that the ensemble is more accurate than approaches not based in the time domain. Nearly all TSC papers in the data mining literature cite DTW (with warping window set through cross validation) as the benchmark for comparison. We believe that our ensemble is the first ever classifier to significantly outperform DTW and as such raises the bar for future work in this area
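The ensemble scheme described above can be illustrated in a few lines: each elastic distance measure drives its own 1-nearest-neighbour classifier, and the constituents' votes are combined with weights. The sketch below is a minimal illustration under assumptions, not the authors' implementation; the plain DTW routine, the two example measures, and the caller-supplied voting weights are all simplifications.

```python
import numpy as np

def dtw_distance(a, b):
    # classic dynamic-programming DTW (full warping window)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = (a[i - 1] - b[j - 1]) ** 2
            cost[i, j] = step + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return np.sqrt(cost[n, m])

def euclidean_distance(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def nn_predict(train_X, train_y, query, dist):
    # 1-nearest-neighbour prediction under a given distance measure
    dists = [dist(x, query) for x in train_X]
    return train_y[int(np.argmin(dists))]

def ensemble_predict(train_X, train_y, query, measures, weights):
    # each constituent 1-NN votes for a class; votes are weighted,
    # e.g. by each constituent's training accuracy
    votes = {}
    for dist, w in zip(measures, weights):
        label = nn_predict(train_X, train_y, query, dist)
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```

In the paper the weights come from each constituent's cross-validated training accuracy; here they are simply passed in by the caller.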

    A Significantly Faster Elastic-Ensemble for Time-Series Classification

    The Elastic Ensemble [7] has one of the longest build times of all constituents of the current state-of-the-art algorithm for time series classification, the Hierarchical Vote Collective of Transformation-based Ensembles (HIVE-COTE) [8]. We investigate two simple and intuitive techniques to reduce the time spent training the Elastic Ensemble and consequently reduce the HIVE-COTE train time. Our techniques reduce the effort involved in tuning the parameters of each constituent nearest-neighbour classifier of the Elastic Ensemble. Firstly, we decrease the parameter space of each constituent to reduce tuning effort. Secondly, we limit the number of training series in each nearest-neighbour classifier to reduce parameter option evaluation times during tuning. Experimentation over 10 folds of the UEA/UCR time-series classification problems shows that both techniques give much faster build times and, crucially, that the combination of both techniques gives an even greater speedup, all without significant loss in accuracy
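As a concrete illustration of the two speedups, the sketch below tunes a 1-NN DTW warping window by leave-one-out accuracy: `windows` is the reduced parameter grid (the first technique) and `max_train` caps the number of training series examined during tuning (the second). The function names, the windowed DTW routine, and the tuning loop are simplified assumptions, not the paper's code.

```python
import numpy as np

def dtw(a, b, w):
    # DTW restricted to a Sakoe-Chiba warping band of half-width w
    n, m = len(a), len(b)
    w = max(int(w), abs(n - m))
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            step = (a[i - 1] - b[j - 1]) ** 2
            cost[i, j] = step + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def tune_window(X, y, windows, max_train=None, seed=0):
    # pick the warping window with the best leave-one-out 1-NN accuracy;
    # `windows` is the reduced parameter grid, and `max_train` caps the
    # number of training series examined while tuning
    rng = np.random.default_rng(seed)
    idx = np.arange(len(X))
    if max_train is not None and max_train < len(idx):
        idx = rng.choice(idx, size=max_train, replace=False)
    best = None
    for w in windows:
        correct = 0
        for i in idx:
            nearest = min((j for j in idx if j != i),
                          key=lambda j: dtw(X[i], X[j], w))
            correct += (y[nearest] == y[i])
        acc = correct / len(idx)
        if best is None or acc > best[1]:
            best = (w, acc)
    return best
```

Shrinking `windows` and setting `max_train` both cut the quadratic cost of the tuning loop, which is where most of the build time is spent.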

    Simulating the cloudy atmospheres of HD 209458 b and HD 189733 b with the 3D Met Office Unified Model

    Aims. To understand and compare the 3D atmospheric structure of HD 209458 b and HD 189733 b, focusing on the formation and distribution of cloud particles, as well as their feedback on the dynamics and thermal profile. Methods. We coupled the 3D Met Office Unified Model (UM), including detailed treatments of atmospheric radiative transfer and dynamics, to a kinetic cloud formation scheme. The resulting model self-consistently solves for the formation of condensation seeds, surface growth and evaporation, gravitational settling and advection, and cloud radiative feedback via absorption and, crucially, scattering. We used fluxes directly obtained from the UM to produce synthetic spectral energy distributions and phase curves. Results. Our simulations show extensive cloud formation in both HD 209458 b and HD 189733 b. However, cooler temperatures in the latter result in higher cloud particle number densities. Large particles, reaching 1 μm in diameter, can form due to high particle growth velocities, and sub-μm particles are suspended by vertical flows, leading to extensive upper-atmosphere cloud cover. A combination of meridional advection and efficient cloud formation in cooler high-latitude regions results in enhanced cloud coverage for latitudes above 30° and leads to a zonally banded structure in all our simulations. The cloud bands extend around the entire planet for both HD 209458 b and HD 189733 b, as the temperatures, even on the day side, remain below the condensation temperature of silicates and oxides. Therefore, the simulated optical phase curve for HD 209458 b shows no 'offset', in contrast to observations. Efficient scattering of stellar irradiation by cloud particles results in a local maximum cooling of up to 250 K in the upper atmosphere, and an advection-driven fluctuating cloud opacity causes temporal variability in the thermal emission.
The inclusion of this fundamental cloud-atmosphere radiative feedback leads to significant differences from approaches that neglect these physical elements, which have been employed to interpret observations and determine thermal profiles for these planets. This suggests that readers should be cautious of interpretations neglecting such cloud feedback and scattering, and that the subject merits further study.

    The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances

    In the last five years there have been a large number of new time series classification algorithms proposed in the literature. These algorithms have been evaluated on subsets of the 47 data sets in the University of California, Riverside time series classification archive. The archive has recently been expanded to 85 data sets, over half of which have been donated by researchers at the University of East Anglia. Aspects of previous evaluations have made comparisons between algorithms difficult. For example, several different programming languages have been used, experiments involved a single train/test split and some used normalised data whilst others did not. The relaunch of the archive provides a timely opportunity to thoroughly evaluate algorithms on a larger number of datasets. We have implemented 18 recently proposed algorithms in a common Java framework and compared them against two standard benchmark classifiers (and each other) by performing 100 resampling experiments on each of the 85 datasets. We use these results to test several hypotheses relating to whether the algorithms are significantly more accurate than the benchmarks and each other. Our results indicate that only 9 of these algorithms are significantly more accurate than both benchmarks and that one classifier, the Collective of Transformation Ensembles, is significantly more accurate than all of the others. All of our experiments and results are reproducible: we release all of our code, results and experimental details and we hope these experiments form the basis for more rigorous testing of new algorithms in the future
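A single resampling experiment of the kind described above might be sketched as follows. This is an illustrative stand-in for the authors' Java framework, and the function name and stratification details are assumptions: each resample preserves the class proportions of the original data and is seeded so that results are reproducible.

```python
import random
from collections import defaultdict

def stratified_resample(X, y, train_frac, seed):
    # one reproducible resampling experiment: shuffle within each class
    # so the train/test split preserves the original class distribution
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    train, test = [], []
    for label, items in by_class.items():
        items = items[:]
        rng.shuffle(items)
        k = int(round(train_frac * len(items)))
        train += [(xi, label) for xi in items[:k]]
        test += [(xi, label) for xi in items[k:]]
    return train, test
```

Running this with seeds 0 to 99 per data set would reproduce the structure, though not the exact splits, of a 100-resample evaluation.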

    25th annual computational neuroscience meeting: CNS-2016

    The same neuron may play different functional roles in the neural circuits to which it belongs. For example, neurons in the Tritonia pedal ganglia may participate in variable phases of the swim motor rhythms [1]. While such neuronal functional variability is likely to play a major role in delivering the functionality of neural systems, it is difficult to study in most nervous systems. We work on the pyloric rhythm network of the crustacean stomatogastric ganglion (STG) [2]. Typically, network models of the STG treat neurons of the same functional type as a single model neuron (e.g. PD neurons), assuming the same conductance parameters for these neurons and implying their synchronous firing [3, 4]. However, simultaneous recordings of PD neurons show differences between the spike timings of these neurons, which may indicate functional variability. Here we modelled the two PD neurons of the STG separately in a multi-neuron model of the pyloric network. Our neuron models comply with known correlations between the conductance parameters of ionic currents. Our results reproduce the experimental finding of an increasing spike time distance between spikes originating from the two model PD neurons during their synchronised burst phase. The PD neuron with the larger calcium conductance generates its spikes before the other PD neuron, and larger potassium conductance values in the follower neuron imply longer delays between spikes (see Fig. 17). Neuromodulators change the conductance parameters of neurons while maintaining the ratios of these parameters [5]. Our results show that such changes may shift the individual contributions of the two PD neurons to the PD phase of the pyloric rhythm, altering their functionality within this rhythm. Our work paves the way towards an accessible experimental and computational framework for the analysis of the mechanisms and impact of functional variability of neurons within the neural circuits to which they belong

    Webbsystem för matprisjämförelser (Web System for Food Price Comparisons)

    This is a thesis project in the field of information technology that begins by evaluating existing food price comparison systems and then designs an improved, web-based system for that purpose. The system features interfaces for price reporting and price comparison, as well as various search and review functions. The objective is to create a system able to compete with existing price comparison systems, which requires a careful selection of products that can be found in all food stores and that represent different households, a user-friendly price reporting and price comparison interface, and an efficient tool for ensuring the validity of reported prices. Compared with existing systems, the designed system features a more accurate formula for calculating average prices in cases where the actual price is missing, a more reliable security mechanism for validation of reported prices, and a more comprehensible interface for viewing price reports, reporting prices, and reviewing stores and products. The system does, however, suffer from some optimisation issues, primarily inefficient database modelling, SQL queries, and JavaScript code, although testing has shown that these shortcomings do not hamper overall performance to a degree that compromises usability. Usability testing performed on the system with five test subjects shows that a high level of overall user-friendliness has been achieved, even though some special features, such as the detailed report containing price information per product for each store, need to be improved. It is not possible to say at this time whether the designed system will be able to compete with existing ones, or whether the very concept of food price comparison is viable in terms of profitability and marketability. Such an investigation will only be possible after the designed system has been tested over a longer period and on a larger scale than was possible within this thesis project
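The averaging idea mentioned above (handling stores where a price report is missing) might be sketched as follows. The imputation rule used here, replacing a missing price with the mean of the prices other stores report for the same product, is an assumption for illustration, not the thesis's exact formula, and the function name is hypothetical.

```python
def average_basket_price(prices):
    """Estimate each store's basket total when some prices are missing.

    prices: {store: {product: price or None}}; a missing (None) price is
    imputed from the mean of the prices reported at the other stores.
    """
    # collect all reported prices per product
    product_prices = {}
    for store_prices in prices.values():
        for product, p in store_prices.items():
            if p is not None:
                product_prices.setdefault(product, []).append(p)
    product_means = {k: sum(v) / len(v) for k, v in product_prices.items()}

    # total each basket, falling back to the product mean where needed
    totals = {}
    for store, store_prices in prices.items():
        total = 0.0
        for product, p in store_prices.items():
            total += p if p is not None else product_means[product]
        totals[store] = total
    return totals
```

A scheme like this keeps stores with incomplete reports comparable, at the cost of biasing their totals towards the market average for the missing products.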