Allocation in Practice
How do we allocate scarce resources? How do we fairly allocate costs? These
are two pressing challenges facing society today. I discuss two recent projects
at NICTA concerning resource and cost allocation. In the first, we have been
working with FoodBank Local, a social startup working in collaboration with
food bank charities around the world to optimise the logistics of collecting
and distributing donated food. Before we can distribute this food, we must
decide how to allocate it to different charities and food kitchens. This gives
rise to a fair division problem with several new dimensions, rarely considered
in the literature. In the second, we have been looking at cost allocation
within the distribution network of a large multinational company. This also has
several new dimensions rarely considered in the literature.
Comment: To appear in Proc. of the 37th German Conference on Artificial Intelligence (KI 2014), Springer LNC
Service Learning: A Creative Strategy for Inclusive Classrooms
The movement for full inclusion is often hindered by the lack of creative and alternative teaching methodologies in regular classrooms. Service learning not only offers an alternative to traditional classroom teaching methods; it is also a vehicle to provide inclusive, community-based instruction, to promote the development of communities, and to provide functional skills training. This paper defines service learning and its components while also discussing the applicability of service learning for all students.
Studying Millisecond Pulsars and Pulsar Tails in the Very-High-Energy Gamma-ray Regime with VERITAS
Years have passed since the first detection of pulsed very-high-energy (VHE; E \textgreater 100 GeV) gamma-rays from the Crab pulsar with VERITAS, yet much is still unresolved in relation to the nature of pulsar emission mechanisms (see \cite{Venter:2018iax}) and how they interact with the surrounding medium.
No completely satisfactory model has been produced that can accurately describe all aspects of the pulsed gamma-ray emission observed from the Crab and other pulsars. Understanding the properties of VHE emission detected in observations made by many different experiments still poses a significant challenge to theoreticians, and hence experimentalists, working in the field. The crux of the issue remains: is the Crab pulsar unique\footnote{Although not as `canonical' as the Crab pulsar, there are confirmed pulsed VHE gamma-rays from one or two other pulsars (see section \ref{Section:MotivationforInvestigating MSPs}), as of writing.}, or do other pulsars also exhibit the same behavior in the VHE regime, and, in either case, what are the underlying mechanisms? To try to answer this question, while also learning more about the pulsar population and the physics of VHE gamma-ray production, this work presents the results of a search for pulsed emission in the VHE band from six millisecond pulsars (MSPs) in the archival VERITAS data set, the first such survey of MSPs and the most sensitive VHE measurements ever made for these targets. I test whether significant pulsed emission is detected and report the observed VHE pulsed flux and gamma-ray conversion efficiency of these MSPs, to determine whether a VHE flux component appears at these energies for the sources studied here. As the analyses result in non-detections in every case, upper limits are placed on the aforementioned quantities. The upper limits are compared with a modern, comprehensive pulsar model energy spectrum and are found to be compatible with the proposed theoretical scenario, although we are limited by a lack of target-specific predictions.
In addition, PSR J0030+0451 is proposed as a promising candidate for future study with CTA, as the limits placed here indicate that, with similar exposure and assuming a non-detection, CTA would likely produce flux limits that challenge the proposed theoretical scenario for the MSP population.
Pulsars are also sources of non-pulsed gamma-rays. However, at the time of writing, there has been no decisive detection of the TeV emission expected by current models from any pulsar tail that is also seen in the X-ray or radio bands. An observational campaign has been carried out by VERITAS to hunt for VHE gamma-ray emission from the candidate tail regions associated with three nearby pulsars (PSR~B0355+54, PSR~J0357+3205 and PSR~J1740+1000) that move supersonically and exhibit significant X-ray tails. The results of this analysis quantify the TeV flux and luminosity from the tail regions of the targets, for comparison with other pulsar wind nebula observations and the predictions of modern pulsar tail models. The results of this search also provide guidance for the selection of additional candidates, and for quantifying the properties of pulsar tails, as new pulsar tails are observed in the VHE regime.
In order to analyze data from IACTs such as VERITAS, detailed and extensive simulation work is necessary to understand gamma-ray-induced EASs and the detector response. I will detail the work I undertook to produce the most modern and comprehensive simulation set for VERITAS to date.
In addition to the aforementioned research, which aims to further our understanding of pulsars in the VHE domain, I will describe in this document my contributions to the building of the Cherenkov Telescope Array (CTA), the most sensitive IACT instrument ever constructed to observe the gamma-ray sky. Because the timescales of such large projects are so long, it is natural for researchers to work with an existing instrument (in this case, VERITAS), helping to run and improve the experiment and analyzing its data products, while also contributing to the construction of future instruments that build on the previous observatory's endeavors. The research herein is the most up-to-date analysis of the target sources, and so provides the most modern insights into the nature of these objects, while also serving as a guide for source selection, and even for the models to be tested, in future works. For example, the improved sensitivity that CTA will achieve over the current generation of IACTs will allow even deeper investigation of the pulsars studied here. Directly quantifying the standards that need to be met by the next generation of IACTs is a hugely important task; works such as this one aid in achieving this goal and help bridge the gap between the generations of IACTs. This is an integral part of the evolution of the field, and this thesis ties the current era to future research in the CTA era.
I will also include details on my contribution to a novel study of Lorentz-invariance violation and, hence, what we can learn about possible quantum substructure of spacetime through VHE gamma-ray observations, via collaboration with the other major IACT groups.
Random Costs in Combinatorial Optimization
The random cost problem is the problem of finding the minimum in an
exponentially long list of random numbers. By definition, this problem cannot
be solved faster than by exhaustive search. It is shown that a classical
NP-hard optimization problem, number partitioning, is essentially equivalent to
the random cost problem. This explains the bad performance of heuristic
approaches to the number partitioning problem and allows us to calculate the
probability distributions of the optimum and sub-optimum costs.
Comment: 4 pages, Revtex, 2 figures (eps), submitted to PR
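The equivalence described above can be made concrete with a small experiment: for a random number-partitioning instance, the 2^(n-1) partition costs |sum(A) - sum(B)| behave statistically like an exponentially long list of independent random numbers. A minimal brute-force sketch (illustrative only; the instance size and bit width are arbitrary choices):

```python
import random

def partition_costs(numbers):
    """Enumerate all 2^(n-1) two-way partitions of `numbers` and return
    the sorted list of costs |sum(A) - sum(B)|.  Fixing the first item's
    side halves the count, since swapping A and B leaves the cost unchanged."""
    n = len(numbers)
    costs = []
    for mask in range(2 ** (n - 1)):
        diff = numbers[0]
        for i in range(1, n):
            diff += numbers[i] if (mask >> (i - 1)) & 1 else -numbers[i]
        costs.append(abs(diff))
    return sorted(costs)

random.seed(0)
nums = [random.randrange(1, 2 ** 20) for _ in range(16)]
costs = partition_costs(nums)
print(costs[:3])  # the optimum and first sub-optimal costs
```

By definition of the random cost problem, no algorithm can locate the minimum of such an effectively random list faster than exhaustive search, which is why heuristics fare poorly here.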
Phase Transition in the Number Partitioning Problem
Number partitioning is an NP-complete problem of combinatorial optimization.
A statistical mechanics analysis reveals the existence of a phase transition
that separates the easy from the hard to solve instances and that reflects the
pseudo-polynomiality of number partitioning. The phase diagram and the value of
the typical ground state energy are calculated.
Comment: minor changes (references, typos and discussion of results)
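The transition described above is easy to observe numerically: the relevant control parameter is (roughly) the number of bits per number, bits/n, with perfect partitions abundant below the transition and exponentially rare above it. A small, hedged experiment (instance sizes and trial counts are illustrative; "perfect" here means cost 0 or 1):

```python
import random

def min_partition_cost(nums):
    """Exhaustive minimum of |sum(A) - sum(B)| over all two-way partitions."""
    n = len(nums)
    best = sum(nums)
    for mask in range(2 ** (n - 1)):
        diff = nums[0]
        for i in range(1, n):
            diff += nums[i] if (mask >> (i - 1)) & 1 else -nums[i]
        best = min(best, abs(diff))
    return best

def perfect_fraction(n, bits, trials=100, seed=0):
    """Fraction of random instances (n numbers of `bits` bits each)
    admitting a perfect partition (cost 0 or 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        nums = [rng.randrange(1, 2 ** bits) for _ in range(n)]
        if min_partition_cost(nums) <= 1:
            hits += 1
    return hits / trials

# Few bits per number (bits/n = 0.5): perfect partitions are common.
# Many bits per number (bits/n = 2.0): perfect partitions essentially vanish.
print(perfect_fraction(12, 6), perfect_fraction(12, 24))
```

The abrupt drop in this fraction as bits/n grows is the phase transition the abstract refers to, and it coincides with the crossover from easy to hard instances for heuristic solvers.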
Theory Learning with Symmetry Breaking
This paper investigates the use of a Prolog-coded SMT solver in tackling a well-known constraints problem, namely packing a given set of consecutive squares into a given rectangle, and details the developments in the solver that this motivates. The packing problem has a natural model in the theory of quantifier-free integer difference logic, a theory supported by many SMT solvers. The solver used in this work exploits a data structure consisting of an incremental Floyd-Warshall matrix paired with a watch matrix that monitors the entailment status of integer difference constraints. It is shown how this structure can be used to build unsatisfiable theory cores on the fly, which in turn allows theory learning to be incorporated into the solver. Further, it is shown that a problem-specific and non-standard approach to learning can be taken, in which symmetry breaking is incorporated into the learning stage, magnifying the effect of learning. It is argued that the declarative framework allows the solver to be used in this white-box manner and that this is a strength of the solver. The approach is experimentally evaluated.
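The core data structure named above can be sketched independently of the paper's Prolog implementation. The following Python sketch shows the general technique only: maintaining an all-pairs bound matrix for constraints of the form x - y <= c, updated incrementally Floyd-Warshall-style as each constraint is asserted, with inconsistency detected as a negative cycle. The class name and interface are hypothetical, and the watch matrix for entailment monitoring is omitted:

```python
INF = float("inf")

class DifferenceLogic:
    """Incremental consistency checker for conjunctions of integer
    difference constraints x - y <= c.  d[x][y] holds the tightest
    derived upper bound on x - y (INF if unconstrained)."""

    def __init__(self, n_vars):
        self.n = n_vars
        self.d = [[0 if i == j else INF for j in range(n_vars)]
                  for i in range(n_vars)]

    def add(self, x, y, c):
        """Assert x - y <= c; return False if the system becomes unsatisfiable."""
        # Unsat iff the new bound closes a negative cycle with the
        # existing bound on y - x:  (x - y) + (y - x) = 0 <= c + d[y][x] < 0.
        if self.d[y][x] != INF and self.d[y][x] + c < 0:
            return False
        if c < self.d[x][y]:
            self.d[x][y] = c
            # Incremental update: relax every pair through the new edge.
            for i in range(self.n):
                for j in range(self.n):
                    if self.d[i][x] != INF and self.d[y][j] != INF:
                        via = self.d[i][x] + c + self.d[y][j]
                        if via < self.d[i][j]:
                            self.d[i][j] = via
        return True

s = DifferenceLogic(3)
assert s.add(0, 1, 2)       # x0 - x1 <= 2
assert s.add(1, 2, -1)      # x1 - x2 <= -1
assert not s.add(2, 0, -2)  # x2 - x0 <= -2 closes a negative cycle (sum -1)
```

Tracking which asserted constraints tightened each matrix entry is what allows unsatisfiable theory cores to be recovered on the fly when such a conflict is detected.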
Ocean Circulation and Tropical Variability in the Coupled Model ECHAM5/MPI-OM
This paper describes the mean ocean circulation and the tropical variability simulated by the Max Planck Institute for Meteorology (MPI-M) coupled atmosphere–ocean general circulation model (AOGCM). Results are presented from a version of the coupled model that served as a prototype for the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) simulations. The model does not require flux adjustment to maintain a stable climate. A control simulation with present-day greenhouse gases is analyzed, and the simulation of key oceanic features, such as sea surface temperatures (SSTs), large-scale circulation, meridional heat and freshwater transports, and sea ice are compared with observations.
A parameterization that accounts for the effect of ocean currents on surface wind stress is implemented in the model. The largest impact of this parameterization is in the tropical Pacific, where the mean state is significantly improved: the strength of the trade winds and the associated equatorial upwelling weaken, and there is a reduction of the model’s equatorial cold SST bias by more than 1 K. Equatorial SST variability also becomes more realistic. The strength of the variability is reduced by about 30% in the eastern equatorial Pacific and the extension of SST variability into the warm pool is significantly reduced. The dominant El Niño–Southern Oscillation (ENSO) period shifts from 3 to 4 yr. Without the parameterization an unrealistically strong westward propagation of SST anomalies is simulated. The reasons for the changes in variability are linked to changes in both the mean state and to a reduction in atmospheric sensitivity to SST changes and oceanic sensitivity to wind anomalies.
Oscillating Fracture in Rubber
We have found an oscillating instability of fast-running cracks in thin
rubber sheets. A well-defined transition from straight to oscillating cracks
occurs as the amount of biaxial strain increases. Measurements of the amplitude
and wavelength of the oscillation near the onset of this instability indicate
that the instability is a Hopf bifurcation.
Optimization by Quantum Annealing: Lessons from hard 3-SAT cases
The Path Integral Monte Carlo simulated Quantum Annealing algorithm is
applied to the optimization of a large hard instance of the Random 3-SAT
Problem (N=10000). The dynamical behavior of the quantum and the classical
annealing are compared, showing important qualitative differences in the way of
exploring the complex energy landscape of the combinatorial optimization
problem. At variance with the results obtained for the Ising spin glass and for
the Traveling Salesman Problem, in the present case the linear-schedule Quantum
Annealing performance is definitely worse than Classical Annealing.
Nevertheless, a quantum cooling protocol based on field-cycling and able to
outperform standard classical simulated annealing over short time scales is
introduced.
Comment: 10 pages, 6 figures, submitted to PR
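The classical baseline against which the quantum annealing is compared can be sketched generically. Below is a minimal linear-schedule simulated annealing routine for 3-SAT, with energy defined as the number of unsatisfied clauses; it illustrates the classical side of the comparison only, and all parameters (schedule endpoints, step counts, instance size) are illustrative rather than those used in the paper:

```python
import random, math

def simulated_annealing_3sat(clauses, n_vars, steps=20000, t0=2.0, t1=0.05):
    """Linear-schedule classical annealing on a 3-SAT instance.
    Clauses are tuples of signed 1-based literals; energy = number of
    unsatisfied clauses.  Returns (assignment, final energy)."""
    assign = [random.random() < 0.5 for _ in range(n_vars)]

    def energy(a):
        return sum(not any(a[abs(l) - 1] == (l > 0) for l in cl)
                   for cl in clauses)

    e = energy(assign)
    for step in range(steps):
        t = t0 + (t1 - t0) * step / steps   # linear cooling schedule
        v = random.randrange(n_vars)
        assign[v] = not assign[v]           # propose a single-spin flip
        e_new = energy(assign)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                       # accept (Metropolis rule)
        else:
            assign[v] = not assign[v]       # reject: undo the flip
    return assign, e

random.seed(1)
n = 20
clauses = [tuple(random.choice([-1, 1]) * v
                 for v in random.sample(range(1, n + 1), 3))
           for _ in range(80)]
best, e = simulated_annealing_3sat(clauses, n)
print("unsatisfied clauses:", e)
```

The Path Integral Monte Carlo quantum annealing of the paper replaces the thermal acceptance rule with dynamics over coupled Trotter replicas in a decreasing transverse field; the paper's finding is that, on hard 3-SAT, this linear-schedule quantum dynamics explores the landscape less effectively than the classical routine above.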
Exponentially hard problems are sometimes polynomial, a large deviation analysis of search algorithms for the random Satisfiability problem, and its application to stop-and-restart resolutions
A large deviation analysis of the solving complexity of random
3-Satisfiability instances slightly below threshold is presented. While finding
a solution for such instances demands an exponential effort with high
probability, we show that an exponentially small fraction of resolutions
require a computation scaling linearly in the size of the instance only. This
exponentially small probability of easy resolutions is analytically calculated,
and the corresponding exponent shown to be smaller (in absolute value) than the
growth exponent of the typical resolution time. Our study therefore gives some
theoretical basis to heuristic stop-and-restart solving procedures, and
suggests a natural cut-off (the size of the instance) for the restart.
Comment: Revtex file, 4 figures
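The stop-and-restart procedure the analysis justifies can be sketched as a simple wrapper: run a randomized solver with a per-run budget proportional to the instance size, and restart from fresh randomness until some run happens to be one of the exponentially rare easy resolutions. The solver interface and the toy search below are hypothetical stand-ins, not the resolution procedure analyzed in the paper:

```python
import random

def solve_with_restarts(solver, instance, cutoff, max_restarts=1000, seed=0):
    """Stop-and-restart wrapper: run `solver` with a per-run budget of
    `cutoff` steps (the natural cut-off suggested by the analysis is the
    instance size), restarting until a run succeeds.
    Returns (solution or None, number of restarts used)."""
    rng = random.Random(seed)
    for restart in range(max_restarts):
        result = solver(instance, cutoff, rng)
        if result is not None:
            return result, restart
    return None, max_restarts

def toy_solver(target, cutoff, rng):
    """Toy randomized search standing in for a DPLL-style resolution:
    guess bitstrings for up to `cutoff` steps; succeed on an exact match."""
    for _ in range(cutoff):
        guess = [rng.random() < 0.5 for _ in range(len(target))]
        if guess == target:
            return guess
    return None

target = [True, False, True, False, True]
sol, restarts = solve_with_restarts(toy_solver, target, cutoff=len(target))
assert sol == target
```

The wrapper pays a linear cost per attempt, so its overall running time is governed by the (exponentially small, but analytically calculable) probability that a single run resolves the instance within the cut-off.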