Capturing trade-offs between daily scheduling choices
We propose a new modelling approach for daily activity scheduling which integrates the different daily scheduling choice dimensions (activity participation, location, schedule, duration and transportation mode) into a single optimisation problem. The fundamental behavioural principle behind our approach is that individuals schedule their day to maximise their overall derived utility from the activities they complete, according to their individual needs, constraints, and preferences. By combining multiple choices into a single optimisation problem, our framework is able to capture the complex trade-offs between scheduling decisions for multiple activities. These trade-offs include, for example, how spending longer on one activity reduces the time available for other activities, or how the order of activities determines the travel times. The implemented framework takes as input a set of considered activities, with associated locations and travel modes, and uses these to produce empirical distributions of individual schedules from which different daily schedules can be drawn. The model is illustrated using historic trip diary data from the Swiss Mobility and Transport Microcensus. The results demonstrate the ability of the proposed framework to generate complex and realistic distributions of start time and duration for different activities within tight time constraints. The generated schedules are then compared to the aggregate distributions from the historical data to demonstrate the feasibility and flexibility of our approach.
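As a hedged illustration of the single-optimisation-problem idea (a toy sketch, not the paper's implementation), the following Python snippet enumerates activity orders, deducts the order-dependent travel times from a fixed time budget, and keeps the schedule with the highest total utility. All activity names, durations, travel times and the utility shape are invented for the example.

```python
from itertools import permutations

# Illustrative desired durations (hours) and symmetric travel times.
DESIRED = {"work": 8.0, "shop": 1.0, "gym": 1.5}
TRAVEL = {("home", "work"): 0.5, ("home", "shop"): 0.3, ("home", "gym"): 0.4,
          ("work", "shop"): 0.2, ("work", "gym"): 0.6, ("shop", "gym"): 0.3}
BUDGET = 12.0  # total hours available for activities plus travel

def travel_time(a, b):
    return TRAVEL.get((a, b), TRAVEL.get((b, a), 0.0))

def utility(activity, duration):
    # Diminishing returns: marginal utility falls past the desired duration.
    return duration * (2.0 - duration / DESIRED[activity])

def best_schedule():
    best, best_u = None, float("-inf")
    for order in permutations(DESIRED):
        legs = ["home", *order, "home"]
        tt = sum(travel_time(a, b) for a, b in zip(legs, legs[1:]))
        free = BUDGET - tt
        if free <= 0:
            continue
        # Trade-off: a shorter tour leaves more time to split across
        # activities; here durations are scaled to fit the free time.
        scale = min(1.0, free / sum(DESIRED.values()))
        u = sum(utility(a, DESIRED[a] * scale) for a in order) - tt
        if u > best_u:
            best, best_u = (order, scale), u
    return best, best_u

print(best_schedule())
```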
Beyond Absurd: Jim Thorpe and a Proposed Taxonomy for the Absurdity Doctrine
In light of the Third Circuit's recent decision interpreting the Native American Graves Protection and Repatriation Act, this Article argues that the Supreme Court must clarify the Absurdity Doctrine of statutory interpretation. The Article offers a framework for doing so.
Assisted specification of discrete choice models
Determining appropriate utility specifications for discrete choice models is time-consuming and prone to errors. As datasets grow larger, and the number of possible specifications grows exponentially with the number of variables under consideration, analysts need to spend increasing amounts of time searching for good models through trial and error, while expert knowledge is required to ensure these models are sound. This paper proposes an algorithm that aims to assist modelers in their search. Our approach translates the task into a multi-objective combinatorial optimization problem and makes use of a variant of the variable neighborhood search algorithm to generate sets of promising model specifications. We apply the algorithm both to semi-synthetic data and to real mode choice datasets as a proof of concept. The results demonstrate its ability to provide relevant insights in reasonable amounts of time, so as to effectively assist the modeler in developing interpretable and powerful models.
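A minimal sketch of how such a variable neighborhood search over specifications might look, assuming specifications are encoded as sets of (variable, transform) terms and using a deterministic placeholder score in place of the paper's multi-objective criteria; all names and parameters are illustrative.

```python
import random

# Each specification is a frozenset of (variable, transform) terms.
CANDIDATE_TERMS = [("cost", "linear"), ("cost", "log"),
                   ("time", "linear"), ("time", "log"),
                   ("income", "linear"), ("age", "linear")]

def score(spec):
    # Placeholder for the paper's objectives (fit, parsimony, soundness);
    # seeded so the pseudo-score of a spec is reproducible.
    rng = random.Random(hash(tuple(sorted(spec))))
    return rng.random() - 0.1 * len(spec)

def shake(spec, k):
    # k-th neighborhood: toggle k randomly chosen terms in or out.
    spec = set(spec)
    for term in random.sample(CANDIDATE_TERMS, k):
        spec ^= {term}
    return frozenset(spec)

def vns(iterations=200, k_max=3):
    best = frozenset(random.sample(CANDIDATE_TERMS, 2))
    for _ in range(iterations):
        k = 1
        while k <= k_max:
            candidate = shake(best, k)
            if score(candidate) > score(best):
                best, k = candidate, 1  # improvement: back to nearest neighborhood
            else:
                k += 1                  # no improvement: widen the neighborhood
    return best, score(best)

print(vns())
```

The characteristic VNS design choice is visible in the inner loop: on improvement the search resets to the smallest neighborhood, and only failed attempts widen it, so expensive large perturbations are used sparingly.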
Stable Fermion Bag Solitons in the Massive Gross-Neveu Model: Inverse Scattering Analysis
Formation of fermion bag solitons is an important paradigm in the theory of hadron structure. We study this phenomenon non-perturbatively in the 1+1 dimensional massive Gross-Neveu model, in the large-N limit. We find, applying inverse scattering techniques, that the extremal static bag configurations are reflectionless, as in the massless Gross-Neveu model. This adds to existing results of variational calculations, which used reflectionless bag profiles as trial configurations. Only reflectionless trial configurations which support a single pair of charge-conjugate bound states of the associated Dirac equation were used in those calculations, whereas the results in the present paper hold for bag configurations which support an arbitrary number of such pairs. We compute the masses of these multi-bound-state solitons, and prove that only bag configurations which bear a single pair of bound states are stable. Each one of these configurations gives rise to an O(2N) antisymmetric tensor multiplet of soliton states, as in the massless Gross-Neveu model.
Comment: 10 pages, revtex, no figures; v2: typos corrected, references added; v3: version accepted for publication in PRD, references added, some minor clarifications added at the beginning of section
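For orientation, a sketch of the model being solved: the standard massive Gross-Neveu Lagrangian for N fermion flavours in 1+1 dimensions, whose large-N treatment trades the quartic interaction for a static condensate \sigma(x); the paper's inverse-scattering result is that the extremal \sigma(x) bag profiles are reflectionless.

```latex
% Massive Gross-Neveu model (1+1 dimensions, N flavours). The bare mass
% term m\,\bar\psi\psi explicitly breaks the discrete chiral symmetry of
% the m = 0 model; at large N one extremizes over a static condensate
% \sigma(x), shown here to be reflectionless.
\mathcal{L} \;=\; \sum_{i=1}^{N} \bar\psi_i \left( i\gamma^\mu \partial_\mu - m \right) \psi_i
\;+\; \frac{g^2}{2} \left( \sum_{i=1}^{N} \bar\psi_i \psi_i \right)^{2}
```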
Society issues, painkiller solutions, dependence and sustainable agriculture
Mahatma Gandhi listed seven blunders of humanity: Wealth without work, Pleasure without conscience, Commerce without morality, Worship without sacrifice, Politics without principles, Knowledge without character, and Science without humanity. Here I tackle three major issues, climate change, the financial crisis and national security, to disclose the weak points of current remedies and to propose sustainable solutions. Global warming and the unexpected financial crisis will undoubtedly impact all nations. Treating these two critical issues solely with painkiller solutions will fail because only the adverse consequences are healed, not their causes. Therefore all sources of the issues must be treated at the same time, by enhancing collaboration between politicians and scientists. Furthermore, the adverse consequences of the globalisation of markets for energy, food and other goods have been overlooked, deeply weakening the security of society's structures in the event of major breakdowns. Therefore dependence among people, organisations and nations must be redesigned and adapted to take into account ecological, social and security impacts. Solving climate, financial and security issues can be done using tools and principles developed by agronomists, because agronomy integrates mechanisms occurring at various spatial and temporal scales. Agriculture is also a central driver for solving most societal issues because society was founded on agriculture, and agriculture is the activity that provides food, renewable energy and materials to humans. I present a to-do list summarising the major practices, principles and benefits of sustainable agriculture, based on about 100 recently published review articles. The practices are agroforestry, allelopathy, aquaculture, beneficial microorganisms and insects, biofertilisation, biofuels, biological control, biological nitrogen fixation, breeding, carbon sequestration, conservation agriculture, crop rotation, cover crops, decision support systems, grass strips, integrated pest management, intercropping, irrigation, mechanical weed control, mulching, no tillage, organic amendments, organic farming, phytoremediation, precision agriculture, seed invigoration, sociology, soil restoration, suicidal germination, terracing, transgenic crops, trap crops, and urban agriculture.
A latent variable ranking model for content-based retrieval
34th European Conference on IR Research, ECIR 2012, Barcelona, Spain, April 1-5, 2012. Proceedings.
Since their introduction, ranking SVM models [11] have become a powerful tool for training content-based retrieval systems. All we need for training a model are retrieval examples in the form of triplet constraints, i.e. examples specifying that, relative to some query, a database item a should be ranked higher than database item b. These types of constraints can be obtained from feedback of users of the retrieval system. Most previous ranking models learn either a global combination of elementary similarity functions or a combination defined with respect to a single database item. Instead, we propose a "coarse to fine" ranking model where, given a query, we first compute a distribution over "coarse" classes and then use the linear combination that has been optimized for queries of that class. These coarse classes are hidden and need to be induced by the training algorithm. We propose a latent variable ranking model that induces both the latent classes and the weights of the linear combination for each class from ranking triplets. Our experiments over two large image datasets and a text retrieval dataset show the advantages of our model over learning a global combination as well as a combination for each test point (i.e. the transductive setting). Furthermore, compared to the transductive approach, our model has a clear computational advantage since it does not need to be retrained for each test query.
Spanish Ministry of Science and Innovation (JCI-2009-04240); EU PASCAL2 Network of Excellence (FP7-ICT-216886)
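A minimal sketch of the "coarse to fine" scoring rule described above, assuming a softmax gating over K latent classes and per-class linear weights over a vector of elementary similarities; the dimensions, the gating form, and all names are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 4, 16                  # latent coarse classes, feature dimension
V = rng.normal(size=(K, D))   # gating parameters, one row per class
W = rng.normal(size=(K, D))   # per-class ranking weights

def class_posterior(q):
    # Soft distribution over coarse classes for query features q.
    logits = V @ q
    e = np.exp(logits - logits.max())
    return e / e.sum()

def score(q, phi_qa):
    # phi_qa: elementary similarities between query q and item a
    # (random stand-ins here). Mix the class-specific linear scores.
    return float(class_posterior(q) @ (W @ phi_qa))

# Ranking-SVM-style triplet constraint: a should outrank b for query q.
q, phi_qa, phi_qb = (rng.normal(size=D) for _ in range(3))
margin_violated = score(q, phi_qa) < score(q, phi_qb) + 1.0
print(margin_violated)
```

Note the computational point made in the abstract: at test time only the gating and a fixed set of K weight vectors are evaluated, so nothing is retrained per query, unlike a transductive per-test-point combination.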
Computing in Additive Networks with Bounded-Information Codes
This paper studies the theory of the additive wireless network model, in
which the received signal is abstracted as an addition of the transmitted
signals. Our central observation is that the crucial challenge for computing in
this model is not high contention, as assumed previously, but rather
guaranteeing a bounded amount of \emph{information} in each neighborhood per
round, a property that we show is achievable using a new random coding
technique.
Technically, we provide efficient algorithms for fundamental distributed
tasks in additive networks, such as solving various symmetry breaking problems,
approximating network parameters, and solving an \emph{asymmetry revealing}
problem such as computing a maximal input.
The key method used is a novel random coding technique that allows a node to
successfully decode the received information, as long as it does not contain
too many distinct values. We then design our algorithms to produce a limited
amount of information in each neighborhood in order to leverage our enriched
toolbox for computing in additive networks
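A hedged toy of the bounded-information coding idea (the parameters and the coverage test are invented for illustration, not taken from the paper): each value maps to a random sparse codeword, the additive channel superposes all transmitted codewords coordinate-wise, and the receiver recovers the transmitted set as long as not too many distinct values were sent relative to the code length.

```python
import random

CODE_LEN, SUPPORT = 64, 8
random.seed(1)
# Random sparse 0/1 codeword per value, stored as its support positions.
CODEBOOK = {v: random.sample(range(CODE_LEN), SUPPORT) for v in range(16)}

def transmit(values):
    # Additive channel: coordinate-wise sum of all transmitted codewords.
    received = [0] * CODE_LEN
    for v in values:
        for pos in CODEBOOK[v]:
            received[pos] += 1
    return received

def decode(received):
    # Declare v present iff every coordinate of its support is nonzero.
    # With few distinct values, accidental full coverage (a false
    # positive) is unlikely; with many, information is no longer bounded
    # and decoding degrades.
    return {v for v, sup in CODEBOOK.items()
            if all(received[p] > 0 for p in sup)}

sent = {3, 7, 11}
assert sent <= decode(transmit(sent))
print(decode(transmit(sent)))
```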
Predicting the operability of damaged compressors using machine learning
The application of machine learning to aerospace problems faces a particular challenge. For successful learning, a large amount of good-quality training data is required, typically tens of thousands of cases. However, due to the time and cost of experimental aerospace testing, such data is scarce. This paper shows that successful learning is possible with two novel techniques. The first technique is rapid testing: over the last five years the Whittle Laboratory has developed a capability where rebuild and test times of a compressor stage now take 15 minutes instead of weeks. The second technique is to base machine learning on physical parameters, derived from engineering wisdom developed in industry over many decades.
The method is applied to the important industry problem of predicting the effect of blade damage on compressor operability. The current approach has high uncertainty: it is based on human judgement and correlation of a handful of experimental test cases. It is shown, using 100 training cases and 25 test cases, that the new method is able to predict the operability of damaged compressor stages with an accuracy of 2% at a 95% confidence interval, far better than is possible by even the most experienced compressor designers. Use of the method is also shown to generate new physical understanding, previously unknown to any of the experts involved in this work. Using this method in the future offers an exciting opportunity to generate understanding of previously intractable problems in aerospace.
Aerospace Technology Institute; Rolls-Royce plc
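A hedged sketch of the second technique: regressing an operability metric on a few physics-based features so that on the order of 100 training cases suffice. The feature names, the synthetic ground truth, and the ridge model below are stand-ins, since the paper's actual parameters come from industrial design practice.

```python
import numpy as np

rng = np.random.default_rng(42)
n_train, n_test = 100, 25  # the scarce-data regime quoted in the abstract

def physical_features(n):
    # Hypothetical physics-derived inputs, e.g. damaged-blade throat-area
    # ratio, leading-edge angle deviation, tip-gap-to-chord ratio.
    return rng.uniform(0.0, 1.0, size=(n, 3))

def true_operability(X):
    # Synthetic ground truth standing in for measured operability.
    return 1.0 - 0.6 * X[:, 0] - 0.3 * X[:, 1] ** 2 \
           + 0.05 * rng.normal(size=len(X))

X_tr, X_te = physical_features(n_train), physical_features(n_test)
y_tr, y_te = true_operability(X_tr), true_operability(X_te)

def expand(X):
    # Quadratic feature expansion keeps the model small enough to fit
    # reliably from ~100 cases.
    return np.hstack([np.ones((len(X), 1)), X, X ** 2])

A = expand(X_tr)
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ y_tr)
err = expand(X_te) @ w - y_te
print(f"test RMS error: {np.sqrt(np.mean(err ** 2)):.3f}")
```

The design point mirrored here is that a low-dimensional, physically meaningful input space, rather than raw geometry, is what makes learning tractable at this sample size.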