Game-theoretic modeling of the steering interaction between a human driver and a vehicle collision avoidance controller
Development of vehicle active steering collision avoidance systems calls for mathematical
models capable of predicting a human driver’s response, so as to reduce the cost involved in field tests whilst accelerating product development. This article discusses the paradigms that may be used for
modelling a driver’s steering interaction with vehicle collision avoidance control in path-following scenarios.
Four paradigms, namely decentralized, noncooperative Nash, noncooperative Stackelberg and cooperative
Pareto are established. The decentralized paradigm, developed based on optimal control theory, represents a
driver’s interaction with the collision avoidance controllers that disregard driver steering control. The
noncooperative Nash and Stackelberg paradigms are used for predicting a driver’s steering behaviour in
response to the collision avoidance control that actively compensates for driver steering action. These two
are devised based on the principles of equilibria in noncooperative game theory. The cooperative Pareto
paradigm is derived from cooperative game theory to model a driver’s interaction with the collision
avoidance systems that take into account the driver’s target path. The driver and the collision avoidance
controllers’ optimization problems, and the steering strategies that arise in each paradigm, are
delineated. Two mathematical approaches applicable to these optimization problems, namely the distributed
Model Predictive Control and the Linear Quadratic dynamic optimization approaches, are described in some
detail. A case study illustrating a conflict in steering control between the driver and the vehicle collision avoidance system is performed via simulation. It was found that variation of the driver’s path-error cost function weights
results in a variety of steering behaviours which are distinct between paradigms.
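
As a concrete illustration of the noncooperative paradigms discussed above, the sketch below sets up a two-player linear-quadratic steering game and finds feedback Nash gains by iterated best response. The two-state lateral model, the 20 m/s speed, and all cost weights are illustrative assumptions, not values from the article, and the fixed-point iteration is one standard way of computing an LQ Nash equilibrium rather than the authors' own procedure.

```python
# A minimal sketch of the noncooperative Nash paradigm: two players
# (driver and collision avoidance controller) share one steering
# channel and each minimises a quadratic path-error cost.  The model,
# speed and weights are illustrative assumptions, not the article's.
import numpy as np
from scipy.linalg import solve_discrete_are

dt = 0.05
# x = [lateral path error, heading error]; assumed 20 m/s forward speed
A = np.array([[1.0, dt * 20.0],
              [0.0, 1.0]])
B1 = B2 = np.array([[0.0], [dt * 2.5]])  # steering input of each player

Q1 = np.diag([5.0, 1.0])   # driver path-error weights (assumed)
Q2 = np.diag([1.0, 1.0])   # controller weights (assumed)
R1 = R2 = np.array([[1.0]])

def lqr_gain(A, B, Q, R):
    """Gain K of the best-response LQR law u = -K x."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Iterated best response: each player optimises against the other's
# current feedback law; a fixed point of this map is a feedback Nash
# equilibrium (convergence is assumed for this well-behaved example).
K1 = K2 = np.zeros((1, 2))
for _ in range(200):
    K1_next = lqr_gain(A - B2 @ K2, B1, Q1, R1)
    K2_next = lqr_gain(A - B1 @ K1_next, B2, Q2, R2)
    if np.allclose(K1_next, K1) and np.allclose(K2_next, K2):
        break
    K1, K2 = K1_next, K2_next

# Closed-loop response from an initial 1 m path error
x = np.array([1.0, 0.0])
for k in range(5):
    u1, u2 = -K1 @ x, -K2 @ x
    x = A @ x + (B1 @ u1 + B2 @ u2)
    print(f"step {k}: error={x[0]:+.3f} m, "
          f"driver u={u1[0]:+.3f}, controller u={u2[0]:+.3f}")
```

Varying Q1 relative to Q2 reproduces the kind of experiment the case study describes: shifting the driver's path-error weights changes how strongly each party steers against the other.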
Modelling of a Human Driver’s Interaction with Vehicle Automated Steering using Cooperative Game Theory
The introduction of automated driving systems has raised questions about how the human driver interacts with the automated system. Non-cooperative game theory is increasingly used for modelling and understanding such interaction, while its counterpart, cooperative game theory, is rarely discussed for similar applications despite being potentially more suitable. This paper describes the modelling of a human driver’s steering interaction with an automated steering system using cooperative game theory. The distributed Model Predictive Control approach is adopted to derive the driver’s and the automated steering system’s strategies in a Pareto equilibrium sense, namely their cooperative Pareto steering strategies. Two separate numerical studies are carried out to study the influence of strategy parameters and the influence of strategy types on the driver’s and the automated system’s steering performance. It is found that when a driver interacts with an automated steering system using a cooperative Pareto steering strategy, the driver can improve his/her performance in following a target path by increasing his/her effort in pursuing his/her own interest under the driver-automation cooperative control goal. It is also found that a driver’s adoption of the cooperative Pareto steering strategy reinforces the driver’s steering angle control, compared with adoption of the non-cooperative Nash strategy. This in turn enables the vehicle to return from a lane-change maneuver to straight-line driving more swiftly.
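
For contrast with the Nash sketch above, a cooperative Pareto strategy can be illustrated in its simplest scalarised form: both players optimise a single weighted-sum cost, so one Riccati solve yields the pair of steering gains, and sweeping the weight traces the Pareto trade-off. The matrices are the same illustrative assumptions as before; the paper itself derives the strategies with distributed Model Predictive Control rather than this one-shot LQ shortcut.

```python
# A minimal sketch of the cooperative Pareto strategy via
# scalarisation: one LQ problem on the weighted sum of the driver's
# and the automation's costs.  All numbers are illustrative; the
# paper derives the strategies with distributed MPC instead.
import numpy as np
from scipy.linalg import solve_discrete_are

dt = 0.05
A = np.array([[1.0, dt * 20.0], [0.0, 1.0]])
b = np.array([[0.0], [dt * 2.5]])
B = np.hstack([b, b])           # column 0: driver, column 1: automation

Q_driver = np.diag([5.0, 1.0])  # assumed path-error weights
Q_auto   = np.diag([1.0, 1.0])
R = np.eye(2)                   # one steering-effort weight per player

for alpha in (0.2, 0.5, 0.8):   # Pareto weight on the driver's cost
    Q = alpha * Q_driver + (1 - alpha) * Q_auto
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    print(f"alpha={alpha}: driver gain={K[0]}, automation gain={K[1]}")
```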
How Long It Takes for an Ordinary Node with an Ordinary ID to Output?
In the context of distributed synchronous computing, processors perform in
rounds, and the time-complexity of a distributed algorithm is classically
defined as the number of rounds before all computing nodes have output. Hence,
this complexity measure captures the running time of the slowest node(s). In
this paper, we are interested in the running time of the ordinary nodes, to be
compared with the running time of the slowest nodes. The node-averaged
time-complexity of a distributed algorithm on a given instance is defined as
the average, taken over every node of the instance, of the number of rounds
before that node outputs. We compare the node-averaged time-complexity with the
classical one in the standard LOCAL model for distributed network computing. We
show that there can be an exponential gap between the node-averaged
time-complexity and the classical time-complexity, as witnessed by, e.g.,
leader election. Our first main result is a positive one, stating that, in
fact, the two time-complexities behave the same for a large class of problems
on very sparse graphs. In particular, we show that, for LCL problems on cycles,
the node-averaged time complexity is of the same order of magnitude as the
slowest node time-complexity.
In addition, in the LOCAL model, the time-complexity is computed as a worst
case over all possible identity assignments to the nodes of the network. In
this paper, we also investigate the ID-averaged time-complexity, when the
number of rounds is averaged over all possible identity assignments. Our second
main result is that the ID-averaged time-complexity is essentially the same as
the expected time-complexity of randomized algorithms (where the expectation is
taken over all possible random bits used by the nodes, and the number of rounds
is measured for the worst-case identity assignment).
Finally, we study the node-averaged ID-averaged time-complexity.
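
The leader-election gap mentioned above can be made concrete with a toy simulation (an illustration of the phenomenon, not a construction from the paper): on a cycle, a node may output "non-leader" as soon as its radius-t view contains a larger ID, while the maximum-ID node must see the whole cycle before it can output "leader". The worst-case complexity is then about n/2 rounds, while the node-averaged complexity stays small.

```python
# A toy illustration (not the paper's construction) of the gap
# between worst-case and node-averaged round complexity for leader
# election on a cycle in the LOCAL model.
import random

def output_rounds(ids):
    """Round at which each node outputs: non-leaders stop as soon as
    their radius-t ball contains a larger ID; the maximum-ID node
    must see the whole cycle (about n/2 rounds) to claim leadership."""
    n, max_id, rounds = len(ids), max(ids), []
    for v, my_id in enumerate(ids):
        if my_id == max_id:
            rounds.append(n // 2)
            continue
        t = 1
        while max(ids[(v + d) % n] for d in range(-t, t + 1)) <= my_id:
            t += 1
        rounds.append(t)
    return rounds

random.seed(0)
n = 1001
ids = random.sample(range(10 * n), n)  # distinct random IDs
r = output_rounds(ids)
print(f"worst-case rounds: {max(r)}")          # ~ n/2, from the leader
print(f"node-averaged    : {sum(r) / n:.2f}")  # small (roughly log n)
```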
Semantic dementia: a complex and culturally influenced presentation.
The variants of frontotemporal dementia (FTD) require careful differentiation from primary psychiatric disorders as the neuropsychiatric manifestations can overshadow the unique cognitive deficits. The language variants of FTD are less readily recognised by trainees despite making up around 43% of cases. This educational article presents an anonymised case of one of the language variants: semantic dementia. The cognitive deficits and neuropsychiatric manifestations (delusions and hyperreligiosity) are explored in terms of aetiology and management. By the end of the article, readers should be able to differentiate FTD from Alzheimer's disease, understand the principles of management and associated risks, and develop a multifaceted approach to hyperreligiosity in dementia.
Surface shear stress dependence of gas transfer velocity parameterizations using DNS
Air-water gas exchange is studied in direct numerical simulations (DNS) of free-surface flows driven by natural convection and weak winds. The wind is modeled as a constant surface shear stress and the gas transfer is modeled via a passive scalar. The simulations are characterized by a Richardson number Ri = Bν/u*^4, where B, ν, and u* are the buoyancy flux, kinematic viscosity, and friction velocity respectively. The simulations cover 0 < Ri < ∞, ranging from convection-dominated to shear-dominated cases. The results are used to: (i) evaluate parameterizations of the air-water gas exchange, (ii) determine, for a given buoyancy flux, the wind speed at which gas transfer becomes primarily shear driven, and (iii) find an expression for the gas-transfer velocity for flows driven by both convection and shear. The evaluated gas-transfer-velocity parameterizations are based on either the rate of turbulent kinetic energy dissipation, the surface flow divergence, the surface heat flux, or the wind speed. The parameterizations based on dissipation or divergence show an unfavorable Ri dependence for flows with combined forcing, whereas the parameterization based on heat flux shows only a limited Ri dependence. The two parameterizations based on wind speed give reasonable estimates for the transfer velocity, depending however on the surface heat flux. The transition from convection- to shear-dominated gas transfer is shown to occur at Ri ≈ 0.004. Furthermore, the gas transfer is shown to be well represented by two different approaches: (i) additive forcing, expressed as k_g,sum = A_Shear u* (Ri/Ri_c + 1)^(1/4) Sc^(-n) with Ri_c = (A_Shear/A_Buoy)^4, and (ii) either buoyancy- or shear-dominated transfer, expressed as k_g = A_Buoy (Bν)^(1/4) Sc^(-n) for Ri > Ri_c, or k_g = A_Shear u* Sc^(-n) for Ri < Ri_c. Here A_Buoy = 0.4 and A_Shear = 0.1 are constants, and n is an exponent that depends on the water-surface characteristics.
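
Under the notation reconstructed above, the two expressions are straightforward to evaluate. The sketch below uses the constants A_Buoy = 0.4 and A_Shear = 0.1 and the critical Richardson number Ri_c = (A_Shear/A_Buoy)^4 from the abstract; the sample inputs and the Schmidt-number exponent n = 1/2 are assumed for illustration.

```python
# A minimal sketch of the two reconstructed transfer-velocity
# expressions.  A_Buoy, A_Shear and Ri_c follow the abstract; the
# inputs and the Schmidt-number exponent n = 1/2 are assumed.
A_BUOY, A_SHEAR = 0.4, 0.1
RI_C = (A_SHEAR / A_BUOY) ** 4   # ~0.0039, i.e. the Ri ~ 0.004 transition

def transfer_velocity(B, nu, u_star, Sc, n=0.5):
    """Return (Ri, additive k_g, piecewise k_g) in consistent SI units."""
    Ri = B * nu / u_star**4      # Richardson number, Ri = B*nu/u*^4
    k_sum = A_SHEAR * u_star * (Ri / RI_C + 1.0) ** 0.25 * Sc ** (-n)
    k_piece = (A_BUOY * (B * nu) ** 0.25 if Ri > RI_C
               else A_SHEAR * u_star) * Sc ** (-n)
    return Ri, k_sum, k_piece

# Illustrative sweep over friction velocity at fixed buoyancy flux
for u_star in (1e-3, 3e-3, 1e-2):   # m/s
    Ri, k_sum, k_piece = transfer_velocity(B=1e-7, nu=1e-6,
                                           u_star=u_star, Sc=600)
    print(f"u*={u_star:.0e} m/s: Ri={Ri:.2e}, "
          f"k_sum={k_sum:.2e}, k_piecewise={k_piece:.2e}")
```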
Farms, pipes, streams and reforestation: reasoning about structured parallel processes using types and hylomorphisms
The increasing importance of parallelism has motivated the creation of better abstractions for writing parallel software, including structured parallelism using nested algorithmic skeletons. Such approaches provide high-level abstractions that avoid common problems, such as race conditions, and often allow strong cost models to be defined. However, choosing a combination of algorithmic skeletons that yields good parallel speedups for a program on some specific parallel architecture remains a difficult task. In order to achieve this, it is necessary to simultaneously reason both about the costs of different parallel structures and about the semantic equivalences between them. This paper presents a new type-based mechanism that enables strong static reasoning about these properties. We exploit well-known properties of a very general recursion pattern, hylomorphisms, and give a denotational semantics for structured parallel processes in terms of these hylomorphisms. Using our approach, it is possible to determine formally whether a desired parallel structure can be introduced into a program without altering its functional behaviour, and also to choose a version of that parallel structure that minimises some given cost model.
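
The recursion pattern at the heart of the paper can be sketched in a few lines. A hylomorphism pairs an unfold, which grows a tree of sub-problems from a seed, with a fold that consumes their results; divide-and-conquer skeletons such as farms and pipelines arise as particular evaluation strategies for this scheme. The Python below is a lightweight illustration of the pattern only; the paper's actual contribution is a typed denotational semantics, which this sketch does not attempt to capture.

```python
# A lightweight illustration (in Python, not the paper's typed
# setting) of a hylomorphism: an unfold that produces sub-problems
# from a seed, fused with a fold that combines their results.
from typing import Callable, List, Tuple, TypeVar

A = TypeVar("A")  # seed type
B = TypeVar("B")  # result type

def hylo(divide: Callable[[A], Tuple[List[A], Callable[[List[B]], B]]],
         seed: A) -> B:
    """Recurse on the sub-seeds produced by `divide`, then combine;
    recursion bottoms out when `divide` returns no sub-seeds."""
    subs, combine = divide(seed)
    return combine([hylo(divide, s) for s in subs])

# Worked instance: mergesort as a hylomorphism.  A parallel skeleton
# would evaluate the two recursive calls concurrently.
def msort_step(xs: List[int]):
    if len(xs) <= 1:
        return [], lambda _: xs
    mid = len(xs) // 2
    def merge(parts: List[List[int]]) -> List[int]:
        left, right, out = list(parts[0]), list(parts[1]), []
        while left and right:
            out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
        return out + left + right
    return [xs[:mid], xs[mid:]], merge

print(hylo(msort_step, [5, 3, 8, 1, 9, 2]))  # -> [1, 2, 3, 5, 8, 9]
```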
Walks4work: Rationale and study design to investigate walking at lunchtime in the workplace setting
Background: Following recruitment of a private sector company, an 8-week lunchtime walking intervention was implemented to examine its effect on modifiable cardiovascular disease risk factors, and further to see whether the walking environment had any additional effect on those risk factors. Methods: For phase 1 of the study, participants were divided into three groups: two lunchtime walking intervention groups, walking around either an urban or a natural environment twice a week during their lunch break over an 8-week period, and a waiting-list control group that would be invited to join the walking groups after phase 1. In phase 2, all participants were encouraged to walk during their lunch break on self-selected routes. Health checks were completed at baseline, at the end of phase 1, and at the end of phase 2 in order to measure the impact of the intervention on cardiovascular disease risk. The primary outcome variables of heart rate and heart rate variability were measured to assess autonomic function associated with cardiovascular disease. Secondary outcome variables related to cardiovascular disease (body mass index, blood pressure, fitness, and autonomic response to a stressor) were also measured. The efficacy of the intervention in increasing physical activity was objectively monitored throughout the 8 weeks using an accelerometer device. Discussion: The results of this study will help in developing workplace interventions that require low researcher input while yielding high participant output. If effective, this study will highlight the contribution that natural environments can make to the reduction of modifiable cardiovascular disease risk factors within the workplace.
A study protocol to investigate the relationship between dietary fibre intake and fermentation, colon cell turnover, global protein acetylation and early carcinogenesis: the FACT study
Background: A number of studies, notably EPIC, have shown a decrease in colorectal cancer risk associated with increased fibre consumption. Whilst the underlying mechanisms are likely to be multifactorial, production of the short-chain fatty acid butyrate from fibre fermentation is frequently cited as a major potential contributor to the effect. Butyrate inhibits histone deacetylases, which act on a wide range of proteins over and above histones. We therefore hypothesized that alterations in the acetylated proteome may be associated with a cancer-risk phenotype in the colorectal mucosa, and that such alterations are candidate biomarkers for the effectiveness of fibre interventions in cancer prevention.
Methods and design: There are two principal arms to this study: (i) a cross-sectional study (FACT OBS) of 90 subjects recruited from gastroenterology clinics; and (ii) an intervention trial in 40 subjects with an 8-week high-fibre intervention. In both studies the principal goal is to investigate the link between fibre intake, SCFA production, and global protein acetylation. The primary measure is the level of faecal butyrate, which it is hoped will be elevated by moving subjects to a high-fibre diet. Fibre intakes will be estimated in the cross-sectional group using the EPIC Food Frequency Questionnaire. Subsidiary measures of the effect of butyrate on colon mucosal function and precancerous phenotype will include measures of apoptosis, apoptotic regulators, cell cycle, and cell division.
Discussion: This study will provide a new level of mechanistic data on alterations in the functional proteome in response to the colon microenvironment, which may underwrite the observed cancer-preventive effect of fibre. The study may yield novel candidate biomarkers of fibre fermentation and colon mucosal function.
Use of Short Tandem Repeat Sequences to Study Mycobacterium leprae in Leprosy Patients in Malawi and India
Molecular typing has provided an important tool for studies of many pathogens. Such methods could be particularly useful in studies of leprosy, given the many outstanding questions about the pathogenesis and epidemiology of this disease. The approach is particularly difficult with leprosy, however, because of the genetic homogeneity of M. leprae and our inability to culture it. This paper describes molecular epidemiological studies carried out on leprosy patients in Malawi and in India, using short tandem repeat sequences (STRS) as markers of M. leprae strains. It reveals evidence for continuous changes in these markers within individual patients over time, and for selection of different STRS-defined strains between different tissues (skin and nerve) in the same patient. Comparisons between patients recruited under different circumstances reveal the uses and limitations of the approach: STRS analysis may in some circumstances provide a means to trace short transmission chains, but it does not provide a robust tool for distinguishing between relapse and reinfection. This encourages further work to identify genetic markers with different stability characteristics for incorporation into epidemiological studies of leprosy.
Semi-analytical approach to magnetized temperature autocorrelations
The cosmic microwave background (CMB) temperature autocorrelations, induced
by a magnetized adiabatic mode of curvature inhomogeneities, are computed with
semi-analytical methods. As suggested by the latest CMB data, a nearly
scale-invariant spectrum for the adiabatic mode is consistently assumed. In
this situation, the effects of a fully inhomogeneous magnetic field are
scrutinized and constrained with particular attention to harmonics which are
relevant for the region of Doppler oscillations. Depending on the parameters of
the stochastic magnetic field, a hump may replace the second peak of the angular
power spectrum. Detectable effects on the Doppler region are then expected only
if the magnetic power spectra have quasi-flat slopes and typical amplitude
(smoothed over a comoving scale of Mpc size and redshifted to the epoch of
gravitational collapse of the protogalaxy) exceeding 0.1 nG. If the magnetic
energy spectra are bluer (i.e. steeper in frequency) the allowed value of the
smoothed amplitude becomes, comparatively, larger (in the range of 20 nG). The
implications of this investigation for the origin of large-scale magnetic
fields in the Universe are discussed. Connections with forthcoming experimental
observations of CMB temperature fluctuations are also suggested and partially
explored.