The intermediate scattering function for quasi-elastic scattering in the presence of memory friction
We derive an analytical expression for the intermediate scattering function
of a particle on a flat surface obeying the Generalised Langevin Equation, with
exponential memory friction. Numerical simulations based on an extended phase
space method confirm the analytical results. The simulated trajectories provide
qualitative insight into the effect that introducing a finite memory timescale
has on the analytical line shapes. The relative amplitude of the long-time
exponential tail of the line shape is suppressed, but its decay rate is
unchanged, reflecting the fact that the cutoff frequency of the exponential
kernel affects short-time correlations but not the diffusion coefficient which
is defined in terms of a long-time limit. The exponential sensitivity of the
relative amplitudes to the decay time of the chosen memory kernel is a very
strong indicator for the prospect of inferring a friction kernel and the
physical insights from experimentally measured intermediate scattering
functions.
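For an exponential memory kernel, the "extended phase space method" the abstract mentions is commonly realised as a Markovian embedding: the memory integral plus the coloured noise are replaced by a single auxiliary variable obeying an Ornstein-Uhlenbeck-type equation. A minimal sketch, with illustrative parameters that are not taken from the paper:

```python
import cmath
import math
import random

def simulate_isf(dk=1.0, gamma=1.0, tau=0.5, kT=1.0, m=1.0,
                 dt=0.02, nsteps=1000, ntraj=100, seed=0):
    """Markovian embedding of a GLE with exponential memory kernel
    K(t) = (gamma/tau) * exp(-|t|/tau).  The memory force plus the
    coloured noise are bundled into one auxiliary variable s, whose
    noise amplitude follows from the fluctuation-dissipation theorem.
    Returns the intermediate scattering function <exp(i*dk*(x(t)-x(0)))>
    averaged over ntraj trajectories (toy parameters, illustrative only).
    """
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * kT * gamma) / tau  # FDT noise amplitude
    isf = [0.0j] * (nsteps + 1)
    for _ in range(ntraj):
        x, v, s = 0.0, rng.gauss(0.0, math.sqrt(kT / m)), 0.0
        x0 = x
        for n in range(nsteps + 1):
            isf[n] += cmath.exp(1j * dk * (x - x0))
            # Euler-Maruyama step of the extended (Markovian) system
            dW = rng.gauss(0.0, math.sqrt(dt))
            x += v * dt
            v += (s / m) * dt
            s += (-s / tau - (gamma / tau) * v) * dt + sigma * dW
    return [z / ntraj for z in isf]
```

The long-time decay of the resulting line shape is governed by the diffusion coefficient kT/gamma, independent of tau, consistent with the abstract's observation that the memory timescale affects short-time correlations but not the long-time decay rate.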
Bounding inconsistency using a novel threshold metric for dead reckoning update packet generation
Human-to-human interaction across distributed applications requires that sufficient consistency be maintained among participants in the face of network characteristics such as latency and limited bandwidth. The level of inconsistency arising from the network is proportional to the network delay, and thus a function of bandwidth consumption. Distributed simulation has often used a bandwidth reduction technique known as dead reckoning that combines approximation and estimation in the communication of entity movement to reduce network traffic, and thus improve consistency. However, unless carefully tuned to application and network characteristics, such an approach can introduce more inconsistency than it avoids. The key tuning metric is the distance threshold. This paper questions the suitability of the standard distance threshold as a metric for use in the dead reckoning scheme. Using a model relating entity path curvature and inconsistency, a major performance related limitation of the distance threshold technique is highlighted. We then propose an alternative time-space threshold criterion. The time-space threshold is demonstrated, through simulation, to perform better for low curvature movement. However, it too has a limitation. Based on this, we further propose a novel hybrid scheme. Through simulation and live trials, this scheme is shown to perform well across a range of curvature values, and places bounds on both the spatial and absolute inconsistency arising from dead reckoning.
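The two trigger criteria discussed above can be sketched as follows. This is a minimal illustration of the idea, not the paper's actual formulation; all names and parameters are assumed:

```python
import math

def distance_threshold_exceeded(actual, predicted, d_thresh):
    """Standard dead-reckoning trigger: generate an update packet when
    the entity's true position drifts more than a fixed distance from
    the model that remote nodes are extrapolating."""
    return math.dist(actual, predicted) > d_thresh

def time_space_threshold_exceeded(errors, dt, ts_thresh):
    """Time-space variant (illustrative): integrate spatial error over
    the time since the last update, so a sustained small error
    eventually triggers an update even when the instantaneous distance
    never crosses a fixed threshold.  `errors` holds the spatial error
    sampled every dt seconds since the last update was sent."""
    return sum(errors) * dt > ts_thresh
```

A pure distance trigger never fires if the extrapolation error plateaus just below the threshold, however long it persists; the time-space criterion bounds that accumulated (absolute) inconsistency, which is the motivation for the hybrid scheme described above.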
Exploring the use of local consistency measures as thresholds for dead reckoning update packet generation
Human-to-human interaction across distributed applications requires that sufficient consistency be maintained among participants in the face of network characteristics such as latency and limited bandwidth. Techniques and approaches for reducing bandwidth usage can minimize network delays by reducing the network traffic and therefore better exploiting available bandwidth. However, these approaches induce inconsistencies within the level of human perception. Dead reckoning is a well-known technique for reducing the number of update packets transmitted between participating nodes. It employs a distance threshold for deciding when to generate update packets. This paper questions the use of such a distance threshold in the context of absolute consistency and it highlights a major drawback with such a technique. An alternative threshold criterion based on time and distance is examined and it is compared to the distance-only threshold. A drawback with this proposed technique is also identified and a hybrid threshold criterion is then proposed. However, the trade-off between spatial and temporal inconsistency remains.
Computational neurorehabilitation: modeling plasticity and learning to predict recovery
Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling: regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.
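A common starting point for the motor-learning component of such models is a trial-by-trial state-space model, in which an internal state decays with a retention factor and is corrected by a fraction of each trial's performance error. A minimal sketch with illustrative parameters (not fitted to any patient data):

```python
def simulate_learning(target=1.0, a=0.9, b=0.3, n_trials=30, x0=0.0):
    """Single-state state-space model of trial-by-trial motor learning:
    x[n+1] = a * x[n] + b * (target - x[n]), where a is a retention
    factor and b an error-driven learning rate.  Both values here are
    illustrative, not estimates from the rehabilitation literature."""
    x, states = x0, []
    for _ in range(n_trials):
        error = target - x          # performance error on this trial
        x = a * x + b * error       # retention + error-driven update
        states.append(x)
    return states
```

With these parameters the state converges geometrically to the fixed point b*target/(1 - a + b); fitting a and b per individual is one way such models could support the individual-level predictions discussed above.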
A method for constrained optimisation of the design of a scanning helium microscope.
We describe a method for obtaining the optimal design of a normal incidence Scanning Helium Microscope (SHeM). Scanning helium microscopy is a recently developed technique that uses low energy neutral helium atoms as a probe to image the surface of a sample without causing damage. After estimating the variation of source brightness with nozzle size and pressure, we perform a constrained optimisation to determine the optimal geometry of the instrument (i.e. the geometry that maximises intensity) for a given target resolution. For an instrument using a pinhole to form the helium microprobe, the source and atom optics are separable and Lagrange multipliers are used to obtain an analytic expression for the optimal parameters. For an instrument using a zone plate as the focal element, the whole optical system must be considered and a numerical approach has been applied. Unlike previous numerical methods for optimisation, our approach provides insight into the effect and significance of each instrumental parameter, enabling an intuitive understanding of the effect of the SHeM geometry. We show that for an instrument with a working distance of 1 mm, a zone plate with a minimum feature size of 25 nm becomes the advantageous focussing element if the desired beam standard deviation is below about 300 nm. The work was supported by EPSRC grant EP/R008272/1. M.B. acknowledges an EPSRC studentship and a Leathersellers Graduate scholarship.
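The flavour of the Lagrange-multiplier step can be shown on a toy version of the problem (not the paper's actual model): suppose two geometric contributions a and b add in quadrature to the beam standard deviation, a^2 + b^2 = R^2, while detected intensity scales as (a*b)^2. The Lagrange condition for maximising a^2 * b^2 under that constraint gives a = b = R/sqrt(2), which a brute-force search confirms:

```python
import math

def optimal_split(R, n=20000):
    """Toy constrained optimisation in the spirit of pinhole-SHeM
    design (illustrative only): maximise the intensity proxy (a*b)^2
    subject to the resolution constraint a^2 + b^2 = R^2 by scanning a.
    The analytic Lagrange-multiplier solution is a = b = R/sqrt(2)."""
    best_a, best_val = 0.0, -1.0
    for i in range(1, n):
        a = R * i / n
        b2 = R * R - a * a          # constraint fixes b^2 given a
        val = a * a * b2            # intensity proxy (a*b)^2
        if val > best_val:
            best_a, best_val = a, val
    return best_a
```

The equal split illustrates the general pattern the analytic treatment exposes: at the optimum, the competing contributions to the spot size are balanced rather than one being driven to zero.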
In vitro synergy and enhanced murine brain penetration of saquinavir coadministered with mefloquine.
Highly active antiretroviral therapy has substantially improved prognosis in human immunodeficiency virus (HIV). However, the integration of proviral DNA, development of viral resistance, and lack of permeability of drugs into sanctuary sites (e.g., brain and lymphocyte) are major limitations to current regimens. Previous studies have indicated that the antimalarial drug chloroquine (CQ) has antiviral efficacy and a synergism with HIV protease inhibitors. We have screened a panel of antimalarial compounds for activity against HIV-1 in vitro. A limited efficacy was observed for CQ, mefloquine (MQ), and mepacrine (MC). However, marked synergy was observed between MQ and saquinavir (SQV), but not CQ, in U937 cells. Furthermore, enhancement of the antiviral activity of SQV and four other protease inhibitors (PIs) by MQ was observed in MT4 cells, indicating a class-specific rather than a drug-specific phenomenon. We demonstrate that these observations are a result of inhibition of multiple drug efflux proteins by MQ and that MQ also displaces SQV from orosomucoid in vitro. Finally, coadministration of MQ and SQV in CD-1 mice dramatically altered the tissue distribution of SQV, resulting in a >3-fold and >2-fold increase in the tissue/blood ratio for brain and testis, respectively. This pharmacological enhancement of in vitro antiviral activity of PIs by MQ now warrants further examination in vivo.
Fast and flexible selection with a single switch
Selection methods that require only a single-switch input, such as a button
click or blink, are potentially useful for individuals with motor impairments,
mobile technology users, and individuals wishing to transmit information
securely. We present a single-switch selection method, "Nomon," that is general
and efficient. Existing single-switch selection methods require selectable
options to be arranged in ways that limit potential applications. By contrast,
traditional operating systems, web browsers, and free-form applications (such
as drawing) place options at arbitrary points on the screen. Nomon, however,
has the flexibility to select any point on a screen. Nomon adapts automatically
to an individual's clicking ability; it allows a person who clicks precisely to
make a selection quickly and allows a person who clicks imprecisely more time
to make a selection without error. Nomon reaps gains in information rate by
allowing the specification of beliefs (priors) about option selection
probabilities and by avoiding tree-based selection schemes in favor of direct
(posterior) inference. We have developed both a Nomon-based writing application
and a drawing application. To evaluate Nomon's performance, we compared the
writing application with a popular existing method for single-switch writing
(row-column scanning). Novice users wrote 35% faster with the Nomon interface
than with the scanning interface. An experienced user (author TB, with > 10
hours practice) wrote at speeds of 9.3 words per minute with Nomon, using 1.2
clicks per character and making no errors in the final text. Comment: 14 pages, 5 figures, 1 table, presented at NIPS 2009 Mini-symposium.
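The direct posterior inference mentioned above can be sketched as a simple Bayesian update. This is an illustration in the spirit of Nomon, not its actual model; the Gaussian click-timing noise model and all parameters are assumptions:

```python
import math

def posterior_after_click(prior, offsets, sigma=0.1):
    """Illustrative direct posterior inference over on-screen options:
    a click observed at temporal offset offsets[i] from option i's
    reference phase is scored under an assumed Gaussian model of the
    user's click-timing noise, and Bayes' rule reweights the prior.
    Selection fires once one option's posterior is high enough."""
    likelihood = [math.exp(-0.5 * (d / sigma) ** 2) for d in offsets]
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]
```

Successive clicks multiply in further likelihood terms, so a precise clicker concentrates the posterior on one option in very few clicks, while an imprecise clicker simply needs more clicks before any option crosses the selection threshold, matching the adaptive behaviour described above.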
Eliciting conditioned taste aversion in lizards: Live toxic prey are more effective than scent and taste cues alone
© 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd. Conditioned taste aversion (CTA) is an adaptive learning mechanism whereby a consumer associates the taste of a certain food with symptoms caused by a toxic substance, and thereafter avoids eating that type of food. Recently, wildlife researchers have employed CTA to discourage native fauna from ingesting toxic cane toads (Rhinella marina), a species that is invading tropical Australia. In this paper, we compare the results of 2 sets of CTA trials on large varanid lizards ("goannas," Varanus panoptes). One set of trials (described in this paper) exposed recently-captured lizards to sausages made from cane toad flesh, laced with a nausea-inducing chemical (lithium chloride) to reinforce the aversion response. The other trials (in a recently-published paper, reviewed herein) exposed free-ranging lizards to live juvenile cane toads. The effectiveness of the training was judged by how long a lizard survived in the wild before it was killed (fatally poisoned) by a cane toad. Both stimuli elicited rapid aversion to live toads, but the CTA response did not enhance survival rates of the sausage-trained goannas after they were released into the wild. In contrast, the goannas exposed to live juvenile toads exhibited higher long-term survival rates than did untrained conspecifics. Our results suggest that although it is relatively easy to elicit short-term aversion to toad cues in goannas, a biologically realistic stimulus (live toads, encountered by free-ranging predators) is most effective at buffering these reptiles from the impact of invasive toxic prey.
Miocene Shark and Batoid Fauna from Nosy Makamby (Mahajanga Basin, Northwestern Madagascar)