Automated long-term two-photon imaging in head-fixed walking Drosophila
The brain of Drosophila shows dynamics at multiple timescales, from the millisecond range of fast voltage or calcium transients to functional and structural changes occurring over multiple days. Relating such dynamics to behavior requires monitoring neural circuits across these timescales in behaving animals. Here, we develop a technique for automated long-term two-photon imaging in fruit flies, during wakefulness and sleep, navigating in virtual reality for up to seven days. The method is enabled by laser surgery, a microrobotic arm that controls forceps to assist dissection, an automated feeding robot, and volumetric, simultaneous multiplane imaging. The approach is validated in the fly’s head direction system. Imaging in behaving flies over multiple timescales will be useful for understanding circadian activity, learning and long-term memory, or sleep.
Scalable, Time-Responsive, Digital, Energy-Efficient Molecular Circuits using DNA Strand Displacement
We propose a novel theoretical biomolecular design to implement any Boolean
circuit using the mechanism of DNA strand displacement. The design is scalable:
all species of DNA strands can in principle be mixed and prepared in a single
test tube, rather than requiring separate purification of each species, which
is a barrier to large-scale synthesis. The design is time-responsive: the
concentration of output species changes in response to the concentration of
input species, so that time-varying inputs may be continuously processed. The
design is digital: Boolean values of wires in the circuit are represented as
high or low concentrations of certain species, and we show how to construct a
single-input, single-output signal restoration gate that amplifies the
difference between high and low, which can be distributed to each wire in the
circuit to overcome signal degradation. This means we can achieve a digital
abstraction of the analog values of concentrations. Finally, the design is
energy-efficient: if input species are specified ideally (meaning absolutely 0
concentration of unwanted species), then output species converge to their ideal
concentrations at steady-state, and the system at steady-state is in (dynamic)
equilibrium, meaning that no energy is consumed by irreversible reactions until
the input again changes.
Drawbacks of our design include the following. If input is provided
non-ideally (small positive concentration of unwanted species), then energy
must be continually expended to maintain correct output concentrations even at
steady-state. In addition, our fuel species - those species that are
permanently consumed in irreversible reactions - are not "generic"; each gate
in the circuit is powered by its own specific type of fuel species. Hence
different circuits must be powered by different types of fuel. Finally, we
require input to be given according to the dual-rail convention, so that an
input of 0 is specified not only by the absence of a certain species, but by
the presence of another. That is, we do not construct a "true NOT gate" that
sets its output to high concentration if and only if its input's concentration
is low. It remains an open problem to design scalable, time-responsive,
digital, energy-efficient molecular circuits that additionally solve one of
these problems, or to prove that some subset of their resolutions are mutually
incompatible.
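The signal-restoration and dual-rail ideas above can be illustrated with a toy numerical sketch (our own caricature, not the paper's chemistry): the restoration gate is modeled as a map that amplifies the deviation of a concentration from the high/low threshold, and a dual-rail NOT is simply a swap of the two rails. The particular map and the 0.5 threshold are illustrative choices, not quantities from the paper.

```python
def restore(x):
    # Toy restoration map (a smoothstep), standing in for the paper's
    # signal-restoration gate: fixed points at 0, 0.5, and 1, with
    # values below the 0.5 threshold flowing to 0 (logical low) and
    # values above it flowing to 1 (logical high) under iteration.
    return x * x * (3.0 - 2.0 * x)

def restore_n(x, n=20):
    # Distributing restoration along a wire: repeated amplification
    # of the high/low difference overcomes signal degradation.
    for _ in range(n):
        x = restore(x)
    return x

def dual_rail_not(wire):
    # Under the dual-rail convention a wire is a pair of species
    # concentrations (hi, lo); NOT is a rail swap, which is why the
    # design can do without a "true NOT gate".
    hi, lo = wire
    return (lo, hi)
```

A degraded high signal of 0.7 is restored to nearly 1, and a degraded low of 0.3 to nearly 0, giving the digital abstraction of analog concentrations described above.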
Flow cytometric characterization and clinical outcome of CD4+ T-cell lymphoma in dogs: 67 cases.
Background: Canine T-cell lymphoma (TCL) is conventionally considered an aggressive disease, but some forms are histologically and clinically indolent. CD4+ TCL is reported to be the most common subtype of TCL. We assessed flow cytometric characteristics, histologic features when available, and clinical outcomes of CD4+ TCL to determine if flow cytometry can be used to subclassify this group of lymphomas.
Objective: To test the hypothesis that canine CD4+ T-cell lymphoma (TCL) is a homogeneous group of lymphomas with an aggressive clinical course.
Animals: Sixty-seven dogs diagnosed with CD4+ TCL by flow cytometry and treated at 1 of 3 oncology referral clinics.
Methods: Retrospective multivariable analysis of outcome in canine CD4+ TCL, including patient characteristics, treatment, and flow cytometric features.
Results: The majority of CD4+ TCL were CD45+, expressed low class II MHC, and exhibited an aggressive clinical course independent of treatment regimen (median survival, 159 days). Histologically, CD4+ TCL were classified as lymphoblastic or peripheral T-cell. Size of the neoplastic lymphocytes had a modest effect on both PFI and survival in this group. A small number of CD4+ TCL were CD45- and class II MHC high, and exhibited an apparently more indolent clinical course (median survival not yet reached).
Conclusions and Clinical Importance: Although the majority of CD4+ TCL in dogs had uniform clinical and flow cytometric features and an aggressive clinical course, a subset had a unique immunophenotype that predicts significantly longer survival. This finding strengthens the utility of flow cytometry to aid in the stratification of canine lymphoma.
A reassessment of Antarctic plateau reactive nitrogen based on ANTCI 2003 airborne and ground based measurements
The first airborne measurements of nitric oxide (NO) on the Antarctic plateau have demonstrated that the previously reported elevated levels of this species extend well beyond the immediate vicinity of South Pole. Although the current database is still relatively weak and critical laboratory experiments are still needed, the findings here suggest that the chemical uniqueness of the plateau may be substantially greater than first reported. For example, South Pole ground-based findings have provided new evidence showing that the dominant process driving the release of nitrogen from the snowpack during the spring/summer season (post-depositional loss) is photochemical in nature, with evaporative processes playing a lesser role. There is also new evidence suggesting that nitrogen, in the form of nitrate, may undergo multiple recycling within a given photochemical season. Speculation here is that this may be a unique property of the plateau, largely related to its persistently cold temperatures even during summer. These conditions promote the efficient adsorption of molecules like HNO3 (and very likely HO2NO2) onto snowpack surface ice, where we have hypothesized that enhanced photochemical processing can occur, leading to the efficient release of NOx to the atmosphere. In addition to these process-oriented, tentative conclusions, the findings from the airborne studies, in conjunction with modeling exercises, suggest a new paradigm for the plateau atmosphere. The near-surface atmosphere over this massive region can be viewed as serving as much more than a temporary reservoir or holding tank for imported chemical species. It defines an immense atmospheric chemical reactor which is capable of modifying the chemical characteristics of select atmospheric constituents. This reactor has most likely been in place over geological time, and may have led to the chemical modulation of some trace species now found in ice cores.
Reactive nitrogen has played a critical role in both establishing and maintaining this reactor. © 2007 Elsevier Ltd. All rights reserved.
Stub model for dephasing in a quantum dot
As an alternative to Büttiker's dephasing lead model, we examine a dephasing
stub. Both models are phenomenological ways to introduce decoherence in chaotic
scattering by a quantum dot. The difference is that the dephasing lead opens up
the quantum dot by connecting it to an electron reservoir, while the dephasing
stub is closed at one end. Voltage fluctuations in the stub take over the
dephasing role from the reservoir. Because the quantum dot with dephasing lead
is an open system, only expectation values of the current can be forced to
vanish at low frequencies, while the outcome of an individual measurement is
not so constrained. The quantum dot with dephasing stub, in contrast, remains a
closed system with a vanishing low-frequency current at each and every
measurement. This difference is a crucial one in the context of quantum
algorithms, which are based on the outcome of individual measurements rather
than on expectation values. We demonstrate that the dephasing stub model has a
parameter range in which the voltage fluctuations are sufficiently strong to
suppress quantum interference effects, while still being sufficiently weak that
classical current fluctuations can be neglected relative to the nonequilibrium
shot noise.
Comment: 8 pages with 1 figure; contribution for the special issue of J. Phys. A on "Trends in Quantum Chaotic Scattering".
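As a generic illustration of the mechanism at work here, not the paper's random-matrix treatment, the way strong phase fluctuations suppress quantum interference can be seen in a short Monte Carlo: averaging the interference term cos(φ + δ) over Gaussian phase noise δ of standard deviation σ suppresses it by the factor exp(-σ²/2).

```python
import math
import random

def mean_interference(sigma, phi=0.0, n=200_000, seed=1):
    """Monte Carlo average of the interference term cos(phi + delta)
    with Gaussian phase noise delta ~ N(0, sigma^2). Analytically the
    result is cos(phi) * exp(-sigma**2 / 2), so large sigma (strong
    voltage fluctuations in the stub) washes the interference out."""
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    return sum(math.cos(phi + rng.gauss(0.0, sigma)) for _ in range(n)) / n
```

With σ = 0 the interference term survives in full; by σ ≈ 3 it is essentially gone, a toy analogue of the regime in which dephasing is strong while other observables are barely affected.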
Corporate Legal Guardianship: An Innovative Concept in Advocacy and Protective Services
An article written by Michael Seelig and Sandra R. Chestnut and published in the May 1986 issue of Social Work, pages 221-223.
Angular velocity integration in a fly heading circuit
Many animals maintain an internal representation of their heading as they move through their surroundings. Such a compass representation was recently discovered in a neural population in the Drosophila melanogaster central complex, a brain region implicated in spatial navigation. Here, we use two-photon calcium imaging and electrophysiology in head-fixed walking flies to identify a different neural population that conjunctively encodes heading and angular velocity, and is excited selectively by turns in either the clockwise or counterclockwise direction. We show how these mirror-symmetric turn responses combine with the neurons' connectivity to the compass neurons to create an elegant mechanism for updating the fly's heading representation when the animal turns in darkness. This mechanism, which employs recurrent loops with an angular shift, bears a resemblance to those proposed in theoretical models for rodent head direction cells. Our results provide a striking example of structure matching function for a broadly relevant computation
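The recurrent-loop-with-angular-shift mechanism can be caricatured in a few lines of code. This is a sketch under our own simplifying assumptions, not the paper's circuit model: heading is a bump of activity on a ring of wedges, and turn-gated copies of the bump, shifted one wedge clockwise or counterclockwise, drag the bump around when the animal turns, even in darkness.

```python
N = 16  # toy number of heading wedges on the ring

def rotate(v, k):
    # circular shift of the activity vector by k wedges
    k %= len(v)
    return v[-k:] + v[:-k]

def step(bump, ang_vel, gain=0.6):
    """One update of a toy shifted-recurrence model: clockwise turns
    (ang_vel > 0) feed back a +1-shifted copy of the bump, while
    counterclockwise turns feed back a -1-shifted copy, so the heading
    estimate is dragged in the direction of the turn. The crude
    thresholding that keeps the bump sharp is our own stand-in for
    attractor dynamics, not a claim about the fly circuit."""
    cw, ccw = max(ang_vel, 0.0), max(-ang_vel, 0.0)
    mixed = [b + gain * (cw * p + ccw * m)
             for b, p, m in zip(bump, rotate(bump, 1), rotate(bump, -1))]
    peak = max(mixed)
    sharp = [x if x > 0.2 * peak else 0.0 for x in mixed]
    total = sum(sharp)
    return [x / total for x in sharp]
```

Starting from a bump at wedge 0, constant clockwise input moves the bump's peak to higher wedge indices; counterclockwise input moves it the other way around the ring.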
Electron fractionalization induced dephasing in Luttinger liquids
Using the appropriate fractionalization mechanism, we correctly derive the temperature (T) and interaction dependence of the electron lifetime in Luttinger liquids. For strong enough interactions, we find a power-law temperature dependence of the inverse lifetime governed by the standard Luttinger exponent; this reinforces that electrons are not good quasiparticles. We immediately emphasize that this is of importance for the detection of electronic interference in ballistic 1D rings and carbon nanotubes, inducing "dephasing" (a strong reduction of Aharonov-Bohm oscillations).
Comment: 5 pages, 1 figure (final version for PRB Brief Report).
Binary pattern tile set synthesis is NP-hard
In the field of algorithmic self-assembly, a long-standing unproven conjecture has been that of the NP-hardness of binary pattern tile set synthesis (2-PATS). The k-PATS problem is that of designing a tile assembly system with the smallest number of tile types which will self-assemble an input pattern of k colors. Of both theoretical and practical significance, k-PATS has been studied in a series of papers which have shown it to be NP-hard for successively smaller numbers of colors. In this paper, we settle the fundamental conjecture that 2-PATS is NP-hard, concluding this line of study. While most of our proof relies on standard mathematical proof techniques, one crucial lemma makes use of a computer-assisted proof, a relatively novel but increasingly utilized paradigm for deriving proofs of complex mathematical results. This tool is especially powerful for attacking combinatorial problems, as exemplified by the proof of the four color theorem by Appel and Haken (simplified later by Robertson, Sanders, Seymour, and Thomas) or the recent important advance on the Erdős discrepancy problem by Konev and Lisitsa using computer programs. We utilize a massively parallel algorithm and thus turn an otherwise intractable portion of our proof into a program which requires approximately a year of computation time, bringing the use of computer-assisted proofs to a new scale. We fully detail the algorithm employed by our code, and make the code freely available online.
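The setting can be made concrete with a small sketch (our own toy deterministic simulator of the standard PATS setup, not anything specific to this paper): an L-shaped seed presents glues along the south and west boundary, each position receives the unique tile whose south/west glues match its neighbors, and a tile set "solves" a pattern if the assembled colors match everywhere. 2-PATS asks for the fewest tile types achieving this for a given 2-color pattern.

```python
def assemble(tiles, south_seed, west_seed):
    """Deterministically assemble a rectangle in the standard PATS
    setting: the seed fixes the glues on the south and west boundary,
    and position (x, y) receives the unique tile whose (south, west)
    glue pair matches the tiles below and to its left.
    tiles maps (south_glue, west_glue) -> (north_glue, east_glue, color).
    Returns the grid of colors, or None if assembly gets stuck."""
    w, h = len(south_seed), len(west_seed)
    colors = [[None] * w for _ in range(h)]
    north = list(south_seed)      # glue each column currently offers upward
    for y in range(h):
        east = west_seed[y]       # glue offered rightward along this row
        for x in range(w):
            tile = tiles.get((north[x], east))
            if tile is None:
                return None       # no tile matches: assembly is stuck
            n_glue, e_glue, c = tile
            colors[y][x] = c
            north[x], east = n_glue, e_glue
    return colors

# A 2-color checkerboard assembled with just two tile types; finding
# the minimum number of tile types for an arbitrary pattern is the
# (NP-hard) optimization behind 2-PATS.
checker_tiles = {
    (0, 0): (1, 1, 0),  # "white" tile: flips the parity it passes on
    (1, 1): (0, 0, 1),  # "black" tile
}
```

With parity glues on the seed, the two tile types above tile any rectangle with the checkerboard pattern; removing either tile type leaves the assembly stuck.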