Flight Gate Assignment with a Quantum Annealer
Optimal flight gate assignment is a highly relevant optimization problem in airport management. Among other goals, an important one is the minimization of the total transit time of the passengers. The corresponding objective function is quadratic in the binary decision variables encoding the flight-to-gate assignment. Hence, it is a quadratic assignment problem, which is hard to solve in general. In this work we investigate the solvability of this problem with a D-Wave quantum annealer. These machines are optimizers for quadratic unconstrained binary optimization (QUBO) problems, so the flight gate assignment problem seems well suited for them. We use real-world data from a mid-sized German airport as well as simulation-based data to extract typical instances small enough to be amenable to the D-Wave machine. To mitigate precision problems, we employ bin packing on the passenger numbers to reduce the precision requirements of the extracted instances. We find that, for the instances we investigated, the bin packing has little effect on the solution quality. Hence, we were able to solve small problem instances extracted from real data with the D-Wave 2000Q quantum annealer.
Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity
A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which the dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and a machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared in the context of target-emotion images, which were recognised as well as (or even better than) videos, and were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power in machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to that of humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
Adaptive Horizon Model Predictive Control and Al'brekht's Method
A standard way of finding a feedback law that stabilizes a control system to an operating point is to recast the problem as an infinite horizon optimal control problem. If the optimal cost and the optimal feedback can be found on a large domain around the operating point, then a Lyapunov argument can be used to verify the asymptotic stability of the closed loop dynamics. The problem with this approach is that it is usually very difficult to find the optimal cost and the optimal feedback on a large domain for nonlinear problems with or without constraints. Hence the increasing interest in Model Predictive Control (MPC). In standard MPC a finite horizon optimal control problem is solved in real time, but just at the current state; the first control action is implemented, the system evolves one time step, and the process is repeated. A terminal cost and terminal feedback found by Al'brekht's method, defined in a neighborhood of the operating point, are used to shorten the horizon and thereby make the nonlinear programs easier to solve because they have fewer decision variables. Adaptive Horizon Model Predictive Control (AHMPC) is a scheme for varying the horizon length of MPC as needed. Its goal is to achieve stabilization with horizons as small as possible so that MPC methods can be used on faster and/or more complicated dynamic processes.
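The receding-horizon loop described in the abstract can be sketched on an unconstrained linear-quadratic toy problem. The dynamics, weights, and horizon below are illustrative assumptions; a backward Riccati recursion stands in for the nonlinear programs, and Al'brekht's terminal cost and feedback are omitted for brevity:

```python
import numpy as np

# Toy discretized double integrator (illustrative, not from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # stage state cost
R = np.array([[0.1]])  # stage control cost
N = 20                 # prediction horizon

def first_control(x):
    """Solve the N-step LQ problem from state x; return only the first input,
    mirroring MPC's 'apply the first control action' step."""
    P = Q.copy()                     # terminal cost (here simply Q)
    for _ in range(N):               # backward Riccati sweep
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ x                    # first feedback gain acts on x

x = np.array([[1.0], [0.0]])
for _ in range(200):                 # closed-loop receding-horizon run
    u = first_control(x)             # re-solve at the current state
    x = A @ x + B @ u                # system evolves one step; repeat
print(float(np.linalg.norm(x)) < 1e-3)  # state driven near the origin
```

AHMPC would additionally shrink or grow `N` online, keeping it just large enough for the terminal ingredients to certify stability, which is where the computational savings come from.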
Fibronectin-Containing Extracellular Vesicles Protect Melanocytes against Ultraviolet Radiation-Induced Cytotoxicity.
Skin melanocytes are activated by exposure to UV radiation to secrete melanin-containing melanosomes to protect the skin from UV-induced damage. Despite the continuous renewal of the epidermis, the turnover rate of melanocytes is very slow, and they survive for long periods. However, the mechanisms underlying the survival of melanocytes exposed to UV radiation are not known. Here, we investigated the role of melanocyte-derived extracellular vesicles in melanocyte survival. Network analysis of the melanocyte extracellular vesicle proteome identified the extracellular matrix component fibronectin at a central node, and the release of fibronectin-containing extracellular vesicles was increased after exposure of melanocytes to UVB radiation. Using an anti-fibronectin neutralizing antibody and specific inhibitors of extracellular vesicle secretion, we demonstrated that extracellular vesicles enriched in fibronectin were involved in melanocyte survival after UVB radiation. Furthermore, we observed that in the hyperpigmented lesions of patients with melasma, the extracellular space around melanocytes contained more fibronectin compared with normal skin, suggesting that fibronectin is involved in maintaining melanocytes in pathological conditions. Collectively, our findings suggest that melanocytes secrete fibronectin-containing extracellular vesicles to increase their survival after UVB radiation. These data provide important insight into how constantly stimulated melanocytes can be maintained in pathological conditions such as melasma.
Conclusive quantum steering with superconducting transition edge sensors
Quantum steering allows two parties to verify shared entanglement even if one
measurement device is untrusted. A conclusive demonstration of steering through
the violation of a steering inequality is of considerable fundamental interest
and opens up applications in quantum communication. To date all experimental
tests with single photon states have relied on post-selection, allowing
untrusted devices to cheat by hiding unfavourable events in losses. Here we
close this "detection loophole" by combining a highly efficient source of
entangled photon pairs with superconducting transition edge sensors. We achieve
an unprecedented ~62% conditional detection efficiency of entangled photons and
violate a steering inequality with the minimal number of measurement settings
by 48 standard deviations. Our results provide a clear path to practical
applications of steering and to a photonic loophole-free Bell test.
First tagging data on large Atlantic bluefin tuna behaviour in newly retaken Nordic areas suggests repeated behaviour and skipped spawning
Atlantic bluefin tuna (Thunnus thynnus; ABFT) is one of the most iconic fish species in the world. Recently, after being very rare for more than half a century, large bluefin tunas have returned to Nordic waters in late summer and autumn, marking the return of the largest predatory fish in Nordic waters. By tagging 18 bluefin tunas with electronic tags (pop-up satellite archival tags), we show that bluefin tuna observed in Nordic waters undertake different migration routes, with some individuals migrating into the western Atlantic Ocean, while others stay exclusively in the eastern Atlantic and enter the Mediterranean Sea to spawn. We additionally present evidence of possible skipped spawning inferred from behavioural analyses. In Nordic waters, ABFT primarily use the upper water column, likely reflecting feeding activity. The results support the hypothesis that ABFT migrating to Nordic waters return to the same general feeding area within the region on an annual basis. These observations may have important implications for management because (1) tunas that come into Nordic waters might represent only a few year classes (as evidenced by a narrow size range), and thus may be particularly vulnerable to area-specific exploitation, and (2) they challenge the assumption of consecutive spawning in adult Atlantic bluefin tuna, as used in current stock assessment models. Without careful management and limited exploitation of this part of the ABFT population, the species’ return to Nordic waters could be short-lived.
Current approaches and future perspectives on strategies for the development of personalized tissue engineering therapies
Personalized tissue engineering and regenerative medicine (TERM) therapies propose patient-oriented effective solutions, considering individual needs. Cell-based therapies, for example, may benefit from cell sources that enable easier autologous set-ups or from recent developments in iPS cell technologies towards effective personalized therapeutics. Furthermore, the customization of scaffold materials to perfectly fit a patient's tissue defect through rapid prototyping technologies, also known as 3D printing, is now a reality. Nevertheless, the time needed to expand cells or to obtain functional in vitro tissue substitutes prior to implantation prevents advancements towards routine use upon a patient's needs. Thus, personalized therapies also anticipate the importance of creating off-the-shelf solutions to enable immediately available tissue engineered products. This paper reviews the main recent developments and future challenges to enable personalized TERM approaches and to bring these technologies closer to clinical applications.
Anomalies and the chiral magnetic effect in the Sakai-Sugimoto model
In the chiral magnetic effect an imbalance in the number of left- and
right-handed quarks gives rise to an electromagnetic current parallel to the
magnetic field produced in noncentral heavy-ion collisions. The chiral
imbalance may be induced by topologically nontrivial gluon configurations via
the QCD axial anomaly, while the resulting electromagnetic current itself is a
consequence of the QED anomaly. In the Sakai-Sugimoto model, which in a certain
limit is dual to large-N_c QCD, we discuss the proper implementation of the QED
axial anomaly, the (ambiguous) definition of chiral currents, and the
calculation of the chiral magnetic effect. We show that this model correctly
contains the so-called consistent anomaly, but requires the introduction of a
(holographic) finite counterterm to yield the correct covariant anomaly.
Introducing net chirality through an axial chemical potential, we find a
nonvanishing vector current only before including this counterterm. This seems
to imply the absence of the chiral magnetic effect in this model. On the other
hand, for a conventional quark chemical potential and large magnetic field,
which is of interest in the physics of compact stars, we obtain a nontrivial
result for the axial current that is in agreement with previous calculations
and known exact results for QCD.
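For orientation, the chiral magnetic effect discussed above is usually summarized by the textbook relation between the induced vector current and the axial chemical potential, written here for a single massless Dirac fermion of charge e (a standard result quoted for context, not a formula taken from this paper):

```latex
% Chiral magnetic effect: vector current induced by an axial chemical
% potential \mu_5 in a background magnetic field \vec{B}, for one massless
% Dirac fermion of charge e. For quarks, multiply by N_c and sum the
% flavor charges q_f^2.
\vec{j} \;=\; \frac{e^{2}\,\mu_{5}}{2\pi^{2}}\,\vec{B}
```

The abstract's point is precisely whether (and with which counterterm) the holographic model reproduces a current of this covariant-anomaly form.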
Law Libraries and Laboratories: The Legacies of Langdell and His Metaphor
Law librarians and others have often referred to Harvard Law School Dean C.C. Langdell’s statements that the law library is the lawyer’s laboratory. Professor Danner examines the context of what Langdell meant through his other writings, the educational environment at Harvard in the late nineteenth century, and the changing perceptions of university libraries generally. He then considers how the “laboratory metaphor” has been applied by librarians and legal scholars during the twentieth century and into the twenty-first. The article closes with thoughts on Langdell’s legacy for law librarians and the usefulness of the laboratory metaphor.