The use of tricaine methanesulfonate, clove oil, metomidate, and 2-phenoxyethanol for anesthesia induction in alewives (Alosa pseudoharengus)
Anesthetics are widely used in routine aquaculture operations to immobilize animals for tagging, spawning, handling, and vaccination. A number of anesthetics are currently available for finfish, but their efficacy and optimal dosages are highly species-specific. The efficacy of four anesthetic agents (tricaine methanesulfonate (MS-222), clove oil, metomidate, and 2-phenoxyethanol (2-PE)) was studied in adult, juvenile (133.3 ± 1.5 mm, 27.5 ± 8.9 g), and larval alewives (Alosa pseudoharengus Wilson). In an initial trial, wild-caught adults were anesthetized with doses of 87.5-112.5 mg/L MS-222, 25-40 mg/L clove oil, 0.5-5.0 mg/L metomidate, and 0.125-0.550 mg/L 2-PE. Optimal doses for anesthesia were similar for larvae and juveniles, and were identified as 75-100 mg/L MS-222, 40 mg/L clove oil, 5-7 mg/L metomidate, and 500 mg/L 2-PE. All juvenile fish survived 48 hours post-exposure to each optimal dose. In a longer-term (24-hour) sedation experiment, juvenile alewives were netted and exposed to low doses of clove oil (2.5 and 5.0 mg/L) and metomidate (0.25 and 0.50 mg/L), and plasma cortisol was measured. Fish exposed to the clove oil treatments exhibited a cortisol stress response that was prolonged at the higher dose. No cortisol stress response was observed in the metomidate treatments. Overall, optimal acute anesthesia doses for alewives were similar to those reported for other species, and metomidate may be useful for longer-term sedation.
Decoherence effects on weak value measurements in double quantum dots
We study the effect of decoherence on a weak value measurement in a paradigm system consisting of a double quantum dot continuously measured by a quantum point contact. Fluctuations of the parameters controlling the dot state induce decoherence. We find that, for measurements longer than the decoherence time, weak values are always reduced to within the range of the eigenvalues of the measured observable. For measurements at shorter time scales, the measured weak value strongly depends on the interplay between the decoherence dynamics of the system and the detector backaction. In particular, depending on the postselected state and the strength of the decoherence, a more frequent classical readout of the detector might lead to an enhancement of weak values.
Comment: published version, new figures and comments added; 15 pages, 7 figures
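For context, the weak value discussed above is the standard Aharonov-Albert-Vaidman expression; the abstract does not restate it, so the definition below is the textbook one, with |ψ_i⟩ the preselected state of the double dot and |ψ_f⟩ the postselected one (notation assumed here, not taken from the paper):

% Weak value of an observable A under pre- and postselection
A_w = \frac{\langle \psi_f | \hat{A} | \psi_i \rangle}{\langle \psi_f | \psi_i \rangle}

Without decoherence, Re(A_w) can lie far outside the spectrum of A when ⟨ψ_f|ψ_i⟩ is small; the result quoted above says that once the measurement outlasts the decoherence time, the measured weak value is pulled back inside the eigenvalue range.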
Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness
Polynomial approximations to boolean functions have led to many positive results in computer science. In particular, polynomial approximations to the sign function underlie algorithms for agnostically learning halfspaces, as well as pseudorandom generators for halfspaces. In this work, we investigate the limits of these techniques by proving inapproximability results for the sign function.
Firstly, the polynomial regression algorithm of Kalai et al. (SIAM J. Comput. 2008) shows that halfspaces can be learned with respect to log-concave distributions on R^n in the challenging agnostic learning model. The power of this algorithm relies on the fact that, under log-concave distributions, halfspaces can be approximated arbitrarily well by low-degree polynomials. We ask whether this technique can be extended beyond log-concave distributions, and establish a negative result. We show that polynomials of any degree cannot approximate the sign function to within arbitrarily low error for a large class of non-log-concave distributions on the real line, including those with densities proportional to exp(-|x|^0.99).
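To make the regression technique concrete, the following minimal sketch (an illustration under assumed parameters, not the authors' code) runs degree-d polynomial regression for agnostically learning a one-dimensional halfspace: fit a low-degree polynomial to the ±1 labels and predict with its sign. Kalai et al. analyze L1 regression; ordinary least squares is used here only to keep the sketch short.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic agnostic-learning task (invented for illustration): the target
# halfspace is sign(x - 0.3) on the real line, with 10% adversarial label noise.
n, degree = 5000, 8
x = rng.standard_normal(n)          # Gaussian marginal: log-concave
y = np.sign(x - 0.3)
y[rng.random(n) < 0.10] *= -1       # corrupt 10% of the labels

# Polynomial regression: fit p(x) ~ y by least squares, predict sign(p(x)).
coeffs = np.polyfit(x, y, degree)
predict = lambda t: np.sign(np.polyval(coeffs, t))

x_test = rng.standard_normal(2000)
err = np.mean(predict(x_test) != np.sign(x_test - 0.3))
print(f"disagreement with the target halfspace: {err:.3f}")

Under a log-concave marginal such as the Gaussian above, increasing the degree drives the error toward the noise rate; the negative result in this abstract says that no fixed degree suffices under heavier-tailed densities such as exp(-|x|^0.99).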
Secondly, we investigate the derandomization of Chernoff-type concentration inequalities. Chernoff-type tail bounds on sums of independent random variables have pervasive applications in theoretical computer science. Schmidt et al. (SIAM J. Discrete Math. 1995) showed that these inequalities can be established for sums of random variables with only O(log(1/δ))-wise independence, for a tail probability of δ. We show that their results are tight up to constant factors.
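The standard route to such derandomized tail bounds, sketched generically here rather than as the specific argument of Schmidt et al., is the k-th central moment inequality, which consumes only k-wise independence:

\Pr\big[\,|S - \mu| \ge a\,\big] \;\le\; \frac{\mathbb{E}\big[(S - \mu)^k\big]}{a^k},
\qquad S = \sum_{i=1}^{n} X_i, \quad \mu = \mathbb{E}[S].

For even k, expanding (S - μ)^k produces only products of at most k of the X_i, so the k-th moment is unchanged if the variables are merely k-wise independent; choosing k = O(log(1/δ)) pushes the right-hand side below δ in the Chernoff regime. The tightness claim above says this trade-off between the amount of independence and the tail probability cannot be improved beyond constant factors.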
These results rely on techniques from weighted approximation theory, which studies how well functions on the real line can be approximated by polynomials under various distributions. We believe that these techniques will have further applications in other areas of computer science.
Comment: 22 pages
Free-libre open source software as a public policy choice
Free Libre Open Source Software (FLOSS) is characterised by a specific programming and development paradigm. The availability and freedom of use of source code are at the core of this paradigm, and are the prerequisites for FLOSS features. Unfortunately, the fundamental role of code is often ignored by those who decide on software purchases for Canadian public agencies. Source code availability and the connected freedoms are often seen as unrelated and accidental aspects, and the only advantage commonly acknowledged, the absence of royalty fees, becomes paramount. In this paper we discuss some relevant legal issues and explain why public administrations should choose FLOSS for their technological infrastructure. We also present the results of a survey regarding the penetration and awareness of FLOSS usage within the Government of Canada. The data demonstrate that the Government of Canada has no enforced policy regarding the implementation of a specific technological framework (which has legal, economic, business, and ethical repercussions) in its departments and agencies.
Properties of measures supported on fat Sierpinski carpets
In this paper we study certain conformal iterated function schemes in two dimensions that are natural generalizations of the Sierpinski carpet construction. In particular, we consider scaling factors for which the open set condition fails. For such "fat Sierpinski carpets" we study the range of parameters for which the dimension of the set is exactly known, or for which the set has positive measure.
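For orientation, one common parametrization of such a family (an assumed illustration; the paper's exact scheme may differ) keeps the eight similarity maps of the usual carpet on the unit square but lets the contraction ratio λ exceed 1/3:

f_{(i,j)}(x) = \lambda x + \frac{1-\lambda}{2}\,(i,j),
\qquad (i,j) \in \{0,1,2\}^2 \setminus \{(1,1)\}.

At λ = 1/3 this is the classical Sierpinski carpet of dimension log 8 / log 3; for λ > 1/3 adjacent images overlap, the open set condition fails, and the similarity dimension log 8 / log(1/λ) is only an upper bound, which is what makes the dimension and positive-measure questions above nontrivial.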
Deep pockets, packets, and harbours
Deep Packet Inspection (DPI) is a set of methodologies used for the analysis of data flow over the Internet. It is the intention of this paper to describe the technical details of this issue and to show that DPI technologies make it possible to understand the content of Transmission Control Protocol/Internet Protocol (TCP/IP) communications. These communications can carry publicly available content, private user information, and legitimate copyrighted works, as well as infringing copies of copyrighted works.
Legislation in many jurisdictions regarding Internet service providers' liability, or more generally the liability of communication intermediaries, usually contains "safe harbour" provisions. The World Intellectual Property Organization Copyright Treaty of 1996 has a short but significant provision excluding liability for suppliers of physical facilities. The provision is aimed at communication to the public and the facilitation of physical means. Its extensive interpretation to cases of contributory or vicarious liability, in the absence of specific national implementation, can prove problematic. Two of the most relevant legislative interventions in the field, the Digital Millennium Copyright Act and the European Directive on Electronic Commerce, regulate the field of intermediary liability extensively. This paper looks at the relationship between existing packet inspection technologies, especially the "deep" version, and the international and national legal and regulatory interventions connected with intellectual property protection and with the correlated liability exemptions. In analyzing these two main statutes, we take a comparative look at similar interventions in Australia and Canada that can offer some interesting elements of reflection.
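As a purely illustrative byte-level sketch of the distinction at issue (an invented example; not any vendor's engine nor this paper's methodology): shallow inspection reads only the IP/TCP headers, while deep inspection continues into the application payload, here to spot an HTTP request line.

import struct

def inspect(packet: bytes) -> dict:
    """Parse an IPv4/TCP packet: headers = shallow inspection, payload = deep."""
    # -- Shallow part: fixed-format protocol headers ----------------------
    ihl = (packet[0] & 0x0F) * 4                 # IPv4 header length in bytes
    proto = packet[9]                            # 6 = TCP
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    sport, dport = struct.unpack("!HH", packet[ihl:ihl + 4])
    data_off = (packet[ihl + 12] >> 4) * 4       # TCP header length in bytes
    # -- Deep part: look inside the application payload -------------------
    payload = packet[ihl + data_off:]
    looks_like_http = payload[:4] in (b"GET ", b"POST", b"HEAD")
    return {"src": src, "dst": dst, "sport": sport, "dport": dport,
            "proto": proto, "http_request": looks_like_http,
            "payload_preview": payload[:16]}

# Hand-built sample packet (hypothetical addresses) so the sketch runs
# without capturing live traffic.
ip_header = bytes([0x45, 0, 0, 0,  0, 0, 0, 0,  64, 6, 0, 0,
                   10, 0, 0, 1,   93, 184, 216, 34])
tcp_header = struct.pack("!HHIIBBHHH", 49152, 80, 0, 0, 5 << 4, 0x18, 1024, 0, 0)
payload = b"GET /index.html HTTP/1.1\r\n"
print(inspect(ip_header + tcp_header + payload))

A real DPI deployment would reassemble TCP streams and match far richer signatures, but the legal distinction drawn in the paper turns on exactly this step: reading past the headers into the content of the communication.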
- …