American Pipe Tolling, Statutes of Repose, and Protective Filings: An Empirical Study
This paper offers a conceptual and empirical analysis of a key issue that overhangs CalPERS v. ANZ Securities, soon to be decided by the Supreme Court. In particular, the paper provides an empirical estimate of the plausible quantity of wasteful protective filings that putative class members might make if the Court were to hold that American Pipe tolling does not apply to statutes of repose in the federal securities laws.
Manipulation of cell cycle progression can counteract the apparent loss of correction frequency following oligonucleotide-directed gene repair
BACKGROUND: Single-stranded oligonucleotides (ssODN) are used routinely to direct specific base alterations within mammalian genomes that result in the restoration of a functional gene. Despite success with the technique, recent studies have revealed that correction frequencies decrease as a function of time following repair events, possibly due to a sustained activation of damage response signals in corrected cells that leads to selective stalling. In this study, we use thymidine to slow the replication rate, enhancing repair frequency and maintaining substantial levels of correction over time. RESULTS: First, we utilized thymidine to arrest cells in G1 and released the cells into S phase, the point at which specific ssODNs direct the highest level of correction. Next, we devised a protocol in which cells are maintained in thymidine following the repair reaction, so that replication is slowed in both corrected and non-corrected cells and the initial correction frequency is retained. We also present evidence that cells enter a senescent state upon prolonged treatment with thymidine, but this passage can be avoided by removing thymidine at 48 hours. CONCLUSION: Taken together, these results suggest that thymidine may be used in a therapeutic fashion to enable the maintenance of high levels of treated cells bearing repaired genes.
Secrecy by Stipulation
GM Ignition Switch. Dalkon Shield. Oxycontin. For decades, protective orders—court orders that require parties to maintain the confidentiality of information unearthed during discovery—have hidden deadly defects and pervasive abuse from the public, perpetuating unnecessary harm.
But how worrisome are these protective orders, really? Under Rule 26(c)’s plain language, protective orders are to be granted only upon a showing of “good cause.” Doesn’t that adequately cabin the orders’ entry? Prominent judges and scholars have long insisted it does, and that, under Rule 26(c), the day-to-day grant of protective orders is careful, not cavalier. Critics disagree. They charge that parties frequently agree to sidestep Rule 26(c)’s “good cause” requirement and that judges, although formally duty bound to protect the public interest, uncritically acquiesce to parties’ demands. Worried about judicial rubber-stamping, some, in fact, have spent decades pushing to tighten Rule 26(c)’s standards—while others have, just as vigorously, opposed these efforts, insisting that the status quo works well enough.
This debate has raged since the late 1980s. But until now, it has mostly run aground on the shoals of basic, but unanswered, factual questions: Are stipulated protective orders really de rigueur? Are they becoming more prevalent? And are joint motions for protective orders actually meticulously scrutinized?
Using state-of-the-art machine learning techniques, this Article analyzes an original dataset of over 2.2 million federal cases to answer these persistent and profoundly important questions. Along the way, we find that stipulated protective orders are surprisingly prevalent. Grant rates for stipulated protective orders are sky high. And even though many insist that judges are scrupulous in the entry of such orders, over our entire study period, a majority of federal judges never rejected a joint protective order request.
We offer the first comprehensive accounting of stipulated protective orders in federal litigation. In so doing, we aim not only to revitalize—and discipline—the perennial and consequential debate surrounding Rule 26(c), but also to offer a fortified empirical foundation on which to ground inquiry into broader questions, including the role of transparency and privacy in a system ostensibly committed to “open courts,” tort law’s vital information-forcing function, adversarialism as a procedural cornerstone of American litigation, and trial-court discretion and fidelity to higher law.
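The Article's classification step can be illustrated in miniature. Below is a hedged sketch, not the authors' actual pipeline, of how a baseline text classifier might flag docket entries that grant stipulated protective orders; the file labelled_docket_entries.csv and its schema are hypothetical placeholders.

```python
# Minimal sketch (not the Article's pipeline): flagging docket entries that
# grant stipulated protective orders with a simple text classifier.
# The CSV file and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical hand-labelled sample: docket-entry text plus a binary label
# (1 = grants a stipulated protective order, 0 = anything else).
df = pd.read_csv("labelled_docket_entries.csv")  # columns: text, label

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=0, stratify=df["label"]
)

# TF-IDF over word n-grams is a common baseline for docket-entry text.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=5)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```

In practice, a classifier like this would need validation against hand-coded dockets before any aggregate grant-rate estimate could be trusted.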
Variation in Snow Algae Blooms in the Coast Range of British Columbia
Snow algae blooms cover vast areas of summer snowfields worldwide, reducing albedo and increasing snow melt. Despite their global prevalence, little is known about the algae species that comprise these blooms. We used 18S and rbcL metabarcoding and light microscopy to characterize algae species composition in 31 snow algae blooms in the Coast Range of British Columbia, Canada. This study is the first to thoroughly document regional variation between blooms. We found all blooms were dominated by the genera Sanguina, Chloromonas, and Chlainomonas. There was considerable variation between blooms; most notably, species assemblages above treeline were distinct from those at forested sites. In contrast to previous studies, the snow algae genus Chlainomonas was abundant and widespread in snow algae blooms. We found few taxa using traditional 18S metabarcoding, but the high taxonomic resolution of rbcL revealed substantial diversity, including OTUs that likely represent unnamed species of snow algae. These three cross-referenced datasets (rbcL, 18S, and microscopy) reveal that alpine snow algae blooms are more diverse than previously thought, with different species of algae dominating at different elevations.
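As a rough illustration of how such compositional differences can be summarized (this is not the study's actual bioinformatics pipeline), the sketch below assumes a hypothetical OTU table, otu_table.csv, with per-sample read counts and a habitat label, and compares mean relative abundances of genera above and below treeline.

```python
# Hedged sketch: comparing snow algae community composition above vs. below
# treeline from a metabarcoding OTU table. The file and columns are
# hypothetical placeholders, not the study's data.
import pandas as pd

otu = pd.read_csv("otu_table.csv")  # columns: sample, genus, reads, habitat
# habitat is either "alpine" (above treeline) or "forested"

# Convert read counts to per-sample relative abundances.
otu["rel_abund"] = otu.groupby("sample")["reads"].transform(lambda r: r / r.sum())

# Mean relative abundance of each genus per habitat highlights the
# alpine/forested split reported in the abstract.
summary = (otu.groupby(["habitat", "genus"])["rel_abund"]
              .mean()
              .unstack(level="habitat")
              .fillna(0.0)
              .sort_values("alpine", ascending=False))
print(summary.head(10))
```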
Explainable Semantic Medical Image Segmentation with Style
Semantic medical image segmentation using deep learning has recently achieved high accuracy, making it appealing for clinical problems such as radiation therapy. However, the lack of high-quality semantically labelled data remains a challenge, leading to model brittleness under small shifts in input data. Most existing works require extra data for semi-supervised learning and lack interpretability of the boundaries of the training data distribution during training, which is essential for model deployment in clinical practice. We propose a fully supervised generative framework that can achieve generalisable segmentation with only limited labelled data by simultaneously constructing an explorable manifold during training. The proposed approach pairs medical image style generation with a segmentation-task-driven discriminator in end-to-end adversarial training. The discriminator is generalised to small domain shifts as far as the training data permits, and the generator automatically diversifies the training samples using a manifold of input features learnt during segmentation. All the while, the discriminator guides the manifold learning by supervising the semantic content and fine-grained features separately during image diversification. After training, visualisation of the manifold learnt by the generator is available to interpret the model's limits. Experiments on a fully semantic, publicly available pelvis dataset demonstrated that our method is more generalisable to shifts than other state-of-the-art methods while being more explainable via the explorable manifold.
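The adversarial setup described above can be sketched generically. The PyTorch skeleton below shows a generator producing both segmentation logits and a style-diversified image, with a discriminator scoring (image, mask) pairs; it is a minimal illustration of the technique, not the paper's architecture, and all module definitions, sizes, and loss weights are assumptions.

```python
# Generic sketch of adversarial training for segmentation with a
# style-diversifying generator; all modules here are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegGenerator(nn.Module):
    """Toy encoder with two heads: segmentation logits and a restyled image."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(ch, 2, 1)   # 2-class segmentation logits
        self.img_head = nn.Conv2d(ch, 1, 1)   # style-diversified image

    def forward(self, x):
        h = self.enc(x)
        return self.seg_head(h), self.img_head(h)

class Discriminator(nn.Module):
    """Scores an (image, mask) pair, tying realism to semantic content."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, img, mask):
        return self.net(torch.cat([img, mask], dim=1))

gen, disc = SegGenerator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

# One illustrative step on random stand-in data (1-channel 64x64 "images").
x = torch.randn(4, 1, 64, 64)
y = torch.randint(0, 2, (4, 64, 64))
y_onehot = F.one_hot(y, num_classes=2).permute(0, 3, 1, 2).float()

# Discriminator step: real (image, label) pairs vs. generated pairs.
seg_logits, fake_img = gen(x)
d_real = disc(x, y_onehot)
d_fake = disc(fake_img.detach(), seg_logits.detach().softmax(dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: supervised segmentation loss plus an adversarial term that
# pushes diversified images (with their predicted masks) toward the real pairs.
seg_logits, fake_img = gen(x)
d_fake = disc(fake_img, seg_logits.softmax(dim=1))
loss_g = ce(seg_logits, y) + 0.1 * bce(d_fake, torch.ones_like(d_fake))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```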
Additive nanomanufacturing: a review
Additive manufacturing has provided a pathway for inexpensive and flexible manufacturing of specialized components and one-off parts. At the nanoscale, such techniques are far less established. Manufacturing at the nanoscale is dominated by lithography tools that are too expensive for small- and medium-sized enterprises (SMEs) to invest in. Additive nanomanufacturing (ANM) empowers smaller facilities to design, create, and manufacture on their own, while providing a wider material selection and flexible design. This is especially important because nanomanufacturing has thus far been largely constrained to 2-dimensional patterning techniques, and the ability to manufacture in 3 dimensions could open up new concepts. In this review, we outline the state of the art in ANM technologies such as electrohydrodynamic jet printing, dip-pen lithography, direct laser writing, and several single-particle placement methods such as optical tweezers and electrokinetic nanomanipulation. The ANM technologies are compared in terms of deposition speed, resolution, and material selection, and finally the future prospects of ANM are discussed. This review covers the literature up to April 2014.
Spatial and Temporal Patterns of Mercury Accumulation in Lacustrine Sediments across the Laurentian Great Lakes Region
Data from 104 sediment cores from the Great Lakes and “inland lakes” in the region were compiled to assess historical and recent changes in mercury (Hg) deposition. The lower Great Lakes showed sharp increases in Hg loading c. 1850-1950 from point-source water dischargers, with marked decreases during the past half century associated with effluent controls and decreases in the industrial use of Hg. In contrast, Lake Superior and inland lakes exhibited a pattern of Hg loading consistent with an atmospheric source: gradual increases followed by recent (post-1980) decreases. Variation in sedimentary Hg flux among inland lakes was attributed primarily to the ratio of watershed area to lake area, and secondarily to a lake’s proximity to emission sources. A consistent region-wide decrease (~20%) in sediment Hg flux suggests that controls on local and regional atmospheric Hg emissions have been effective in decreasing the supply of Hg to Lake Superior and inland lakes.
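The attribution of flux variation to the watershed-to-lake area ratio is, at its core, a cross-lake regression. A minimal sketch follows, with entirely hypothetical numbers rather than the study's data.

```python
# Illustrative sketch only: regressing sediment Hg flux on the
# watershed-area-to-lake-area ratio, the primary driver reported above.
import numpy as np
from scipy import stats

# Hypothetical inland-lake data: area ratio and Hg flux (ug m^-2 yr^-1).
ratio = np.array([2.0, 4.5, 7.1, 10.3, 15.8, 22.4])
hg_flux = np.array([8.2, 11.5, 14.9, 19.3, 26.1, 33.0])

# Ordinary least-squares fit; r^2 indicates how much of the flux variation
# the ratio alone explains.
fit = stats.linregress(ratio, hg_flux)
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.2f}, r^2={fit.rvalue**2:.3f}")
```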
Comparison of Niskin vs. in situ approaches for analysis of gene expression in deep Mediterranean Sea water samples
Author Posting. © The Author(s), 2014. This is the author's version of the work. It is posted here by permission of Elsevier for personal use, not for redistribution. The definitive version was published in Deep Sea Research Part II: Topical Studies in Oceanography 129 (2016): 213-222, doi:10.1016/j.dsr2.2014.10.020.

Obtaining an accurate picture of microbial processes occurring in situ is essential for our understanding of marine biogeochemical cycles of global importance. Water samples are typically collected at depth and returned to the sea surface for processing and downstream experiments. Metatranscriptome analysis is a powerful approach for investigating the metabolic activities of microorganisms in their habitat and can be informative for determining responses of microbiota to disturbances such as the Deepwater Horizon oil spill. For studies of microbial processes occurring in the deep sea, however, sample handling, pressure, and other changes during sample recovery can subject microorganisms to physiological changes that alter the expression profile of labile messenger RNA. Here we report a comparison of gene expression profiles for whole microbial communities in a bathypelagic water column sample collected in the Eastern Mediterranean Sea using Niskin bottle sample collection and a new water column sampler for studies of marine microbial ecology, the Microbial Sampler – In Situ Incubation Device (MS-SID). For some taxa, gene expression profiles from samples collected and preserved in situ were significantly different from those obtained with potentially more stressful Niskin sampling and preservation on deck. Some categories of transcribed genes also appear to be affected by sample handling more than others. This suggests that for future studies of marine microbial ecology, particularly those targeting deep sea samples, an in situ sample collection and preservation approach should be considered.

This research was funded by NSF OCE-1061774 to VE and CT, NSF DBI-0424599 to CT, and NSF OCE-0849578 to VE and colleague J. Bernhard. Cruise participation was partially supported by Deutsche Forschungsgemeinschaft (DFG) grant STO414/10-1 to T. Stoeck.
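The Niskin-versus-MS-SID contrast amounts to asking whether a gene category's share of the transcript pool differs between the two sampling methods. A minimal sketch using a contingency-table test is shown below; the counts are hypothetical placeholders, not the study's data.

```python
# Hedged sketch (not the study's pipeline): testing whether one gene
# category's transcript counts differ between Niskin and in situ (MS-SID)
# samples via a simple 2x2 contingency-table test.
from scipy.stats import fisher_exact

# Hypothetical read counts for one gene category vs. all other transcripts.
#                 category  other
niskin_counts  = [1200,     98800]
ms_sid_counts  = [1550,     98450]

odds_ratio, p_value = fisher_exact([niskin_counts, ms_sid_counts])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```

A full analysis would repeat such a test per category with multiple-testing correction, or use a dedicated differential-expression framework.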
Fabric Image Representation Encoding Networks for Large-scale 3D Medical Image Analysis
Deep neural networks are parameterised by weights that encode feature representations, and their performance is dictated through generalisation by training on large-scale, feature-rich datasets. The lack of large-scale labelled 3D medical imaging datasets restricts the construction of such generalised networks. In this work, a novel 3D segmentation network, Fabric Image Representation Encoding Network (FIRENet), is proposed to extract and encode generalisable feature representations from multiple medical image datasets in a large-scale manner. FIRENet learns image-specific feature representations by way of a 3D fabric network architecture that contains an exponential number of sub-architectures to handle various protocols and coverage of anatomical regions and structures. The fabric network uses Atrous Spatial Pyramid Pooling (ASPP) extended to 3D to extract local and image-level features at a fine selection of scales. The fabric is constructed with weighted edges, allowing the learnt features to dynamically adapt to the training data at an architecture level. Conditional padding modules, which are integrated into the network to reinsert voxels discarded by feature pooling, allow the network to inherently process different-size images at their original resolutions. FIRENet was trained for feature learning via automated semantic segmentation of pelvic structures and obtained a state-of-the-art median DSC score of 0.867. FIRENet was also simultaneously trained on MR (Magnetic Resonance) images acquired from 3D examinations of musculoskeletal elements in the hip, knee, and shoulder joints and a public OAI knee dataset to perform automated segmentation of bone across anatomy. Transfer learning was used to show that the features learnt through the pelvic segmentation helped achieve improved mean DSC scores of 0.962, 0.963, 0.945, and 0.986 for automated segmentation of bone across datasets.

Comment: 12 pages, 10 figures