Empirical Health Law Scholarship: The State of the Field
The last three decades have seen the blossoming of the fields of health law and empirical legal studies and their intersection--empirical scholarship in health law and policy. Researchers in legal academia and other settings have conducted hundreds of studies using data to estimate the effects of health law on accident rates, health outcomes, health care utilization, and costs, as well as other outcome variables. Yet the emerging field of empirical health law faces significant challenges--practical, methodological, and political.
The purpose of this Article is to survey the current state of the field by describing commonly used methods, analyzing enabling and inhibiting factors in the production and uptake of this type of research by policymakers, and suggesting ways to increase the production and impact of empirical health law studies. In some areas of inquiry, high-quality research has been conducted, and the findings have been successfully imported into policy debates and used to inform evidence-based lawmaking. In other areas, the level of rigor has been uneven, and the best evidence has not translated effectively into sound policy. Despite challenges and historical shortcomings, empirical health law studies can and should have a substantial impact on regulations designed to improve public safety, increase both access to and quality of health care, and foster technological innovation.
Single muscle fiber proteomics reveals unexpected mitochondrial specialization
Mammalian skeletal muscles are composed of multinucleated cells termed slow or fast fibers according to their contractile and metabolic properties. Here, we developed a high-sensitivity workflow to characterize the proteome of single fibers. Analysis of segments of the same fiber by traditional and unbiased proteomics methods yielded the same subtype assignment. We discovered novel subtype-specific features, most prominently mitochondrial specialization of fiber types in substrate utilization. The fiber type-resolved proteomes can be applied to a variety of physiological and pathological conditions and illustrate the utility of single cell type analysis for dissecting proteomic heterogeneity.
CMS-RCNN: Contextual Multi-Scale Region-based CNN for Unconstrained Face Detection
Robust face detection in the wild is one of the key components supporting various face-related problems, e.g. unconstrained face recognition, facial periocular recognition, facial landmarking and pose estimation, facial expression recognition, 3D facial model construction, etc. Although the face detection problem has been intensely studied for decades and has produced various commercial applications, it still encounters problems in some real-world scenarios due to numerous challenges, e.g. heavy facial occlusions, extremely low resolutions, strong illumination, exceptional pose variations, image or video compression artifacts, etc. In this paper, we present a face detection approach named Contextual Multi-Scale Region-based Convolutional Neural Network (CMS-RCNN) to robustly solve the problems mentioned above. Like other region-based CNNs, our proposed network consists of a region proposal component and a region-of-interest (RoI) detection component. Unlike those networks, however, our proposed network makes two main contributions that play a significant role in achieving state-of-the-art performance in face detection. First, multi-scale information is grouped in both the region proposal and the RoI detection stages to deal with tiny face regions. Second, our proposed network allows explicit body contextual reasoning, inspired by the intuition of the human vision system. The proposed approach is benchmarked on two recent challenging face detection databases, i.e. the WIDER FACE dataset, which contains a high degree of variability, and the Face Detection Data Set and Benchmark (FDDB). The experimental results show that our approach, trained on the WIDER FACE dataset, outperforms strong baselines on WIDER FACE by a large margin and consistently achieves competitive results on FDDB against recent state-of-the-art face detection methods.
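The multi-scale grouping idea summarized above can be illustrated with a small sketch. This is not the authors' implementation; the layer strides, channel counts, and the helper function name are assumptions made for the example, which simply RoI-aligns features from several convolutional stages, L2-normalizes them, and concatenates them so that small face regions retain high-resolution detail.

```python
# Hypothetical sketch of multi-scale RoI feature grouping (illustrative only).
import torch
from torchvision.ops import roi_align

def multiscale_roi_features(feature_maps, scales, rois, out_size=7):
    """feature_maps: list of (N, C_i, H_i, W_i) tensors from different stages.
    scales: feature-map resolution relative to the input image (e.g. 1/8, 1/16).
    rois: (K, 5) float tensor of [batch_idx, x1, y1, x2, y2] in image coordinates."""
    pooled = []
    for fmap, scale in zip(feature_maps, scales):
        feats = roi_align(fmap, rois, output_size=out_size, spatial_scale=scale)
        # L2-normalize each pooled feature so no single scale dominates the concat
        norm = feats.flatten(1).norm(dim=1).clamp(min=1e-6).view(-1, 1, 1, 1)
        pooled.append(feats / norm)
    return torch.cat(pooled, dim=1)  # (K, sum_i C_i, out_size, out_size)

# Toy usage: two feature maps at strides 8 and 16 for a single image.
f8, f16 = torch.randn(1, 256, 64, 64), torch.randn(1, 512, 32, 32)
rois = torch.tensor([[0.0, 10.0, 10.0, 60.0, 60.0]])
feats = multiscale_roi_features([f8, f16], [1 / 8, 1 / 16], rois)
```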
Role of dynamic Jahn-Teller distortions in Na2C60 and Na2CsC60 studied by NMR
Through 13C NMR spin lattice relaxation (T1) measurements in cubic Na2C60, we
detect a gap in its electronic excitations, similar to that observed in
tetragonal A4C60. This establishes that Jahn-Teller distortions (JTD) and
strong electronic correlations must be considered to understand the behaviour
of even electron systems, regardless of the structure. Furthermore, in metallic
Na2CsC60, a similar contribution to T1 is also detected for 13C and 133Cs NMR,
implying the occurrence of excitations typical of JT-distorted C60^{2-} (or equivalently C60^{4-}). This supports the idea that dynamic JTD can induce attractive electronic interactions in odd electron systems.
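For readers unfamiliar with how such a gap is inferred from T1, the standard activated form of the relaxation rate for a gapped excitation spectrum is sketched below; this is a textbook relation, not a formula quoted from the paper.

```latex
% Activated temperature dependence of the 13C relaxation rate for a gap \Delta
\frac{1}{T_{1}T} \propto \exp\!\left(-\frac{\Delta}{k_{B}T}\right)
```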
Similarity Learning for Authorship Verification in Social Media
Authorship verification tries to answer the question of whether two documents with unknown authors were written by the same author. A range of successful
technical approaches has been proposed for this task, many of which are based
on traditional linguistic features such as n-grams. These algorithms achieve
good results for certain types of written documents like books and novels.
Forensic authorship verification for social media, however, is a much more
challenging task since messages tend to be relatively short, with a large
variety of different genres and topics. At this point, traditional methods
based on features like n-grams have had limited success. In this work, we
propose a new neural network topology for similarity learning that
significantly improves the performance on the author verification task with
such challenging data sets.
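As a rough illustration of the similarity-learning setup described above (the paper's actual topology, features, and layer sizes are not reproduced here; everything in this sketch is an assumption), a minimal Siamese-style verifier encodes both documents with a shared network and classifies the element-wise difference of the embeddings:

```python
# Minimal Siamese-style similarity learner for authorship verification (sketch).
import torch
import torch.nn as nn

class SiameseVerifier(nn.Module):
    def __init__(self, in_dim=1024, hidden=256):
        super().__init__()
        # Shared encoder applied to both documents' feature vectors
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Classifier on the element-wise difference of the two embeddings
        self.head = nn.Linear(hidden, 1)

    def forward(self, x1, x2):
        e1, e2 = self.encoder(x1), self.encoder(x2)
        return torch.sigmoid(self.head(torch.abs(e1 - e2)))  # P(same author)

# Toy usage: random feature vectors standing in for two batches of messages.
model = SiameseVerifier()
p_same = model(torch.randn(4, 1024), torch.randn(4, 1024))
```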
Analysis of the complex rheological properties of highly concentrated proteins with a closed cavity rheometer
Highly concentrated biopolymers are used in food extrusion processing. It is well known that the rheological properties of biopolymers considerably influence both process conditions and product properties. Therefore, characterization of rheological properties under extrusion-relevant conditions is crucial to process and product design. Since conventional rheological methods are still lacking for this purpose, a novel approach is presented. A closed cavity rheometer, known from the rubber industry, was used to systematically characterize a highly concentrated soy protein, a protein highly relevant to extruded meat analogues. Rheological properties were first determined and discussed in the linear viscoelastic range (SAOS). Rheological analysis was then carried out in the non-linear viscoelastic range (LAOS), as the high deformations in extrusion demand measurements at process-relevant high strains. The protein showed gel behavior in the linear range, while liquid behavior was observed in the non-linear range. The expected increase in elasticity upon addition of methylcellulose was detected. The measurements in the non-linear range reveal significant changes in material behavior with increasing strain. As another tool for rheological characterization, a stress relaxation test was carried out, which confirmed the increase in elastic behavior after methylcellulose addition.
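For context on the SAOS/LAOS terminology used above, the standard small-amplitude oscillatory shear relations are sketched below; these are textbook definitions rather than results from this study.

```latex
% Imposed strain and measured stress in SAOS
\gamma(t) = \gamma_0 \sin(\omega t), \qquad
\sigma(t) = \gamma_0\,[\,G'(\omega)\sin(\omega t) + G''(\omega)\cos(\omega t)\,]
% Gel-like behavior in the linear range corresponds to G' > G''  (\tan\delta = G''/G' < 1)
```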
High Moisture Extrusion of Soy Protein: Investigations on the Formation of Anisotropic Product Structure
The high moisture extrusion of plant proteins is well suited for the production of protein-rich products that imitate meat in their structure and texture. The desired anisotropic product structure of these meat analogues is achieved by extrusion at high moisture content (>40%) and elevated temperatures (>100 °C); a cooling die prevents expansion of the matrix and facilitates the formation of the anisotropic structure. Although there are many studies focusing on this process, the mechanisms behind the structure formation still remain largely unknown. Ongoing discussions are based on two very different hypotheses: structure formation due to alignment and stabilization of proteins at the molecular level vs. structure formation due to morphology development in multiphase systems. The aim of this paper is, therefore, to investigate the mechanism responsible for the formation of anisotropic structures during the high moisture extrusion of plant proteins. A model protein, soy protein isolate, is extruded at high moisture content and the changes in protein–protein interactions and microstructure are investigated. Anisotropic structures are achieved under the given conditions and are influenced by the material temperature (between 124 and 135 °C). Extrusion processing has a negligible effect on protein–protein interactions, suggesting that an alignment of protein molecules is not required for the structure formation. Instead, the extrudates show a distinct multiphase system. This system consists of a water-rich, dispersed phase surrounded by a water-poor, i.e., protein-rich, continuous phase. These findings could be helpful in the future process and product design of novel plant-based meat analogues.
ICP versus Laser Doppler Cerebrovascular Reactivity Indices to Assess Brain Autoregulatory Capacity
Objective: To explore the relationships between various autoregulatory indices in order to determine which most accurately approximates small-vessel/microvascular autoregulatory capacity.
Methods: Utilizing a retrospective cohort of traumatic brain injury (TBI) patients (N=41) with transcranial Doppler (TCD), intracranial pressure (ICP), and cortical laser Doppler flowmetry (LDF) monitoring, we calculated various continuous indices of autoregulation and cerebrovascular responsiveness: A. ICP derived (pressure reactivity index (PRx) – correlation between ICP and mean arterial pressure (MAP); PAx – correlation between pulse amplitude of ICP (AMP) and MAP; RAC – correlation between AMP and cerebral perfusion pressure (CPP)); B. TCD derived (Mx – correlation between mean flow velocity (FVm) and CPP; Mx_a – correlation between FVm and MAP; Sx – correlation between systolic flow velocity (FVs) and CPP; Sx_a – correlation between FVs and MAP; Dx – correlation between diastolic flow velocity (FVd) and CPP; Dx_a – correlation between FVd and MAP); and C. LDF derived (Lx – correlation between LDF cerebral blood flow (CBF) and CPP; Lx_a – correlation between LDF-CBF and MAP). We assessed the relationships between these indices via Pearson correlation, the Friedman test, principal component analysis (PCA), agglomerative hierarchical clustering (AHC), and k-means cluster analysis (KMCA).
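To make the correlation-based indices above concrete, here is a minimal sketch of how a continuous index such as PRx is commonly computed: a moving Pearson correlation between roughly 10-second averages of two signals. The window lengths, sampling rate, and function name are assumptions for illustration, not the authors' exact processing pipeline.

```python
# Illustrative moving-correlation reactivity index (PRx-like), not the study's code.
import numpy as np

def reactivity_index(icp, map_, fs=100, avg_sec=10, n_windows=30):
    """Moving Pearson correlation between ICP and MAP; both signals are averaged
    over ~10 s blocks, then correlated over a sliding window of 30 blocks."""
    step = int(fs * avg_sec)
    n = min(len(icp), len(map_)) // step
    icp_avg = np.array([icp[i * step:(i + 1) * step].mean() for i in range(n)])
    map_avg = np.array([map_[i * step:(i + 1) * step].mean() for i in range(n)])
    index = []
    for i in range(n_windows, n + 1):
        x, y = icp_avg[i - n_windows:i], map_avg[i - n_windows:i]
        index.append(np.corrcoef(x, y)[0, 1])
    return np.array(index)

# Toy usage: one hour of synthetic ICP/MAP sampled at 100 Hz.
t = np.arange(0, 3600, 1 / 100)
icp = 12 + np.sin(t / 300) + 0.5 * np.random.randn(t.size)
map_ = 85 + 2 * np.sin(t / 300) + np.random.randn(t.size)
prx = reactivity_index(icp, map_)
```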
Results: The LDF-based autoregulatory index (Lx) was most closely associated with the TCD-based Mx/Mx_a and Dx/Dx_a across Pearson correlation, PCA, AHC, and KMCA. Lx was only weakly associated with the ICP-based indices (PRx, PAx, RAC). The TCD-based Sx/Sx_a were more closely associated with the ICP-derived PRx, PAx, and RAC.
This indicates that the vascular-derived indices of autoregulatory capacity (i.e. TCD and LDF based) co-vary, with Sx/Sx_a being the exception, whereas indices of cerebrovascular reactivity derived from pulsatile CBV (i.e. the ICP-based indices) appear not to be closely related to those of vascular origin.
Conclusions: Transcranial Doppler Mx is the most closely associated with the LDF-based Lx/Lx_a. Both Sx/Sx_a and the ICP-derived indices appear to be dissociated from LDF-based cerebrovascular reactivity, leaving Mx/Mx_a as the better surrogate for the assessment of cortical small-vessel/microvascular cerebrovascular reactivity. Sx/Sx_a co-cluster/co-vary with the ICP-derived indices, as seen in our previous work.
This work was made possible through salary support from the Cambridge Commonwealth Trust Scholarship, the Royal College of Surgeons of Canada – Harry S. Morton Travelling Fellowship in Surgery, the University of Manitoba Clinician Investigator Program, the R. Samuel McLaughlin Research and Education Award, the Manitoba Medical Service Foundation, and the University of Manitoba Faculty of Medicine Dean's Fellowship Fund.
These studies were supported by the National Institute for Health Research (NIHR, UK) through the Acute Brain Injury and Repair theme of the Cambridge NIHR Biomedical Research Centre and an NIHR Senior Investigator Award to DKM. Authors were also supported by a European Union Framework Programme 7 grant (CENTER-TBI; Grant Agreement No. 602150).
MC is supported by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HI17C1790).
JD is supported by a Woolf Fisher Scholarship (NZ).
'Part'ly first among equals: Semantic part-based benchmarking for state-of-the-art object recognition systems
An examination of object recognition challenge leaderboards (ILSVRC,
PASCAL-VOC) reveals that the top-performing classifiers typically exhibit small
differences amongst themselves in terms of error rate/mAP. To better
differentiate the top performers, additional criteria are required. Moreover,
the (test) images, on which the performance scores are based, predominantly
contain fully visible objects. Therefore, 'harder' test images, mimicking the
challenging conditions (e.g. occlusion) in which humans routinely recognize
objects, need to be utilized for benchmarking. To address the concerns
mentioned above, we make two contributions. First, we systematically vary the
level of local object-part content, global detail and spatial context in images
from PASCAL VOC 2010 to create a new benchmarking dataset dubbed PPSS-12.
Second, we propose an object-part based benchmarking procedure which quantifies
classifiers' robustness to a range of visibility and contextual settings. The
benchmarking procedure relies on a semantic similarity measure that naturally
addresses potential semantic granularity differences between the category
labels in training and test datasets, thus eliminating manual mapping. We use
our procedure on the PPSS-12 dataset to benchmark top-performing classifiers
trained on the ILSVRC-2012 dataset. Our results show that the proposed
benchmarking procedure enables additional differentiation among
state-of-the-art object classifiers in terms of their ability to handle missing
content and insufficient object detail. Given this capability for additional
differentiation, our approach can potentially supplement existing benchmarking
procedures used in object recognition challenge leaderboards.
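As a hedged illustration of the label-level semantic similarity idea mentioned above (the paper's actual measure is not specified here, so WordNet path similarity is used purely as a stand-in), a fine-grained predicted label can be scored against a coarser ground-truth label without any manual mapping table:

```python
# Stand-in semantic similarity between category labels via WordNet (illustrative).
# Requires the NLTK WordNet corpus: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def label_similarity(pred_label, true_label):
    """Best path similarity over all synset pairs for the two labels (0..1)."""
    syns_pred = wn.synsets(pred_label)
    syns_true = wn.synsets(true_label)
    if not syns_pred or not syns_true:
        return 0.0
    return max((a.path_similarity(b) or 0.0)
               for a in syns_pred for b in syns_true)

# Toy usage: a fine-grained prediction judged against a coarser label.
print(label_similarity("sports_car", "car"))  # relatively high (short hypernym chain)
print(label_similarity("sports_car", "dog"))  # low
```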