An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols
We describe an effort to annotate a corpus of natural language instructions
consisting of 622 wet lab protocols to facilitate automatic or semi-automatic
conversion of protocols into a machine-readable format and benefit biological
research. Experimental results demonstrate the utility of our corpus for
developing machine learning approaches to shallow semantic parsing of
instructional texts. We make our annotated Wet Lab Protocol Corpus available to
the research community.
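As an illustration of the shallow semantic parsing task such a corpus supports, the sketch below tags the tokens of a protocol instruction with coarse semantic labels. The label set (ACTION/AMOUNT/UNIT) and the dictionary lookup are invented for illustration; they are not the corpus's actual annotation scheme or the authors' model:

```python
# Toy shallow semantic tagger for a wet-lab instruction.
# Word lists and labels are illustrative assumptions only.

ACTION_WORDS = {"add", "mix", "incubate", "centrifuge"}
UNIT_WORDS = {"ml", "ul", "min"}

def tag_instruction(tokens):
    """Assign a coarse semantic label to each token of an instruction."""
    tags = []
    for tok in tokens:
        low = tok.lower().rstrip(".")
        if low in ACTION_WORDS:
            tags.append("ACTION")
        elif low.replace(".", "", 1).isdigit():
            tags.append("AMOUNT")
        elif low in UNIT_WORDS:
            tags.append("UNIT")
        else:
            tags.append("O")
    return tags

print(tag_instruction("Add 50 ml buffer".split()))
# ['ACTION', 'AMOUNT', 'UNIT', 'O']
```

A learned sequence model trained on the annotated corpus would replace the dictionary lookup, but the input/output shape of the task is the same.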
Immunoglobulin G4-related disease - diagnostic dilemma and importance of clinical judgement: a case report
Immunoglobulin G4 (IgG4)-related disease is a multi-organ, immune-mediated, fibro-inflammatory disorder characterized by tumefactive masses in the affected organs. The incidence and prevalence of IgG4-related disease (IgG4-RD) are not clearly known, and the disease shows a slight male preponderance. It often involves multiple organs at presentation or over the course of disease, mimicking malignancy, Sjogren's syndrome, antineutrophil cytoplasmic antibody-associated vasculitis, and infections; a thorough workup is needed to rule out these mimickers. A 33-year-old man presented to us with a history of progressive swelling in the right peri-orbital region for four years. On evaluation, abdominal imaging was notable for a sausage-shaped pancreas and hypoenhancing nodules in both kidneys. Histological examination of the right lacrimal gland revealed lymphoplasmacytic infiltrate and storiform fibrosis. Serum IgG4 levels were normal, and immunostaining was negative. A diagnosis of IgG4-RD was suggested because of the multi-organ involvement and the classical radiological and histopathological features. Awareness of IgG4-RD, an under-recognized entity, is essential: it is treatable, and early recognition may lead to a favourable outcome. Appropriate use of clinicopathological, serological and imaging features in the right clinical context may help in accurate diagnosis; elevated serum IgG4 levels and biopsy are not mandatory for the diagnosis.
A Little Fog for a Large Turn
Small, carefully crafted perturbations called adversarial perturbations can
easily fool neural networks. However, these perturbations are largely additive
and not naturally found. We turn our attention to the field of autonomous
navigation, wherein adverse weather conditions such as fog have a drastic effect
on the predictions of these systems. These weather conditions are capable of
acting like natural adversaries that can help in testing models. To this end,
we introduce a general notion of adversarial perturbations, which can be
created using generative models and provide a methodology inspired by
Cycle-Consistent Generative Adversarial Networks to generate adversarial
weather conditions for a given image. Our formulation and results show that
these images provide a suitable testbed for steering models used in autonomous
navigation. Our work also presents a more natural and general definition of
adversarial perturbations based on perceptual similarity.
Comment: Accepted to WACV 202
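The cycle-consistency idea underlying such CycleGAN-inspired approaches can be sketched numerically: mapping a clear image to a foggy one and back should reconstruct the input. The toy "generators" below are simple blending functions of our own invention, not the paper's networks:

```python
import numpy as np

# Sketch of cycle consistency: clear -> foggy -> clear should return
# the input. These closed-form maps stand in for learned generators.

def G_fog(img):
    """Clear -> foggy: blend the image toward a uniform white fog layer."""
    return 0.7 * img + 0.3 * 1.0

def F_clear(img):
    """Foggy -> clear: exact inverse of the blend above."""
    return (img - 0.3) / 0.7

def cycle_loss(img):
    """L1 reconstruction error of the clear -> fog -> clear cycle."""
    return float(np.mean(np.abs(F_clear(G_fog(img)) - img)))

img = np.random.rand(4, 4)
print(cycle_loss(img))  # ~0, since F_clear exactly inverts G_fog
```

In the learned setting the two generators are trained networks and this reconstruction error becomes one term of the training objective, alongside the adversarial losses.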
iGPSe: A Visual Analytic System for Integrative Genomic Based Cancer Patient Stratification
Background: Cancers are highly heterogeneous with different subtypes. These
subtypes often possess different genetic variants, present different
pathological phenotypes, and most importantly, show various clinical outcomes
such as varied prognosis and response to treatment and likelihood for
recurrence and metastasis. Recently, integrative genomics (or panomics)
approaches are often adopted with the goal of combining multiple types of omics
data to identify integrative biomarkers for stratification of patients into
groups with different clinical outcomes. Results: In this paper we present a
visual analytic system called Interactive Genomics Patient Stratification
explorer (iGPSe) which significantly reduces the computing burden for
biomedical researchers in the process of exploring complicated integrative
genomics data. Our system integrates unsupervised clustering with graph and
parallel sets visualization and allows direct comparison of clinical outcomes
via survival analysis. Using a breast cancer dataset obtained from The Cancer
Genome Atlas (TCGA) project, we are able to quickly explore different
combinations of gene expression (mRNA) and microRNA features and identify
potential combined markers for survival prediction. Conclusions: Visualization
plays an important role in the process of stratifying a given patient
population. Visual tools allowed for the selection of possible features across
various datasets for the given patient population. We essentially made a case
for visualization for a very important problem in translational informatics.
Comment: BioVis 2014 conference
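The survival comparison that such a stratification tool exposes typically rests on Kaplan-Meier curves. A minimal estimator, run on invented (time, event) pairs with no tied event times (event = 1 for death, 0 for censoring), might look like:

```python
# Minimal Kaplan-Meier survival curve for one patient group.
# The (time, event) data below are invented for illustration.

def kaplan_meier(samples):
    """Return [(time, survival_probability)] at each observed event time.

    Assumes no tied event times; censored samples simply leave the
    risk set without changing the survival estimate.
    """
    samples = sorted(samples)
    at_risk = len(samples)
    surv = 1.0
    curve = []
    for t, event in samples:
        if event:  # death observed at time t
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1  # dead or censored: leaves the risk set either way
    return curve

group_a = [(2, 1), (4, 0), (5, 1), (7, 1)]
print(kaplan_meier(group_a))  # [(2, 0.75), (5, 0.375), (7, 0.0)]
```

Comparing such curves between the clusters produced by the stratification (e.g. with a log-rank test) is what turns a clustering into a clinically meaningful grouping.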
A Bilinear Illumination Model for Robust Face Recognition
We present a technique to generate an illumination subspace for arbitrary 3D faces based on the statistics of measured illuminations under variable lighting conditions from many subjects. A bilinear model based on the higher-order singular value decomposition is used to create a compact illumination subspace given arbitrary shape parameters from a parametric 3D face model. Using a fitting procedure based on minimizing the distance of the input image to the dynamically changing illumination subspace, we reconstruct a shape-specific illumination subspace from a single photograph. We use the reconstructed illumination subspace in various face recognition experiments with variable lighting conditions and obtain accuracies which are very competitive with previous methods that require specific training sessions or multiple images of the subject.
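A rough flavor of extracting a compact illumination basis from a data tensor, via an SVD of one mode unfolding (a building block of the higher-order SVD), can be sketched as follows. The tensor dimensions and the random data are synthetic assumptions, not the paper's measurements:

```python
import numpy as np

# Sketch: compact illumination subspace from a synthetic data tensor
# of shape (subjects, lighting_conditions, pixels).

rng = np.random.default_rng(0)
data = rng.standard_normal((5, 8, 16))  # subjects x lightings x pixels

# Unfold along the lighting mode and take a truncated SVD to obtain
# an illumination basis shared across subjects.
unfolded = data.transpose(1, 0, 2).reshape(8, -1)  # lightings x (subjects*pixels)
U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
k = 3
illum_basis = U[:, :k]  # top-k orthonormal illumination modes

print(illum_basis.shape)  # (8, 3)
```

A full HOSVD would repeat this unfolding-and-SVD step for each mode and combine the resulting factor matrices with a core tensor; the single-mode version above only conveys where the compact subspace comes from.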
A comment on Guo et al. [arXiv:2206.11228]
In a recent article, Guo et al. [arXiv:2206.11228] report that adversarially
trained neural representations in deep networks may already be as robust as
corresponding primate IT neural representations. While we find the paper's
primary experiment illuminating, we have doubts about the interpretation and
phrasing of the results presented in the paper.
Estimation of 3D Faces and Illumination from Single Photographs Using a Bilinear Illumination Model
3D face modeling is still one of the biggest challenges in computer graphics. In this paper we present a novel framework that acquires the 3D shape, texture, pose and illumination of a face from a single photograph. Additionally, we show how we can recreate a face under varying illumination conditions, or, essentially, relight it. Using a custom-built face scanning system, we have collected 3D face scans and light reflection images of a large and diverse group of human subjects. We derive a morphable face model for 3D face shapes and accompanying textures by transforming the data into a linear vector sub-space. The acquired images of faces under variable illumination are then used to derive a bilinear illumination model that spans 3D face shape and illumination variations. Using both models, we in turn propose a novel fitting framework that estimates the parameters of the morphable model given a single photograph. Our framework can deal with complex face reflectance and lighting environments in an efficient and robust manner. In the results section of our paper, we compare our method to existing ones and demonstrate its efficacy in reconstructing 3D face models from a single photograph. We also provide several examples of facial relighting (on 2D images) by performing adequate decomposition of the estimated illumination using our framework.
Finding Optimal Views for 3D Face Shape Modeling
A fundamental problem in multi-view 3D face modeling is the determination of the set of optimal views (poses) required for accurate 3D shape estimation of a generic face. There is no analytical solution to this problem; instead, (partial) solutions require (near) exhaustive combinatorial search, hence the inherent computational difficulty of this task. We build on our previous modeling framework [Silhouette-based 3D face shape recovery, Model-based 3D face capture using shape-from-silhouettes], which uses an efficient contour-based silhouette method, and extend it by aggressive pruning of the view-sphere with view clustering and various imaging constraints. A multi-view optimization search is performed using both model-based (eigenheads) and data-driven (visual hull) methods, yielding comparable best views. These constitute the first reported set of optimal views for 3D face shape capture and provide useful empirical guidelines for the design of 3D face recognition systems.
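The combinatorial nature of the view-selection problem can be illustrated with a toy exhaustive search over a small, already-pruned view set. The per-view "coverage" sets below are invented stand-ins for a real reconstruction-quality criterion:

```python
import itertools

# Toy exhaustive view selection: pick the 2-view subset that jointly
# covers the most face features. Views and coverage sets are invented.

view_scores = {
    "frontal": {1, 2, 3},
    "left_profile": {3, 4, 5},
    "right_profile": {4, 5, 6},
    "top": {2, 6},
}

def best_pair(views):
    """Exhaustively pick the 2-view subset covering the most features."""
    return max(itertools.combinations(views, 2),
               key=lambda pair: len(views[pair[0]] | views[pair[1]]))

print(best_pair(view_scores))  # ('frontal', 'right_profile')
```

Even this tiny example enumerates every pair; with a dense view-sphere and larger subsets the search space explodes combinatorially, which is why the pruning described above matters.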