Inferring Types to Eliminate Ownership Checks in an Intentional JavaScript Compiler
Concurrent programs are notoriously difficult to develop due to the non-deterministic nature of thread scheduling, so a programming language that makes such development easier is desirable. Tscript is such a system: an extension of JavaScript that provides multithreading support along with intent specification. Intents allow a programmer to specify how parts of the program interact in a multithreaded context. Enforcing intents, however, requires run-time memory checks, which can be inefficient. This thesis implements an optimization in the Tscript compiler that reduces this overhead through static analysis. Our approach uses both type inference and dataflow analysis to eliminate unnecessary run-time checks.
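The flavor of this optimization can be sketched as a forward dataflow pass over a toy instruction list: a check is dropped when an earlier check on the same variable is still valid. The instruction names ("check", "write", "call") and the invalidation rule are invented for illustration; the abstract does not describe Tscript's actual intermediate representation.

```python
# Hypothetical sketch of dataflow-based check elimination: walk the
# program in order, remember which variables have a still-valid
# ownership check, and drop checks that are dominated by an earlier one.

def eliminate_checks(instrs):
    checked = set()          # variables whose ownership is known-checked here
    out = []
    for op, var in instrs:
        if op == "check":
            if var in checked:
                continue     # redundant: dominated by an earlier check
            checked.add(var)
        elif op == "call":
            checked.clear()  # a call may transfer ownership: discard facts
        out.append((op, var))
    return out

prog = [("check", "x"), ("write", "x"), ("check", "x"),
        ("call", "f"), ("check", "x")]
# the second check on x is dropped; the one after the call is kept
print(eliminate_checks(prog))
```

A real compiler would run this per basic block and merge facts at control-flow joins; the single-pass version above only shows the core idea.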
When Untapped Talent Meets Employer Need: The Boston Foundation's Allied Health Strategy
Offers a three-year evaluation of an educational pipeline created in partnership with three hospitals to help low-income workers enter urgently needed healthcare occupations, assessed in terms of workers assisted, workforce development capacity, and collaboration.
The "empty shell" approach: the setup process of international administrations in Timor-Leste and Kosovo, its consequences and lessons
State-building under the aegis of international administrations has faced various hurdles and obstacles in Kosovo and Timor-Leste, failures that came to full light in March 2004 in Kosovo and in May 2006 in Timor-Leste. However, the international conception buttressing the setup of international administrations, which I dub the "empty-shell" approach, is still present in certain policy circles. This article analyzes this conception by clarifying how the UN came to impose its authority over the two territories through a very similar process. While the literature on each state-building experiment is vast and compelling, few authors have attempted to contrast the two case studies, especially regarding the mental conception informing the governance of these territories since 1999. The article links the empty-shell approach with the delegitimization process experienced by the UN in both cases. It describes the international policies put in place by the UN to expand its control over the two territories: a mix of co-option of local elites and marginalization of the local population. Finally, the article suggests possible solutions for avoiding the more blatant difficulties of state-building conducted from the outside-in.
Exploring the Importance of Single Nucleotide Polymorphisms of HSPA9 in DNA of Sarcoma Patients
The aim of this project was to identify genetic variants that may influence the risk and progression of sarcoma through targeted genotyping of the HSPA9 gene. Examining genetic variants in DNA samples matters because, if a variant is found to occur more frequently in patients, screening for that variant can help identify a patient's risk of sarcoma. The study population comprised sarcoma patients from the International Sarcoma Kindred Study who had no mutations in p53 or MDM2. Genotyping data from the HapMap project (hapmap.org) for HSPA9 was used to identify the polymorphisms needed to tag the entire region. To genotype the DNA samples, KASP reagents (KBioSciences, UK) were used. KASP uses a two-step PCR process in which allele-specific primers preferentially amplify each allele of a given SNP. The specific genetic variations of HSPA9 in sarcoma patient DNA samples without p53 mutations or MDM2 amplification were not more or less likely to occur than in DNA samples with the mutation or amplification. If continued research can show that MDM2 is not amplified but is activated through other mechanisms, such as interactions among polymorphisms of mitochondrial genes, p53, or MDM2, anti-MDM2 therapies could be proposed for patients with these polymorphisms.
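The genotype call at the end of a KASP-style assay can be illustrated as a simple two-channel classification: one fluorescence signal per allele-specific primer (FAM and HEX dyes in real KASP chemistry). The threshold below is invented for illustration; real assays call genotypes by clustering many samples on a two-axis plot.

```python
# Illustrative genotype call from two allele-specific fluorescence
# signals. Signal names and the fixed threshold are assumptions made
# for this sketch, not parameters of the actual assay.

def call_genotype(fam, hex_, threshold=0.3):
    """Classify a sample as AA, BB, AB, or no-call from two signals."""
    a, b = fam > threshold, hex_ > threshold
    if a and b:
        return "AB"       # heterozygous: both allele primers amplified
    if a:
        return "AA"       # homozygous for allele A
    if b:
        return "BB"       # homozygous for allele B
    return "no-call"      # neither primer amplified

print(call_genotype(0.9, 0.1))   # homozygous for allele A
print(call_genotype(0.8, 0.7))   # heterozygous
```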
Cut, Paste and Learn: Surprisingly Easy Synthesis for Instance Detection
A major impediment in rapidly deploying object detection models for instance
detection is the lack of large annotated datasets. For example, finding a large
labeled dataset containing instances in a particular kitchen is unlikely. Each
new environment with new instances requires expensive data collection and
annotation. In this paper, we propose a simple approach to generate large
annotated instance datasets with minimal effort. Our key insight is that
ensuring only patch-level realism provides enough training signal for current
object detector models. We automatically `cut' object instances and `paste'
them on random backgrounds. A naive way to do this produces pixel artifacts
that lead to poor performance for trained models. We show how to make
detectors ignore these artifacts during training and generate data that gives
competitive performance on real data. Our method outperforms existing synthesis
approaches and when combined with real images improves relative performance by
more than 21% on benchmark datasets. In a cross-domain setting, our synthetic
data combined with just 10% real data outperforms models trained on all real
data.
Comment: To appear in ICCV 2017
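The cut-and-paste idea above can be sketched in a few lines: paste an object patch onto a background while feathering the boundary, so the pasted edge does not leave the sharp pixel artifacts the abstract warns about. Grayscale images are plain 2-D lists here, and the linear feathering is a stand-in; the paper itself works on real images with richer blending modes.

```python
# Toy cut-and-paste synthesis: blend a patch into a background with an
# alpha mask that shrinks near the patch border (invented feathering
# scheme for illustration only).
import random

def paste(background, patch, top, left, feather=1):
    """Paste `patch` into `background` at (top, left), feathering the border."""
    out = [row[:] for row in background]
    ph, pw = len(patch), len(patch[0])
    for i in range(ph):
        for j in range(pw):
            # alpha is smaller near the patch border, so edges blend smoothly
            edge = min(i, j, ph - 1 - i, pw - 1 - j)
            alpha = min(1.0, (edge + 1) / (feather + 1))
            y, x = top + i, left + j
            out[y][x] = (1 - alpha) * out[y][x] + alpha * patch[i][j]
    return out

bg = [[random.randint(0, 50) for _ in range(8)] for _ in range(8)]
obj = [[255] * 4 for _ in range(4)]
scene = paste(bg, obj, 2, 2)
# interior pixels keep the object's value; border pixels are softened
print(scene[3][3], scene[2][2] < 255)
```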
Watch and Learn: Semi-Supervised Learning of Object Detectors from Videos
We present a semi-supervised approach that localizes multiple unknown object
instances in long videos. We start with a handful of labeled boxes and
iteratively learn and label hundreds of thousands of object instances. We
propose criteria for reliable object detection and tracking for constraining
the semi-supervised learning process and minimizing semantic drift. Our
approach does not assume exhaustive labeling of each object instance in any
single frame, or any explicit annotation of negative data. Working in such a
generic setting allows us to tackle multiple object instances in video, many of
which are static. In contrast, existing approaches either do not consider
multiple object instances per video, or rely heavily on the motion of the
objects present. The experiments demonstrate the effectiveness of our approach
by evaluating the automatically labeled data on a variety of metrics like
quality, coverage (recall), diversity, and relevance to training an object
detector.
Comment: To appear in CVPR 2015
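The iterative learn-and-label scheme described above is a form of self-training, which can be sketched as a loop that promotes confident predictions to labels and stops when nothing confident remains. The scoring function below is a stand-in, not the paper's tracker-constrained detection pipeline.

```python
# Minimal self-training loop: start from a few labeled examples and
# iteratively promote confident predictions to labels. The confidence
# rule and data are invented for illustration.

def self_train(labeled, unlabeled, score, confidence=0.9, rounds=3):
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        promoted = [x for x in pool if score(x, labeled) >= confidence]
        if not promoted:
            break            # drift guard: stop when nothing is confident
        labeled += promoted
        pool = [x for x in pool if x not in promoted]
    return labeled, pool

# stand-in score: an item is confident if it lies close to a labeled value
score = lambda x, lab: 1.0 if any(abs(x - y) <= 1 for y in lab) else 0.0
labels, rest = self_train([0], [1, 2, 5, 9], score)
print(labels, rest)   # nearby items get absorbed; distant ones stay unlabeled
```

The paper's contribution is precisely in making the `score` step reliable (via detection and tracking criteria) so this loop does not drift semantically.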
Time Series Data Mining: A Retail Application Using SAS Enterprise Miner
Modern technologies have allowed data to be amassed at a rate never encountered before, and organizations now routinely collect and process massive volumes of it. Much of this regularly collected information can be ordered by an appropriate time interval, turning the data into a time series. With such data, analytical techniques can extract information about historical trends and seasonality. Time series data mining methodology allows users to identify commonalities between sets of time-ordered data. The technique is supported by a variety of algorithms, notably dynamic time warping (DTW), a mathematical technique for identifying similarities between numerous time series. The following research provides a practical application of this methodology using SAS Enterprise Miner, an industry-leading software platform for business analytics. Because time series data is prevalent in retail settings, a realistic product sales transaction data set, provided by dunnhumbyUSA, was analyzed. Interpretations were drawn from output generated using "TS nodes" in SAS Enterprise Miner.
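Dynamic time warping, the algorithm the abstract highlights, can be shown with the standard textbook dynamic program: each cell holds the cheapest cost of aligning two prefixes, allowing one series to stretch or compress in time. This is a generic DTW sketch, not SAS Enterprise Miner's implementation.

```python
# Classic DTW distance between two numeric sequences via dynamic
# programming over an (n+1) x (m+1) cost table.

def dtw_distance(a, b):
    """Return the DTW alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch b
                                 cost[i][j - 1],      # stretch a
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

# two series with the same shape but shifted in time align at zero cost,
# which a rigid point-by-point (Euclidean) comparison would not achieve
print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1]))
```

This is why DTW suits retail sales curves: two products with the same seasonal shape but offset peaks still register as similar.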
