57 research outputs found
Paid Sick Days: Attitudes and Experiences
Analyzes survey findings on Americans' views of paid sick days as a basic workers' right and on support for legislation guaranteeing paid sick days, broken down by age, race/ethnicity, income, education, family structure, and political affiliation.
Effective Scheduling of Grid Resources Using Failure Prediction
In large-scale grid environments, accurate failure prediction is critical for achieving effective resource allocation while assuring specified QoS levels, such as reliability. Traditional methods, such as statistical estimation techniques, can be used to predict the reliability of resources. However, naive statistical methods often ignore critical characteristic behaviors of the resources. In particular, periodic behaviors of grid resources are not captured well by statistical methods. In this paper, we present an alternative mechanism for failure prediction. In our approach, the periodic patterns of resource failures are determined and actively exploited for resource allocation with better QoS guarantees. The proposed scheme is evaluated in a realistic simulation environment of computational grids. The availability of computing resources is simulated according to real traces collected from our large-scale monitoring experiment on campus computers. Our evaluation results show that the proposed approach achieves significantly higher resource scheduling effectiveness under a variety of workloads compared to baseline approaches.
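To illustrate the general idea of exploiting periodic failure behavior for scheduling (not the paper's actual algorithm), here is a minimal Python sketch; the hourly trace format, the per-hour independence assumption, and all function names are illustrative assumptions.

```python
import numpy as np

def hourly_failure_profile(trace, period=24):
    """Estimate a periodic (hour-of-day) failure probability from a binary
    availability trace sampled once per hour (1 = up, 0 = down)."""
    trace = np.asarray(trace, dtype=float)
    hours = np.arange(len(trace)) % period
    return np.array([1.0 - trace[hours == h].mean() for h in range(period)])

def predicted_survival(profile, start_hour, duration):
    """Probability the resource stays up for `duration` hours starting at
    `start_hour`, assuming independent hourly failure probabilities."""
    hours = (start_hour + np.arange(duration)) % len(profile)
    return float(np.prod(1.0 - profile[hours]))

def rank_resources(traces, start_hour, duration):
    """Rank candidate resources by predicted survival over the job window."""
    scores = {name: predicted_survival(hourly_failure_profile(t), start_hour, duration)
              for name, t in traces.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy traces for two campus machines with different periodic availability.
rng = np.random.default_rng(0)
traces = {
    "lab-pc-01": (rng.random(24 * 30) > 0.1).astype(int),   # ~90% available at all hours
    "lab-pc-02": np.tile([1] * 9 + [0] * 8 + [1] * 7, 30),  # down during working hours
}
print(rank_resources(traces, start_hour=10, duration=4))    # prefers lab-pc-01
```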
ContrastCAD: Contrastive Learning-based Representation Learning for Computer-Aided Design Models
The success of Transformer-based models has encouraged many researchers to
learn CAD models using sequence-based approaches. However, learning CAD models
is still a challenge, because they can be represented as complex shapes with
long construction sequences. Furthermore, the same CAD model can be expressed
using different CAD construction sequences. We propose a novel contrastive
learning-based approach, named ContrastCAD, that effectively captures semantic
information within the construction sequences of the CAD model. ContrastCAD
generates augmented views using dropout techniques without altering the shape
of the CAD model. We also propose a new CAD data augmentation method, called
Random Replace and Extrude (RRE), to enhance the learning performance of the
model when training on an imbalanced CAD dataset. Experimental
results show that the proposed RRE augmentation method significantly enhances
the learning performance of Transformer-based autoencoders, even for complex
CAD models with very long construction sequences. The proposed ContrastCAD
model is shown to be robust to permutation changes in construction sequences
and learns better representations, producing representation spaces in which
similar CAD models are clustered more closely. Our code is available at
https://github.com/cm8908/ContrastCAD
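As a rough illustration of dropout-only augmented views trained with a contrastive (InfoNCE) objective, the following PyTorch sketch encodes the same toy token batch twice under active dropout; the encoder architecture, vocabulary, and temperature are assumptions, not the authors' implementation (see the repository above for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqEncoder(nn.Module):
    """Toy Transformer encoder standing in for a CAD-sequence encoder; dropout
    inside the encoder is the only source of view augmentation."""
    def __init__(self, vocab_size=256, d_model=128, dropout=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, dropout=dropout,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tokens):
        return self.encoder(self.embed(tokens)).mean(dim=1)  # sequence embedding

def info_nce(z1, z2, temperature=0.07):
    """InfoNCE loss: the two dropout views of the same sequence are positives;
    every other sequence in the batch serves as a negative."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature        # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Two forward passes of the same batch yield two views because dropout masks differ.
model = SeqEncoder()
model.train()                                  # keep dropout active
tokens = torch.randint(0, 256, (8, 32))        # batch of 8 toy construction sequences
loss = info_nce(model(tokens), model(tokens))
loss.backward()
```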
An Efficient Approach for Solving Mesh Optimization Problems Using Newton’s Method
We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton's method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods for cases in which the Hessian matrix is not positive definite. We demonstrate our approach by comparing it with nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
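One standard Hessian-modification strategy (not necessarily one of those proposed in the paper) is to add multiples of the identity until a Cholesky factorization succeeds; the minimal sketch below applies it to a toy single-free-vertex objective, with all names and fallback choices being illustrative assumptions.

```python
import numpy as np

def modified_newton_step(grad, hess, beta=1e-3, max_tries=50):
    """Compute a Newton direction, adding tau * I to the Hessian until a
    Cholesky factorization succeeds (i.e., the modified matrix is positive
    definite). This is one common modification strategy, not necessarily
    one of those proposed in the paper."""
    tau = 0.0 if np.all(np.diag(hess) > 0) else beta
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(hess + tau * np.eye(len(grad)))
            y = np.linalg.solve(L, -grad)     # forward solve
            return np.linalg.solve(L.T, y)    # backward solve
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, beta)
    return -grad  # fall back to the steepest descent direction

# Toy objective for one free 2D vertex: sum of squared distances to its neighbors.
neighbors = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, 1.0])]
x = np.array([3.0, 3.0])
for _ in range(5):
    g = sum(2.0 * (x - v) for v in neighbors)   # gradient
    H = 2.0 * len(neighbors) * np.eye(2)        # Hessian (constant here)
    x = x + modified_newton_step(g, H)
print(x)  # converges to the centroid of the neighbors
```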
A Lightweight CNN-Transformer Model for Learning Traveling Salesman Problems
Several studies have attempted to solve traveling salesman problems (TSPs)
using various deep learning techniques. Among them, Transformer-based models
show state-of-the-art performance even for large-scale TSPs. However, they
are based on fully-connected attention models and suffer from large
computational complexity and GPU memory usage. We propose the first
CNN-Transformer model for the TSP, based on a CNN embedding layer and partial
self-attention. Our CNN-Transformer model better learns
spatial features from input data using a CNN embedding layer compared with the
standard Transformer-based models. It also removes considerable redundancy in
fully-connected attention models using the proposed partial self-attention.
Experimental results show that the proposed CNN embedding layer and partial
self-attention are very effective in improving performance and reducing
computational complexity. The proposed model exhibits the best performance on
real-world datasets and outperforms existing state-of-the-art (SOTA)
Transformer-based models in various respects. Our code is publicly available at
https://github.com/cm8908/CNN_Transformer3
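The abstract does not spell out the exact form of the CNN embedding or the partial self-attention, so the following PyTorch sketch is only one plausible reading: a Conv1d embedding over city coordinates and attention restricted to each city's k nearest neighbors, with learned query/key/value projections omitted for brevity; all names and the choice of k are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNEmbedding(nn.Module):
    """Embed 2D city coordinates with 1D convolutions so each city's embedding
    also mixes in information from adjacent cities in the input ordering."""
    def __init__(self, d_model=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(2, d_model, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
        )

    def forward(self, coords):                       # coords: (B, N, 2)
        return self.conv(coords.transpose(1, 2)).transpose(1, 2)  # (B, N, d)

def partial_self_attention(x, coords, k=8):
    """Scaled dot-product attention restricted to each city's k nearest
    neighbors instead of the full N x N attention matrix."""
    B, N, d = x.shape
    dist = torch.cdist(coords, coords)               # (B, N, N) pairwise distances
    knn = dist.topk(k, largest=False).indices        # (B, N, k) nearest neighbors
    mask = torch.full((B, N, N), float("-inf"), device=x.device)
    mask.scatter_(2, knn, 0.0)                       # keep only kNN positions
    scores = x @ x.transpose(1, 2) / d ** 0.5 + mask
    return F.softmax(scores, dim=-1) @ x

coords = torch.rand(4, 50, 2)                        # 4 instances, 50 cities each
out = partial_self_attention(CNNEmbedding()(coords), coords, k=8)
print(out.shape)                                     # torch.Size([4, 50, 128])
```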
Learning Delaunay Triangulation using Self-attention and Domain Knowledge
Delaunay triangulation is a well-known geometric combinatorial optimization
problem with various applications. Many algorithms can generate a Delaunay
triangulation for a given input point set, but most are nontrivial algorithms
requiring an understanding of geometry or additional geometric operations,
such as edge flips. Deep learning has been used to solve various combinatorial
optimization problems; however, generating Delaunay triangulations with deep
learning remains a difficult problem, and very little research has been
conducted on it due to its complexity. In this paper, we propose a
novel deep-learning-based approach for learning Delaunay triangulation using a
new attention mechanism based on self-attention and domain knowledge. The
proposed model is designed such that the model efficiently learns
point-to-point relationships using self-attention in the encoder. In the
decoder, a new attention score function using domain knowledge is proposed to
provide a high penalty when the geometric requirement is not satisfied. A
strength of the proposed attention score function is that it can be extended
to other combinatorial optimization problems involving geometry. Once trained,
the proposed neural network model is simple and efficient: it directly
predicts the Delaunay triangulation for an input point set without requiring
any additional geometric operations. We conduct experiments to demonstrate the
effectiveness of the proposed model and show that it outperforms other
deep-learning-based approaches.
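To show how a domain-knowledge penalty could enter a decoder score, the sketch below uses the classic empty-circumcircle (in-circle) test to penalize candidate triangles that violate the Delaunay requirement; the penalty magnitude and function names are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def in_circle(a, b, c, p):
    """Classic Delaunay in-circle predicate: positive when p lies strictly
    inside the circumcircle of triangle (a, b, c) given in counter-clockwise
    order."""
    m = np.array([
        [a[0] - p[0], a[1] - p[1], (a[0] - p[0]) ** 2 + (a[1] - p[1]) ** 2],
        [b[0] - p[0], b[1] - p[1], (b[0] - p[0]) ** 2 + (b[1] - p[1]) ** 2],
        [c[0] - p[0], c[1] - p[1], (c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2],
    ])
    return np.linalg.det(m)

def penalized_score(raw_score, a, b, c, other_points, penalty=1e4):
    """Subtract a large penalty from a candidate triangle's decoder score when
    any other point violates the empty-circumcircle requirement."""
    violated = any(in_circle(a, b, c, p) > 0 for p in other_points)
    return raw_score - penalty if violated else raw_score

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.5, 1.5]])
print(penalized_score(5.0, pts[0], pts[1], pts[2], [pts[3]]))  # valid: 5.0
print(penalized_score(5.0, pts[0], pts[1], pts[3], [pts[2]]))  # violation: -9995.0
```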
Mol-AIR: Molecular Reinforcement Learning with Adaptive Intrinsic Rewards for Goal-directed Molecular Generation
Optimization techniques for discovering molecular structures with desired
properties are crucial in artificial intelligence (AI)-based drug discovery.
Combining deep generative models with reinforcement learning has emerged as an
effective strategy for generating molecules with specific properties. Despite
its potential, this approach is ineffective in exploring the vast chemical
space and optimizing particular chemical properties. To overcome these
limitations, we present Mol-AIR, a reinforcement learning-based framework using
adaptive intrinsic rewards for effective goal-directed molecular generation.
Mol-AIR leverages the strengths of both history-based and learning-based
intrinsic rewards by exploiting random network distillation and counting-based
strategies. In benchmark tests, Mol-AIR demonstrates superior performance over
existing approaches in generating molecules with desired properties, including
penalized LogP, QED, and celecoxib similarity, without any prior knowledge. We
believe that Mol-AIR represents a significant advancement in drug discovery,
offering a more efficient path to discovering novel therapeutics.
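As a toy sketch of blending a learning-based novelty signal with a history-based count bonus, the following PyTorch snippet combines a random-network-distillation error with a count-based bonus under a hypothetical decaying weight; the schedule, state encoding, and class names are assumptions and do not reproduce Mol-AIR's adaptive mechanism.

```python
from collections import defaultdict
import torch
import torch.nn as nn

class RNDBonus(nn.Module):
    """Random network distillation: a frozen random target net and a trainable
    predictor; the predictor's error on a state is the novelty signal."""
    def __init__(self, state_dim=64, feat_dim=32):
        super().__init__()
        self.target = nn.Linear(state_dim, feat_dim)
        self.predictor = nn.Linear(state_dim, feat_dim)
        for p in self.target.parameters():
            p.requires_grad_(False)

    def forward(self, state):
        return ((self.predictor(state) - self.target(state)) ** 2).mean()

class AdaptiveIntrinsicReward:
    """Blend a learning-based (RND) bonus with a history-based (count) bonus.
    The decaying weight `alpha` is a hypothetical schedule, so exploration
    pressure fades as training progresses."""
    def __init__(self, state_dim=64, decay=0.999):
        self.rnd = RNDBonus(state_dim)
        self.counts = defaultdict(int)
        self.alpha, self.decay = 1.0, decay

    def __call__(self, state, state_key):
        self.counts[state_key] += 1
        count_bonus = self.counts[state_key] ** -0.5   # 1 / sqrt(visit count)
        rnd_bonus = self.rnd(state).item()
        self.alpha *= self.decay
        return self.alpha * (rnd_bonus + count_bonus)

reward_fn = AdaptiveIntrinsicReward()
state = torch.randn(64)            # e.g. an embedding of a partial SMILES string
print(reward_fn(state, state_key="c1ccccc1"))
```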
Social-science research and the general social surveys
'Social-science research has been transformed over the last generation by the advent and expansion of the general social surveys (GSS). The GSS model of research has created an infrastructure for the social sciences designed to address the interests and research agenda of scholars and their students; cover a wide range of topics; utilize reliable, valid, and generalizable measurement; and provide data both across nations and across time. This design in turn has generated widespread analysis and notably contributed to our understanding of social processes and societal change.' (author's abstract)
A Derivative-Free Mesh Optimization Algorithm for Mesh Quality Improvement and Untangling
We propose a derivative-free mesh optimization algorithm that focuses on improving the worst element quality on the mesh. The mesh optimization problem is formulated as a min-max problem and solved using a downhill simplex (amoeba) method, which computes only function values without needing the derivative or Hessian of the objective function. Numerical results show that the proposed mesh optimization algorithm outperforms the existing mesh optimization algorithm in terms of improving the worst element quality and eliminating inverted elements on the mesh.
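To illustrate the min-max formulation with a derivative-free solver, the sketch below repositions a single free vertex to maximize the worst incident triangle quality using SciPy's Nelder-Mead (downhill simplex) implementation; the quality metric and the toy setup are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def triangle_quality(a, b, c):
    """Scale-invariant 2D quality: 4*sqrt(3)*area / (sum of squared edge lengths).
    Equals 1 for an equilateral triangle and is negative for inverted elements."""
    area = 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    edges = np.sum((b - a) ** 2) + np.sum((c - b) ** 2) + np.sum((a - c) ** 2)
    return 4.0 * np.sqrt(3.0) * area / edges

def worst_quality(x, fixed_edges):
    """Quality of the worst triangle incident to the free vertex x."""
    return min(triangle_quality(x, b, c) for b, c in fixed_edges)

# A single free vertex surrounded by four fixed boundary vertices.
ring = [np.array(p) for p in [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]]
fixed_edges = [(ring[i], ring[(i + 1) % 4]) for i in range(4)]

x0 = np.array([0.7, 0.6])  # badly placed free vertex
# Maximize the worst quality (min-max) with the downhill simplex method,
# which needs only objective values, never gradients or Hessians.
res = minimize(lambda x: -worst_quality(x, fixed_edges), x0, method="Nelder-Mead")
print(res.x, worst_quality(res.x, fixed_edges))  # vertex moves toward the ring's center
```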
The General Social Survey-National Death Index: An Innovative New Dataset for the Social Sciences
Background: Social epidemiology seeks in part to understand how social factors--ideas, beliefs, attitudes, actions, and social connections--influence health. However, national health datasets have not kept up with the evolving needs of this cutting-edge area in public health. Sociological datasets that do contain such information, in turn, provide limited health information. Findings: Our team has prospectively linked three decades of General Social Survey data to mortality information through 2008 via the National Death Index. In this paper, we describe the sample, the core elements of the dataset, and analytical considerations. Conclusions: The General Social Survey-National Death Index (GSS-NDI), to be released publicly in October 2011, will help shape the future of social epidemiology and other frontier areas of public health research.
- …