88 research outputs found
Learning Better Font Slicing Strategies from Data
Generally, the present disclosure is directed to serving font files in topical subsets. In particular, in some implementations, the systems and methods of the present disclosure can include or otherwise leverage one or more machine-learned models to predict topic labels of characters in a font based on a corpus of characters or glyphs.
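As a loose illustration of the idea of slicing a font into topical subsets, the sketch below assigns each character the topic under which it appears most often in a small labeled corpus and groups characters into per-topic subsets. The corpus, topic names, and the frequency-count heuristic are invented stand-ins for the machine-learned model described in the disclosure.

```python
from collections import Counter, defaultdict

# Hypothetical labeled corpus: (topic, text) pairs. Topics and texts are invented;
# in practice a trained model would predict a topic label per character or glyph.
corpus = [
    ("math",  "∑ x² + ∫ f(x) dx ≈ π"),
    ("latin", "The quick brown fox jumps over the lazy dog."),
    ("math",  "∀ε>0 ∃δ>0 …"),
]

# Count how often each character appears under each topic label.
char_topic_counts = defaultdict(Counter)
for topic, text in corpus:
    for ch in text:
        if not ch.isspace():
            char_topic_counts[ch][topic] += 1

# A trivially simple stand-in "model": label each character with its most frequent
# topic, then slice the font into per-topic subsets that can be served separately.
subsets = defaultdict(set)
for ch, counts in char_topic_counts.items():
    predicted_topic = counts.most_common(1)[0][0]
    subsets[predicted_topic].add(ch)

for topic, chars in subsets.items():
    print(topic, "->", "".join(sorted(chars)))
```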
Conversion funnel optimization using machine learning
Conversion funnels represent the journey of a consumer through a marketing campaign that results in a sale. The described techniques incorporate initial inputs from marketers and apply machine learning to optimize conversion funnels in online advertising. The initial inputs include definitions of states and of the optimal actions that move consumers to the next states of a funnel. With user permission, a machine learning model, e.g., one that uses reinforcement learning, is randomly applied to try other actions defined by marketers (e.g., for other states) in order to collect training data on various actions. This training data is used to train the model and removes the need to provide a large amount of training data at the beginning of the process. The trained model thus obtained predicts optimal actions to move the consumer through the conversion funnel. Actions predicted by the model for a given state, along with supporting evidence, are provided to marketers for review and approval. Marketers can modify the funnel parameters, e.g., states and actions, based on such evidence.
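A minimal sketch of the explore-then-train idea described above, using tabular Q-learning over funnel states with occasional random exploration of other marketer-defined actions. The state names, actions, toy environment, and epsilon-greedy schedule are assumptions for illustration, not details from the disclosure.

```python
import random
from collections import defaultdict

# Hypothetical funnel states and marketer-defined actions (assumed for illustration).
STATES = ["aware", "interested", "considering", "purchased"]
ACTIONS = ["send_email", "show_ad", "offer_discount"]

def simulate_response(state, action):
    """Toy environment: returns (next_state, reward). Purely illustrative."""
    advance_prob = {"send_email": 0.2, "show_ad": 0.3, "offer_discount": 0.5}[action]
    idx = STATES.index(state)
    next_state = STATES[idx + 1] if random.random() < advance_prob else state
    reward = 1.0 if next_state == "purchased" else 0.0
    return next_state, reward

q = defaultdict(float)          # Q-values for (state, action) pairs
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(5000):
    state = "aware"
    while state != "purchased":
        # Epsilon-greedy: occasionally try other marketer-defined actions to gather data.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = simulate_response(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# Predicted optimal action per state, to be surfaced to marketers for review and approval.
for s in STATES[:-1]:
    print(s, "->", max(ACTIONS, key=lambda a: q[(s, a)]))
```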
Exploration of Problems and Key Points in Database Design in Software Development
Starting from the necessity and principles of database design, this article explores its optimization. First, it analyzes why database design is necessary, covering effective management, maintainability, resource utilization, and running speed. It then discusses a series of issues in database management, such as user management, data-object design specifications, and overall design ideas. Finally, it elaborates on optimization concerns such as normalization rules, handling of redundancy between tables, query optimization, indexing, and transactions. Database design is indispensable in the software development lifecycle: its role is not only to ensure the safety and reliability of data, but also to ensure the overall stability and speed of the system. Strengthening the rationality and optimization of the design is the key to improving software quality.
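As a small illustration of several of the practices mentioned (normalization, indexing, and transactions), here is a sketch using Python's built-in sqlite3 module; the table names and columns are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized schema: customer data lives in one table, orders reference it by key,
# so customer details are not duplicated on every order row.
cur.executescript("""
CREATE TABLE customers (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT NOT NULL UNIQUE
);
CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    amount      REAL NOT NULL,
    created_at  TEXT NOT NULL
);
-- Index the foreign key that queries will filter on.
CREATE INDEX idx_orders_customer ON orders(customer_id);
""")

# Transaction: both inserts succeed or neither does.
try:
    with conn:  # sqlite3 commits on success, rolls back on exception
        cur.execute("INSERT INTO customers (name, email) VALUES (?, ?)",
                    ("Alice", "alice@example.com"))
        cur.execute("INSERT INTO orders (customer_id, amount, created_at) "
                    "VALUES (?, ?, datetime('now'))", (cur.lastrowid, 42.0))
except sqlite3.IntegrityError as exc:
    print("rolled back:", exc)

# The index above lets this lookup avoid a full scan of the orders table.
for row in cur.execute("SELECT amount FROM orders WHERE customer_id = ?", (1,)):
    print(row)
```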
Corrigendum to "A state-dependent delay equation with chaotic solutions" [Electron. J. Qual. Theory Differ. Equ. 2019, No. 22, 1–20]
We correct an error in "A state-dependent delay equation with chaotic solutions" [Electron. J. Qual. Theory Differ. Equ. 2019, No. 22, 1–20].
Improving Audio-Visual Segmentation with Bidirectional Generation
The aim of audio-visual segmentation (AVS) is to precisely differentiate audible objects within videos down to the pixel level. Traditional approaches often tackle this challenge by combining information from various modalities, where the contribution of each modality is implicitly or explicitly modeled. Nevertheless, the interconnections between different modalities tend to be overlooked in audio-visual modeling. In this paper, inspired by the human ability to mentally simulate the sound of an object and its visual appearance, we introduce a bidirectional generation framework. This framework establishes robust correlations between an object's visual characteristics and its associated sound, thereby enhancing the performance of AVS. To achieve this, we employ a visual-to-audio projection component that reconstructs audio features from object segmentation masks and minimizes reconstruction errors. Moreover, recognizing that many sounds are linked to object movements, we introduce an implicit volumetric motion estimation module to handle temporal dynamics that may be challenging to capture using conventional optical flow methods. To showcase the effectiveness of our approach, we conduct comprehensive experiments and analyses on the widely recognized AVSBench benchmark. As a result, we establish a new state-of-the-art performance level on the AVS benchmark, particularly excelling in the challenging MS3 subset, which involves segmenting multiple sound sources. To facilitate reproducibility, we plan to release both the source code and the pre-trained model.
Comment: Dawei Hao and Yuxin Mao contributed equally to this paper. Yiran Zhong is the corresponding author. The code will be released at https://github.com/OpenNLPLab/AVS-bidirectiona
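A rough sketch of the visual-to-audio projection idea described above (reconstructing audio features from mask-gated visual features and penalizing reconstruction error), written in an assumed PyTorch style; the layer sizes, pooling choice, and loss are illustrative guesses, not the released implementation.

```python
import torch
import torch.nn as nn

class VisualToAudioProjector(nn.Module):
    """Project mask-gated visual features to an audio-feature estimate (illustrative only)."""
    def __init__(self, visual_dim=256, audio_dim=128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(visual_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, audio_dim),
        )

    def forward(self, visual_feats, masks):
        # visual_feats: (B, C, H, W) frame features; masks: (B, 1, H, W) predicted segmentation.
        gated = visual_feats * masks             # keep features of the sounding object
        pooled = gated.flatten(2).mean(dim=2)    # (B, C) global average over pixels
        return self.proj(pooled)                 # (B, audio_dim) reconstructed audio feature

# The reconstruction loss against the audio encoder's features would be added to the AVS loss.
projector = VisualToAudioProjector()
visual = torch.randn(4, 256, 28, 28)
mask = torch.rand(4, 1, 28, 28)
audio_target = torch.randn(4, 128)
recon = projector(visual, mask)
loss = nn.functional.mse_loss(recon, audio_target)
loss.backward()
```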
HyPar: Towards Hybrid Parallelism for Deep Learning Accelerator Array
With the rise of artificial intelligence in recent years, Deep Neural Networks (DNNs) have been widely used in many domains. To achieve high performance and energy efficiency, hardware acceleration of DNNs (especially of inference) is intensively studied in both academia and industry. However, we still face two challenges: large DNN models and datasets, which incur frequent off-chip memory accesses; and the training of DNNs, which is not well explored in recent accelerator designs. To truly provide high-throughput and energy-efficient acceleration for the training of deep and large models, we inevitably need to use multiple accelerators to exploit coarse-grain parallelism, beyond the fine-grain parallelism inside a layer considered in most existing architectures. This poses the key research question of finding the best organization of computation and dataflow among accelerators. In this paper, we propose a solution, HyPar, to determine layer-wise parallelism for deep neural network training with an array of DNN accelerators. HyPar partitions the feature map tensors (input and output), the kernel tensors, the gradient tensors, and the error tensors across the DNN accelerators. A partition constitutes the choice of parallelism for the weighted layers. The optimization target is to search for a partition that minimizes the total communication during the training of a complete DNN. To solve this problem, we propose a communication model to explain the source and amount of communication. Then, we use a hierarchical layer-wise dynamic programming method to search for the partition for each layer.
Comment: To appear in the 2019 25th International Symposium on High-Performance Computer Architecture (HPCA 2019).
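To make the layer-wise dynamic program concrete, here is a hedged Python sketch: it assumes each layer can use one of two partition choices (by analogy with data parallelism and model parallelism), with made-up per-layer communication costs and layer-transition costs, and picks the cost-minimizing assignment. The cost numbers and helper names are invented for illustration and are not HyPar's actual communication model.

```python
# Layer-wise dynamic programming over partition choices (illustrative sketch).
# Choice 0 ~ "data parallel", choice 1 ~ "model parallel" for each weighted layer.

# Assumed per-layer communication cost for each parallelism choice.
intra_cost = [
    (10.0, 4.0),   # layer 0: (cost if data-parallel, cost if model-parallel)
    (8.0, 6.0),    # layer 1
    (2.0, 9.0),    # layer 2
]

def transition_cost(prev_choice, cur_choice):
    """Extra communication when adjacent layers use different partitions (assumed constant)."""
    return 0.0 if prev_choice == cur_choice else 3.0

def best_partition(intra_cost):
    n = len(intra_cost)
    # dp[p] = minimal total cost of the layers processed so far, ending with choice p.
    dp = list(intra_cost[0])
    back = [(None, None)]
    for layer in range(1, n):
        new_dp, choices = [], []
        for cur in (0, 1):
            cands = [dp[prev] + transition_cost(prev, cur) for prev in (0, 1)]
            best_prev = 0 if cands[0] <= cands[1] else 1
            new_dp.append(cands[best_prev] + intra_cost[layer][cur])
            choices.append(best_prev)
        dp = new_dp
        back.append(tuple(choices))
    # Backtrack the per-layer choices from the cheaper final state.
    cur = 0 if dp[0] <= dp[1] else 1
    total, plan = dp[cur], [cur]
    for layer in range(n - 1, 0, -1):
        cur = back[layer][cur]
        plan.append(cur)
    return total, list(reversed(plan))

total, plan = best_partition(intra_cost)
print("min total communication:", total, "per-layer choices:", plan)
```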
A state-dependent delay equation with chaotic solutions
We exhibit a scalar-valued state-dependent delay differential equation x′(t) = f(x(t − d(x_t))) that has a chaotic solution. This equation has continuous (semi-strictly) monotonic negative feedback, and the quantity t − d(x_t) is strictly increasing along solutions.
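To illustrate the form x′(t) = f(x(t − d(x_t))) numerically, here is a minimal forward-Euler sketch with history interpolation. The feedback function f and the delay d below are invented placeholders (with d depending only on the current value x(t), a simple special case of d(x_t)); they are not the functions constructed in the paper.

```python
import numpy as np

# Toy illustration of the equation form x'(t) = f(x(t - d(x_t))).
def f(u):
    return -np.tanh(2.0 * u)           # monotone negative feedback (example choice)

def d(x_now):
    return 1.0 + 0.5 * np.tanh(x_now)  # state-dependent delay in (0.5, 1.5) (example choice)

dt, T = 1e-3, 30.0
ts = np.arange(-2.0, T + dt, dt)       # keep history back to t = -2 for the delayed lookup
xs = np.zeros_like(ts)
hist = ts <= 0.0
xs[hist] = 0.1 * np.cos(ts[hist])      # arbitrary initial history on [-2, 0]

start = np.searchsorted(ts, 0.0)
for i in range(start, len(ts) - 1):
    t_delayed = ts[i] - d(xs[i])               # evaluation time shifted by the state-dependent delay
    x_delayed = np.interp(t_delayed, ts, xs)   # linear interpolation into the stored history
    xs[i + 1] = xs[i] + dt * f(x_delayed)      # forward Euler step

print("x(T) ≈", xs[-1])
```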