
    A globally convergent SQP-type method with least constraint violation for nonlinear semidefinite programming

    We present a globally convergent SQP-type method with least constraint violation for nonlinear semidefinite programming. The proposed algorithm employs a two-phase strategy coupled with a line search technique. In the first phase, a subproblem based on a local model of infeasibility is formulated to determine a corrective step. In the second phase, a search direction that moves toward optimality is computed by minimizing a local model of the objective function. Importantly, regardless of the feasibility of the original problem, the iterative sequence generated by our proposed method converges to a Fritz-John point of a transformed problem, wherein the constraint violation is minimized. Numerical experiments have been conducted on various complex scenarios to demonstrate the effectiveness of our approach. Comment: 34 pages
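The two-phase idea described above (a corrective step on a local infeasibility model, then a descent step on the objective) can be illustrated on a toy scalar problem. This is a minimal sketch under loose assumptions, not the paper's SQP-type method: the function `two_phase_solve`, the quadratic infeasibility model, and the fixed step size `t` are all illustrative choices.

```python
def two_phase_solve(f_grad, c, c_grad, x0, t=0.05, iters=2000):
    """Toy two-phase iteration for: minimize f(x) subject to c(x) <= 0.
    Phase 1 takes a corrective step on the local infeasibility model
    v(x) = 0.5 * max(c(x), 0)^2; phase 2 takes a descent step on f,
    suppressed when it would increase the linearized violation at the
    boundary. Illustrative only; the paper's method is far more general."""
    x = x0
    for _ in range(iters):
        # Phase 1: gradient step on the infeasibility model v(x)
        viol = max(c(x), 0.0)
        x -= t * viol * c_grad(x)
        # Phase 2: descend on f, but stay put if the step would push
        # the (near-)active constraint further into violation
        d = -f_grad(x)
        if c(x) >= -1e-9 and d * c_grad(x) > 0:
            d = 0.0
        x += t * d
    return x

# example: minimize f(x) = x^2 subject to c(x) = 1 - x <= 0 (i.e. x >= 1)
x_star = two_phase_solve(lambda x: 2 * x, lambda x: 1 - x, lambda x: -1.0, x0=5.0)
```

On this example the iterates first descend on the objective while feasible, briefly overshoot the boundary, and are then pulled back by the phase-1 corrective steps toward the constrained minimizer x = 1.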

    Do Ownership and Culture Matter to Joint Venture Success?

    Previous studies show that equity structure and cultural differences are important factors influencing the performance of joint ventures (JVs). Based on a JV contract database and an import/export ranking database, our analysis shows that monopoly-controlled JVs outperform others. However, cultural differences do not hinder performance; in fact, heterogeneity has produced better outcomes in JVs. The grouped data samples suggest that the higher the ratio of Chinese equity in a JV, the better its export performance, whereas the relation between foreign equity and import orientation is not significant.

    Theoretic Analysis and Extremely Easy Algorithms for Domain Adaptive Feature Learning

    Domain adaptation problems arise in a variety of applications, where a training dataset from the \textit{source} domain and a test dataset from the \textit{target} domain typically follow different distributions. The primary difficulty in designing effective learning models for such problems lies in how to bridge the gap between the source and target distributions. In this paper, we provide a comprehensive analysis of feature learning algorithms used in conjunction with linear classifiers for domain adaptation. Our analysis shows that, in order to achieve good adaptation performance, the second moments of the source and target domain distributions should be similar. Based on this new analysis, a novel, extremely easy feature learning algorithm for domain adaptation is proposed. Furthermore, the algorithm is extended by leveraging multiple layers, leading to a deep linear model. We evaluate the effectiveness of the proposed algorithms on domain adaptation tasks over the Amazon review dataset and the spam dataset from the ECML/PKDD 2006 discovery challenge. Comment: ijca
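The second-moment condition suggests a simple transform: whiten the source features and re-color them with the target second moment (CORAL-style covariance alignment). The sketch below is illustrative, not the paper's exact algorithm; `align_second_moment` and its regularizer `eps` are assumptions made for this example.

```python
import numpy as np

def align_second_moment(Xs, Xt, eps=1e-6):
    """Whiten the source features, then re-color them with the target
    second moment so that Xs's covariance matches Xt's.
    (CORAL-style sketch; the paper's transform may differ.)"""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    # inverse square root of source covariance, square root of target covariance
    ws, Vs = np.linalg.eigh(Cs)
    wt, Vt = np.linalg.eigh(Ct)
    Cs_inv_sqrt = Vs @ np.diag(ws ** -0.5) @ Vs.T
    Ct_sqrt = Vt @ np.diag(wt ** 0.5) @ Vt.T
    return Xs @ Cs_inv_sqrt @ Ct_sqrt

# synthetic demo: source and target with very different per-feature scales
rng = np.random.default_rng(0)
Xs = rng.normal(size=(500, 3)) * [1.0, 5.0, 0.5]
Xt = rng.normal(size=(500, 3)) * [2.0, 1.0, 3.0]
Xa = align_second_moment(Xs, Xt)
```

After the transform, the covariance of `Xa` matches that of `Xt` up to the regularization, so a linear classifier trained on the aligned source features sees target-like second-order statistics.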

    A New High-Speed Foreign Fiber Detection System with Machine Vision

    A new high-speed foreign fiber detection system based on machine vision is proposed for removing foreign fibers from raw cotton, using optimized hardware components and appropriate algorithm design. Built around a specialized lens and a 3-CCD (charge-coupled device) camera, the system applies a digital signal processor (DSP) and a field-programmable gate array (FPGA) to image acquisition and processing under ultraviolet illumination, so that transparent objects such as polyethylene and polypropylene fabric can be identified in the cotton tuft flow by virtue of their fluorescent effect and then blown away safely by compressed air. An image segmentation algorithm based on the fast wavelet transform is proposed to identify block-like foreign fibers, and an improved Canny detector is developed to segment wire-like foreign fibers from raw cotton. The procedure also provides a color image segmentation method with a region-growing algorithm for better adaptability. Experiments on a variety of images show that the proposed algorithms can effectively segment foreign fibers from test images under various circumstances.

    Visualizing topological edge states of single and double bilayer Bi supported on multibilayer Bi(111) films

    Freestanding single-bilayer Bi(111) is a two-dimensional topological insulator with edge states propagating along its perimeter. Given the interlayer coupling observed experimentally, the topological nature of Bi(111) thin films and the impact of the supporting substrate on the topmost Bi bilayer are still under debate. Here, combining scanning tunneling microscopy with first-principles calculations, we systematically study the electronic properties of Bi(111) thin films grown on a NbSe2 substrate. Two types of non-magnetic edge structures, i.e., a conventional zigzag edge and a 2x1 reconstructed edge, coexist alternately at the boundaries of single-bilayer islands, and their topological edge states exhibit remarkably different energy and spatial distributions. Prominent edge states are persistently visualized at the edges of both single and double bilayer Bi islands, regardless of the thickness of the underlying Bi(111) thin films. We provide an explanation for the topological origin of the observed edge states that is verified with first-principles calculations. Our paper clarifies the long-standing controversy regarding the topology of Bi(111) thin films and reveals the tunability of topological edge states via edge modifications. Comment: 36 pages, 10 figures

    Sample Efficiency Matters: A Benchmark for Practical Molecular Optimization

    Molecular optimization is a fundamental goal in the chemical sciences and is of central interest to drug and material design. In recent years, significant progress has been made in solving challenging problems across various aspects of computational molecular optimization, emphasizing high validity, diversity, and, most recently, synthesizability. Despite this progress, many papers report results on trivial or self-designed tasks, making it difficult to directly assess the performance of new methods. Moreover, the sample efficiency of the optimization (the number of molecules evaluated by the oracle) is rarely discussed, despite being an essential consideration for realistic discovery applications. To fill this gap, we have created an open-source benchmark for practical molecular optimization, PMO, to facilitate the transparent and reproducible evaluation of algorithmic advances in molecular optimization. This paper thoroughly investigates the performance of 25 molecular design algorithms on 23 tasks, with a particular focus on sample efficiency. Our results show that most "state-of-the-art" methods fail to outperform their predecessors under a limited oracle budget of 10K queries and that no existing algorithm can efficiently solve certain molecular optimization problems in this setting. We analyze the influence of optimization algorithm choices, molecular assembly strategies, and oracle landscapes on optimization performance to inform future algorithm development and benchmarking. PMO provides a standardized experimental setup to comprehensively evaluate and compare new molecule optimization methods with existing ones. All code can be found at https://github.com/wenhao-gao/mol_opt
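The oracle-budget protocol can be made concrete with a toy loop that counts every oracle call and records the best-so-far score, loosely mirroring PMO's sample-efficiency bookkeeping. Everything named here is a stand-in: `optimize_with_budget`, the hill-climbing proposal, and the scalar oracle are illustrative, not part of the benchmark (whose oracles score molecules, not numbers).

```python
import random

def optimize_with_budget(oracle, propose, budget=10_000, seed=0):
    """Toy budget-limited optimization loop: every oracle evaluation
    counts toward the budget, and the best-so-far curve is recorded
    (PMO summarizes such curves, e.g. via AUC of the top-k average)."""
    rng = random.Random(seed)
    best_x, best_score, calls = None, float("-inf"), 0
    history = []
    while calls < budget:
        x = propose(rng, best_x)
        score = oracle(x)           # each oracle call consumes budget
        calls += 1
        if score > best_score:
            best_x, best_score = x, score
        history.append(best_score)  # best-so-far after each call
    return best_x, best_score, history

# hypothetical stand-in oracle: maximize -(x - 3)^2, optimum at x = 3
oracle = lambda x: -(x - 3.0) ** 2
# hill-climbing proposal: random start, then local Gaussian perturbations
propose = lambda rng, best: rng.uniform(-10, 10) if best is None else best + rng.gauss(0, 0.5)
x, s, hist = optimize_with_budget(oracle, propose, budget=2000)
```

Comparing methods by where their best-so-far curves sit at a fixed budget, rather than only by their final scores, is what makes the evaluation sample-efficiency-aware.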

    Semantic-visual Guided Transformer for Few-shot Class-incremental Learning

    Few-shot class-incremental learning (FSCIL) has recently attracted extensive attention in various areas. Existing FSCIL methods depend heavily on the robustness of a feature backbone pre-trained on the base classes. In recent years, Transformer variants have made significant progress in feature representation learning across many fields. Nevertheless, Transformers have not yet realized in FSCIL scenarios the potential they have shown elsewhere. In this paper, we develop a semantic-visual guided Transformer (SV-T) to enhance the feature-extracting capacity of the pre-trained backbone on incremental classes. Specifically, we first use the visual (image) labels provided by the base classes to supervise the optimization of the Transformer. A text encoder is then introduced to automatically generate a corresponding semantic (text) label for each image from the base classes. Finally, the constructed semantic labels are applied to the Transformer to guide its parameter updates. Our SV-T takes full advantage of the additional supervision available from the base classes and further enhances the training robustness of the feature backbone. More importantly, SV-T is an independent method that can be applied directly to existing FSCIL architectures for acquiring embeddings of various incremental classes. Extensive experiments on three benchmarks, two FSCIL architectures, and two Transformer variants show that the proposed SV-T obtains a significant improvement over existing state-of-the-art FSCIL methods. Comment: Accepted by the IEEE International Conference on Multimedia and Expo (ICME 2023)

    A New Type of Crumb Rubber Asphalt Mixture: A Dry Process Design and Performance Evaluation

    To obtain a crumb rubber asphalt mixture with excellent performance, this study combined trans-polyoctenamer rubber (TOR), crumb rubber, and other additives to develop a new type of crumb rubber (CRT). The objective was to design and evaluate the road performance of the new crumb rubber asphalt mixture (CRTAM) with a skeleton-dense texture prepared through a dry process. First, the skeleton intrusion compact volume method was used to optimize the grading of coarse and fine aggregates, and the CRTAM gradation was designed through equal- and unequal-volume replacement grading methods. Then, three types of road performance were analyzed: high-temperature stability, low-temperature crack resistance, and water stability. The results showed that 2% and 2.5% CRT met the low-temperature index with equal-volume substitution, and the six gradations obtained by unequal-volume replacement with 2% CRT complied with the requirements of a skeleton-dense texture. When the substitution ratio was 1.5 or 0.5, the high-temperature performance was better. In addition, when the substitution ratio was 0.5, the flexural strain energy density was the highest and the low-temperature performance was the best. Taking economic benefits into account as well, it is recommended that the CRT content be 2% and the substitution ratio be 0.5.

    Palaeoenvironment and Its Control on the Formation of Miocene Marine Source Rocks in the Qiongdongnan Basin, Northern South China Sea

    The main factors controlling the depositional environment of marine source rocks in continental margin basins are basin-specific. As a result, the depositional environment and developmental pattern of marine source rocks, especially those in continental margin basins, remain controversial or poorly understood. Through analysis of trace elements and maceral data, the depositional environment of the Miocene marine source rocks in the Qiongdongnan Basin is reconstructed, and their developmental patterns are established. This paper attempts to reveal the hydrocarbon potential of the Miocene marine source rocks in different environments and to infer the quality of source rocks in the bathyal region of the continental slope, where no exploratory wells exist. Our results highlight the palaeoenvironment and its control on the formation of Miocene marine source rocks in the Qiongdongnan Basin of the northern South China Sea and indicate the hydrocarbon potential of the source rocks in the bathyal region. This study provides a window for better understanding the main factors influencing marine source rocks in continental margin basins, including productivity, preservation conditions, and the input of terrestrial organic matter.

    Understanding ME? Multimodal Evaluation for Fine-grained Visual Commonsense

    Visual commonsense understanding requires vision-language (VL) models not only to understand images and text but also to cross-reference between them to fully integrate and comprehend the visual scene described. Recently, various approaches have been developed and have achieved high performance on visual commonsense benchmarks. However, it is unclear whether these models really understand the visual scene and the underlying commonsense knowledge, due to limited evaluation data resources. To provide an in-depth analysis, we present a Multimodal Evaluation (ME) pipeline that automatically generates question-answer pairs to test models' understanding of the visual scene, the text, and related knowledge. We then take a step further and show that training with the ME data boosts the model's performance in standard VCR evaluation. Lastly, our in-depth analysis and comparison reveal interesting findings: (1) semantically low-level information can assist the learning of high-level information, but not the opposite; (2) visual information is generally underutilized compared with text. Comment: Accepted to EMNLP 2022 Long Paper