
    Mergers and Acquisitions in Europe: Analysis of EC Competition Regulations

    This paper analyzes three competition regulations in the European Community: Articles 85 and 86 of the EC Treaty and the EC Merger Regulation. Article 85 focuses on agreements that restrict competition, while Article 86 focuses on abuse of market dominance. The paper explores the Merger Regulation, its objectives, and its scope, and explains the amendment extending that scope to smaller-scale mergers and cooperative joint ventures. It concludes with a discussion of the extraterritoriality of the EC competition regulations.

    Reflections on national “Sonderwege” in the era of transnational history


    An Educational System Design to Support Learning Transfer from Block-based Programming Language to Text-based Programming Language

    In programming education, novices typically learn a block-based programming language first and then move on to a text-based one. Learning transfer across two or more languages in programming education has shown positive results. However, block-based and text-based programming languages differ in representation and method, which can cause cognitive confusion or increase cognitive load for learners. It is therefore necessary to develop an educational system that supports learning transfer. We propose the following design principles: use of advance organizers, problem-solving-based learning content, and a simple, intuitive user interface and screen layout. Two screen composition modes are presented: a training mode and a practice mode. Future research should implement and apply this design in the educational field to verify its effectiveness.
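    The representational gap between the two language families can be made concrete with a small side-by-side illustration (hypothetical, not tied to any specific block tool): the same repeat-and-move logic a learner assembles from blocks maps directly onto a text-based loop.

```python
# Block-based version, as a learner might assemble it:
#   [repeat 4 times]
#     [move forward]
#     [turn right 90 degrees]
#
# Equivalent text-based version of the same logic
# (draw_square and the step names are illustrative):
def draw_square(path):
    """Append the four moves of a square to `path`."""
    for _ in range(4):          # maps onto the "repeat 4" block
        path.append("forward")  # maps onto the "move forward" block
        path.append("right90")  # maps onto the "turn right" block
    return path

steps = draw_square([])
```

    An advance organizer in the proposed system could present exactly such pairs before learners switch representations.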

    An Analysis of Pre-service Teachers' Learning Process in Programming Learning

    As the importance of computing technology increases, computer science education is being actively implemented around the world. With its introduction into school curricula, research on how to teach programming effectively (the core of automation) is actively underway. Although the importance of block-based programming languages has grown, most studies have focused on text-based languages. As interest in programming increases, block-based languages will be taught to a wider range of audiences. This study therefore analyzed Code.org, which provides a development environment for block-based programming, and investigated the programming learning process of pre-service teachers who used it. Sixteen pre-service teachers participated, and their learning processes were uncovered by analyzing their programming results. The results suggest that pre-service teachers can learn sequencing and simple repetition without difficulty. However, they failed to use the repetition block through abstraction. For While and Until, they did not understand the concept of repeating according to a condition, and for Counter, they had difficulty using variables within repetition. For conditionals, they were unable to separate the commands to be executed when the condition is true from those to be executed when it is false. For Event, they had no problem using functions but were unable to call a function with a parameter. These findings confirm that pre-service teachers should understand abstraction, conditions, and variables in loop statements before learning the principles of program development. A limitation of this study is that the platform's low scalability restricted practice with the block-based programming language. Future research should address these problems and diversify the research subjects.
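    The constructs the participants struggled with — condition-driven repetition (While vs. Until), a counter variable updated inside a loop, and keeping the true and false branches of a conditional separate — can be sketched in text form (a generic Python sketch, not the Code.org environment):

```python
def count_down(start):
    """Condition-driven repetition with a counter variable."""
    steps = []
    n = start            # the counter the loop body updates
    while n > 0:         # "while": repeat WHILE the condition holds;
        steps.append(n)  # an "until n == 0" block would repeat
        n -= 1           # UNTIL the condition holds -- same loop here
    return steps

def classify(n):
    """A conditional whose true and false branches stay separate."""
    if n % 2 == 0:       # commands run only when the condition is True
        return "even"
    else:                # commands run only when the condition is False
        return "odd"
```

    Misplacing a command across the `if`/`else` boundary, or confusing the while/until sense of the condition, reproduces exactly the errors observed in the study.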

    Robust face anti-spoofing framework with Convolutional Vision Transformer

    Owing to advances in image processing technology and large-scale datasets, companies have implemented facial authentication processes, stimulating increased focus on face anti-spoofing (FAS) against realistic presentation attacks. Various attempts have recently been made to improve face recognition performance using both global and local learning on face images; however, to the best of our knowledge, this is the first study to investigate whether the robustness of FAS against domain shifts improves when global information and local cues in face images are captured using self-attention and convolutional layers. This study proposes a convolutional vision transformer-based framework that achieves robust performance on various unseen-domain data. Our model yields increases of 7.3 and 12.9 percentage points in FAS performance over models using only a convolutional neural network or a vision transformer, respectively. It also shows the highest average rank across the sub-protocols of the cross-dataset setting against the other nine benchmark models for domain generalization.
    Comment: ICIP 202
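    The core idea of fusing local cues with global information can be sketched roughly as follows (illustrative only: the paper's actual architecture, dimensions, and fusion rule are not specified here; `hybrid_block`, the additive fusion, and all shapes are assumptions). One branch runs single-head self-attention over all patch features (global), the other a depthwise convolution along the patch sequence (local):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hybrid_block(patches, w_qkv, kernel):
    """Fuse a global self-attention branch with a local conv branch.

    patches: (n, d) patch features; w_qkv: (d, 3*d) QKV projection;
    kernel:  odd-length 1-D depthwise kernel along the patch axis.
    """
    n, d = patches.shape
    # Global branch: single-head self-attention over all patches.
    q, k, v = np.split(patches @ w_qkv, 3, axis=1)
    attn = softmax(q @ k.T / np.sqrt(d))
    global_out = attn @ v
    # Local branch: depthwise 1-D convolution over neighboring patches.
    pad = len(kernel) // 2
    padded = np.pad(patches, ((pad, pad), (0, 0)), mode="edge")
    local_out = np.stack(
        [np.convolve(padded[:, j], kernel, mode="valid") for j in range(d)],
        axis=1,
    )
    # Fuse the two views by simple addition (one of several options).
    return global_out + local_out
```

    Whether fusion happens by addition, concatenation, or interleaved stages is a design choice the sketch leaves open.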

    Enhancing Spatiotemporal Traffic Prediction through Urban Human Activity Analysis

    Traffic prediction is one of the key elements in ensuring the safety and convenience of citizens. Existing traffic prediction models primarily focus on deep learning architectures that capture spatial and temporal correlations, but they often overlook the underlying nature of traffic: the sensor networks in most traffic datasets do not accurately represent the actual road network used by vehicles, and so fail to provide insight into the traffic patterns arising from urban activities. To overcome these limitations, we propose an improved traffic prediction method based on graph-convolutional deep learning algorithms. We leverage human activity frequency data from the National Household Travel Survey to enhance the model's ability to infer a causal relationship between activity and traffic patterns. Despite making minimal modifications to conventional graph convolutional recurrent network and graph convolutional transformer architectures, our approach achieves state-of-the-art performance without introducing excessive computational overhead.
    Comment: CIKM 202
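    The graph-convolution step at the core of such models can be sketched as the standard symmetric-normalized propagation rule H' = σ(D^{-1/2}(A + I)D^{-1/2} H W) over the sensor graph. This is a generic GCN layer, not the paper's exact model; appending the activity frequencies to the per-sensor features is shown here purely as an assumption about how the survey data could enter the model:

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step with symmetric normalization.

    adj:      (n, n) sensor adjacency matrix (0/1).
    features: (n, f) per-sensor inputs, e.g. traffic readings with
              human-activity frequencies appended (an assumption).
    weights:  (f, h) learnable projection.
    """
    a_hat = adj + np.eye(len(adj))            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # D^{-1/2}
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights
    return np.maximum(propagated, 0.0)        # ReLU

# Toy sensor graph: three sensors in a line, two features each.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.array([[1.0, 0.5], [2.0, 0.1], [0.5, 0.9]])
h = gcn_layer(adj, x, np.eye(2))
```

    Recurrent or transformer variants wrap this propagation step inside a temporal model; the paper's modifications concern the inputs, not this rule.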

    1st Place in ICCV 2023 Workshop Challenge Track 1 on Resource Efficient Deep Learning for Computer Vision: Budgeted Model Training Challenge

    The budgeted model training challenge aims to train an efficient classification model under resource limitations. To tackle this task on ImageNet-100, we describe a simple yet effective resource-aware backbone search framework composed of a profile phase and an instantiation phase; in addition, we employ multi-resolution ensembles to boost inference accuracy under limited resources. The profile phase obeys the time and memory constraints to determine each model's optimal batch size, maximum number of epochs, and use of automatic mixed precision (AMP); the instantiation phase then trains the models with the parameters determined in the profile phase. To improve intra-domain generalization, the multi-resolution ensembles are formed from two-resolution images with randomly applied flips. We present a comprehensive analysis backed by extensive experiments. With this approach, we won first place in the International Conference on Computer Vision (ICCV) 2023 Workshop Challenge Track 1 on Resource Efficient Deep Learning for Computer Vision (RCV).
    Comment: ICCV 2023 Workshop Challenge Track 1 on RC
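    The profile phase described above — picking the largest batch size and epoch count that fit the budget — can be sketched as a simple search loop. The cost model and budget numbers below are made-up stand-ins for real profiling measurements (an actual profile phase would time a few warm-up steps on the target hardware):

```python
def profile(mem_budget_gb, time_budget_s, mem_per_sample_gb, sec_per_epoch):
    """Pick batch size and max epochs under memory/time budgets.

    sec_per_epoch(batch) is an assumed callable returning measured
    epoch time for a given batch size.
    """
    # Largest power-of-two batch size that still fits in memory.
    batch = 1
    while 2 * batch * mem_per_sample_gb <= mem_budget_gb:
        batch *= 2
    # As many epochs as the time budget allows at that batch size.
    max_epochs = int(time_budget_s // sec_per_epoch(batch))
    return batch, max_epochs

# Illustrative numbers: 16 GB of memory, a one-hour time budget.
batch, epochs = profile(
    mem_budget_gb=16.0,
    time_budget_s=3600.0,
    mem_per_sample_gb=0.05,
    sec_per_epoch=lambda b: 200.0 / (b ** 0.5),  # assumed scaling
)
```

    The instantiation phase would then simply launch training with the returned `batch` and `epochs`, with AMP toggled by the same kind of budget check.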