125 research outputs found

    Volvo Car Corporation and its Potential Market in East Coast China

    With the rapid growth of the Chinese economy, China has surpassed the United States to become the world's largest automotive market. After Zhejiang Geely Holding Group acquired Volvo Car Corporation in 2010, China was targeted as Volvo's future largest market worldwide for both automobile production and sales. The purpose of this study was to identify the potential market for Volvo Car Corporation in the east coast region of China by studying Chinese consumers' attitudes toward automobiles and their automobile consumption behavior. The study used questionnaires administered to a selected group of Chinese consumers and interviews with a journalist and two car dealers in Shanghai. The analysis of the questionnaires and interviews showed that Volvo Car Corporation faced several problems in China's east coast region and was losing consumers: its safety-focused selling point did not attract Chinese consumers, and its advertising did not create brand loyalty among them. Thus, Volvo Car Corporation's current and future market among both private and public consumers is at risk of failure.

    Planck Constraints on Holographic Dark Energy

    We perform a detailed investigation of the cosmological constraints on the holographic dark energy (HDE) model using the Planck data. HDE can provide a good fit to the Planck high-l (l>40) temperature power spectrum, while the discrepancy at l=20-40 found in LCDM remains unsolved in HDE. The Planck data alone lead to a strong and reliable constraint on the HDE parameter c. At 68% CL, we get c=0.508+-0.207 with Planck+WP+lensing, favoring a present phantom HDE at more than 2 sigma CL. By comparison, using WMAP9 alone we cannot obtain an interesting constraint on c. By combining Planck+WP with the BAO measurements from 6dFGS+SDSS DR7(R)+BOSS DR9, the H0 measurement from HST, and the SNLS3 and Union2.1 SNIa data sets, we get the 68% CL constraints c=0.484+-0.070, 0.474+-0.049, 0.594+-0.051 and 0.642+-0.066, respectively. These constraints can be improved by 2%-15% if we further add the Planck lensing data. Compared with the WMAP9 results, the Planck results reduce the error by 30%-60% and prefer a phantom-like HDE at higher CL. We find no evident tension between Planck and BAO/HST. In particular, the strong correlation between Omega_m h^3 and the dark energy parameters is helpful in relieving the tension between Planck and HST. The residual chi^2_{Planck+WP+HST} - chi^2_{Planck+WP} is 7.8 in LCDM, and is reduced to 1.0 or 0.3 if we switch dark energy to the w model or the holographic model. We find that SNLS3 is in tension with all other data sets; for Planck+WP, WMAP9 and BAO+HST, the corresponding Delta chi^2 is 6.4, 3.5 and 4.1, respectively. In contrast, Union2.1 is consistent with these data sets, but the combination Union2.1+BAO+HST is in tension with Planck+WP+lensing, corresponding to a Delta chi^2 of 8.6 (1.4% probability). Thus, it is not reasonable to perform an all-combined (CMB+SNIa+BAO+HST) analysis for HDE when using the Planck data. Our tightest self-consistent constraint is c=0.495+-0.039, obtained from Planck+WP+BAO+HST+lensing. Comment: 29 pages, 11 figures, 3 tables; version accepted for publication in JCAP
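    For orientation, the HDE model constrained above fixes the dark energy density by the future event horizon, with c as the single model parameter. The following standard expressions (a sketch of the usual formulation in the literature, not reproduced from this paper) show why the values c ~ 0.5-0.6 quoted above imply phantom-like behavior today:

        \rho_{\rm de} = 3\,c^{2} M_{p}^{2}\, R_{h}^{-2}, \qquad
        R_{h}(a) = a \int_{a}^{\infty} \frac{\mathrm{d}a'}{H(a')\, a'^{2}}, \qquad
        w_{\rm de} = -\frac{1}{3} - \frac{2}{3c}\sqrt{\Omega_{\rm de}} .

    With Omega_de ~ 0.7 today, w_de drops below -1 (phantom) whenever c < sqrt(Omega_de) ~ 0.84, which is consistent with the preference for a phantom-like HDE reported in the constraints above.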

    Investigating the Effects of Robot Engagement Communication on Learning from Demonstration

    Robot Learning from Demonstration (RLfD) is a technique for robots to derive policies from instructors' examples. Although the reciprocal effects of student engagement on teacher behavior are widely recognized in the educational community, it is unclear whether the same phenomenon holds true for RLfD. To fill this gap, we first design three types of robot engagement behavior (attention, imitation, and a hybrid of the two) based on the learning literature. We then conduct, in a simulation environment, a within-subjects user study to investigate the impact of different robot engagement cues on humans compared to a "without-engagement" condition. Results suggest that engagement communication significantly changes humans' estimation of the robot's capability and significantly raises their expectations of the learning outcomes, even though we do not run actual learning algorithms in the experiments. Moreover, imitation behavior affects humans more than attention does on all metrics, while their combination has the most profound influence on humans. We also find that communicating engagement via imitation or the combined behavior significantly improves humans' perception of the quality of demonstrations, even if all demonstrations are of the same quality. Comment: Under review

    Weakly Supervised Point Clouds Transformer for 3D Object Detection

    The annotation of 3D datasets is required for semantic segmentation and object detection in scene understanding. In this paper we present a framework for weakly supervised training of a point cloud transformer for 3D object detection. The aim is to decrease the amount of supervision needed for training, given the high cost of annotating 3D datasets. We propose an Unsupervised Voting Proposal Module, which learns from randomly preset anchor points and uses a voting network to select high-quality anchor points. It then distills this information into student and teacher networks. For the student network, we apply a ResNet to efficiently extract local characteristics, although this can lose much of the global information. To provide an input that incorporates both global and local information to the student network, we adopt the self-attention mechanism of the transformer to extract global features and ResNet layers to extract region proposals. The teacher network supervises the classification and regression of the student network using a model pre-trained on ImageNet. On the challenging KITTI datasets, our method achieves the highest average precision compared with the most recent weakly supervised 3D object detectors. Comment: International Conference on Intelligent Transportation Systems (ITSC), 202
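    As a rough illustration of the teacher-student setup described above, the following is a generic PyTorch sketch under assumed shapes and module names (StudentDetector, distillation_loss, and the tiny convolutional stand-in for the ResNet branch are all illustrative, not the authors' code): the student fuses local point features with global self-attention features, and a distillation loss ties its class and box predictions to the teacher's outputs.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class StudentDetector(nn.Module):
            """Hypothetical student: local conv features fused with global self-attention."""
            def __init__(self, in_dim=3, feat_dim=128, num_classes=3):
                super().__init__()
                self.local = nn.Sequential(              # stand-in for the ResNet branch
                    nn.Conv1d(in_dim, feat_dim, 1), nn.ReLU(),
                    nn.Conv1d(feat_dim, feat_dim, 1), nn.ReLU())
                self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
                self.cls_head = nn.Linear(feat_dim, num_classes)
                self.box_head = nn.Linear(feat_dim, 7)   # (x, y, z, w, l, h, yaw)

            def forward(self, points):                   # points: (B, N, 3)
                local = self.local(points.transpose(1, 2)).transpose(1, 2)  # (B, N, C) local features
                global_feats, _ = self.attn(local, local, local)            # (B, N, C) global features
                fused = local + global_feats                                 # combine local + global
                pooled = fused.mean(dim=1)                                   # (B, C) scene descriptor
                return self.cls_head(pooled), self.box_head(pooled)

        def distillation_loss(student_cls, student_box, teacher_cls, teacher_box, T=2.0):
            """Soft-label classification distillation plus box regression toward teacher outputs."""
            kd = F.kl_div(F.log_softmax(student_cls / T, dim=-1),
                          F.softmax(teacher_cls / T, dim=-1),
                          reduction="batchmean") * T * T
            reg = F.smooth_l1_loss(student_box, teacher_box)
            return kd + reg

        # toy usage on random points (no real KITTI data or real teacher here)
        student = StudentDetector()
        pts = torch.randn(2, 1024, 3)
        cls_pred, box_pred = student(pts)
        with torch.no_grad():                            # placeholders for teacher predictions
            teacher_cls, teacher_box = torch.randn_like(cls_pred), torch.randn_like(box_pred)
        loss = distillation_loss(cls_pred, box_pred, teacher_cls, teacher_box)
        loss.backward()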

    Charting the Future of AI in Project-Based Learning: A Co-Design Exploration with Students

    The increasing use of Artificial Intelligence (AI) by students in learning presents new challenges for assessing their learning outcomes in project-based learning (PBL). This paper introduces a co-design study to explore the potential of students' AI usage data as a novel material for PBL assessment. We conducted workshops with 18 college students, encouraging them to speculate about an alternative world in which they could freely employ AI in PBL while needing to report this process so that their skills and contributions could be assessed. Our workshops yielded various scenarios of students' use of AI in PBL and ways of analyzing these uses, grounded in the students' visions of how educational goals might transform. We also found that students with different attitudes toward AI exhibited distinct preferences for how the use of AI should be analyzed and understood. Based on these findings, we discuss future research opportunities in student-AI interaction and in understanding AI-enhanced learning. Comment: Conditionally accepted by CHI '2

    Storyfier: Exploring Vocabulary Learning Support with Text Generation Models

    Vocabulary learning support tools have widely exploited existing materials, e.g., stories or video clips, as contexts to help users memorize each target word. However, these tools cannot provide a coherent context for an arbitrary set of target words of a learner's interest, and they seldom help learners practice word usage. In this paper, we work with teachers and students to iteratively develop Storyfier, which leverages text generation models to enable learners to read a generated story that covers any target words, complete a story cloze test, and use these words to write a new story with adaptive AI assistance. Our within-subjects study (N=28) shows that learners generally favor the generated stories for connecting target words, and the writing assistance for easing their learning workload. However, in the read-cloze-write learning sessions, participants using Storyfier perform worse in recalling and using target words than when learning with a baseline tool without our AI features. We discuss insights into supporting learning tasks with generative models. Comment: To appear at the 2023 ACM Symposium on User Interface Software and Technology (UIST); 16 pages (7 figures, 23 tables)
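    A minimal sketch of the generate-then-cloze flow described above, under stated assumptions: generate_story and make_cloze are hypothetical helpers (not Storyfier's actual API), and the generate callable stands in for whatever text generation model is used; the toy "model" below only exists so the example runs without any external service.

        import re

        def generate_story(target_words, generate):
            """Ask a text generation model for a short story covering all target words.
            `generate` is any callable mapping a prompt string to text (hypothetical placeholder)."""
            prompt = ("Write a short, coherent story that naturally uses every one of "
                      "these words: " + ", ".join(target_words))
            return generate(prompt)

        def make_cloze(story, target_words):
            """Blank out the first occurrence of each target word for a cloze test."""
            cloze = story
            for word in target_words:
                cloze = re.sub(rf"\b{re.escape(word)}\b", "_" * len(word), cloze,
                               count=1, flags=re.IGNORECASE)
            return cloze

        # toy usage with a fake "model" so the example is self-contained
        targets = ["reluctant", "bargain", "vivid"]
        fake_model = lambda prompt: ("She was reluctant to strike a bargain, "
                                     "but the vivid poster changed her mind.")
        story = generate_story(targets, fake_model)
        print(make_cloze(story, targets))   # target words appear as blanks to fill in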

    Ada-TTA: Towards Adaptive High-Quality Text-to-Talking Avatar Synthesis

    We are interested in a novel task, namely low-resource text-to-talking-avatar synthesis. Given only a few-minute-long talking-person video with its audio track as the training data and arbitrary texts as the driving input, we aim to synthesize high-quality talking portrait videos corresponding to the input text. This task has broad application prospects in the digital human industry but has not been technically achieved yet due to two challenges: (1) it is challenging for a traditional multi-speaker text-to-speech (TTS) system to mimic the timbre of out-of-domain audio; (2) it is hard to render high-fidelity and lip-synchronized talking avatars with limited training data. In this paper, we introduce Adaptive Text-to-Talking Avatar (Ada-TTA), which (1) designs a generic zero-shot multi-speaker TTS model that well disentangles the text content, timbre, and prosody; and (2) embraces recent advances in neural rendering to achieve realistic audio-driven talking face video generation. With these designs, our method overcomes the aforementioned two challenges and is able to generate identity-preserving speech and realistic talking-person videos. Experiments demonstrate that our method can synthesize realistic, identity-preserving, and audio-visually synchronized talking avatar videos. Comment: 6 pages, 3 figures
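    Purely schematically, the two-stage pipeline this abstract describes can be summarized with the hypothetical interfaces below (ZeroShotTTS, TalkingFaceRenderer, and TextToTalkingAvatar are illustrative names only, not Ada-TTA's actual code): text is first converted to speech in the target speaker's timbre, and that speech then drives the portrait renderer.

        from dataclasses import dataclass
        from typing import Protocol

        class ZeroShotTTS(Protocol):
            """Stage 1 (hypothetical interface): synthesize speech in the target
            speaker's timbre from text plus a short reference audio clip."""
            def synthesize(self, text: str, reference_audio: bytes) -> bytes: ...

        class TalkingFaceRenderer(Protocol):
            """Stage 2 (hypothetical interface): render a lip-synced portrait video
            driven by the synthesized audio."""
            def render(self, audio: bytes, portrait_video: bytes) -> bytes: ...

        @dataclass
        class TextToTalkingAvatar:
            tts: ZeroShotTTS
            renderer: TalkingFaceRenderer

            def __call__(self, text: str, reference_audio: bytes,
                         portrait_video: bytes) -> bytes:
                audio = self.tts.synthesize(text, reference_audio)    # text -> identity-preserving speech
                return self.renderer.render(audio, portrait_video)    # speech -> lip-synced video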