    Heroes: Promoting and Branding an Original Designer Toy Series Using Motion Graphics

    Get PDF
    Heroes is a series of designer toys designed for my future business, T-Model. This project aims to create new types of designer toys and to examine how effectively they can promote an original business and how powerfully they can be presented with current digital media. The Heroes series is inspired by simple human-like characters such as Lego, Mr. Potato Head, and Kidrobot. The series consists of three models representing different time periods, countries, and historical figures. The project is divided into three major steps: 3D character design, three motion graphic pieces, and DVD publishing. First, I created three character prototypes with unique patterns to represent their personalities and backgrounds; I also designed their weapons and outfits to show their distinctive features. Second, I designed three promotional videos to introduce each toy model. The final step is to publish the videos in DVD format and submit them to toy stores and potential buyers. The final presentation exhibits three motion graphic pieces, each no longer than 45 seconds. These videos focus on promoting my three designer toys and my future business, T-Model. I expect this project to be the cornerstone of the T-Model studio.

    Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices

    Full text link
    A recent trend in DNN development is to extend the reach of deep learning applications to platforms that are more resource- and energy-constrained, e.g., mobile devices. These endeavors aim to reduce the DNN model size and improve the hardware processing efficiency, and have resulted in DNNs that are much more compact in their structures and/or have high data sparsity. These compact or sparse models differ from traditional large ones in that their layer shapes and sizes vary much more widely, and they often require specialized hardware to exploit sparsity for performance improvement. Thus, many DNN accelerators designed for large DNNs do not perform well on these models. In this work, we present Eyeriss v2, a DNN accelerator architecture designed for running compact and sparse DNNs. To deal with the widely varying layer shapes and sizes, it introduces a highly flexible on-chip network, called hierarchical mesh, that can adapt to the different amounts of data reuse and bandwidth requirements of different data types, which improves the utilization of the computation resources. Furthermore, Eyeriss v2 can process sparse data directly in the compressed domain for both weights and activations, and is therefore able to improve both processing speed and energy efficiency on sparse models. Overall, with sparse MobileNet, Eyeriss v2 in a 65-nm CMOS process achieves a throughput of 1470.6 inferences/sec and 2560.3 inferences/J at a batch size of 1, which is 12.6x faster and 2.5x more energy efficient than the original Eyeriss running MobileNet. We also present an analysis methodology called Eyexam that provides a systematic way of understanding the performance limits of DNN processors as a function of specific characteristics of the DNN model and accelerator design; it applies these characteristics as sequential steps to increasingly tighten the bound on the performance limits.
    Comment: Accepted for publication in IEEE Journal on Emerging and Selected Topics in Circuits and Systems. This extended version on arXiv also includes Eyexam in the appendix.
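The compressed-domain idea can be illustrated at a high level in software. The sketch below is not the Eyeriss v2 microarchitecture; it is a minimal Python illustration, assuming a CSR-style compressed format, of how a sparse weight matrix can be multiplied against activations while touching only the stored nonzeros, which is where the speed and energy gains come from:

```python
import numpy as np

def dense_to_csr(w, threshold=0.0):
    """Compress a weight matrix into CSR form: values, column indices, row pointers."""
    values, col_idx, row_ptr = [], [], [0]
    for row in w:
        nz = np.nonzero(np.abs(row) > threshold)[0]
        values.extend(row[nz])
        col_idx.extend(nz)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    """Multiply in the compressed domain: only stored (nonzero) weights are read."""
    y = np.zeros(len(row_ptr) - 1)
    for r in range(len(y)):
        start, end = row_ptr[r], row_ptr[r + 1]
        y[r] = values[start:end] @ x[col_idx[start:end]]
    return y

w = np.array([[0., 2., 0.], [1., 0., 0.]])
vals, cols, ptrs = dense_to_csr(w)
print(csr_matvec(vals, cols, ptrs, np.array([3., 4., 5.])))  # [8. 3.]
```

In hardware, the same principle lets the accelerator skip fetching and multiplying zero operands entirely, rather than decompressing first.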

    Leveraging Language ID to Calculate Intermediate CTC Loss for Enhanced Code-Switching Speech Recognition

    Full text link
    In recent years, end-to-end speech recognition has emerged as a technology that integrates the acoustic, pronunciation dictionary, and language model components of the traditional automatic speech recognition pipeline. It can achieve human-like recognition without building a pronunciation dictionary in advance. However, because training data for code-switching is relatively scarce, the performance of ASR models tends to degrade drastically when they encounter this phenomenon. Most past studies have reduced the model's learning complexity by splitting the code-switching task into multiple single-language tasks and learning the domain-specific knowledge of each language separately. Following this line of work, in this paper we introduce language identification information into the middle layer of the ASR model's encoder. We aim to generate acoustic features that imply language distinctions in a more implicit way, reducing the model's confusion when dealing with language switching.
    Comment: Accepted to The 28th International Conference on Technologies and Applications of Artificial Intelligence (TAAI), in Chinese language.
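To make the idea concrete, here is a minimal PyTorch-style sketch of attaching an auxiliary language-ID CTC head to a middle encoder layer. The GRU encoder, layer sizes, 0.3 loss weight, and names like `lid_head` are illustrative assumptions, not details taken from the paper:

```python
import torch
import torch.nn as nn

class EncoderWithIntermediateLID(nn.Module):
    """Toy encoder: an auxiliary CTC head at a middle layer predicts language-ID
    tokens, nudging intermediate features to encode language distinctions."""
    def __init__(self, feat_dim=80, hidden=256, vocab=100, n_langs=3, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.GRU(feat_dim if i == 0 else hidden, hidden, batch_first=True)
             for i in range(n_layers)]
        )
        self.mid = n_layers // 2
        self.lid_head = nn.Linear(hidden, n_langs + 1)  # +1 for the CTC blank
        self.asr_head = nn.Linear(hidden, vocab + 1)

    def forward(self, x):
        mid_out = None
        for i, rnn in enumerate(self.layers):
            x, _ = rnn(x)
            if i == self.mid:
                mid_out = x
        return self.asr_head(x), self.lid_head(mid_out)

ctc = nn.CTCLoss(blank=0)
# Training would combine the two CTC losses (log-softmax and the (T, N, C)
# layout required by nn.CTCLoss are omitted here for brevity):
#   loss = ctc(asr_logp, tokens, in_lens, tok_lens) \
#        + 0.3 * ctc(lid_logp, lang_ids, in_lens, lang_lens)
```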

    Inductorless CMOS Receiver Front-End Circuits for 10-Gb/s Optical Communications

    Get PDF
    In this paper, a 10-Gb/s inductorless CMOS receiver front end is presented, including a transimpedance amplifier and a limiting amplifier. The transimpedance amplifier incorporates Regulated Cascode (RGC), active-inductor peaking, and intersecting active feedback circuits to achieve a transimpedance gain of 56 dB and a bandwidth of 8.27 GHz with a power dissipation of 35 mW. The limiting amplifier employs interleaving active feedback to achieve a differential voltage gain of 44.5 dB and a bandwidth of 10.3 GHz while consuming 226 mW. Both circuits are realized in 0.18-μm CMOS technology with a 1.8-V supply.
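As a quick sanity check on the reported figure, the 56 dB transimpedance gain can be converted back to ohms. This assumes the common dBΩ convention for transimpedance (20·log10 of the gain in ohms), which the abstract does not state explicitly:

```python
import math

# Assuming the 56 dB transimpedance gain is in dB-ohms: 20*log10(Zt / 1 ohm).
zt_ohms = 10 ** (56 / 20)              # ~631 ohms of transimpedance
back_to_db = 20 * math.log10(zt_ohms)  # round-trip check: 56.0 dB
print(f"Zt ~= {zt_ohms:.0f} ohms ({back_to_db:.1f} dB-ohm)")
```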

    MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

    Full text link
    Some recent works revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks, in which input examples are intentionally perturbed to fool DNNs. In this work, we revisit the DNN training process that includes adversarial examples in the training dataset so as to improve the DNN's resilience to adversarial attacks, namely adversarial training. Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working zones in resisting the attack. Based on this observation, we propose a multi-strength adversarial training method (MAT) that combines adversarial training examples of different strengths to defend against adversarial attacks. Two training structures, mixed MAT and parallel MAT, are developed to facilitate the tradeoff between training time and memory occupation. Our results show that MAT can substantially minimize the accuracy degradation of deep learning systems under adversarial attacks on MNIST, CIFAR-10, CIFAR-100, and SVHN.
    Comment: 6 pages, 4 figures, 2 tables.
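The mixed-MAT idea can be sketched in a few lines of PyTorch. FGSM is used here only as a stand-in attack and the strength values are placeholders; the paper's exact training structures and perturbation levels are not reproduced:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: one illustrative way to craft adversarial
    examples at a chosen strength eps (the abstract does not mandate FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def mixed_mat_step(model, optimizer, x, y, strengths=(0.01, 0.03, 0.1)):
    """Sketch of the 'mixed' variant: one model trains on clean examples plus
    adversarial examples of every strength in a single update."""
    batches = [x] + [fgsm(model, x, y, eps) for eps in strengths]
    optimizer.zero_grad()  # clear gradients accumulated while crafting attacks
    loss = sum(F.cross_entropy(model(b), y) for b in batches) / len(batches)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A parallel variant would instead dedicate separate training paths to each strength, trading extra memory for shorter training time, per the tradeoff the abstract describes.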

    Using Peer-to-Peer Technology for Knowledge Sharing in Communities of Practices

    Get PDF
    Communities of Practice (CoPs) are informal structures within organizations that bind people together through informal relationships and the sharing of expertise and experience. As such, they are effective tools for creating and sharing organizational knowledge, and organizations are increasingly adopting them as part of their knowledge management strategies. In this paper, we examine the knowledge-sharing characteristics and roles of CoPs and develop a peer-to-peer knowledge-sharing architecture that matches the behavioral characteristics of CoP members. We also propose a peer-to-peer knowledge-sharing tool called KTella that enables members of CoPs to voluntarily share and retrieve knowledge more effectively.
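The abstract gives no protocol details for KTella, so the following is a purely hypothetical Python sketch of the classic unstructured peer-to-peer pattern (Gnutella-style query flooding with a TTL) that a tool in this family might use; the class and method names are invented for illustration:

```python
class Peer:
    """Hypothetical sketch of unstructured P2P knowledge sharing; not KTella's
    actual design, which the abstract does not describe."""
    def __init__(self, name):
        self.name = name
        self.docs = {}       # locally shared knowledge: title -> content
        self.neighbors = []  # directly connected peers

    def share(self, title, content):
        self.docs[title] = content

    def query(self, keyword, ttl=3, seen=None):
        """Flood the query to neighbors, bounded by a time-to-live."""
        seen = seen if seen is not None else set()
        if self.name in seen or ttl == 0:
            return []
        seen.add(self.name)
        hits = [(self.name, t) for t in self.docs if keyword in t.lower()]
        for n in self.neighbors:
            hits += n.query(keyword, ttl - 1, seen)
        return hits

a, b = Peer("alice"), Peer("bob")
a.neighbors = [b]
b.share("CTC loss notes", "...")
print(a.query("ctc"))  # [('bob', 'CTC loss notes')]
```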