251 research outputs found

    Made in Italy (by the Chinese) : economic restructuring and the politics of migration

    Get PDF
    People around the world are on the move and settling in new, unexpected places. In Prato, Italy, Chinese immigrants now run most of the city’s textiles-apparel companies and even subcontract for such leading designers as Giorgio Armani and Dolce & Gabbana. Italian products once made by Italian workers are now increasingly made…by the Chinese! I argue that this development resulted from an uncanny synchronicity between their business approach and the demands of Italy’s local, family-based, small-batch production environment. In other words, the Chinese thrived because they fit in well with the unique makeup and demands of Italian industry.

    Style Transfer to Calvin and Hobbes comics using Stable Diffusion

    Full text link
    This project report summarizes our journey to fine-tune Stable Diffusion on a dataset of Calvin and Hobbes comics. The purpose is to convert any given input image into the comic style of Calvin and Hobbes, essentially performing style transfer. We train stable-diffusion-v1.5 using Low-Rank Adaptation (LoRA) to speed up the fine-tuning process. The diffusion itself operates in the latent space of a Variational Autoencoder (VAE), with a U-Net performing the denoising. Our results were visually appealing given the amount of training time and the quality of the input data. Comment: Project report for ECE 371Q Digital Image Processing at UT Austin
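    As a rough illustration of the inference side of such a setup, the sketch below uses the Hugging Face diffusers img2img pipeline with stable-diffusion-v1.5 and a LoRA adapter; the LoRA checkpoint path, prompt, and strength value are placeholders, not the authors' actual artifacts or settings.

```python
# A minimal sketch, assuming a LoRA adapter trained on Calvin and Hobbes panels
# already exists; paths and generation parameters are illustrative only.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load the base stable-diffusion-v1.5 weights and the fine-tuned LoRA adapter.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/calvin-and-hobbes-lora")  # hypothetical path

# Style-transfer an arbitrary input image by partially re-noising it and
# denoising it again with the LoRA-adapted U-Net.
init_image = Image.open("input.jpg").convert("RGB").resize((512, 512))
result = pipe(
    prompt="a scene in the style of Calvin and Hobbes comics",
    image=init_image,
    strength=0.6,        # how strongly the original image is re-noised
    guidance_scale=7.5,
).images[0]
result.save("stylized.png")
```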

    CALVIN: A Benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks

    Full text link
    General-purpose robots coexisting with humans in their environment must learn to relate human language to their perceptions and actions to be useful in a range of daily tasks. Moreover, they need to acquire a diverse repertoire of general-purpose skills that allow composing long-horizon tasks by following unconstrained language instructions. In this paper, we present CALVIN (Composing Actions from Language and Vision), an open-source simulated benchmark for learning long-horizon language-conditioned tasks. Our aim is to make it possible to develop agents that can solve many robotic manipulation tasks over a long horizon, from onboard sensors, and specified only via human language. CALVIN tasks are more complex in terms of sequence length, action space, and language than existing vision-and-language task datasets, and the benchmark supports flexible specification of sensor suites. We evaluate agents zero-shot on novel language instructions and on novel environments and objects. We show that a baseline model based on multi-context imitation learning performs poorly on CALVIN, suggesting that there is significant room for developing innovative agents that learn to relate human language to their world models with this benchmark. Comment: Accepted for publication at IEEE Robotics and Automation Letters (RAL). Code, models and dataset available at http://calvin.cs.uni-freiburg.de
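    To make the long-horizon evaluation protocol concrete, here is an illustrative sketch (not the actual CALVIN API) of scoring one instruction chain: the agent is given a sequence of free-form instructions and credit stops at the first subtask it fails. The env/policy interfaces and the task_success checker are hypothetical stand-ins.

```python
# A minimal sketch of chained, language-conditioned evaluation; all interfaces
# (env, policy, task_success) are assumed placeholders, not the benchmark's API.
def evaluate_chain(env, policy, instructions, task_success, max_steps_per_task=360):
    """Return how many consecutive subtasks in the chain were completed."""
    obs = env.reset()
    completed = 0
    for instruction in instructions:               # e.g. "open the drawer", ...
        solved = False
        for _ in range(max_steps_per_task):
            action = policy.act(obs, instruction)  # onboard sensors + language only
            obs, _, _, info = env.step(action)
            if task_success(info, instruction):    # hypothetical success check
                solved = True
                break
        if not solved:
            break                                  # the chain stops at the first failure
        completed += 1
    return completed
```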

    Genome sequence of strain HIMB30, a novel member of the marine Gammaproteobacteria

    Get PDF
    Strain HIMB30 was isolated from coastal Hawaii seawater by extinction culturing in seawater-based oligotrophic medium. It is a phylogenetically unique member of the class Gammaproteobacteria that is only distantly related to its closest cultured relatives. Here we present the genome sequence of strain HIMB30, including genes for proteorhodopsin-based phototrophy and the Calvin-Benson-Bassham cycle.

    Telehealth Sensor Authentication Through Memory Chip Variability

    Get PDF
    In light of the worldwide COVID-19 pandemic, the need for secure and readily available remote patient monitoring has never been more important. Rural and low-income communities in particular have been severely impacted by the lack of access to in-person healthcare, creating a need for remote patient monitoring and virtual health visits that broaden access to premier care. In this paper, we propose hardware security primitives as a viable solution to the security challenges of the telehealth market. We have created a novel solution, called the High-Low (HiLo) method, that generates physical unclonable function (PUF) signatures based on process variation within flash memory in order to uniquely identify and authenticate remote sensors. The HiLo method consumes 20× less power than conventional authentication schemes, has an average latency of only 39 ms for signature generation, and can be readily implemented through firmware on ONFI 2.2 compliant off-the-shelf NAND flash memory chips. It generates 512-bit signatures with an average error rate of 5.9 × 10⁻⁴, while also adapting to flash chip aging. Due to its low latency, low error rate, and high power efficiency, we believe that the HiLo method could help advance remote patient monitoring by accurately and efficiently authenticating remote health sensors.
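    A minimal sketch of the verification step implied by such a scheme follows: the sensor regenerates its flash-based PUF signature and the verifier accepts it when the Hamming distance to the enrolled signature falls below a tolerance derived from the bit-error rate. The tolerance value and the read_hilo_signature routine mentioned in the comments are assumptions, not details from the paper.

```python
# Sketch of PUF-signature matching; the 16-bit tolerance is illustrative and
# would in practice be tuned from the reported ~5.9e-4 bit-error rate.
SIG_BITS = 512
MAX_ERROR_BITS = 16  # assumed tolerance for accepting a regenerated signature

def hamming_distance(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length signatures."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def authenticate(enrolled_sig: bytes, fresh_sig: bytes) -> bool:
    """Accept the sensor if its regenerated signature matches closely enough."""
    assert len(enrolled_sig) == len(fresh_sig) == SIG_BITS // 8
    return hamming_distance(enrolled_sig, fresh_sig) <= MAX_ERROR_BITS

# Usage: enrolled_sig comes from one-time enrollment; fresh_sig would be produced
# on demand by the sensor firmware, e.g. fresh_sig = read_hilo_signature(chip)
# (read_hilo_signature is a hypothetical stand-in for the on-device routine).
```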

    Evaluation of Parameter-Scaling for Efficient Deep Learning on Small Satellites

    Get PDF
    Parameter-scaling techniques change the number of parameters in a machine-learning model in an effort to make the network more amenable to different device types or accuracy requirements. This research compares the performance of two such techniques. NeuralScale is a neural architecture search method that claims to generate deep neural networks suited to resource-constrained devices. It shrinks a network to a target number of parameters by adjusting the width of each layer independently, achieving higher accuracy than previous methods. The NeuralScale algorithm is compared to the baseline uniform scaling of MobileNet-style models, where the width of every layer is scaled by the same factor across the network. Measurements of the latency and runtime memory required for inference were gathered on the NVIDIA Jetson TX2 and Jetson AGX Xavier embedded GPUs using NVIDIA TensorRT. Measurements were also gathered on the Raspberry Pi 4 embedded CPU featuring ARM Cortex-A72 cores using ONNX Runtime. VGG-11, MobileNetV2, Pre-Activation ResNet-18, and ResNet-50 were all scaled to 0.25×, 0.50×, 0.75×, and 1.00× the original number of parameters. On embedded GPUs, this research finds that NeuralScale models do offer higher accuracy, but they run slower and consume much more runtime memory during inference than their uniform-scaling equivalents. On average, NeuralScale is 40% as efficient as uniform scaling in terms of accuracy per megabyte of runtime memory, and NeuralScale uses 2.7× the runtime memory per parameter of uniform scaling. On the embedded CPU, NeuralScale is slightly more efficient than uniform scaling in terms of accuracy per megabyte of memory, using essentially the same amount of memory per parameter; however, inference latency increases by more than 2.5× on average. Importantly, parameter count does not guarantee performance in terms of runtime-memory usage between the scaling methods on embedded GPUs, while latency grows significantly on embedded CPUs.
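    The sketch below illustrates the uniform-scaling baseline and the CPU latency measurement described above: MobileNetV2 is built with a reduced width multiplier, exported to ONNX, and timed with ONNX Runtime (as on the Raspberry Pi 4). The 0.5 multiplier, file names, and run counts are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: uniform width scaling plus ONNX Runtime latency measurement.
import time
import numpy as np
import torch
import torchvision
import onnxruntime as ort

# Uniform scaling: every layer's width is shrunk by the same factor.
model = torchvision.models.mobilenet_v2(width_mult=0.5).eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "mobilenetv2_050.onnx",
                  input_names=["input"], output_names=["logits"])

sess = ort.InferenceSession("mobilenetv2_050.onnx",
                            providers=["CPUExecutionProvider"])
x = np.random.randn(1, 3, 224, 224).astype(np.float32)

# Warm up, then average single-image inference latency.
for _ in range(10):
    sess.run(None, {"input": x})
runs = 100
start = time.perf_counter()
for _ in range(runs):
    sess.run(None, {"input": x})
print(f"mean latency: {(time.perf_counter() - start) / runs * 1000:.2f} ms")
```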