Impact of experimental conditions on material response during forming of steel in semi-solid state
Semi-solid forming is an effective near-net-shape process for producing components with complex geometry in fewer forming steps. It benefits from the complex thixotropic behaviour of semi-solids. However, the consequences of this behaviour for the flow during thixoforming are still neither completely characterized nor fully understood, especially for high-melting-point alloys. The study described in this paper investigates thixoextrusion of C38 low-carbon steel using dies at temperatures much lower than the slug temperature. Four process parameters were studied: the initial slug temperature, the die temperature, the ram speed and the presence of a ceramic layer at the tool/material interface. The extruded parts were found to have an exact shape and a good surface state only if the temperature was below a certain value. This critical temperature is not an intrinsic material property, since its value depends on the die temperature and on the presence of the Ceraspray© layer. Two kinds of flow were highlighted: a homogeneous flow controlled by the behaviour of the solid skeleton and characterized by a positive strain rate sensitivity, and a non-homogeneous flow (macroscopic liquid/solid phase separation) dominated by the flow of the free liquid. With decreasing ram speed, heat losses increase, so the overall consistency of the material improves, leading to an apparent negative strain rate sensitivity. Finally, some ways to optimise thixoforming are proposed.
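For reference, the strain rate sensitivity invoked above has a conventional definition (standard notation, not stated in the abstract): with flow stress $\sigma$ and strain rate $\dot{\varepsilon}$,

```latex
m = \frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}}
```

A positive $m$ corresponds to the homogeneous, solid-skeleton-controlled flow, while the apparent negative $m$ reported above is not intrinsic: at lower ram speeds the longer contact time increases heat losses, raising the material's consistency and hence the measured stress.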
P4-compatible High-level Synthesis of Low Latency 100 Gb/s Streaming Packet Parsers in FPGAs
Packet parsing is a key step in SDN-aware devices. Packet parsers in SDN
networks need to be both reconfigurable and fast, to support the evolving
network protocols and the increasing multi-gigabit data rates. The combination
of packet processing languages with FPGAs seems to be the perfect match for
these requirements. In this work, we develop an open-source FPGA-based
configurable architecture for arbitrary packet parsing to be used in SDN
networks. We generate low latency and high-speed streaming packet parsers
directly from a packet processing program. Our architecture is pipelined and
entirely modeled using templated C++ classes. The pipeline layout is derived
from a parser graph that corresponds to a P4 program after a series of graph
transformation rounds. The RTL code is generated from the C++ description using
Xilinx Vivado HLS and synthesized with Xilinx Vivado. Our architecture achieves
100 Gb/s data rate in a Xilinx Virtex-7 FPGA while reducing the latency by 45%
and the LUT usage by 40% compared to the state-of-the-art.
Comment: Accepted for publication at the 26th ACM/SIGDA International
Symposium on Field-Programmable Gate Arrays (FPGA 2018), February 25-27, 2018,
Monterey Marriott Hotel, Monterey, California. 7 pages, 7 figures, 1 table.
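The parse-graph-to-pipeline idea above can be sketched in plain C++ (a hedged illustration only: the class and field names below are invented here, and the actual design is templated HLS hardware code, not this software model). Each stage extracts one fixed-size header and a match field that selects the next state of the parse graph, here Ethernet followed by IPv4:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch: fields parsed out of the packet byte stream.
struct ParsedHeaders {
    bool has_eth = false;
    bool has_ipv4 = false;
    uint16_t eth_type = 0;   // Ethernet EtherType (match field)
    uint8_t ipv4_proto = 0;  // IPv4 protocol field
};

// One pipeline stage, parameterized by header size, mirroring a
// templated-class-per-state organization.
template <size_t HeaderBytes>
struct Stage {
    // Extract HeaderBytes bytes of the header starting at `offset`.
    static std::array<uint8_t, HeaderBytes>
    extract(const std::vector<uint8_t>& pkt, size_t offset) {
        std::array<uint8_t, HeaderBytes> h{};
        for (size_t i = 0; i < HeaderBytes; ++i) h[i] = pkt.at(offset + i);
        return h;
    }
};

// Parse graph: Ethernet --(EtherType == 0x0800)--> IPv4.
ParsedHeaders parse(const std::vector<uint8_t>& pkt) {
    ParsedHeaders out;
    auto eth = Stage<14>::extract(pkt, 0);  // Ethernet header, 14 bytes
    out.has_eth = true;
    out.eth_type = static_cast<uint16_t>((uint16_t(eth[12]) << 8) | eth[13]);
    if (out.eth_type == 0x0800 && pkt.size() >= 14 + 20) {
        auto ip = Stage<20>::extract(pkt, 14);  // IPv4 header, no options
        out.has_ipv4 = true;
        out.ipv4_proto = ip[9];  // byte 9 of IPv4 header is the protocol
    }
    return out;
}
```

In the hardware version each `Stage` instance would be a pipeline segment processing a new packet every cycle; the software control flow above corresponds to the graph transformations that fix the pipeline layout at synthesis time.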
Quantity, Quality, and Relevance: Central Bank Research, 1990-2003
The authors document the research output of 34 central banks from 1990 to 2003, and use proxies of research inputs to measure the research productivity of central banks over this period. Results are obtained with and without controlling for quality and for policy relevance. The authors find that, overall, central banks have been hiring more researchers and publishing more research since 1990, with the United States accounting for more than half of all published central bank research output, although the European Central Bank is rapidly establishing itself as an important research centre. When controlling for research quality and relevance, the authors generally find that there is no clear relationship between the size of an institution and its productivity. They also find preliminary evidence of positive correlations between the policy relevance and the scientific quality of central bank research. There is only very weak evidence of a positive correlation between the quantity of external partnerships and the productivity of researchers in central banks.
PoET-BiN: Power Efficient Tiny Binary Neurons
The success of neural networks in image classification has inspired various
hardware implementations on embedded platforms such as Field Programmable Gate
Arrays, embedded processors and Graphical Processing Units. These embedded
platforms are constrained in terms of power, which is mainly consumed by the
Multiply Accumulate operations and the memory accesses for weight fetching.
Quantization and pruning have been proposed to address this issue. Though
effective, these techniques do not take into account the underlying
architecture of the embedded hardware. In this work, we propose PoET-BiN, a
Look-Up Table based power efficient implementation on resource constrained
embedded devices. A modified Decision Tree approach forms the backbone of the
proposed implementation in the binary domain. A LUT access consumes far less
power than the equivalent Multiply Accumulate operation it replaces, and the
modified Decision Tree algorithm eliminates the need for memory accesses. We
applied the PoET-BiN architecture to implement the classification layers of
networks trained on the MNIST, SVHN and CIFAR-10 datasets, with near
state-of-the-art results. The energy reduction for the classifier portion
reaches up to six orders of magnitude compared to a floating-point
implementation and up to three orders of magnitude when compared to recent
binary quantized neural networks.
Comment: Accepted in the MLSys 2020 conference.
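The LUT-for-MAC substitution described above can be illustrated with a minimal software model (a hedged sketch: `Lut6Neuron` and its interface are invented for illustration and are not the paper's implementation). A 6-input binary neuron is compiled offline into a 64-entry truth table, the way a single FPGA LUT6 would hold it, so inference is one lookup with no weight fetches from memory:

```cpp
#include <cstdint>

// Illustrative sketch: a 6-input binary neuron stored as a 64-entry
// look-up table. The table is precomputed offline from a learned binary
// decision function (e.g. one produced by a decision-tree training pass),
// so inference replaces a multiply-accumulate over fetched weights with
// a single table index.
struct Lut6Neuron {
    uint64_t table = 0;  // bit i = output for 6-bit input pattern i

    // Precompute the table from any 6-input binary function f.
    template <typename F>
    void compile(F f) {
        for (uint32_t i = 0; i < 64; ++i)
            if (f(i)) table |= (uint64_t(1) << i);
    }

    // Inference: index the table with the 6 packed input bits.
    bool operator()(uint32_t inputs6) const {
        return (table >> (inputs6 & 0x3F)) & 1;
    }
};
```

Larger fan-in is handled in the paper's setting by composing such units; the point of the model is only that evaluation cost is a constant-time lookup, independent of what function the table encodes.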
The radion in brane cosmology
We consider the homogeneous cosmological radion, which we define as the interbrane distance in a two-brane symmetric configuration. In a coordinate system where one of the branes is at rest, the junction conditions for the second (moving) brane directly give the (non-linear) equations of motion for the radion. We analyse the radion fluctuations and solve the non-linear dynamics in some simple cases of interest.
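As a hedged illustration of the definition used above (standard notation, assumed rather than taken from the paper), the homogeneous radion can be written as the proper interbrane distance along the extra dimension $y$ between the brane positions $y_1(t)$ and $y_2(t)$:

```latex
R(t) = \int_{y_1(t)}^{y_2(t)} \sqrt{g_{yy}(t, y)}\, \mathrm{d}y
```

With one brane held at rest, the time dependence of $R$ comes entirely from the trajectory of the moving brane, which is why its junction conditions directly yield the radion's equation of motion.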
Does globalization render national development models obsolete? The case of Quebec and coordinated market economies
The objective of this thesis is to study the tensions between globalization and national policy autonomy, especially in economic policy, in Quebec as elsewhere in the industrialized world. In this respect, liberal globalization is said to have become a straitjacket that subjects politics to capitalist logic, and hence to international competition, at the expense of social cohesion. This neo-functionalist logic is tied to the growth of trade in goods and services and the transnationalization of production that underlie globalization. Insofar as the contest of economic performance seemed to favour the United States from the mid-1990s onward, the popularity of the liberal market model was taken for granted (as was the rationality of emulating it), without regard to the cycles that shape the economy or to the particular circumstances attributable to that country's status as a "hyperpower". We will see that this inevitability of convergence is not absolute, insofar as it rests on the neoclassical postulate of an economy reduced to the market sphere. This overlooks the fact that the capitalist economic system also depends on a set of coordination mechanisms that refer back to institutions, and therefore to politics. The compromises between differing interests to which these give rise are unique to each national space, and as a result diverse configurations of capitalism are possible. Globalization certainly produces a confrontation between the forces of universalism and particularism, but in the end each socio-economic system adjusts to the imperatives of open borders according to its own historical characteristics.
And even on the terrain of neo-liberal discourse itself, the evidence shows that economies of the Anglo-Saxon configuration hold no monopoly on economic success: the Nordic countries, for example, have recorded performances just as enviable, if not more so, without experiencing the same social and financial imbalances. In Quebec as well, economic performance is quite honourable, while social inequalities are less pronounced than elsewhere in North America. Moreover, the Quebec development model is not directly threatened by globalization, but rather by ideological challenges arising from within. AUTHOR KEYWORDS: Globalization, Quebec, development models, economic policies, social policies
Does globalization render national development models obsolete?
This work seeks to demystify the dominant discourse on globalization. According to the new orthodoxy, we are entering a historical phase in which cross-border flows of goods and services, investment, finance and technology are creating a globalized market where the law of one price will prevail. It follows, by this logic, that the nation-state has become an obsolete actor, and that national capitalisms, with their particular industrial policies and governance systems, will eventually have to converge toward an Anglo-American-style free-market system. By confronting authors who support this thesis with their opponents, we will attempt to show that this conclusion is greatly exaggerated, and that this discourse often masks ideological interests favouring the greatest possible latitude for capital. Indeed, since industrial competition increasingly rests on innovation and creativity, the role of an energizing public sector, as a catalyst of national efforts in that direction, could prove a major asset in the current context.
CARLA: A Convolution Accelerator with a Reconfigurable and Low-Energy Architecture
Convolutional Neural Networks (CNNs) have proven to be extremely accurate for
image recognition, even outperforming human recognition capability. When
deployed on battery-powered mobile devices, efficient computer architectures
are required to enable fast and energy-efficient computation of costly
convolution operations. Despite recent advances in hardware accelerator design
for CNNs, two major problems have not yet been addressed effectively,
particularly when the convolution layers have highly diverse structures: (1)
minimizing energy-hungry off-chip DRAM data movements; (2) maximizing the
utilization factor of processing resources to perform convolutions. This work
thus proposes an energy-efficient architecture equipped with several optimized
dataflows to support the structural diversity of modern CNNs. The proposed
approach is evaluated by implementing convolutional layers of VGGNet-16 and
ResNet-50. Results show that the architecture achieves a Processing Element
(PE) utilization factor of 98% for the majority of 3x3 and 1x1 convolutional
layers, while limiting latency to 396.9 ms and 92.7 ms when performing
convolutional layers of VGGNet-16 and ResNet-50, respectively. In addition, the
proposed architecture benefits from the structured sparsity in ResNet-50 to
reduce the latency to 42.5 ms when half of the channels are pruned.
Comment: 12 pages.
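The PE utilization factor quoted above can be made concrete with a small model (a hedged sketch: the layer dimensions, PE count and cycle counts below are illustrative and are not taken from the paper). Utilization is the ratio of useful multiply-accumulates in a layer to the MAC slots the PE array offers over the cycles the layer occupies it:

```cpp
#include <cstdint>

// Illustrative sketch: MAC count of one convolutional layer and the
// resulting PE-array utilization for a given schedule length.
struct ConvLayer {
    int H, W, C_in, C_out, K;  // output H x W, in/out channels, K x K kernel
    long macs() const {
        // One MAC per (output pixel, output channel, input channel, tap).
        return 1L * H * W * C_out * C_in * K * K;
    }
};

// Utilization = useful MACs / (num_PEs * cycles), as a percentage.
// Idle PE slots (due to dataflow mismatch with the layer shape) show up
// as the gap below 100%.
double pe_utilization(const ConvLayer& l, long num_pes, long cycles) {
    return 100.0 * double(l.macs()) / (double(num_pes) * double(cycles));
}
```

In this framing, supporting "structural diversity" means providing dataflows under which diverse 3x3 and 1x1 layer shapes still map onto the array with few idle slots, which is what the 98% figure above expresses.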