18 research outputs found

    Principled software microengineering


    Sound and noisy light: Optical control of phonons in photoswitchable structures

    We present a means of controlling phonons via optical tuning. Taking as a model an array of photoresponsive materials (photoswitches) embedded in a matrix, we numerically analyze the vibrational response of an array of bistable harmonic oscillators with stochastic spring constants. Changing the intensity of light incident on the lattice directly controls the composition of the lattice and therefore the speed of sound. Furthermore, modulation of the phonon band structure at high frequencies results in a strong confinement of phonons. The applications of this regime to phonon waveguides, vibrational energy storage, and phononic transistors are examined. National Science Foundation (U.S.) Graduate Research Fellowship (Grant 1122374)
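    A minimal numerical sketch of this kind of model (illustrative only; the parameter names, the two spring constants, and the mapping from light intensity to the switched fraction p are assumptions, not the paper's actual code) shows how tuning the composition of a one-dimensional chain with stochastic spring constants shifts the long-wavelength speed of sound. Springs in series add compliance, so the effective stiffness is the harmonic mean of the sampled spring constants:

        import numpy as np

        def sound_speed(p, k_off=1.0, k_on=4.0, mass=1.0, spacing=1.0,
                        n=100_000, seed=0):
            """Long-wavelength sound speed of a 1-D chain whose units are
            photoswitched with probability p (set by the light intensity)."""
            rng = np.random.default_rng(seed)
            # stochastic spring constants: each unit is bistable (on/off)
            k = np.where(rng.random(n) < p, k_on, k_off)
            k_eff = 1.0 / np.mean(1.0 / k)   # harmonic mean: springs in series
            return spacing * np.sqrt(k_eff / mass)

        for p in (0.0, 0.5, 1.0):
            print(f"switched fraction {p:.1f}: c = {sound_speed(p):.3f}")

    Raising p stiffens the chain and raises the sound speed continuously, which is the sense in which the incident light intensity directly controls the speed of sound.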

    Fundamentals of Programming in Java

    FOREWORD. The textbook entitled Fundamentals of Programming in JAVA grew out of the lectures and exercises for the courses: 1) Informatics, 2) Informatics and Digital Communications, 3) Visual Programming Techniques, 4) Web Application Software, 5) Software Quality and Testing, and 6) Data Structures and Algorithms. These courses have long been taught at the following higher-education institutions: the Military Academy "General Mihailo Apostolski" in Skopje and the Faculty of Informatics at the European University in Skopje. The ECLIPSE software package was used in all of the listed courses as the basic tool for the exercises, and the many examples developed there are included in this textbook. The core of the textbook, however, consists of material from the syllabus of the Informatics course. The study of Informatics is essential for students of computer and military engineering. Software has a wide range of applications in those fields, from the control of aircraft, missiles, unmanned aerial vehicles, and ground vehicles to the control of robots and processes. At the same time, these relatively new teaching and research disciplines, especially in our country, are not adequately represented in higher education and are not supported by suitable literature in Macedonian. This was the motivation for writing this textbook, which presents the material to students in an accessible way and eases the study of these and related disciplines. The material is divided into twelve chapters, not counting the appendices. The textbook is intended for students of the Military Academy as teaching material for Informatics and related courses; it can also be used by students of civilian engineering faculties. Skopje, 2022. The Author

    Technology and Economic Performance in the American Economy

    This paper examines the sources of the U.S. macroeconomic miracle of 1995-2000 and attempts to distinguish among permanent sources of American leadership in high-technology industries, as contrasted with the particular post-1995 episode of technological acceleration, and with other independent sources of the economic miracle unrelated to technology. The core of the American achievement was the maintenance of low inflation in the presence of a decline in the unemployment rate to the lowest level reached in three decades. The post-1995 technological acceleration, particularly in information technology (IT), and the accompanying revival of productivity growth directly contributed both to faster output growth and to holding down the inflation rate, but inflation was also held down by a substantial decline in real non-oil import prices, by low energy prices through early 1999, and by a temporary cessation in 1996-98 of inflation in real medical care prices. In turn, low inflation allowed the Fed to maintain an easy monetary policy that fueled rapid growth in real demand, profits, and stock prices, which fed back into growth of consumption in excess of growth in income. The technological acceleration was made possible in part by permanent sources of American advantage over Europe and Japan, most notably the mixed system of government- and privately-funded research universities, the large role of U.S. government agencies providing research funding based on peer review, the strong tradition of patent and securities regulation, the leading worldwide position of U.S. business schools and U.S.-owned investment banking, accounting, and management-consulting firms, and the particular importance of the capital market for high-tech financing led by a uniquely dynamic venture capital industry. While these advantages help to explain why the IT boom happened in the United States, they did not prevent the U.S. from experiencing a dismal period of slow productivity growth between 1972 and 1995, nor from falling behind in numerous industries outside the IT sector. The 1995-2000 productivity growth revival was fragile, both because a portion rested on unsustainably rapid output growth in 1999-2000 and because the post-1995 surge in the growth rate of computer investment could not continue forever. The web could only be invented once, Y2K artificially compressed the computer replacement cycle, and some IT purchases were made by dot-coms that by early 2001 were bankrupt. As an invention, the web provided abundant consumer surplus but no recipe for most dot-coms to make a profit from providing free services.

    A Notional Machine for Computer Architectures as a Possible Link between the First-Year Courses of the Bachelor's Degree in Computer Science

    In this article we present an experiment based on the adoption of Cook and Reckhow's RAM model as a notional machine for introducing the RISC-V machine language, together with a block-based visual language, to bring novices closer to the basic concepts of assembly and computer architectures. The experiment was held in the first part of the Computer Architectures course in the 2022/23 academic year, with 320 students of very heterogeneous backgrounds. The choice of combining RISC-V, the RAM model, and the block-based language was driven by the attempt to adopt a notional machine capable of representing the basic concepts of load-store architectures even to novices, and from which the constructs being presented in the meantime through C++ in the Introduction to Programming course could be reconstructed.
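    As a rough illustration of what a RAM-model notional machine looks like to a novice (a sketch under assumptions: the accumulator-style instruction set below is a simplified stand-in, not the one used in the course), the point is that all arithmetic happens in a register while memory is touched only by explicit loads and stores, exactly the discipline a load-store architecture like RISC-V imposes:

        # Minimal random-access machine in the spirit of Cook and Reckhow.
        def run(program, memory):
            acc, pc = 0, 0                      # accumulator and program counter
            while pc < len(program):
                op, arg = program[pc]
                if op == "LOAD":    acc = memory[arg]    # memory -> accumulator
                elif op == "STORE": memory[arg] = acc    # accumulator -> memory
                elif op == "ADD":   acc += memory[arg]   # compute only in the register
                elif op == "HALT":  break
                pc += 1
            return memory

        # c = a + b in load-store style, mirroring RISC-V's lw/add/sw pattern.
        mem = {0: 2, 1: 3, 2: 0}
        prog = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
        print(run(prog, mem))                   # {0: 2, 1: 3, 2: 5}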

    Does the "New Economy" Measure up to the Great Inventions of the Past?

    During the four years 1995-99, U.S. productivity growth experienced a strong revival and achieved growth rates exceeding those of the 'golden age' of 1913-72. Accordingly, many observers have declared the 'New Economy' (the Internet and the accompanying acceleration of technical change in computers and telecommunications) to be an Industrial Revolution equal in importance to, or even more important than, the Second Industrial Revolution of 1860-1900, which gave us electricity, motor and air transport, motion pictures, radio, indoor plumbing, and made the golden age of productivity growth possible. This paper raises doubts about the validity of this comparison with the Great Inventions of the past. It dissects the recent productivity revival and separates the revival of 1.35 percentage points (comparing 1995-99 with 1972-95) into 0.54 points of an unsustainable cyclical effect and 0.81 points of acceleration in trend growth. The entire trend acceleration is attributed to faster multi-factor productivity (MFP) growth in the durable manufacturing sector, consisting of computers, peripherals, telecommunications, and other types of durables. There is no revival of productivity growth in the 88 percent of the private economy lying outside of durables; in fact, when the contribution of massive investment in computers in the nondurable economy is subtracted, MFP growth outside of durables has actually decelerated. The paper combines the Great Inventions of 1860-1900 into five 'clusters' and shows how their development and diffusion in the first half of the 20th century created a fundamental transformation in the American standard of living from the bad old days of the late 19th century. In comparison, computers and the Internet fall short. The rapid decline in the cost of computer power means that the marginal utility of computer characteristics like speed and memory has fallen rapidly as well, implying that the greatest contributions of computers lie in the past, not in the future. The Internet fails the hurdle test as a Great Invention on several counts. First, the invention of the Internet has not boosted the growth in the demand for computers; all of that growth can be interpreted simply as the same unit-elastic response to the decline in computer prices as was prevalent prior to 1995. Second, the Internet provides information and entertainment more cheaply and conveniently than before, but much of its use involves substitution of existing activities from one medium to another. Third, much Internet investment involves defense of market share by existing companies like Borders Books faced with the rise of Amazon; social returns are less than private returns. Fourth, much Internet activity duplicates existing activity like mail-order catalogues, but the latter have not faded away; the usage of paper is rising, not falling. Finally, much Internet activity, like daytime e-trading, involves an increase in the fraction of work time involving consumption on the job.

    Electronic Communications Law: legal issues and its effect on freedom, security and justice in the Republic of Latvia

    On 15 March 2006, the European Union adopted the Data Retention Directive 2006/24/EC, which regulated the storage of telecommunications data by Internet service providers so that the data could be used to fight serious crime in the European Union. The directive was needed because people in the European Union required a higher level of data protection: since several member states had their own data retention laws, the European Parliament and the Council saw the need to harmonise and strengthen data retention across the Union. Despite these noble intentions, the European Court of Justice declared the directive invalid on 8 April 2014. Yet the essence of the Directive had by then been transposed into national data retention laws across the European Union. In this master's thesis, the author examines whether the member states, and particularly the Republic of Latvia, have learned anything from the invalidation of Directive 2006/24/EC. The author first looks into the reasons for the adoption and invalidation of Directive 2006/24/EC. The author then examines the Latvian Electronic Communications Law to see whether it resembles the Directive, given that the law consists of norms directly transposed from Directive 2006/24/EC. Finally, in order to conclude whether the Electronic Communications Law affects freedom, security and justice in Latvia, the author analyses whether the arguments presented by the European Court of Justice are applicable to the Electronic Communications Law.

    Virtualization of the RISC-V Processor Architecture

    Virtualization is one of the cornerstones of modern information systems. Its most significant uses are in server environments and embedded systems. Cloud services, for example, are built on sharing physical resources among many users, which in practice is done through virtualization. In embedded systems, the most significant benefit of virtualization is the isolation it provides: several subsystems can run on the same hardware while remaining isolated from one another. RISC-V is a new processor architecture developed at the University of California, Berkeley. It stands out from other processor architectures above all through its freedom and openness: anyone may freely design, manufacture and sell RISC-V processors without licensing fees. RISC-V is a considerably newer architecture than its biggest competitors, but its popularity is predicted to grow strongly in the future. Besides freedom and openness, RISC-V's strengths include its technical features, such as broad support for virtualization, which has been one of the architecture's design goals from the outset. This thesis investigates how virtualization has been taken into account in the design of the RISC-V processor architecture. The literature review first establishes how virtualization works in practice; a coherent set of well-known virtualization techniques was identified in the literature, and their applicability to the RISC-V architecture is then examined. The results show that virtualization has been thoroughly considered in the design of RISC-V: many design decisions are recognizably made with virtualization in mind, and as a result RISC-V supports all of the best-known virtualization techniques. It is concluded that the reasons for RISC-V's predicted growth in popularity are clearly visible: virtualizability, combined with freedom, openness and its other technical features, makes RISC-V a processor architecture well suited to many use cases.
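    Among the well-known techniques such a study covers, the classic one is trap-and-emulate: the guest runs deprivileged, and every privileged instruction traps to a hypervisor that emulates it against the guest's virtual state. A toy Python sketch of that control flow (purely illustrative; the instruction names and the split between privileged and unprivileged operations are invented for the example, and none of this is RISC-V code):

        # Toy trap-and-emulate loop.
        class Trap(Exception):
            def __init__(self, op, arg):
                self.op, self.arg = op, arg

        PRIVILEGED = {"write_timer", "disable_interrupts"}

        def guest_execute(instr, state):
            op, arg = instr
            if op in PRIVILEGED:
                raise Trap(op, arg)              # hardware would trap here
            state["regs"][arg] = state["regs"].get(arg, 0) + 1  # ordinary work

        def hypervisor_run(program, state):
            for instr in program:
                try:
                    guest_execute(instr, state)
                except Trap as t:
                    # emulate against virtual, not physical, devices
                    state["virt_dev"][t.op] = t.arg
            return state

        state = {"regs": {}, "virt_dev": {}}
        prog = [("inc", "x"), ("write_timer", 100), ("inc", "x")]
        print(hypervisor_run(prog, state))

    An architecture is friendly to this technique when every sensitive instruction actually traps in deprivileged mode, one of the properties the RISC-V design takes care to provide.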

    Optimization of the Matrix Product on Low-Power Devices for Deep Learning Inference

    The use of machine learning in deep neural networks has experienced a boom in the last decade, mainly due to a combination of several factors, including the abundance of data to train such systems (big data), increased computing power (NVIDIA graphics processors, Google TPUs, etc.), advances in algorithmic learning techniques (for example, transformer networks for language processing), and the availability of user-friendly environments for the task. There are currently several software packages for training deep neural networks on computer clusters (Google's TensorFlow and Facebook's PyTorch), and these packages even have specialized versions (TensorFlow Lite, NVIDIA RT, QNNPACK, etc.) to perform the inference process on low-power processors, such as those found in an Android or iOS mobile phone or in a driverless car. Many of these systems deal with convolutional neural networks, especially those that process images. At a lower level of detail, we can observe that training and inference in the convolutional layers of these networks give rise to a matrix product with particular, well-defined characteristics that require special treatment when it comes to optimization. This master's thesis deals with the optimization of that operation, in particular on the ARM architecture, whose multicore processors can be found in most of the low-power devices on which inference with a previously trained network is meant to run. The proposed optimization is inspired by BLIS, a package of optimized numerical linear algebra routines that provides the basic algorithms on which the work builds. The project will allow the student to acquire a good knowledge of the computational aspects of the inference process with deep neural networks, as well as a deeper understanding of the interaction between the algorithm and the processor architecture and of how that interaction determines performance.

    Stabile, EB. (2021). Optimización del producto matricial sobre dispositivos de bajo consumo para inferencia en Deep Learning. Universitat Politècnica de València. http://hdl.handle.net/10251/172885
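    As a sketch of the kind of optimization the thesis pursues (illustrative only: the block sizes, names, and plain-NumPy micro-kernel are assumptions, not the thesis's tuned ARM code), a BLIS-style matrix multiplication wraps a small micro-kernel in a nest of loops whose block sizes are chosen so that the panels of A and B being reused stay resident in the different cache levels:

        import numpy as np

        MC, KC, NC = 64, 64, 128   # cache-blocking sizes; BLIS tunes these per CPU
        MR, NR = 4, 4              # micro-tile updated by the micro-kernel

        def gemm_blocked(A, B):
            """C = A @ B with a BLIS-like loop nest around a micro-kernel."""
            m, k = A.shape
            _, n = B.shape
            C = np.zeros((m, n))
            for jc in range(0, n, NC):            # NC-wide panel of B and C
                for pc in range(0, k, KC):        # KC-deep slice (packed in BLIS)
                    for ic in range(0, m, MC):    # MC-tall block of A
                        for jr in range(jc, min(jc + NC, n), NR):
                            for ir in range(ic, min(ic + MC, m), MR):
                                # micro-kernel: rank-KC update of an MR x NR tile
                                i1, j1 = min(ir + MR, m), min(jr + NR, n)
                                p1 = min(pc + KC, k)
                                C[ir:i1, jr:j1] += A[ir:i1, pc:p1] @ B[pc:p1, jr:j1]
            return C

        A, B = np.random.rand(100, 80), np.random.rand(80, 90)
        assert np.allclose(gemm_blocked(A, B), A @ B)

    In convolutional layers the operands of this product come from an im2col-style rearrangement of the activations, which is what gives the matrices the particular, well-defined shapes the thesis exploits.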