35 research outputs found

    Parallel computing techniques

    Parallel computing means dividing a job into several tasks and using more than one processor simultaneously to perform them. Assume you have developed a new estimation method for the parameters of a complicated statistical model. After you prove the asymptotic properties of the method (for instance, the asymptotic distribution of the estimator), you wish to run many simulations to confirm that the method performs well for realistic sample sizes and for different parameter values. You must generate simulated data, for example, 100 000 times for each sample size and parameter value. The total simulation requires a huge number of random number generations and takes a long time on your PC. If you use 100 PCs in your institute to run these simulations simultaneously, you can expect the total execution time to drop to roughly 1/100. This is the simple idea behind parallel computing.
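    The simulation-farming idea described above can be sketched with Python's standard `multiprocessing` module. This is a minimal illustration, not code from the work itself; the toy estimator (a sample mean) and sample size are assumptions made for the example.

```python
# Hypothetical sketch: distributing repeated simulation replications
# across worker processes, in the spirit of the idea described above.
import random
from multiprocessing import Pool

def one_simulation(seed):
    """Generate one simulated dataset and return a toy estimate (sample mean)."""
    rng = random.Random(seed)          # each replication gets its own stream
    data = [rng.gauss(0.0, 1.0) for _ in range(1000)]
    return sum(data) / len(data)

def run_parallel(n_runs, n_workers=4):
    # The seeds double as replication IDs so streams do not overlap.
    with Pool(n_workers) as pool:
        return pool.map(one_simulation, range(n_runs))

if __name__ == "__main__":
    estimates = run_parallel(100)
    print(len(estimates))  # 100 independent simulation results
```

    With 100 workers instead of 4, the wall-clock time would ideally shrink by the same factor, which is exactly the 1/100 intuition in the abstract.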

    Linear mixing model applied to coarse resolution satellite data

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and with the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of unmixing techniques applied to coarse resolution data for global studies.
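    As a rough illustration of the constrained least squares unmixing idea, the sketch below solves for endmember fractions under a sum-to-one constraint. The endmember signatures and the synthetic pixel are made-up numbers, and the soft constraint plus clipping is a simplification of a true constrained solver.

```python
# Hypothetical sketch of linear spectral unmixing with a sum-to-one
# constraint; not the paper's actual implementation.
import numpy as np

def unmix(pixel, endmembers, weight=1e3):
    """Solve pixel ~ endmembers @ fractions with fractions summing to 1.

    The sum-to-one constraint is enforced softly by appending a heavily
    weighted row; negative fractions are clipped as a simplification.
    """
    n = endmembers.shape[1]
    A = np.vstack([endmembers, weight * np.ones(n)])
    b = np.append(pixel, weight * 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum()

# Three bands (loosely AVHRR-like), two endmembers (e.g. vegetation, soil).
E = np.array([[0.05, 0.20],
              [0.45, 0.25],
              [0.10, 0.30]])
r = 0.7 * E[:, 0] + 0.3 * E[:, 1]   # synthetic mixed pixel
print(unmix(r, E))                   # fractions close to [0.7, 0.3]
```

    Applied per pixel, the recovered fractions form the fraction images the abstract compares against NDVI.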

    Selected Computing Research Papers Volume 7 June 2018

    Contents:
    Critical Evaluation of Arabic Sentimental Analysis and Their Accuracy on Microblogs (Maha Al-Sakran)
    Evaluating Current Research on Psychometric Factors Affecting Teachers in ICT Integration (Daniel Otieno Aoko)
    A Critical Analysis of Current Measures for Preventing Use of Fraudulent Resources in Cloud Computing (Grant Bulman)
    An Analytical Assessment of Modern Human Robot Interaction Systems (Dominic Button)
    Critical Evaluation of Current Power Management Methods Used in Mobile Devices (One Lekula)
    A Critical Evaluation of Current Face Recognition Systems Research Aimed at Improving Accuracy for Class Attendance (Gladys B. Mogotsi)
    Usability of E-commerce Website Based on Perceived Homepage Visual Aesthetics (Mercy Ochiel)
    An Overview Investigation of Reducing the Impact of DDOS Attacks on Cloud Computing within Organisations (Jabed Rahman)
    Critical Analysis of Online Verification Techniques in Internet Banking Transactions (Fredrick Tshane)

    Comparative Performance Analysis of Enhanced Ant AODV and Enhanced Ant DSR Based on Quality of Service Parameters

    One of the challenges facing Mobile Ad-hoc Networks (MANET) is selecting an optimal route while taking Quality of Service (QoS) into account. Previous studies have proposed routing protocol methods that consider QoS, namely Enhanced-Ant Ad hoc On-demand Distance Vector (AODV) and Enhanced-Ant Dynamic Source Routing (DSR), both of which apply the Ant-Colony Optimization (ACO) algorithm. Both methods compute pheromone values by taking several QoS factors into account. The two methods are compared by their packet delivery ratio, packet loss ratio, and end-to-end delay. The results show that Enhanced-Ant AODV performs better than Enhanced-Ant DSR.
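    For reference, the three comparison metrics named above are typically computed from simulation counters as below. This is a generic sketch of the standard definitions, not code or data from the study; the example numbers are invented.

```python
# Illustrative sketch of the three QoS metrics used in the comparison.
def packet_delivery_ratio(received, sent):
    """Fraction of sent packets that reached their destination."""
    return received / sent

def packet_loss_ratio(received, sent):
    """Fraction of sent packets that were lost in transit."""
    return (sent - received) / sent

def avg_end_to_end_delay(delays):
    """Mean per-packet time from transmission to reception (seconds)."""
    return sum(delays) / len(delays)

print(packet_delivery_ratio(950, 1000))          # 0.95
print(packet_loss_ratio(950, 1000))              # 0.05
print(avg_end_to_end_delay([0.02, 0.03, 0.04]))  # ~0.03
```

    A protocol "performs better" in the abstract's sense when its delivery ratio is higher and its loss ratio and average delay are lower.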

    Outage Probability and Fronthaul Usage Tradeoff Caching Strategy in Cloud-RAN

    In this paper, an optimal content caching strategy is proposed to jointly minimize the cell average outage probability and fronthaul usage in a cloud radio access network (Cloud-RAN). An accurate closed-form expression for the outage probability conditioned on the user’s location is presented, and the cell average outage probability is obtained through composite Simpson’s integration. The caching strategy jointly optimizing the cell average outage probability and fronthaul usage is formulated as a weighted-sum minimization problem, which is a nonlinear 0-1 integer NP-hard problem. To deal with this NP-hard problem, two particular caching placement schemes are first investigated: the most popular content (MPC) caching scheme and the proposed location-based largest content diversity (LB-LCD) caching scheme. A genetic algorithm (GA) based approach is then proposed. Numerical results show that the proposed GA-based approach, with significantly reduced computational complexity, performs close to the optimal caching strategy found by exhaustive search.
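    Composite Simpson's integration, which the paper uses to average the location-conditioned outage probability over the cell, is a standard quadrature rule. The sketch below shows the rule itself; the integrand is a placeholder, not the paper's outage expression.

```python
# Minimal sketch of composite Simpson's rule (placeholder integrand).
import math

def composite_simpson(f, a, b, n):
    """Integrate f over [a, b] using n subintervals (n must be even)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # Interior points alternate weights 4, 2, 4, 2, ...
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Sanity check: the integral of sin over [0, pi] is 2.
print(composite_simpson(math.sin, 0.0, math.pi, 100))
```

    In the paper's setting, `f` would be the closed-form outage probability as a function of user location, integrated over the cell area and normalized to give the cell average.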

    ReCon Implementation: A Load Pair-Tracking Mechanism to Lift Security Protections on a RISC-V Processor

    Modern computer processors have become extremely complex, following Moore's law and growing the number of transistors exponentially over the years. As computer systems play increasingly critical roles in our daily lives, from simple tasks such as protecting personal data and passwords to managing global banking operations, safeguarding military secrets, and securing cryptocurrency keys, processors must ensure the highest levels of dependability and security. In recent years, the world has encountered new types of security risks known as "speculative side-channel attacks". These attacks exploit vulnerabilities in the gap between the time a processor executes an instruction and the time it confirms the instruction's validity, exposing secret data, even data in so-called protected address space, and making it observable to an external party. To address this security issue, mechanisms such as Non-speculative Data Access (NDA), Speculative Taint Tracking (STT), and Speculative Privacy Tracking (SPT) have been proposed. These methods prevent instructions from propagating secrets by blocking them from propagating to the rest of the architecture until they are considered finalized. These proposed security mechanisms all incur a performance hit due to the newly imposed limitations on memory parallelism, trading reduced performance for an increased level of security.
    A recently proposed mechanism called ReCon aims to reduce these performance limitations by lifting the restrictions on pairs of directly dependent load instructions (such as those in pointer dereferencing) that have previously been "leaked" and for which keeping a secret is no longer relevant. This enables improved performance by taking advantage of speculative execution for subsequent loads targeting the leaked data.
    The work of this thesis aims to demonstrate the feasibility and effectiveness of implementing ReCon on a real and fully functional RISC-V processor, specifically the BOOM core. It analyzes the performance and cost of this implementation in mitigating speculative execution attacks.