
    Design of a Scan Chain for Side Channel Attacks on AES Cryptosystem for Improved Security

    Scan chain-based attacks are side-channel attacks that target one of the most significant features of hardware test circuitry. Design for Testability (DfT) integrates dedicated testability components into a hardware design. However, this creates a side channel for cryptanalysis, leaving crypto devices vulnerable to scan-based attacks. The Advanced Encryption Standard (AES), announced by the US Government, has proven to be a powerful and secure symmetric encryption algorithm that outperforms other existing cryptographic algorithms. Nevertheless, on-chip implementations of private-key algorithms such as AES have faced scan-based side-channel attacks. With the aim of protecting data for secure communication, a new hybrid pipelined AES algorithm with enhanced security features is implemented. This paper proposes testing an AES core with unpredictable response compaction and bit-level masking throughout the scan chain process, using a bit-level masking scan flip-flop as a scan protection solution for secure testing. The experimental results show that the best security is provided by the randomized insertion of masked scan flip-flops along the scan chain, with minimal design complexity and power overhead and negligible added delay. The proposed technique thus outperforms the state-of-the-art LUT-based S-box and the composite sub-byte transformation model in throughput by 2 times and 15 times, respectively. Security, measured via the avalanche effect for the sub-pipelined model, has been increased up to 95 per cent with reduced computational complexity. The proposed sub-pipelined S-box utilizing a composite field arithmetic scheme also achieves 7 per cent area effectiveness and 2.5 times the hardware complexity compared to the LUT-based model.
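The avalanche metric reported above can be illustrated with a short, self-contained sketch. This is not the paper's hardware design: SHA-256 stands in for the AES core (an assumption, since no implementation is given in the abstract), but the bit-flip bookkeeping is the same.

```python
import hashlib

def avalanche_pct(block: bytes, bit: int) -> float:
    """Flip one input bit and return the percentage of output bits that change.

    SHA-256 is a stand-in for the cipher core; with a real AES
    implementation the two hash calls would be AES encryptions.
    """
    flipped = bytearray(block)
    flipped[bit // 8] ^= 1 << (bit % 8)
    a = hashlib.sha256(bytes(block)).digest()
    b = hashlib.sha256(bytes(flipped)).digest()
    changed = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return 100.0 * changed / (8 * len(a))

# A strong primitive flips close to 50% of the output bits on average.
pct = avalanche_pct(b"sixteen byte blk", 0)
```

Averaging this quantity over many blocks and bit positions gives the avalanche figure the abstract quotes.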

    CANDELS: The Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey - The Hubble Space Telescope Observations, Imaging Data Products and Mosaics

    This paper describes the Hubble Space Telescope imaging data products and data reduction procedures for the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS). This survey is designed to document the evolution of galaxies and black holes at z ~ 1.5-8, and to study Type Ia SNe beyond z > 1.5. Five premier multi-wavelength sky regions are selected, each with extensive multiwavelength observations. The primary CANDELS data consist of imaging obtained in the Wide Field Camera 3 infrared channel (WFC3/IR) and UVIS channel, along with the Advanced Camera for Surveys (ACS). The CANDELS/Deep survey covers ~125 square arcminutes within GOODS-N and GOODS-S, while the remainder consists of the CANDELS/Wide survey, achieving a total of ~800 square arcminutes across GOODS and three additional fields (EGS, COSMOS, and UDS). We summarize the observational aspects of the survey as motivated by the scientific goals and present a detailed description of the data reduction procedures and products from the survey. Our data reduction methods utilize the most up-to-date calibration files and image combination procedures. We have paid special attention to correcting a range of instrumental effects, including CTE degradation for ACS, removal of electronic bias-striping present in ACS data after SM4, and persistence effects and other artifacts in WFC3/IR. For each field, we release mosaics for individual epochs and mosaics containing data from all epochs combined, to facilitate photometric variability studies and the deepest possible photometry. A more detailed overview of the science goals and observational design of the survey is presented in a companion paper. Comment: 39 pages, 25 figures

    Mapping landscape function with hyperspectral remote sensing of natural grasslands on gold mines

    Thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy. School of Animal, Plant and Environmental Science, University of the Witwatersrand, Johannesburg, South Africa. October 2016. Mining has negative impacts on the environment in many different ways. One method developed to quantify some of these impacts is Landscape Function Analysis (LFA), which has been accepted by some mining companies and regulators. In brief, LFA aims at quantifying the organization of vegetative and landscape components into patches along a transect and quantifying, in a relative manner, three basic processes important to landscape functioning, namely: soil stability or susceptibility to erosion, infiltration or runoff, and nutrient cycling or organic matter decomposition. However, LFA is limited in large heterogeneous environments, such as those around mining operations, due to its localized nature and the man-hours required to collect a representative set of measurements for such large and complex environments. Remote sensing using satellite-acquired data can overcome these limitations by sampling the entire environment in a rapid and objective manner. What is required is a method of connecting these satellite-based measurements to LFA measurements and then being able to extrapolate them across the entire mine surface. The aim of this research was to develop a method to use satellite-based hyperspectral imagery to predict landscape function analysis (LFA) indices using partial least squares regression (PLSR).
This was broken down into three objectives: (1) collection of the LFA data in the field and validation of the LFA indices against other environmental variables collected at the same time, (2) validation of PLSR models predicting LFA indices and various environmental variables from ground-based spectra, and (3) production of risk maps based on predicting LFA indices and above-ground biomass using PLSR models and Hyperion satellite-based hyperspectral imagery. Although the study was based in grasslands at two mining regions, West Wits and Vaal River, a suitable Hyperion image was only available for Vaal River. A minimum of 374 points were sampled for LFA indices, ground-based spectra, above-ground biomass and soil cores along 2880 m of LFA transect from both mine sites. Soil cores were weighed fresh before sieving with a 2 mm sieve to separate root and stone fractions. The sieved soil fraction was tested for pH, EC, SOM and, for the West Wits samples, organic nitrogen and total extractable inorganic nitrogen. There was one modification to the LFA method: grass patches were collapsed into homogeneous units, as it was deemed not feasible to sample 180 m transects at grass tuft scales of 10-30 cm, but other patch definitions followed the LFA manual (Tongway and Hindley, 2004). Evidence suggested that some of the different patch types, in particular the bare/biological soil crust, bare grass and sparse grass patch types, represented successional stages in a continuum, although this was not conclusive. There was also evidence that the presence or absence of cattle plays a role in some processes active in these grasslands, and that erosion occurs mainly through deflation, rain splash and sheet wash. Generally, the environmental variables supported the LFA indices, although the nutrient cycling index was representative of above-ground but not below-ground nutrient cycling.
Models derived with PLSR to predict the LFA indices from ground-based spectral measurements were strong at both mine sites (West Wits: LFA stability r2 = 0.63, P < 0.0001; LFA infiltration r2 = 0.75, P < 0.0001; LFA nutrient cycling r2 = 0.73, P < 0.0001; Vaal River: LFA stability r2 = 0.39, P < 0.0001; LFA infiltration r2 = 0.72, P < 0.0001; LFA nutrient cycling r2 = 0.54, P < 0.0001), as were PLSR models predicting above-ground biomass (West Wits r2 = 0.55, P = 0.0003; Vaal River r2 = 0.79, P < 0.0001) and soil moisture (West Wits r2 = 0.45, P = 0.0017; Vaal River r2 = 0.68, P < 0.0001). However, for soil organic matter (r2 = 0.50, P < 0.0001) and EC (r2 = 0.63, P < 0.0001), Vaal River had strong prediction models while West Wits had weak models for these variables (r2 = 0.31, P = 0.019 and r2 = 0.10, P < 0.18, respectively). For EC, the wide range of soil values at Vaal River in association with gypsum crusts, and low values throughout West Wits, explained these model results; for soil organic matter, no clear explanation for these site differences was identified. Patch-based models could accurately discriminate between spectrally well-defined patch types such as S. plumosum patches, but were less successful with patch types that were spectrally similar, such as the bare/biological soil crust, bare grass and sparse grass patch continuum. Clustering similar patch types together before PLSR modelling did improve these patch-based spectral models. To test the method proposed to predict LFA indices from satellite-based hyperspectral imagery, a Hyperion image matching six transects at Vaal River was acquired by NASA's EO-1 satellite and downloaded from the USGS Glovis website. LFA transects were partitioned to match and extract pixel spectra from the Hyperion data cube. Thirty-one spectra were separated into calibration (20) and validation (11) data.
PLSR models were derived from the calibration data, tested with validation data to select the optimum model, and then applied to the entire Hyperion data cube to produce prediction maps for five LFA indices and above-ground biomass. The patch area index (PAI) produced particularly strong models (r2 = 0.79, P = 0.0003, n = 11) with validation data, whereas the landscape organization index (LOI) produced weak models. It is argued that this difference between two essentially similar indices is related to the fact that the PAI is a two-dimensional index and the LOI a one-dimensional index. This difference allowed the PAI to compensate for some burned pixels on the transects by "seeing" the density pattern of grass tufts and patches, whereas the linear nature of the LOI was more susceptible to the changing dimensions of patch structure due to the effects of fire. Although validation models for the three LFA indices of soil stability, infiltration and nutrient cycling were strong (r2 = 0.72, P = 0.004; r2 = 0.66, P = 0.008; r2 = 0.70, P = 0.005; n = 9, respectively), prediction maps were confounded by the presence of fire on some transects. The poor quality of the Hyperion imagery also meant great care had to be taken in the selection of models to avoid poor-quality prediction maps. The 31 bands from the VNIR (478-885 nm) portion of the Hyperion spectra were generally the best for PLSR modelling and prediction maps, presumably because of better signal-to-noise ratios due to higher energy in the shorter wavelengths. With two satellite-based hyperspectral sensors already operational, namely the US Hyperion and the Chinese HJ-1A HSI, and a number expected to be launched by various space agencies in the next few years, this research presents a method to use the strengths of LFA and hyperspectral imagery to model and predict LFA index values and thereby produce risk maps of large, heterogeneous landscapes such as mining environments.
As this research documents a method of partitioning the landscape rather than the pixel spectra into pure endmembers, it makes a valuable contribution to the fields of landscape ecology and hyperspectral remote sensing.
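The PLSR calibration-and-prediction workflow described above can be sketched in a few lines. This is a minimal single-response NIPALS implementation run on synthetic "spectra", not the author's pipeline; the toy data shapes and noise levels are illustrative assumptions.

```python
import numpy as np

def pls1_fit(X, y, ncomp):
    """Minimal NIPALS PLS1: regression of one response y on spectra X."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - Xm, y - ym
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)            # weight vector
        t = Xc @ w                        # latent scores
        tt = t @ t
        p = Xc.T @ t / tt                 # spectral loadings
        qk = (yc @ t) / tt                # response loading
        Xc = Xc - np.outer(t, p)          # deflate X and y
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)   # coefficients in original band space
    return B, Xm, ym

def pls1_predict(X, B, Xm, ym):
    return (X - Xm) @ B + ym

# Toy calibration set: 40 plots x 50 bands driven by two latent factors,
# standing in for ground spectra predicting an LFA index.
rng = np.random.default_rng(0)
latent = rng.normal(size=(40, 2))
X = latent @ rng.normal(size=(2, 50)) + 0.05 * rng.normal(size=(40, 50))
y = latent @ np.array([1.5, -0.7]) + 0.05 * rng.normal(size=40)
B, Xm, ym = pls1_fit(X, y, ncomp=2)
r2 = 1 - np.sum((y - pls1_predict(X, B, Xm, ym)) ** 2) / np.sum((y - y.mean()) ** 2)
```

In the thesis the same fit/predict split is applied per variable and per site, with held-out validation spectra providing the r2 values quoted above.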

    EG Andromedae: A Symbiotic System as an Insight into Red Giant Chromospheres

    Symbiotic systems are interacting binary stars consisting of both hot and cool components. This results in a complex environment that is ideal for studying the latter stages of stellar evolution along with interactions within binary systems. As a star approaches the end of its life, in particular the red giant phase, it exhausts its supply of core hydrogen and begins burning its way through successively heavier elements. Red giants lose mass in the form of a dense wind that replenishes the interstellar medium with chemical elements formed through nuclear processes deep in the stellar interior. When these elements reach the interstellar medium they play a central role in both stellar and planetary evolution, as well as providing the essential constituents needed for life. The undoubted significance of these cool giants means the study of their atmospheres is necessary to help understand our place in the Universe. This thesis presents Hubble Space Telescope observations of the symbiotic system EG Andromedae as an insight into red giant stars. EG And is one of the brightest and closest symbiotic systems and consists of a red giant primary along with a white dwarf. The presence of the white dwarf in the system allows spatially resolved examination of the red giant primary. The benefits of using such a system to better understand the base of red giant chromospheres are shown. Along with the observations of EG And, new HST observations of an isolated red giant spectral standard, HD148349, are described. The similarity between the isolated spectral standard and the red giant primary of EG And is demonstrated, showing that much of the information gleaned from a symbiotic system can be applied to the general red giant population. Using both ultraviolet and optical spectroscopy, the atmospheres of EG And and HD148349 are investigated and contrasted. Comment: PhD Thesis, Trinity College Dublin

    A petrographic, structural and geochemical study of the alkaline igneous rocks of the Motzfeldt Centre, South Greenland

    The Motzfeldt Centre (1310 +/- 31 MY) is one of four Gardar alkaline igneous centres belonging to the Igaliko Nepheline Syenite Complex, South Greenland. Motzfeldt is a multiphase, high-level intrusive ring-centre comprised principally of nepheline syenite and emplaced in the Proterozoic Julianehab granite and the overlying Gardar volcano-sedimentary succession. The Centre commenced with the intrusion of three poorly centralised satellitic intrusions of syenite, pulaskite and nepheline syenite, collectively known as the Geologfjeld Formation. These are partly truncated by concentric, multiple intrusions comprising the Motzfeldt Ring Series, whose steep-sided contacts dip outwards and whose individual nepheline syenite units young inwards. On the basis of field relations, petrography and geochemistry the Ring Series is further subdivided into the Motzfeldt Sø and Flinks Dal Formations, and a number of minor intrusions collectively termed the Hypabyssal Series. The results of field surveys, carried out during two summer field seasons, are presented on a 1:50,000 geological map. The petrography and field relations are described for 16 distinct plutonic and hypabyssal rock units which range in lithology from larvikite to lujavrite. These represent at least 10 separate intrusive episodes and show a remarkable array of rock textures and mineralogical and geochemical features. A total of 170 whole-rock (XRF), 33 rare earth element (INAA) and over 300 mineral (EDS) geochemical analyses are presented. These show that the syenite/nepheline syenite lithologies in Motzfeldt can be subdivided chemically and mineralogically into three groups: 'hypoalkaline', alkaline and peralkaline. The geochemical features of the various units are evaluated and elemental behaviour discussed. The data are additionally assessed, using non-parametric statistics, as a means of discriminating between the units.
A number of units which have proved difficult to separate in the field are established to be geochemically distinct, whilst others are shown to be very closely associated. The peralkaline, pegmatite-rich, silica-saturated outer and upper margins of the Motzfeldt Sø Formation and its associated microsyenite sheet sequence host extensive economic reserves of Nb, Ta, Zr, U, Th and LREE. The evolution of these mineralised zones is discussed and the importance of interaction between country rock (plus water) and magma emphasised. Recent works have helped clarify the magmatic development of the Gardar Province. Here emphasis has been placed on the structural evolution of the Gardar with the aim of complementing these works. The Gardar represents a prolonged (c. 200 MY), cyclic period of limited, passive intracontinental extension. Crustal thinning facilitated the rise, along deep fracture zones, of magmas generated by higher thermal gradients. In response to regional sinistral shear stresses, ENE extensional fractures and associated dyking developed. In addition, crustal decoupling occurred along several parallel WNW-ESE sinistral strike-slip faults. Motzfeldt and other ring centres of the Gardar are preferentially located at the intersections of these zones of weakness.

    Searching for High-redshift Galaxies in Hubble Space Telescope Deep Data

    The history of our Universe spans 13.7 billion years and can be divided into several stages from the Big Bang up to now. Around 370,000 years after the Big Bang (z~1100) the temperature of the Universe lowered enough for the first simple atoms to form. Matter and radiation decoupled, and the Universe became transparent to radiation. Cosmic microwave background (CMB) photons that we detect nowadays were last scattered at z~1100 and, since then, have been traveling in a straight line. This is the reason why the CMB is usually described as a picture of the Universe at that redshift. Right after the CMB was emitted, the Universe entered the so-called Dark Ages, when no sources of light existed. In the study of the early Universe, one of the most important phases is the subsequent phase transition named reionization, i.e. the process that reionized the matter in the Universe after the formation of the first sources of light, namely the first stars and galaxies. Consequently, the detection and study of these objects are the key to unveiling the early stages of the history of the Universe. Imaging plays a more important role than spectroscopy in searching for high-redshift galaxies because it permits observing more objects at the same time, making better use of telescope time. Deep surveys, obtained by observing the same sky area for several days, answer the need for detections of high-redshift galaxies. This thesis is focused on the study of the galaxy population existing when the Universe was less than 1.5 Gyr old. When studying the early Universe, the detection of high-redshift sources depends strongly on the detection limit of the survey and the surface brightness of the objects themselves.
Taking this into account, we made use of the deepest datasets currently available obtained with the Hubble Space Telescope (HST) in both the optical and near-infrared (NIR) domains to carefully study how these two issues affect the identification and photometry of high-redshift galaxies. The important role played by high-redshift galaxies in cosmic reionization is no longer debated and, lately, most studies have agreed on the key relevance of galaxies that are below the current detection limit. While we wait for the James Webb Space Telescope (JWST) to directly observe these faint galaxies, finding an alternative way to estimate their overall light contribution is mandatory. To this aim we developed a technique based on the power spectrum to analyze background fluctuations. Relying on a Lyman-break-like approach, we compared the power spectra of the background signal derived from observations obtained in two adjacent bands to identify the light contribution from a population of galaxies lying within a specific redshift range. Monte Carlo simulations then permitted us to disentangle the information embedded in the light excess identified via the power spectra, in particular deriving a constraint on the faint-end slope of the luminosity function. The UDF05 dataset, a follow-up of the original Hubble Ultra Deep Field (HUDF), consists of observations in the optical bands obtained with the Advanced Camera for Surveys (ACS). It permitted us to constrain the slope of the luminosity function at z~6 (0.95 Gyr after the Big Bang), which turned out to be steep enough to allow bright and faint galaxies at that redshift to account for the ionizing photon budget required for cosmic reionization.
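The fluctuation analysis described above can be sketched schematically. This is a toy reconstruction under stated assumptions (white-noise images, simple linear radial binning), not the thesis pipeline: the azimuthally averaged power spectrum is computed per band, and the band-to-band ratio flags excess power from sources present in only the redder band.

```python
import numpy as np

def radial_power_spectrum(img, nbins=8):
    """Azimuthally averaged 2D power spectrum of a background image."""
    ny, nx = img.shape
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2)
    edges = np.linspace(0.0, r.max(), nbins + 1)
    idx = np.digitize(r.ravel(), edges)
    return np.array([power.ravel()[idx == i].mean() for i in range(1, nbins + 1)])

# Lyman-break-style comparison: dropout galaxies contribute fluctuation
# power to the red band but not to the band blueward of the break.
rng = np.random.default_rng(1)
blue = rng.normal(size=(64, 64))                  # band blueward of the break
red = blue + 0.5 * rng.normal(size=(64, 64))      # band with extra source signal
ratio = radial_power_spectrum(red) / radial_power_spectrum(blue)
```

In the real analysis the inputs would be source-masked background regions of two adjacent HST bands, and Monte Carlo simulations would translate any excess power into a constraint on the faint-end slope.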
The subsequent analysis aimed at deriving similar constraints on the faint-end slope of the luminosity function at z~7-8 (between 0.64 and 0.77 Gyr after the Big Bang) using deep near-infrared observations obtained with the infrared channel of the Wide Field Camera 3 (WFC3/IR) during the HUDF09 program. At z~8, the quality of the NIR dataset did not permit us to disentangle any light produced by the faint galaxy population from the background noise and spurious signals. On the basis of the drop in the star formation rate density from z~6 to z~7 and beyond, there should be a more relevant contribution of photoionizing photons at z~7 than at z~8, and we expected to be able to detect it. Unfortunately, the analysis at z~7 implied dealing with different detectors that are characterized by systematics that cannot be erased by simply considering the ratio of the power spectra. Up to now the understanding of all WFC3/IR-related problems is not as good as for ACS, and further analysis is needed before the IR dataset can be used for the analysis of surface brightness fluctuations. Since a perfect reduction procedure of the images turned out to be an essential requirement to study any background signal, we performed an advanced data reduction to obtain an improved version of the deepest image of the Universe currently available, the so-called eXtreme Deep Field (XDF). The goal was to create an image that allows us to verify our findings on the faint-end slope of the luminosity function at z~6, since the XDF did not permit us to obtain any constraint on background fluctuations. We started from raw frames obtained from several proposals over 10 years and created hyperbiases and hyperdarks, taking into account all the issues affecting ACS data, including minor ones such as the herringbone effect. We then masked the satellite trails, aligned all the frames, and corrected for the chip-to-chip jump.
We are still working on the dataset; in particular we are focused on modelling and correcting for the electronic ghost. Nevertheless, the preliminary check on photometry suggests a promising, even though small, gain in terms of signal-to-noise of the sources. The effect of surface brightness on the detection of primordial galaxies in deep surveys depends directly on the cosmological surface brightness dimming, which can be expressed in the form (1 + z)^-4 and which affects all sources. The strong dependence of surface brightness dimming on redshift suggests the presence of a selection bias when searching for high-redshift galaxies, i.e. we tend to detect only those galaxies with a high surface brightness. Unresolved knots of emission are not affected by surface brightness dimming, thus allowing us, in principle, to test clumpiness within high-redshift galaxies. We followed an empirical approach based on HST legacy datasets of different depths to study the surface brightness dimming of galaxies. We selected a sample of Lyman-break galaxies at z~4 (1.5 Gyr after the Big Bang) detected in the XDF, HUDF, and Great Observatories Origins Deep Survey (GOODS) datasets and found no significant trend when comparing the total magnitudes measured from images of different depths. We then compared our results to predictions for mock sources derived from Monte Carlo simulations. In particular, considering different surface brightness profiles for the mock galaxies, we were able to rule out all the extended profiles as fits to our data, obtaining a confirmation of the clumpy distribution of the light in high-redshift galaxies. The study of cosmological surface brightness dimming is also important since it could affect our prediction of what the upcoming JWST can observe at higher redshifts, where younger galaxies may exhibit a larger fraction of clumpiness.
Our direct comparison, showing that galaxies detected in GOODS do not become significantly brighter in the HUDF, suggests that most of their light is compact and hints that JWST will likely not find diffuse star-forming components. Finally, to complete the study of high-redshift galaxies we also focused on lower-redshift galaxies that could enter the high-redshift sample due to photometric scatter. In general, interlopers are galaxies at z~1-2 showing colors similar to those of real dropout galaxies due to the 4000 Å break. Even though their colors are likely to place them in the dropout sample, contaminants have a non-negligible detection in the bands blueward of the Lyman break. The preliminary study we performed using the multi-wavelength catalog obtained from CANDELS GOODS-South shows that the number counts of contaminants are significantly different from those of dropout galaxies at z~5-6, suggesting a clear difference in the luminosity functions of the two populations and little or no evolution in the population of interlopers entering the sample at different redshifts. Finally, we used the 3D-HST catalogs for the GOODS-South field, which provided us with photometric data in ground-based, HST, and Spitzer/IRAC bands as well as photometric redshifts. This catalog allowed a study of the interlopers at z~4-5.
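The (1 + z)^-4 dimming law quoted above is easy to quantify; a quick sketch in plain Python, making no claims beyond the formula itself:

```python
import math

def sb_dimming_mag(z: float) -> float:
    """Cosmological surface brightness dimming, (1 + z)^-4, in magnitudes."""
    return 2.5 * math.log10((1.0 + z) ** 4)  # = 10 * log10(1 + z)

# A z ~ 4 Lyman-break galaxy is dimmed by ~7 magnitudes per unit area
# relative to an identical galaxy at z = 0.
print(round(sb_dimming_mag(4.0), 2))  # 6.99
```

This steep scaling is why extended low-surface-brightness components vanish first in deep surveys while compact knots remain detectable.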

    GPU data structures for graphics and vision

    Graphics hardware has in recent years become increasingly programmable, and its programming APIs use the stream processor model to expose massive parallelization to the programmer. Unfortunately, the inherent restrictions of the stream processor model, which the GPU relies on to maintain high performance, often pose a problem when porting CPU algorithms for video and volume processing to graphics hardware. Serial data dependencies which accelerate CPU processing are counterproductive for the data-parallel GPU. This thesis demonstrates new ways of tackling well-known problems of large-scale video/volume analysis. In some instances, we enable processing on the restricted hardware model by re-introducing algorithms from early computer graphics research. On other occasions, we use newly discovered hierarchical data structures to circumvent the random-access read/fixed write restriction that had previously kept sophisticated analysis algorithms from running solely on graphics hardware. For 3D processing, we apply known game graphics concepts such as mip-maps, projective texturing, and dependent texture lookups to show how video/volume processing can benefit algorithmically from being implemented in a graphics API. The novel GPU data structures provide drastically increased processing speed, and lift processing-heavy operations to real-time performance levels, paving the way for new and interactive vision/graphics applications.
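The mip-map concept the abstract leans on can be illustrated on the CPU. This is a schematic numpy sketch of the 2x2-average pyramid (square power-of-two inputs assumed), not the thesis's GPU implementation:

```python
import numpy as np

def build_mipmaps(img):
    """Build a mip-map pyramid by repeated 2x2 averaging."""
    levels = [img.astype(np.float64)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        # Each level halves both dimensions; every texel is the mean
        # of the 2x2 block beneath it in the previous level.
        levels.append((a[0::2, 0::2] + a[1::2, 0::2] +
                       a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
    return levels

pyramid = build_mipmaps(np.arange(16.0).reshape(4, 4))
```

On the GPU each level would be produced by a render-to-texture reduction pass; the top level of such a pyramid yields a global average in O(log n) passes, which is the pattern that lifts gather-style analysis onto data-parallel hardware.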