JPEG steganography with particle swarm optimization accelerated by AVX
Digital steganography aims at hiding secret messages in digital data transmitted over insecure channels. The JPEG format is prevalent in digital communication, and images are often used as cover objects in digital steganography. Optimization methods can improve the properties of images with an embedded secret message but introduce additional computational complexity to their processing. In this work, AVX instructions available in modern CPUs are used to accelerate the data-parallel operations that are part of image steganography with advanced optimizations.
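The abstract names particle swarm optimization as the optimizer being accelerated. A minimal, illustrative PSO velocity/position update (a sketch, not the authors' implementation) might look like this; the array-wide arithmetic here is exactly the kind of data-parallel work SIMD instruction sets such as AVX target:

```python
import numpy as np

def pso_step(positions, velocities, personal_best, global_best,
             w=0.7, c1=1.5, c2=1.5, rng=None):
    """One standard PSO update: inertia plus cognitive and social pulls.

    Every operation below acts elementwise across whole particle arrays,
    which is the data-parallel pattern that AVX can accelerate.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    r1 = rng.random(positions.shape)
    r2 = rng.random(positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (personal_best - positions)
                  + c2 * r2 * (global_best - positions))
    return positions + velocities, velocities
```

Parameter names and default coefficients are generic textbook choices, not values taken from the paper.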
Estimating toner usage with laser electrophotographic printers, and object map generation from raster input image
Accurate estimation of toner usage is an area of ongoing importance for laser electrophotographic (EP) printers. In Part 1, we propose a new two-stage approach in which we first predict, on a pixel-by-pixel basis, the absorptance from printed and scanned pages. We then form a weighted sum of these pixel values to predict overall toner usage on the printed page. The weights are chosen by least-squares regression to toner usage measured with a set of printed test pages. Our two-stage predictor significantly outperforms existing methods that are based on a simple pixel-counting strategy, in terms of both accuracy and robustness of the predictions. In Part 2, we describe a raster-input-based object map generation algorithm (OMGA) for laser EP printers. The object map is utilized in the object-oriented halftoning approach, where different halftone screens and color maps are applied to different types of objects on the page in order to improve the overall printing quality. The OMGA generates the object map directly from the raster input. It solves problems such as an incorrect object map obtained from the page description language (PDL), or the absence of an initial object map from the processing pipeline. A new imaging pipeline for the laser EP printer incorporating both the OMGA and the object-oriented halftoning approach is proposed. The OMGA is a segmentation-based classification approach. It first detects objects according to the edge information, and then classifies the objects by analyzing feature values extracted from the contour and the interior of each object. The OMGA is designed to be hardware-friendly, and can be implemented within two passes through the input document.
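The second stage described in Part 1 — a weighted combination of pixel-level absorptance predictions, with weights fit by least squares against measured toner usage — can be sketched as follows. The histogram features and bin count are hypothetical simplifications; the paper's actual per-pixel weighting may differ:

```python
import numpy as np

def fit_weights(page_absorptances, toner_used, bins=8):
    """Fit one weight per absorptance bin by least-squares regression.

    Each training page is summarized as a histogram of its predicted
    pixel absorptances; weights map bin counts to measured toner usage.
    """
    H = np.stack([np.histogram(a, bins=bins, range=(0.0, 1.0))[0]
                  for a in page_absorptances]).astype(float)
    w, *_ = np.linalg.lstsq(H, toner_used, rcond=None)
    return w

def predict_usage(absorptances, w, bins=8):
    """Predict toner usage for a new page from its absorptance histogram."""
    h, _ = np.histogram(absorptances, bins=bins, range=(0.0, 1.0))
    return float(h @ w)
```

This mirrors the stated idea — regression from pixel statistics to measured usage — rather than the exact feature set of the paper.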
AN APPRAISAL OF THE DEVELOPMENTS IN THE REPRODUCTION OF COLOUR IN COMPUTER PUBLISHING SYSTEMS
The plethora of coloured images that are reproduced in any printed media is facilitated through a variety of related processes that collectively constitute traditional printing techniques. The aim of this research is to appraise recent developments that have occurred within the colour prepress process. The colour prepress process involves the preparation of colour separated halftone films that are used in the production of offset lithographic printing plates. Over recent years the application of desktop publishing technology to perform many of the functions associated with the colour prepress process has raised a number of significant issues and debates.
The reproduction of coloured images in the printed medium demands that certain fundamental criteria are adhered to in order to maintain professional standards of colour fidelity. Such criteria include: successful digital halftone production, the elimination of moiré patterns, and maintaining colour fidelity between the coloured original and the coloured reproduction. This research thesis therefore establishes the principles and techniques involved in the reproduction of colour in a printed medium. It also assesses whether desktop publishing systems are able to facilitate successful professional colour reproduction by examining current debates that challenge the viability of desktop publishing solutions. Current debates concerning desktop publishing solutions are primarily concerned with assessing the value of Adobe PostScript Level 2 solutions, computer interchange spaces for colour matching purposes, and rational supercell techniques that attempt to eliminate moiré patterns. The research also attempts to establish the validity of current debate findings by comparing them with statistics derived from a questionnaire (undertaken as part of the research program) that seeks the opinions of system users on the effectiveness of their individual systems at processing and delivering acceptable colour separations.
Nondestructive evaluation of 3d printed, extruded, and natural polymer structures using terahertz spectroscopy and imaging
Terahertz (THz) spectroscopy and imaging are considered for the nondestructive evaluation (NDE) of various three-dimensional (3D) printed, extruded, and natural polymer structures. THz radiation is the prime candidate for many NDE challenges due to the added benefits of safety, increased contrast and depth resolution, and optical characteristic visualization when compared to other techniques. THz imaging, using a wide-bandwidth pulse-based system, can evaluate the external and internal structure of most nonconductive and nonpolar materials without any permanent effects. NDE images can be created based on THz pulse attributes or a material’s spectroscopic characteristics such as refractive index, attenuation coefficient, or the level of birefringence present within. The evaluation processes for polyethylene gas pipes and amber specimens lack efficient and accurate NDE techniques, while 3D printed polymer structures currently have no standardized NDE methods. The primary focus of this research is to determine and evaluate the use of THz spectroscopy and imaging as an NDE technique for a variety of polymers extruded mechanically and naturally.
Results indicate the refractive indices, attenuation coefficients, and level of birefringence of several 3D printing filaments including copolyester (CPE), nylon, polycarbonate (PC), polylactic acid (PLA), and polypropylene (PP) may change depending on the printing parameters. THz spectroscopy is used to measure the relative permittivity of printed ceramic samples with various sintering temperatures. THz imaging proves to be a successful method to diagnose print head misalignment in ceramic nanoparticle jetting printing processes. Proper diagnosis of surface-level defects on polyethylene (PE) gas pipelines is achieved along with preliminary joint fault imaging and 3D visualization by creating an interactive detailed map of surface-level defects. THz NDE imaging, combined with tailored refractive index matching materials, can construct tomographic images and 3D reconstructions of multi-million-year-old amber. Visual and THz birefringence images are created to determine stress direction within extruded PE and amber. These results suggest that THz spectroscopy and imaging have multiple confirmed uses in the NDE of polymer structures, both mechanically and naturally fabricated.
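As an illustration of how a refractive index is extracted in THz time-domain measurements, the standard slab time-delay relation can be applied: a pulse crossing thickness d of material with index n arrives delayed by Δt = (n − 1)·d/c relative to propagation through air. This is a textbook relation, not a value or method taken from this thesis:

```python
# Speed of light in vacuum, m/s.
C = 299_792_458.0

def refractive_index(delay_s, thickness_m):
    """Index of a slab from the extra THz pulse delay it introduces.

    Inverts dt = (n - 1) * d / c, the standard thick-slab relation
    used in THz time-domain spectroscopy (illustrative only).
    """
    return 1.0 + C * delay_s / thickness_m

# Example: a 2 mm slab delaying the pulse by 4 ps gives n ≈ 1.6.
```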
Cooperative Forestry Research Unit Annual Report 2020
This past year has been a challenging time for the CFRU as we learned to cope and work during a global pandemic. More isolation became the norm, and video conferences replaced the in-person meetings we are accustomed to. Despite these challenges, the CFRU remained active thanks to the dedication of our Interim Program Leader, Aaron Weiskittel, and his support staff, Leslee Canty-Noyes and Meg Fergusson. Their efforts to adapt to changing circumstances ensured that research advanced, meetings successfully carried on, spruce budworm L2 sampling was conducted, MASN sites were established, and the daily administrative functions of the CFRU continued.
Automating Global Geospatial Data Set Analysis : Visualizing flood disasters in the cities of the Global South
Flooding is the most devastating natural hazard, affecting tens of millions of people yearly and causing billions of US dollars in damages globally. The people most affected by flooding globally are those with a high level of everyday vulnerability and limited resources for flood protection and recovery. Geospatial data from the Global South is severely lacking, and geospatial proficiency needs to be improved at a local level so that geospatial data and data analysis can be efficiently utilized in disaster risk reduction schemes and urban planning in the Global South. This thesis focuses on the use of automated global geospatial dataset analysis in disaster risk reduction in the Global South by using the Python programming language to produce an automated flood analysis and visualization model. In this study, the automated model was developed and tested in two highly relevant cases: in the city of Bangkok, Thailand, and in the urban area of Tula de Allende, Mexico.
The results of the thesis show that, with minimal user interaction, the automated flood model ingests flood extent and depth data produced by ICEYE, a global population estimation raster produced by the German Aerospace Center (DLR), and OpenStreetMap (OSM) data; performs multiple relevant analyses of these data; and produces an interactive map highlighting the severity and effects of a flooding event. The automated flood model performs consistently and accurately while producing key statistics and standardized visualizations of flooding events, which offer first responders a very fast first estimate of the scale of a flooding event and help plan an appropriate response anywhere around the globe.
Global geospatial data sets are often created to examine large-scale geographical phenomena; however, the results of this thesis show that they can also be used to analyze detailed local-level phenomena when paired with supporting data. The advantage of using global geospatial data sets is that, when sufficiently accurate and precise, they remove the most time-consuming part of geospatial analysis: finding suitable data. Fast reaction is of utmost importance in the first hours of a natural hazard like flooding; thus, automated analysis produced on a global scale could significantly help international humanitarian aid and first responders. Using an automated model also standardizes the results, removing human error and interpretation and enabling the accurate comparison of historical flood data over time.
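The core impact calculation such a model performs — overlaying a flood-depth raster on a population raster to count affected people — can be sketched as follows. The arrays are hypothetical stand-ins for the ICEYE flood product and the DLR population raster, assumed here to be co-registered on the same grid:

```python
import numpy as np

def flood_impact(depth_m, population):
    """Summarize a flood event from co-registered depth and population grids.

    Cells with positive water depth are treated as flooded; population
    in those cells is summed to estimate people affected.
    """
    flooded = depth_m > 0.0
    return {
        "flooded_cells": int(flooded.sum()),
        "people_affected": float(population[flooded].sum()),
        "mean_depth_m": float(depth_m[flooded].mean()) if flooded.any() else 0.0,
    }
```

A production pipeline would additionally handle raster reprojection, nodata masks, and OSM infrastructure overlays, as the thesis describes.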
Influence of wavelength and accumulated fluence at picosecond laser-induced surface roughening of copper on secondary electron yield
Ultrashort-pulse laser processing of copper is performed in air to reduce the secondary electron yield (SEY). By UV (355 nm), green (532 nm), and IR (1064 nm) laser-light induced surface modification, this study investigates the influence of the most relevant experimental parameters, such as laser power, scanning speed, and scanning line distance (represented as accumulated fluence), on the ablation depth, surface oxidation, topography, and ultimately on the SEY. Increasing the accumulated laser fluence results in a gradual change from a Cu₂O- to a CuO-dominated surface with deeper micrometer trenches, a higher density of redeposited surface particles from the plasma phase, and a reduced SEY. While the surface modifications are less pronounced for IR radiation at low accumulated fluence (<1000 J/cm²), analogous results are obtained for all wavelengths when reaching the nonlinear absorption regime, for which the SEY maximum converges to 0.7. Furthermore, independent of the extent of the structural transformations, electron-induced surface conditioning at 250 eV allows a reduction of the SEY maximum below unity at doses of 5×10⁻⁴ C/mm². Consequently, optimization of processing parameters for application in particle accelerators can be obtained for a sufficiently low SEY at controlled ablation depth and surface particle density, which are factors that limit the surface impedance and the applicability of the material processing for ultrahigh vacuum systems. The relations between processing parameters and surface features will provide guidance in treating the surface of vacuum components, especially beam screens of selected magnets of the Large Hadron Collider or of future colliders.
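A common way laser power, scanning speed, and scanning line distance collapse into a single accumulated-fluence parameter in raster-scan processing is as energy deposited per unit area, F = P / (v·d). This is a standard definition from the laser-processing literature, offered as a plausible reading; the study's exact parameterization may differ:

```python
def accumulated_fluence(power_w, speed_m_s, line_distance_m):
    """Accumulated fluence in J/m² for a raster scan.

    Average power P spread over the area swept per unit time (speed v
    times line spacing d). Illustrative, standard-definition sketch.
    """
    return power_w / (speed_m_s * line_distance_m)

# e.g. 5 W at 100 mm/s with 10 µm line spacing:
# 5 / (0.1 * 1e-5) = 5e6 J/m² = 500 J/cm²
```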
Cane Creek and Little Cane Creek Watershed management plan to address E. coli impairment, Oconee County, SC
This Watershed Management Plan intends to address the recurring water-use impairments from bacteria loading in the Cane and Little Cane Creek Watershed. This watershed is the largest and most urban of the drainage basins to Lake Keowee, and a priority in addressing the sustained quality of that reservoir's use for supplying drinking water and recreation. This plan may be considered unique in that it proposes both short-term solutions and long-term strategies for the protection of environmental quality and its influence on quality of life in Oconee County.
Spatiotemporal snowmelt patterns within a high Arctic landscape, with implications for flora and fauna
Snow conditions are important drivers of the distribution and phenology of Arctic flora and fauna, but the extent and effects of local variation in snowmelt are still inadequately studied. We analyze snowmelt patterns within the Zackenberg valley in northeast Greenland. Drawing on landscape-level snowmelt dates and meteorological data from a central climate station, we model snowmelt trends during 1998-2014. We then use time-lapse photographs to examine consistency in spatiotemporal snowmelt patterns during 2006-2014. Finally, we use monitoring data on arthropods and plants for 1998-2014 to investigate how snowmelt date affects the phenology of Arctic organisms. Despite large interannual variation in snowmelt timing, we find consistency in the relative order of snowmelt among sites within the landscape. With a slight overall advancement in snowmelt during the study period, early-melting locations have advanced more than late-melting ones. Individual organism groups differ greatly in how their phenology shifts with snowmelt, with much variance attributable to variation in life history and diet. Overall, we note that local variation in snowmelt patterns may drive important ecological processes, and that more attention should be paid to variability within landscapes. Areas optimal for a given taxon vary between years, thereby creating spatial structure in a seemingly uniform landscape.