A Computational Model of Networked Small-Scale Fuel Synthesis Demonstrating Greater Production Flexibility and Specificity
The rapid pace of industrial change over the past hundred years has led to any number of paradigm shifts in the way business is conducted and technologies are applied, but economies of large scale have persisted in the energy sector. In an age of automation and mass-production of small units, however, complex networking of many small energy systems can permit novel application of established technologies. This dissertation explores how established fuel synthesis technologies might behave in an automated network in which familiar units are arranged in unfamiliar ways. The flexibility afforded by automation and small scale operation allows for potentially complementary means of exploiting the fungible nature of hydrocarbon resources. Beyond any benefits of small-scale incurred from mass production and learning, fuel synthesis is a process with sensitivities to input streams that a network could exploit in a nuanced way. The completed work demonstrates that a network of small-scale fuel synthesis reactors and thermal crackers, based on current industrial practices at large monolithic scale, can be networked to dramatically sharpen the chemical spectrum they produce. In order to study the behavior of such a network in ways that are unavailable in current software, a hierarchical numerical modeling code was developed to offer greater flexibility to nest and optimize network configurations within network configurations, reflecting the modularity of the networks it is meant to simulate. This new code is capable of simulating aggressively numerically constrained networks, dynamically substituting various configurations while optimizing them across user-specified variables. Various weighting schemes were developed to facilitate more rapid convergence to a numerical solution so that highly constrained recycling schemes could be reconciled to a steady state that would produce the specified output spectrum. 
Modular units were coded to simulate the essential properties of real processes and technologies, with close attention paid to the sensitivity of these processes to input conditions, so that these units could be assembled in various configurations and subjected to user-specified constraints. Coded modules were designed under the principle that these individual units need not be custom-made or technologically ahead of their time; the benefits explored by the network simulations arise not from dramatically upgrading the processes being simulated, but from directing and redirecting the chemical streams subject to those processes so as to tailor the outcome to the desired product. This principle was applied to chemical separation in an analytical framework in order to derive how unremarkable separators might be networked to produce remarkable precision of separation. Such precision is important because the direction and redirection of chemical streams is predicated on the ability to select the destination of a particular chemical. The effect of networking fuel synthesis reactors and thermal crackers was studied for unidirectional flows in order to understand how repeated application of these units at smaller scale sharpens the spectrum relative to a single large-scale application. These fuel synthesis reactors and thermal crackers were also configured in aggressively recycled networks, imposing more severe constraints on the output spectrum. This work demonstrated that fuel synthesis at industrial output scales need not operate in monolithic units and can benefit dramatically from judicious networking, to the point that a network of units that would otherwise have produced a broad spectrum of chemical flavors can be configured to produce only a single user-specified output chemical.
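The recycle-network reconciliation described above can be sketched as a fixed-point iteration on a single recycle loop. This is a toy single-species mass balance with assumed conversion and recycle fractions, not the dissertation's hierarchical code:

```python
# Minimal sketch (hypothetical units): a reactor converting a fraction of its
# feed, a separator returning most unconverted material, iterated until the
# recycle loop reconciles to the steady state a constrained network requires.

def reactor(feed, conversion=0.4):
    """Convert a fixed fraction of the input stream to product."""
    return {"product": feed * conversion, "unconverted": feed * (1 - conversion)}

def solve_recycle(fresh_feed, conversion=0.4, recycle_frac=0.95,
                  tol=1e-9, max_iter=1000):
    """Successive substitution: total reactor feed = fresh feed + recycle."""
    total_feed = fresh_feed
    for _ in range(max_iter):
        out = reactor(total_feed, conversion)
        new_feed = fresh_feed + recycle_frac * out["unconverted"]
        if abs(new_feed - total_feed) < tol:
            return new_feed, out["product"]
        total_feed = new_feed
    raise RuntimeError("recycle loop did not converge")

feed, product = solve_recycle(100.0)
```

At steady state the analytic balance gives feed = fresh / (1 - r(1 - c)), so the iteration converges whenever r(1 - c) < 1; the weighting schemes mentioned in the abstract exist precisely because heavily recycled networks push that contraction factor toward 1.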
Idealized computational models for auditory receptive fields
This paper presents a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties that enable invariance of receptive field responses under natural sound transformations and ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters, as well as a novel family of generalized Gammatone filters with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions.
When applied to the definition of a second layer of receptive fields over a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time, or the combination of a time-causal generalized Gammatone filter over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions.
It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing, and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. (Comment: 55 pages, 22 figures, 3 tables.)
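The standard Gammatone filter that the framework re-derives axiomatically has a well-known closed form; a minimal sketch, with illustrative parameter values not taken from the paper:

```python
import numpy as np

# Standard Gammatone impulse response (illustrative parameters):
#   g(t) = t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t)  for t >= 0
# fc: center frequency (Hz), b: bandwidth parameter (Hz), n: filter order.
def gammatone(t, fc=1000.0, b=125.0, n=4):
    t = np.asarray(t, dtype=float)
    g = t**(n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return np.where(t >= 0, g, 0.0)  # time-causal: zero response before t = 0

fs = 16000.0                      # sampling rate (Hz)
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of support
ir = gammatone(t)
```

The generalized family introduced in the paper adds degrees of freedom beyond (n, b), which is what yields the spectral-selectivity versus temporal-delay trade-offs the abstract mentions.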
Worldwide Infrastructure for Neuroevolution: A Modular Library to Turn Any Evolutionary Domain into an Online Interactive Platform
Across many scientific disciplines, there has emerged an open opportunity to utilize the scale and reach of the Internet to collect scientific contributions from scientists and non-scientists alike. This process, called citizen science, has already shown great promise in the fields of biology and astronomy. Within the fields of artificial life (ALife) and evolutionary computation (EC), experiments in collaborative interactive evolution (CIE) have demonstrated the ability to collect thousands of experimental contributions from hundreds of users across the globe. However, such collaborative evolutionary systems can take nearly a year to build with a small team of researchers. This dissertation introduces a new developer framework enabling researchers to easily build fully persistent online collaborative experiments around almost any evolutionary domain, thereby reducing the time to create such systems to weeks for a single researcher. To add collaborative functionality to any potential domain, this framework, called Worldwide Infrastructure for Neuroevolution (WIN), exploits an important unifying principle among all evolutionary algorithms: regardless of the overall methods and parameters of the evolutionary experiment, every individual created has an explicit parent-child relationship, wherein one individual is considered the direct descendant of another. This principle alone is enough to capture and preserve the relationships and results of a wide variety of evolutionary experiments, while allowing multiple human users to contribute meaningfully. The WIN framework is first validated through two experimental domains: image evolution, and a new two-dimensional virtual creature domain, Indirectly Encoded SodaRace (IESoR), which is shown to produce a visually diverse variety of ambulatory creatures.
Finally, an Android application built with WIN, called filters, allows users to interactively evolve custom image effects to apply to personalized photographs, thereby introducing the first CIE application available for any mobile device. Together, these collaborative experiments and the new mobile application establish a comprehensive new platform for evolutionary computation that can change how researchers design and conduct citizen science online.
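The parent-child principle described above can be sketched as a minimal lineage record. This is a hypothetical illustration of the unifying idea, not the WIN framework's actual API:

```python
import itertools
from dataclasses import dataclass, field

_ids = itertools.count()  # globally unique ids, as a persistent store would assign

@dataclass
class Individual:
    """One evolved individual; only genome + parent link are needed to
    reconstruct the ancestry tree of any evolutionary experiment."""
    genome: object
    parent: "Individual | None" = None
    id: int = field(default_factory=lambda: next(_ids))

    def lineage(self):
        """Walk back through ancestors to the root individual."""
        node, chain = self, []
        while node is not None:
            chain.append(node.id)
            node = node.parent
        return chain

root = Individual(genome=[0.0])
child = Individual(genome=[0.1], parent=root)
grandchild = Individual(genome=[0.2], parent=child)
```

Because only the parent link is assumed, the same record works whether the individuals came from neuroevolution, image evolution, or the IESoR creature domain, which is the portability the abstract claims.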
Okumura-Hata Propagation Model Tuning Through Composite Function of Prediction Residual
In this paper, an innovative approach based on a composite function of the prediction residual is presented for tuning the Okumura-Hata propagation model in the 800-900 MHz GSM frequency band. The study is based on empirical measurements conducted at the University of Uyo (UNIUYO) town campus, located at latitude 5.042976 and longitude 7.919046. The proposed path loss tuning approach is compared with an RMSE-based tuning approach. According to the results, the Okumura-Hata model tuned with the composite function of the prediction residual has the lowest RMSE value of 2.164, the highest coefficient of determination (R^2) value of 0.967, and the highest prediction accuracy of 98.64%. The RMSE-tuned Okumura-Hata model, on the other hand, has a higher RMSE value of 5.3, a lower R^2 value of 0.814, and a lower prediction accuracy of 96.87%. In all three performance measures used, the composite function of prediction residual tuning approach performed better than the RMSE-based tuning approach. However, in path loss tuning studies an RMSE value below 7 dB is considered acceptable for urban areas; the RMSE-based approach therefore yielded a tuned model with an acceptable RMSE value, but with lower prediction accuracy than the model produced by the composite function of prediction residual tuning approach.
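The classical urban Okumura-Hata formula and a plain residual-offset tuning step can be sketched as follows. The measurement data here are synthetic, and the paper's composite-function tuning is more elaborate than this simple mean-residual correction:

```python
import math

# Classical Okumura-Hata urban path loss (dB).
# f_mhz: carrier frequency, h_b: base station height (m),
# h_m: mobile height (m), d_km: distance (km).
def hata_urban(d_km, f_mhz=900.0, h_b=30.0, h_m=1.5):
    # Mobile-antenna correction for a small/medium city.
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_b)
            - a_hm + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d_km))

def tune_by_mean_residual(measured, distances_km):
    """Shift the model by the mean prediction residual over drive-test data."""
    residuals = [m - hata_urban(d) for m, d in zip(measured, distances_km)]
    offset = sum(residuals) / len(residuals)
    return lambda d: hata_urban(d) + offset

dists = [0.5, 1.0, 2.0]
meas = [hata_urban(d) + 4.0 for d in dists]  # synthetic data with a 4 dB bias
tuned = tune_by_mean_residual(meas, dists)
```

An offset correction of this kind drives the mean residual to zero; the RMSE and R^2 comparisons in the paper then quantify how much of the remaining scatter each tuning strategy removes.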
Scene-Dependency of Spatial Image Quality Metrics
This thesis is concerned with the measurement of spatial imaging performance and the modelling of spatial image quality in digital capturing systems. Spatial imaging performance and image quality relate to the objective and subjective reproduction of luminance contrast signals by the system, respectively; they are critical to overall perceived image quality.
The Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) describe the signal (contrast) transfer and noise characteristics of a system, respectively, with respect to spatial frequency. They are both, strictly speaking, only applicable to linear systems since they are founded upon linear system theory. Many contemporary capture systems use adaptive image signal processing, such as denoising and sharpening, to optimise output image quality. These non-linear processes change their behaviour according to characteristics of the input signal (i.e. the scene being captured). This behaviour renders system performance “scene-dependent” and difficult to measure accurately. The MTF and NPS are traditionally measured from test charts containing suitable predefined signals (e.g. edges, sinusoidal exposures, noise or uniform luminance patches). These signals trigger adaptive processes at uncharacteristic levels since they are unrepresentative of natural scene content. Thus, for systems using adaptive processes, the resultant MTFs and NPSs are not representative of performance “in the field” (i.e. capturing real scenes).
Spatial image quality metrics for capturing systems aim to predict the relationship between MTF and NPS measurements and subjective ratings of image quality. They cascade both measures with contrast sensitivity functions that describe human visual sensitivity with respect to spatial frequency. The most recent metrics designed for adaptive systems use MTFs measured using the dead leaves test chart that is more representative of natural scene content than the abovementioned test charts. This marks a step toward modelling image quality with respect to real scene signals.
This thesis presents novel scene-and-process-dependent MTFs (SPD-MTF) and NPSs (SPD-NPS). They are measured from imaged pictorial scene (or dead leaves target) signals to account for system scene-dependency. Further, a number of spatial image quality metrics are revised to account for capture system and visual scene-dependency: their MTF and NPS parameters were replaced with SPD-MTFs and SPD-NPSs, and their standard visual functions were replaced with contextual detection (cCSF) or contextual discrimination (cVPF) functions. In addition, two novel spatial image quality metrics are presented (the log Noise Equivalent Quanta (NEQ) and Visual log NEQ) that implement SPD-MTFs and SPD-NPSs.
The metrics, SPD-MTFs and SPD-NPSs were validated by analysing measurements from simulated image capture pipelines that applied either linear or adaptive image signal processing. The SPD-NPS measures displayed little evidence of measurement error, and the metrics performed most accurately when they used SPD-NPSs measured from images of scenes. The benefit of deriving SPD-MTFs from images of scenes was traded off, however, against measurement bias; most metrics performed most accurately with SPD-MTFs derived from dead leaves signals. Implementing the cCSF or cVPF did not increase metric accuracy.
The log NEQ and Visual log NEQ metrics proposed in this thesis were highly competitive, outperforming metrics of the same genre. They were also more consistent than the IEEE P1858 Camera Phone Image Quality (CPIQ) metric when their input parameters were modified. The advantages and limitations of all performance measures and metrics are discussed, as well as their practical implementation and relevant applications.
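The noise equivalent quanta calculation that the log NEQ metrics build on can be sketched briefly. The MTF and NPS curves below are assumed shapes for illustration, not measurements from the thesis, and scaling constants are omitted:

```python
import numpy as np

# NEQ(u) is proportional to MTF(u)^2 / NPS(u): signal transfer squared,
# divided by noise power, as a function of spatial frequency u.
def neq(mtf, nps, eps=1e-12):
    mtf = np.asarray(mtf, dtype=float)
    nps = np.asarray(nps, dtype=float)
    return mtf**2 / np.maximum(nps, eps)  # guard against zero noise power

u = np.linspace(0.01, 0.5, 50)     # spatial frequency (cycles/pixel)
mtf = np.exp(-3.0 * u)             # assumed smooth MTF fall-off
nps = 1e-4 * (1.0 + u)             # assumed mildly rising noise power
q = neq(mtf, nps)
log_neq_score = float(np.mean(np.log10(q)))  # simple log-domain summary value
```

Substituting SPD-MTFs and SPD-NPSs for the `mtf` and `nps` arrays is what makes such a metric scene-and-process-dependent; the visual variants further weight the curves by a contrast sensitivity function before summarising.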
RFID-Integrated Retail Supply Chain Services: Lessons Learnt From The Smart Project
This paper proposes a service-oriented architecture that utilizes the automatic, unique identification capabilities of RFID technology, data stream management systems, and web services to support RFID-integrated supply chain services. During the lifespan of the SMART project (IST-2005, FP6), two services were deployed, supporting dynamic pricing of fresh products and management of promotion events. The two services have been field-tested in three retail stores in Greece, Ireland, and Cyprus. The valuable lessons learnt, concerning RFID readability challenges, consumer privacy, customer and store staff health concerns, investment cost, and so on, are reported to provide guidance to future developers of RFID-integrated supply chain services, as well as to set an agenda for academic research.
Human Factors As A Parameter For Improving Interface Usability And User Satisfaction
The endeavour to optimize HCI should integrate a wide array of user characteristics that have an effect throughout users' interactions with a system. Human factors such as cognitive traits and current psychological state are undoubtedly significant in shaping the perceived and objective quality of interactions with a system. The research presented in this paper focuses on identifying human factors that relate to users' performance in Web applications involving information processing, and a framework of personalization rules expected to increase users' performance is depicted. The empirical results presented are derived from both learning and commercial environments: in the e-learning case personalization was beneficial, while interaction with a commercial site needs to be further investigated due to the implicit character of information processing on the Web.
Acta Universitatis Sapientiae - Electrical and Mechanical Engineering
The series Electrical and Mechanical Engineering publishes original papers and surveys in various fields of electrical and mechanical engineering.