
    A Simple Optimum-Time FSSP Algorithm for Multi-Dimensional Cellular Automata

    The firing squad synchronization problem (FSSP) on cellular automata has been studied extensively for more than forty years, and a rich variety of synchronization algorithms have been proposed not only for one-dimensional arrays but also for two-dimensional arrays. In the present paper, we propose a simple recursive-halving based optimum-time synchronization algorithm that can synchronize any rectangular array of size m × n with a general at one corner in m + n + max(m, n) - 3 steps. The algorithm is a natural extension of the well-known FSSP algorithms proposed by Balzer [1967], Gerken [1987], and Waksman [1966], and it can easily be extended to three-dimensional arrays, and even to multi-dimensional arrays with a general at any position of the array. Comment: In Proceedings AUTOMATA&JAC 2012, arXiv:1208.249
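The stated time bound can be checked with a small helper. This is only an illustration of the formula from the abstract, not the cellular-automaton algorithm itself:

```python
def fssp_optimum_steps(m: int, n: int) -> int:
    """Optimum synchronization time claimed in the abstract for an
    m x n rectangular array with the general at one corner:
    m + n + max(m, n) - 3 steps."""
    return m + n + max(m, n) - 3

# For a one-dimensional array (m = 1) this reduces to the classical
# optimum time 2n - 2 of the Balzer/Gerken/Waksman algorithms.
assert fssp_optimum_steps(1, 7) == 2 * 7 - 2
print(fssp_optimum_steps(4, 6))  # 4 + 6 + 6 - 3 = 13
```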

    On relations between arrays of processing elements of different dimensionality

    We examine the power of d-dimensional arrays of processing elements with respect to a special kind of structural complexity. In particular, simulation techniques are shown which allow the dimension to be reduced at an increased cost of time only. Conversely, it is not possible to regain the speed by increasing the dimension. Moreover, we demonstrate that increasing the computation time (just by a constant factor) can have a more favorable effect than increasing the dimension (arbitrarily).
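As a rough illustration of why reducing the dimension costs time, the sketch below (hypothetical, not taken from the paper) linearizes d-dimensional cell coordinates onto a one-dimensional array and shows the worst-case stretch between formerly adjacent cells, which is where the slowdown comes from:

```python
from typing import Tuple

def linearize(coord: Tuple[int, ...], dims: Tuple[int, ...]) -> int:
    """Map a d-dimensional cell coordinate onto a single index of a
    one-dimensional array (row-major order); each d-dimensional cell
    is hosted by exactly one 1-D processing element."""
    index = 0
    for c, d in zip(coord, dims):
        index = index * d + c
    return index

def worst_case_stretch(dims: Tuple[int, ...]) -> int:
    """Illustrative overhead: neighbours along the first axis of the
    d-dimensional array end up this many positions apart in the 1-D
    embedding, so one d-dimensional step may cost up to this many
    1-D communication steps."""
    stretch = 1
    for d in dims[1:]:
        stretch *= d
    return stretch

dims = (4, 5, 6)                      # a 4 x 5 x 6 array
print(linearize((1, 2, 3), dims))     # (1*5 + 2)*6 + 3 = 45
print(worst_case_stretch(dims))       # axis-0 neighbours are 30 apart
```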

    Drop size-dependent chemical composition in clouds and fogs

    December 2001. Also issued as the author's dissertation (Ph.D.), Colorado State University, 2002. Includes bibliographical references. Cloud drop composition varies as a function of drop size. More sophisticated atmospheric chemistry models predict this, and observations at many locations around the world by multiple techniques confirm it. This variation can influence the cloud processing of atmospheric species. Aqueous-phase reaction and atmospheric removal rates for scavenged species, among other processes, can be affected by drop size-dependent composition. Inferences about these processes drawn from single bulk cloud composition measurements can be misleading, according to observations obtained using cloud water collectors that separate drops into two or more size-resolved fractions. Improved measurements of size-dependent drop composition are needed to further examine these and related issues. Two active multi-stage cloud water collectors were developed for sampling super-cooled drops in mixed-phase clouds and warm cloud drops, respectively. Both use the principle of cascade inertial impaction to separate drops into three fractions (super-cooled drop collector) and five fractions (warm cloud drop collector). While calibration suggests there is more drop overlap between stages than desired, consistently different drop fractions are still collected. FROSTY, the super-cooled drop collector, has been used successfully to obtain size-resolved drop composition information during two field campaigns in Colorado. While the data are limited, FROSTY's field performance appears to be reasonably consistent during individual cloud events, although not predictable based solely upon its collection efficiency curves. Additional factors must be considered in evaluating its performance in future campaigns. Nevertheless, obtaining consistent size-resolved drop composition information from super-cooled clouds was not previously possible. Field data indicate that the warm cloud collector, the CSU 5-Stage, is able to resolve variations in drop size-dependent composition not discernible with the two-stage size-fractionating Caltech Active Strand Cloud water Collector (sf-CASCC). Field performance evaluations suggest that the 5-Stage and the sf-CASCC compare well to each other over the range of sampling conditions experienced. Both collectors' performance differs from measurements made by the Caltech Active Strand Cloud water Collector #2 (CASCC2) in some specific sampling conditions, but otherwise agreement between the three collectors is good. Where the sf-CASCC indicates little drop variation in an orographic cloud study at Whiteface Mtn., NY, the 5-Stage indicates that up to a factor of two difference may exist between the maximum and minimum drop concentrations for the major inorganic ions (ammonium, nitrate, and sulfate). The sf-CASCC data suggest that typically a factor of 3 - 5 difference exists between large and small drop species' concentrations in radiation fogs measured in Davis, CA. Concurrent 5-Stage samples suggest the actual variation may be up to at least a factor of 4 - 5 greater, and that the smallest drops (approximately < 11 ”m in diameter) are principally responsible for the strong observed concentration gradients between sizes. While the data are limited, the 5-Stage's results are consistent for all of the sample sets obtained during both field campaigns.
Data from the 5-Stage emphasize that cloud drop chemical composition cannot be considered separately from the sampled cloud's microphysics and dynamics; interpreting the 5-Stage's results necessarily draws upon both. During the Davis campaign, additional measurements were performed to investigate species removal from the atmosphere via drop deposition and gas/liquid partitioning in fog. Although subject to confounding effects, these investigations benefited from the additional insight the 5-Stage data provided into the processes occurring. In particular, 5-Stage data and between-fog aerosol measurements suggest that deposition of the largest fog drops resulted in the relative removal of coarse-mode aerosol particles from the atmosphere. 5-Stage data and gas-phase measurements suggest the ammonia/ammonium system may not be at equilibrium and provide some information about the nitrous acid/nitrite system not otherwise available. The 5-Stage has the potential to be a valuable tool in investigating the effects of fog and fog processing on the fate of ambient species. Sponsored by the USEPA under grant NCERQA R82-3979-010 and STAR Fellowship U-915364; by the NSF under grants ATM-9509596, ATM-9712603, and ATM-9980540; and by the San Joaquin Valleywide Air Pollution Study Agency.

    LASER Tech Briefs, Winter 1994

    Topics include: Electronic Components and Circuits, Electronic Systems, Physical Sciences, Materials, Computer Programs, Mechanics, Machinery, Fabrication Technology, Mathematics and Information Sciences, Life Sciences, and Books and Reports.

    Ice Crystal Classification Using Two Dimensional Light Scattering Patterns

    An investigation is presented into methods of characterising cirrus ice crystals from in-situ light scattering data. A database of scattering patterns from modelled crystals was created using the Ray Tracing with Diffraction on Facets (RTDF) model from the University of Hertfordshire, to which experimental and modelled data were fitted. Experimental data were gathered in the form of scattering patterns from ice analogue crystals with optical properties and hexagonal symmetry similar to ice, yet stable at room temperature. A laboratory rig is described which images scattering patterns from single particles while allowing precise control over the orientation of the particle with respect to the incident beam. Images of scattering patterns were captured and compared to patterns from modelled crystals with similar geometry. Methods for introducing particles en masse and individually to the Small Ice Detector (SID) instruments are discussed, with particular emphasis on the calibration of the gain of the SID-2 instrument. The variation in gain between detector elements is found to be significant, variable over the life of the detector, and different for different detectors. Fitting was performed by comparing test scattering patterns (either modelled or experimental) to the reference database. Representations of the two-dimensional scattering patterns by asymmetry factor, moment invariants, azimuthal intensity patterns (AIP), and the Fourier transform of the AIP are compared for fitting accuracy. Direct comparison of the AIP is found to be the most accurate method. Increased resolution of the AIP is shown to improve the fitting substantially. Case studies are presented for the fitting of two ice analogue crystals to the modelled database. Fitting accuracy is found to be negatively influenced by small amounts of surface roughness and detail not currently considered by the RTDF model. Fitting of in-situ data gathered by the SID-3 instrument during the HALO 02 campaign at the AIDA cloud chamber in Germany is presented and discussed. Saturation of detector pixels is shown to affect pattern fitting. In-flight operation of the instrument involves varying the gain of the whole detector (as opposed to individual elements) in order to obtain unsaturated images of both large and small particles.
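As a rough illustration of the direct AIP comparison described above, the toy sketch below collapses a two-dimensional scattering pattern into an azimuthal intensity profile and picks the closest match from a reference database; the binning scheme and the sum-of-squared-differences metric are illustrative assumptions, not the SID processing chain:

```python
import numpy as np
from typing import List

def azimuthal_intensity_profile(pattern: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Collapse a 2-D scattering pattern into an azimuthal intensity
    profile (AIP): mean intensity per azimuth bin about the centre."""
    h, w = pattern.shape
    y, x = np.indices((h, w))
    azimuth = np.arctan2(y - h / 2.0, x - w / 2.0)            # -pi .. pi
    bins = ((azimuth + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    sums = np.bincount(bins.ravel(), weights=pattern.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return sums / np.maximum(counts, 1)

def best_match(test: np.ndarray, reference_db: List[np.ndarray]) -> int:
    """Index of the reference pattern whose AIP is closest to the test
    pattern's AIP (sum of squared differences)."""
    test_aip = azimuthal_intensity_profile(test)
    errors = [np.sum((azimuthal_intensity_profile(ref) - test_aip) ** 2)
              for ref in reference_db]
    return int(np.argmin(errors))
```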

    On the reconstruction of three-dimensional cloud fields by synergistic use of different remote sensing data

    The objective of this study was to assess whether new cloud datasets, namely horizontal fields of integrated cloud parameters and transects of cloud profiles becoming available from current and future satellites such as MODIS and CloudSAT as well as EarthCARE, will allow the reconstruction of three-dimensional cloud fields. Because three-dimensional measured cloud fields do not exist, surrogate cloud fields were used to develop and test reconstruction techniques. In order to answer the question of whether surrogate cloud fields can represent real cloud fields, and to evaluate potential constraints for cloud field reconstruction, statistics of surrogate cloud fields were compared to statistics of various remote sensing retrievals. It turned out that, except for the cloud droplet effective radius, which is too low, the cloud parameters are in line with parameters derived from measurements. The reconstruction approach is divided into two parts. The first deals with the reconstruction of the cloud fields. Three techniques of varying complexity are presented, constraining the reconstruction by measurements to different degrees. Whereas the first applies only information from a satellite radiometer, the other two also constrain the retrieval by profile information measured within the domain. Comparing the reconstruction quality of the approaches, no algorithm performs best for all cloud fields. This might be ascribed to the liquid water content profiles of the surrogate cloud fields being close to their adiabatic reference: the first scheme's assumption of adiabatic liquid water content profiles already yields adequate estimates, and the additional profile information does not improve the reconstruction. The second part addresses the reconstruction quality by comparing radiative transfer parameters describing photon path statistics as well as reflectances. To this end, three-dimensional radiative transfer simulations with a Monte Carlo code were carried out for the surrogate cloud fields as well as for the reconstructed cloud fields. It was assumed that deviations between a parameter simulated for the reconstructed cloud field and for the surrogate cloud field are smaller when the reconstruction is more accurate. For the parameters describing photon paths, it was found that only deviations of geometrical path length statistics reflect the reconstruction quality to a certain degree. Deviations of other parameters such as photon penetration depth allow neither an assessment of local differences in the reconstruction quality of an individual scheme nor an inference of the most appropriate reconstruction scheme. The differences in reflectances also do not enable an evaluation of reconstruction quality: effects such as horizontal photon transport weaken the relations between the microphysical and optical properties of a column and its reflectances, preventing insight into the local accuracy of the reconstruction. In order to address these effects, weighting grids of various complexity, derived from photon path properties, were used to weight deviations of cloud properties when analyzing the relationships. Unfortunately, the weighting grids do not increase the explained variance. Additionally, the sensitivity of the results to the model set-up, namely the spatial resolution of the cloud fields as well as the simplification or neglect of ancillary parameters, was analyzed.
Though one would expect the relationships between deviations of cloud parameters and deviations of reflectances to strengthen as column size increases, owing to more reliable sampling and reduced inter-column photon transport, there is no indication of a resolution at which an assessment of the reconstruction quality by means of reflectance deviations becomes feasible. It has also been shown that an inappropriate treatment of aerosols in the radiative transfer simulations imposes an error comparable in magnitude to the reflectance differences caused by inaccurate cloud field reconstruction. This is especially the case when clouds are located in the boundary layer of the aerosol model. Consequently, appropriate aerosol models should be applied in the analysis. Possibly due to the low surface reflection and the high cloud optical depths, the representation of the surface reflection function seems to be of minor importance. Summarizing the results, differences in radiative transfer do not allow an assessment of cloud field reconstruction quality. In order to accomplish the task of cloud field reconstruction, the reconstruction part could be constrained by information from additional measurements. Observational geometries enabling the use of tomographic methods, and the application of additional wavelengths for validation, might help too.
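The adiabatic-profile assumption used by the simplest reconstruction scheme can be sketched as follows. This is a minimal illustration only: the adiabatic liquid water content lapse rate and the vertical grid spacing are assumed values, not taken from the study.

```python
import numpy as np

def adiabatic_lwc_profile(lwp_g_m2: float,
                          gamma_ad: float = 2.0e-3,   # assumed LWC lapse rate, g m^-3 per m
                          dz: float = 20.0) -> np.ndarray:
    """Reconstruct a vertical liquid water content profile from the
    liquid water path alone, assuming LWC increases linearly with
    height above cloud base: LWC(z) = gamma_ad * z.

    Integrating gives LWP = gamma_ad * H**2 / 2, so the cloud depth is
    H = sqrt(2 * LWP / gamma_ad)."""
    depth = np.sqrt(2.0 * lwp_g_m2 / gamma_ad)   # cloud geometric depth in m
    z = np.arange(0.0, depth, dz)                # height above cloud base in m
    return gamma_ad * z                          # LWC per layer in g m^-3

profile = adiabatic_lwc_profile(lwp_g_m2=100.0)
print(profile.sum() * 20.0)   # integrates back to roughly the input LWP
```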

    No Optimisation Without Representation: A Knowledge Based Systems View of Evolutionary/Neighbourhood Search Optimisation

    Centre for Intelligent Systems and their Applications. In recent years, research into ‘neighbourhood search’ optimisation techniques such as simulated annealing, tabu search, and evolutionary algorithms has increased apace, resulting in a number of useful heuristic solution procedures for real-world and research combinatorial and function optimisation problems. Unfortunately, their selection and design remains a somewhat ad hoc procedure and very much an art. Needless to say, this shortcoming presents real difficulties for the future development and deployment of these methods. This thesis presents work aimed at resolving this issue of principled optimiser design. Driven by the needs of both the end-user and designer, and their knowledge of the problem domain and the search dynamics of these techniques, a semi-formal, structured design methodology that makes full use of the available knowledge will be proposed, justified, and evaluated. This methodology is centred around a Knowledge Based System (KBS) view of neighbourhood search with a number of well-defined knowledge sources that relate to specific hypotheses about the problem domain. This viewpoint is complemented by a number of design heuristics that suggest a structured series of hillclimbing experiments which allow these results to be empirically evaluated and then transferred to other optimisation techniques if desired. First of all, this thesis reviews the techniques under consideration. The case for the exploitation of problem-specific knowledge in optimiser design is then made. Optimiser knowledge is shown to be derived from either the problem domain theory or the optimiser search dynamics theory. From this, it will be argued that the design process should be primarily driven by the problem domain theory knowledge, as this makes best use of the available knowledge and results in a system whose behaviour is more likely to be justifiable to the end-user. The encoding and neighbourhood operators are shown to embody the main source of problem domain knowledge, and it will be shown how forma analysis can be used to formalise the hypotheses about the problem domain that they represent. Therefore it should be possible for the designer to experimentally evaluate hypotheses about the problem domain. To this end, proposed design heuristics that allow the transfer of results across optimisers based on a common hillclimbing class, and that can be used to inform the choice of evolutionary algorithm recombination operators, will be justified. In fact, the above approach bears some similarity to that of KBS design. Additional knowledge sources and roles will therefore be described and discussed, and it will be shown how forma analysis again plays a key part in their formalisation. Design heuristics for many of these knowledge sources will then be proposed and justified. This methodology will be evaluated by testing the validity of the proposed design heuristics in the context of two sequencing case studies. The first case study is a well-studied problem from operational research, the flowshop sequencing problem, which will provide a thorough test of many of the design heuristics proposed here. Also, an idle-time move preference heuristic will be proposed and demonstrated on both directed mutation and candidate list methods.
The second case study applies the above methodology to design a prototype system for resource redistribution in the developing world, a problem that can be modelled as a very large transportation problem with non-linear constraints and objective function. The system, combining neighbourhood search with a constructive algorithm which reformulates the problem as one of sequencing, was able to produce feasible shipment plans for problems derived from data from the World Health Organisation’s TB programme in China that are much larger than those tackled by the current ‘state-of-the-art’ for transportation problems.
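The hillclimbing experiments that the methodology is built around can be illustrated by a minimal first-improvement hillclimber over a permutation encoding with a swap neighbourhood. The objective below is a toy stand-in (a flowshop makespan, for example, would take its place), and none of the names come from the thesis:

```python
import random
from typing import Callable, List

def hillclimb(sequence: List[int],
              cost: Callable[[List[int]], float],
              max_evals: int = 10_000) -> List[int]:
    """First-improvement hillclimbing over the swap neighbourhood of a
    permutation. The encoding (a permutation) and the neighbourhood
    operator (swap two positions) are where problem-domain knowledge
    enters; changing them changes the hypotheses being tested."""
    best, best_cost = sequence[:], cost(sequence)
    evals = 0
    while evals < max_evals:
        i, j = random.sample(range(len(best)), 2)
        neighbour = best[:]
        neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
        evals += 1
        if cost(neighbour) < best_cost:       # accept only improving moves
            best, best_cost = neighbour, cost(neighbour)
    return best

# Toy stand-in objective: how far the sequence is from sorted order.
toy_cost = lambda seq: sum(abs(v - i) for i, v in enumerate(seq))
start = random.sample(range(20), 20)
print(toy_cost(start), toy_cost(hillclimb(start, toy_cost)))
```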