
    Sweep-Line Extensions to the Multiple Object Intersection Problem: Methods and Applications in Graph Mining

    Identifying and quantifying the size of multiple overlapping axis-aligned geometric objects is an essential computational geometry problem. The ability to solve this problem can effectively inform a number of spatial data mining methods and can provide support in decision making for a variety of critical applications. The state-of-the-art approach for addressing such problems resorts to an algorithmic paradigm collectively known as the sweep-line or plane-sweep algorithm. However, its application inherits a number of limitations, including lack of versatility and lack of support for ad hoc intersection queries. With these limitations in mind, we design and implement a novel, exact, fast and scalable yet versatile, sweep-line based algorithm, named SLIG. The key idea of our algorithm lies in constructing an auxiliary data structure, an intersection graph, while the sweep line passes over the objects. This graph can effectively be used to provide connectivity properties among overlapping objects and to inform answers to ad hoc intersection queries. It can also be employed to find the location and size of the common area of multiple overlapping objects. SLIG performs significantly faster than classic sweep-line based algorithms, is more versatile, and provides a suite of powerful querying capabilities. To demonstrate the versatility of our SLIG algorithm, we show how it can be utilized for evaluating the importance of nodes in a trajectory network - a type of dynamic network where the nodes are moving objects (cars, pedestrians, etc.) and the edges represent interactions (contacts) between objects as defined by a proximity threshold. The key observation is that the time intervals of these interactions can be represented as 1-dimensional axis-aligned geometric objects. Then, a variant of our SLIG algorithm, named SLOT, is utilized to effectively compute the metrics of interest, including node degree, triangle membership and connected components for each node, over time.
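
    For the 1-dimensional case used in the trajectory-network application, the core idea can be sketched in a few lines: sweep over sorted interval endpoints and connect each newly opened interval to every interval that is still active. The sketch below is illustrative only; names and data structures are ours, not the SLIG/SLOT implementation.

```python
# Minimal 1-D sweep-line sketch: build an interval intersection graph.
# Illustrative only; not the authors' SLIG implementation.
from collections import defaultdict

def interval_intersection_graph(intervals):
    """intervals: list of (start, end) pairs; returns an adjacency dict
    where i and j are adjacent iff intervals i and j overlap."""
    events = []
    for i, (s, e) in enumerate(intervals):
        events.append((s, 0, i))   # 0 = start event (sorts before an end at ties)
        events.append((e, 1, i))   # 1 = end event
    events.sort()

    graph = defaultdict(set)
    active = set()                 # intervals currently crossed by the sweep line
    for _, kind, i in events:
        if kind == 0:              # interval i opens: it overlaps every active interval
            for j in active:
                graph[i].add(j)
                graph[j].add(i)
            active.add(i)
        else:                      # interval i closes
            active.discard(i)
    return graph

# Example: contact intervals of three moving objects
g = interval_intersection_graph([(0, 5), (3, 9), (8, 12)])
print(dict(g))   # object 1 overlaps both others: {0: {1}, 1: {0, 2}, 2: {1}}
```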

    Power supply noise in delay testing

    As technology scales into the Deep Sub-Micron (DSM) regime, circuit designs have become more and more sensitive to power supply noise. Excessive noise can significantly affect the timing performance of DSM designs and cause non-trivial additional delay. In delay test generation, test compaction and test fill techniques can produce excessive power supply noise, which eventually results in delay test overkill. To reduce this overkill, we propose a low-cost pattern-dependent approach to analyze noise-induced delay variation for each delay test pattern applied to the design. Two noise models have been proposed to address array-bond and wire-bond power supply networks, and they are experimentally validated and compared. A delay model is then applied to calculate path delay under noise. This analysis approach can be integrated into static test compaction or test fill tools to control the supply noise level of delay tests. We also propose an algorithm to predict the transition count of a circuit, which can be applied to control switching activity during dynamic compaction. Experiments have been performed on ISCAS89 benchmark circuits. Results show that compacted delay test patterns generated by our compaction tool can meet a moderate noise or delay constraint with only a small increase in compacted test set size. Take the benchmark circuit s38417 for example: a 10% delay increase constraint results in only a 1.6% increase in compacted test set size in our experiments. In addition, different test fill techniques have a significant impact on path delay. In our work, a test fill tool with supply noise analysis has been developed to compare several test fill techniques, and results show that the test fill strategy significantly affects switching activity, power supply noise and delay. For instance, patterns with minimum transition fill produce less noise-induced delay than random fill. Silicon results also show that test patterns filled in different ways can cause as much as 14% delay variation on target paths. In conclusion, we must take noise into consideration when delay test patterns are generated.
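
    As an illustration of how switching activity can be estimated per pattern, the sketch below computes a weighted transition count for a scan-in vector. This is a common proxy metric for scan shift power, not necessarily the predictor developed in this work, and the weighting convention shown is just one common choice.

```python
# Illustrative proxy for test-pattern switching activity (not the thesis's exact
# predictor): weighted transition count (WTC) of a scan-in bit vector. Under one
# common convention, a flip near the start of the shift passes through more scan
# cells and is therefore weighted more heavily.

def weighted_transition_count(scan_vector):
    """scan_vector: string of '0'/'1' bits in scan-in order."""
    L = len(scan_vector)
    wtc = 0
    for i in range(L - 1):
        if scan_vector[i] != scan_vector[i + 1]:
            wtc += (L - 1 - i)   # earlier transitions get larger weight
        # Minimum-transition fill (repeating the previous bit for don't-cares)
        # removes many of these transitions, which is why it lowers noise.
    return wtc

print(weighted_transition_count("1100101"))   # 11 for this example vector
```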

    Shallow P-wave seismic reflection event for estimating the static time corrections: implications for 3D seismic structural interpretation, Ellis County, Kansas

    Master of Science. Department of Geology. Abdelmoneam Raef; Matthew W. Totten. In a processing flow of 2D or 3D seismic data, there are many steps that must be completed to produce a dataset suitable for seismic interpretation. In the case of land seismic data, it is essential that the data-processing workflow create and utilize a static time correction to eradicate variations in arrival time associated with changes in topography and low-velocity near-surface geology (Krey 1954). This project utilizes velocity analysis, based on a near-surface reflection, to estimate near-surface static corrections to a datum at an elevation of 1300 ft (Sheriff and Geldart 1995, Rogers 1981). Reviewing and rectifying errors in the geometrical aspects of the field seismic data is essential to the validity of velocity analysis and estimation. To this end, the geometry of the data was validated based on spatial aspects of the survey acquisition design and acquired data attributes. The seismic workflow is a conglomeration of many steps, none of which should be overlooked or given insufficient attention. The seismic processing workflow spans from loading the data into processing software with the correct geometry to stacking and binning the traces for export to interpretation software as a seismic volume. Important steps within this workflow that are covered in this thesis include: the framework to reverse engineer a survey geometry, dynamic corrections, velocity analysis, and the building of a static model to account for the near-surface, or low-velocity, layer. This seismic processing workflow seeks to quality-control most, if not all, seismic datasets in hopes of producing higher-quality and more accurate three-dimensional seismic volumes for interpretation. The developed workflow represents a cost-effective, rapid approach to improving the structural fidelity of land seismic data in areas with rugged topography and complex near-surface velocity variation (Selem 1955; Thralls and Mossman 1952).
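
    For readers unfamiliar with datum statics, the sketch below shows the basic elevation static computed per station for a flat 1300 ft datum. It is a simplified illustration only: the replacement velocity and station elevations are hypothetical parameters, and the thesis itself derives its near-surface correction from velocity analysis of a shallow reflection rather than from a single assumed velocity.

```python
# Minimal elevation-static sketch (illustrative): time shift needed to move a
# source or receiver from its surface elevation down to a flat datum, assuming
# a single replacement velocity. Sign convention and velocity are assumptions.

def elevation_static_ms(elevation_ft, datum_ft=1300.0, v_replacement_ftps=8000.0):
    """Static correction in milliseconds (positive = shift the trace earlier).
    elevation_ft: station elevation; datum_ft: flat datum (1300 ft, as in the thesis);
    v_replacement_ftps: assumed replacement velocity (hypothetical value)."""
    one_way_s = (elevation_ft - datum_ft) / v_replacement_ftps
    return 1000.0 * one_way_s

# Source static + receiver static gives the total shift applied to one trace.
total_ms = elevation_static_ms(1385.0) + elevation_static_ms(1342.0)
print(round(total_ms, 1))   # ~15.9 ms for these hypothetical elevations
```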

    Effective SAT solving

    A growing number of problem domains are successfully being tackled by SAT solvers. This thesis contributes to that trend by pushing the state-of-the-art of core SAT algorithms and their implementation, but also in several important application areas. It consists of five papers: the first details the implementation of the SAT solver MiniSat and the other four discuss specific issues related to different application domains. In the first paper, catering to the trend of extending and adapting SAT solvers, we present a detailed description of MiniSat, a SAT solver designed for that particular purpose. The description additionally bridges a gap between theory and practice, serving as a tutorial on modern SAT solving algorithms. Among other things, we describe how to solve a series of related SAT problems efficiently, called incremental SAT solving. For finding finite first-order models, the MACE-style method based on SAT solving is well known. In the second paper we improve the basic method with several techniques that can be loosely classified as either transformations that make the reduction to SAT result in fewer clauses or techniques that are designed to speed up the search of the SAT solver. The resulting tool, called Paradox, won the SAT/Models division of the CASC competition in 2003 and has not since been beaten by a single general-purpose model finding tool. In the last decade the interest in methods for safety property verification based on SAT solving has been steadily growing. One example of such a method is temporal induction. The method requires a sequence of increasingly stronger induction proofs to be performed. In the third paper we show how this sequence of proofs can be solved efficiently using incremental SAT solving. The last two papers consider two frequently occurring types of encodings: (1) encoding circuits into CNF, and (2) encoding 0-1 integer linear programming into CNF and using incremental SAT to solve the intended optimization problem. There are several encoding patterns that occur over and over again in this thesis, but also elsewhere. The most noteworthy are: incremental SAT, lazy encoding of constraints, and bit-wise encoding of arithmetic influenced by hardware designs for adders and multipliers. The general conclusion is: deploying SAT solvers effectively requires implementations that are efficient, yet easily adaptable to specific application needs. Moreover, to get the best results, it is worth spending effort to make sure that one uses the best encodings possible for an application. However, this is not absolutely necessary: for some applications naive problem encodings work just fine, which is indeed part of the appeal of using SAT solving.
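
    The incremental interface described above can be illustrated with a toy solver: clauses are added once and persist, while each query passes "assumption" literals that hold only for that call. The sketch below is a naive DPLL written for illustration under those assumptions; it is not MiniSat, and a real incremental solver would also retain learned clauses between calls.

```python
# Toy incremental-SAT sketch (not MiniSat): clauses persist across calls and each
# query takes assumption literals, the mechanism MiniSat exposes for incremental
# solving. This naive DPLL simply re-solves, which is enough to show the interface.

class TinyIncrementalSat:
    def __init__(self):
        self.clauses = []                 # clauses as lists of nonzero ints

    def add_clause(self, clause):
        self.clauses.append(list(clause))

    def solve(self, assumptions=()):
        # Assumptions behave like temporary unit clauses for this call only.
        return self._dpll(self.clauses + [[lit] for lit in assumptions], {})

    def _dpll(self, clauses, assign):
        # Simplify clauses under the current partial assignment.
        simplified = []
        for c in clauses:
            if any(assign.get(abs(l)) == (l > 0) for l in c):
                continue                  # clause already satisfied
            rest = [l for l in c if abs(l) not in assign]
            if not rest:
                return False              # conflict: clause falsified
            simplified.append(rest)
        if not simplified:
            return True                   # all clauses satisfied
        lit = simplified[0][0]            # naive branching choice
        for value in (lit > 0, lit <= 0):
            trial = dict(assign)
            trial[abs(lit)] = value
            if self._dpll(simplified, trial):
                return True
        return False

s = TinyIncrementalSat()
s.add_clause([1, 2])                  # x1 or x2
s.add_clause([-1, 3])                 # not x1 or x3
print(s.solve())                      # True: satisfiable
print(s.solve(assumptions=[-2, -3]))  # False under assumptions not x2, not x3
```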

    Expanding the Horizons of Manufacturing: Towards Wide Integration, Smart Systems and Tools

    This research topic aims at enterprise-wide modeling and optimization (EWMO) through the development and application of integrated modeling, simulation and optimization methodologies, and computer-aided tools for reliable and sustainable improvement opportunities within the entire manufacturing network (raw materials, production plants, distribution, retailers, and customers) and its components. This integrated approach incorporates information from the local primary control and supervisory modules into the scheduling/planning formulation, which makes it possible to dynamically react to incidents that occur in the network components at the appropriate decision-making level. The result is a network that requires fewer resources, emits less waste, and responds better to changing market requirements and operational variations, reducing cost, waste, energy consumption and environmental impact while increasing the benefits. More recently, the exploitation of new technologies, such as semantic models (formal knowledge models), has allowed domain, human, and expert knowledge to be captured and utilized toward comprehensive intelligent management. In parallel, the development of advanced technologies and tools, such as cyber-physical systems, the Internet of Things, the Industrial Internet of Things, Artificial Intelligence, Big Data, Cloud Computing, Blockchain, etc., has drawn the attention of manufacturing enterprises toward intelligent manufacturing systems.

    Spectrally efficient FDM communication signals and transceivers: design, mathematical modelling and system optimization

    This thesis addresses theoretical, mathematical modelling and design issues of Spectrally Efficient FDM (SEFDM) systems. SEFDM systems offer bandwidth savings compared to Orthogonal FDM (OFDM) systems by multiplexing multiple non-orthogonal overlapping carriers. Nevertheless, the deliberate collapse of orthogonality poses significant challenges to the SEFDM system in terms of performance and complexity; both issues are addressed in this work. The thesis first investigates the mathematical properties of the SEFDM system and reveals the links between the system conditioning and its main parameters through closed-form formulas derived for the Intercarrier Interference (ICI) and the system generating matrices. A rigorous and efficient mathematical framework to represent non-orthogonal signals using Inverse Discrete Fourier Transform (IDFT) blocks is proposed. This is subsequently used to design simple SEFDM transmitters and to realize a new Matched Filter (MF) based demodulator using the Discrete Fourier Transform (DFT), thereby substantially simplifying the transmitter and demodulator design and localizing complexity at the detection stage at no cost in performance. Operation is confirmed through the derivation and numerical verification of optimal detectors in the form of Maximum Likelihood (ML) and Sphere Decoder (SD) detection. Moreover, two new linear detectors that address the ill conditioning of the system are proposed: the first is based on the Truncated Singular Value Decomposition (TSVD) and the second accounts for selected ICI terms and is termed Selective Equalization (SelE). Numerical investigations show that both detectors substantially outperform existing linear detection techniques. Furthermore, the use of the Fixed Complexity Sphere Decoder (FSD) is proposed to further improve performance and avoid the variable complexity of the SD. Ultimately, a newly designed combined FSD-TSVD detector is proposed and shown to provide near-optimal error performance for bandwidth savings of 20% with reduced and fixed complexity. The thesis also addresses some practical considerations of SEFDM systems. In particular, mathematical and numerical investigations show that the SEFDM signal is prone to a high Peak to Average Power Ratio (PAPR) that can lead to significant performance degradation. Investigations of PAPR control led to the proposal of a new technique, termed SLiding Window (SLW), that utilizes the SEFDM signal structure and shows superior efficacy in PAPR control over conventional techniques with lower complexity. The thesis also addresses the performance of the SEFDM system in multipath fading channels, confirming favourable performance and practicability of implementation. In particular, a new Partial Channel Estimator (PCE) that provides better estimation accuracy is proposed. Furthermore, several low-complexity linear and iterative joint channel equalizers and symbol detectors are investigated in fading channel conditions, with FSD-TSVD joint equalization and detection using the PCE-obtained channel estimate facilitating near-optimum error performance, close to that of OFDM, for bandwidth savings of 25%. Finally, investigations of the precoding of the SEFDM signal demonstrate a potential for complexity reduction and performance improvement. Overall, this thesis provides the theoretical basis from which practical designs are derived to pave the way to the first practical realization of SEFDM systems.
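
    The non-orthogonal carrier structure is easy to state concretely: subcarriers are spaced at a fraction alpha < 1 of the orthogonal spacing, so alpha = 0.8 corresponds to the 20% bandwidth saving mentioned above, and alpha = 1 recovers OFDM. The sketch below generates an SEFDM symbol directly from this definition; the thesis realizes the same signal more efficiently with standard IDFT blocks, and the function name, alpha value and oversampling factor here are illustrative assumptions, not the thesis code.

```python
# Hedged sketch of SEFDM modulation (illustrative, not the thesis implementation):
# N subcarriers spaced at alpha/T instead of 1/T, so alpha < 1 compresses bandwidth.
import numpy as np

def sefdm_modulate(symbols, alpha=0.8, oversample=4):
    """symbols: complex QAM/PSK symbols, one per subcarrier."""
    N = len(symbols)
    n = np.arange(N * oversample) / oversample           # time samples over one symbol
    k = np.arange(N)
    # Non-orthogonal carriers exp(j*2*pi*alpha*k*n/N); columns are carriers
    carriers = np.exp(2j * np.pi * alpha * np.outer(n, k) / N)
    return carriers @ symbols / np.sqrt(N)

# Example: 16 QPSK symbols with 20% bandwidth compression
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 16) + 1j * rng.choice([-1, 1], 16)) / np.sqrt(2)
x = sefdm_modulate(qpsk)
print(x.shape)   # (64,) time-domain samples
```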

    Seismic safety of the Paks Nuclear Power Plant


    Women in Science 2016

    Women in Science 2016 summarizes research done by Smith College’s Summer Research Fellowship (SURF) Program participants. Ever since its 1967 start, SURF has been a cornerstone of Smith’s science education. In 2016, 150 students participated in SURF (144 hosted on campus and at nearby field sites), supervised by 56 faculty mentor-advisors drawn from the Clark Science Center and connected to its eighteen science, mathematics, and engineering departments and programs and associated centers and units. At summer’s end, SURF participants were asked to summarize their research experiences for this publication.
    https://scholarworks.smith.edu/clark_womeninscience/1005/thumbnail.jp

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s Life according to a general MR streaming pattern. We chose Life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous Life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
    https://digitalcommons.chapman.edu/scs_books/1014/thumbnail.jp
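
    To make the MR streaming pattern concrete, the sketch below implements one Life generation as a Hadoop Streaming mapper/reducer pair with one grid row per key: each row is also sent to its neighbouring row keys so that every reducer key sees the three rows it needs. The names and the row-per-key granularity are illustrative assumptions, not the authors' code; the paper's strip partitioning generalizes the key to a strip of rows so that fewer ghost rows are shuffled.

```python
# A hedged sketch (not the authors' code) of one Game-of-Life generation as a
# Hadoop Streaming map/reduce pair, one row per key.
import sys
from itertools import groupby

def mapper(lines):
    # Input lines: "<row_index>\t<string of 0/1 cells>". Each row is emitted to
    # itself and to its two neighbouring row keys, tagged with its offset.
    for line in lines:
        r, cells = line.rstrip("\n").split("\t")
        r = int(r)
        for target in (r - 1, r, r + 1):
            yield f"{target}\t{r - target}\t{cells}"

def reducer(lines):
    # Hadoop Streaming hands the reducer its input sorted by key, so lines for
    # one key are contiguous and groupby can collect the rows it needs.
    parsed = (line.rstrip("\n").split("\t") for line in lines)
    for key, group in groupby(parsed, key=lambda p: p[0]):
        rows = {int(off): cells for _, off, cells in group}
        if 0 not in rows:                       # key lies outside the grid
            continue
        mid = rows[0]
        above = rows.get(-1, "0" * len(mid))    # row with index key-1
        below = rows.get(1, "0" * len(mid))     # row with index key+1
        new = []
        for c in range(len(mid)):
            live = sum(int(row[j])
                       for row in (above, mid, below)
                       for j in (c - 1, c, c + 1)
                       if 0 <= j < len(mid)) - int(mid[c])
            new.append("1" if live == 3 or (mid[c] == "1" and live == 2) else "0")
        print(f"{key}\t{''.join(new)}")

if __name__ == "__main__":
    # Local smoke test with a blinker; on the cluster, mapper and reducer run as
    # separate streaming commands with Hadoop's shuffle providing the sort.
    grid = ["00000", "01110", "00000"]
    reducer(sorted(mapper(f"{i}\t{row}" for i, row in enumerate(grid))))
```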

    Historical resource use and ecological change in semi-natural woodland: western oakwoods in Argyll, Scotland

    This thesis investigates the ecological history of western oakwoods in the Loch Awe area, Argyll, Scotland. By combining historical evidence for human use of woodland resources with palaeoecological evidence for past ecological change, the influence of man on the current condition of biologically important semi-natural woods is assessed. A chronology of human activities relevant to the woodland ecology of the study area is assembled from estate papers and other documentary sources. Vegetation change during the last c. 1000 years is elucidated by pollen analysis of radioisotope-dated sediments from small hollows located within three areas of western oakwood believed to be ancient. The results are related to current condition, and the hypothesis that the species composition of the woods exhibited temporal stability in the recent past is tested. Mechanisms of change culminating in the modern species compositions of the woods are suggested by synthesizing independent findings from historical and palaeoecological approaches. The documentary record indicates management in the 18th and 19th centuries to supply oak bark and coppice wood for commercial purposes. In the 20th century woodland use has been relatively minor except as a grazing resource. In the period before 1700 AD the woods were used to supply wood for local domestic needs and to shelter livestock. The palaeoecological record indicates a lack of stability in species composition during the last millennium. Relatively diverse woods still containing natural features such as old-growth were transformed in the medieval period into disturbed open stands depleted in natural features. Declining productivity was locally alleviated by the introduction of new modes of exploitation around or prior to 1700 AD. The current condition of the woods, rather than being the direct result of an economic design, is the consequence of post-disturbance biotic processes following the abandonment of management in the late 19th century. The findings are related to the conservation of the wider western oakwood resource.