
    Well Testing of Fracture Corridors in Naturally-Fractured Reservoirs (NFR)

    Geological folding and/or faulting may create fractured reservoirs containing a semi-parallel system of long, sparsely spaced fracture corridors separated by exclusion zones. Presently, the best method for detecting and assessing fracture corridor networks requires drilling, logging, and coring from horizontal wells; there is no method for evaluating such reservoirs by pressure testing vertical wells. Vertical wells completed either in highly conductive corridors (fracture wells) or in the exclusion zone (matrix wells) would respond quite differently to well testing. Their pressure response patterns can therefore be used to identify the well's placement in the corridor system and other properties, such as the permeability of the exclusion zone. (The actual permeability of the exclusion zone, owing to diffuse fractures, is higher than the rock matrix permeability measured on core samples.) The objective of this study is to apply well flow-test analysis to estimate the well's location, the permeability of the exclusion zone, the distance from the well to the fracture corridor, and the corridor length and conductivity. A pattern-recognition technique is used to analyze diagnostic plots of pressure drawdown generated by flow tests simulated with commercial software (CMG). A unique simulation model has been built by combining a local model of a fracture well or matrix well with its adjacent fracture corridor and a homogenized global model of the remaining corridor network. The global model generalizes the corridor network using a single-porosity, radial-permeability approach, which is verified as being sufficiently accurate. The results show that diagnostic plots of bottomhole pressure response to a constant production rate clearly indicate the well's location, as the plot patterns for matrix and fracture wells are quite different. Moreover, for a matrix well (completed outside the fracture corridor), the permeability of the exclusion zone and the well-to-corridor distance can be determined from the initial radial flow regime after removing the wellbore storage effect by ÎČ-deconvolution. For a fracture well (intercepting a fracture corridor), the diagnostic plot of the bilinear flow regime provides data for finding the fracture corridor conductivity and length. The corridor length, however, can be estimated with more precision from the pseudosteady-state flow regime plot, which represents reservoir boundary and reservoir shape factor effects; this approach is practical only for production testing rather than transient flow testing. The study also employs statistics, specifically cumulative logit models, to qualify the accuracy of two techniques: finding the permeability of the exclusion zone and finding the distance from the well to the nearest corridor. The results show that permeability estimation becomes more accurate the farther the well is from the corridor and the lower the exclusion zone permeability is. Accuracy of the well-corridor distance estimation likewise improves for longer corridors and lower-permeability exclusion zones.
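    As a rough illustration of how flow regimes are read from a diagnostic plot (this is not the study's CMG workflow), the sketch below computes the standard Bourdet pressure derivative and tags the late-time regime from its log-log slope; the drawdown signature and slope thresholds are made up for illustration.

```python
import numpy as np

def bourdet_derivative(t, dp):
    """Bourdet derivative d(dp)/d(ln t) = t * d(dp)/dt, the quantity plotted
    on a log-log well-test diagnostic plot."""
    return np.gradient(dp, np.log(t))

def classify_late_time_regime(t, dp, npts=20):
    """Tag the late-time flow regime from the log-log slope of the derivative:
    ~0 -> radial, ~1/4 -> bilinear (finite-conductivity corridor), ~1 -> pseudosteady state."""
    der = bourdet_derivative(t, dp)
    slope = np.polyfit(np.log(t[-npts:]), np.log(der[-npts:]), 1)[0]
    if abs(slope) < 0.1:
        return "radial"
    if abs(slope - 0.25) < 0.1:
        return "bilinear"
    if abs(slope - 1.0) < 0.1:
        return "pseudosteady-state"
    return "transitional"

# Hypothetical drawdown response (hours, psi), e.g. exported from a simulator run;
# this made-up signature is dominated by radial flow at late time.
t = np.logspace(-2, 2, 200)
dp = 80.0 * np.log(t + 1.0) + 15.0 * t**0.25
print(classify_late_time_regime(t, dp))
```

    On a field test the same slopes would be read off the measured drawdown only after wellbore-storage effects have been removed, as the abstract describes.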

    Detecting Abnormal Behavior in Web Applications

    The rapid advance of web technologies has made the Web an essential part of our daily lives. However, network attacks have exploited vulnerabilities of web applications and caused substantial damage to Internet users. Detecting network attacks is the essential first step in network security, and a major branch of this area is anomaly detection. This dissertation concentrates on detecting abnormal behaviors in web applications by employing the following methodology. For a web application, we conduct a set of measurements to reveal the existence of abnormal behaviors in it, and we observe the differences between normal and abnormal behaviors. By applying a variety of information-extraction methods, such as heuristic algorithms, machine learning, and information theory, we extract features useful for building a classification system to detect abnormal behaviors. In particular, we have studied four detection problems in web security. The first is detecting unauthorized hotlinking behavior that plagues hosting servers on the Internet. We analyze a group of common hotlinking attacks and the web resources targeted by them, and then present an anti-hotlinking framework for protecting materials on hosting servers. The second problem is detecting aggressive automation on Twitter. Our work determines whether a Twitter user is a human, bot, or cyborg based on the degree of automation. We observe the differences among the three categories in terms of tweeting behavior, tweet content, and account properties, and propose a classification system that uses a combination of features extracted from an unknown user to determine the likelihood that it is a human, bot, or cyborg. Furthermore, we shift the detection perspective from automation to spam and introduce the third problem, detecting social spam campaigns on Twitter. Evolved from individual spammers, spam campaigns manipulate and coordinate multiple accounts to spread spam on Twitter and display collective characteristics. We design an automatic classification system based on machine learning and apply multiple features to classifying spam campaigns. Complementary to conventional spam detection methods, our work brings efficiency and robustness. Finally, we extend our detection research into the blogosphere to capture blog bots. In this problem, detecting the human presence is an effective defense against the automatic posting ability of blog bots. We introduce behavioral biometrics, mainly mouse and keyboard dynamics, to distinguish between human and bot. By passively monitoring user browsing activities, this detection method does not require any direct user participation and improves the user experience.
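    As a hedged illustration of the feature-plus-classifier approach (the features, labels, and data below are toy stand-ins, not the dissertation's actual feature set), a few behavioral features such as inter-tweet timing entropy and URL ratio can feed a standard classifier to separate human, bot, and cyborg accounts:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def interval_entropy(timestamps, bins=20):
    """Shannon entropy of inter-tweet intervals; heavily automated accounts
    tend to post on regular schedules and show lower timing entropy."""
    gaps = np.diff(np.sort(timestamps))
    hist, _ = np.histogram(gaps, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def account_features(timestamps, n_urls, n_tweets, account_age_days):
    """Hypothetical feature vector: timing entropy, URL ratio, tweet rate."""
    return [
        interval_entropy(timestamps),
        n_urls / max(n_tweets, 1),           # fraction of tweets carrying links
        n_tweets / max(account_age_days, 1)  # average tweets per day
    ]

# Toy training set: 0 = human, 1 = bot, 2 = cyborg, with made-up posting patterns.
rng = np.random.default_rng(0)
X = [account_features(np.cumsum(rng.exponential(s, 200)), u, 200, 365)
     for s, u in [(3600, 20), (60, 180), (600, 90)] * 30]
y = [0, 1, 2] * 30
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([account_features(np.cumsum(rng.exponential(1800, 200)), 40, 200, 365)]))
```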

    THE IMPACT OF INTERACTIVE FUNCTIONALITY ON LEARNING OUTCOMES: AN APPLICATION OF OUTCOME INTERACTIVITY THEORY

    Scholars have examined a variety of dimensions and models of interactivity in an attempt to articulate a comprehensive definition. Outcome Interactivity Theory (OIT) considers interactivity to be the result of a communication event involving the successful integration of three predictive dimensions: the presence of actual interactive technological features, the presence of similarly reactive content elements, and relevant user experiences that empower the user to employ these interactive elements within the communication event toward a desirable outcome. This dissertation accomplishes three major objectives: it clarifies the literature relating to the interactivity construct, introduces Outcome Interactivity Theory as a new theory-based conceptualization of the construct, and tests Outcome Interactivity Theory using a full experimental design with a pre-test/post-test control group. The study tests the impact of interactivity on two student learning outcomes: knowledge acquisition and satisfaction. In addition, the OIT model itself is tested to measure the effect of interactivity on knowledge acquisition and satisfaction. Finally, this study presents a new set of highly reliable interactivity measurement scales to quantify the influence of specific individual dimensions and elements on interactivity as defined by the OIT model. Results are described, and limitations and practical implications are discussed.
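    For readers unfamiliar with how a pre-test/post-test control-group design is commonly analyzed, the sketch below runs an ANCOVA (post-test score regressed on pre-test score plus treatment group) on synthetic data; the variable names and scores are hypothetical, and the dissertation's own analysis may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical scores: one row per student; 'group' is the interactive treatment
# vs. a non-interactive control; 'pre' and 'post' are knowledge-acquisition scores.
rng = np.random.default_rng(1)
n = 60
pre = rng.normal(70, 8, n)
group = np.repeat(["control", "interactive"], n // 2)
post = pre + 5 + 4 * (group == "interactive") + rng.normal(0, 5, n)
df = pd.DataFrame({"group": group, "pre": pre, "post": post})

# ANCOVA: does the treatment predict post-test score after adjusting for pre-test?
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.params)
print("treatment p-value:", model.pvalues["C(group)[T.interactive]"])
```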

    A direct search for Dirac magnetic monopoles

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Physics, June 2005. Includes bibliographical references (p. 159-164). Magnetic monopoles are highly ionizing and curve in the direction of the magnetic field. A new dedicated magnetic monopole trigger at CDF, which requires large light pulses in the scintillators of the time-of-flight system, remains highly efficient for monopoles while consuming a tiny fraction of the available trigger bandwidth. A specialized offline reconstruction checks the central drift chamber for high-dE/dx tracks that do not curve in the plane perpendicular to the magnetic field. We observed zero monopole candidate events in 35.7 pb⁻¹ of proton-antiproton collisions at √s = 1.96 TeV. This implies an upper limit on the monopole production cross section, corresponding to a lower limit on the monopole mass of 360 GeV. By Michael James Mulhearn, Ph.D.
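    For orientation only: in a counting search that observes zero events over negligible background, the 95% CL Poisson upper limit on the expected signal is 3.0 events, and the cross-section limit follows by dividing by efficiency times integrated luminosity. The sketch below shows this generic calculation with a hypothetical efficiency; it is not the thesis's full procedure, which folds in acceptance and systematic uncertainties.

```python
# Generic zero-observed-event counting limit (illustrative, not the thesis's exact method).
N_UP_95 = 3.0          # 95% CL Poisson upper limit on the mean when 0 events are observed
luminosity_pb = 35.7   # integrated luminosity quoted in the abstract, in pb^-1
efficiency = 0.5       # hypothetical trigger x reconstruction efficiency

sigma_limit_pb = N_UP_95 / (efficiency * luminosity_pb)
print(f"95% CL cross-section upper limit ~ {sigma_limit_pb:.2f} pb")
```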

    Search for Second-Generation Leptoquarks in Proton-Antiproton Collisions

    This document describes the search for second-generation leptoquarks (LQ_2) in around 114 pb^-1 of proton-antiproton collisions recorded with the D0 detector between September 2002 and June 2003 at a centre-of-mass energy of sqrt{s} = 1.96 TeV. The predictions of the Standard Model and of models including scalar leptoquark production are compared to the data for various kinematic distributions. Since no excess of data over the Standard Model prediction has been observed, a lower limit on the leptoquark mass of M(LQ_2)_{beta=1} > 200 GeV/c^2 has been calculated at 95% confidence level (C.L.), assuming a branching fraction of beta = BF(LQ_2 --> mu j) = 100% into a charged lepton and a quark. The corresponding limit for beta = 1/2 is M(LQ_2)_{beta=1/2} > 152 GeV/c^2. Finally, the results were combined with those from the search in the same channel at D0 Run I. This combination yields exclusion limits of 222 GeV/c^2 (177 GeV/c^2) for beta = 1 (1/2) at 95% C.L., which for beta = 1 is the best exclusion limit on scalar second-generation leptoquarks from a single experiment to date.
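    The quoted limits depend on beta because the rate into the mu-jet mu-jet final state scales with beta squared. As a hedged illustration (with made-up cross sections and a flat hypothetical limit on sigma * beta^2, not the analysis's mass-dependent inputs), the sketch below reads off a mass limit as the point where the scaled theory curve drops below the experimental upper limit.

```python
import numpy as np

# Hypothetical scalar-leptoquark pair-production cross sections (pb) vs mass (GeV/c^2);
# the real analysis uses NLO theory curves and a mass-dependent experimental limit.
mass = np.array([140, 160, 180, 200, 220, 240])
sigma_theory = np.array([2.0, 1.0, 0.5, 0.25, 0.13, 0.07])  # illustrative values only
sigma_limit = 0.06                                           # hypothetical 95% CL limit on sigma * beta^2

def mass_limit(beta):
    """Largest mass on the grid where the beta^2-scaled theory cross section
    still exceeds the experimental limit, i.e. the excluded mass reach."""
    excluded = sigma_theory * beta**2 > sigma_limit
    return mass[excluded].max() if excluded.any() else None

# The limit weakens as beta (and hence the mu mu + jets rate) shrinks.
print(mass_limit(1.0), mass_limit(0.5))
```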

    Management of Coastal Navigation Channels Based on Vessel Underkeel Clearance in Transit

    The United States Army Corps of Engineers (USACE) spends approximately $2 billion annually to investigate, construct, and maintain projects in its portfolio of coastal navigation infrastructure. Of that expenditure, approximately $1 billion is spent annually on maintenance dredging to increase the depth of maintained channels. The USACE prioritizes maintenance funding using a variety of metrics reflecting the amount of cargo moving through maintained projects, but it does not directly consider the reduction, resulting from maintenance dredging investments, in the likelihood that the bottom of a vessel's hull makes contact with the bottom of the channel. Net underkeel clearance, the clearance that remains between the channel bottom and the vessel's keel after accounting for several important factors that increase the necessary keel depth, is used as an indicator of a potential reduction in navigation safety. This dissertation presents a model formulated to estimate net underkeel clearance from archival Automatic Identification System (AIS) data and applies it to the federal navigation project in Charleston, South Carolina. Observations from 2011, comprising 3,961 vessel transits, are used to determine the probability that a vessel will have less than 0 feet of net underkeel clearance as it transits from origin to destination. The probability that a vessel had net underkeel clearance greater than or equal to 0 feet was 0.993. A Monte Carlo approach is employed to prioritize the order of reach maintenance improvements, and a value heuristic is used to rank 7,500 dredging alternatives. Of these, 159 options were identified that meet an arbitrarily selected minimum reliability of 0.985. Cost reductions associated with options that met the minimum reliability requirement ranged from 7.7% to 42.6% on an annualized basis. Fort Sumter Range, Hog Island Reach, and Wando Lower Reach are identified as the most important reaches to maintain. The underkeel clearance reliability model developed in this work provides a more accurate representation of waterway users' ability to safely transit dredged channels, with respect to available depth, than is currently available to USACE waterway managers. The transit reliability metric developed provides an accurate representation of the benefit obtained from channel dredging investments and directly relates that benefit to dredging cost.
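    As a minimal sketch of the transit-reliability idea (not the dissertation's AIS-driven model), the Monte Carlo fragment below estimates P(net underkeel clearance >= 0 ft) for a single transit; the channel depth, draft, and the distributions for tide, squat, and wave-induced motion are all hypothetical stand-ins for the factors the model treats.

```python
import numpy as np

rng = np.random.default_rng(42)

def transit_reliability(channel_depth_ft, static_draft_ft, n_trials=100_000):
    """Monte Carlo estimate of P(net underkeel clearance >= 0 ft) for one transit.
    The allowance terms below are hypothetical placeholders for the factors
    (tide, squat, wave response, etc.) that reduce available clearance."""
    tide = rng.uniform(0.0, 5.0, n_trials)         # tidal stage above datum, ft
    squat = rng.normal(2.0, 0.5, n_trials)         # speed-dependent sinkage, ft
    wave = np.abs(rng.normal(0.0, 1.0, n_trials))  # vertical wave-induced motion, ft
    net_ukc = (channel_depth_ft + tide) - (static_draft_ft + squat + wave)
    return np.mean(net_ukc >= 0.0)

# Hypothetical 47 ft maintained channel and a 42 ft draft container ship.
print(transit_reliability(47.0, 42.0))
```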
