Mathematical modeling of the interaction between two-phase environmental flow and protective hydraulic structures
In August 2005, Hurricane Katrina struck the Gulf Coast of the United States. Over a thousand people lost their lives and the total damage was about 108 billion USD, making it the costliest hurricane in United States history. Two-thirds of the deaths and the majority of the economic loss were related to failures of the protection system. This motivates the study of fluid-structure interaction so that levees and floodwalls for flood-vulnerable areas can be properly designed in the future. Fluid-structure interaction is the interaction between a deformable structure and the surrounding flow: the fluid deforms the solid, and the solid reacts back on the fluid. This thesis focuses on the interactions between two-phase environmental flow (air and water) and hydraulic structures (e.g., floodwalls) that are partially submerged in the water to disrupt the flow. Hydrodynamic and hydrostatic forces and impact loads from high water levels and velocities applied to the interface must be carefully monitored, as well as their effect on structural stability. The main purpose of this work is to give a deeper understanding of the interaction processes and the coupling effects, and to determine the possible deformations or critical values of overturning moments for more robust future designs of floodwalls and levees. There are two main approaches to simulating fluid-solid interactions: the monolithic approach and the partitioned approach. In this work we use the partitioned approach, treating the flow and structure models separately and simulating the interaction process between them. For the two-phase flow subproblem, the air-water interface is treated as a material discontinuity and is tracked by the level set method. The coupled system consists of the Navier-Stokes equations, the level set method, and the volume-of-fluid method, solved by a splitting method with residual-based variational multiscale stabilization. The structural mechanics is modeled by linear elasticity.
Different types of floodwalls and two factors of safety, against sliding and against overturning, are studied. In the Galveston area, the soil and floodwall properties determine whether the soil must be included as part of the model; hyperelastic and plastic models are discussed for simulating the soil behavior. The interaction process is modeled by imposing matching conditions on the common fluid-structure boundary. Both one-way and two-way interaction models under synthetic waves are discussed and compared. One-way interaction saves computation and is widely used in engineering design. Two-way interaction is formulated in the Arbitrary Lagrangian-Eulerian (ALE) framework. An operator splitting technique is developed for the coupled system to reduce computing cost while maintaining high accuracy.
Computational Science, Engineering, and Mathematics
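As a toy illustration of the interface-tracking idea, the sketch below advects a one-dimensional signed-distance (level set) function on a fixed grid. It is a minimal first-order upwind scheme with assumed parameters, not the thesis's stabilized variational multiscale solver.

```python
import numpy as np

# Minimal 1-D level-set sketch (illustrative only): phi is a signed
# distance with phi < 0 in water and phi > 0 in air; the zero crossing
# is the interface, advected by phi_t + u * phi_x = 0.

def advect_level_set(phi, u, dx, dt, steps):
    """First-order upwind update; assumes u >= 0 for simplicity."""
    phi = phi.copy()
    for _ in range(steps):
        dphi = np.empty_like(phi)
        dphi[1:] = (phi[1:] - phi[:-1]) / dx   # backward difference
        dphi[0] = dphi[1]                      # crude inflow treatment
        phi -= dt * u * dphi
    return phi

x = np.linspace(0.0, 1.0, 201)
phi0 = x - 0.3                 # interface initially at x = 0.3
u, dt = 0.5, 0.004             # CFL number u*dt/dx = 0.4 < 1
phi = advect_level_set(phi0, u, x[1] - x[0], dt, steps=100)

# After t = 0.4 the interface has moved u*t = 0.2, to x = 0.5.
interface = x[np.argmin(np.abs(phi))]
```

The upwind scheme is exact for this linear initial profile, so the recovered zero crossing lands on x = 0.5; in practice a reinitialization step keeps phi close to a signed distance as the flow distorts it.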
Hydrophilic polymer foam and microsphere templates for fabrication of microcellular nickel and graphene foams with energy storage applications
Hydrophilic polymer foam and microsphere templates have attracted tremendous attention in the past decade due to their applicability in numerous areas such as catalyst carriers and mini-reactors, filtration media, carbon foam fabrication templates, thermal and electrical insulators, and tissue engineering scaffolds. Hydrophilic polymer sphere and foam templates can be used to fabricate microcellular nickel foams and graphene foams that are finding unique opportunities in energy storage applications, including battery electrodes and matrices for solar energy storage. In this study, open-celled hydrophilic polymer foams and microsphere templates with controllable pore size and porosity were fabricated via solid state foaming and vacuum-assisted assembly methods. Hydrophilic polymer foams were fabricated from disulfonated poly(arylene ether sulfone) (BPS) and poly(ethylene glycol) (PEG) miscible blends. Polymer microsphere templates made with PMMA, paraffin, and EAA spheres were used to fabricate bulk nickel foams, which were in turn used as templates to fabricate graphene foams. To achieve bulk microcellular nickel and graphene foams, a novel electro-polishing-assisted electroless nickel (Ni) deposition process was developed to mitigate the diffusion limitation problem. The fundamental mechanisms of the proposed process were studied using a finite difference model considering both ion diffusion and chemical reaction inside the porous media. The fabricated microcellular Ni foams exhibited sufficient thermal stability and were used to fabricate three-dimensional (3D) few-layer-graphene (FLG) foams using a chemical vapor deposition (CVD) method. The resulting graphene foams had a pore size of less than 100 µm, a density of 0.0020 g·cm⁻³, and a strut wall thickness of 5 nm. The surface-to-volume ratio of the foam was 2.5×10⁵ m²·m⁻³.
Materials Science and Engineering
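The diffusion-limitation problem can be illustrated with a minimal one-dimensional diffusion-reaction balance. The sketch below is not the paper's finite difference model; the diffusivity, rate constant, and template depth are assumed values chosen only to show how first-order consumption starves the template interior of ions.

```python
import numpy as np

# Illustrative 1-D diffusion-reaction sketch (assumed parameters):
#   dc/dt = D * d2c/dx2 - k * c,  c(0) = c0 at the bath surface,
# with a no-flux condition at the closed end of the porous template.

D = 1e-9      # ion diffusivity, m^2/s   (assumed)
k = 5e-3      # first-order deposition rate, 1/s (assumed)
L = 2e-3      # template depth, m (assumed)
n = 101
dx = L / (n - 1)
dt = 0.4 * dx * dx / D          # within the explicit stability limit dx^2/(2D)
c = np.zeros(n)
c0 = 1.0                        # normalized bath concentration

for _ in range(20000):          # march toward steady state
    lap = np.zeros(n)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[1:-1] += dt * (D * lap[1:-1] - k * c[1:-1])
    c[0] = c0                   # bath-side boundary
    c[-1] = c[-2]               # no-flux at the closed end

# The steady profile decays roughly like exp(-x * sqrt(k / D)); the decay
# length sqrt(D / k) ~ 0.45 mm is smaller than L, i.e. deposition deep in
# the template is diffusion-limited.
```

Shrinking the effective rate constant or opening the pore structure (as the electro-polishing-assisted process aims to do) lengthens sqrt(D/k) relative to L and lets deposition reach the template interior.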
Technology entrepreneurship and value creation on open innovation platforms
This dissertation studies how entrepreneurial firms create economic value from open source technology platforms, interfaces on which firms disclose knowledge and distribute innovation for free without retaining any proprietary rights. Despite their increasing importance in innovation and growing popularity among profit-seeking new ventures, open source platforms present a major challenge for value creation, as they lack price signals to guide ventures' transactions and forfeit ventures' control over key resources and knowledge for innovation. These features are in contrast with the fundamental assumptions about price and revenue in economics. They also run counter to the central tenet in strategy research that private knowledge and rare resources are central to competitive advantage and profiting from innovation.
To address this puzzle of value creation from free technologies based on free knowledge and resources, this dissertation focuses on the economic implications of strategies ventures can leverage within and across open source development communities. Chapter I reviews the literature relevant to entrepreneurship in an open and interdependent innovation environment. Exploring research opportunities that emerged from the literature review, Chapter II examines the possibility that multihoming, a critical growth strategy of ventures acting as open source complementors in platform competition, allows ventures to reinforce their existing user base, a prerequisite of value creation from open source. Chapter III directly addresses value creation by investigating how collaborating with external contributors, another critical open source strategy, influences venture capital investment. Both essays highlight how platform network effects unfold, without price signals or proprietary rights over the technologies, in shaping the outcomes of ventures' strategies. They also emphasize those strategies' demand-side implications for users, the participants on the other side of open source platforms.
The empirical analyses of this dissertation are based on multiple open source technology platforms, with data obtained from GitHub, the world's largest open source software hosting service, comprising 5 terabytes of information on 2.1 million ventures, 96 million technologies, and over 2 billion development activities, under research designs for deriving causal inferences. Overall, the dissertation seeks to advance the understanding of value creation in entrepreneurship through open source platforms, an increasingly important phenomenon in the contemporary economy.
Management
Combining static analysis with deep learning for type inference and code editing
For many programming tasks, state-of-the-art machine learning techniques treat programs as sequences of tokens and encode only local syntactic information. While this approach has achieved impressive results on tasks such as code autocompletion and program synthesis, many other tasks require analyzing programs at the project level. In this thesis, we propose techniques that combine lightweight static analysis and code transformations with machine learning to tackle two challenging problems from this category. We first focus on probabilistic type inference, where the goal is to predict missing type annotations for programs written in gradually typed languages such as TypeScript and Python. Global information is essential for this task, as the model needs to consider how a function is used throughout the project and be aware of new types defined elsewhere. Our first approach, LambdaNet, uses lightweight static analysis to generate a program abstraction called a type dependency graph, which is then processed by a graph neural network to make type predictions. Our more recent work, TypeT5, models type inference as a code-infilling task and fine-tunes a pre-trained code-infilling model on type annotation labels. To best utilize the transformer model's limited receptive field, TypeT5 uses static analysis to construct a dynamic context for each code element. At inference time, we also propose a sequential decoding scheme that incorporates previously predicted types into the dynamic context, allowing information exchange between distant but related code elements. We then focus on contextual code change prediction, where the goal is to predict how to edit a piece of code based on other relevant changes made elsewhere in the same project. We introduce Coeditor, a fine-tuned CodeT5 model specifically designed for code editing tasks.
We again model this task as code infilling, using a line-diff-based code change encoding scheme and employing static analysis to form large customized model contexts, ensuring appropriate information for prediction. Coeditor significantly outperforms the best code completion approach in a simplified single-round, single-edit task. In the proposed multi-round, multi-edit setting, Coeditor demonstrates substantial gains by iteratively conditioning on additional user edits. To encourage future research, we open-source our code, data, and model weights, and release a VSCode extension powered by our model for interactive usage.
Computer Science
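As a toy illustration of the lightweight-static-analysis ingredient, the sketch below walks a Python AST to record which functions call which: the kind of project-level usage signal that type dependency graphs and dynamic contexts are built from. The real systems are far more elaborate, and the example source here is made up.

```python
import ast

# Toy illustration (not LambdaNet or TypeT5): a lightweight static pass
# that collects, for each defined function, the set of functions that
# call it. A type predictor can condition on such usage context.

source = """
def parse(path):
    return open(path).read()

def load_all(paths):
    return [parse(p) for p in paths]
"""

tree = ast.parse(source)
defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
callers = {name: set() for name in defined}

for fn in ast.walk(tree):
    if isinstance(fn, ast.FunctionDef):
        for node in ast.walk(fn):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Name)
                    and node.func.id in defined):
                callers[node.func.id].add(fn.name)

# callers == {'parse': {'load_all'}, 'load_all': set()}
```

Knowing that `parse` is called with list elements from `load_all` is exactly the kind of cross-function evidence a purely token-local model never sees.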
The local innovation system of the oil and gas industry in the North Sea: the application of patent data in the study of innovation systems
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2008. Cataloged from PDF version of thesis. Includes bibliographical references (p. 177-179).
The North Sea oil province, one of the world's major centers of petroleum and natural gas production, has been in play for four decades. Production rates have approached their peaks in recent years and are expected to decline continuously in the future. The economies of certain cities and regions bordering on the North Sea have become heavily dependent on the oil and gas industry. How these local economies will sustain themselves in the future as resource depletion continues is a critical question. To gain insight into this question, we selected a matched pair of city-regions, each of which is an important center of the oil and gas industry in the North Sea province: Aberdeen in Scotland and Stavanger in Norway. By studying the similarities and differences between the local innovation systems in the two regions, we can gain a general understanding of how local economies respond to changes in their environment. U.S. patenting data are used as a tool to describe the behavior and performance of the two local innovation systems. The patent data provide a means of systematically and consistently estimating knowledge flows. The use of U.S. patent and patent citation data provides evidence, references, and guidelines to the project from a quantitative perspective. Several indicators were developed to describe these knowledge flows, along with a model providing further insight into how knowledge was acquired and introduced into the two local innovation systems, how and to what extent local innovation capabilities were developed, and how knowledge created locally has spread elsewhere.
Both Stavanger and Aberdeen have worked hard to strengthen their local innovation capabilities by learning from the world's most advanced firms, especially those from the U.S., and by building capabilities of their own. At the same time, attracted by the extensive reserves of oil and gas, multinational firms, many from the U.S., moved into the North Sea region. The involvement of multinational firms helped reinforce local innovation capabilities. However, because of the different policy approaches pursued in the two regions, U.S. firms, the international leaders in oil and gas technology, have played more important roles in Aberdeen than in Stavanger. In the Stavanger area, local innovation activities have been led by national oil companies rather than by foreign firms.
by Wei Gao. Ph.D.
Convective-core overshooting and the final fate of massive stars
Massive stars can explode in powerful supernovae (SNe) forming neutron stars, but they may also collapse directly into black holes (BHs). Understanding and predicting their final fate is increasingly important, e.g., in the context of gravitational-wave astronomy. The interior mixing of stars in general, and convective boundary mixing in particular, remain some of the largest uncertainties in their evolution. Here, we investigate the influence of convective boundary mixing on the pre-SN structure and explosion properties of massive stars. Using the 1D stellar evolution code MESA, we model single, non-rotating stars of solar metallicity over a range of initial masses and convective-core step-overshooting values. Stars are evolved until the onset of iron core collapse, and the pre-SN models are exploded using a parametric, semi-analytic SN code. We use the compactness parameter to describe the interior structure of stars at core collapse. Larger convective core overshooting shifts the location of the compactness peak to higher initial masses. As the luminosity of the pre-SN progenitor is determined by its core mass, we predict BH formation for progenitors within a particular range of luminosities. This luminosity range of BH formation agrees well with the observed luminosity of the red supergiant N6946-BH1 that disappeared without a bright SN and likely collapsed into a BH. While some of our models in this luminosity range indeed collapse to form BHs, this does not fully explain the lack of observed SN IIP progenitors at these luminosities, i.e., the missing red-supergiant problem. Convective core overshooting also affects the BH masses, the pre-SN location of stars in the Hertzsprung-Russell diagram, and the plateau luminosity and duration of SN IIP light curves. [Abridged]
Comment: Accepted for publication in Astronomy & Astrophysics; 23 pages, 14 figures
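The compactness parameter used above has a standard definition (O'Connor & Ott 2011): xi_M = (M/Msun) / (R(M)/1000 km), usually evaluated at an enclosed mass of 2.5 Msun at core collapse. A minimal sketch, with a made-up mass-radius profile standing in for a real pre-SN model:

```python
import numpy as np

# Compactness parameter sketch:
#   xi_M = (M / Msun) / (R(M) / 1000 km),
# evaluated at enclosed mass M = 2.5 Msun at core collapse.
# The mass-radius profile below is invented purely for illustration.

def compactness(m_enclosed_msun, r_cm, m_ref=2.5):
    """Interpolate the radius at enclosed mass m_ref (Msun) and form xi."""
    r_at_m = np.interp(m_ref, m_enclosed_msun, r_cm)
    return m_ref / (r_at_m / 1.0e8)   # 1000 km = 1e8 cm

m = np.linspace(0.0, 4.0, 400)        # enclosed mass, Msun (fake profile)
r = 2.0e8 * (m / 4.0) ** 1.5          # radius in cm (fake profile)
xi25 = compactness(m, r, m_ref=2.5)
# r(2.5) ≈ 9.88e7 cm, so xi_2.5 ≈ 2.53 for this fake profile
```

High xi_2.5 (a lot of mass packed inside a small radius) is the regime where parametric explosion models tend to predict failed SNe and BH formation, which is why the location of the compactness peak versus initial mass matters for the predicted BH luminosity range.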
Evolution of Iron K Line Emission in the Black Hole Candidate GX 339-4
GX 339-4 was regularly monitored with RXTE during a period (in 1999) when its X-ray flux decreased significantly (from 4.2 erg cm⁻² s⁻¹ to 7.6 erg cm⁻² s⁻¹ in the 3-20 keV band), as the source settled into the "off state". Our spectral analysis revealed the presence of a prominent iron K line in the observed spectrum of the source for all observations. The line shows an interesting evolution: it is centered at 6.4 keV when the measured flux is above 5 erg cm⁻² s⁻¹, but is shifted to 6.7 keV at lower fluxes. The equivalent width of the line appears to increase significantly toward lower fluxes, although it is likely to be sensitive to calibration uncertainties. While the fluorescent emission of neutral or mildly ionized iron atoms in the accretion disk can perhaps account for the 6.4 keV line, as is often invoked for black hole candidates, it seems difficult to explain the 6.7 keV line with this mechanism, because the disk should be less ionized at lower fluxes (unless its density changes drastically). On the other hand, the 6.7 keV line might be due to recombination cascades of hydrogen- or helium-like iron ions in an optically thin, highly ionized plasma. We discuss the results in the context of proposed accretion models.
Comment: 18 pages, 2 figures; accepted for publication in the ApJ, v552n2, May 10, 2001 issue
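The equivalent width discussed above is the line flux integrated relative to the local continuum, EW = ∫ (F_line / F_cont) dE. A minimal sketch with illustrative numbers (a Gaussian line at 6.4 keV on a flat continuum, not a fit to the GX 339-4 data):

```python
import numpy as np

# Equivalent-width sketch: EW = integral of (line flux / continuum) dE.
# A Gaussian iron line at 6.4 keV on a flat continuum; all numbers are
# illustrative, not measurements.

E = np.linspace(5.0, 8.0, 3001)           # energy grid, keV (dE = 0.001)
cont = np.full_like(E, 2.0)               # continuum level (arbitrary units)
norm, E0, sigma = 0.6, 6.4, 0.2           # line: total flux, center, width
line = norm / (sigma * np.sqrt(2 * np.pi)) \
    * np.exp(-0.5 * ((E - E0) / sigma) ** 2)

ew_kev = np.sum(line / cont) * (E[1] - E[0])   # ≈ norm / cont = 0.3 keV
```

Because EW divides by the continuum, the same line flux over a fainter continuum yields a larger EW, which is the sense of the trend toward lower fluxes reported in the abstract.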
Pairing Symmetry in the Iron-Pnictide Superconductor KFe₂As₂
The pairing symmetry is one of the major issues in the study of iron-based superconductors. We adopt a low-energy effective kinetic model based on first-principles band structure calculations, combined with the J1-J2 exchange model, for KFe₂As₂, and construct the phase diagram of pairing symmetries. Putting the values of J1 and J2 obtained from the first-principles calculations into this phase diagram, we find that the pairing symmetry for KFe₂As₂ is a nodal d-wave in the folded Brillouin zone with two iron atoms per unit cell. This is in good agreement with experiments that observed a nodal order parameter.
Comment: 5 pages, 4 figures. (The pairing symmetry depends on the choice of effective tight-binding model. In the published version, we adopt a ten-orbital model using maximally localized Wannier functions based on the first-principles band structure calculations, and obtain an s-wave pairing for KFe₂As₂.)
A new critical curve for the Lane-Emden system
We study stable positive radially symmetric solutions of the Lane-Emden system -Δu = v^p, -Δv = u^q in R^N, where p, q ≥ 1. We obtain a new critical curve that optimally describes the existence of such solutions.
Comment: 13 pages, 1 figure
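For radially symmetric solutions the system reduces to ODEs in r, namely u'' + (N-1)/r u' = -v^p and its counterpart for v. The sketch below integrates these outward from the origin with assumed exponents and central values; it only illustrates the radial structure, not the paper's analysis.

```python
# Radial integration sketch for the Lane-Emden system
#   u'' + (N-1)/r u' = -v^p,   v'' + (N-1)/r v' = -u^q,
# with u'(0) = v'(0) = 0 by symmetry. N, p, q and the central
# values are assumed for illustration.

N, p, q = 3, 2.0, 2.0
dr, steps = 1e-4, 20000
r = dr                   # start just off the origin to avoid 1/r at r = 0
u, v = 1.0, 1.0          # central values u(0), v(0) (assumed)
du, dv = 0.0, 0.0

for _ in range(steps):   # semi-implicit Euler out to r = 2
    d2u = -max(v, 0.0) ** p - (N - 1) / r * du
    d2v = -max(u, 0.0) ** q - (N - 1) / r * dv
    du += dr * d2u
    dv += dr * d2v
    u += dr * du
    v += dr * dv
    r += dr

# Both components decrease monotonically away from the origin and remain
# positive on this range, as expected for positive radial solutions.
```

With p = q and equal central values the two components stay identical by symmetry, so the system collapses to the classical scalar Lane-Emden equation on this sketch.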
From active to passive spatial acoustic sensing and applications
An active acoustic sensing system emits modulated acoustic waves and analyzes the reflected signals; this approach dominates acoustic spatial sensing. A passive acoustic sensing system, on the other hand, receives and analyzes natural sounds directly; it is good at semantic tasks but performs weakly on spatial sensing. In this dissertation, we bridge three gaps in existing systems: the gap between the assumptions of signal processing algorithms and the real acoustic environment, the gap between powerful active spatial sensing and limited passive spatial sensing, and the gap between semantic features and spatial information. We advance acoustic sensing system design and extend its functionality through three novel systems.
First, we develop a fully active spatial sensing system, DeepRange, which adapts easily to real environments. We develop an effective mechanism to generate synthetic training data that captures noise, speaker/microphone distortion, and interference in the signals, removing the need to collect a large volume of real data. We then design a deep range neural network (DRNet) to estimate distance from raw acoustic signals; inspired by signal processing, it uses ultra-long convolution kernels to combat noise and interference. The model is trained entirely on synthetic data, yet it robustly achieves sub-centimeter error on real data across various environments, background noise, interference, and mobile phone models.
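For context, the classical signal-processing baseline that a learned ranger is designed to improve on can be sketched as plain cross-correlation ranging: correlate the received signal with the known probe and convert the round-trip delay to distance. All parameters below are assumed for illustration; this is not DRNet.

```python
import numpy as np

# Cross-correlation ranging sketch (classical baseline, not DRNet):
# estimate range from the round-trip delay of a known near-inaudible chirp.

fs, c = 48_000, 343.0                     # sample rate (Hz), speed of sound (m/s)
t = np.arange(0, 0.01, 1 / fs)            # 10 ms probe
chirp = np.sin(2 * np.pi * (15_000 + 2e5 * t) * t)   # ~15-19 kHz upward sweep

true_dist = 0.60                          # one-way distance, meters (assumed)
delay = int(round(2 * true_dist / c * fs))           # round-trip in samples
rx = np.zeros(len(chirp) + delay + 200)
rx[delay:delay + len(chirp)] += 0.3 * chirp          # attenuated echo
rx += 0.02 * np.random.default_rng(0).standard_normal(len(rx))  # noise

corr = np.correlate(rx, chirp, mode="valid")
tau = np.argmax(corr) / fs                # round-trip delay estimate
est_dist = c * tau / 2
# est_dist lands within one sample (~3.6 mm one-way) of true_dist
```

The one-sample quantization (c / (2 fs) ≈ 3.6 mm here) and the sensitivity of the correlation peak to distortion and multipath are exactly the limitations a learned estimator trained on realistic synthetic signals tries to overcome.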
Second, we develop a fused active and passive spatial sensing system for speech separation, called Spatial Aware Multi-task learning-based Separation (SAMS). We leverage both active and passive sensing to improve AoA estimation and jointly optimize the semantic task and the spatial task. SAMS simultaneously estimates the spatial location and extracts speech for the target user during teleconferencing. We first generate fine-grained spatial embeddings from the user's voice and an inaudible tracking sound, which contain the user's position and rich multipath information. We then develop a deep neural network with multi-task learning to jointly optimize source separation and localization. We also significantly speed up inference to provide a real-time guarantee.
Finally, we deeply fuse semantic features and spatial cues to combat interference and noise in real environments and to enable depth sensing in a fully passive setup. Inspired by the "flash-to-bang" phenomenon (i.e., hearing the thunder after seeing the lightning), we propose FBDepth to measure the depth of a sound source. We formulate the problem as an audio-visual event localization task for collision events. Specifically, FBDepth first aligns the correspondence between the video track and the audio track to locate the target object and target sound at a coarse granularity. Based on observations of moving objects' trajectories, it estimates the intersection of optical flow before and after the collision to locate video events in time. It then feeds the estimated timestamp of the video event, together with the other modalities, into the final depth estimation. We used a mobile phone to collect 3.6K+ video clips involving 24 different objects at distances up to 60 m. FBDepth shows superior performance, especially at long range, compared to monocular and stereo methods.
Computer Science
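The "flash-to-bang" geometry itself is simple: light arrives effectively instantly while sound travels at roughly 343 m/s, so depth is the audio-visual lag times the speed of sound. A minimal sketch (the function and numbers are illustrative, not FBDepth's pipeline):

```python
# Flash-to-bang sketch: the image of a collision arrives (effectively)
# instantly, the sound at the speed of sound, so the audio-visual lag
# gives the distance. Function and timestamps are illustrative only.

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C

def depth_from_lag(t_video_event, t_audio_event):
    """Distance to the sound source from the audio-visual time offset."""
    lag = t_audio_event - t_video_event
    if lag < 0:
        raise ValueError("audio event cannot precede the video event")
    return SPEED_OF_SOUND * lag

# A collision seen at t = 2.000 s and heard at t = 2.175 s is ~60 m away,
# the upper end of the range tested in the dissertation.
d = depth_from_lag(2.000, 2.175)   # ≈ 60 m
```

The hard part, which FBDepth addresses with learned audio-visual event localization, is estimating the two event timestamps precisely enough: at 30 fps, one frame of timing error already corresponds to about 11 m of depth.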