
    A maximal $L_p$-regularity theory for initial value problems with time-measurable nonlocal operators generated by additive processes

    Let $Z=(Z_t)_{t\geq0}$ be an additive process with a bounded triplet $(0,0,\Lambda_t)_{t\geq0}$. Then the infinitesimal generator of $Z$ is given by time-dependent nonlocal operators as follows: \begin{align*} \mathcal{A}_Z(t)u(t,x) &=\lim_{h\downarrow0}\frac{\mathbb{E}[u(t,x+Z_{t+h}-Z_t)-u(t,x)]}{h}\\ &=\int_{\mathbb{R}^d}\left(u(t,x+y)-u(t,x)-y\cdot \nabla u(t,x)1_{|y|\leq1}\right)\Lambda_t(dy). \end{align*} Suppose that the L\'evy measures $\Lambda_t$ have a lower bound (Assumption 2.10) and satisfy a weak-scaling property (Assumption 2.11). We emphasize that there is no regularity condition on the L\'evy measures $\Lambda_t$, and they do not have to be symmetric. In this paper, we establish the $L_p$-solvability of the initial value problem (IVP) \begin{equation} \label{20.07.15.17.02} \frac{\partial u}{\partial t}(t,x)=\mathcal{A}_Z(t)u(t,x),\quad u(0,\cdot)=u_0,\quad (t,x)\in(0,T)\times\mathbb{R}^d, \end{equation} where $u_0$ is contained in a scaled Besov space $B_{p,q}^{s;\gamma-\frac{2}{q}}(\mathbb{R}^d)$ (see Definition 2.8) with a scaling function $s$, exponent $p\in(1,\infty)$, $q\in[1,\infty)$, and order $\gamma\in[0,\infty)$. We show that the IVP is uniquely solvable and that the solution $u$ obtains the full regularity gain from the diffusion generated by the stochastic process $Z$. In other words, there exists a unique solution $u$ to the IVP in $L_q((0,T);H_p^{\mu;\gamma}(\mathbb{R}^d))$, where $H_p^{\mu;\gamma}(\mathbb{R}^d)$ is a generalized Bessel potential space (see Definition 2.3). Moreover, the solution $u$ satisfies \begin{equation*} \|u\|_{L_q((0,T);H_p^{\mu;\gamma}(\mathbb{R}^d))}\leq N(1+T^2)\|u_0\|_{B_{p,q}^{s;\gamma-\frac{2}{q}}(\mathbb{R}^d)}, \end{equation*} where $N$ is independent of $u$, $u_0$, and $T$. Comment: 44 pages
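As a purely illustrative numerical check (not part of the paper), the defining integral of the generator can be evaluated in $d=1$ for a time-homogeneous symmetric $\alpha$-stable Lévy measure $\Lambda_t(dy)=|y|^{-1-\alpha}\,dy$; by symmetry the compensator term cancels, leaving a one-sided integral. The test function, the value of $\alpha$, and the quadrature parameters below are all assumptions chosen for the sketch:

```python
import numpy as np

# Illustrative sketch only (not from the paper): for a symmetric
# alpha-stable Levy measure Lambda_t(dy) = |y|^{-1-alpha} dy in d = 1,
# constant in t, the generator reduces to the symmetrized form
#   A u(x) = int_0^inf (u(x+y) + u(x-y) - 2 u(x)) y^{-1-alpha} dy,
# since the compensator y * u'(x) * 1_{|y|<=1} cancels by symmetry.
# The stable normalisation constant is dropped for simplicity.

ALPHA = 1.5  # assumed stability index, must lie in (0, 2)

def u(x):
    # assumed smooth, rapidly decaying test function
    return np.exp(-x ** 2)

def generator(x, eps=1e-4, cut=50.0, n=200_000):
    # midpoint rule on (eps, cut); the integrand behaves like
    # u''(x) * y^(1-alpha) near 0, so the truncation error near the
    # origin is O(eps^(2-alpha)) and the tail error is O(cut^-alpha)
    y = np.linspace(eps, cut, n)
    h = y[1] - y[0]
    y = y + h / 2  # midpoints
    integrand = (u(x + y) + u(x - y) - 2 * u(x)) * y ** (-1 - ALPHA)
    return float(np.sum(integrand) * h)

# At the peak of u the diffusion flattens the profile, so A u(0) < 0.
print(generator(0.0), generator(1.5))
```

Since `u` is even, the computed generator is an even function of `x`, which gives a cheap sanity check on the quadrature.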

    Collapse transition of a square-lattice polymer with next nearest-neighbor interaction

    We study the collapse transition of a polymer on a square lattice with both nearest-neighbor and next nearest-neighbor interactions, by calculating the exact partition function zeros up to chain length 36. The transition behavior is much more pronounced than in the model with nearest-neighbor interactions only. The crossover exponent and the transition temperature are estimated from the scaling behavior of the first zeros with increasing chain length. The results suggest that the model is in the same universality class as the usual theta point described by the model with nearest-neighbor interactions only. Comment: 14 pages, 5 figures
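The enumeration underlying such exact partition function zeros can be sketched as follows. This minimal Python sketch (not the authors' code) enumerates interacting self-avoiding walks on the square lattice with nearest-neighbor contacts only, for a chain far shorter than the paper's length 36, builds the partition function as a polynomial in the Boltzmann factor $w=e^{\beta\epsilon}$, and finds its complex zeros:

```python
import numpy as np

# Hedged sketch: exact enumeration of interacting self-avoiding walks
# (ISAW) on the square lattice, nearest-neighbor contacts only, for an
# 8-step chain (illustrative; the paper goes to length 36 and also
# includes next nearest-neighbor interactions).

STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def enumerate_isaw(n):
    """Return counts[m] = number of n-step SAWs with m NN contacts."""
    counts = {}

    def walk(path, visited, steps_left):
        if steps_left == 0:
            # count non-bonded nearest-neighbor contacts: each unordered
            # pair of adjacent sites is checked once via +x and +y offsets
            m = 0
            for i, (x, y) in enumerate(path):
                for dx, dy in ((1, 0), (0, 1)):
                    q = (x + dx, y + dy)
                    if q in visited:
                        j = path.index(q)
                        if abs(i - j) > 1:  # skip chain bonds
                            m += 1
            counts[m] = counts.get(m, 0) + 1
            return
        x, y = path[-1]
        for dx, dy in STEPS:
            q = (x + dx, y + dy)
            if q not in visited:
                visited.add(q)
                path.append(q)
                walk(path, visited, steps_left - 1)
                path.pop()
                visited.remove(q)

    walk([(0, 0)], {(0, 0)}, n)
    return counts

counts = enumerate_isaw(8)
degmax = max(counts)
# Z(w) = sum_m counts[m] * w^m; np.roots wants highest degree first
coeffs = [counts.get(m, 0) for m in range(degmax, -1, -1)]
zeros = np.roots(coeffs)
print(sum(counts.values()), "walks;", len(zeros), "partition function zeros")
```

Because all coefficients of $Z(w)$ are positive, the zeros stay off the positive real axis at finite length; in the paper's analysis it is the approach of the first zeros toward the real axis with growing chain length that locates the collapse transition.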

    A TABU SEARCH FOR MULTIPLE MULTI-LEVEL REDUNDANCY ALLOCATION PROBLEM IN SERIES-PARALLEL SYSTEMS

    The traditional RAP (Redundancy Allocation Problem) considers only component redundancy at the lowest level. A system can be functionally decomposed into system, module, and component levels, and modular redundancy can be more effective than component redundancy at the lowest level. We consider the MMRAP (Multiple Multi-level Redundancy Allocation Problem), in which all available items for redundancy (system, module, and component) can be chosen simultaneously. A tabu search with memory-based mechanisms, balancing intensification and diversification via short-term and long-term memory, is proposed for its solution. To the best of our knowledge, this is the first attempt to use a tabu search for the MMRAP. Our algorithm is compared with a previous genetic algorithm for the MMRAP on newly composed test problems as well as benchmark problems from the literature. Computational results show that the tabu search substantially outperforms the genetic algorithm on all test problems.
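To make the short-term-memory mechanism concrete, here is a generic tabu search sketch for a toy lowest-level RAP (a series system of parallel subsystems), not the paper's MMRAP algorithm. The component reliabilities, costs, budget, and tabu tenure are all invented for illustration:

```python
import math

# Hedged sketch: tabu search with short-term memory and an aspiration
# criterion, applied to a toy component-level RAP. All problem data
# below are assumptions, not the paper's test instances.

R = [0.80, 0.85, 0.90]   # component reliability per subsystem (assumed)
C = [3, 4, 5]            # component cost per subsystem (assumed)
BUDGET = 40

def reliability(x):
    # series of parallel blocks: prod_i (1 - (1 - R_i)^x_i)
    return math.prod(1 - (1 - r) ** k for r, k in zip(R, x))

def cost(x):
    return sum(c * k for c, k in zip(C, x))

def tabu_search(iters=200, tenure=3):
    x = [1, 1, 1]
    best, best_r = list(x), reliability(x)
    tabu = {}  # subsystem index -> iteration until which it is tabu
    for it in range(iters):
        moves = []
        for i in range(len(x)):
            for d in (+1, -1):
                y = list(x)
                y[i] += d
                if y[i] < 1 or cost(y) > BUDGET:
                    continue
                r = reliability(y)
                # aspiration: a tabu move is allowed if it beats the best
                if tabu.get(i, -1) >= it and r <= best_r:
                    continue
                moves.append((r, i, y))
        if not moves:
            break
        r, i, y = max(moves, key=lambda m: m[0])
        x = y
        tabu[i] = it + tenure  # short-term memory: freeze this subsystem
        if r > best_r:
            best, best_r = list(x), r
    return best, best_r

best, best_r = tabu_search()
print(best, round(best_r, 4))
```

Allowing reliability-decreasing moves on non-tabu subsystems is what lets the search escape local optima, while the recency-based tabu list prevents immediate cycling; a long-term (frequency-based) memory for diversification, as used in the paper, is omitted here for brevity.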

    A Societally-Optimized Resource Distribution (SORD) Framework for Community Flood Recovery

    Natural hazards and disasters affect different populations within communities unevenly. However, natural hazards themselves do not discriminate among the populations of a community; rather, it is pre-existing socioeconomic conditions and the responses made to the hazards that cause disasters to be inequitable. Despite historical studies showing the disproportionate damage suffered by socially disadvantaged inhabitants, there have been few studies demonstrating a systematic pursuit of equitable outcomes to natural hazards, particularly in relation to federal policy. Increasing rates and intensities of natural hazards, coupled with rising urbanization and more expensive infrastructure, underline the criticality of addressing this shortcoming. A Societally-Optimized Resource Distribution (SORD) framework is proposed to tackle the issue of socially unjust disasters. The novel framework centers the design of disaster resource distribution on the principles of social justice: equality and equity. Using computational optimization, resource distribution strategies developed through the SORD framework are first and foremost designed for fairness in the outcomes of a natural hazard. The SORD framework uses six main steps to achieve this goal: 1) hazard identification, 2) choosing societal damage indicators, 3) developing a community portfolio, 4) choosing resource types and amounts, 5) performing optimization, and 6) evaluation and decision-making. To demonstrate the SORD framework, an illustrative case study is provided using the 2016 flooding in Lumberton, NC. A community portfolio was developed for Lumberton using post-disaster household and business surveys completed as part of a longitudinal disaster recovery study by the NIST Center for Risk-Based Community Resilience Planning.
Through the SORD framework, equality- and equity-based resource distribution strategies were developed and evaluated for the riverine flooding caused by the heavy rains of Hurricane Matthew in 2016. Structural retrofits were used as the resources for disaster mitigation, and household dislocation duration and business downtime duration were used as the metrics of societal fairness. Using these metrics, equity was described by the average difference in days of dislocation and downtime among households and businesses, respectively, where a lower average difference is more equitable. The evaluations of the retrofit distributions obtained for Lumberton demonstrated that equity-based strategies were preferable to those based on equality. Equitable strategies were observed to have greater cost-efficiency, not only in increasing equity per $1 million spent but also in decreasing total days of dislocation and downtime. This high cost-efficiency was achieved with only minimal increases in total days of dislocation and downtime compared with the equality-based distribution strategies. The results of the case study demonstrate great promise in the current version of the SORD framework. Future work on the SORD framework includes providing direction on considering long-term hazards, such as droughts, and non-structural types of resources for disaster mitigation and recovery.
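The contrast between equality- and equity-based distribution, and the average-difference equity metric, can be sketched schematically. The dislocation-day figures, retrofit count, and per-retrofit effect below are invented for illustration and are not the Lumberton survey data; the "optimization" is reduced to a greedy rule that always helps the worst-off household:

```python
from itertools import combinations

# Schematic sketch of the equity metric and of step 5 ("performing
# optimization") of the SORD framework. All numbers are assumptions.

days = [120, 90, 60, 30, 10]   # predicted dislocation days per household (assumed)
RETROFITS = 4                  # total retrofits available (assumed)
EFFECT = 30                    # dislocation days avoided per retrofit (assumed)

def mean_pairwise_diff(v):
    # equity metric: average absolute difference in days across all
    # pairs of households; lower is more equitable
    pairs = list(combinations(v, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def equity_greedy(days, n):
    # greedy equity-based strategy: each retrofit goes to the household
    # with the largest remaining dislocation
    v = list(days)
    for _ in range(n):
        i = max(range(len(v)), key=lambda k: v[k])
        v[i] = max(0, v[i] - EFFECT)
    return v

# equality-based strategy: spread the total retrofit effect evenly
equal = [max(0, d - EFFECT * RETROFITS // len(days)) for d in days]
equit = equity_greedy(days, RETROFITS)
print("equality:", mean_pairwise_diff(equal), "equity:", mean_pairwise_diff(equit))
```

Targeting the worst-off households compresses the spread of outcomes, so the equity-based allocation yields a lower average pairwise difference than the even spread; the full framework replaces this greedy rule with a proper optimization over candidate retrofit distributions.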

    Surgical anatomy of the uncinate process and transverse foramen determined by computer tomography

    Study Design: Computed tomography–based cohort study.
    Objective: Although there are publications concerning the relationship between the vertebral artery and the uncinate process, there is no practical guide detailing the dimensions of this region for use during decompression of the intervertebral foramen. The purpose of this study is to determine anatomic parameters that can serve as a guide for thorough decompression of the intervertebral foramen.
    Methods: Fifty-one patients with three-dimensional computed tomography scans of the cervical spine from 2003 to 2012 were included. On axial views, we measured the distance from the midline to the medial and lateral cortices of the pedicle bilaterally from C3 to C7. On coronal reconstructed views, we measured, bilaterally from C3 to C7, the minimum height of the uncinate process from the cranial cortex of the pedicle adjacent to the posterior cortex of the vertebral body, and the maximal height of the uncinate process from the cranial cortex of the pedicle at the midportion of the vertebral body.
    Results: The mean distances from the midline to the medial and lateral cortices of the pedicle were 10.1 ± 1.3 mm and 13.9 ± 1.5 mm, respectively. The mean minimum height of the uncinate process from the cranial cortex of the pedicle was 4.6 ± 1.6 mm, and the mean maximal height was 6.1 ± 1.7 mm.
    Conclusions: Our results suggest that in most cases one can thoroughly decompress the intervertebral foramen by removing the uncinate process out to 13 mm laterally from the midline and 4 mm above the pedicle without violating the transverse foramen.