64 research outputs found
Disjoint path protection in multi-hop wireless networks with interference constraints
We consider the problem of providing protection against failures in wireless networks by using disjoint paths. Disjoint path routing is commonly used in wired networks for protection, but due to the interference between transmitting nodes in a wireless setting, this approach has not been previously examined for wireless networks. In this paper, we develop a non-disruptive and resource-efficient disjoint path scheme that guarantees protection in wireless networks by utilizing capacity "recapturing" after a failure. Using our scheme, protection can often be provided for all demands using no additional resources beyond what was required without any protection. We show that the problem of disjoint path protection in wireless networks is not only NP-hard, but in fact remains NP-hard to approximate. We provide an ILP formulation to find an optimal solution, and develop corresponding time-efficient algorithms. Our approach uses, on average, 87% fewer protection resources than the traditional disjoint path routing scheme. For the case of 2-hop interference, which corresponds to the IEEE 802.11 standard, our protection scheme requires only 8% more resources on average than providing no protection at all.
National Science Foundation (U.S.) (CNS-1116209); National Science Foundation (U.S.) (CNS-1017800); United States. Defense Threat Reduction Agency (HDTRA-09-1-005)
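A first feasibility check behind any disjoint path scheme is whether two edge-disjoint paths exist at all, which reduces to a unit-capacity max-flow computation. A minimal sketch in Python (the graph encoding and function name are illustrative; the paper's ILP additionally accounts for interference and capacity recapturing):

```python
from collections import defaultdict, deque

def edge_disjoint_path_count(edges, s, t):
    """Count edge-disjoint s-t paths via unit-capacity max flow
    (Edmonds-Karp). A count of 2 or more means a primary path and a
    fully disjoint backup path both exist."""
    cap = defaultdict(int)   # residual capacities
    adj = defaultdict(set)   # adjacency, both directions
    for u, v in edges:       # undirected links, unit capacity each way
        cap[(u, v)] += 1
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        parent = {s: None}                 # BFS for an augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                    # no augmenting path left
        v = t
        while parent[v] is not None:       # push one unit along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

On a four-node ring, for example, every node pair is joined by two edge-disjoint paths, so a primary and a fully disjoint backup can be provisioned.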
Network protection with guaranteed recovery times using recovery domains
We consider the problem of providing network protection that guarantees the maximum amount of time that flow can be interrupted after a failure. This is in contrast to schemes that offer no recovery time guarantees, such as IP rerouting, or the prevalent local recovery scheme of Fast ReRoute, which often over-provisions resources to meet recovery time constraints. To meet these recovery time guarantees, we provide a novel and flexible solution by partitioning the network into failure-independent "recovery domains", where within each domain, the maximum amount of time to recover from a failure is guaranteed. We show the recovery domain problem to be NP-hard, and develop an optimal solution in the form of an MILP for both the case when backup capacity can and cannot be shared. This provides protection with guaranteed recovery times using up to 45% less protection resources than local recovery. We demonstrate that the network-wide optimal recovery domain solution can be decomposed into a set of easier-to-solve subproblems. This allows for the development of flexible and efficient solutions, including an optimal algorithm using Lagrangian relaxation, which simulations show converges rapidly to an optimal solution. Additionally, an algorithm is developed for when backup sharing is allowed. For dynamic arrivals, this algorithm performs better than a solution that greedily optimizes each incoming demand.
National Science Foundation (U.S.) (CNS-1017800); National Science Foundation (U.S.) (CNS-0830961); United States. Defense Threat Reduction Agency (HDTRA-09-1-005); United States. Defense Threat Reduction Agency (HDTRA1-07-1-0004); United States. Air Force (contract FA8721-05-C-0002)
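The domain idea can be illustrated on a single primary path: split it into contiguous segments so that each segment's worst-case recovery delay stays within the guarantee. The greedy sketch below is only illustrative (the per-link delay model and the bound `t_max` are assumptions; the paper's MILP partitions a general mesh and handles backup sharing):

```python
def partition_into_domains(link_delays, t_max):
    """Greedily split a primary path (a list of per-link recovery
    delays) into contiguous recovery domains so that the cumulative
    recovery delay inside each domain stays within t_max."""
    domains, current, total = [], [], 0
    for d in link_delays:
        if d > t_max:
            raise ValueError("single link exceeds recovery bound")
        if total + d > t_max:      # close this domain, start a new one
            domains.append(current)
            current, total = [], 0
        current.append(d)
        total += d
    if current:
        domains.append(current)
    return domains
```

For instance, delays [3, 4, 2, 5] with a bound of 7 split into two domains, [3, 4] and [2, 5], each meeting the guarantee.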
Providing protection in multi-hop wireless networks
We consider the problem of providing protection against failures in wireless networks subject to interference constraints. Typically, protection in wired networks is provided through the provisioning of backup paths. This approach has not been previously considered in the wireless setting due to the prohibitive cost of backup capacity. However, we show that in the presence of interference, protection can often be provided with no loss in throughput. This is due to the fact that after a failure, links that previously interfered with the failed link can be activated, thus leading to a "recapturing" of some of the lost capacity. We provide both an ILP formulation for the optimal solution, as well as algorithms that perform close to optimal. More importantly, we show that providing protection in a wireless network uses up to 72% fewer protection resources than similar protection schemes designed for wired networks, and that in many cases, no additional resources for protection are needed.
National Science Foundation (U.S.) (CNS-1116209); National Science Foundation (U.S.) (CNS-0830961); United States. Defense Threat Reduction Agency (HDTRA-09-1-005); United States. Air Force (contract FA8721-05-C-0002)
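The "recapturing" effect can be sketched with a conflict-graph toy model: once a link fails, any idle link whose only live interferers included the failed link becomes schedulable. The link names and conflict sets below are hypothetical, and this ignores the full scheduling problem the paper's ILP solves:

```python
def recaptured_links(active, conflicts, failed):
    """Return idle links that become activatable once `failed` goes
    down. `active` is the set of currently transmitting links;
    `conflicts` maps each idle link to the set of links it
    interferes with. A link is recaptured if no live interferer
    remains after the failure."""
    still_active = active - {failed}
    recaptured = set()
    for link, rivals in conflicts.items():
        if link == failed or link in active:
            continue
        if not (rivals & still_active):   # no live interferer left
            recaptured.add(link)
    return recaptured
```

For example, if idle link e2 interfered only with e1, the failure of e1 frees e2 to carry backup traffic, while a link still blocked by another active transmitter stays idle.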
Analysis and algorithms for partial protection in mesh networks
This paper develops a mesh network protection scheme that guarantees a quantifiable minimum grade of service upon a failure within a network. The scheme guarantees that a fraction q of each demand remains after any single link failure. A linear program is developed to find the minimum-cost capacity allocation to meet both demand and protection requirements. For q ≤ 1/2, an exact algorithmic solution for the optimal routing and allocation is developed using multiple shortest paths. For q > 1/2, a heuristic algorithm based on disjoint path routing is developed that performs, on average, within 1.4% of optimal, and runs four orders of magnitude faster than the minimum-cost solution achieved via the linear program. Moreover, the partial protection strategies developed achieve reductions of up to 82% over traditional full protection schemes.
National Science Foundation (U.S.) (CNS-0626781); National Science Foundation (U.S.) (CNS-0830961); United States. Defense Threat Reduction Agency (HDTRA1-07-1-0004); United States. Defense Threat Reduction Agency (HDTRA-09-1-005); United States. Air Force (contract FA8721-05-C-0002)
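The q ≤ 1/2 regime has a simple counting intuition: if a demand is split equally over k disjoint paths, any single failure leaves a fraction (k-1)/k of it, so k ≥ 1/(1-q) paths suffice, and for q ≤ 1/2 two paths are already enough. A back-of-envelope helper capturing that bound (not the paper's min-cost LP or heuristic):

```python
import math

def min_disjoint_paths(q):
    """Minimum number k of equal-split disjoint paths so that any
    single path failure still leaves fraction q of the demand:
    the surviving fraction is (k - 1) / k, so k >= 1 / (1 - q)."""
    if not 0 <= q < 1:
        raise ValueError("q must be in [0, 1)")
    return math.ceil(1 / (1 - q))
```

So q = 0.5 needs two equal-split paths, while q = 0.75 needs four; the higher q climbs, the more the equal-split strategy costs, which is why the q > 1/2 case calls for a different approach.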
Network protection with multiple availability guarantees
We develop a novel network protection scheme that provides guarantees on both the fraction of time a flow has full connectivity, as well as a quantifiable minimum grade of service during downtimes. In particular, a flow can be below the full demand for at most a maximum fraction of time; even then, it must still support at least a fraction q of the full demand. This is in contrast to current protection schemes that offer either availability guarantees with no bandwidth guarantees during the downtime, or full protection schemes that offer 100% availability after a single link failure. We develop algorithms that provide multiple availability guarantees and show that significant capacity savings can be achieved as compared to full protection. If a connection is allowed to drop to 50% of its bandwidth for 1 out of every 20 failures, then a 24% reduction in spare capacity can be achieved over traditional full protection schemes. In addition, for the case of q = 0, corresponding to the standard availability constraint, an optimal pseudo-polynomial time algorithm is presented.
National Science Foundation (U.S.) (CNS-1116209); National Science Foundation (U.S.) (CNS-0830961); United States. Defense Threat Reduction Agency (HDTRA-09-1-005); United States. Defense Threat Reduction Agency (HDTRA1-07-1-0004); United States. Air Force (contract FA8721-05-C-0002)
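The "1 out of every 20 failures" figure reflects a simple counting constraint: with n equally likely single-link failures and a maximum downtime fraction, only so many failures may be covered at the reduced level q, while the rest need full backup. A small sketch of that bookkeeping (illustrative only; the paper's algorithms decide which failures to cover partially and at what cost):

```python
import math

def partial_protection_budget(n_links, max_down_frac):
    """With n_links equally likely single-link failures and a flow
    allowed below full demand for at most max_down_frac of them,
    at most floor(max_down_frac * n_links) failures may be covered
    only to the reduced level q; the rest need full backup.
    Returns (partially_covered, fully_covered)."""
    partial = math.floor(max_down_frac * n_links)
    return partial, n_links - partial
```

With 20 possible link failures and a downtime fraction of 1/20, exactly one failure may be covered at the reduced level, and the remaining 19 must be fully protected.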
Network protection with service guarantees
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2013.
This electronic version was submitted and approved by the author's academic department as part of an electronic thesis pilot project. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from department-submitted PDF version of thesis. Includes bibliographical references (p. 167-174).
With the increasing importance of communication networks comes an increasing need to protect against network failures. Traditional network protection has been an "all-or-nothing" approach: after any failure, all network traffic is restored. Due to the cost of providing this full protection, many network operators opt not to provide any protection at all. This is especially true in wireless networks, where reserving scarce resources for protection is often too costly. Furthermore, network protection often does not come with guarantees on recovery time, which becomes increasingly important with the widespread use of real-time applications that cannot tolerate long disruptions. This thesis investigates providing protection for mesh networks under a variety of service guarantees, offering significant resource savings over traditional protection schemes. First, we develop a network protection scheme that guarantees a quantifiable minimum grade of service upon a failure within the network. Our scheme guarantees that a fraction q of each demand remains after any single-link failure, at a fraction of the resources required for full protection. We develop both a linear program and algorithms to find the minimum-cost capacity allocation to meet both demand and protection requirements. Subsequently, we develop a novel network protection scheme that provides guarantees on both the fraction of time a flow has full connectivity, as well as a quantifiable minimum grade of service during downtimes. In particular, a flow can be below the full demand for at most a maximum fraction of time; even then, it must still support at least a fraction q of the full demand. This is in contrast to current protection schemes that offer either availability guarantees with no bandwidth guarantees during the downtime, or full protection schemes that offer 100% availability after a single link failure. We show that the multiple availability guarantee problem is NP-hard, and develop solutions using both a mixed integer linear program and heuristic algorithms. Next, we consider the problem of providing resource-efficient network protection that guarantees the maximum amount of time that flow can be interrupted after a failure. This is in contrast to schemes that offer no recovery time guarantees, such as IP rerouting, or the prevalent local recovery scheme of Fast ReRoute, which often over-provisions resources to meet recovery time constraints. To meet these recovery time guarantees, we provide a novel and flexible solution by partitioning the network into failure-independent "recovery domains", where within each domain, the maximum amount of time to recover from a failure is guaranteed. Finally, we study the problem of providing protection against failures in wireless networks subject to interference constraints. Typically, protection in wired networks is provided through the provisioning of backup paths. This approach has not been previously considered in the wireless setting due to the prohibitive cost of backup capacity. However, we show that in the presence of interference, protection can often be provided with no loss in throughput. This is due to the fact that after a failure, links that previously interfered with the failed link can be activated, thus leading to a "recapturing" of some of the lost capacity. We provide both an ILP formulation for the optimal solution, as well as algorithms that perform close to optimal.
by Gregory Kuperman. Ph.D.
How to Build Reputation in Financial Markets
A company's reputation for accountability and trustworthiness is a critical factor in its ability to attract the financial resources required to support its strategies. However, there has been little research on how companies build and preserve the trust of financial markets. This research highlights a number of practices and features that appear to positively influence the formation of corporate reputation in financial markets. Collectively, the findings indicate that companies that are guided by knowledgeable, respected, and committed leaders, that are transparent and comprehensive in their communication of corporate plans, and that display credible and independent control systems are more likely to gather the consensus of the financial community around bold strategic plans.
Lexical frequency effects on articulation: a comparison of picture naming and reading aloud
The present study investigated whether lexical frequency, a variable that is known to affect the time taken to utter a verbal response, may also influence articulation. Pairs of words that differed in terms of their relative frequency, but were matched on their onset, vowel, and number of phonemes (e.g. map vs. mat, where the former is more frequent than the latter) were used in a picture naming and a reading aloud task. Low-frequency items yielded slower response latencies than high-frequency items in both tasks, with the frequency effect being significantly larger in picture naming compared to reading aloud. Also, initial-phoneme durations were longer for low-frequency items than for high-frequency items. The frequency effect on initial-phoneme durations was slightly more prominent in picture naming than in reading aloud, yet its size was very small, thus preventing us from concluding that lexical frequency exerts an influence on articulation. Additionally, initial-phoneme and whole-word durations were significantly longer in reading aloud compared to picture naming. We discuss our findings in the context of current theories of reading aloud and speech production, and the approaches they adopt in relation to the nature of information flow (staged vs. cascaded) between cognitive and articulatory levels of processing.
The MR neuroimaging protocol for the Accelerating Medicines Partnership® Schizophrenia Program.
Neuroimaging with MRI has been a frequent component of studies of individuals at clinical high risk (CHR) for developing psychosis, with goals of understanding potential brain regions and systems impacted in the CHR state and identifying prognostic or predictive biomarkers that can enhance our ability to forecast clinical outcomes. To date, most studies involving MRI in CHR are likely not sufficiently powered to generate robust and generalizable neuroimaging results. Here, we describe the prospective, advanced, and modern neuroimaging protocol that was implemented in a complex multi-site, multi-vendor environment, as part of the large-scale Accelerating Medicines Partnership® Schizophrenia Program (AMP® SCZ), including the rationale for various choices. This protocol includes T1- and T2-weighted structural scans, resting-state fMRI, and diffusion-weighted imaging collected at two time points, approximately 2 months apart. We also present preliminary variance component analyses of several measures, such as signal- and contrast-to-noise ratio (SNR/CNR) and spatial smoothness, to provide quantitative data on the relative percentages of participant, site, and platform (i.e., scanner model) variance. Site-related variance is generally small (typically <10%). For the SNR/CNR measures from the structural and fMRI scans, participant variance is the largest component (as desired; 40-76%). However, for SNR/CNR in the diffusion scans, there is substantial platform-related variance (>55%) due to differences in the diffusion imaging hardware capabilities of the different scanners. Also, spatial smoothness generally has a large platform-related variance due to inherent, difficult to control, differences between vendors in their acquisitions and reconstructions. These results illustrate some of the factors that will need to be considered in analyses of the AMP SCZ neuroimaging data, which will be the largest CHR cohort to date.
Watch Dr. Harms discuss this article at https://vimeo.com/1059777228?share=copy#t=0
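The variance percentages quoted above come from variance component analyses; a crude first pass is the between-group share of total sum of squares (eta-squared) for a QC measure grouped by site or platform. A sketch under that simplification (a proper random-effects variance-component model is what nested multi-site data ultimately calls for):

```python
def percent_variance_by_group(values_by_group):
    """Between-group share of total sum of squares (eta-squared),
    as a percentage, for a QC measure (e.g. SNR) whose values are
    grouped by site or scanner platform."""
    all_vals = [v for grp in values_by_group for v in grp]
    grand = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    ss_between = sum(
        len(grp) * ((sum(grp) / len(grp)) - grand) ** 2
        for grp in values_by_group
    )
    return 100.0 * ss_between / ss_total if ss_total else 0.0
```

When group means are identical, the group explains 0% of the variance (the desirable case for site effects); when all spread lies between groups, it explains 100% (the undesirable case the diffusion SNR/CNR results approach for platform).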
Analysis of shared heritability in common disorders of the brain
Disorders of the brain can exhibit considerable epidemiological comorbidity and often share symptoms, provoking debate about their etiologic overlap. We quantified the genetic sharing of 25 brain disorders from genome-wide association studies of 265,218 patients and 784,643 control participants and assessed their relationship to 17 phenotypes from 1,191,588 individuals. Psychiatric disorders share common variant risk, whereas neurological disorders appear more distinct from one another and from the psychiatric disorders. We also identified significant sharing between disorders and a number of brain phenotypes, including cognitive measures. Further, we conducted simulations to explore how statistical power, diagnostic misclassification, and phenotypic heterogeneity affect genetic correlations. These results highlight the importance of common genetic variation as a risk factor for brain disorders and the value of heritability-based methods in understanding their etiology.