Panel 2 - Unreported Shortcomings of Title IX
MODERATOR: Hello, everyone, and welcome to our second panel, Unreported Shortcomings of Title IX. I’m going to start off with a quick introduction of our moderator. Today we have Dean Lisa Taylor, who is our Dean for Diversity, Inclusion and Affinity Relations at WCL. She is much beloved by students of the Journal and students of WCL in general. And I know she is going to kick off a great panel. Dean Taylor, it’s all yours.
Analysis of minimal path routing schemes in the presence of faults
The design and analysis of fault-tolerant message routing schemes for large parallel systems has been the focus of much recent research. In this paper, we present a framework for the analysis of routing schemes in distributed memory multiprocessor systems containing faulty or unusable components. We introduce techniques for the derivation of the probabilities of successfully routing a single message using minimal path routing schemes. Using this framework, we derive closed form solutions for a wide range of routing schemes on the hypercube and on the two-dimensional mesh. The results obtained show the surprising resilience of the hypercube to a potentially large number of faults while demonstrating the inability of the mesh to tolerate a comparatively small number of faults. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/29944/1/0000302.pd
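The abstract above describes closed-form derivations that are not reproduced here. As a rough illustration of the quantity being analyzed, the following sketch estimates by Monte Carlo simulation the probability that some minimal path of fault-free nodes survives between two hypercube nodes at Hamming distance d (equivalently, across the d-dimensional subcube they span), assuming nodes fail independently with probability p and the endpoints themselves are fault-free. The function names and parameters are illustrative assumptions, not the paper's method.

```python
# Illustrative Monte Carlo estimate (not the paper's closed-form analysis):
# probability that a message can be routed from node 0 to node (2^d - 1) of a
# d-dimensional subcube along SOME minimal path, when each intermediate node
# fails independently with probability p and the endpoints stay up.
import random

def minimal_path_survives(d: int, p: float, rng: random.Random) -> bool:
    """One trial: sample node faults, then search for a surviving minimal path.

    Nodes are d-bit labels; a minimal path from 0 to 2**d - 1 sets one more
    bit at every hop, so the search only moves to labels with more set bits.
    """
    n = 1 << d
    up = [True] * n
    for v in range(1, n - 1):              # endpoints assumed fault-free
        if rng.random() < p:
            up[v] = False
    frontier, seen = [0], {0}
    while frontier:                        # BFS over fault-free "forward" moves
        nxt = []
        for v in frontier:
            for i in range(d):
                w = v | (1 << i)
                if w != v and up[w] and w not in seen:
                    if w == n - 1:
                        return True
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return False

def estimate(d: int, p: float, trials: int = 20000, seed: int = 1) -> float:
    rng = random.Random(seed)
    return sum(minimal_path_survives(d, p, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    for p in (0.05, 0.10, 0.20):
        print(f"d=6, node-fault prob {p:.2f}: "
              f"P[some minimal path survives] ~ {estimate(6, p):.3f}")
```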
Community School District 22 and the Flatbush Community: A School District's Response to Demographic Changes
This thesis examines public education, focusing on schools and the communities in which they are situated. These are not two entirely separate entities; schools and communities should be integrally related and work together to create a cohesive system. The theoretical framework draws on James Coleman’s community conflict theory to promote understanding of how school district policies affect communities. The author focuses on Coleman’s “Conditions for Controversy” and his treatment of “population shifts and heterogeneous values.” The methodology includes interviews.
Generating De Bruijn Sequences: An Efficient Implementation
This paper presents a concise and efficient implementation of a method for producing De Bruijn sequences. The implementation is based on a recursive method due to Lempel [5]. We provide code for a function that, for each pair of integers n ≥ 2 and x ≥ 0, returns a unique De Bruijn sequence of order n. The implementation requires only O(2^n) bit operations.
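The paper's Lempel-based recursive implementation, and its seed parameter x, are not reproduced here. As a minimal sketch of what a De Bruijn sequence of order n is, the following uses the classical greedy "prefer-one" construction instead, a different and simpler method that yields a single sequence per n.

```python
# Minimal sketch of the classical greedy "prefer-one" construction of a
# binary De Bruijn sequence of order n.  This is NOT the Lempel-based
# recursive method implemented in the paper; it only makes the generated
# object concrete.
def de_bruijn_prefer_one(n: int) -> str:
    """Return a cyclic binary De Bruijn sequence of order n (length 2**n)."""
    seq = "0" * n                      # start with the all-zero window
    seen = {seq}
    while True:
        for bit in "10":               # prefer appending a 1, fall back to 0
            window = seq[-(n - 1):] + bit if n > 1 else bit
            if window not in seen:
                seen.add(window)
                seq += bit
                break
        else:                          # neither extension yields a new window
            break
    return seq[: 1 - n] if n > 1 else seq   # drop the wrap-around suffix

if __name__ == "__main__":
    s = de_bruijn_prefer_one(4)
    print(s, len(s))                   # a string of length 16
    # sanity check: every 4-bit window of the cyclic sequence is distinct
    windows = {(s + s[:3])[i:i + 4] for i in range(len(s))}
    print(len(windows) == 16)
```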
Interconnection networks: Regularity and richness
In this dissertation we investigate a model of a general-purpose parallel machine commonly referred to as a massively parallel processor array (MPPA). An MPPA is able to solve large problems efficiently by employing a large number of intercommunicating processing elements. It is our thesis that certain structural properties of interconnection networks, which we refer to as regularity and richness, affect the computational behavior of MPPAs in essential ways. We support this thesis by examining the significant role these structural properties play in understanding the productivity of parallel computation. In the Prolegomena (Part 0) we introduce terminology, our computational model, and mathematical preliminaries. We show how both structured, regular networks and unstructured, random-like networks are constructed, and for each of these two classes of networks we examine their computational strengths and weaknesses. In Part 1 we focus on the issue of fault tolerance of MPPA machines. We show that MPPAs built upon certain structurally regular, decomposable networks are able to perform efficient computations even when the PEs fail randomly. We develop a formal framework in which to design reconfiguration algorithms that allow an MPPA possessing randomly distributed faulty PEs to emulate the computations of a fault-free network of the same size and topology. Our results show that, with high probability, an n-node torus network with uniformly distributed faulty processors can emulate a fault-free n-node torus network with a slowdown factor of . For the n-node De Bruijn and the n-node butterfly networks, our reconfiguration algorithm yields an emulation that incurs a slowdown factor of O(log log n). In Part 2 we present a case study that suggests a network cannot simultaneously achieve regularity and richness. We show that Cayley graphs do not yield the rich connectivity properties associated with random-like graphs when the algebraic structure of the underlying group is not complicated. By deriving upper bounds on the size of node bisectors and lower bounds on the diameters of these classes of Cayley graphs, we show that such regularly structured graphs cannot enjoy the expansion property that has been shown to be such a useful tool in theoretical studies. (Abstract shortened with permission of author.)
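The dissertation's bisector and diameter bounds are not reproduced here. As a toy illustration of the regularity/richness tension it describes, the sketch below computes by BFS the diameter of a Cayley graph of the cyclic group Z_m with a small generator set; with such an "uncomplicated" underlying group the diameter is far larger than the logarithmic diameter achievable by a random-like network of the same degree. The function name and the specific parameters (m = 4096, generator 64) are assumptions chosen only for the demonstration.

```python
# Toy illustration (not taken from the dissertation): diameter of the Cayley
# graph Cay(Z_m, {+1, -1, +s, -s}), computed by breadth-first search.
from collections import deque
import math

def cayley_diameter(m: int, s: int) -> int:
    """Diameter of Cay(Z_m, {1, -1, s, -s}).

    Cayley graphs are vertex-transitive, so the eccentricity of any single
    vertex (here 0) equals the diameter.
    """
    gens = [1, m - 1, s % m, (-s) % m]
    dist = {0: 0}
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for g in gens:
            w = (v + g) % m
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())

if __name__ == "__main__":
    m = 4096
    print("diameter of Cay(Z_m, {+-1, +-64}):", cayley_diameter(m, 64))
    print("log2(m) for comparison:", int(math.log2(m)))
```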
Ranking Algorithms For Hamiltonian Paths In Hypercubic Networks
Given a labeled set that is linearly ordered, a ranking algorithm returns the rank-position of an element in the linear order when given the label of that element. In this paper we provide ranking (and unranking) algorithms for certain classes of graphs where the linear order on the vertex set of a graph is determined by a Hamiltonian path. The classes of graphs we consider include the Hypercube, the De Bruijn, and the Butterfly, the so-called hypercubic networks. These graphs are widely recognized as important interconnection topologies for parallel computations. Our ranking and unranking algorithms can be applied to yield efficient implementations of certain network emulations using SIMD-style parallel algorithms for translating node labels. One unavoidable step in the implementation of parallel algorithms on interconnection networks is the generation of software that "reconfigures" the physical architecture, given the algorithm's specified logical interconnect.
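The paper covers the Hypercube, De Bruijn, and Butterfly networks; only the hypercube case lends itself to a short sketch. The binary reflected Gray code lists the 2^n node labels of an n-cube along a Hamiltonian path, so unranking reduces to Gray-code conversion and ranking to its inverse. The function names below are illustrative assumptions, not the paper's code.

```python
# Minimal sketch for the hypercube case only.  The binary reflected Gray code
# visits the 2**n node labels of an n-cube along a Hamiltonian path;
# "unranking" maps a rank to the node label at that position and "ranking"
# inverts the map.
def unrank(r: int) -> int:
    """Node label at rank r on the Gray-code Hamiltonian path."""
    return r ^ (r >> 1)

def rank(label: int) -> int:
    """Rank of a node label on the Gray-code Hamiltonian path."""
    r = 0
    while label:
        r ^= label
        label >>= 1
    return r

if __name__ == "__main__":
    n = 4
    path = [unrank(r) for r in range(1 << n)]
    # consecutive labels differ in exactly one bit, i.e. they are cube neighbors
    assert all(bin(a ^ b).count("1") == 1 for a, b in zip(path, path[1:]))
    assert all(rank(unrank(r)) == r for r in range(1 << n))
    print([format(v, "04b") for v in path[:6]])
```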
Dominating Connectivity and Reliability of Heterogeneous Sensor Networks
Abstract — Consider a placement of heterogeneous, wireless sensors that can vary their transmission range by increasing or decreasing power. The problem of determining an optimal assignment of transmission radii, so that the resulting network is strongly connected and, more generally, k-connected, has been studied in the literature. In traditional k-connectedness, the network is able to resist the failure of up to k − 1 nodes anywhere in the network and still remain strongly connected. In this paper we introduce a much stronger k-connectedness property, which we show can be implemented efficiently and without a great increase in the transmission radii needed to simply achieve connectedness. We say that a network is dominating k-connected if, for any simultaneous failure of nodes throughout the network with at most k − 1 node failures occurring in the out-neighborhood of any surviving (up) node, the set U of up nodes forms a dominating set and induces a strongly connected subdigraph. In this paper, we give a simple characterization of the networks that are dominating k-connected and design an associated efficient algorithm for determining the dominating connectivity, i.e., the maximum k such that the network is dominating k-connected. We also present an efficient algorithm for computing an assignment of transmission radii that results in a dominating k-connected network and minimizes the maximum radius. Furthermore, we show that the maximum radius in this assignment is no more than a multiplicative factor of k greater than the percolation radius ρperc, i.e., the minimum value of the maximum transmission radius for which the network remains connected. We show through empirical testing that this multiplicative factor can, in practice, be considerably less than k and only slightly greater than that required to achieve traditional k-connectedness. Finally, we show that for sensors placed on the lattice points of a two-dimensional square, we can achieve dominating k-connectedness with a multiplicative factor of at most √2 ⌊√k + 0.5⌋ greater than ρperc.
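The paper's radius-assignment and dominating-connectivity algorithms are not reproduced here. As a small sketch of the baseline quantity ρperc against which the abstract's bounds are stated: with a common radius the links are symmetric, so ρperc is the smallest radius at which the resulting disk graph is connected, which equals the longest edge of the Euclidean minimum spanning tree of the sensor positions. The function name is an assumption for illustration only.

```python
# Illustrative sketch (not the paper's algorithm): estimate the percolation
# radius rho_perc of a sensor placement as the smallest common transmission
# radius at which the network is connected, found by sweeping candidate
# pairwise distances in increasing order with a union-find structure.
import math
from itertools import combinations

def percolation_radius(points):
    """Smallest common radius at which the sensors form a connected network."""
    if len(points) < 2:
        return 0.0
    parent = list(range(len(points)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(len(points)), 2)
    )
    components = len(points)
    for d, i, j in edges:                   # Kruskal-style sweep
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            components -= 1
            if components == 1:
                return d                    # last merging distance = rho_perc
    return float("inf")

if __name__ == "__main__":
    import random
    rng = random.Random(0)
    pts = [(rng.random(), rng.random()) for _ in range(50)]
    print("rho_perc for 50 random sensors in the unit square:",
          round(percolation_radius(pts), 4))
```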
Distributed Models and Algorithms for Survivability in Network Routing (Extended Abstract)
We introduce a natural distributed model for analyzing the survivability of distributed networks and network routing schemes, based on considering the effects of local constraints on network connectivity. We investigate the computational consequences of two fundamental interpretations of the meaning of local constraints. We consider a full-duplex model, in which constraints are applied to edges of a graph representing two-way communication links, and a half-duplex model, in which constraints are applied to edges representing one-way links. We show that the problem of determining the survivability of a network under the full-duplex model is NP-hard, even in the restricted case of simply defined constraints. We also show that the problem of determining the survivability of network routing under the half-duplex model is highly tractable. We are able to effectively determine fault-tolerant routing schemes that can dynamically adapt to withstand any connectivity threats consistent with the constraints of the half-duplex model. The routing scheme is based on a multitree data structure, and we are able to generate optimal multitrees of minimum weighted depth. We also investigate an optimization problem related to achieving survivable network routing using a small set of retransmission or landmark sites. Although the associated optimization problem is NP-hard, we show that sufficiently dense graphs can achieve survivable routing schemes using a small set of landmarks. We prove an associated extremal result that is optimal over all graphs with minimum degree #.
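The multitree construction itself is not reproduced here. To make the half-duplex survivability question concrete, the following toy sketch treats each one-way link as independently failable and checks, by brute force over failure sets, whether every node of a small digraph can still reach a designated destination when at most f links fail. It is exponential in the number of links and only illustrative; the function names and the ring example are assumptions, not the paper's tractable algorithm.

```python
# Toy brute-force illustration (not the paper's multitree scheme): under a
# half-duplex view each directed link can fail independently.  Check whether
# every node can still reach dest for every failure set of at most f links.
from itertools import combinations

def reaches_all(arcs, nodes, dest):
    """True if every node can reach dest using the surviving arcs."""
    reached, stack = {dest}, [dest]         # walk the arcs backwards from dest
    while stack:
        v = stack.pop()
        for (a, b) in arcs:
            if b == v and a not in reached:
                reached.add(a)
                stack.append(a)
    return reached == nodes

def survivable(arcs, dest, f):
    nodes = {v for arc in arcs for v in arc}
    return all(
        reaches_all(arcs - set(failed), nodes, dest)
        for k in range(f + 1)
        for failed in combinations(sorted(arcs), k)
    )

if __name__ == "__main__":
    # a 4-node ring with one-way links in both directions around the cycle
    ring = ["a", "b", "c", "d"]
    arcs = {(ring[i], ring[(i + 1) % 4]) for i in range(4)}
    arcs |= {(v, u) for (u, v) in arcs}
    print("survives any 1 link failure:", survivable(arcs, "a", 1))   # True
    print("survives any 2 link failures:", survivable(arcs, "a", 2))  # False
```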