
    ‘I’ll just Google it!’: Should lawyers’ perceptions of Google inform the design of electronic legal resources?

    Lawyers, like many user groups, regularly use Google to find information for their work. We present the results of a series of interviews with academic and practicing lawyers, in which they discuss the situations in which they use various electronic resources and why. We find that lawyers use Google due to a variety of factors, many of which relate to the need to find information quickly. Lawyers also talk about Google with a certain affection not shown when discussing other resources. Although we could design legal resources to emulate Google, or base their design on the factors perceived to make Google successful, we suggest this is unlikely to better support legal information-seeking. Instead, we suggest taking a number of inter-related trade-offs, related to the factors identified in our study, into account when designing electronic legal resources, to help ensure they are useful, usable and used.

    Studying Law Students’ Information Seeking Behaviour to Inform the Design of Digital Law Libraries

    In this paper, we describe our ongoing work examining the information-seeking behaviour of legal professionals. This work involves studying the behaviour of both academic and practicing lawyers, with the long-term aim of integrating user-centred legal information-seeking support into digital law libraries. We report preliminary findings from the initial phase of the study, which comprised a series of semi-structured interviews and naturalistic observations of academic law students looking for information that they require for their work. This group of academic lawyers often found it difficult to find the information they were looking for when using digital law libraries. A potential symptom of this difficulty was that hazy and incorrect knowledge of the digital library system, and of the information sources within it, was rife. This suggests the need for students to understand more about the digital library systems that they use (within-systems knowledge). We also found that although this group of academic lawyers often used several electronic resources in a complementary fashion to conduct legal information seeking, they often chose to rely primarily on one of either the LexisNexis or Westlaw digital law library platforms. Their preference was often based on a vague or sometimes flawed rationale, which suggests the need for students to appreciate the situations in which different electronic resources might be useful (between-systems knowledge).

    Recognition of non-Milankovitch sea-level highstands at 185 and 343 thousand years ago from U-Th dating of Bahamas sediment

    Thirty-one new bulk-sediment U-Th dates are presented, together with an improved δ18O stratigraphy, for ODP Site 1008A on the slopes of the Bahamas Banks. These ages supplement and extend those from previous studies and provide constraints on the timing of sea-level highstands associated with marine isotope stages (MIS) 7 and 9. Ages are screened for reliability based on their initial U and Th isotope ratios and on the aragonite fraction of the sediment. Twelve 'reliable' dates for MIS 7 suggest that its start is concordant with that predicted if climate is forced by northern-hemisphere summer insolation, following the theory of Milankovitch. But U-Th and δ18O data indicate the presence of an additional highstand which post-dates the expected end of MIS 7 by up to 10 ka. This event is also seen in coral reconstructions of sea level. It suggests that sea level is not responding in any simple way to northern-hemisphere summer insolation, and that tuned chronologies which make such an assumption are in error by ≈10 ka at this time. U-Th dates for MIS 9 also suggest a potential mismatch between the actual timing of sea-level change and that predicted by simple mid-latitude northern-hemisphere forcing. Four dates are earlier than that predicted for the start of MIS 9. Although the most extreme of these dates may not be reliable (based on the low aragonite content of the sediment), the other three appear robust and suggest that full MIS 9 interglacial conditions were established at 343 ka. This is ≈8 ka prior to the date expected if this warm period were driven by northern-hemisphere summer insolation. © 2006 Elsevier Ltd. All rights reserved.
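
    For context, such ages rest on the standard closed-system ²³⁰Th ingrowth equation, which the abstract does not restate; the conventional form below is supplied as background (not the authors' exact formulation), with λ denoting decay constants and δ²³⁴Uₘ the measured uranium isotope composition.

    ```latex
    % Standard closed-system 230Th age equation (background, not the
    % authors' formulation); the age t is obtained by solving numerically.
    \[
      \left[\frac{^{230}\mathrm{Th}}{^{238}\mathrm{U}}\right]_{\mathrm{activity}}
      = 1 - e^{-\lambda_{230} t}
      + \frac{\delta^{234}\mathrm{U}_m}{1000}
        \cdot \frac{\lambda_{230}}{\lambda_{230}-\lambda_{234}}
        \left(1 - e^{-(\lambda_{230}-\lambda_{234})\,t}\right)
    \]
    % Reliability screening, as described in the abstract, then checks the
    % initial U and Th isotope ratios and the sediment's aragonite fraction.
    ```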

    A cross-sectional study of the prevalence and associated risks for bursitis in 6250 weaner, grower and finisher pigs from 103 British pig farms

    A cross-sectional study of 93 farms in England was carried out to estimate the prevalence of, and risk factors associated with, bursitis. A total of 6250 pigs aged 6–22 weeks were examined for the presence and severity of bursitis. Details of pen construction, pen quality and farm management were recorded, including floor type, presence of bedding, condition of the floor and floor materials. The prevalence of bursitis was 41.2% and increased with each week of age (OR 1.1). Two-level logistic regression models were developed with the outcome as the proportion of pigs affected with bursitis in a pen. Pigs kept on soil floors with straw bedding were used as the reference level. In comparison with these soil floors, the risk of bursitis increased on concrete floors where the bedding was deep throughout (OR 4.6), deep in part (OR 3.7) or sparse throughout (OR 9.0), on part-slatted floors (OR 8.0), and on fully slatted floors (OR 18.8). Slip or skid marks in the dunging area (OR 1.5), pigs observed slipping during the examination of the pen (OR 1.3) and wet floors (OR 3.6) were also associated with an increased risk of bursitis. The results indicate that bursitis is a common condition of growing pigs and that the associated risk factors were a lack of bedding in the lying area, the presence of voids in the floor, and pen conditions which increased the likelihood of injury.
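
    A sketch of the model form may help readers interpret the odds ratios; the grouping (pens within farms) and the covariate names below are our reading of the study design, not the authors' exact specification.

    ```latex
    % Illustrative two-level logistic regression; the terms shown are
    % assumptions based on the study description, not quoted from the paper.
    \[
      \operatorname{logit}(p_{ij}) = \beta_0
        + \beta_1\,\mathrm{age}_{ij}
        + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij}
        + u_j,
      \qquad u_j \sim \mathcal{N}(0,\,\sigma_u^2)
    \]
    % p_ij: expected proportion of pigs with bursitis in pen i of farm j;
    % x_ij: floor-type and pen-condition indicators. Each reported odds
    % ratio is exp(beta) for the corresponding coefficient, e.g. OR 4.6
    % corresponds to beta ≈ 1.53 for deep bedding on concrete.
    ```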

    Contention elimination by replication of sequential sections in distributed shared memory programs

    In shared memory programs, contention often occurs at the transition between a sequential and a parallel section of the code. As all threads start executing the parallel section, they often access data just modified by the thread that executed the sequential section, causing a flurry of data requests to converge on that processor. We address this problem in a software distributed shared memory system by replicating the execution of the sequential sections on all processors. Communication during this replicated sequential execution is reduced by using multicast. We have implemented replicated sequential execution with multicast support in OpenMP/NOW, a version of OpenMP that runs on networks of workstations. We do not rely on compile-time data analysis, and therefore we can handle irregular and pointer-based applications. We show significant improvement for two pointer-based applications that suffer from severe contention without replicated sequential execution.
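
    A minimal sketch of the idea, with pthreads standing in for DSM processors; names such as seq_work and the sizes are illustrative, not from the paper. Each thread re-executes the sequential section on a private copy, so no thread has to fetch freshly written data from a single writer when the parallel section begins.

    ```c
    /* Sketch of replicated sequential execution: every worker replays
     * the (deterministic) sequential section locally instead of reading
     * the single writer's results. Compile with: cc -pthread sketch.c */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define N 1024

    static pthread_barrier_t barrier;

    /* The sequential section: deterministic, so replaying it on a
     * private copy yields the same data on every "processor". */
    static void seq_work(double *buf) {
        for (int i = 0; i < N; i++)
            buf[i] = (double)i * 0.5;      /* stand-in sequential code */
    }

    static void *worker(void *arg) {
        long id = (long)arg;
        double replica[N];

        seq_work(replica);                /* replicated, purely local work */
        pthread_barrier_wait(&barrier);   /* boundary where threads would
                                             otherwise converge on one node */

        double sum = 0.0;                 /* parallel section reads the    */
        for (int i = (int)id; i < N; i += NTHREADS)
            sum += replica[i];            /* private replica: no requests  */
        printf("thread %ld partial sum %.1f\n", id, sum);
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        pthread_barrier_init(&barrier, NULL, NTHREADS);
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }
    ```

    In the real system the replicas live in DSM pages and the remaining communication is multicast; this in-process toy does not model that part.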

    Improving Fine-Grained Irregular Shared-Memory Benchmarks by Data Reordering

    We demonstrate that data reordering can substantially improve the performance of fine-grained irregular shared-memory benchmarks, on both hardware and software shared-memory systems. In particular, we evaluate two distinct data reordering techniques that seek to co-locate in memory objects that are in close proximity in the physical system modeled by the computation. The effects of these techniques are increased spatial locality and reduced false sharing. We evaluate the effectiveness of the data reordering techniques on a set of five irregular applications from SPLASH-2 and Chaos. We implement both techniques in a small library, allowing us to enable them in an application by adding fewer than 10 lines of code. Our results on one hardware and two software shared-memory systems show that, with data reordering during initialization, the performance of these applications improves by 12%–99% on the Origin 2000, 30%–366% on TreadMarks, and 14%–269% on HLRC.
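
    One common way to realize such a co-location is a space-filling-curve sort. The sketch below is our illustration, not the paper's library: it orders 2-D objects by a Morton (Z-order) key and permutes a payload array once at initialization, so spatially close objects end up in adjacent memory.

    ```c
    /* Sketch: reorder objects so that neighbours in the modeled physical
     * space become neighbours in memory (Morton-order sort + permutation). */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { uint32_t key; int idx; } tagged;

    /* Interleave the low 16 bits of x and y into a 32-bit Morton key. */
    static uint32_t morton2d(uint16_t x, uint16_t y) {
        uint32_t key = 0;
        for (int b = 0; b < 16; b++) {
            key |= (uint32_t)((x >> b) & 1u) << (2 * b);
            key |= (uint32_t)((y >> b) & 1u) << (2 * b + 1);
        }
        return key;
    }

    static int cmp(const void *a, const void *b) {
        uint32_t ka = ((const tagged *)a)->key, kb = ((const tagged *)b)->key;
        return (ka > kb) - (ka < kb);
    }

    int main(void) {
        enum { N = 8 };
        uint16_t x[N]   = {0, 7, 1, 6, 2, 5, 3, 4};   /* object coordinates */
        uint16_t y[N]   = {0, 7, 1, 6, 2, 5, 3, 4];
        double  mass[N] = {0, 1, 2, 3, 4, 5, 6, 7};   /* per-object payload */

        tagged t[N];
        for (int i = 0; i < N; i++)
            t[i] = (tagged){ morton2d(x[i], y[i]), i };
        qsort(t, N, sizeof t[0], cmp);                /* spatial sort */

        double reordered[N];                          /* permute payload    */
        for (int i = 0; i < N; i++)                   /* once, at init time */
            reordered[i] = mass[t[i].idx];

        for (int i = 0; i < N; i++)
            printf("%d -> old index %d (key %u)\n", i, t[i].idx, t[i].key);
        return 0;
    }
    ```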

    TreadMarks: Distributed Shared Memory on Standard Workstations and Operating Systems

    TreadMarks is a distributed shared memory (DSM) system for standard Unix systems such as SunOS and Ultrix. This paper presents a performance evaluation of TreadMarks running on Ultrix using DECstation-5000/240s that are connected by a 100-Mbps switch-based ATM LAN and a 10-Mbps Ethernet. Our objective is to determine the efficiency of a user-level DSM implementation on commercially available workstations and operating systems. We achieved good speedups on the 8-processor ATM network for Jacobi (7.4), TSP (7.2), Quicksort (6.3), and ILINK (5.7). For a slightly modified version of Water from the SPLASH benchmark suite, we achieved only moderate speedups (4.0) due to the high communication and synchronization rate. Speedups decline on the 10-Mbps Ethernet (5.5 for Jacobi, 6.5 for TSP, 4.2 for Quicksort, 5.1 for ILINK, and 2.1 for Water), reflecting the bandwidth limitations of the Ethernet. These results support the contention that, with suitable networking technology, DSM is a viable technique for parallel computation on clusters of workstations. To achieve these speedups, TreadMarks goes to great lengths to reduce the amount of communication performed to maintain memory consistency. It uses a lazy implementation of release consistency, and it allows multiple concurrent writers to modify a page, reducing the impact of false sharing. Great care was taken to minimize communication overhead. In particular, on the ATM network, we used a standard low-level protocol, AAL3/4, bypassing the TCP/IP protocol stack. Unix communication overhead, however, remains the main obstacle in the way of better performance for programs like Water. Compared to the Unix communication overhead, memory management cost (both kernel and user level) is small and wire time is negligible.
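
    The multiple-writer support in TreadMarks rests on a twin-and-diff mechanism, a detail not spelled out above: on the first write to a page the system saves a pristine copy (the twin), and at a release it encodes only the words that changed. The toy, in-process sketch below illustrates that bookkeeping; it is not the TreadMarks implementation.

    ```c
    /* Toy twin-and-diff sketch: record per-word changes to a "page" so
     * that only modified words, not the whole page, need be shipped. */
    #include <stdio.h>
    #include <string.h>

    #define PAGE_WORDS 16

    typedef struct { int off; unsigned val; } diff_entry;

    /* Compare a page against its twin; emit (offset, new value) pairs. */
    static int make_diff(const unsigned *page, const unsigned *twin,
                         diff_entry *out) {
        int n = 0;
        for (int i = 0; i < PAGE_WORDS; i++)
            if (page[i] != twin[i])
                out[n++] = (diff_entry){ i, page[i] };
        return n;
    }

    int main(void) {
        unsigned page[PAGE_WORDS] = {0}, twin[PAGE_WORDS];

        memcpy(twin, page, sizeof page);   /* twin made at first write fault */
        page[3] = 42;                      /* this writer touches word 3 ... */
        page[9] = 7;                       /* ... and word 9 of the page     */

        diff_entry d[PAGE_WORDS];
        int n = make_diff(page, twin, d);  /* diff computed at release time  */

        /* Another node merges the diff into its copy; two concurrent
         * writers to disjoint words of one page (false sharing) can both
         * have their diffs applied without conflict. */
        for (int i = 0; i < n; i++)
            printf("word %d -> %u\n", d[i].off, d[i].val);
        return 0;
    }
    ```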

    Distributed Versioning: Consistent Replication for Scaling Back-end Databases of Dynamic Content Sites

    Dynamic content Web sites consist of a front-end Web server, an application server and a back-end database. In this paper, we introduce distributed versioning, a new method for scaling the back-end database through replication. Distributed versioning provides both the consistency guarantees of eager replication and the scaling properties of lazy replication. It does so by combining a novel concurrency control method based on explicit versions with conflict-aware query scheduling that reduces the number of lock conflicts. We evaluate distributed versioning using three dynamic content applications: the TPC-W e-commerce benchmark with its three workload mixes, an auction site benchmark, and a bulletin board benchmark. We demonstrate that distributed versioning scales better than previous methods that provide consistency. Furthermore, we demonstrate that the benefits of relaxing consistency are limited, except for the conflict-heavy TPC-W ordering mix.
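
    A toy sketch of the explicit-version idea, under our simplifying assumptions (per-table version counters, versions assigned atomically at transaction start, and an operation runnable once all earlier versions of its table have completed); the names are ours, not the paper's.

    ```c
    /* Toy explicit-versioning sketch: transactions get per-table version
     * numbers up front, fixing a single total order for conflicting work. */
    #include <stdio.h>

    #define NTABLES 2

    static int next_version[NTABLES];   /* handed out at transaction start */
    static int done_version[NTABLES];   /* highest completed version       */

    /* Assign (atomically, in a real system) write versions for a txn. */
    static void assign(const int tables[], int n, int versions[]) {
        for (int i = 0; i < n; i++)
            versions[i] = ++next_version[tables[i]];
    }

    /* An operation may run once every earlier version has completed. */
    static int may_run(int table, int version) {
        return done_version[table] == version - 1;
    }

    int main(void) {
        int t1_tables[] = {0, 1}, t1_v[2];
        int t2_tables[] = {0},    t2_v[1];

        assign(t1_tables, 2, t1_v);   /* txn1 gets version 1 of tables 0,1 */
        assign(t2_tables, 1, t2_v);   /* txn2 gets version 2 of table 0    */

        printf("txn2 on table 0 runnable now? %d\n", may_run(0, t2_v[0]));
        done_version[0] = t1_v[0];    /* txn1 finishes its table-0 work    */
        printf("txn2 runnable after txn1? %d\n", may_run(0, t2_v[0]));
        return 0;
    }
    ```

    Because the order of conflicting operations is fixed up front, every replica can apply them identically without long-lived locks, which is plausibly where the eager-style guarantees with lazy-style scaling come from.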