Recommended from our members
Scrunch, growze, or chobble?: investigating regional variation in sound symbolism in the Survey of English Dialects
This paper draws on data extracted from Upton et al.'s (1994) Survey of English Dialects: The Dictionary and Grammar to investigate the regional distribution across England of sound-symbolic phonesthemes, that is, word-initial consonant clusters which appear to carry a non-arbitrary relationship between sound and meaning. Using such empirical data and employing systematic quantitative analysis, this study avoids the criticism often aimed at sound symbolism research that evidence is speculative and anecdotal. Operating at the intersection between sound symbolism and dialectology, the research here addresses a field currently understudied, as scholarly attention has focused instead on the morphological status of phonesthemes and their universality across languages. The results suggest that phonesthemes are to some extent subject to regional variation, indicating that certain phonesthemes are more common in some areas of England than alternatives which appear to carry the same sound-meaning relationship, often producing clear distributional patterns. In turn, these patterns are discussed, and explanations offered, in light of existing dialectological and variationist theoretical constructs. The significance of these findings underlines the contribution that such exploration can make to both the sound symbolism and dialectology fields, as well as highlighting the continuing opportunities for innovative research offered by the Survey of English Dialects material.
Some conservative stopping rules for the operational testing of safety-critical software
Operational testing, which aims to generate sequences of test cases with the same statistical properties as those that would be experienced in real operational use, can be used to obtain quantitative measures of the reliability of software. In the case of safety critical software it is common to demand that all known faults are removed. This means that if there is a failure during the operational testing, the offending fault must be identified and removed. Thus an operational test for safety critical software takes the form of a specified number of test cases (or a specified period of working) that must be executed failure-free. This paper addresses the problem of specifying the numbers of test cases (or time periods) required for a test, when the previous test has terminated as a result of a failure. It has been proposed that, after the obligatory fix of the offending fault, the software should be treated as if it were completely novel, and be required to pass exactly the same test as originally specified. The reasoning here claims to be conservative, inasmuch as no credit is given for any previous failure-free operation prior to the failure that terminated the test. We show that, in fact, this is not a conservative approach in all cases, and propose instead some new Bayesian stopping rules. We show that the degree of conservatism in stopping rules depends upon the precise way in which the reliability requirement is expressed. We define a particular form of conservatism that seems desirable on intuitive grounds, and show that the stopping rules that exhibit this conservatism are also precisely the ones that seem preferable on other grounds
The use of multilegged arguments to increase confidence in safety claims for software-based systems: A study based on a BBN analysis of an idealized example
The work described here concerns the use of so-called multi-legged arguments to support dependability claims about software-based systems. The informal justification for the use of multi-legged arguments is similar to that used to support the use of multi-version software in pursuit of high reliability or safety. Just as a diverse, 1-out-of-2 system might be expected to be more reliable than each of its two component versions, so a two-legged argument might be expected to give greater confidence in the correctness of a dependability claim (e.g. a safety claim) than would either of the argument legs alone.
Our intention here is to treat these argument structures formally, in particular by presenting a formal probabilistic treatment of "confidence", which will be used as a measure of efficacy. This will enable claims for the efficacy of the multi-legged approach to be made quantitatively, answering questions such as "How much extra confidence about a system's safety will I have if I add a verification argument leg to an argument leg based upon statistical testing?"
For this initial study, we concentrate on a simplified and idealized example of a safety system in which interest centres upon a claim about the probability of failure on demand. Our approach is to build a BBN ("Bayesian Belief Network") model of a two-legged argument, and manipulate this analytically via parameters that define its node probability tables. The aim here is to obtain greater insight than is afforded by the more usual BBN treatment, which involves merely numerical manipulation.
We show that the addition of a diverse second argument leg can, indeed, increase confidence in a dependability claim: in a reasonably plausible example the doubt in the claim is reduced to one third of the doubt present in the original single leg. However, we also show that there can be some unexpected and counter-intuitive subtleties here; for example, an entirely supportive second leg can sometimes undermine an original argument, resulting overall in less confidence than came from this original argument. Our results are neutral on the issue of whether such difficulties will arise in real life, i.e. when real experts judge real systems.
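The basic effect of adding a second argument leg can be illustrated with a model far simpler than the paper's BBN: a toy odds update in which each leg contributes a likelihood ratio, under the strong assumption that the legs' evidence is conditionally independent given the truth of the claim. All names and numbers below are illustrative, not taken from the paper's example:

```python
def posterior_confidence(prior: float, lr_leg1: float, lr_leg2: float = 1.0) -> float:
    """Confidence in a claim after observing one or two argument legs.
    Each lr_* is P(leg's evidence | claim true) / P(leg's evidence | claim false);
    legs are assumed conditionally independent given the claim's truth."""
    odds = prior / (1.0 - prior) * lr_leg1 * lr_leg2
    return odds / (1.0 + odds)

one_leg = posterior_confidence(0.5, 9.0)       # a single supportive leg
two_leg = posterior_confidence(0.5, 9.0, 9.0)  # add a diverse second leg
doubt_ratio = (1 - two_leg) / (1 - one_leg)    # remaining doubt shrinks
```

Note that this toy model cannot reproduce the counter-intuitive undermining effect the abstract reports: under independence a supportive leg always helps, so the subtleties the paper finds must arise from the dependences encoded in the BBN's node probability tables.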
Identifying idiolect in forensic authorship attribution: an n-gram textbite approach
Forensic authorship attribution is concerned with identifying authors of disputed or anonymous documents, which are potentially evidential in legal cases, through the analysis of linguistic clues left behind by writers. The forensic linguist "approaches this problem of questioned authorship from the theoretical position that every native speaker has their own distinct and individual version of the language [...], their own idiolect" (Coulthard, 2004: 31). However, given the difficulty in empirically substantiating a theory of idiolect, there is growing concern in the field that it remains too abstract to be of practical use (Kredens, 2002; Grant, 2010; Turell, 2010). Stylistic, corpus, and computational approaches to text, however, are able to identify repeated collocational patterns, or n-grams: two- to six-word chunks of language, similar to the popular notion of soundbites, small segments of no more than a few seconds of speech that journalists are able to recognise as having news value and which characterise the important moments of talk. The soundbite offers an intriguing parallel for authorship attribution studies, with the following question arising: looking at any set of texts by any author, is it possible to identify "n-gram textbites", small textual segments that characterise that author's writing, providing DNA-like chunks of identifying material?
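Extracting the word n-grams on which such an analysis rests takes only a few lines. The function and the example text below are illustrative and are not drawn from the study's data; candidate "textbites" would then be n-grams frequent in one author's writing but rare in a reference corpus:

```python
from collections import Counter

def word_ngrams(text: str, n: int) -> Counter:
    """Count contiguous n-word sequences (n-grams) in a text."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

doc = "I went to see him I went to ask him"
bigrams = word_ngrams(doc, 2)
most_common = bigrams.most_common(2)  # repeated collocational patterns surface first
```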
Results of thin-route satellite communication system analyses including estimated service costs
A variety of cost and performance tradeoffs are addressed and the preliminary design of a communications satellite system capable of meeting isolated rural users' needs is presented. Small, inexpensive rural earth stations are linked via the satellite to a nationwide network of large earth stations which are, in turn, interconnected to the switching exchanges of the conventional telephone network. Optimum earth station EIRP and G/T and satellite transponder power are defined as a function of a wide variety of system options.
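The EIRP and G/T trade-off mentioned above is governed by the standard link-budget identity C/N0 = EIRP + G/T − L_fs − 10 log10 k (all terms in decibels, with L_fs the free-space path loss). A sketch with illustrative numbers only; the station parameters below are assumptions, not the system values from the study:

```python
import math

BOLTZMANN_DB = -228.6  # 10*log10(k), Boltzmann's constant in J/K

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space spreading loss, 20*log10(4*pi*d*f/c), in dB."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def downlink_cn0_dbhz(eirp_dbw: float, g_over_t_dbk: float,
                      distance_m: float, freq_hz: float) -> float:
    """Carrier-to-noise-density ratio for one link; losses other than
    free-space spreading are ignored for simplicity."""
    return (eirp_dbw + g_over_t_dbk
            - free_space_path_loss_db(distance_m, freq_hz) - BOLTZMANN_DB)

# Illustrative GEO C-band downlink: 36 dBW EIRP, small-station G/T of 15 dB/K.
cn0 = downlink_cn0_dbhz(36.0, 15.0, 35_786_000.0, 4e9)
```

Because EIRP and G/T enter the budget additively in decibels, raising satellite transponder power trades directly against the size (and hence cost) of each rural earth station, which is the optimisation the abstract describes.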
Bibliography on Clean Rooms
Bibliography on contamination control and air filtering for application to electronics and surgical instruments.
Aspects of the circulation in the Rockall Trough
An investigation is made of the circulation and structure of the water masses in the Rockall Trough in spring, combining the results of a recent synoptic survey (May 1998) with those from a high-resolution ocean circulation model. In the near-surface layer, saline flows are carried northwards by a "Shelf Edge Current" around the eastern slopes, possibly with some branching in the northern Trough. Fresher waters from the west flow in between 52 and 53°N and partially mix with these saline flows in the southern Trough, so that waters of intermediate salinity are also swept northwards. In the southern approaches to the Trough, Labrador Sea Water (LSW) also flows strongly in from the west between 52 and 53°N, and while much of this turns south, a proportion penetrates north to join a cyclonic gyre in the Trough extending to 56.5°N. The northwestern limb of this gyre is fed by, and mixes with, more saline waters which result from overflows across the Wyville-Thomson Ridge. Furthermore, salinity and CFC data suggest episodic inflow of LSW into the central Trough. The circulation of the North East Atlantic Deep Water in the Trough follows a cyclonic pattern similar to, and lying below, that of the LSW. The Wyville-Thomson Ridge overflows in the model extend to higher densities than in the survey, are topographically steered southwestward down the Feni Ridge system, and eventually join a deep cyclonic circulation in the North East Atlantic basin. Overall, the model and the observations are in good agreement, particularly in the central Rockall Trough, and this has allowed conclusions to be drawn which are significantly more robust than those which would result from either the survey or the model alone. In particular, we have been able to infer cyclonic circulation pathways for the intermediate and deeper waters in the Rockall Trough for (we believe) the first time.
The study has also contributed to an ongoing community effort to assess the realism of, and improve, our current generation of ocean circulation models
Some conservative stopping rules for the operational testing of safety-critical software
Operational testing, which aims to generate sequences of test cases with the same statistical properties as those that would be experienced in real operational use, can be used to obtain quantitative measures of the reliability of software. In the case of safety critical software it is common to demand that all known faults are removed. This means that if there is a failure during the operational testing, the offending fault must be identified and removed. Thus an operational test for safety critical software takes the form of a specified number of test cases (or a specified period of working) that must be executed failure-free. This paper addresses the problem of specifying the number of test cases (or time periods) required for a test, when the previous test has terminated as a result of a failure. It has been proposed that, after the obligatory fix of the offending fault, the software should be treated as if it were completely novel, and be required to pass exactly the same test as originally specified. The reasoning here claims to be conservative, inasmuch as no credit is given for any previous failure-free operation prior to the failure that terminated the test. We show that, in fact, this is not a conservative approach in all cases, and propose instead some new Bayesian stopping rules. We show that the degree of conservatism in stopping rules depends upon the precise way in which the reliability requirement is expressed. We show that some rules are 'completely' conservative and argue that these are also precisely the ones that should be preferred on intuitive grounds.
A smart vision sensor for detecting risk factors of a toddler's fall in a home environment
This paper presents a smart vision sensor for detecting risk factors of a toddler's fall in an indoor home environment, assisting parents' supervision to prevent fall injuries. We identified the risk factors by analyzing real fall injury stories and referring to a related organization's suggestions for preventing falls. In order to detect the risk factors using computer vision, two major image processing methods, clutter detection and toddler tracking, were studied using only one commercial web-camera. For practical purposes, the toddler does not need to wear any sensors or markers. The algorithms for detection have been developed, implemented and tested.
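As a rough illustration of the kind of image processing involved, simple frame differencing flags the pixels that change between consecutive greyscale frames and can act as a crude trigger for a tracker. This is a minimal sketch only, not the clutter-detection or toddler-tracking algorithms developed in the paper:

```python
import numpy as np

def motion_mask(prev_frame: np.ndarray, frame: np.ndarray,
                thresh: int = 25) -> np.ndarray:
    """Boolean per-pixel motion mask from differencing two greyscale
    frames with values 0-255."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh

def motion_fraction(prev_frame: np.ndarray, frame: np.ndarray,
                    thresh: int = 25) -> float:
    """Fraction of pixels that changed between the two frames."""
    return float(motion_mask(prev_frame, frame, thresh).mean())

prev = np.zeros((4, 4), dtype=np.uint8)
cur = prev.copy()
cur[:2, :2] = 200                  # a bright blob appears in one corner
frac = motion_fraction(prev, cur)  # 0.25: a quarter of the pixels changed
```

A real system would follow this with connected-component analysis and tracking of the moving region over time, which is consistent with the markerless, single-camera constraint the abstract describes.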