Virginia's pelagic recreational fishery: Biological, socioeconomic and fishery components
Catch, effort, fleet size and boat owner expenditure data were collected on Virginia's recreational marlin/tuna fishery for the 1983-1985 seasons. Logbooks, dockside interviews and a telephone survey were evaluated to determine which method was the most efficient and effective for collecting and estimating catch and effort for Virginia's pelagic recreational fishery. In 1984, logbooks were used to collect catch and effort data, and fishing effort was estimated using Bochenek's method. Very few fishermen returned their logbooks, so these data are probably less reliable than the data collected in other years. Given the poor return of logbooks, this method should not be used to assess Virginia's marlin/tuna fishery. For the 1985 season, Figley's (1984) telephone survey was compared to the NMFS dockside interview technique for large pelagics. Both the telephone survey using Figley's (1984) technique and dockside interviews using Bochenek's method for calculating effort appear to provide similar estimates of projected total catch. However, the dockside method is very labor intensive, costly and fraught with problems in estimating fishing effort. Therefore, the telephone survey using Figley's method for estimating effort appears to be the better method for analyzing this fishery. If telephone interviewing will not work in an area and dockside sampling must be relied upon to study the pelagic fishery, Bochenek's method appears to produce a better estimate of fishing effort. Using Figley's (1984) mark-recapture technique, Virginia's pelagic recreational fleet was estimated at 455 vessels in 1983 and 774 vessels in 1985. Boat owner expenditures for this fleet were estimated at $3,863,045 in 1983, $4,057,020 in 1984 and $5,538,191 in 1985. Bluefin tuna were caught at sea surface temperatures (SST) ranging from 58-83 °F but seem to prefer SSTs of 70-75 °F. Yellowfin tuna were caught at SSTs ranging from 68-86 °F, with the majority landed at SSTs of 76-80 °F. White marlin appear to prefer SSTs of 74-81 °F.
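The abstract does not spell out Figley's (1984) mark-recapture calculation. A common choice for this kind of fleet-size estimate is the Lincoln-Petersen estimator with Chapman's small-sample correction; the Python sketch below assumes that estimator, and the vessel counts in it are invented placeholders, not figures from the study.

```python
def chapman_estimate(marked: int, caught: int, recaptured: int) -> float:
    """Chapman's bias-corrected Lincoln-Petersen estimator:
    N ~ (M + 1)(C + 1) / (R + 1) - 1, where M vessels are 'marked'
    (observed in a first sample), C are observed in a second sample,
    and R appear in both samples."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Hypothetical example: 120 vessels recorded at docks in one survey
# period, 150 in a second period, 40 seen in both.
print(round(chapman_estimate(120, 150, 40)))  # ~445 vessels
```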
Methods for structural design at elevated temperatures
A procedure which can be used to design elevated-temperature structures is discussed. The desired goal is to have the same confidence in the structural integrity at elevated temperature as the factor of safety gives on mechanical loads at room temperature. Methods of design and analysis for creep, creep rupture, and creep buckling are presented. Example problems are included to illustrate the analytical methods. Creep data for some common structural materials are presented. Appendix B is a description, user's manual, and listing for the creep analysis program. The program predicts the time to a given creep strain or to creep rupture for a material subjected to a specified stress-temperature-time spectrum. Fatigue at elevated temperature is discussed. Methods of analysis for high stress-low cycle fatigue, fatigue below the creep range, and fatigue in the creep range are included. The interaction of thermal fatigue and mechanical loads is considered, and a detailed approach to fatigue analysis is given for structures operating below the creep range.
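The abstract does not reproduce the program's internals. As a minimal sketch of how such a prediction can work, the Python below combines a Larson-Miller creep-rupture correlation with Robinson's time-fraction rule to accumulate damage over a stress-temperature-time spectrum; the correlation coefficients and the example spectrum are illustrative placeholders, not real material data or the report's actual code.

```python
import math

# Illustrative Larson-Miller correlation: LMP = T * (C + log10(t_r)),
# with T in degrees Rankine and rupture time t_r in hours. Both the
# constant C and the stress correlation below are placeholders.
LMP_C = 20.0

def lmp_from_stress(stress_ksi: float) -> float:
    """Placeholder inverse correlation: Larson-Miller parameter as a
    decreasing function of applied stress. A real program would
    interpolate tabulated creep-rupture data instead."""
    return 40000.0 - 4000.0 * math.log10(stress_ksi)

def rupture_time_hours(stress_ksi: float, temp_rankine: float) -> float:
    """Invert LMP = T * (C + log10(t_r)) for the rupture time t_r."""
    lmp = lmp_from_stress(stress_ksi)
    return 10.0 ** (lmp / temp_rankine - LMP_C)

def life_fraction(spectrum):
    """Robinson's time-fraction rule: sum t_i / t_r(sigma_i, T_i) over
    each (stress, temperature, duration) segment; creep rupture is
    predicted when the accumulated fraction reaches 1."""
    return sum(dt / rupture_time_hours(s, T) for s, T, dt in spectrum)

# Example spectrum: (stress in ksi, temperature in Rankine, hours held).
spectrum = [(30.0, 1460.0, 500.0), (25.0, 1560.0, 100.0)]
damage = life_fraction(spectrum)
print(f"accumulated damage fraction: {damage:.2f}")
print("creep rupture predicted" if damage >= 1.0 else "below rupture life")
```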
Social Norms, Status Spending and Household Debt: Evidence from Kyrgyzstan
Development economists have two key paradigms concerning poverty and financial markets. One considers the poor in the developing world as operating in imperfect markets; the other views the poor as subject to constraints. The policy prescriptions stemming from these views would be, respectively, improving market access and redistribution. We consider one important constraint facing the poor: social norms that require spending on ceremonial activities. This paper adds to the literature by providing empirical evidence that access to loans makes households spend more on ceremonies, and that this higher ceremonial spending in turn increases the likelihood of debt, creating a vicious circle which might keep households in poverty. Policies aimed solely at removing market frictions or providing benefits to the poor will therefore not have the desired effect; these measures have to be combined with reforms aimed at changing the existing institutions.
Cost Analysis of Nondeterministic Probabilistic Programs
We consider the problem of expected cost analysis over nondeterministic probabilistic programs, which aims at automated methods for analyzing the resource-usage of such programs. Previous approaches for this problem could only handle nonnegative bounded costs. However, in many scenarios, such as queuing networks or analysis of cryptocurrency protocols, both positive and negative costs are necessary and the costs are unbounded as well.

In this work, we present a sound and efficient approach to obtain polynomial bounds on the expected accumulated cost of nondeterministic probabilistic programs. Our approach can handle (a) general positive and negative costs with bounded updates in variables; and (b) nonnegative costs with general updates to variables. We show that several natural examples which could not be handled by previous approaches are captured in our framework.

Moreover, our approach leads to an efficient polynomial-time algorithm, while no previous approach for cost analysis of probabilistic programs could guarantee polynomial runtime. Finally, we show the effectiveness of our approach by presenting experimental results on a variety of programs, motivated by real-world applications, for which we efficiently synthesize tight resource-usage bounds.

Comment: A conference version will appear in the 40th ACM Conference on Programming Language Design and Implementation (PLDI 2019).
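The paper's contribution is a static analysis that synthesizes polynomial bounds. Purely to illustrate the quantity being bounded, here is a minimal Monte Carlo sketch (not the paper's method) that estimates the expected accumulated cost of a toy probabilistic program mixing positive and negative costs; the program, probabilities and costs are invented for the example.

```python
import random

def run_once(x: int) -> float:
    """One trajectory of a toy probabilistic program: while x > 0,
    with probability 0.7 a job is served (x -= 1, cost +2); otherwise
    a new job arrives (x += 1, cost -1, e.g. a fee collected).
    Returns the accumulated cost, which can be negative mid-run."""
    cost = 0.0
    while x > 0:
        if random.random() < 0.7:
            x -= 1
            cost += 2.0
        else:
            x += 1
            cost -= 1.0
    return cost

def expected_cost(x0: int, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the expected accumulated cost."""
    return sum(run_once(x0) for _ in range(trials)) / trials

for x0 in (1, 2, 4, 8):
    print(x0, round(expected_cost(x0), 2))
# The estimates grow roughly linearly in x0 (about 2.75 * x0 here),
# i.e. a degree-1 polynomial bound on the expected cost holds for
# this particular program.
```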
Secondary use of electronic medical records for early identification of raised condition likelihoods in individuals: a machine learning approach
With many symptoms being common to multiple diseases, there is a challenge in producing an initial diagnosis, or a recommendation for diagnostic tests, from a set of symptoms that could have been produced by a number of diseases. Often the initial choice of diagnosis or testing is based on a clinician's impression of the likelihood of that condition in a general population; however, the opportunity may exist to modify these likelihoods based on individuals' recorded medical histories. This data-driven approach utilises existing data and is thus cheap and non-invasive. A method is proposed by which an individual's likelihoods of having specified medical conditions are modified by the similarity of that individual's medical history to the medical histories of other individuals, comparing the prevalence of conditions in the records of individuals who are similar to the individual of interest with their prevalence in the records of individuals who are dissimilar. In order to maximise the number of records available for analysis, a process was developed for merging data from disparate sources that used different clinical coding systems, including extensive development of a technique for semi-automatically mapping clinical events coded in ICD9-CM to Clinical Terms Version 3 (CTV3), for which no existing mapping table was found. Semantically similar fields in the source code sets were identified and retained in the combined data set. 'Codelists' comprising multiple CTV3 codes for a variety of conditions were built that defined the presence of those conditions within individual records. The hierarchical structure of the CTV3 code table was utilised as a method of identifying codes that differed in structure but had clinically similar or related meaning. The optimum degree of granularity of the coded data to use in identifying similar records was investigated and used in subsequent analysis.
Two methods were used for discovering groups of similar and dissimilar individuals: the 'nearest neighbours' method and the grouping of records using a clustering process. Altered likelihoods for a range of conditions were investigated, and results for the nearest-neighbours approach were compared to the clustering approach. Results for adjusted condition likelihoods for 18 conditions are reported, together with a discussion of possible reasons for a change, or otherwise, in the condition likelihood, and a discussion of the clinical significance and potential use of information about such a change. Logistic regressions were also performed on a selection of conditions. KNN performed better than logistic regression when judged by F-score (or by sensitivity and specificity separately); however, the situation was more nuanced when looking at likelihood ratios: logistic regression produced higher (better) positive likelihood ratios, but KNN produced lower (better) negative likelihood ratios. Logistic regression also produced higher odds ratios.
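As a minimal sketch of the nearest-neighbours idea described above (not the thesis's actual pipeline), the Python below represents each patient as a boolean vector of clinical codes and compares a condition's prevalence among a patient's k nearest neighbours with its prevalence among the remaining, dissimilar patients. The random data, the Jaccard metric and the choice of k are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy data: rows are patients, columns mark presence of clinical codes.
# (The thesis derives such vectors from CTV3 codelists; this random
# matrix and the 10% baseline prevalence are purely illustrative.)
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 50)).astype(bool)
has_condition = rng.random(1000) < 0.1

def prevalence_contrast(X, has_condition, i, k=50):
    """Compare a condition's prevalence among patient i's k nearest
    neighbours (by Jaccard similarity of code histories) with its
    prevalence among the remaining, dissimilar patients."""
    nn = NearestNeighbors(n_neighbors=k + 1, metric="jaccard").fit(X)
    _, idx = nn.kneighbors(X[i : i + 1])
    neighbours = idx[0][1:]            # drop the query patient itself
    rest = np.ones(len(X), dtype=bool)
    rest[idx[0]] = False               # everyone outside the neighbourhood
    return has_condition[neighbours].mean(), has_condition[rest].mean()

p_similar, p_dissimilar = prevalence_contrast(X, has_condition, i=0)
print(f"prevalence among similar patients:    {p_similar:.3f}")
print(f"prevalence among dissimilar patients: {p_dissimilar:.3f}")
```

A large gap between the two prevalences for a real patient would be the signal used to raise or lower that patient's prior likelihood of the condition.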
Automated Reasoning
This volume, LNAI 13385, constitutes the refereed proceedings of the 11th International Joint Conference on Automated Reasoning, IJCAR 2022, held in Haifa, Israel, in August 2022. The 32 full research papers and 9 short papers presented together with two invited talks were carefully reviewed and selected from 85 submissions. The papers focus on the following topics: Satisfiability, SMT Solving, Arithmetic; Calculi and Orderings; Knowledge Representation and Justification; Choices, Invariance, Substitutions and Formalization; Modal Logics; Proof Systems and Proof Search; Evolution, Termination and Decision Problems. This is an open access book.