Using SMVAM as a linear approximation to a nonlinear function: a note
A study contending that the linear statistical market-value accounting model (SMVAM) is a reasonable approximation of the relationship between market and book equity for firms with positive balance sheets, but that the linear approximation is inadequate when the data sample includes firms whose balance sheets show a low or negative liquidation value
Comparison of Bacteroides Human Markers for Pollution Diagnostics in Recreational Waters
This presentation was given during the Great Lakes Beach Association Annual Conference
A study of the time of hospital discharge of differentiated thyroid cancer patients after receiving iodine-131 for thyroid remnant ablation treatment
The aim of this study was to measure the radiation exposure rate from differentiated thyroid carcinoma (DTC) patients who had received iodine-131 (131I) treatment, and to evaluate hospital discharge planning in relation to three different sets of regulations. We studied 100 patients with DTC, 78 females and 22 males, aged 13 to 79 years (mean 44.40±15.83 years), in three groups treated with 3.7, 5.5 or 7.4 GBq of 131I, respectively. The external whole-body dose rates following oral administration of 131I were measured after each of the first three hospitalization days. A multivariate linear analysis was performed, with exposure rates as dependent variables and administered treatment dose, age, gender, regional and/or distant metastases, thyroglobulin (Tg), anti-Tg antibodies and thyroid remnant as factors in the three dose groups. We found that the exposure rates after each of the first three days of hospitalization were 30, 50 and 70 μSv/h at 1 m. All our DTC patients had an acceptable dose rate on days 2 and 3 that allowed their hospital discharge. After only 1 day of hospitalization, just 3 of 11 cases showed impermissible exposure rates above 70 μSv/h. In conclusion, in the authors' opinion, once exposure rates have been measured, most treated DTC patients could be discharged after only one day of hospitalization, even some of those treated with high doses of 131I (7.4 GBq). Patients who received the higher doses of 131I should not be released before their individual exposure rate is measured
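The discharge decision described above reduces to comparing each day's measured dose rate against a regulatory release limit. A minimal sketch in Python, assuming a single illustrative limit of 70 μSv/h at 1 m (the study itself compares three different regulatory sets) and hypothetical patient readings:

```python
# Illustrative release limit; the study evaluates three different
# regulatory sets, of which only the 70 uSv/h figure is quoted here.
RELEASE_LIMIT_USV_H = 70.0

def earliest_discharge_day(daily_rates_usv_h, limit=RELEASE_LIMIT_USV_H):
    """Return the first hospitalization day (1-indexed) on which the
    measured whole-body dose rate at 1 m is at or below the limit,
    or None if no measured day qualifies."""
    for day, rate in enumerate(daily_rates_usv_h, start=1):
        if rate <= limit:
            return day
    return None

# Hypothetical measurements for two patients (uSv/h at 1 m, days 1-3)
print(earliest_discharge_day([120.0, 60.0, 25.0]))  # 2
print(earliest_discharge_day([65.0, 30.0, 12.0]))   # 1
```

The key point the authors make is that this check must run on individually measured rates, not on the administered activity alone, especially for the 7.4 GBq group.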
A critical take on the role of random and local search-oriented components of modern computational intelligence-based optimization algorithms
This is the final version. Available on open access from Springer via the DOI in this record. Data availability: All used data have been presented in the paper. The codes used to test these algorithms are available at the corresponding author's GitHub page: https://github.com/BabakZolghadrAsli/randomness_in_CI_optimization

The concept of computational intelligence (CI)-based optimization algorithms emerged in the early 1960s as a more practical alternative to contemporary derivative-based approaches. This paved the way for many modern algorithms to arise with an unprecedented growth rate in recent years, each claiming to present a novel and profound breakthrough in the field. That said, many have raised concerns about the performance of these algorithms and even identified fundamental flaws that could potentially undermine the integrity of their results. On that note, the premise of this study was to replicate some of the more prevalent, fundamental components of these algorithms in an abstract format as a means of observing their behavior in an isolated environment. Six pseudo-algorithms were designed to create a spectrum of intelligent behavior ranging from absolute randomness to a local search-oriented computational architecture. These were then used to solve a set of centered and non-centered benchmark suites to see whether statistically different patterns would emerge. The results clearly showed that algorithm performance suffers significantly as these benchmarks become more intricate, not just in terms of the number of dimensions in the search space but also the mathematical structure of the benchmark. The implication is that, in some cases, sheer processing resources can mask an algorithm's lack of sufficient intelligence. Just as importantly, this study attempted to identify some mechanics and concepts that could potentially cause or amplify this problem.
For instance, the excessive use of greedy strategies, a prevalent measure embedded in many modern CI-based algorithms, was identified as one potential cause. The results, however, highlight a more fundamental problem in the CI-based optimization field: these algorithms are often treated as black boxes. This perception has cultivated a culture of not exploring the underlying structure of these algorithms as long as they are deemed capable of generating acceptable results, which permits similar biases to go undetected
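The two extremes of the pseudo-algorithm spectrum described above, absolute randomness and greedy local search, can be sketched in a few lines. This is an illustrative sketch only, not the authors' code (which is linked above); the sphere benchmark, evaluation budget, step size, and seed are all assumptions:

```python
import random

def sphere(x):
    """Benchmark: sum of squares, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def random_search(f, dim, bounds, evals, seed=0):
    """Pseudo-algorithm at the 'absolute randomness' end of the spectrum:
    every candidate is sampled uniformly, with no memory of the past."""
    rng = random.Random(seed)
    best = min(
        ([rng.uniform(*bounds) for _ in range(dim)] for _ in range(evals)),
        key=f,
    )
    return f(best)

def greedy_local_search(f, dim, bounds, evals, step=0.1, seed=0):
    """Pseudo-algorithm at the local search-oriented end: a greedy strategy
    that only ever accepts a perturbed candidate if it improves the incumbent."""
    rng = random.Random(seed)
    x = [rng.uniform(*bounds) for _ in range(dim)]
    fx = f(x)
    for _ in range(evals - 1):
        cand = [min(max(v + rng.gauss(0, step), bounds[0]), bounds[1]) for v in x]
        fc = f(cand)
        if fc < fx:  # greedy acceptance: never keep a worse point
            x, fx = cand, fc
    return fx

rand_best = random_search(sphere, dim=10, bounds=(-5, 5), evals=2000)
greedy_best = greedy_local_search(sphere, dim=10, bounds=(-5, 5), evals=2000)
print(rand_best, greedy_best)
```

On a smooth, centered benchmark like this, the greedy searcher easily outperforms pure randomness with the same budget; the study's point is that on more intricate, non-centered benchmarks that same always-accept-improvements behavior can become a source of bias rather than intelligence.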
Enrichment of kefir with apple and lemon fibers (Obogaćivanje kefira vlaknima jabuke i limuna)
The effects of apple and lemon fiber addition on some properties of kefir were investigated. Seven different kefirs were produced (A is the control; B, C and D contain 0.25 %, 0.5 % and 1 % apple fiber, and E, F and G contain 0.25 %, 0.5 % and 1 % lemon fiber, respectively) and stored for 20 days at 4±1 °C. pH, titratable acidity, dry matter, water activity, water-holding capacity, viscosity, L, a and b values, sensory properties, total lactic acid bacteria, Lactococcus spp., Leuconostoc spp. and yeast counts of the kefirs were determined on days 1, 10 and 20 of storage. The addition of apple and lemon fiber enhanced the rheological, microbiological and sensory properties of the kefirs (p<0.01). Apple and lemon fiber could be used for kefir production at a rate of 0.25 or 0.5 %
Hydrogen production from phenol steam reforming over Ni-Co/ZrO2 catalyst: effect of catalyst dilution
This study investigated hydrogen production from phenol steam reforming over zirconia (ZrO2)-supported nickel-cobalt catalysts diluted with silicon carbide (SiC). The objective was to determine the effect of catalyst dilution on hydrogen production and phenol conversion at various SiC dilutions. The catalysts were prepared by the impregnation method, and their performance tests were carried out in a micro fixed-bed reactor at atmospheric pressure and 800 °C, with a feed flow rate of 0.36 mL/min, a catalyst weight of 0.2 g, and a dilution range of 0.05 to 0.35 g (1:0 to 1:1.75). The results showed that catalyst dilution did not much affect the catalyst activity toward phenol conversion; however, the presence of SiC did improve the conversion of phenol. The maximum conversion, 98.9 % with a hydrogen mole fraction of 0.6, was obtained at a SiC dilution of 0.3 g (1:1.5)
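The parenthetical ratios above express SiC dilution relative to the fixed 0.2 g catalyst loading. A one-line check, assuming the notation means catalyst:SiC by mass:

```python
CATALYST_MASS_G = 0.2  # fixed catalyst loading reported in the study

def dilution_ratio(sic_mass_g, catalyst_mass_g=CATALYST_MASS_G):
    """Express a SiC dilution as the x in a catalyst:SiC mass ratio of 1:x."""
    return sic_mass_g / catalyst_mass_g

# 0.35 g of SiC corresponds to 1:1.75; the optimum at 0.3 g to 1:1.5.
print(round(dilution_ratio(0.35), 3))
print(round(dilution_ratio(0.30), 3))
```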
Nationwide public perceptions regarding the acceptance of using wastewater for community health monitoring in the United States
To assess the levels of infection across communities during the coronavirus disease 2019 pandemic, researchers have measured severe acute respiratory syndrome coronavirus 2 RNA in feces dissolved in sewer water. This activity is colloquially known as sewer monitoring and is referred to as wastewater-based epidemiology in academic settings. Although global ethical principles have been described, sewer monitoring is unregulated for health privacy protection when used for public health surveillance in the United States. This study used Qualtrics XM, a national research panel provider, to recruit participants to answer an online survey. Respondents (N = 3,083) answered questions about their knowledge, perceptions of what is to be monitored, where monitoring should occur, and privacy concerns related to sewer monitoring as a public health surveillance tool. Furthermore, a privacy attitude questionnaire was used to assess the general privacy boundaries of respondents. Participants were most likely to support monitoring for diseases (92%), environmental toxins (92%), and terrorist threats (88%; e.g., anthrax). Two-thirds of the respondents endorsed no prohibition on sampling scale (e.g., monitoring anything from a single residence to an entire community was acceptable); the location category respondents most commonly wanted to prohibit from sampling was personal residences. Sewer monitoring is an emerging technology, and our study sheds light on perceptions that could benefit from educational programs in areas where public acceptance is comparatively lower. Respondents clearly communicated guardrails for sewer monitoring, and public opinion should inform future policy, application, and regulation measures
A call for a fundamental shift from model-centric to data-centric approaches in hydroinformatics
This is the final version. Available on open access from Cambridge University Press via the DOI in this record. Data availability statement: All used data have been presented in the paper.

Over the years, data-driven models have gained notable traction in water and environmental engineering. The adoption of these cutting-edge frameworks is still in progress in the grand scheme of things, yet for the most part, such attempts have centered on the models themselves and their internal computational architecture, that is, the model-centric approach. These endeavors can certainly pave the way for more tailor-fitted models capable of producing accurate results. However, such a perspective often neglects a fundamental assumption of these models: the importance of the reliability, correctness, and accessibility of the data used in constructing them. This challenge arises from the prevalent model-centric paradigm of thinking in the field. An alternative approach would prioritize placing data at the focal point, focusing on systematically enhancing current datasets and devising frameworks to improve data collection schemes. This suggests a paradigm shift toward more data-centric thinking in water and environmental engineering. Practically, this shift is not without challenges and necessitates smarter data collection rather than excessive data collection. Equally important is the ethical and accurate collection of data, making it available to everyone while safeguarding the rights of individuals and other legal entities involved in the process.

European Union Horizon 2020
A Qualitative Modeling Method for Platform Design
The development of a collection of related products sharing a common platform represents an important approach in modern product design. An ongoing problem is the application of design methods to a limited set of evolving product data and archived design knowledge to search and explore alternative platform options. With the goal of maximizing the reuse of end-item artifacts and supply-chain elements, we propose a design modeling method using basic qualitative relationships among relevant performance, design, and noise parameters in the system of interest. By using qualitative models related to multiple levels of design data, the method provides a single high-level graphical representation of design data, including archived knowledge in a design repository. © 2005 IEEE
Development of the ChatGPT, Generative Artificial Intelligence and Natural Large Language Models for Accountable Reporting and Use (CANGARU) Guidelines
The swift progress and ubiquitous adoption of Generative AI (GAI), Generative Pre-trained Transformers (GPTs), and large language models (LLMs) like ChatGPT have spurred queries about their ethical application, use, and disclosure in scholarly research and scientific productions. A few publishers and journals have recently created their own sets of rules; however, the absence of a unified approach may lead to a 'Babel Tower Effect,' potentially resulting in confusion rather than the desired standardization. In response, we present the ChatGPT, Generative Artificial Intelligence, and Natural Large Language Models for Accountable Reporting and Use Guidelines (CANGARU) initiative, with the aim of fostering a cross-disciplinary, global, inclusive consensus on the ethical use, disclosure, and proper reporting of GAI/GPT/LLM technologies in academia. The present protocol consists of four distinct parts: a) an ongoing systematic review of GAI/GPT/LLM applications to understand the linked ideas, findings, and reporting standards in scholarly research, and to formulate guidelines for their use and disclosure; b) a bibliometric analysis of existing author guidelines in journals that mention GAI/GPT/LLM, with the goal of evaluating existing guidelines, analyzing the disparity in their recommendations, and identifying common rules that can be brought into the Delphi consensus process; c) a Delphi survey to establish agreement on the items for the guidelines, ensuring principled GAI/GPT/LLM use, disclosure, and reporting in academia; and d) the subsequent development and dissemination of the finalized guidelines and their supplementary explanation and elaboration documents.

Comment: 20 pages, 1 figure, protocol