110 research outputs found
Cat and Mouse Based Task Optimization Model for Optimized Data Collection in Smart Agriculture
Data collection from agricultural fields is labor-intensive and requires novel methodologies to produce reliable outcomes. Combining edge computing with wireless sensor networks (WSNs) for smart farming has enabled efficient data collection from remote fields on a large scale. The proposed work prioritizes adopting an optimization algorithm for the data collection task and introduces a new, effective data collection framework. The framework first collects data from the agricultural fields via sensors and then transmits it to the edge server. The path between the sensors and the edge server is obtained optimally using the cat and mouse based task optimization (CMTO) model. The sensed data are transmitted along the optimal route, after which the edge server evaluates them against data quality metrics such as precision, correctness, completeness, and reliability. Valid data are then identified and transferred to cloud servers for storage. The work is simulated on the Python platform and evaluated using the crop recommender dataset. The evaluations demonstrated the method's efficacy compared with existing state-of-the-art algorithms, with improvements of up to 12.5% in energy consumption, 7.14% in communication latency, 4% in execution cost, 2.27% in completeness, 1.12% in precision, 9.52% in correctness, and 3.37% in reliability.
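The edge-side validation step described in the abstract can be sketched in Python. The metric definitions, field names, and plausibility thresholds below are illustrative assumptions for the sketch, not the paper's exact formulas:

```python
def completeness(readings):
    """Fraction of expected sensor fields that are present (not None).
    Assumed definition for illustration."""
    present = sum(1 for r in readings for v in r.values() if v is not None)
    total = sum(len(r) for r in readings)
    return present / total if total else 0.0

def reliability(readings, lo, hi):
    """Fraction of reported values falling inside a plausible range
    [lo, hi]. The range is a hypothetical threshold, not the paper's."""
    vals = [v for r in readings for v in r.values() if v is not None]
    return sum(lo <= v <= hi for v in vals) / len(vals) if vals else 0.0

# Hypothetical sensor batch: one sample has a dropped soil-moisture value.
readings = [
    {"soil_moisture": 31.2, "temperature": 24.5},
    {"soil_moisture": None, "temperature": 23.9},
]
print(completeness(readings))            # → 0.75
print(reliability(readings, 0.0, 100.0)) # → 1.0
```

Data batches scoring below chosen thresholds on such metrics would be discarded at the edge instead of being forwarded to cloud storage.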
Optimized data collection and analysis process for studying solar-thermal desalination by machine learning
An effective interdisciplinary study between machine learning and
solar-thermal desalination requires sufficiently large and well-analyzed
experimental datasets. This study develops a modified dataset collection and
analysis process for studying solar-thermal desalination by machine learning.
Based on the optimized water condensation and collection process, the proposed
experimental method collects over one thousand datasets, which is ten times
more than the average number of datasets in previous works, by accelerating
data collection and reducing the time by 83.3%. On the other hand, the effects
of dataset features are investigated by using three different algorithms,
including artificial neural networks, multiple linear regressions, and random
forests. The investigation focuses on the effects of dataset size and range on
prediction accuracy, factor importance ranking, and the model's generalization
ability. The results demonstrate that a larger dataset can significantly
improve prediction accuracy when using artificial neural networks and random
forests. Additionally, the study highlights the significant impact of dataset
size and range on ranking the importance of influence factors. Furthermore, the
study reveals that the extrapolation data range significantly affects the
extrapolation accuracy of artificial neural networks. Based on the results,
massive dataset collection and analysis of dataset feature effects are
important steps in an effective and consistent machine learning process flow
for solar-thermal desalination, which can promote machine learning as a more
general tool in the field of solar-thermal desalination.
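The abstract's central claim, that a larger dataset improves prediction accuracy, can be illustrated with one of the three algorithms it names, multiple linear regression. The synthetic data, ground-truth coefficients, and sample sizes below are assumptions for the sketch, not values from the study:

```python
import numpy as np

def coef_error(n_train, true_beta, rng, noise=1.0):
    """Fit a multiple linear regression by least squares on n_train noisy
    samples and return the L2 error of the estimated coefficients."""
    X = rng.standard_normal((n_train, true_beta.size))
    y = X @ true_beta + noise * rng.standard_normal(n_train)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.linalg.norm(beta_hat - true_beta))

rng = np.random.default_rng(0)
true_beta = np.array([2.0, -1.0, 0.5])   # assumed ground-truth coefficients
err_small = coef_error(20, true_beta, rng)
err_large = coef_error(800, true_beta, rng)
print(err_small, err_large)  # the larger dataset yields the smaller error
```

The same comparison could be repeated with random forests or neural networks, where, as the abstract notes, the gains from more data are typically even larger.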
Detecting Specific Saccharides via a Single Indicator
An improved synthesis of a rhodamine boronic acid indicator is reported. This compound is used in an optimized data collection protocol for wavelength- and time-dependent selectivity of sugars such as fructose and ribose derivatives. One indicator is thus used to selectively distinguish structurally related sugar analytes.
Product range optimization – case study
The paper presents a procedure for optimizing the product range manufactured by a company that operates in the production and sale of milling and bakery products. The authors chose this company as an example because its products are intended for the general population and must meet both qualitative and quantitative market requirements. After a brief overview of the company, the paper analyses its production capacity, staff structure and distribution, and the company's commitment to training its employees in its field of activity, so as to fulfil the conditions necessary for a high-quality operation. The analysis then covers: the cost of the product before optimization, presentation of the software and the computerized optimization procedure, selection of the products to be optimized, data collection, the optimization procedure itself, obtaining and analysing the results, and choosing the optimal solution for the analysed products and for the future competitiveness of the company.
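The product-selection and optimization workflow described above can be sketched as a toy product-mix search. The product names, unit profits, processing times, and capacity below are invented for illustration; the paper's software would solve the real problem with the company's actual cost data:

```python
from itertools import product

# Hypothetical products: (unit profit in cents, machine seconds per unit).
# Integer units avoid floating-point error in the capacity check.
PRODUCTS = {"flour_1kg": (40, 72), "bread_loaf": (30, 180)}
CAPACITY_S = 100 * 3600      # 100 machine-hours available
GRID = range(0, 3001, 100)   # candidate batch sizes per product

def best_mix():
    """Exhaustively search batch-size combinations for the most profitable
    mix that fits the capacity constraint (a stand-in for the computerized
    optimization procedure the paper describes)."""
    best_qty, best_profit = None, -1
    for qty in product(GRID, repeat=len(PRODUCTS)):
        seconds = sum(q * s for q, (_, s) in zip(qty, PRODUCTS.values()))
        if seconds > CAPACITY_S:
            continue  # mix exceeds available machine time
        profit = sum(q * p for q, (p, _) in zip(qty, PRODUCTS.values()))
        if profit > best_profit:
            best_qty, best_profit = dict(zip(PRODUCTS, qty)), profit
    return best_qty, best_profit

mix, profit_cents = best_mix()
print(mix, profit_cents / 100)  # → {'flour_1kg': 3000, 'bread_loaf': 800} 1440.0
```

A real product range with many items would use a linear-programming solver rather than grid search, but the objective and constraint have the same shape.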
Understanding of the Mole Concept Achieved by Students in a Constructivist General Chemistry Course
The purpose of this research project was to study the conceptual understanding achieved in a general chemistry course based on a constructivist approach. A group of 28 students participated in repeated measures obtained by means of conceptual maps about the mole concept, prepared three times during the course: at the beginning of the course, immediately after the concept was studied, and after studying other related concepts. In addition, eight students selected from the group of 28 were interviewed, with the interviews focusing on their conceptual maps. The analysis of the repeated measures indicated significant differences among the three times, especially between the first two, evidencing that these students attained a significantly higher level of understanding of the mole concept. The qualitative analysis carried out with students identified a broad range of responses representing different levels of hierarchical organization, of progressive differentiation, and of formation of significant relations of the mole concept. Recommendations include developing and implementing teaching methods that promote understanding of scientific concepts, and preparing science professors and teachers to emphasize teaching for conceptual understanding.
Radiation damage to nucleoprotein complexes in macromolecular crystallography
Significant progress has been made in macromolecular crystallography over recent years in both the understanding and mitigation of X-ray induced radiation damage when collecting diffraction data from crystalline proteins. In contrast, despite the large field productively engaged in studying the radiation chemistry of nucleic acids, particularly DNA, there are currently very few X-ray crystallographic studies of radiation damage mechanisms in nucleic acids. Quantitative comparison of damage to protein and DNA crystals separately is challenging, but many of the issues are circumvented by studying pre-formed biological nucleoprotein complexes, where a direct comparison of each component can be made under the same controlled conditions. Here the model protein-DNA complex C.Esp1396I is employed to investigate specific damage mechanisms for protein and DNA in a biologically relevant complex over a large dose range (2.07-44.63 MGy). To allow a quantitative analysis of radiation damage sites from a complex series of macromolecular diffraction data, a computational method has been developed that is generally applicable to the field. Typical specific damage was observed both for the protein, on particular amino acids, and for the DNA, for example cleavage of base-sugar N1-C and sugar-phosphate C-O bonds. Strikingly, the DNA component was determined to be far more resistant to specific damage than the protein over the investigated dose range: at low doses the protein was already susceptible to radiation damage, whereas damage to the DNA was observed only at significantly higher doses.
Dissimilarity metric based on local neighboring information and genetic programming for data dissemination in vehicular ad hoc networks (VANETs)
This paper presents a novel dissimilarity metric based on local neighboring information
and a genetic programming approach for efficient data dissemination in Vehicular Ad Hoc Networks
(VANETs). The primary aim of the dissimilarity metric is to replace the Euclidean distance in
probabilistic data dissemination schemes, which use the relative Euclidean distance among vehicles
to determine the retransmission probability. The novel dissimilarity metric is obtained by applying a
metaheuristic genetic programming approach, which provides a formula that maximizes the Pearson
Correlation Coefficient between the novel dissimilarity metric and the Euclidean metric in several
representative VANET scenarios. Findings show that the obtained dissimilarity metric correlates with
the Euclidean distance up to 8.9% better than classical dissimilarity metrics. Moreover, the obtained
dissimilarity metric is evaluated when used in well-known data dissemination schemes, such as
the p-persistence, polynomial, and irresponsible algorithms. The obtained dissimilarity metric achieves
significant improvements in terms of reachability compared with the classical dissimilarity
metrics and the Euclidean metric-based schemes in the studied VANET urban scenarios.
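The fitness criterion the genetic programming search optimizes, the Pearson correlation between a candidate dissimilarity metric and the Euclidean distance, can be illustrated directly. The vehicle positions, neighbour counts, and the candidate formula below are invented stand-ins; a GP run would evolve the formula rather than fix it by hand:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical VANET snapshot: a sender at the origin, four receivers,
# and each receiver's local neighbour count (invented values).
sender = (0, 0)
receivers = [(30, 40), (60, 10), (90, 80), (20, 70)]
neighbours = [4, 3, 2, 3]

euclid = [math.dist(sender, p) for p in receivers]
# One fixed candidate metric: distance discounted by local density.
candidate = [d / (1 + n) for d, n in zip(euclid, neighbours)]

r = pearson(candidate, euclid)   # GP would maximize this over formulas
print(round(r, 3))
```

In a probabilistic dissemination scheme such as p-persistence, the evolved metric would then replace the Euclidean distance when computing each vehicle's retransmission probability.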
Relationships between chlorophyll density and ocean radiance as measured by U2/OCS: Algorithms, examples and comparison
An ocean-atmosphere radiative transfer computation method, suitable for determining the lower boundary ocean albedo and other radiation components from spectral measurements of upwelling radiance taken from a high altitude platform, is described. The method was applied to a set of color scanner data taken over slope water of the South Atlantic Bight to determine the influence of chlorophyll-a pigments in the sea on the ratio of upwelling radiance to downwelling irradiance as a function of wavelength. The resulting chlorophyll concentrations are compared with measurements made by ships stationed along the flight path.