
    Neural network-based shape retrieval using fuzzy clustering and moment-based representations.


    Histological Studies of Brewery Spent Grains in Dietary Protein Formulation in Donryu Rats

    The increasing production volume in the brewing industry continually generates large quantities of solid waste, including spent grains, surplus yeast, malt sprouts and cullet. The disposal of spent grains is often a problem and poses major health and environmental challenges, making it necessary to explore alternatives for their management. This paper investigates the effects of a diet formulated with brewery spent grain on the haematological, biochemical, histological and growth performance of Donryu rats. The rats were allocated to six dietary treatment groups and fed, in a short-term study, diets containing graded levels of spent grains at 0, 3, 6, 9, 12 and 100% weight/weight. The outcome demonstrated that the formulated diet had a positive effect on the growth performance of the rats up to an inclusion level of 6%, while the haematological and biochemical evaluations indicated that the threshold should not exceed 9% of the grain. However, the histological study of the liver indicated a limit of 3% inclusion in feed without serious adverse effects. This shows that a blend in the range of 1-3% is appropriate for utilizing the waste in human food without adverse effects on the liver. The economic advantage accruing from this waste conversion process not only solves the problem of waste disposal but also addresses malnutrition in feed rations.

    Pump Scheduling for Optimised Energy Cost and Water Quality in Water Distribution Networks

    Delivering water to customers in sufficient quantity and quality, and at low cost, is the main driver for many water utilities around the world. One way of working toward this goal is to optimize the operation of a water distribution system, i.e., to schedule the operation of pumps in a way that minimizes the cost of the energy used. This is not an easy process, due to the nonlinearity of the hydraulic system's response to different schedules and the complexity of water networks in general. This thesis reviewed over 250 papers about pump scheduling published over the last five decades. The review revealed that, despite a lot of good work done in the past, the existing pump scheduling methods have several drawbacks, revolving mainly around the ability to find globally optimal pump schedules in a computationally efficient manner whilst dealing with water quality and other complexities of large pipe networks. A new pump scheduling method, entitled the iterative Extended Lexicographic Goal Programming (iELGP) method, is developed and presented in this thesis with the aim of overcoming the above drawbacks. The pump scheduling problem is formulated and solved as an optimisation problem whose objectives are the electricity cost and the water age (used as a surrogate for water quality). The developed pump scheduling method is general and can be applied to any water distribution network configuration. Moreover, the new method can optimize the operation of both fixed and variable speed pumps. The new method was tested on three case studies, each with a different topography, demand pattern, number of pumps and number of tanks. The objective in the first and second case studies is to minimise energy cost only, whereas in the third case study, energy cost and water age are minimized simultaneously. The results obtained using the new method are compared with those obtained from other pump scheduling methods applied to the same case studies.
The results demonstrate that the iELGP method is capable of determining optimal, low-cost pump schedules whilst trading off energy costs and water quality. The optimal schedules can be generated in a computationally very efficient manner. Given this, the iELGP method has the potential to be applied to real-time scheduling of pumps in larger water distribution networks, without the need to simplify the respective hydraulic models or replace them with surrogate models.
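The lexicographic goal programming used by iELGP is beyond a short example, but the core object of any pump scheduling formulation, a schedule evaluated against a tariff and tank constraints, can be sketched. Everything below (tariff, demands, pump size, tank bounds) is invented toy data, and exhaustive enumeration stands in for the thesis's optimisation method:

```python
from itertools import product

# Hypothetical single-pump, single-tank toy system (NOT the iELGP method):
# six four-hour periods, a time-of-use tariff, fixed demands, and a tank
# that must stay within bounds and end no lower than it started.
TARIFF = [0.05, 0.05, 0.12, 0.18, 0.18, 0.08]   # cost per kWh in each period
DEMAND = [10, 8, 14, 16, 12, 10]                # water drawn per period (m3)
PUMP_RATE = 15.0                                # m3 pumped per period when on
PUMP_KWH = 40.0                                 # energy used per period when on
TANK_MIN, TANK_MAX, TANK_START = 5.0, 60.0, 30.0

def evaluate(schedule):
    """Return (energy_cost, feasible) for a tuple of 0/1 pump states."""
    level, cost = TANK_START, 0.0
    for on, price, demand in zip(schedule, TARIFF, DEMAND):
        level += on * PUMP_RATE - demand
        cost += on * PUMP_KWH * price
        if not (TANK_MIN <= level <= TANK_MAX):
            return cost, False
    return cost, level >= TANK_START            # end-of-day mass balance

# Brute force over all 2^6 schedules; a real network needs a proper optimiser.
best = min((s for s in product((0, 1), repeat=6) if evaluate(s)[1]),
           key=lambda s: evaluate(s)[0])
print(best, round(evaluate(best)[0], 2))
```

The cheapest feasible schedule shifts pumping away from the expensive peak periods, which is exactly the trade-off a real scheduler exploits; adding water age as a second, lexicographically ordered objective is what iELGP contributes on top of this.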

    Development of a spectral unmixing procedure using a genetic algorithm and spectral shape

    Spectral unmixing produces spatial abundance maps of endmembers, or 'pure' materials, using sub-pixel scale decomposition. It is particularly well suited to extracting a greater portion of the rich information content of hyperspectral data in support of real-world issues such as mineral exploration, resource management, agriculture and food security, pollution detection, and climate change. However, illumination or shading effects, signature variability, and noise are problematic. Least Squares (LS) based spectral unmixing techniques, such as Non-Negative Sum Less or Equal to One (NNSLO), depend on "shade" endmembers to deal with amplitude errors. Furthermore, LS-based methods do not consider amplitude errors in the abundance constraint calculations and thus often lead to abundance errors. The Spectral Angle Constraint (SAC) reduces the amplitude errors, but the abundance errors remain because of the fully constrained condition. In this study, a Genetic Algorithm (GA) was adapted to resolve these issues, using a series of iterative computations based on the Darwinian strategy of 'survival of the fittest' to improve the accuracy of abundance estimates. The developed GA uses a Spectral Angle Mapper (SAM) based fitness function to calculate abundances satisfying a SAC-based weakly constrained condition. It was validated using two hyperspectral data sets: (i) a simulated hyperspectral dataset with embedded noise and illumination effects and (ii) AVIRIS data acquired over Cuprite, Nevada, USA. Results showed that the new GA-based unmixing method improved the abundance estimation accuracy and was less sensitive to illumination effects and noise than existing spectral unmixing methods such as the SAC and the NNSLO. In the case of the synthetic data, the GA increased the average index of agreement between true and estimated abundances by 19.83% and 30.10% compared to the SAC and the NNSLO, respectively.
Furthermore, in the case of the real data, the GA improved the overall accuracy by 43.1% and 9.4% compared to the SAC and the NNSLO, respectively.
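As a small illustration of the fitness measure named above: the Spectral Angle Mapper compares spectra by angle rather than amplitude, which is why a SAM-based fitness is insensitive to shading where a least-squares residual is not. The two endmember spectra and the abundances below are invented:

```python
import numpy as np

def sam(a, b):
    """Spectral angle (radians) between two spectra; 0 means identical shape."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Two made-up endmember spectra over four bands, and a mixed pixel.
endmembers = np.array([[0.9, 0.7, 0.2, 0.1],
                       [0.1, 0.2, 0.6, 0.8]])
true_abund = np.array([0.3, 0.7])
pixel = true_abund @ endmembers          # linear mixture
shaded = 0.5 * pixel                     # same pixel under 50% illumination loss

model = true_abund @ endmembers          # reconstruction from candidate abundances
print(sam(shaded, model))                # angle is unchanged by shading
print(np.linalg.norm(shaded - model))    # least-squares residual is not
```

In a GA such as the one described above, each chromosome would encode a candidate abundance vector and `sam(pixel, abund @ endmembers)` would serve as the fitness to minimise; the GA machinery itself is omitted here.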

    Spatial combination of sensor data deriving from mobile platforms for precision farming applications

    This thesis combines optical sensors on a ground platform and on an aerial platform for field measurements in wheat, to identify nitrogen (N) levels, estimate biomass (BM) and predict yield. The Multiplex Research (MP) fluorescence sensor was used for the first time in wheat. The individual objectives were: (i) evaluation of different available sensors and sensor platforms used in Precision Farming (PF) to quantify the crop nutrition status, (ii) acquisition of ground and aerial sensor data with two ground spectrometers, an aerial spectrometer and a ground fluorescence sensor, (iii) development of effective post-processing methods for correction of the sensor data, (iv) analysis and evaluation of the sensors with regard to the mapping of biomass, yield and nitrogen content in the plant, and (v) yield simulation as a function of different sensor signals. This thesis contains three papers published in international peer-reviewed journals. The first publication is a literature review of sensor platforms used in agricultural research. A subdivision of sensors and their applications was made, based on a detailed categorization model; it evaluates strengths and weaknesses and discusses research results gathered with aerial and ground platforms carrying different sensors. Autonomous robots and swarm technologies suitable for PF tasks were also reviewed. The second publication focuses on spectral and fluorescence sensors for BM, yield and N detection. The ground sensors were mounted on the Hohenheim research sensor platform Sensicle; a further spectrometer was installed in a fixed-wing Unmanned Aerial Vehicle (UAV). In this study, the sensors of the Sensicle and the UAV were used to determine plant characteristics and yield in three-year field trials at the research station Ihinger Hof, Renningen (Germany), an institution of the University of Hohenheim, Stuttgart (Germany). Winter wheat (Triticum aestivum L.)
was sown on three research fields, with different N levels applied to each field. The measurements in the field were geo-referenced and logged with an absolute GPS accuracy of ±2.5 cm. The GPS data of the UAV were corrected based on the pitch and roll position of the UAV at each measurement. In the first step of the data analysis, the raw sensor data were post-processed and converted into indices and ratios relating to plant characteristics. The converted ground sensor data were analysed, and the correlation results were interpreted with respect to the dependent variables (DV): BM weight, wheat yield and available N. The results showed significant positive correlations between the DVs and the Sensicle sensor data. For the third paper, the UAV sensor data were included in the evaluations. The UAV data analysis revealed weakly significant results for only one field in the year 2011; a multirotor UAV was considered a more viable aerial platform, allowing for more precision and a higher payload. The ground sensors, by contrast, showed their strength in a close measuring distance to the plant and a smaller measurement footprint. The results of the two ground spectrometers showed significant positive correlations between yield and the indices from CropSpec, NDVI (Normalised Difference Vegetation Index) and REIP (Red-Edge Inflection Point). FERARI and SFR (Simple Fluorescence Ratio) of the MP fluorescence sensor were also chosen for the yield prediction model analysis. CropSpec and REIP correlated significantly with the available N. The BM weight correlated with REIP even at a very early growth stage (Z 31), and with SAVI (Soil-Adjusted Vegetation Index) at the ripening stage (Z 85). REIP, FERARI and SFR showed high correlations with the available N, especially in June and July. The ratios and signals of the MP sensor correlated highly significantly with the BM weight above Z 85.
Both ground spectrometers are suitable for data comparison and combination with the active MP fluorescence sensor. By combining fluorescence ratios and spectrometer indices, linear models for the prediction of wheat yield were generated that correlated significantly over the course of the vegetative period for the research field Lammwirt (LW) in 2012. The best model for field LW in 2012 was selected for cross-validation against the measurements of the fields Inneres Täle (IT) and Riech (RI) in 2011 and 2012; however, it was not significant. By exchanging just one spectral index for a fluorescence ratio in a similar linear model, significant correlations were obtained. This work successfully demonstrates the combination of different sensor ratios and indices for the detection of plant characteristics, offering better and more robust predictions and quantifications of field parameters without employing destructive methods. The MP sensor proved to be universally applicable, showing significant correlations with the investigated characteristics BM weight, wheat yield and available N.
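As a toy illustration of two of the indices discussed above: NDVI follows its standard definition, and REIP is computed here with the common four-point linear-interpolation formula, which may differ in detail from the instruments used in the thesis. The reflectance values are invented:

```python
# Toy vegetation-index calculations; reflectances are fractions in [0, 1].
def ndvi(nir, red):
    """Normalised Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def reip(r670, r700, r740, r780):
    """Red-Edge Inflection Point (nm), four-point linear interpolation."""
    r_edge = (r670 + r780) / 2.0
    return 700.0 + 40.0 * (r_edge - r700) / (r740 - r700)

# Invented reflectances for a dense green canopy.
print(round(ndvi(0.45, 0.08), 3))                 # high NDVI, vigorous vegetation
print(round(reip(0.05, 0.12, 0.35, 0.48), 1))     # REIP in nm, near the red edge
```

A healthy, N-rich canopy shifts the red edge toward longer wavelengths, which is why REIP correlates with available N in the results above.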

    Space Science

    The all-encompassing term Space Science was coined to describe the various fields of scientific research (physics and astronomy, aerospace engineering and spacecraft technologies, advanced computing and radio communication systems) that are concerned with the study of the Universe, generally meaning the Universe beyond the Earth or outside the Earth's atmosphere. This special volume on Space Science was built through a scientifically rigorous selection process for each contributed chapter. Its structure takes the reader on a fascinating journey, starting from the surface of our planet and reaching a boundary where something lurks at the edge of the observable, light-emitting Universe. Its four sections run from a timely review of space exploration and the role being played by newcomer nations, through an overview of the Earth's early evolution during its long ancient ice age and a reanalysis of some aspects of satellite and planetary dynamics, to intriguing discussions of recent advances in the physics of the cosmic microwave background radiation and cosmology.

    Solar water heating systems with thermal storage for application in Newfoundland

    Solar water heating systems are commonly used in many parts of the world. Some such systems have a thermal energy storage option. No such example could be found for Newfoundland. Such renewable energy systems could be designed and implemented for Newfoundland to reduce…

    Multi-Level Trace Abstraction, Linking and Display

    Some problems and bugs can only be identified and resolved using runtime application behavior analysis. Runtime analysis of multi-threaded and distributed systems is very difficult, almost impossible, by only analyzing the source code and other static software artifacts. Therefore, dynamic analysis through execution traces is increasingly used to study system runtime behavior. Execution traces usually contain large amounts of valuable information about the system execution, e.g., which process/module interacts with which other processes/modules, which file is touched by which process/module, which function is called by which process/module/function, and so on. Operating system level traces are among the most useful and effective information sources that can be used to detect complex bugs and problems. Indeed, they contain detailed information about inter-process communication, memory usage, the file system, system calls, networking, disk blocks, etc. This information can be used to understand the system's runtime behavior and to identify a large class of bugs, problems, and misbehavior. However, execution traces may become large, even within a few seconds or minutes of execution, making the analysis difficult. Moreover, traces are often filled with low-level data (system calls, interrupts, etc.), so that analysts need complete domain knowledge to interpret them.
It is often preferable for analysts to look at relatively abstract and high-level events, which are more readable and representative than the original trace data, and reveal the same behavior but at higher levels of granularity. However, to achieve such high-level data, effective algorithms and tools must be developed to process trace events, remove less important ones, highlight only necessary data, generalize trace events, and finally aggregate and group similar and related events. The expressive nature of the synthetic events allows analysts to reason about system execution at higher levels, to uncover execution behavior in different areas, and detect its problematic and unexpected aspects. However, to allow such analysis, an additional visualization tool may be required to display abstract events at different levels and explore them easily. This tool may enable users to see a list of detected problems, follow the problems in the detailed levels (e.g., within the raw trace events), analyze, and possibly discover the reasons for the detected problems. In this thesis, a framework is presented to address those challenges: to reduce the execution trace size and complexity, to generate several levels of abstract events, to organize the data in a hierarchy, to visualize them at multiple levels, and finally to enable users to perform a top-down and multiscale analysis over trace data, leading to a better understanding and comprehension of underlying system execution. The proposed framework is studied in two major parts: multi-level trace abstraction, and multi-level trace organization and visualization. The first part discusses the techniques used to abstract out trace data using either the trace events content, a predefined set of metrics and measures, or structure-based abstraction to extract the resources involved in the system execution. 
The second part determines the hierarchical organization of the generated abstract events, establishes links between the related events, and finally visualizes the events using a zoomable timeline view. This view displays the events at different granularity levels and enables hierarchical navigation through different layers of trace events by supporting semantic zooming. Using this tool, users can first see an overview of the execution, then pan around the view and focus and zoom on any area of interest for more details and insight. The proposed view synchronizes and coordinates the different view levels by establishing links between data, structurally or semantically. Structural linking uses the bounding timestamps of the events to link the data, while semantic linking uses a pre-analysis of the trace events to find their relevance and link them together. Establishing links connects information that is conceptually related (e.g., events that belong to the same process or specify the same behavior), so that events in one layer can be analyzed with respect to events and information in other layers, leading to a multi-level analysis and a better comprehension of the underlying system execution. In this project, all evaluations and experiments were conducted on operating system level traces obtained with the Linux Trace Toolkit Next Generation (LTTng). LTTng is a lightweight and low-impact open source tracing tool that provides valuable runtime information from the various modules of the underlying system, at both the Linux kernel and user-space levels. LTTng works by instrumenting the kernel and user-space applications with statically inserted tracepoints, and by generating log entries at runtime each time a tracepoint is hit.
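The structural linking described above, matching events across layers by their bounding timestamps, can be sketched minimally. The event names and times below are invented and do not correspond to actual LTTng event types:

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    start: float
    end: float

# Invented low-level (kernel) events and the abstract events that summarize them.
kernel = [Event("sys_open", 1.0, 1.2), Event("sys_read", 1.3, 2.0),
          Event("sys_write", 2.5, 2.8), Event("sys_close", 3.0, 3.1)]
abstract = [Event("read file", 0.9, 2.2), Event("write file", 2.4, 3.2)]

def link(high, low):
    """Map each abstract event to the low-level events its time span bounds."""
    return {h.name: [l.name for l in low
                     if h.start <= l.start and l.end <= h.end]
            for h in high}

print(link(abstract, kernel))
# {'read file': ['sys_open', 'sys_read'], 'write file': ['sys_write', 'sys_close']}
```

Semantic linking, by contrast, would require the pre-analysis step mentioned above (e.g., shared process IDs or causal relations) rather than pure timestamp containment.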

    Decision Support Systems

    Decision support systems (DSS) have evolved over the past four decades from theoretical concepts into real-world computerized applications. A DSS architecture contains three key components: a knowledge base, a computerized model, and a user interface. DSS simulate the cognitive decision-making functions of humans, based on artificial intelligence methodologies (including expert systems, data mining, machine learning, connectionism, logistical reasoning, etc.), in order to perform decision support functions. The applications of DSS cover many domains, ranging from aviation monitoring, transportation safety, clinical diagnosis, weather forecasting and business management to internet search strategy. By combining knowledge bases with inference rules, DSS are able to provide suggestions to end users that improve decisions and outcomes. This book is written as a textbook so that it can be used in formal courses examining decision support systems. It may be used by both undergraduate and graduate students from diverse computer-related fields, and will also be of value to established professionals as a text for self-study or reference.
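The knowledge-base-plus-inference-rules pattern described above can be sketched as a tiny forward-chaining loop. The facts and rules below are invented for illustration and stand in for no particular DSS:

```python
# Each rule: (set of required facts, fact to conclude). Invented examples.
RULES = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suggest_chest_xray"),
]

def infer(facts, rules):
    """Forward chaining: apply rules repeatedly until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "chest_pain"}, RULES))
```

Note how the second rule only fires after the first has added `respiratory_infection`, which is the chaining behavior that lets a DSS derive multi-step suggestions from a knowledge base.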

    IoT Applications Computing

    The evolution of emerging and innovative technologies based on Industry 4.0 concepts is transforming society and industry into a fully digitized and networked globe. Sensing, communications, and computing embedded with ambient intelligence are at the heart of the Internet of Things (IoT), the Industrial Internet of Things (IIoT), and Industry 4.0 technologies, with expanding applications in manufacturing, transportation, health, building automation, agriculture, and the environment. It is expected that the emerging technology clusters of ambient intelligence computing will not only transform modern industry but also advance societal health and wellness, as well as make the environment more sustainable. This book uses an interdisciplinary approach to explain the complex issue of scientific and technological innovations largely based on intelligent computing.