Flood dynamics derived from video remote sensing
Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models.
Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models.
The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge.
Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
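As an illustration of the discharge estimation step described above, the sketch below shows one common way surface velocities from image velocimetry could be combined with a surveyed cross-section via the velocity-area method. The station layout, the surface-to-depth-averaged velocity coefficient (alpha ≈ 0.85) and all numbers are illustrative assumptions, not the workflow used in the thesis.

```python
import numpy as np

def estimate_discharge(station_y, bed_z, water_level, surface_velocity, alpha=0.85):
    """Velocity-area discharge estimate from image-velocimetry surface velocities.

    station_y        : cross-stream coordinates of survey stations (m)
    bed_z            : bed elevation at each station (m)
    water_level      : water surface elevation (m)
    surface_velocity : surface velocity at each station from LSPIV/UAV velocimetry (m/s)
    alpha            : assumed surface-to-depth-averaged velocity coefficient (~0.85)
    """
    depth = np.clip(water_level - np.asarray(bed_z, dtype=float), 0.0, None)  # wetted depth
    v_mean = alpha * np.asarray(surface_velocity, dtype=float)                # depth-averaged velocity
    widths = np.gradient(np.asarray(station_y, dtype=float))                  # mid-section widths
    return float(np.sum(v_mean * depth * widths))                             # Q in m^3/s

# hypothetical cross-section: five stations across a 20 m wide channel
q = estimate_discharge(
    station_y=[0, 5, 10, 15, 20],
    bed_z=[10.0, 9.2, 8.9, 9.3, 10.0],
    water_level=10.5,
    surface_velocity=[0.2, 0.9, 1.1, 0.8, 0.3],
)
print(f"Estimated discharge: {q:.1f} m^3/s")
```

In practice the depth-averaging coefficient varies with roughness and flow conditions, which is one reason combining velocimetry with calibrated hydraulic models is attractive.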
DIC-Transformer: interpretation of plant disease classification results using image caption generation technology
Disease image classification systems play a crucial role in identifying disease categories in the field of agricultural diseases. However, current plant disease image classification methods can only predict the disease category and do not offer explanations for the characteristics of the predicted disease images. To address this limitation, this paper employed image caption generation technology to produce distinct descriptions for different plant disease categories. A two-stage model called DIC-Transformer, which encompasses three tasks (detection, interpretation, and classification), was proposed. In the first stage, Faster R-CNN was utilized to detect the diseased area and generate the feature vector of the diseased image, with the Swin Transformer as the backbone. In the second stage, the model utilized the Transformer to generate image captions. It then generated an image feature vector weighted by text features to improve the performance of image classification in the subsequent classification decoder. Additionally, a dataset containing text and visualizations for agricultural diseases (ADCG-18) was compiled. The dataset contains images of 18 diseases and descriptive information about their characteristics. Then, using the ADCG-18, the DIC-Transformer was compared to 11 existing classical caption generation methods and 10 image classification models. The evaluation indicators for captions include BLEU-1 to BLEU-4, CIDEr-D, and ROUGE. The values of BLEU-1, CIDEr-D, and ROUGE were 0.756, 450.51, and 0.721. The results of DIC-Transformer were 0.01, 29.55, and 0.014 higher than those of the highest-performing comparison model, Fc. The classification evaluation metrics include accuracy, recall, and F1 score, with accuracy at 0.854, recall at 0.854, and F1 score at 0.853. The results of DIC-Transformer were 0.024, 0.078, and 0.075 higher than those of the highest-performing comparison model, MobileNetV2. The results indicate that the DIC-Transformer outperforms other comparison models in classification and caption generation.
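The PyTorch sketch below illustrates the general idea described for the second stage: re-weighting detected-region image features by caption (text) features before classification. The module choices, dimensions and mean pooling are assumptions for illustration only, not the DIC-Transformer implementation.

```python
import torch
import torch.nn as nn

class TextWeightedClassifier(nn.Module):
    """Toy sketch: image region features are re-weighted by caption features via
    cross-attention before classification. All hyperparameters are illustrative."""

    def __init__(self, feat_dim=256, vocab_size=1000, num_classes=18):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, feat_dim)
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, region_feats, caption_tokens):
        # region_feats: (B, R, D) pooled features of detected disease regions
        # caption_tokens: (B, T) token ids of the generated caption
        text = self.text_embed(caption_tokens)                      # (B, T, D)
        weighted, _ = self.cross_attn(text, region_feats, region_feats)
        pooled = weighted.mean(dim=1)                               # (B, D)
        return self.classifier(pooled)                              # class logits

model = TextWeightedClassifier()
logits = model(torch.randn(2, 5, 256), torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 18])
```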
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Resource-aware scheduling for 2D/3D multi-/many-core processor-memory systems
This dissertation addresses the complexities of 2D/3D multi-/many-core processor-memory systems, focusing on two key areas: enhancing timing predictability in real-time multi-core processors and optimizing performance within thermal constraints. The integration of an increasing number of transistors into compact chip designs, while boosting computational capacity, presents challenges in resource contention and thermal management. The first part of the thesis improves timing predictability. We enhance shared cache interference analysis for set-associative caches, advancing the calculation of Worst-Case Execution Time (WCET). This development enables accurate assessment of cache interference and the effectiveness of partitioned schedulers in real-world scenarios. We introduce TCPS, a novel task- and cache-aware partitioned scheduler that optimizes cache partitioning based on task-specific WCET sensitivity, leading to improved schedulability and predictability. Our research explores various cache and scheduling configurations, providing insights into their performance trade-offs. The second part focuses on thermal management in 2D/3D many-core systems. Recognizing the limitations of Dynamic Voltage and Frequency Scaling (DVFS) in S-NUCA many-core processors, we propose synchronous thread migrations as a thermal management strategy. This approach culminates in the HotPotato scheduler, which balances performance and thermal safety. We also introduce 3D-TTP, a transient temperature-aware power budgeting strategy for 3D-stacked systems, reducing the need for Dynamic Thermal Management (DTM) activation. Finally, we present 3QUTM, a novel method for 3D-stacked systems that combines core DVFS and memory bank Low Power Modes with a learning algorithm, optimizing response times within thermal limits. This research contributes significantly to enhancing performance and thermal management in advanced processor-memory systems.
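As a rough illustration of partitioning a shared cache according to task-specific WCET sensitivity (the general idea behind task- and cache-aware partitioning, not the TCPS algorithm itself), the sketch below greedily assigns cache ways to whichever task's estimated WCET benefits most from one extra way. The WCET-versus-ways curves are invented.

```python
def partition_cache_ways(tasks, total_ways):
    """Greedy cache partitioning sketch: repeatedly give the next cache way to the
    task whose WCET shrinks the most from one extra way (its 'sensitivity').

    tasks: dict mapping task name -> wcet_curve, where wcet_curve[w] is the
           estimated WCET (in ms) when the task owns w cache ways.
    """
    allocation = {name: 1 for name in tasks}              # every task starts with one way
    for _ in range(total_ways - len(tasks)):
        def gain(name):
            w, curve = allocation[name], tasks[name]
            if w + 1 >= len(curve):
                return 0.0
            return curve[w] - curve[w + 1]                  # WCET reduction from one more way
        best = max(allocation, key=gain)
        allocation[best] += 1
    return allocation

# hypothetical WCET-vs-ways curves for three tasks sharing an 8-way cache
tasks = {
    "ctrl":  [None, 9.0, 7.0, 6.5, 6.4, 6.4, 6.4, 6.4, 6.4],
    "video": [None, 20.0, 15.0, 11.0, 9.0, 8.0, 7.5, 7.2, 7.0],
    "log":   [None, 3.0, 2.9, 2.9, 2.9, 2.9, 2.9, 2.9, 2.9],
}
print(partition_cache_ways(tasks, total_ways=8))
```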
Effects of municipal smoke-free ordinances on secondhand smoke exposure in the Republic of Korea
Objective: To reduce premature deaths due to secondhand smoke (SHS) exposure among non-smokers, the Republic of Korea (ROK) adopted changes to the National Health Promotion Act, which allowed local governments to enact municipal ordinances to strengthen their authority to designate smoke-free areas and levy penalty fines. In this study, we examined national trends in SHS exposure after the introduction of these municipal ordinances at the city level in 2010.
Methods: We used interrupted time series analysis to assess whether the trends of SHS exposure in the workplace and at home, and the primary cigarette smoking rate, changed following the policy adjustment in the national legislation in ROK. Population-standardized data for selected variables were retrieved from a nationally representative survey dataset and used to study the policy action’s effectiveness.
Results: Following the change in the legislation, SHS exposure in the workplace reversed course from an increasing (18% per year) trend prior to the introduction of these smoke-free ordinances to a decreasing (−10% per year) trend after adoption and enforcement of these laws (β2 = 0.18, p-value = 0.07; β3 = −0.10, p-value = 0.02). SHS exposure at home (β2 = 0.10, p-value = 0.09; β3 = −0.03, p-value = 0.14) and the primary cigarette smoking rate (β2 = 0.03, p-value = 0.10; β3 = 0.008, p-value = 0.15) showed no significant changes in the sampled period. Although analyses stratified by sex showed that the allowance of municipal ordinances resulted in reduced SHS exposure in the workplace for both males and females, they did not affect the primary cigarette smoking rate as much, especially among females.
Conclusion: Strengthening the role of local governments by giving them the authority to enact and enforce penalties on SHS exposure violations helped ROK to reduce SHS exposure in the workplace. However, smoking behaviors and related activities seemed to shift to less restrictive areas such as on the streets and in apartment hallways, negating some of the effects of these ordinances. Future studies should investigate how smoke-free policies beyond public places can further reduce SHS exposure in ROK.
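For readers unfamiliar with interrupted time series analysis, the sketch below shows a common single-group segmented-regression parameterization (baseline trend, level change, and slope change after the intervention) using statsmodels. The data are synthetic placeholders, and the study's exact model specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative interrupted time series (segmented regression); values are made up.
years = np.arange(2005, 2016)
policy_year = 2010
df = pd.DataFrame({
    "year": years,
    "shs_work": [35, 37, 39, 41, 43, 44, 42, 40, 38, 36, 34],  # % exposed (synthetic)
})
df["time"] = df["year"] - df["year"].min()                  # baseline trend term
df["post"] = (df["year"] >= policy_year).astype(int)        # level change at policy
df["time_since"] = np.maximum(df["year"] - policy_year, 0)  # post-policy slope change

model = smf.ols("shs_work ~ time + post + time_since", data=df).fit()
print(model.params)  # a negative time_since coefficient suggests a post-policy downturn
```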
The Application of Data Analytics Technologies for the Predictive Maintenance of Industrial Facilities in Internet of Things (IoT) Environments
In industrial production environments, the maintenance of equipment has a decisive influence on costs and on the ability to plan production capacities. In particular, unplanned failures during production times cause high costs, unplanned downtimes and possibly additional collateral damage. Predictive Maintenance addresses this by trying to predict a possible failure and its cause early enough that preventive measures can be prepared and carried out in time. In order to predict malfunctions and failures, the industrial plant, with its characteristics as well as its wear and ageing processes, must be modelled. Such modelling can be done by replicating its physical properties. However, this is very complex and requires enormous expert knowledge about the plant and about the wear and ageing processes of each individual component. Neural networks and machine learning make it possible to train such models using data and offer an alternative, especially when very complex and non-linear behaviour is evident.
In order for models to make predictions, as much data as possible about the condition of a plant, its environment, and production planning is needed. In Industrial Internet of Things (IIoT) environments, the amount of available data is constantly increasing. Intelligent sensors and highly interconnected production facilities produce a steady stream of data. The sheer volume of data, but also the steady stream in which it is transmitted, places high demands on data processing systems. If a participating system wants to perform live analyses on the incoming data streams, it must be able to process the incoming data at least as fast as the continuous data stream delivers it. If this is not the case, the system falls further and further behind in its processing and thus in its analyses. This also applies to Predictive Maintenance systems, especially if they use complex and computationally intensive machine learning models. If sufficiently scalable hardware resources are available, this may not be a problem at first. However, if this is not the case, or if processing takes place on decentralised units with limited hardware resources (e.g. edge devices), the runtime behaviour and resource requirements of the type of neural network used can become an important criterion.
This thesis addresses Predictive Maintenance systems in IIoT environments using neural networks and Deep Learning, where the runtime behaviour and the resource requirements are relevant. The question is whether it is possible to achieve better runtimes with similar result quality using a new type of neural network. The focus is on reducing the complexity of the network and improving its parallelisability. Inspired by projects in which complexity was distributed to less complex neural subnetworks by upstream measures, two hypotheses emerged, which are presented in this thesis: a) the distribution of complexity into simpler subnetworks leads to faster processing overall, despite the overhead this creates, and b) if a neural cell has a deeper internal structure, this leads to a less complex network. Within the framework of a qualitative study, an overall impression of Predictive Maintenance applications in IIoT environments using neural networks was developed. Based on the findings, a novel model layout named Sliced Long Short-Term Memory Neural Network (SlicedLSTM) was developed. The SlicedLSTM implements the assumptions made in the aforementioned hypotheses in its inner model architecture.
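A minimal sketch of the idea behind hypothesis (a), splitting the input into slices handled by smaller parallel LSTM subnetworks whose outputs are then merged, is shown below. This is an assumption-laden illustration, not the SlicedLSTM architecture itself; feature counts, slice sizes and the prediction head are invented.

```python
import torch
import torch.nn as nn

class SlicedRecurrentSketch(nn.Module):
    """Toy illustration of distributing complexity into simpler subnetworks:
    the input feature vector is split into slices, each processed by a small LSTM
    that could run in parallel, and the slice outputs are merged for the prediction."""

    def __init__(self, n_features=24, n_slices=4, hidden=16):
        super().__init__()
        assert n_features % n_slices == 0
        self.slice_size = n_features // n_slices
        self.sub_lstms = nn.ModuleList(
            nn.LSTM(self.slice_size, hidden, batch_first=True) for _ in range(n_slices)
        )
        self.head = nn.Linear(hidden * n_slices, 1)    # e.g. a remaining-useful-life score

    def forward(self, x):                               # x: (batch, time, n_features)
        slices = torch.split(x, self.slice_size, dim=2)
        last_states = [lstm(s)[0][:, -1, :] for lstm, s in zip(self.sub_lstms, slices)]
        return self.head(torch.cat(last_states, dim=1))

model = SlicedRecurrentSketch()
print(model(torch.randn(8, 30, 24)).shape)  # torch.Size([8, 1])
```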
Within the framework of a quantitative study, the runtime behaviour of the SlicedLSTM was compared with that of a reference model in the form of laboratory tests. The study uses synthetically generated data from a NASA project to predict failures of modules of aircraft gas turbines. The dataset contains 1,414 multivariate time series with 104,897 samples of test data and 160,360 samples of training data.
As a result, it could be shown for the specific application and the data used that the SlicedLSTM delivers faster processing times with similar result accuracy and thus clearly outperforms the reference model in this respect. The hypotheses about the influence of complexity in the internal structure of the neural cells were confirmed by the study carried out in the context of this thesis.
Synthesis of multifunctional glyco-pseudodendrimers and glyco-dendrimers and their investigation as anti-Alzheimer agents
As the world population is aging, the number of cases of Alzheimer’s Disease (AD) is increasing. AD is a disorder of the brain characterized by the aggregation of amyloid beta (Aβ) plaques. This leads to the death of numerous brain cells, affecting the cognitive and motor functions of the individual. To date, no cure for the disease is available. Aβ peptides have 40 or 42 amino acid residues, but their exact mechanism(s) of action in AD is still under debate. Their varied amino acid residues make them prone to forming hydrogen bonds. Dendrimers with sugar units are often referred to as glycopolymers and have been shown to have potential anti-amyloidogenic activity. However, they also have drawbacks: the synthesis involves multiple tedious steps, and dendrimers themselves offer only a limited number of functional units. Pseudodendrimers are another class of branched polymers based on hyperbranched polymers. Unlike dendrimers, they are easy to synthesize with a dense shell of functional units on the surface. One of the main goals of this dissertation is the synthesis and characterization of pseudodendrimers and dendrimers based on 2,2-bis(hydroxymethyl)-propionic acid (bis-MPA), an aliphatic polyester scaffold, as it offers biocompatibility and easy degradability. Furthermore, they are decorated with mannose units on the surface using a ‘click’ reaction, forming glyco-pseudodendrimers and glyco-dendrimers. A detailed characterization of their structures and physical properties was undertaken using techniques such as size exclusion chromatography, asymmetric flow field flow fractionation (AF4), and dynamic light scattering.
The second main focus of this work was to investigate the interaction of the synthesized glyco-pseudodendrimers and glyco-dendrimers with Aβ 40 peptides. For this task, five different concentrations of the synthesized glycopolymers were tested with Aβ 40 using the Thioflavin T (ThT) assay. The polymers that showed the strongest anti-aggregation behavior against Aβ 40 were further confirmed with circular dichroism spectroscopy. AF4 was also used to investigate Aβ 40-glycopolymer aggregates, which had never been done before and constitutes the highlight of this dissertation. Atomic force microscopy was used to image Aβ 40-glyco-pseudodendrimer aggregates.
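For context on the kinetic analysis of ThT fluorescence traces, the sketch below fits a generic sigmoidal aggregation model and derives a half-time and an approximate lag time. The model form, parameters and data are illustrative assumptions, not the dissertation's analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def tht_sigmoid(t, f0, fmax, t_half, k):
    """Generic sigmoidal model often used for amyloid aggregation kinetics:
    baseline f0, plateau fmax, half-time t_half and apparent rate k."""
    return f0 + (fmax - f0) / (1.0 + np.exp(-k * (t - t_half)))

# synthetic ThT fluorescence trace (arbitrary units); real assay data would be used here
t = np.linspace(0, 48, 97)                       # hours
rng = np.random.default_rng(0)
signal = tht_sigmoid(t, 1.0, 20.0, 18.0, 0.4) + rng.normal(0, 0.3, t.size)

params, _ = curve_fit(tht_sigmoid, t, signal, p0=[1, 20, 15, 0.3])
f0, fmax, t_half, k = params
lag_time = t_half - 2.0 / k                      # common lag-time approximation
print(f"t_half = {t_half:.1f} h, lag time ≈ {lag_time:.1f} h")
```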
A basic but important step in the development of drug delivery platforms is to evaluate the toxicity of the synthesized compounds. In this work, preliminary studies of the cytotoxicity of glyco-pseudodendrimers were performed in two different cell lines. Thus, this study comprises a preliminary investigation of the anti-amyloidogenic activity of glyco-pseudodendrimers synthesized on an aliphatic polyester backbone.
Abstract
List of Tables
List of Figures
Abbreviations
1 Introduction
1.1 Objectives of the work
1.2 Thesis overview
2 Fundamentals and Literature
2.1 Alzheimer’s Disease and its impact
2.1.1 Neurological diagnosis of AD
2.1.2 Histopathology of AD
2.1.3 Amyloid precursor protein (APP) and its role in AD
2.2 Amyloid Beta (Aβ) peptide
2.2.1 Aβ peptide
2.2.2 Location and function
2.2.3 Amyloid hypothesis
2.2.4 The mechanism of Aβ aggregation
2.2.5 Amyloid fibrils
2.2.6 Toxicity of Aβ
2.3 Research methods to study Aβ aggregates
2.3.1 Models to study the mode of action of aggregates
2.3.2 Endogenous Aβ aggregates and synthetic aggregates
2.3.3 Strategies to alter aggregation of amyloids
2.4 Treatment and therapeutics
2.4.1 Current therapeutics
2.4.2 Current therapeutic research
2.4.2.1 Reduction of Aβ production
2.4.2.2 Reduction of Aβ plaque accumulation
2.4.2.2.1 Anti-amyloid aggregation agents
2.4.2.2.2 Metals
2.4.2.2.3 Immunotherapy
2.4.2.2.4 Dendrimers as potential anti-amyloidogenic agent
2.6 Dendrimers
2.6.1 Definition
2.6.2 Structure
2.6.3 Synthesis
2.6.4 Properties
2.7 Pseudodendrimers - a sub-class of hyperbranched polymer
2.7.1 Definition
2.7.2 Structure
2.7.3 Synthesis
3 Analytical Techniques
3.1 Size Exclusion Chromatography Coupled to Light Scattering (SEC-MALS)
3.2 Asymmetric Flow Field Flow Fractionation (AF4)
3.3 Dynamic Light Scattering
3.4 Molecular Dynamics Simulation
3.5 Nuclear Magnetic Resonance Spectroscopy
3.6 Thioflavin T fluorescence
3.6.1 Kinetic analysis
3.7 Circular Dichroism Spectroscopy
3.8 Atomic Force Microscopy
3.9 Cytotoxic assay
3.9.1 MTT assay
3.9.2 Determining the level of reactive oxygen species
3.9.3 Changes in mitochondrial transmembrane potential
3.9.4 Flow cytometric detection of phosphatidyl serine exposure
4 Experimental Details and Methodology
4.1 Details of chemicals/components used
4.1.1 Other materials
4.1.2 Peptide preparation
4.1.3 Buffer preparation
4.1.4 Fibril growth conditions
4.2 Synthesis and characterization of polymers
4.2.1 Synthesis and characterization of pseudodendrimers and dendrimers
4.2.1.1 Synthesis of hyperbranched polymer (1)
4.2.1.2 Synthesis of protected monomer
4.2.1.2.1 bis-MPA acetonide (2)
4.2.1.2.2 bis-MPA-acetonide anhydride (3)
4.2.1.3 Synthesis of protected pseudodendrimers (4, 6 and 8) and protected dendrimers (10, 12, and 14)
4.2.1.4 Deprotection of pseudodendrimers (5, 7, and 9) and dendrimers (11, 13 and 15)
4.2.2 Synthesis of glyco-pseudodendrimers and glyco-dendrimers
4.2.2.1 Pentynoic anhydride (16)
4.2.2.2 Synthesis of pentynoate-modified pseudodendrimers (17, 18 and 19) and dendrimers (20, 21 and 22)
4.2.2.3 3-Azido-1-propanol (23)
4.2.2.4 Mannose propyl azide tetraacetate (24)
4.2.2.5 Mannose propyl azide (25)
4.2.2.6 Glyco-pseudodendrimers (Gl-P) (26, 27 and 28) and glyco-dendrimers (Gl-D) (29, 30 and 31)
4.3 Analytical techniques and their general details
4.3.1 SEC-MALS - Instrumentation, software and analysis
4.3.2 AF4 - Instrumentation, software and analysis
4.3.2.1 Sample preparation
4.3.2.2 Method development for analysis of Gl-P and Gl-D
4.3.2.3 Method development for analysis of Aβ 40 and its interaction with Gl-P and Gl-D
4.3.3 Batch DLS - Instrumentation, software and analysis
4.3.3.1 Sample preparation
4.3.4 Theoretical calculations and molecular dynamics simulations
4.3.4.1 Ab-initio calculations
4.3.4.2 Modelling of the polymer structures
4.3.4.2.1 Pseudodendrimers
4.3.4.2.2 Dendrimers
4.3.4.2.3 Modification of the polymers with special end groups
4.3.4.2.4 Preparing of the THF solvent box
4.3.4.2.5 Solvation of the polymer structures
4.3.4.3 Molecular dynamics simulations
4.3.4.3.1 Evaluation of the simulation trajectories
4.4 Investigation of interaction of Gl-P and Gl-D with amyloid beta (Aβ 40)
4.4.1 ThT Assay - Instrumentation and software
4.4.1.1 Sample preparation
4.4.1.2 Kinetics based on ThT assay- software and data analysis
4.4.2 CD spectroscopy - Instrumentation and software
4.4.2.1 Sample preparation
4.4.3 AFM - Instrumentation and software
4.4.3.1 Substrate and sample preparation
4.4.3.2 Height determination and counting procedures
4.4.3.3 Topography and diameter
4.5 Cytotoxicity
4.5.1 Zeta potential
4.5.2 Cell culturing
4.5.3 Sample preparation
4.5.4 MTT assay
4.5.5 Changes in mitochondrial transmembrane potential (JC-1 method)
4.5.6 Flow cytometric detection of phosphatidyl serine exposure (Annexin V and PI method)
5 Results and Discussion
5.1 Synthesis and characterization of glyco-pseudodendrimers and glyco-dendrimers
5.1.1 Synthesis and characterization of hyperbranched polyester
5.1.2 Synthesis and characterization of pseudodendrimers P-G1-OH, P-G2-OH and P-G3-OH
5.1.3 Synthesis and characterization of dendrimers D-G4-OH, D-G5-OH and D-G6-OH
5.1.4 Synthesis and characterization of Gl-P and Gl-D
5.1.4.1 Molecular size determination of Gl-P and Gl-D using SEC
5.1.4.2 Particle size determination using batch DLS
5.1.4.3 Apparent densities
5.1.4.4 Molecular size determination of Gl-P and Gl-D using AF4
5.1.5 Molecular dynamics simulation
5.2 Investigation of interaction of Gl-P and Gl-D with amyloid beta (Aβ 40)
5.2.1 ThT Assay
5.2.1.1 Kinetics based on ThT assay
5.2.2 CD spectroscopy
5.2.3 Time dependent AF4
5.2.3.1 Separation of Aβ 40 by AF4
5.2.3.2 Aβ 40 amyloid aggregation in the presence of Gl-P and Gl-D
5.2.4 AFM
5.2.4.1 Height
5.2.4.2 Topography and diameter
5.2.4.3 Length
5.2.4.4 Morphology
5.2.5 Cytotoxicity
5.2.5.1 MTT assay
5.2.5.2 Changes in mitochondrial transmembrane potential
5.2.5.3 Flow cytometric detection of phosphatidyl serine exposure
6 Conclusions and Outlook
7 Bibliography
Appendix
Acknowledgement
Enhancing the forensic comparison process of common trace materials through the development of practical and systematic methods
Ongoing advancements in forensic trace evidence have driven the development of new and objective methods for comparing various materials. While many standard guides have been published for use in trace laboratories, several areas still lack a comprehensive understanding of error rates, and there is an urgent need to harmonize methods of examination and interpretation. Two critical areas are the forensic examination of physical fits and the comparison of spectral data, both of which depend heavily on the examiner’s judgment.
The long-term goal of this study is to advance and modernize the comparative process of physical fit examinations and spectral interpretation. This goal is fulfilled through several avenues: 1) improvement of quantitative-based methods for various trace materials, 2) scrutiny of the methods through interlaboratory exercises, and 3) addressing fundamental aspects of the discipline using large experimental datasets, computational algorithms, and statistical analysis.
A substantial new body of knowledge has been established by analyzing population sets of nearly 4,000 items representative of casework evidence. First, this research identifies material-specific relevant features for duct tapes and automotive polymers. Then, this study develops reporting templates to facilitate thorough and systematic documentation of an analyst’s decision-making process and minimize the risk of bias. It also establishes criteria for utilizing a quantitative edge similarity score (ESS) for tapes and automotive polymers that yield relatively high accuracy (85% to 100%) and, notably, no false positives. Finally, the practicality and performance of the ESS method for duct tape physical fits are evaluated by forensic practitioners through two interlaboratory exercises. Across these studies, accuracy using the ESS method ranges between 95% and 99%, and again no false positives are reported. The practitioners’ feedback demonstrates the method’s potential to assist in training and improve peer verifications.
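A toy illustration of what a quantitative, bin-wise edge comparison might look like is sketched below; the binning, tolerance and matching rule are hypothetical and do not reproduce the published ESS protocol.

```python
import numpy as np

def edge_similarity_score(edge_a, edge_b, n_bins=20, tolerance=0.5):
    """Illustrative bin-wise edge comparison, loosely inspired by the idea of an
    edge similarity score: two digitised edge profiles are divided into corresponding
    bins and the score is the percentage of bins whose mean profiles agree within a
    tolerance. Parameters are hypothetical, not the published ESS criteria."""
    a = np.array_split(np.asarray(edge_a, dtype=float), n_bins)
    b = np.array_split(np.asarray(edge_b, dtype=float), n_bins)
    matches = sum(abs(sa.mean() - sb.mean()) <= tolerance for sa, sb in zip(a, b))
    return 100.0 * matches / n_bins

# hypothetical example: a torn edge compared against its true counterpart
rng = np.random.default_rng(1)
edge = np.cumsum(rng.normal(0, 0.4, 400))        # simulated fracture profile
counterpart = edge + rng.normal(0, 0.1, 400)     # same tear, small measurement noise
print(f"ESS = {edge_similarity_score(edge, counterpart):.0f}%")
```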
This research also develops and trains computational algorithms to support analysts making decisions on sample comparisons. The automated algorithms in this research show the potential to provide objective and probabilistic support for determining a physical fit and demonstrate comparative accuracy to the analyst. Furthermore, additional models are developed to extract feature edge information from the systematic comparison templates of tapes and textiles to provide insight into the relative importance of each comparison feature. A decision tree model is developed to assist physical fit examinations of duct tapes and textiles and demonstrates comparative performance to the trained analysts. The computational tools also evaluate the suitability of partial sample comparisons that simulate situations where portions of the item are lost or damaged.
Finally, an objective approach to interpreting complex spectral data is presented. A comparison metric consisting of spectral angle contrast ratios (SCAR) is used as a model to assess more than 94 different-source and 20 same-source electrical tape backings. The SCAR metric results in a discrimination power of 96% and demonstrates the capacity to capture information on the variability between different-source samples and the variability within same-source samples. Application of the random-forest model allows for the automatic detection of primary differences between samples. The developed threshold could assist analysts with making decisions on the spectral comparison of chemically similar samples.
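The sketch below illustrates one plausible reading of a spectral-angle-based contrast ratio: the questioned-versus-known spectral angle is compared with the typical within-source angle among replicates. The function names, threshold interpretation and data are assumptions, not the SCAR metric as published.

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral angle (radians) between two spectra treated as vectors."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def contrast_ratio(questioned, known, known_replicates):
    """Toy ratio of the questioned-vs-known angle to the mean within-source angle.
    Values near or below 1 would suggest the spectra are hard to distinguish;
    this is an illustrative construction, not the published SCAR definition."""
    within = np.mean([spectral_angle(known, r) for r in known_replicates])
    return spectral_angle(questioned, known) / within

# hypothetical spectra as intensity vectors over a shared wavenumber grid
rng = np.random.default_rng(2)
base = np.abs(np.sin(np.linspace(0, 6, 300))) + 0.1
known_reps = [base + rng.normal(0, 0.01, 300) for _ in range(3)]
same_source = base + rng.normal(0, 0.01, 300)
diff_source = base * np.linspace(1.0, 0.6, 300) + rng.normal(0, 0.01, 300)
print(f"same-source ratio      ≈ {contrast_ratio(same_source, base, known_reps):.2f}")
print(f"different-source ratio ≈ {contrast_ratio(diff_source, base, known_reps):.2f}")
```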
This research provides the forensic science community with novel approaches to comparing materials commonly seen in forensic laboratories. The outcomes of this study are anticipated to offer forensic practitioners new and accessible tools for incorporation into current workflows to facilitate systematic and objective analysis and interpretation of forensic materials and support analysts’ opinions.
Introduction to Psychology
Introduction to Psychology is a modified version of Psychology 2e - OpenStax