Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse
This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, this work critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Considering the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective to mitigate such weaknesses.
This work develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). This work argues that pervasive patterns in such features that emerge through socialisation are resistant to change and manipulation, and thus might serve as endogenous measures of sociopolitical contexts, and thus of groups.
In terms of method, the work takes a corpus-based approach to the analysis of data from the Twitter messaging service, whereby patterns in users' speech are examined statistically in order to trace potential community membership. The method is applied in the US state of Michigan during the second half of 2018; the midterm (i.e. non-Presidential) elections in the United States were held on 6 November of that year. The corpus is assembled from the original posts of 5,889 users, nominally geolocalised to 417 municipalities. These users are clustered according to pervasive language features. Comparing the linguistic clusters by the municipalities they represent reveals regular sociodemographic differentials across clusters. This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.
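The clustering step described above can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: the feature set (a handful of function words), the toy posts, and the simple k-means routine are all invented for the example; the original work uses a much richer inventory of lexico-grammatical features over a large Twitter corpus.

```python
import numpy as np

# Hypothetical feature inventory: users are represented by relative
# frequencies of pervasive, largely unconscious features (here, a few
# English function words), then grouped by similarity.
FUNCTION_WORDS = ["the", "of", "and", "to", "in"]

def feature_vector(text: str) -> np.ndarray:
    """Relative frequency of each function word in a user's posts."""
    tokens = text.lower().split()
    counts = np.array([tokens.count(w) for w in FUNCTION_WORDS], dtype=float)
    return counts / max(len(tokens), 1)

def kmeans(X: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Minimal k-means with deterministic farthest-point initialisation."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy posts: two "formal" users rich in function words, two who are not.
users = [
    "the results of the vote in the county were close",
    "the outcome of the election in the city was close",
    "great game tonight go blue go",
    "what a game tonight go team go",
]
X = np.vstack([feature_vector(u) for u in users])
labels = kmeans(X, k=2)
# Users with similar function-word profiles end up sharing a cluster label.
print(labels)
```

In the thesis the analogous clusters are then compared against the sociodemographics of the municipalities the users represent.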
Computed tomography-based assessment of aortic valve calcification and its association with complications after transcatheter aortic valve implantation (TAVI)
Background: Severe aortic valve calcification (AVC) has generally been recognized as a key factor in the occurrence of adverse events after transcatheter aortic valve implantation (TAVI). To date, however, a consensus on a standardized calcium detection threshold for aortic valve calcium quantification in contrast-enhanced computed tomography angiography (CTA) is still lacking. The present thesis aimed at comparing two different approaches for quantifying AVC in CTA scans based on their predictive power for adverse events and survival after a TAVI procedure.
Methods: The extensive dataset of this study included 198 characteristics for each of the 965 prospectively included patients who had undergone TAVI between November 2012 and December 2019 at the German Heart Center Berlin (DHZB). AVC quantification in CTA scans was performed at a fixed Hounsfield Unit (HU) threshold of 850 HU (HU 850 approach) and at a patient-specific threshold, where the HU threshold was set by multiplying the mean luminal attenuation of the ascending aorta by 2 (+100 % HUAorta approach). The primary endpoint of this study consisted of a combination of post-TAVI outcomes (paravalvular leak ≥ mild, implant-related conduction disturbances, 30-day mortality, post-procedural stroke, annulus rupture, and device migration). The Akaike information criterion was used to select variables for the multivariable regression model. Multivariable analysis was carried out to determine the predictive power of the investigated approaches.
Results: Multivariable analyses showed that a fixed threshold of 850 HU (calcium volume cut-off 146 mm³) was unable to predict the composite clinical endpoint post-TAVI (OR=1.13, 95 % CI 0.87 to 1.48, p=0.35). In contrast, the +100 % HUAorta approach (calcium volume cut-off 1421 mm³) enabled independent prediction of the composite clinical endpoint post-TAVI (OR=2.0, 95 % CI 1.52 to 2.64, p=9.2×10⁻⁷). No significant difference in the Kaplan-Meier survival analysis was observed for either approach.
Conclusions: The patient-specific calcium detection threshold +100 % HUAorta is more predictive of the post-TAVI adverse events included in the combined clinical endpoint than the fixed HU 850 approach. For the +100 % HUAorta approach, an aortic valve calcium volume cut-off of 1421 mm³ had the highest predictive value.
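The two thresholding rules compared in the thesis can be sketched as follows. The voxel data, voxel volume, and intensity distributions are invented for illustration; only the thresholding logic (a fixed 850 HU cut-off versus twice the mean luminal attenuation of the ascending aorta) follows the abstract.

```python
import numpy as np

def patient_specific_threshold(mean_aortic_hu: float) -> float:
    """+100 % HU_Aorta rule: threshold = 2 x mean luminal attenuation."""
    return 2.0 * mean_aortic_hu

def calcium_volume_mm3(voxel_hu: np.ndarray, threshold: float,
                       voxel_volume_mm3: float) -> float:
    """Sum the volume of all voxels at or above the detection threshold."""
    return float((voxel_hu >= threshold).sum() * voxel_volume_mm3)

# Hypothetical CTA region: mostly contrast-enhanced lumen (~400 HU) plus a
# small calcified patch (~950 HU). Numbers are made up for the sketch.
rng = np.random.default_rng(1)
lumen = rng.normal(400, 30, size=980)
calcium = rng.normal(950, 40, size=20)
voxels = np.concatenate([lumen, calcium])

fixed = calcium_volume_mm3(voxels, 850.0, voxel_volume_mm3=0.5)
specific = calcium_volume_mm3(voxels, patient_specific_threshold(lumen.mean()),
                              voxel_volume_mm3=0.5)
# The patient-specific threshold (~800 HU here) captures at least as much
# calcium as the fixed 850 HU cut-off.
print(fixed, specific)
```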
Anuário científico da Escola Superior de Tecnologia da Saúde de Lisboa - 2021
It is with great pleasure that we present the latest (11th) edition of the Scientific Yearbook of the Escola Superior de Tecnologia da Saúde de Lisboa. As a higher education institution, we are committed to promoting and encouraging scientific research in all areas of knowledge encompassed by our mission. This publication aims to disseminate all the scientific output produced by ESTeSL's lecturers, researchers, students and non-teaching staff during 2021. This Yearbook is thus a reflection of the hard and dedicated work of our community, which has devoted itself to producing high-quality scientific content shared with society in the form of books, book chapters, articles published in national and international journals, abstracts of oral communications and posters, as well as the outcomes of first- and second-cycle degree projects. The content of this publication therefore spans a wide variety of topics, from fundamental themes to studies of practical application in specific health contexts, reflecting the plurality and diversity of areas that define ESTeSL and make it unique. We believe that scientific research is a fundamental axis for the development of society, which is why we encourage our students to engage in research activities and evidence-based practice from the beginning of their studies at ESTeSL. This publication, the largest ever, is an example of the success of those efforts, and we are very proud to share the results and discoveries of our researchers with the scientific community and the general public. We hope this Yearbook inspires and motivates other students, health professionals, lecturers and other collaborators to continue exploring new ideas and contributing to the advancement of science and technology within the body of knowledge of the areas that make up ESTeSL.
We thank everyone involved in producing this yearbook and wish you an inspiring and enjoyable read.
Credible to Whom? The Organizational Politics of Credibility in International Relations
Why do foreign policy decision makers care about the credibility of their own state’s commitments? How does organizational identity shape policymakers’ concern for credibility, and in turn, their willingness to use force during crises? While much previous research examines how decision makers assess others’ credibility, only recently have scholars questioned when and why leaders or their advisers prioritize their own state’s credibility.
Building on classic scholarship in bureaucratic politics, I argue that organizational identity affects the dimensions of credibility that national security officials value, and ultimately, their policy advocacy around the use of force. Particular differences arise between military and diplomatic organizations; while military officials equate credibility with hard military capabilities, diplomats view credibility in terms of reputation, or demonstrating reliability and resolve to external parties.
During crises, military officials confine their advice on the use of force to what can be achieved given current capabilities, while diplomats exhibit a higher willingness to use force as a signal of strong commitment. I test these propositions using text analysis of archival records from two collections of U.S. national security policy documents, eight case studies of American, British, and French crisis decision making, and an original survey experiment involving more than 400 current or former U.S. national security officials. I demonstrate that credibility concerns affect the balance of hawkishness in the advice that diplomats and military officials deliver to leaders, as a function of organizational identity.
RNA pull-down-confocal nanoscanning (RP-CONA), a novel method for studying RNA/protein interactions in cell extracts that detected potential drugs for Parkinson’s disease targeting RNA/HuR complexes
MicroRNAs (miRNAs, miRs) are a class of small non-coding RNAs that regulate gene expression through specific base-pair targeting. The functional mature miRNAs usually undergo a two-step cleavage from primary miRNAs (pri-miRs), then precursor miRNAs (pre-miRs). The biogenesis of miRNAs is tightly controlled by different RNA-binding proteins (RBPs). The dysregulation of miRNAs is closely related to a plethora of diseases. Targeting miRNA biogenesis is becoming a promising therapeutic strategy.
HuR and MSI2 are both RBPs. MiR-7 is post-transcriptionally inhibited by the HuR/MSI2 complex, through a direct interaction between HuR and the conserved terminal loop (CTL) of pri-miR-7-1. Small molecules dissociating pri-miR-7/HuR interaction may induce miR-7 production. Importantly, the miR-7 levels are negatively correlated with Parkinson’s disease (PD).
PD is a common, incurable neurodegenerative disease causing serious motor deficits. A hallmark of PD is the presence of Lewy bodies in the human brain, which are inclusion bodies mainly composed of an aberrantly aggregated protein named α-synuclein (α-syn). Decreasing α-syn levels or preventing α-syn aggregation are under investigation as PD treatments. Notably, α-syn is negatively regulated by several miRNAs, including miR-7, miR-153, miR-133b and others. One hypothesis is that elevating these miRNA levels can inhibit α-syn expression and ameliorate PD pathologies.
In this project, we identified miR-7 as the most effective α-syn inhibitor among the miRNAs that are downregulated in PD and have α-syn-targeting potential. We also observed potential post-transcriptional inhibition of miR-153 biogenesis in neuroblastoma cells, which may help to uncover novel therapeutic targets for PD.
To identify miR-7 inducers that could benefit PD treatment by repressing α-syn expression, we developed a novel technique, RNA Pull-down Confocal Nanoscanning (RP-CONA), to monitor the binding events between pri-miR-7 and HuR. By attaching FITC-pri-miR-7-1-CTL-biotin to streptavidin-coated agarose beads and incubating them in human cultured cell lysates containing overexpressed mCherry-HuR, the bound RNA and protein can be visualised as quantifiable fluorescent rings in the corresponding channels of a confocal high-content imaging system. A pri-miR-7/HuR inhibitor decreases the relative mCherry/FITC intensity ratio in RP-CONA. With this technique, we performed several small-scale screens and found that the bioflavonoid quercetin can largely dissociate the pri-miR-7/HuR interaction. Further studies proved that quercetin is an effective miR-7 inducer as well as an α-syn inhibitor in HeLa cells.
To understand the mechanism of quercetin mediated α-syn inhibition, we tested the effects of quercetin treatment with miR-7-1 and HuR knockout HeLa cells. We found that HuR was essential in this pathway, while miR-7 hardly contributed to the α-syn inhibition. HuR can directly bind an AU-rich element (ARE) at the 3’ untranslated region (3’-UTR) of α-syn mRNA and promote translation. We believe quercetin mainly disrupts the ARE/HuR interaction and disables the HuR-induced α-syn expression.
In conclusion, we developed and optimised RP-CONA, an on-bead, lysate-based technique for detecting RNA/protein interactions and identifying their modulators. With RP-CONA, we found that quercetin induces miR-7 biogenesis and inhibits α-syn expression. With these beneficial effects, quercetin has great potential for clinical application in PD treatment. Finally, RP-CONA can be used in many other RNA/protein interaction studies.
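The RP-CONA readout described above reduces to a ratio of ring intensities per bead. The following is a minimal sketch with invented intensities; the real assay derives these values from confocal high-content images of the bead rings.

```python
import numpy as np

# Each bead yields a ring intensity in the FITC channel (bait RNA) and the
# mCherry channel (bound HuR); an inhibitor of the pri-miR-7/HuR interaction
# lowers the mCherry/FITC ratio. All numbers below are hypothetical.
def ring_ratio(mcherry: np.ndarray, fitc: np.ndarray) -> float:
    """Median per-bead mCherry/FITC ring-intensity ratio."""
    return float(np.median(mcherry / fitc))

fitc = np.array([1000., 1100., 950., 1050.])        # bait RNA signal per bead
mcherry_dmso = np.array([800., 880., 760., 840.])   # vehicle control
mcherry_drug = np.array([300., 330., 280., 310.])   # hypothetical inhibitor

control = ring_ratio(mcherry_dmso, fitc)
treated = ring_ratio(mcherry_drug, fitc)
# A drop in the treated ratio indicates disrupted RNA/protein binding.
print(control, treated)
```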
A model for automated support of the recognition, extraction, customization, and reconstruction of static charts
Data charts are widely used in our daily lives, being present in regular media such as newspapers, magazines, web pages, books, and many others. A well-constructed data chart leads to an intuitive understanding of its underlying data; conversely, when a chart embodies poor design choices, a redesign of the representation may be needed. In most cases, however, these charts are available only as static images, which means the original data are not usually accessible. Automatic methods can therefore be applied to extract the underlying data from chart images in order to enable such changes. Recognizing charts and extracting data from them is a complex task, largely due to the variety of chart types and their visual characteristics.
Computer vision techniques for image classification and object detection are widely used for chart recognition, but only on images without any disturbance. Characteristics of real-world images that can make this task difficult, such as photographic distortion, noise, and misalignment, are absent from most works in the literature. Two computer vision techniques that can assist this task, and have been little explored in this context, are perspective detection and correction. These methods transform a distorted, noisy chart into a clean one, with its type identified and ready for data extraction or other uses. Reconstructing the data is straightforward: as long as the data are available, the visualization can be rebuilt. Reconstructing it in its original context, however, is complex. A visualization grammar is a key component in this scenario, as such grammars usually provide extensions for interaction, chart layers, and multiple views without requiring extra development effort.
This work presents a model for automated support of the custom recognition and reconstruction of charts in images. The model automatically performs the process steps, such as reverse engineering a static chart back into its data table for later reconstruction, while allowing the user to intervene in cases of uncertainty. The work also features a model-based architecture along with prototypes for various use cases. Validation is performed step by step, with methods inspired by the literature, and three use cases provide proof of concept and validation of the model. The first applies chart recognition methods to real-world documents; the second focuses on chart vocalization, using a visualization grammar to reconstruct a chart in audio format; and the third presents an augmented reality application that recognizes and reconstructs charts in their original context (a piece of paper), overlaying the new chart and interaction widgets. The results showed that, with slight changes, chart recognition and reconstruction methods are ready for real-world charts when time, accuracy, and precision are taken into consideration.
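The perspective-correction step discussed in the abstract can be sketched as a four-point homography estimate (direct linear transform). The corner coordinates below are invented; a real pipeline would detect the chart's corners automatically and then warp the photograph with the estimated matrix.

```python
import numpy as np

def homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Estimate H with dst ~ H @ src (homogeneous) from 4 point pairs (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the null space of A: the right singular vector
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(np.array(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H: np.ndarray, pt) -> np.ndarray:
    """Map a 2-D point through the homography."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical corners of a photographed chart (perspective-distorted)
# and the axis-aligned rectangle we want to rectify it to.
src = np.array([[12., 18.], [310., 40.], [298., 225.], [5., 205.]])
dst = np.array([[0., 0.], [300., 0.], [300., 200.], [0., 200.]])
H = homography(src, dst)
print(apply_h(H, src[0]))   # maps the first detected corner near (0, 0)
```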
From wallet to mobile: exploring how mobile payments create customer value in the service experience
This study explores how mobile proximity payments (MPP) (e.g., Apple Pay) create customer value in the service experience compared to traditional payment methods (e.g., cash and card). The main objectives were firstly to understand how customer value manifests as an outcome in the MPP service experience, and secondly to understand how the customer's activities in the process of using MPP create customer value. To achieve these objectives, a conceptual framework is built upon the Grönroos-Voima Value Model (Grönroos and Voima, 2013) and uses the Theory of Consumption Value (Sheth et al., 1991) to determine the customer value constructs for MPP, complemented by Script theory (Abelson, 1981) to determine the value-creating activities the consumer performs in the process of paying with MPP.
The study uses a sequential exploratory mixed-methods design: the first, qualitative stage uses two methods, self-observations (n=200) and semi-structured interviews (n=18); the second, quantitative stage uses an online survey (n=441) and structural equation modelling to further examine the relationships and effects between the value-creating activities and the customer value constructs identified in stage one. The academic contributions include the development of a model of mobile payment services value creation in the service experience, the introduction of the concept of in-use barriers, which occur after adoption and constrain consumers' existing use of MPP, and the demonstration of the importance of the mobile-in-hand momentary condition as an antecedent state. Additionally, the customer value perspective of this thesis demonstrates an alternative to the dominant information technology approaches to researching mobile payments and broadens the view of technology from purely an object a user interacts with to an object immersed in consumers' daily lives.
Acoustic emission enabled particle size estimation via low stress-varied axial interface shearing
Acoustic emission (AE) refers to a rapid release of localized stress energy that propagates as a transient elastic wave; in geotechnical applications it is typically used to study stick-slip during shearing and the breakage and fracture of particles. This article develops a novel method of estimating particle size, an important characteristic of granular materials, using AE signals induced by axial interface shearing. Specifically, a test setup is developed that enables axial interface shearing between a one-dimensional compression granular deposit and a smooth shaft surface. The interface sliding speed (up to 3 mm/s), the compression stress (0–135 kPa), and the particle size (150 μm–5 mm) are varied to test the acoustic response. The start and end of a shearing motion, between which a burst of AE data is produced, are identified through the variation of the AE count rate, before key parameters are extracted from the bursts of interest. Linear regression models are then built to correlate the AE parameters with particle size, and a comprehensive evaluation and comparison of estimation errors is performed. For granular samples of a single size, both the AE energy-related parameters and the AE counts, obtained using an appropriate threshold voltage, are found to be effective in differentiating particle size, exhibiting low fitting errors. The value of this technique lies in its potential application to field testing, for example as an add-on to cone penetration test systems, enabling in-situ characterization of geological deposits.
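The burst-identification and regression steps described above can be sketched as follows. The count-rate trace, threshold, and calibration data are invented; only the overall logic (locating the shearing burst from the AE count rate, then fitting a linear model of an AE parameter against particle size) follows the abstract.

```python
import numpy as np

def burst_window(count_rate: np.ndarray, threshold: float):
    """Indices of the first and last samples whose count rate exceeds
    the threshold, i.e. the start and end of the AE burst."""
    above = np.flatnonzero(count_rate > threshold)
    if above.size == 0:
        return None
    return int(above[0]), int(above[-1])

# Synthetic count-rate trace: quiet, a burst during shearing, quiet again.
rate = np.array([2, 3, 2, 40, 55, 60, 48, 35, 3, 2], dtype=float)
start, end = burst_window(rate, threshold=10.0)

# Hypothetical calibration: AE energy within the burst vs particle size.
size_mm = np.array([0.15, 0.5, 1.0, 2.0, 5.0])
ae_energy = np.array([1.1, 2.4, 3.9, 7.2, 16.8])   # invented measurements
slope, intercept = np.polyfit(size_mm, ae_energy, 1)
# A positive slope means larger particles emit more AE energy, so the fit
# can be inverted to estimate size from a measured burst.
print((start, end), slope)
```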
Synthesis and Characterisation of Low-cost Biopolymeric/mineral Composite Systems and Evaluation of their Potential Application for Heavy Metal Removal
Heavy metal pollution and waste management are two major environmental problems faced in the world today. Anthropogenic sources of heavy metals, especially industrial effluent, pose serious environmental and health concerns by polluting surface water and groundwater. Similarly, on a global scale, thousands of tonnes of industrial and agricultural waste are discarded into the environment annually. Several conventional methods exist to treat industrial effluents, including reverse osmosis, oxidation, filtration, flotation, chemical precipitation, ion exchange resins and adsorption. Among them, adsorption and ion exchange are known to be effective mechanisms for removing heavy metal pollution, especially if low-cost materials can be used.
This thesis was a study into materials that can be used to remove heavy metals from water using low-cost feedstock materials. The synthesis of low-cost composite matrices from agricultural and industrial by-products and low-cost organic and mineral sources was carried out. The feedstock materials being considered include chitosan (generated from industrial seafood waste), coir fibre (an agricultural by-product), spent coffee grounds (a by-product from coffee machines), hydroxyapatite (from bovine bone), and naturally sourced aluminosilicate minerals such as zeolite.
The novel composite adsorbents were prepared using commercially sourced and bovine-sourced hydroxyapatite (HAp), with two types of adsorbents being synthesized: two- and three-component composites. Standard synthetic methods such as precipitation were used to prepare these materials, followed by characterization of their structural, physical, and chemical properties (using FTIR, TGA, SEM, EDX and XRD).
The synthesized materials were then evaluated for their ability to remove metal ions from solutions of heavy metals using single-metal ion type and two-metal ion type solution systems, using the model ion solutions, with quantification of their removal efficiency. It was followed by experimentation using the synthesized adsorbents for metal ion removal in complex systems such as an industrial input stream solution system obtained from a local timber treatment company.
Two-component composites were considered as control composites to compare the removal efficiency of the three-component composites against. The heavy metal removal experiments were conducted under a range of experimental conditions (e.g., pH, sorbent dose, initial metal ion concentration, time of contact). Of the four metal ion systems considered in this study (Cd2+, Pb2+, Cu2+ and Cr as chromate ions), Pb2+ ion removal by the composites was found to be the highest in single-metal and two-metal ion type solution systems, while chromate ion removal was found to be the lowest. The bovine bone-based hydroxyapatite (bHAp) composites were more efficient at removing the metal cations than composites formed from a commercially sourced hydroxyapatite (cHAp).
In industrial input stream solution systems (containing Cu, Cr and As), the Cu2+ ion removal was the highest, which aligned with the observations recorded in the single and two-metal ion type solution systems. Arsenate ion was removed to a higher extent than chromate ion using the three-component composites, while the removal of chromate ion was found to be higher than arsenate ion when using the two-component composites (i.e., the control system).
The project also aimed to elucidate the removal mechanisms of these synthesized composite materials by using appropriate adsorption and kinetic models. The adsorption of metal ions exhibited a range of behaviours, as both models (Langmuir and Freundlich) were found to fit most of the data recorded across the adsorption systems studied. The pseudo-second-order model best described the kinetics of heavy metal ion adsorption in all the composite adsorbent systems studied, in both single-metal and two-metal ion type solution systems. Ion exchange was considered one of the dominant mechanisms for the removal of cations (in single-metal and two-metal ion type solution systems) and arsenate ions (in industrial input stream solution systems), along with other adsorption mechanisms. In contrast, electrostatic attraction was considered the dominant removal mechanism for chromate ions.
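The model fitting described above can be sketched with the standard linearized forms of the two isotherms and the pseudo-second-order rate law. The equilibrium and kinetic data below are invented for illustration; only the model equations are the standard ones named in the abstract.

```python
import numpy as np

# Langmuir isotherm: qe = qm*KL*Ce / (1 + KL*Ce),
# linearized as Ce/qe = Ce/qm + 1/(qm*KL).
Ce = np.array([5., 10., 20., 40., 80.])          # mg/L at equilibrium (invented)
qe = np.array([12.5, 20.0, 28.6, 36.4, 42.1])    # mg/g adsorbed (invented)

slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm = 1.0 / slope                  # maximum adsorption capacity (mg/g)
KL = slope / intercept            # Langmuir affinity constant (L/mg)

# Pseudo-second-order kinetics: t/qt = 1/(k2*qe^2) + t/qe.
t = np.array([5., 10., 20., 40., 80.])           # contact time, min (invented)
qt = np.array([8.0, 13.0, 18.5, 22.0, 23.5])     # mg/g over time (invented)
s2, i2 = np.polyfit(t, t / qt, 1)
qe_kin = 1.0 / s2                 # equilibrium capacity from the kinetic fit
k2 = s2 ** 2 / i2                 # pseudo-second-order rate constant
print(round(qm, 1), round(qe_kin, 1))
```

In practice the goodness of fit of the Langmuir versus Freundlich forms, and of the kinetic model, is what supports the mechanistic interpretation reported above.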
Cortical glutamatergic projection neuron types contribute to distinct functional subnetworks
The cellular basis of cerebral cortex functional architecture remains poorly understood. A major challenge is to monitor and decipher neural network dynamics across broad cortical areas, yet with projection neuron (PN)-type resolution, in real time during behavior. Combining genetic targeting and wide-field imaging, we monitored activity dynamics of subcortical-projecting (PTFezf2) and intratelencephalic-projecting (ITPlxnD1) types across the dorsal cortex of mice during different brain states and behaviors. ITPlxnD1 and PTFezf2 neurons showed distinct activation patterns during wakeful resting, spontaneous movements, and sensory stimulation. Distinct ITPlxnD1 and PTFezf2 subnetworks were dynamically tuned to different sensorimotor components of a naturalistic feeding behavior, and optogenetic inhibition of subnetwork nodes disrupted specific components of this behavior. Lastly, ITPlxnD1 and PTFezf2 projection patterns are consistent with their subnetwork activation patterns. Our results show that, in addition to columnar organization, dynamic areal and PN-type-specific subnetworks are a key feature of cortical functional architecture, linking microcircuit components with global brain networks.