The Limits of Conceptual Analysis in Aesthetics
To understand why analytic aesthetics has lost much of its former intellectual stature, historical reconstruction must be combined with systematic consideration. In the middle of the twentieth century, analytic philosophers concluded that essentialist theories of the "nature" of art were no longer tenable. As a consequence, they felt compelled to move to the meta-level of conceptual analysis and tried to show how a purely classificatory concept of art is used. The presupposition that there actually is such a concept, however, appears plausible only at first sight; upon closer inspection it turns out to be utterly misguided.
FIB-SEM and ToF-SIMS Analysis of High-Temperature PEM Fuel Cell Electrodes
The phosphoric acid (PA) distribution in the electrodes is a crucial factor for the performance of high-temperature polymer electrolyte fuel cells (HT-PEM FCs). Therefore, understanding and optimizing the electrolyte distribution is vital to maximizing power output and achieving low degradation. Although challenging, tracking the PA in nanometer-sized pores is essential because most active sites in the commonly used carbon black-supported catalysts are located in pores below 1 ÎŒm. For this study, a cell is operated at 200 mA cm⁻ÂČ for 5 days. After this break-in period, the cathode is separated from the membrane electrode assembly and subsequently investigated by cryogenic focused ion beam scanning electron microscopy (cryo FIB-SEM) coupled with energy-dispersive X-ray spectroscopy (EDX) and time-of-flight secondary ion mass spectrometry (ToF-SIMS). PA is located on the surface and in the bulk of the cathode catalyst layer. In addition, the PA distribution can be successfully linked to the gas diffusion electrode morphology and the binder distribution. The PA preferentially invades nanometer-sized pores and is uniformly distributed in the catalyst layer.
Efficiency of moderately hypofractionated radiotherapy in NSCLC cell model
Background: The current standard of radiotherapy for inoperable locally advanced NSCLC, delivered in single fractions of 2.0 Gy, results in poor outcomes. Several fractionation schedules have been explored over the past decades, developing toward increasingly hypofractionated treatments. Moderately hypofractionated radiotherapy has gained clinical importance as an alternative treatment because of its shorter duration and greater patient convenience. However, clinical trials show controversial results, adding to the need for pre-clinical radiobiological studies of this schedule.

Methods: We compared the efficiency of moderate hypofractionation and normofractionation in four different NSCLC cell lines and in fibroblasts using several molecular-biological approaches. Cells were irradiated daily with 24 x 2.75 Gy (moderate hypofractionation) or 30 x 2 Gy (normofractionation), imitating the clinical situation. We evaluated proliferation and growth rate by direct counting of cell numbers, MTT assay, and measurement of DNA-synthesizing cells (EdU assay); DNA repair efficiency by immunocytochemical staining of residual ÎłH2AX/53BP1 foci; and cell survival by clonogenic assay (CSA).

Results: Overall, the four tumor cell lines and the fibroblasts showed different sensitivities to the two radiation regimes, indicating cell specificity of the effect. The absolute cell numbers and the CSA revealed significant differences between the schedules (P < 0.0001 for all employed cell lines and both assays), with a stronger effect of moderate hypofractionation.

Conclusion: Our results provide evidence for similar effectiveness and toxicity of both regimes, with some evidence favoring moderate hypofractionation. This indicates that increasing the dose per fraction may improve patient survival and therapy outcomes.
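To make the comparison of the two schedules concrete, the total physical dose and the biologically effective dose (BED) under the standard linear-quadratic model can be worked out. This is a minimal sketch: the α/ÎČ ratios used below (10 Gy for tumor, 3 Gy for late-reacting normal tissue) are common textbook assumptions, not values reported in the study, and the helper function is hypothetical.

```python
# Compare the two fractionation schedules from the abstract using the
# standard linear-quadratic BED formula: BED = n * d * (1 + d / (alpha/beta)).
# The alpha/beta ratios are illustrative assumptions, not study values.

def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose in Gy for n fractions of d Gy each."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

schedules = {
    "moderate hypofractionation (24 x 2.75 Gy)": (24, 2.75),
    "normofractionation (30 x 2.0 Gy)": (30, 2.0),
}

for name, (n, d) in schedules.items():
    total = n * d
    print(f"{name}: total dose {total:.1f} Gy, "
          f"BED(alpha/beta=10) {bed(n, d, 10):.1f} Gy, "
          f"BED(alpha/beta=3) {bed(n, d, 3):.1f} Gy")
```

Under these assumed ratios, the 24 x 2.75 Gy schedule delivers both a higher total dose (66 Gy vs. 60 Gy) and a higher tumor BED (about 84 Gy vs. 72 Gy at α/ÎČ = 10), which is consistent with the stronger effect observed for moderate hypofractionation in the cell experiments.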
Compression of Large-Scale Aerial Imagery: Exploring Set Redundancy Methods
Compression of data has historically always been important; more data is being produced and therefore has to be stored. Even as hardware technology advances, compression remains a must to reduce the storage occupied and to keep data in transmission as small as possible. Set redundancy was developed in 1996 but has since received little attention in research. This paper implements two set redundancy methods, the Max-Min-Predictive II and the Intensity Mapping algorithms, to see whether this approach can be used on large-scale aerial imagery in the geodata field. After applying the set redundancy methods, different individual image compression methods were applied and compared to the standard JPEG2000 in lossless mode. These compression algorithms were Huffman, LZW, and JPEG2000 itself. The data sets used were two pairs of images taken in 2019, one pair with 60% overlap and the other with 80% overlap. Individual compression of images still offers a better compression ratio, but the set redundancy methods produce results that are worth investigating further with more images in a set of similar images. This points to future work on compressing a larger set with more overlap and more images, which, for greater matching potential, should be overlaid more carefully to ensure matching pixel values.

Data compression has historically always been important; more data than ever is being produced and needs to be stored. Despite technological advances in storage and computing, compression is a must to reduce the amount of storage required and to ease transfers by reducing the amount of data that has to be sent. Set redundancy was developed in 1996 but has since received little attention in research. This paper implements two different set redundancy methods, the Max-Min-Predictive II and the Intensity Mapping algorithm, to see whether the approach can be used on aerial images from large-scale aerial surveys. After applying the set redundancy methods to a set of aerial images, other single-image compression methods were applied to the result, and this was compared with lossless JPEG2000 compression of the original images. The compression algorithms applied to the set redundancy output were Huffman, LZW, and JPEG2000. The data set used consisted of two pairs of images from 2019, one with 60% overlap and the other pair with 80% overlap. Individual compression of the data sets gave a higher compression ratio than the set redundancy methods, but set redundancy has scaling potential as more images are added to a set, which is worth investigating further. This points to future work in which compression of larger data sets with more overlap between images, registered on top of each other with higher geographic accuracy, can be tested.
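Since the abstract does not spell out how a set redundancy method works, the sketch below illustrates the general min-max predictive idea behind methods such as Max-Min-Predictive II: the per-pixel minimum and maximum images of the set are stored once, and each image is then stored as residuals against a predictor derived from them (here simply their midpoint). This is a minimal illustration of the concept under my own assumptions, not the exact MMP-II algorithm from the 1996 work, and the function names are hypothetical.

```python
import numpy as np

def set_encode(images):
    """Encode a set of aligned, equally sized grayscale images (uint8).

    The per-pixel minimum and maximum images of the set are stored once;
    each image is then stored as the residual against their midpoint.
    For well-registered, similar images the residuals cluster near zero,
    which a back-end coder (Huffman, LZW, lossless JPEG2000) can exploit.
    """
    stack = np.stack(images).astype(np.int16)
    lo, hi = stack.min(axis=0), stack.max(axis=0)
    predictor = (lo + hi) // 2
    residuals = stack - predictor            # small values for similar images
    return lo, hi, residuals

def set_decode(lo, hi, residuals):
    """Losslessly reconstruct the original images from the encoded set."""
    predictor = (lo.astype(np.int16) + hi.astype(np.int16)) // 2
    return (residuals + predictor).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    # Two "overlapping" views of the same scene with small differences.
    imgs = [np.clip(base.astype(np.int16) + rng.integers(-3, 4, base.shape),
                    0, 255).astype(np.uint8) for _ in range(2)]
    lo, hi, res = set_encode(imgs)
    assert np.array_equal(set_decode(lo, hi, res), np.stack(imgs))
    print("max |residual|:", np.abs(res).max())   # near zero for similar sets
```

Note that the min and max reference images must themselves be stored and compressed, so the approach only pays off once the set is large and similar enough to amortize that overhead, which matches the thesis's conclusion that larger, more carefully registered sets are worth testing.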