46 research outputs found

    Money and Markets: Introduction

    In the history of economic thought, the relationship between money and the market has been interpreted from two contrasting points of view. On the one hand, money is seen as an instrument created by individuals to overcome the difficulties of barter, its basic function being that of a medium of exchange; on the other, money is held to have developed before the market, its principal function being that of a standard of value. In the former case, therefore, the unit-of-account function emerges from a practice (the exchange of goods and services) based on the advantages it offers to individuals seeking to maximise their utility; in the latter, money emerges as a rule adopted by the members of the community (with the political authorities promoting it and ensuring that it is respected) which pre-dates the market. As with money, two approaches to the market have also confronted each other in the history of economic thought. In the first, the market is seen as a pillar supporting economies characterised by private property and freedom of enterprise: it is the means by which members of society democratically reach decisions about the use of resources and the distribution of income. In the second, the market is the means by which decisions on the use of resources and the distribution of income, once taken by the groups holding economic command, are passed on to the other members of society to be implemented. In the first case the market lies at the heart of the system and serves to prevent some members of society from enjoying privileges and position rents to the exclusion of others; in the second it merely plays a supporting role for the existing patterns of power.

    Development and Validation of a New Prognostic System for Patients with Hepatocellular Carcinoma

    Background: Prognostic assessment in patients with hepatocellular carcinoma (HCC) remains controversial. Using the Italian Liver Cancer (ITA.LI.CA) database as a training set, we sought to develop and validate a new prognostic system for patients with HCC. Methods and Findings: Prospectively collected databases from Italy (training cohort, n = 3,628; internal validation cohort, n = 1,555) and Taiwan (external validation cohort, n = 2,651) were used to develop the ITA.LI.CA prognostic system. We first defined ITA.LI.CA stages (0, A, B1, B2, B3, C) using only tumor characteristics (largest tumor diameter, number of nodules, intra- and extrahepatic macroscopic vascular invasion, extrahepatic metastases). A parametric multivariable survival model was then used to calculate the relative prognostic value of ITA.LI.CA tumor stage, Eastern Cooperative Oncology Group (ECOG) performance status, Child–Pugh score (CPS), and alpha-fetoprotein (AFP) in predicting individual survival. Based on the model results, an ITA.LI.CA integrated prognostic score (from 0 to 13 points) was constructed, and its prognostic power compared with that of other integrated systems (BCLC, HKLC, MESIAH, CLIP, JIS). Median follow-up was 58 mo for Italian patients (interquartile range, 26–106 mo) and 39 mo for Taiwanese patients (interquartile range, 12–61 mo). The ITA.LI.CA integrated prognostic score showed optimal discrimination and calibration abilities in Italian patients. Observed median survival in the training and internal validation sets was 57 and 61 mo, respectively, in quartile 1 (ITA.LI.CA score ≤ 1), 43 and 38 mo in quartile 2 (ITA.LI.CA score 2–3), 23 and 23 mo in quartile 3 (ITA.LI.CA score 4–5), and 9 and 8 mo in quartile 4 (ITA.LI.CA score > 5). Observed and predicted median survival in the training and internal validation sets largely coincided. Although observed and predicted survival estimations were significantly lower (log-rank test, p < 0.001) in Italian than in Taiwanese patients, the ITA.LI.CA score maintained very high discrimination and calibration features in the external validation cohort as well. The concordance index (C index) of the ITA.LI.CA score in the internal and external validation cohorts was 0.71 and 0.78, respectively. The ITA.LI.CA score's prognostic ability was significantly better (p < 0.001) than that of BCLC stage (respective C indexes of 0.64 and 0.73), CLIP score (0.68 and 0.75), JIS stage (0.67 and 0.70), MESIAH score (0.69 and 0.77), and HKLC stage (0.68 and 0.75). The main limitations of this study are its retrospective nature and the intrinsically significant differences between the Taiwanese and Italian groups. Conclusions: The ITA.LI.CA prognostic system includes both a tumor staging (stratifying patients with HCC into six main stages: 0, A, B1, B2, B3, and C) and a prognostic score integrating ITA.LI.CA tumor staging, CPS, ECOG performance status, and AFP. The ITA.LI.CA prognostic system shows a strong ability to predict individual survival in European and Asian populations
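
    As a point of reference for the discrimination results reported above, the following is a minimal Python sketch of Harrell's concordance index (C index) for right-censored survival data. It is an illustrative implementation under simplifying assumptions (ties in survival time are ignored) and uses invented toy inputs, not the study's code or data.

    def harrell_c_index(times, events, risk_scores):
        """Harrell's C index for right-censored survival data.

        times:       follow-up time for each patient (e.g., months)
        events:      1 if the event (death) was observed, 0 if censored
        risk_scores: higher score = higher predicted risk (shorter survival)
        """
        concordant, comparable = 0.0, 0
        n = len(times)
        for i in range(n):
            for j in range(n):
                # A pair is comparable when patient i has an observed event
                # strictly before patient j's follow-up time.
                if i != j and events[i] == 1 and times[i] < times[j]:
                    comparable += 1
                    if risk_scores[i] > risk_scores[j]:
                        concordant += 1.0   # riskier patient died earlier: concordant
                    elif risk_scores[i] == risk_scores[j]:
                        concordant += 0.5   # tied scores count as half
        return concordant / comparable if comparable else float("nan")

    # Toy example with invented values (times in months, hypothetical 0-13 scores).
    times = [9, 23, 43, 57, 61, 12]
    events = [1, 1, 1, 0, 0, 1]
    scores = [7, 4, 3, 1, 0, 6]
    print(round(harrell_c_index(times, events, scores), 2))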

    Off-label long acting injectable antipsychotics in real-world clinical practice: a cross-sectional analysis of prescriptive patterns from the STAR Network DEPOT study

    Introduction: Information on the off-label use of Long-Acting Injectable (LAI) antipsychotics in the real world is lacking. In this study, we aimed to identify the sociodemographic and clinical features of patients treated with on- vs off-label LAIs and predictors of off-label First- or Second-Generation Antipsychotic (FGA vs. SGA) LAI choice in everyday clinical practice. Method: In a naturalistic national cohort of 449 patients who initiated LAI treatment in the STAR Network Depot Study, two groups were identified based on off- or on-label prescriptions. A multivariate logistic regression analysis was used to test several clinically relevant variables and identify those associated with the choice of FGA vs SGA prescription in the off-label group. Results: SGA LAIs were more commonly prescribed in everyday practice, without significant differences in their on- and off-label use. Approximately 1 in 4 patients received an off-label prescription. In the off-label group, the most frequent diagnoses were bipolar disorder (67.5%) or any personality disorder (23.7%). FGA vs SGA LAI choice was significantly associated with the BPRS thought disorder (OR = 1.22, CI95% 1.04 to 1.43, p = 0.015) and hostility/suspiciousness (OR = 0.83, CI95% 0.71 to 0.97, p = 0.017) dimensions. The likelihood of receiving an SGA LAI grew steadily as the BPRS thought disturbance score increased. Conversely, a preference for prescribing an FGA was observed with higher scores on the BPRS hostility/suspiciousness subscale. Conclusion: Our study is the first to identify predictors of FGA vs SGA choice in patients treated with off-label LAI antipsychotics. Demographic characteristics, i.e. age, sex, and substance/alcohol use comorbidities, did not appear to influence the choice towards FGAs or SGAs. Despite a lack of evidence, clinicians tend to favour FGA over SGA LAIs in bipolar or personality disorder patients with relevant hostility. Further research is needed to evaluate the treatment adherence and clinical effectiveness associated with these prescriptive patterns
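
    For readers unfamiliar with how such odds ratios are obtained, the snippet below is a rough Python sketch of a logistic regression of FGA (vs SGA) LAI choice on two BPRS dimensions, with coefficients exponentiated into odds ratios and 95% confidence intervals. The data are simulated and the variable names are hypothetical; this is not the STAR Network analysis code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in data: one row per off-label patient.
    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "thought_disorder": rng.integers(1, 8, n),  # BPRS-like dimension scores
        "hostility": rng.integers(1, 8, n),
    })
    # FGA choice simulated loosely in the direction of the reported effects.
    linpred = -0.2 * df["thought_disorder"] + 0.2 * df["hostility"]
    df["fga"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

    # Logistic regression; exp(coefficient) is the odds ratio per one-point increase.
    fit = smf.logit("fga ~ thought_disorder + hostility", data=df).fit(disp=False)
    ors = pd.DataFrame({
        "OR": np.exp(fit.params),
        "CI95_low": np.exp(fit.conf_int()[0]),
        "CI95_high": np.exp(fit.conf_int()[1]),
    })
    print(ors)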

    The Role of Attitudes Toward Medication and Treatment Adherence in the Clinical Response to LAIs: Findings From the STAR Network Depot Study

    Background: Long-acting injectable (LAI) antipsychotics are efficacious in managing psychotic symptoms in people affected by severe mental disorders, such as schizophrenia and bipolar disorder. The present study aimed to investigate whether attitude toward treatment and treatment adherence represent predictors of symptom changes over time. Methods: The STAR Network “Depot Study” was a naturalistic, multicenter, observational, prospective study that enrolled people initiating a LAI without restrictions on diagnosis, clinical severity or setting. Participants from 32 Italian centers were assessed at three time points: baseline, 6-month, and 12-month follow-up. Psychopathological symptoms, attitude toward medication and treatment adherence were measured using the Brief Psychiatric Rating Scale (BPRS), the Drug Attitude Inventory (DAI-10) and Kemp's 7-point scale, respectively. Linear mixed-effects models were used to evaluate whether attitude toward medication and treatment adherence independently predicted symptom changes over time. Analyses were conducted on the overall sample and then stratified according to baseline severity (BPRS < 41 or BPRS ≥ 41). Results: We included 461 participants, of whom 276 were males. The majority of participants had received a primary diagnosis of a schizophrenia spectrum disorder (71.80%) and initiated treatment with a second-generation LAI (69.63%). BPRS, DAI-10, and Kemp's scale scores improved over time. Six linear regressions, conducted considering the outcome and predictors at baseline, 6-month, and 12-month follow-up independently, showed that both DAI-10 and Kemp's scale scores were negatively associated with BPRS scores at the three considered time points. Linear mixed-effects models conducted on the overall sample did not show any significant association between attitude toward medication or treatment adherence and changes in psychiatric symptoms over time. However, after stratification according to baseline severity, we found that both DAI-10 and Kemp's scale scores negatively predicted changes in BPRS scores at 12-month follow-up regardless of baseline severity. The association at 6-month follow-up was confirmed only in the group with moderate or severe symptoms at baseline. Conclusion: Our findings corroborate the importance of improving the quality of the relationship between clinicians and patients. Shared decision making and thorough discussions about benefits and side effects may improve the outcome in patients with severe mental disorders
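
    To make the modelling approach concrete, here is a rough Python sketch of a linear mixed-effects model of BPRS scores on visit time, DAI-10, and the Kemp scale with a random intercept per participant. The file and column names (depot_long.csv, subject_id, visit, bprs, dai10, kemp) are hypothetical placeholders, not the study's actual dataset or code.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format table: one row per participant per visit,
    # with visit coded 0 (baseline), 6 (6-month), and 12 (12-month follow-up).
    df = pd.read_csv("depot_long.csv")  # columns: subject_id, visit, bprs, dai10, kemp

    # Random intercept for each participant; the visit interactions test whether
    # drug attitude (DAI-10) and adherence (Kemp) predict symptom change over time.
    model = smf.mixedlm(
        "bprs ~ visit + dai10 + kemp + visit:dai10 + visit:kemp",
        data=df,
        groups=df["subject_id"],
    ).fit()
    print(model.summary())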

    Pervasive gaps in Amazonian ecological research

    Biodiversity loss is one of the main challenges of our time,1,2 and attempts to address it require a clear understanding of how ecological communities respond to environmental change across time and space.3,4 While the increasing availability of global databases on ecological communities has advanced our knowledge of biodiversity sensitivity to environmental changes,5,6,7 vast areas of the tropics remain understudied.8,9,10,11 In the American tropics, Amazonia stands out as the world's most diverse rainforest and the primary source of Neotropical biodiversity,12 but it remains among the least known forests in America and is often underrepresented in biodiversity databases.13,14,15 To worsen this situation, human-induced modifications16,17 may eliminate pieces of the Amazon's biodiversity puzzle before we can use them to understand how ecological communities are responding. To increase generalization and applicability of biodiversity knowledge,18,19 it is thus crucial to reduce biases in ecological research, particularly in regions projected to face the most pronounced environmental changes. We integrate ecological community metadata of 7,694 sampling sites for multiple organism groups in a machine learning model framework to map the research probability across the Brazilian Amazonia, while identifying the region's vulnerability to environmental change. 15%–18% of the most neglected areas in ecological research are expected to experience severe climate or land use changes by 2050. This means that unless we take immediate action, we will not be able to establish their current status, much less monitor how it is changing and what is being lost
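
    The mapping step described above, predicting how likely a location is to have been sampled from site covariates, can be sketched roughly as follows. This is a generic Python illustration under assumed inputs (a grid-cell table with invented covariate names and a sampled/unsampled flag), not the authors' modelling framework.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical grid covering Brazilian Amazonia: each row is a cell with
    # environmental/accessibility covariates and a flag indicating whether any
    # of the 7,694 sampling sites falls inside it.
    cells = pd.read_csv("amazonia_grid_cells.csv")
    covariates = ["dist_to_road_km", "dist_to_river_km", "elevation_m", "annual_precip_mm"]

    X_train, X_test, y_train, y_test = train_test_split(
        cells[covariates], cells["sampled"], test_size=0.2, random_state=42
    )

    # A random forest yields a per-cell "research probability" via predict_proba.
    model = RandomForestClassifier(n_estimators=500, random_state=42)
    model.fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

    # Cells with low predicted probability flag neglected areas that can then be
    # overlaid with projected climate and land-use change to 2050.
    cells["research_probability"] = model.predict_proba(cells[covariates])[:, 1]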

    Dalla giuscibernetica all'informatica giuridica decisionale. Indagine sui fondamenti e sui limiti dell'intelligenza artificiale applicata al diritto

    This research offers a survey of the historical and epistemological foundations and of the philosophical roots of legal informatics. After examining the interdisciplinary birth of cybernetics, we bring out the extreme reductivism of that problematic interdisciplinarity. In particular, on the philosophical level, we aim to show the presence in cybernetics of identitarian thought, i.e. of that very ancient thought which requires the indifference of the entire being. On the legal level, after examining the way in which legal reasoning is used in legal expert systems, we address the problem of the algorithmization of law, which rests on the algorithmic hypothesis of mind, according to which the mind works as an algorithm. Artificial Intelligence, the latest and most important gain of cybernetics, in showing itself able to translate theoretical-juridical conceptions into computational models of law, allows the measurement of juridical concepts and their structural reduction to computer-science formalisms, and therefore the calculation of law. All this raises many methodological problems for philosophers and, before them, for jurists.

    This research intends to offer an examination of the historical and epistemological foundations and of the philosophical matrices of legal informatics, in relation to the technological challenge (S. Cotta) that computer technology poses to the study and resolution of legal questions. It sets out to probe in depth the terrain of so-called legal cybernetics (giuscibernetica), showing how the strength and appeal, but also the many critical weaknesses, of that epistemological programme have flowed into the now consolidated discipline of legal informatics. That the use of computing tools is not merely a "technical" problem, but forcefully touches fields proper to legal reflection, has already been amply demonstrated; what this work aims to highlight is the importance, within the epistemological status of legal cybernetics (and hence of its successor, decisional legal informatics), of the metaphysical matrix of the thought within which algorithmic epistemology was born and of which the computing machine is but the manifest hypostasis. The electronic prodigy is indeed the result of the technological evolution of technique (E. Severino, N. Irti), but it is above all the concrete demonstration of the far from merely theoretical force of a philosophical reductionism, as silent as it is problematic, which has absorbed much of the theoretical effort and intellectual energy of modern and postmodern humanity: an epistemological reductionism whose origins lie in the terrain of identitarian metaphysics, that is, the metaphysics that rejects the logos of difference. On the basis of a deliberately interdisciplinary inquiry, I therefore aim to show how the ambition, hoped for by many jurists and supported by the progressive evolution of Artificial Intelligence programs, to realize a cybernetic model of law, whatever its immediate advantages in terms of legal inquiry, judicial prediction and judicial decision making, poses far from negligible problems on both the methodological and the ontological plane, reducing the complexity of the legal phenomenon to a system of mere calculation.
    It is then shown how the imperative of rationalization, an evident epistemological legacy of the algorithmic imago mentis, ends up eroding the nature of law, and how legal method, increasingly modelled on the canons of algorithmic efficiency, is progressively reduced to the schemes of the exact sciences. The analysis therefore turns on the problems posed by the unstated attempt of algorithmic legal epistemology to redesign the ontology of law, shaping the new frontiers of law (P. Barcellona), that is, the frontiers of artificial law (V. Frosini). Having examined the interdisciplinary birth of cybernetics, light is shed on the extremely reductivist scope of that problematic epistemology. It is shown, in particular, how the ultimate cipher of the cybernetic programme coincides with that of identitarian thought, whose complex weave is unravelled in the present work with the aid of theoretical philosophy, starting from the study of the metaphysical categories revolutionized and rethought in a radically identitarian key. The legal-cybernetic problem of the so-called algebraization of law is then addressed in the light of the "panlinguism of identity", that is, of the grandiose project of logical, syntactic and semantic reform of language which, in the final analysis, rests on the immensely powerful identitarian theorization. Particular attention is paid to Leibniz's characteristica universalis, which constitutes an evident proto-cybernetic paradigm of the thought that, from Llull's ars combinatoria to Boolean algebra, aims at the purification, formalization and normalization of language, in short at its "cybernetization". On the legal plane, after examining the ways in which the legal reasoning underlying legal expert systems is articulated, the so-called algorithmization of law is addressed, showing how the algorithmic hypothesis of mind, according to which human thought could (and, for some, should) be reduced to an algorithm, dominates the cybernetic model of law. AI, the latest and richest gain of cybernetic epistemology, insofar as it proves capable of translating theoretical-legal conceptions into computational models of law, allows legal concepts to be measured and structured in computational formalisms, thereby permitting the calculation of the legal: the aspiration of an algorithmic reason whose roots lie in ontological terrain and which raises many questions, methodological first of all, for philosophers and, before them, for jurists