17 research outputs found

    Service quality and trust in e-government: Utilizing the rich measures of system usage to predict trustworthiness

    What is the theoretical rationale that e-government evaluation should employ? Witnessing the changes in society and the public sector, an adaptation is proposed: to evaluate trust building as the goal of e-government, rather than searching for the best predictors of e-government adoption. Organisations that provide online services are concerned with the evaluation of service quality, user satisfaction, and the ultimate goal of the system – value creation. Considering trust as a major value that organisations wish to achieve, the impact of service quality on trust building is the focus of this study: What are the system features that constitute trust in the organisation? To what extent would each feature explain trust building? In the context of e-government, which serves the general public, it is of particular significance to scrutinize the nature of the relationships between the user and the system. Therefore, an adaptation is proposed: to evaluate trust building as the goal of system usage rather than its predictor, in a formative model. This theoretical rationale alters the conventional relationships between well-studied measures of service quality. In a modified version of E-S-QUAL, the service measures were turned into indicators of trust. The findings (n=395) support the viability of the model; the extent to which the user puts trust in the organisation depends on how trustworthy the system is. In addition, the findings support the conceptualization of richer measures of system usage as stronger indicators. Theoretical, methodological and practical implications are discussed.

    INDEXING AND THEORIZING LOCAL E-GOVERNMENT

    The evolution of E-Government provides the opportunity to explore Information and Communication Technology (ICT) adoption by individuals and organizations. A systematic study of local e-government has provided important insights into this topic. This research created an in-depth index of local e-government in Israel, and consequently contributed to the establishment of a theory on ICT acceptance and management. Eighty-eight Internet websites of local authorities were evaluated. In an attempt to understand the differences between them, questionnaires and interviews were carried out among managers in local authorities. This study draws a line from the individual's digital literacy to her ability to intuitively accomplish the normative principles of Information Systems (IS) planning and implementation.

    Technophilia: A New Model For Technology Adoption

    A new model for technology adoption identifies the adoption process itself as a key factor in successful life-long usage of technology. The distinctive contribution of online entertainment and communication to digital literacy is at the heart of the model, termed technophilia. Non-technophile users, who are less experienced in fun activities, are more likely to encounter the approach-avoidance conflict, to refrain from adopting an open attitude to technology, and to perceive it as more useful compared to technophile users. The current study includes findings and implications for low socioeconomic status groups in comparison with the general population.

    EVALUATING AND RANKING LOCAL E-GOVERNMENT SERVICES

    The evolution of E-Government provides the opportunity to explore Information and Communication Technology (ICT) adoption by individuals and organizations. A systematic study of local e-government has provided important insights into this topic. This research created an in-depth index of local e-government in Israel, and consequently contributed to the establishment of a theory on ICT acceptance and management. Eighty-eight Internet websites of local authorities were evaluated. In an attempt to understand the differences between them, questionnaires and interviews were carried out among managers in local authorities. This study draws a line from the individual's digital literacy to her ability to intuitively accomplish the normative principles of Information Systems (IS) planning and implementation.
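
    As an illustration only, the following minimal Python sketch shows one way evaluation scores for individual websites could be aggregated into a composite index and ranked, in the spirit of the index described above. The criteria, weights, and municipality names are hypothetical and are not the study's actual index items.

    from typing import Dict

    # Hypothetical criteria and weights; the study's actual index items differ.
    WEIGHTS: Dict[str, float] = {"information": 0.4, "services": 0.4, "participation": 0.2}

    def index_score(ratings: Dict[str, float]) -> float:
        """Weighted composite score for one local-authority website (ratings on 0-100)."""
        return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

    # Illustrative evaluations for three fictional municipal websites.
    sites = {
        "Municipality A": {"information": 80, "services": 55, "participation": 30},
        "Municipality B": {"information": 60, "services": 70, "participation": 50},
        "Municipality C": {"information": 90, "services": 40, "participation": 20},
    }

    ranking = sorted(sites, key=lambda s: index_score(sites[s]), reverse=True)
    for rank, site in enumerate(ranking, start=1):
        print(rank, site, round(index_score(sites[site]), 1))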

    Resilience of Society to Recognize Disinformation: Human and/or Machine Intelligence

    The paper conceptualizes the societal impacts of disinformation in hopes of developing a computational approach that can identify disinformation in order to strengthen social resilience. An innovative approach that considers the sociotechnical interaction phenomena of social media is utilized to address and combat disinformation campaigns. Based on theoretical inquiries, this study proposes conducting experiments that capture subjective and objective measures and datasets while adopting machine learning to model how disinformation can be identified computationally. The study will focus particularly on understanding communicative social actions as human intelligence when developing machine intelligence to learn about disinformation that is deliberately misleading, as well as the ways people judge the credibility and truthfulness of information. Previous experiments support the viability of a sociotechnical approach, i.e., connecting subtle language-action cues and linguistic features from human communication with hidden intentions, thus leading to deception detection in online communication. The study intends to derive a baseline dataset and a predictive model and, by that, to create an information system artefact with the capability to differentiate disinformation.
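
    Purely as a sketch of the kind of predictive model envisioned here, the snippet below trains a simple text classifier on linguistic features using scikit-learn. The toy messages, labels, and choice of logistic regression over TF-IDF n-grams are illustrative assumptions, not the study's method or data.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labelled messages (1 = deliberately misleading, 0 = credible); real work
    # would use the experimentally collected datasets the abstract proposes.
    texts = [
        "Officials confirm the report after independent verification.",
        "Share now!!! They are hiding the truth from you!!!",
        "The study was peer reviewed and the data are public.",
        "Secret cure banned by the government, doctors hate it.",
    ]
    labels = [0, 1, 0, 1]

    # Word n-grams stand in for the subtle language-action cues mentioned above.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(),
    )
    model.fit(texts, labels)

    print(model.predict(["Breaking: hidden truth they do not want you to see!!!"]))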

    Explainability Using Bayesian Networks for Bias Detection: FAIRness with FDO

    In this paper we aim to provide an implementation of the FAIR Data Points (FDP) specification that will apply our bias detection algorithm and automatically calculate a FAIRness score (FNS). FAIR metrics would themselves be represented as FDOs, could be presented via a visual dashboard, and could be machine accessible (Mons 2020, Wilkinson et al. 2016). This will enable dataset owners to monitor the level of FAIRness of their data. This is a step forward in making data FAIR, i.e., Findable, Accessible, Interoperable, and Reusable; or simply, Fully AI Ready data.
    First, we discuss the context of this topic with respect to Deep Learning (DL) problems. Why are Bayesian Networks (BN, explained below) beneficial for such issues?
    Explainability – Obtaining a directed acyclic graph (DAG) from BN training provides coherent information about the independencies among variables in the database. In a generic DL problem, features are functions of these variables. Thus, one can derive which variables are dominant in our system. When customers or business units are interested in the cause of a neural net outcome, this DAG structure can serve both to indicate variable importance and to clarify the model.
    Dimension Reduction – BN provides the joint distribution of our variables and their associations. The latter may play a role in reducing the features that we feed to the DL engine: if we know that for random variables X, Y the conditional entropy of X given Y is low, we may omit X, since Y provides nearly all of its information. We have, therefore, a tool that can statistically exclude redundant variables.
    Tagging Behavior – This point can be less evident for those who work in domains such as vision or voice. In some frameworks, labeling can be an obscure task (to illustrate, consider a sentiment problem with many categories that may overlap). When we tag the data, we may rely on some features within the datasets and generate conditional probability. Training a BN initialized with an empty DAG may produce outcomes in which the target is a parent of other nodes. Observing several tested examples, these outcomes reflect the "taggers' manners". We can therefore use DAGs not merely for model development in machine learning but mainly to learn the taggers' policy and improve it if needed.
    The conjunction of DL and Causal Inference – Causal inference is a highly developed domain in data analytics. It offers tools to resolve questions that, on the one hand, DL models commonly do not and that, on the other hand, the real world raises. There is a need to find a framework in which these tools work in conjunction. Indeed, such frameworks already exist (e.g., GNN), but a mechanism that merges typical DL problems with causality is less common. We believe that the flow described in this paper is a good step towards achieving benefits from this conjunction.
    Fairness and Bias – Bayesian networks, in their essence, are not a tool for bias detection, but they reveal which of the columns (or which of the data items) is dominant and modifies other variables. When we discuss noise and bias, we attribute these faults to the column and not to the model or to the entire database. However, assume we have a set of tools to measure bias (Purian et al. 2022). Bayesian networks can provide information about the prominence of these columns (as they are "cause" or "effect" in the data), thus allowing us to assess the overall bias in the database.
    What are Bayesian Networks?
    The motivation for using Bayesian Networks (BN) is to learn the dependencies within a set of random variables. The networks themselves are directed acyclic graphs (DAGs), which mimic the joint distribution of the random variables (e.g., Perrier et al. (2008)). The graph structure follows the factorization of the joint distribution into conditional dependencies: a node V depends only on its parents (a random variable X that is independent of the other nodes is presented as a parent-free node).
    Real-World Example
    In this paper we present a way of using the DL engine's tabular data with the Python package bnlearn. Since this project is commercial, the variable names were masked; thus, they have meaningless names.
    Constructing Our DAG
    We begin by finding our optimal DAG:
    import bnlearn as bn
    DAG = bn.structure_learning.fit(dataframe)
    We now have a DAG. It has a set of nodes and an adjacency matrix that can be inspected as follows:
    print(DAG['adjmat'])
    The outcome has the form shown in Fig. 1a, where rows are sources (namely, the direction of the arc is from the left column to the elements in the row) and columns are targets (i.e., the header of the column receives the arcs). When we draw the obtained DAG, we get for one set of variables the image in Fig. 1b. We can see that the target node in the rectangle is a source for many nodes, and that it still points arrows itself to two nodes; we return to this in the discussion (cf. Rauber 2021). We have more variables, therefore we increased the number of nodes. Adding this information provided a new source for the target (i.e., its entire row is "False"). The obtained graph is shown in Fig. 1c.
    So, we know how to construct a DAG. Now we need to train its parameters. Code-wise we perform this as follows:
    model_mle = bn.parameter_learning.fit(DAG, dataframe, methodtype='maximumlikelihood')
    We can replace 'maximumlikelihood' with 'bayes', as described further on. The outcome of this training is a set of factorized conditional distributions that reflect the DAG's structure; it has the form shown in Fig. 1d for a given variable. The code to create the DAG presentation is provided in Fig. 2.
    Discussion
    In this paper we have presented some of the theoretical concepts of Bayesian Networks and their use in constructing an approximated DAG for a set of variables. In addition, we presented a real-world example of end-to-end DAG learning: constructing the DAG using a BN, training its parameters using maximum likelihood estimation (MLE), and performing inference. FAIR metrics, represented as FDOs, can also be visualised and monitored, taking care of data FAIRness.
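
    For convenience, a minimal end-to-end sketch of the same bnlearn flow (structure learning, parameter learning, inference) is given below. It uses bnlearn's bundled sprinkler example dataset as a stand-in for the paper's masked commercial data; the column names and evidence values are therefore illustrative, not the paper's variables.

    import bnlearn as bn

    # Example data shipped with bnlearn; replace with your own tabular dataframe.
    dataframe = bn.import_example('sprinkler')

    # Step 1: structure learning - find a DAG that fits the data.
    DAG = bn.structure_learning.fit(dataframe)
    print(DAG['adjmat'])          # rows are arc sources, columns are arc targets

    # Step 2: parameter learning - estimate the conditional distributions
    # (maximum likelihood here; 'bayes' is the alternative methodtype).
    model_mle = bn.parameter_learning.fit(DAG, dataframe, methodtype='maximumlikelihood')

    # Step 3: inference - query one variable given evidence on others.
    query = bn.inference.fit(model_mle,
                             variables=['Wet_Grass'],
                             evidence={'Rain': 1, 'Sprinkler': 0})
    print(query)

    # Optional: visualise the learned DAG (analogous to Fig. 1b-1c for the masked data).
    bn.plot(DAG)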

    How should the completeness and quality of curated nanomaterial data be evaluated?

    Nanotechnology is of increasing significance. Curation of nanomaterial data into electronic databases offers opportunities to better understand and predict nanomaterials' behaviour. This supports innovation in, and regulation of, nanotechnology. It is commonly understood that curated data need to be sufficiently complete and of sufficient quality to serve their intended purpose. However, assessing data completeness and quality is non-trivial in general and is arguably especially difficult in the nanoscience area, given its highly multidisciplinary nature. The current article, part of the Nanomaterial Data Curation Initiative series, addresses how to assess the completeness and quality of (curated) nanomaterial data. In order to address this key challenge, a variety of related issues are discussed: the meaning and importance of data completeness and quality, existing approaches to their assessment and the key challenges associated with evaluating the completeness and quality of curated nanomaterial data. Considerations which are specific to the nanoscience area and lessons which can be learned from other relevant scientific disciplines are considered. Hence, the scope of this discussion ranges from physicochemical characterisation requirements for nanomaterials and interference of nanomaterials with nanotoxicology assays to broader issues such as minimum information checklists, toxicology data quality schemes and computational approaches that facilitate evaluation of the completeness and quality of (curated) data. This discussion is informed by a literature review and a survey of key nanomaterial data curation stakeholders. Finally, drawing upon this discussion, recommendations are presented concerning the central question: how should the completeness and quality of curated nanomaterial data be evaluated?
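
    As a minimal sketch of one computational approach alluded to above, the snippet below scores a curated record's completeness against a minimum-information checklist. The checklist fields and the example record are hypothetical and are not a recommendation from the article.

    from typing import Dict, List, Optional

    # Hypothetical minimum-information checklist for a curated nanomaterial record;
    # the article surveys real checklists rather than prescribing this one.
    REQUIRED_FIELDS: List[str] = [
        "core_composition", "particle_size_nm", "size_method",
        "surface_coating", "zeta_potential_mV", "assay_interference_checked",
    ]

    def completeness(record: Dict[str, Optional[object]]) -> float:
        """Fraction of checklist fields that are present and non-empty."""
        filled = sum(1 for f in REQUIRED_FIELDS if record.get(f) not in (None, "", []))
        return filled / len(REQUIRED_FIELDS)

    record = {
        "core_composition": "TiO2",
        "particle_size_nm": 21.0,
        "size_method": "TEM",
        "surface_coating": None,            # missing value lowers the score
        "zeta_potential_mV": -28.5,
        "assay_interference_checked": True,
    }
    print(f"completeness = {completeness(record):.2f}")   # 0.83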

    Value of Information in the Network: The Change in Pragmatic and Ethical Criteria

    No full text
    One of the goals set at the beginning of this study was to develop an evaluation method for information systems (IS) that allows for better recognition of informationally meaningful systems compared to the various methods currently in use. Starting with research questions about how to build a reliable and valid index for IS evaluation, and questioning the purpose of IS, this paper ends up with a structured approach to integrating the pragmatic (economic) and ethical (social, environmental, etc.) assumptions about organizations and IS success, considering network effects. To capture the utility (or, more likely, diversified utilities) of Internet websites, a novel problem-structuring approach is proposed, derived from the concepts of multiple criteria decision making (MCDM), value of information (VI), and discourse ethics. Identifying ethical issues on which individuals or groups could differ should guide the design of IS. Finally, to demonstrate why and how this approach is direction-setting, it is applied in contexts where different actors experience different utilities: the evaluation of subjective VI is supposed to move from service orientation towards a practice of sharing, as demonstrated in the context of e-government; the evaluation of realistic VI expands from the well-studied economic criteria of certain agents towards social and environmental criteria, as demonstrated in the context of green IT; and modelling normative VI is expected to change the agent perspective towards a network view. For the first two, empirical data are supplied; for the normative VI, an analytical model is being developed.
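
    To make the MCDM-style idea of actor-dependent utilities concrete, here is a minimal Python sketch in which the same website receives different utility scores under different actors' criteria weights. The criteria, weights, and additive value function are illustrative assumptions, not the paper's model.

    from typing import Dict

    # Hypothetical criteria scores for one website (0-1 scale).
    site_scores: Dict[str, float] = {"economic": 0.8, "social": 0.4, "environmental": 0.3}

    # Different actors weight the pragmatic and ethical criteria differently.
    actor_weights: Dict[str, Dict[str, float]] = {
        "service_provider": {"economic": 0.7, "social": 0.2, "environmental": 0.1},
        "citizen_group":    {"economic": 0.2, "social": 0.5, "environmental": 0.3},
    }

    def utility(scores: Dict[str, float], weights: Dict[str, float]) -> float:
        """Simple additive value function over the shared criteria."""
        return sum(weights[c] * scores[c] for c in weights)

    for actor, weights in actor_weights.items():
        print(actor, round(utility(site_scores, weights), 2))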