104 research outputs found

    Challenges to Computing

    In posing the question of challenges to computing, we consider what will sustain it; that is, we ask if, or when, computing and computers will reach the end of their innovative applications. This is not a discussion about bigger and faster machines. Of course, bigger and faster computers can and will push ordinary, well-explored topics to new limits; this is ongoing and will continue for centuries. Rather, we enter into a discussion about the use of computers to solve new, even revolutionary, problems of this world. Innovation is necessary for the simple reason that problems are becoming bigger, more complex, even wicked, and some apparently impossible.

    Development framework pattern for pervasive information systems

    During the last decade, the world witnessed the social acceptance of computing and computers, enhanced information technology devices, wireless networks, and the Internet; they gradually became a fundamental resource for individuals. Nowadays, people, organizations, and the environment are empowered by computing devices and systems; they depend on services offered by modern Pervasive Information Systems (PIS) supported by complex software systems and technology. Research on software development for PIS has delivered information on the issues and challenges of software development for such systems, along with several other contributions. Among these contributions are a development framework for PIS, a profiling and framing structure approach, and a SPEM 2.0 extension. This chapter, revisiting these contributions, provides an additional one: a pattern to support the use of the development framework and the profiling approach in software development for PIS. This contribution completes a first series of contributions for the development of PIS. The chapter also presents a case study that demonstrates the applicability of these contributions.

    Computer graphics archive: Prototype 1

    None provided

    How I got to work with Feynman on the covariant quark model

    In the period 1968-1974 I was a graduate student and then a postdoc at Caltech and was involved in the development of the quark and parton models. Most of this time I worked in close contact with Richard Feynman and was thus present from the time the parton model was proposed until QCD was formulated. A personal account is presented of how the collaboration took place and how the various stages of this development looked from the inside, until QCD was established as the theory of strong interactions with the partons being quarks and gluons. Comment: LaTeX, 20 pages, 2 figures. Contribution to "50 Years of Quarks", to be published by World Scientific.

    Comparative and statistical analysis between the CERN conference database and three other bases

    This is a comparison between three scientific conference databases and the CERN data. The databases of the High Energy Physics institutes DESY and SLAC, and the commercial STN-FIZ database, are described and analysed in statistical tables. We plan to work out a co-operation policy, especially with DESY, for the exchange or import of data.

    Construction of the publication and patent clusters produced by the arbitrary terms with the use of the specialized Google tools

    An analytical technique has been developed for constructing publication and patent clusters produced by arbitrary terms with the use of the specialized Google tools. Different names of types of computer calculations and devices were selected as the scientific terms for testing with Google Scholar, Google Books, and Google Patents, beginning with the words: Quantum, Bacterial, Cognitive, Cellular, Cloud, Ubiquitous.
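    The abstract does not spell out how the counts behind such clusters are gathered, so the sketch below only illustrates the general shape of the procedure: combine each selected prefix word with a computing term, query each of the three Google sources for the exact phrase, and collect the hit counts per phrase. The result_count helper is a hypothetical placeholder for whatever lookup mechanism (manual search, export, or API) is actually used.

    # Illustrative sketch only: tally hit counts per term across three sources.
    # result_count() is a hypothetical placeholder; replace it with the actual
    # lookup used against Google Scholar, Google Books, or Google Patents.
    from itertools import product

    PREFIXES = ["Quantum", "Bacterial", "Cognitive", "Cellular", "Cloud", "Ubiquitous"]
    TERMS = ["computing", "computer"]
    SOURCES = ["Google Scholar", "Google Books", "Google Patents"]

    def result_count(source: str, phrase: str) -> int:
        """Hypothetical stub: return the number of hits for `phrase` in `source`."""
        return 0  # replace with a real lookup for the chosen source

    def build_clusters() -> dict:
        """Group hit counts by exact search phrase across the three sources."""
        clusters = {}
        for prefix, term in product(PREFIXES, TERMS):
            phrase = f'"{prefix} {term}"'
            clusters[phrase] = {src: result_count(src, phrase) for src in SOURCES}
        return clusters

    if __name__ == "__main__":
        for phrase, counts in build_clusters().items():
            print(phrase, counts)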

    A general actuator saturation compensator in the continuous-time domain.

    A general compensator for actuator saturation that includes existing ones as special cases is presented. The conditions that must be satisfied for the implementation of the compensator are given. It is shown that, for a given system, there exists an arbitrarily large number of compensators such that the compensated system is absolutely stable. The result suggests that a compensator can be derived from a system that is known to be absolutely stable. If the system is unknown, the compensator may have to be designed iteratively until the effective set-point is acceptable.
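    The compensator itself is not reproduced in the abstract, so the following is only a minimal sketch of the saturation-compensation idea it addresses: a textbook back-calculation anti-windup scheme around a PI controller driving an assumed first-order plant, simulated by Euler integration. All gains, limits, and the plant model are assumed values chosen for illustration; with the set-point deliberately placed beyond what the saturated actuator can reach, the loop settles at an effective set-point below the requested one.

    # Minimal sketch: PI control of an assumed first-order plant x' = -a*x + b*u,
    # with actuator saturation and back-calculation anti-windup (a standard
    # textbook compensator, not the general compensator of the paper).
    import numpy as np

    a, b = 1.0, 1.0              # assumed plant parameters
    kp, ki = 2.0, 1.5            # assumed PI gains
    u_min, u_max = -1.0, 1.0     # actuator saturation limits
    Tt = 0.5                     # back-calculation time constant
    dt, T = 1e-3, 10.0           # Euler step and simulation horizon
    r = 1.5                      # set-point beyond the saturated plant's reach

    x, integ = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = r - x
        u = kp * e + ki * integ                   # unsaturated control signal
        u_sat = float(np.clip(u, u_min, u_max))   # what the actuator delivers
        integ += (e + (u_sat - u) / Tt) * dt      # anti-windup integrator update
        x += (-a * x + b * u_sat) * dt            # plant state update

    print(f"output settles near x = {x:.3f} for requested set-point r = {r}")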

    Teacher Training in Technology Based on their Psychological Characteristics: Methods of Group Formation and Assessment

    Teachers, despite adequate training in Information and Communication Technology (ICT), appear to be reluctant to incorporate ICT into their teaching practices. This is an issue of major importance, not only for educational but also for career-development reasons, since the acquisition of new skills broadens a professional's career identity and enriches his or her career opportunities. Research so far has tried to explore the factors related to teachers' reluctance, and personality seems to be one of them. The paper presents the first stage of an extended research study in this field and discusses the research methodology used to explore personality traits as well as other psychological characteristics, such as self-efficacy related to ICT use, and anxiety and attitudes towards ICT use. The sample consisted of trainee teachers who were divided into groups according to their personality characteristics, based on the five-factor personality model of Costa and McCrae (1992). The instruments that were constructed for the present study and used for the assessment of in-group cooperation and of teachers' intention to adopt ICT in teaching are presented and discussed.
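    The abstract does not state the exact group-formation procedure, so the following is only a sketch of one plausible reading: standardize each trainee's five-factor (OCEAN) trait scores and cluster them so that groups share similar personality profiles. The scores below are synthetic, and the choice of k-means and of four groups are assumptions made purely for illustration.

    # Illustrative sketch only: grouping trainee teachers by Big Five profiles.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # 40 trainees x 5 traits (Openness, Conscientiousness, Extraversion,
    # Agreeableness, Neuroticism), scored 1-5; synthetic stand-in data.
    scores = rng.uniform(1.0, 5.0, size=(40, 5))

    z = StandardScaler().fit_transform(scores)    # put traits on a common scale
    groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(z)

    for g in range(4):
        members = np.flatnonzero(groups == g).tolist()
        print(f"group {g}: trainees {members}")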

    A Modified Overlapping Partitioning Clustering Algorithm for Categorical Data Clustering

    Clustering is one of the important approaches of data analysis: it enables the grouping of unlabeled data by partitioning the data into clusters with similar patterns. Over the past decades, many clustering algorithms have been developed for various clustering problems. The overlapping partitioning clustering (OPC) algorithm, however, can only handle numerical data, and novel clustering algorithms have been studied extensively to overcome this issue. By increasing the number of objects belonging to one cluster and the distance between cluster centers, this study aimed to cluster textual data without losing the algorithm's main functions. The study used the 20 Newsgroups dataset, which consists of approximately 20,000 textual documents. By introducing some modifications to the traditional algorithm, an acceptable level of homogeneity and completeness of clusters was achieved. Modifications were performed on the pre-processing phase and the data representation, along with a number of methods that influence the primary function of the algorithm. Subsequently, the results were evaluated and compared with the k-means algorithm on the training and test datasets. The results indicated that the modified algorithm could successfully handle categorical data and produce satisfactory clusters.
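    As a minimal sketch of the overlapping-assignment idea described above (not the paper's modified OPC algorithm), the snippet below represents a tiny stand-in corpus with TF-IDF, reuses k-means centres as cluster centres, and lets each document join every cluster whose centre lies within a fixed radius, so a single object may belong to several clusters; homogeneity and completeness are then reported for the plain k-means labels. The corpus, the radius, and the reuse of k-means centres are all assumptions for illustration.

    # Illustrative sketch: overlapping assignment of TF-IDF document vectors.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans
    from sklearn.metrics import homogeneity_score, completeness_score
    from sklearn.metrics.pairwise import euclidean_distances

    docs = [
        "the goalkeeper saved the penalty in the football match",
        "the striker scored twice in the league match",
        "the new graphics card speeds up game rendering",
        "the gpu renders frames for the video game engine",
        "the telescope observed a distant galaxy last night",
        "astronomers measured the redshift of the galaxy",
    ]
    labels_true = [0, 0, 1, 1, 2, 2]          # hand-assigned topics for scoring

    X = TfidfVectorizer().fit_transform(docs)
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    # Overlapping assignment: join every cluster whose centre is within `radius`.
    dist = euclidean_distances(X, km.cluster_centers_)
    radius = 1.2                              # assumed threshold, tuned by hand
    memberships = [np.flatnonzero(row <= radius).tolist() for row in dist]

    print("overlapping memberships per document:", memberships)
    print("homogeneity :", round(homogeneity_score(labels_true, km.labels_), 3))
    print("completeness:", round(completeness_score(labels_true, km.labels_), 3))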