    Philosophy and Education: The Engines of National Development

    Philosophy is a method of reflective, rational and constructive thinking, as well as a form of reasoned inquiry, while development designates the ability of a people to appropriate their total essence in a total manner as a people: a holistic vision in which they assume control of their own destiny. Education, in turn, denotes an enlightening experience or training in a particular subject. In line with this, the paper seeks to present philosophy and education as engines of national development. Against the backdrop of the diverse and heterogeneous nature of African cultures, and of Nigeria in particular, the paper raises two fundamental questions: Is it possible for Africa, and Nigeria in particular, to develop as a nation in spite of her ethnic, religious, linguistic and cultural differences? And if so, what are the critical factors in achieving this? Using historical hermeneutics and philosophical analysis, the paper argues that philosophy and education remain the bedrock of Africa's, and particularly Nigeria's, development in the 21st century.
    Keywords: philosophy, education, national development, nationhood, culture, essence

    21st Century Simulation: Exploiting High Performance Computing and Data Analysis

    This paper identifies, defines, and analyzes the limitations imposed on Modeling and Simulation by outmoded paradigms in computer utilization and data analysis. The authors then discuss two emerging capabilities for overcoming these limitations: High Performance Parallel Computing and Advanced Data Analysis. First, parallel computing, in supercomputers and Linux clusters, has proven effective by giving users a decisive advantage in computing power, characterized as a ten-year lead over the use of single-processor computers. Second, advanced data analysis techniques are both necessitated and enabled by this leap in computing power. JFCOM's JESPP project is one of the few simulation initiatives to effectively embrace these concepts. The challenges facing the defense analyst today have grown to include the need to consider operations among non-combatant populations, to focus on impacts on civilian infrastructure, to differentiate combatants from non-combatants, and to understand non-linear, asymmetric warfare. These requirements stretch both current computational techniques and data analysis methodologies. The paper advances documented examples and potential solutions, and the authors discuss paths to successful implementation based on their experience. Reviewed technologies include parallel computing, cluster computing, grid computing, data logging, operations research, database advances, data mining, evolutionary computing, genetic algorithms, and Monte Carlo sensitivity analyses. The modeling and simulation community has significant potential to provide more opportunities for training and analysis. Simulations must include increasingly sophisticated environments, better emulations of foes, and more realistic civilian populations. Overcoming the implementation challenges will produce dramatically better insights for trainees and analysts. High Performance Parallel Computing and Advanced Data Analysis promise increased understanding of future vulnerabilities, helping to avoid unneeded mission failures and unacceptable personnel losses. The authors set forth roadmaps for rapid prototyping and adoption of advanced capabilities, and discuss the beneficial impact of embracing these technologies as well as the risk mitigation required to ensure success.
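
    As a rough illustration of the Monte Carlo sensitivity analyses and parallel computing the authors review, the sketch below distributes replications of a toy scenario model across processor cores. The model, its parameters, and the sample counts are placeholders invented for illustration; they are not drawn from the JESPP project described above.

        # Hypothetical illustration: a parallel Monte Carlo sensitivity sweep.
        # The toy model and parameter ranges are assumptions, not JESPP inputs.
        import random
        from multiprocessing import Pool

        def run_simulation(params):
            """Toy stand-in for one simulation replication."""
            detection_rate, civilian_density = params
            rng = random.Random()
            # Crude proxy outcome: fraction of 1000 encounters correctly classified.
            hits = sum(
                1 for _ in range(1000)
                if rng.random() < detection_rate * (1.0 - 0.3 * civilian_density)
            )
            return hits / 1000.0

        def sensitivity_sweep(n_samples=500):
            """Sample the parameter space and run replications in parallel."""
            rng = random.Random(42)
            samples = [(rng.uniform(0.5, 0.95), rng.uniform(0.0, 1.0))
                       for _ in range(n_samples)]
            with Pool() as pool:               # one worker per available core
                outcomes = pool.map(run_simulation, samples)
            # Report the spread of outcomes as a rough sensitivity indicator.
            return min(outcomes), sum(outcomes) / len(outcomes), max(outcomes)

        if __name__ == "__main__":
            lo, mean, hi = sensitivity_sweep()
            print(f"outcome range: {lo:.3f} .. {hi:.3f} (mean {mean:.3f})")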

    Design and implementation of a multi-agent opportunistic grid computing platform

    Opportunistic Grid Computing involves joining idle computing resources in enterprises into a converged, high-performance commodity infrastructure. The research described in this dissertation investigates the viability of public resource computing in offering a plethora of possibilities through seamless access to shared compute and storage resources. It proposes and conceptualizes the Multi-Agent Opportunistic Grid (MAOG) solution, within an Information and Communication Technologies for Development (ICT4D) initiative, to address limitations prevalent in traditional distributed system implementations. Proof-of-concept software components based on JADE (Java Agent Development Framework) validated Multi-Agent Systems (MAS) as an important tool for provisioning Opportunistic Grid Computing platforms. Exploration of agent technologies within the research context identified two key components that improve access to extended computing capabilities. The first is a Mobile Agent (MA) compute component in which a group of agents interact to pool shared processor cycles; it integrates dynamic resource identification and allocation strategies by incorporating the Contract Net Protocol (CNP) and rule-based reasoning concepts. The second is a MAS-based storage component realized through disk mirroring and the Google File System's chunking with atomic-append storage techniques. The research thus provides a candidate Opportunistic Grid Computing platform design and implementation built on MAS. Experiments validated the design and implementation of the compute and storage services: the results confirmed the MA compute component's support for processing user applications, resource identification and allocation, and rule-based reasoning, while evaluations indicated that a MAS-based file system implementing chunking optimizations was the optimal choice. The findings also confirmed the functional adequacy of the implementation and showed the suitability of MAS for provisioning robust, autonomous, and intelligent platforms. Within its ICT4D context, the research offers a solution for optimizing and increasing the utilization of computing resources that would otherwise sit idle.
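
    The Contract Net Protocol mentioned above can be sketched in a few lines. The following outline shows the basic call-for-proposals, bid, and award cycle over hypothetical idle machines; it is a minimal sketch of CNP-style allocation, not the JADE-based MAOG implementation, whose agents and bidding rules are not specified here.

        # Minimal Contract Net Protocol (CNP) sketch, assuming agents bid with an
        # estimated cost derived from their idle CPU share. Names and the bidding
        # rule are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class WorkerAgent:
            name: str
            idle_cpu_fraction: float   # 0.0 (busy) .. 1.0 (fully idle)

            def propose(self, task_load: float):
                """Return a bid (estimated completion cost) or None to refuse."""
                if self.idle_cpu_fraction <= 0.1:
                    return None                       # refuse: machine is in use
                return task_load / self.idle_cpu_fraction

        def contract_net(task_load, workers):
            """Initiator sends a call-for-proposals, collects bids, awards the task."""
            bids = {w.name: w.propose(task_load) for w in workers}
            bids = {name: cost for name, cost in bids.items() if cost is not None}
            if not bids:
                return None                           # no idle resources available
            winner = min(bids, key=bids.get)          # accept the cheapest proposal
            return winner, bids[winner]

        workers = [WorkerAgent("lab-pc-01", 0.8),
                   WorkerAgent("lab-pc-02", 0.05),
                   WorkerAgent("lab-pc-03", 0.4)]
        print(contract_net(10.0, workers))            # -> ('lab-pc-01', 12.5)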

    Information Extraction for Event Ranking

    Search engines are evolving towards richer and stronger semantic approaches, focusing on entity-oriented tasks where knowledge bases have become fundamental. In order to support semantic search, search engines are increasingly reliant on robust information extraction systems; in fact, most modern search engines already depend heavily on a well-curated knowledge base. Nevertheless, they still lack the ability to effectively and automatically take advantage of multiple heterogeneous data sources. Central tasks include harnessing the information locked within textual content by linking mentioned entities to a knowledge base, and integrating multiple knowledge bases to answer natural language questions. Combining text and knowledge bases is frequently used to improve search results, but it can also be used for the query-independent ranking of entities such as events. In this work, we present a complete information extraction pipeline for the Portuguese language, covering all stages from data acquisition to knowledge base population. We also describe a practical application of the automatically extracted information: supporting the ranking of upcoming events displayed on the landing page of an institutional search engine, where space is limited to only three relevant events. We manually annotate a dataset of news covering event announcements from multiple faculties and organic units of the institution, and use it to train and evaluate the named entity recognition module of the pipeline. We rank events by using the identified entities and partOf relations to compute an entity popularity score, together with an entity click score based on implicit feedback from clicks in the institutional search engine. We then combine these two scores with the number of days to the event, obtaining a final ranking of the three most relevant upcoming events.
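
    The final ranking step described above can be illustrated with a small sketch. The linear weights and the decay applied to the number of days to the event are assumptions made for illustration, not the paper's exact combination formula, and the sample events are invented.

        # Hedged sketch of combining popularity, click score, and days-to-event
        # into a top-3 ranking; weights and the urgency decay are assumptions.
        from dataclasses import dataclass

        @dataclass
        class Event:
            title: str
            popularity: float    # entity popularity from extracted entities/partOf
            click_score: float   # implicit feedback from search-engine clicks
            days_to_event: int

        def rank_events(events, w_pop=0.4, w_click=0.4, w_time=0.2, top_k=3):
            """Combine the three signals and keep the top_k upcoming events."""
            def score(e):
                urgency = 1.0 / (1.0 + e.days_to_event)   # sooner events score higher
                return w_pop * e.popularity + w_click * e.click_score + w_time * urgency
            return sorted(events, key=score, reverse=True)[:top_k]

        events = [
            Event("Open day at the engineering faculty", 0.9, 0.6, 12),
            Event("PhD defense, informatics department", 0.3, 0.2, 2),
            Event("Guest lecture on NLP", 0.7, 0.8, 5),
            Event("Alumni meeting", 0.4, 0.1, 30),
        ]
        for e in rank_events(events):
            print(e.title)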

    Human-Aided Artificial Intelligence: Or, How to Run Large Computations in Human Brains? Towards a Media Sociology of Machine Learning

    Today, artificial intelligence, and machine learning in particular, is structurally dependent on human participation. Technologies such as Deep Learning (DL) leverage networked media infrastructures and human-machine interaction designs to harness users as providers of training and verification data. The emergence of DL is therefore based on a fundamental socio-technological transformation of the relationship between humans and machines. Rather than simulating human intelligence, DL-based AIs capture human cognitive abilities; they are thus hybrid human-machine apparatuses. From a perspective of media philosophy and social-theoretical critique, I differentiate five types of “media technologies of capture” in AI apparatuses and analyze them as forms of power relations between humans and machines. Finally, I argue that the current hype about AI implies a relational and distributed understanding of (human/artificial) intelligence, which I categorize under the term “cybernetic AI”. This form of AI manifests in socio-technological apparatuses that involve new modes of subjectivation, social control, and discrimination of users.