Linguistic Diversity on the Internet: Arabic, Chinese and Cyrillic Script Top-Level Domain Names
The deployment of Arabic, Chinese, and Cyrillic top-level domain names is explored in this research by analyzing technical and policy documents of the Internet Corporation for Assigned Names and Numbers (ICANN), as well as newspaper articles in the respective language regions. The tension between English uniformity at the root level of the Internet's domain name system and language diversity in the global Internet community has resulted in various technological solutions surrounding Arabic, Chinese, and Cyrillic language domain names. These standards and technological solutions ensure the security and stability of the Internet; however, they do not comprehensively address the linguistic diversity needs of the Internet. ICANN has been transforming into an international policy organization, yet its linguistic diversity policies appear disconnected from the diversity policies of the United Nations and remain technically oriented. Linguistic diversity in relation to IDNs at this stage focuses mostly on the representation of major languages spoken in powerful nation-states, which use the rhetoric of national pride, local business branding, and inclusion of non-English speakers. This situation surfaces the tension between nation-states and ICANN, the new international governing institution.
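The "technological solutions" behind these non-Latin TLDs rest on IDNA and Punycode (RFC 3490/3492), which map Unicode labels onto the ASCII-only DNS root. A minimal Python sketch, using Russia's actual Cyrillic ccTLD .рф with an illustrative second-level label:

```python
# IDNs travel through DNS as ASCII "Punycode" labels prefixed with
# 'xn--'; this is how Arabic, Chinese, and Cyrillic TLDs coexist with
# an ASCII-only root zone. Python's built-in 'idna' codec (IDNA 2003)
# performs the conversion.
cyrillic_name = "пример.рф"   # illustrative label under the real .рф ccTLD

ascii_form = cyrillic_name.encode("idna").decode("ascii")
print(ascii_form.split(".")[-1])   # the TLD 'рф' encodes to: xn--p1ai

# The mapping is reversible, so user agents can display the Unicode form.
assert ascii_form.encode("ascii").decode("idna") == cyrillic_name
```

The round trip is what lets resolvers stay ASCII-only while browsers show users the native-script name.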
The 2011 IDN Homograph Attack Mitigation Survey
The advent of internationalized domain names (IDNs) has introduced a new threat, with non-English character sets allowing visual mimicry of domain names. While the potential for this form of attack has been well recognized, many applications such as Internet browsers and e-mail clients have been slow to adopt effective mitigation strategies and countermeasures. This research examines those strategies and countermeasures, identifying areas of weakness that allow homograph attacks. As well as examining the presentation of IDNs in e-mail clients and Internet browser URL bars, this year's study examines the presentation of IDNs in browser-based security certificates and in requests for locational data access.
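A common homograph technique is mixing confusable characters from multiple scripts within a single label (e.g. a Cyrillic 'а' among Latin letters). The Python sketch below flags mixed-script labels; it is a rough illustration of the idea, not any browser's actual policy (real mitigations follow Unicode UTS #39 confusable detection):

```python
import unicodedata

def scripts_used(label: str) -> set:
    """Return a rough set of scripts present in a domain label.

    Crude proxy: the first word of each character's Unicode name
    (e.g. 'LATIN', 'CYRILLIC', 'GREEK'). Digits, hyphens, and dots
    are script-neutral and skipped.
    """
    groups = set()
    for ch in label:
        if ch in "-0123456789.":
            continue
        name = unicodedata.name(ch, "")
        if name:
            groups.add(name.split()[0])
    return groups

def looks_like_homograph(label: str) -> bool:
    """Flag labels that mix scripts, a common homograph-attack pattern."""
    return len(scripts_used(label)) > 1

# 'аpple' below spells "apple" with a Cyrillic 'а' (U+0430) as its first letter
print(looks_like_homograph("аpple"))  # True  (Cyrillic + Latin mixed)
print(looks_like_homograph("apple"))  # False (pure Latin)
```

Whole-script confusables (a label entirely in one lookalike script) evade this check, which is why production mitigations also compare against registered names and known confusable tables.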
Beyond the Digital Divide: Language Factors, Resource Wealth, and Post-Communism in Mongolia
This chapter explores the interplay between society and Internet technology in the context of Mongolia, a developing former socialist country. Going beyond questions of access to the Internet, the chapter explores three factors of the global digital divide. First, it explores how language factors such as non-Roman domain names and the use of the Cyrillic alphabet exacerbate the digital divide in the impoverished country. ICANN's introduction of internationalized domain names is an initial step toward achieving linguistic diversity on the Internet. Second, the chapter explores how the post-communist setting and dependency on foreign investment and aid afflict Internet development. Rapid economic growth in Mongolia has increased access to mobile phones, computers, and the Internet; however, the influx of foreign capital poured into the mining, construction, and telecommunication sectors frequently comes on non-concessional terms, raising concerns over Mongolia's public debt.
Syllable Segmentation, Normalization, and Lexicographic Ordering of Myanmar-Language Text Using Formal Methods
Nagaoka University of Technology
Towards A knowledge-Based Economy - Europe and Central Asia - Internet Development and Governance
The diversity and socio-economic differentiation of the real world prevents the full-scale cultivation of Information and Communication Technologies (ICT) to the benefit of all. Furthermore, the lack of determination and political will in some countries, and the slowness of responses to new technological opportunities in others, are responsible for the creation of another social divide – a digital one. The above problems were fully acknowledged by the World Summit on the Information Society (WSIS). The Summit called for a joint international effort to overcome the digital divide between and within the United Nations Member States under the Digital Solidarity umbrella. This report was prepared as a follow-up to the Summit; it presents a brief review of the status and trends of ICT and Internet development in the UNECE region and provides background information on the state of the art in some relevant ICT subsectors in the Member States. The report focuses on the state of critical Internet resources and, consequently, on ICT and Internet penetration across countries and social groups. It also looks into existing Internet governance arrangements and makes some recommendations. The report contains three parts and conclusions. The first part, "Towards a Knowledge-based Economy: Progress Assessment", highlights the situation in the region with regard to the digital divide, both between and within countries, and national strategies and actions aimed at overcoming barriers to accessing the Internet. The second part, "Internet Development: Current State of Critical Internet Resources in the UNECE Region", concentrates on reviewing the physical Internet backbone, interconnection and connectivity within the Internet in the UNECE Member States.
The third part, "Governing the Evolving Internet in the UNECE Region", focuses on the issues of Internet governance in the countries of the region, the challenges these countries face, and the participation of key stakeholders in ICT and Internet policy formulation and implementation. The final part contains conclusions and recommendations.

Keywords: Internet, governance, knowledge-based economy, Europe, Central Asia, transition economies
Understanding Flaws in the Deployment and Implementation of Web Encryption
In recent years, the web has switched from using the unencrypted HTTP protocol to using encrypted communications. Primarily, this resulted in the increasing deployment of TLS to mitigate information leakage over the network. This development has led many web service operators to mistakenly think that migrating from HTTP to HTTPS will magically protect them from information leakage, without any additional effort on their end to guarantee the desired security properties. In reality, despite the fact that enough infrastructure is in place and the protocols have been "tested" (by virtue of being in wide, but not ubiquitous, use for many years), deploying HTTPS is a highly challenging task due to the technical complexity of its underlying protocols (i.e., HTTP, TLS) as well as the complexity of the TLS certificate ecosystem and that of popular client applications such as web browsers. For example, we found that many websites still avoid ubiquitous encryption and force only critical functionality and sensitive data access over encrypted connections, while allowing more innocuous functionality to be accessed over HTTP. In practice, this approach is prone to flaws that can expose sensitive information or functionality to third parties. Thus, it is crucial for developers to verify the correctness of their deployments and implementations.
In this dissertation, in an effort to improve users' privacy, we highlight semantic flaws in the implementations of both web servers and clients, caused by the improper deployment of web encryption protocols. First, we conduct an in-depth assessment of major websites and explore what functionality and information is exposed to attackers that have hijacked a user's HTTP cookies. We identify a recurring pattern across websites with partially deployed HTTPS, namely, that service personalization inadvertently results in the exposure of private information. The separation of functionality across multiple cookies with different scopes and inter-dependencies further complicates matters, as imprecise access control renders restricted account functionality accessible to non-secure cookies. Our cookie hijacking study reveals a number of severe flaws; for example, attackers can obtain the user's saved address and visited websites from services such as Google and Bing, while Yahoo allows attackers to extract the contact list and send emails from the user's account. To estimate the extent of the threat, we run measurements on a university public wireless network for a period of 30 days and detect over 282K accounts exposing the cookies required for our hijacking attacks.
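The cookie exposure described here arises when a cookie is not marked `Secure`: the browser then attaches it to plain-HTTP requests that a passive network attacker can read. A minimal audit sketch in Python using the standard library (the cookie names and values are invented for illustration):

```python
from http.cookies import SimpleCookie

def insecure_cookies(set_cookie_headers):
    """Return the names of cookies that lack the Secure attribute.

    Such cookies are sent over plain HTTP as well as HTTPS, so a
    network attacker observing unencrypted traffic can hijack them.
    """
    exposed = []
    for header in set_cookie_headers:
        cookie = SimpleCookie()
        cookie.load(header)
        for name, morsel in cookie.items():
            if not morsel["secure"]:   # empty string when the flag is absent
                exposed.append(name)
    return exposed

# Hypothetical Set-Cookie headers from a partially-HTTPS site:
headers = [
    "SID=abc123; Path=/; HttpOnly",           # hijackable over HTTP
    "AUTH=def456; Path=/; Secure; HttpOnly",  # only ever sent over HTTPS
]
print(insecure_cookies(headers))  # ['SID']
```

The scope/inter-dependency problem the abstract describes is exactly the `SID`-style cookie: it is non-secure, yet the server still honors it for personalized (and sometimes restricted) functionality.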
Next, we explore and study security mechanisms proposed to eliminate this problem by enforcing encryption, such as HSTS and HTTPS Everywhere. We evaluate each mechanism in terms of its adoption and effectiveness. We find that all mechanisms suffer from implementation flaws or deployment issues, and argue that, as long as servers do not support ubiquitous encryption across their entire domain, no mechanism can effectively protect users from cookie hijacking and information leakage.
Finally, as the security guarantees of TLS (and, in turn, HTTPS) are critically dependent on the correct validation of X.509 server certificates, we study hostname verification, a critical component of the certificate validation process. We develop HVLearn, a novel testing framework to verify the correctness of hostname verification implementations, and use HVLearn to analyze a number of popular TLS libraries and applications. Using HVLearn, we found 8 unique violations of the RFC specifications. Several of these violations are critical and can render the affected implementations vulnerable to man-in-the-middle attacks.
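The behavior HVLearn probes is RFC 6125-style matching of a certificate name against a hostname, where subtle wildcard handling is the usual source of bugs. The sketch below is a deliberate simplification for illustration; it is not HVLearn itself, nor any library's actual implementation:

```python
def hostname_matches(pattern: str, hostname: str) -> bool:
    """Match a certificate name against a hostname, RFC 6125 style.

    Simplifications: '*' must cover exactly one whole label (so
    '*.example.com' matches 'www.example.com' but neither
    'a.b.example.com' nor the bare 'example.com'), and partial
    wildcards like 'f*o' are rejected outright.
    """
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    for p, h in zip(p_labels, h_labels):
        if p == "*":
            continue          # wildcard covers exactly one whole label
        if "*" in p:
            return False      # reject partial wildcards for simplicity
        if p != h:
            return False
    return True

print(hostname_matches("*.example.com", "www.example.com"))  # True
print(hostname_matches("*.example.com", "a.b.example.com"))  # False
print(hostname_matches("*.example.com", "example.com"))      # False
```

Implementations that let '*' span multiple labels, or accept wildcards in the TLD position, exhibit precisely the kind of RFC violation that enables man-in-the-middle attacks.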