Proceedings of the 2004 ONR Decision-Support Workshop Series: Interoperability
In August 1998 the Collaborative Agent Design Research Center (CADRC) of the California Polytechnic State University in San Luis Obispo (Cal Poly) approached Dr. Phillip Abraham of the Office of Naval Research (ONR) with a proposal for an annual workshop focusing on emerging concepts in decision-support systems for military applications. The proposal was considered timely by the ONR Logistics Program Office for at least two reasons. First, rapid advances in information systems technology over the past decade had produced distributed collaborative computer-assistance capabilities with profound potential for providing meaningful support to military decision makers. Indeed, some systems based on these new capabilities, such as the Integrated Marine Multi-Agent Command and Control System (IMMACCS) and the Integrated Computerized Deployment System (ICODES), had already reached the field-testing and final-product stages, respectively.
Second, over the past two decades the US Navy and Marine Corps had been increasingly challenged by missions demanding the rapid deployment of forces into hostile or devastated territories with minimal or non-existent indigenous support capabilities. Under these conditions Marine Corps forces had to rely mostly, if not entirely, on sea-based support and sustainment operations. Particularly today, operational strategies such as Operational Maneuver From The Sea (OMFTS) and Ship-To-Objective Maneuver (STOM) are very much in need of intelligent, near real-time and adaptive decision-support tools to assist military commanders and their staffs under conditions of rapid change and overwhelming data loads.
In light of these developments the Logistics Program Office of ONR considered it timely to provide an annual forum for the interchange of ideas, needs and concepts that would address the decision-support requirements and opportunities in combined Navy and Marine Corps sea-based warfare and humanitarian relief operations. The first ONR Workshop was held April 20-22, 1999 at the Embassy Suites Hotel in San Luis Obispo, California. It focused on advances in technology, with particular emphasis on an emerging family of powerful computer-based tools, and concluded that the most capable members of this family appear to be computer-based agents able to communicate within a virtual environment representing the real world. From 2001 onward the venue of the Workshop moved from the West Coast to Washington, and in 2003 sponsorship was taken over by ONR’s Littoral Combat/Power Projection (FNC) Program Office (Program Manager: Mr. Barry Blumenthal). Themes and keynote speakers of past Workshops have included:
1999: ‘Collaborative Decision Making Tools’ Vadm Jerry Tuttle (USN Ret.); LtGen Paul Van Riper (USMC Ret.); Radm Leland Kollmorgen (USN Ret.); and, Dr. Gary Klein (Klein Associates)
2000: ‘The Human-Computer Partnership in Decision-Support’ Dr. Ronald DeMarco (Associate Technical Director, ONR); Radm Charles Munns; Col Robert Schmidle; and, Col Ray Cole (USMC Ret.)
2001: ‘Continuing the Revolution in Military Affairs’ Mr. Andrew Marshall (Director, Office of Net Assessment, OSD); and, Radm Jay M. Cohen (Chief of Naval Research, ONR)
2002: ‘Transformation ... ’ Vadm Jerry Tuttle (USN Ret.); and, Steve Cooper (CIO, Office of Homeland Security)
2003: ‘Developing the New Infostructure’ Richard P. Lee (Assistant Deputy Under Secretary, OSD); and, Michael O’Neil (Boeing)
2004: ‘Interoperability’ MajGen Bradley M. Lott (USMC), Deputy Commanding General, Marine Corps Combat Development Command; Donald Diggs, Director, C2 Policy, OASD (NII)
Continuous trust management frameworks: concept, design and characteristics
PhD Thesis

A Trust Management Framework is a collection of technical components and governing
rules and contracts that establish secure, confidential, and Trustworthy transactions
among the Trust Stakeholders, whether they are Users, Service Providers, or Legal
Authorities. Despite the presence of many Trust Framework projects, they still fail
to present a mature Framework that can be Trusted by all its Stakeholders. In
particular, most current research focuses on the Security aspects that may
satisfy some Stakeholders but ignores other vital Trust Properties like Privacy, Legal
Authority Enforcement, Practicality, and Customizability. This thesis is about
understanding and utilising the state-of-the-art technologies of Trust Management to
develop a Trust Management Framework that could be Trusted by all its Stakeholders
by providing Continuous Data Control, whereby the exchanged data are
handled in a Trustworthy manner both before and after their release from one party to
another. For that reason, we call it the Continuous Trust Management Framework.
In this thesis, we present a literature survey illustrating the general picture
of the main categories of current research as well as the main Trust Stakeholders, Trust
Challenges, and Trust Requirements. We picked a few samples representing each of
the main categories in the literature of Trust Management Frameworks for a detailed
comparison to understand the strengths and weaknesses of those categories. Having shown
that current Trust Management Frameworks focus on fulfilling most of the
Trust Attributes needed by the Trust Stakeholders except for the Continuous Data
Control Attribute, we argue for the vitality of our proposed generic design of the
Continuous Trust Management Framework.
To demonstrate our Design’s practicality, we present a prototype implementing its
basic Stakeholders, such as the Users, Service Providers, Identity Provider, and Auditor,
on top of the OpenID Connect protocol. The sample use-case of our prototype is to
protect the Users’ email addresses. That is, Users would ask for their emails not to be
shared with third parties, but some Providers would act maliciously and share these
emails with third parties who would, in turn, send spam emails to the victim Users.
While the prototype Auditor would be able to protect and track data before their
release to the Service Providers, it would not be able to enforce the data access policy
after release. We later generalise our sample use-case to cover various Mass Active
Attacks on Users’ Credentials, such as using stolen credit cards or illegally
impersonating a third party’s identity.
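The pre-release tracking and post-leak attribution just described can be sketched as a simple release log. This is only an illustrative sketch: the class and method names below are invented for this example, not taken from the thesis's prototype.

```python
from collections import defaultdict

class ReleaseLog:
    """Hypothetical sketch of the Auditor's pre-release tracking: record
    which Service Providers received each User's email address, so a later
    spam report can at least be attributed to a set of suspects."""

    def __init__(self):
        # email address -> set of Provider identifiers it was released to
        self.releases = defaultdict(set)

    def record_release(self, email, provider):
        """Called before the Auditor releases `email` to `provider`."""
        self.releases[email].add(provider)

    def suspects_for(self, email):
        """After a spam report for `email`, every Provider that ever
        received it is a candidate leaker; since the access policy cannot
        be enforced after release, attribution is the best the log offers."""
        return sorted(self.releases[email])

log = ReleaseLog()
log.record_release("alice@example.com", "provider-A")
log.record_release("alice@example.com", "provider-B")
log.record_release("bob@example.com", "provider-A")
print(log.suspects_for("alice@example.com"))  # ['provider-A', 'provider-B']
```

A log of this shape is also the kind of input the thesis's later log-analysis theories would consume: it captures everything known before release, and nothing after.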
To protect the Users’ Credentials after release, we introduce a set of theories and
building blocks to aid our Continuous Trust Framework’s Auditor, which would act as
the Trust Enforcement point. These theories rely primarily on analysing the data
logs recorded by our prototype prior to releasing the data. To test our theories, we
present a Simulation Model of the Auditor to optimise its parameters. During some
of our Simulation Stages, we assumed the availability of a Data Governance Unit
(DGU) that would provide hardware roots of Trust. This DGU is to be installed on the
Service Providers’ server side to govern how they handle the Users’ data. The final
simulation results include a set of different Defensive Strategies’ Flavours that could
be utilised by the Auditor depending on the environment in which it operates.
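As a loose illustration of the kind of trade-off such a Simulation Model can explore, the toy simulation below compares a "do nothing" Strategy against one invented defensive flavour that stops releasing data to a Provider once it has been implicated in a threshold number of leaks. All parameters and the banning rule itself are assumptions made up for this sketch, not the thesis's actual Flavours.

```python
import random

def simulate(n_rounds=1000, n_providers=20, p_malicious=0.2,
             threshold=3, ban=True, seed=0):
    """Count leaks over `n_rounds` data releases to randomly chosen
    Providers; a malicious Provider leaks every release it receives."""
    rng = random.Random(seed)
    malicious = {i: rng.random() < p_malicious for i in range(n_providers)}
    implicated = {i: 0 for i in range(n_providers)}
    banned, leaks = set(), 0
    for _ in range(n_rounds):
        pool = [i for i in range(n_providers) if i not in banned]
        if not pool:
            break
        p = rng.choice(pool)
        if malicious[p]:                # this Provider leaks the data
            leaks += 1
            implicated[p] += 1
            if ban and implicated[p] >= threshold:
                banned.add(p)           # defensive flavour: stop releasing
    return leaks

print("do nothing:", simulate(ban=False))
print("ban after 3 leaks:", simulate(ban=True))
```

The banning flavour bounds total damage at `threshold` leaks per malicious Provider, whereas under "doing nothing" leaks grow with the number of rounds, which is the qualitative gap the thesis's final comparison points at.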
This thesis concludes that utilising Hard Trust Measures such as the DGU
without effective Defensive Strategies may not provide the ultimate Trust solution.
That is especially true at the bootstrapping phase, where Service Providers would be
reluctant to adopt a restrictive technology like our proposed DGU. Nevertheless, even
in the absence of the DGU technology now, deploying the developed Defensive Strategies’
Flavours that do not rely on the DGU would still provide significant improvements
in terms of enforcing Trust even after data release, compared to the currently widely
deployed Strategy: doing nothing!

Public Authority for Applied Education and Training in Kuwait, PAAET
Segregation and Scheduling for P2P Applications with the Interceptor Middleware System
Very large Peer-to-Peer systems are often required to implement efficient and scalable services, but usually they can be built only by assembling resources contributed by many independent users. Among the guarantees that must be provided to convince these users to join the P2P system, particularly important is the assurance that P2P applications and services running on their nodes will not unacceptably degrade the performance of their own applications through excessive resource consumption. In this paper we present Interceptor, a middleware-level application segregation and scheduling system that is able to strictly enforce quantitative limitations on node resource usage and, at the same time, to make P2P applications achieve satisfactory performance even in the face of these limitations. A proof-of-concept implementation has been carried out for the Linux operating system, and has been used to perform extensive experimentation aimed at quantitatively evaluating Interceptor. The results clearly demonstrate that Interceptor is able both to strictly enforce quantitative limitations on node resource usage and to effectively schedule P2P applications.
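The abstract does not describe Interceptor's internal mechanism; as a generic illustration of what middleware-level enforcement of per-process resource quotas looks like on Linux, a guest (P2P) workload can be launched under POSIX rlimits. The function below is a hypothetical sketch of that general technique, not Interceptor's actual API.

```python
import resource
import subprocess

def run_limited(cmd, cpu_seconds=1, mem_bytes=256 * 1024 * 1024):
    """Run `cmd` under hard caps on CPU time and address space, so the
    guest workload cannot exceed its quota on the host node."""
    def apply_limits():
        # Applied in the child just before exec(): exceeding the CPU cap
        # delivers SIGXCPU; exceeding the memory cap makes allocations fail.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(cmd, preexec_fn=apply_limits).returncode

# A well-behaved task finishes normally...
print(run_limited(["python3", "-c", "print('ok')"]))       # 0
# ...while a CPU-bound loop is terminated once its quota is exhausted.
print(run_limited(["python3", "-c", "while True: pass"]))  # non-zero
```

A static rlimit only caps usage; Interceptor's contribution as stated in the abstract is the harder part of also scheduling P2P applications to perform well within such limits.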