Google’s New Strong Authentication Service: Adding Bricks to the Federated Identity Foundation

Google's recent announcement of one-time passwords delivered via SMS to mobile phones, used to strengthen authentication to Google Applications, is an important online security step, but not solely for the reasons you may be thinking. Yes, it is nice to have a highly visible, no-extra-cost example of multi-factor authentication for mass-scale consumer use. Anything that improves upon userid/password-only authentication schemes is very welcome. It is also a great, simple example of a mass consumer phenomenon (the mobile phone and texting) being used to help solve a security issue (online identity theft) that is a bane to both consumers and enterprises. However, I hope it will prompt more Web site owners to start asking themselves, "Why am I authenticating my online users when Google (or someone else) will do it for me cheaper and better as a cloud service?"



Taking it one step further: if Google (or someone else) is strongly authenticating users and supporting identity federation (which Google does), maybe Web site owners should trust and use the authentication services of these specialized providers instead of doing it themselves? This is exactly what happened with traditional strong authentication in the past. As organizations centralized access control to their on-premise applications with Web access management systems, they simultaneously felt the need to strengthen authentication to those applications. Strong authentication and centralized access control are closely related concepts: once access is centralized behind a single authentication event, a logical mitigating control is stronger authentication, which better protects the eggs you have placed in that one basket. This concept helped birth the one-time password token and the other authentication technologies of the 1990s. The same thing is now starting to happen on the Web, but of course at Internet scale.

A key economic and security flaw of the Internet today is that every Web site that processes sensitive data and transactions has to be in the user authentication business. That means each site must conduct some level of identity proofing, credential issuance, and credential management, just for access to that single site. This represents a direct cost to the Web site operator as well as to the user, often in the form of a poor Web experience. A far superior system is one where a person holds a relatively small number of authenticators, with Google perhaps issuing one of them, and the authenticating site vouches for the user at other sites. Of course, all I am describing is the mass-scale use of federated identity. Those of us in the industry have been preparing the foundations of this new marketplace for many years. Google has just laid down another brick with its deployment of stronger authentication for its massive user base. Important industry initiatives, such as the Kantara IAF, are well underway to help catalyze this fledgling federated authentication marketplace by building on the existing federation foundation.

More Here


Courtesy:http://community.ca.com/blogs/iam/default.aspx?PageIndex=5

Protect data, not just access to it, with CA Technologies Content-Aware IAM

If you are a regular reader of this blog, you may be aware of our ongoing vision and strategy relating to Content-Aware IAM.  The core tenet of this vision is to provide not only control over user identities and their access, but also over their information use.  And, further, we will be integrating our IAM components such that knowledge of information content will be used by the other components (e.g., CA SiteMinder) to make better and more granular access management decisions. The goal is to more effectively enforce information use policies, improve security, and simplify compliance across the entire IAM suite.
This is our strategy and roadmap.  We have heard very positive responses from both analysts and customers, and we are excited about the potential this provides for our existing and future customers as they embark on their next-generation IAM initiatives.
Today we announced several products that support Content-Aware IAM:
  • CA Identity Manager - can now directly provision, de-provision, and modify users within the CA DLP user hierarchy. As users' roles change, those changes are passed into DLP, which then automatically changes each user's data usage entitlements. For example, a user in the Finance organization accesses and sends sensitive financial information via email on a regular basis. When the user changes roles from Finance to Marketing, their entitlements will also be changed so that they won't be able to access financial information anymore. In addition, CA Identity Manager makes this change within DLP, modifying the user's data usage privileges. Now, if this user attempts to email financial information already in his/her possession, the email will be blocked.
  • CA DLP - in addition to the integration with CA Identity Manager described above, this release includes:
    • Content registration detection technique - Scans files and creates a digital "fingerprint" to identify sensitive content as it travels within or exits an organization.
    • Policy driven data encryption for data in use - Initiates the encryption of emails, including attachments and files sent to removable devices, via integration with native and third-party encryption technologies.
    • Role-based event review - Delivers policy and role-based delegation that helps control visibility to events and enable segregation of duties in environments where CA DLP is deployed for multiple disciplines. For example, IT Security, Legal, Compliance, or HR could all deploy their own data policies and review infractions in isolation, protecting confidentiality and privacy.
  • CA Top Secret r15 and CA ACF2 r15 - Support Content-Aware IAM in the mainframe environment with new data classification capabilities that help satisfy regulatory needs to control data use. The new releases of CA ACF2 and CA Top Secret for z/OS can be used to help classify data and ownership according to legal and government regulations. This allows the assignment of specific data classifications to critical resources for purposes of access policy refinement and reporting. Other security and administrative enhancements in these mainframe products include: reporting, certificate management, role-based security, operating system support, and protection of assets.

More Here


Courtesy:http://community.ca.com/blogs/iam/default.aspx?PageIndex=2

Access certification & attestation: Best practices for avoiding the rubber stamp syndrome

Access certification is an ongoing process where managers and designated approvers review who has access to what to confirm that each user/role has access only to the resources necessary to perform their job function. In doing so, organizations prevent users from accumulating unnecessary privileges and decrease their risk profile.

Accordingly, the risk mitigation benefits of access certification are only as good as the care approvers take in examining access rights. However, access certification efforts often suffer from the rubber stamp syndrome - this is when a manager/approver bulk-approves all access rights presented in a review by "selecting all" and clicking "approve." One common cause of rubber stamping is approvers being constantly swamped with too many access certification requests. This can be avoided by following these recommendations:



* Once a year, have a full certification where each manager certifies all the entitlements of all their direct report team members
* On a quarterly basis, have delta certifications where managers only certify the changes in entitlements for their team in the last quarter
* To help eliminate toxic combinations (i.e., to ensure segregation of duties), which might arise when an employee is transferred to a new position, use an event-based certification in which all of that employee's entitlements are examined

This might sound like more work, but because the delta certifications are much smaller and quicker to complete, this approach helps ensure that the approver actually gives each review attention and finishes it properly. The drawback of a quarterly certification (and hence the need to complement it with a full yearly certification) is that the approver cannot see the bigger picture and the business implications without the full set of entitlements for each team member. At the same time, an employee who is promoted or transferred to a new position might create toxic combinations and pose business risk to the organisation: the existing entitlements, combined with the new ones granted for the role, might allow him/her to carry out a sequence of tasks that violates segregation of duties policies (e.g., raise a purchase order and approve it). This warrants the approver immediately reviewing that employee's full set of entitlements. From a workload perspective, managers do not transfer their direct reports to new positions on a regular basis, so this should not be a frequent enough event to burden the certifying manager.

There are a few other reasons for the rubber stamp syndrome, including:

* Approvers don't understand the business context of what they're certifying. This is particularly the case when the certification tool doesn't offer plain-language descriptions clearly explaining the business relevance of the roles, users, access entitlements or resources involved in the process (think SAP and mainframe transaction codes, though Active Directory group names are often equally guilty). To create quality descriptions, you'll need to enlist the help of the application and system owners, as they are the ones with an intimate understanding of their resources (i.e., applications and systems) and what the relevant entitlements actually do. To provide business context and descriptions for the users and roles, you'll need to refer to human resources data sources as well as involve line-of-business managers and users. More importantly, you'll need strong sponsorship from management to ensure the collaboration of all necessary stakeholders.

More Here


Courtesy:http://community.ca.com/blogs/iam/archive/2010/07/28/access-certification-amp-attestation-best-practices-for-avoiding-the-rubber-stamp-syndrome.aspx

‘Directory Services, Federation, and the Cloud’ Document

The document referenced in my prior posts about the Arcot and VMware acquisitions (http://blogs.gartner.com/mark-diodati/2010/08/30/ca-technologies-to-purchase-arcot/ and http://blogs.gartner.com/mark-diodati/2010/08/31/vmware%e2%80%99s-purchase-of-tricipher/) is now published (subscription required). Here is the document description:


In this assessment, Research Director Mark Diodati evaluates the ability of off-the-shelf directory services and federation technologies to solve the increasingly prevalent provisioning and authentication challenges of cloud-based applications. Product classes evaluated include virtual directories, synchronization servers, federation products, and cloud identity management products. Use cases include provisioning and authentication to SaaS applications, the use of a cloud-based identity store, hosted applications that require Windows Kerberos authentication, and the mashup of on-premises and cloud identity data. Diodati also analyzes future requirements, including increased federation token types and deeper directory services integration with Extensible Access Control Markup Language (XACML).
Directory Services, Federation, and the Cloud

More Here


Courtesy:http://blogs.gartner.com/mark-diodati/

A new world for integration: SAML and Identity

SAML was initially a standard for cross-domain SSO. A user who is logged on to the domain *.i8c.be could transparently point his browser to a web application in another domain, *.cronos.be, without having to authenticate again. His identity (and other attributes) are passed on transparently, behind the scenes. Many mechanisms were defined to exchange the information contained in SAML tokens (signed XML structures) between an Identity Provider and a Relying Party, including SOAP very early on (the SAML SOAP Binding).
But SAML was taken further. WS-Security SAML Token Profile allows the use of SAML tokens in SOAP messages secured with WS-Security. And WS-Trust and its Secure Token Service standardized the mechanism to obtain or exchange SAML (or other) tokens.


The STS is a standard (web) service to obtain such a SAML token: 1) through standard authentication mechanisms or 2) by exchanging one token for another (SAML to SAML, non-SAML to SAML or SAML to non-SAML).
But transferring SAML tokens between domains means exchanging information between heterogeneous organizations. The SAML standard does not define how attributes within the SAML tokens should be named, nor exactly what their content should look like. Every organization is free to specify how information is structured in a SAML token:
  • what information or attributes are contained in the SAML token: name, cost center, department, …
  • how the attributes are named, e.g., LastName or lname
  • how the information in the attributes is represented
Imagine a vendor of office materials (Staples, say) that wants to offer an SSO experience to the employees of its major customers. If every customer (large enterprises themselves) uses a different SAML token structure, the office materials vendor will have a great time translating the information from all these different SAML tokens to its own attributes. And what if information is missing from the SAML token, e.g., the maximum value that an employee may purchase?
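To make the naming problem concrete, here is a sketch of what two customers' tokens might carry for the same person. The attribute names and values are invented for illustration, and the saml: prefix is assumed to be bound to the SAML 2.0 assertion namespace:

<!-- Customer A's IdP -->
<saml:Attribute Name="LastName">
  <saml:AttributeValue>Janssens</saml:AttributeValue>
</saml:Attribute>

<!-- Customer B's IdP: same data, different attribute name and representation -->
<saml:Attribute Name="lname">
  <saml:AttributeValue>JANSSENS, Jan</saml:AttributeValue>
</saml:Attribute>

The relying party has to map both onto a single internal attribute, with no standard naming convention to lean on.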

More Here


Courtesy:http://www.mashedarticles.com/development/a-new-world-for-integration-saml-and-identity/

A Federation Swiss Army Knife -- OpenID, FB, & OAuth to SAML & WS-*

Imagine you have a scenario where you want to authenticate end users to your Web site with a consumer-grade authentication protocol like OpenID, OAuth, Facebook Connect, etc. from many different Identity Providers (IdPs). Also, suppose you need to SSO from your site to another or you want to invoke a partner's Web service that requires calls to be secured w/ a WS-Security token. For instance, imagine you have an e-commerce Web site. To increase engagement, conversion rates, yadda yadda, you want to allow customers to login w/ their Twitter, Yahoo!, Facebook, or Google accounts. When a user posts a question in your forum, which you've outsourced to a third-party, you need to SSO to that partner's site which exposes a SAML endpoint. Imagine you also want to publish the user's activity in the forum to one or more of their social networks. Tall order! How might you accomplish this? I've been wondering about this, and here's an idea.

My solution leverages PingFederate and Gigya Connect (both of which are sold by companies I'm affiliated w/). Gigya Connect is used as an abstraction that hides all the IdPs and provides a unified representation of a customer. PingFederate marshals this abstract notion of the user into a SAML, WS-Federation, or WS-Trust message for SSO purposes. From a high level, we have this:

[Diagram: Gigya_PingFederation.gif]

In the sketch, I am trying to show how each of the IdPs sends its own type of token (TT, TY, TFB, TGOOG) to Gigya, which normalizes it into a Gigya token (TG). When the Web app receives this, it can send user-related info to PingFederate, which in turn sends a SAML message (TSAML) to the partner (i.e., the Service Provider or SP). At the SP, it can use the Gigya API and the user's ID passed in TSAML to publish activity back to Twitter, Google Buzz, FB, etc. Sounds easy when I write it all out like this, but the devil's certainly in the details on this one.

To make this work, I started w/ the HTML doc I wrote after chatting w/ David Brisebois on Twitter the other day. Literally, all I had to change was the location the form was submitted to (nice!):

    Please Sign In using one of the following providers:

    [Sign-in form markup abridged -- the form is submitted to the PingFederate endpoint
    https://idp:9031/idp/startSSO.ping?PartnerSpId=localhost:gigya:entityId:sp]
If you look at the whole doc, you'll see in the JavaScript that I dynamically added each of the Gigya user object's properties to the form that is being submitted to the URL above. That URL is a special PingFederate endpoint that instructs it to start an SSO transaction.

Here's the magic:
PingFederate can't make sense of the Gigya user data, so I used its SDK to create an adapter that converts it into something it can understand and will put into a SAML assertion. The code basically loops over the form parameters dynamically added to the request in the JavaScript and hands them back to PingFederate in code like this:

public class AuthenticationAdapter implements IdpAuthenticationAdapter
{
    ...

More Here


Courtesy:http://travisspencer.com/blog/2010/08/a-federation-swiss-army-knife.html

IdP-initiated SSO using WIF

After quite a bit of struggle that stemmed from my improper serialization of the SAML token and its digital signature (every byte matters!), I was able to concoct a SAML message using WIF that I was then able to submit to PingFederate 6.3. Once my whitespace was where it needed to be, PingFederate happily accepted my IdP-initiated SSO message :)

This code isn't rocket science, but it might save you a bit of time. (Though it's not ground-breaking, keep in mind that I'm the copyright holder. You're free to use it under the terms of the GPL, which all code I post on my blog is governed by unless stated otherwise). If you have questions, shoot them my way.


Web Form

<%@ Page Language="C#" AutoEventWireup="true"
    CodeFile="Default.aspx.cs" Inherits="_Default" %>

<html>
<head><title>IdP-initiated SSO using WIF</title></head>
<body>
<form id="form1" runat="server" action="https://localhost:9031/sp/ACS.saml2">
<input type="text" style="width: 400px" name="RelayState"
value="http://localhost/SpSample/?foo=bar" />
<input type="hidden" name="SAMLResponse" id="SAMLResponse" runat="server" />
<input type="submit"/>
</form>
</body>
</html>

Web Form's Code Behind
using System;
using System.Collections.Generic;
using System.IdentityModel.Tokens;
using System.IO;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Xml;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSTrust;
using Microsoft.IdentityModel.SecurityTokenService;
using Microsoft.IdentityModel.Tokens;
using Microsoft.IdentityModel.Tokens.Saml2;

using SecurityTokenTypes = Microsoft.IdentityModel.Tokens.SecurityTokenTypes;

public partial class _Default : System.Web.UI.Page
{
    #region Configuration Information

    private const int tokenLifetime = 1; // In minutes.
    private const string issuer = "localhost:default:idp:entityId";

More Here


Courtesy:http://travisspencer.com/blog/2010/09/idp-initiated-sso-using-wif.html

IdP-initiated SSO in .NET

The toolkit is all about SAML, so it exposes the nitty gritty parts of the protocol, making it easy to support more exotic use cases.

In my last example, the SAML response was more or less an envelope containing the SAML assertion. In some cases, you might want to do more w/ the response. For example, you might want to sign it, add a destination, an issuer, etc. You can absolutely do this w/ just WIF and .NET, but doing it w/ ComponentSoft's toolkit makes it really simple. To see how this might work, the previous example can be modified to use the following code to sign the SAML response rather than the assertion, serialize the message, etc.

using ComponentSoft.Saml2;
...

public partial class _Default2 : System.Web.UI.Page
{
    ...


    private string CreateSamlResponse()
    {
        var samlResponse = new ComponentSoft.Saml2.Response();
        var assertion = CreateAssertion();

        samlResponse.Assertions.Add(assertion);
        samlResponse.Status = new Status(SamlPrimaryStatusCode.Success, null);
        samlResponse.Issuer = new Issuer(issuer);
        samlResponse.Destination = assertionConsumerEndpoint;
        samlResponse.Sign(CertificateUtil.GetCertificate(StoreName.My,
            StoreLocation.CurrentUser, signingCertCommonName));

        return samlResponse.ToBase64String();
    }

    private Assertion CreateAssertion()
    {
        var userName = claimDescriptors[ClaimTypes.NameIdentifier];
        var subject = new Subject(new NameId(userName));
        var assertion = new Assertion
        {
            Issuer = new Issuer(issuer),
            Subject = subject,
        };

        AddConfirmationData(assertion);
        AddAuthenticationStatement(assertion);
        AddAttributeStatement(assertion);

        return assertion;
    }

    private void AddAttributeStatement(Assertion assertion)
    {
        var attributes = new AttributeStatement();

        foreach (var claim in claimDescriptors)
        {
            attributes.Attributes.Add(new Attribute(claim.Key, claim.Key, claim.Value, claim.Value));
        }

        assertion.Statements.Add(attributes);
    }

    private void AddAuthenticationStatement(Assertion assertion)
    {
        var authenticationMethod = "url:none";
        var authenticationStatement = new AuthnStatement
        {
            AuthnContext = new AuthnContext
            {
                AuthnContextClassRef = new AuthnContextClassRef(authenticationMethod),
            },
        };
        
        assertion.Statements.Add(authenticationStatement);
    }

    private static void AddConfirmationData(Assertion assertion)
    {        
        var subjectConfirmationData = new SubjectConfirmationData
        {
            Recipient = assertionConsumerEndpoint,
            NotOnOrAfter = System.DateTime.UtcNow.AddMinutes(tokenLifetime),
        };
        var subjectConfirmation = new SubjectConfirmation
        {
            Method = SamlSubjectConfirmationMethod.Bearer,
            SubjectConfirmationData = subjectConfirmationData,
        };
        var audienceRestriction = new AudienceRestriction();

        audienceRestriction.Audiences.Add(new Audience(appliesTo));
        assertion.Conditions = new Conditions();
        assertion.Conditions.ConditionsList.Add(audienceRestriction);

        assertion.Subject.SubjectConfirmations.Add(subjectConfirmation);
    }
}

So, there you have another way to work w/ SAML in .NET. One additional thing about using ComponentSoft's toolkit that's

More Here


Courtesy:http://travisspencer.com/blog/2010/09/another-way-to-do-idp-initiate.html

Per Partner SSO Error Pages in PingFederate

These financial institutions are super concerned about their image and brand, as any large organization is. One thing that they all worry about is what happens when an error occurs while a user is SSOing from them to their service provider (SP). It's not good enough that the SP brands their service with custom URLs to make it look like they are in the bank's domain. That's a given. If an error page is shown, it must be branded w/ their graphics, logos, colors, and text. Regardless of the minuscule chance of such an error occurring, the image of the bank must not be tarnished if it actually does.

To be specific, these are the cases I'm talking about:
  • The XML of the SAML message is malformed or invalid.
  • The digital signature on the SAML message or assertion is invalid.
  • The digital signature is missing.
  • The assertion or SAML message is expired.
  • The assertion has been submitted before.
  • The assertion can't be decrypted.
  • The audience restriction of the assertion is missing or incorrect.
  • The assertion is a holder-of-key (HoK) rather than bearer type.
To complicate things, some banks want to know exactly what went wrong. They want a different error code for each of these conditions printed on the page together with their toll-free support number. Others just want a generic SSO error code. Even more challenging, some banks want the user redirected back to their Web site while others are OK with the user being left on the SP's server (as long as the page is branded). The requirements are all over the map, and it's very difficult for an SP to accommodate all of these needs. Coming up w/ a solution for this that works for all partners is really, really hard. Here's the best I've been able to do using PingFederate for just the SAML IdP-initiated SSO scenario.

PingFederate uses the Apache Velocity templating framework to render HTML pages, including those displayed when SSO errors occur. It also allows you to provide information that should be displayed when errors happen during SSO (both IdP- and SP-initiated, though I'm only talking about the former). This info can be configured on a per-federation-connection basis. As shown in the following screenshot, it is usually just text.



[Screenshot: pf_error_text.gif]

The trick though is to use text and HTML, and to alter the Velocity template to output it as such.
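For illustration only, the alteration amounts to emitting the per-connection value unescaped so that any markup it contains (logos, colors, links, toll-free numbers) is rendered rather than shown as text. The variable name below is purely hypothetical and stands in for whatever the installed template actually exposes:

## Hypothetical variable name -- substitute the one your PingFederate template uses
#if($errorMessageText)
  $errorMessageText  ## emitted as-is so embedded HTML renders in the page
#end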

More Here


Courtesy:http://travisspencer.com/blog/2010/09/per-partner-sso-error-pages-in.html

Problems with XACML and their Solutions

After all this research, my conclusion is that the XACML specification and some entitlement management products built on top of it currently suffer from three major drawbacks that are impeding mass adoption:

1. The wire is not defined.
2. The attributes describing the subject presented to the PDP are not cryptographically bound to a trusted identity provider (IdP).
3. The policy authoring story is way too technical.

By the first, I mean that the transport mechanism used to communicate with a PDP is not standardized. There is the SAML profile for XACML (PDF), but that's by no means enough. IMO, many different profiles need to be created before the protocol will reach critical mass -- a simple SOAP interface, one for JSON, OData, WS-Trust, etc., etc. Only after this happens will it become commonplace to find PEPs and PDPs from different companies communicating, because custom integration work won't be required to do so. Each vendor will ship messages through a standardized and well-defined pipe.

Another problem is that the attributes that describe a subject are not cryptographically bound to a trusted IdP. According to the XACML spec, the PDP is presented with XML containing attributes that describe who the subject is. How is it supposed to know that this information is correct? Because it and the PEP are within a trusted subsystem? That's not going to cut it in many cases. Often the PEP will present the PDP with information that it was given by an upstream entity, and the PDP will have to decide if access should be granted based on who asserted it. How can it do this unless the PEP provides more than strings? It can't. Crypto is needed.

The last problem with XACML is that the authoring experience of all the products on the market that I've looked at requires the user to have a computer science degree and five years of software engineering experience. (I'm exaggerating, but not much.) Policy authors in most organizations, I believe, are not engineers; they are business analysts and other non-technical folks.

Solutions to these Problems

First, the wire must be defined. Period. Gerry Gebel of Axiomatics said at CIS that it was his impression that the XACML technical committee (TC) has no interest in defining transport mechanisms. I really can't understand this. I would argue that this lack of definition will cause the market to view the spec as incomplete, immature, and unusable. The solution to this problem is to be at the table w/ the TC and persuade them.

The solution to the second problem is to include a digital signature computed by the IdP in the environment element of the request sent to the PDP. This way, the PDP will be able to verify the signature over the attributes presented by the PEP. If the PEP or any other entity between the IdP and the PDP has altered the attributes, the signature will not match, and the PDP won't allow access to the resource. How would this work in practice? I haven't thought about it enough to say, but I'm told that that's what IBM does in their XACML product.

The third problem can be solved with better authoring tools. As Anil Saldhana of Red Hat wrote last month, editors are needed that allow non-technical professionals to specify policy in the domain they are in. Using domain specific authoring tools, the policy creator won't know or care that XACML is the underlying technology. To them, it is a helpful tool that allows them to define rules that govern access to their organization's data using the nomenclature of their company and industry.

Conclusion

More Here


Courtesy:http://travisspencer.com/blog/2010/09/problems-with-xacml-and-their.html

Firesheep and HSTS (HTTP Strict Transport Security)

Firesheep is a Firefox add-on that enables one to easily capture HTTP application session cookies from other users’ communications with specific popular sites. The problem it exploits is that many sites protect the initial reusable shared password-based authentication with TLS/SSL, but then revert further communication to unsecured HTTP. This exposes any application session cookies employed by the site, and returned by users’ browsers to the site on every request, to capture and replay by an attacker. This enables one to hang out on a local network, your favorite coffee shop for instance, and hijack others’ interactions with various social networking sites and retailers.

This particular class of website vulnerability has been known for ages, as have techniques for addressing it. For example, websites can simply offer their entire site over TLS/SSL (i.e. via “HTTPS”), as PayPal does. Some sites do so, but for whatever reason still revert users’ communications to unsecured HTTP by default, or some portion of their communications remains unsecured. However, if one can configure one’s browser to only securely interact with some given site (i.e. domain), and if the site supports this, then Problem Largely Solved. See, for example, Collin Jackson and Adam Barth’s paper, ForceHTTPS: Protecting High-Security Web Sites from Network Attacks, for a description of this class of vulnerabilities, attacks, and remediation approaches.

I’ve been working with Collin and Adam on standardizing ForceHTTPS — their paper was the inspiration for the HTTP Strict Transport Security (HSTS) work and the present Internet-Draft specification thereof, and thus for the HSTS implementations presently available in Firefox 3.x (via the Force-TLS and NoScript plugins), natively in Firefox 4 beta 6 and later, and natively in Chrome 4 and later. There’s also the HTTPS-Everywhere extension from the EFF, which comes pre-loaded with a list of sites to use only via HTTPS and is configurable such that one can add more (unfortunately it apparently doesn’t support HSTS).

Now, HSTS is a website security policy that, in typical cases, sites explicitly signal to browsers (via an HTTP response header field), as PayPal presently does. However, this week, Sid Stamm, who authored the Firefox v3 HSTS add-on (Force-TLS) and the native implementation, put together a new Firefox v4 add-on, STS UI (Strict Transport Security User Interface), that allows one to configure one’s browser to regard given sites as HSTS sites, even if they don’t signal it. This also addresses the Bootstrap MITM Vulnerability noted in the HSTS draft spec.
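For reference, the policy is conveyed in a single response header; in its simplest form (max-age is in seconds, and includeSubDomains is optional) it looks like this:

Strict-Transport-Security: max-age=31536000; includeSubDomains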

Note that Chrome features “Preloaded HSTS sites”, and that NoScript (FF v3 & v4), HTTPS-Everywhere (FFv3), and Force-TLS (FFv3) all facilitate user configuration of HTTPS-only sites.

We’ll be working in the new IETF WebSec working group to finish the HSTS draft spec and get it published as an RFC, hopefully before too much of 2011 is gone. I’ll try to keep you all updated on that.

More Here


Courtesy:http://identitymeme.org/archives/2010/10/29/firesheep-and-hsts-http-strict-transport-security/

Identity and Access Management Key Initiative Overview for CIOs Gartner

This overview provides a high-level description of the Identity and Access Management Key Initiative. CIOs can use this guide to understand what they need to do to prepare for this initiative.
 
Analysis



Identity and access management (IAM) is the security discipline that enables the right individuals to access the right resources at the right times for the right reasons.
IAM addresses the mission-critical need to ensure appropriate access to resources across increasingly heterogeneous technology environments, and to meet increasingly rigorous compliance requirements. This security practice is highly business-aligned, and in recent years, enterprises have come to recognize the value of centralizing IAM across the enterprise to improve security effectiveness, realize cost savings, increase operational efficiency and — crucially — deliver business value. CIOs must develop a comprehensive understanding of this area to help their enterprises develop mature IAM capabilities with the agility to support new business initiatives.
Consider These Factors to Determine Your Readiness
CIOs with enterprises preparing to develop IAM programs, or to improve the maturity of existing programs, should consider the following factors, which may vary significantly from enterprise to enterprise and from industry to industry:
  • Current IAM capabilities: A clear understanding of existing IAM capabilities will make it possible to identify IAM technology areas that require functional improvement.

More Here


Courtesy:http://www.gartner.com/DisplayDocument?id=1445338&ref=g_fromdoc

Solaris LDAP client with OpenLDAP server

Introduction

This guide is my attempt to document the configuration of Solaris 10 clients with an OpenLDAP server. While researching the topic on the internet, I found plenty of information on how to configure PADL's LDAP clients (nss_ldap and pam_ldap) and good documentation on getting OpenLDAP server to run on Solaris. However, I did not find much on configuring Solaris' NATIVE LDAP client for use with an OpenLDAP server. The purpose of this document is to attempt to fill that void.
This document works under the premise that the reader is familiar with the operation of LDAP and already has a working LDAP tree in place. The tree should have user data in a form that works with nss_ldap and pam_ldap. If you're already using LDAP in an environment with Linux clients, you should be all set. If not, you may wish to find other HowTos first and come back to this one when you're ready.
These instructions were written for Solaris 10. In theory they should work with Solaris 9 and even Solaris 8, but there will likely be semantic differences. For more information, see the Links section at the bottom of this document.


Overview

The Solaris LDAP client differs in some key ways from the PADL LDAP client which comes bundled with nearly every modern Linux distribution. The most visible difference is Sun's dedication to the NIS-style domain convention. When configuring a Solaris host for LDAP, you must also change the system's domain name to match the information stored in LDAP. Regardless of this and other differences, the basic schema for storing the name service databases is consistent enough that Linux and Solaris can co-exist happily.

Prepare the LDAP server

To make OpenLDAP play nicely with Solaris 10, three changes need to be made. The first is to fix an interoperability problem between Solaris' ldapclient and the OpenLDAP server. A patch may be applied to OpenLDAP which enables the use of Solaris' ldapclient init function. Note that this change is not strictly necessary; however, it will make your life easier. The second, relatively painless change is to add two schema files necessary for storing the data Solaris needs to manage user accounts. Finally, the directory needs to be seeded with data to make it do something useful.
If you elect to skip the first step, make sure you follow the instructions for configuring Solaris with "ldapclient manual" syntax as the "ldapclient init" mechanism will not work. You may also then skip the third step of this section that deals with initializing profile information.

Patching OpenLDAP 

 

As has been well documented on other sites, Solaris' ldapclient init utility fails to configure itself unless the OpenLDAP server it converses with has been patched. The original patch was from bolthole.com and applies to OpenLDAP 2.0 (local mirror). Gary Tay updated the patch for OpenLDAP 2.2 (local mirror). Since my work environment uses Red Hat Enterprise Linux ES and AS for our LDAP servers, I have also created updated RPMS that contain this patch. The RPMS are available for RHEL 2.1 (src) and RHEL 3 (src). Naturally they come with no warranty whatsoever. I have no idea, but I bet they also disqualify you for official Red Hat support.

Installing the schema

Solaris relies on objectclasses and attributes from two schema, DUAConfigProfile and solaris, in addition to the schema that come bundled with OpenLDAP. From what I have read, DUAConfigProfile is based on draft internet standards (I believe that SuSE Linux and HP-UX also support this standard, but I have not verified that) while solaris.schema is based on work to reverse engineer the objectclasses and attributes that Solaris uses to store user account information. To use the new schema, just drop the schema files in your schema directory, add the two appropriate lines to slapd.conf and restart slapd.
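For example, assuming the two files are saved as DUAConfigProfile.schema and solaris.schema in your OpenLDAP schema directory (names and paths are illustrative; adjust them to your installation), the slapd.conf additions would look like this:

# adjust the paths to match your installation
include /etc/openldap/schema/DUAConfigProfile.schema
include /etc/openldap/schema/solaris.schema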
Sun has documented the exact schemas used by Solaris. More information can be found here: http://docs.sun.com/app/docs/doc/806-5580/6jej518q2?a=view

Initializing the directory structure

Assuming you followed the directions to get a patched version of the OpenLDAP server in place, you can use a neat feature of ldapclient that allows the administrator to store all the information necessary to configure the LDAP client in LDAP itself. This may sound chicken-and-egg, but as you'll see in the steps below, it makes provisioning and reprovisioning LDAP clients quick and consistent. I have provided a sample LDIF file which creates the ou=profile hierarchy with one example profile underneath the example.com domain. You will need to substitute the base DN throughout the LDIF before adding it to your directory.
# Example profile LDIF:

dn: ou=profile,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: profile

dn: cn=Solaris,ou=profile,dc=example,dc=com
objectClass: top
objectClass: DUAConfigProfile
cn: Solaris
defaultServerList: ldap1.example.com ldap2.example.com
defaultSearchBase: dc=example,dc=com
defaultSearchScope: one
searchTimeLimit: 30
bindTimeLimit: 2
credentialLevel: anonymous
authenticationMethod: simple
followReferrals: TRUE
profileTTL: 43200
NOTE: These should be considered "convenient defaults." By convenient I DO NOT MEAN SECURE. There is no encryption and the directory searches are done anonymously. However, this configuration adds the fewest complexities and can be used while testing Solaris LDAP.
Whether or not you choose to create profiles, one more important change is necessary. In order for Solaris to process domain searches, it expects the base DN to have the objectclasses "domain" and "domainRelatedObject" and the attribute "associatedDomain". The "associatedDomain" attribute must contain the name of the domain for the Solaris environment. A sensible choice is the conventional DNS form of the domain name/base DN that you are using. For example, if you are Example Company using the domain example.com, your base DN might be dc=example,dc=com and your associatedDomain entry would be "example.com".
dn: dc=example,dc=com
objectClass: top
objectClass: domain
objectClass: domainRelatedObject
objectClass: nisDomainObject
domainComponent: example
associatedDomain: example.com
nisDomain: example.com

Configure the client

Now that you have prepared the server with Solaris-specific tweaks, the client needs to be brought online. Note that, for at least Solaris 10, this can all be done without a reboot. That's not to say that it won't be disruptive; it will be. If the machine is not already part of a NIS/NIS+ domain, this should go smoothly. In my case we were not configured for any domain, so I do not know what extra steps, if any, are necessary.

Prepare Configuration Files

Unfortunately for us, Sun made some (in my humble opinion) poor decisions when laying out the defaults for an LDAP-configured system. The biggest trouble stems from the way they try to use LDAP to resolve hosts. In most configurations, this leads to an immediate infinite loop: the name service switch goes to look up the LDAP host to connect to, which makes a call into the name service switch to find the LDAP server that would know the IP address of the LDAP server, and so on. All environments I have worked in use DNS as their primary host naming system with a fallback to /etc/hosts files. If your system ever hangs on boot or when logging in, check this first.
The second issue is far less critical, but one I find bothersome anyway. Sun's default configuration attempts ALL name service lookups through LDAP first, and failing that, it looks to files. I'm personally a firm believer in having local overrides checked first. In the event that LDAP is ever unreachable (and pray that it isn't!), hopefully the system will stay afloat.
NOTE: When editing the file, make sure you are editing nsswitch.ldap and NOT nsswitch.conf. The reason is that ldapclient will overwrite nsswitch.conf with nsswitch.ldap during the conversion process. By making your edits in nsswitch.ldap, you ensure the appropriate defaults will be used when the client is fully configured, and not before.
If your only concern is the first issue, then the following change will get you on your way. For the "hosts" and "ipnodes" lines in /etc/nsswitch.ldap make the following changes:
# Old:
hosts: ldap [NOTFOUND=return] files
ipnodes: ldap [NOTFOUND=return] files

# New:
hosts: files dns
ipnodes: files dns
The other change I make is to reset all the other name service definitions to "files ldap". This forces lookups to check local overrides first (e.g., /etc/passwd, /etc/group). DNS is configured the same way in my example (e.g., /etc/hosts). The Solaris defaults will work for the most part, so this change is entirely at the administrator's discretion.
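For example, the relevant lines in nsswitch.ldap would end up looking something like this (illustrative; apply the same pattern to whichever databases you actually serve from LDAP):

passwd:     files ldap
group:      files ldap
automount:  files ldap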
I have also made available my modified nsswitch.ldap.

Verify Required Packages

Through some trial-and-error, I have determined that the following packages are required to be installed for Sun's LDAP client to work. In the case of sendmail and autofs, it leaves more questions unanswered than it solves*, but this configuration Worked For Me.
The following packages are required to make ldapclient happy:
SUNWnisu  # provides ldapclient
SUNWnisr
SUNWspnego # gss-api related libs
SUNWsndmr # see note below
SUNWatfsr # see note below
SUNWlldap
* Note: sendmail and autofs packages appear to be necessary because ldapclient restarts those services as it configures the host. They can likely be removed after running ldapclient, but I can't be sure. They seem to have nothing to do with LDAP functionality, but if they are simply not present, ldapclient detects the error stopping/starting the services and bails out before making changes to the system.
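With the configuration files staged and the required packages in place, the actual client conversion is typically a single command. The following is only a sketch, assuming the Solaris profile created earlier; substitute your own domain name and LDAP server address:

# illustrative values -- the profile name, domain, and server address are examples
ldapclient init -a profileName=Solaris -a domainName=example.com 192.0.2.10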

More Here


Courtesy:http://docs.lucidinteractive.ca/index.php/Solaris_LDAP_client_with_OpenLDAP_server

Single Sign-On System using Kerberos with LDAP

As a network environment grows, the overhead on administrators needed to manage those systems grows as well. Unfortunately, where some growth can create new "economies of scale," with systems administration it often seems that the inverse is true. This is especially the case when it comes to user management. When a network is comprised of fewer than five servers, adding a new user is a relatively painless task. Simply connect to each server, add the user account (taking care to keep a consistent UID especially in NFS environments), set the password, and notify the user. However, this process becomes annoying, and eventually unmanageable as the environment grows to ten, fifty, one hundred servers and beyond. In the Microsoft Windows world there is the concept of a Domain. The domain defines all users valid for the network and creates a central place to add users, change their password, and set policies or permissions. Microsoft is hardly unique in this construct; indeed the Active Directory system is just a veneer and user interface unification of the same two technologies we are describing here: Kerberos and LDAP. Before AD or Kerberos+LDAP, there were systems such as NIS (Network Information System, still very common in legacy Solaris environments). Kerberos and LDAP can address many of the shortcomings of NIS while adding some of the nice features AD administrators are familiar with including cross-realm (cross-domain) trust and single sign-on.

The architecture described here is divided into two distinct parts: Authentication and the User Database. The sole function of Kerberos is to securely store and manage the authentication tokens. Kerberos knows nothing about the Unix/POSIX attributes of the users, and it does not need to. LDAP stores this information and makes it available to all hosts in the realm. To make an analogy, Kerberos provides the information typically stored in /etc/shadow (referenced by PAM) while LDAP provides the information stored in /etc/passwd and /etc/group (referenced by the name service switch, or NSS). It is important to note here that while LDAP can provide the authentication services to PAM, we are choosing to use Kerberos instead to gain the functionality it provides. This document will not detail configuring LDAP authentication for hosts.


Authentication

Kerberos networks are organized into "realms," commonly referred to as "domains" in Windows. A realm is a complete standalone entity, containing users and hosts with trust mutually assured. At the core of the realm is a service known as the Key Distribution Center, or KDC. The KDC is the keeper of all authentication tokens, both for the end-users and for the hosts participating in the network. Each object in the Kerberos database is known as a "principal." Principals may also have multiple instances, in the form of "principal/instance." In theory, the instances are a special form of the principal (as in the case of the "user" principal and the "user/admin" instance). In practice, instances are treated as separate principals with a distinct password and ACL definition, so this document will refer to them as such. Since Kerberos is a mutual authentication system, all hosts will have a principal which, when signed by the KDC, will verify to the end user that the host with which he is communicating is valid and a participant in the Kerberos realm.

The authentication process:

LDAP-passthrough login procedure (non-SASL LDAP)

* User sends LDAP BIND request to LDAP server with "simple" authentication mechanism (username/DN and password)
* LDAP checks given DN for a Kerberos principal and contacts appropriate KDC. The KDC is sent the just-looked-up principal and the password from the BIND attempt.
* KDC returns pass/fail result for authentication attempt
* If KDC returns "pass," LDAP service access is granted (BIND operation succeeds)
* If a READ, ADD, DELETE or MODIFY operation is requested (any operation other than just BIND), LDAP ACLs are consulted based on the authenticating DN. The request may be granted or denied based on these ACLs.
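If the directory happens to be OpenLDAP, the pass-through in the steps above is commonly wired up by storing a SASL reference in each user's userPassword attribute and pointing saslauthd at the KDC. This is a sketch only, assumes slapd was built with SASL pass-through support, and the principal shown is just an example:

userPassword: {SASL}bklang@ALKALOID.NET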

LDAP SASL (GSSAPI) login procedure

* User sends LDAP BIND request using SASL mechanism (Principal and encoded TGT sent to server)
* LDAP server validates User's TGT
* User validates LDAP service's ticket
* If either ticket validation fails, the connection is aborted. If the validation succeeds, the LDAP BIND operation is successful.
* LDAP server consults rules to map Kerberos principal to LDAP DN

Example: bklang@ALKALOID.NET -> uid=bklang,ou=People,dc=alkaloid,dc=net

* LDAP server checks configured ACLs against resulting DN. Any READ, ADD, DELETE, or MODIFY operation will be allowed or denied based on the result of this ACL check.
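If the LDAP server is OpenLDAP, the principal-to-DN mapping in the example above is usually expressed with an authz-regexp directive in slapd.conf. A sketch matching that example (the exact casing of the realm and mechanism components depends on your SASL configuration):

authz-regexp
  uid=([^,]*),cn=alkaloid.net,cn=gssapi,cn=auth
  uid=$1,ou=People,dc=alkaloid,dc=net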

Non-kerberized login procedure (non-GSSAPI ssh, PAM):

* User sends authentication credentials to service (username and password)
* Service contacts KDC and requests validation of given credentials
* KDC replies pass/fail to service
* If KDC reply is "pass", access is granted to service

Kerberized login procedure (GSSAPI ssh, other Kerberos or GSSAPI-aware protocols)

* User requests TGT from KDC. Password is NOT sent over the wire (refer to Kerberos protocol documentation for more information)
* User contacts service, signing the request with TGT obtained from KDC

More Here


Courtesy:http://docs.lucidinteractive.ca/index.php/Single_Sign-On_System_using_Kerberos_with_LDAP

Deploy the Service Provider behind a Reverse Web Proxy Shibboleth

A reverse proxy (called "proxy" below) is installed in front of a web server (called "resource" below); only the latter hosts the resource and runs the Shibboleth Service Provider software. All traffic to that web server goes through the reverse proxy – there should be no way to access the web server directly (i.e., you must use packet filters, firewalls, web server configuration, etc. to prevent access from anywhere but the proxy).

The proxy could also be used for SSL offloading, handling all (HTTP and) HTTPS traffic, speaking only plain HTTP to the web server – if you wanted to rely solely on a trusted network. (In case your proxy itself is Apache httpd you can also enable HTTPS while proxying to the webserver, see the mod_ssl documentation).



Here is a description of the SSO flow (leaving out IdP Discovery for brevity):

1. The client attempts to access https://proxy.example.org/secure
2. The reverse proxy at proxy.example.org internally forwards the request to http://resource.example.org/secure
3. The location /secure on the resource is protected by a Shibboleth SP
4. The Shibboleth SP intercepts the request and generates a SAML2 AuthnRequest with an AssertionConsumerServiceURL of https://proxy.example.org/Shibboleth.sso/SAML2/POST (assuming default locations and a properly configured web environment on proxy.example.org. If proxy.example.org's web server configuration is not correct, a variety of wrong URL's may be generated here.)
5. Also, the relayState for the requested URL is set (e.g., in an HTTP cookie).
* Note that the path (/secure) to the requested resource is set by the Shibboleth SP and hence is specific to the protected resource on the web server. This mandates that the proxy either proxies the resource with the exact same path (/secure to /secure), or that the proxy is able to rewrite HTTP response headers (e.g., the ones containing the relayState) before returning results to the client.
6. The client authenticates at an IdP and bounces back to https://proxy.example.org/Shibboleth.sso/SAML2/POST with an authentication (and probably also an attribute) assertion.
7. The resource gets the request forwarded from proxy.example.org
8. If there's no attribute assertion, the Shibboleth SP at the resource may also query the IdP for attributes (note that queries from the Shibboleth SP will not go through the proxy described in this document). The SP then redirects to the resource specified in the relayState, applies any authorization logic, and returns the page (to the proxy, and the proxy to the client).

Reverse Proxy
Apache httpd with mod_proxy

Building a basic reverse proxy with the Apache httpd web server:

ProxyPass /Shibboleth.sso/ http://resource.example.org/Shibboleth.sso/
ProxyPassReverse /Shibboleth.sso/ http://resource.example.org/Shibboleth.sso/
ProxyPass /secure/ http://resource.example.org/secure/
ProxyPassReverse /secure/ http://resource.example.org/secure/

Lighttpd with lighttpd_mod_proxy

Building a basic reverse proxy with the lighttpd web server:

server.modules += ( "mod_proxy" )
$HTTP["url"] =~ "^/secure/" {
proxy.server = ( "" => (( "host" => "resource.example.org", "port" => 80 )))
}

Note: Proxying the Shibboleth handlerURL is not part of this example, but will still need to be done when following the general direction of this document.
Resource
Apache httpd 2.2

On the web server with the Shibboleth SP, set the ServerName directive to the scheme, host name and port of the proxy (cf. the httpd documentation):

ServerName https://proxy.example.org:443
UseCanonicalName On

(In case of SSL offloading to the proxy, the resource's web server will only have a plain HTTP vhost configured – since any HTTPS traffic will be terminated at the proxy – but the ServerName directive will still need to be set as specified above.)
shibboleth2.xml

With SSL offloaded to the proxy, also set handlerSSL="false" in shibboleth2.xml, so the Shibboleth handler will accept protocol messages on plain HTTP.
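A minimal sketch of the relevant part of shibboleth2.xml (only handlerSSL matters here; keep whatever other attributes and child handler elements your existing Sessions element already carries):

<Sessions lifetime="28800" timeout="3600" checkAddress="false"
          handlerURL="/Shibboleth.sso" handlerSSL="false">
    <!-- existing SessionInitiator and handler definitions unchanged -->
</Sessions>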
Metadata

Any protocol endpoints in the Metadata describing the SP must point to proxy.example.org.
Unless the proxy itself does not handle HTTPS at all (i.e., access to the resource is not protected by TLS/SSL), all endpoints in the metadata should be set to HTTPS URLs. If the process by which you generate metadata does not do this for you, you'll need to perform this change yourself.
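For example, the SP's assertion consumer endpoint in the metadata would reference the proxy rather than the resource host, along these lines (a sketch consistent with the flow described above):

<md:AssertionConsumerService
    Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
    Location="https://proxy.example.org/Shibboleth.sso/SAML2/POST"
    index="1"/>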

More Here


Courtesy:https://spaces.internet2.edu/display/SHIB2/SPReverseProxy

Cross-Domain Single Sign-On Authentication with JAAS

Leverage your existing JAAS enterprise security system to provide SSO across multiple subsystems. Implementing this J2EE security model will take your security architecture to the next level.

Single sign-on (SSO) is a very hot topic. Businesses in every industry are eager to integrate legacy systems into newer applications, and SSO can alleviate the headaches users experience when trying to manage a long list of user names and passwords for various systems. Enter the Java Authentication and Authorization Service (JAAS).

As I wrote in a DevX 10-Minute Solution, "JAAS Security in Action": JAAS “is a flexible, standardized API that supports runtime pluggability of security modules.” If you are unfamiliar with JAAS, I recommend reading that article and reviewing the downloadable code before continuing, as this article assumes an understanding of JAAS. This article takes the next logical step from a security architecture standpoint: integrating your J2EE security model to provide SSO across multiple subsystems by leveraging your existing LDAP directory server, database server, or any other enterprise security system.



Before going any further, let's clarify how this article uses the term "domain": It refers to security domains (LDAP, database, etc.) and not Web domains. If you are interested in using JAAS to share authentication information between multiple Web applications, read the article "Implement Single Sign-on with JAAS" written by James Tao in October of 2002. Additionally, if you are interested in Web applications that exist across firewalls and participate in some sort of Web service exchange, read the joint Web Single Sign-On Identity specifications that Microsoft and Sun recently published.

Securing the Enterprise
Single sign-on allows users to enter security credentials once (typically by logging into a workstation or a Web application) and have those credentials propagated to each local and network application the user accesses during his or her session. Local applications exchange authentication information directly, while remote network applications exchange authentication information across the network via encrypted security tokens.

Regardless of whether the deployment scenario is local, across a network, or a combination of the two, the security challenges are the same: sharing credentials between domains, correctly interpreting the credentials once received, and managing different sets of privileges across these domains (e.g., a user could be a manager within one system, a power user in another system, and a normal user in a third).

Finally, the heterogeneous nature of most enterprise systems creates some unique challenges for SSO security architectures. Each application within the enterprise could be composed of different technologies, operate on different platforms, access disparate data sources, and accept slightly different authentication credentials for the same principal (user). In spite of these overwhelming obstacles, JAAS combined with LDAP provides a solid framework for designing and implementing a robust SSO enterprise security framework.




The Architecture
The backbone of a J2EE SSO architecture is the standard J2EE security model, which is well documented in other places (see Related Resources in the left-hand column). In a nutshell, J2EE security consists of principals (users) who are associated with roles (groups) that are given privileges (authorization). These roles with assigned privileges are further organized under the concept of a realm (domain). Each realm maps users and groups to privileges within its own scope. The key to providing SSO is seamlessly connecting these different realms (and corresponding enterprise systems) without requiring the user to enter authentication information each time he or she wishes to access another system.
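For instance, a declarative security constraint in web.xml ties a role to a protected URL space, and the container consults the active realm at runtime to decide whether the authenticated principal holds that role. The role name and URL pattern below are illustrative placeholders, not part of the original example:

<security-constraint>
  <web-resource-collection>
    <web-resource-name>Reports</web-resource-name>
    <url-pattern>/reports/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>manager</role-name>
  </auth-constraint>
</security-constraint>
<security-role>
  <role-name>manager</role-name>
</security-role>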

Consider the following example: A user logs in to an application via HTTP, authenticating herself against the server's security realm (MemoryRealm, JDBCRealm, JAASRealm, etc.). The user then uses the Web application's search feature, querying the database and returning a result list. The database could then require that the middleware platform authenticate against it before performing the transaction. Finally, the user wants to update information stored in her directory server (LDAP). This is a privileged action, requiring the user to first authenticate against the LDAP realm before modifying any directory data. All three of these realms likely require slightly different authentication schemes (different user IDs, passwords, additional security tokens, etc.), but the same principal (user) is accessing them each time.

Java can provide an elegant SSO solution for the above scenario (and any number of similar scenarios) using JAAS's pluggable login module architecture. JAAS login modules facilitate the smooth integration of J2EE's security framework with various systems and their respective heterogeneous authentication mechanisms (OS, LDAP, database, etc.). These modules can be configured to share authentication data and designed to correctly identify users and roles by mapping principals and roles—even across domains with differing security schemas.

The Components
The application components required for a JAAS SSO solution include the following:

* Two or more enterprise systems that need a common, integrated security framework
* Two or more JAAS login module classes to drive the authentication exchange between agent (user or subsystem) and callback handler
* One or more JAAS callback handler classes to respond to callback events in order to perform the actual authentication procedure(s)
* A login configuration file to define how JAAS will manage authentication across multiple security realms (configuration could even be stored in an XML file or database)

Assembling these components and connecting all of the pieces correctly can be a bit daunting the first time. Be sure to thoroughly test your JAAS authentication components individually with each system prior to attempting to link them and share authentication information. The process of packaging, deploying, and testing your solution should go something like this:

1. Write a login module (implement the LoginModule interface) and a callback handler (implement the CallbackHandler interface) for authenticating against a single enterprise system (LDAP, database, etc.).
2. Define the configuration for your login module (this could be as simple as an XML file containing a single statement).
3. Define a UI (Web, console, or rich GUI) to capture authentication data, and then pass it to your login module.
4. If this is a server-based solution (HTTP, sockets, RMI, etc.), define the J2EE security (constraints and roles) on the server in the usual way (web.xml or application.xml), and then define a realm on the server (server.xml) that references the JAAS login module configuration (accomplished via the appName attribute). Local (non-server) solutions will simply rely upon JAAS and a J2SE policy file to define security constraints and permissions.
5. Start the server (specifying the login configuration file via the java.security.auth.login.config system property on the command line), launch the client, and provide authentication credentials. Debug and modify as necessary to resolve any errors.
6. Rinse and repeat. Continue this process as necessary until each enterprise system can successfully be authenticated via a JAAS login module.
7. Finally, hook all of the individual authentication pieces together. The following section addresses this issue.

The above list simply gives you a brief overview of the process. For more details on how to actually accomplish these steps, please consult the links in the Related Resources space.
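To make step 1 above concrete, here is a minimal, hypothetical login module sketch. The package and class names are placeholders, the credential check is a deliberate stub, and a production module would verify credentials against its real back end and add proper error handling, abort, and logout logic:

package com.example.sso;

import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

/** Minimal sketch of a LoginModule guarding a single enterprise system. */
public class DirectoryLoginModule implements LoginModule {

 private Subject subject;
 private CallbackHandler handler;
 private Map<String, ?> sharedState; // lets stacked modules reuse credentials (see configuration below)
 private boolean succeeded;
 private String user;

 public void initialize(Subject subject, CallbackHandler handler,
                        Map<String, ?> sharedState, Map<String, ?> options) {
  this.subject = subject;
  this.handler = handler;
  this.sharedState = sharedState;
 }

 public boolean login() throws LoginException {
  // Ask the callback handler (step 3's UI sits behind it) for a user name and password
  NameCallback nameCb = new NameCallback("user: ");
  PasswordCallback passCb = new PasswordCallback("password: ", false);
  try {
   handler.handle(new Callback[] { nameCb, passCb });
  } catch (Exception e) {
   throw new LoginException("callback failed: " + e.getMessage());
  }
  user = nameCb.getName();
  char[] password = passCb.getPassword();
  // Placeholder check only: a real module would verify against LDAP, JDBC, etc.
  succeeded = user != null && password != null && password.length > 0;
  passCb.clearPassword();
  if (!succeeded) {
   throw new LoginException("authentication failed for " + user);
  }
  return true;
 }

 public boolean commit() {
  if (succeeded) {
   // Attach a principal so roles can later be mapped to this user
   subject.getPrincipals().add(new javax.security.auth.x500.X500Principal("CN=" + user));
  }
  return succeeded;
 }

 public boolean abort()  { succeeded = false; return true; }
 public boolean logout() { return true; }
}

A matching login configuration (step 2) can then stack one such module per enterprise system under a single application name. Again, the class names are placeholders, and options such as useFirstPass only take effect if the modules are written to consult the shared state:

EnterpriseSSO {
   com.example.sso.DirectoryLoginModule required;
   com.example.sso.DatabaseLoginModule  required useFirstPass=true;
};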


More Here


Courtesy:http://www.devx.com/security/Article/28849/1954

Single Sign on using SAML with Apache Axis2 (Web Service Runtime)

Axis2

Axis2 is a Java-based open source web service runtime. It includes tools for generating a Java proxy from a WSDL service description, which is used for invoking web services, as well as tools for generating web services on the provider side.

Checking SAP Notes

SAML Sender-vouches is supported with releases AS ABAP 7.00 (SP 15) and higher. Please ensure the following SAP notes have been applied:
AS ABAP 7.00:
  • SAP Notes: 1176558, 1325457
  • Kernel Patch level: 207
AS ABAP 7.01:
  • Support Package SP5
  • Kernel patch level: 74
AS ABAP 7.10:
  • SAP Notes 1170238, 1325457
  • Kernel patch level: 150

Checking Axis versions

I used the following libraries to run this example:
1) Axis2 1.4.1 from http://ws.apache.org/axis2/download.cgi
2) WSS4J 1.5.7 from http://www.apache.org/dyn/closer.cgi/ws/wss4j/
Due to a bug in the WSS4J 1.5.4 version shipped with Axis2 1.4.1 (it ignores the SignedParts elements in axis2.xml and does not sign the timestamp element), I replaced it with WSS4J 1.5.7.

Configure the provider 

The web service provider needs to be configured for SAML Sender-Vouches authentication. To create such a configuration, follow the instructions.

Configure Trust between Axis2 and SAP WebAS ABAP

The scenario involves an XML Signature. If you already have a certificate for signing the messages, feel free to use it. Otherwise, create a certificate with the Java keytool by invoking the commands below (the passwords used are only examples):

Create the keypair
keytool -genkey -alias SAML -keyalg RSA -keysize 1024 -validity 1000 -keypass abcd1234 -storepass abcd1234 -keystore axis.jks
Export the key
keytool -export -file axis.crt -alias SAML -keypass abcd1234 -storepass abcd1234 -keystore axis.jks
Any SAML assertion created by Axis2 needs to be trusted by the SAP system and be mapped to an SAP user. Please follow the instructions from the section Configure Trust for SAML Sender-Vouches authentication (ABAP), using the following information:
  • SAML Issuer: Axis
  • SAML Name Identifier: (empty, not used)
  • Subject of the X.509 certificate used for the message signature (from the example): CN=Axis, OU=NW SIM, O=NW, L=Walldorf, SP=Baden Wuerttemberg, C=DE
The name of the issuer is kept in the Axis2 configuration file saml.properties:
saml.properties
org.apache.ws.security.saml.issuerClass=saml.SAPSAMLIssuerImpl
org.apache.ws.security.saml.issuer.cryptoProp.file=crypto.properties
org.apache.ws.security.saml.issuer.key.name=SAML
org.apache.ws.security.saml.issuer.key.password=abcd1234
saml.issuer=Axis
saml.validity=200
org.apache.ws.security.saml.authenticationMethod=password
The second file, crypto.properties, contains the configuration information for the keystore:
crypto.properties 
org.apache.ws.security.crypto.provider=org.apache.ws.security.components.crypto.Merlin
org.apache.ws.security.crypto.merlin.keystore.type=jks
org.apache.ws.security.crypto.merlin.keystore.password=abcd1234
org.apache.ws.security.crypto.merlin.keystore.alias=SAML
org.apache.ws.security.crypto.merlin.file=keys/axis.jks

Create the consumer

From the service configuration created in the previous step, copy the WSDL URL and open it in a browser. By default the SAP WSDL contains WS-Policy assertions. Axis2 is not able to process these assertions, so it is best to use the WSDL without policy. Obtain the WSDL without policy by replacing ws_policy with standard in the WSDL URL, i.e.:
With WS-Policy
http://host:port/sap/bc/srt/wsdl/bndg_001560AB336002ECB9B230CE92A94CD0/wsdl11/allinone/ws_policy/document?sap-client=001
Without WS-Policy
http://host:port/sap/bc/srt/wsdl/bndg_001560AB336002ECB9B230CE92A94CD0/wsdl11/allinone/standard/document?sap-client=001
Save the WSDL in a file.

Configure Axis2 to issue SAML assertions

The axis2.xml configuration file must configure, for the request, a SAML assertion, a wsu:Timestamp, and a signature over the SOAP Body, the wsu:Timestamp, and the SAML assertion, and, for the response, a Timestamp. This is configured by the following piece of XML.

<module ref="rampart"/>

<parameter name="OutflowSecurity">
      <action>
             <items>Timestamp SAMLTokenSigned</items>
             <signatureParts>{Content}{http://schemas.xmlsoap.org/soap/envelope/}Body;{Content}{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd}Timestamp;</signatureParts>
             <samlPropFile>saml.properties</samlPropFile>
             <signatureKeyIdentifier>DirectReference</signatureKeyIdentifier>
      </action>
</parameter>

<parameter name="InflowSecurity">
      <action>
             <items>Timestamp</items>
             <enableSignatureConfirmation>false</enableSignatureConfirmation>
      </action>
</parameter>
The property file saml.properties contains the SAML-specific configuration. Rampart's default implementation for creating SAML assertions does not define the validity period of the SAML assertion, which is required by SAP's implementation. Use the example implementation below to generate SAML assertions accepted by SAP. The response contains a timestamp, which is handled by the InflowSecurity section.
 package saml;
import java.util.Arrays;
import java.util.Collection;
import java.util.Date;
import java.util.Properties;
import org.apache.ws.security.components.crypto.Crypto;
import org.apache.ws.security.components.crypto.CryptoFactory;
import org.apache.ws.security.saml.SAMLIssuer;
import org.opensaml.SAMLAssertion;
import org.opensaml.SAMLAuthenticationStatement;
import org.opensaml.SAMLException;
import org.opensaml.SAMLNameIdentifier;
import org.opensaml.SAMLStatement;
import org.opensaml.SAMLSubject;
import org.w3c.dom.Document;
/**
 * Builds a WS SAML Assertion supported by SAP AS ABAP/Java
 *
 * @author Martijn de Boer
 */
public class SAPSAMLIssuerImpl implements SAMLIssuer {
 private SAMLAssertion samlAssertion = null;
 private Properties properties = null;
 private Crypto issuerCrypto = null;
 private String issuerKeyPassword = null;
 private String issuerKeyName = null;
 private String username;
 /**
  * Constructor.
  */
 public SAPSAMLIssuerImpl() {
  System.err.println("Error: no cfg properties passed");
 }
 public SAPSAMLIssuerImpl(Properties prop) {
  /*
   * if no properties .. just return an instance, the rest will be done
   * later or this instance is just used to handle certificate conversions
   * in this implementation
   */
  if (prop == null) {
   return;
  }
  properties = prop;
  String cryptoProp = properties.getProperty("org.apache.ws.security.saml.issuer.cryptoProp.file");
  if (cryptoProp != null) {
   issuerCrypto = CryptoFactory.getInstance(cryptoProp);
   issuerKeyName = properties.getProperty("org.apache.ws.security.saml.issuer.key.name");
   issuerKeyPassword = properties.getProperty("org.apache.ws.security.saml.issuer.key.password");
  }
 }
 /**
  * Creates a new SAMLAssertion.
  *
  *
  * A complete SAMLAssertion is constructed.
  *
  * @return SAMLAssertion
  */
 public SAMLAssertion newAssertion() {
  // Issuer must enable crypto functions to get the issuer's certificate
  String issuer = properties.getProperty("saml.issuer");
  int validity = Integer.parseInt(properties.getProperty("saml.validity", "300"));
  String qualifier = "";
  try {
   SAMLNameIdentifier nameId = new SAMLNameIdentifier(username, qualifier, "");
   nameId.setFormat("urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified");
   String subjectIP = null;
   String authMethod = null;
   if ("password".equals(properties.getProperty("org.apache.ws.security.saml.authenticationMethod"))) {
    authMethod = SAMLAuthenticationStatement.AuthenticationMethod_Password;
   }
   Date authInstant = new Date();
   SAMLSubject subject = new SAMLSubject(nameId, Arrays.asList(new String[] { SAMLSubject.CONF_SENDER_VOUCHES }), null, null);
   SAMLStatement[] statements =
{ new SAMLAuthenticationStatement(subject, authMethod, authInstant, subjectIP, null, (Collection) null) };
   Date now = new Date();
   Date expires = new Date();
   expires.setTime(now.getTime() + validity * 1000);
   samlAssertion = new SAMLAssertion(issuer, now, expires, null, null, Arrays.asList(statements));
  } catch (SAMLException ex) {
   throw new RuntimeException(ex.toString(), ex);
  }
  return samlAssertion;
 }
 /**
  * @param userCrypto
  *            The userCrypto to set.
  */
 public void setUserCrypto(Crypto userCrypto) {
  // ignored for sender vouches
 }
 /*
  * ignored (non-Javadoc)
  *
  * @see org.apache.ws.security.saml.SAMLIssuer#setUsername(java.lang.String)
  */
 public void setUsername(String username) {
  this.username = username;
 }
 /**
  * @return Returns the issuerCrypto.
  */
 public Crypto getIssuerCrypto() {
  return issuerCrypto;
 }
 /**
  * @return Returns the issuerKeyName.
  */
 public String getIssuerKeyName() {
  return issuerKeyName;
 }
 /**
  * @return Returns the issuerKeyPassword.
  */
 public String getIssuerKeyPassword() {
  return issuerKeyPassword;
 }
 /**
  * @return Returns the senderVouches.
  */
 public boolean isSenderVouches() {
  return true;
 }
 /*
  * ignored (non-Javadoc)
  *
  * @see
  * org.apache.ws.security.saml.SAMLIssuer#setInstanceDoc(org.w3c.dom.Document
  * )
  */
 public void setInstanceDoc(Document instanceDoc) {
  // ignored for sender vouches
 }
}

Invoking a web service using Axis2

To invoke the proxy, use the example below. The following data is needed:
  • Endpoint url of the web service
  • Path to Axis2 repository
  • Path to axis2 configuation file
  • Name of the user to write into the SAML assertion
Below is a code example that invokes a proxy with SAML authentication. Apart from setting the username to be included in the SAML assertion, all data is taken from the configuration files.
package call;
import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.context.ConfigurationContextFactory;
import org.apache.ws.security.handler.WSHandlerConstants;
import proxy.WsseEchoStub;
public class CallProxy {
 public static String callProxy(String input, String url, String repositoryDir, String axis2Path, String user)
throws Exception {
  /*
   * load configuration
   */
  ConfigurationContext ctx = ConfigurationContextFactory.createConfigurationContextFromFileSystem(repositoryDir, axis2Path);
  /*
   * create proxy instance
   */
  WsseEchoStub ws = new WsseEchoStub(ctx, url);
  /*
   * Set user to write into SAML assertion
   */
  ws._getServiceClient().getOptions().setProperty(WSHandlerConstants.USER, user);
  /*
   * call web service
   */
  proxy.WsseEchoStub.WSSE_ECHO a = new proxy.WsseEchoStub.WSSE_ECHO();
  a.setINPUT(input);
  WsseEchoStub.WSSE_ECHOResponse res = ws.WSSE_ECHO(a);
  return res.getOUTPUT();
 }
}
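For illustration, a hypothetical command-line wrapper around the callProxy helper above might look like the following; the endpoint URL, repository directory, and axis2.xml path are placeholders that must be replaced with values from your own deployment:

package call;

/**
 * Hypothetical standalone wrapper around CallProxy.callProxy.
 * All paths and the endpoint URL below are placeholders for illustration only.
 */
public class CallProxyMain {
 public static void main(String[] args) throws Exception {
  String endpointUrl = "http://host:port/path/to/wsse_echo/endpoint"; // endpoint of the configured service
  String repositoryDir = "C:/axis2/repository";                       // Axis2 repository containing the rampart module
  String axis2Xml = "C:/axis2/conf/axis2.xml";                        // configuration shown earlier in this article
  String user = "DEMOUSER";                                           // user name written into the SAML assertion
  String result = CallProxy.callProxy("Hello SAML", endpointUrl, repositoryDir, axis2Xml, user);
  System.out.println("Echo response: " + result);
 }
}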

Example 1: Axis2 standalone

For illustration purposes, I'll first show how to invoke the proxy from a standalone Java application and authenticate the service call in the ABAP stack. As the standalone application does not support authentication itself, it should only be seen as a technical example and not used in realistic scenarios.

More Here


Courtesy:http://wiki.sdn.sap.com/wiki/display/Security/Single+Sign+on+using+SAML+with+Apache+Axis2+%28Web+Service+Runtime%29