Tuesday, February 26, 2013

Transparency at a SaaS company



For me, transparency is one of the most important characteristics a SaaS company, or any other cloud company (IaaS, PaaS), must have to survive in the current world.
A customer relies 24/7 on the SaaS solution, and when something goes wrong (server down, security breach, etc.) the customer should be informed immediately so he can adapt to it and hopefully doesn't lose too much time and money while the SaaS solution is down.

So when I read a tweet by AFAS Software CEO Bas van der Veldt saying that transparency is great when you have nothing to hide, and that AFAS likes transparency, I made a bold move.
I tweeted back that I wanted to test that. Promptly I got a tweet back with an invitation to do just that.
As a SaaS software tester I was really interested in how AFAS deals with traceability, which was also of interest to Mr van der Veldt, so he invited me to come over.
Within a few days arrangements were made, and on Friday 15 February I was invited to see how testing and development are done at AFAS in a transparent way.

After a nice drive through the Dutch 'hills' (Utrechtse Heuvelrug) I arrived at AFAS.
On arrival at the reception it became clear to me that automation is a key process here.
The AFAS reception welcomed me and guided me to a registration unit where I could register myself. Pretty fancy stuff, with an automated photo camera to take mugshots (not so fancy :-) ) and an SMS service telling my host that I had arrived.
Within minutes my host arrived: Martijn Wouter, test team lead.
After a brief introduction I was given an elaborate tour through the AFAS building, seeing the different departments (development, test and support) and the in-house server room. A nice thing to see was the AFAS Usability Lab, where AFAS uses cameras and special software to observe in real time how customers use its software, registering the results for future use.
Martijn introduced me to his team and explained the different roles the team members have.
As a professionally educated tester, it struck me that most testers came from other divisions of AFAS, ready to use their domain knowledge and eager to learn testing through certification and workshops.
I see it as a form of exploratory testing: using your skills as a domain tester on new software, doing test specification and execution at the same time. At AFAS the two are not done simultaneously, which is no problem: the software is rated highly by its clients.
Another thing intrigued me: most SaaS companies work with an agile methodology in small interdisciplinary teams. Martijn explained to me that AFAS still uses the waterfall method, but because of the short lines between development and testing, work still moves at a fast pace, together with the documentation department. The same goes for the client, thanks to the direct incident system (including automated dashboards).
Next to this, in-house developed test automation tools speed up tests and ensure test coverage.
Clients are very important to AFAS, and AFAS sees to it that they are satisfied through the already mentioned Usability Lab, the AFAS Theater, product and knowledge presentations (SEPA!), an online transparent annual report and special online client and partner dossiers. Traceability meets automation!
Employees are also important to AFAS: during breaks they can play table football, spend time in the gym or eat lunch or dinner in the company restaurant.

It was a great Friday afternoon at AFAS. I hereby want to thank AFAS for the opportunity to take a look behind the scenes of a successful SaaS company.


Wednesday, January 16, 2013

Protocol of the Month


In my previous blog post I said I was going to explore OAuth 2.0 in more detail.

Bluntly said, OAuth 2.0 is an open standard framework for online data sharing without using a username/password, by means of access tokens instead.
This simplifies data sharing for a user and is also more secure, because you do not have to enter your password on a third-party site.
UMA, my pet identity protocol to test of the last two years, is built upon OAuth 2.0, making it an OAuth 2.0 profile.
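
To make the token idea concrete, here is a minimal sketch in Python (using the requests library) of a client calling a protected resource with a bearer access token instead of a password. The URL is a hypothetical placeholder; the token value is the example token from the OAuth 2.0 draft.

    # A minimal sketch: calling a protected resource with an OAuth 2.0
    # bearer access token instead of a username/password.
    # The URL is hypothetical; the token is the example value from the draft.
    import requests

    ACCESS_TOKEN = "2YotnFZFEjr1zCsicMWpAA"

    response = requests.get(
        "https://api.example.com/v1/photos",  # hypothetical protected resource
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    )
    print(response.status_code)

Note that no password travels to the third-party site at all: the token is the only credential the client presents.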

To understand UMA, you have to understand OAuth 2.0 first.
This can get technical quickly, discouraging non-technical users from understanding OAuth.
This is a pity.
That's why I will discuss OAuth 2.0 and its different authorization flows in a series of blog posts.
Told in a functional way, illustrated with everyday examples like social networks.
If you want more technical details, I recommend the IETF OAuth 2.0 draft.

First, let's have a look at OAuth 2.0 and its roles.
There are four roles:

resource owner
An entity capable of granting access to a protected resource.
When the resource owner is a person, it is referred to as an end-user.

resource server
The server hosting the protected resources, capable of accepting and responding to protected resource requests using access tokens.

client
An application making protected resource requests on behalf of the resource owner and with its authorization. The term client does not imply any particular implementation characteristics (e.g. whether the application executes on a server, a desktop, or other devices).

authorization server
The server issuing access tokens to the client after successfully authenticating the resource owner and obtaining authorization.

This can be visualized in the diagram below:

[Diagram: OAuth 2.0 roles as defined in the specification.]


Obtaining access tokens is an important part of the OAuth 2.0 protocol.
How a token is obtained differs per interaction the OAuth 2.0 roles can undertake.
A client obtains an access token by presenting an authorization grant: a credential which represents the resource owner's authorization (to access its protected resources).
For granting authorization, OAuth 2.0 defines four grant types: authorization code, implicit, resource owner password credentials, and client credentials, as well as an extensibility mechanism for defining additional types.
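
As a taste of what is to come, below is a hedged sketch in Python of the authorization code grant, the most widely used flow: the client sends the end-user to the authorization server, receives a short-lived code on its redirect URI, and exchanges that code for an access token. All endpoints, credentials and the scope are hypothetical placeholders.

    # A sketch of the OAuth 2.0 authorization code grant.
    # All endpoints, credentials and scopes below are hypothetical.
    import urllib.parse
    import requests

    AUTHORIZE_URL = "https://auth.example.com/oauth/authorize"
    TOKEN_URL = "https://auth.example.com/oauth/token"
    CLIENT_ID = "my-client-id"
    CLIENT_SECRET = "my-client-secret"
    REDIRECT_URI = "https://myapp.example.com/callback"

    # Step 1: redirect the end-user's browser to the authorization server.
    params = {
        "response_type": "code",  # ask for an authorization code
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "photos.read",  # hypothetical scope
    }
    print("Send the user to:", AUTHORIZE_URL + "?" + urllib.parse.urlencode(params))

    # Step 2: the authorization server redirects back with ?code=...;
    # the client exchanges that short-lived code for an access token.
    def exchange_code_for_token(code):
        response = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        })
        response.raise_for_status()
        return response.json()["access_token"]

Notice that the client never sees the resource owner's password; it only ever handles the code and the resulting token.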

The next blog posts in this series will discuss these OAuth grant types.

Stay tuned for my online adventures unravelling OAuth 2.0, and interact with me through my blog, Twitter and Facebook.



Monday, December 31, 2012

Hello 2013, Goodbye 2012

For me, 2012 was a year full of challenges, of setting up ways to meet peers (Facebook) and of gaining experience in online identities, software testing and compliance.
This I want to continue in 2013.
Next year I want to deepen my security knowledge of online sharing protocols like UMA, OAuth 2.0 and OpenID, and my adventures (work experience, conference meetings) will be highlighted.
Also, I will continue to follow the news on privacy, big data and compliance and blog about it to express my views on these subjects, which will come together in 2013.

Papers will be written, conferences will be visited and no worries, software and protocols will be tested.

All thoroughly done to give you a quality, up-to-date repository about testing Software as a Service, with a flavor of online identities.

See you all in 2013 on Facebook, TestingSaaS-blog and Twitter!
And perhaps in real life too!!

Tuesday, September 25, 2012

Mobile payment ecosystems: a challenge for compliance testing

If you have read my blog lately, you know I am very interested in compliance and software testing, especially for SaaS, NFC and mobile payments.

Since Google initiated Google Wallet, I have kept a close eye on what the compliance institutions were planning to do to develop testing programs for the emerging mobile payment ecosystems.
Especially because this is a new payment ecosystem, still unknown to many of the merchants, acquirers and customers who are going to have to deal with it.
That it is susceptible to malicious attacks is illustrated by the POS (point of sale/checkout) attacks at different merchant locations in the USA, such as Subway and Penn Station.
As always, the criminals are the quickest to find the weak spots of the payment ecosystem, while the compliance institutions lag behind but have to react.
The compliance institution publicly expected to design countermeasures is the PCI Security Standards Council (PCI SSC). And recently it has 'done' just that.

The PCI SSC has provided clarifications on how every organisation (which stores, processes or transmits credit card/debit card data) should comply with the already devised PCI Data Security Standard (PCI DSS). A standard which should not be unknown to you if you have read my former blog posts.

However, these are still clarifications of the PCI DSS; no update is planned until 2013.
Why clarifications?
Well, the entity which executes the annual(!) compliance validation depends on the volume of transactions involved. If the volume is small, a Self-Assessment Questionnaire (SAQ) is used. If the volume is large, a Qualified Security Assessor (QSA) tests whether the stakeholder is PCI DSS compliant.
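
To illustrate that decision rule (not the official thresholds, which vary per card brand and merchant level), here is a toy sketch in Python; the 6,000,000 transactions-per-year boundary mirrors the commonly cited Level 1 cut-off and is used for illustration only.

    # A toy sketch of the volume-based validation rule described above.
    # Real PCI DSS merchant levels and thresholds differ per card brand;
    # the boundary below is illustrative only.
    def validation_method(transactions_per_year):
        if transactions_per_year > 6_000_000:
            return "on-site assessment by a Qualified Security Assessor (QSA)"
        return "Self-Assessment Questionnaire (SAQ)"

    print(validation_method(250_000))    # small volume -> SAQ
    print(validation_method(8_000_000))  # large volume -> QSA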

When you read the PCI DSS you will encounter a lot of text, but no real details about how to test mobile payment ecosystems effectively (by QSA or through an SAQ), resulting in inadequate coverage. In particular, what was part of the ecosystem to be tested (the 'scope') was insufficiently explained.

These and other points of concern are now written down in the Summary of 2012 Feedback for the PCI DSS and Payment Application Data Security Standard (PA-DSS). This document describes the international (!) feedback given to the PCI SSC by its stakeholders (merchants, acquirers, QSAs and payment software vendors) regarding PCI DSS v2.0 and PA-DSS.
Regarding the PCI DSS, the feedback suggestions were mainly about the already mentioned need for scope guidance, a more detailed description of requirements, a simplified SAQ and an update of the password requirements. The last one is, given the current changes in identity management, an effective step in bringing PCI DSS procedures up to date.

Now I know what you're thinking: 'PCI SSC stakeholders, USA, is this also important for the European mobile payment system?'
Yes, it is. It is not a U.S. standard but a global one, also affecting the European, African and Asia-Pacific (mobile) payment environments. The standard is the result of aligned policies from different U.S. credit and debit card companies like MasterCard and VISA.
If these guidelines are not met annually, non-compliant merchants and acquirers face the consequences: fines and litigation costs of up to $500,000, degradation to a lower PCI compliance level, and lower brand reputation and consumer confidence in the long term.

So it is not surprising that merchants and acquirers are giving feedback on guidelines they must comply with, but do not know how to comply with.
Let's see how the feedback from the PCI SSC stakeholders results in a more testable, more usable and, hopefully, qualitatively better PCI DSS standard.



Saturday, July 28, 2012

A tester's nightmare: wrong test data

Imagine the following situation:
A team of IT professionals is working hard to get a GO for the release of a module of their online CRM software. The architect designed the module, the developer built the module and, last but not least, the tester has to do an end-to-end test to see whether the whole chain still works after the module is incorporated into the software.
As always, the end-to-end test is the finishing test, so the tester is on the critical path of the project. Better said, he has only two days to test and give sound advice on whether a GO or NO GO should be given.
Naturally, the tester prepared this test from the beginning.
While executing the test cases something weird happens: data is not migrated from the module to the other parts of the SaaS landscape.
Awkward, because with the other test cases this does not happen.

The tester thinks it's a bug, because the only difference from the other test cases is the situation it wants to test, and the expected result does not occur, so hey, it must be a bug. It's almost D-day and the tester has to get on with testing.
So the tester documents the bug and communicates it to the team so the developer can pick it up, and the tester continues testing. Luckily it was not a showstopper.
The next day (the last day of testing!!) the developer comes to the tester and says it's not a bug, it's wrong test data, which the database could not process.
After testing with the correct test data the expected situation occurred. Shoot, almost a day lost due to bad test data.

What can we learn from this?

A lot of you would say the tester screwed up here and that he is responsible, but is he?
No!

In the beginning of the story I described the project and its team.
And that's exactly where the responsibility lies, the team!

This single point of failure (wrong test data) would not have occurred if other team members had reviewed the test cases and the test data used.
Then the rotten apple in the test data would have been removed early in the process, even before the testing started.
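
Next to peer review, part of this check can be automated. Below is a minimal sketch in Python of a pre-test sanity check on the test data; the file name, column names and rules are hypothetical placeholders and would have to match your own data.

    # A minimal sketch of a pre-test sanity check on test data.
    # File name, columns and rules are hypothetical placeholders.
    import csv

    REQUIRED_FIELDS = ["customer_id", "email", "country_code"]

    def validate_test_data(path):
        """Return a list of problems found in the test data file."""
        problems = []
        with open(path, newline="") as f:
            for line_no, row in enumerate(csv.DictReader(f), start=2):
                for field in REQUIRED_FIELDS:
                    if not (row.get(field) or "").strip():
                        problems.append("row %d: missing %s" % (line_no, field))
        return problems

    issues = validate_test_data("testdata.csv")
    if issues:
        raise SystemExit("Bad test data, fix before testing:\n" + "\n".join(issues))
    print("Test data looks sane; start the end-to-end test.")

Run as the first step of the end-to-end test, such a check would have flagged the rotten apple before a single test case was executed.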

A tester reviews documents and tests software for a living, but we are also human beings, and we too make mistakes.
I have been a tester for 8 years now, and every now and then I use bad test data.
But I have learned to get my test cases reviewed and so diminish the risk of using bad test data, stalling the test process and unintentionally getting the project into the danger zone.

Bottom line: when you are testing in a very short time period, make sure the whole team is aware of the importance of testing and let them review your test cases, so the situation described above does not occur.