Enough SaaS for you?

The ramblings of a seasoned IT Guy…

SaaS (Software as a Service) is now a prevailing part of modern life. There are many services, both nationally and internationally, that we rely on every day. From checking your email to writing complex code – SaaS plays an essential and growing role in our work and home lives.

It’s a great boon to people, a genuine example of a positive application of technology. What was once a per-device / per-person setup can now, in a lot of cases, be done centrally, easily and remotely. It allows for highly collaborative solutions and sharing of information – an evolution of the internet. Call it web 2.0 or whatever you like, it’s a positive progression. It allows for commodity-level services at a (mainly) reasonable price. For vendors it allows a more predictable or “smooth” revenue stream, rather than the feast and famine of ‘point-in-time’ software releases.

You can sense a “but” coming, can’t you? Well, yes – perhaps there is…

Users, services and devices can be updated frequently. In the wider scheme of things these are GOOD things – security updates in particular – but of course there are those regular “feature packs” or “improvements” which will typically change something to the ire of some. It happens; you can’t please everyone, and we’re all different with different views and needs.

And, as we’ve seen a fair amount recently… it can go wrong.

We won’t explore the reasons behind this here – there’s far more detailed analysis a mere search away – but there are some fundamental questions that we really should ask.

– How much trust should we put in the Vendors to run any core solutions (if in fact we know who they are at the end of complex supply chains)?

– Do we truly know our own and our customers’ usage patterns?

– Do we need to be on the “bleeding edge” of updates?

– Do we have visibility of the level of risk if something went wrong?

– Do we see that Vendor as a business risk?

– Do we have the expertise to support what we are running?

There are bound to be more questions, but what is the answer?

Frustratingly, there’s never a simple cookie-cutter approach to these things.

So many of the services we consume are either a key component of your day-to-day operations or run services (at a low level) for the management of your devices and users.

One key aspect to remember is that in a SaaS solution you are always effectively co-locating your services with many other people’s systems; you simply have the vendor’s code and services controlling the boundaries and resources between them.

Fundamentally, you don’t run it on your own equipment. That’s it. Aside from the speed of “innovation” and some of the service maintenance, all you need to do is maintain your own instances / services / virtual machines (a simplification – we all know there is a lot more to it than that). Those SaaS organisations have varying levels of support (some excellent), but the upshot is they won’t run your business for you.

There is a general view (in some cases) that “it’s in the cloud, it’s more secure and safe” – is it?

Certainly, if you look at the shared responsibilities that are part of the overall service, you may find that simple aspects such as backups are not part of the provider’s remit.

Equally, if you have any classification, compliance, sovereignty or regulatory requirements, does the SaaS solution fit within those? Pay particular attention both to where the data is stored or could be stored, and to where it is processed or could be processed – the CLOUD and PATRIOT Acts being the obvious examples.

What happens when that Vendor makes an error?

Let’s be realistic. People make mistakes. Automation makes mistakes, as it’s built by people. It’s something we have to account for. Humans are in the loop; errors will happen.

When things go wrong you may be one of thousands (or more) of customers who are impacted. That’s just the way it is with cloud-based technology.

SaaS vendors will let you have the latest and greatest update as of release, be that a malware pattern file or a deployed SaaS update – great. But…

A feature or code update could fundamentally change your business process. An understanding of the impact if one of these breaks core components of your operating procedures or services is a good thing – for example, an update that changes the way information flows in the business or the way a device works (or breaks either), or a vendor adding a feature which changes your risk profile.

How about less tangible items such as changes to the product licensing? That’s always a risk, but what would the impact be if you were dependent on the service and moving away was difficult?

Know what the services you consume actually do, and what level of utilisation and reliance the business has on them – even down to the level of “ah, if this breaks it could do x…”.

The usage conundrum

Usage is key. Sometimes it’s unknown until you try. If you are providing services internally for your own organisation, you’re going to know or at least have a good idea of what is what.

If you are deploying a service in a SaaS solution for other customers to consume, that’s a completely different container(s) of fish.

Let’s ignore the internal items, as we’ll assume you have a good understanding of utilisation – but for your customers, how do you handle it? Sure, you can use containerisation, scalability and just-in-time models, but there’s more to it than that.

Let’s say you allow customers to download data that is generated – we’ll use the example of a report in this case.

The customer generates the report with the criteria they need, verifies it and downloads it. Fantastic (if simple) use case.

What they will not see, and what you have to factor in, is that utilisation. Ultimately it comes down to a functionality vs. cost question about what you provide. Do you allow them to generate a report that might consist of millions of lines of data, with all that processing and all that potential bandwidth usage?
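
One pattern for handling this is to put a ceiling on what will be generated synchronously and push anything bigger to a queued, offline export. Here’s a minimal sketch of that idea – the limits, the ReportRequest shape and the fetch_report_rows / estimated_row_count helpers are all hypothetical placeholders, not a real API:

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Iterable, Iterator

# Hypothetical limits - tune these against your own cost model.
MAX_SYNC_ROWS = 50_000       # largest report we will stream back in the request
MAX_TOTAL_ROWS = 5_000_000   # above this we refuse and ask for narrower criteria


@dataclass
class ReportRequest:
    customer_id: str
    criteria: dict


def estimated_row_count(request: ReportRequest) -> int:
    """Placeholder for a cheap estimate (e.g. COUNT(*) or table statistics)."""
    return 100


def fetch_report_rows(request: ReportRequest) -> Iterator[dict]:
    """Placeholder for whatever actually produces the report rows."""
    yield from ({"row": i, "customer": request.customer_id} for i in range(100))


def handle_report_download(request: ReportRequest) -> Iterable[dict] | str:
    """Serve small reports inline, queue big ones, refuse the unreasonable."""
    rows = estimated_row_count(request)

    if rows > MAX_TOTAL_ROWS:
        return "Report too large - please narrow the criteria."
    if rows > MAX_SYNC_ROWS:
        # Queue an asynchronous export (and notify later) rather than holding a
        # connection open and paying for peak compute and bandwidth.
        return f"Export of ~{rows} rows queued for customer {request.customer_id}."

    # Small enough: stream rows back rather than building the report in memory.
    return fetch_report_rows(request)


if __name__ == "__main__":
    result = handle_report_download(ReportRequest("acme", {"month": "2024-01"}))
    if isinstance(result, str):
        print(result)
    else:
        print(f"Streamed {sum(1 for _ in result)} rows")
```

The exact thresholds matter far less than the fact that you have chosen them consciously, with the cost of each report in mind.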

Ideas

Every organisation is different. Fact.

You may find that you prefer to place all your services in the cloud – there’s nothing wrong with that; it may suit what you are doing – but some considerations could be:

Data Sensitivity: Consider country-based storage options, or even private cloud – either on-premise or hosted – to cater for your needs. Understanding how the vendor processes your data and where it is physically located is key here.

Updates for devices: Consider pre-production roll-outs ahead of the entire organisation, or splitting the vendors that provide them (if feasible). For example, different teams could have different vendors. It’s extra overhead to consider, but if one goes wrong only those teams are impacted – or, with a pre-production group, you potentially have a chance to stop the update before everything is impacted.
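
To make the ring idea concrete, here’s a tiny illustrative sketch – the ring names, membership descriptions and deferral periods are invented examples, not recommendations:

```python
from dataclasses import dataclass


@dataclass
class UpdateRing:
    name: str
    members: str        # who sits in this ring (illustrative description only)
    deferral_days: int  # how long after vendor release this ring takes the update


# Hypothetical rings: a small pilot group takes updates first, the bulk of the
# organisation follows only once nothing has broken.
RINGS = [
    UpdateRing("pilot", "IT team plus a handful of volunteers", deferral_days=0),
    UpdateRing("early", "one team per department", deferral_days=7),
    UpdateRing("broad", "everyone else", deferral_days=21),
]

for ring in RINGS:
    print(f"{ring.name:>5}: {ring.members} (updates after {ring.deferral_days} days)")
```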

Updates for SaaS: Consider whether the vendor has a release pattern that fits in with your business, a roadmap of changes you can review, input into and plan for, or the option to be on a different release track so you can plan ahead for change. You can’t plan for everything, but the more heads-up you get, the more time you have to consider options and impact.

Usage: If you have data that is accessed in a predictable usage pattern, does not change a lot and does not need to be readily accessible, does it really need that level of cloud service? Is it worth considering a mix of burstable access and / or steady usage, or even on-premise hardware to process it? Sometimes a cost-benefit analysis may actually show a preference for your own systems.
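
As a back-of-envelope illustration, that comparison is simple enough to script and rerun as quotes and usage change. Every figure below is a made-up assumption – substitute your own vendor pricing, hardware quotes and staff costs before drawing any conclusion:

```python
# Rough cost comparison for a steady, predictable workload over a fixed term.
# All figures are illustrative assumptions, not real prices.

TERM_MONTHS = 36  # compare over a three-year horizon

# Hypothetical cloud monthly costs
cloud_monthly = {
    "compute": 900.0,       # always-on instances sized for the steady workload
    "storage": 150.0,       # standard storage for the data set
    "egress": 250.0,        # predictable download / bandwidth usage
    "support_plan": 100.0,
}

# Hypothetical on-premise costs
onprem_capex = 18_000.0     # servers, storage and installation, bought up front
onprem_monthly = {
    "power_and_hosting": 120.0,
    "maintenance_contract": 150.0,
    "admin_time": 400.0,    # share of an engineer's time
}

cloud_total = sum(cloud_monthly.values()) * TERM_MONTHS
onprem_total = onprem_capex + sum(onprem_monthly.values()) * TERM_MONTHS

print(f"Cloud over {TERM_MONTHS} months:      {cloud_total:>10,.0f}")
print(f"On-premise over {TERM_MONTHS} months: {onprem_total:>10,.0f}")
print("Cheaper on these assumptions:", "cloud" if cloud_total < onprem_total else "on-premise")
```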

Backups: Does the vendor allow direct access to the data so you can copy it to a storage medium of your choice? Is there a 3rd party tool that can be used? What is the plan to extract your data if you choose to move it at a later time?
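
As a minimal sketch of what “direct access to the data” can look like in practice – the export URL, token handling and file format below are hypothetical, and a real job would add retries, integrity checks and alerting – a scheduled export to storage you control might be as simple as:

```python
"""Minimal sketch of a scheduled 'get our data out' job.

Assumes the vendor exposes some export endpoint (URL, auth and paging are
hypothetical) and you simply want a dated copy on storage you control.
"""
import datetime
import pathlib

import requests  # third-party HTTP client: pip install requests

EXPORT_URL = "https://api.example-vendor.com/v1/export"   # hypothetical endpoint
API_TOKEN = "REPLACE_ME"                                   # store securely, not in code
BACKUP_DIR = pathlib.Path("/backups/saas-vendor")          # your own storage


def run_export() -> pathlib.Path:
    """Pull a full export from the vendor and write a dated copy locally."""
    response = requests.get(
        EXPORT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=300,
        stream=True,
    )
    response.raise_for_status()

    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    target = BACKUP_DIR / f"export-{stamp}.json"

    # Stream to disk in chunks so large exports don't sit in memory.
    with target.open("wb") as fh:
        for chunk in response.iter_content(chunk_size=1024 * 1024):
            fh.write(chunk)
    return target


if __name__ == "__main__":
    print("Export written to", run_export())
```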

The Checklist

Some food for thought for when you evaluate vendors. They may seem familiar – these are no different from standard business continuity thought processes. It’s not exhaustive, but for a start:

  • Is this service core to our business and what are the impacts that occur when things go wrong or change? Does that vendor have a robust testing policy?
  • What is the fallout from those impacts? Be it procedural, financial or reputational.
  • What mitigations can we put in place should something happen? Does the vendor have the ability for us to be on a later rollout schedule for updates?
  • What fallback or restoration plans can we / should we put in place? Does the vendor have an SLA for this and / or robust plans?
  • What security wrap is in place for the service?
  • What’s the licence model and length of term?
  • Is there data there that we should be able to recover within a given period of time, and how are we going to do it?
  • What are our customers going to do with it and do we have the knowledge to price it and / or manage that?
  • What type of data is it? Should we place extra safeguards around it?
  • Have we assessed if this vendor processes the data (or metadata) in a way or a location that may cause issues – for example outside the country of use?
  • How is our data segregated from others in the service?
  • Do we really need to use “cloud native” – is it more appropriate to use a hybrid or multi-cloud strategy? Do we need to store it in the country of origin?
  • Is our usage steady and predictable? What is the best cost model for that?
  • How do we support it for both our users and customers?
  • What’s the exit plan? How do we get our data out if we need to (and at what cost!)?
