It’s a great time to be working with technology. The pace of change keeps accelerating, and as a result new opportunities to create real value with IT arrive faster and faster. Hosting services have transformed into hyperscale clouds, putting all the servers and services any business could need at its fingertips.
Methodologies have shifted too: approaches built around the timescales of buying and managing the tin, spinning rust and wires that make up most on-premises datacentres have given way to integrated processes and more flexible teams. And naturally, everyone has jumped on board. Every organisation has built DevOps teams, reworked its applications to run using cloud-native patterns and adopted at least one cloud datacentre for its fast-moving workloads. Haven’t they?
I regularly speak to CIOs and senior delivery managers, and they often want to make changes but hit a predictable set of internal blockers. Do any of these sound familiar?
- I need to secure my data, and on-premises is safest (AKA I can’t get this past compliance)
- My team doesn’t have the capacity to take on the additional work or the skills to really use these new innovations
- I can’t build a business case for this with the board
- We bought a load of tin last year and we need to sweat it
Let’s have a slightly jovial look at each of these over the course of this and the next few articles. But first, the steamroller. The current pace and breadth of IT innovation is amazing. It is acting like a steamroller, driving through old business models and practices. Most people working with IT have three choices. They can stand in front of the steamroller, waving their arms and trying to get it to stop; this is hard, thankless work that will get them flattened. They can stand aside and watch the steamroller change the landscape they work with and destroy their hard work. Or they can drive the steamroller and make the decisions about what stays and what gets squashed. In the end, if there has to be change (and there does), driving it is better than being hit by it. And, if you’re lucky, you get to drive a steamroller!
On-premises is safest
Nope, no, nyet, nein, non. Everyone who has been in IT for more than 10 years has heard stories about the cleaner pulling the power cable from a rack to power their hoover. These apocryphal tales highlight a real truth. How can outsourced maintenance staff, security guards and others be safely given intimate access to your entire physical server estate?
The sad truth is that on-premises servers give the illusion of security, but most organisations cannot afford the level of defence in depth that Microsoft, Google or AWS can provide for physical security. Layers of process, locks, guards and cameras protect their datacentres. Who is guarding yours? When disks are removed, are they destroyed immediately, or does a member of staff take them away to do something with them later? Who checks this process to ensure that the data and disk are destroyed properly?
Next come questions like “who guards your backups as they travel through the building?” if you still use tape. And if you don’t use tape, then you already have a cloud adoption (possibly rebadged by a vendor and hidden from you).
Virtual security should be there by design, and it is needed just as much on-premises as in the cloud. Access to resources should be tightly controlled by processes and systems that use intelligent technology to highlight logins by unusual people or from unusual locations. Such technologies are genuinely easy to use in the cloud, and yes, some of them can be used on-premises as well. In Azure, however, you control your environment with code, which makes it relatively easy to implement a “best of breed” environment based on patterns that are proven to work, with correct segregation between services in line with vendor and industry best practice. Naturally, everyone does that on-premises too… Don’t they?
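To make the “control your environment with code” point concrete, here is a minimal sketch using the Azure SDK for Python (the azure-identity and azure-mgmt-resource packages). The subscription ID, region, tier names and naming convention are all illustrative assumptions, not a prescription:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Illustrative values: swap in your own subscription and naming convention.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
LOCATION = "uksouth"

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, SUBSCRIPTION_ID)

# Segregate workloads into separate resource groups per tier, tagged so that
# access control and policy can be applied consistently in code rather than
# configured by hand.
for tier in ("web", "app", "data"):
    client.resource_groups.create_or_update(
        f"rg-prod-{tier}",
        {
            "location": LOCATION,
            "tags": {"tier": tier, "environment": "prod"},
        },
    )
```

Because the layout lives in code, the segregation can be reviewed, versioned and rebuilt on demand, which is rarely true of a hand-built on-premises estate.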
There is far, far more to security and threat management than these few points, but I hope one thing is clear: cloud can be as secure as on-premises, if not more so. Achieving this, however, takes some work to move us from where we are to where we need to be.
I don’t think we can get this past compliance!
Warning to the reader: the next paragraph is filled with secrets… well, information about secrets, anyway. It’s a massive summary and simplification, but it can safely be skipped if you really don’t care about compliance, data classification or information security.
The level of certification for Azure in the UK is Official-Sensitive. For those who haven’t read the thrilling 35 pages of Government-Security-Classifications-April-2014, the summary definition of Official is:
“The majority of information that is created or processed by the public sector. This includes routine business operations and services, some of which could have damaging consequences if lost, stolen or published in the media, but are not subject to a heightened threat profile.”
Recent guidance (see previous article) confirms that all NHS data is at most Official, and may also be Sensitive. Again, back to those delightful 35 pages to see that:
“A limited subset of OFFICIAL information could have more damaging consequences (for individuals, an organisation or government generally) if it were lost, stolen or published in the media. This subset of information should still be managed within the “OFFICIAL” classification tier, but may attract additional measures (generally procedural or personnel) to reinforce the “need to know”. In such cases where there is a clear and justifiable requirement to reinforce the “need to know”, assets should be conspicuously marked: ‘OFFICIAL–SENSITIVE’.”
TL;DR: all of the above means one thing. Unless you’re part of the subset of central government departments and security services who have access to real SECRET information (or you’re actually a spook who can access TOP SECRET data), you can store everything in Azure perfectly safely – as long as you do it right.