IT Security – are you hitting IT hard enough?

Paul Foley

Chief Technology Officer

As we all know, the cyber security environment remains challenging. For those who don’t have a reasonable security stance, the chances of the word ‘breach’ occurring in daily conversations continue to increase. So what can be done to improve your company’s stance and better manage your risk?

I think we all take it for granted that everyone does pen tests (penetration testing) and that this is handled by a third-party company that does nothing but this.

But this is not actually the case. A lot of companies either don’t perform regular pen tests or they run them internally with staff who may have some of the required skills but typically don’t have them all. And whilst you could argue that something is better than nothing, I’d argue that something gives senior management a false sense of security when in fact they should be worrying.

Several of you reading this will be asking “is your stance based purely on your ability to perform pen tests?” – and the brutally short answer is “NO!” – and anyone who tries to convince you that the answer is ‘yes’ should not be in your IT team unless they’re visiting with a nurse/responsible adult.

What else can you do to improve your security stance and what’s the impact?

Interestingly, your security stance is like your culture: it starts at the front door and goes all the way through the building to the car park and beyond.

It starts with you being ISO or SOC 2 certified. It continues with having security policies and training in place, plus automated tools to validate source code quality and the health of your deployment packages. It includes formalised testing and, if you’re fortunate enough to have a good QA team, automated regression testing. It involves your business-facing teams validating releases. Then we move on to the infrastructure (network security, firewalls, etc.) – and finally we go back to external testing.
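To make the ‘automated tools’ step a little more concrete, here is a minimal sketch of a pre-release gate that a CI pipeline might run. It’s a sketch under assumptions – a Python codebase with bandit (static security analysis), pip-audit (known-vulnerable dependencies) and pytest installed, and a hypothetical tests/regression suite – not a prescription:

```python
import subprocess
import sys

# Hypothetical pre-release gate: every check must pass before release.
# Assumes a Python codebase with bandit, pip-audit and pytest installed.
CHECKS = [
    ["bandit", "-r", "src/", "-ll"],   # static security analysis (medium severity and up)
    ["pip-audit"],                     # known vulnerabilities in installed dependencies
    ["pytest", "tests/regression"],    # hypothetical automated regression suite
]

def main() -> int:
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"Check failed: {' '.join(cmd)} – blocking the release")
            return 1
    print("All pre-release checks passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```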

Before we’ve released anything to a production environment, we’ve got a culture in place that embraces security, we have tools in place to validate development efforts, we have a QA team to protect the business from IT, and we have a business team making sure that they understand and agree with IT’s deliverables – and then we have infrastructure to detect threats and restrict access.

What are we trying to achieve with pen tests if we already have all those things in place?

With a pen test we’re trying to simulate a specific set of potential attacks. Normally a pen test covers new functionality that you’ve added (or made significant changes to). If you haven’t made significant changes but you’ve updated operating systems or libraries, then your pen tests would typically cover these instead.

This sounds great, but pen tests are typically quite limited in their scope – which is one of the reasons that you perform multiple tests over the course of the year.

Upgrading from pen tests to Red Team testing

Whilst a pen test attacks a specific function, a Red Team test attacks a complete business area.

A pen test might check that an API endpoint is secure, but a Red Team test might check that a hacker can’t use any method possible to gain access to a system.
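To show how narrow that pen-test check can be, here’s a minimal sketch of a single automated check: does a protected endpoint reject a request with no credentials? The endpoint URL is hypothetical, and a real pen test would probe far more than this:

```python
import requests

# Hypothetical protected endpoint – replace with your own.
ENDPOINT = "https://api.example.com/v1/accounts"

def rejects_unauthenticated(url: str) -> bool:
    """Return True if the endpoint refuses a request sent without credentials."""
    response = requests.get(url, timeout=10)  # deliberately no auth header
    # Expect 401 Unauthorized or 403 Forbidden; anything else is a finding.
    return response.status_code in (401, 403)

if __name__ == "__main__":
    if rejects_unauthenticated(ENDPOINT):
        print("PASS: endpoint rejects unauthenticated requests")
    else:
        print("FAIL: endpoint reachable without credentials – investigate")
```

A Red Team engagement wouldn’t stop at that one door: phishing, misconfiguration and physical access are all potentially in scope.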

The Red Team is always an external company and, depending on your mandate with them, they will be more or less imaginative.

To give you real-world examples: we’ve provided access to infrastructure to allow hackers to simulate an internal attack, but there are companies out there who have gone as far as to get their staff to take jobs as cleaners, gain physical access to the client’s building and physically remove infrastructure (we use Azure data centres, so that’s not really possible for us).

Red Team tests are a really good way of testing specific scenarios, and by using external resources the results allow a company to identify, without bias, any issues that may exist around those scenarios.

Red Team testing is a positive step forward – but it can also be restricted by time and/or budget.

Taking it to the next level

One of the things that all IT departments should be doing on a regular basis is Disaster Recovery (DR) exercises.

The point of a DR exercise is to simulate a failure somewhere in the business – the wider the scope of these exercises, the better it is for the business.

The goal of the DR exercise is to ensure that, in the case of a significant failure, the business will bounce back in line with the defined internal processes. It’s a way of making sure that everyone knows what to do when things go sideways, and that the theory of how the business can recover matches the practice of what happens.

What does this look like in real terms?

In real terms only the risk manager and the CTO/CIO should know what the DR exercise includes.

It should include as many people as possible across the entire business, preferably across language, geographical and cultural divides – and it should be as realistic as possible.

There are companies who simply turn off a server and, when the IT department puts it back online, let everyone know what a successful exercise it was.

When we perform a DR exercise, we’re trying to gain value for the business: we’re trying to identify risks that aren’t being addressed as well as they could be, we’re raising awareness across the business, and we’re validating that as a business we are able to recover with minimal RPO and RTO (Recovery Point Objective – how much data you will lose; Recovery Time Objective – how much time it will take to recover).
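As a hedged illustration of those two measures, here’s a minimal sketch (with hypothetical timestamps) of how you might calculate the RPO and RTO actually achieved during an exercise:

```python
from datetime import datetime

# Hypothetical timestamps taken from a DR exercise log.
last_good_backup = datetime(2024, 6, 1, 2, 0)    # last restorable backup
failure_time     = datetime(2024, 6, 1, 9, 30)   # simulated failure begins
service_restored = datetime(2024, 6, 1, 13, 15)  # service back in operation

# RPO achieved: anything written after the last backup is lost.
rpo_achieved = failure_time - last_good_backup

# RTO achieved: elapsed time from failure to restored service.
rto_achieved = service_restored - failure_time

print(f"Achieved RPO: {rpo_achieved}")  # 7:30:00 – up to 7.5 hours of data lost
print(f"Achieved RTO: {rto_achieved}")  # 3:45:00 – 3 hours 45 minutes to recover
```

If those numbers are worse than the objectives you’ve committed to, the exercise has done its job: it has found a gap before a real incident did.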

So, if you’re simply turning off a server, you’re not putting the organisation under stress and you’re performing a lightweight attack to make people happy – which means you’re doing your organisation a disservice, and as a CTO/CIO or Risk Manager you should probably not put your name against it.

So, what did we do?

In our last DR exercise, we simulated an insider attack: we used social engineering to gain access to environments that a user should not have access to, and then provided an external player with access. We then tasked the DevOps team with identifying what the “deceased” team member and the external attacker had done, and in which environments. We then had the IT department and the business audit the system to identify every single change that had been made and what the impact was. In the process, we validated our internal processes, we validated the responses of the different teams and individuals, and we validated our incident management plan.

We also identified several improvements, such as implementing additional reports within our SIEM (Security Information and Event Management system), improving the AI model within it, and adding new subjects to our security training.

The company derived value and our clients, completely oblivious to our efforts, benefitted from our teams practicing for ‘what if’.  

And for reference: last time I “killed” the head of DevOps during a cyber-attack; this time I “killed” a key member of the business to hide evidence of infiltration – next time, I’m probably going to eliminate more people. People are still talking about our DR exercises – they’re a good source of after-dinner conversation, they reinforce security training, and they ensure your team are ready for “what if?” – but perhaps most importantly, they help solidify a company culture.

If you take nothing from this text except that last phrase – then you will have gained something positive for your organisation.

*No members of the IT team were actually harmed during the filming of ‘The DR Games’.

Paul Foley has worked in financial services for over 20 years, including trading, wealth management, private banking and institutional crypto. He joined qashqade in September 2022 as CTO and is responsible for product development and the IT function of qashqade. Paul holds a BSc from Derby University.
