When it comes to vulnerability testing, what should be in scope? In my view, that’s a really easy question to answer.
Everything.
Everything connected to your organisation’s network or using your organisation’s resources, including in the cloud, is in scope. The weighting of vulnerability findings will take into consideration their physical location as well as the data they hold and the services they provide. This might also change how frequently you run vulnerability tests against them. Unless we include something in scope, we’ll never know what risk it presents to us.
Focus Areas
- Networked devices
- Cloud Services
- Mobile devices (smartphones, tablets, etc.)
1. Networked Devices
This section addresses devices directly connected to the organisation’s networks.
Defining the vulnerability testing scope of your internal network-based scans
We need to test everything. With that in mind, we’ll start by including everything in RFC 1918:
- 10.0.0.0 - 10.255.255.255 (10/8 prefix)
- 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
- 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
That’s a huge amount of address space to be scanning, which isn’t very practical. Instead, we’ll look at other sources to see which ranges to prioritise. We do still need to run asset discovery scans against all of these ranges periodically to make sure our detailed scoping and scanning is catching everything. Once a quarter, once every six months, or even once a year may be enough. All we’re trying to do is prove we haven’t missed anything with the next steps.
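To get a feel for the size of the job, here’s a minimal sketch using Python’s standard `ipaddress` module that splits the three RFC 1918 ranges into /24 batches a discovery scanner could work through. The /24 batch size is my assumption, not anything prescribed; pick whatever chunk size your scanner handles comfortably.

```python
import ipaddress

# The three RFC 1918 private ranges we start from.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def discovery_batches(prefix=24):
    """Split the private ranges into fixed-size batches for a discovery scanner."""
    for net in RFC1918:
        yield from net.subnets(new_prefix=prefix)

# 65536 + 4096 + 256 = 69888 separate /24s to sweep
total = sum(1 for _ in discovery_batches())
```

Nearly seventy thousand /24s is exactly why this full sweep only needs to happen occasionally, as a safety net behind the more targeted scoping below.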
Looking at Routing Tables
Routing tables from your network equipment should be one of the best sources of information about which subnets are in use on your network. Dynamic routing protocols such as RIP, OSPF, and EIGRP update other routers with the networks they know about. Each router works out which networks it is directly connected to by looking at its own interfaces and static routes, then builds up a route table to share with all of the other routers it is connected to. Eventually this converges and a full network map is created.
Unfortunately, this can’t be guaranteed to be 100% complete. It is possible to opt-out of sharing specific networks through dynamic routing protocols. Logging into one router and showing the full route table sounds like a great way to find everything but that’s not always the case.
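Even with those caveats, route tables are worth harvesting. Here’s a sketch of pulling CIDR prefixes out of route-table output. The sample text is hypothetical and Cisco-flavoured; real `show ip route` formats vary by vendor and software version, so treat the regular expression as a starting point.

```python
import re

# Hypothetical 'show ip route' output (Cisco-style; real formats vary by vendor).
ROUTE_OUTPUT = """
C    10.1.10.0/24 is directly connected, GigabitEthernet0/1
O    10.1.20.0/24 [110/2] via 10.1.10.2, 00:12:41, GigabitEthernet0/1
S    192.168.50.0/24 [1/0] via 10.1.10.3
"""

def routed_networks(text):
    """Pull unique CIDR prefixes out of route-table output."""
    return sorted(set(re.findall(r"\d{1,3}(?:\.\d{1,3}){3}/\d{1,2}", text)))
```

Run the same extraction across output collected from several routers and you quickly get a deduplicated list of every prefix the routing domain knows about.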
Looking at Router Config Files
We want to ensure that the network information we have is up to date and accurate. Router config files from running devices have to be accurate, otherwise the network won’t work as expected. We may, however, have hundreds or thousands of routers on our network, so how do we review them all efficiently and effectively?
If you’re following best practice you will have offline backups of the config files for every single device on your network. (If you haven’t got this you should probably look into it. Otherwise, when a router blows up at three in the morning and your local person shows up to plug a new one in, what config are you going to ask them to apply to it?) With a central repository of all of the config files we can grep out the route tables, interface configs, and other useful information in batch. With a little bit of scripting, and some sorting and de-duplicating, we can build an accurate list of configured networks and interfaces, and a shell script to pull them all for us in a repeatable way.
If you’re not following best practice you’ll need to SSH to every router and download the latest running-config. While you’re at it, you should look at putting a process in place to keep backups at the same time. Once you have all of the config files, create the same processing script and grab all of the known networks, interfaces, IP addresses, etc. that they contain.
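The batch-processing idea can be sketched in a few lines. This assumes Cisco-style `ip address <address> <mask>` interface lines; adjust the pattern for your vendor, and feed it the text of each file from your backup repository.

```python
import ipaddress
import re

# Cisco-style interface address lines assumed; adapt for your vendor's syntax.
ADDR_RE = re.compile(r"^\s*ip address (\S+) (\S+)", re.MULTILINE)

def networks_from_configs(config_texts):
    """Collect every configured subnet across a batch of config backups."""
    nets = set()
    for text in config_texts:
        for addr, mask in ADDR_RE.findall(text):
            # ip_interface accepts "address/netmask" and gives us back the subnet
            nets.add(ipaddress.ip_interface(f"{addr}/{mask}").network)
    return sorted(nets)

# Two routers on the same subnet collapse to a single network entry.
configs = [
    "interface Gi0/1\n ip address 10.1.10.1 255.255.255.0",
    "interface Gi0/2\n ip address 10.1.10.2 255.255.255.0",
]
nets = networks_from_configs(configs)
```

The set does the de-duplication for us, so hundreds of configs reduce to one clean list of subnets.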
Public IP addresses
Public IP addresses are addresses that fall outside of the RFC 1918 defined ranges. These are ranges that have been assigned for use on the publicly routable internet. Your router config files will contain public IP addresses that are truly public-facing: the interfaces on your firewalls and routers directly connected to the internet. Depending on how well your network has been designed, they may also contain public IP addresses that are used internally only. I’ve seen this more often than you would imagine, even on brand new network deployments.
If you find any public IP ranges used internally, keep track of them, as we’ll need to exclude them from our external testing scope. If we don’t do this we’ll end up scanning other people on the internet without permission, which isn’t cool.
Now that we have a list of all IP addresses/ranges in use on our network we can reduce the three enormous RFC 1918 ranges into something more manageable and specify what our external footprint looks like.
2. Cloud Services
This section covers cloud services such as Microsoft Azure, Amazon AWS, Google Cloud Platform, and SaaS applications.
Defining Vulnerability Testing Scope for Cloud Services
With cloud services we generally mean one of these three things:
- Software as a Service (SaaS)
- Platform as a Service (PaaS)
- Infrastructure as a Service (IaaS)
SaaS and PaaS cloud services are tricky as you don’t legally own any of the infrastructure; it’s not yours to test. What you can do, however, is test your own configuration and implementation to ensure you’re meeting best practices, haven’t left anything open that shouldn’t be (public S3 buckets, for example), and so on.
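A configuration check of this kind boils down to inspecting your own settings rather than probing the provider’s infrastructure. The sketch below flags publicly readable storage buckets; the dictionary shape is an assumption for illustration, not a real cloud API response, so map it onto whatever your provider’s API or config export actually returns.

```python
# Toy misconfiguration check. The dict shape is an assumption for
# illustration, not a real cloud API response.
def public_buckets(buckets):
    """Flag storage buckets that allow public read access."""
    return [b["name"] for b in buckets
            if b.get("public_read") or b.get("acl") == "public-read"]

findings = public_buckets([
    {"name": "backups", "acl": "private"},
    {"name": "web-assets", "acl": "public-read"},   # possibly intentional
    {"name": "hr-exports", "public_read": True},    # almost certainly not
])
```

The point is that a finding like “hr-exports is world-readable” is entirely yours to detect and fix, with no need to touch anything you don’t own.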
Products such as Tenable.io have built-in tools to connect to platforms like Microsoft Azure, Amazon AWS, and Google Cloud Platform through their APIs to test for secure configuration and usage. These three platforms also have their own built-in tools that give you a view of some of the vulnerabilities present in your environment.
IaaS is slightly different as the resources you’re renting are technically yours for the duration of the time you’re running them. Running vulnerability scans against your own virtual machines is generally fair game, but don’t start running scans against their surrounding infrastructure or you’ll have a bad time.
Like pulling network ranges and IP addresses from router config files, you can generally pull a list of all resources from the IaaS platform using their API. Commercial vulnerability scanners tend to include connectors to link to IaaS platforms for you to pull assets back automatically in this way.
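As a sketch of that asset pull, here’s how an inventory API response might be flattened into scan targets. The response shape loosely mirrors AWS EC2’s DescribeInstances output but is an assumption here; substitute your provider’s real API client and field names.

```python
# Hypothetical inventory response, loosely modelled on EC2 DescribeInstances.
SAMPLE_RESPONSE = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-0001", "PrivateIpAddress": "10.0.1.5"},
            {"InstanceId": "i-0002", "PrivateIpAddress": "10.0.1.9"},
        ]},
    ]
}

def scan_targets(response):
    """Flatten the inventory response into a list of IPs for the scanner."""
    return [inst["PrivateIpAddress"]
            for res in response["Reservations"]
            for inst in res["Instances"]]
```

Run on a schedule, this keeps your scan scope in step with instances being created and destroyed, which happens far faster in IaaS than on a physical network.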
In summary, include them in your vulnerability testing scope, but be very careful of how you actually test them as you don’t have ownership of much, if any, of it. If in doubt, check with the supplier before actually doing anything to ensure you’re not breaking their terms and conditions or the law.
3. Mobile Devices
Mobile devices such as smartphones and tablets come with their own challenges. They’re often not connected to your organisation’s network, they may use only cloud services (both officially provided and shadow IT), and they can be full of vulnerabilities.
Any mobile device using your organisation’s data should be in scope for vulnerability testing. Enforcing this can be challenging, as not all devices will have been purchased by the organisation; personal devices (BYOD) are becoming more and more common.
This is where organisational policy is essential for bringing these devices into vulnerability testing scope. A sandboxing MDM solution such as Microsoft Intune will allow you to create a safe space on an end user’s device without requiring full access to everything on it. You can monitor patch levels, restrict access to company data to only apps published and running in your sandbox (and at the correct version), and prevent sharing between corporate and non-corporate apps on the device.
This is all easier said than done. I expect you’ll find some of the greatest challenges when it comes to running vulnerability tests against mobile devices, particularly personal ones. Let me know what your thoughts are as I’m keen to hear other opinions.
Vulnerability Testing Scope Verification
What we’ve done is not a one-off process; it captures a snapshot of what your network looks like at a point in time. We need to repeat it periodically to ensure we’ve captured all the changes that have occurred:
- Host discovery – ping/limited TCP scanning of all of the RFC 1918 ranges in full to see what is out there
- Network config analysis – running a repeatable process against all of the router config files on your network (this includes L3 switches)
Other Sources of Scope Information
While we’ve looked at the network in detail and covered what can and can’t be done with cloud platforms, there are other sources of information that are useful for defining our vulnerability testing scope. Some of these are:
- Public DNS for domains we own
- Private/internal DNS including all computers within Active Directory
- OSINT
- Manually created documentation
- Web gateway/proxy/CASB logs for what people are using to do their work
- Physical inspection for shadow networks (not connected to the rest of the corporate network and with their own ISP connection)
- Talking to people who work for IT in different areas of the business to see what they’ve seen.