Having a diverse background in information security has given me what I think is a unique perspective on both the receiving and giving ends of technical security assessments. In sales support roles, I'm always trying to get to the bottom of what it is exactly that our customer needs most. There's something really rewarding about translating technical jargon and bridging the gap between sales teams and executive-level decision makers. It may sound cliché to say, but I honest-to-goodness have a passion for helping people find the most value in these assessments and walk away more secure than they were before engaging with our team.
Similarly, it really irks me to my core when I come across a statement of work or the results of a previous assessment that was performed in a way that doesn't maximize its effectiveness. With so many technical service offerings available from so many different organizations, it's hard to fault the customer, or even the salesperson, who may simply struggle to understand them fully. Perhaps there's a limited budget available and services weren't properly prioritized. This is why it's so important to have a technical resource available during the early phases of sales conversations, even though most of us in this field would rather just focus on delivery. Today, just about everyone offers a penetration test, but testing methodologies are not always standardized and sadly, some offerings aren't even a pentest by definition!
In this blog I hope to lay out some ways you, as a customer, can help ensure you're getting a quality assessment. If you're a technical resource, I also hope to outline ways you can make sure you're offering the right assessment and delivering consistent, actionable results that are valuable to your customer.
Since I mentioned penetration testing, let's go there. A question I often get, as I imagine many of you readers do as well, is, "What is the difference between a penetration test and a vulnerability scan?" Don't feel bad asking this if you don't know, because sadly, many sales and technical people offering these services don't seem to know either. There's also red teaming. It's important to know what you're getting for your money, but it's even more critical when dealing with PCI, because a vulnerability scan won't fulfill the council's requirements and could leave you failing compliance.
I'm probably going to over-simplify this definition for many, but simply put, a vulnerability scan is a passive or active scan of hosts and services to identify vulnerabilities and their severity, impact, and risk to the organization. There is a lot of value in a vulnerability scan, as it helps you proactively identify and resolve potential patching deficiencies and configuration issues before an attacker does. It also complements your patch management process by ensuring patches aren't being missed. However, this by itself is not a pentest.
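To make the distinction concrete: the core of a vulnerability scan is matching discovered service versions against known issues, with no exploitation involved. Here's a minimal, illustrative sketch in Python. The service names, versions, and severities are hypothetical stand-ins for a real vulnerability feed, not any particular scanner's logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    host: str
    service: str
    version: str
    severity: str

# Hypothetical vulnerability data; a real scanner would pull from a
# maintained feed (e.g. CVE/NVD) rather than a hard-coded dictionary.
KNOWN_VULNS = {
    "ExampleSSH": {"7.2": "High", "6.6": "Critical"},
    "ExampleHTTPD": {"2.4.49": "Critical"},
}

def check_banner(host: str, service: str, version: str) -> Optional[Finding]:
    """Flag a host when its reported service version is known-vulnerable."""
    severity = KNOWN_VULNS.get(service, {}).get(version)
    return Finding(host, service, version, severity) if severity else None

# Banners as an active scan might collect them (made-up data):
banners = [
    ("10.0.0.5", "ExampleSSH", "7.2"),
    ("10.0.0.9", "ExampleHTTPD", "2.4.51"),  # patched version, no finding
]
findings = [f for b in banners if (f := check_banner(*b)) is not None]
for f in findings:
    print(f"{f.host}: {f.service} {f.version} ({f.severity})")
```

Notice the scan stops at identification. A penetration test picks up where this leaves off: exploiting the finding and demonstrating its impact.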
A traditional network penetration test (pentest) is the act of exploiting or validating these vulnerabilities with the intent to demonstrate the impact to the organization. Other tools and techniques can be used to simulate what an attacker may do, going further than just a single scan. It's worth noting that both vulnerability scans and penetration tests may or may not include web applications. Some focus on web applications specifically, often referred to as a "Web Application Pentest" or an "Application Security Test".
Lastly, a red team engagement is essentially a penetration test, but with the intent of simulating an attacker targeting the environment directly. This is often done in an OPSEC-friendly way to "stay under the radar" and avoid detection by defensive teams and technologies. There's also more reconnaissance up front, since the scope and access to the environment aren't likely to be provided by the customer.
Now we should all know, at a high level, the differences between these services. However, you'll find that not all pentests are created equal. If you're comparing penetration testing quotes, keep in mind that you're most likely not comparing apples to apples, so going with the most affordable option doesn't necessarily mean it will satisfy all of your requirements. Now, if you're just looking to "check a box" to meet regulatory compliance requirements or to satisfy a customer request, you may be okay with a basic "out of the box" assessment. Keep in mind that there are firms (I've seen the service contracts) that offer a vulnerability assessment but call it a penetration test, I can only assume in order to offer competitive pricing. If you come across one of them, please point them to this blog. 😉
Something I came across recently was an outsourced pentest that had already been sold. It's not uncommon to find a limited scope, with the intent to sample assets in the environment due to budget or time constraints. I have my own opinion about sampling when it comes to penetration testing (don't do it!). An attacker will often find the easiest path in, the weakest link. If you miss it because you didn't look at everything at least once, you're not doing yourself any favors. This particular SOW stated that about 5% of the environment would be tested every quarter, for a year. This included vulnerability scans as well, with the same scope.
I understand wanting to limit the cost, but in this situation it would be better to take that same investment and put it towards a vulnerability scan of the ENTIRE environment, then focus the penetration testing on critical assets and the highest-severity findings from the vulnerability assessment. If the pentest can only be done twice a year, that's still better than four very limited tests. The way this was set up, at 5% per quarter they would cover at most 20% of the environment in a year, so they'll never have a complete picture of their environment at any one point in time. Had I been involved from the beginning, or had this been Pondurance offering the service, I would have made these suggestions to the customer in a pre-sales conversation.
Frequency of Testing
I just touched on it in the last paragraph, but the frequency of testing can play a role in the thoroughness and efficiency of an assessment. It is commonly recommended to perform a penetration test about twice a year, due to the dynamic nature of enterprise environments and the frequency with which new security vulnerabilities are introduced into any system. How much is too much, though? I'd rather see one comprehensive penetration test a year than two or even four "budget" pentests. Attackers are financially motivated, and if they're targeting a specific organization, time is often not a constraint for them. Consultants, on the other hand, are on the clock. A good penetration tester will make the best use of their time, manually digging and looking for unique opportunities to move laterally and compromise credentials and hosts along the way.
If you decide you do want frequent tests, make sure you're not being overcharged either. The first assessment should have more time allocated to it, with subsequent ones benefiting from the familiarity and experience with the environment gained along the way.
Maturity / Security Posture
Another common gotcha I see is when a customer or salesperson tries to put the proverbial cart before the horse. You can't run before you can walk... I'll spare you the rest. 😃 Sometimes I wonder what my own sales team thinks when I'm in a scoping meeting and actively reducing the scope of our services. Fortunately, my team at Pondurance is as passionate as I am about helping our customers, so they've always been cool (at least in person!) about my stepping in and altering course. Many customers bring this on themselves, assuming the best place to start their security journey is to go all out with a red team assessment.
I often offer a lower-cost but more effective first step, such as a security architecture review (gap analysis) or perhaps a vulnerability management program. Similarly, we include a penetration test with every vulnerability management program offering. Many customers initially want the pentest first, followed by monthly external scans and quarterly internal scans. I always push back on this and instead suggest we do the pentest at the end of the program. What value is there in an easy pentest demonstrating that the environment is full of holes? It's like shooting fish in a barrel, an easy win for the tester. Wouldn't there be so much more value in waiting a year while the customer receives their scan results and works on remediation throughout that time? Then, when they feel they've done everything they can to protect themselves, we test that defense by simulating a real-world attack.
Things to Look For
Pre-Engagement Red Flags
One of the earliest indicators when assessing a new partner for security assessments is the questionnaire. This is the document, or form, the sales representative uses to help scope the engagement appropriately. It should be pretty telling about how and where they place their emphasis on time. While it's true that the number of IP addresses or URLs helps provide a baseline estimate for how much time an assessment may take, there should be follow-up qualifying questions to gain more context around those. How are those hosts accessible? Does a /24 subnet REALLY have all 254 IP addresses in use, or are you paying too much when there are just a handful of live hosts within it?
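As a quick illustration of that scoping question, Python's standard `ipaddress` module makes it easy to compare the addressable size of a subnet against the hosts actually known to be live. The subnet and live-host list below are hypothetical; in practice the live list would come from a ping sweep, ARP scan, or the customer's asset inventory.

```python
import ipaddress

# A /24 advertises 254 usable addresses...
subnet = ipaddress.ip_network("192.168.10.0/24")
all_hosts = list(subnet.hosts())  # excludes network and broadcast addresses

# ...but discovery (hypothetical results here) may show only a few in use.
live_hosts = {"192.168.10.1", "192.168.10.10", "192.168.10.25"}

in_scope = [h for h in all_hosts if str(h) in live_hosts]
print(f"{len(in_scope)} of {len(all_hosts)} addresses are actually in use")
```

Quoting against the 254 figure rather than the handful of live hosts is exactly the kind of over-scoping a good qualifying question catches.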
Are they simply quoting you for everything you ask for, or do they want to discuss your end goals in order to better serve your needs? This also shouldn't be a meeting to throw more stuff at you, but rather a conversation about the bigger picture to ensure the right services can be offered. This can result in a reduction in scope, as I mentioned above. If it's right for the customer, it should be right for the firm.
Ask about testing methodologies and frameworks. Does their testing include manual testing? Are they following a standard process such as the Penetration Testing Execution Standard (PTES)? Is there a quality assurance component for both the technical work and the deliverable? What does the deliverable look like? Can you see a redacted version? Are their reports actionable, with clear recommendations, and not just a regurgitation of all of the issues found?
Lastly, and probably most obvious, is the Statement of Work (SOW). Does this contract clearly define the testing process and expected deliverable formats? Does it specify the project management component? Exactly how much time is dedicated to manual testing versus automated scanning? Are they charging for time spent waiting on tools to run? Are compliance tests called out with their specific requirements? Is retesting something you're expecting, and is it a separate line item? What are their data retention periods? There have been some big security vendors in the news recently for breaches that resulted in sensitive client data being exposed. *cough* Hacking Team *cough*
Engagement Red Flags
Once the engagement is sold, is there a kickoff meeting to discuss expectations around timing, the testing process, and delivery, and are these things covered in detail? Are the rules of engagement specifically called out and discussed in depth?
Something I've found from my experience as a Systems Administrator and being on the receiving end of a penetration test, is that you may have certain expectations for findings. For example, I was at an organization where we had certain service accounts we knew needed to be transitioned, as well as some unsupported operating systems we had scheduled to retire. I specifically looked for these results on the penetration test report as a quick sanity check to make sure they were at least finding the low hanging fruit.
Ironically, as a tester I'm always concerned the client is doing the same to me, and they should! It's a great way to check my work and it's a challenge to make sure I try to find everything I can. No pentester can ever find everything, but again, the low hanging fruit should be discovered and exploited if possible. I wonder how many of our customers have had honeypots and I just didn't know it. 😅 Although this could be seen as a waste of time, it's yet another way to measure the effectiveness of your red team. I've also been pitted against a blue team SOC and blacklisting security devices, which made me really think carefully about how I was going to do my passive information gathering and fly under the radar. Now we get into purple-team operations where we can test the effectiveness of both teams and use the results as a training opportunity for each!
Post-Engagement Red Flags for Future Consideration
It may be difficult to determine the quality of the test based on the results alone. After all, clients typically aren't as technical in the same areas as the company providing the testing services. However, the quality of the report should be obvious. Does it do a good job of breaking down the main issues in a prioritized, easy-to-understand executive summary? Does the report also give enough technical detail so that the responding IT department can resolve the issues?
Does their deliverable contain screenshots as evidence of the exploited vulnerabilities, and are they effective in demonstrating the risk posed? Are there other supporting data files from tool outputs, such as vulnerability scan dumps, tool state files, and raw stdout? Are they willing to share a list of all of the tools that were used during the assessment? Part of the value in the assessment may be the education around tools and processes, which can be used in internal training. A big part of what I do in dynamic application security assessments is to provide Burp Suite Professional project state files so that developers can load the findings into their own tools and replay the payloads to verify on their own that the findings were resolved. I've even had the sales team, in some instances, add an ad-hoc training opportunity for an entire team, tacked on to the end of the review meetings.
Speaking of review meetings: this, in my opinion, is the most valuable part of the engagement for the customer. It should be offered a week or two after the results are provided, to allow time for the customer to digest and form questions. Are they conducting these as personally as possible? These should be done face to face, when feasible, and should be an open-floor presentation style to allow for healthy back-and-forth dialogue. It's an opportunity to really utilize the advisory resources the consultancy has to offer by asking questions and making sure everyone's on the same page in regards to remediation, etc. Lastly, is the project closed immediately after the review meeting and the invoice is paid, or do they offer to answer lingering questions afterwards? I always personally offer this, knowing there may not be time in the budget to charge to, because I see the value in helping customers who, in all likelihood, won't get around to resolving the huge list of issues dumped into their laps until after the project is finished.
This is by no means a comprehensive list of things to do and look for when shopping for or offering security services. These are just a few things I see regularly, and since I have a passion for making sure people get the most "bang for their buck," I wanted to share them with the community as well. I think we can all do a better job as technical delivery and sales teams to meet our customers' needs, and I strongly believe developing that quality reputation goes a long way in the overall success of the business. A lot of that comes down to communication up front, and not assuming our customers know what's best for themselves. We need to listen to them if they want something specific, but they're also hiring us to be their trusted advisors.
Please share any other thoughts and ideas! I'd love to hear how people are testing their testers. 😄
- Curtis Brazzell