Common Approaches to Automated Application Security Testing

Not All Automated Software Security Assessment Approaches Are Created Equal

When planning a testing strategy for an application, it is important to evaluate the applicability and likely effectiveness of the various testing approach options. The two most common approaches to automated application security testing are static application security testing (SAST) and dynamic application security testing (DAST).

SAST involves testing application artifacts - such as source code or application binaries - at rest.

The testing tool reviews the application source or binary to build a model, and then performs various types of analysis - semantic, data flow, or control flow - to identify potential vulnerabilities based on a set of rules tuned for the type of application, language, and application framework in use. DAST involves testing a running application by sending inputs and analyzing the application's responses to see whether those request/response patterns indicate the presence of security vulnerabilities.

There are several concerns to take into account when evaluating SAST. The first potential roadblock is not having access to the application source code.
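To illustrate the model-plus-rules approach, here is a deliberately tiny, purely syntactic "rule" over a Python AST. The function and rule set are invented for illustration; real SAST analyzers layer semantic, data-flow, and control-flow analysis on top of this kind of pass.

```python
import ast

# A toy SAST rule: flag calls to eval() or exec(), two classic injection sinks.
DANGEROUS_CALLS = {"eval", "exec"}

def find_dangerous_calls(source: str) -> list[tuple[int, str]]:
    """Return (line, name) for every call to a flagged function."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "user_input = input()\nresult = eval(user_input)\n"
print(find_dangerous_calls(sample))  # [(2, 'eval')]
```

Note that this sketch never runs the code under test; like any SAST tool, it inspects the artifact at rest.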

Often, source code is available for software that has been developed in-house; however, in some cases internal security controls or organizational politics will make it difficult, if not impossible, for you to get access. Additionally, if you are planning to test third-party software, it is likely that you will not have access to source code unless you have negotiated this access. With no source code access, static analysis is likely off the table.

While there are some tools and providers that enable static security analysis based on an application binary, they often require binaries compiled with debug symbols or other specific compiler settings, which might be similarly difficult to acquire for third-party software. A second potential roadblock is a static analysis tool that doesn't support the application's language. If you have an application written in Go, but your static analysis tool only supports Java and .NET, then that tool is not going to be useful for testing the target application. Most static analysis tools have solid support for common enterprise development languages such as Java and .NET, but are either missing support or have less mature support for other languages.

Similarly, you must also consider if your static analysis tool has support for the application type. Typically, these tools have two major components - analyzers that build a model of the application to test, and rules that are applied to the model during testing. Most commercial static analysis tools have support for web applications, and many have support for mobile applications, but if you are looking to test IoT applications you need to consider how mature the rule set is for the application in question.

Missing or less mature rule sets will result in less effective testing. The final potential roadblock is a static analysis tool that lacks support for the application frameworks in use. This is most frequently an issue for web applications, but it can apply to other application types as well. If you have a web application written in Java using the Spring framework, it is important to understand whether the static analysis tool can detect the use of important Spring constructs, as well as whether it has rules for detecting security issues based on the way that Spring handles user requests. Just as the evaluation of SAST tools can uncover potential issues in testing, dynamic application security testing (DAST) can present roadblocks as well.

Most commercial DAST tools are intended to test web applications; thus, you must confirm that the application in question is in fact a web application. There are both open source and commercial fuzzing tools and frameworks that can be used to test other types of applications - such as server daemons - but the applicability of these techniques and the level of expertise required to take advantage of them may make them less attractive options when starting to craft a testing program. Assuming the application is web-based, you must ask whether your DAST tool supports testing the target application. Not all DAST tools are capable of testing APIs or web applications written using different development approaches, such as single-page applications (SPAs).
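As a sketch of the request/response pattern matching that DAST relies on, here is a minimal reflection probe. The probe string, helper names, and URL are invented for illustration; a real scanner crawls the application and fuzzes every input far more thoroughly.

```python
from urllib.parse import quote

# A marker unlikely to occur naturally; if it comes back unencoded,
# the tested parameter may be vulnerable to reflected XSS.
PROBE = "<zx9'\"probe>"

def is_reflected(response_body: str, probe: str = PROBE) -> bool:
    """True if the raw probe appears unencoded in the response."""
    return probe in response_body

def build_probe_url(base_url: str, param: str) -> str:
    """Attach the URL-encoded probe to a query parameter (hypothetical target)."""
    return f"{base_url}?{param}={quote(PROBE)}"

# A response that echoes the input verbatim is flagged...
print(is_reflected("<p>You searched for " + PROBE + "</p>"))       # True
# ...while a response that HTML-encodes the input is not.
print(is_reflected("<p>You searched for &lt;zx9&#x27;probe&gt;</p>"))  # False
```

The tool never sees source code; it infers the vulnerability purely from how the running application responds to crafted input.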

Critically, you must also determine if you have permission to test the system in question. For software developed in-house this is hopefully straightforward. You can either use a production instance or - often better - a pre-production version of the application for security testing.

However, for software developed or hosted by suppliers, partners, or other third parties, testing access can be more complicated. It is best to negotiate rights for security testing, as well as requirements for remediation, during the application acquisition process, as this can be hard to do after the fact. Understanding your ability to test these systems will be critical in determining what, if any, testing approach can be taken.

Not all automated assessment approaches are created equal.

When developing an automated testing strategy for an application, it is critical to match the testing approach and testing tool to the characteristics of the target application.

Failing to do this properly will result in ineffective or less effective testing, but properly matching automated testing tools to applications can provide valuable insight into an application's security.

Stack Ranking SSL Vulnerabilities: The ROBOT Attack

At least two additional security vendors, including IBM and Palo Alto Networks, have been added to the list of vendors[1] vulnerable to a variation on the Bleichenbacher attack called the ROBOT attack. The attack was published by a trio of researchers, Hanno Böck, Juraj Somorovsky, and Craig Young. They dusted off the old Bleichenbacher attack against RSA key exchanges and ran it against a set of modern TLS stacks, finding that some were vulnerable.

They contacted each of the vulnerable websites they found, and worked with the underlying TLS stack vendors, following the proper disclosure process. They've published their research in a paper titled "Return Of Bleichenbacher's Oracle Threat Or how we signed a Message with Facebook's Private Key." In the tradition of named TLS protocol vulnerabilities that started with Heartbleed, the ROBOT attack has its own website[2] and logo. Disclaimer: the company I work for, F5 Networks, is vulnerable[3] to this class of attacks.

I'm put in the somewhat precarious position of judging this vulnerability and trying to stay neutral. If anything, I think my scoring system below understates the significance of this vulnerability; but by how much is an open question.

Everything old is new again.

The Bleichenbacher "million message attack" (PDF[4]) isn't actually new: primers on SSL/TLS mentioned it as early as 1998. In the original padding oracle attack against TLS, Bleichenbacher sent thousands of variations of ciphertext to a TLS server. The TLS server attempts to decrypt each one, and sends back one of two error codes--either the decrypt failed, or the padding was malformed.

By trying many variations of a message containing a third party's TLS session, and differentiating between the two error codes, the attacker could eventually reconstruct the session, one bit at a time. Within that TLS session the attacker might find user credentials, and then the breach is on. The attack has since been refined (PDF[5]) to the point where it requires only tens of thousands of attempts.
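The two error codes come from the PKCS#1 v1.5 conformance checks a server runs after RSA decryption: the plaintext block must look like `00 02 <at least 8 nonzero pad bytes> 00 <message>`. A toy sketch (not any real stack's code) of why distinct results create an oracle:

```python
# Toy PKCS#1 v1.5 conformance check. The distinct return values are the
# vulnerability: each query tells the attacker *which* check failed.

def pkcs1_v15_check(block: bytes) -> str:
    if len(block) < 11 or block[0] != 0x00 or block[1] != 0x02:
        return "DECRYPT_ERROR"        # wrong length or leading bytes
    try:
        sep = block.index(0x00, 2)    # find the 00 separator after the padding
    except ValueError:
        return "PADDING_ERROR"        # no separator: padding malformed
    if sep < 10:                      # spec requires at least 8 padding bytes
        return "PADDING_ERROR"
    return "OK"

good = bytes([0x00, 0x02]) + bytes([0xFF] * 8) + b"\x00" + b"secret"
print(pkcs1_v15_check(good))                     # OK
print(pkcs1_v15_check(b"\x00\x01" + bytes(20)))  # DECRYPT_ERROR
```

By mutating ciphertexts and watching which error comes back, the attacker learns something about the decrypted plaintext on every query, which is exactly the one-bit-at-a-time leak described above.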

The Bleichenbacher attack raises its head every now and then. Filippo Valsorda found[6] one in the python-rsa library in 2016. A German team found one in XML encryption in 2012.

Another German team wrote about optimizations for Bleichenbacher in this 2014 paper (PDF)[7]. (In fact, Bleichenbacher attacks come up often enough that I had trouble titling this article, because "revenge of[8]," "strikes again," and "revisiting[9]" were all taken for Bleichenbacher.)

What's the impact?

Bleichenbacher is a neat little theoretical protocol attack, but to my knowledge, no real Bleichenbacher attack has ever been seen in the field. Forward-secret ciphers (ECDHE and DHE) aren't vulnerable to RSA Bleichenbacher attacks, and 90% of today's popular TLS stacks prefer forward secrecy, which is also supported by all major browsers.

So the scope of the vulnerability is limited to two classes of users: those using RSA for legacy reasons (think Windows XP users in the third world), and organizations that specifically disallow forward secrecy because of passive monitoring requirements. Many financial organizations fall into the latter category, and their end users are the ones most at risk.

86% of internet hosts prefer forward secrecy; all modern browsers do, too.

The researchers claim that their proof-of-concept code can get the server to sign an arbitrary message.

The obvious thing to do would be to get the oracle to sign an active TLS handshake, allowing for a real TLS man-in-the-middle attack a la LOGJAM. But an attacker could (probably) not pull off an MitM in real time. Even the optimized Bleichenbacher attack requires tens of thousands of messages, and that's its biggest limitation.

Running the researchers' POC code took 12 hours on my virtual test harness. Fast server hardware gets it down to six hours, but after LOGJAM, administrators figured they should bound the SSL handshake to six seconds or less to avoid this kind of problem. The other obvious target would be to get the Bleichenbacher padding oracle to decrypt someone else's pre-master secret, and use that to crack a recorded TLS session.

That's a real possibility with this optimized attack, and one that should be mitigated ASAP.

How does this Bleichenbacher score?

I try to provide a relative ranking of SSL/TLS vulnerabilities, independent of the CVSS scoring.

My ranking is for enterprise architects with a specific focus on TLS. The numbers generated aren't necessarily the important part; it's the relative ranking to other vulnerabilities. So far, Heartbleed remains the worst of all time, with nothing even in the same class.

If this Bleichenbacher is used to crack a session, then its score is:

ROBOT stack rank score = 15

- Impact = session key derivation = 3
- Exploitability = requires MitM and thousands of messages = 5

Mitigation Strategies

There are at least three mitigations for this edition of the Bleichenbacher attack. First, if your TLS server manufacturer has issued patches, go get them and patch your appliances or software. The second option is to require only forward-secret (non-RSA) key exchange ciphers.

Bleichenbacher only works against RSA handshakes, so elliptic curve and Diffie-Hellman handshakes are safe. Most of the Internet already supports (and prefers) forward secrecy, so some sites may opt for this solution. Whether you decide to is up to you--I would measure the percentage of your browsers still connecting with RSA before you make that call.
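One way to measure that, sketched below, is to classify the cipher suites in your handshake logs by key-exchange type. The suite names and log sample are hypothetical OpenSSL-style names, not output from any particular product.

```python
# Before disabling RSA key exchange, measure how much of your traffic
# still negotiates it. Static-RSA suites are the ones at risk here.

def is_forward_secret(cipher_name: str) -> bool:
    """ECDHE/DHE key exchanges (and all TLS 1.3 suites) are forward secret."""
    return cipher_name.startswith(("ECDHE-", "DHE-", "TLS_AES", "TLS_CHACHA20"))

handshake_log = [
    "ECDHE-RSA-AES128-GCM-SHA256",    # RSA certificate, but ECDHE exchange: safe
    "AES256-SHA",                     # static RSA key exchange: at risk
    "ECDHE-ECDSA-AES256-GCM-SHA384",  # fully forward secret
]
rsa_share = sum(not is_forward_secret(c) for c in handshake_log) / len(handshake_log)
print(f"{rsa_share:.0%} of handshakes still use static RSA")  # 33% ...
```

If that share is near zero, dropping RSA key exchange costs you almost nothing; if it isn't, find out which clients those are before you make the change.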

RSA certificates used with forward secret ciphers are still okay. The third option is to rate-limit SSL handshakes from individual IP addresses. If your TLS solution has a programmable data plane, a simple rule that tracks outstanding TLS requests per flow is sufficient.
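The per-IP rate-limiting idea can be sketched as a small sliding-window limiter. The class name, limits, and addresses below are illustrative, not a real data-plane rule.

```python
import time
from typing import Optional
from collections import defaultdict

# Sliding-window handshake limiter: each source IP gets `limit` handshakes
# per `window` seconds; anything beyond that is dropped (or tarpitted).

class HandshakeLimiter:
    def __init__(self, limit: int = 10, window: float = 1.0):
        self.limit, self.window = limit, window
        self.buckets = defaultdict(list)  # ip -> timestamps of recent handshakes

    def allow(self, ip: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        recent = [t for t in self.buckets[ip] if now - t < self.window]
        self.buckets[ip] = recent
        if len(recent) >= self.limit:
            return False              # over the limit: reject this handshake
        recent.append(now)
        return True

limiter = HandshakeLimiter(limit=3, window=1.0)
results = [limiter.allow("203.0.113.5", now=0.1 * i) for i in range(5)]
print(results)  # [True, True, True, False, False]
```

Even a generous limit defeats Bleichenbacher, because the attack needs tens of thousands of handshakes from the same source in a short span.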

As a result of the LOGJAM attack from 2015, you should already be limiting TLS connection handshakes to 10 seconds or less.

Why does this keep happening?

If we all knew about Bleichenbacher as early as SSLv3, why do implementations keep falling prey to it?

One reason is that TLS implementers are just nice. They write code that tries to tell you why the server rejected your message. Maybe that information would be useful to the implementer of the client side.

Have you ever tried to debug a problem where the other side wouldn't give you any useful information other than "bad input"? In order to foil padding oracle attacks (and their uglier cousins, timing oracle attacks), implementers need to return a single error code (just pick one) and take the same amount of time to validate the input, no matter what kind of malicious input was given. There is a proper way to do that.

Instead of parsing the incoming message, the implementer of the server needs to construct a copy of what it expects to find, and then compare the incoming message to the copy. If they match, great. If not, then it should return a single error code.

This fixes padding oracles and timing oracles too. But that's next-level thinking that only fairly sophisticated infosec types have experience with. A basic, or even intermediate, programmer isn't going to be thinking that far ahead when building a crypto library or server.
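The construct-and-compare defense described above can be sketched for deterministic signature-style padding, where the expected block is fully predictable. The function name and layout are illustrative; real libraries handle more encodings and edge cases.

```python
import hmac

# Instead of parsing the incoming block, build the bytes we *expect*
# (00 01 FF..FF 00 <digest info>) and compare with hmac.compare_digest,
# which takes the same time regardless of where the inputs differ.
# One error path, constant time: no padding oracle, no timing oracle.

def verify_sig_padding(decrypted: bytes, expected_digest_info: bytes) -> bool:
    pad_len = len(decrypted) - len(expected_digest_info) - 3
    if pad_len < 8:
        return False                  # spec requires at least 8 padding bytes
    expected = b"\x00\x01" + b"\xff" * pad_len + b"\x00" + expected_digest_info
    return hmac.compare_digest(decrypted, expected)

block = b"\x00\x01" + b"\xff" * 8 + b"\x00" + b"digest"
print(verify_sig_padding(block, b"digest"))       # True
print(verify_sig_padding(block[:-1], b"digest"))  # False -- and the caller
# sees only this single failure mode, never *why* the check failed.
```

The caller gets a bare boolean; there is nothing to differentiate, so there is nothing to leak.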

The researchers note that there do not appear to be any easy-to-run Bleichenbacher unit tests available in today's crypto testing frameworks, and conclude that few people test for them. There is no easy, free X.509 parser for Python 2, for example, which many testing frameworks use.

Will this be the last Bleichenbacher?

The forthcoming TLS 1.3 protocol requires forward-secret ciphers, so it will be safe from RSA Bleichenbacher attacks. But I expect adoption of TLS 1.3 to be somewhat slow, because forward secrecy isn't free and introduces major visibility problems for enterprises with significant SSL inspection investments. I expect TLS 1.2 and its support of RSA handshakes to be around for several more years.

And at the current rate of a Bleichenbacher every couple of years, I guess we have a few more to see.


  1. ^ list of vendors
  2. ^ website
  3. ^ is vulnerable
  4. ^ PDF
  5. ^ PDF
  6. ^ found
  7. ^ paper (PDF)
  8. ^ revenge of
  9. ^ revisiting

Risky Business (Part 2): Why You Need a Risk Treatment Plan

Performing a Risk Analysis and Taking Due Care Are No Longer Optional

Now hear this: You will always have exposure.

No company has the ability to mitigate all risks at all times. No company I've ever visited has even had all of its identified risks treated at any given point.

Yet so many companies lead their security strategy with controls. They'll make sizable investments in security appliances without fully understanding why the appliance is required.

They'll implement their controls without documentation of what the actual risks are and how they're being treated.

You may have learned about due diligence and due care[1], but this situation amounts to omitting both. To bridge that gap, you need a risk treatment plan.

The objective of a risk treatment plan is to document your exposure and show that the organization is applying appropriate resources to mitigate it in a reasonable timeframe.

Not only does this tie your mitigation efforts to the actual business risks being addressed, but the RTP is really a form of risk treatment in itself. Even if you can't mitigate every risk, you're documenting that you have a plan to deal with those risks -- and having your efforts documented provides some recourse to prove due care.

This is important when it comes to any form of litigation.

Today we live in a world where if (or when) you have a breach, you are going to have litigation. When you're working with your legal team, third parties, and insurance companies, the more detailed your treatment plan, the better position you are in.

If you can show you did the appropriate risk analysis, leveraged reasonable means to put a plan in place, and acted on that plan, you can minimize the impact of any breach and resulting litigation on the organization. Not only can you prove you did your due diligence in the risk assessment, but more importantly, you can prove you did your due care in building out a plan and, ultimately, following it.

So what constitutes reasonable efforts?

That judgment is generally based on the company's capacity to deal with a given risk. The business should never be required to spend so much on mitigating a risk that it puts itself out of business. So for smaller or midsized companies, it's reasonable to say that you're taking three years to treat some noncritical risks when that's the timeframe your available resources will allow.

For example, your plan should lay out the mitigation to be implemented and state that, based on current budget, it would take 18 months to make the full investment.

Realistically, there will be risk exposure for those 18 months, but now you can be fully transparent with customers. In many instances, the customer will agree the plan is reasonable and write it into the contract that you must execute against the plan.
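A plan entry like the one described above can be captured as a simple structured record. The fields, dates, and values below are illustrative, not a compliance template.

```python
from dataclasses import dataclass
from datetime import date

# One row of a risk treatment plan: the risk, how it's being treated,
# who owns it, and the committed target date (all values hypothetical).

@dataclass
class RiskTreatment:
    risk: str
    impact: str          # e.g. "high", "medium", "low"
    treatment: str       # mitigate, transfer, accept, or avoid
    mitigation: str
    owner: str
    target_date: date    # the end of the funded implementation window

rtp = [
    RiskTreatment(
        risk="Customer PII exposed via unpatched web tier",
        impact="high",
        treatment="mitigate",
        mitigation="Deploy WAF and establish monthly patch cycle",
        owner="AppSec team",
        target_date=date(2019, 6, 30),  # ~18 months out, per current budget
    ),
]
print(rtp[0].treatment)  # mitigate
```

Even a lightweight record like this, kept current, is documentation you can put in front of a customer, an insurer, or a court.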

On the other hand, the absence of a plan is negligence, period. If a customer trusts us with critical data and we are not doing our due diligence to understand the risk and document how it will be treated, that's negligence in a court of law.

That negligence is compounded if due diligence is done without due care, because you know you have a high-impact asset and you're aware of its associated risk, but haven't documented the steps you're taking to deal with it.

All of this is going to become even more important in the age of GDPR. There are so many measures the security industry considers optional today. GDPR is going to change that, putting some teeth into regulating security practices in the EU.

And since business is so global, eventually there will be other regulations and regulatory bodies in the U.S.

Consider how the financial industry has built out authorities and regulations such as the SEC and Sarbanes-Oxley.

Just as you would never run a business without appropriate financial controls, performing a risk analysis and taking due care are no longer optional.

They are mandatory.

Building your plan out today will put you in a position to ensure you're not negligent before you have to prove it.


  1. ^ due diligence and due care