Network Security

New Features Added to CERT Tapioca Tool

The CERT Coordination Center (CERT/CC) at Carnegie Mellon University this week announced the launch of a new version of the network traffic analysis tool CERT Tapioca. First released in 2014, CERT Tapioca is a network-layer man-in-the-middle (MITM) proxy virtual machine designed for investigating the content of HTTP and HTTPS traffic and identifying apps that fail to validate certificates. It has been used to find Android applications that fail to properly validate SSL certificates and thus expose users to MITM attacks.

More than one million apps have been checked and over 23,000 of them failed dynamic testing. The tool can be used to analyze network traffic not only on smartphones, but also on IoT devices, computers and VMs. Will Dormann, vulnerability analyst at CERT/CC and developer of CERT Tapioca, on Thursday announced the release of version 2.0, which introduces a graphical user interface and can be installed on multiple Linux distributions, including Red Hat, CentOS, Fedora, Ubuntu, OpenSUSE, and Raspbian.

CERT Tapioca 2.0 also allows users to set up a HOSTAP-compatible Wi-Fi adapter for wireless connectivity, and it can save results from multiple tested systems. In addition to checking HTTPS validation, verifying an application's use of modern cryptography standards, and observing the hosts contacted by an application, Tapioca now allows users to search network traffic for specified strings, such as passwords.
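
Tapioca is built around the mitmproxy engine, so one way to picture the new string-search feature is a script that walks a saved mitmproxy capture and greps request and response bodies. The sketch below is illustrative only -- the file name and search term are placeholders, and Tapioca's own search may be implemented differently:

```python
from mitmproxy import io

# Illustrative sketch: search a saved mitmproxy capture for a string.
# "flows.out" and the needle are placeholders, not Tapioca internals.
def search_flows(path, needle):
    with open(path, "rb") as f:
        for flow in io.FlowReader(f).stream():
            request = getattr(flow, "request", None)
            if request is None:          # skip non-HTTP flows
                continue
            for part in (request, flow.response):
                text = part.get_text(strict=False) if part else None
                if text and needle in text:
                    print(request.pretty_url)
                    break

search_flows("flows.out", "password")
```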

The CERT Tapioca 2.0 source code, along with additional details and usage instructions, is available on GitHub.

Related: Kaspersky Releases Open Source Digital Forensics Tool

Related: Secureworks Releases Open Source IDS Tools

Related: UK's GCHQ Spy Agency Launches Open Source Data Analysis Tool

Fitting Forward Secrecy into Today's Security Architecture

Forward Secrecy's day has come - for most. The cryptographic technique (sometimes called Perfect Forward Secrecy, or PFS) adds an additional layer of confidentiality to an encrypted session, ensuring that only the two endpoints can decrypt the traffic. With forward secrecy, even if a third party were to record an encrypted session and later gain access to the server's private key, they could not use that key to decrypt the session.
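
To make the mechanics concrete, here is a minimal sketch of the idea behind a forward-secret handshake: an ephemeral X25519 exchange using the Python cryptography package. This illustrates the principle, not a full TLS handshake, and the info label is arbitrary:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each session gets fresh, ephemeral key pairs on both sides.
client_eph = X25519PrivateKey.generate()
server_eph = X25519PrivateKey.generate()

# Both sides compute the same shared secret from the ephemeral keys.
assert (client_eph.exchange(server_eph.public_key())
        == server_eph.exchange(client_eph.public_key()))

session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo handshake",
).derive(client_eph.exchange(server_eph.public_key()))

# The ephemeral private keys are discarded after the handshake.
# The server's long-term key only *signs* the exchange, so stealing
# it later reveals nothing about session_key for recorded traffic.
```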

Neat, huh? Forward secrecy thwarts large-scale passive surveillance (such as might be conducted by a snooping nation state or other well-resourced threat actor), so it is seen as a tool that helps preserve freedom of speech, privacy, and other rights of the citizenry. It is supported and preferred by every major browser, most mobile browsers and applications, and nearly 90% of TLS hosts on the Internet, according to a recent TLS Telemetry report (PDF).

The crypto community applauds forward secrecy's broad acceptance today. Of course, there's a snag for some: while forward secrecy foils passive surveillance, it also complicates inspection for nearly every SSL security device currently in existence.

High-security environments that require network data to be encrypted even within the data center inspect SSL traffic by sharing the RSA private keys throughout their network. That system no longer works with forward secrecy.

In this common configuration, without forward secrecy, inbound traffic can be inspected at any or every point within the data center. Malware can be found before it gets uploaded to a repository. Anti-fraud measures can be taken against automated transactions.

Brute-force logins can be detected and mitigated. SQL injections can be filtered out by web application firewalls. If forward secrecy were enabled in this common configuration, only the endpoints (the web user and the application server) could decrypt the traffic, leaving all the security inspection devices blind.
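
As a sketch of what such an inspection-friendly endpoint looks like in practice, the following Python ssl configuration pins a server to TLS 1.2 with plain RSA key exchange -- the setup that lets any device holding the shared private key decrypt recorded traffic. The certificate file names are hypothetical:

```python
import ssl

# Hypothetical inspection-friendly server context: TLS 1.2 only,
# RSA key exchange (no ECDHE/DHE), so a shared RSA private key can
# decrypt any recorded session. File names are placeholders.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("AES256-GCM-SHA384:AES128-GCM-SHA256")  # RSA kx, no PFS
ctx.load_cert_chain("server.crt", "server.key")
# Preferring ECDHE ciphers instead would enable forward secrecy --
# and instantly blind every passive inspection point downstream.
```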

As a result, many high-security environments, like banks and other financial services, don't offer forward secrecy. That was fine until TLS 1.3, which offers no option other than forward-secret ciphers. The banking community was unable to convince the IETF TLS working group to accommodate their architectures during the design of TLS 1.3.

TLS 1.3 has had other issues as well, as documented by Nick Sullivan and others, and as a result it has seen little use outside of Cloudflare, which helped design it. So what's the fix?

Short and Long Term Solutions

If TLS 1.3 starts gaining traction, architects in high-security environments are going to have to consider how to fit it into their existing architectures. Your intrepid writer thinks there are only a handful of possibilities.

Stick with TLS 1.2. Remember TLS 1.1? No one ever used it, even though it was technically superior to TLS 1.0. There's a chance that no one will adopt TLS 1.3 either, because TLS 1.2 doesn't have any real pressing issues, such as protocol vulnerabilities, against it. Sure, TLS 1.3 is slightly faster, but the website IsTLSfastYet.com spent years and years telling everyone that TLS 1.2 was already fast, so why bother upgrading?

Proxy TLS 1.3 to TLS 1.2. In the medium term, solution architects could deploy a TLS 1.3 proxy. The proxy would speak TLS 1.3 with the end user, and then establish a new TLS 1.2 session through the data center to the servers, as sketched below.
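
A bare-bones sketch of that idea using Python's standard library follows: terminate TLS 1.3 on the front, then re-encrypt the inner leg as TLS 1.2 toward a back-end host. The host names, ports, and certificate files are all hypothetical, and a production proxy would need far more care:

```python
import socket
import ssl
import threading

FRONT_ADDR = ("0.0.0.0", 8443)       # client-facing listener (TLS 1.3)
BACK_ADDR = ("app.internal", 443)    # hypothetical data-center server (TLS 1.2)

# Front leg: speak TLS 1.3 with the outside world.
front_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
front_ctx.minimum_version = ssl.TLSVersion.TLSv1_3
front_ctx.load_cert_chain("proxy.crt", "proxy.key")  # hypothetical files

# Back leg: re-encrypt as TLS 1.2 with RSA key exchange so in-line
# inspection devices holding the shared key can still decrypt it.
back_ctx = ssl.create_default_context()
back_ctx.minimum_version = ssl.TLSVersion.TLSv1_2
back_ctx.maximum_version = ssl.TLSVersion.TLSv1_2
back_ctx.set_ciphers("AES256-GCM-SHA384")  # non-PFS, inspectable

def pipe(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass

def handle(client):
    backend = back_ctx.wrap_socket(
        socket.create_connection(BACK_ADDR), server_hostname=BACK_ADDR[0])
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()
    pipe(client, backend)

with socket.create_server(FRONT_ADDR) as listener:
    while True:
        conn, _ = listener.accept()
        try:
            tls_conn = front_ctx.wrap_socket(conn, server_side=True)
        except ssl.SSLError:
            conn.close()
            continue
        threading.Thread(target=handle, args=(tls_conn,), daemon=True).start()
```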

Your intrepid writer has been telling people for a couple of years that this is the short-term solution, but he's glad to find out that the IETF TLS working group is hinting that it is very likely the long-term solution as well.

Skip to Post-Quantum TLS 2.0 (PQTLS2?). Many in the crypto community are worried that an encryption-breaking quantum computer will be here within five years.

NIST is taking quantum seriously, and has accelerated the selection of post-quantum encryption algorithms to a mere two-year deadline (which started last November). So around 2020, or even 2022, we may have a new version of TLS that supports post-quantum algorithms. Actually, there's a chance that TLS 1.3 could already accommodate a PQ handshake, but given the awkward dance that TLS 1.3 is already going through just to "fit in," there's a good chance it will change again.

And when it does, let's hope that it "fits in" better with security architectures.

Or that those architectures have matured to better inspect TLS traffic.

Attackers Hide in Plain Sight as Threat Hunting Lags: Report

CISO Survey Shows the Importance of Threat Hunting in the Finance Sector

The finance sector has one of the most robust cybersecurity postures in industry. It is heavily regulated, frequently attacked, and well-resourced -- but not immune to cybercriminals.

Ninety percent of financial institutions were targeted by ransomware alone in the past 12 months. Endpoint protection firm Carbon Black surveyed the CISOs of 40 major financial institutions during April 2018 to understand how the finance sector is attacked and what concerns its defenders. Two things stand out most: nearly half (44%) of financial institutions are concerned about the security posture of their technology service providers (TSPs -- the supply chain); and, despite their resources, only 37% have established threat hunting teams.

Concern over the supply chain is not surprising. Cybercriminals are increasingly attacking third parties (which may be less well protected or have their own security issues) to gain access to the primary target. The Federal Deposit Insurance Corporation (FDIC) is also concerned about the supply chain, and has developed an examination process that includes reviewing public information about the TSPs and their software.

One of the areas that concerns the FDIC is consolidation within the service provider industry. "For example," it notes, "a flawed acquisition strategy may weaken the financial condition of the acquirer, or a poorly planned integration could heighten operational or security risk." Carbon Black recommends that this potential risk be countered by hunt teams and defenders closely assessing their TSP security posture. But, it adds, "Given that 63% of financial institutions have yet to establish threat hunting teams, there should be concern regarding limited visibility into exposure created by TSPs."

But it also considers threat hunting to be important in detecting direct attacks. There are two primary reasons. The first is the increasing tendency for attackers to use fileless attacks that are not easily detected by standard technology; the second is a growing willingness among attackers to engage in counter-countermeasures -- that is, to counter the defender's incident response.

Fileless attacks are increasing across all industry sectors. A typical attack might involve a Flash vulnerability: Flash invokes PowerShell, feeding it instructions via the command line. PowerShell then connects to a stealthy C&C server, from where it downloads a more extensive PowerShell script that performs the attack. All of this is done in memory -- no malware file is downloaded, and there is nothing for traditional technology defenses to detect.

"Active threat hunting," says Carbon Black, "puts defenders 'on the offensive' rather than simply reacting to the deluge of daily alerts." It "aims to find abnormal activity on servers and endpoints that may be signs of compromise, intrusion or exfiltration of data. Though the concept of threat hunting isn't new, for many organizations the very idea of threat hunting is."

But the need for threat hunting goes beyond simple detection of intrusion. "Attackers are able to go off their scripts while defenders are sticking to manual and automated playbooks," warns Carbon Black. "These playbooks are generally based off simple indicators of compromise (IoCs). As a result, security teams are often left thinking they have disrupted the attacker but, with counter incident response, attackers maintain the upper hand."
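
As an illustration of the kind of hunting this implies, a very simple sweep for the PowerShell tradecraft described above might scan exported process-creation events for encoded commands and download cradles. The log format, field names, and file name here are all hypothetical; real hunt teams would work from their EDR's actual telemetry:

```python
import json
import re

# Hypothetical input: one JSON object per line with "host" and
# "command_line" fields, e.g. exported process-creation events
# (Sysmon Event ID 1 or an EDR's equivalent).
SUSPICIOUS = [
    re.compile(r"powershell.*-enc(odedcommand)?\b", re.I),  # encoded payloads
    re.compile(r"downloadstring|downloadfile", re.I),       # download cradles
    re.compile(r"-w(indowstyle)?\s+hidden", re.I),          # hidden windows
    re.compile(r"\biex\b", re.I),                           # Invoke-Expression
]

def hunt(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            cmd = event.get("command_line", "")
            if any(p.search(cmd) for p in SUSPICIOUS):
                yield event

for hit in hunt("process_events.jsonl"):  # hypothetical export
    print(hit.get("host"), "->", hit.get("command_line"))
```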

Compounding this, attackers are beginning to incorporate a secondary command and control channel in case one is discovered or disrupted. Carbon Black notes that this tactic has already been found in 10% of victims, and predicts its use will grow in the coming months. The principle is that an attacker's ability to improvise and change direction at speed is best countered by a human defender rather than a pre-programmed set of incident response steps.

"Financial institutions," suggests Carbon Black, "should aim to improve situational awareness and visibility into the more advanced attacker movements post breach. This must be accompanied with a tactical paradigm shift from prevention to detection. The increasing attack surface, coupled with the utilization of advanced tactics, has allowed attackers to become invisible.

Decreasing dwell time is the true return on investment for any cybersecurity program." In reality, of course, this does not just apply to the finance sector. The same evolving methodology is being used by attackers across all industry sectors.

The need for threat hunting is not limited to finance. "All sectors should take heed," Carbon Black chief cybersecurity officer Tom Kellermann told SecurityWeek. "Generally speaking, financial services tend to be the most secure as they've come under attack with high-profile attack campaigns in recent years." The implication is that if the finance sector is slow to switch to active threat hunting, other sectors will be slower still. In April 2018, Carbon Black filed an S-1 registration statement with the U.S. Securities and Exchange Commission (SEC) for a proposed initial public offering (IPO) of its common stock.

Shares of the company (NASDAQ: CBLK) jumped 26% on its first day of trading on May 4. The company has a market capitalization of nearly $1.6 billion at the time of publishing. The company emerged in its current form after Bit9 acquired the original Carbon Black in February 2014 and later adopted its name.

Related: Fileless Attacks Ten Times More Likely to Succeed: Report

Related: From Chasing Alerts to Hunting Threats: What Makes an Effective SOC is Evolving