Friday, November 22, 2024

This Week in Security: Footguns, Bing Worms, and Gogs

The world of security research is no stranger to the phenomenon of not-a-vulnerability. That’s where a security researcher finds something interesting, reports it to the project, and it turns out that it’s something other than a real security vulnerability. Sometimes that just means a researcher got overzealous in reporting and didn’t really understand what was found. But there is at least one other case: the footgun.

A footgun is a feature in a language, library, or tool that too easily leads to a catastrophic mistake: shooting oneself in the foot. The main difference between a footgun and a vulnerability is that a footgun is intentional, and a vulnerability is not. That line is sometimes blurred, so an undocumented footgun could also be a vulnerability, and one possible solution is to properly document the quirk. But sometimes the footgun should really just be eliminated. And that’s what the article linked above is about. [Alex Leahu] takes a look at a handful of examples, which are not only educational, but also a good exercise in thinking through how to improve them.

The first example is Tesla, from the Elixir language. Tesla is an HTTP/HTTPS client, not unlike libcurl, and the basic usage pattern is to initialize a client instance with a base URL defined. So we could create an instance, and set the URL base to https://hackaday.com. Then, to access a page or endpoint on that base URL, you just call Tesla.get(), supplying the client instance and the path. The whole thing might look like:

client = Tesla.client([{Tesla.Middleware.BaseUrl, "https://hackaday.com"}])
{:ok, response} = Tesla.get(client, "/floss")

All is well, as this code snippet does exactly what you expect. The footgun comes when your path isn’t just /floss. If that path starts with a scheme, like http:// or https://, the base URL is ignored, and the path is used as the entire URL instead. Is that a vulnerability? It’s clearly documented, so no, definitely not. Is this a footgun that is probably responsible for vulnerabilities in other code? Yes, very likely. And here’s the interesting question: What is the ideal resolution? How do you get rid of the footgun?
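
This class of footgun isn’t unique to Elixir, either. Python’s standard library urljoin documents the very same behavior, which makes for a quick demonstration of how the base silently vanishes:

from urllib.parse import urljoin

base = "https://hackaday.com"

# The expected case: the path is joined onto the base.
print(urljoin(base, "/floss"))                      # https://hackaday.com/floss

# The footgun: a "path" carrying its own scheme discards the base
# entirely, and the request quietly leaves your domain.
print(urljoin(base, "https://evil.example/floss"))  # https://evil.example/floss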

There are two related approaches that come to mind. The first would be to add a function to the library’s API, a Tesla.get_safe() that will never replace the base URL, and then update the documentation and examples to use the safe version. The related solution is to take the extra step of deprecating the unsafe version of the function.
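
To sketch what such a guard could look like, here’s the same idea expressed in Python. The function name and behavior here are hypothetical, for illustration only, and not part of Tesla’s actual API:

from urllib.parse import urljoin, urlsplit

def get_safe_url(base: str, path: str) -> str:
    # Hypothetical guard: reject any "path" that carries its own scheme
    # or host, so the base URL can never be silently replaced.
    parts = urlsplit(path)
    if parts.scheme or parts.netloc:
        raise ValueError(f"expected a path, got an absolute URL: {path!r}")
    return urljoin(base, path)

get_safe_url("https://hackaday.com", "/floss")                  # fine
get_safe_url("https://hackaday.com", "https://evil.example/x")  # raises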

The other example we’ll look at is Psycopg, a PostgreSQL driver library for Python. The example of correctly using the driver is cur.execute("INSERT INTO numbers VALUES (%s, %s)", (10, 20)), while the incorrect example is cur.execute("INSERT INTO numbers VALUES (%s, %s)" % (10, 20)). The difference may not seem huge, but the first example is sending the values of 10 and 20 as arguments to the library. The second example is doing printf-like Python string formatting with the % operator. That means it bypasses all the protections this library has to prevent SQL injection. And the mistake is trivially easy to make because the library uses % notation. The ideal solution here is pretty straightforward: deprecate the % SQL notation, and use a different character that isn’t overloaded with a particularly dangerous language function.
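
Spelling the two calls out a little further makes the difference concrete. This sketch assumes psycopg2, plus a reachable test database with a numbers table, purely for illustration:

import psycopg2

conn = psycopg2.connect("dbname=test")  # assumed local test database
cur = conn.cursor()

# Safe: 10 and 20 travel as bound parameters, and the driver handles
# all quoting and escaping.
cur.execute("INSERT INTO numbers VALUES (%s, %s)", (10, 20))

# Unsafe: Python's % operator builds the final string before the driver
# ever sees it, so nothing gets escaped. With plain integers it even
# happens to work, which is exactly why the mistake goes unnoticed...
cur.execute("INSERT INTO numbers VALUES (%s, %s)" % (10, 20))

# ...until one of the values is attacker-controlled:
value = "0); DROP TABLE numbers; --"
# "INSERT INTO numbers VALUES (%s, %s)" % (10, value) produces:
# INSERT INTO numbers VALUES (10, 0); DROP TABLE numbers; --)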

Wormable Bing

[pedbap] went looking for a Cross-Site Scripting (XSS) flaw on Microsoft’s services. The interesting thing here is that Bing is part of that crowd of Microsoft websites that users automatically get logged in to with their Microsoft accounts. An XSS flaw there could have interesting repercussions for the entire system. And since we’re talking about it, there was obviously something there.

The flaw in question was found on Bing Maps, where a specific URL can load a map with custom features, through the use of a JSON file specified in the URL. That JSON file can also include a Keyhole Markup Language (KML) file. These files have a lot of flexibility, like including raw HTML. There is some checking to prevent running arbitrary JavaScript, but that was defeated with a simple mixed-case string: jAvAsCriPt:(confirm)(1337). Now the example does require a click to launch the JS, so it’s unclear if this is actually wormable in the 0-click sort of way. Regardless, it’s a fun find, and netted [pedbap] a bounty.
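
The writeup doesn’t spell out exactly what Bing’s filter looked like, but a mixed-case bypass strongly suggests a case-sensitive comparison somewhere in the chain. A strawman sketch of that failure mode:

def naive_scheme_filter(url: str) -> bool:
    # Strawman check: looks reasonable, but only matches lowercase.
    return not url.startswith("javascript:")

print(naive_scheme_filter("javascript:alert(1)"))         # False: blocked
print(naive_scheme_filter("jAvAsCriPt:(confirm)(1337)"))  # True: sails through

# URL schemes are case-insensitive, so a filter has to normalize first:
def better_scheme_filter(url: str) -> bool:
    return not url.strip().lower().startswith("javascript:")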

Right There in Plain Text

[Ian] from Shells.Systems was inside a Palo Alto GlobalProtect installation, a VPN server running on a Windows machine, and there was something unusual in the system logs: redacted passwords. If the service is redacting passwords out of its logs, it must be handling them in plaintext, and that is an odd thing to come across, particularly for a VPN server like this, because the server shouldn’t ever have the passwords after creation.

So, to prove the point, [Ian] wrote an extractor program that grabs the plaintext passwords from system memory. As far as we can tell, this doesn’t have a CVE or a fix, as it’s a weakness in how the program works rather than a discrete vulnerability.

Your Gogs Need to Go

Speaking of issues that haven’t been patched, if you’re running Gogs, it’s probably time to retire it. The latest release has a Remote Code Execution vulnerability, where an authenticated user can create a symlink to a real file on the Gogs server and then edit its contents. This is a very quick route to arbitrary code execution.
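
The underlying hazard is generic enough to sketch in a few lines of Python. If a service writes user-edited files with an API that follows symlinks, a link committed earlier becomes a write primitive. This illustrates the pattern, and is not Gogs’s actual code:

import os

def save_user_edit(repo_dir: str, rel_path: str, content: str) -> None:
    target = os.path.join(repo_dir, rel_path)
    # Bug: open() follows symlinks, so if "innocent.txt" was previously
    # committed as a symlink to a server config file or a git hook, this
    # write lands outside the repository.
    with open(target, "w") as f:
        f.write(content)

def save_user_edit_safe(repo_dir: str, rel_path: str, content: str) -> None:
    target = os.path.join(repo_dir, rel_path)
    # One fix (POSIX-only): refuse to follow a symlink when writing.
    fd = os.open(target, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_NOFOLLOW)
    with os.fdopen(fd, "w") as f:
        f.write(content)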

The real problem here isn’t this specific vulnerability, or that it hasn’t been patched yet, or even that Gogs hasn’t seen a release since 2023. The real problem is that the project seems to have been almost completely abandoned. The last change was only two weeks ago, but looking through the change log, almost all of the recent changes appear to be automated. The vulnerability was reported back in August, the 90-day disclosure deadline came and went, and there was never a word from the project. That’s concerning. It’s reminiscent of the sci-fi trope where some system keeps running itself even after all the humans have left.

Bits and Bytes

The NPM account takeover hack now has an open source checking tool. The issue is expired domains still listed as developer email addresses on NPM packages. If an attacker can register the dangling domain, it’s possible to take over the package as well. The team at Laburity is on it, with the release of this tool.
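
The core of that kind of check fits in a screenful of Python. This is a back-of-the-envelope version, not Laburity’s tool, and a failed DNS lookup is only a hint: confirming that a domain is actually registrable takes a WHOIS or RDAP query.

import json
import socket
import urllib.request

def maintainer_domains(package: str) -> set[str]:
    # The public npm registry exposes maintainer metadata per package.
    url = f"https://registry.npmjs.org/{package}"
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    return {m["email"].rsplit("@", 1)[-1]
            for m in meta.get("maintainers", []) if "email" in m}

def resolves(domain: str) -> bool:
    try:
        socket.getaddrinfo(domain, None)
        return True
    except socket.gaierror:
        return False

for domain in maintainer_domains("left-pad"):
    print(domain, "resolves" if resolves(domain) else "does not resolve: dangling?")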

Lutra Security researchers have an interesting trick up their sleeves when it comes to encrypted emails. What if the same encrypted message decrypted to a different readable message for each reader? With some clever use of both encryption and the multipart/alternative MIME type, that’s what Salamander/MIME pulls off.
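
The multipart/alternative half of the trick is easy to see in miniature: a mail client picks just one part of an alternative set to render, so different clients can show different content from the same message. A sketch of that structure, with plain text standing in for the per-recipient encrypted parts:

from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "One message, two readings"
msg.set_content("The version a plain-text client renders.")
msg.add_alternative("<p>The version an HTML client renders.</p>",
                    subtype="html")
print(msg)  # a multipart/alternative tree with two sibling parts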

And finally, it’s time to dive into DOMPurify bypasses again. That’s the JavaScript library for HTML sanitizing that uses the browser’s own logic to guarantee there aren’t any inconsistent parsing issues. And [Mizu] has the lowdown on how to pull off an inconsistent parsing attack anyway. The key here is mutations. When DOMPurify runs an HTML document through the browser’s parsing engine, that HTML is often modified, hence the Purify in the name. What’s not obvious is that a change made during the first pass through the document can have unexpected consequences on the next pass. It’s a fun read, and only part one, so keep your eyes peeled for the rest of it!

