Mysterious Files PH

Sunday, February 22, 2026

What About the Droid Attack on the Repos?

February 22, 2026
A grim reaper knocking on a door labelled "open source"

You might not have noticed, but we here at Hackaday are pretty big fans of Open Source — software, hardware, you name it. We’ve also spilled our fair share of electronic ink on things people are doing with AI. So naturally when [Jeff Geerling] declares on his blog (and in a video embedded below) that AI is destroying open source, well, we had to take a look.

[Jeff]’s article highlights a problem he and many others who manage open source projects have noticed: they’re getting flooded with agentic slop pull requests (PRs). It’s now to the point that GitHub will let you turn off PRs completely, at which point you’ve given up a key piece of the ‘hub’s functionality. That ability to share openly with everyone seemed like a big source of strength for open source projects, but [Jeff] here is joining his voice with others like [Daniel Stenberg] of curl fame, who has dropped bug bounties over a flood of spurious AI-generated PRs.

It’s a problem for maintainers, to be sure, but it’s as much a human problem as an AI one. After all, someone set up that AI agent and pointed it at your repo. While changing the incentive structure, like removing bug bounties, might discourage such actions, [Jeff] offers no bounties and has the same problem. Ultimately it may be necessary for open source projects to become a little less open, only allowing invited collaborators to submit PRs, which is also now an option on GitHub.

Combine invitation-only access with a strong policy against agentic AI and LLM code, and you can still run a quality project. The cost of such actions is that the random user with no connection to the project can no longer find and squash bugs. As unlikely as that sounds, it happens! Rather, it did. If the random user is just going to throw their AI agent at the problem, it’s not doing anybody any good.

First they came for our RAM, now they’re here for our repos. If it wasn’t for getting distracted by the cute cat pictures, we might just start to think vibe coding could kill open source. Extra bugs were bad enough, but now we can’t even trust the PRs to help us squash them!


Saturday, February 21, 2026

Quieting Noisy Resistors

February 21, 2026

[Hans Rosenberg] has a new video talking about a nasty side effect of using resistors: noise. If you watch the video below, you’ll learn that there are two sources of resistor noise: Johnson noise, which doesn’t depend on the construction of the resistor, and 1/f noise, which does vary depending on the material and construction of the resistor.

In simple terms, some resistors use materials that cause electron flow to take different paths through the resistor. That means that different parts of the signal experience slightly different resistance values. In simple applications, it won’t matter much, but in places where noise is an important factor, the 1/f or excess noise contributes more to errors than the Johnson noise at low frequencies.
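The Johnson noise floor, at least, is easy to compute, since it depends only on resistance, temperature, and bandwidth rather than on how the resistor is built. A quick sketch in Python (the 1/f excess noise has no such universal formula, which is why [Hans] measures it instead):

```python
import math

# Boltzmann constant (J/K)
K_B = 1.380649e-23

def johnson_noise_vrms(resistance_ohm, temperature_k, bandwidth_hz):
    """RMS thermal (Johnson) noise voltage: sqrt(4 k T R B).
    Independent of the resistor's material and construction."""
    return math.sqrt(4 * K_B * temperature_k * resistance_ohm * bandwidth_hz)

# A 1 kOhm resistor at room temperature, measured over a 10 kHz bandwidth:
v = johnson_noise_vrms(1e3, 300, 10e3)   # about 0.4 microvolts RMS
```

Note the square root: quadrupling the resistance only doubles the noise voltage, which is why the rule of thumb for a 1 kΩ resistor at room temperature is about 4 nV/√Hz.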

[Hans] doesn’t just talk the math. He also built a simple test rig that lets him measure the 1/f noise with some limitations. While you might pretend that all resistors are the same, the test shows that thick film resistors produce much more noise than other types.

The video shows some rule-of-thumb lists indicating which resistors have better noise figures than others. Of course, resistors are only one source of noise in circuits. But they are so common that it is easy to forget they aren’t as perfect as we pretend in our schematics.

Want to learn more about noise? We can help. On the other hand, noise isn’t always a bad thing.


How the Intel 8087 FPU Knows Which Instructions to Execute

February 21, 2026

An interesting detail about the Intel 8087 floating point processor (FPU) is that it’s a co-processor that shares a bus with the 8086 or 8088 CPU and system memory, which means that somehow both the CPU and FPU need to know which instructions are intended for the FPU. Key to this are eight so-called ESCAPE opcodes that are assigned to the co-processor, as explained in a recent article by [Ken Shirriff].
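Those eight ESCAPE opcodes occupy the byte values 0xD8 through 0xDF, i.e. any opcode whose top five bits are 11011. A trivial sketch of the check (our own illustration, not code from the article):

```python
def is_x87_escape(opcode):
    """True if this byte is one of the eight ESC opcodes (0xD8-0xDF)
    that the 8086 hands off to the co-processor: pattern 11011xxx."""
    return (opcode >> 3) == 0b11011

# Exactly eight byte values match: 0xD8 ... 0xDF
escapes = [b for b in range(256) if is_x87_escape(b)]
```

The remaining three low bits, together with the ModR/M byte that follows, are what select the actual floating point operation.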

The 8087 thus watches the bus for these opcodes, but since it doesn’t have access to the CPU’s registers, sharing data has to occur via system memory. The operand address is calculated by the CPU, which performs a read from it; the FPU captures this address off the bus and stores it for later use in its BIU (bus interface unit). From there the instruction can be fully decoded and executed.

This decoding is mostly done by the microcode engine, with complex instructions like cosine backed by circuitry that sprawls all over the IC. Explained in the article is how the microcode engine even knows where to begin this decoding process, considering the complexity of these instructions. The biggest limitation at the time was that even a 2 kB ROM was already quite large, so the 8087 uses only 22 microcode entry points, with a combination of logic gates and PLAs mapping instructions onto them.

Only some instructions are directly implemented in hardware at the bus interface unit (BIU), which means that a lot depends on this microcode engine and the ROM for things to work halfway efficiently. The need to solve problems such as fetching constants resulted in a similarly complex but transistor-saving approach for those cases.

Even if the 8087 architecture is convoluted and the ISA not well-regarded today, you absolutely have to respect the sheer engineering skills and out-of-the-box thinking of the 8087 project’s engineers.


Miranda’s Unlikely Ocean Has Us Asking If There’s Life Clinging On Around Uranus

February 21, 2026
Miranda, as imaged by Voyager 2 on Jan. 24, 1986.

If you’re interested in extraterrestrial life, these past few years have given us an embarrassment of places to look, even in our own solar system. Mars has been an obvious choice since before the Space Age; in the orbit of Jupiter, Europa’s oceans have been of interest since Voyager’s day; the geysers of Enceladus give Saturn two moons of interest, if you count the possibility of a methane-based chemistry on Titan. Even faraway Neptune’s giant moon Triton probably has an ocean layer deep inside. Now the planet Uranus is getting in on the act, offering its moon Miranda for consideration in a kinda-recent study in the Planetary Science Journal.

Miranda and Uranus, the new hot spot for life-hunters. 
Photomontage credit NASA.

Even if you’re into astronomy, it may seem like this is coming out of left field. “Miranda, really? What new data could we possibly have on a moon of Uranus nobody’s visited since the 1980s?” Well, none, really. This study relies on reexamining the data collected during the Voyager 2 encounter and trying to make sense of the chaotic, icy world that the space probe revealed.

The faults and other features on Miranda indicate it was geologically active at some point; this study tries to recreate the moon’s history through computer modelling, finding that Miranda probably had a ≥100 km thick ocean sometime in the last 100-500 million years, and that while some of it has likely frozen since, tidal heating could very well keep a layer of liquid water within the moon’s interior. Since the moon itself is only 470 km (290 mi) in diameter, a 100 km deep ocean layer would actually be a huge proportion of its volume.
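A back-of-the-envelope check shows just how large that proportion is. Note that the 30 km ice crust thickness below is an arbitrary assumption for illustration, not a figure from the study:

```python
# Miranda's radius: half of the 470 km diameter
R_MIRANDA = 235.0            # km
ICE_CRUST = 30.0             # km -- assumed here purely for illustration
OCEAN_THICKNESS = 100.0      # km, the study's lower bound

def shell_volume_fraction(outer_r, inner_r, body_r):
    """Fraction of a sphere's volume occupied by a spherical shell."""
    return (outer_r**3 - inner_r**3) / body_r**3

ocean_top = R_MIRANDA - ICE_CRUST           # 205 km from the center
ocean_bottom = ocean_top - OCEAN_THICKNESS  # 105 km from the center
frac = shell_volume_fraction(ocean_top, ocean_bottom, R_MIRANDA)
# frac comes out to roughly 0.57 -- over half the moon's volume
```

Because volume scales with the cube of the radius, a shell near the surface of such a small body dominates the total: under these assumptions the ocean would be well over half of Miranda.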

The model is a fairly simple one, with the ocean sandwiched between two layers of ice and a rocky core. Image from Caleb Strom et al 2024 Planet. Sci. J. 5 226

Right now, the over-optimistic thinking is that “water means life”, since that’s how it seems to work on Earth. It remains to be seen if Miranda, or indeed any of the icy moons, ever evolved so much as a microbe. Aside from the supposed presence of liquid dihydrogen monoxide, there’s nothing to suggest they have. Finding out is going to take a while: even with boots — er, robots — on the ground, Mars isn’t giving up that secret easily. Still, if we’re able to discover irrefutable evidence for such extraterrestrial life, it will provide an important constraint on one term of The Drake Equation: what fraction of worlds develop life. That by itself won’t tell us “are we alone,” but it will be interesting.

Of course, even if all these worlds are barren now, they might not be for long, once our probes start visiting.

Story via Earth.com

Header image: Miranda, imaged by Voyager 2. Credit NASA/JPL-Caltech


Retrotechtacular: Bleeding-Edge Memory Devices of 1959

February 21, 2026

Although digital computers are, much like their human computer counterparts, about performing calculations, another crucial element is that of memory. After all, you need to fetch values from somewhere and store them afterwards. Sometimes values need to be stored for long periods of time, making memory one of the most important elements, yet also one of the most difficult ones. Back in the 1950s the storage options were especially limited, as a 1959 Bell Labs film reel digitized by [Connections Museum] shows while running through the bleeding edge of 1950s storage technology.

After running through the basics of binary representation and the difference between sequential and random access methods, we first take a look at punch cards, which can be read at a blistering 200 cards per minute, before moving on to punched tape, which comes in a variety of shapes to fit different applications.

Electromechanical storage in the form of relays was popular in applications like telephone exchanges, as relays are very fast. These exchanges use two-out-of-five code to represent phone numbers, with each digit stored in a corresponding pack of five relays, allowing the crossbar switch to be properly configured.
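The two-out-of-five code is easy to sketch: each decimal digit is the sum of exactly two of the weights 0-1-2-4-7 (with zero taking the one leftover pair, 4+7), and any pattern that doesn’t have exactly two relays energized is immediately known to be an error:

```python
WEIGHTS = (0, 1, 2, 4, 7)

def encode_2of5(digit):
    """Return the 5-bit pattern for a decimal digit: exactly two bits set,
    whose weights sum to the digit (0 uses the leftover pair 4+7)."""
    target = 11 if digit == 0 else digit
    for i in range(5):
        for j in range(i + 1, 5):
            if WEIGHTS[i] + WEIGHTS[j] == target:
                bits = [0] * 5
                bits[i] = bits[j] = 1
                return tuple(bits)
    raise ValueError(f"not encodable: {digit}")

def is_valid(bits):
    """Error check: a valid code word always has exactly two relays set."""
    return sum(bits) == 2
```

A single stuck or dropped relay changes the count of set bits away from two, which is exactly why exchanges favored this code over plain binary.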

Twistor memory demonstration. (Credit: Bell Labs, 1959)

After these types of memory, we move on to magnetic memory, in the form of the well-known magnetic tape that provides mass storage in relatively little space. There is also the magnetic drum, which is much like a very short and very fast tape and typically provides working memory. The Bendix G-15, for example, uses its drum for its clock signal and working memory, while magnetic tape and punched tape are used for application and data storage.

Next we cover magnetic-core memory, which stores a magnetic orientation in its ferrite rings or on a ferrite plate. This is non-volatile memory, but has low bit density and performs destructive reads, preventing its use beyond the 1970s. Today’s NAND Flash memory has significant overlap with core memory in its operating principles, both in its advantages and disadvantages.
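That destructive-read behaviour is simple to model: sensing a core drives it to zero, so every read cycle must finish by writing the value just read back into the core (a toy sketch of the idea, not any particular controller):

```python
class CoreBit:
    """Toy model of a single magnetic core: reading senses the bit by
    forcing the core to 0, so the controller must restore the old value."""

    def __init__(self, value=0):
        self.value = value

    def raw_read(self):
        # The destructive part: after sensing, the core always holds 0.
        old, self.value = self.value, 0
        return old

    def read(self):
        # A full read cycle: sense, then write the value back.
        v = self.raw_read()
        self.value = v
        return v
```

The write-back step costs time on every access, which is one reason core cycle times were quoted as a read-restore pair rather than a bare read.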

An interesting variation on core memory is Twistor memory, which saw brief use during the late 1960s and early 1970s. Invented by Bell Labs, it was supposed to make for cheaper core-like memory, but semiconductor memory wiped out its business case, along with the similar bubble memory. An interesting feature of Twistor memory was the ability to add write-inhibit cards containing permanent magnets.

Fascinatingly, a kind of crude mask ROM is also demonstrated, before we move on to the old chestnut of vacuum tubes. Demonstrated is a barrier-grid tube, which uses electrons to create an electrostatic charge on a mica surface. This electron beam is also used to read the value, which is naturally destructive, making it somewhat similar to core memory in its speed and functionality.

Finally, we get the flying-spot store system, which is a type of optical digital memory. This is reminiscent of optical disc systems like the Compact Disc, and a reminder of all the amazing breakthroughs that we’d be seeing over the next decades.

Perhaps the best part about this video is that it shows the world as it sidled, still mostly unaware, towards these big changes. Memory storage was still the realm of largely hand-assembled, macro-sized devices, vacuum tubes and chunky electromechanical relays. Only a few years after this video was released, we’d see semiconductor technology turn the macro into micro. By the 1970s nerds would be fighting over who had the most RAM in their home computers, and by the 1990s CD-ROMs would set the world of computer storage and home game consoles ablaze with literally hundreds of MBs of storage per very cheap disc.


Friday, February 20, 2026

Porting Super Mario 64 To the Original Nintendo DS

February 20, 2026

Considering that the Nintendo DS already has its own remake of Super Mario 64, one might be tempted to think that porting the original Nintendo 64 version would be a snap. Why you’d want to do this is left as an exercise to the reader, but whether due to nostalgia or out of sheer spite, the question of how easy this would be remains. Correspondingly, [Tobi] figured that he’d give it a shake, with interesting results.

Of note is that someone else already ported SM64 to the DSi, which is a later version of the DS with more processing power, more RAM and other changes. The DSi’s 16 MB of RAM is required because that port loads the entire game into RAM, rather than doing on-demand reads from the cartridge. The N64 itself made do with just 4 MB of RAM by streaming data from the cartridge, and the original DS has that same 4 MB, so in principle it can be made to work.

The key here is NitroFS, which allows you to implement a similar kind of segmented loading as the N64 uses. Using this, [Hydr8gon]’s DSi port could be taken as the basis and crammed into NitroFS, enabling the game to mostly run smoothly on the original DS.
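The general idea behind this kind of segmented loading can be sketched in a few lines; the segment size and eviction policy below are arbitrary choices for illustration, and have nothing to do with the port’s actual code:

```python
import io

SEGMENT_SIZE = 4096  # bytes per segment -- an arbitrary choice here

class SegmentedReader:
    """Read fixed-size segments on demand and cache only a few of them,
    instead of loading the whole file into RAM up front."""

    def __init__(self, fileobj, max_cached=4):
        self.f = fileobj
        self.cache = {}  # segment index -> bytes
        self.max_cached = max_cached

    def segment(self, index):
        if index not in self.cache:
            if len(self.cache) >= self.max_cached:
                # Evict the oldest entry (dicts preserve insertion order)
                self.cache.pop(next(iter(self.cache)))
            self.f.seek(index * SEGMENT_SIZE)
            self.cache[index] = self.f.read(SEGMENT_SIZE)
        return self.cache[index]

# Three segments' worth of data, read piecemeal:
data = b"a" * SEGMENT_SIZE + b"b" * SEGMENT_SIZE + b"c" * SEGMENT_SIZE
reader = SegmentedReader(io.BytesIO(data))
middle = reader.segment(1)   # only this one segment is read and cached
```

Peak memory use is then bounded by the cache size rather than the size of the game, which is exactly the trade the N64 (and this DS port) makes against the slower medium.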

There are still some ongoing issues to resolve before the project is released, mostly related to sound support and general stability. If you have a flash cartridge for the DS, this means that soon you too should be able to play the original SM64 on real hardware, as though it were a quaint portable N64.


Auto-Reloading Magnet Dispenser Can Feed Itself

February 20, 2026

Magnet placement tools are great because they remove finger fumbling while ensuring correct polarity every time. [EmGi] has made a further improvement by making a version that auto-feeds from an internal stack of magnets.

A stack of magnets auto-feeds with every press of the plunger.

That is a trickier task than one might imagine, because magnets can have a pesky habit of being attracted in inconvenient ways, or flipping around and sticking where they should not. [EmGi] solves this with a clever rack and pinion mechanism to turn a single plunger press into a motion that shears one magnet from a stack and keeps it constrained while the same magnet responsible for holding it to the tip takes care of dragging it down a feed path. It’s easier to see it work in action, so check out the video (embedded below) in which [EmGi] explains exactly what is going on.

This design is actually an evolution of an earlier, non-reloading version. This new one is mechanically more complex, but if it looks useful you can get the design files from Printables or Makerworld and make your own.

The only catch is that this reloading design is limited in what sizes of magnet it can handle, because magnet behavior during feeding is highly dependent on the physical layout and movements. For a different non-reloading placement tool that works with any magnet size and is about as simple as one can get, you can make your own with little more than a bolt and a spring.

Thanks [Keith] for the tip!