Mysterious Files PH

Thursday, February 12, 2026

Exploring Homebrew for the Pokémon Mini

February 12, 2026

Originally only sold at the Pokémon Center New York in late 2001 for (inflation adjusted) $80, the Pokémon Mini would go on to see a release in Japan and Europe, but never had more than ten games produced for it. Rather than Game Boy-like titles, these were distinct mini games that came on similarly diminutive cartridges. These days it’s barely remembered, but it can readily be used for homebrew titles, as [Inkbox] demonstrates in a recent video.

Inside the device is an Epson-manufactured 8-bit S1C88 processor that runs at 4 MHz and handles basically everything, including video output to the monochrome 96×64 pixel display. System RAM is 4 kB of SRAM, which is enough for the basic games the system was designed for.
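As a quick sanity check of our own (not from the video), a one-bit-per-pixel framebuffer for that display fits easily in the available SRAM:

```python
# Back-of-the-envelope: a 1-bit-per-pixel framebuffer for the 96x64
# monochrome display fits comfortably in the 4 kB of system SRAM.
width, height = 96, 64
framebuffer_bytes = width * height // 8   # one bit per pixel
sram_bytes = 4 * 1024

print(framebuffer_bytes)                  # 768
print(framebuffer_bytes / sram_bytes)     # 0.1875, under a fifth of RAM
```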

The little handheld system offered up some capabilities that even the full-sized Game Boy couldn’t match, such as a basic motion sensor in the form of a reed relay. There’s also 2 MB of ROM space directly addressable without banking.

Programming the device is quite straightforward, not only because of the very accessible ISA, but also because of the readily available documentation and toolchain. This enables development in C, but in the video, assembly is used for the added challenge.

Making the screen tiles can be done in an online editor that [Inkbox] also created, with the game tested in an emulator before building a custom cartridge that uses an RP2040-based board to play it on real hardware. Although a fairly obscure gaming handheld, it seems like a delightful little system to tinker with and make more games for.


The Death of Baseload and Similar Grid Tropes

February 12, 2026

Anyone who has spent any amount of time around people who are really interested in energy policy will have heard proclamations like ‘baseload is dead’, and seen energy sources sorted by parameters like their levelized cost of energy (LCoE) and merit order. One may also have noticed that this is an area where debates and arguments can get pretty heated.

The confusing thing is that depending on where you look, you will find wildly different claims. This raises many questions, not only about where the actual truth lies, but also about the fundamentals. A statement like ‘baseload is dead’ leaves a lot of questions unanswered, such as what baseload actually is, and why it has to die.

Upon exploring these topics we quickly drown in terms like ‘load-following’ and ‘dispatchable power’, all of which are part of a healthy grid, but which to the average person sound as logical and easy to follow as a discussion on stock trading, with a similar level of mysticism. Let’s fix that.

Loading The Bases

Baseload is the lowest continuously expected demand, which sets the minimum amount of generating capacity that needs to be online and powering the grid at all times. Hence the ‘base’ part, and thus clearly not something that can be ‘dead’, since this base demand is still there.
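As a toy illustration with invented numbers, baseload is just the floor of the demand curve:

```python
# Hypothetical hourly demand in MW over one day (numbers invented
# purely for illustration).
hourly_demand_mw = [620, 600, 590, 585, 600, 650, 720, 810,
                    870, 900, 910, 920, 930, 925, 915, 900,
                    890, 920, 950, 940, 880, 800, 720, 660]

# Baseload is the lowest continuously expected demand: the floor that
# must be covered by online generating capacity at every hour.
baseload_mw = min(hourly_demand_mw)

# Everything above that floor is what load-following and peaker
# plants have to cover.
variable_band_mw = max(hourly_demand_mw) - baseload_mw

print(baseload_mw)       # 585
print(variable_band_mw)  # 365
```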

The claim that ‘baseload is dead’ comes from the idea that with the new types of generation we are adding today, we no longer need dedicated baseload generators. After all, if the entire grid and its connected generators can respond dynamically to any change in demand, then you do not need to keep special baseload plants around; they have become obsolete.

Example electrical demand "Duck Curve" using historical data from California. (Credit: ArnoldRheinhold)

A baseload plant is what we traditionally call a power plant designed to run at or close to 100% output for as long as it can, usually between refueling and/or maintenance cycles. These are generally thermal plants, powered by coal or nuclear fuel, as this makes the most economical use of their generating capacity, and thus makes for the cheapest form of dispatchable power on the grid.

With only dispatchable generators on the grid, everything was very predictable, with any peaks handled by dedicated power plants: both load-following and peaking power plants. This all changed when large-scale solar and wind generators were introduced, and with them the duck curve was born.

As both sun and wind are generally more prevalent during the day, and these generators are generally not curtailed, everything else, from thermal power plants to hydroelectric plants, suddenly has to throttle back. Doing so obviously ruins the economics of these dispatchable power sources, which is a big part of why the distorted claim that ‘baseload is dead’ is being made.
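The duck curve is easy to reproduce with toy numbers: subtract a hypothetical solar profile from demand to get the ‘net load’ that dispatchable plants actually see (all figures invented for illustration):

```python
# Illustrative duck-curve arithmetic: dispatchable plants must follow
# demand minus non-curtailed VRE output ("net load"). Made-up numbers.
demand_mw = [700, 680, 670, 680, 720, 800, 900, 950, 980, 1000, 990, 950,
             930, 920, 930, 960, 1020, 1100, 1150, 1120, 1000, 900, 800, 740]
solar_mw  = [0, 0, 0, 0, 0, 20, 80, 200, 350, 450, 500, 520,
             510, 470, 380, 250, 120, 30, 0, 0, 0, 0, 0, 0]

net_load_mw = [d - s for d, s in zip(demand_mw, solar_mw)]

# The painful part is the evening: solar output collapses just as demand
# peaks, so dispatchable plants have to ramp up very quickly.
ramps = [net_load_mw[i + 1] - net_load_mw[i] for i in range(len(net_load_mw) - 1)]

print(min(net_load_mw))  # 420: the midday belly of the duck
print(max(ramps))        # 190: steepest one-hour upward ramp in MW
```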

Chaos Management

The Fengning pumped storage power station in north China's Hebei Province. (Credit: CFP)

Suffice it to say that having the entire grid adapt to PV solar and wind farms – whose output can and will fluctuate strongly over the course of a day – is not a great plan if the goal is to keep grid costs low. Not only can these forms of variable renewable energy (VRE) only be curtailed, not ramped up, they also add thousands of kilometers of transmission lines and substations to the grid, due to the often remote areas where they are installed, adding to the headache of grid management.

Although curtailing VRE has become increasingly common, this inability to be dispatched is a threat to the stability of national grids in countries that have focused primarily on VRE build-out, not only due to general variability in output, but also because of “anticyclonic gloom”: periods when poor solar conditions are accompanied by a lack of wind for days on end, also called ‘Dunkelflaute’ if you prefer a more German flair.

What we realistically need are generators that are dispatchable – i.e. available on demand – and that can follow the demand – i.e. the load – as quickly as possible, ideally in the same generator. Basically, the grid controller always has to have more capacity that can be put online within N seconds or minutes, plus spare online capacity that can ramp up to deal with any rapid spikes.
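The reserve bookkeeping described here can be sketched with some toy numbers (all hypothetical, and greatly simplified compared to real TSO practice):

```python
# Toy sketch of grid reserve logic: the controller must always hold spare
# capacity, part of it already spinning. All numbers are hypothetical.
online = [
    {"name": "nuclear", "output_mw": 1000, "max_mw": 1000},  # baseload, flat out
    {"name": "hydro",   "output_mw": 300,  "max_mw": 500},   # load-following
    {"name": "gas",     "output_mw": 100,  "max_mw": 400},   # peaker, part-loaded
]

# Spinning reserve: headroom on units that are already online and synced.
spinning_reserve_mw = sum(g["max_mw"] - g["output_mw"] for g in online)

# A sudden demand spike is safe only if online units can ramp to absorb it;
# otherwise additional dispatchable capacity must be started in advance.
demand_spike_mw = 250
can_cover_spike = demand_spike_mw <= spinning_reserve_mw

print(spinning_reserve_mw, can_cover_spike)  # 500 True
```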

Although a lot is being made of grid-level storage that can soak up excess VRE power and release it during periods of high demand, there is no economical form of such storage that can also scale sufficiently. Thus countries like Germany end up paying surrounding countries to accept their excess power, even if they could technically turn all of their valleys into pumped hydro installations for energy storage.

This makes it incredibly hard to integrate VRE into an electrical grid without simply hard curtailing them whenever they cut into online dispatchable capacity.

Following Dispatch

Essential to the health of a grid is the ability to respond to changes in demand. This is where we find the concept of load-following, which goes hand in hand with dispatchable capacity. At its core this means a power generator that – when pinged by the grid controller (transmission system operator, or TSO) – is able to spin its power output up or down. For each generator, the response time and adjustment curve are known to the TSO, so that these factors can be taken into account.

European-wide grid oscillations prior to the Iberian peninsula blackout. (Credit: Linnert et al., FAU, 2025)

The failure of generators to respond as expected, or their suddenly dropping output levels, can have disastrous effects, particularly on the frequency and thus voltage of the grid. During the 2025 Iberian peninsula blackout, for example, grid oscillations involving PV solar farms grew until a substation tripped, presumably due to low voltage, and a cascade failure subsequently rippled through the grid. A big reason for this is the inability of current VRE generators to generate or absorb reactive power, an issue that could be fixed with so-called grid-forming converters, but at significant extra cost to VRE generator owners, as this would add local energy storage requirements such as batteries.

Typically generators are divided into types that prefer to run at full output (baseload), can efficiently adjust their output (load follow) or are only meant for times when demand outstrips the currently available supply (peaker). Whether a generator is suitable for any such task largely depends on the design and usage.

This is where, for example, a nuclear plant can be a better fit than a coal plant or gas turbine, as having either of the latter idle burns a lot of fuel with nothing to show for it. Running at full output is efficient for a coal plant but rather expensive for a gas turbine, which makes gas turbines mostly suitable as load-following and peaker plants, since they can ramp up fairly quickly.

A nuclear plant, on the other hand, can be designed in a number of ways, optimized either for full output or for load-following, as is the case in nuclear-heavy countries like France, where pressurized water reactors (PWRs) use so-called ‘grey control rods’ to finely tune reactor output and thus provide very rapid and precise load-following capability.

Overview of the thermal energy transfer in the Natrium reactor design. (Source: TerraPower)

There’s now also a new category of nuclear plant designs that decouple the reactor from the steam turbine by using intermediate thermal storage. The TerraPower Natrium design – currently under construction – uses liquid sodium as the reactor coolant and molten salt in the secondary (non-nuclear) loop, allowing this thermal energy to be used on demand instead of directly feeding a steam turbine.

This kind of design theoretically allows for very rapid load-following, while giving the connected reactor all the time in the world to ramp its output up or down, or even power down for a refueling cycle, limited only by how fast the thermal energy can be converted into electrical power, or used for e.g. district heating or industrial heat.

Although grid-level storage in the form of pumped hydro is very efficient at buffering power, it cannot be used in many locations, and alternatives like batteries are too expensive to be used for anything more than smoothing out rapid surges in demand. All of which reinforces the case for much cheaper and more versatile dispatchable power generators.

Grid Integration

No power generator on the grid can be treated as a stand-alone unit, as each kind of generator comes with its own implications for the grid. This fact is conveniently ignored when the so-called levelized cost of energy (LCoE) metric is used to call VRE the ‘cheapest’ of all types of generation. Although it is true that VRE has no fuel costs and relatively low maintenance costs, the problem is that most of their costs are not captured by the LCoE metric.

What LCoE doesn’t capture is whether a generator is dispatchable or not; a dispatchable generator will be needed whenever a non-dispatchable generator cannot produce due to clouds, night, heavy snow cover, no wind, or overly strong wind. Also not captured by LCoE are the additional costs incurred by having the generator connected to the grid, from running and maintaining transmission lines to remote locations, to the cost of compensating for grid frequency oscillations and the like.
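For the curious, the textbook LCoE definition can be sketched in a few lines of Python – illustrative numbers only, and note that none of the integration costs discussed above appear anywhere in it:

```python
# Standard LCoE definition: discounted lifetime costs divided by discounted
# lifetime generation. Note what is absent: backup capacity, extra
# transmission, and balancing costs never enter the formula.

def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Levelized cost of energy, in currency units per MWh."""
    costs = capex + sum(
        annual_opex / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1))
    energy = sum(
        annual_mwh / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1))
    return costs / energy

# Hypothetical 1 MW solar farm: no fuel, low opex, so the per-MWh number
# looks very cheap even though system integration costs are ignored.
print(round(lcoe(capex=900_000, annual_opex=10_000,
                 annual_mwh=2_000, lifetime_years=25,
                 discount_rate=0.05), 2))  # 36.93
```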

Levelized cost of operation of various technologies. (Credit: IEA, 2020)

Ultimately these can be summarized as ‘system integration costs’, and they are significantly tougher to nail down firmly, as well as highly variable depending on the grid, the power mix, and other variables. Correspondingly, the cost of electricity from various sources is hotly debated, but the consensus is to use either the levelized avoided cost of energy (LACE) or value-adjusted LCoE (VALCoE), which do take these external factors into account.

Energy value by technology relative to average wholesale electricity price in the European Union in the Stated Policies Scenario. (Credit: IEA, 2020)

As addressed in the linked IEA article on VALCoE, an implication of this is that the value of VRE drops as its presence on the grid increases. This can be seen in the above graph, based on 2020-era EU energy policies; the graphs for the US and China differ again, but China’s also shows a strong drop in the value of PV solar, while wind power is again less affected.

A Heated Subject

It is unfortunate that energy policy has become a subject of heated political and ideological furore, as it should really be just as boring as any other administrative task. Although the power industry has largely tried to stay objective in this matter, it is unfortunately subject to the influence of both politics and investors. This has led to pretty amazing and breakneck shifts in energy policy in recent years, such as Belgium first phasing out nuclear power and replacing it with multiple gas plants, then deciding not only to keep its existing nuclear plants after all, but also to look at building new ones.

Similarly, the US has seen, and continues to see, heated debates on energy policy which occasionally touch upon objective truth. Unfortunately for all involved, power grids do not care about personal opinions or preferences, and picking the wrong energy policy will inevitably lead to consequences that can cost lives.

In that sense, it is very harmful that cornerstones of a healthy grid such as baseload, reactive power handling, and load-following are being chipped away at by limited metrics such as LCoE and by strong opinions on certain types of power technology. If we cared more about a stable grid than about ‘being right’, then all VRE generators would, for example, be required to use grid-forming converters, and TSOs could finally breathe a sigh of relief.


Bash via Transpiler

February 12, 2026

It is no secret that we often use and abuse bash to write things that ought to be in a different language. But bash does have its attractions. In the modern world, it is practically everywhere. It can also be very expressive, but perhaps hard to read.

We’ve talked about Amber before, a language that is made to be easier to read and write, but transpiles to bash so it can run anywhere. The FOSDEM 2026 conference featured a talk by [Daniele Scasciafratte] that shows how to best use Amber. If you prefer slides to a video, you can read a copy of the presentation.

For an example, here’s a typical Amber script. It compiles fully to a somewhat longer bash script:


import * from "std/env"

fun example(value: Num = 1) {
    if 1 > 0 {
        let numbers = [value, value]
        let sum = 0
        loop i in numbers {
            sum += numbers[i]
        }
        echo "it's " + "me"
        return sum
    }
    fail 1
}

echo example(1) failed {
    echo "What???"
    is_command("echo")
}

The slides have even more examples. The language seems somewhat Python-like, and you can easily figure out most of it from reading the examples. While bash is nearly universal, the programs a script might use may not be. If you have it available, the Amber-generated code will employ bshchk to check dependencies before execution.

According to the slides, zsh support is on the way, too. Overall, it looks like it would be a great tool if you have to deploy with bash or even if you just want an easier way to script.

We’ve looked at Amber before. Besides, there are a ton of crazy things you can do with bash.


Wednesday, February 11, 2026

FLOSS Weekly Episode 864: Work Hard, Save Money, Retire Early

February 11, 2026

This week Jonathan chats with Bill Shotts about The Linux Command Line! That’s Bill’s book published by No Starch Press, all about how to make your way around the Linux command line! Bill has had quite a career doing Unix administration, and has thoughts on the current state of technology. Watch to find out more!

Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or have the guest contact us! Take a look at the schedule here.

Direct Download in DRM-free MP3.

If you’d rather read along, here’s the transcript for this week’s episode.


Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)

Licensed under Creative Commons: By Attribution 4.0 License


Motorola’s Password Pill Was Just One Idea

February 11, 2026

Let’s face it; remembering a bunch of passwords is the pits, and it’s just getting worse as time goes on. These days, you really ought to have a securely-generated key-smash password for everything. And at that point you need a password manager, but you still have to remember the password for that.

Well, Motorola is sympathetic to this problem, or at least they were in 2013 when they came up with the password pill. Motorola Mobility, who were owned by Google at the time, debuted it at the All Things Digital D11 tech conference in California. This was a future that hasn’t come to pass, for better or worse, but it was a fun thought experiment in near-futurism.

Dancing with DARPA

Back then, such bleeding-edge research was headed by former DARPA chief Regina Dugan. At the conference, Dugan stated that she was “working to fix the mechanical mismatch between humans and electronics” by doing things such as partnering with companies that “make authentication more human”.

Round, white pills spill from a bottle against a yellow background.
Image by HeungSoon from Pixabay

Along with Proteus Digital Health, Dugan et al. created a pill with a small chip and a switch inside it. Once swallowed, your stomach acids serve as the electrolyte: the acids power the chip, and the switch toggles on and off, creating an 18-bit ECG-like signal.

Basically, your entire body becomes an authentication token. Unlock your phone, open your car door, and turn on your computer, just by existing near them.
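As a back-of-the-envelope aside (our arithmetic, not Motorola’s), an 18-bit signal is a very small identifier space compared to even a modest password:

```python
import math

# An 18-bit code space gives only 2**18 distinct tokens, far less entropy
# than even a short random password. Fine for identifying one pill-taker,
# weak as a secret key.
pill_tokens = 2 ** 18
print(pill_tokens)  # 262144

# For comparison: an 8-character password drawn from 62 alphanumerics.
password_bits = 8 * math.log2(62)
print(round(password_bits, 1))  # 47.6 bits, i.e. ~2**47.6 possibilities
```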

It should be noted that Proteus already had FDA clearance for a medical device consisting of an ingestible sensor. The idea behind those is that medical staff can track when a patient has taken a pill based on the radio signal. Dugan said at the conference that it would be medically safe to ingest up to thirty of these pills per day for the rest of your life. Oh yeah, and she says the only thing that the pill exposes about the taker is whether they took it or not.

Motorola head Dennis Woodside stated that they had demonstrated this authentication technology working and authenticating a phone. While Motorola never intended to ship this pill, it was based on the Proteus device with FDA clearance, presumably so they could test it safely.

The story of Proteus Digital Health is beyond us here, but for whatever reason, their smart pills never took off. So we’re left to speculate about the impact on society that this past future of popping password pills would have had.

About That Government Influence

Robert Redford and Sidney Poitier in an open-topped convertible. Black and white.
Redford and Poitier in Sneakers (1992). Image via IMDb

While it sounds sorta cool at first, it also seems like something a government might choose to force on a person sooner or later. Someone they wanted to insert behind enemy lines, perhaps, or just create an inside job that otherwise wouldn’t have happened.

Taking off my tin foil hat for a moment, I’ll compare this pill with existing modern biometrics. A face scan, a fingerprint, or even a Sneakers-style “my voice is my passport, verify me” are all momentary actions.

With these, you’re more or less in control of when authentication happens. A pill, on the other hand, must run its course. You can’t change the signal mid-digestive cycle. Plus, you’d have to guard your pills with your life, and if a couple of pills pass through you every day, you’d better have a big pillbox.

Authentication Can Be Skin Deep

Motorola/MC10's temporary password tattoo, up close and personal.
Image by MC10 via Slashgear

So the password pill never came to pass, but it’s worth mentioning that at the same conference, Dugan debuted another method of physical authentication — a temporary password tattoo they developed along with MC10, a company that makes stretchable circuits and has since been acquired by a company called Medidata.

More typically, their circuits are used to do things like concussion detection for sports, or baby thermometers that continuously track temperature.

Dugan said that the key MC10 technology is in the accordion-like structures connecting the islands of inflexible silicon. These structures can stretch up to 200% and still work just fine. The tattoos are waterproof, so go ahead and swim or shower. Of course, the password tattoo never came to be, either. And that’s just fine with me.



Vintage Film Editor Becomes HDMI Monitor

February 11, 2026

With the convenience of digital cameras and editing software, shooting video today is so easy. But fifty years ago it wasn’t electronics that stored the picture but film, and for many that meant Super 8. Editing Super 8 involved a razor blade and glue, and an editing station, like a small projector and screen, was an essential accessory. Today these are a relatively useless curio, so [Endpoint101] picked one up for not a lot and converted it into an HDMI monitor.

Inside these devices there’s a film transport mechanism and a projection path usually folded with a couple of mirrors. In this case the glass screen and much of the internals have been removed, and an appropriate LCD screen fitted. It’s USB powered, and incorporates a plug-in USB power supply mounted in a UK trailing socket for which there’s plenty of space.

There’s always some discussion whenever a vintage device like this is torn apart as to whether that’s appropriate. These film editors really are ten a penny though, so even those of us who are 8 mm enthusiasts can see beyond this one. The result is a pleasingly retro monitor, which if we’re honest we could find space for ourselves. The full video is below the break. Meanwhile it’s not the first conversion we’ve seen, here’s another Hanimex packing a Raspberry Pi.


PROFS: The Office Suite of the 1980s

February 11, 2026

Today, we take office software suites for granted, but in the 1970s you were lucky to have a typewriter and access to a photocopier. Then in the early 1980s, IBM rolled out PROFS — the Professional Office System — in a bid to revolutionize the office. It was an offshoot of an earlier internal system, and while it would hardly qualify as an office suite today, for the time it was very advanced.

The key component was an editor you could use to input notes and e-mail messages. PROFS also kept your calendar and could provide databases like phonebooks. There were several key features of PROFS that would make it hard to recognize as productivity software today. For one thing, IBM terminals were screen-oriented. The central computer would load a form into your terminal, which you could fill out. Then you’d press send to transmit it back to the mainframe. That makes text editing, for example, a very different proposition since you work on a screen of data at any one time. In addition, while you could coordinate calendars and send e-mail, you could only do that with certain people.

A PROFS message from your inbox

In general,  PROFS connected everyone using your mainframe or, perhaps, a group of mainframes. In some cases, there might be gateways to other systems, but it wasn’t universal. However, it did have most of the major functions you’d expect from an e-mail system that was text-only, as you can see in the screenshot from a 1986 manual. PF keys, by the way, are what we would now call function keys.

The calendar was good, too. You could grant different users different access to your calendar. It was possible to just let people see when you were busy or mark events as confidential or personal.

You could actually operate PROFS using a command-line interface, and the PF keys were simply shorthand. That was a good thing, too. If you wanted to erase a file named Hackaday, for example, you had to type: ERASE Hackaday AUT$PROF.

Styles

PROFS messages were short and essentially ephemeral chat messages. Of course, because of the block-mode terminals, you could only receive messages after you sent something to the mainframe, or while you were idle in a menu. A note was different. Notes were what we would now call e-mail. They went into your inbox, and you could file them in “logs”, which were similar to folders.

If you wanted something with more gravitas, you could create documents. Documents could have templates and be merged with profiles to get information for a particular author. For example, a secretary might prepare a letter to print and mail using different profiles for different senders that had unique addresses, titles, and phone numbers.

Documents could be marked draft or final. You had your own personal data storage area, and there was also a shared storage. Draft documents could be automatically versioned. Documents also received unique ID numbers and were encoded with their creation date. Of course, you could also restrict certain documents to certain users or make them read-only for particular users.

More Features

Pretty good spell check options for the 1980s.

PROFS could remind you of things or calendar appointments. It could also let you look up things like phone numbers or work with other databases. The calendar could help you find times when all participants were available. PROFS could tie into DisplayWrite (at least, by version 2) so it could spell check using custom or stock dictionaries. It also looked for problematic words such as effect vs. affect and wordy phrases or clichés.

The real game changer, though, was the ability to find documents without searching through a physical filing cabinet. The amount of time spent maintaining and searching files in a typical pre-automation business was staggering.

You could ask PROFS to suggest rewrites for a certain grade level or access a thesaurus. This all sounds ordinary now, but it was a big innovation in the 1980s.

Of course, in those days, documents were likely to be printed on a computer-controlled typewriter or, perhaps, an ordinary line printer. But how could you control formatting using plain text? This all hinged on IBM’s DisplayWrite word processor.

Markup

Today we use HTML or Markdown to give hints about rendering our text. PROFS and DisplayWrite weren’t much different, although they had their own language. The :p. tag started a paragraph. You could set off a quotation between :q. and :eq. Unnumbered lists would start with :ul., continue with :li. for each item, and end with :eul. Sounds almost familiar, right? Of course, programs like roff and WordStar had similar kinds of commands, and, truthfully, the markup looks almost like a strange HTML.
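Based only on the tags listed above, a document fragment in this markup might have looked something like this (content invented for illustration):

```
:p.PROFS could route this memo to everyone in the department.
:q.The amount of time spent searching paper files was staggering.:eq.
:p.Action items:
:ul.
:li.File the quarterly report.
:li.Update the phone directory.
:eul.
```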

The Whole Office

IBM wanted to show people that this wasn’t just word processing for the secretarial pool. Advanced users could customize templates and profiles. Administrators could tailor menus and add features. There were applications you could add to provide spreadsheet capability, access different databases, and gateway to other systems like TWX or Telex.

It is hard to find any demonstrations of PROFS, but a few years ago, someone documented their adventure in trying to get PROFS running. Check out [HS Tech Channel’s] video below.

History and Future

Supposedly, the original system was built in the late 1970s in conjunction with Amoco Research. However, we’re a little suspicious of that claim. We know of at least three other companies that were very proud of “helping IBM design PROFS.” As far as we could ever tell, that was a line IBM sales fed people when they helped them design a sign-in screen with their company name on it, and that was about it.

The system would go through several releases until it morphed into OfficeVision. As PCs started to take over, OfficeVision/2 and OS/2 were the IBM answer that few wanted. Eventually, IBM would suggest using Lotus Notes or Domino and would eventually buy Lotus in 1995 to own the products.

Scandal

One place that PROFS got a lot of public attention was during the Iran-Contra affair. Oliver North and others exchanged PROFS notes about their activities and deleted them. However, deleting a note in PROFS isn’t always a true deletion. If you send a note to several people, they all have to delete it before the system may delete it. If you send a document, deleting the message only deletes the notification that the document is ready, not the document.

Investigators recovered many “deleted” e-mails from PROFS that provided key details about the case. Oddly, around the same time, IBM offered an add-on to PROFS to ensure things you wanted to delete were really gone. Maybe a coincidence. Maybe not.

On Your Own

If you want to try building up a new PROFS system, we suggest starting with a virtual machine. If anyone suggests that word processing can’t get worse than DisplayWriter, they are very wrong.