It turns out that when I used a Linux FIFO to send commands to a running application, I inadvertently caused “IO spin”: the spawned application kept querying the input FIFO in a tight loop, burning the cycles of an entire core while trying to read from an asynchronous input. The end result was that my application went from an average of 10% usage on a core to over 100% (it was multithreaded). This is something I’d like to dig into later to understand why what I did caused this particular outcome. Until then, I’ve resorted to not piping stdin from a FIFO and dealing with console output scrolling as I try to enter commands.
Perhaps I’ll develop a small program for this and limit how often it tries to read from the named pipe to reduce wasted cycles.
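A minimal sketch of one approach such a small program might take. This is an assumption on my part until I actually investigate: a common cause of this kind of spin is that `read()` on a FIFO returns EOF (zero bytes) immediately once the last writer closes, so a naive read loop spins at full speed. Opening the FIFO with `O_RDWR` keeps a writer endpoint alive on our own descriptor, so the reader blocks in the kernel instead of spinning, and `select()` costs no CPU while idle. The path, the `"status"` command, and the writer thread here are all illustrative, not from the original setup:

```python
import os
import select
import tempfile
import threading
import time

path = os.path.join(tempfile.mkdtemp(), "cmd.fifo")
os.mkfifo(path)

# Opening O_RDWR keeps a writer open on our own descriptor, so read()
# blocks for data instead of returning EOF (0 bytes) the instant the
# last external writer closes -- a likely cause of the busy "IO spin".
fd = os.open(path, os.O_RDWR)

def writer():
    # Simulate an operator sending a command a moment later.
    time.sleep(0.2)
    wfd = os.open(path, os.O_WRONLY)
    os.write(wfd, b"status\n")
    os.close(wfd)

threading.Thread(target=writer, daemon=True).start()

# select() sleeps in the kernel until data arrives: no wasted cycles,
# and the timeout bounds how long we wait for a command.
ready, _, _ = select.select([fd], [], [], 5.0)
cmd = os.read(fd, 1024).decode().strip() if ready else None
os.close(fd)
print(cmd)
```

With blocking reads like this, rate-limiting the reads may turn out to be unnecessary, since the process consumes no CPU while waiting for the next command.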