Modern Linux Tools vs Unix Classics: Which would I choose?

A Dive Into Linux Command Line Bashing

Recently, a customer at my Day Job asked for data that could only be retrieved from our public API. But most of our customers aren’t the API-driven types. So rather than tell them “Use the API”, I Gave Them The Pickle, as is customary for us.

To generate the report, I needed to parse some JSON at the command line, do some basic search-and-replaces, then convert it all to CSV. The data came from two API calls, each producing a key piece of data (an ID) and then separate details related to those IDs. The API calls were easy enough with wget, and I was going to use jq to parse the resulting JSON.
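For reference, the retrieval side was just a pair of wget calls along these lines (the URLs and filenames below are placeholders, not our real endpoints):

    wget -q -O /tmp/items.json   "https://api.example.com/v1/items"
    wget -q -O /tmp/details.json "https://api.example.com/v1/details"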

Now I had TWO problems

I was told by somebody who’d used it in the past that jq was simple to use. But after fiddling with it for 20 minutes and not having much luck, I reached out to a coworker who’s a highly respected developer. My coworker admitted that they weren’t super familiar with jq, and that they typically resorted to tried-and-true Unix commands and pipes rather than learning something new that could only do one thing.

Those tried-and-true commands they were talking about were the very ones I thought it would be “cheating” to use, because I’d be forgoing the newer tool.

Do One Thing Well

Those tried-and-true commands we were referring to? None other than the usual awk, sed, cut, and grep, and of course the Unix pipe | to glue them all together. Really, why use a JSON parsing program that could only do one thing (parse JSON) when I could use a combination of tools that, when piped together, could do far more?
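To make that concrete, here’s the flavor of one-liner I mean, under the assumption that the JSON is pretty-printed with one “id” field per line (the field name and file are the same made-up stand-ins from the sketch above):

    grep '"id"' /tmp/items.json | sed 's/[",]//g' | awk '{print $2}'

Three old tools, one pipe character, and the IDs fall out the other end.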

Would I really gain anything by using a specialized tool released in 2013 (jq, what Alton Brown would call a “unitasker”: a device created for one job and one job only) over the common tools released for Unix between 1973 and 1985? And would it be worthwhile to learn something brand new instead of using tools that were already well known to me?

I decided it wasn’t. So I got to work doing things the way I always do them: grep, sed, awk, and cut, and a loop.

For Things in Thing, Do Stuff. Thanks.

The bash script I ended up writing does things quick and somewhat ugly. First, it saves the API call output to a couple of temp files. Then it parses the JSON with the aforementioned Unix staples in a for loop. Then it uses the second API call to cross-reference another piece of info I need, and uses sed to replace it in my output. All done, with the needed output sent to my customer, in less time than it would have taken me to properly learn jq.
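I’m not going to paste the real script here, but the shape of it was roughly this. Everything below (the URLs, the field names, and the assumption that each detail’s “name” line sits right under its “id” line) is a made-up stand-in for the real data:

    #!/usr/bin/env bash
    # Quick and ugly, as advertised. Placeholder URLs and field names.

    # Step 1: save both API responses to temp files.
    wget -q -O /tmp/items.json   "https://api.example.com/v1/items"
    wget -q -O /tmp/details.json "https://api.example.com/v1/details"

    # Step 2: loop over the IDs parsed out of the first response.
    echo "id,name" > report.csv
    for id in $(grep '"id"' /tmp/items.json | sed 's/[^0-9]//g'); do
      # Step 3: cross-reference the second response for this ID's detail.
      name=$(grep -A1 "\"id\": $id" /tmp/details.json | grep '"name"' | cut -d'"' -f4)
      echo "$id,$name" >> report.csv
    done

In the actual script the last step was a sed substitution into the output rather than building each row directly, but the idea is the same: grep, sed, cut, a loop, and a CSV at the end.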

An Added Benefit: Portability

Bash might not be sexy, and grep, sed, awk, and cut might not be cutting edge, but they are everywhere. macOS? Yup. Linux? Obviously. And Windows? With WSL2, absolutely. In fact, that’s where I developed this tool, and then went and ran it on a native Linux machine without having to install any additional packages. Had I used jq, I’d have had to make sure it was installed everywhere my script was intended to be run. It’s just the first steps on the stairway down to dependency hell.

This Might Be Awkward, but…

All of these musings led me to a conclusion that works for me in my situation: if I can’t do it with grep, sed, awk, and cut and a for or while loop, then maybe I’m not the right guy to do it. Oh sure, I could learn Python. But I don’t need Python on a daily basis like I do bash. And sure, I could learn Perl, but this is 2023 and people know better these days. [Editor’s note: He’s kidding, sort of, but he’s also mostly right.]

In Conclusion

Sometimes the best tool for the job isn’t the New Shiny, but rather it is the tool you know, love, and trust. What are your favorite old and crusty go-to’s these days?

3 thoughts on “Modern Linux Tools vs Unix Classics: Which would I choose?”

  1. But jq isn’t a unitasker. It’s usable for any task which involves JSON, which is about a million. You might as well say grep is a unitasker because it only deals with line-oriented data.

    Even Alton isn’t so strict with the moniker. A rice cooker isn’t just for rice, he says, but for anything that cooks like rice, which covers a lot of foods.

    It’s especially funny to hear someone say they’re avoiding one program which does exactly what they need in order to stay off “the first steps on the stairway down to dependency hell”, and then run their program on Windows by adding an entire Linux install.

    1. hehe I get where you’re coming from. But grep/awk/sed/etc. can work with any kind of data stream, whereas jq can only work with JSON. I’m not saying jq is horrible, or that it isn’t worth learning, but in the moment, when I have stuff to get done, there’s value in using the stuff you know off the top of your head.

      That being said, if I had to parse JSON on a daily basis, I’d be learning jq deeply. But I don’t. So I didn’t.

      As for the dependency hell comment: I’m referring to having to make sure that jq is installed everywhere I intend to use the script, which has nothing to do with a local Linux env via WSL2.

      Thanks for the comment 🙂
