The shell
When it comes to solving small problems efficiently, the shell is undoubtedly one of the best tools available to us. It allows us to translate our thoughts into code quickly and concisely.
For example, suppose that we need to check whether the DNS records contained in a given x509 certificate are all valid. This could be done rather easily with the shell:
openssl x509 -text -in $FILE \
| grep DNS \
| sed 's/\w*DNS://g' \
| sed 's/,/\n/g' \
| xargs dig
Imagine doing the same with Python, Go, or any other language: it would take significantly longer.
Obviously, we cannot always use the shell, especially for production code that we have to maintain long-term. However, the shell is unbeatable when it comes to accomplishing small operational tasks. As a software developer, you have likely experienced needing to do something akin to the following:
- Bulk renaming a bunch of files.
- Restoring files using your version control system, but only those matching a certain extension or pattern.
- Setting up a watcher that continuously formats and lints your code as you write it.
- Running Terraform for multiple projects, with certain conditionals.
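To give a flavor of how small these tasks are in shell, here is a sketch of the first one, bulk-renaming files from one extension to another (the extensions are just placeholders):
for f in *.jpeg; do
    mv -- "$f" "${f%.jpeg}.jpg"   # strip the old extension, tack on the new one
done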
Being able to use the shell to complete these kinds of tasks quickly can significantly boost productivity. If you want to see just how significant the speed improvement is from using the shell efficiently, just find someone near you who lives in their terminal—over the course of our careers, we tend to meet a few of them naturally. Seeing them do their work is often an eye-opening experience.
Shell is hard
As good as it sounds to be able to write a shell script ad hoc in our terminal to instantly solve our problems, the unfortunate truth we have to deal with is that writing shell efficiently is really hard. It involves having to remember many things by heart:
- Useful binaries that are available and their corresponding flags.
- Unusual syntaxes that shell uses.
- Syntaxes of other languages, such as awk and perl, that you might mix with your shell scripts.
Back in college, after seeing my professor casually chain a bunch of shell commands together and pipe them to awk with a bunch of ungodly syntax, I was inspired to achieve his level of efficiency. For weeks, I tried learning and memorizing the syntax for shell and awk.
Despite trying so hard, I never succeeded. I do not write complex shell scripts often enough to keep all that information fresh in my head. After a long period of time, what I could remember tended to regress to the following:
- Commonly used binaries and their flags.
- Shell syntaxes to chain binaries and deal with their output: &&, ||, ;, |, >.
- Subshells and environment variables: $(echo $VAR).
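To illustrate, these building blocks alone already cover throwaway lines like the following (the commands and file names here are hypothetical, just to show the operators in use):
make build && ./run.sh || echo "build or run failed" > errors.log   # && / || / >
echo "current commit: $(git rev-parse --short HEAD)"                # subshell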
While these tend to be enough for most things I want to do (e.g. git diff | grep $WORD), I still find it frustrating to have to repeatedly look up how to write a loop in shell. As such, I started trying to find some way to remedy this problem.
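For reference, the loop syntax in question is nothing exotic; it just never sticks. A sketch of the kind of thing I keep looking up (the echo stands in for whatever actually needs to run per file):
git diff --name-only | while read -r file; do
    echo "changed: $file"   # stand-in for the real per-file command
done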
Using vim as a crutch
For a while, I used vim as a crutch. As a heavy vim user, I found that I was able to use it as a substitute when I could not recall how to do certain things. For example, when I failed to remember how to write a loop in shell, I would simply jump into vim. In vim, I could easily generate all iterations of the loop using regex or :norm. I also had a zsh plugin which allowed me to edit my terminal commands using vim, which helped improve this workflow.
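For those curious, this behavior does not strictly require a plugin; zsh ships an edit-command-line widget that opens the current command line in $EDITOR. A minimal sketch for ~/.zshrc (the plugin I actually used may wire this up differently):
autoload -Uz edit-command-line
zle -N edit-command-line
bindkey '^x^e' edit-command-line   # Ctrl-X Ctrl-E drops the current command into $EDITOR (vim, in my case)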
Ultimately though, this is still not a good substitute. In particular, I find it annoying that I cannot easily use my shell history to repeat past commands, since they tend to get super long.
Shell tagging
Luckily for me, my co-worker gave this advice one day while I was doing some sensitive operations in my day job:
Just FYI, you might want to start inserting # before typing up your commands. That way, if you accidentally hit “Enter”, it will just be a no-op and you won’t cause people to get paged.
This made me realize that I can actually use comments when writing shell in my terminal, not just in scripts! In hindsight, this is obviously possible. However, like most people, I presume, I had never found a need to use comments when writing shell in my terminal, so it came as somewhat of a surprise.
With this in mind, I started doing what I call “shell tagging”. Whenever I find a certain long chain of commands useful, I run a “templated” version of it with a comment at the end. For example:
hg status \
| grep -v ${EXTENSION}$ \
| cut -d' ' -f2 \
| xargs hg revert \
# Revert all files that are not of type ${EXTENSION}
Paired with fzf, I can now quickly fetch complicated commands from my shell history when needed. All I need to do to fetch the command above is press Ctrl+R and type # revert file ext.
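This assumes fzf’s history key binding is hooked into the shell. A minimal sketch for ~/.zshrc (the exact incantation depends on the fzf version and how it was installed):
source <(fzf --zsh)   # recent fzf (0.48+) emits its own shell integration
# older installs ship a key-bindings file instead, e.g.:
# source /usr/share/fzf/key-bindings.zsh   # path varies by distro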
After accumulating useful commands for a few weeks, my efficiency has improved significantly. What used to take me a few minutes now takes only a few seconds.
In addition to this, I have also started storing notes and references, such as:
#Link: https://learnxinyminutes.com/docs/bash/ | Bash reference
or
#TODO: https://docs.google.com/somedoc | Design doc for XYZ I need to review
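Since these notes live in the shell history like everything else, they can also be pulled out in bulk instead of one at a time through Ctrl+R. A rough sketch, assuming zsh with its history file at ~/.zsh_history:
grep -aE '#(Link|TODO):' ~/.zsh_history | fzf   # -a because history files may contain non-text bytes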
Being able to fuzzy search all of this with tags that I personally set has made information retrieval in general much faster for me. I may never be able to remember awk syntax by heart like my professor, but I think I have achieved a similar level of efficiency with way less effort using this trick.