The $0 productivity upgrade most developers miss
Your Mac and Linux machines come with grep, find, and cat - tools from the 1970s. Modern alternatives run 10-100x faster, output JSON for AI workflows, and install in 30 minutes.

If you remember nothing else:
- Standard Unix tools are from the 1970s - Modern alternatives like ripgrep and fd run 10-100x faster while providing better output and clearer error messages
- JSON output changes everything for AI - Tools like ripgrep and jq output structured data that LLMs can parse reliably, enabling automation that was impossible with text-based tools
- The Rust revolution matters - Tools written in Rust between 2018-2024 are dramatically faster, safer, and more reliable than their C predecessors from decades ago
- Free 30-minute install creates lasting gains - One-time setup with Homebrew provides 10x performance improvements and a noticeably better developer experience permanently
Open a terminal on almost any developer’s machine and you’ll find tools designed in the early 1970s.
grep was created in 1973. cat and find date to the same era. They work. They’ve worked for 50 years. But between 2018 and 2024, the open-source community quietly rewrote all of them - and most developers never got the memo.
The new versions run 10-100x faster. They output JSON that AI tools can actually parse. They have sensible defaults, clear error messages, and they just behave better.
I think most developers have no idea these tools exist. The ones who do often shrug it off because “the old tools work fine.” That logic made sense before AI workflows. It doesn’t anymore.
Why 1970s tools break AI automation
What happens when you try to automate something with standard Unix tools and AI? Say you want to search your codebase for a pattern, pull some information, and feed it to Claude or ChatGPT for analysis. You write something like this:
grep -r "function.*authenticate" . --include="*.js" |
grep -v node_modules |
cut -d: -f1,2
# now try to get Claude to parse this mess
The output looks like this:
src/auth/login.js:23: function authenticateUser(credentials) {
src/auth/oauth.js:45:function authenticate_oauth(token) {
Unstructured text. Different formats per line. No metadata. No context. When you hand this to an LLM, it has to guess at the structure. Sometimes it works. Sometimes it hallucinates. Sometimes it just fails silently.
Now try the modern equivalent with ripgrep:
rg "function.*authenticate" --type js --json
The output:
{"type":"match","data":{"path":{"text":"src/auth/login.js"},"lines":{"text":"  function authenticateUser(credentials) {"},"line_number":23,"absolute_offset":1250,"submatches":[{"match":{"text":"function authenticateUser"},"start":2,"end":27}]}}
Structured data. The LLM can parse it reliably. You can extract exactly what you need. You can build automation that actually holds together.
This difference compounds across every script, every automation, every AI-assisted workflow. Text parsing fails unpredictably. Structured data works reliably. That’s the whole argument.
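To make that concrete, here’s a sketch (assuming jq is installed) of turning ripgrep’s JSON stream into clean file:line pairs a prompt can reference. The echo stands in for live rg output, using the sample match object from above:

```shell
# ripgrep's --json mode emits one JSON object per line.
# Keep only the "match" events and print path:line_number.
# The echo stands in for: rg "function.*authenticate" --type js --json
echo '{"type":"match","data":{"path":{"text":"src/auth/login.js"},"lines":{"text":"  function authenticateUser(credentials) {"},"line_number":23,"absolute_offset":1250,"submatches":[{"match":{"text":"function authenticateUser"},"start":2,"end":27}]}}' |
  jq -r 'select(.type == "match") | "\(.data.path.text):\(.data.line_number)"'
# → src/auth/login.js:23
```

Swap the echo for the real rg invocation and the same jq filter works over every match in the codebase.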
The 10 tools that matter
Not all modern tools are worth the install. Some are marginal improvements. Some solve problems nobody actually has. But ten tools make a genuine difference.
For code search: ripgrep
Ripgrep (rg) replaces grep. Ten times faster. Respects .gitignore automatically. Outputs JSON.
On a million-line codebase, grep takes 20 seconds. Ripgrep takes 2. Multiply that by how many times per day your developers search code. The JSON output is why it matters for AI - every automation you build on top just works because the data is structured.
Install: brew install ripgrep
For file finding: fd
The Unix find command has syntax that feels deliberately hostile. Finding all JavaScript files that don’t contain “test” in the path:
find . -name "*.js" -not -path "*/test/*" -type f
With fd:
fd -e js -E test
Five times faster. A tenth of the cognitive load. Developers will actually use it instead of giving up and searching manually.
Install: brew install fd
For JSON processing: jq
If you work with APIs, config files, or logs, you work with JSON. The standard Unix approach is grep and sed. That’s like performing surgery with a hammer.
jq is a proper JSON processor. Query it like a database. Transform it reliably.
curl api.example.com/users | jq '.data[] | {name: .name, active: .status == "active"}'
This is the difference between automation that works and automation that fails randomly.
Install: brew install jq
For data tables: miller
You have a CSV with 100,000 rows. You need to filter it, join it with another file, calculate statistics, and output results. The Unix way involves awk scripts that nobody can read and everyone is afraid to modify.
Miller (mlr) processes CSV, JSON, and other formats like a database. Filter, join, aggregate - all with SQL-like syntax.
mlr --csv filter '$age > 30' then stats1 -a mean,sum -f salary data.csv
For any data analysis or reporting automation, this is essential.
Install: brew install miller
For interactive selection: fzf
Your automation script needs the user to pick from a list. The old way is printing the list and making them type a number, or building a custom menu you spent an hour on.
fzf is a fuzzy finder. Pipe any list into it and the user can search and select interactively. Works with anything.
vim $(fzf)
git checkout $(git branch | fzf)
This transforms clunky scripts into tools that feel polished.
Install: brew install fzf
For syntax highlighting: bat
cat dumps file contents. No syntax highlighting. No line numbers. Just text on a screen.
bat adds syntax highlighting, git integration, line numbers, and automatic paging. When you’re building automation that shows code to users, bat makes it actually readable.
Install: brew install bat
For better git diffs: delta
Git diffs are hard to read. Lines of red and green, no syntax highlighting, easy to miss important changes. Experienced developers approve PRs they clearly didn’t fully read because the diff was too painful.
delta makes git diffs readable. Syntax highlighting, better formatting, side-by-side view. Your developers will actually review changes instead of skimming.
Install: brew install git-delta
For smart navigation: zoxide
Developers type cd hundreds of times per day. Usually to directories they visit constantly.
zoxide learns which directories they use most and lets them jump there with partial matches. z api jumps to the api-v2 directory. z cont jumps to the controllers folder.
Seconds per use. Hours per year. Across a team, it adds up.
Install: brew install zoxide
For better prompts: starship
The default terminal prompt shows the current directory. Maybe the git branch if you’ve configured it.
starship shows context automatically. Git branch, current language version, whether the last command failed, how long it took. Developers don’t need to run separate commands to check. It’s just there.
Install: brew install starship
For quick command help: tldr
Man pages are thorough and nearly unreadable. When your developer needs to remember how to use tar, they don’t need 2000 lines of documentation. They need three examples.
tldr provides simplified, example-focused help. No theory. Just: here’s how you do the common things.
Install: brew install tldr
Why this matters for AI work
Every organization building with AI hits the same wall. Tools from the 1970s assume text output. AI works better with structured data.
GitHub’s own research confirms that AI coding assistants dramatically improve task completion speed and developer satisfaction. Structured code context makes that even more effective. Tools that output JSON give AI precise data to work with. Tools that output unstructured text force the AI to guess.
When ripgrep outputs search results as JSON, your AI knows exactly which file, which line, what matched, what the surrounding context is. When grep outputs text, the AI has to parse it heuristically and hope it gets it right. That’s a fragile foundation.
The pattern shows up everywhere: analyzing error logs, processing API responses, building automation, generating reports. The developers with modern tools build automation that holds up. The developers without them build automation that mostly works and fails mysteriously sometimes.
Probably the most frustrating part is that the failures are silent and intermittent. The automation seems fine until it doesn’t.
How to actually do this
Most of these tools - ripgrep, fd, bat, delta, zoxide, starship - are written in Rust. Not a coincidence.
Rust is a systems programming language that came out of Mozilla in 2010. As fast as C but memory-safe by design. Between 2018 and 2024, developers rewrote huge swaths of Unix utilities in Rust. The results are faster, more reliable, and have genuinely better error messages. When grep fails, you get “binary file matches.” When ripgrep fails, you get a clear explanation of what went wrong and how to fix it. That matters.
One-line install on Mac:
brew install ripgrep fd fzf bat jq git-delta zoxide starship tldr miller
On Linux, replace brew with apt or whatever package manager your distro uses. One wrinkle worth knowing: on Debian and Ubuntu, fd is packaged as fd-find (the binary is fdfind) and bat’s binary installs as batcat.
Then configure the shell integrations:
# Add to ~/.zshrc or ~/.bashrc
eval "$(fzf --zsh)"
eval "$(zoxide init zsh)"
eval "$(starship init zsh)"
Configure git to use delta:
git config --global core.pager delta
Done. Thirty minutes. The tools are available immediately.
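The core.pager line is enough on its own, but a few more delta settings are worth adding at the same time (these are standard delta options; side-by-side is a matter of taste):

```shell
# Route interactive diffs (git add -p) through delta as well
git config --global interactive.diffFilter "delta --color-only"
# n/N jumps between files in large diffs
git config --global delta.navigate true
# Two-column diff view, if you prefer it
git config --global delta.side-by-side true
```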
The harder part is getting developers to actually use the new tools instead of defaulting to the old ones. That’s habit change. Some will switch immediately. Others need to see the benefit first.
Show, don’t mandate. When someone asks how to find all the places a function gets called, show them rg instead of grep. When someone needs to filter a CSV, show them mlr instead of awk. Over time, the team shifts. Not because it was required, but because the new tools are obviously better once you’ve tried them.
The real cost of staying put
This isn’t really about command-line tools. It’s about whether your developers’ environment keeps up with the work they’re doing.
The Unix tools from the 1970s were remarkable for their time. But we have better options now. Faster. Easier. Compatible with how AI actually works.
The developers who adopt modern tools build things that weren’t practical before. Not because the old tools couldn’t technically do it, but because the new tools make it easy enough that developers actually do it. That’s the distinction worth sitting with.
Say a developer has a manual task that takes 10 minutes. With old tools, scripting it takes an hour and still has edge cases. So they just do it manually forever. With modern tools, scripting it takes 10 minutes and works reliably. They script it once and never touch it again.
Multiply that across a team. Multiply it by the number of tasks. The difference is real.
The tools are free. The install takes half an hour. And when you’re building with AI, the structured output and reliable parsing make the difference between automation that holds up and automation that fails unpredictably.
Thirty minutes from now, every automation you build will be more reliable. Hard to find a better return on half an hour.
About the Author
Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.
Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.