
JSON logging is incredibly useful in managed deployments: an ELK stack ingesting JSON logs is that much closer to producing actionable analytics with Kibana, no custom line-parsers needed. The downside, as anyone who has looked at a screen-barf of JSON logs knows, is that they are nearly unreadable in their raw state.

Enter jq, a lightweight command-line JSON processor. jq accepts raw JSON text, filters for only what you want to see, and formats it in a way that is much easier on the eyes. Newer versions handle streaming input, so the output of kubectl can be piped in and processed in one line. Here’s an example command I might use (the tee with process substitution passes through any lines that aren’t JSON, so they still reach the terminal instead of breaking jq):

kubectl -n dev logs my-container | tee >(grep -v "^{") | grep "^{" | jq -r -c '{dt: .datetime, level: .level, code: .code, route: .route, message: .msg} | [.dt, .level, .code, .route, .message] | @tsv' 
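If the tee plumbing feels heavy, jq can also skip the non-JSON lines on its own: `-R` reads each input line as a raw string, and `fromjson?` parses it while silently dropping anything that isn’t valid JSON. A minimal sketch, with made-up sample lines standing in for the kubectl output:

```shell
# Sketch: jq alone handles mixed JSON / non-JSON input.
#   -R        read each line as a raw string
#   -r        emit raw text (needed for @tsv)
#   fromjson? parse the string as JSON, discarding lines that fail to parse
printf '%s\n' \
  'some non-JSON startup banner' \
  '{"datetime":"2021-02-13 00:41:04.338538","level":"PERF","code":"","route":"/some/endpoint","msg":"STARTED"}' \
  | jq -R -r 'fromjson? | [.datetime, .level, .code, .route, .msg] | @tsv'
# prints one TSV row for the JSON line; the banner line is dropped
```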

The result is nicely formatted logging, for example:

2021-02-13 00:41:04.338538	PERF		/some/endpoint	STARTED
2021-02-13 00:41:04.434863	INFO		Information logged
2021-02-13 00:41:04.436204	INFO		Additional Info
2021-02-13 00:41:04.245043	PERF	200	/some/endpoint	FINISHED

Which I think everyone agrees is far more readable than lines of

{"appName": "my-container", "code": "", "datetime": "2021-02-14 17:29:10.886288", "guid": "NOT SUPPLIED", "level": "PERF", "location": "http_performance_logging.py:46", "msg": "STARTED", "responseTime": "", "responseTime_ms": 0, "route": "/some/endpoint", "sourceIP": "10.0.0.77", "sourcePort": "45878", "thread": "CP Server Thread-5", "userInfo": null, "verb": "GET"}
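And because every line is structured data, jq can filter on any field before formatting, not just reshape it. A sketch using select() to keep only the PERF entries, with hypothetical sample lines shaped like the log entry above:

```shell
# Sketch: filter structured logs by field value with jq's select().
# Only entries whose "level" is "PERF" survive; everything else is dropped
# before the TSV formatting step.
printf '%s\n' \
  '{"datetime":"2021-02-13 00:41:04.338538","level":"PERF","route":"/some/endpoint","msg":"STARTED"}' \
  '{"datetime":"2021-02-13 00:41:04.434863","level":"INFO","route":"","msg":"Information logged"}' \
  | jq -r 'select(.level == "PERF") | [.datetime, .route, .msg] | @tsv'
# prints only the PERF row; the INFO row is filtered out
```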
