Manage the health of your CLI tools at scale

Source: DEV Community
Your services have dashboards, tracing, and alerting. Your CLI tools print to STDOUT and exit. When something breaks, debugging starts at the API gateway -- everything upstream is a black box. This makes no sense. If your CLI talks to an API, it's part of the request path. Instrument it like any other participant.

This post describes how we instrumented an internal Perl CLI -- the same mycli tool from our earlier post on fatpacking -- with syslog logging, StatsD metrics, and correlation IDs. The post is strongly biased towards tooling internal to an organisation, which has the luxury of being opinionated: you control the deployment targets, you know where syslog goes, and you can lean on solved infrastructure rather than building your own. The principles generalise to any language and any CLI that talks to an API.

Why observability matters in CLI tools

Web services get dashboards as a matter of course[1]. Error rates, latency percentiles, request counts -- these are table stakes for an