Microservices architectures have fundamentally changed how modern software systems are built and deployed. However, while production-grade observability is well-established through APM platforms, tracing systems, and log aggregators, local development environments often suffer from poor visibility. Developers working on microservices locally need access to real-time metrics, structured logs, and trace visualization to quickly detect performance bottlenecks, trace service-to-service communication, and troubleshoot issues effectively. This post is a technically detailed guide to implementing local observability inside Visual Studio Code using a curated set of extensions. The focus is on enabling deep inspection of services running on localhost or via containers during development workflows.
When microservices are spun up locally via Docker Compose, makefiles, or individual scripts, they may interact asynchronously across REST, gRPC, message queues, or internal APIs. Diagnosing where requests fail, or which service introduces latency, is difficult without distributed logs and traces. Local observability helps trace request paths and response times across boundaries in real time.
Without integrated observability, developers rely on console.log statements, shell outputs, and manual inspection of logs from multiple terminals. This breaks the development loop and increases cognitive overhead. Observability tools inside VSCode reduce this friction by surfacing metrics, logs, and health indicators within the IDE.
Many runtime bugs or performance issues stem from unobserved behaviors introduced during development. By simulating observability conditions locally using traces, metrics, and structured logs, developers can reduce environment drift and increase the fidelity of their local testing.
Quantitative indicators such as memory usage, request duration, error rate, or custom business metrics (orders created, cache hits) are fundamental to understanding a microservice’s behavior. In local environments, metrics help profile resource consumption and performance regressions during iterative development.
Logs provide the narrative of execution paths, error messages, state transitions, and service interactions. Structured logging using JSON or key-value pairs enables log aggregation and queryability. Logs that include context propagation, correlation identifiers, and service metadata provide rich insights even during local execution.
Traces represent a single transaction or request as it moves across services, capturing latencies, dependencies, and failures. When services are instrumented with distributed tracing, they produce spans that can be visualized in tools such as Jaeger. Locally, tracing helps developers identify downstream latency sources and incomplete or failed transactions.
The VSCode Docker extension, coupled with Dev Containers, allows developers to monitor microservices running in containers with precise context. When services are encapsulated in containers, VSCode can directly hook into their logs, environment variables, runtime metrics, and file system.
Create a .devcontainer folder for each service, define dependencies in the Dockerfile, and use docker-compose to bring up multiple services. Attach to the container using VSCode's Remote - Containers feature to debug processes, monitor usage stats, and observe port bindings.
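A minimal devcontainer.json sketch, assuming a docker-compose.yml one level up and an illustrative orders service, could look like this:

```json
// .devcontainer/devcontainer.json -- names, ports, and paths are assumptions for illustration
{
  "name": "orders-service",
  "dockerComposeFile": ["../docker-compose.yml"],
  "service": "orders",
  "workspaceFolder": "/workspace",
  "forwardPorts": [3000, 9464],
  "customizations": {
    "vscode": {
      "extensions": ["ms-azuretools.vscode-docker", "humao.rest-client"]
    }
  }
}
```

Forward whichever ports your service actually exposes; the container's logs and environment then become visible through the Docker extension without leaving the editor.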
For microservices that expose Prometheus-compatible metrics via /metrics endpoints, VSCode can be used to validate and simulate metrics requests. The REST Client extension enables HTTP requests directly inside .http files, providing inline previews of Prometheus output.
Query localhost:port/metrics from .http files. Instrument services with libraries such as prom-client in Node.js or prometheus_client in Python. After launching services locally, create .http test cases to simulate metric scraping. Confirm metrics are exposed correctly and include dimensional labels for aggregation.
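For the Node.js case, a minimal prom-client sketch (service, route, and label names are illustrative) that exposes a labelled counter on /metrics could look like:

```javascript
// metrics.js -- illustrative prom-client setup; port and label names are assumptions
const express = require('express');
const client = require('prom-client');

const register = new client.Registry();
client.collectDefaultMetrics({ register }); // process CPU, memory, event loop lag, etc.

// Dimensional labels allow aggregation by route and status code
const httpRequests = new client.Counter({
  name: 'http_requests_total',
  help: 'Total HTTP requests handled by this service',
  labelNames: ['method', 'route', 'status'],
  registers: [register],
});

const app = express();

app.get('/orders', (req, res) => {
  httpRequests.inc({ method: 'GET', route: '/orders', status: 200 });
  res.json({ orders: [] });
});

// Prometheus-compatible scrape endpoint
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', register.contentType);
  res.send(await register.metrics());
});

app.listen(3000);
```

A one-line request in a .http file (GET http://localhost:3000/metrics) then previews the exposition format inline, which is enough to verify names, help text, and labels before Prometheus ever scrapes the service.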
Logs scattered across services, containers, and terminals create significant friction. To address this, use a log aggregation agent like Fluent Bit or Loki to route logs to a unified location. Extensions like Log Viewer or custom queries using REST Client for Loki endpoints make it possible to view logs directly inside VSCode.
Configure each microservice to emit structured logs to Fluent Bit, which forwards them to a local Loki instance. Use .http requests to query logs from VSCode, or open the Loki web UI in a preview window. Include trace_id and service in every log line for correlation.
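As a sketch, assuming a default Loki instance on localhost:3100 (the service label and trace id are illustrative, and your client may require the LogQL expression to be URL-encoded), a .http request against Loki's query_range API could look like:

```http
### Fetch recent logs for one service, filtered by a correlation id
GET http://localhost:3100/loki/api/v1/query_range
    ?query={service="orders"} |= "4bf92f3577b34da6"
    &limit=50
```

REST Client renders the JSON response in a preview pane, so correlated log lines never have to leave the editor.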
To trace requests across services locally, developers can instrument their services using OpenTelemetry SDKs. Export spans to a locally running Jaeger collector via OTLP exporters. Jaeger exposes a UI that can be opened from VSCode or embedded using web previews.
Add OpenTelemetry SDKs to each service, with consistent trace and span propagation headers. Run Jaeger using Docker Compose. After starting services, generate requests and examine their traces in the Jaeger UI. Use the Trace Compass extension if you need trace visualization directly inside VSCode.
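For a Node.js service, a tracing bootstrap along these lines is a reasonable sketch, assuming the Jaeger all-in-one container with OTLP ingestion enabled on localhost:4318 (package choices and the service name are assumptions):

```javascript
// tracing.js -- illustrative OpenTelemetry bootstrap; endpoint and service name are assumptions
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');

const sdk = new NodeSDK({
  serviceName: 'orders',                             // appears as the service name in Jaeger
  traceExporter: new OTLPTraceExporter({
    url: 'http://localhost:4318/v1/traces',          // Jaeger all-in-one OTLP/HTTP endpoint
  }),
  instrumentations: [getNodeAutoInstrumentations()], // auto-instrument http, express, grpc, etc.
});

sdk.start();
process.on('SIGTERM', () => sdk.shutdown());
```

Load it before the application entry point (for example, node -r ./tracing.js server.js) so the auto-instrumentations patch http and framework modules before they are imported; incoming and outgoing requests then carry propagation headers automatically.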
Tests are observability surfaces too. When running unit or integration tests, emit synthetic metrics and log outputs that simulate production telemetry. Use Mocha Test Explorer to visualize test timing, parallelism, and status.
In Node.js, use beforeEach and afterEach hooks in Mocha to record timing data and simulate metrics. Output custom telemetry to local StatsD or logs. Use VSCode's testing sidebar to inspect execution time across test files and cases.
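A sketch of such hooks as a Mocha root hook plugin, with hypothetical metric and file names, could be:

```javascript
// test/hooks/timing.js -- illustrative root hook plugin; load with: mocha --require test/hooks/timing.js
const StatsD = require('hot-shots');     // any local StatsD client works; this package is an assumption
const statsd = new StatsD({ host: '127.0.0.1', port: 8125 });

exports.mochaHooks = {
  beforeEach() {
    // Record a high-resolution start time on the test object itself
    this.currentTest._startedAt = process.hrtime.bigint();
  },
  afterEach() {
    const elapsedMs = Number(process.hrtime.bigint() - this.currentTest._startedAt) / 1e6;
    // Emit a timing metric and a structured log line per test
    statsd.timing('tests.duration_ms', elapsedMs);
    console.log(JSON.stringify({
      test: this.currentTest.fullTitle(),
      state: this.currentTest.state,
      elapsedMs,
    }));
  },
};
```

The testing sidebar and Mocha Test Explorer show per-test durations alongside this synthetic telemetry, so slow tests and slow services surface in the same place.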
Most microservices expose readiness and liveness endpoints. Use VSCode tasks to manage orchestration scripts and post-startup health checks. Terminal enhancements can beautify CLI output, improving readability during multi-service startup.
Pipe curl responses into the VSCode terminal for endpoint verification. Create tasks.json entries to run docker-compose up, npm run dev, or any other entry points. Chain additional shell commands like curl localhost:port/healthz to check service readiness. Use Terminal Tabs to organize logs per service.
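A hedged tasks.json sketch along those lines (labels, the compose file, and the port are illustrative) that starts the stack and then polls a readiness endpoint:

```json
// .vscode/tasks.json -- illustrative; adjust commands, ports, and labels to your services
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "stack: up",
      "type": "shell",
      "command": "docker-compose up -d --build",
      "problemMatcher": []
    },
    {
      "label": "stack: wait for orders service",
      "type": "shell",
      "command": "until curl -sf http://localhost:3000/healthz; do sleep 1; done; echo ready",
      "dependsOn": ["stack: up"],
      "problemMatcher": []
    }
  ]
}
```

Running the second task from the command palette brings the stack up and blocks until the readiness check passes, which keeps multi-service startup out of ad-hoc terminal history.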
Organizing your .vscode folder is essential for repeatable observability workflows.
.vscode/
├── launch.json # Attach to containers, Node, or Python services
├── tasks.json # Define orchestration, health check, and metrics polling
├── settings.json # Extension configurations, log view paths, telemetry options
├── rest.http # HTTP requests to simulate Prometheus, health checks
└── telemetry.yaml # OpenTelemetry configuration file for local tracing
Link these configurations with a shared .env file or environment-specific overrides to manage local observability environments efficiently.
Sometimes, developers want to share their local telemetry with other teammates or external tools. Using secure tunnels like Ngrok or Cloudflare Tunnel, you can expose Jaeger, Prometheus, or Loki instances from localhost.
Run Jaeger on localhost:16686 and expose it using ngrok http 16686. Share the generated URL with a teammate to inspect traces without deploying to staging. VSCode Live Share can also share terminal sessions and extension states.
Monitoring microservices locally is no longer optional for serious developers. With service-to-service complexity rising, developers need local observability to deliver reliable, debuggable code. VSCode extensions provide a flexible and powerful interface for integrating metrics, logs, and traces directly into the daily development workflow. By leveraging tools like Docker, OpenTelemetry, Prometheus, and Loki, developers can simulate production-grade observability while maintaining the velocity of local iteration.