Monitoring Microservices Locally with VSCode: Observability Extensions Developers Should Know

Written By:
Founder & CTO
July 14, 2025

Microservices architectures have fundamentally changed how modern software systems are built and deployed. However, while production-grade observability is well established through APM platforms, tracing systems, and log aggregators, local development environments often suffer from poor visibility. Developers working on microservices locally need access to real-time metrics, structured logs, and trace visualization to quickly detect performance bottlenecks, trace service-to-service communication, and troubleshoot issues effectively. This blog provides a technically detailed guide to implementing local observability inside Visual Studio Code using a curated set of extensions. The focus is on enabling deep inspection of services running on localhost or in containers during development workflows.

Why Local Observability is Critical for Microservices Developers
Debugging is Complex Across Multiple Local Services

When microservices are spun up locally via Docker Compose, makefiles, or individual scripts, they may interact asynchronously across REST, gRPC, message queues, or internal APIs. Diagnosing where requests fail, or which service introduces latency, is difficult without distributed logs and traces. Local observability helps trace request paths and response times across boundaries in real time.

Feedback Loops are Too Slow Without Inline Visibility

Without integrated observability, developers rely on console.log, shell outputs, and manual inspection of logs from multiple terminals. This breaks the development loop and increases cognitive overhead. Observability tools inside VSCode reduce this latency by surfacing metrics, logs, and health indicators within the IDE.

Production Incidents Often Originate From Locally Unobserved Behavior

Many runtime bugs or performance issues stem from unobserved behaviors introduced during development. By simulating observability conditions locally using traces, metrics, and structured logs, developers can reduce environment drift and increase the fidelity of their local testing.

Core Observability Dimensions for Local Monitoring
Metrics

Quantitative indicators such as memory usage, request duration, error rate, or custom business metrics (orders created, cache hits) are fundamental to understanding a microservice’s behavior. In local environments, metrics help profile resource consumption and performance regressions during iterative development.

Logs

Logs provide the narrative of execution paths, error messages, state transitions, and service interactions. Structured logging using JSON or key-value pairs enables log aggregation and queryability. Logs that include context propagation, correlation identifiers, and service metadata provide rich insights even during local execution.
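As a concrete example, a structured log line emitted by a local service might look like the sketch below; the field names, service name, and identifiers are purely illustrative.

```jsonc
// Illustrative structured log line (all values are made up)
{
  "timestamp": "2025-07-14T10:32:08.114Z",
  "level": "error",
  "service": "orders-service",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "message": "failed to reserve inventory",
  "order_id": "A-1042",
  "upstream_status": 503
}
```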

Traces

Traces represent a single transaction or request as it moves across services, capturing latencies, dependencies, and failures. When services are instrumented with distributed tracing, they produce spans that can be visualized in tools such as Jaeger. Locally, tracing helps developers identify downstream latency sources and incomplete or failed transactions.

VSCode Extensions for Local Observability

Docker Extension with Dev Containers for Runtime Metrics and Environment Isolation

The VSCode Docker extension, coupled with Dev Containers, allows developers to monitor microservices running in containers with precise context. When services are encapsulated in containers, VSCode can directly hook into their logs, environment variables, runtime metrics, and file system.

Key Features
  • Display per-container memory and CPU metrics inline in the VSCode container view
  • View live container logs with filtering by service name or severity level
  • Access Docker Compose configurations for orchestration
  • Open a terminal inside each container for debugging
Developer Workflow

Create a .devcontainer folder for each service, define dependencies in the Dockerfile, and use docker-compose to bring up multiple services. Attach to the container using VSCode’s Dev Containers (formerly Remote - Containers) feature to debug processes, monitor usage stats, and observe port bindings.
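As a rough illustration, a Compose-based devcontainer.json might look like the sketch below; the service name, compose file path, and workspace folder are assumptions that will differ per project.

```jsonc
// .devcontainer/devcontainer.json -- minimal sketch for a Compose-managed service
// ("orders-service" and all paths are illustrative)
{
  "name": "orders-service",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "orders-service",
  "workspaceFolder": "/workspace",
  // Keep sibling services running when the VSCode window is closed.
  "shutdownAction": "none"
}
```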

Prometheus Metrics Debugging with REST Client and Swagger Viewer

For microservices that expose Prometheus-compatible metrics via /metrics endpoints, VSCode can be used to validate and simulate metrics requests. The REST Client extension enables HTTP requests directly inside .http files, providing inline previews of Prometheus output.

Key Features
  • Send GET requests to localhost:port/metrics from .http files
  • Validate format and headers of Prometheus text exposition
  • Use Swagger Viewer to document custom metrics endpoints
Developer Workflow

Instrument services with libraries such as prom-client in Node.js or prometheus_client in Python. After launching services locally, create .http test cases to simulate metric scraping. Confirm metrics are exposed correctly and include dimensional labels for aggregation.
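A minimal .http file for this kind of check might look like the sketch below; the port and the metric shown in the comment are assumptions, not output from any real service.

```http
### Scrape the local metrics endpoint (port 3000 is illustrative)
# A healthy service responds with Prometheus text exposition, e.g. a line such as:
#   orders_created_total{region="local"} 42
GET http://localhost:3000/metrics
Accept: text/plain
```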

Fluent Bit or Loki Integration with VSCode Log Viewer Extensions

Logs scattered across services, containers, and terminals create significant friction. To address this, use a log shipper such as Fluent Bit to route logs to a unified backend such as Loki. Extensions such as Log Viewer, or custom REST Client queries against Loki endpoints, make it possible to view logs directly inside VSCode.

Key Features
  • Tail logs from multiple services in a single window
  • Apply filters and keyword searches with timestamps
  • Query Loki with label selectors, regex, or JSON field filters
Developer Workflow

Configure each microservice to emit structured logs to Fluent Bit, which forwards them to a local Loki instance. Use .http requests to query logs from VSCode, or open Grafana’s Explore view for Loki in a preview window. Include trace_id and service in every log line for correlation.
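A REST Client request against a local Loki instance might look like the sketch below; the default port 3100 and the service label value are assumptions to adapt to your setup.

```http
### Query recent logs from a local Loki instance (default port 3100 assumed)
# The query parameter is the URL-encoded LogQL selector {service="orders-service"};
# the service label value is illustrative.
GET http://localhost:3100/loki/api/v1/query_range?query=%7Bservice%3D%22orders-service%22%7D&limit=50
```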

Distributed Tracing with OpenTelemetry and Jaeger

To trace requests across services locally, developers can instrument their services using OpenTelemetry SDKs. Export spans to a locally running Jaeger collector via OTLP exporters. Jaeger exposes a UI that can be opened from VSCode or embedded using web previews.

Key Features
  • Visualize service call graphs and span durations
  • Identify latency bottlenecks between microservices
  • Filter traces by operation name, tags, or duration
Developer Workflow

Add OpenTelemetry SDKs to each service, ensuring trace context is propagated consistently across service boundaries. Run Jaeger using Docker Compose. After starting services, generate requests and examine their traces in the Jaeger UI. Use the Trace Compass extension if you need trace visualization directly inside VSCode.
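A minimal Docker Compose definition for a local Jaeger instance might look like the following sketch; the image tag is illustrative, and the port mappings assume the conventional OTLP ports.

```yaml
# docker-compose.yml fragment -- local Jaeger all-in-one with OTLP ingestion
# (pin whichever image version you standardize on; 1.57 is only an example)
services:
  jaeger:
    image: jaegertracing/all-in-one:1.57
    environment:
      - COLLECTOR_OTLP_ENABLED=true
    ports:
      - "16686:16686"   # Jaeger UI
      - "4317:4317"     # OTLP over gRPC
      - "4318:4318"     # OTLP over HTTP
```

Services can then point their OTLP exporters at localhost:4318 (HTTP) or localhost:4317 (gRPC) and inspect the resulting traces at localhost:16686.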

Test Observability with Mocha Test Explorer and Custom Metrics

Tests are observability surfaces too. When running unit or integration tests, emit synthetic metrics and log outputs that simulate production telemetry. Use Mocha Test Explorer to visualize test timing, parallelism, and status.

Key Features
  • View test execution tree and per-test duration
  • Emit logs during tests for observability signal simulation
  • Inject instrumentation using test lifecycle hooks
Developer Workflow

In Node.js, use beforeEach and afterEach hooks in Mocha to record timing data and simulate metrics. Output custom telemetry to local StatsD or logs. Use VSCode’s testing sidebar to inspect execution time across test files and cases.
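A minimal sketch of this pattern in Mocha is shown below; the metric name and the choice to emit a JSON log line (rather than pushing to StatsD) are illustrative.

```javascript
// test/observability.hooks.js -- minimal sketch of per-test telemetry in Mocha
// (the "test_duration_ms" metric name is illustrative)
const { performance } = require('node:perf_hooks');

let startedAt;

beforeEach(function () {
  // Record a high-resolution start time before every test.
  startedAt = performance.now();
});

afterEach(function () {
  const durationMs = performance.now() - startedAt;
  // Emit a structured, production-like telemetry line that the local log
  // pipeline (e.g. Fluent Bit -> Loki) can pick up like any other log.
  console.log(JSON.stringify({
    metric: 'test_duration_ms',
    test: this.currentTest.fullTitle(),
    status: this.currentTest.state, // 'passed' or 'failed'
    value: Math.round(durationMs),
  }));
});
```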

Health Checks and Orchestration Using VSCode Tasks and Terminal Enhancements

Most microservices expose readiness and liveness endpoints. Use VSCode tasks to manage orchestration scripts and post-startup health checks. Terminal enhancements can beautify CLI output, improving readability during multi-service startup.

Key Features
  • Define task groups for starting services and checking health
  • Pipe curl responses into the VSCode terminal for endpoint verification
  • Auto-scroll and group terminal logs by service
Developer Workflow

Create tasks.json entries to run docker-compose up, npm run dev, or any other entry points. Chain additional shell commands like curl localhost:port/healthz to check service readiness. Use Terminal Tabs to organize logs per service.
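A sketch of such a tasks.json is shown below; the task labels, port, and /healthz path are assumptions to adapt to your own stack.

```jsonc
// .vscode/tasks.json -- sketch of chained start-up and health-check tasks
// (labels, the port, and the /healthz path are illustrative)
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "stack: up",
      "type": "shell",
      "command": "docker-compose up -d"
    },
    {
      "label": "stack: health-check",
      "type": "shell",
      // Runs only after "stack: up" has finished; retries while the service boots.
      "dependsOn": ["stack: up"],
      "command": "curl -fsS --retry 10 --retry-connrefused --retry-delay 2 http://localhost:8080/healthz && echo 'service ready'"
    }
  ]
}
```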

Structuring Your VSCode Workspace for Observability

Organizing your .vscode folder is essential for repeatable observability workflows.

.vscode/
├── launch.json         # Attach to containers, Node, or Python services
├── tasks.json          # Define orchestration, health check, and metrics polling
├── settings.json       # Extension configurations, log view paths, telemetry options
├── rest.http           # HTTP requests to simulate Prometheus, health checks
└── telemetry.yaml      # OpenTelemetry configuration file for local tracing

Link these configurations with a shared .env or environment-specific overrides to manage local observability environments efficiently.

Exposing Local Observability for External Inspection and Collaboration

Sometimes, developers want to share their local telemetry with teammates or external tools. Using a secure tunnel such as ngrok or Cloudflare Tunnel, you can expose Jaeger, Prometheus, or Loki instances running on localhost.

Developer Workflow

Run Jaeger on localhost:16686 and expose it using ngrok http 16686. Share the generated URL with a teammate to inspect traces without deploying to staging. VSCode Live Share can also share terminal sessions and extension states.

Final Thoughts

Monitoring microservices locally is no longer optional for serious developers. With service-to-service complexity rising, developers need local observability to deliver reliable, debuggable code. VSCode extensions provide a flexible and powerful interface for integrating metrics, logs, and traces directly into the daily development workflow. By leveraging tools like Docker, OpenTelemetry, Prometheus, and Loki, developers can simulate production-grade observability while maintaining the velocity of local iteration.