Visualize metrics with Prometheus
This tutorial shows how to enable metrics collection for self-managed Boundary deployments, such as Boundary Enterprise and Boundary Community Edition, using Prometheus, and how to configure Grafana to visualize the metrics data.
Background
Visibility into Boundary workers and controllers plays an important role in ensuring the health of production deployments. Boundary 0.8 adds monitoring capabilities using the OpenMetrics exposition format, enabling administrators to collect the data using tools like Prometheus and export it into a data visualizer.
This tutorial reviews metrics collection in dev mode, and demonstrates how to enable metrics in a more realistic deployment scenario. Prometheus is then used to gather metrics, and Grafana for visualization.
Tutorial Contents
- Get set up: dev mode
- Install Prometheus
- Examine metrics
- Get set up: Docker Compose
- Configure Prometheus
- Configure Grafana
- Examine graph metrics
Prerequisites
- Docker is installed
- Docker Compose is installed
Tip
Docker Desktop 20.10 and above includes the Docker Compose binary, and does not require a separate installation.
- A Boundary binary version 0.8.0 or greater in your PATH. This tutorial uses Boundary 0.8.0.
- Terraform 0.13.0 or greater in your PATH
- Access to prometheus.io/download
Get set up: dev mode
Metrics are available for controllers and workers in a Boundary deployment. To quickly view metrics data, you will first use Boundary's dev mode and deploy Prometheus locally. Later in the tutorial a Docker Compose environment will be used to visualize metrics with Grafana.
In production deployments, metrics are enabled in Boundary's config
file by declaring an "ops"
listener. This will be explored later on.
Start boundary dev, which uses a configuration with a pre-defined ops listener.
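Dev mode can be started with a single command (a sketch; it assumes Docker is running locally):

```shell
# Start a Boundary dev-mode controller and worker with default settings,
# including an ops listener on port 9203.
boundary dev
```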
Open a web browser and navigate to http://localhost:9203/metrics. A list of metrics is available at this endpoint, reported in the OpenMetrics format. Monitoring solutions like Prometheus enable metrics reporting by scraping the values from this endpoint.
Leave dev mode running in the current terminal session, and open a new terminal window or tab to continue the tutorial.
Install Prometheus
Prometheus is an open-source monitoring and alerting toolkit. It gathers and stores metrics reported in the OpenMetrics exposition format.
From Prometheus's docs:
Prometheus scrapes metrics from instrumented jobs, either directly or via an intermediary push gateway for short-lived jobs. It stores all scraped samples locally and runs rules over this data to either aggregate and record new time series from existing data or generate alerts. Grafana or other API consumers can be used to visualize the collected data.
Download the latest Prometheus monitoring binary for your system and extract the archive to your local machine. This tutorial uses version 2.35.0.
For example, on macOS:
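A download command might look like the following (the exact release asset name depends on your operating system and architecture):

```shell
# Download the Prometheus 2.35.0 release archive for macOS (amd64).
curl -LO https://github.com/prometheus/prometheus/releases/download/v2.35.0/prometheus-2.35.0.darwin-amd64.tar.gz
```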
Next, unzip the archive:
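For example, assuming the macOS archive downloaded above:

```shell
# Extract the downloaded archive into the current directory.
tar -xvf prometheus-2.35.0.darwin-amd64.tar.gz
```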
Change into the extracted folder and view its contents:
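For example (the folder name assumes the macOS archive):

```shell
cd prometheus-2.35.0.darwin-amd64
ls
```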
The prometheus.yml in this directory is Prometheus's configuration file, used by the prometheus binary.
Open prometheus.yml in your text editor, and locate static_configs under the scrape_configs block.
The targets allow you to define the endpoints Prometheus should attempt to scrape metrics from. Notice that localhost:9090 is already defined for job_name: "prometheus". This is Prometheus's metrics endpoint, where it collects data about itself.
For a simple look at metrics, add an additional target for localhost:9203, the standard metrics endpoint for Boundary.
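One way this edit might look, adding the Boundary endpoint to the existing job's targets (a sketch; the rest of the file is unchanged):

```yaml
scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090", "localhost:9203"]
```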
Save this file, and return to your terminal.
Examine metrics
With the configuration file targeting Boundary's /metrics endpoint, Prometheus can be started to begin monitoring.
Ensure you are located in the extracted directory. Execute Prometheus, supplying the prometheus.yml config file.
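For example, from within the extracted directory:

```shell
# Start Prometheus with the edited configuration file.
./prometheus --config.file=prometheus.yml
```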
Open a web browser, and navigate to http://localhost:9090. You will be redirected to the /graph page, where queries can be constructed and executed.
Type boundary into the search box. Notice that autocomplete shows a scrollable list of metrics that can be queried for the controller and worker. The metric names are automatically populated from the values available at the /metrics endpoint.
As of Boundary 0.8, the available metrics include:
Controller metrics:
- HTTP request latency
- HTTP request size
- HTTP response size
- gRPC service latency
Worker metrics:
- Open proxy connections
- Sent bytes for proxying, handled by worker
- Received bytes for proxying, handled by worker
- Time elapsed before a header is written back to the end user
Other metrics:
- Build info for version details
This is an initial set of operational metrics and more will be added in the future. To learn more about specific metrics and how to access them, refer to the metrics documentation.
Execute a simple query by clicking on a metric value. For example, select boundary_cluster_client_grpc_request_duration_seconds_sum, and then click Execute.
The boundary_cluster_client_grpc_request_duration_seconds metric reports latencies for requests made to the gRPC service running on the cluster listener.
The returned results show the matching queries. Scrolling through the Evaluation time allows for quick navigation through metrics reported at a particular timestamp. Most metrics in this example will share the same timestamp because they were generated when boundary dev was executed.
Select the Graph view to show these metrics over time.
Additional queries to Boundary would produce more metrics. For example, authenticating to Boundary as the admin user would produce more events reported by the boundary_controller_api_http_request_size_bytes metric.
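In dev mode this can be tried with the CLI, assuming dev mode's default generated values (auth method ID ampw_1234567890, login name admin, password "password"):

```shell
# Authenticate against the dev-mode controller to generate some API traffic.
boundary authenticate password \
  -auth-method-id ampw_1234567890 \
  -login-name admin
```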
Return to Prometheus, and then execute a query for boundary_controller_api_http_request_size_bytes_sum. This would report a number of method="get" requests to the API at a single point in time.
Also notice that several metrics are available for boundary_worker. Boundary's dev mode includes a controller and worker instance for testing. Next, metrics will be examined using a more realistic Boundary deployment with Docker Compose.
When finished examining dev mode metrics, locate the correct shell session and stop Prometheus using ctrl+c.
Lastly, locate the terminal session where boundary dev was executed, and stop the dev server using ctrl+c.
Get set up: Docker Compose
The demo environment provided for this tutorial includes a Docker Compose cluster that deploys these containers:
- A Boundary 0.8.0 controller server
- A Boundary database
- 1 worker instance
- 1 postgres database target
- Prometheus
- Grafana
The Terraform Boundary Provider is also used in this tutorial to easily provision resources using Docker, so Terraform must be available in your PATH when deploying the demo environment.
To learn more about the various Boundary components, refer back to the Start a Development Environment tutorial.
Deploy the lab environment
The lab environment can be downloaded or cloned from the following GitHub repository:
In your terminal, clone the repository to get the example files locally:
Move into the learn-boundary-prometheus-metrics folder. Ensure that you are in the correct directory by listing its contents.
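For example:

```shell
cd learn-boundary-prometheus-metrics
ls
```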
The repository contains the following files:
- deploy: A script used to deploy and tear down the Docker Compose configuration.
- compose/docker-compose.yml: The Docker Compose configuration file describing how to provision and network the Boundary cluster.
- compose/controller.hcl: The controller configuration file.
- compose/worker.hcl: The worker configuration file.
- compose/prometheus.yml: The Prometheus configuration file.
- terraform/main.tf: The Terraform provisioning instructions using the Boundary provider.
- terraform/outputs.tf: The Terraform outputs file for printing user connection details.
This tutorial makes it easy to launch the test environment with the deploy script. Any resource deprecation warnings in the output can safely be ignored.
The Boundary user login details are printed in the shell output, and can also be viewed by inspecting the terraform/terraform.tfstate file. You will need the user1 auth_method_id to authenticate via the CLI and establish sessions later on. You can tear down the environment at any time by executing ./deploy cleanup.
To verify that the environment deployed correctly, print the running docker containers and notice the ones named with the prefix "boundary".
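For example:

```shell
# Show the running containers; the tutorial's containers are prefixed with "boundary".
docker ps
```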
The next part of this tutorial focuses on the relationship between the controller, worker, the Prometheus metrics server, and Grafana.
Enable metrics
Both the controller and worker instances can be configured to report metrics at their /metrics endpoint.
To enable metrics, a tcp listener with the "ops" purpose must be defined in the server configuration file:
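A minimal sketch of such a listener block, based on this tutorial's description, looks like the following:

```hcl
listener "tcp" {
  purpose = "ops"
}
```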
This example is the minimum needed to enable metrics. To expose the /metrics endpoint to Prometheus, a port should be specified in the listener configuration as well.
Open the controller configuration file, compose/controller.hcl. Uncomment lines 26 - 30.
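The uncommented block likely resembles the following sketch; refer to the file itself for the exact values (the tls_disable setting is an assumption here):

```hcl
# Ops listener exposing the /metrics (and /health) endpoints.
listener "tcp" {
  address     = "0.0.0.0:9203"
  purpose     = "ops"
  tls_disable = true
}
```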
Adding this listener block to a Boundary server will enable metrics collection at the address 0.0.0.0:9203. Save this file.
In the compose/docker-compose.yml file, notice that the controller container maps port 9203 to the localhost's 9203.
With this configuration in place, restart the controller.
Note
Depending on the OS and Docker installation method, Compose may name the containers differently. If the controller name does not match, list the running containers with docker ps and restart the controller using the listed name (such as boundary-controller_1).
Open your web browser, and visit http://localhost:9203/metrics. You will find a list of controller metrics, similar to what was displayed at this endpoint when running boundary dev earlier.
Unlike in dev mode, only the controller-related metrics are shown, including:
- boundary_controller_api_http_request_duration_seconds
- boundary_controller_api_http_request_size_bytes
- boundary_controller_api_http_response_size_bytes
- boundary_controller_cluster_grpc_request_duration_seconds
and other Go-related and generic Prometheus metrics.
Next, follow the same procedure to enable metrics on the worker.
Open the worker configuration file, compose/worker.hcl. Uncomment lines 11 - 15.
Adding this listener block, the same ops listener shown above for the controller, will enable metrics collection at the address 0.0.0.0:9203. Save this file.
In the compose/docker-compose.yml file, notice that the worker container maps port 9203 to the localhost's 9204. This is to prevent a port collision on 9203, where the controller metrics are being reported.
With this configuration in place, restart the worker.
Note
Depending on the OS and Docker installation method, Compose may name the containers differently. If the worker name does not match, list the running containers with docker ps and restart the worker using the listed name (such as boundary-worker_1).
Open your web browser, and visit http://localhost:9204/metrics. You will find a list of worker metrics. Unlike the controller, the metrics reported for the worker mostly contain:
- boundary_cluster_client_grpc_request_duration_seconds
- boundary_worker_proxy_http_write_header_duration_seconds
- boundary_worker_proxy_websocket_active_connections
- boundary_worker_proxy_websocket_sent_bytes
and other Go-related and generic Prometheus metrics.
Configure Prometheus
With metrics enabled on the controller and worker servers, Prometheus is ready to be configured.
Open the compose/prometheus.yml configuration file. This tutorial pre-defines the Prometheus jobs under the scrape_configs section:
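Based on the description below, the relevant jobs likely resemble this sketch (the job names and global settings may differ in the actual file):

```yaml
scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: "controller"
    static_configs:
      - targets: ["boundary:9203"]
  - job_name: "worker"
    static_configs:
      - targets: ["worker:9203"]
```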
Prometheus collects metrics about itself on localhost:9090.
For the controller job, Prometheus scrapes the boundary host at port 9203. Similarly, the worker job scrapes the worker host at port 9203.
Remember that the Prometheus container runs within the Docker Compose deployment, so it scrapes each host's exposed ops port, not the forwarded port on your machine's localhost.
This configuration is already correct. Open your web browser and navigate to http://localhost:9090 to view the Prometheus dashboard.
This dashboard is the same as when you deployed Prometheus locally earlier when running boundary dev. Check that various boundary_controller_ and boundary_worker_ metrics are available within the Expression search box.
Execute a query for the controller and worker to ensure metrics are properly
enabled.
With Prometheus successfully reporting metrics, a more robust visualization tool can be used to explore Boundary metrics.
Configure Grafana
Grafana is an observability tool commonly integrated with Prometheus. The tool enables a more robust view of metrics than can be easily created using Prometheus alone.
This tutorial uses the Grafana Docker image, but Grafana Cloud can also be integrated with minimal additional configuration using a remote_write block added to the compose/prometheus.yml file.
Examine the Grafana configuration on lines 85 - 95 in the compose/docker-compose.yml file.
The compose/datasource.yml file is copied into the grafana container at /etc/grafana/provisioning/datasources/, the default config directory for Grafana. When Grafana is started, it automatically loads this file.
The grafana container is mapped to port 3000 on your localhost.
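A sketch of what the grafana service definition might look like, based on the description above (the image tag and exact paths are assumptions; check the compose file for the real values):

```yaml
grafana:
  image: grafana/grafana:latest
  ports:
    - "3000:3000"
  volumes:
    - ./datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
```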
Open compose/datasource.yml.
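The file likely resembles the following sketch; the name, type, and url values are described below, while the remaining fields are assumptions:

```yaml
apiVersion: 1
datasources:
  - name: boundary
    type: prometheus
    access: proxy          # assumption: standard server-side access mode
    url: http://prometheus:9090
    isDefault: true        # assumption
```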
This basic Grafana configuration specifies a datasource named boundary, of type prometheus. The url: http://prometheus:9090 specifies the default location where Prometheus is running, and serves as the source of metrics for Grafana.
This configuration is already correct. Open your web browser and navigate to http://localhost:3000 to view the Grafana dashboard.
Log in using the default Grafana credentials:
- Email or username: admin
- Password: admin
Skip the prompt to create a new password to continue to the dashboard.
Examine graph metrics
From the Grafana Home, open the settings menu and select Data Sources.
Within the Configuration menu, boundary should be listed under Data sources. If there are no data sources, check that no changes were made to the compose/datasource.yml file, where this config is sourced from.
Click on the boundary data source to view its settings.
Open the Dashboards page.
Click Import for the Prometheus 2.0 Stats and Grafana metrics dashboards.
After importing, select Search dashboards from the left-hand navigation.
Open the Prometheus 2.0 Stats dashboard.
This standard Prometheus dashboard shows generic metrics, such as a Memory Profile. Explore these metrics in more detail by clicking on any panel and selecting View.
A simple dashboard compiling the controller and worker metrics has been provided for this tutorial in the learn-boundary-prometheus-metrics-dev/compose/Boundary-Overview-Dashboard.json file.
To load this dashboard, open the Create menu from the sidebar and select Import.
Next, click Upload JSON file and select the learn-boundary-prometheus-metrics-dev/compose/Boundary-Overview-Dashboard.json file.
Click on Select a Prometheus data source and select boundary. When finished, click Import.
You will be redirected to the imported Boundary Overview Dashboard. The relevant Boundary metrics have been organized into simple panels that overlay common renderings of the same metric value, such as the panel that combines:
- boundary_controller_api_http_request_duration_seconds
- boundary_controller_api_http_request_duration_seconds_bucket
- boundary_controller_api_http_request_duration_seconds_count
- boundary_controller_api_http_request_duration_seconds_sum
Note
There are many ways to organize these metrics values. The provided sample dashboard is not intended to recommend any best practices on metrics visualization or monitoring in general.
Return to a terminal session and proceed to make various queries and requests to Boundary. You will need the user1 auth_method_id from deploying the lab environment to authenticate. The user1 password is password.
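For example, assuming the controller API is forwarded to localhost:9200 by the Compose file (check compose/docker-compose.yml for the actual port):

```shell
# Point the CLI at the controller, then authenticate as user1 (password: password).
export BOUNDARY_ADDR=http://localhost:9200
boundary authenticate password \
  -auth-method-id AUTH_METHOD_ID \
  -login-name user1
```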
Next, examine the Boundary deployment by querying for various resources, such as listing targets and reading target details.
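For example, using the CLI (resource IDs will vary in your deployment):

```shell
# Discover scopes, then list and read targets within the project scope.
boundary scopes list
boundary targets list -scope-id PROJECT_SCOPE_ID
boundary targets read -id TARGET_ID
```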
Additionally, you can log into the postgres target to establish an active session. The postgres user's password is password.
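A sketch of this using the built-in postgres connect helper (the target ID comes from the targets list above):

```shell
# Establish a proxied session to the postgres target as the postgres user.
boundary connect postgres -target-id TARGET_ID -username postgres
```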
Note that the postgres target sessions are automatically cancelled after 5 minutes, as defined in terraform/main.tf. To exit the postgres session, enter \q into the postgres=# prompt.
After interacting with Boundary, return to the Grafana dashboard. You will notice various metrics being reported and graphed over time as more data becomes available.
Examine health check endpoint
Boundary 0.8 also introduces a health check endpoint for the controller.
Like metrics, the health endpoint is enabled when a listener with the "ops" purpose is defined, by default on port 9203. This configuration enables the /health endpoint where the controller's overall status can be monitored.
Health checks are critical for load-balanced Boundary deployments, and situations where a shutdown grace period is needed.
The new controller health service introduces a single read-only endpoint:
| Status | Description |
| --- | --- |
| 200 | GET /health returns HTTP status 200 OK if the controller's API gRPC server is up |
| 5xx | GET /health returns HTTP status 5XX or a request timeout if unhealthy |
| 503 | GET /health returns HTTP status 503 Service Unavailable if the controller is shutting down |
All responses return empty bodies. GET /health does not support any input.
Querying this endpoint in a terminal session using curl, Invoke-WebRequest, or wget returns a 200 response when the controller is healthy.
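For example, with curl (this deployment forwards the controller's ops port to localhost:9203):

```shell
# Check controller health; a healthy controller returns HTTP 200 with an empty body.
curl -i http://localhost:9203/health
```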
Cleanup and teardown
The Boundary cluster containers and network resources can be cleaned up using the provided deploy script.
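As noted earlier, run the cleanup subcommand:

```shell
# Tear down the Docker Compose environment and associated resources.
./deploy cleanup
```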
Check your work with a quick docker ps and ensure there are no more containers with the boundary_ prefix leftover. If unexpected containers still exist, execute docker rm -f CONTAINER_NAME against each to remove them.