Kensu Documentation


Installation and configuration

The Marketing email campaign use case requires Docker.



If Docker is not installed, please follow these steps.

(The minimum requirements for Docker are 4 CPUs, 4 GB of memory, 1 GB of swap, and 40 GB of disk image size.)
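Before going further, you can verify that Docker is installed and that the daemon responds. A minimal sketch (the helper name and the `docker info` call are illustrative, not part of Kensu):

```python
import shutil
import subprocess

def docker_available() -> bool:
    """Return True if a `docker` binary is on PATH and the daemon responds."""
    if shutil.which("docker") is None:
        return False
    try:
        # `docker info` fails if the daemon is not running.
        subprocess.run(["docker", "info"], check=True,
                       capture_output=True, timeout=30)
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return False

print("Docker ready:", docker_available())
```

If this prints `False`, install or start Docker using the steps above before continuing.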

1️⃣ To set up the environment, open your terminal and run this script:

Shell

This script fetches the repository from GitHub, downloads the needed Docker image, and launches the PostgreSQL database.

2️⃣ Edit the configuration of the data observability agent

The agent collects information inside the application and sends it to Kensu. It augments the program's libraries to gather metadata, define the lineage, and compute metrics, then sends them to Kensu through the API. To configure the agent, a configuration file is needed.

Edit the /conf.default.ini file with the following settings:

  • kensu_ingestion_url: The URL of the Kensu platform's ingestion endpoint. It is already pre-configured if you are using the Community Edition.
  • kensu_ingestion_token: This token authenticates the HTTPS communication with the kensu_ingestion_url and attaches your user identity to the traces, logs, and metrics you send, for easier governance and management.
  • kensu_api_token: The API token (Personal Access Token) that lets you create rules programmatically.
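
Putting the three settings together, the edited file might look like this (the `[kensu]` section name and the placeholder values are illustrative; keep whatever structure your conf.default.ini already has):

```ini
[kensu]
; Pre-configured in the Community Edition
kensu_ingestion_url = https://<your-kensu-host>/ingestion
; Attaches your user identity to the traces, logs, and metrics you send
kensu_ingestion_token = <your-ingestion-token>
; Personal Access Token used for API calls such as creating rules
kensu_api_token = <your-api-token>
```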

If you don't have your kensu_ingestion_token and kensu_api_token yet, follow the instructions in Getting Kensu Credentials.
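Once the file is filled in, a quick way to catch a missing value before launching anything is to parse it with Python's configparser. A minimal sketch, assuming the keys live in a `[kensu]` section (adjust the section name and path to match your actual file):

```python
import configparser

REQUIRED_KEYS = ("kensu_ingestion_url", "kensu_ingestion_token", "kensu_api_token")

def missing_keys(path: str, section: str = "kensu") -> list:
    """Return the required keys that are absent or left empty in the file."""
    parser = configparser.ConfigParser()
    parser.read(path)  # silently yields an empty parser if the file is missing
    if section not in parser:
        return list(REQUIRED_KEYS)
    return [key for key in REQUIRED_KEYS
            if not parser[section].get(key, "").strip()]
```

For example, `missing_keys("/conf.default.ini")` returns an empty list when all three settings are present and non-empty.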

Next step: Observe your pipeline.

Updated 21 Nov 2022