Recipe: Observe Your First Pipeline

You are here because you want to detect, resolve, and prevent data issues in a pipeline. But how do you get started?

That is the purpose of the step-by-step recipe below, which will guide your data observability initiative to success:

  1. Identify a couple of issues you have faced in this project (or in another project you want to protect).
  2. Determine the impacts of those issues (time, costs, money, energy, mood, trust, etc.).
  3. Team up with the technical referents for the applications/steps in your pipeline.
  4. Determine the technologies used by the applications in your pipeline.
  5. Based on the issues identified in step 1, prioritize which application to enrich with data observability first, then second, and so forth.
  6. For each application, you or its technical referent navigates to Agents: getting started and follows the tutorial or documentation for that technology (a conceptual sketch of what this enrichment looks like follows this list). You can also connect with the user community on Slack.
  7. After deployment, navigate to the project page or to the data sources to confirm that your applications are generating observations.
  8. Use the visualization tool to select observations and create rules that detect and anticipate the issues identified in step 1.
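
To make step 6 more concrete, here is a minimal sketch of what enriching a Python application with data observability can look like: the application performs its normal work while observations (schema, row count) are recorded around each step. The names `observe_dataframe` and `report` are hypothetical placeholders used for illustration only, not the actual Kensu agent API; follow Agents: getting started for the real integration for your technology.

```python
import pandas as pd

# Hypothetical helper, for illustration only: captures schema and row-count
# observations about a DataFrame at a given step of the pipeline.
def observe_dataframe(name: str, df: pd.DataFrame) -> dict:
    return {
        "data_source": name,
        "schema": {col: str(dtype) for col, dtype in df.dtypes.items()},
        "row_count": len(df),
    }

def report(observations: list) -> None:
    # A real agent would send these observations to the observability
    # backend; here we simply print them.
    for obs in observations:
        print(obs)

# A normal pipeline step: read input, transform, write output.
orders = pd.DataFrame({"customer_id": [1, 1, 2], "amount": [10.0, 5.0, 7.5]})
totals = orders.groupby("customer_id", as_index=False)["amount"].sum()

# Observations are collected around the step, without changing its logic.
report([
    observe_dataframe("orders", orders),
    observe_dataframe("customer_totals", totals),
])
```

In practice, the agents are designed to capture such observations from the libraries your application already uses (pandas, PySpark, and so on), so manual calls like these are not normally needed.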


