Astarte Flow is a data processing framework that lets you build reusable pipelines to process, classify and analyze your data. Flow integrates seamlessly with Astarte and Kubernetes, letting you focus on your algorithms while it handles data retrieval, routing and orchestration.

One of Astarte Flow's key features is the ability to provide your own container as a data processing block, without having to worry about the low-level details needed to deploy it inside Kubernetes and feed it data. With the Flow Python SDK, you just use the provided callbacks and functions to interact with incoming and outgoing messages in a standard format; Flow takes care of the rest.
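As a rough sketch of the callback pattern described above (the callback name `on_message`, the dictionary-based message shape, and the `run_block` driver are all illustrative assumptions, not the Flow Python SDK's actual API), a Container Block handler could look like this:

```python
# Illustrative sketch only: the real Flow Python SDK defines its own
# callback names and message format; `on_message` and the dict fields
# below are hypothetical stand-ins.

def on_message(message):
    """Hypothetical callback invoked once per incoming message.

    Returns the outgoing message (here: the payload uppercased),
    or None to drop the message from the stream.
    """
    payload = message.get("data")
    if payload is None:
        return None
    return {**message, "data": payload.upper()}

def run_block(handler, incoming):
    """Stand-in for the SDK runtime: feeds incoming messages to the
    handler and collects the non-dropped results."""
    return [out for out in (handler(m) for m in incoming) if out is not None]

if __name__ == "__main__":
    outgoing = run_block(on_message, [
        {"key": "sensor-1", "data": "hello"},
        {"key": "sensor-1", "data": None},
    ])
    print(outgoing)  # [{'key': 'sensor-1', 'data': 'HELLO'}]
```

The point is the division of labor: your code only maps an incoming message to an outgoing one, while the runtime (Flow, in the real system) owns delivery and collection.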

These are some of the main concepts used in Astarte Flow and covered in this guide:

  • Messages are Flow's representation of a piece of data that is being processed.
  • Blocks are the fundamental processing unit of Astarte Flow. Container Blocks are a special kind of block which allows you to process your data with a Docker container.
  • Pipelines are collections of blocks providing routing logic and representing a specific computation.
  • Flows are specific instances of a pipeline, created by providing concrete values for the pipeline's parameters.
  • Streams are sequences of messages sharing the same key and processed by the same Flow.
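To make the Message/Stream relationship concrete, here is a minimal sketch; the `Message` shape (key, data, timestamp) is an assumption for illustration, not Flow's actual message format. It groups a sequence of messages into streams by key:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical message shape for illustration; Flow's real message
# format defines its own fields.
@dataclass
class Message:
    key: str        # messages sharing this key belong to the same stream
    data: object    # the payload being processed
    timestamp: int  # when the data point was produced

def streams_by_key(messages):
    """Group messages into streams: one ordered sequence per key."""
    streams = defaultdict(list)
    for m in messages:
        streams[m.key].append(m)
    return dict(streams)

msgs = [
    Message("device-a/temperature", 21.5, 1),
    Message("device-b/temperature", 19.0, 2),
    Message("device-a/temperature", 21.7, 3),
]
print(sorted(streams_by_key(msgs)))
# ['device-a/temperature', 'device-b/temperature']
```

Here the three messages form two streams: messages with the same key stay ordered within their stream and are handled by the same Flow.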

The two main ways to interact with Flow are through the pipeline editor and through the REST API.
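For the REST API route, creating a Flow amounts to naming a pipeline and supplying concrete values for its parameters. The request body below is a hedged sketch: the field names (`name`, `pipeline`, `config`) and the parameter keys are illustrative assumptions, not the documented API contract, so check the API reference for the real schema.

```python
import json

# Hypothetical request body for instantiating a Flow from a pipeline
# via the REST API; field names are illustrative assumptions.
flow_request = {
    "name": "my-flow",           # the new Flow instance
    "pipeline": "my-pipeline",   # the pipeline to instantiate
    # Concrete values filling in the pipeline's parametric values:
    "config": {"threshold": 30},
}

body = json.dumps(flow_request)
print(body)
```

The resulting JSON string would then be POSTed to the Flow API endpoint for flows, with the usual authorization headers.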