Eventador ESP

Eventador Enterprise Streaming Platform (ESP) is a simple, secure, and fully managed Apache Flink-based streaming platform. ESP lets you write Flink jobs that easily and seamlessly process streaming data to and from any source or sink, including Kafka.

If you don't already have an enterprise account, follow these steps to get set up:

  • ESP and Elements are only available in the enterprise environment. Contact sales for a trial account and/or demo. Sales will provide a dedicated URL, control plane, and environment for your use.
  • Log in at the URL provided by sales/support to create ESP deployments.

A deployment comprises all the components needed to run Apache Flink:

  1. Dedicated ZooKeeper cluster
  2. Dedicated Apache Kafka cluster
  3. Dedicated Apache Flink cluster (job managers and task managers)
  4. Network access and routing rules
  5. Monitoring and stats-gathering agents
  6. Access control manager
  7. A unique VPC
  8. Brokers spread across AWS availability zones
  9. VIPs for Kafka and Flink endpoints

To create a Flink deployment:

  • Click the Deployments tab from the Eventador Console.
  • Click Create Deployment.
  • Select a plan that suits your needs - be sure to select a Flink plan.
  • Select an AWS region.
  • Select the version of Apache Flink you would like to run.
  • Select the version of Apache Kafka you would like to run.
  • Name the deployment.
  • Click Create.

Flink programs are written in Java, Scala, or even Kotlin. They utilize the Flink API to process streaming data. For more information on how to write a Flink program, see the documentation.
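
For reference, a minimal Flink program looks something like the sketch below. This is an illustrative skeleton, not Eventador-specific code; the class and job names are arbitrary.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class MinimalJob {
        public static void main(String[] args) throws Exception {
            // Every Flink program starts from an execution environment.
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // A tiny pipeline: source -> transformation -> sink.
            env.fromElements("a", "b", "c")
               .map(s -> s.toUpperCase())
               .print();

            // Nothing runs until execute() is called.
            env.execute("minimal-job");
        }
    }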

On Eventador, you can get started with a pre-built template, or, if your program is checked into GitHub, you can link the repository with Eventador.

Eventador ESP is integrated with GitHub: jobs are submitted by pulling them directly from a branch. Flink exposes a set of powerful APIs for writing streaming jobs, and Eventador also has a library of templates to get you started.

  • Ensure you have associated your Eventador account with a GitHub account.
  • Go to the Projects list.
  • Click on the Create Project button - Use Eventador Template.
  • Enter a logical name for this project.
  • Enter a description.
  • Select Java - Read from Kafka write to Kafka.
  • Select a GitHub account to clone this new repo to.
  • Click Create. You should get an email notification when the cloning is done.

You now have a new project in Eventador associated with a GitHub account. The project contains a template job whose code can be run as is or altered to fit your use case. See the documentation for writing Flink programs.
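
The exact template contents vary by Flink version, but the core of a read-from-Kafka, write-to-Kafka job typically looks like the sketch below. This is a hedged approximation, not the literal template: the connector class names, the bootstrap address, and the read_topic/write_topic parameter names are assumptions (the parameters correspond to the placeholders you set in the steps that follow).

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.api.java.utils.ParameterTool;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

    public class KafkaToKafka {
        public static void main(String[] args) throws Exception {
            // Parse command line parameters, e.g. --read_topic input --write_topic output
            ParameterTool params = ParameterTool.fromArgs(args);

            Properties props = new Properties();
            // Placeholder address; an Eventador deployment supplies its own Kafka endpoint.
            props.setProperty("bootstrap.servers", "kafka:9092");
            props.setProperty("group.id", "example-group");

            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Read strings from one topic and write them unchanged to another.
            env.addSource(new FlinkKafkaConsumer<>(
                    params.getRequired("read_topic"), new SimpleStringSchema(), props))
               .addSink(new FlinkKafkaProducer<>(
                    params.getRequired("write_topic"), new SimpleStringSchema(), props));

            env.execute("kafka-to-kafka");
        }
    }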

  • Go to the Projects list.
  • Select the job you created by clicking on the name - the job details will be displayed.
  • Select Build and Deploy Project.
  • Select Target Deployment and pick the PrototypeFlinkCluster cluster.
  • Change <read_topic> to input in the Command Line Parameters box.
  • Change <write_topic> to output in the Command Line Parameters box.
  • Ensure branch/master is selected as the Source GitHub Branch.
  • Click Run.

The job is checked out from the specified branch, built with Maven, and shipped to the Flink cluster. Click the Build and Deploy Logs button to view the build log, and check the Build Status column for the status of the build.

  • Once the Build Status column indicates Deployed, click on the Deployment Name.
  • You can see the DAG of the job, see any exceptions, and see overall health of the job.
  • For any output from the job select the TaskManagers tab, then click the Logs button.
  • From the deployments list, click on Apache Flink under your deployment; this takes you to the jobs page for that deployment.
  • This page shows cluster-wide stats, including the number of slots available to run jobs on the cluster.
  • Select the job you wish to monitor; any exceptions for the job are shown below, and the DAG for the job is shown above, including various stats about the job itself.
  • Each stage of the job on the DAG is represented as a colored dot:
    Color   Status
    Green   running
    Gray    stopped
    Red     exception occurred
  • For more detailed information, check the job output by clicking the TaskManagers tab, then click the Logs button.

JMX stats and reporters

By default, Eventador exposes the stock JMX metrics and provides a dashboard of them. Eventador also supports the Prometheus and Datadog reporters; to get them configured, contact Support.
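
Beyond the stock metrics, a job can register its own metrics, which surface through JMX (and any configured reporter) alongside Flink's built-in ones. Here is a minimal sketch; the function and metric names are arbitrary examples.

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.metrics.Counter;

    public class CountingMap extends RichMapFunction<String, String> {
        // transient: metrics are created per task at runtime, not serialized.
        private transient Counter counter;

        @Override
        public void open(Configuration parameters) {
            // Register a custom counter with this operator's metric group.
            counter = getRuntimeContext().getMetricGroup().counter("eventsProcessed");
        }

        @Override
        public String map(String value) {
            counter.inc();
            return value;
        }
    }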

Create a savepoint

  • Check out the Flink docs for information about savepoints.
  • From the deployments list, click on the Apache Flink link under your deployment; this takes you to the jobs page for that deployment.
  • Click the Savepoint button for the job you would like to create a savepoint for.
  • Name the savepoint in the Description box, DO NOT select the Cancel checkbox, and click Create Savepoint.

Cancel with savepoint

  • Check out the Flink docs for information about savepoints.
  • From the deployments list, click on the Apache Flink link under your deployment; this takes you to the jobs page for that deployment.
  • Click the Savepoint button for the job you would like to cancel.
  • Name the savepoint in the Description box, select the Cancel checkbox, and click Create Savepoint.

Resume from savepoint

  • Check out the Flink docs for information about savepoints.
  • Follow the same process as running a Flink job (Build and Deploy Project above), but start from a savepoint.
  • Select the savepoint name from the Restore from savepoint select box.
  • Optionally select the Allow Non-Restored State checkbox.
  • Click Run.
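
When restoring, Flink maps saved state back to operators by their IDs, so the Flink docs recommend pinning stable IDs with uid() when writing the job; without them, topology changes can make a savepoint unrestorable except via Allow Non-Restored State. A minimal sketch (names are arbitrary):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class StableIds {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            env.fromElements("a", "b", "c")
               // A stable, explicit ID lets Flink match this operator's
               // state on restore even if the surrounding topology changes.
               .map(s -> s.toUpperCase()).uid("uppercase-map")
               .print();

            env.execute("stable-ids");
        }
    }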

Eventador is a fully managed service, so you generally don't need to access the stock Flink user interface. In the event you do, perhaps to monitor a REST route, you can get the connect string as follows:

  • From the deployments list, click on Apache Flink under your deployment.
  • Click on the JobManagers tab.
  • You can access the HTTP service on the job manager at the Host location, port 8081. For more information on the Flink REST API, see the documentation.
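
For example, listing the running jobs through the REST API could look like the sketch below; the host value is a placeholder for the Host shown on the JobManagers tab, and /jobs is the standard Flink REST endpoint that returns job IDs and statuses as JSON.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class FlinkRestExample {
        public static void main(String[] args) throws Exception {
            // Replace with the Host value from the JobManagers tab.
            String host = "jobmanager.example.com";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://" + host + ":8081/jobs"))
                    .GET()
                    .build();

            // Prints a JSON document listing job IDs and their statuses.
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }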

Sources and Sinks

Flink provides a number of pre-defined connectors for reading and writing streaming data, known as sources and sinks. An Eventador deployment includes Apache Kafka along with Flink, but any valid data source is a potential source or sink. Because Eventador is VPC-peered to your application VPC, accessing sources and sinks in that VPC is seamless. External systems and other SaaS providers are also configurable. Should you need additional peering to access other sources, contact Support.

Examples are:

    Resource                    Type          Example
    Eventador Apache Kafka      source/sink   example
    Confluent Cloud Kafka       source/sink
    AWS Managed Kafka Service   source/sink
    Apache Cassandra            sink
    Amazon Kinesis Streams      source/sink
    Elasticsearch               sink
    Hadoop FileSystem           sink
    RabbitMQ                    source/sink
    Apache NiFi                 source/sink
    Twitter Streaming API       source        example
    JDBC                        sink          example
    CSV                         sink

In addition, there are a number of community-contributed sources/sinks, including those in Apache Bahir.
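
To illustrate swapping in one of these sinks, here is a hedged sketch of writing a stream of tuples to Cassandra with Flink's Cassandra connector; the keyspace, table, and host are placeholders.

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.cassandra.CassandraSink;

    public class CassandraSinkExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // A stand-in stream of (word, count) pairs.
            DataStream<Tuple2<String, Long>> counts =
                    env.fromElements(Tuple2.of("flink", 1L), Tuple2.of("kafka", 2L));

            // Each tuple field binds to a ? in the insert statement, in order.
            CassandraSink.addSink(counts)
                    .setQuery("INSERT INTO example_ks.word_count (word, count) VALUES (?, ?);")
                    .setHost("127.0.0.1")
                    .build();

            env.execute("cassandra-sink-example");
        }
    }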