Application Configuration
This page describes how to configure AMT application clusters and Apache Pekko Management settings. It covers the cluster modes available for hosting AMT applications (Single Node, Multi Node, Bootstrap, and Remote clusters) as well as configuration options for health checks and monitoring.
Application Clusters
There are various ways to host AMT applications and separate workloads: single-node and multi-node (including Bootstrap) configurations, which are explained in this section. These nodes are configured in the amt-config.yaml file.
Single Node Cluster
The simplest way to host AMT is to use a single node cluster.
Single Node cluster example:
cluster:
  type: PEKKO
  mode: SINGLE_NODE
  nodes:
    single-node-amt:
      settings:
        host: 0.0.0.0
        gRPCPort: 8080
        restPort: 9000
        httpOnly: false
        jobQueues: Default
Multi Node Cluster
Using multiple nodes allows for separating workloads.
Multi Node cluster example:
cluster:
  type: PEKKO
  mode: CLUSTER_NODE
  nodes:
    node-1:
      settings:
        host: 0.0.0.0
        port: 25251
        seedNodes: localhost:25251,localhost:25252
        gRPCPort: 8081
        restPort: 9000
        httpOnly: false
        roles: job,file,program
        jobQueues: JobQueue1
    node-2:
      settings:
        host: 0.0.0.0
        port: 25252
        seedNodes: localhost:25251,localhost:25252
        gRPCPort: 8082
        restPort: 9001
        httpOnly: false
        roles: job,transaction,print
        jobQueues: JobQueue2
Bootstrap
The Bootstrap mode enables automatic cluster formation. This mode is designed for dynamic cloud environments like Kubernetes, where nodes can be created and destroyed dynamically. Instead of manually specifying seed nodes, Bootstrap mode uses service discovery to automatically find and join cluster members.
Bootstrap works as follows when a node starts in this mode:
- Service Discovery: The node uses the configured discovery method (such as kubernetes-api) to find other nodes
- Contact Point Discovery: Identifies a minimum number of contact points required to form a cluster
- Cluster Formation: Once enough contact points are found, the nodes form a cluster automatically
- Management API: Apache Pekko Management is started before cluster bootstrap to enable health checks and monitoring
When to Use Bootstrap Mode
Bootstrap mode is designed for Kubernetes deployments, where it automatically discovers pods using the Kubernetes API. This is the primary and recommended use case for Bootstrap mode.
Bootstrap mode can also be used with other environments that support automatic cluster discovery, including:
- Other container orchestration platforms
- Cloud-native environments with dynamic scaling requirements
- Any infrastructure where nodes can join and leave the cluster without manual reconfiguration
Configuration
Bootstrap cluster example:
cluster:
  type: PEKKO
  mode: BOOTSTRAP
  nodes:
    bootstrap-amt:
      settings:
        host: 0.0.0.0
        gRPCPort: 8080
        restPort: 9000
        httpOnly: false
        jobQueues: Default
The Bootstrap configuration in the application.conf file controls the discovery behavior:
pekko {
  management {
    cluster.bootstrap {
      contact-point-discovery {
        # Service name for Kubernetes API discovery
        service-name = "amtruntime"
        # Discovery method (kubernetes-api, config, etc.)
        discovery-method = kubernetes-api
        # Minimum number of contact points needed
        required-contact-point-nr = 2
        required-contact-point-nr = ${?REQUIRED_CONTACT_POINT_NR}
      }
    }
  }
}
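The repeated required-contact-point-nr line uses HOCON's optional-substitution syntax: when the REQUIRED_CONTACT_POINT_NR environment variable is set, it overrides the preceding value; otherwise the default of 2 is kept. As a minimal sketch, the same fallback logic looks like this in Python (the variable name matches the configuration above; the rest is illustrative):

```python
import os

# Mimics HOCON's optional override `${?REQUIRED_CONTACT_POINT_NR}`:
# the environment variable wins when it is set; otherwise the value
# written in application.conf (2) is kept.
required_contact_points = int(os.environ.get("REQUIRED_CONTACT_POINT_NR", "2"))
print(required_contact_points)
```

This is why the same key appears twice in the HOCON file: the second, optional assignment only takes effect when the environment variable exists.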
For more information about Apache Pekko Cluster Bootstrap configuration and troubleshooting, see the Apache Pekko Management Bootstrap documentation.
Comparison of AMT Cluster Modes
| Feature | BOOTSTRAP | CLUSTER_NODE | SINGLE_NODE |
| --- | --- | --- | --- |
| Seed Nodes | Automatic discovery | Manual configuration required | Not applicable |
| Node Configuration | Minimal (host, ports, queues) | Detailed (port, seed nodes, roles) | Minimal |
| Use Case | Cloud/Kubernetes deployments | Fixed infrastructure | Development/testing |
| Scaling | Dynamic | Manual | Not applicable |
| Pekko Management | Started automatically | Optional | Optional |
Remote Clusters
A remote cluster allows transactions to be started on a remote application. Remote cluster nodes require only the host and gRPCPort settings.
Remote cluster example:
remoteCluster:
  nodes:
    remote-node:
      settings:
        host: 0.0.0.0
        gRPCPort: 9090
Advanced Configuration
See the AMT Application Config File page for all available options related to Clusters.
Apache Pekko Management
The settings described in this section are configured in the application.conf file.
See the Apache Pekko Management documentation for additional configuration options and more information.
Health Check Endpoint
The health check configuration is not present in the application.conf file by default. To enable it, add the enabled = true line under the pekko.management section:
pekko {
  management {
    enabled = true
    http {
      hostname = "192.168.2.24"
      port = 8558
    }
    health-checks {
      readiness-checks {
        example-ready = "com.avanade.ltcoe.amt.common.pekko.cluster.AMTHealthCheck"
      }
    }
  }
}
Modify the hostname and/or port as needed. When the cluster is up and running, the readiness endpoint returns an "OK" message; if the node is not reachable, a "Failed to connect" message appears.
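The check can also be scripted. The sketch below assumes the management.http hostname and port shown above and Pekko Management's default readiness route, /ready; adjust both to match your deployment:

```python
import urllib.request

def check_ready(url: str) -> str:
    """Return the readiness endpoint's response body, or a
    'Failed to connect' message when the node is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.read().decode().strip()
    except OSError as exc:  # covers URLError, connection refused, timeouts
        return f"Failed to connect: {exc}"

# Hostname and port taken from the management.http settings above.
print(check_ready("http://192.168.2.24:8558/ready"))
```

A healthy node answers with "OK"; a node that is down produces the "Failed to connect" message described above.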
Sample File
pekko {
  loglevel = info

  coordinated-shutdown {
    exit-jvm = on
  }

  actor {
    serializers {
      proto = "org.apache.pekko.remote.serialization.ProtobufSerializer"
      jackson-cbor = "com.avanade.ltcoe.amt.common.pekko.cluster.AMTJacksonCBORSerializer"
    }
    serialization-bindings {
      "com.avanade.ltcoe.amt.common.pekko.cluster.Serializable" = jackson-cbor
    }

    # The default dispatcher uses the MDCAwareForkJoinExecutor as the execution service.
    # This ForkJoinPool extension automatically resets the MDC values after a task has
    # completed on one of the thread-pool threads.
    default-dispatcher {
      executor = "com.avanade.ltcoe.amt.platform.pekko.cluster.execution.forkjoin.MDCAwareForkJoinConfigurator"
    }

    # This dispatcher uses the MDCAwareThreadPoolExecutor as the execution service.
    # This thread-pool executor automatically resets the MDC values after a task has
    # completed on one of the thread-pool threads.
    amt-job-dispatcher {
      type = Dispatcher
      executor = "com.avanade.ltcoe.amt.platform.pekko.cluster.execution.threadpool.MDCAwareThreadPoolConfigurator"
      thread-pool-executor {
        # Minimum number of threads to cap factor-based core number to
        core-pool-size-min = 2
        # Number of core threads: ceil(available processors * factor)
        core-pool-size-factor = 2.0
        # Maximum number of threads to cap factor-based number to
        core-pool-size-max = 10
      }
      # Throughput defines the maximum number of messages to be
      # processed per actor before the thread jumps to the next actor.
      # Set to 1 for as fair as possible.
      throughput = 1
    }

    # This dispatcher uses the MDCAwareThreadPoolExecutor as the execution service.
    # This thread-pool executor automatically resets the MDC values after a task has
    # completed on one of the thread-pool threads.
    amt-transaction-dispatcher {
      type = Dispatcher
      executor = "com.avanade.ltcoe.amt.platform.pekko.cluster.execution.threadpool.MDCAwareThreadPoolConfigurator"
      thread-pool-executor {
        # Minimum number of threads to cap factor-based core number to
        core-pool-size-min = 2
        # Number of core threads: ceil(available processors * factor)
        core-pool-size-factor = 2.0
        # Maximum number of threads to cap factor-based number to
        core-pool-size-max = 10
      }
      # Throughput defines the maximum number of messages to be
      # processed per actor before the thread jumps to the next actor.
      # Set to 1 for as fair as possible.
      throughput = 1
    }

    amt-file-dispatcher {
      type = Dispatcher
      executor = "com.avanade.ltcoe.amt.platform.pekko.cluster.execution.threadpool.MDCAwareThreadPoolConfigurator"
      thread-pool-executor {
        # Minimum number of threads to cap factor-based core number to
        core-pool-size-min = 2
        # Number of core threads: ceil(available processors * factor)
        core-pool-size-factor = 2.0
        # Maximum number of threads to cap factor-based number to
        core-pool-size-max = 10
      }
      # Throughput defines the maximum number of messages to be
      # processed per actor before the thread jumps to the next actor.
      # Set to 1 for as fair as possible.
      throughput = 1
    }

    amt-transaction-logger-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor {
        # Minimum number of threads to cap factor-based core number to
        core-pool-size-min = 2
        # Number of core threads: ceil(available processors * factor)
        core-pool-size-factor = 2.0
        # Maximum number of threads to cap factor-based number to
        core-pool-size-max = 10
      }
      # Throughput defines the maximum number of messages to be
      # processed per actor before the thread jumps to the next actor.
      # Set to 1 for as fair as possible.
      throughput = 1
    }

    # Dispatcher intended for any db related actors
    amt-database-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 2
        core-pool-size-factor = 2.0
        core-pool-size-max = 10
      }
      throughput = 1
    }

    # Dispatcher intended for any general management operations that do not deserve their
    # own dispatcher, but are also not intended to run on the default one
    amt-management-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 2
        core-pool-size-factor = 2.0
        core-pool-size-max = 10
      }
      throughput = 1
    }
  }

  serialization {
    jackson {
      jackson-cbor {
        serialization-features {
          FAIL_ON_EMPTY_BEANS = off
        }
      }
    }
  }

  http {
    cors {
      allowed-origins = ["https://*", "http://localhost:4200", "http://localhost:4300", "http://localhost:8000", "http://controlcenter-frontend:4200", "http://smoketest-frontend:4300"]
    }
    server {
      preview {
        enable-http2 = on
      }
    }
  }

  cluster {
    shutdown-after-unsuccessful-join-seed-nodes = 120s
    downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
    roles = []
    sharding {
      # No actor will automatically be passivated
      passivate-idle-entity-after = off
    }
  }

  management {
    cluster.bootstrap {
      contact-point-discovery {
        # For the kubernetes API this value is substituted into the %s in pod-label-selector
        service-name = "amtruntime"
        # Pick the discovery method you'd like to use:
        discovery-method = kubernetes-api
        required-contact-point-nr = 2
        required-contact-point-nr = ${?REQUIRED_CONTACT_POINT_NR}
      }
    }
    health-checks {
      readiness-checks {
        example-ready = "AMTHealthCheck"
      }
    }
  }
}
pekko.grpc.client {
  "*" {
    host = 127.0.0.1
    port = 8080
    trusted = /certs/ca.pem
  }
}
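The core-pool-size-* settings in the dispatcher blocks above size each thread pool from the number of available processors: ceil(available processors * factor), clamped between core-pool-size-min and core-pool-size-max. A small sketch of that sizing rule (the function and its names are illustrative, not part of AMT):

```python
import math

def pool_size(processors: int, factor: float, size_min: int, size_max: int) -> int:
    """Thread-pool sizing rule described by the thread-pool-executor comments:
    ceil(available processors * factor), capped to [size_min, size_max]."""
    return min(max(math.ceil(processors * factor), size_min), size_max)

# With the sample settings (factor 2.0, min 2, max 10):
print(pool_size(8, 2.0, 2, 10))  # an 8-core host is capped at 10 threads
print(pool_size(2, 2.0, 2, 10))  # a 2-core host gets 4 threads
print(pool_size(1, 2.0, 2, 10))  # a 1-core host still gets the minimum of 2
```

In other words, the factor scales the pool with the machine, while min and max keep it within predictable bounds regardless of host size.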
