Application Configuration
Application Clusters
There are various ways to host AMT applications and separate workloads: single-node and multi-node (including Bootstrap) configurations, which are explained in this section. These nodes are configured in the amt-config.yaml file.
Single Node Cluster
The simplest way to host AMT is to use a single node cluster.
Single Node cluster example:
cluster:
  type: PEKKO
  mode: SINGLE_NODE
  nodes:
    single-node-amt:
      settings:
        host: 0.0.0.0
        gRPCPort: 8080
        restPort: 9000
        httpOnly: false
        jobQueues: Default
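With these settings, the node serves gRPC requests on port 8080 and REST requests on port 9000, and all jobs run on the single Default job queue.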
Multi Node Cluster
Using multiple nodes allows workloads to be separated by assigning each node its own roles and job queues.
Multi Node cluster example:
cluster:
  type: PEKKO
  mode: CLUSTER_NODE
  nodes:
    node-1:
      settings:
        host: 0.0.0.0
        port: 25251
        seedNodes: localhost:25251,localhost:25252
        gRPCPort: 8081
        restPort: 9000
        httpOnly: false
        roles: job,file,program
        jobQueues: JobQueue1
    node-2:
      settings:
        host: 0.0.0.0
        port: 25252
        seedNodes: localhost:25251,localhost:25252
        gRPCPort: 8082
        restPort: 9001
        httpOnly: false
        roles: job,transaction,print
        jobQueues: JobQueue2
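In this example, node-1 handles job, file, and program workloads while node-2 handles job, transaction, and print workloads, each with its own job queue. Note that both nodes list the same seedNodes, the initial contact points used to form the cluster.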
Bootstrap
Bootstrap mode creates a cluster of multiple nodes by using an automatic cluster discovery system.
Bootstrap cluster example:
cluster:
  type: PEKKO
  mode: BOOTSTRAP
  nodes:
    bootstrap-amt:
      settings:
        host: 0.0.0.0
        gRPCPort: 8080
        restPort: 9000
        httpOnly: false
        jobQueues: Default
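The discovery mechanism itself is configured on the Pekko side rather than in amt-config.yaml. A minimal sketch of the relevant application.conf settings, assuming the kubernetes-api discovery method shown in the Sample File below:

pekko.management.cluster.bootstrap {
  contact-point-discovery {
    # Service (or pod label) name the discovery mechanism looks up
    service-name = "amtruntime"
    discovery-method = kubernetes-api
    # Minimum number of contact points required before forming the cluster
    required-contact-point-nr = 2
  }
}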
Remote Clusters
A remote cluster allows transactions to be started on a remote application. Remote cluster nodes require only the host and gRPCPort settings, where host is the address of the remote application.
Remote cluster example:
remoteCluster:
  nodes:
    remote-node:
      settings:
        host: 0.0.0.0
        gRPCPort: 9090
Advanced Configuration
See the AMT Application Config File page for all available options related to Clusters.
Apache Pekko Management
The settings described in this section are configured in the application.conf file.
See the Apache Pekko Management documentation for additional configuration options and more information.
Health Check Endpoint
The Health Check configuration is not included in the application.conf file by default. To enable the Health Check, update application.conf and add the enabled = true line under the pekko.management section:
pekko {
  management {
    enabled = true
    http {
      hostname = "192.168.2.24"
      port = 8558
    }
    health-checks {
      readiness-checks {
        example-ready = "com.avanade.ltcoe.amt.common.pekko.cluster.AMTHealthCheck"
      }
    }
  }
}
Modify the hostname and port as needed. To confirm that the cluster is up and running, look for the "OK" message returned by the health check. If the check is not successful, a "Failed to connect" message appears instead.
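Pekko Management serves readiness checks at the /ready path of the management endpoint, so with the configuration above the health check can be exercised by requesting:

http://192.168.2.24:8558/ready

A node whose readiness checks pass responds with "OK".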
Sample File
pekko {
  loglevel = info
  actor {
    serializers {
      proto = "org.apache.pekko.remote.serialization.ProtobufSerializer"
      jackson-cbor = "com.avanade.ltcoe.amt.common.pekko.cluster.AMTJacksonCBORSerializer"
    }
    serialization-bindings {
      "com.avanade.ltcoe.amt.common.pekko.cluster.Serializable" = jackson-cbor
    }
    # The default dispatcher uses the MDCAwareForkJoinExecutor as the execution service.
    # This ForkJoinPool extension automatically resets the MDC values after a task has completed
    # on one of the thread-pool threads.
    default-dispatcher {
      executor = "com.avanade.ltcoe.amt.platform.pekko.cluster.execution.forkjoin.MDCAwareForkJoinConfigurator"
    }
    # This dispatcher uses the MDCAwareThreadPoolExecutor as the execution service.
    # This thread-pool executor automatically resets the MDC values after a task has completed
    # on one of the thread-pool threads.
    amt-job-dispatcher {
      type = Dispatcher
      executor = "com.avanade.ltcoe.amt.platform.pekko.cluster.execution.threadpool.MDCAwareThreadPoolConfigurator"
      thread-pool-executor {
        # minimum number of threads to cap factor-based core number to
        core-pool-size-min = 2
        # No of core threads ... ceil(available processors * factor)
        core-pool-size-factor = 2.0
        # maximum number of threads to cap factor-based number to
        core-pool-size-max = 10
      }
      # Throughput defines the maximum number of messages to be
      # processed per actor before the thread jumps to the next actor.
      # Set to 1 for as fair as possible.
      throughput = 1
    }
    # This dispatcher uses the MDCAwareThreadPoolExecutor as the execution service.
    # This thread-pool executor automatically resets the MDC values after a task has completed
    # on one of the thread-pool threads.
    amt-transaction-dispatcher {
      type = Dispatcher
      executor = "com.avanade.ltcoe.amt.platform.pekko.cluster.execution.threadpool.MDCAwareThreadPoolConfigurator"
      thread-pool-executor {
        # minimum number of threads to cap factor-based core number to
        core-pool-size-min = 2
        # No of core threads ... ceil(available processors * factor)
        core-pool-size-factor = 2.0
        # maximum number of threads to cap factor-based number to
        core-pool-size-max = 10
      }
      # Throughput defines the maximum number of messages to be
      # processed per actor before the thread jumps to the next actor.
      # Set to 1 for as fair as possible.
      throughput = 1
    }
    amt-file-dispatcher {
      type = Dispatcher
      executor = "com.avanade.ltcoe.amt.platform.pekko.cluster.execution.threadpool.MDCAwareThreadPoolConfigurator"
      thread-pool-executor {
        # minimum number of threads to cap factor-based core number to
        core-pool-size-min = 2
        # No of core threads ... ceil(available processors * factor)
        core-pool-size-factor = 2.0
        # maximum number of threads to cap factor-based number to
        core-pool-size-max = 10
      }
      # Throughput defines the maximum number of messages to be
      # processed per actor before the thread jumps to the next actor.
      # Set to 1 for as fair as possible.
      throughput = 1
    }
    amt-transaction-logger-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor {
        # minimum number of threads to cap factor-based core number to
        core-pool-size-min = 2
        # No of core threads ... ceil(available processors * factor)
        core-pool-size-factor = 2.0
        # maximum number of threads to cap factor-based number to
        core-pool-size-max = 10
      }
      # Throughput defines the maximum number of messages to be
      # processed per actor before the thread jumps to the next actor.
      # Set to 1 for as fair as possible.
      throughput = 1
    }
    # dispatcher intended for any db related actors
    amt-database-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 2
        core-pool-size-factor = 2.0
        core-pool-size-max = 10
      }
      throughput = 1
    }
    # dispatcher intended for any general management operations, that do not deserve their own dispatcher,
    # but are also not intended to run on the default one
    amt-management-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 2
        core-pool-size-factor = 2.0
        core-pool-size-max = 10
      }
      throughput = 1
    }
  }
  coordinated-shutdown {
    exit-jvm = on
  }
  serialization {
    jackson {
      jackson-cbor {
        serialization-features {
          # Do not fail when serializing objects that expose no properties.
          FAIL_ON_EMPTY_BEANS = off
        }
      }
    }
  }
  http {
    cors {
      # Origins that are allowed to make cross-origin requests to the REST endpoints.
      allowed-origins = ["https://*", "http://localhost:4200", "http://localhost:4300", "http://localhost:8000", "http://controlcenter-frontend:4200", "http://smoketest-frontend:4300"]
    }
    server {
      preview {
        # HTTP/2 support is required for gRPC.
        enable-http2 = on
      }
    }
  }
  cluster {
    # Shut the node down if it has not joined the seed nodes within this duration.
    shutdown-after-unsuccessful-join-seed-nodes = 120s
    # Down unreachable nodes with the Split Brain Resolver.
    downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
    roles = []
    sharding {
      # No actor will automatically be passivated
      passivate-idle-entity-after = off
    }
  }
  management {
    cluster.bootstrap {
      contact-point-discovery {
        # For the kubernetes API this value is substituted into the %s in pod-label-selector
        service-name = "amtruntime"
        # pick the discovery method you'd like to use:
        discovery-method = kubernetes-api
        required-contact-point-nr = 2
        required-contact-point-nr = ${?REQUIRED_CONTACT_POINT_NR}
      }
    }
    health-checks {
      readiness-checks {
        example-ready = "com.avanade.ltcoe.amt.common.pekko.cluster.AMTHealthCheck"
      }
    }
  }
}
pekko.grpc.client {
  # Settings under "*" act as defaults for all gRPC clients.
  "*" {
    host = 127.0.0.1
    port = 8080
    trusted = /certs/ca.pem
  }
}
