Webflow Session
CAS uses Spring Webflow to manage the authentication sequence. Spring Webflow provides a pluggable architecture whereby various actions, decisions and operations throughout the primary authentication workflow can be easily controlled and navigated. In order for this navigation to work, some form of conversational session state must be maintained.
Client-side Sessions
CAS provides a facility for storing flow execution state on the client in Spring Webflow. Flow state is stored as an encoded byte stream in the flow execution identifier provided to the client when rendering a view. By default, CAS automatically attempts to store and keep track of this state on the client in an encrypted form via encryption and signing keys to remove the need for session cleanup, termination and replication.
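As a sketch, pinning the client-side session keys explicitly might look like the following. The key values here are placeholders; generate real keys of the required sizes rather than copying these.

```properties
# Client-side webflow session state, signed and encrypted (the default).
cas.webflow.crypto.enabled=true
cas.webflow.crypto.signing.key=<512-byte-signing-key>
cas.webflow.crypto.encryption.key=<16-byte-encryption-key>
```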
The following settings and properties are available from the CAS configuration catalog:
cas.webflow.crypto.encryption.key=
The encryption key. By default, and unless specified otherwise, this must be a randomly-generated string whose length is defined by the encryption key size setting.
|
cas.webflow.crypto.signing.key=
The signing key is a JWT whose length is defined by the signing key size setting.
|
cas.webflow.session.hazelcast.cluster.core.instance-name=
The instance name. This setting supports the Spring Expression Language.
|
cas.webflow.session.hazelcast.cluster.network.members=
Sets the well-known members. If the value is empty, the existing member list is cleared. Multiple members can be specified as a comma-separated string, e.g. '10.11.12.1,10.11.12.2'.
|
cas.webflow.session.hazelcast.cluster.network.port=5701
You can specify the ports which Hazelcast uses to communicate between cluster members. The name of the parameter for this is port and its default value is 5701. By default, Hazelcast tries 100 ports to bind; that is, if you set the port to 5701, Hazelcast tries ports between 5701 and 5801 as members join the cluster.
|
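Putting the network settings together, a static two-member TCP/IP cluster could be sketched as follows; the member addresses are placeholders.

```properties
cas.webflow.session.hazelcast.cluster.network.tcpip-enabled=true
cas.webflow.session.hazelcast.cluster.network.members=10.11.12.1,10.11.12.2
cas.webflow.session.hazelcast.cluster.network.port=5701
cas.webflow.session.hazelcast.cluster.network.port-auto-increment=true
```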
cas.webflow.session.hazelcast.cluster.discovery.aws.access-key=
AWS access key.
|
cas.webflow.session.hazelcast.cluster.discovery.aws.secret-key=
AWS secret key.
|
cas.webflow.session.hazelcast.cluster.discovery.azure.client-id=
The Azure Active Directory Service Principal client ID.
|
cas.webflow.session.hazelcast.cluster.discovery.azure.client-secret=
The Azure Active Directory Service Principal client secret.
|
cas.webflow.session.hazelcast.cluster.discovery.azure.cluster-id=
The name of the tag on the Hazelcast VM resources. Every Hazelcast virtual machine you deploy in your resource group must be tagged with the value of cluster-id defined in your Hazelcast configuration. The only requirement is that every VM can reach the others by private or public IP address.
|
cas.webflow.session.hazelcast.cluster.discovery.azure.group-name=
The Azure resource group name of the cluster. You can find this in the Azure portal or CLI.
|
cas.webflow.session.hazelcast.cluster.discovery.azure.subscription-id=
The Azure subscription ID.
|
cas.webflow.session.hazelcast.cluster.discovery.azure.tenant-id=
The Azure Active Directory tenant ID.
|
cas.webflow.session.hazelcast.cluster.discovery.jclouds.credential=
Cloud Provider credential, can be thought of as a password for cloud services.
|
cas.webflow.session.hazelcast.cluster.discovery.jclouds.identity=
Cloud Provider identity, can be thought of as a user name for cloud services.
|
cas.webflow.session.hazelcast.cluster.discovery.jclouds.provider=
String value used to identify the ComputeService provider. For example, "google-compute-engine" is used for Google Cloud services.
|
cas.webflow.session.hazelcast.cluster.discovery.zookeeper.group=
Name of this Hazelcast cluster. You can have multiple distinct clusters using the same ZooKeeper installation.
|
cas.webflow.session.hazelcast.cluster.discovery.zookeeper.path=/discovery/hazelcast
Path in ZooKeeper used for the auto-discovery and tracking of members.
|
cas.webflow.session.hazelcast.cluster.discovery.zookeeper.url=
ZooKeeper URL address, typically in the format of host1:port1,host2:port2.
|
cas.webflow.crypto.alg=AES
The signing/encryption algorithm to use.
|
cas.webflow.crypto.enabled=true
Whether crypto operations are enabled.
|
cas.webflow.crypto.encryption.key-size=16
Encryption key size.
|
cas.webflow.crypto.signing.key-size=512
The signing key size.
|
cas.webflow.session.compress=false
Whether or not the snapshots should be compressed.
|
cas.webflow.session.lock-timeout=PT30S
Sets the time period that can elapse before a timeout occurs on an attempt to acquire a conversation lock. The default is 30 seconds. Only relevant if session storage is done on the server. This setting supports the java.time.Duration syntax.
|
cas.webflow.session.max-conversations=5
Using the maxConversations property, you can limit the number of concurrently active conversations allowed in a single session. If the maximum is exceeded, the conversation manager will automatically end the oldest conversation. The default is 5, which should be fine for most situations. Set it to -1 for no limit. Setting maxConversations to 1 allows easy resource cleanup in situations where there should only be one active conversation per session. Only relevant if session storage is done on the server.
|
cas.webflow.session.storage=false
Controls whether Spring Webflow sessions are stored server-side or client-side. By default, state is managed on the client side, where it is also signed and encrypted.
|
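For example, switching session state to server-side storage and tightening the conversation limits might look like this; the values are illustrative.

```properties
cas.webflow.session.storage=true
cas.webflow.session.lock-timeout=PT30S
cas.webflow.session.max-conversations=1
cas.webflow.session.compress=false
```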
cas.webflow.session.hazelcast.cluster.core.async-backup-count=0
Hazelcast supports both synchronous and asynchronous backups. By default, backup operations are synchronous. In this case, backup operations block operations until backups are successfully copied to backup members (or deleted from backup members in case of remove) and acknowledgements are received. Therefore, backups are updated before a put operation is completed, provided that the cluster is stable. Asynchronous backups, on the other hand, do not block operations. They are fire and forget and do not require acknowledgements; the backup operations are performed at some point in time.
|
cas.webflow.session.hazelcast.cluster.core.async-fillup=true
Used when replication is turned on via the replicated setting. When true, the replicated map is available for reads before its initial fill-up has completed.
|
cas.webflow.session.hazelcast.cluster.core.backup-count=1
To provide data safety, Hazelcast allows you to specify the number of backup copies you want to have. That way, data on a cluster member will be copied onto other member(s). To create synchronous backups, select the number of backup copies. When this count is 1, a map entry will have its backup on one other member in the cluster. If you set it to 2, then a map entry will have its backup on two other members. You can set it to 0 if you do not want your entries to be backed up, e.g., if performance is more important than backing up. The maximum value for the backup count is 6. Sync backup operations have a blocking cost which may lead to latency issues.
|
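As an illustration, a cluster keeping one synchronous and one asynchronous backup per entry would be configured as:

```properties
cas.webflow.session.hazelcast.cluster.core.backup-count=1
cas.webflow.session.hazelcast.cluster.core.async-backup-count=1
```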
cas.webflow.session.hazelcast.cluster.core.cp-member-count=0
CP Subsystem is a component of a Hazelcast cluster that builds a strongly consistent layer for a set of distributed data structures. Its data structures are CP with respect to the CAP principle, i.e., they always maintain linearizability and prefer consistency over availability during network partitions. Besides network partitions, CP Subsystem withstands server and client failures. All members of a Hazelcast cluster do not necessarily take part in CP Subsystem. The number of Hazelcast members that take part in CP Subsystem is specified here. CP Subsystem must have at least 3 CP members.
|
cas.webflow.session.hazelcast.cluster.core.eviction-policy=LRU
Hazelcast supports policy-based eviction for distributed maps. Currently supported policies are LRU (Least Recently Used), LFU (Least Frequently Used) and NONE.
|
cas.webflow.session.hazelcast.cluster.core.logging-type=slf4j
Hazelcast has a flexible logging configuration and doesn't depend on any logging framework except JDK logging. It has built-in adaptors for a number of logging frameworks and also supports custom loggers via logging interfaces. To use a built-in adaptor, set this to one of the predefined types, such as slf4j, log4j2, jdk or none.
|
cas.webflow.session.hazelcast.cluster.core.map-merge-policy=PUT_IF_ABSENT
Defines how data items in Hazelcast maps are merged together from source to destination. By default, map entries from the source are merged into the destination only if they don't already exist in the destination map.
|
cas.webflow.session.hazelcast.cluster.core.max-no-heartbeat-seconds=300
Maximum heartbeat timeout, in seconds, before a member is assumed dead.
|
cas.webflow.session.hazelcast.cluster.core.max-size=85
Sets the maximum size of the map.
|
cas.webflow.session.hazelcast.cluster.core.max-size-policy=USED_HEAP_PERCENTAGE
|
cas.webflow.session.hazelcast.cluster.core.partition-member-group-type=
Defines the partition group scheme used to group members into units, so that partition backups are distributed across groups rather than placed on the same group as the primary; examples include HOST_AWARE and ZONE_AWARE.
|
cas.webflow.session.hazelcast.cluster.core.replicated=false
A Replicated Map is a distributed key-value data structure where the data is replicated to all members in the cluster. It provides full replication of entries to all members for high speed access. A Replicated Map does not partition data (it does not spread data to different cluster members); instead, it replicates the data to all members. Replication leads to higher memory consumption. However, a Replicated Map has faster read and write access since the data is available on all members. Writes could take place on local/remote members in order to provide write-order, eventually being replicated to all other members. Replicated Map uses the internal partition system of Hazelcast in order to serialize updates happening on the same key at the same time. This happens by sending updates of the same key to the same Hazelcast member in the cluster. Due to the asynchronous nature of replication, a Hazelcast member could die before successfully replicating a "write" operation to other members after sending the "write completed" response to its caller during the write process. In this scenario, Hazelcast’s internal partition system promotes one of the replicas of the partition as the primary one. The new primary partition does not have the latest "write" since the dead member could not successfully replicate the update.
|
cas.webflow.session.hazelcast.cluster.core.timeout=5
Connection timeout in seconds for the TCP/IP config and members joining the cluster.
|
cas.webflow.session.hazelcast.cluster.discovery.enabled=false
Whether discovery should be enabled via the configured strategies below.
|
cas.webflow.session.hazelcast.cluster.discovery.multicast.enabled=false
Enables a multicast configuration using a group address and port. Contains the configuration for the multicast discovery mechanism, with which Hazelcast members find each other using multicast, so they do not need to know the concrete addresses of other members; they just multicast to everyone listening. Whether multicast is possible or allowed depends on your environment; otherwise you need to look at the TCP/IP cluster configuration.
|
cas.webflow.session.hazelcast.cluster.discovery.multicast.group=
The multicast group address used for discovery. With the multicast auto-discovery mechanism, Hazelcast allows cluster members to find each other using multicast communication. The cluster members do not need to know the concrete addresses of the other members, as they just multicast to all the other members for listening. Whether multicast is possible or allowed depends on your environment.
|
cas.webflow.session.hazelcast.cluster.discovery.multicast.port=0
The multicast port used for discovery.
|
cas.webflow.session.hazelcast.cluster.discovery.multicast.time-to-live=32
Gets the time-to-live for multicast packets in seconds. This is the default time-to-live for multicast packets sent out on the socket.
|
cas.webflow.session.hazelcast.cluster.discovery.multicast.timeout=2
Specifies the time in seconds that a member should wait for a valid multicast response from another member running in the network before declaring itself the leader member (the first member joined to the cluster) and creating its own cluster. This only applies to the startup of members where no leader has been assigned yet. If you specify a high value, such as 60 seconds, each member waits 60 seconds before moving on until a leader is selected. Be careful when providing a high value; also be careful not to set the value too low, or the members might give up too early and create their own clusters.
|
cas.webflow.session.hazelcast.cluster.discovery.multicast.trusted-interfaces=
Multicast trusted interfaces for discovery. With the multicast auto-discovery mechanism, Hazelcast allows cluster members to find each other using multicast communication. The cluster members do not need to know the concrete addresses of the other members, as they just multicast to all the other members for listening. Whether multicast is possible or allowed depends on your environment.
|
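A hypothetical multicast discovery setup, assuming the network permits multicast; the group address and port below are the customary Hazelcast example values.

```properties
cas.webflow.session.hazelcast.cluster.discovery.enabled=true
cas.webflow.session.hazelcast.cluster.discovery.multicast.enabled=true
cas.webflow.session.hazelcast.cluster.discovery.multicast.group=224.2.2.3
cas.webflow.session.hazelcast.cluster.discovery.multicast.port=54327
```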
cas.webflow.session.hazelcast.cluster.network.ipv4-enabled=true
IPv6 support is switched off by default, since some platforms have issues using the IPv6 stack, and others such as Amazon AWS do not support it at all. To enable IPv6 support, set this setting to false.
|
cas.webflow.session.hazelcast.cluster.network.local-address=
If this property is set, then this is the address where the server socket is bound to.
|
cas.webflow.session.hazelcast.cluster.network.network-interfaces=
You can specify which network interfaces Hazelcast should use. Servers often have more than one network interface, so you may want to list the valid IPs. Range characters ('*' and '-') can be used for simplicity. For instance, 10.3.10.* refers to IPs between 10.3.10.0 and 10.3.10.255, and 10.3.10.4-18 refers to IPs between 10.3.10.4 and 10.3.10.18 (4 and 18 included). If network interface configuration is enabled (it is disabled by default) and Hazelcast cannot find a matching interface, it prints a message on the console and does not start on that node. Interfaces can be separated by commas.
|
cas.webflow.session.hazelcast.cluster.network.outbound-ports=
The outbound ports for the Hazelcast configuration.
|
cas.webflow.session.hazelcast.cluster.network.port-auto-increment=true
You may also choose to use only one port. In that case, you can disable the port auto-increment feature.
|
cas.webflow.session.hazelcast.cluster.network.public-address=
The default public address to be advertised to other cluster members and clients.
|
cas.webflow.session.hazelcast.cluster.network.ssl.cipher-suites=
Comma-separated list of cipher suite names allowed to be used. By default, all suites supported by your Java runtime are allowed.
|
cas.webflow.session.hazelcast.cluster.network.ssl.key-manager-algorithm=
Name of the algorithm based on which the authentication keys are provided.
|
cas.webflow.session.hazelcast.cluster.network.ssl.key-store-type=JKS
Type of the keystore. Its default value is JKS. Another commonly used type is PKCS12. Available keystore/truststore types depend on your operating system and Java runtime. Only needed when mutual authentication is used.
|
cas.webflow.session.hazelcast.cluster.network.ssl.keystore=
Path of your keystore file. Only needed when the mutual authentication is used.
|
cas.webflow.session.hazelcast.cluster.network.ssl.keystore-password=
Password to access the key from your keystore file. Only needed when the mutual authentication is used.
|
cas.webflow.session.hazelcast.cluster.network.ssl.mutual-authentication=
Mutual authentication configuration. It's empty by default, which means the client side of the connection is not authenticated. Available values are: REQUIRED and OPTIONAL.
|
cas.webflow.session.hazelcast.cluster.network.ssl.protocol=TLS
Name of the algorithm used in your TLS/SSL configuration. For the protocol property, we recommend providing TLS with its version information, e.g. TLSv1.2. Note that if you specify only TLS, your application chooses the TLS version according to your Java version.
|
cas.webflow.session.hazelcast.cluster.network.ssl.trust-manager-algorithm=
Name of the algorithm based on which the trust managers are provided.
|
cas.webflow.session.hazelcast.cluster.network.ssl.trust-store=
Path of your truststore file. The file truststore is a keystore file that contains a collection of certificates trusted by your application.
|
cas.webflow.session.hazelcast.cluster.network.ssl.trust-store-password=
Password to unlock the truststore file.
|
cas.webflow.session.hazelcast.cluster.network.ssl.trust-store-type=JKS
Type of the truststore. Its default value is JKS. Another commonly used type is the PKCS12. Available keystore/truststore types depend on your Operating system and the Java runtime.
|
cas.webflow.session.hazelcast.cluster.network.ssl.validate-identity=false
Flag which allows enabling endpoint identity validation. This means that during the TLS handshake, the client verifies whether the server's hostname (or IP address) matches the information in the X.509 certificate (Subject Alternative Name extension).
|
cas.webflow.session.hazelcast.cluster.network.tcpip-enabled=true
Enable TCP/IP config. Contains the configuration for the TCP/IP join mechanism, which relies on one or more well-known members. When a new member wants to join a cluster, it tries to connect to one of the well-known members. If it is able to connect, it then knows about all members in the cluster and no longer relies on the well-known members.
|
cas.webflow.session.hazelcast.cluster.wan-replication.enabled=false
Whether WAN should be enabled.
|
cas.webflow.session.hazelcast.cluster.wan-replication.replication-name=apereo-cas
Name of this replication group.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets=
List of target clusters to be used for synchronization and replication.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].acknowledge-type=ACK_ON_OPERATION_COMPLETE
Accepted values are: ACK_ON_RECEIPT, ACK_ON_OPERATION_COMPLETE.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].batch-maximum-delay-milliseconds=1000
Maximum amount of time, in milliseconds, to be waited before sending a batch of events in case batch.size is not reached.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].batch-size=500
Maximum size of events that are sent to the target cluster in a single batch.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].cluster-name=
Sets the cluster name used as an endpoint group password for authentication on the target endpoint. If there is no separate publisher ID property defined, this cluster name will also be used as a WAN publisher ID. This ID is then used for identifying the publisher.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].consistency-check-strategy=NONE
Strategy for checking the consistency of data between replicas.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].endpoints=
Comma separated list of endpoints in this replication group. IP addresses and ports of the cluster members for which the WAN replication is implemented. These endpoints are not necessarily the entire target cluster and WAN does not perform the discovery of other members in the target cluster. It only expects that these IP addresses (or at least some of them) are available.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].executor-thread-count=2
The number of threads that the replication executor will have. The executor is used to send WAN events to the endpoints and ideally you want to have one thread per endpoint. If this property is omitted and you have specified the endpoints property, this will be the case. If necessary you can manually define the number of threads that the executor will use. Once the executor has been initialized there is thread affinity between the discovered endpoints and the executor threads - all events for a single endpoint will go through a single executor thread, preserving event order. It is important to determine which number of executor threads is a good value. Failure to do so can lead to performance issues - either contention on a too small number of threads or wasted threads that will not be performing any work.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].properties=
The WAN publisher properties.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].publisher-class-name=com.hazelcast.enterprise.wan.replication.WanBatchReplication
Publisher class name for WAN replication.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].publisher-id=
Returns the publisher ID used for identifying the publisher.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].queue-capacity=10_000
For huge clusters or high data mutation rates, you might need to increase the replication queue size. The default queue size for replication queues is 10,000. This means, if you have heavy put/update/remove rates, you might exceed the queue size so that the oldest, not yet replicated, updates might get lost.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].queue-full-behavior=THROW_EXCEPTION
Accepted values are: THROW_EXCEPTION, DISCARD_AFTER_MUTATION, THROW_EXCEPTION_ONLY_IF_REPLICATION_ACTIVE.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].response-timeout-milliseconds=60_000
Time, in milliseconds, to be waited for the acknowledgment of a sent WAN event to target cluster.
|
cas.webflow.session.hazelcast.cluster.wan-replication.targets[0].snapshot-enabled=
When set to true, only the latest events (based on key) are selected and sent in a batch.
|
cas.webflow.session.hazelcast.core.enable-compression=false
Enables compression when default java serialization is used.
|
cas.webflow.session.hazelcast.core.enable-jet=true
Enable Jet configuration/service on the hazelcast instance. Hazelcast Jet is a distributed batch and stream processing system that can do stateful computations over massive amounts of data with consistent low latency. Jet service is required when executing SQL queries with the SQL service.
|
cas.webflow.session.hazelcast.core.enable-management-center-scripting=true
Enables scripting from Management Center.
|
cas.webflow.session.hazelcast.core.license-key=
Hazelcast enterprise license key.
|
cas.webflow.session.hazelcast.cluster.discovery.aws.connection-timeout-seconds=5
The maximum amount of time Hazelcast will try to connect to a well known member before giving up. Setting this value too low could mean that a member is not able to connect to a cluster. Setting the value too high means that member startup could slow down because of longer timeouts (for example, when a well known member is not up). Increasing this value is recommended if you have many IPs listed and the members cannot properly build up the cluster. Its default value is 5.
|
cas.webflow.session.hazelcast.cluster.discovery.aws.host-header=
Host header, i.e. ec2.amazonaws.com.
|
cas.webflow.session.hazelcast.cluster.discovery.aws.iam-role=
If you do not want to use access key and secret key, you can specify iam-role. Hazelcast fetches your credentials by using your IAM role. This setting only affects deployments on Amazon EC2. If you are deploying CAS in an Amazon ECS environment, the role should not be specified. The role is fetched from the task definition that is assigned to run CAS.
|
cas.webflow.session.hazelcast.cluster.discovery.aws.port=-1
Hazelcast port. Typically may be set to 5701.
|
cas.webflow.session.hazelcast.cluster.discovery.aws.region=us-east-1
AWS region, i.e. us-east-1.
|
cas.webflow.session.hazelcast.cluster.discovery.aws.security-group-name=
If a security group is configured, only instances within that security group are selected.
|
cas.webflow.session.hazelcast.cluster.discovery.aws.tag-key=
If a tag key/value is set, only instances with that tag key/value will be selected.
|
cas.webflow.session.hazelcast.cluster.discovery.aws.tag-value=
If a tag key/value is set, only instances with that tag key/value will be selected.
|
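For instance, an EC2 deployment relying on an IAM role instead of static credentials might be sketched as follows; the role name and tag values are placeholders.

```properties
cas.webflow.session.hazelcast.cluster.discovery.enabled=true
cas.webflow.session.hazelcast.cluster.discovery.aws.iam-role=my-cas-role
cas.webflow.session.hazelcast.cluster.discovery.aws.region=us-east-1
cas.webflow.session.hazelcast.cluster.discovery.aws.tag-key=app
cas.webflow.session.hazelcast.cluster.discovery.aws.tag-value=cas
```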
cas.webflow.session.hazelcast.cluster.discovery.gcp.hz-port=5701-5708
A range of ports where the plugin looks for Hazelcast members.
|
cas.webflow.session.hazelcast.cluster.discovery.gcp.label=
A filter to look only for instances labeled as specified; property format: key=value.
|
cas.webflow.session.hazelcast.cluster.discovery.gcp.private-key-path=
A filesystem path to the private key for GCP service account in the JSON format; if not set, the access token is fetched from the GCP VM instance.
|
cas.webflow.session.hazelcast.cluster.discovery.gcp.projects=
A list of projects where the plugin looks for instances; if not set, the current project is used.
|
cas.webflow.session.hazelcast.cluster.discovery.gcp.region=
A region where the plugin looks for instances; if not set, the zones property is used; if neither is set, all zones of the current region are used.
|
cas.webflow.session.hazelcast.cluster.discovery.gcp.zones=
A list of zones where the plugin looks for instances; if not set, all zones of the current region are used.
|
cas.webflow.session.hazelcast.cluster.discovery.jclouds.credential-path=
Used for cloud providers which require an extra JSON or P12 key file. This denotes the path of that file. Only tested with Google Compute Engine. (Required if Google Compute Engine is used.)
|
cas.webflow.session.hazelcast.cluster.discovery.jclouds.endpoint=
Defines the endpoint for a generic API such as OpenStack or CloudStack (optional).
|
cas.webflow.session.hazelcast.cluster.discovery.jclouds.group=
Filters instance groups (optional). When used with AWS it maps to security group.
|
cas.webflow.session.hazelcast.cluster.discovery.jclouds.port=-1
Port which the hazelcast instance service uses on the cluster member. Default value is 5701. (optional)
|
cas.webflow.session.hazelcast.cluster.discovery.jclouds.regions=
Defines region for a cloud service (optional). Can be used with comma separated values for multiple values.
|
cas.webflow.session.hazelcast.cluster.discovery.jclouds.role-name=
Used for IAM role support specific to AWS (optional, but if defined, no identity or credential should be defined in the configuration).
|
cas.webflow.session.hazelcast.cluster.discovery.jclouds.tag-keys=
Filters cloud instances with tags (optional). Can be used with comma separated values for multiple values.
|
cas.webflow.session.hazelcast.cluster.discovery.jclouds.tag-values=
Filters cloud instances with tags (optional) Can be used with comma separated values for multiple values.
|
cas.webflow.session.hazelcast.cluster.discovery.jclouds.zones=
Defines zone for a cloud service (optional). Can be used with comma separated values for multiple values.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.api-retries=3
Defines the number of retries to Kubernetes API. Defaults to: 3.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.api-token=
Defines an OAuth token for the Kubernetes client to access the Kubernetes REST API. Defaults to reading the token from the auto-injected file at: /var/run/secrets/kubernetes.io/serviceaccount/token.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.ca-certificate=
CA authority certificate from the Kubernetes master. Defaults to reading the certificate from the auto-injected file at: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.kubernetes-master=
Defines an alternative address for the Kubernetes master. Defaults to: https://kubernetes.default.svc.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.namespace=
Defines the namespace of the application POD through the Service Discovery REST API of Kubernetes.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.pod-label-name=
Defines the pod label to lookup through the Service Discovery REST API of Kubernetes.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.pod-label-value=
Defines the pod label value to lookup through the Service Discovery REST API of Kubernetes.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.resolve-not-ready-addresses=false
Defines if not ready addresses should be evaluated to be discovered on startup.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.service-dns=
Defines the DNS service lookup domain. This is defined as something similar to my-service.my-namespace.svc.cluster.local.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.service-dns-timeout=-1
Defines the DNS service lookup timeout in seconds. Defaults to 5 secs.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.service-label-name=
Defines the service label to lookup through the Service Discovery REST API of Kubernetes.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.service-label-value=
Defines the service label value to lookup through the Service Discovery REST API of Kubernetes.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.service-name=
Defines the service name of the POD to lookup through the Service Discovery REST API of Kubernetes.
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.service-port=0
If specified with a value greater than 0, its value defines the endpoint port of the service (overriding the default).
|
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.use-node-name-as-external-address=false
Defines if the node name should be used as the external address, instead of looking up the external IP using the Kubernetes API.
|
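A minimal Kubernetes discovery sketch, matching CAS pods by service name and namespace; both values below are placeholders.

```properties
cas.webflow.session.hazelcast.cluster.discovery.enabled=true
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.namespace=cas
cas.webflow.session.hazelcast.cluster.discovery.kubernetes.service-name=cas-hazelcast
```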
cas.webflow.session.hazelcast.cluster.discovery.docker-swarm.dns-provider.enabled=false
Enable provider.
|
cas.webflow.session.hazelcast.cluster.discovery.docker-swarm.dns-provider.peer-services=
Comma separated list of docker services and associated ports to be considered peers of this service. Note, this must include itself (the definition of serviceName and servicePort) if the service is to cluster with other instances of this service.
|
cas.webflow.session.hazelcast.cluster.discovery.docker-swarm.dns-provider.service-name=
Name of the docker service that this instance is running in.
|
cas.webflow.session.hazelcast.cluster.discovery.docker-swarm.dns-provider.service-port=5701
Internal port that hazelcast is listening on.
|
cas.webflow.session.hazelcast.cluster.discovery.docker-swarm.member-provider.docker-network-names=
Comma delimited list of Docker network names to discover matching services on.
|
cas.webflow.session.hazelcast.cluster.discovery.docker-swarm.member-provider.docker-service-labels=
Comma delimited list of relevant Docker service label=values to find tasks/containers on the networks.
|
cas.webflow.session.hazelcast.cluster.discovery.docker-swarm.member-provider.docker-service-names=
Comma delimited list of relevant Docker service names to find tasks/containers on the networks.
|
cas.webflow.session.hazelcast.cluster.discovery.docker-swarm.member-provider.enabled=false
Enable provider.
|
cas.webflow.session.hazelcast.cluster.discovery.docker-swarm.member-provider.hazelcast-peer-port=5701
The raw port that Hazelcast is listening on. IMPORTANT: this is NOT a Docker "published" port, nor is it necessarily an EXPOSEd port. It is the Hazelcast port that the service is configured with; this must be the same for all matched containers in order to work, and using the default of 5701 is the simplest way to go.
|
cas.webflow.session.hazelcast.cluster.discovery.docker-swarm.member-provider.skip-verify-ssl=false
If Swarm Mgr URI is SSL, to enable skip-verify for it.
|
cas.webflow.session.hazelcast.cluster.discovery.docker-swarm.member-provider.swarm-mgr-uri=
Swarm Manager URI (overrides DOCKER_HOST).
|
This CAS feature is able to accept signing and encryption crypto keys. In most scenarios if keys are not provided, CAS will auto-generate them. The following instructions apply if you wish to manually and beforehand create the signing and encryption keys.
Note that if you are asked to create a JWK of a certain size for the key, you are to use the following set of commands to generate the token:
wget https://raw.githubusercontent.com/apereo/cas/master/etc/jwk-gen.jar
java -jar jwk-gen.jar -t oct -s [size]
The outcome would be similar to:
{
"kty": "oct",
"kid": "...",
"k": "..."
}
The generated value for k needs to be assigned to the relevant CAS settings. Note that keys generated via the above algorithm are processed by CAS using the Advanced Encryption Standard (AES) algorithm, which is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology.
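For illustration only, the structure of such an octet-sequence JWK can be sketched in a few lines of Python. This is a rough stand-in to show what jwk-gen.jar produces, not a replacement for the official tool:

```python
import base64
import json
import secrets

def generate_oct_jwk(size_bits: int) -> dict:
    """Build an octet-sequence (symmetric) JWK of the given size in bits."""
    key = secrets.token_bytes(size_bits // 8)
    return {
        "kty": "oct",
        "kid": secrets.token_hex(8),  # arbitrary key identifier
        # "k" is the raw key, base64url-encoded without padding (RFC 7517)
        "k": base64.urlsafe_b64encode(key).rstrip(b"=").decode("ascii"),
    }

# e.g. a 512-bit key, mirroring `java -jar jwk-gen.jar -t oct -s 512`
print(json.dumps(generate_oct_jwk(512), indent=2))
```

The value printed for k is what would be assigned to the relevant CAS setting.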
Configuration Metadata
The collection of configuration properties listed in this section are automatically generated from the CAS source and components that contain the actual field definitions, types, descriptions, modules, etc. This metadata may not always be 100% accurate, or could be lacking details and sufficient explanations.
Be Selective
This section is meant as a guide only. Do NOT copy/paste the entire collection of settings into your CAS configuration; rather pick only the properties that you need. Do NOT enable settings unless you are certain of their purpose and do NOT copy settings into your configuration only to keep them as reference. All these ideas lead to upgrade headaches, maintenance nightmares and premature aging.
YAGNI
Note that for nearly ALL use cases, declaring and configuring properties listed here is sufficient. You should NOT have to explicitly massage a CAS XML/Java/etc configuration file to design an authentication handler, create attribute release policies, etc. CAS at runtime will auto-configure all required changes for you. If you are unsure about the meaning of a given CAS setting, do NOT simply turn it on. Review the codebase or, better yet, ask questions to clarify the intended behavior.
Naming Convention
Property names can be specified in very relaxed terms. For instance cas.someProperty, cas.some-property and cas.some_property are all valid names. While all forms are accepted by CAS, there are certain components (in CAS and other frameworks used) whose activation at runtime is conditional on a property value, where this property is required to have been specified in CAS configuration using kebab case. This is true both for properties that are owned by CAS and for those that might be presented to the system via an external library or framework such as Spring Boot. When possible, properties should be stored in lower-case kebab format, such as cas.property-name=value.
The only possible exception to this rule is when naming actuator endpoints; the names of actuator endpoints (e.g. ssoSessions) MUST remain in camelCase.
Settings and properties that are controlled by the CAS platform directly always begin with the prefix cas. All other settings are controlled and provided to CAS via other underlying frameworks and may have their own schemas and syntax. BE CAREFUL with the distinction. Unrecognized properties are rejected by CAS and/or the frameworks upon which CAS depends. This means that if you misspell a property definition or fail to adhere to the dot-notation syntax, your setting is entirely refused and the feature it controls will likely never be activated in the way you intend.
Validation
Configuration properties are automatically validated on CAS startup to report issues with configuration binding, especially if defined CAS settings cannot be recognized or validated by the configuration schema. The validation process is on by default and can be skipped on startup using a special system property SKIP_CONFIG_VALIDATION that should be set to true. Additional validation processes are also handled via Configuration Metadata and property migrations applied automatically on startup by Spring Boot and family.
Indexed Settings
CAS settings able to accept multiple values are typically documented with an index, such as cas.some.setting[0]=value. The index [0] is meant to be incremented by the adopter to allow for multiple distinct configuration blocks.
In the event that keys are not generated by the deployer, CAS will attempt to auto-generate them and will output the result for each respective key. The deployer MUST copy the generated keys to their CAS properties file, especially when running a multi-node CAS deployment. Failure to do so will prevent CAS from properly encrypting and decrypting the webflow state, and will prevent successful single sign-on.
While the above settings are all optional, it is recommended that you provide your own configuration and settings for encrypting and signing the webflow session state.
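As an illustrative sketch only, the resulting client-side configuration might look like the following in cas.properties. The key values here are placeholders, not working keys; substitute your own generated values:

```properties
# Client-side webflow session state; key values below are placeholders.
cas.webflow.crypto.enabled=true
cas.webflow.crypto.alg=AES
cas.webflow.crypto.signing.key={k value of a generated 512-bit oct JWK}
cas.webflow.crypto.encryption.key={randomly-generated 16-character string}
```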
Server-side Sessions
In the event that you wish to use server-side session storage for managing the webflow session, you will need to enable this behavior via CAS properties.
The following settings and properties are available from the CAS configuration catalog:
cas.webflow.session.compress=false
Whether or not the snapshots should be compressed.
|
cas.webflow.session.lock-timeout=PT30S
Sets the time period that can elapse before a timeout occurs on an attempt to acquire a conversation lock. The default is 30 seconds. Only relevant if session storage is done on the server. This setting supports the java.time.Duration syntax.
|
cas.webflow.session.max-conversations=5
Using the maxConversations property, you can limit the number of concurrently active conversations allowed in a single session. If the maximum is exceeded, the conversation manager will automatically end the oldest conversation. The default is 5, which should be fine for most situations. Set it to -1 for no limit. Setting maxConversations to 1 allows easy resource cleanup in situations where there should only be one active conversation per session. Only relevant if session storage is done on the server.
|
cas.webflow.session.storage=false
Controls whether Spring Webflow sessions are to be stored server-side or client-side. By default, state is managed on the client side, signed and encrypted.
|
spring.session.hazelcast.flush-mode=on-save
Sessions flush mode. Determines when session changes are written to the session store.
|
spring.session.hazelcast.map-name=spring:session:sessions
Name of the map used to store sessions.
|
spring.session.hazelcast.save-mode=on-set-attribute
Sessions save mode. Determines how session changes are tracked and saved to the session store.
|
spring.session.jdbc.cleanup-cron=0 * * * * *
Cron expression for expired session cleanup job.
|
spring.session.jdbc.flush-mode=on-save
Sessions flush mode. Determines when session changes are written to the session store.
|
spring.session.jdbc.initialize-schema=embedded
Database schema initialization mode.
|
spring.session.jdbc.platform=
Platform to use in initialization scripts if the @@platform@@ placeholder is used. Auto-detected by default.
|
spring.session.jdbc.save-mode=on-set-attribute
Sessions save mode. Determines how session changes are tracked and saved to the session store.
|
spring.session.jdbc.schema=classpath:org/springframework/session/jdbc/schema-@@platform@@.sql
Path to the SQL file to use to initialize the database schema.
|
spring.session.jdbc.table-name=SPRING_SESSION
Name of the database table used to store sessions.
|
spring.session.mongodb.collection-name=sessions
Collection name used to store sessions.
|
spring.session.redis.cleanup-cron=0 * * * * *
Cron expression for expired session cleanup job.
|
spring.session.redis.configure-action=notify-keyspace-events
The configure action to apply when no user defined ConfigureRedisAction bean is present.
|
spring.session.redis.flush-mode=on-save
Sessions flush mode. Determines when session changes are written to the session store.
|
spring.session.redis.namespace=spring:session
Namespace for keys used to store sessions.
|
spring.session.redis.save-mode=on-set-attribute
Sessions save mode. Determines how session changes are tracked and saved to the session store.
|
spring.session.servlet.filter-dispatcher-types=async,error,request
Session repository filter dispatcher types.
|
spring.session.servlet.filter-order=
Session repository filter order.
|
spring.session.store-type=
Session store type.
|
spring.session.timeout=
Session timeout. If a duration suffix is not specified, seconds will be used.
|
Doing so will likely require you to also enable sticky sessions and/or session replication in a clustered deployment of CAS.
Generally speaking, you do not need to enable server-side sessions unless you have a rather specialized deployment or are in need of features that store bits and pieces of data into a server-backed session object. It is recommended that you stick with the default client-side session storage and only switch if and when mandated by a specific CAS behavior.
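If you do opt in, a minimal sketch of the relevant settings might look like the following; all values except the storage flag are shown at their documented defaults:

```properties
# Store webflow state in a server-side session instead of the client.
cas.webflow.session.storage=true
# Optional tuning; defaults shown.
cas.webflow.session.compress=false
cas.webflow.session.max-conversations=5
cas.webflow.session.lock-timeout=PT30S
```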
Hazelcast Session Replication
If you don’t wish to use the native container’s strategy for session replication, you can use CAS’s support for Hazelcast session replication.
This feature is enabled via the following module:
<dependency>
<groupId>org.apereo.cas</groupId>
<artifactId>cas-server-support-session-hazelcast</artifactId>
<version>${cas.version}</version>
</dependency>
implementation "org.apereo.cas:cas-server-support-session-hazelcast:${project.'cas.version'}"
dependencyManagement {
imports {
mavenBom "org.apereo.cas:cas-server-support-bom:${project.'cas.version'}"
}
}
dependencies {
implementation "org.apereo.cas:cas-server-support-session-hazelcast"
}
dependencies {
/*
The following platform references should be included automatically and are listed here for reference only.
implementation enforcedPlatform("org.apereo.cas:cas-server-support-bom:${project.'cas.version'}")
implementation platform(org.springframework.boot.gradle.plugin.SpringBootPlugin.BOM_COORDINATES)
*/
implementation "org.apereo.cas:cas-server-support-session-hazelcast"
}
The following settings and properties are available from the CAS configuration catalog:
spring.session.hazelcast.flush-mode=on-save
Sessions flush mode. Determines when session changes are written to the session store.
|
spring.session.hazelcast.map-name=spring:session:sessions
Name of the map used to store sessions.
|
spring.session.hazelcast.save-mode=on-set-attribute
Sessions save mode. Determines how session changes are tracked and saved to the session store.
|
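A hedged sketch of a Hazelcast-backed session configuration follows; the values shown are the documented defaults, and the store-type property comes from Spring Session rather than CAS:

```properties
# Delegate HTTP session storage to Hazelcast via Spring Session.
spring.session.store-type=hazelcast
spring.session.hazelcast.map-name=spring:session:sessions
spring.session.hazelcast.flush-mode=on-save
spring.session.hazelcast.save-mode=on-set-attribute
```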
Redis Session Replication
If you don’t wish to use the native container’s strategy for session replication, you can use CAS’s support for Redis session replication.
This feature is enabled via the following module:
<dependency>
<groupId>org.apereo.cas</groupId>
<artifactId>cas-server-support-session-redis</artifactId>
<version>${cas.version}</version>
</dependency>
implementation "org.apereo.cas:cas-server-support-session-redis:${project.'cas.version'}"
dependencyManagement {
imports {
mavenBom "org.apereo.cas:cas-server-support-bom:${project.'cas.version'}"
}
}
dependencies {
implementation "org.apereo.cas:cas-server-support-session-redis"
}
dependencies {
/*
The following platform references should be included automatically and are listed here for reference only.
implementation enforcedPlatform("org.apereo.cas:cas-server-support-bom:${project.'cas.version'}")
implementation platform(org.springframework.boot.gradle.plugin.SpringBootPlugin.BOM_COORDINATES)
*/
implementation "org.apereo.cas:cas-server-support-session-redis"
}
The following settings and properties are available from the CAS configuration catalog:
spring.redis.client-name=
Client name to be set on connections with CLIENT SETNAME.
|
spring.redis.client-type=
Type of client to use. By default, auto-detected according to the classpath.
|
spring.redis.cluster.max-redirects=
Maximum number of redirects to follow when executing commands across the cluster.
|
spring.redis.cluster.nodes=
Comma-separated list of "host:port" pairs to bootstrap from. This represents an "initial" list of cluster nodes and is required to have at least one entry.
|
spring.redis.connect-timeout=
Connection timeout.
|
spring.redis.database=0
Database index used by the connection factory.
|
spring.redis.host=localhost
Redis server host.
|
spring.redis.jedis.pool.enabled=
Whether to enable the pool. Enabled automatically if "commons-pool2" is available. With Jedis, pooling is implicitly enabled in sentinel mode and this setting only applies to single node setup.
|
spring.redis.jedis.pool.max-active=8
Maximum number of connections that can be allocated by the pool at a given time. Use a negative value for no limit.
|
spring.redis.jedis.pool.max-idle=8
Maximum number of "idle" connections in the pool. Use a negative value to indicate an unlimited number of idle connections.
|
spring.redis.jedis.pool.max-wait=-1ms
Maximum amount of time a connection allocation should block before throwing an exception when the pool is exhausted. Use a negative value to block indefinitely.
|
spring.redis.jedis.pool.min-idle=0
Target for the minimum number of idle connections to maintain in the pool. This setting only has an effect if both it and time between eviction runs are positive.
|
spring.redis.jedis.pool.time-between-eviction-runs=
Time between runs of the idle object evictor thread. When positive, the idle object evictor thread starts, otherwise no idle object eviction is performed.
|
spring.redis.lettuce.cluster.refresh.adaptive=false
Whether adaptive topology refreshing using all available refresh triggers should be used.
|
spring.redis.lettuce.cluster.refresh.dynamic-refresh-sources=true
Whether to discover and query all cluster nodes for obtaining the cluster topology. When set to false, only the initial seed nodes are used as sources for topology discovery.
|
spring.redis.lettuce.cluster.refresh.period=
Cluster topology refresh period.
|
spring.redis.lettuce.pool.enabled=
Whether to enable the pool. Enabled automatically if "commons-pool2" is available. With Jedis, pooling is implicitly enabled in sentinel mode and this setting only applies to single node setup.
|
spring.redis.lettuce.pool.max-active=8
Maximum number of connections that can be allocated by the pool at a given time. Use a negative value for no limit.
|
spring.redis.lettuce.pool.max-idle=8
Maximum number of "idle" connections in the pool. Use a negative value to indicate an unlimited number of idle connections.
|
spring.redis.lettuce.pool.max-wait=-1ms
Maximum amount of time a connection allocation should block before throwing an exception when the pool is exhausted. Use a negative value to block indefinitely.
|
spring.redis.lettuce.pool.min-idle=0
Target for the minimum number of idle connections to maintain in the pool. This setting only has an effect if both it and time between eviction runs are positive.
|
spring.redis.lettuce.pool.time-between-eviction-runs=
Time between runs of the idle object evictor thread. When positive, the idle object evictor thread starts, otherwise no idle object eviction is performed.
|
spring.redis.lettuce.shutdown-timeout=100ms
Shutdown timeout.
|
spring.redis.password=
Login password of the redis server.
|
spring.redis.port=6379
Redis server port.
|
spring.redis.sentinel.master=
Name of the Redis server.
|
spring.redis.sentinel.nodes=
Comma-separated list of "host:port" pairs.
|
spring.redis.sentinel.password=
Password for authenticating with sentinel(s).
|
spring.redis.sentinel.username=
Login username for authenticating with sentinel(s).
|
spring.redis.ssl=false
Whether to enable SSL support.
|
spring.redis.timeout=
Read timeout.
|
spring.redis.url=
Connection URL. Overrides host, port, and password. User is ignored. Example: redis://user:password@example.com:6379
|
spring.redis.username=
Login username of the redis server.
|
spring.session.redis.cleanup-cron=0 * * * * *
Cron expression for expired session cleanup job.
|
spring.session.redis.configure-action=notify-keyspace-events
The configure action to apply when no user defined ConfigureRedisAction bean is present.
|
spring.session.redis.flush-mode=on-save
Sessions flush mode. Determines when session changes are written to the session store.
|
spring.session.redis.namespace=spring:session
Namespace for keys used to store sessions.
|
spring.session.redis.save-mode=on-set-attribute
Sessions save mode. Determines how session changes are tracked and saved to the session store.
|
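Putting a few of these together, an illustrative Redis-backed session configuration for a local setup might look like this; the host, port and namespace values below are the documented defaults, not requirements:

```properties
# Replicate HTTP sessions through Redis via Spring Session.
spring.session.store-type=redis
spring.redis.host=localhost
spring.redis.port=6379
spring.session.redis.namespace=spring:session
spring.session.redis.flush-mode=on-save
```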
MongoDb Session Replication
If you don’t wish to use the native container’s strategy for session replication, you can use CAS’s support for Mongo session replication.
This feature is enabled via the following module:
<dependency>
<groupId>org.apereo.cas</groupId>
<artifactId>cas-server-support-session-mongo</artifactId>
<version>${cas.version}</version>
</dependency>
implementation "org.apereo.cas:cas-server-support-session-mongo:${project.'cas.version'}"
dependencyManagement {
imports {
mavenBom "org.apereo.cas:cas-server-support-bom:${project.'cas.version'}"
}
}
dependencies {
implementation "org.apereo.cas:cas-server-support-session-mongo"
}
dependencies {
/*
The following platform references should be included automatically and are listed here for reference only.
implementation enforcedPlatform("org.apereo.cas:cas-server-support-bom:${project.'cas.version'}")
implementation platform(org.springframework.boot.gradle.plugin.SpringBootPlugin.BOM_COORDINATES)
*/
implementation "org.apereo.cas:cas-server-support-session-mongo"
}
The following settings and properties are available from the CAS configuration catalog:
spring.data.mongodb.authentication-database=
Authentication database name.
|
spring.data.mongodb.auto-index-creation=
Whether to enable auto-index creation.
|
spring.data.mongodb.database=
Database name.
|
spring.data.mongodb.field-naming-strategy=
Fully qualified name of the FieldNamingStrategy to use.
|
spring.data.mongodb.grid-fs-database=
GridFS database name. This property is deprecated in favor of spring.data.mongodb.gridfs.database.
|
spring.data.mongodb.gridfs.bucket=
GridFS bucket name.
|
spring.data.mongodb.gridfs.database=
GridFS database name.
|
spring.data.mongodb.host=
Mongo server host. Cannot be set with URI.
|
spring.data.mongodb.password=
Login password of the mongo server. Cannot be set with URI.
|
spring.data.mongodb.port=
Mongo server port. Cannot be set with URI.
|
spring.data.mongodb.replica-set-name=
Required replica set name for the cluster. Cannot be set with URI.
|
spring.data.mongodb.repositories.type=auto
Type of Mongo repositories to enable.
|
spring.data.mongodb.uri=mongodb://localhost/test
Mongo database URI. Overrides host, port, username, password, and database.
|
spring.data.mongodb.username=
Login user of the mongo server. Cannot be set with URI.
|
spring.data.mongodb.uuid-representation=java-legacy
Representation to use when converting a UUID to a BSON binary value.
|
spring.session.mongodb.collection-name=sessions
Collection name used to store sessions.
|
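As a hedged sketch, a MongoDB-backed session configuration might resemble the following; the URI and database name are placeholders for a local instance:

```properties
# Replicate HTTP sessions through MongoDB via Spring Session.
spring.session.store-type=mongodb
spring.data.mongodb.uri=mongodb://localhost/sessions
spring.session.mongodb.collection-name=sessions
```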
Configuration Metadata
The collection of configuration properties listed in this section are automatically generated from the CAS source and components that contain the actual field definitions, types, descriptions, modules, etc. This metadata may not always be 100% accurate, or could be lacking details and sufficient explanations.
Be Selective
This section is meant as a guide only. Do NOT copy/paste the entire collection of settings into your CAS configuration; rather pick only the properties that you need. Do NOT enable settings unless you are certain of their purpose and do NOT copy settings into your configuration only to keep them as reference. All these ideas lead to upgrade headaches, maintenance nightmares and premature aging.
YAGNI
Note that for nearly ALL use cases, declaring and configuring properties listed here is sufficient. You should NOT have to explicitly massage a CAS XML/Java/etc configuration file to design an authentication handler, create attribute release policies, etc. CAS at runtime will auto-configure all required changes for you. If you are unsure about the meaning of a given CAS setting, do NOT turn it on without hesitation. Review the codebase or better yet, ask questions to clarify the intended behavior.
Naming Convention
Property names can be specified in very relaxed terms. For instance, cas.someProperty, cas.some-property and cas.some_property are all valid names. While all forms are accepted by CAS, there are certain components (in CAS and in other frameworks used) whose activation at runtime is conditional on a property value, and that property is required to have been specified in the CAS configuration using kebab case. This is true both for properties that are owned by CAS and for those that might be presented to the system via an external library or framework such as Spring Boot.
When possible, properties should be stored in lower-case kebab format, such as cas.property-name=value.
The only possible exception to this rule is when naming actuator endpoints; the names of actuator endpoints (e.g. ssoSessions) MUST remain in camelCase.
Settings and properties that are controlled by the CAS platform directly always begin with the prefix cas. All other settings are controlled and provided to CAS via other underlying frameworks and may have their own schemas and syntax. BE CAREFUL with the distinction. Unrecognized properties are rejected by CAS and/or the frameworks upon which CAS depends. This means that if you misspell a property definition or fail to adhere to the dot-notation syntax, your setting is entirely refused by CAS and the feature it controls will likely never be activated in the way you intend.
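As an illustration (the property name below is hypothetical), the relaxed forms all bind to the same setting, while the kebab-case form is the one safely recognized by conditionally activated components:

```properties
# All of these forms bind to the same hypothetical setting:
cas.someProperty=value
cas.some_property=value
# Preferred: lower-case kebab case
cas.some-property=value
```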
Validation
Configuration properties are automatically validated on CAS startup to report issues with configuration binding, especially if defined CAS settings cannot be recognized or validated by the configuration schema. The validation process is on by default and can be skipped on startup using a special system property, SKIP_CONFIG_VALIDATION, that should be set to true. Additional validation processes are also handled via Configuration Metadata and property migrations applied automatically on startup by Spring Boot and family.
Indexed Settings
CAS settings able to accept multiple values are typically documented with an index, such as cas.some.setting[0]=value. The index [0] is meant to be incremented by the adopter to allow for distinct multiple configuration blocks.
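As a hypothetical illustration (cas.some.setting is not a real property), incrementing the index yields distinct configuration blocks:

```properties
# First configuration block
cas.some.setting[0].name=first
# Second, distinct configuration block
cas.some.setting[1].name=second
```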
JDBC Session Replication
If you don’t wish to use the native container’s strategy for session replication, you can use CAS’s support for JDBC session replication.
This feature is enabled via the following module:
<dependency>
<groupId>org.apereo.cas</groupId>
<artifactId>cas-server-support-session-jdbc</artifactId>
<version>${cas.version}</version>
</dependency>
implementation "org.apereo.cas:cas-server-support-session-jdbc:${project.'cas.version'}"
dependencyManagement {
imports {
mavenBom "org.apereo.cas:cas-server-support-bom:${project.'cas.version'}"
}
}
dependencies {
implementation "org.apereo.cas:cas-server-support-session-jdbc"
}
dependencies {
/*
The following platform references should be included automatically and are listed here for reference only.
implementation enforcedPlatform("org.apereo.cas:cas-server-support-bom:${project.'cas.version'}")
implementation platform(org.springframework.boot.gradle.plugin.SpringBootPlugin.BOM_COORDINATES)
*/
implementation "org.apereo.cas:cas-server-support-session-jdbc"
}
The following settings and properties are available from the CAS configuration catalog:
spring.datasource.continue-on-error=
Deprecation status is |
spring.datasource.data=
Deprecation status is |
spring.datasource.data-password=
Deprecation status is |
spring.datasource.data-username=
Deprecation status is |
spring.datasource.dbcp2.abandoned-usage-tracking=
|
spring.datasource.dbcp2.access-to-underlying-connection-allowed=
|
spring.datasource.dbcp2.auto-commit-on-return=
|
spring.datasource.dbcp2.cache-state=
|
spring.datasource.dbcp2.clear-statement-pool-on-return=
|
spring.datasource.dbcp2.connection-factory-class-name=
|
spring.datasource.dbcp2.connection-init-sqls=
|
spring.datasource.dbcp2.default-auto-commit=
|
spring.datasource.dbcp2.default-catalog=
|
spring.datasource.dbcp2.default-query-timeout=
|
spring.datasource.dbcp2.default-read-only=
|
spring.datasource.dbcp2.default-schema=
|
spring.datasource.dbcp2.default-transaction-isolation=
|
spring.datasource.dbcp2.disconnection-sql-codes=
|
spring.datasource.dbcp2.driver=
|
spring.datasource.dbcp2.driver-class-name=
|
spring.datasource.dbcp2.enable-auto-commit-on-return=
Deprecation status is |
spring.datasource.dbcp2.eviction-policy-class-name=
|
spring.datasource.dbcp2.fast-fail-validation=
|
spring.datasource.dbcp2.initial-size=
|
spring.datasource.dbcp2.jmx-name=
|
spring.datasource.dbcp2.lifo=
|
spring.datasource.dbcp2.log-abandoned=
|
spring.datasource.dbcp2.log-expired-connections=
|
spring.datasource.dbcp2.login-timeout=
|
spring.datasource.dbcp2.max-conn-lifetime-millis=
|
spring.datasource.dbcp2.max-idle=
|
spring.datasource.dbcp2.max-open-prepared-statements=
|
spring.datasource.dbcp2.max-total=
|
spring.datasource.dbcp2.max-wait-millis=
|
spring.datasource.dbcp2.min-evictable-idle-time-millis=
|
spring.datasource.dbcp2.min-idle=
|
spring.datasource.dbcp2.num-tests-per-eviction-run=
|
spring.datasource.dbcp2.password=
|
spring.datasource.dbcp2.pool-prepared-statements=
|
spring.datasource.dbcp2.remove-abandoned-on-borrow=
|
spring.datasource.dbcp2.remove-abandoned-on-maintenance=
|
spring.datasource.dbcp2.remove-abandoned-timeout=
|
spring.datasource.dbcp2.rollback-on-return=
|
spring.datasource.dbcp2.soft-min-evictable-idle-time-millis=
|
spring.datasource.dbcp2.test-on-borrow=
|
spring.datasource.dbcp2.test-on-create=
|
spring.datasource.dbcp2.test-on-return=
|
spring.datasource.dbcp2.test-while-idle=
|
spring.datasource.dbcp2.time-between-eviction-runs-millis=
|
spring.datasource.dbcp2.url=
|
spring.datasource.dbcp2.username=
|
spring.datasource.dbcp2.validation-query=
|
spring.datasource.dbcp2.validation-query-timeout=
|
spring.datasource.driver-class-name=
Fully qualified name of the JDBC driver. Auto-detected based on the URL by default.
|
spring.datasource.embedded-database-connection=
Connection details for an embedded database. Defaults to the most suitable embedded database that is available on the classpath.
|
spring.datasource.generate-unique-name=true
Whether to generate a random datasource name.
|
spring.datasource.hikari.allow-pool-suspension=
|
spring.datasource.hikari.auto-commit=
|
spring.datasource.hikari.catalog=
|
spring.datasource.hikari.connection-init-sql=
|
spring.datasource.hikari.connection-test-query=
|
spring.datasource.hikari.connection-timeout=
|
spring.datasource.hikari.data-source-class-name=
|
spring.datasource.hikari.data-source-j-n-d-i=
|
spring.datasource.hikari.data-source-properties=
|
spring.datasource.hikari.driver-class-name=
|
spring.datasource.hikari.exception-override-class-name=
|
spring.datasource.hikari.health-check-properties=
|
spring.datasource.hikari.idle-timeout=
|
spring.datasource.hikari.initialization-fail-timeout=
|
spring.datasource.hikari.isolate-internal-queries=
|
spring.datasource.hikari.jdbc-url=
|
spring.datasource.hikari.keepalive-time=
|
spring.datasource.hikari.leak-detection-threshold=
|
spring.datasource.hikari.login-timeout=
|
spring.datasource.hikari.max-lifetime=
|
spring.datasource.hikari.maximum-pool-size=
|
spring.datasource.hikari.metrics-tracker-factory=
|
spring.datasource.hikari.minimum-idle=
|
spring.datasource.hikari.password=
|
spring.datasource.hikari.pool-name=
|
spring.datasource.hikari.read-only=
|
spring.datasource.hikari.register-mbeans=
|
spring.datasource.hikari.scheduled-executor=
|
spring.datasource.hikari.schema=
|
spring.datasource.hikari.transaction-isolation=
|
spring.datasource.hikari.username=
|
spring.datasource.hikari.validation-timeout=
|
spring.datasource.initialization-mode=
Deprecation status is |
spring.datasource.jmx-enabled=false
Whether to enable JMX support (if provided by the underlying pool).
Deprecation status is |
spring.datasource.jndi-name=
JNDI location of the datasource. Class, url, username and password are ignored when set.
|
spring.datasource.name=
Datasource name to use if "generate-unique-name" is false. Defaults to "testdb" when using an embedded database, otherwise null.
|
spring.datasource.oracleucp.abandoned-connection-timeout=
|
spring.datasource.oracleucp.connection-factory-class-name=
|
spring.datasource.oracleucp.connection-factory-properties=
|
spring.datasource.oracleucp.connection-harvest-max-count=
|
spring.datasource.oracleucp.connection-harvest-trigger-count=
|
spring.datasource.oracleucp.connection-labeling-high-cost=
|
spring.datasource.oracleucp.connection-pool-name=
|
spring.datasource.oracleucp.connection-properties=
|
spring.datasource.oracleucp.connection-repurpose-threshold=
|
spring.datasource.oracleucp.connection-validation-timeout=
|
spring.datasource.oracleucp.connection-wait-timeout=
|
spring.datasource.oracleucp.data-source-name=
|
spring.datasource.oracleucp.database-name=
|
spring.datasource.oracleucp.description=
|
spring.datasource.oracleucp.fast-connection-failover-enabled=
|
spring.datasource.oracleucp.high-cost-connection-reuse-threshold=
|
spring.datasource.oracleucp.inactive-connection-timeout=
|
spring.datasource.oracleucp.initial-pool-size=
|
spring.datasource.oracleucp.login-timeout=
|
spring.datasource.oracleucp.max-connection-reuse-count=
|
spring.datasource.oracleucp.max-connection-reuse-time=
|
spring.datasource.oracleucp.max-connections-per-shard=
|
spring.datasource.oracleucp.max-idle-time=
|
spring.datasource.oracleucp.max-pool-size=
|
spring.datasource.oracleucp.max-statements=
|
spring.datasource.oracleucp.min-pool-size=
|
spring.datasource.oracleucp.network-protocol=
|
spring.datasource.oracleucp.o-n-s-configuration=
|
spring.datasource.oracleucp.pdb-roles=
|
spring.datasource.oracleucp.port-number=
|
spring.datasource.oracleucp.property-cycle=
|
spring.datasource.oracleucp.query-timeout=
|
spring.datasource.oracleucp.read-only-instance-allowed=
|
spring.datasource.oracleucp.role-name=
|
spring.datasource.oracleucp.s-q-l-for-validate-connection=
|
spring.datasource.oracleucp.seconds-to-trust-idle-connection=
|
spring.datasource.oracleucp.server-name=
|
spring.datasource.oracleucp.sharding-mode=
|
spring.datasource.oracleucp.time-to-live-connection-timeout=
|
spring.datasource.oracleucp.timeout-check-interval=
|
spring.datasource.oracleucp.u-r-l=
|
spring.datasource.oracleucp.user=
|
spring.datasource.oracleucp.validate-connection-on-borrow=
|
spring.datasource.password=
Login password of the database.
|
spring.datasource.platform=
Deprecation status is |
spring.datasource.schema=
Deprecation status is |
spring.datasource.schema-password=
Deprecation status is |
spring.datasource.schema-username=
Deprecation status is |
spring.datasource.separator=
Deprecation status is |
spring.datasource.sql-script-encoding=
Deprecation status is |
spring.datasource.tomcat.abandon-when-percentage-full=
|
spring.datasource.tomcat.access-to-underlying-connection-allowed=
|
spring.datasource.tomcat.alternate-username-allowed=
|
spring.datasource.tomcat.commit-on-return=
|
spring.datasource.tomcat.connection-properties=
|
spring.datasource.tomcat.data-source-j-n-d-i=
|
spring.datasource.tomcat.db-properties=
|
spring.datasource.tomcat.default-auto-commit=
|
spring.datasource.tomcat.default-catalog=
|
spring.datasource.tomcat.default-read-only=
|
spring.datasource.tomcat.default-transaction-isolation=
|
spring.datasource.tomcat.driver-class-name=
|
spring.datasource.tomcat.fair-queue=
|
spring.datasource.tomcat.ignore-exception-on-pre-load=
|
spring.datasource.tomcat.init-s-q-l=
|
spring.datasource.tomcat.initial-size=
|
spring.datasource.tomcat.jdbc-interceptors=
|
spring.datasource.tomcat.jmx-enabled=
|
spring.datasource.tomcat.log-abandoned=
|
spring.datasource.tomcat.log-validation-errors=
|
spring.datasource.tomcat.login-timeout=
|
spring.datasource.tomcat.max-active=
|
spring.datasource.tomcat.max-age=
|
spring.datasource.tomcat.max-idle=
|
spring.datasource.tomcat.max-wait=
|
spring.datasource.tomcat.min-evictable-idle-time-millis=
|
spring.datasource.tomcat.min-idle=
|
spring.datasource.tomcat.name=
|
spring.datasource.tomcat.num-tests-per-eviction-run=
|
spring.datasource.tomcat.password=
|
spring.datasource.tomcat.propagate-interrupt-state=
|
spring.datasource.tomcat.remove-abandoned=
|
spring.datasource.tomcat.remove-abandoned-timeout=
|
spring.datasource.tomcat.rollback-on-return=
|
spring.datasource.tomcat.suspect-timeout=
|
spring.datasource.tomcat.test-on-borrow=
|
spring.datasource.tomcat.test-on-connect=
|
spring.datasource.tomcat.test-on-return=
|
spring.datasource.tomcat.test-while-idle=
|
spring.datasource.tomcat.time-between-eviction-runs-millis=
|
spring.datasource.tomcat.url=
|
spring.datasource.tomcat.use-disposable-connection-facade=
|
spring.datasource.tomcat.use-equals=
|
spring.datasource.tomcat.use-lock=
|
spring.datasource.tomcat.use-statement-facade=
|
spring.datasource.tomcat.username=
|
spring.datasource.tomcat.validation-interval=
|
spring.datasource.tomcat.validation-query=
|
spring.datasource.tomcat.validation-query-timeout=
|
spring.datasource.tomcat.validator-class-name=
|
spring.datasource.type=
Fully qualified name of the connection pool implementation to use. By default, it is auto-detected from the classpath.
|
spring.datasource.url=
JDBC URL of the database.
|
spring.datasource.username=
Login username of the database.
|
spring.datasource.xa.data-source-class-name=
XA datasource fully qualified name.
|
spring.datasource.xa.properties=
Properties to pass to the XA data source.
|
spring.session.jdbc.cleanup-cron=0 * * * * *
Cron expression for expired session cleanup job.
|
spring.session.jdbc.flush-mode=on-save
Sessions flush mode. Determines when session changes are written to the session store.
|
spring.session.jdbc.initialize-schema=embedded
Database schema initialization mode.
|
spring.session.jdbc.platform=
Platform to use in initialization scripts if the @@platform@@ placeholder is used. Auto-detected by default.
|
spring.session.jdbc.save-mode=on-set-attribute
Sessions save mode. Determines how session changes are tracked and saved to the session store.
|
spring.session.jdbc.schema=classpath:org/springframework/session/jdbc/schema-@@platform@@.sql
Path to the SQL file to use to initialize the database schema.
|
spring.session.jdbc.table-name=SPRING_SESSION
Name of the database table used to store sessions.
|