Hazelcast Throttling Authentication Attempts

This feature uses a distributed Hazelcast map to record throttled authentication attempts. This component depends on the CAS auditing functionality.

Enable the following module in your configuration overlay:

Maven:
<dependency>
    <groupId>org.apereo.cas</groupId>
    <artifactId>cas-server-support-throttle-hazelcast</artifactId>
    <version>${cas.version}</version>
</dependency>
Gradle:
implementation "org.apereo.cas:cas-server-support-throttle-hazelcast:${project.'cas.version'}"
Gradle, importing the CAS BOM via the dependency management plugin:
dependencyManagement {
    imports {
        mavenBom "org.apereo.cas:cas-server-support-bom:${project.'cas.version'}"
    }
}

dependencies {
    implementation "org.apereo.cas:cas-server-support-throttle-hazelcast"
}
Gradle, applying the CAS BOM as a platform:
dependencies {
    /*
    The following platform references should be included automatically and are listed here for reference only.
            
    implementation enforcedPlatform("org.apereo.cas:cas-server-support-bom:${project.'cas.version'}")
    implementation platform(org.springframework.boot.gradle.plugin.SpringBootPlugin.BOM_COORDINATES)
    */

    implementation "org.apereo.cas:cas-server-support-throttle-hazelcast"
}
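
With the module in place, the throttling policy itself is still governed by the generic CAS authentication throttling settings documented separately; the Hazelcast module only changes where the failure records are stored. A minimal, hedged sketch in cas.properties form (the failure.* keys below come from the core throttling documentation, and the member addresses are purely illustrative):

# Generic throttling policy (illustrative values only)
cas.authn.throttle.failure.threshold=3
cas.authn.throttle.failure.range-seconds=10

# Hazelcast members backing the distributed throttle map
# (hypothetical addresses; list every CAS node, including this one)
cas.authn.throttle.hazelcast.cluster.network.members=203.0.113.10,203.0.113.11
cas.authn.throttle.hazelcast.cluster.network.port=5701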

Hazelcast Configuration

The following settings and properties are available from the CAS configuration catalog:

The configuration settings listed below are tagged as Required in the CAS configuration metadata. This flag indicates that the presence of the setting may be needed to activate or affect the behavior of the CAS feature and generally should be reviewed, possibly owned and adjusted. If the setting is assigned a default value, you do not need to strictly put the setting in your copy of the configuration, but should review it nonetheless to make sure it matches your deployment expectations.

The configuration settings listed below are tagged as Optional in the CAS configuration metadata. This flag indicates that the presence of the setting is not immediately necessary in the end-user CAS configuration, because a default value is assigned or the activation of the feature is not conditionally controlled by the setting value. In other words, you should only include this field in your configuration if you need to modify the default value or if you need to turn on the feature controlled by the setting.

  • cas.authn.throttle.hazelcast.core.enable-compression=false
  • Enables compression when default Java serialization is used.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreProperties.


  • cas.authn.throttle.hazelcast.core.enable-jet=true
  • Enable the Jet configuration/service on the Hazelcast instance. Hazelcast Jet is a distributed batch and stream processing system that can do stateful computations over massive amounts of data with consistent low latency. The Jet service is required when executing SQL queries with the SQL service.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreProperties.


  • cas.authn.throttle.hazelcast.core.enable-management-center-scripting=true
  • Enables scripting from Management Center.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreProperties.


  • cas.authn.throttle.hazelcast.core.license-key=
  • Hazelcast enterprise license key.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreProperties.

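    As an illustration of the core settings above, a deployment that does not run SQL queries against Hazelcast and does not hold an enterprise license might simply keep Jet and compression turned off. A hedged sketch with illustrative values:

    cas.authn.throttle.hazelcast.core.enable-jet=false
    cas.authn.throttle.hazelcast.core.enable-compression=false
    # Only needed for Hazelcast Enterprise features such as TLS:
    # cas.authn.throttle.hazelcast.core.license-key=YOUR-LICENSE-KEY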

    Configuration Metadata

    The collection of configuration properties listed in this section are automatically generated from the CAS source and components that contain the actual field definitions, types, descriptions, modules, etc. This metadata may not always be 100% accurate, or could be lacking details and sufficient explanations.

    Be Selective

    This section is meant as a guide only. Do NOT copy/paste the entire collection of settings into your CAS configuration; rather pick only the properties that you need. Do NOT enable settings unless you are certain of their purpose and do NOT copy settings into your configuration only to keep them as reference. All these ideas lead to upgrade headaches, maintenance nightmares and premature aging.

    YAGNI

    Note that for nearly ALL use cases, declaring and configuring properties listed here is sufficient. You should NOT have to explicitly massage a CAS XML/Java/etc configuration file to design an authentication handler, create attribute release policies, etc. CAS at runtime will auto-configure all required changes for you. If you are unsure about the meaning of a given CAS setting, do NOT simply turn it on; review the codebase or, better yet, ask questions to clarify the intended behavior.

    Naming Convention

    Property names can be specified in very relaxed terms. For instance cas.someProperty, cas.some-property, cas.some_property are all valid names. While all forms are accepted by CAS, there are certain components (in CAS and other frameworks used) whose activation at runtime is conditional on a property value, where this property is required to have been specified in CAS configuration using kebab case. This is both true for properties that are owned by CAS as well as those that might be presented to the system via an external library or framework such as Spring Boot, etc.

    :information_source: Note

    When possible, properties should be stored in lower-case kebab format, such as cas.property-name=value. The only possible exception to this rule is when naming actuator endpoints; The name of the actuator endpoints (i.e. ssoSessions) MUST remain in camelCase mode.

    Settings and properties that are controlled by the CAS platform directly always begin with the prefix cas. All other settings are controlled and provided to CAS via other underlying frameworks and may have their own schemas and syntax. BE CAREFUL with the distinction. Unrecognized properties are rejected by CAS and/or frameworks upon which CAS depends. This means if you somehow misspell a property definition or fail to adhere to the dot-notation syntax and such, your setting is entirely refused by CAS and likely the feature it controls will never be activated in the way you intend.
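
    For example, all of the following forms bind to the same setting (using the instance-name property from this page, with an illustrative value), but only the last, kebab-case form is guaranteed to work for components whose activation is conditional on the property value:

    cas.authn.throttle.hazelcast.cluster.core.instanceName=cas-throttle
    cas.authn.throttle.hazelcast.cluster.core.instance_name=cas-throttle
    # Preferred lower-case kebab form:
    cas.authn.throttle.hazelcast.cluster.core.instance-name=cas-throttle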

    Validation

    Configuration properties are automatically validated on CAS startup to report issues with configuration binding, especially if defined CAS settings cannot be recognized or validated by the configuration schema. Additional validation processes are also handled via Configuration Metadata and property migrations applied automatically on startup by Spring Boot and family.

    Indexed Settings

    CAS settings able to accept multiple values are typically documented with an index, such as cas.some.setting[0]=value. The index [0] is meant to be incremented by the adopter to allow for distinct multiple configuration blocks.
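
    For instance, two distinct blocks of the same multi-valued setting would be declared as follows (using the documentation's own placeholder setting and illustrative values):

    cas.some.setting[0]=first-value
    cas.some.setting[1]=second-value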

    Hazelcast Cluster Core

    The following settings and properties are available from the CAS configuration catalog:

    The configuration settings listed below are tagged as Required in the CAS configuration metadata. This flag indicates that the presence of the setting may be needed to activate or affect the behavior of the CAS feature and generally should be reviewed, possibly owned and adjusted. If the setting is assigned a default value, you do not need to strictly put the setting in your copy of the configuration, but should review it nonetheless to make sure it matches your deployment expectations.

  • cas.authn.throttle.hazelcast.cluster.core.instance-name=
  • The instance name.

    This setting supports the Spring Expression Language.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.


    The configuration settings listed below are tagged as Optional in the CAS configuration metadata. This flag indicates that the presence of the setting is not immediately necessary in the end-user CAS configuration, because a default value is assigned or the activation of the feature is not conditionally controlled by the setting value. In other words, you should only include this field in your configuration if you need to modify the default value or if you need to turn on the feature controlled by the setting.

  • cas.authn.throttle.hazelcast.cluster.core.async-backup-count=0
  • Hazelcast supports both synchronous and asynchronous backups. By default, backup operations are synchronous. In this case, backup operations block operations until backups are successfully copied to backup members (or deleted from backup members in case of remove) and acknowledgements are received. Therefore, backups are updated before a put operation is completed, provided that the cluster is stable. Asynchronous backups, on the other hand, do not block operations. They are fire and forget and do not require acknowledgements; the backup operations are performed at some point in time.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.core.async-fillup=true
  • Used when replication is turned on via the replicated setting (cas.authn.throttle.hazelcast.cluster.core.replicated).

    If a new member joins the cluster, there are two ways you can handle the initial provisioning that is executed to replicate all existing values to the new member. Each involves how you configure the async fill up.
    • First, you can configure async fill up to true, which does not block reads while the fill up operation is underway. That way, you have immediate access on the new member, but it will take time until all the values are eventually accessible. Not yet replicated values are returned as non-existing (null).
    • Second, you can configure for a synchronous initial fill up (by configuring the async fill up to false), which blocks every read or write access to the map until the fill up operation is finished. Use this with caution since it might block your application from operating.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.core.backup-count=1
  • To provide data safety, Hazelcast allows you to specify the number of backup copies you want to have. That way, data on a cluster member will be copied onto other member(s). To create synchronous backups, select the number of backup copies. When this count is 1, a map entry will have its backup on one other member in the cluster. If you set it to 2, then a map entry will have its backup on two other members. You can set it to 0 if you do not want your entries to be backed up, e.g., if performance is more important than backing up. The maximum value for the backup count is 6. Sync backup operations have a blocking cost which may lead to latency issues.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.core.cp-member-count=0
  • CP Subsystem is a component of a Hazelcast cluster that builds a strongly consistent layer for a set of distributed data structures. Its data structures are CP with respect to the CAP principle, i.e., they always maintain linearizability and prefer consistency over availability during network partitions. Besides network partitions, CP Subsystem withstands server and client failures. All members of a Hazelcast cluster do not necessarily take part in CP Subsystem. The number of Hazelcast members that take part in CP Subsystem is specified here. CP Subsystem must have at least 3 CP members.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.core.eviction-policy=LRU
  • Hazelcast supports policy-based eviction for distributed maps. Currently supported policies are LRU (Least Recently Used), LFU (Least Frequently Used) and NONE.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.core.logging-type=slf4j
  • Hazelcast has a flexible logging configuration and doesn't depend on any logging framework except JDK logging. It has built-in adapters for a number of logging frameworks and also supports custom loggers by providing logging interfaces. To use a built-in adapter, set this setting to one of the predefined types below.

    • jdk: JDK logging
    • log4j: Log4j
    • slf4j: Slf4j
    • none: Disable logging

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.core.map-merge-policy=PUT_IF_ABSENT
  • Define how data items in Hazelcast maps are merged together from source to destination. By default, merges map entries from source to destination if they don't exist in the destination map. Accepted values are:

    • PUT_IF_ABSENT: Merges data structure entries from source to destination if they don't exist in the destination data structure.
    • HIGHER_HITS: Merges data structure entries from source to destination data structure if the source entry has more hits than the destination one.
    • DISCARD: Merges only entries from the destination data structure and discards all entries from the source data structure.
    • PASS_THROUGH: Merges data structure entries from source to destination directly unless the merging entry is null.
    • EXPIRATION_TIME: Merges data structure entries from source to destination data structure if the source entry will expire later than the destination entry. This policy can only be used if the clocks of the nodes are in sync.
    • LATEST_UPDATE: Merges data structure entries from source to destination data structure if the source entry was updated more frequently than the destination entry. This policy can only be used if the clocks of the nodes are in sync.
    • LATEST_ACCESS: Merges data structure entries from source to destination data structure if the source entry has been accessed more recently than the destination entry. This policy can only be used if the clocks of the nodes are in sync.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.core.max-no-heartbeat-seconds=300
  • Maximum heartbeat timeout in seconds before a node is assumed dead.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.core.max-size=85
  • Sets the maximum size of the map.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.core.max-size-policy=USED_HEAP_PERCENTAGE
    • FREE_HEAP_PERCENTAGE: Policy based on minimum free JVM heap memory percentage per JVM.
    • FREE_HEAP_SIZE: Policy based on minimum free JVM heap memory in megabytes per JVM.
    • FREE_NATIVE_MEMORY_PERCENTAGE: Policy based on minimum free native memory percentage per Hazelcast instance.
    • FREE_NATIVE_MEMORY_SIZE: Policy based on minimum free native memory in megabytes per Hazelcast instance.
    • PER_NODE: Policy based on maximum number of entries stored per data structure (map, cache etc) on each Hazelcast instance.
    • PER_PARTITION: Policy based on maximum number of entries stored per data structure (map, cache etc) on each partition.
    • USED_HEAP_PERCENTAGE: Policy based on maximum used JVM heap memory percentage per data structure (map, cache etc) on each Hazelcast instance.
    • USED_HEAP_SIZE: Policy based on maximum used JVM heap memory in megabytes per data structure (map, cache etc) on each Hazelcast instance.
    • USED_NATIVE_MEMORY_PERCENTAGE: Policy based on maximum used native memory percentage per data structure (map, cache etc) on each Hazelcast instance.
    • USED_NATIVE_MEMORY_SIZE: Policy based on maximum used native memory in megabytes per data structure (map, cache etc) on each Hazelcast instance.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.core.partition-member-group-type=
  • With PartitionGroupConfig, you can control how primary and backup partitions are mapped to physical members. Hazelcast will always place partitions on different partition groups so as to provide redundancy. Accepted values are: PER_MEMBER, HOST_AWARE, CUSTOM, ZONE_AWARE, SPI. In all cases a partition will never be created on the same group. If there are more partitions defined than there are partition groups, then only those partitions, up to the number of partition groups, will be created. For example, if you define 2 backups, then with the primary, that makes 3. If you have only two partition groups only two will be created.

    • PER_MEMBER Partition Groups: This is the default partition scheme and is used if no other scheme is defined. Each member is in a group of its own.
    • HOST_AWARE Partition Groups: In this scheme, a group corresponds to a host, based on its IP address. Partitions will not be written to any other members on the same host. This scheme provides good redundancy when multiple instances are being run on the same host.
    • CUSTOM Partition Groups: In this scheme, IP addresses, or IP address ranges, are allocated to groups. Partitions are not written to the same group. This is very useful for ensuring partitions are written to different racks or even availability zones.
    • ZONE_AWARE Partition Groups: In this scheme, groups are allocated according to the metadata provided by the Discovery SPI. Partitions are not written to the same group. This is very useful for ensuring partitions are written to availability zones or different racks without providing the IP addresses to the config ahead of time.
    • SPI Partition Groups: In this scheme, groups are allocated according to the implementation provided by the Discovery SPI.

      org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

  • cas.authn.throttle.hazelcast.cluster.core.replicated=false
  • A Replicated Map is a distributed key-value data structure where the data is replicated to all members in the cluster. It provides full replication of entries to all members for high speed access. A Replicated Map does not partition data (it does not spread data to different cluster members); instead, it replicates the data to all members. Replication leads to higher memory consumption. However, a Replicated Map has faster read and write access since the data is available on all members. Writes could take place on local/remote members in order to provide write-order, eventually being replicated to all other members.

    If you have a large cluster or very high occurrences of updates, the Replicated Map may not scale linearly as expected since it has to replicate update operations to all members in the cluster. Since the replication of updates is performed in an asynchronous manner, Hazelcast recommends you enable back pressure in case your system has high occurrences of updates.

    Note that Replicated Map does not guarantee eventual consistency because there are some edge cases that fail to provide consistency.

    Replicated Map uses the internal partition system of Hazelcast in order to serialize updates happening on the same key at the same time. This happens by sending updates of the same key to the same Hazelcast member in the cluster.

    Due to the asynchronous nature of replication, a Hazelcast member could die before successfully replicating a "write" operation to other members after sending the "write completed" response to its caller during the write process. In this scenario, Hazelcast’s internal partition system promotes one of the replicas of the partition as the primary one. The new primary partition does not have the latest "write" since the dead member could not successfully replicate the update.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.core.timeout=5
  • Connection timeout in seconds for the TCP/IP config and members joining the cluster.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

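    Tying several of the cluster core settings above together, a hedged sketch for a small cluster that wants one synchronous and one asynchronous backup of each throttle-map entry, a per-member entry cap and LRU eviction might look as follows (all values illustrative):

    cas.authn.throttle.hazelcast.cluster.core.instance-name=cas-throttle
    # One blocking backup plus one fire-and-forget backup per entry:
    cas.authn.throttle.hazelcast.cluster.core.backup-count=1
    cas.authn.throttle.hazelcast.cluster.core.async-backup-count=1
    # Evict least-recently-used entries once a member holds 10000 of them:
    cas.authn.throttle.hazelcast.cluster.core.max-size-policy=PER_NODE
    cas.authn.throttle.hazelcast.cluster.core.max-size=10000
    cas.authn.throttle.hazelcast.cluster.core.eviction-policy=LRU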


    Hazelcast Cluster Networking

    The following settings and properties are available from the CAS configuration catalog:

    The configuration settings listed below are tagged as Required in the CAS configuration metadata. This flag indicates that the presence of the setting may be needed to activate or affect the behavior of the CAS feature and generally should be reviewed, possibly owned and adjusted. If the setting is assigned a default value, you do not need to strictly put the setting in your copy of the configuration, but should review it nonetheless to make sure it matches your deployment expectations.

  • cas.authn.throttle.hazelcast.cluster.network.members=
  • Sets the well-known members. If the list is empty, setting it has the same effect as clearing it. A member can be a comma-separated string, e.g. 10.11.12.1,10.11.12.2, which indicates multiple members are going to be added. The list of members must include ALL CAS server nodes, including the current node that owns this configuration.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.network.port=5701
  • You can specify the ports which Hazelcast will use to communicate between cluster members. The name of the parameter for this is port and its default value is 5701. By default, Hazelcast tries 100 ports to bind: if you set the port to 5701, then as members join the cluster, Hazelcast tries to find free ports between 5701 and 5801.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkClusterProperties.


    The configuration settings listed below are tagged as Optional in the CAS configuration metadata. This flag indicates that the presence of the setting is not immediately necessary in the end-user CAS configuration, because a default value is assigned or the activation of the feature is not conditionally controlled by the setting value. In other words, you should only include this field in your configuration if you need to modify the default value or if you need to turn on the feature controlled by the setting.

  • cas.authn.throttle.hazelcast.cluster.network.ipv4-enabled=true
  • IPv6 support is switched off by default, since some platforms have issues using the IPv6 stack, and others, such as Amazon AWS, have no support at all. To enable IPv6 support, set this setting to false.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.network.local-address=
  • If this property is set, then this is the address where the server socket is bound to.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.network.network-interfaces=
  • You can specify which network interfaces Hazelcast should use. Servers mostly have more than one network interface, so you may want to list the valid IPs. Range characters ('*' and '-') can be used for simplicity. For instance, 10.3.10.* refers to IPs between 10.3.10.0 and 10.3.10.255. Interface 10.3.10.4-18 refers to IPs between 10.3.10.4 and 10.3.10.18 (4 and 18 included). If network interface configuration is enabled (it is disabled by default) and Hazelcast cannot find a matching interface, then it will print a message on the console and will not start on that node.

    Interfaces can be separated by a comma.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.network.outbound-ports=
  • The outbound ports for the Hazelcast configuration.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.network.port-auto-increment=true
  • If you want to use only a single port, you can disable the port auto-increment feature.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.network.public-address=
  • The default public address to be advertised to other cluster members and clients.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkClusterProperties.


  • cas.authn.throttle.hazelcast.cluster.network.tcpip-enabled=true
  • Enable the TCP/IP config. Contains the configuration for the TCP/IP join mechanism, which relies on one or more well-known members. When a new member wants to join the cluster, it tries to connect to one of the well-known members; if it is able to connect, it will then know about all members in the cluster and no longer relies on these well-known members.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkClusterProperties.

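    Putting the networking pieces above together, a static two-node TCP/IP cluster could be sketched as follows (hypothetical addresses, illustrative values):

    cas.authn.throttle.hazelcast.cluster.network.tcpip-enabled=true
    # List ALL CAS nodes, including the one that owns this configuration:
    cas.authn.throttle.hazelcast.cluster.network.members=198.51.100.21,198.51.100.22
    cas.authn.throttle.hazelcast.cluster.network.port=5701
    # Pin each member to a single port instead of scanning 5701-5801:
    cas.authn.throttle.hazelcast.cluster.network.port-auto-increment=false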


    Hazelcast Network TLS Encryption

    You can use the TLS (Transport Layer Security) protocol to establish an encrypted communication across your Hazelcast cluster with key stores and trust stores. Hazelcast allows you to encrypt socket level communication between Hazelcast members and between Hazelcast clients and members, for end to end encryption.

    :information_source: Usage

    Hazelcast SSL is an enterprise feature. You will need a proper Hazelcast Enterprise License to use this facility.

    Hazelcast provides a default SSL context factory implementation, which CAS auto-configures, when this feature is enabled, to use the configured keystore to initialize the SSL context.

    The following settings and properties are available from the CAS configuration catalog:

    The configuration settings listed below are tagged as Required in the CAS configuration metadata. This flag indicates that the presence of the setting may be needed to activate or affect the behavior of the CAS feature and generally should be reviewed, possibly owned and adjusted. If the setting is assigned a default value, you do not need to strictly put the setting in your copy of the configuration, but should review it nonetheless to make sure it matches your deployment expectations.

    The configuration settings listed below are tagged as Optional in the CAS configuration metadata. This flag indicates that the presence of the setting is not immediately necessary in the end-user CAS configuration, because a default value is assigned or the activation of the feature is not conditionally controlled by the setting value. In other words, you should only include this field in your configuration if you need to modify the default value or if you need to turn on the feature controlled by the setting.

  • cas.authn.throttle.hazelcast.cluster.network.ssl.cipher-suites=
  • Comma-separated list of cipher suite names allowed to be used. Its default value is all suites supported by your Java runtime.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkSslProperties.


  • cas.authn.throttle.hazelcast.cluster.network.ssl.key-manager-algorithm=
  • Name of the algorithm based on which the authentication keys are provided.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkSslProperties.


  • cas.authn.throttle.hazelcast.cluster.network.ssl.key-store-type=JKS
  • Type of the keystore. Its default value is JKS. Another commonly used type is the PKCS12. Available keystore/truststore types depend on your Operating system and the Java runtime. Only needed when the mutual authentication is used.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkSslProperties.


  • cas.authn.throttle.hazelcast.cluster.network.ssl.keystore=
  • Path of your keystore file. Only needed when the mutual authentication is used.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkSslProperties.


  • cas.authn.throttle.hazelcast.cluster.network.ssl.keystore-password=
  • Password to access the key from your keystore file. Only needed when the mutual authentication is used.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkSslProperties.


  • cas.authn.throttle.hazelcast.cluster.network.ssl.mutual-authentication=
  • Mutual authentication configuration. It’s empty by default, which means the client side of the connection is not authenticated. Available values are:

    • REQUIRED - server forces usage of a trusted client certificate
    • OPTIONAL - server asks for a client certificate, but it doesn't require it

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkSslProperties.


  • cas.authn.throttle.hazelcast.cluster.network.ssl.protocol=TLS
  • Name of the algorithm used for TLS/SSL. For the protocol property, we recommend providing TLS with its version information, e.g. TLSv1.2. Note that if you write only TLS, your application chooses the TLS version according to your Java version.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkSslProperties.


  • cas.authn.throttle.hazelcast.cluster.network.ssl.trust-manager-algorithm=
  • Name of the algorithm based on which the trust managers are provided.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkSslProperties.


  • cas.authn.throttle.hazelcast.cluster.network.ssl.trust-store=
  • Path of your truststore file. The truststore is a keystore file that contains a collection of certificates trusted by your application.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkSslProperties.


  • cas.authn.throttle.hazelcast.cluster.network.ssl.trust-store-password=
  • Password to unlock the truststore file.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkSslProperties.


  • cas.authn.throttle.hazelcast.cluster.network.ssl.trust-store-type=JKS
  • Type of the truststore. Its default value is JKS. Another commonly used type is the PKCS12. Available keystore/truststore types depend on your Operating system and the Java runtime.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkSslProperties.


  • cas.authn.throttle.hazelcast.cluster.network.ssl.validate-identity=false
  • Flag which enables endpoint identity validation. During the TLS handshake, the client verifies whether the server’s hostname (or IP address) matches the information in the X.509 certificate (Subject Alternative Name extension).

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastNetworkSslProperties.

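    Putting the TLS settings above together, a hedged sketch for member-to-member encryption, assuming a valid Hazelcast Enterprise license and hypothetical keystore/truststore locations and passwords:

    cas.authn.throttle.hazelcast.core.license-key=YOUR-ENTERPRISE-LICENSE-KEY
    cas.authn.throttle.hazelcast.cluster.network.ssl.protocol=TLSv1.2
    cas.authn.throttle.hazelcast.cluster.network.ssl.keystore=/etc/cas/config/hazelcast-keystore.p12
    cas.authn.throttle.hazelcast.cluster.network.ssl.keystore-password=changeit
    cas.authn.throttle.hazelcast.cluster.network.ssl.key-store-type=PKCS12
    cas.authn.throttle.hazelcast.cluster.network.ssl.trust-store=/etc/cas/config/hazelcast-truststore.p12
    cas.authn.throttle.hazelcast.cluster.network.ssl.trust-store-password=changeit
    cas.authn.throttle.hazelcast.cluster.network.ssl.trust-store-type=PKCS12
    cas.authn.throttle.hazelcast.cluster.network.ssl.mutual-authentication=REQUIRED
    cas.authn.throttle.hazelcast.cluster.network.ssl.validate-identity=true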


    :information_source: Performance

    Under Linux, the JVM automatically makes use of /dev/random for the generation of random numbers. If the available entropy is insufficient to keep up with the rate at which random numbers are required, encryption/decryption can slow down considerably, since reads may block for minutes waiting for sufficient entropy. This can be fixed by setting the -Djava.security.egd=file:/dev/./urandom system property. Note that with this option, if there is a shortage of entropy, random number generation will not block, but the returned random values could theoretically be vulnerable to a cryptographic attack.