Throttling Authentication Attempts

Capacity Throttling

CAS supports request rate-limiting based on the token-bucket algorithm, via the Bucket4j project. Authentication requests that exceed a configurable capacity within a time window may either be blocked or throttled to slow them down. This protects the system from overload: for example, you could allow CAS 120 authentication requests per minute with a refill rate of 10 requests per second that continually adds tokens back into the capacity bucket. Please note that the bucket allocation strategy is specific to the client IP address.
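
For illustration, the scenario above might be expressed with properties along the following lines. This is only a sketch with illustrative values; the configuration catalog below remains the authoritative reference for names and defaults.

# Allow a capacity of 120 authentication requests per minute,
# refilled greedily at a rate of 10 tokens per second.
# Buckets are tracked per client IP address.
cas.authn.throttle.bucket4j.enabled=true
cas.authn.throttle.bucket4j.blocking=true
cas.authn.throttle.bucket4j.bandwidth[0].capacity=120
cas.authn.throttle.bucket4j.bandwidth[0].duration=PT60S
cas.authn.throttle.bucket4j.bandwidth[0].refill-count=10
cas.authn.throttle.bucket4j.bandwidth[0].refill-duration=PT1S
cas.authn.throttle.bucket4j.bandwidth[0].refill-strategy=GREEDY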

Enable the following module in your configuration overlay:

<dependency>
    <groupId>org.apereo.cas</groupId>
    <artifactId>cas-server-support-throttle-bucket4j</artifactId>
    <version>${cas.version}</version>
</dependency>

implementation "org.apereo.cas:cas-server-support-throttle-bucket4j:${project.'cas.version'}"

dependencyManagement {
    imports {
        mavenBom "org.apereo.cas:cas-server-support-bom:${project.'cas.version'}"
    }
}

dependencies {
    implementation "org.apereo.cas:cas-server-support-throttle-bucket4j"
}

dependencies {
    /*
    The following platform references should be included automatically and are listed here for reference only.
            
    implementation enforcedPlatform("org.apereo.cas:cas-server-support-bom:${project.'cas.version'}")
    implementation platform(org.springframework.boot.gradle.plugin.SpringBootPlugin.BOM_COORDINATES)
    */

    implementation "org.apereo.cas:cas-server-support-throttle-bucket4j"
}

The following settings and properties are available from the CAS configuration catalog:

The configuration settings listed below are tagged as Required in the CAS configuration metadata. This flag indicates that the presence of the setting may be needed to activate or affect the behavior of the CAS feature, and it should generally be reviewed, possibly owned and adjusted. If the setting is assigned a default value, you do not strictly need to include it in your copy of the configuration, but you should review it nonetheless to make sure it matches your deployment expectations.

The configuration settings listed below are tagged as Optional in the CAS configuration metadata. This flag indicates that the presence of the setting is not immediately necessary in the end-user CAS configuration, because a default value is assigned or the activation of the feature is not conditionally controlled by the setting value. In other words, you should only include this field in your configuration if you need to modify the default value or if you need to turn on the feature controlled by the setting.

  • cas.authn.throttle.bucket4j.bandwidth[0].capacity=120
  • Number of tokens/requests that can be used within the time window.

    org.apereo.cas.configuration.model.support.bucket4j.Bucket4jBandwidthLimitProperties.

    How can I configure this property?

  • cas.authn.throttle.bucket4j.bandwidth[0].duration=PT60S
  • Time window within which the configured capacity is allowed.

    This setting supports the java.time.Duration syntax [?].

    org.apereo.cas.configuration.model.support.bucket4j.Bucket4jBandwidthLimitProperties.

    How can I configure this property?

  • cas.authn.throttle.bucket4j.bandwidth[0].initial-tokens=
  • By default, the initial size of the bucket equals its capacity. Sometimes you may want a smaller initial size, for example in the case of a cold start, in order to prevent denial of service.

    org.apereo.cas.configuration.model.support.bucket4j.Bucket4jBandwidthLimitProperties.

    How can I configure this property?

  • cas.authn.throttle.bucket4j.bandwidth[0].refill-count=10
  • The number of tokens that should be used to refill the bucket given the specified refill duration.

    org.apereo.cas.configuration.model.support.bucket4j.Bucket4jBandwidthLimitProperties.

    How can I configure this property?

  • cas.authn.throttle.bucket4j.bandwidth[0].refill-duration=PT30S
  • Duration to use to refill the bucket.

    This setting supports the java.time.Duration syntax [?].

    org.apereo.cas.configuration.model.support.bucket4j.Bucket4jBandwidthLimitProperties.

    How can I configure this property?

  • cas.authn.throttle.bucket4j.bandwidth[0].refill-strategy=GREEDY
  • Describes how the bucket should be refilled and specifies the speed of token regeneration. Available values are as follows:

    • GREEDY: This type of refill regenerates tokens in a greedy manner; it tries to add tokens to the bucket as soon as possible. For example, a refill of "10 tokens per 1 second" adds 1 token every 100 milliseconds; in other words, the refill does not wait a full second to regenerate 10 tokens.
    • INTERVALLY: This type of refill regenerates tokens in an interval-based manner. In contrast to GREEDY, it waits until the whole period has elapsed before regenerating tokens.

    org.apereo.cas.configuration.model.support.bucket4j.Bucket4jBandwidthLimitProperties.

    How can I configure this property?

  • cas.authn.throttle.bucket4j.bandwidth=
  • Describes the available bandwidth and the overall limitations. Multiple bandwidths allow for different policies per unit of measure (e.g. allow 1000 tokens per 1 minute, but no more often than 50 tokens per 1 second).

    org.apereo.cas.configuration.model.support.throttle.Bucket4jThrottleProperties.

    How can I configure this property?

  • cas.authn.throttle.bucket4j.blocking=true
  • Whether the request should block until capacity becomes available. A token is consumed from the token bucket; if no token is available, the request blocks until the refill adds one to the bucket.

    org.apereo.cas.configuration.model.support.throttle.Bucket4jThrottleProperties.

    How can I configure this property?

  • cas.authn.throttle.bucket4j.enabled=true
  • Decide whether bucket4j functionality should be enabled.

    org.apereo.cas.configuration.model.support.throttle.Bucket4jThrottleProperties.

    How can I configure this property?

    Configuration Metadata

    The collection of configuration properties listed in this section are automatically generated from the CAS source and components that contain the actual field definitions, types, descriptions, modules, etc. This metadata may not always be 100% accurate, or could be lacking details and sufficient explanations.

    Be Selective

    This section is meant as a guide only. Do NOT copy/paste the entire collection of settings into your CAS configuration; rather pick only the properties that you need. Do NOT enable settings unless you are certain of their purpose and do NOT copy settings into your configuration only to keep them as reference. All these ideas lead to upgrade headaches, maintenance nightmares and premature aging.

    YAGNI

    Note that for nearly ALL use cases, declaring and configuring properties listed here is sufficient. You should NOT have to explicitly massage a CAS XML/Java/etc configuration file to design an authentication handler, create attribute release policies, etc. CAS at runtime will auto-configure all required changes for you. If you are unsure about the meaning of a given CAS setting, do NOT simply turn it on. Review the codebase or, better yet, ask questions to clarify the intended behavior.

    Naming Convention

    Property names can be specified in very relaxed terms. For instance cas.someProperty, cas.some-property, cas.some_property are all valid names. While all forms are accepted by CAS, there are certain components (in CAS and other frameworks used) whose activation at runtime is conditional on a property value, where this property is required to have been specified in CAS configuration using kebab case. This is both true for properties that are owned by CAS as well as those that might be presented to the system via an external library or framework such as Spring Boot, etc.

    :information_source: Note

    When possible, properties should be stored in lower-case kebab format, such as cas.property-name=value. The only possible exception to this rule is when naming actuator endpoints; The name of the actuator endpoints (i.e. ssoSessions) MUST remain in camelCase mode.

    Settings and properties that are controlled by the CAS platform directly always begin with the prefix cas. All other settings are controlled and provided to CAS via other underlying frameworks and may have their own schemas and syntax. BE CAREFUL with the distinction. Unrecognized properties are rejected by CAS and/or frameworks upon which CAS depends. This means if you somehow misspell a property definition or fail to adhere to the dot-notation syntax and such, your setting is entirely refused by CAS and likely the feature it controls will never be activated in the way you intend.

    Validation

    Configuration properties are automatically validated on CAS startup to report issues with configuration binding, especially if defined CAS settings cannot be recognized or validated by the configuration schema. Additional validation processes are also handled via Configuration Metadata and property migrations applied automatically on startup by Spring Boot and family.

    Indexed Settings

    CAS settings able to accept multiple values are typically documented with an index, such as cas.some.setting[0]=value. The index [0] is meant to be incremented by the adopter to allow for distinct multiple configuration blocks.
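
    For example, the Bucket4j bandwidth setting shown earlier can be declared multiple times by incrementing the index. The sketch below is purely illustrative:

    # Two distinct bandwidth policies: at most 1000 requests per minute overall,
    # but never more than 50 requests within any single second.
    cas.authn.throttle.bucket4j.bandwidth[0].capacity=1000
    cas.authn.throttle.bucket4j.bandwidth[0].duration=PT60S
    cas.authn.throttle.bucket4j.bandwidth[1].capacity=50
    cas.authn.throttle.bucket4j.bandwidth[1].duration=PT1S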

    Failure Throttling

    CAS provides a facility for limiting failed login attempts in order to address password guessing and related abuse scenarios. A couple of strategies are provided for tracking failed attempts:

    1. Source IP - Limit successive failed logins against any username from the same IP address.
    2. Source IP and username - Limit successive failed logins against a particular user from the same IP address.

    All login throttling components that ship with CAS limit successive failed login attempts that exceed a threshold rate, defined as a number of failures over a period of time in seconds. The following properties are provided to define the failure rate:

    • threshold - Number of failed login attempts.
    • rangeSeconds - Period of time in seconds.

    A failure rate of more than 1 per 3 seconds is indicative of an automated authentication attempt, which is a reasonable basis for a throttling policy. Regardless of policy, care should be taken to weigh security against access; overly restrictive policies may prevent legitimate authentication attempts.

    :information_source: Threshold Rate

    The failure threshold rate is calculated as threshold / rangeSeconds. For instance, the failure rate for the above scenario would be 1 / 3 ≈ 0.333333. An authentication attempt may be considered throttled if the request submission rate (calculated from the difference between the current date and the last submission date) exceeds the failure threshold rate.
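
    To make the arithmetic concrete, a sketch of a policy matching the scenario above (no more than 1 failure per 3 seconds) could look like the following; the values are illustrative:

    # More than 1 failed login within 3 seconds from the same source is throttled.
    # Threshold rate = threshold / range-seconds = 1 / 3 = 0.333333 failures per second.
    cas.authn.throttle.failure.threshold=1
    cas.authn.throttle.failure.range-seconds=3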

    Enable the following module in your configuration overlay:

    <dependency>
        <groupId>org.apereo.cas</groupId>
        <artifactId>cas-server-support-throttle</artifactId>
        <version>${cas.version}</version>
    </dependency>
    
    implementation "org.apereo.cas:cas-server-support-throttle:${project.'cas.version'}"
    
    dependencyManagement {
        imports {
            mavenBom "org.apereo.cas:cas-server-support-bom:${project.'cas.version'}"
        }
    }
    
    dependencies {
        implementation "org.apereo.cas:cas-server-support-throttle"
    }
    
    dependencies {
        /*
        The following platform references should be included automatically and are listed here for reference only.
                
        implementation enforcedPlatform("org.apereo.cas:cas-server-support-bom:${project.'cas.version'}")
        implementation platform(org.springframework.boot.gradle.plugin.SpringBootPlugin.BOM_COORDINATES)
        */
    
        implementation "org.apereo.cas:cas-server-support-throttle"
    }
    

    Configuration

    The following settings and properties are available from the CAS configuration catalog:

    The configuration settings listed below are tagged as Required in the CAS configuration metadata. This flag indicates that the presence of the setting may be needed to activate or affect the behavior of the CAS feature, and it should generally be reviewed, possibly owned and adjusted. If the setting is assigned a default value, you do not strictly need to include it in your copy of the configuration, but you should review it nonetheless to make sure it matches your deployment expectations.

  • cas.authn.throttle.hazelcast.cluster.core.instance-name=
  • The instance name.

    This setting supports the Spring Expression Language.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

    The configuration settings listed below are tagged as Optional in the CAS configuration metadata. This flag indicates that the presence of the setting is not immediately necessary in the end-user CAS configuration, because a default value is assigned or the activation of the feature is not conditionally controlled by the setting value. In other words, you should only include this field in your configuration if you need to modify the default value or if you need to turn on the feature controlled by the setting.

  • cas.authn.throttle.schedule.enabled=true
  • Whether scheduler should be enabled to schedule the job to run.

    org.apereo.cas.configuration.model.support.quartz.SchedulingProperties.

    How can I configure this property?

  • cas.authn.throttle.schedule.enabled-on-host=.*
  • Overrides the SchedulingProperties#enabled property value of true if this property does not match the hostname of the CAS server. This can be useful when deploying CAS with an image in a statefulset where all names are predictable but where having different configurations for different servers is hard. The value can be an exact hostname or a regular expression that will be used to match the hostname.

    This setting supports regular expression patterns [?].

    org.apereo.cas.configuration.model.support.quartz.SchedulingProperties.

    How can I configure this property?

  • cas.authn.throttle.schedule.repeat-interval=PT2M
  • String representation of a repeat interval of re-loading data for a data store implementation. This is the timeout between consecutive job executions.

    This setting supports the java.time.Duration syntax [?].

    org.apereo.cas.configuration.model.support.quartz.SchedulingProperties.

    How can I configure this property?

  • cas.authn.throttle.schedule.start-delay=PT15S
  • String representation of the start delay of loading data for a data store implementation. This is the delay between scheduler startup and the first job execution.

    This setting supports the java.time.Duration syntax [?].

    org.apereo.cas.configuration.model.support.quartz.SchedulingProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.async-backup-count=0
  • Hazelcast supports both synchronous and asynchronous backups. By default, backup operations are synchronous. In this case, backup operations block operations until backups are successfully copied to backup members (or deleted from backup members in case of remove) and acknowledgements are received. Therefore, backups are updated before a put operation is completed, provided that the cluster is stable. Asynchronous backups, on the other hand, do not block operations. They are fire and forget and do not require acknowledgements; the backup operations are performed at some point in time.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.async-fillup=true
  • Used when replication is turned on with #isReplicated().

    If a new member joins the cluster, there are two ways you can handle the initial provisioning that is executed to replicate all existing values to the new member. Each involves how you configure the async fill up.
    • First, you can configure async fill up to true, which does not block reads while the fill up operation is underway. That way, you have immediate access on the new member, but it will take time until all the values are eventually accessible. Not yet replicated values are returned as non-existing (null).
    • Second, you can configure for a synchronous initial fill up (by configuring the async fill up to false), which blocks every read or write access to the map until the fill up operation is finished. Use this with caution since it might block your application from operating.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.backup-count=1
  • To provide data safety, Hazelcast allows you to specify the number of backup copies you want to have. That way, data on a cluster member will be copied onto other member(s). To create synchronous backups, select the number of backup copies. When this count is 1, a map entry will have its backup on one other member in the cluster. If you set it to 2, then a map entry will have its backup on two other members. You can set it to 0 if you do not want your entries to be backed up, e.g., if performance is more important than backing up. The maximum value for the backup count is 6. Sync backup operations have a blocking cost which may lead to latency issues.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.cp-member-count=0
  • CP Subsystem is a component of a Hazelcast cluster that builds a strongly consistent layer for a set of distributed data structures. Its data structures are CP with respect to the CAP principle, i.e., they always maintain linearizability and prefer consistency over availability during network partitions. Besides network partitions, CP Subsystem withstands server and client failures. All members of a Hazelcast cluster do not necessarily take part in CP Subsystem. The number of Hazelcast members that take part in CP Subsystem is specified here. CP Subsystem must have at least 3 CP members.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.eviction-policy=LRU
  • Hazelcast supports policy-based eviction for distributed maps. Currently supported policies are LRU (Least Recently Used) and LFU (Least Frequently Used) and NONE. See this for more info.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.logging-type=slf4j
  • Hazelcast has a flexible logging configuration and doesn't depend on any logging framework except JDK logging. It has built-in adaptors for a number of logging frameworks and also supports custom loggers by providing logging interfaces. To use a built-in adaptor, set this property to one of the predefined types below.

    • jdk: JDK logging
    • log4j: Log4j
    • slf4j: Slf4j
    • none: Disable logging

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.map-merge-policy=PUT_IF_ABSENT
  • Define how data items in Hazelcast maps are merged together from source to destination. By default, merges map entries from source to destination if they don't exist in the destination map. Accepted values are:

    • PUT_IF_ABSENT: Merges data structure entries from source to destination if they don't exist in the destination data structure.
    • HIGHER_HITS: Merges data structure entries from source to destination data structure if the source entry has more hits than the destination one.
    • DISCARD: Merges only entries from the destination data structure and discards all entries from the source data structure.
    • PASS_THROUGH: Merges data structure entries from source to destination directly unless the merging entry is null.
    • EXPIRATION_TIME: Merges data structure entries from source to destination data structure if the source entry will expire later than the destination entry. This policy can only be used if the clocks of the nodes are in sync.
    • LATEST_UPDATE: Merges data structure entries from source to destination data structure if the source entry was updated more frequently than the destination entry. This policy can only be used if the clocks of the nodes are in sync.
    • LATEST_ACCESS: Merges data structure entries from source to destination data structure if the source entry has been accessed more recently than the destination entry. This policy can only be used if the clocks of the nodes are in sync.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.max-no-heartbeat-seconds=300
  • Max timeout of heartbeat in seconds for a node to assume it is dead.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.max-size=85
  • Sets the maximum size of the map.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.max-size-policy=USED_HEAP_PERCENTAGE
  • Maximum size policy of the map. Available values are as follows:
    • FREE_HEAP_PERCENTAGE: Policy based on minimum free JVM heap memory percentage per JVM.
    • FREE_HEAP_SIZE: Policy based on minimum free JVM heap memory in megabytes per JVM.
    • FREE_NATIVE_MEMORY_PERCENTAGE: Policy based on minimum free native memory percentage per Hazelcast instance.
    • FREE_NATIVE_MEMORY_SIZE: Policy based on minimum free native memory in megabytes per Hazelcast instance.
    • PER_NODE: Policy based on maximum number of entries stored per data structure (map, cache etc) on each Hazelcast instance.
    • PER_PARTITION: Policy based on maximum number of entries stored per data structure (map, cache etc) on each partition.
    • USED_HEAP_PERCENTAGE: Policy based on maximum used JVM heap memory percentage per data structure (map, cache etc) on each Hazelcast instance.
    • USED_HEAP_SIZE: Policy based on maximum used JVM heap memory in megabytes per data structure (map, cache etc) on each Hazelcast instance.
    • USED_NATIVE_MEMORY_PERCENTAGE: Policy based on maximum used native memory percentage per data structure (map, cache etc) on each Hazelcast instance.
    • USED_NATIVE_MEMORY_SIZE: Policy based on maximum used native memory in megabytes per data structure (map, cache etc) on each Hazelcast instance.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.partition-member-group-type=
  • With PartitionGroupConfig, you can control how primary and backup partitions are mapped to physical members. Hazelcast always places partitions on different partition groups so as to provide redundancy. Accepted values are: PER_MEMBER, HOST_AWARE, CUSTOM, ZONE_AWARE, SPI. In all cases a partition will never be created on the same group. If there are more partitions defined than there are partition groups, then only those partitions, up to the number of partition groups, will be created. For example, if you define 2 backups, then with the primary, that makes 3. If you have only two partition groups, only two will be created.

    • PER_MEMBER Partition Groups: This is the default partition scheme and is used if no other scheme is defined. Each member is in a group of its own.
    • HOST_AWARE Partition Groups: In this scheme, a group corresponds to a host, based on its IP address. Partitions will not be written to any other members on the same host. This scheme provides good redundancy when multiple instances are being run on the same host.
    • CUSTOM Partition Groups: In this scheme, IP addresses, or IP address ranges, are allocated to groups. Partitions are not written to the same group. This is very useful for ensuring partitions are written to different racks or even availability zones.
    • ZONE_AWARE Partition Groups: In this scheme, groups are allocated according to the metadata provided by the Discovery SPI. Partitions are not written to the same group. This is very useful for ensuring partitions are written to availability zones or different racks without providing the IP addresses to the config ahead of time.
    • SPI Partition Groups: In this scheme, groups are allocated according to the implementation provided by the Discovery SPI.

      org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

      How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.replicated=false
  • A Replicated Map is a distributed key-value data structure where the data is replicated to all members in the cluster. It provides full replication of entries to all members for high speed access. A Replicated Map does not partition data (it does not spread data to different cluster members); instead, it replicates the data to all members. Replication leads to higher memory consumption. However, a Replicated Map has faster read and write access since the data is available on all members. Writes could take place on local/remote members in order to provide write-order, eventually being replicated to all other members.

    If you have a large cluster or very high occurrences of updates, the Replicated Map may not scale linearly as expected since it has to replicate update operations to all members in the cluster. Since the replication of updates is performed in an asynchronous manner, Hazelcast recommends you enable back pressure in case your system has high occurrences of updates.

    Note that Replicated Map does not guarantee eventual consistency because there are some edge cases that fail to provide consistency.

    Replicated Map uses the internal partition system of Hazelcast in order to serialize updates happening on the same key at the same time. This happens by sending updates of the same key to the same Hazelcast member in the cluster.

    Due to the asynchronous nature of replication, a Hazelcast member could die before successfully replicating a "write" operation to other members after sending the "write completed" response to its caller during the write process. In this scenario, Hazelcast’s internal partition system promotes one of the replicas of the partition as the primary one. The new primary partition does not have the latest "write" since the dead member could not successfully replicate the update.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.cluster.core.timeout=5
  • Connection timeout in seconds for the TCP/IP config and members joining the cluster.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreClusterProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.core.enable-compression=false
  • Enables compression when default java serialization is used.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.core.enable-jet=true
  • Enable Jet configuration/service on the hazelcast instance. Hazelcast Jet is a distributed batch and stream processing system that can do stateful computations over massive amounts of data with consistent low latency. Jet service is required when executing SQL queries with the SQL service.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.core.enable-management-center-scripting=true
  • Enables scripting from Management Center.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreProperties.

    How can I configure this property?

  • cas.authn.throttle.hazelcast.core.license-key=
  • Hazelcast enterprise license key.

    org.apereo.cas.configuration.model.support.hazelcast.HazelcastCoreProperties.

    How can I configure this property?

  • cas.authn.throttle.core.app-code=CAS
  • Application code used to identify this application in the audit logs.

    org.apereo.cas.configuration.model.support.throttle.ThrottleCoreProperties.

    How can I configure this property?

  • cas.authn.throttle.core.username-parameter=
  • Username parameter to use in order to extract the username from the request.

    org.apereo.cas.configuration.model.support.throttle.ThrottleCoreProperties.

    How can I configure this property?

  • cas.authn.throttle.failure.code=AUTHENTICATION_FAILED
  • Failure code to record in the audit log. Generally this indicates an authentication failure event.

    org.apereo.cas.configuration.model.support.throttle.ThrottleFailureProperties.

    How can I configure this property?

  • cas.authn.throttle.failure.range-seconds=-1
  • Period of time in seconds for the threshold rate.

    org.apereo.cas.configuration.model.support.throttle.ThrottleFailureProperties.

    How can I configure this property?

  • cas.authn.throttle.failure.threshold=-1
  • Number of failed login attempts for the threshold rate.

    org.apereo.cas.configuration.model.support.throttle.ThrottleFailureProperties.

    How can I configure this property?

  • cas.authn.throttle.failure.throttle-window-seconds=0
  • Indicates the number of seconds the account should remain in a locked/throttled state before it can be released to continue again. If no value is specified, the calculated failure threshold and rate will hold.

    This setting supports the java.time.Duration syntax [?].

    org.apereo.cas.configuration.model.support.throttle.ThrottleFailureProperties.

    How can I configure this property?
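
    Putting these pieces together, a minimal overlay sketch for failure throttling by IP address and username might resemble the following. The values are illustrative, and the username parameter below assumes the default CAS login form field:

    # Track failures per IP address and username; the parameter name assumes
    # the default login form field used by CAS.
    cas.authn.throttle.core.username-parameter=username
    # Throttle once more than 2 failures occur within 10 seconds ...
    cas.authn.throttle.failure.threshold=2
    cas.authn.throttle.failure.range-seconds=10
    # ... and keep the source throttled for 60 seconds before releasing it.
    cas.authn.throttle.failure.throttle-window-seconds=60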

    Actuator Endpoints

    The following endpoints are provided by CAS:

    • Get throttled authentication records.
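
    To inspect these records over HTTP, the corresponding actuator endpoint must be exposed and enabled like any other CAS actuator endpoint. The following is a minimal sketch that assumes the endpoint identifier is throttles; verify the exact endpoint id and required access rules against the actuator documentation for your CAS version:

    # Expose and enable the (assumed) throttles actuator endpoint.
    management.endpoints.web.exposure.include=throttles
    management.endpoint.throttles.enabled=true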


    Throttling Strategies

    The following throttling strategies are offered by CAS.

    • IP Address: Uses a memory map to prevent successive failed login attempts from the same IP address.
    • IP Address and Username: Uses a memory map to prevent successive failed login attempts for a username from the same IP address.
    • JDBC: See this guide.
    • MongoDb: See this guide.
    • Redis: See this guide.
    • Hazelcast: See this guide.

    High Availability

    All of the throttling components are suitable for a CAS deployment that satisfies the recommended HA architecture. In particular, deployments with multiple CAS nodes behind a load balancer configured with session affinity can use either in-memory or inspektr components. It is instructive to discuss the rationale. Since load balancer session affinity is determined by source IP address, which is the same criterion by which throttle policy is applied, an attacker from a fixed location should be bound to the same CAS server node for successive authentication attempts. A distributed attack, on the other hand, where successive requests would be routed indeterminately, would cause haphazard tracking for in-memory CAS components since attempts would be split across N nodes. However, since the source varies, accurate accounting would be pointless anyway: the throttling components themselves assume a constant source IP for tracking purposes. The login throttling components are therefore not sufficient for detecting or preventing a distributed password brute-force attack.