Test Process

This page documents the steps that a CAS developer/contributor should take for testing a CAS server deployment during development. For additional instructions and guidance on the general build process, please see this page.

Contributions

Patches submitted to the CAS codebase in the form of pull requests must pass all automated unit and integration tests, and/or provide adequate unit or integration tests for the proposed changes. In the absence of appropriate test cases, the contribution will most likely not be accepted into the codebase and may ultimately be closed.

Test Cases

The following types of test cases exist in the project and should be reviewed by CAS developers and contributors.

Remember

If you are about to describe a problem, put together a test case that demonstrates the issue concretely and in an automated fashion, in a reasonably isolated environment and without manual instructions wherever possible. Descriptive instructions that explain what one must do by hand to reproduce an issue or environment (i.e. go here, click there, wait 3 seconds, etc.) are not as effective or acceptable as evidence of an issue, and may require a significant investment of time and research to get to a root cause. Save yourself and everyone else time and headache, and put together automated, repeatable, verifiable reproducers.

Unit Tests

Unit tests are composed of small pieces of work or functionality that generally can be tested in isolation. They are created as Java test classes, and individual test scenarios are typically annotated with the @Test annotation and then executed by the test framework.

For example, a src/main/java/Extractor.java type of component would have its test cases inside a src/test/java/ExtractorTests.java test class.
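As a minimal sketch, and assuming the JUnit 5 framework, such a test class might look as follows; the Extractor API, the tag value and the assertion are illustrative only:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

@Tag("Simple")
public class ExtractorTests {
    @Test
    public void verifyOperation() {
        // Exercise the component in isolation; no external systems are required.
        var extractor = new Extractor();
        assertEquals("expected", extractor.extract("input"));
    }
}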

Integration Tests

In some scenarios, tests also run against databases, external systems or APIs to verify the functionality and correctness of an integration. For example, a MongoDbTicketRegistry type of component requires a running MongoDb instance, along with special markup and annotations so that its tests only execute when that external instance is up and running. Structurally speaking, such tests are almost identical to plain vanilla unit tests and may only contain additional decorations and annotations, depending on the test system.
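As an illustration, such a test might be structured as follows. This is a sketch only: it assumes JUnit 5's @Tag annotation is used to mark the test category and that the MongoDb Java driver is available on the test classpath; the tag value, class body and connection string are assumptions rather than actual project code:

import static org.junit.jupiter.api.Assertions.*;

import com.mongodb.client.MongoClients;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

@Tag("MongoDb")
public class MongoDbTicketRegistryTests {
    @Test
    public void verifyRegistryIsReachable() {
        // Talks to a real MongoDb instance, typically provisioned via Docker,
        // rather than a mock; structurally, this remains an ordinary JUnit test.
        try (var client = MongoClients.create("mongodb://localhost:27017")) {
            assertNotNull(client.listDatabaseNames().first());
        }
    }
}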

Functional Tests

Functional tests verify combinations of scenarios and behaviors from the perspective of an end user. Such tests are typically composed of an execution script and scenario, and may involve a headless browser to run through a flow, verify data elements on the screen, etc. This category of tests is usually very effective at reproducing scenarios related to an issue or possible defect, and can be very helpful in troubleshooting and diagnosis.

Testing Modules

To test the functionality provided by a given CAS module, execute the following steps:

  • For the tomcat, undertow or jetty web application, add the module reference to the webapp.gradle build script of the web application you intend to run:

implementation project(":support:cas-server-support-modulename")

Alternatively, pass the required modules automatically:

bc ldap,x509

…where bc is an alias for building CAS.

  • Prepare the embedded container to run and deploy the web application.
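As a hypothetical example, and assuming the standard Spring Boot Gradle tasks are wired up for the chosen web application module, the Tomcat web application might be launched with:

./gradlew :webapp:cas-server-webapp-tomcat:bootRun

Consult the build process documentation referenced above for the exact tasks and aliases used to run the web application.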

Unit / Integration Testing

To simplify the test execution process, you may take advantage of the testcas.sh script found at the root of the repository, as follows:

# chmod +x ./testcas.sh
./testcas.sh --category <category> [--test <test-class>] [--debug] [--with-coverage]
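For example, to run the MongoDb-related tests sketched earlier (the category name here is an assumption; use --help to list the categories the script actually supports):

./testcas.sh --category MongoDb --test MongoDbTicketRegistryTests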

To learn more about the script, use:

./testcas.sh --help

All unit and integration tests are executed by the continuous integration system.

Code Coverage & Metrics

Code coverage metrics are collected and reported by the following platforms:

  • Codacy
  • SonarCloud
  • CodeCov

Quality metrics are collected and reported by the following platforms:

  • Codacy
  • SonarCloud (quality gate and maintainability)

Browser & Functional Testing

Automated browser testing is done via the Puppeteer framework. Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol and runs headless by default.

Functional tests start by generating a plain CAS overlay as a baseline that is able to run under HTTPS using a pre-generated keystore. This overlay is supplied with the test scenario configuration that specifies the required modules, properties, etc. to use when CAS is deployed inside an embedded Apache Tomcat container. Once CAS is running, the Puppeteer script for the given test scenario is executed by Node to verify specific functionality such as successful logins, generation of tickets, etc.

All functional and browser tests are executed by the continuous integration system.

To install Puppeteer once:

npm i -g puppeteer

To help simplify the testing process, you may use the following bash function in your ~/.profile:

function pupcas() {
  cd /path/to/cas
  scenario=$1
  shift 1
  ./ci/tests/puppeteer/run.sh --scenario ./ci/tests/puppeteer/scenarios/"$scenario" "$@"
}

…which can later be invoked as:

pupcas <scenario-name>

To see the list of available test scenarios:

./gradlew --build-cache --configure-on-demand --no-daemon -q puppeteerScenarios

Remote debugging may be available on port 5000. To successfully run tests, you need to make sure jq is installed.

The following command-line options are supported for test execution:

Flag Description
--scenario The scenario name, typically modeled after the folder name that contains the test.
--install-puppeteer, --install, --i Install or update Puppeteer node modules.
--debug, --d Launch the CAS web application with remote debugging enabled.
--debug-port, --port Specify the remote debugging port, typically 5000.
--debug-suspend, --suspend, --s Suspend the CAS web application on startup until a debugger session connects.
--rebuild, --r, --build Rebuild the CAS web application, disregarding previously-built WAR artifacts.
--dry-run, --y Launch the CAS web application configured in the test without actually running the test.
--headless, --h Launch the test scenario with a headless browser.
--rerun, --resume, --r Launch and assume CAS is already running from a previous attempt.
--hbo A combination of --headless, --build and Gradle’s --offline flag.
--bo A combination of --build and Gradle’s --offline flag.
--hr A combination of --headless and --resume.
--ho A combination of --headless and Gradle’s --offline flag.
--hb A combination of --headless and --build.
--body, --boyd A combination of --build, --debug, --dry-run and Gradle’s --offline flag.
--boy A combination of --build, --dry-run and Gradle’s --offline flag.

For example, the login-success test scenario may be run using:

pupcas login-success --hbo

All other build-related options and flags (e.g. --info) may be passed directly to the script.

Test Scenario Anatomy

Each test scenario is composed of the following files:

  • script.js: The main driver of the test, which launches a headless browser via Puppeteer to execute the scenario. The basic outline of the test script may be:
const puppeteer = require('puppeteer');
const cas = require('../../cas.js');

(async () => {
    const browser = await puppeteer.launch(cas.browserOptions());
    const page = await cas.newPage(browser);
    
    // Do stuff and check/assert behavior...
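    // For example, drive the default CAS login form. The selectors and the
    // casuser/Mellon credentials below are illustrative and assume a plain
    // CAS overlay with its default account store:
    await page.goto("https://localhost:8443/cas/login");
    await page.type("#username", "casuser");
    await page.type("#password", "Mellon");
    await page.keyboard.press("Enter");
    await page.waitForNavigation();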
    
    await browser.close();
})();
  • script.json: The optional test configuration in JSON that includes necessary CAS modules, properties, and other specifics.

A basic outline of the test configuration may be:

{
  "dependencies": "module1,module2,module3,...",
  "conditions": {
    "docker": "true"
  },
  "properties": [
    "--cas.server.something=something"
  ],
  "jvmArgs": "...",
  "SPRING_APPLICATION_JSON": {},
  "readyScript": "${PWD}/ci/tests/puppeteer/scenarios/${SCENARIO}/ready.sh",
  "initScript": "${PWD}/ci/tests/puppeteer/scenarios/${SCENARIO}/init.sh",
  "exitScript": "${PWD}/ci/tests/puppeteer/scenarios/${SCENARIO}/exit.sh",
  "instances": 2,
  "instance1": {
    "properties": [ ]
  },
  "instance2": {
    "properties": [ ]
  }
}

The only required bit in the test JSON configuration is typically the dependencies attribute.
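For instance, a minimal scenario configuration that pulls in a single module might be as follows; the module token is illustrative and should match the naming scheme used by the actual scenarios:

{
  "dependencies": "ldap"
}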

macOS Firewall Popup

To allow the firewall to accept incoming network connections for Chromium on macOS, you may run the following commands:

chromium="/path/to/cas/ci/tests/puppeteer/node_modules/puppeteer/.local-chromium"
sudo codesign --force --deep --sign - "${chromium}"/mac-*/chrome-mac/Chromium.app

Continuous Integration

Unit and integration tests are automatically executed by the CAS CI system, GitHub Actions.