
Add automation for lts testing: clickhouse-odbc, grafana, superset, dbeaver #104

Open

Selfeer wants to merge 11 commits into main from feature/automate_release_tests

Conversation

Selfeer (Collaborator) commented Feb 17, 2026

Add LTS testing skills and automation framework for web/desktop UI tools

Summary

  • Introduce a new lts/ folder structure for LTS (Long-Term Support) testing automation covering four integration targets: lts/clickhouse-odbc, lts/grafana, lts/superset, and lts/dbeaver
  • Each sub-suite follows the established TestFlows test structure (modeled after altinity/clickhouse-regression and altinity/clickhouse-grafana) with a top-level regression.py, requirements/, steps/, and tests/ layout
  • Add AGENT.md documenting the test structure conventions, including: requirements definition, requirement-to-test-scenario mapping (ideally 1:1, or one higher-level requirement per feature), test step organization, and test scenario authoring

Scope per target

| Target | Type | Automation approach |
| --- | --- | --- |
| clickhouse-odbc | Driver/API | Fully automated via TestFlows |
| grafana | Web UI | Fully automated (Selenium/Playwright), following existing altinity/clickhouse-grafana UI test patterns |
| superset | Web UI | Fully automated (Selenium/Playwright) web UI tests |
| dbeaver | Desktop UI | Semi-automated: agent-driven or manual steps where desktop GUI automation is limited |
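For the web UI targets, a reusable browser step might look like the following sketch. This is a hypothetical illustration assuming Playwright's Python sync API (`pip install playwright`); the URL path, credentials, and CSS selectors are invented placeholders, not the actual suite code.

```python
# Hypothetical sketch of a reusable web UI step for the superset suite.
# Assumes Playwright's sync API; selectors and URLs are illustrative only.
from playwright.sync_api import sync_playwright


def login_to_superset(base_url: str, username: str, password: str) -> bool:
    """Open Superset, log in, and report whether the welcome page loaded."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # Navigate to the login form and submit credentials.
        page.goto(f"{base_url}/login/")
        page.fill("input[name=username]", username)
        page.fill("input[name=password]", password)
        page.click("input[type=submit]")
        # Wait until we are redirected off the login page.
        page.wait_for_url(f"{base_url}/superset/welcome/")
        ok = "/superset/welcome/" in page.url
        browser.close()
        return ok
```

A step like this would live under `steps/` so scenarios can compose it rather than repeat browser setup.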

Test structure (per AGENT.md)

```
lts/<target>/
├── regression.py          # Main entry point, xfails/ffails, argparser
├── requirements/
│   ├── requirements.md    # Human-readable SRS requirements
│   └── requirements.py    # TestFlows requirement objects
├── steps/                 # Reusable test step functions (actions, UI interactions)
│   └── ...
└── tests/                 # Test scenarios (map 1:1 to requirements where possible)
    └── ...
```
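The entry point described above could be sketched roughly as follows. This is a hypothetical minimal `regression.py` assuming the TestFlows framework (`pip install testflows`); the module name, the loaded feature, and the commented xfail entry are placeholders, not the PR's actual code.

```python
# Hypothetical sketch of a minimal lts/<target>/regression.py.
# Assumes the TestFlows framework; names below are illustrative placeholders.
from testflows.core import *

# Known-failure patterns mapped to (result, reason) pairs.
xfails = {
    # "datasource/:/flaky scenario": [(Fail, "tracking issue link")],
}


@TestModule
@Name("grafana")
@XFails(xfails)
def regression(self):
    """LTS regression suite entry point for one target."""
    Feature(run=load("tests.datasource", "feature"))


if main():
    regression()
```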

Key conventions

  • Requirements: Each requirement has a unique ID and is traceable to one or more test scenarios
  • Requirement-to-scenario mapping: Ideally 1:1 for granular requirements; one higher-level requirement maps to a feature-level test suite
  • Test steps: Reusable, composable step functions (Given/When/Then) shared across scenarios
  • Test scenarios: Self-contained, independently runnable tests that explicitly link back to their requirement(s)
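The conventions above might be expressed in TestFlows roughly as follows. This is a hedged sketch assuming the TestFlows API (`Requirement`, `Requirements`, `TestScenario`); the requirement ID, description, and step bodies are invented for illustration.

```python
# Hypothetical sketch linking a requirement object to a test scenario.
# Assumes the TestFlows framework; the requirement ID and steps are placeholders.
from testflows.core import *

# requirements/requirements.py: one uniquely identified, traceable requirement.
RQ_LTS_Grafana_DataSource_Connect = Requirement(
    name="RQ.LTS.Grafana.DataSource.Connect",
    version="1.0",
    description="Grafana SHALL connect to ClickHouse through the data source plugin.",
)


# tests/datasource.py: a self-contained scenario that links back to its requirement.
@TestScenario
@Requirements(RQ_LTS_Grafana_DataSource_Connect)
def connect_datasource(self):
    with Given("a running Grafana instance"):
        pass  # environment setup step from steps/ would go here
    with When("I add a ClickHouse data source"):
        pass  # reusable UI interaction step
    with Then("the connection test succeeds"):
        pass  # assertion step
```

The `@Requirements` decorator is what makes requirement-to-scenario traceability mechanically checkable rather than a documentation convention alone.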

Test plan

  • Verify lts/ folder structure is created with all four sub-suites
  • Confirm each sub-suite has regression.py, requirements/, steps/, tests/
  • Validate AGENT.md documents the test structure, requirements mapping, and conventions
  • Ensure lts/grafana follows the pattern from altinity/clickhouse-grafana UI tests
  • Review requirement-to-scenario traceability in each suite
