Writing good tests

From Qt Wiki (revision as of 13:31, 17 December 2015)

Aspects to consider

  • The tests are run on CI machines, potentially under heavy load.
  • Tests might be run in parallel (depending on the switch CONFIG += parallel_test).
  • Tests might be run under various desktop systems.
  • The graphics setup is usually the GL-enabling layer of some virtual machine.

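The parallel-test switch mentioned above is declared in the test's qmake project file. A minimal sketch, assuming a hypothetical test named tst_example (project and file names are illustrative only):

```qmake
# tst_example.pro -- hypothetical test project file
CONFIG += testcase       # register the binary with 'make check'
CONFIG += parallel_test  # declare the test safe to run in parallel with others
QT += testlib
TARGET = tst_example
SOURCES += tst_example.cpp
```

Only tests that do not compete for global resources (fixed ports, shared files, the screen) should declare parallel_test.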
Recommendations

General

  • Do not use hard-coded timeouts (qWait) to wait for some condition to become true. Use QTRY_VERIFY and QTRY_COMPARE instead.
  • QVERIFY2 with an error message is preferable over a plain QVERIFY in order to obtain messages when something fails:
QVERIFY2(a < 2, (QByteArray::number(a) + QByteArrayLiteral(" is not less than 2")).constData());
  • Do not re-use instances of the class under test across several tests. Test instances (for example, widgets) should not be member variables of the test class; preferably, instantiate them on the stack to ensure proper cleanup even if a test fails, so that tests do not interfere with each other.
  • Tests should ensure their resources are cleaned up even if a test fails (consider that a failed QCOMPARE/QVERIFY executes a return statement). Classes should be instantiated on the stack, managed by a QScopedPointer, or parented to a QObject whose deletion is guaranteed. It is recommended to check this in the slot cleanup():
void tst_QGraphicsProxyWidget::cleanup() // This will be called after every test function.
{
    QVERIFY(QApplication::topLevelWidgets().isEmpty());
}
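Putting the recommendations above together, a per-test, stack-allocated instance might look like this sketch. It assumes a hypothetical MyWidget class with a sizeLabelText() accessor; the class and slot names are illustrative only, and building it requires Qt:

```cpp
#include <QtTest>

// Hypothetical test slot: the widget lives on the stack, so it is
// destroyed even when a failed QVERIFY/QCOMPARE returns early.
void tst_MyWidget::resizeUpdatesLabel()
{
    MyWidget w;                       // fresh instance per test, not a member variable
    w.show();
    QVERIFY(QTest::qWaitForWindowExposed(&w));

    w.resize(200, 100);
    QTRY_COMPARE(w.sizeLabelText(), QStringLiteral("200x100"));
}   // w is cleaned up here, so cleanup() can verify no top-level widgets remain
```
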

Files, I/O resources

  • Tests should not create files in their build/source directories nor in common folders like the home folder. Use QTemporaryDir and QTemporaryFile and ensure those folders and files are deleted after test execution (that is, all file handles are closed so that automatic deletion works). Always use QVERIFY(temporaryDir.isValid()).
  • Tests should not create file watchers on commonly used folders like home or temporary folders (remember that tests run in parallel). Use QTemporaryDir for this as well.
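A sketch of the QTemporaryDir pattern described above. The test slot and file name are hypothetical, and the code requires a Qt build to compile:

```cpp
#include <QtTest>
#include <QTemporaryDir>
#include <QFile>

void tst_MyIo::writesConfigFile()   // hypothetical test slot
{
    QTemporaryDir dir;              // removed automatically when 'dir' is destroyed
    QVERIFY(dir.isValid());

    QFile file(dir.filePath(QStringLiteral("config.ini")));
    QVERIFY(file.open(QIODevice::WriteOnly));
    QVERIFY(file.write("key=value\n") > 0);
    file.close();                   // close all handles so automatic deletion works
}
```
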

Widgets and Windows

  • If not required for testing purposes, use at most one top-level window on the screen to prevent focus fights. If several windows are required, position them explicitly beside each other.
  • Preferably center top-level windows within QGuiApplication::primaryScreen()->availableGeometry(). Most importantly, do not position windows at (0, 0), since the CI also uses Ubuntu's Unity, which has a taskbar on the left.
  • Top-level windows with decoration should be at least 160x40; otherwise, a warning will appear on Windows 8.
  • Beware of interference from the current cursor position. If necessary, move the cursor away from the window under test using QCursor::setPos(). Note that more sophisticated use of the QCursor API needs to be enclosed within #ifndef QT_NO_CURSOR.
  • If a widget/window needs to be visible on the screen or active for a test to succeed, use:
 QVERIFY(QTest::qWaitForWindowExposed(&view));
 QVERIFY(QTest::qWaitForWindowActive(&view));
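The window-placement advice above can be collected into a small helper. This is a sketch; the helper name showCentered is made up, and the code requires Qt to compile:

```cpp
#include <QtTest>
#include <QGuiApplication>
#include <QScreen>
#include <QWidget>

// Hypothetical helper: center a top-level widget within the primary screen's
// available geometry, show it, and wait until the window system exposes it.
static bool showCentered(QWidget &w)
{
    const QRect available = QGuiApplication::primaryScreen()->availableGeometry();
    w.move(available.center() - QPoint(w.width() / 2, w.height() / 2));
    w.show();
    return QTest::qWaitForWindowExposed(&w);
}
```

A test would then write QVERIFY(showCentered(view)); before interacting with the window.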

Practical hints

  • On Windows, a crashing test will not display a dialog prompting you to attach a debugger (to prevent the CI from getting stuck). The dialog can be activated by passing the command line option -nocrashhandler.
  • It is possible to run a single test by passing its name on the command line (see Documentation). If you pass a substring, the test will display the matching test functions.
  • Qt Creator contains a Perl script, scripts/test2tasks.pl, that converts testlib text output into a tasks file that is shown in the Build Issues pane and can be used to quickly navigate to test failures.
  • An overview of flaky/failing tests can be found here.
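The command-line hints above combine as follows; a sketch in which the test binary and function names are made up:

```shell
# Run only one test function of a test binary
# (a data tag can be appended as 'function:tag')
./tst_qwidget resizeEvent

# Let a crash show the debugger-attach dialog on Windows
# by disabling the crash handler
./tst_qwidget -nocrashhandler
```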

Hints for analyzing test flakiness (GUI tests)

  • Does the test create unrelated windows that overlap and interfere? For example, most existing widget tests still use a member-variable test widget shared between the tests. If a single test then instantiates another widget, this can lead to focus issues. As stated above, the member variable should be replaced by per-test widget instances. In some cases, windows created by skipped/failed tests leak.
  • Does the test create windows at random positions (notably on X11)? Such windows might end up in the taskbar area or interfere with notification windows of the OS. As stated above, windows should be centered.
  • Is the test influenced by the mouse cursor position (particularly an issue on Mac)? If so, move the cursor to a well-defined position.