Qt-contributors-summit-2014-Qs2014QmlTest
QmlTest
Michał Sawicz, Michael Zanetti
We've been rather happy with qmltestrunner, and we'd like to show our approach to combining automated and manual QML test code. The thing we've been struggling with is measuring coverage for QML, so that's what we'd like to brainstorm about.
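For context, here is a minimal sketch of what a qmltestrunner-driven QML test file can look like. The file name, the counter component, and the test function are illustrative assumptions, not code shown in the session:

```qml
// tst_counter.qml – illustrative only; run with: qmltestrunner -input tst_counter.qml
import QtQuick 2.0
import QtTest 1.1

Rectangle {
    id: counter
    width: 200; height: 100
    property int clicks: 0

    Text { anchors.centerIn: parent; text: "clicks: " + counter.clicks }
    MouseArea { anchors.fill: parent; onClicked: counter.clicks++ }

    // The same component can be exercised manually by loading it standalone,
    // while qmltestrunner drives the TestCase below for automated runs.
    TestCase {
        name: "CounterTests"
        when: windowShown

        function test_click_increments() {
            mouseClick(counter, counter.width / 2, counter.height / 2)
            compare(counter.clicks, 1)
        }
    }
}
```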
Some notes:
- It's hard to measure test coverage with QML because:
  - Declarative code doesn't really execute anything – it "creates" object instances
  - eval() breaks coverage metrics by adding code at runtime
  - standard coverage tools don't know about QML/JS
- We can deal with declarative code by measuring which types were instantiated (see the sketch after this list).
- We can deal with the eval() problem by agreeing not to use eval().
- We can try to use the QML profiler as a tool to measure coverage.
- The QML profiler currently records only function calls. We want branch/condition or line coverage. That would be possible by collecting more data with the profiler, at the cost of a higher performance impact. Since that overhead is undesirable for ordinary profiling, it should be optional.
- Multi-engine profiling is currently not possible with the command-line profiler, but could be done using EngineControl.
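As a rough illustration of the "measure which types were instantiated" idea above, the sketch below walks an item tree from a QML test and records the class names it encounters. The file name, the collectTypes helper, and the use of toString() to recover type names are assumptions for illustration, not an API or approach agreed on in the session:

```qml
// tst_typecoverage.qml – illustrative only; run with: qmltestrunner -input tst_typecoverage.qml
import QtQuick 2.0
import QtTest 1.1

Item {
    id: root
    width: 100; height: 100

    // component whose instantiated types we want to record
    Rectangle {
        anchors.fill: parent
        Text { text: "hello" }
    }

    TestCase {
        name: "TypeCoverage"

        // Recursively record the class name of every instantiated child item.
        // toString() on a QML object yields e.g. "QQuickRectangle(0x...)",
        // so everything before the "(" is the type's C++ class name.
        function collectTypes(item, seen) {
            seen[item.toString().split("(")[0]] = true
            for (var i = 0; i < item.children.length; ++i)
                collectTypes(item.children[i], seen)
            return seen
        }

        function test_instantiated_types() {
            var seen = collectTypes(root, {})
            verify("QQuickRectangle" in seen, "a Rectangle was instantiated")
            verify("QQuickText" in seen, "a Text was instantiated")
        }
    }
}
```

A real implementation would more likely hook into the engine (for example via the profiler) rather than walk item trees from test code, but this shows the kind of data such instantiation coverage would collect.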
(Something is wrong with command-line handling when the test runner is used in conjunction with qmlprofiler – it didn't work in the demo.)