Qt-contributors-summit-2011-PerformanceQA
Latest revision as of 16:43, 6 January 2017


This session is about quality assuring the performance of Qt, with a special emphasis on preventing performance regressions.

First, we identify the fundamental challenges that any system attempting to detect (and ultimately minimize the occurrence of) performance regressions in Qt will have to deal with.

Second, a couple of practical ways to facilitate contributions to this effort are proposed.

The goal of the session is to get feedback and collect additional ideas.

----

=Background / proposal=

* Scope: Preventing performance regressions
* Main requirement: Report a performance regression if and only if there really is one in Qt
** Avoid false positives/negatives
* Challenges:
** Coverage (which parts of Qt are tested for performance?)
** Stability (of the test environment or the test itself)
** Precision (ideally every commit would be tested, but this requires more resources)
** Exceptions (sometimes performance needs to be sacrificed)
* Possible contributions:
** Submit a patch to Qt that does not worsen performance
** Submit a patch to Qt that improves performance (e.g. by fixing an existing regression)
** Submit a new performance test to increase the coverage
* Possible facilitations:
** Provide access to QA reports:
*** Known performance regressions
*** Current performance test coverage
** Provide feedback on commits:
*** "Sorry, your commit introduced a performance regression …"
*** "Congratulations! Your commit improved performance …"
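The "feedback on commits" idea could be sketched as a baseline comparison with a tolerance band, so that normal run-to-run noise does not trigger a false positive. This is a hypothetical illustration, not part of any actual Qt QA infrastructure: the timings, the variable names and the 5% threshold are made-up assumptions.

```shell
#!/bin/sh
# Hypothetical regression check: compare a benchmark result on the commit
# under test against a stored baseline. All numbers are illustrative.
baseline_ms=120      # median walltime of the benchmark on the last good commit
current_ms=131       # median walltime on the commit under test
tolerance_pct=5      # ignore fluctuations below this, to avoid false positives

# Anything above baseline plus tolerance counts as a regression.
limit=$(( baseline_ms + baseline_ms * tolerance_pct / 100 ))

if [ "$current_ms" -gt "$limit" ]; then
  echo "Sorry, your commit introduced a performance regression (${baseline_ms}ms -> ${current_ms}ms)"
elif [ "$current_ms" -lt "$baseline_ms" ]; then
  echo "Congratulations! Your commit improved performance (${baseline_ms}ms -> ${current_ms}ms)"
else
  echo "No significant change"
fi
```

With the sample numbers above, the limit works out to 126 ms, so the 131 ms run is flagged as a regression; a tighter or looser tolerance shifts the trade-off between false positives and false negatives.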

=Comments / discussion=

* How do we ensure that people submit performance benchmarks under Open Governance?
* Counting instruction reads may not be a useful metric in some cases (e.g. applications involving spinlocks).
* The start-up time of an application is an important aspect of performance.
* There was a promise around Qt 4 that Qt start-up time would be reduced, which was not kept. Is start-up time verified during development?
* Perceived performance is very different from real performance. How can Qt help facilitate improving the perceived quality?
* Use git to pinpoint the commit that introduced a performance regression (bisect).
* Coverage: for performance, this is tricky. Function coverage is not reliable, since performance can vary with different parameters. 3% coverage could be perfectly acceptable for performance benchmark tests, if the right 3% has been covered.
* Could Qt allow performance profiles for different platforms, allowing Qt to behave differently depending on the platform it runs on?
* To find out where performance testing should be focused, we should gather usage statistics from running Qt applications, to see where Qt spends most of its time during execution.
* Improving performance in one part of Qt isn't always good, as it could introduce undesirable side effects.
* Likewise, sacrificing performance in one part of Qt is sometimes necessary.
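The git bisect suggestion can be demonstrated end-to-end in a throw-away repository. The "benchmark" below is faked by a marker file standing in for a real performance check script, so the demo runs without Qt; everything except the `git bisect run` mechanism itself is an illustrative assumption.

```shell
#!/bin/sh
# Demo: let `git bisect run` find the first "slow" commit automatically.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email qa@example.com
git config user.name "QA Bot"

# Build five commits; from commit 3 onwards, a marker file simulates a
# performance regression being present in the tree.
for i in 1 2 3 4 5; do
  echo "$i" > file
  if [ "$i" -ge 3 ]; then touch slow; fi
  git add -A
  git commit -qm "commit $i"
done

# HEAD (commit 5) is known bad, HEAD~4 (commit 1) is known good.
git bisect start HEAD HEAD~4 >/dev/null

# The "benchmark": exit 0 (good) when the marker is absent, 1 (bad) otherwise.
# In real use this would run the benchmark and compare against a baseline.
first_bad=$(git bisect run sh -c '! test -f slow' | sed -n 's/ is the first bad commit$//p')

git bisect reset >/dev/null 2>&1
git log --oneline -1 "$first_bad"
```

In a real setup the `sh -c '! test -f slow'` step would be replaced by a script that builds Qt, runs the benchmark, and exits non-zero when the result is worse than the baseline; `git bisect run` then narrows the search to the first offending commit in logarithmically many steps.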