Automating regression testing for evolving GUI software
Title | Automating regression testing for evolving GUI software |
Publication Type | Journal Article |
Year of Publication | 2005 |
Authors | Memon AM, Nagarajan A, Xie Q |
Journal | Journal of Software Maintenance and Evolution: Research and Practice |
Volume | 17 |
Issue | 1 |
Pagination | 27-64 |
Date Published | 2005 |
ISSN | 1532-0618 |
Keywords | daily/nightly builds, event-flow graphs, graphical user interfaces, GUI regression testing, GUI testing, smoke testing, software quality |
Abstract | With the widespread deployment of broadband connections worldwide, software development and maintenance are increasingly performed by multiple engineers, often working around the clock to maximize code churn rates. To ensure rapid quality assurance of such software, techniques such as ‘nightly/daily building and smoke testing’ have become widespread because they often reveal bugs early in the software development process. During these builds, a development version of the software is checked out from the source code repository tree, compiled, linked, and (re)tested with the goal of (re)validating its basic functionality. Although successful for conventional software, smoke tests are difficult to develop and automatically re-run for software that has a graphical user interface (GUI). In this paper, we describe a framework called DART (Daily Automated Regression Tester) that addresses the need for frequent, automated re-testing of GUI software. The key to our success is automation: DART automates everything from structural GUI analysis, smoke-test-case generation, and test-oracle creation to code instrumentation, test execution, coverage evaluation, and the regeneration and re-execution of test cases. Driven by the operating system's task scheduler, DART can run frequently to re-test the GUI software with little input from the developer/tester. We report experimental results on the time taken and memory required for GUI analysis, test-case and test-oracle generation, and test execution. We empirically compare the relative costs of employing different levels of detail in the GUI test oracle. We also report the events and statements covered by the smoke test cases. Copyright © 2005 John Wiley & Sons, Ltd. |
URL | http://onlinelibrary.wiley.com/doi/10.1002/smr.305/abstract |
DOI | 10.1002/smr.305 |
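
The abstract sketches DART's pipeline: smoke test cases are generated as event sequences from an event-flow graph of the GUI, and a test oracle checks the resulting GUI state at a configurable level of detail. The Python sketch below illustrates both ideas under assumed, simplified representations (an adjacency-list event-flow graph and widget-to-property dictionaries); all names and structures here are illustrative assumptions, not DART's actual implementation.

    # Illustrative sketch only: the data structures and names below are
    # assumptions, not DART's API. It shows (a) bounded enumeration of
    # event sequences from an event-flow graph as smoke test cases, and
    # (b) a GUI test oracle applied at two levels of detail.

    def generate_smoke_tests(efg, initial_events, max_length):
        """Enumerate every event sequence of length <= max_length that the
        event-flow graph permits, starting from the GUI's initial events."""
        tests, frontier = [], [[e] for e in initial_events]
        while frontier:
            seq = frontier.pop()
            tests.append(seq)
            if len(seq) < max_length:
                frontier.extend(seq + [nxt] for nxt in efg.get(seq[-1], []))
        return tests

    def oracle_matches(expected, actual, level="modified"):
        """Compare GUI states given as widget -> {property: value} dicts.
        level='modified' checks only the widgets recorded in the expected
        state; level='complete' also rejects any extra widgets."""
        if level == "complete" and set(actual) != set(expected):
            return False
        return all(actual.get(widget, {}).get(prop) == value
                   for widget, props in expected.items()
                   for prop, value in props.items())

    # Hypothetical event-flow graph for a tiny 'File' menu.
    efg = {
        "open_file_menu": ["click_open", "click_exit"],
        "click_open": ["type_name", "click_cancel"],
        "type_name": ["click_ok", "click_cancel"],
    }

    for test in generate_smoke_tests(efg, ["open_file_menu"], max_length=3):
        print(" -> ".join(test))

Bounding the sequence length keeps the smoke suite small enough for a nightly build, and the two oracle levels correspond to the abstract's comparison of oracles at different levels of detail.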