Mining Android App Usages for Generating Actionable GUI-based Execution Scenarios - MSR 2015 Online Appendix
This web page is a companion to our MSR 2015 paper entitled "Mining Android App Usages for Generating Actionable GUI-based Execution Scenarios".
1. The MonkeyLab Framework: Record->Mine->Generate->Validate
* The framework image is also available as a PDF

> Tools
The language models implementation is available on GitHub in the Kramer repository. Other tools, used in particular for static analysis, are:

> Depth-First-Search Exploration
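The DFS exploration described in this section can be sketched as follows. This is an illustrative model only, not the actual MonkeyLab implementation: the real tool queries UIAutomator on a live device, whereas here the GUI is a hypothetical in-memory map from screens to their actionable components.

```python
# Illustrative sketch of the DFS exploration strategy. Screens are modeled
# as dicts mapping component ids to the screen reached after exercising
# that component; the real tool instead asks UIAutomator for the
# clickable, long-clickable, and checkable components on the device.

def dfs_explore(gui, start):
    """Exercise every actionable component reachable from `start`."""
    unvisited = [(start, c) for c in gui[start]]   # stack of (screen, component)
    visited = []                                   # successfully executed components
    while unvisited:
        screen, component = unvisited.pop()
        if (screen, component) in visited:
            continue
        visited.append((screen, component))
        # "Exercising" the component yields a new window state; push that
        # state's actionable components onto the unvisited stack.
        new_screen = gui[screen][component]
        for c in gui.get(new_screen, {}):
            if (new_screen, c) not in visited:
                unvisited.append((new_screen, c))
    return visited

# Hypothetical two-screen app:
gui = {
    "main":   {"btn_add": "editor", "chk_done": "main"},
    "editor": {"btn_save": "main", "btn_cancel": "main"},
}
executed = dfs_explore(gui, "main")
```

The exploration terminates when the unvisited stack is empty, i.e., every actionable component discovered along the way has been exercised once.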
We implemented a depth-first-search exploration using the UIAutomator framework to extract GUI-component information from the current device state. UIAutomator provides information for every GUI component that currently exists on a device's screen. Starting from the initial application screen, we extract all of the available components and filter for instances that are clickable, long-clickable, or checkable. We then populate an unvisited stack with all of the visible components. Our DFS algorithm iterates over the unvisited stack, exercising each component and storing the successfully executed components on another stack, called the visited stack. Finally, we update the unvisited stack with the new window state and repeat the process until the stack is empty.

2. Data (APPs)
APP | Version | LOC | Activities | Methods | GUI components | Low-level events in logs | Mined GUI-level events
--- | --- | --- | --- | --- | --- | --- | ---
Car Report | 2.9.1 | 7K+ | 6 | 764 | 142 | 23.4K+ | 1.5K+
GnuCash | 1.5.3 | 10K+ | 6 | 1,027 | 275 | 14.7K+ | 895
Mileage | 3.1.1 | 10K+ | 51 | 1,139 | 99 | 9.8K+ | 783
My Expenses | 2.4.0 | 24K+ | 17 | 1,778 | 693 | 20.3K+ | 854
Tasks | 1.0.12 | 10K+ | 4 | 561 | 200 | 70.6K+ | 1.7K+
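The mined GUI-level events (rightmost column above) feed MonkeyLab's generation step, which models event sequences with language models. The following is a minimal bigram sketch of that idea, not the implementation in the Kramer repository; the event tokens and traces are hypothetical.

```python
# Minimal bigram language model over GUI-level event tokens, in the spirit
# of MonkeyLab's Mine/Generate steps (illustrative only; the actual models
# live in the Kramer repository). Events are opaque tokens like "click#add".
import random
from collections import defaultdict

def train_bigram(traces):
    """Count successor events for each event across recorded traces."""
    model = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for prev, nxt in zip(trace, trace[1:]):
            model[prev][nxt] += 1
    return model

def generate(model, start, length, rng):
    """Sample a plausible event sequence by walking the bigram counts."""
    seq = [start]
    while len(seq) < length and model[seq[-1]]:
        events, weights = zip(*model[seq[-1]].items())
        seq.append(rng.choices(events, weights=weights)[0])
    return seq

# Hypothetical recorded traces:
traces = [
    ["open#app", "click#add", "type#amount", "click#save"],
    ["open#app", "click#add", "click#cancel"],
    ["open#app", "click#list", "click#item", "click#back"],
]
model = train_bigram(traces)
scenario = generate(model, "open#app", 5, random.Random(42))
```

Every consecutive pair in a generated scenario was observed in some recorded trace, which is what makes the generated sequences actionable rather than random.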
3. Examples
In the following we present examples of actionable scenarios generated automatically by MonkeyLab. For each app, we present a video reproducing two actionable scenarios on a Nexus 7 tablet. The scenarios were reproduced automatically, using the sequence of input commands generated by MonkeyLab. In addition to the video, we provide the input commands (specific to the Nexus 7) and the GUI-level event sequence representing the scenario (also generated by MonkeyLab). A YouTube playlist with all the videos is available: PLAYLIST

> GnuCash
Scenario 1: event sequence (GUI level) and input commands
Scenario 2: event sequence (GUI level) and input commands
> My Expenses
Scenario 1: event sequence (GUI level) and input commands
Scenario 2: event sequence (GUI level) and input commands
> Tasks
Scenario 1: event sequence (GUI level) and input commands
Scenario 2: event sequence (GUI level) and input commands
> Mileage
Scenario 1: event sequence (GUI level) and input commands
Scenario 2: event sequence (GUI level) and input commands
> Car Report
Scenario 1: event sequence (GUI level) and input commands
Scenario 2: event sequence (GUI level) and input commands
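The input commands linked above are replayed on a device via `adb shell input`. A hedged sketch of such a replay driver follows; the actual MonkeyLab command format is not shown here, and each line of the commands list is assumed to already be an `input` payload (e.g. `tap 540 960`). Coordinates are device-specific, which is why the commands above target a Nexus 7.

```python
# Hypothetical replay driver for MonkeyLab-style input commands (assumed
# format: one `adb shell input ...` payload per line, e.g. "tap 540 960",
# "text expenses", "keyevent 66"). dry_run=True prints the commands
# instead of requiring an attached device.
import shlex
import subprocess
import time

def replay(commands, delay=0.5, dry_run=True):
    """Send each input command to the device, pausing between events."""
    sent = []
    for line in commands:
        argv = ["adb", "shell", "input"] + shlex.split(line)
        if dry_run:
            print(" ".join(argv))      # inspect without a device attached
        else:
            subprocess.run(argv, check=True)
            time.sleep(delay)          # let the GUI settle between events
        sent.append(argv)
    return sent

sent = replay(["tap 540 960", "text expenses", "keyevent 66"])
```

The fixed delay between events is a simplification; reliable replay generally needs to wait for the GUI to reach a stable state before the next event.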
4. Results
>> Accumulated Coverage
The figures below depict the accumulated statement coverage achieved by the users and by different testing approaches: Android UI monkey, depth-first-search (DFS) exploration, the best language model in serial mode, and MonkeyLab in interactive mode (I-LM).





>> Events and methods
In addition to the coverage analysis, we identified, for each pair of strategies, the set of events executed only by Strategy A and not by Strategy B.
The heatmap below depicts the number of source-code methods in which coverage was higher with Strategy A than with Strategy B.

5. Authors
- Mario Linares-Vásquez
- The College of William and Mary, VA, USA.
E-mail: mlinarev at cs dot wm dot edu
- Martin White
- The College of William and Mary, VA, USA.
E-mail: mgwhite at cs dot wm dot edu
- Carlos Bernal-Cárdenas
- The College of William and Mary, VA, USA.
E-mail: cebernal at cs dot wm dot edu
- Kevin Moran
- The College of William and Mary, VA, USA.
E-mail: kpmoran at cs dot wm dot edu
- Denys Poshyvanyk
- The College of William and Mary, VA, USA.
E-mail: denys at cs dot wm dot edu