
List of Figures

Chapter 1

Figure  Description
1.1  The Prolog grandparent program (adapted from Bratko, 1986).
1.2  PTP trace of the query grandparent(X, Z).
1.3  TPM coarse-grained (left) and fine-grained (right) views of the query grandparent(X, Z).

Chapter 2

Figure  Description
2.1  Map of the systems covered.
2.2  Snapshot of the PECAN system from Reiss (1985).
2.3  BALSA showing a bar chart animation of the selection sort algorithm.
2.4  The TANGO framework from Stasko (1989).
2.5  Using the DANCE algorithm demonstration environment to create an animation from Hudson and Stasko (1993).
2.6  The Viz architecture from Domingue (1994).
2.7  TRI taken from Domingue (1993).
2.8  A ZEUS animation of the binpacking algorithm.
2.9  The PAVANE animation construction methodology taken from Roman (1992).
2.10  TreeViz visualization adapted from Johnson (1992).
2.11  A series of images taken from a ZEUS-3D animation of a heapsort algorithm.
2.12  A POLKA 3D representation of a quicksort algorithm from Stasko and Kreamer (1992).
2.13  An overview of software visualization taxonomies.
2.14  The APT tracer adapted from Rajan (1990).
2.15  BPM notation from Morishita and Namao (1986).
2.16  An example DEWLAP visualization adapted from Dewar and Cleary (1986).
2.17  An example CODA session adapted from Plummer (1988).

Chapter 3

Figure  Description
3.1  Type declaration showing inter-dependency from Green (1989).
3.2  Discourse model from Taylor (1988).
3.3  Program showing single pointer misconception.
3.4  Program showing one pointer per predicate misconception.
3.5  List termination example.
3.6  Termination by identification.
3.7  Termination using a counter.
3.8  Program showing the list recursion technique from Brna et al (1991).

Chapter 4

Figure  Description
4.1  A summary of studies evaluating the effectiveness of different display characteristics for CBI applications.
4.2  Summary of studies into the effect of SV or notation on program comprehension.
4.3  Number of students understanding the bug at each stage.
4.4  A model of the role of SV within programming.
4.5  Types of description used in the analysis of program comprehension and debugging.

Chapter 5

Figure  Description
5.1  Program showing the retractall bug from Dodd (19xx).
5.2  Starting a PPVL query.
5.3  The PPVL query window.
5.4  Selecting a tracer.
5.5  Selecting goals for compression.
5.6  Fun cars program.
5.7  Example append program.
5.8  The first call in the Spy trace of the fun program.
5.9  The first two calls in the Spy trace of the fun program.
5.10  First eight lines in the Spy trace of the fun program.
5.11  Status symbols used in Spy.
5.12  Status symbols used in Spy.
5.13  Full Spy trace of the query fun(What).
5.14  Fun rule with bound second subgoal.
5.15  Trace control panel.
5.16  Top-level query fun(What) in PTP.
5.17  The first two steps of the PTP trace of the query fun(What).
5.18  The first three steps of the PTP trace of fun(What).
5.19  The first nine lines of the PTP trace of fun(What).
5.20  PTP symbols.
5.21  Full PTP trace of fun(What).
5.22  PTP trace of the query appendx([1, 2, 3], [4, 5], What).
5.23  Top level invocation of qsort (from Eisenstadt, 1985).
5.24  Subgoal failures in the buggy qsort program (from Eisenstadt, 1985).
5.25  Zooming into the split subgoal of the buggy qsort program (from Eisenstadt, 1985).
5.26  Zooming into the buggy split clause (from Eisenstadt, 1985).
5.27  The buggy split predicate (from Eisenstadt, 1985).
5.28  Suspect split predicate (from Eisenstadt, 1985).
5.29  PTP trace cliché (from Eisenstadt, 1985).
5.30  Starting state of TPM trace of fun(What).
5.31  First step in TPM trace of fun(What).
5.32  Second step in TPM trace of fun(What).
5.33  Third step in TPM trace of fun(What).
5.34  Fourth step in TPM trace of fun(What).
5.35  The gold query following the second invocation of car.
5.36  Final state of TPM trace of fun(What).
5.37  The pinball program.
5.38  The final state of the TPM trace of pinball(What).
5.39  The TPM coarse-grained view of appendx.
5.40  TPM symbols for coarse-grained view.
5.41  TPM fine-grained view of car subgoal.
5.42  TPM fine-grained view of top-level fun goal.
5.43  TPM fine-grained view of gold subgoal.
5.44  TPM fine-grained view of gold subgoal when instantiated within the source code.
5.45  The second invocation of the gold node showing a shadow.
5.46  TPM symbols for fine-grained view.
5.47  TPM display of top-level bindings in appendx.
5.48  The TPM replay panel.
5.49  Selecting a fine-grained view in TPM.
5.50  The party program from Eisenstadt and Brayshaw (1990).
5.51  TPM trace of the party program.
5.52  The FGV of the cut parent.
5.53  First step of the TTT trace of fun(What).
5.54  Second step of the TTT trace of fun(What).
5.55  Third step of the TTT trace of fun(What).
5.56  Fourth step of the TTT trace of fun(What).
5.57  TTT trace of the failure of the second subgoal of fun(What).
5.58  TTT showing the retrying of the first car clause.
5.59  TTT showing the trying of the second car clause.
5.60  TTT showing the second invocation of the gold subgoal.
5.61  Comparison between the pinball TTT trace and the pinball source code.
5.62  TTT showing the query of the bike subgoal.
5.63  TTT symbols.
5.64  TTT presentation of the cut.
5.65  Full TTT trace of fun(What).
5.66  Unification of the top-level appendx call.
5.67  The subgoal succeeds.
5.68  The unification program.
5.69  TTT trace of the unification program.
5.70  An illustration of the four designs in terms of explicitness and mapping distance.
5.71  A clause level and program level informational account of each SV system.

Chapter 6

Figure  Description
6.1  Correct version of the Coombs and Stell (1985) isomorph.
6.2  Spy trace of the unfriendly program.
6.3  PTP trace of the unfriendly program.
6.4  TPM coarse-grained view of the unfriendly program.
6.5  TTT trace of the unfriendly program.
6.6  Information content during code familiarisation.
6.7  Further codes used for the code familiarisation protocols.
6.8  Mean solution times (mins.) by problem type and tracer.
6.9  Completion rates on the individual control flow and data flow questions.
6.10  Number of problems completed by problem type.
6.11  Mean number of problems completed within five minutes.
6.12  Protocol coding scheme for information types.
6.13  Mean number of utterances for each information type.
6.14  CFI, DFI utterances.
6.15  Strategies identified in the protocols.
6.16  Mean number of comprehension strategies for each subject pair.
6.17  Review strategies.
6.18  Test strategies.
6.19  Misunderstandings of the trace identified in the protocols.
6.20  Mean number of misunderstandings of the trace per subject pair.
6.21  How well subjects felt they had understood the tracer.
6.22  How useful subjects found the tracer.
6.23  Interrelations across information, strategies and misunderstandings for control flow and data flow.

Chapter 7

Figure  Description
7.1  Buggy version of the list_components program.
7.2  Correct version of list_components program.
7.3  Query window for the expert experiment.
7.4  Selecting clauses.
7.5  Modifying a clause.
7.6  Mid-point of the TTT trace of appendx([1, 2, 3], [4, 5], What).
7.7  Final state of the TTT trace of appendx([1, 2, 3], [4, 5], What).
7.8  Spy-N bug detection and repair history.
7.9  PTP-N bug detection and repair history.
7.10  TPM-N bug detection and repair history.
7.11  TTT-N bug detection and repair history.
7.12  Spy-E bug detection and repair history.
7.13  PTP-E bug detection and repair history.
7.14  TPM-E bug detection and repair history.
7.15  TTT-E bug detection and repair history.
7.16  Distribution of expert debugging strategies.
7.17  Identifying the deepest point of the trace in PTP.
7.18  Identifying the deepest point of the trace in TPM.
7.19  The initial point of failure.
7.20  Visual Gestalts used whilst debugging.
7.21  Section of PTP trace showing unbound variable in line 41.
7.22  The relation between the symptom, cause and attempted fixes.
7.23  The qsort program.
7.24  Information flow diagram for Spy-N.
7.25  Information flow diagram for Spy-E.
7.26  Information flow diagram for PTP-N.
7.27  Information flow model for PTP-E.
7.28  Information flow diagram for TPM-N.
7.29  Information flow diagram for TPM-E.
7.30  Information flow diagram for TTT-N.
7.31  Information flow diagram for TTT-E.
7.32  Mean number of utterances for each information type.
7.33  Mean number of information relations by time period for high and low experience subjects.
7.34  Mean number of each information type over time for low experience subjects.
7.35  Mean number of each information type over time for high experience subjects.
7.36  TPM trace of qsort([2, 1, 3], Sort).
7.37  TPM trace of qsort([3, 2, 1], Sort).
7.38  Amended Spy qsort trace.
7.39  Visual Gestalts used whilst teaching.

Chapter 8

Figure  Description
8.1  Spy trace of Coombs and Stell (1987) isomorph.
8.2  The connected program adapted from Rajan (1984).
8.3  The exception handling program.
8.4  The appendx program.
8.5  First two lines of the Plater trace of connected(From, To).
8.6  First four lines of the Plater trace of connected(From, To).
8.7  The first point of failure in the Plater trace of connected(From, To).
8.8  Returning to the most recent choice-point.
8.9  Reunification with the destination predicate.
8.10  The second subgoal failure.
8.11  The third unification with the destination predicate.
8.12  Returning to the remaining choice-point.
8.13  Unification with the second connected clause.
8.14  Plater symbols.
8.15  The first two lines of the Plater trace of number_of_wheels(reliant_robin, N).
8.16  Showing the effect of the cut in Plater.
8.17  First solution to the query number_of_wheels(reliant_robin, N).
8.18  The number_of_wheels program fails.
8.19  Full Plater trace of the query number_of_wheels(reliant_robin, N).
8.20  Plater trace of the query appendx([1, 2], [3, 4], What).
8.21  First three lines of the Pinter trace of connected(From, To).
8.22  First four lines of the Pinter trace of connected(From, To).
8.23  The destination goal succeeds.
8.24  Trying the second subgoal.
8.25  The second subgoal fails.
8.26  Returning to destination choice-point.
8.27  The second unification of the destination goal.
8.28  Updating the goal line.
8.29  Fresh invocation of the origin subgoal.
8.30  Returning to the connected choice-point.
8.31  Unification with the second connected clause.
8.32  The first subgoal of the second clause.
8.33  Pinter symbols.
8.34  Step two of the Pinter trace of number_of_wheels(reliant_robin, N).
8.35  Step three of the Pinter trace of number_of_wheels(reliant_robin, N).
8.36  Step six of the Pinter trace of number_of_wheels(reliant_robin, N).
8.37  Step seven of the Pinter trace of number_of_wheels(reliant_robin, N).
8.38  Full Pinter trace of the query connected(From, To).
8.39  The third level appendx goal succeeds.
8.40  The second level appendx goal succeeds.
8.41  The top level appendx goal succeeds.
8.42  Choice-point SV design summary.
8.43  The query window.
8.44  Creating a file.
8.45  Selecting a window.
8.46  Correct version of the Coombs and Stell (1985) isomorph.
8.47  Full Plater trace of unfriendly(Who).
8.48  Full Pinter trace of unfriendly(Who).
8.49  Mean number of control flow, data flow and simulation utterances.
8.50  Completion rates on the individual control flow and data flow questions.
8.51  Number of problems completed by problem type.
8.52  Mean number of problems completed.
8.53  Mean number of utterances for each information type.
8.54  Review strategies.
8.55  Mean number of comprehension strategies for each subject pair.
8.56  Mean number of misunderstandings of the trace per subject pair.
8.57  How well subjects felt they had understood the tracer.
8.58  How useful subjects found the tracer.
8.59  Plater trace of bug DF-var.

Chapter 9

Figure  Description
9.1  First solution in Theseus trace of the exception handling program using the query number_of_wheels(reliant_robin, N).
9.2  Final state in Theseus trace of the exception handling program using the query number_of_wheels(reliant_robin, N).
9.3  The Theseus trace of Rajan's (1986) connected program.
9.4  The extended control panel.