In this experiment, we asked programmers to recall the locations of refactorings on pie menus.
In this survey, we asked professional programmers about their opinions of the refactoring tools.
In this set of experiments, we characterized how programmers refactor. Below, you'll find the materials that we used to do so:
tool_usage. This table contains data from the .refactorings directories of four Toolsmiths.
projects. This table contains the 41 Eclipse projects to which tool_usage refers.
eclipse_dev_commits. All files changed (extracted from CVS) for all projects listed in projects.
eclipse_dev_inspected. A list of all refactorings (and some non-refactorings) that we detected by manually comparing Eclipse versions.
udc_usage. A list of Eclipse commands, and how many times and by how many users they were executed.
random_numbers. Pre-defined random numbers, used for randomizing global commit ids.
The following queries are also notable:
update_global_commits_using_ratzinger. A query that updates the _global_commits_of_interest table and checks hasRefactoringComment, according to Ratzinger's commit-classifying methodology (a sketch of this keyword matching appears after this list).
commmits_labeled. The pool from which 20 commits matching Ratzinger's keywords were drawn.
commmits_unlabeled. The pool from which 20 commits not matching Ratzinger's keywords were drawn.
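To make the labeling step concrete, here is a minimal Python sketch of keyword-based commit classification in the spirit of update_global_commits_using_ratzinger. The keyword list is illustrative only (it is not Ratzinger's exact set), and the column names commit_id and message are assumptions about the schema; only the table name and the hasRefactoringComment flag come from the materials above.

```python
# Minimal sketch: label commits whose messages match refactoring keywords.
# KEYWORDS and the commit_id/message columns are illustrative assumptions.
import sqlite3

KEYWORDS = ["refactor"]  # hypothetical; substitute the actual keyword list


def has_refactoring_comment(message):
    """Return True if the commit message contains any keyword."""
    text = message.lower()
    return any(keyword in text for keyword in KEYWORDS)


def label_commits(db_path):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT commit_id, message FROM _global_commits_of_interest").fetchall()
    for commit_id, message in rows:
        conn.execute(
            "UPDATE _global_commits_of_interest "
            "SET hasRefactoringComment = ? WHERE commit_id = ?",
            (int(has_refactoring_comment(message)), commit_id))
    conn.commit()
    conn.close()
```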
In section 3.5, we wish to estimate how many pure-refactoring commits were made to CVS. Recall that previously, we sampled 20 Labeled commits and 20 Unlabeled commits, and we know that 6 Labeled commits were pure-refactoring and 0 Unlabeled commits were pure-refactoring. Naively, we might simply add (6+0) and divide by the total sampled commits to get the estimate: 6/40 = 15%. However, while this is a good estimate for our sample, it is a bad estimate for the population as a whole, because our 20-20 sample was drawn from two unequal strata. Specifically, the naive estimate gives too much weight to the 6 pure-refactoring commits, because Labeled commits only account for about 10% of total commits. So what do we do?
Instead of the naive approach, we normalize our estimate for the relative proportions of Labeled (~10%) to Unlabeled commits (~90%). The following calculation gives the normalized result:
6 is the number of Labeled pure-refactoring commits. 0 is the number of Unlabeled pure-refactoring commits. 290 is the number of Labeled commits. 2498 is the number of Unlabeled commits. (6/20)*(290/(290+2498)) + (0/20)*(1-290/(290+2498)) = 0.0312051649928264. And thus, we estimate that about 3% of commits contained pure-refactorings.
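For readers who want to re-run the calculation, the following short Python sketch reproduces the stratified estimate from the counts above; the variable names are ours, but all numbers come directly from the text.

```python
# Stratified estimate of the fraction of pure-refactoring commits.
labeled_pure, unlabeled_pure = 6, 0          # pure-refactoring commits found in each sample of 20
labeled_total, unlabeled_total = 290, 2498   # sizes of the Labeled and Unlabeled strata

w_labeled = labeled_total / (labeled_total + unlabeled_total)  # ~0.104, weight of the Labeled stratum
estimate = (labeled_pure / 20) * w_labeled + (unlabeled_pure / 20) * (1 - w_labeled)
print(estimate)  # 0.0312051649928264, i.e. about 3%
```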
Blank Experimenter's Notebook. The experiment administrator filled this out as the experiment progressed.
Smell Cards. The 8 cards given to participants to familiarize them with smells.
Result Database. Contains 3 tables:
finding. Each record is a Java file (indicated by 'firstOrSecond') inspected by a programmer either with a tool or manually (the 'usedTool' boolean). Numbers in the last 8 columns indicate how many times the programmer noticed that smell. For instance, subject 4 inspected code 8 with the help of a tool and didn't see any data clumping, but did notice 6 instances of feature envy. (A small analysis sketch follows this list.)
questionnaire. *_important records the middle major column of the post-experiment questionnaire, while *_obey records the right major column.
subjects. Demographics from the subjects. Job descriptions are removed to preserve anonymity.
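As referenced above, here is a minimal sketch of how the finding table might be summarized, for example to compare how often a smell was noticed with and without the tool. The column names usedTool and feature_envy are assumptions about the schema; only the row structure described above (one record per inspected file, counts per smell) is taken from the materials.

```python
# Summarize one smell column of the 'finding' table by condition (tool vs. manual).
import sqlite3
from collections import defaultdict


def smell_counts_by_condition(db_path, smell_column="feature_envy"):
    """Total how often one smell was noticed, split by tool vs. manual inspection."""
    conn = sqlite3.connect(db_path)
    totals = defaultdict(int)
    for used_tool, count in conn.execute(
            "SELECT usedTool, %s FROM finding" % smell_column):
        totals["tool" if used_tool else "manual"] += count
    conn.close()
    return dict(totals)
```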
Result Notes. This document contains transcriptions of my handwritten notes from the experiment. Note that there may be some transcription errors.
We used code from Vuze and the Java libraries:
Code | File | Directory | Location |
envy1 | Win32FileSystem.java | java.source/src/java/io | line 201 |
envy3 | Win32PrintService.java | java.source/src/sun/print | line 1223 |
envy6 | FileUtil.java | azureus2/org/gudy/azureus2/core3/util | line 171 |
envy8 | AZMessageFactory.java | azureus2/com/aelitis/azureus/core/peermanager/messaging/azureus | line 157 |
scroll-1 | Win32OffScreenSurfaceData.java | java.source/src/sun/awt/windows | line 46 |
scroll-3 | Win32SurfaceData.java | java.source/src/sun/awt/windows | line 37 |
scroll-7 | AzureusRestarterImpl.java | azureus2/com/aelitis/azureus/core/update/impl | line 43 |
scroll-8 | SpeedManagerAlgorithmProviderVivaldi.java | azureus2/com/aelitis/azureus/core/speedmanager/impl | line 46 |
ToolDemo | DHTUDPUtils.java | azureus2/com/aelitis/azureus/core/dht/transport/udp/impl | line 732 |
In this experiment, we asked programmers to select code using 3 different selection tools, and interpret the meaning of violated Extract Method preconditions using either Refactoring Annotations or error messages.
In this experiment, we asked programmers to interpret the meaning of violated refactoring preconditions using either Refactoring Annotations or error messages.
subjects. Demographic information about the 10 subject programmers.
timings. How long it took to complete the 8 refactoring analysis tasks. Two records per subject: one for Refactoring Annotations, one for error messages.
numerical_results. The number of correct, missed, neutral, and irrelevant code fragments identified for each violation interpretation task.
questionnaire. Results of the post-experiment questionnaire.