Last updated: 2021-07-08
This reproducible R Markdown analysis was created with workflowr (version 1.6.1). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 0167b0f. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use `wflow_git_commit`). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
- .Rhistory
- .Rproj.user/
- analysis/patch_selection.png
- analysis/patch_selection_8.png
- analysis/site_libs/

Untracked files:
- admin/
- analysis/Notes.txt
- analysis/archive/
- analysis/prereg/
- analysis/reward rate analysis.docx
- analysis/rewardRate.jpg
- data/archive/
- data/sona data/
- infoOverloadWave.png
- presentations/
- references/

Unstaged changes:
- Deleted: analysis/exp1_nodelay.Rmd
- Deleted: analysis/exp2_errordelay.Rmd
- Deleted: analysis/exp4_pixeldelay.Rmd
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (`analysis/index.Rmd`) and HTML (`docs/index.html`) files. If you've configured a remote Git repository (see `?wflow_git_remote`), click on the hyperlinks in the table below to view the files as they were in that past version.
| File | Version | Author | Date | Message |
|------|---------|--------|------|---------|
| Rmd | 0167b0f | knowlabUnimelb | 2021-07-08 | Add Experiment 3 analysis |
| Rmd | 255ca46 | knowlabUnimelb | 2021-07-07 | Update Typing Data Analysis |
| Rmd | b50c72d | knowlabUnimelb | 2021-07-01 | Add Typing Data Analysis |
| Rmd | 27a55e4 | knowlabUnimelb | 2020-12-01 | Add Experiment 4 analysis |
| Rmd | 34263a2 | knowlabUnimelb | 2020-12-01 | Add Experiment 4 analysis |
| Rmd | 5875e44 | GitHub | 2020-12-01 | Add files via upload |
| Rmd | 0f44724 | knowlabUnimelb | 2020-11-25 | Add analysis of Experiment 2 |
| Rmd | ab0d9a7 | knowlabUnimelb | 2020-11-24 | Finalise Experiment 1 |
| Rmd | 7bae36d | knowlabUnimelb | 2020-10-30 | Separate analysis page |
| Rmd | fec9865 | knowlabUnimelb | 2020-06-15 | Add reward rate graph |
| Rmd | 04d8d3e | knowlabUnimelb | 2020-06-13 | Add ks-test analysis |
| Rmd | 86faf9e | knowlabUnimelb | 2020-06-05 | Fix spatial strategy analysis |
| Rmd | 4688f88 | knowlabUnimelb | 2020-06-05 | Update inferential statistics |
| Rmd | 33ea1a4 | knowlabUnimelb | 2020-06-01 | Update analysis and fix bug in optimality analysis |
| Rmd | 4c06a56 | knowlabUnimelb | 2020-05-27 | Update analysis and data |
| Rmd | dbcb22b | knowlabUnimelb | 2020-05-27 | Update analysis and data |
| Rmd | 98777c9 | knowlabUnimelb | 2020-05-26 | Update analysis and data |
| Rmd | bb248c2 | knowlabUnimelb | 2020-04-17 | Scheduling task analysis |
| Rmd | 79a1183 | knowlabUnimelb | 2020-04-16 | Start workflowr project. |
REP Title: Prioritizing Dot Discrimination Tasks
Daniel R. Little1, Ami Eidels2, and Deborah J. Lin1
1 The University of Melbourne, 2 The University of Newcastle
The preregistration of our hypotheses and data analysis plan is available here:
Scheduling theory is concerned with the development of policies which determine the “optimal allocation of resources, over time, to a set of tasks” (Parker, 1995). Scheduling problems have been studied extensively in the context of operations research and computer science, where optimal policies have been established for many cases. However, almost no research has examined how cognitive and perceptual mechanisms perform with respect to these optimal policies.
For many types of goals, such as minimizing the number of tasks completed after a deadline or completing as many tasks as possible in a given length of time, an optimal schedule can be determined (Chretienne et al., 1995). In the present study, we are focused on the scheduling of tasks that vary in difficulty where the goal is to complete as many tasks as possible prior to the deadline. In this case, it is optimal to complete subtasks in order of shortest to longest completion time.
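The shortest-first rule above can be checked with a small simulation. The sketch below (in Python, for illustration; it is not the study's analysis code, and the task times and deadline are hypothetical) counts how many tasks finish before a deadline under a given ordering and confirms that the shortest-processing-time order matches the best ordering found by exhaustive search.

```python
# Illustrative check of the shortest-processing-time (SPT) rule: under a
# fixed deadline, completing subtasks from shortest to longest expected
# completion time maximizes the number finished before the deadline.
from itertools import permutations

def n_completed(times, deadline):
    """Count how many tasks finish at or before the deadline, in order."""
    elapsed, done = 0.0, 0
    for t in times:
        elapsed += t
        if elapsed <= deadline:
            done += 1
    return done

times = [3.0, 1.0, 4.0, 1.5]   # hypothetical expected completion times (s)
deadline = 6.0                 # cf. the six-second deadline used below

spt = n_completed(sorted(times), deadline)
best = max(n_completed(list(p), deadline) for p in permutations(times))
print(spt, best)  # prints: 3 3 -- SPT attains the optimum
```

For four tasks the exhaustive check is trivial; the point is only that no reordering beats shortest-first for this objective.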
We conducted a number of experiments in which we controlled the difficulty of each subtask by requiring perceptual decisions with known properties. For example, in Experiments 1-3, each subtask was a Random Dot Kinematogram (RDK). In an RDK, dots move across an aperture on the screen in a pseudo-random fashion, and the participant must determine whether more dots are moving coherently to the right or to the left. Multiple RDK patches were presented on the selection screen. Participants clicked on the RDK they wanted to complete, indicated the direction of motion in that patch, and then returned to the main screen to select the next RDK. The RDKs varied in coherence and, consequently, in difficulty and average completion time (Ludwig & Evens, 2017).
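The coherence manipulation can be made concrete with a minimal sketch. The snippet below assumes a simple "proportion of signal dots" scheme, in which each dot moves in the signal direction with probability equal to the coherence and in a random direction otherwise; the function and parameter names are illustrative, not the experiments' actual stimulus code.

```python
# Minimal sketch of assigning per-dot motion directions for one RDK frame,
# assuming a proportion-of-signal-dots coherence scheme (illustrative only).
import math
import random

def dot_directions(n_dots, coherence, signal_angle):
    """Return a motion angle (radians) for each dot: a `coherence` fraction
    move in the signal direction; the rest move in random directions."""
    dirs = []
    for _ in range(n_dots):
        if random.random() < coherence:
            dirs.append(signal_angle)                    # signal (coherent) dot
        else:
            dirs.append(random.uniform(0, 2 * math.pi))  # noise dot
    return dirs

random.seed(0)
angles = dot_directions(100, 0.8, 0.0)       # 0.0 rad = rightward motion
coherent = sum(a == 0.0 for a in angles)
print(coherent)  # approximately coherence * n_dots of the dots are coherent
```

Lower coherence means fewer signal dots, a noisier motion percept, and hence a longer average completion time, which is what makes coherence a clean difficulty manipulation.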
Four experiments were completed online. In all experiments, each subtask varied in difficulty, either by manipulating the coherence of the RDK (Experiments 1-3) or the proportion of black pixels (Experiment 4). Patches varied from Very Easy to Very Hard. Text labels were presented on each patch so that the difficulty could be discerned without viewing the subtask.
In this experiment, there were two conditions: one in which the task difficulties appeared in fixed locations on each trial, and one in which the locations of the task difficulties were randomized on each trial. In each condition, subjects completed several trials selecting and completing RDK tasks, both when there was no deadline for completing all of the subtasks and when there was a six-second deadline. There were four subtasks. When an error was made on an RDK task, the RDK was immediately restarted with a resampled (potentially new) direction.
This experiment was identical to Experiment 1, with one exception: when an error was made on the RDK direction judgment, a 500 msec delay was inserted before the RDK direction was resampled and the motion restarted. Both fixed and random location conditions were used.
In this experiment, we increased the number of RDK subtasks to eight. In the deadline condition, a 10-second deadline was applied. Only the fixed location condition was tested.
In this experiment, we substituted a pixel brightness judgment for each RDK subtask. In the pixel brightness subtask, participants had to judge whether the proportion of black pixels in a 100 x 100 patch was greater or less than 50%. We again tested only the fixed location condition, with a 500 msec delay after an error before the patch was resampled.
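A patch of this kind can be sketched in a few lines. The snippet below is an assumed, illustrative construction (not the experiment's stimulus code): each pixel is independently black with a target probability, so the proportion of black pixels plays the role that coherence plays for the RDKs.

```python
# Illustrative sketch of a black/white pixel patch whose difficulty is set
# by how close the black-pixel proportion is to 50% (assumed construction).
import random

def make_patch(p_black, size=100, seed=1):
    """Return a size x size grid of 0 (white) / 1 (black) pixels,
    each pixel independently black with probability p_black."""
    rng = random.Random(seed)
    return [[1 if rng.random() < p_black else 0 for _ in range(size)]
            for _ in range(size)]

patch = make_patch(0.55)                   # slightly more black than white
n_black = sum(sum(row) for row in patch)
print("majority black" if n_black > 100 * 100 // 2 else "majority white")
```

Proportions near 50% (e.g. 0.52) make the judgment hard and slow; extreme proportions (e.g. 0.80) make it easy and fast, mirroring the Very Easy to Very Hard labels.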
In this experiment, we removed the text labels and allowed each RDK to continue moving (at its given coherence) on the selection screen. Once an RDK was selected, the non-selected RDKs would stop and the direction of the selected RDK was resampled.
In this experiment, participants typed out paragraphs of differing length. Length was varied exponentially across paragraphs.
In this experiment, participants typed out paragraphs of differing length. Length was varied linearly across paragraphs.