Last updated: 2022-11-09

Knit directory: SCHEDULING/

This reproducible R Markdown analysis was created with workflowr (version 1.7.0). Both reproducibility checks passed: the R Markdown file was committed to the Git repository, so the exact version of the code that produced these results is known, and the project is tracked with Git version control. The results on this page were generated with repository version 67e1aac (2022-11-09, “Publish data and analysis files”); the project was started at version 606d1db (2022-11-07, “Start workflowr project”).

(REP Title: Prioritizing Dot Discrimination Tasks or Prioritizing Typing Tasks)

Daniel R. Little¹, Ami Eidels², and Deborah J. Lin¹

¹ The University of Melbourne, ² The University of Newcastle

Scheduling Project

The preregistration of our hypotheses and data analysis plan is available here:

Overview

Scheduling theory is concerned with the development of policies that determine the “optimal allocation of resources, over time, to a set of tasks” (Parker, 1995). Scheduling problems have been studied extensively in operations research and computer science, where optimal policies have been established for many cases. However, almost no research has examined how cognitive and perceptual mechanisms perform relative to these optimal policies.

For many types of goals, such as minimizing the number of tasks completed after a deadline or completing as many tasks as possible in a given length of time, an optimal schedule can be determined (Chretienne et al., 1995). In the first set of studies, we focus on scheduling tasks that vary in difficulty, where the goal is to complete as many tasks as possible before the deadline. In this case, it is optimal to complete subtasks in order of shortest to longest completion time. In the second set of studies, we additionally vary reward and instruct participants to maximize the total value of the subtasks completed in the allotted time. With this goal, the optimal strategy is to balance reward against difficulty: each task is indexed by its reward rate (reward per unit of completion time), and tasks are completed in decreasing order of this index.
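
To make the two policies concrete, here is a minimal R sketch with hypothetical completion times and rewards (the values are illustrative, not parameters from our experiments): with equal rewards, the shortest-processing-time (SPT) order is optimal; with unequal rewards, tasks are ordered by reward rate.

```r
# Hypothetical completion times (seconds) and rewards (points) for four tasks
completion_time <- c(easy = 0.8, medium = 1.2, hard = 2.0, very_hard = 3.5)
reward          <- c(easy = 1,   medium = 2,   hard = 4,   very_hard = 6)

# Equal-reward goal: complete tasks from shortest to longest (SPT order)
spt_order <- names(sort(completion_time))

# Reward goal: complete tasks in decreasing order of reward rate
reward_rate <- reward / completion_time     # points earned per second
rr_order <- names(sort(reward_rate, decreasing = TRUE))

spt_order  # "easy" "medium" "hard" "very_hard"
rr_order   # "hard" "very_hard" "medium" "easy"
```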

We conducted a number of experiments in which we controlled the difficulty of each subtask by requiring perceptual decisions or behavioral actions with known properties. For example, each subtask may be a Random Dot Kinematogram (RDK). In an RDK, dots move across an aperture on the screen in a pseudo-random fashion, and the participant must determine whether more dots are moving coherently to the right or to the left. Multiple RDK patches are presented on a selection screen: participants click on the RDK they want to complete, indicate the direction of motion in that patch, and then return to the main screen to select the next RDK. The RDKs vary in coherence and, consequently, in difficulty and average completion time (Ludwig & Evens, 2017). In other experiments, we use a different perceptual judgment task or a typing task requiring participants to type passages of different lengths.
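
For readers unfamiliar with RDKs, the following minimal R sketch shows the standard per-frame update logic (the coherence, direction, and step values are illustrative, not our experiment code): a proportion of dots equal to the coherence steps in the signal direction, while the remainder step in random directions.

```r
# One frame of RDK motion: x, y are vectors of dot positions;
# signal_dir = 0 radians corresponds to rightward motion.
update_dots <- function(x, y, coherence = 0.3, signal_dir = 0, step = 2) {
  n <- length(x)
  is_signal <- runif(n) < coherence                    # dots carrying the signal this frame
  theta <- ifelse(is_signal, signal_dir, runif(n, 0, 2 * pi))
  list(x = x + step * cos(theta), y = y + step * sin(theta))
}
```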

All experiments were completed online. In all experiments, each subtask varied in difficulty, either by manipulating the coherence of the RDK, by manipulating the ease of the brightness judgment, or by varying the length of the typing task. Patches varied from Very Easy to Very Hard. In most experiments, text labels were presented on each patch so that difficulty could be discerned without viewing the subtask; in other experiments, difficulty had to be judged by previewing the task.

Experiment Set 1: Different Levels of Difficulty, Equal Reward Value

In this set of experiments, we compare how tasks of different difficulty but equal reward value are scheduled across a number of conditions. In Experiments 1 and 2, we examine tasks that are labelled with their corresponding difficulty. In Experiment 3, we test scheduling when difficulty information is conveyed dynamically (i.e., participants can view the tasks before selecting one). Experiment 4 examines scheduling with eight subtasks instead of the four used in the other experiments. Experiment 5 examines a different perceptual judgment task: brightness discrimination. Finally, Experiment 6 examines how people schedule typing tasks of different lengths.

Experiment 1: Four Labelled RDK Subtasks with No Error Penalty

In this experiment, there were two conditions: one with fixed locations of task difficulty on each trial and one with random locations of task difficulty on each trial. In each condition, participants completed several trials of selecting and completing RDK tasks, both with no deadline for completing all of the subtasks and with a six-second deadline for completing all of the subtasks. There were four subtasks varying from Easy to Very Hard, each marked with a text label. When an error was made on an RDK, its motion was immediately restarted with a newly sampled (potentially different) direction.

Experiment 2: Four Labelled RDK Subtasks with Error Delay

This experiment was identical to Experiment 1, with one exception. When an error was made on the RDK direction judgment, a 500 msec delay was inserted before resampling the RDK direction and restarting the RDK motion. Both fixed and random location conditions were used.

Experiment 2b: Four Labelled RDK Subtasks with Error Delay

This experiment was identical to Experiment 2, but with the labels changed from “easy”, “hard”, etc., to the average completion times (in msec) observed in Experiment 2.

Dynamically varying tasks

In contrast to Experiments 1 and 2, in Experiment 3 no difficulty labels were provided. Instead, all four RDKs moved on the selection screen at their given coherence levels. Once a task was selected, its direction was randomly resampled; the unselected tasks either stopped moving (Experiment 3a) or continued moving while the selected task was highlighted (Experiment 3b).

Experiment 3a: Four Dynamic RDKs with Error Delay

In this experiment, we removed the text labels and allowed each RDK to continue moving (at its given coherence) on the selection screen. Once an RDK was selected, the non-selected RDKs stopped moving and the direction of the selected RDK was resampled.

Experiment 3b: Four Dynamic RDKs with Error Delay and Highlighting

This experiment was identical to Experiment 3a, except that once an RDK was selected, it was highlighted by a bounding box, the non-selected RDKs continued moving, and the direction of the selected RDK was resampled.

Experiment 3c: Four Dynamic RDKs with Error Delay and Short Dot Life

This experiment was identical to Experiment 3a, but with the RDK dots set to disappear after a small number of frames (rather than when they reached the aperture boundary).
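
A minimal R sketch of the short dot-life manipulation (the five-frame lifetime and aperture size are assumptions for illustration, not the experiment's actual parameters): each dot carries an age, and expired dots are redrawn at random locations instead of persisting until they leave the aperture.

```r
# Respawn dots whose lifetime has expired; 'life' counts frames since respawn
respawn_dots <- function(x, y, life, max_life = 5, aperture = 100) {
  expired <- life >= max_life
  x[expired] <- runif(sum(expired), 0, aperture)   # redraw expired dots
  y[expired] <- runif(sum(expired), 0, aperture)   # at random positions
  life[expired] <- 0
  list(x = x, y = y, life = life + 1)
}
```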

Increasing the number of tasks

Experiment 4: Eight Labelled RDK Subtasks with Error Delay

In this experiment, we increased the number of RDK subtasks to eight. In the deadline condition, a 10-second deadline was applied. Only the fixed location condition was tested.

Alternative Perceptual Judgment

Experiment 5a: Four Labelled Pixel Brightness Subtasks with Error Delay

In this experiment, we substituted a pixel brightness judgment for each RDK subtask. In the pixel brightness subtask, participants judged whether the proportion of black pixels in a 100 × 100 patch was greater or less than 50%. We again tested only the fixed location condition, with a 500 msec delay after an error before the patch was resampled.
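
As an illustration (the 55% black proportion is hypothetical, not a level used in the experiment), a patch of this kind can be generated in R as follows:

```r
# Generate a size x size patch in which each pixel is independently
# black with probability p (1 = black, 0 = white)
make_patch <- function(p = 0.55, size = 100) {
  matrix(rbinom(size^2, 1, p), nrow = size)
}

patch <- make_patch(0.55)
mean(patch) > 0.5   # correct response "more than 50% black" (usually TRUE at p = 0.55)
```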

Experiment 5b: Four Displayed Pixel Brightness Subtasks with Error Delay

In this experiment, we presented four pixel brightness judgments, and participants could see the brightness of each patch. Once a patch was selected, its pixels were resampled at the same brightness level. Here we tested both fixed and random locations.

Typing tasks

Experiment 6a: Four Typing Tasks of Exponentially Varying Length

In this experiment, participants typed out paragraphs of differing length. Length was varied exponentially across paragraphs.

Experiment 6b: Four Typing Tasks of Linearly Varying Length

In this experiment, participants typed out paragraphs of differing length. Length was varied linearly across paragraphs.
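
To illustrate the difference between the two length manipulations (the character counts below are hypothetical, not the lengths used in the experiments):

```r
# Four paragraph lengths spaced exponentially (6a) versus linearly (6b)
exp_lengths <- round(25 * 2^(0:3))                   # 25, 50, 100, 200 characters
lin_lengths <- round(seq(25, 200, length.out = 4))   # 25, 83, 142, 200 characters
```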

Experiment Set 2: Different Levels of Difficulty, Different Reward

In Experiment Set 2, we again examine how people schedule tasks that vary in difficulty, but we add reward as an additional factor. In all experiments, reward varies with difficulty such that more difficult tasks yield larger rewards. We instantiated reward in two ways. First, we adapted the popular Wordle game to provide participants with clues to a missing target word, which they attempted to guess after each scheduling trial; completing subtasks rewarded participants with clues of differing informativeness. Second, in Experiments 7e and 8e, each task was allocated a point value, and participants accumulated points across trials by completing different subtasks. Experiment 7 tested random dot motion, while Experiment 8 examined typing word lists of different lengths.

Random Dot Motion

Experiment 7a: Replication of Experiment 2 Fixed Location Condition

Experiment 7b: Word Game Reward

Experiment 7c: Word Game Reward + Instructions

Experiment 7d: Word Game Reward + Instructions + Letter Keyboard

Experiment 7e: Point Reward

Typing tasks

Experiment 8a: Replication of Experiment 2 Fixed Location Condition

Experiment 8b: Word Game Reward

Experiment 8c: Word Game Reward + Instructions

Experiment 8d: Word Game Reward + Instructions + Letter Keyboard

Experiment 8e: Point Reward