Last updated: 2022-11-09
Checks: 7 passed, 0 failed
Knit directory: SCHEDULING/
This reproducible R Markdown analysis was created with workflowr (version 1.7.0). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20221107) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.
The results in this page were generated with repository version 67e1aac. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .Rproj.user/
Ignored: analysis/patch_selection.png
Ignored: analysis/patch_selection_8.png
Ignored: analysis/patch_selection_avg.png
Ignored: analysis/site_libs/
Untracked files:
Untracked: analysis/Notes.txt
Untracked: analysis/archive/
Untracked: analysis/fd_pl.rds
Untracked: analysis/fu_pl.rds
Untracked: analysis/prereg/
Untracked: analysis/reward rate analysis.docx
Untracked: analysis/rewardRate.jpg
Untracked: analysis/toAnalyse/
Untracked: analysis/wflow_code_string.txt
Untracked: archive/
Untracked: data/archive/
Untracked: data/create_database.sql
Untracked: data/dataToAnalyse/
Untracked: data/exp6a_typing_exponential.xlsx
Untracked: data/exp6b_typing_linear.xlsx
Untracked: data/rawdata_incEmails/
Untracked: data/sona data/
Untracked: data/summaryFiles/
Untracked: models/
Untracked: old Notes on analysis.txt
Untracked: presentations/
Untracked: references/
Untracked: spatial_pdist.Rdata
Unstaged changes:
Modified: data/README.md
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the repository in which changes were made to the R Markdown (analysis/prereg_v3.Rmd) and HTML (docs/prereg_v3.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.
File | Version | Author | Date | Message |
---|---|---|---|---|
Rmd | 67e1aac | knowlabUnimelb | 2022-11-09 | Publish data and analysis files |
Human scheduling of real-world typing tasks
Scheduling theory is concerned with the development of policies which determine the “optimal allocation of resources, over time, to a set of tasks” (Parker, 1995). Scheduling problems have been studied extensively in the context of operations research and computer science, where optimal policies have been established for many cases. However, almost no research has examined how cognitive and perceptual mechanisms perform with respect to these optimal policies.
We know that switching between tasks leads to context shift costs (Monsell, 2003) and that multitasking leads to inefficiencies and errors (Strayer & Johnston, 2001). But it is unclear whether people are sensitive to the efficiency of different schedules and whether these costs impact on the optimality of human scheduling. The current project will develop a new framework for studying the scheduling of mental processes in human cognition.
For many types of goals, such as minimizing the number of tasks completed after a deadline or completing as many tasks as possible in a given length of time, an optimal schedule can be determined (Chretienne et al., 1995). In the present study, we are focused on the scheduling of tasks that vary in difficulty where the goal is to complete as many tasks as possible prior to the deadline. In this case, it is optimal to complete subtasks in order of shortest to longest completion time.
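To make the shortest-first claim concrete, the short R sketch below counts how many tasks finish before a deadline under different completion orders. The durations and deadline are hypothetical values chosen for illustration, not the experimental parameters.

```r
# Illustrative only: number of tasks completed before the deadline for a given order
n_completed <- function(durations, deadline) sum(cumsum(durations) <= deadline)

durations <- c(Short = 10, Medium = 20, Long = 40, VeryLong = 80)  # seconds (made up)
deadline  <- 90

n_completed(sort(durations), deadline)                     # shortest-first: 3 tasks
n_completed(sort(durations, decreasing = TRUE), deadline)  # longest-first: 1 task
```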
Future experiments will additionally vary the utility/reward of each subtask, the overall goal, and whether or not the difficulty needs to be learned or is described.
In this study, each subtask is a typing task. Typewriting is an essential and widespread skill: most people have experience typing on a computer keyboard, and many college students with strong typing skills have received formal training (Logan & Crump, 2011). Typing also engages complex cognitive processes. Logan and Crump (2011) proposed a two-loop theory of the hierarchical control involved in skilled typing: an outer loop comprehends language, generates the words to be typed, and passes them sequentially to an inner loop, which translates each word into letters, activates the corresponding keystrokes, and executes them in the correct order. Because typewriting is controlled by these two hierarchical mechanisms, typing tasks are well suited to studying scheduling in higher-order human cognition while retaining real-world relevance.
In the typing task, a paragraph is presented, and participants must type it accurately until completion. Typing mistakes are not registered on the screen; only correct keystrokes progress the typed text. Multiple typing tasks will be presented on the selection screen. Participants will click on the task they want to complete, type it to completion, and then return to the main screen to select the next task. The typing tasks will vary in length and, consequently, in the average time taken to type the text.
Typing length will be constant across trials, but the text will be randomly sampled from a book on border collies (Logan & Crump, 2011). Typing tasks will be presented onscreen at the four compass points (North, South, East, West). The location of each difficulty level will be either fixed across trials or randomized across trials. Once a task is selected, the selection screen will be removed and the typing task will be presented.
The optimal policy to minimize the number of tasks completed after the deadline (equivalently, to maximize the number of tasks completed before the deadline) is to complete the shortest subtask first. We will determine whether participants converge on this optimal schedule. We will compare scheduling with and without a deadline and expect greater optimality when participants must complete the typing tasks under a deadline. We additionally expect greater optimality when difficulty location is held constant than when it varies from trial to trial.
In this experiment, participants will complete multiple trials in which they select and complete typing tasks. On each trial, participants will be shown a set of four tasks labelled Short, Medium, Long, and Very Long; the labels correspond to the length of the typing task. Participants will select and complete one task at a time, in any order, completing as many as possible before a deadline.
Before completing the deadline trials, participants will complete a small number of trials with no deadline (e.g., 2). This will help participants learn the task, let them explore strategies, and allow us to compare the optimality of responding between the no-deadline and deadline conditions.
N/A
Difficulty location will be randomized across subjects. Four length levels will be used to manipulate completion time: long tasks will be half as long as very long tasks, medium tasks half as long as long tasks, and short tasks half as long as medium tasks. A single deadline (e.g., 90 seconds) will be tested and compared to a no-deadline condition (i.e., a long trial length of 240 seconds). In Experiment 2, the deadline will be based on typing performance.
Location of tasks (i.e., typing length) will be randomized across participants.
Only CI Little has access to the data, which is stored in a password-secured database. CI Little has inspected the raw data to ensure that it has been recorded properly. No summary analysis of the data has been undertaken.
Participants will be recruited from the University of Melbourne’s or the University of Newcastle’s Research Experience Program. Participants will be reimbursed 1 credit for participation.
We will recruit 100 participants across the fixed and random location conditions (N = 100 total).
In each condition, participants will complete two no deadline scheduling trials and two deadline scheduling trials via a web browser.
This is a preliminary study. Sample size is chosen based on pragmatic considerations.
Sampling will be stopped after 100 participants have been collected, or as close to 100 as possible given the asynchronicity between remote enrolment in the experiment and completion of the experiment.
We will manipulate the length of the typing tasks and the deadline available for selecting and completing the four tasks on each trial. We will additionally manipulate whether or not a deadline is present.
The main variable is the order in which subtasks are completed. Secondary variables are the accuracy and completion time of the subtasks.
N/A
For each participant, the optimal order of the locations will be determined (i.e., from easiest to hardest). The first analysis will compute a rank-order distance, Kendall’s Tau, between the selected order and the optimal order. However, because a participant may run out of time, there may be missing values. To handle these, for each trial we find the complete orders that partially match the selected order and compute (1) the maximum distance between those possible orders and the optimal order and (2) the average distance between those possible orders and the optimal order.
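A minimal sketch of this partial-matching computation is given below. It is an illustration under assumptions rather than the analysis code: the helper names (kendall_distance, perms, partial_tau) and the task labels are hypothetical.

```r
# Kendall's tau distance: number of pairs ordered differently in the two orders
kendall_distance <- function(order1, order2) {
  pos <- match(order1, order2)               # positions of order1's items in order2
  sum(outer(pos, pos, ">")[upper.tri(diag(length(pos)))])
}

# All permutations of a short vector (brute force is fine for four tasks)
perms <- function(v) {
  if (length(v) <= 1) return(list(v))
  unlist(lapply(seq_along(v), function(i) {
    lapply(perms(v[-i]), function(p) c(v[i], p))
  }), recursive = FALSE)
}

# For a possibly incomplete selected order, enumerate all full orders that
# partially match it and return the maximum and average distance to the optimum
partial_tau <- function(observed, optimal) {
  remaining   <- setdiff(optimal, observed)
  completions <- lapply(perms(remaining), function(r) c(observed, r))
  d <- sapply(completions, kendall_distance, order2 = optimal)
  c(max_distance = max(d), mean_distance = mean(d))
}

# Example: the participant only completed Short then Long before time ran out
partial_tau(observed = c("Short", "Long"),
            optimal  = c("Short", "Medium", "Long", "VeryLong"))
# -> max_distance = 2, mean_distance = 1.5
```

For four tasks, distances range from 0 to 6 discordant pairs, so dividing by 6 is one natural way to obtain the normalised scores referred to in the Bayesian analysis below.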
We will examine the distribution of distances across participants to determine how much they deviate from optimal. Bayesian analysis will be used to characterise the posterior uncertainty in distance across trials.
We will perform a similar analysis using the proportion of first-optimal responses (i.e., the proportion of responses in which the first chosen option is the easiest).
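As a trivial sketch of this measure (the vector name and labels are made up, not the study’s variables):

```r
# Proportion of trials on which the first selected task was the shortest (easiest)
first_choice <- c("Short", "Medium", "Short", "Short")  # hypothetical first selections
mean(first_choice == "Short")                           # -> 0.75
```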
We will additionally define “spatial-order” strategies (e.g., a clockwise strategy, an anti-clockwise strategy) and compute the partial-matching Kendall’s Tau measure between each response and these spatial orders. These strategies may be non-optimal, depending on the random assignment of difficulties to locations; nonetheless, participants may adopt them.
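The same partial-matching function sketched above can be reused with a spatial reference order in place of the difficulty order; the location labels below are placeholders.

```r
# Reuses partial_tau() from the earlier sketch; the reference order is a spatial
# strategy rather than the optimal difficulty order
clockwise     <- c("North", "East", "South", "West")
anticlockwise <- rev(clockwise)
partial_tau(observed = c("North", "East"), optimal = clockwise)
partial_tau(observed = c("North", "East"), optimal = anticlockwise)
```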
We will compare the optimality of the untimed trials against the deadline trials using a Bayesian one-way ANOVA and a hierarchical Bayesian model described below.
N/A
Posterior parameter estimation will be used for inference. Optimality (measured using normalised Kendall’s Tau adjusted for missing data as described above) will be given a uniform Beta(1,1) prior in each condition. We will compare two models: one in which the hyperpriors are free to vary across the untimed and deadline conditions, and one in which the hyperpriors are fixed across conditions. Posterior parameter estimation and DIC will be used to examine whether there is a credible difference in optimality between deadline conditions.
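One possible way to set this model up is sketched below with rjags; this is an assumption about implementation, not the preregistered code. The data are simulated stand-ins, the concentration hyperprior is an illustrative choice, and the “fixed” variant would simply share a single (mu, kappa) pair across the two conditions.

```r
library(rjags)

model_string <- "
model {
  for (i in 1:N) {
    y[i] ~ dbeta(a[cond[i]], b[cond[i]])    # normalised optimality for person i
  }
  for (c in 1:2) {
    mu[c]    ~ dbeta(1, 1)                  # uniform Beta(1,1) prior on mean optimality
    kappa[c] ~ dgamma(1, 0.1)               # illustrative concentration hyperprior
    a[c] <- mu[c] * kappa[c]
    b[c] <- (1 - mu[c]) * kappa[c]
  }
}"

# Stand-in data for illustration only (50 scores per condition)
set.seed(20221107)
dat <- list(y    = c(rbeta(50, 8, 2), rbeta(50, 4, 4)),
            cond = rep(1:2, each = 50),
            N    = 100)

jm   <- jags.model(textConnection(model_string), data = dat, n.chains = 3)
update(jm, 1000)                                       # burn-in
post <- coda.samples(jm, c("mu", "kappa"), n.iter = 5000)
dic  <- dic.samples(jm, n.iter = 5000)                 # DIC for this model
summary(post)
```

The DIC from this model would then be compared against the shared-hyperparameter variant, alongside the posterior distribution of mu[1] - mu[2], to assess whether optimality differs credibly between deadline conditions.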
Initial analysis will not exclude any data. Followup analyses may exclude data on the basis of typing task performance.
Participants will be excluded if they do not complete all trials.
The data analyses described so far are largely exploratory, in that we make no particular hypothesis as to whether scheduling should be optimal or not.
N/A
Chretienne, P., Coffman Jr, E. G., Lenstra, J. K., & Liu, Z. (1997). Scheduling theory and its applications. Journal of the Operational Research Society, 48(7), 764-765.
Logan, G. D., & Crump, M. J. (2011). Hierarchical control of cognitive processes: The case for skilled typewriting. In Psychology of Learning and Motivation (Vol. 54, pp. 1-27). Academic Press.
Monsell, S. (2003). Task switching. Trends in Cognitive Sciences, 7(3), 134-140.
Parker, R. G. (1996). Deterministic scheduling theory. CRC Press.
Strayer, D. L., & Johnston, W. A. (2001). Driven to distraction: Dual-task studies of simulated driving and conversing on a cellular telephone. Psychological Science, 12(6), 462-466.
sessionInfo()
R version 4.1.3 (2022-03-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19042)
Matrix products: default
locale:
[1] LC_COLLATE=English_Australia.1252 LC_CTYPE=English_Australia.1252
[3] LC_MONETARY=English_Australia.1252 LC_NUMERIC=C
[5] LC_TIME=English_Australia.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] workflowr_1.7.0
loaded via a namespace (and not attached):
[1] Rcpp_1.0.8.3 bslib_0.3.1 compiler_4.1.3 pillar_1.7.0
[5] later_1.3.0 git2r_0.30.1 jquerylib_0.1.4 tools_4.1.3
[9] getPass_0.2-2 digest_0.6.29 jsonlite_1.8.0 evaluate_0.15
[13] tibble_3.1.6 lifecycle_1.0.1 pkgconfig_2.0.3 rlang_1.0.2
[17] cli_3.2.0 rstudioapi_0.13 yaml_2.3.5 xfun_0.30
[21] fastmap_1.1.0 httr_1.4.2 stringr_1.4.0 knitr_1.38
[25] sass_0.4.1 fs_1.5.2 vctrs_0.4.1 rprojroot_2.0.3
[29] glue_1.6.2 R6_2.5.1 processx_3.5.3 fansi_1.0.3
[33] rmarkdown_2.13 callr_3.7.0 magrittr_2.0.3 whisker_0.4
[37] ps_1.6.0 promises_1.2.0.1 htmltools_0.5.2 ellipsis_0.3.2
[41] httpuv_1.6.5 utf8_1.2.2 stringi_1.7.6 crayon_1.5.1