Proceedings of the Royal Society B: Biological Sciences
Research article

An observer model of tilt perception, sensitivity and confidence

Derek H. Arnold

Perception Lab, School of Psychology, The University of Queensland, St Lucia, Queensland 4072, Australia

[email protected]

Blake W. Saurels

Perception Lab, School of Psychology, The University of Queensland, St Lucia, Queensland 4072, Australia

Natasha L. Anderson

Perception Lab, School of Psychology, The University of Queensland, St Lucia, Queensland 4072, Australia

Alan Johnston

School of Psychology, University of Nottingham, Nottingham NG7 2RD, UK


Humans experience levels of confidence in perceptual decisions that tend to scale with the precision of their judgements, but not always. Sometimes precision can be held constant while confidence changes—leading researchers to assume that precision and confidence are shaped by different types of information (e.g. perceptual and decisional). To assess this, we examined how visual adaptation to oriented inputs changes tilt perception, perceptual sensitivity and confidence. Some adaptors had a greater detrimental impact on measures of confidence than on precision. We could account for this using an observer model in which precision and confidence rely on different magnitudes of sensory information. These data show that differences in perceptual sensitivity and confidence can emerge not because these factors rely on different types of information, but because they rely on different magnitudes of sensory information.

1. Introduction

The human mind is constantly self-evaluating. We experience levels of confidence in our decisions, and in perception correct decisions tend to generate greater feelings of confidence [1,2]. Intriguingly, people do not need feedback to know how well they are performing a perceptual task. This is therefore a form of metacognition—a situation where the human mind has insight into its underlying operations, in this case into how well it has encoded sensory information.

There is great interest in identifying which perceptual experiences elicit metacognition (e.g. [3–7]), and in the computational processes (e.g. [8,9]) and brain structures (e.g. [10–12]) that govern perceptual confidence. One reason for interest in this topic is that understanding the processes that give humans a reasonably accurate sense of uncertainty has the potential to inform artificial technologies. Driverless vehicles, for instance, might benefit from accurate estimates of the precision of available information regarding the surrounding environment, of the type that humans evidently possess, enabling more caution to be taken when information is degraded. So how do we know when we are right?

Some important first steps towards understanding perceptual metacognition have involved breaking the typical relationship between performance and confidence [9,13–15]. For instance, having a large range of direction signals has a greater negative impact on confidence than on judgements of average direction [3]. This separability of perceptual precision and confidence has encouraged researchers to ask what special features of visual brain activity might inform confidence. That question remains unresolved.

Existing models of perceptual metacognition assume that sensory processing makes a matched contribution to confidence and precision (e.g. [16,17]), with differences attributed to post-perceptual processes (e.g. sampling only evidence consistent with a pre-determined hypothesis) [17–19]. This approach risks missing important contributions of visual processing, as it could not detect if sensory processing were making a mismatched contribution to confidence and precision.

To better understand the contributions of sensory processing to perceptual confidence, we decided to leverage visual adaptation in conjunction with computational modelling. We felt this was a promising line of investigation, as visual adaptation can modulate perceptual precision [20,21]—in ways that can be understood through modelling (e.g. [22–25]). Hence, we anticipated that confidence might be similarly impacted by visual adaptation, and understood through modelling. However, given that past investigations had shown that perceptual precision and confidence can be separated [9,13–15], we equally anticipated that confidence might be less impacted by visual adaptation. What we did not anticipate is what we are about to report—that confidence would be more impacted by visual adaptation.

2. Methods

Stimuli were presented on a 19.8-inch HP 1110 CRT monitor, driven by a Cambridge Research Systems ViSaGe stimulus generator and custom Matlab R2007b (The MathWorks, Natick, MA, USA) software. The monitor had a resolution of 1024 × 768 pixels and a refresh rate of 100 Hz. Participants viewed stimuli from 57 cm, from directly in front with their head restrained by a chin rest. There were six participants, including the first three authors. The experiment was approved by the University of Queensland ethics committee and was conducted in accordance with the principles of the Declaration of Helsinki. Participation involved approximately 14 h of testing for each observer, split across 14 experimental sessions (usually conducted on different days).

On each trial of adaptation blocks, participants were adapted (for 5 s) to two second-order contrast-modulated Gabors (figure 1). These were presented against circular grey discs (diameter subtending 10 degrees of visual angle (dva); CIE x = 0.297, y = 0.357, Y = 58 cd m−2) centred 3.57 dva to the left and right of a central fixation point (so the two discs were overlapping). Contrast-modulated Gabors were constructed by multiplying the contrast of static white noise patterns (diameter subtending 7.14 dva) by a Gabor function (spatial frequency 0.4 cycles/dva, spatial constant 1.19 dva). As these patterns change in contrast (100%), but not in spatially averaged brightness, they do not generate brighter or darker after-images after the prolonged viewing needed for adaptation, which would otherwise interfere with perception of subsequent tests. The two adapting Gabors were oppositely tilted from a standard orientation (45°) by one of a range of magnitudes (±0, 15, 30, 45, 75 and 90°), each sampled in a different block of trials.
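The construction of such a stimulus can be sketched in a few lines. The following Python illustration (the study's own code was Matlab driving a ViSaGe) multiplies zero-mean binary noise by a Gabor-shaped contrast envelope, so that contrast varies while spatially averaged luminance does not. The image size and pixels-per-degree conversion are illustrative assumptions; the spatial frequency and spatial constant follow the Methods.

```python
import numpy as np

def contrast_modulated_gabor(size_px=256, px_per_dva=35.8,
                             sf_cpd=0.4, sigma_dva=1.19,
                             orientation_deg=45.0, seed=0):
    """Multiply static binary noise by a Gabor-shaped contrast envelope.

    Mean luminance is unchanged everywhere (the noise is zero-mean), so
    prolonged viewing does not build up a bright/dark after-image.
    size_px and px_per_dva are assumptions made for illustration.
    """
    rng = np.random.default_rng(seed)
    half = size_px / 2
    y, x = np.mgrid[-half:half, -half:half] / px_per_dva    # coords in dva
    theta = np.deg2rad(orientation_deg)
    xr = x * np.cos(theta) + y * np.sin(theta)              # rotated axis
    carrier = np.sin(2 * np.pi * sf_cpd * xr)               # sinusoidal carrier
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_dva**2))  # Gaussian window
    noise = rng.choice([-1.0, 1.0], size=(size_px, size_px))  # binary noise
    contrast = 0.5 * (1 + carrier) * envelope               # 0..1 modulation
    return 0.5 + 0.5 * noise * contrast                     # luminance 0..1
```

Averaging the returned image over many noise samples yields a uniform field at mean luminance, which is the property that prevents first-order after-images.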

Figure 1.

Figure 1. Graphic depicting the experimental protocol. On each trial participants were adapted (for 5 s) to a pair of second-order contrast-modulated Gabors (e.g. top left). These were positioned to either side of a central fixation point. The two adaptors were oppositely rotated (left anti-clockwise, right clockwise) from a nominal orientation of 45° by one of a range of magnitudes (the range of possible adaptation differences is depicted at top right). Only a single adapted orientation difference was sampled in each block of trials. After a variable inter-stimulus-interval participants were briefly shown a pair of test second-order contrast-modulated Gabors, also oppositely rotated from 45° (see bottom left for examples). Participants were then asked to simultaneously indicate which test had been rotated more clockwise (i.e. which test was closer to horizontal) and whether they had low or high confidence in this decision. Note that example orientation differences are depicted at top and bottom by black bars for clarity. All stimuli were actually contrast-modulated Gabors (e.g. top left). (Online version in colour.)

After a brief inter-stimulus-interval (randomly varying between 0.25 and 1.25 s), participants were shown two test Gabors, also oppositely tilted from 45°. Participants then used a mouse to select one of four response options, indicating combinations of orientation perception (left/right test more rotated clockwise) and confidence (high or low; see figure 1 for a graphic depicting the experimental protocol). Baseline blocks of trials, without adaptation, were also conducted for comparison. Visual feedback regarding task performance was provided on the first eight trials of each block (figure 1). Tests on these trials all had very large orientation differences (±10.5°), which served to (re)familiarize participants with the experimental task. Data were not recorded from these trials. Feedback was discontinued after these initial easy trials, to avoid contaminating measures of intuitive confidence with feedback regarding task performance.

In addition to these initial practice trials, during a block of trials 14 test orientation differences (±10.5, 8.5, 6.5, 4.5, 2.5, 1.5 and 0.5°) were each presented on either 16 (baseline) or eight (adaptation) experimental trials. These were interleaved in random order, for totals of 224 (baseline) or 112 (adaptation) individual experimental trials. Each participant completed two blocks of trials for each of the six adaptors, and for the baseline condition. Data from the two blocks of each condition were collated before analyses.

Cumulative Gaussian functions (see figure 3c,d for functions fit to model data, and electronic supplementary material, figure S1 for functions fit to collated participant data) were fit to categorical perceptual decisions for each experimental condition, and we took 50% points as estimates of which test pairs were perceived as having subjectively equal orientations (PSE estimates, marked by red vertical lines in figure 3c,d). Distances between the 50% and 75% points (marked by red and black vertical bars, respectively) were taken as estimates of just noticeable differences (JNDs) in orientation—a measure of the precision of perceptual decisions. Raised Gaussian functions (figure 3e,f) were fit to individual confidence data from each experimental condition, and we took fitted function peaks as an additional PSE estimate (marked by red vertical bars in figure 3e,f). Note that tests coinciding with these points elicit greatest categorical uncertainty. We took the full-width at half height (FWHH) of raised Gaussian function fits as estimates of uncertainty spread (i.e. the range of tests that elicit uncertainty when making categorical decisions—with limits marked by black vertical bars in figure 3e,f).
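The psychometric fitting steps can be sketched as follows. This minimal Python illustration (the study's analyses were in Matlab) fits a cumulative Gaussian to synthetic categorical data with the paper's test differences, recovering the PSE as the 50% point and the JND as the distance between the 50% and 75% points; the starting values for the fit are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(x, mu, sigma):
    # proportion of 'rightward' decisions as a function of test difference;
    # abs() keeps the scale positive during optimization
    return norm.cdf(x, loc=mu, scale=abs(sigma))

def fit_pse_jnd(test_diffs, p_right):
    """Fit a cumulative Gaussian. The PSE is the 50% point (mu); the JND
    is the distance between the 50% and 75% points, i.e. sigma * z(0.75)."""
    (mu, sigma), _ = curve_fit(cum_gauss, test_diffs, p_right,
                               p0=[0.0, 2.0], maxfev=10000)
    return mu, abs(sigma) * norm.ppf(0.75)

# synthetic example using the paper's test differences (degrees)
x = np.array([-10.5, -8.5, -6.5, -4.5, -2.5, -1.5, -0.5,
               0.5, 1.5, 2.5, 4.5, 6.5, 8.5, 10.5])
p = norm.cdf(x, loc=1.0, scale=3.0)   # noiseless data: PSE 1.0, sigma 3.0
pse, jnd = fit_pse_jnd(x, p)
```

A raised Gaussian could be fit to confidence data in the same way, with its FWHH obtained from the fitted scale as 2√(2 ln 2) × sigma.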

3. Results

As predicted by many studies (e.g. [26]), adaptation to orientation differences of approximately 15–45° distorted orientation perception, with tests apparently tilted away from the orientations of adaptors that had been presented in the same spatial locations (figures 2a and 4a). Adaptation to ±15° differences also produced a reduction in decisional precision — not evident for other adaptors (as per [20], see figures 2c and 4b, blue data). Adaptation-induced reductions were greater in magnitude and more widespread for confidence (see figures 2c and 4b, red data). This is most obvious after ±30° adaptation (where there is no obvious reduction in decisional precision but a robust reduction in confidence; see figures 2c and 4b).

Figure 2.

Figure 2. Functions describing PSE changes (a) estimated from perceptual decisions (blue data) and confidence judgments (red data). Shaded regions depict ±1 s.e.m. (c) Uncertainty spread (red) and JND (blue) changes. Negative uncertainty spread changes reflect greater uncertainty post-adaptation, and negative JND changes reflect greater perceptual imprecision. (b) Histogram showing numbers of simulations resulting in different proportions of nominally ‘correct’ classifications of individual functions formed by randomly sampling from individual perception and confidence PSE change functions. This is a null distribution of chance classifications, which can be compared to our actual decoding classification success rate (red dotted bar). (d) Details are as for (b), but for a classification process for JND and uncertainty spread data. (Online version in colour.)

Figure 3.

Figure 3. Depictions of unadapted (Baseline) model channels (a), and channels adapted to +30° tilt (b). Faint blue lines depict channel response potentials on three simulated trials. Potential channel responses averaged across 1584 trials are also depicted (bold blue lines). (c) Proportion of trials categorized as right tilted by our baseline model, as a function of physical input values (black data points). A cumulative Gaussian function has been fit to these data (blue line). (d) As for (c), but for a +30° adapted model. (e) Proportion of trials categorized as resulting in a low-confidence decision by our baseline model, as a function of physical input values (black data points). A raised Gaussian function has been fit to these data (blue line). (f) As for (e), but for a +30° adapted model. (g) X/Y scatter plot depicting encoded (Y-axis) and physical (X-axis) orientations across 1584 trials simulated by our baseline model. While each physical input is encoded differently on discrete trials, due to simulated encoding noise, on average the baseline model encodes inputs veridically, so datapoints cluster about the red oblique line (which plots veridical 1 : 1 encodings). The horizontal green dotted line depicts the criterion value used for perceptual categorizations (0°), and the edges of the horizontal pink rectangle depict the single unsigned magnitude criterion used to categorize confidence as low or high (3°, see main text for a full description). (h) Details are as for (g), but for trials simulated by a +30° adapted model. (i) Differences between perceived orientation values encoded on individual trials by our baseline (X-axis) and +30° adapted (Y-axis) model. (Online version in colour.)

Figure 4.

Figure 4. (a) Observer model fits (solid lines) to PSE changes estimated from categorical perceptual decisions (blue data) and from confidence judgments (red data points). Negative after-effects have been generated by assuming these would be equal but opposite relative to our measured after-effects for positive adaptors. Positive after-effects have simply been re-drawn from figure 2a. (b) Observer model fits (solid lines) to changes in both JNDs (blue data—perceptual precision change estimates) and uncertainty spread (red data—confidence changes). Note that our model fits capture key qualitative features of all four datasets, most especially the tightly tuned increase in JNDs (which signify reductions in perceptual sensitivity, negative blue data points) for adaptors tilted ±15°, and the greater magnitude and more dispersed increase in uncertainty spread (negative red data points). (c) Observer model fits (solid blue line) to JND changes estimated from this study (blue data), and to data re-drawn from Regan & Beverly ([20]—blue data points). Data from the earlier study have been scaled by multiplying the percentage changes they reported by the ratio between our maximal negative model changes and their maximal JND percentage changes. Note the qualitative consistency between the two datasets, gathered 36 years apart, and the fact that our model captures key qualitative characteristics of both datasets. In all panels, error bars depict ±1 s.d. from the after-effect averaged across participants. (Online version in colour.)

As a formal test for a different impact of tilt adaptation on JND changes and uncertainty spread, we conducted a non-parametric shuffle test, based on a nearest neighbour classification process with jack-knifed cross validation. To minimize the influence of individual differences in after-effect magnitudes, all four individual after-effect functions were first normalized to the largest unsigned after-effect for that individual (i.e. the largest after-effect in each individual function was ±1). Individual functions describing the impact of tilt adaptation were then classified as having described a JND or an uncertainty spread function based on similarities between that function and all other individual functions for these two categories. This process successfully classified 92% of individual tilt adaptation functions for these two categories. This success rate can be compared to classifications resulting from 2000 simulations, wherein data points from each individual's JND and uncertainty spread change functions were randomly interchanged to form functions with no correspondence between experimental condition and function data. These arbitrary functions were then classified using the same process as the original procedure. The 2000 simulations provide a null distribution of chance classification success rates. Comparing our actual classification success rate to this null distribution of chance classifications resulted in a p-value of 0.005 (figure 2d)—demonstrating that the success of our classification procedure was very unlikely to have emerged by chance. Overall, these data show that tilt adaptation had a different impact on functions describing JND and uncertainty spread changes.
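The logic of this procedure can be illustrated with a short Python sketch on hypothetical data; the Euclidean distance metric and the point-wise shuffling scheme are assumptions about implementation details not fully specified above.

```python
import numpy as np

def loo_nn_accuracy(functions, labels):
    """Leave-one-out (jack-knifed) nearest-neighbour classification.
    Each row of `functions` is one normalized after-effect function;
    labels mark it as a JND (0) or uncertainty-spread (1) function."""
    functions = np.asarray(functions, float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(functions)):
        d = np.linalg.norm(functions - functions[i], axis=1)
        d[i] = np.inf                  # jack-knife: exclude the held-out item
        correct += labels[np.argmin(d)] == labels[i]
    return correct / len(functions)

def shuffle_null(functions, labels, n_sims=2000, seed=1):
    """Null distribution: randomly interchange data points between each
    individual's matched pair of functions (rows i and i + half), destroying
    any condition/function correspondence, then re-classify."""
    rng = np.random.default_rng(seed)
    f = np.asarray(functions, float)
    half = len(f) // 2
    null = np.empty(n_sims)
    for s in range(n_sims):
        shuffled = f.copy()
        for i in range(half):
            swap = rng.random(f.shape[1]) < 0.5
            a, b = shuffled[i].copy(), shuffled[i + half].copy()
            shuffled[i, swap], shuffled[i + half, swap] = b[swap], a[swap]
        null[s] = loo_nn_accuracy(shuffled, labels)
    return null
```

The p-value is then the proportion of null-distribution accuracies that equal or exceed the actual classification accuracy.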

The last set of results can be compared to a matching set of analyses for our two sets of PSE changes—one calculated from categorical perceptual decisions, and one calculated from confidence judgements (from estimates of peak uncertainty). These analyses reveal that these two measures are interchangeable, with an actual classification success rate of just 58%, and a non-parametric testing procedure resulting in a p-value of 0.175 (figure 2b). This demonstrates that categorical perceptual decisions and confidence judgements can provide equivalent measures of perceptual central tendency, which are equally impacted by tilt adaptation. Hence the differences we have identified between the impact of tilt adaptation on JND and uncertainty spread changes are not due to our two tasks providing unrelated measures, as central tendency estimates extracted from categorical perceptual decisions and confidence judgements were equivalent and equally impacted by tilt adaptation. Nor was the greater impact of tilt adaptation on confidence contingent on using FWHH estimates to measure uncertainty spread, as the same pattern of results was evident when we used the width at 90% of the height of fitted Gaussian functions to measure uncertainty spread (see electronic supplementary material, figure S2).

While the width of Gaussian functions fit to low-confidence decisions provides a good metric of uncertainty spread, which is impacted by visual adaptation, the height of these functions is governed by how willing people are to report having low confidence. This is determined not just by insight into perception but also by personality-related biases, for instance, a tendency to report high confidence regardless of performance [27]. Such a bias may be evident in our data, as proportions of low-confidence decisions did not tend to reach 1, even for tests resulting in chance levels of performance (see electronic supplementary material, figure S1). The peak proportion of trials resulting in low-confidence decisions, as indicated by the height of raised Gaussian functions fit to individual data, did not seem to be impacted by tilt adaptation. Peak proportions for baseline trials (M = 0.78, s.d. = 0.21) were, for example, no different from peak proportions for 30° adapted trials (M = 0.79, s.d. = 0.26; paired t5 = 0.29, p = 0.786, see electronic supplementary material, figure S1). This contrasts with robust differences in uncertainty spread (baseline M = 176, s.d. = 35; 30° adapted trials M = 228, s.d. = 41; paired t5 = 3.45, p = 0.018, see electronic supplementary material, figure S1).

So, why would tilt adaptation have a different, greater, impact on uncertainty spread than on perceptual precision? To answer this question, we created a labelled line observer model [21–23] to describe the impact of tilt adaptation on perception, the precision of perceptual decisions and confidence (see electronic supplementary material, observer model code). This class of model assumes that sensory information is encoded as a pattern of responses to input from a number of ‘channels’ that are each maximally responsive to a different magnitude of input—in this case, to different test orientations (figure 3a,b). The potential response of a channel to inputs in our model is described by a normal distribution, with a standard deviation of 10°. Peak potential responses (channel tunings) are separated by 10°, ranging from −90° (horizontal) to +80° in 10° steps—so our model has 18 channels. The neural consequences of visual adaptation include reduced responding and changes to both the optimal input, and to the range of inputs, that drive responses [24]. We model these effects by implementing a reduction (of up to 95%) in the response potential of model channels that are reactive to the adaptor [21–23], and by applying model channel tuning shifts (of up to 10°) away from adapted orientations [24,25] (figure 3b). We operationalize encoding noise by applying a randomly determined reduction to the response potential of each channel (ranging from 0 to 100%) on each trial (figure 3a,b, and observer model code provided as electronic supplementary material).
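The channel stage of such a model can be sketched as follows (a Python sketch; the Matlab code in the electronic supplementary material is the authoritative implementation). The channel spacing, tuning bandwidth, maximal gain loss and maximal tuning shift follow the values given above; the Gaussian spread of adaptation effects across channels (s.d. 20°) is an assumption made for illustration.

```python
import numpy as np

TUNINGS = np.arange(-90, 90, 10.0)   # 18 channel preferences (deg)
SIGMA = 10.0                          # tuning bandwidth (s.d., deg)

def circ_diff(a, b, period=180.0):
    """Signed orientation difference on a 180 deg circle."""
    return (a - b + period / 2) % period - period / 2

def adapted_channels(adaptor_deg=None, max_gain_loss=0.95, max_shift=10.0):
    """Per-channel gain and tuning after adaptation.

    Channels near the adaptor lose response gain (up to 95%) and their
    tunings shift away from the adapted orientation (up to 10 deg); the
    Gaussian spread of these changes (s.d. 20 deg) is an assumption."""
    gains = np.ones_like(TUNINGS)
    tunings = TUNINGS.copy()
    if adaptor_deg is not None:
        d = circ_diff(TUNINGS, adaptor_deg)
        w = np.exp(-d**2 / (2 * 20.0**2))       # proximity to the adaptor
        gains -= max_gain_loss * w              # gain loss near the adaptor
        tunings += np.sign(d) * max_shift * w   # repulsive tuning shift
    return gains, tunings

def channel_responses(test_deg, gains, tunings, rng):
    """Noisy population response: Gaussian tuning curves scaled by per-trial
    multiplicative noise drawn uniformly from 0-100% per channel."""
    drive = np.exp(-circ_diff(test_deg, tunings)**2 / (2 * SIGMA**2))
    return gains * drive * rng.random(len(TUNINGS))
```

Adapting to +30° leaves the channel tuned to 30° with only 5% of its unadapted response potential, while channels flanking the adaptor are both suppressed and repelled.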

Our model determines a perceived orientation value on each trial by taking a weighted sum of the orientation tunings of the channels responsive to the test input, with weights proportional to the magnitudes of channel responses. As the potential response magnitudes of each channel are randomly modulated on each trial, to simulate neural encoding noise, different perceived orientation values are encoded from repeated exposures to identical physical tests (figure 3g,h). Categorical perceptual decisions (e.g. is a stimulus tilted left or right?) on each trial are determined by indexing these perceived orientation values against a single criterion (0°). Categorical confidence decisions (e.g. low or high) are similarly determined, by indexing (unsigned) orientation values against a single magnitude criterion (3°, so inputs with perceived orientation values exceeding ±3° elicit a high-confidence categorization, and smaller values elicit a low-confidence categorization). Our behavioural results suggest this magnitude criterion (of 3°) equates to approximately 1.1 times the average baseline JND for our observers (M = 2.69°, s.d. = 0.81°).
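The decoding and decision stages can be sketched as follows (a self-contained Python sketch that repeats the channel set-up; adaptation is omitted here for brevity, and the noise scheme follows the model description above).

```python
import numpy as np

TUNINGS = np.arange(-90, 90, 10.0)   # 18 channel preferences (deg)
SIGMA = 10.0                          # tuning bandwidth (s.d., deg)

def decode_trial(test_deg, rng, conf_criterion=3.0):
    """One simulated trial: labelled-line (weighted-sum) decode plus
    categorical perception and confidence decisions.

    Channel responses are Gaussian tuning curves scaled by per-trial
    multiplicative noise (uniform 0-100% per channel)."""
    resp = (np.exp(-(test_deg - TUNINGS)**2 / (2 * SIGMA**2))
            * rng.random(len(TUNINGS)))
    perceived = np.sum(resp * TUNINGS) / np.sum(resp)  # weighted sum of labels
    tilted_right = perceived > 0.0              # single perceptual criterion (0 deg)
    high_conf = abs(perceived) > conf_criterion  # single magnitude criterion (3 deg)
    return perceived, tilted_right, high_conf

# simulate repeated presentations of a +6 deg test
rng = np.random.default_rng(0)
trials = [decode_trial(6.0, rng) for _ in range(1584)]
p_right = np.mean([t[1] for t in trials])
p_high = np.mean([t[2] for t in trials])
```

Because the per-channel noise is multiplicative, identical physical tests yield different perceived values across trials, reproducing the scatter shown in figure 3g,h.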

Model data from a simulated experiment are shown in figure 3. Repeated trials with identical tests result in different perceived orientation values, both at baseline (figure 3g) and after adaptation to a +30° tilt (figure 3h). Differences between values decoded from the baseline and +30° adapted model are also shown (figure 3i). After +30° adaptation, model decodings of positive physical tilts tend to be reduced in magnitude. This has relatively little impact on JNDs, as JND estimates are governed by values decoded at and immediately about the categorical decision criterion (0°—marked by green dotted lines), and the greatest changes driven by +30° adaptation are offset from this value. However, confidence is more negatively impacted, as adaptation-induced changes encroach on the criterion magnitude for confidence (indicated by the edges of the pink shaded region—note the horizontal distances of data points from the red line, which marks veridical baseline : adapted orientation values), meaning that post-adaptation fewer physical inputs result in orientation values that surpass the confidence criterion threshold. These modelled data are reminiscent of our behavioural data (figures 2c and 4).

Observer model fits to functions describing all four behavioural after-effects are depicted in figure 4. These represent a simultaneous fit to PSE change estimates from both categorical perceptual decisions (figure 4a, blue data) and confidence judgements (figure 4a, red data), and to JND (figure 4b, blue data) and uncertainty spread (figure 4b, red data) changes. Note that we have used a common set of model parameters to fit all four datasets, and the different impacts of tilt adaptation on perceived orientation (figure 4a), perceptual precision (figure 4b, blue data) and confidence (figure 4b, red data) are all well described by our model, with its core assumption that perceptual precision estimates and confidence measures rely on different magnitudes of sensory information (i.e. on different magnitudes of perceived tilt). In figure 4c, we have also re-plotted our data relating to adaptation-induced changes in tilt JNDs (black data points) along with a similar dataset re-drawn from an earlier study [20], which also examined the impact of tilt adaptation on tilt JNDs. The high consistency between our data and the dataset collected some 36 years ago suggests to us that we (and they) have captured reliable changes in perceptual precision post tilt adaptation.

It is common practice to describe labelled line models as having just a couple of free parameters, which are adjusted to best fit data (e.g. [26]). The behaviour of labelled line models is, however, contingent on a complex interplay between many factors; in our case these include the number (18) and spacing (10°) of model channels, their tuning bandwidths (10°), the criterion values chosen to classify data, the magnitudes of post-adaptation changes in potential channel responses (up to 95%) and tuning shifts (up to 10°), as well as the spread of these changes across model channels. Of these, post-adaptation reductions in potential channel responses are most important for biasing perceived orientation values away from adapted orientations, and tuning shifts are most important for generating localized changes in perceptual precision and confidence. The other factors tend to govern the spread of these changes. We arrived at our parameter settings via a process of educated adjustments to best fit our data. Given the high dimensionality of our model, we would not describe it as optimal in any way, or assert that it is superior to any other similar model. We simply regard the performance of our model as an existence proof, that the key qualitative features of our data can be described by a biologically inspired model that does not assume that confidence and perceptual precision are informed by different types of sensory processing.

4. Discussion

We have found that tilt adaptation has a different impact on measures of perceptual precision and confidence. Both tended to be undermined by tilt adaptation, but the impact of tilt adaptation on the spread of uncertainty was greater than its impact on estimates of perceptual precision. Moreover, we have found that both sets of changes can be explained by a labelled-line observer model, which assumes that estimates of perceptual precision and confidence rely on different magnitudes of sensory information (i.e. on different magnitudes of perceived orientation difference between tests). Estimates of perceptual precision are assumed to rely on small differences in perceived orientation (i.e. approximately ±2.69°) which are too slight to evoke high levels of confidence. These feelings are assumed to rely on perceiving a greater magnitude of difference between test orientations (i.e. approximately ±3°).

Our data reveal that tilt adaptation can have a different impact on measures of perceptual sensitivity and confidence, which is reminiscent of other investigations that have identified stimulus manipulations that selectively impact perceptual confidence [3,13–15]. The important conceptual implication of our data is that they challenge the assumption that any dissociation between decisional precision and confidence must rely on these being informed by different types of information, as has previously been assumed by a number of researchers [13–15], including the lead author of this paper [3]. Rather, a dissociation can emerge if judgements rely on different magnitudes of perceived information. This is not to say that all such dissociations will have a similar cause, but our data encourage a re-evaluation of similar findings.

In this investigation, we were primarily interested in how perception and sensory processing inform confidence. If perceptual precision and confidence had been equally impacted by visual adaptation, our data would only have shown that the precision of perception can inform confidence—which is already well established [1,2]. However, we have found that visual adaptation has a greater (mostly negative) impact on confidence than on perceptual precision—so our confidence measures are not just a different metric of perception.

Our investigation has focused on the direct contributions of visual processing to confidence in perceptual decisions [1,2,8–12]. This is not to deny that other factors also govern perceptual confidence. Cognitive processes, such as memory and attention, have a strong influence on confidence measures that are additional to any impact of sensory processing (e.g. [9,15]). An individual's personality is also a powerful general factor, which can be misplaced (e.g. generally confident people can perform relatively poorly) [27]. Our model does not capture these factors, which could be described by a model with a more elaborate architecture (e.g. [28]). We, however, have refrained from this, as our goal was to see if a relatively simple model of sensory processing could explain detailed datasets describing how sensory adaptation changes perception, estimates of decisional precision and confidence. We have succeeded in that goal, and feel that this provides strong evidence that perceptual precision and confidence are informed by different magnitudes of sensory evidence. We do not wish, however, to create the false impression that ours is a comprehensive model of perceptual confidence. It is not.

There are a number of ways to measure confidence—from forcing people to select which of a small number of labelled options best describes their decisional confidence (e.g. number labels ‘1’–‘5’) [6–8,16], to asking people to choose which of a small number of recent decisions had elicited most confidence [5], or asking people to commit to a post-decisional wager regarding whether or not they were correct [29]. We used a protocol suited to the particular needs of our experiment. By having people simultaneously report on perception and confidence, we saved time on each trial, which is an important practical consideration as adaptation trials are already protracted, so by limiting response times we can maximize data collection. Simultaneous responding on perception and confidence also eliminates confounds relating to memory decay, which are a feature of sequential reporting protocols (e.g. [3,5–7,15,29]).

Adaptation is a ubiquitous feature of visual processing, so the approach we have used here could be extended to examine numerous other forms of visual adaptation, from features that are typically regarded as being encoded at a ‘low-level’ of the visual hierarchy (e.g. colour and motion direction adaptation), to features that are thought to tap higher levels of analysis (e.g. human face adaptation, for a review see [30]). Future investigations along these lines would generate useful datasets to clarify the impact of visual adaptation on measures of perceptual precision and confidence, revealing if the principles we have discovered here are context specific, or will generalize.

There is great contemporary interest in the human brain's ability to self-evaluate its internal processes, including its ability to assess the accuracy of its perceptual decisions. Our data suggest feelings of confidence in perceptual decisions might simply scale with the magnitude of encoded sensory information. This possibility has previously been discounted, as there are situations where estimates of perceptual precision and confidence systematically differ. While our data further reinforce that this can happen, these same data suggest that, in this context, these estimates have differed not because they rely on different types of information, but because they have relied on different magnitudes of sensory information. These findings should therefore encourage researchers to re-evaluate assumptions regarding perceptual metacognition. Our data suggest the influence of sensory processing on confidence might be more straightforward than some researchers have assumed, based on interpretations of datasets that are qualitatively very similar to the findings we have reported here.

Data accessibility

All data and code are available in the main text and as electronic supplementary material.

The data are provided in electronic supplementary material [31].

Authors' contributions

D.H.A.: conceptualization, formal analysis, funding acquisition, investigation, methodology, project administration, software, supervision, writing-original draft, writing-review and editing; N.L.A. and B.W.S.: investigation, writing-review and editing; A.J.: conceptualization, funding acquisition, writing-review and editing.

All authors gave final approval for publication and agreed to be held accountable for the work performed therein.

Competing interests

The authors declare that they have no competing interests.


Funding

This research was supported by an ARC Discovery Project grant (no. DP200102227) awarded to D.H.A. and A.J.


Electronic supplementary material is available online at

Published by the Royal Society. All rights reserved.