Here time turns into space: Does consciousness implement the fractional Fourier transform?

Exploring fractional Fourier transforms as a possible organizing principle for wave-based computations in the brain.

Cube Flipper (Smoothbrains, https://www.smoothbrains.net/) and Andrés Gómez-Emilsson (Qualia Research Institute, https://www.qri.org/)
Oct 11, 2025

Introduction

This post is co-authored with Andrés Gómez Emilsson, who has also been pursuing a similar line of thinking around fractional Fourier transforms for some time. I’d also like to say thanks to Hunter Meyer, Ethan Kuntz and Raimonds Jermaks from the Qualia Research Institute for the extensive discussions earlier this year; Qiaochu Yuan for pointing me at linear canonical transformations (Wikipedia contributors 2024c); and Sloan Quine for additional technical discussions.

My apologies in advance for the rambly infodump nature of this post – I’m doing my best to provide a snapshot of a number of lines of thinking I began exploring earlier this year. When writing this sort of thing, I usually try to take the time to unpack dense concepts and explain my lines of reasoning along the way, but I’ll admit to being a little more fast and loose this time around. I’ll also be discussing mathematics which I am only recently familiar with. If things don’t entirely make sense, you’re welcome to send me an email.

This post makes reference to a model of consciousness in which subjective experience is taken to be one and the same as the electromagnetic field. If some parts don’t make sense, I recommend reading my earlier post, An introduction to Susan Pockett: An electromagnetic theory of consciousness.

I was lucky enough to be introduced to the Fourier transform when I was relatively young (Wikipedia contributors 2024a). I was a media nerd as a teenager, and as such I spent a significant amount of time editing digital images and audio using software like Adobe Photoshop and Ableton Live. Lacking restraint, I often had a lot of fun combining the filters available in original ways – layering up excessive colour correction or convolution filters in Photoshop, or perhaps chorus, reverb, and equalisation in Ableton.

Self-portrait taken in 2001 using a Sony Mavica MVC-FD71. A heavy posterisation filter is applied.

My life changed when someone whispered the words signal theory in my direction. This led me to understand that audio and images were just one- and two-dimensional digital signals, and that mathematically there wasn’t much difference between the amplitude of a sample and the brightness of a pixel. Additionally, many of the one-dimensional transformations I was applying had two-dimensional equivalents – and vice versa. An audio low-pass filter is also a Gaussian blur when you apply it to an image! Got it.

The second breakthrough came when someone else nudged me towards reading about the Fourier transform. I gave a brief review of the Fourier transform in my previous post:

If you are unfamiliar with what the Fourier transform is, a full explainer is beyond the scope of this post – but I don’t think that understanding how it works is as important as simply understanding what it does. Suffice it to say that the Fourier transform is a mathematical function which can take a signal as input and produce its frequency domain representation as output. This relies upon the fact that any arbitrary signal can be constructed by summing a series of sinusoidal functions. This is a lossless transform – the frequency domain representation can be transformed back into the original signal without any loss of information.

An approximation of a square wave constructed from the sum of a Fourier series of sine waves. Viewed from either side, we see the time domain or frequency domain – but when viewed from above, we see the time-frequency domain. Visualisation by Izaak Neutelings (Neutelings 2019).

The Fourier transform is widely used in signal processing, and has discrete versions employed in digital signal processing. Have you ever seen the scrolling spectrogram displayed by some music software? That’s the Fourier transform in action. The Fourier transform can operate on signals of arbitrary dimensionality – two-dimensional Fourier transforms exist, and are used in image processing.

If you are interested in a full explainer detailing how the Fourier transform works, I’d recommend checking out one of these:

  1. But What Is the Fourier Transform? A Visual Introduction, by Grant Sanderson of 3Blue1Brown (Sanderson 2018)
  2. An Interactive Introduction to Fourier Transforms, by Jez Swanson (Swanson 2019)

I found this revelatory. Suddenly I was no longer restricted to thinking about the media I worked with in the time domain or spatial domain – there was a whole new frequency domain I could transform signals into. It felt like I could now see the world from a vantage point in a dimension orthogonal to spacetime.

Illustration of the Fourier transform. From Artem Kirsanov on YouTube.

I began to notice applications of the Fourier transform all around me; how it pervaded our technological reality and how I could use it to reason about the signals I encountered in everyday life. I now understood how the compression codec in my MP3 player and the pitch shifting algorithm in my guitar’s DigiTech Whammy pedal worked.

Back in 1999, the musician Aphex Twin used software called MetaSynth to hide a picture of his own face in the spectrogram of one of his songs. Viewed in Adobe Audition.

I was delighted to discover audio software like MetaSynth and, later, Adobe Audition, whose spectrogram view lets you work inside the short-time Fourier transform directly – and I even found a two-dimensional Fourier transform plugin for Adobe Photoshop, which was so powerful that I was surprised Photoshop did not come bundled with such a feature by default.

A demonstration of how to use a two-dimensional Fourier transform plugin in Adobe Photoshop to remove a moiré pattern from a scanned image. By VSXD Tutorials on YouTube.

Later in life I would often find excuses to work frequency domain processing into various creative coding projects, as I found that I could often derive much more original and surprising effects than what was achievable otherwise.

Even later, I find myself involved with consciousness research. I’ve described it better elsewhere – but in brief, I regard this as a process of reverse engineering, which involves observing phenomenology in both sober and altered states of consciousness, and discussing which mathematical tools are most appropriate for modelling its dynamics.

Frequency domain analysis is incredibly useful, so it would be surprising to me if evolution did not find a way of implementing it too. For example, it can be used in object recognition to recognise frequency domain invariants of a given object independent of translation, rotation, and scale. Perhaps it would be fruitful to consider how a biological analogue of the Fourier transform might be implemented – and if so, might we also recognise its phenomenological signature?

Pairs of common functions and their frequency domain spectra. Note how the cosine function corresponds to a spike in the frequency domain, and so on. Consider that such values will remain invariant independent of where the original signal is located in time. From Feature Extraction and Image Processing for Computer Vision (Nixon and Aguado, 2002).
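
To make the translation part of that concrete: shifting a signal only changes the phase of its Fourier coefficients, so the magnitude spectrum – the kind of quantity depicted above – stays the same wherever the object sits. A minimal numpy sketch, purely illustrative and not taken from the book or the original post:

```python
import numpy as np

# A one-dimensional "object": a small bump in a 256-sample signal.
n = 256
x = np.arange(n)
bump = np.exp(-0.5 * ((x - 64) / 4.0) ** 2)

# The same object translated elsewhere in the signal.
bump_shifted = np.roll(bump, 100)

# A circular shift only multiplies the spectrum by a phase factor,
# so the magnitude spectrum is identical for both signals.
mag = np.abs(np.fft.fft(bump))
mag_shifted = np.abs(np.fft.fft(bump_shifted))
print(np.allclose(mag, mag_shifted))  # True
```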

In this post, I’m going to discuss how the brain may implement something like the Fourier transform. Specifically, I believe it may use something called the fractional Fourier transform – a generalisation of the regular Fourier transform which is amenable to implementation using wave dynamics.

First, I’ll explain what the fractional Fourier transform is and show how it appears naturally in a variety of physical systems. Second, I’ll discuss the kind of phenomenological evidence which would demonstrate the signature of the fractional Fourier transform – particularly the characteristic ringing artifacts which arise in the visual field during psychedelic experiences. Third, I’ll explore how the brain might implement such a transform using travelling waves in cortical structures. Finally, I’ll discuss why this would be computationally advantageous – it would provide a means of implementing the kind of massively parallel pattern recognition operations which would be prohibitively expensive to implement any other way.

What is the fractional Fourier transform?

I’m looking for a biologically plausible implementation, so I was pleased to discover that the fractional Fourier transform exists – a continuous transform which can smoothly interpolate from the time or spatial domain to the frequency domain and back again.

The fractional Fourier transform function.
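
For reference – conventions for signs and normalisation vary between sources, so take this as one standard form rather than the exact expression pictured above – the fractional Fourier transform of angle α (order a = 2α/π) can be written as:

$$
\mathcal{F}_\alpha[f](u) \;=\; \sqrt{\frac{1 - i\cot\alpha}{2\pi}} \int_{-\infty}^{\infty} \exp\!\left( i\,\frac{x^2 + u^2}{2}\cot\alpha \;-\; i\,x u\,\csc\alpha \right) f(x)\,dx
$$

At α = π/2 (order a = 1) the chirp terms vanish and this reduces to the ordinary Fourier transform.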

Crucially, the fractional Fourier transform can be implemented in wave-based systems – for example, in optics via Fresnel diffraction (Wikipedia contributors 2024b), or, speculatively, in cortical travelling waves. We’ll get into how that works in a moment, but first I’m going to explain the fractional Fourier transform. We’ll work in two dimensions, as I think that’s more illustrative. We’ll start by looking at the regular two-dimensional Fourier transform:

A demonstration of the two-dimensional Fourier transform using the Lena test image. The spatial domain signal is on the left and the frequency domain signal is on the right. Red, green, and blue channels are transformed separately. Only the complex magnitude is shown.

Note that the lowest frequencies are at the origin, while the highest frequencies are at the border. In this case, the presence of low frequencies generates the bright central cusp. We also see a pair of diagonal streaks indicating the presence of broadband diagonal spatial frequencies in the original signal.

A demonstration of the two-dimensional Fourier transform using a more abstract test image – a tilted square grid. The significant frequency domain components are visible on the right as the cross-shaped cusps aligned perpendicular to their originating waves.

Andrés Gómez Emilsson has a good tweet exploring this perhaps unconventional use of the term cusp, and how optical cusps might constitute the building blocks of our inner world simulation.

The fractional Fourier transform can be considered a rotation in the time-frequency plane, and as such its free parameter is known as the angle α, a value in the range 0 to 2π – though sometimes we instead use the order a = 2α/π, which lies in the range 0 to 4. This is the fractional Fourier transform at four different orders:

The two-dimensional fractional Fourier transform for orders a ∈ {0, ⅓, ⅔, 1}.

Isn’t that neat. I think the ringing artifacts at a = ⅓ are particularly curious. The integer values of a are useful to understand:

  a = 0: the signal
  a = 1: the Fourier transformed signal
  a = 2: the inverted signal
  a = 3: the inverse Fourier transformed signal
  a = 4: the signal again

The fractional Fourier transform loops back on itself at a = 4:

The fractional Fourier transform can be understood as a rotation in phase space.
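
If you want to play with this numerically, one quick-and-dirty way to get a discrete analogue – not the standard discrete definition built from Hermite–Gauss-like eigenvectors, just a sketch – is to take a fractional matrix power of the unitary DFT matrix:

```python
import numpy as np
from scipy.linalg import dft, fractional_matrix_power

def frft(signal, a):
    """Crude discrete fractional Fourier transform of order a.

    Uses a fractional power of the unitary DFT matrix. Because the DFT
    matrix has degenerate eigenvalues, the result at fractional orders
    depends on branch choices made internally -- fine for visual
    experiments, but not the canonical discrete construction.
    """
    n = len(signal)
    F = dft(n, scale='sqrtn')            # unitary DFT matrix, F^4 = I
    Fa = fractional_matrix_power(F, a)   # F raised to a (possibly fractional) power
    return Fa @ signal

x = np.random.randn(64)
print(np.allclose(frft(x, 1), np.fft.fft(x) / np.sqrt(64)))  # order 1 is the unitary DFT
print(np.allclose(frft(x, 4), x))                            # order 4 returns the signal
```

For images you would apply this separably along rows and columns, which is how the renderings in this post behave conceptually, whatever their actual implementation.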

Perhaps an animated version will be more illustrative:

The two-dimensional fractional Fourier transform cycling through the range a ∈ [0, 4). I suspect consciousness may be doing something akin to this, forty times per second – but we’ll come back to that later. (Flipper 2025)

Isn’t this mysterious. If you have noticed the aesthetic resemblance to diffraction patterns, that’s not a coincidence. The fractional Fourier transform can be implemented using a coherent light source and a pair of lenses – and analog frequency domain filters can even be implemented by inserting a mask at the focal plane:

Demonstration of the optical Fourier transform. Note how the first lens, focal plane, and second lens correspond to fractional orders 0, 1, and 2 respectively. By Hans Chiu on Twitter.

Optical systems and linear canonical transformations

Perhaps this will make sense if you are already familiar with the Huygens-Fresnel principle. The fractional Fourier transform can be understood as a convolution with a quadratic phase kernel, which represents the phase accumulated by a spherical wavefront during propagation. The key word here is quadratic – unlike a simple plane wave which has a linear phase, a spherical wave has a phase that varies with the square of the distance. When you convolve a signal with this quadratic phase pattern, it’s a little like asking, what would this signal look like if it were illuminated by a spherical wavefront? Visually, it looks like the complex-valued analogue of the real-valued Fresnel zone plates which show up in holography:

The quadratic phase kernel e^(ir²) in the range x, y ∈ [–8, 8]. This kernel should be scaled proportional to the wavelength and lens curvature.

In physical optics, this is exactly the effect you get from free-space propagation combined with a lens of a particular focal length. The fractional Fourier transform is actually a specific case of a more general family of transforms known as linear canonical transformations. These are the set of all linear symplectic (orientation and area preserving) transformations of the time-frequency plane:
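
Concretely – conventions vary, so treat this as one common parametrisation rather than the post’s own notation – each linear canonical transformation is labelled by a real 2×2 matrix of unit determinant acting on the time-frequency plane, and the fractional Fourier transform of angle α corresponds to the pure rotation:

$$
M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \quad ad - bc = 1,
\qquad
M_{\mathrm{FrFT}}(\alpha) = \begin{pmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{pmatrix}
$$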

Another specific case of a linear canonical transformation is the chirplet transform, corresponding to a shear followed by a rotation in the time-frequency plane. Such transforms are of interest to me because they may be especially useful for representing features invariant to perspective projections.

Linear canonical transformations have a one-to-one correspondence with paraxial optical systems, which are physical optical systems made from standard first-order optical elements and analysed under the paraxial approximation. This in turn is its own rabbit hole.

Quantum systems and fractional revivals

The fractional Fourier transform can also be implemented using the Schrödinger wave equation from quantum mechanics. In fact, the Fourier transform and the Schrödinger equation are deeply related to one another. The Schrödinger wave function is usually expressed in position space ψ(x, t), but to obtain the wavefunction in momentum space ϕ(p, t) you simply apply the Fourier transform. This also shows why the Heisenberg uncertainty principle is mathematically equivalent to the Fourier uncertainty principle.

Anyway, to compute the time evolution of a Schrödinger wave function, it’s natural to move into momentum space – this is where the Fourier transform comes in. The time evolution of the wave function of a particle in free space is equivalent to the Fresnel transform, which is itself equivalent to the fractional Fourier transform except that it spreads out in space as time goes on. For a particle in a quadratic potential well, the time evolution is exactly the fractional Fourier transform. Other potential wells are more complicated, but may also be approximated locally by similar Fourier-type transformations.
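
As a concrete sketch of that “move into momentum space” step – with illustrative units where ħ = m = 1, nothing more – the free-particle propagator is diagonal in momentum space, so one step of time evolution is an FFT, a phase multiplication, and an inverse FFT:

```python
import numpy as np

# Grid and an initial Gaussian wave packet (hbar = m = 1).
n, L = 1024, 100.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
psi = np.exp(-(x + 20) ** 2 / 4.0) * np.exp(1j * 2.0 * x)   # moving packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))          # normalise

def evolve_free(psi, t):
    """Free-particle time evolution: a diagonal phase in momentum space."""
    phi = np.fft.fft(psi)                    # position -> momentum
    phi *= np.exp(-1j * k ** 2 * t / 2.0)    # free propagator exp(-i k^2 t / 2)
    return np.fft.ifft(phi)                  # momentum -> position

psi_t = evolve_free(psi, t=5.0)
print(np.sum(np.abs(psi_t) ** 2) * (L / n))  # norm is conserved (~1.0) as the packet spreads
```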

A quantum mechanical simulation of the behaviour of a particle in an infinite one dimensional potential well. The probability distribution of the particle’s position is at the top and the probability distribution of the particle’s momentum is at the bottom. Note that the pattern repeats itself as the video loops. Generated using Paul Falstad’s 1D quantum states applet.

Such systems may return to their original states periodically, despite undergoing complicated spreading and self-interference along the way. The period on which they repeat themselves is known as the revival time, and in some systems, the wave function passes through exact fractional Fourier transforms of the original signal along the way. In other systems, fractional revivals may instead produce multiple smaller copies of the original wave packet rather than a simple Fourier relation.

A quantum mechanical simulation of the behaviour of a particle in an infinite two dimensional potential well. Notice the fractional revivals where a smaller version of the original wave function tiles the potential well. Generated using Paul Falstad’s 2D quantum box modes applet.

Revivals are a generic wave phenomenon and are not unique to quantum systems – they show up in classical optical and acoustic systems as well. I found this to be an encouraging prospect – suddenly the class of systems which could implement the fractional Fourier transform became much broader. I began experiencing imaginal visions, of the structure of the universe reflecting and repeating itself all the way down, dispersing and reviving on an eternal loop, at every scale, forever…

Another way of viewing the fractional Fourier transform is to think of its order as corresponding to the propagation distance in free-space diffraction. For a plane wave passing through a periodic grating, the field at distance z is given by the fractional Fourier transform of the grating’s transmission function at some order a proportional to z. Plotting the field intensity as a function of position and distance then gives you a pattern known as a Talbot carpet: (Wikipedia contributors 2024d)

The optical Talbot effect for a diffraction grating. Full or fractional revivals of the original pattern appear at propagation distances that are rational fractions of the Talbot distance, forming the intricate interference pattern of the Talbot carpet. From Wikipedia. (Wikipedia contributors 2024d)

These patterns are of practical relevance when one considers the whispering gallery modes that arise close to the surface of optical fibers:

Whispering gallery modes used to implement a one-to-eight beam splitter in a cylindrical waveguide. From Talbot interference of whispering gallery modes (Eriksson et al., 2025). (Eriksson et al. 2025) Found via outside five sigma on Twitter. Even more unusual beam splitters are possible with differently shaped fibers.

Mostly I am highlighting Talbot carpets here because they round out our tour of the fractional Fourier transform’s ubiquity. From the quantum revivals in potential wells to the interference patterns in fiber optics, it seems that the fractional Fourier transform is kind of just lying around in nature, waiting to be noticed – not as some exotic mathematical object, but as a fundamental organisational principle underlying the evolution of wave systems in time.

For more on Talbot carpets, please see Fernando Chamizo’s page on classical and quantum Talbot carpets.

Does the fractional Fourier transform show up in subjective experience?

We think that it’s fairly straightforward to make the case that the plain Fourier transform shows up in subjective experience. I presented an example of frequency domain attentional modulation in my previous post:

Perhaps this image is a good example of how phenomena can have spectral components. Consider texture – just as I can focus my attention like a spotlight at different locations in space, I find I can also use it to tune in to different spectral qualities of a given object. This stimulus is constructed from three sine waves – can you isolate each one in turn? From The dynamics of perceptual rivalry in bistable and tristable perception (Wallis and Ringelhan, 2013).

We believe this becomes easier to observe in altered states. Have you ever taken a small amount of LSD and found yourself engrossed in a pattern on the wallpaper or carpet? Consider what frequency domain transform that might correspond to – are specific spectral peaks being stabilised or amplified, while spatial precision is reduced? Andrés explored uncertainty principle qualia in great detail with The DemystifySci Podcast last year:

What I have found is that there is something akin to the Heisenberg uncertainty principle going on, having to do with what kind of patterns you can perceive. And essentially there’s a trade-off between having more information about the spatial domain – like the position of things – versus having more information about the frequency domain – like the frequencies and vibrations, and the spectrum, like what’s happening in different time scales.

And so one extreme – this happens in high dose LSD, but it’s also a jhāna effect – on the one extreme you can concentrate all the information into position, and when that happens you collapse into one point. It’s a very strange experience – you just become one point, you’re not a person anymore, it feels as if all of your attention is just one point. That’s a real state of consciousness – the Buddhists talk about it – it’s very peculiar.

But then also, the complete opposite – you can concentrate all of the information, or all of the sampling into frequency. So when you do that, you turn into a vibe. You stop being anywhere and you’re just a vibration. You’re just a wave – it’s very emotional, it’s a very different type of experience.

So, my guess is that these are the extremes. So when you’re hyper-concentrated, you can do this or you can do that – I think normal states of consciousness are a mixture. Like we have several attentional centers, and they’re sampling both spatial and temporal information – both momentum and position – and the precise balance that you choose between them corresponds to your personality and your state of consciousness. In a way, your way of being is intimately related with how you sample the world. There’s always going to be something you’re missing – because if you just focus on frequency you’re gonna lose the spatial information, and vice versa. So I wonder if that is connected – no entity can actually know a space fully – it has to do a trade-off, and there’s no way around it.

However, I am interested in looking for signatures of the fractional Fourier transform, which is a little more specific and complicated. I’ve shown a handful of people the visuals I generated for this post, and in the process of doing so I’ve received some amount of positive feedback.

Raimonds’ initial reaction when I showed him the renderings I made for this post.

People have compared these renderings to a variety of unusual phenomena, including k-holes, LSD, meditative cessations, and even the arising and passing of consciousness frames. I find this encouraging, although the states described tend to be quite extreme – only accessible through high doses of drugs or extensive training in meditation. I also don’t think this kind of informal phenomenology is repeatable enough to stake any strong claims on – I’m more interested in finding accessible, low level, simple phenomena which are amenable to study using psychophysics experiments.

A few people have recognised the distinctive shifting noise patterns in the background of the fractional Fourier transform as the kind of thing they observe while on psychedelics or dissociatives. I think they are quite similar to the speckle patterns that occur when a coherent light source scatters off a rough surface. Animation from How are holograms possible? by 3Blue1Brown on YouTube.

Fresnel fringes and ringing artifacts

In the fractional Fourier transform demonstration earlier, did you notice the distinctive ringing artifacts around the edges of objects? These are known as Fresnel fringes. Characteristically, the spacing of these fringes decreases with distance from the edge, such that the fringes become progressively finer. This pattern arises from the quadratic phase terms underlying the Fresnel diffraction formalism, which also underpins optical implementations of the fractional Fourier transform. (Wikipedia contributors 2024b)

Fresnel fringes seen in a transmission electron microscope image. (JEOL Ltd. 2012)
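
To see this signature numerically: propagating a plane wave past a knife edge with the Fresnel transfer function reproduces fringes whose spacing shrinks with distance from the geometric edge. A minimal one-dimensional sketch – the physical scales here are arbitrary choices of mine, not from any particular source:

```python
import numpy as np

# One-dimensional knife edge illuminated by a unit plane wave.
n, width = 4096, 20e-3                  # samples, window width in metres
x = np.linspace(-width / 2, width / 2, n, endpoint=False)
field = (x > 0).astype(complex)         # opaque for x < 0, open for x > 0

# Fresnel propagation via its transfer function:
# H(f) = exp(i k z) * exp(-i * pi * lambda * z * f^2)
wavelength, z = 500e-9, 0.1             # 500 nm light, 0.1 m propagation
f = np.fft.fftfreq(n, d=width / n)      # spatial frequencies (cycles per metre)
H = np.exp(1j * 2 * np.pi * z / wavelength) * np.exp(-1j * np.pi * wavelength * z * f ** 2)
field_z = np.fft.ifft(np.fft.fft(field) * H)

intensity = np.abs(field_z) ** 2
# The maxima of `intensity` for x > 0 crowd closer together as x grows --
# the Fresnel fringe spacing shrinks with distance from the geometric edge.
```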

These show up in any wave-based medium in which Fresnel diffraction applies – not just in optics, but also in acoustic and quantum wave systems. Ringing artifacts also show up in my visual field when I’m on a moderate dose of a serotonergic psychedelic – for instance one or two grams of psilocybin mushrooms. You might also call these auras or haloes.

The two-dimensional fractional Fourier transform for a star. Note the ringing artifacts close to the edges of the star just after the transform begins.

I’ve spent some time looking at ringing artifacts while on psychedelics, and I have found that – at least at low doses – some psychedelics generate coarser or finer ringing artifacts than others. In order from coarse to fine, 2C-B, LSD and psilocybin generate progressively tighter ringing artifacts.

The Qualia Research Institute has published a psychophysics experiment known as the tracer tool which can be used to find the frequency of the strobing afterimages left by psychedelics. Curiously, the order of increasing strobe frequency matches the order of increasing ringing artifact sharpness – there’s a tight correlation between temporal and spatial frequencies. I believe that 2C-B, LSD, and psilocybin tend to generate 12 Hz, 15 Hz, and 19 Hz strobing afterimages, respectively. It’s enough to make me wonder if the strobe effects are just temporal ringing artifacts. This would be an alternate model to the control interrupt theory of psychedelic action.

The key aspect is whether or not the fringes become finer with distance from the edge. If this is the case, then I believe this would be a strong argument that dynamics analogous to Fresnel optics are somehow playing out within the brain – which would mean that the brain may provide a viable substrate for analog computations like the fractional Fourier transform. I’ve shown these animations to some friends who have confirmed that the ringing artifacts that they see while on psychedelics resemble the ones in these renderings – and that the fringes do indeed become finer with distance from the edge.

These ringing artifacts might also be due to the Gibbs phenomenon, which arises when a Fourier series is truncated – i.e., the higher harmonics are chopped off – as Steven Lehar has suggested elsewhere could be the origin of psychedelic visuals. However, I suspect that low-pass filtering alone is insufficient to explain the other characteristically optical or psychedelic effects that we observe.

For comparison, two types of ringing artifacts are shown. The top row illustrates Fresnel fringes of fractional orders 1/8, 1/4 and 3/8, while the bottom row depicts the Gibbs phenomenon, obtained by truncating the Fourier transform of the image to within 64 px, 48 px, and 32 px of the center. I recommend clicking to view the full size image, as the details are quite subtle.
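
For the Gibbs side of that comparison, the recipe is just a hard cutoff in the frequency domain. A rough sketch of the general idea – not the exact code used to render the figure, and I’m assuming a square cutoff window:

```python
import numpy as np

def gibbs_ringing(image, radius):
    """Truncate an image's 2D spectrum to a square of the given radius
    around the DC component, producing Gibbs-style ringing at edges."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    mask = np.zeros_like(spectrum)
    cy, cx = h // 2, w // 2
    mask[cy - radius:cy + radius, cx - radius:cx + radius] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

# Unlike Fresnel fringes, the ringing period here is set by the cutoff
# radius and stays roughly constant with distance from each edge.
```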

Personally, it’s been some time since I last took a decent dose of psychedelics – and the last time I did so, I wasn’t keeping an eye on the ringing artifacts. I’ll have to set some time aside to take a couple grams of mushrooms and check for myself soon whether they resemble Fresnel fringes. If I find this is the case, then I should be motivated to construct some kind of questionnaire or psychophysics tool which could be used to verify whether other people also see the fingerprint of Fresnel diffraction in their subjective experience.

How might the fractional Fourier transform show up in the brain?

There’s a baseline assumption which so far I’ve neglected to address, which is that representations in the mind are structurally similar to the objects in the world which they represent – sufficiently so at least that a transformation of some kind on one can approximate a transformation on the other.

There have been a number of recent successes in reading video and speech representations from the brain, but generally this involves an intermediary decoding layer where machine learning techniques are used to interpret neuroimaging signals. I might be opinionated here when I claim that these results are simultaneously impressive and unsatisfying. Perhaps interpretability researchers will sympathise when I say that the assumption that neural representations will be forever inscrutable – requiring, essentially, a neural network to read a neural network – is overly pessimistic, and that our failure to uncover the true representations so far owes more to insufficient neuroimaging fidelity than to messy, illegible complexity at every layer of the stack. Why should illegibility be the default assumption, when there are computational benefits to well-structured representations?

From the paper, Movie reconstruction from mouse visual cortex activity (Bauer et al., 2024) (Bauer et al. 2024)

Travelling waves and spatiotemporal dynamics

A recent paper from the Kempner Institute makes the case that the spatiotemporal dynamics of traveling waves could provide a reasonable basis for neural representations, as these dynamics can be recruited to encode the symmetries of the world as conserved quantities. From A Spacetime Perspective on Dynamical Computation in Neural Information Processing Systems (Keller et al., 2024): (Keller et al. 2024)

There is now substantial evidence for traveling waves and other structured spatiotemporal recurrent neural dynamics in cortical structures; but these observations have typically been difficult to reconcile with notions of topographically organized selectivity and feedforward receptive fields. We introduce a new ‘spacetime’ perspective on neural computation in which structured selectivity and dynamics are not contradictory but instead are complementary. We show that spatiotemporal dynamics may be a mechanism by which natural neural systems encode approximate visual, temporal, and abstract symmetries of the world as conserved quantities, thereby enabling improved generalization and long-term working memory.

Keller argues that rather than simply learning features that are invariant to transformations, the brain may be explicitly learning the symmetries themselves:

To begin to understand the relationship between spatiotemporally structured dynamics and symmetries in neural representations, it is helpful to take a step back and understand more generally what makes a ‘good’ representation. Consider a natural image – a full megapixel array representing an image is very high dimensional, but the parts of the image that need to be extracted are generated by a much lower dimensional process. For example, imagine a puppet that is controlled by nine strings (one to each leg, one to each hand, one to each shoulder, one to each ear for head movements, and one to the base of the spine for bowing); the state of the puppet could be transmitted to another location by the time course of nine parameters, which could be reconstructed in another puppet. It is therefore more efficient to represent the world in terms of these lower-dimensional factors in order to be able to reduce the correlated structural redundancies in the very high-dimensional data.

At a high level, one can understand the goal of ‘learning’ with deep neural networks as attempting to construct these useful factors that are abstractions of the high-resolution degrees of freedom in sensory inputs, exactly like inferring the control strings of the universal puppet master. This is similar to the way that the renormalization group procedure in physics compresses irrelevant degrees of freedom by coarse graining to reveal new physical regularities at larger spatiotemporal scales, and is a model for how new laws emerge at different spatial and temporal scales. As agents in the complex natural world, we seek to represent our surroundings in terms of useful abstract concepts that can help us predict and manipulate the world to enhance our survival.

One early idealized view of how the brain might be computing such representations is by learning features that are invariant to a variety of natural transformations. This view was motivated by the clear ability of humans to rapidly recognize objects despite their diverse appearance at the pixel level while undergoing a variety of natural geometric transformations, and further by the early findings of Gross et al. (1973) that individual neurons in higher levels of the visual hierarchy responded selectively to specific objects irrespective of their position, size, and orientation.

Another Kempner Institute preprint puts these ideas into practice, demonstrating that travelling waves can be used to discover spectral signatures which are useful for shape classification. From Traveling Waves Integrate Spatial Information Through Time (Jacobs et al., 2025): (Jacobs et al. 2025)

Traveling waves of neural activity are widely observed in the brain, but their precise computational function remains unclear. One prominent hypothesis is that they enable the transfer and integration of spatial information across neural populations. However, few computational models have explored how traveling waves might be harnessed to perform such integrative processing.

Drawing inspiration from the famous “Can one hear the shape of a drum?” problem – which highlights how normal modes of wave dynamics encode geometric information – we investigate whether similar principles can be leveraged in artificial neural networks. Specifically, we introduce convolutional recurrent neural networks that learn to produce traveling waves in their hidden states in response to visual stimuli, enabling spatial integration. By then treating these wave-like activation sequences as visual representations themselves, we obtain a powerful representational space that outperforms local feed-forward networks on tasks requiring global spatial context.

As the travelling waves dissipate, each neuron has the opportunity to accumulate spectral information about the shape of the structure it is contained within. Perhaps an animation will be illustrative – from the authors’ GitHub page:

Figure 3: Waves propagate differently inside and outside shapes, integrating global shape information to the interior. Sequence of hidden states of an oscillator model trained to classify pixels of polygon images based on the number of sides using only local encoders and recurrent connections. We see the model has learned to use differing natural frequencies inside and outside the shape to induce soft boundaries, causing reflection, thereby yielding different internal dynamics based on shape.

In this manner, every part may accumulate a representation of the whole, and the accumulated spectral qualities can be used to infer what shape the part is contained within:

Figure 4: Wave-based models learn to separate distinct shapes in frequency space. (Left) Plot of predicted semantic segmentation and a select set of frequency bins for each pixel of a given test image. (Right) The full frequency spectrum for the dataset. We see that different shapes have qualitatively different frequency spectra, allowing for >99% pixel-wise classification accuracy on a test set.

I recommend checking out Mozes Jacobs’ video presentation, as he has even generated sounds representative of each shape being classified. Incredibly cool:

This is extremely speculative, but I feel that there’s something analogous to a wave based implementation of the fractional Fourier transform going on here – only, what’s being calculated is not so much the spectral qualities of the signal as the spectral qualities of the resonant cavity it is contained within. I currently lack the mathematical background necessary to explore a reconciliation between these two ideas.

I also could not help but be reminded of Steven Lehar’s work extending Stephen Grossberg’s filling-in model of the visual system, in which he proposed that the visual cortex reifies shapes using a two-phase nonlinear optical setup he describes as a reverse grassfire process. From my earlier post, An introduction to Steven Lehar, part III: Flame fronts and shock scaffolds:

Lehar’s reverse grassfire process for a series of shapes. These animations are just for illustrative purposes and are not actual wave simulations. Created by Scry Visuals using signed distance functions.

I am also reminded of the work done by Bijan Fakhri for the Qualia Research Institute, exploring how subjective experience may be one and the same as electromagnetic waves reflecting around inside a variable-permittivity closed cavity. From his writeup, The Electrostatic Brain: How a Web of Neurons Generates the World-Simulation that is You:

Simulation of low and high frequency electromagnetic waves in a variable-permittivity medium, created by Bijan Fakhri for qri.org.

I really do think all of these efforts are pointing in roughly the same direction – and that the brain is performing something akin to spectral analysis on incoming sensations using Fresnel diffraction within a closed, variable-permittivity cavity – and if you squint a bit, this process somewhat resembles a fractional Fourier transform.

Entorhinal cortex and visual cortex

I did also search for other literature presenting evidence for the Fourier transform in neural processes. Orchard et al. (2013) propose that the grid cells in the entorhinal cortex and place cells in the hippocampus reconstruct a spatial map of an animal’s environment using an inverse Fourier transform – but most fascinating to me was a very old electrode study on cats and macaque monkeys proposing that neurons in the visual cortex actually respond to the spectral components of visual stimuli. As always, I have a soft spot for weird old neuroscience papers, so I include it here. From Responses of striate cortex cells to grating and checkerboard patterns (De Valois et al., 1979), various visual stimuli and their two-dimensional Fourier spectra:

Fig. 1. Stimulus patterns and their two-dimensional Fourier spectra. In the left column are photographs of the oscilloscope displays of the various stimuli. The right column depicts the two-dimensional spectra (out to the fifth harmonic) corresponding to each of the patterns on the left. Frequency is represented on the radial dimension, orientation on the angular dimension, and the areas of the filled circles represent the magnitudes of the Fourier components. A, a square-wave grating; B, a 1/1 (check height/check width) checkerboard; C, a 0.5/1 checkerboard; D, a plaid pattern. The Fourier spectra of the various patterns are discussed in some detail in the text.

Note how the harmonics for the square wave grating are arranged at 0° to the origin, whereas the harmonics for the checkerboard are arranged at 45° to the origin. The authors propose that if orientation selective neurons actually respond to a spectral basis rather than orientation alone, then we would expect neurons which fire when presented with a grating rotated by 0° to also fire when presented with a checkerboard rotated by 45°. Well, that is exactly what they found:

Fig. 3. Orientation tuning of cortical cells to a grating and a 1/1 checkerboard. Angles 0-180° represent movement down and/or to the right; angles 180-360° represent movement up and/or to the left. The orientations plotted for both gratings (■—■) and checkerboards (▲⋯▲) are the orientations of the edges. Note that the optimum orientations for the checkerboard are shifted 45° from those for the grating. Panel A shows responses recorded from a simple cell of a cat. Panel B illustrates responses of a monkey complex cell.

The consensus in modern neuroscience is that neurons in the visual cortex have Gabor wavelet receptive fields, rather than responding to global spectral components. A Gabor function is constructed by multiplying a Gaussian function by a sine wave, and represents the ideal trade-off between spatial and spectral domain uncertainty. In other words, Gabor wavelet receptive fields are already performing a kind of localised Fourier analysis, extracting information about both the spatial and frequency domain qualities of visual features simultaneously.

An ensemble of odd and even Gabor filters. From Image Representation Using 2D Gabor Wavelets (Lee, 1996)
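
A Gabor receptive field is easy to write down: a Gaussian envelope multiplying a sinusoidal carrier. A minimal sketch – the parameter names are mine, not from the paper, and I use an isotropic envelope for simplicity:

```python
import numpy as np

def gabor(size, wavelength, theta, sigma, phase=0.0):
    """Return a size x size Gabor filter: a 2D Gaussian envelope
    multiplied by a sinusoid of the given wavelength and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)          # axis of the carrier wave
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_rot / wavelength + phase)
    return envelope * carrier

even = gabor(size=31, wavelength=8.0, theta=np.pi / 4, sigma=5.0)                   # even (cosine) filter
odd = gabor(size=31, wavelength=8.0, theta=np.pi / 4, sigma=5.0, phase=np.pi / 2)   # odd (sine) filter
```

Convolving an image with a bank of such filters across orientations and scales amounts to a local, windowed spectral analysis.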

The important difference between the Fourier transform model and the Gabor receptive field model is as follows: in the Fourier model, neurons would respond to frequency components regardless of where they appear in the visual field, whereas in the Gabor model, neurons would respond to frequency components at a specific location. The grating and checkerboard experiments described above suggest that the visual system may track the global frequency decomposition of a scene, not just local frequency patches. I’m not sure how to reconcile this with mainstream understanding, or why the findings of this paper are not more widely known. Did nobody ever repeat this experiment – do all modern studies use gratings alone?

I wonder if we could tell these two models apart using psychophysics alone. The McCollough effect is a visual phenomenon whereby a coloured afterglow comes to be associated with different oriented grating patterns. I won’t link the induction images, as the effects are notorious for persisting for months after they have been viewed. I wonder if a similar association could be established between a coloured grating induction image at 0° and a colourless checkerboard test image at 45° – and if so, would this constitute evidence towards the visual cortex handling spectral representations?

If something like the Fourier transform model does turn out to be the case, I’d be inclined to wonder if other sensory cortices may be performing analogous transforms on incoming sensory information – and maybe the entire brain is composed of a complex hierarchy of such structures? The auditory cortex is the obvious candidate for analysis – but perhaps this could also explain the mysterious gaps found in our representation of the somatosensory homunculus?

What is the computational utility of the fractional Fourier transform?

Alright, so what is all this for? As I have written previously, I think a big part of what consciousness is doing is performing a massively parallel analogue self-convolution on the contents of awareness, scouring sensory information for any patterns it can find in the pursuit of reifying a parsimonious world model in which every part still has an influence on the whole. To understand why this would be expensive to implement, consider what convolution actually does. For every point in a signal, you have to compare it with every other point – which adds up to O(n²) operations in total. For a high resolution visual field with millions of “points”, this quickly becomes intractable.

The convolution theorem tells us that the convolution of two time or spatial domain signals is the same as their product in the frequency domain. So, we first take the Fourier transform of our signal, multiply the frequency components together, and then take the inverse Fourier transform to convert it back. The multiplication operation only requires O(n) operations, but the Fourier transforms require O(n log n). This is a sizeable improvement, but it’s still computationally demanding, and evolution would still need to find some way to hard-code the point-to-point wiring required to implement the Fourier transform into our genomes. This is why the fractional Fourier transform is so appealing – it can be implemented using simple wave dynamics without requiring any hard-wired connections. The waves just do their thing, and the Fourier transform arises naturally from the physics.
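
The convolution theorem itself is easy to verify numerically – circular convolution computed directly, term by term, matches the transform-multiply-transform route (a toy check, nothing more):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(512)
k = rng.standard_normal(512)

# Direct circular convolution: O(n^2) pairwise multiply-accumulates.
direct = np.array([np.sum(x * np.roll(k[::-1], shift + 1)) for shift in range(512)])

# Convolution theorem: transform, multiply pointwise, transform back -- O(n log n).
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

print(np.allclose(direct, via_fft))  # True
```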

Additionally, if reports from the meditators I know are to be believed, upon investigation it appears that consciousness refreshes itself at a rate of around 40 Hz. This isn’t just idle speculation – experienced meditators across multiple contemplative traditions who have cultivated high sensory clarity often report being able to perceive the discrete, flickering nature of conscious experience arising and passing away, often at a rate of roughly forty times per second.

For a thorough dissection of what a consciousness refresh cycle looks like, I highly recommend Kenneth Shinozuka’s writeup, Shinzen Young’s 10-Step Model for Experiencing the Eternal Now.

Is one consciousness frame equivalent to one cycle of the fractional Fourier transform? If this is the case, it would suggest that each moment or “frame” of experience corresponds to a complete rotation through the time-frequency plane and back again. How might we falsify such a hypothesis? Might certain drugs alter the refresh rate in observable or measurable ways?

If this turns out to be true, and if one is willing to put an estimate on the size of the contents of awareness – then we can start putting a lower bound on the number of operations required in order to process the contents of awareness digitally. This said – does it even make sense to talk of computational complexity when discussing analogue computers? In any case, I believe that the fractional Fourier transform offers a plausible mechanism by which this computationally expensive process can be implemented using wave dynamics.

Joseph Fourier, as seen rotated through the frequency plane.

Open questions

As usual, I have a handful of excess tangents which don’t quite fit into the body of the post, but which I’d still like to explore briefly before wrapping up entirely. Here goes:

How would you use the fractional Fourier transform to implement higher order pattern recognition?

Let’s use a musical example. Imagine I take the Fourier transform of a single sine wave – the note A4, which would cause a single spike in the frequency spectrum at 440 Hz. This is all fine if we just want to recognise individual notes, but what if we’d like to recognise intervals on top of that? If I then play an E5, that would cause a spike at around 660 Hz, a perfect fifth above the other note – a frequency ratio of 3:2. If we’d like to be able to recognise perfect fifths at any frequency, we’re going to need a secondary transform or convolution operation on top of this one.
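
One conventional trick for this kind of ratio-invariant recognition – not necessarily the mechanism I have in mind here, just a useful reference point – is to resample the spectrum onto a logarithmic frequency axis, where a fixed ratio like 3:2 becomes a fixed shift that a second correlation or Fourier step can pick out. A sketch, with all the numbers being arbitrary choices:

```python
import numpy as np

fs, dur = 44100, 1.0
t = np.arange(int(fs * dur)) / fs
signal = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t)   # A4 plus the fifth above

# Magnitude spectrum, then resample onto a logarithmic frequency axis.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
log_f = np.linspace(np.log2(55.0), np.log2(7040.0), 2048)     # seven octaves
log_spectrum = np.interp(log_f, np.log2(freqs[1:]), spectrum[1:])

# On a log axis, a 3:2 ratio is a constant shift of log2(1.5) octaves,
# wherever the pair of notes sits in absolute frequency.
bins_per_octave = 2048 / 7
shift = int(round(np.log2(1.5) * bins_per_octave))
fifth_score = np.sum(log_spectrum * np.roll(log_spectrum, -shift))
print(fifth_score)   # large whenever a perfect fifth is present, at any pitch
```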

I have a weird intuition that in addition to using a hierarchy of such optical elements, either the intermediary orders in the fractional Fourier transform or nonlinear optical effects can be recruited to perform the desired recursive pattern recognition. I still need to explore this idea fully – but helpfully, it seems that such Indra’s net effects get stronger when under the influence of psychedelics, making this amenable to phenomenological study.

If consciousness is implementing the fractional Fourier transform, why do we only experience the initial spatial order, and not the other fractional frequency orders?

Perhaps we normally do only experience the spatial order (a = 0), but it’s possible to dial attention into other stages, or we simply don’t remember them. Perhaps it is when this happens that we experience the weird ringing artifacts – or other, much stranger effects.

As for the upside-down spatial order (a = 2), perhaps it’s impossible to tell the difference? Or, as Steven Lehar proposed, nonlinear optical effects could also be used to create phase conjugate mirrors, which would bounce the waves right back to where they came from. This is speculative, but it would mean that we would only ever experience fractional orders in the range 0 to 1.

I’m also not sure whether to expect that we would experience this process cycling – rather, due to the path integral nature of optics, perhaps we’d experience the entire process all at once.

How does nonlinear optics fit into this picture?

For energy to leave such a wave system or for it to have memory, you effectively need some kind of nonlinear effect – the waves need to have some kind of persistent effect on their substrate, for instance modifying the properties of an underlying neuron where they form a high-amplitude cusp.

Nonlinear optical effects can get quite weird, however – this is a topic that is extraordinarily broad and dense. Consider the Kerr self-focusing effect – related to the third-order nonlinear susceptibility, χ(3) – where, if the amplitude of the radiation increases above a certain value, it changes the refractive index of the medium, focusing more radiation into the same location in a feedback loop. I believe this can be a real problem in high-power fiber optics.

I wanted to simulate what happens as χ(3) is slowly increased, as I suspect that this is what 5-HT2A receptor agonists do. The results very quickly went haywire: (Zhang et al. 2024)

The effect of slowly increasing χ(3) for a Gaussian masked coherent light source propagating from left to right. The complex magnitude is displayed at the top and the full complex value at the bottom. This simulation may have issues with precision or numerical instabilities or simply be entirely incorrect – I’m not quite sure yet.
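
This isn’t the code behind the animation above, but for anyone who wants to experiment with something in the same spirit, here is a minimal first-order split-step Fourier sketch of a beam propagating through a Kerr (χ(3)) medium, in arbitrary units of my own choosing:

```python
import numpy as np

# Transverse grid and an initial Gaussian beam profile (arbitrary units).
n, width = 512, 40.0
x = np.linspace(-width / 2, width / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=width / n)
field = np.exp(-x ** 2).astype(complex)

chi3, dz, steps = 2.0, 0.02, 500                   # try varying chi3 to see self-focusing kick in
diffraction = np.exp(-1j * (k ** 2) * dz / 2.0)    # paraxial diffraction over one step

snapshots = []
for _ in range(steps):
    # First-order split step: linear diffraction handled as a phase factor in k-space...
    field = np.fft.ifft(np.fft.fft(field) * diffraction)
    # ...then the Kerr nonlinearity: extra phase proportional to local intensity.
    field *= np.exp(1j * chi3 * np.abs(field) ** 2 * dz)
    snapshots.append(np.abs(field) ** 2)

carpet = np.array(snapshots)   # intensity as a function of propagation distance and x
```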

Weston Beecroft has published a great nonlinear optical sandbox which can be fun to play around with. Over the coming months, I’d like to spend more time wrapping my head around what is possible using nonlinear optics. (Beecroft 2024)

Consciousness, Psychedelics, Wave Computing, Phenomenology, Neuroscience, Physics, Mathematics

References

Bauer, Nicolas et al. 2024. “Movie Reconstruction from Mouse Visual Cortex Activity.” bioRxiv. https://doi.org/10.1101/2024.06.19.599691.
Beecroft, Weston. 2024. “Nonlinear-Optics-Sandbox.” 2024. https://github.com/westoncb/nonlinear-optics-sandbox.
Eriksson, Gustav, Per Adiels, Daniel Nilsson, and Gunnar Björk. 2025. “Talbot Interference of Whispering Gallery Modes.” APL Photonics 10 (1): 010804. https://doi.org/10.1063/5.0226220.
Flipper, Cube. 2025. “Fractional Fourier Transform Animation (Lena).” 2025. https://www.qri.org/images/distill/fractional-fourier-transforms/frft_lena_1024.mp4.
Jacobs, Mozes et al. 2025. “Traveling Waves Integrate Spatial Information Through Time.” arXiv. https://arxiv.org/abs/2502.06034.
JEOL Ltd. 2012. “Fresnel Fringe.” 2012. https://www.jeol.com/words/emterms/20121023.093457.php.
Keller, Benjamin, Vincent Moens, Edwin Park, et al. 2024. “A Spacetime Perspective on Dynamical Computation in Neural Information Processing Systems.” arXiv. https://arxiv.org/abs/2409.13669.
Neutelings, Izaak. 2019. “Fourier Series.” 2019. https://tikz.net/fourier_series/.
Sanderson, Grant. 2018. “But What Is the Fourier Transform? A Visual Introduction.” 2018. https://www.youtube.com/watch?v=spUNpyF58BY.
Swanson, Jez. 2019. “An Interactive Introduction to Fourier Transforms.” 2019. https://www.jezzamon.com/fourier/.
Wikipedia contributors. 2024a. “Fourier Transform.” 2024. https://en.wikipedia.org/wiki/Fourier_transform.
———. 2024b. “Fresnel Diffraction.” 2024. https://en.wikipedia.org/wiki/Fresnel_diffraction.
———. 2024c. “Linear Canonical Transformation.” 2024. https://en.wikipedia.org/wiki/Linear_canonical_transformation.
———. 2024d. “Talbot Effect.” 2024. https://en.wikipedia.org/wiki/Talbot_effect.
Zhang, Huajian et al. 2024. “Psychedelics Rapidly Reshape Cortical Visual Perception Through 5-HT2A Receptor Signaling.” Science Advances 10 (7): eadj6102. https://doi.org/10.1126/sciadv.adj6102.


Citation

For attribution, please cite this work as

Flipper & Gómez-Emilsson (2025, Oct. 11). Here time turns into space: Does consciousness implement the fractional Fourier transform? Retrieved from https://www.qri.org/blog/fractional-fourier-transforms

BibTeX citation

@misc{flipper2025here,
  author = {Flipper, Cube and Gómez-Emilsson, Andrés},
  title = {Here time turns into space: Does consciousness implement the fractional Fourier transform?},
  url = {https://www.qri.org/blog/fractional-fourier-transforms},
  year = {2025}
}