Go with the FLOW: visualizing spatiotemporal dynamics in optical widefield calcium imaging

Widefield calcium imaging has recently emerged as a powerful experimental technique to record coordinated large-scale brain activity. These measurements present a unique opportunity to characterize spatiotemporally coherent structures that underlie neural activity across many regions of the brain. In this work, we leverage analytic techniques from fluid dynamics to develop a visualization framework that highlights features of flow across the cortex, mapping wavefronts that may be correlated with behavioural events. First, we transform the time series of widefield calcium images into time-varying vector fields using optic flow. Next, we extract concise diagrams summarizing the dynamics, which we refer to as FLOW (flow lines in optical widefield imaging) portraits. These FLOW portraits provide an intuitive map of dynamic calcium activity, including regions of initiation and termination, as well as the direction and extent of activity spread. To extract these structures, we use the finite-time Lyapunov exponent technique developed to analyse time-varying manifolds in unsteady fluids. Importantly, our approach captures coherent structures that are poorly represented by traditional modal decomposition techniques. We demonstrate the application of FLOW portraits on three simple synthetic datasets and two widefield calcium imaging datasets, including cortical waves in the developing mouse and spontaneous cortical activity in an adult mouse.


Introduction
Coordinated organization of neural activity among brain regions is believed to serve many crucial roles, including performing specific computations in the cortex [1][2][3] and supporting brain development [4][5][6]; further, its disruption may lead to neurological disease [7][8][9]. One prominent characteristic of neural activity at the scale of brain regions is the rapid and coherent propagation of activity across cortex, which has been widely observed in a variety of contexts, including spontaneous activity, task engagement, sleep, and development [10][11][12]. Qualitatively similar patterns of neural activity propagation have also been observed in the retina during development, often referred to as retinal waves [13][14][15][16]. Although such spatiotemporal dynamic features are often visually salient, it remains challenging to quantify and succinctly summarize their behavior directly from neural recordings.
Widefield optical imaging of calcium activity provides a unique opportunity to study coordinated spatiotemporal neural activity among brain areas, because this experimental approach achieves large fields-of-view with high temporal and spatial resolution [17,18]. In general, widefield imaging experiments involve fluorescence imaging of the entire brain surface of animals that express optical indicator proteins in known populations of neurons [19][20][21][22][23][24]. Many experiments use genetically encoded calcium indicators from the GCaMP family to image neural calcium dynamics, a proxy for electrical neuronal activity [25][26][27][28]; more generally, the visualization methods we discuss here can be applied to any widefield optical imaging experiment, such as imaging with voltage-sensitive dyes [29,30]. Cortical activity has been measured using widefield calcium imaging in a variety of experiments, notably to study perceptual decision making [1,3,31-35], to extract cortical functional connectivity [8,36,37], to characterize cortical activity that organizes brain development [38], and to study the effects of disease in the cortex [7][8][9]. In all of these data, it is typical to observe multiple regions activating transiently or in regular succession, with distinct initiation sites and wave-like flows across the field of view. These features can often be described as flows of activity with coherent traveling fronts; interestingly, all of these patterns are well studied as nonlinear features of spatiotemporal dynamical systems [39].
The most widely applied approaches to analyze time-varying recordings of high-dimensional neural activity are dimensionality reduction techniques, which extract modes that correspond to dominant, low-dimensional features of the high-dimensional data [40][41][42]. These low-dimensional features are useful as representations of the neural activity that facilitate further analysis and modeling. Furthermore, the observation that the dynamics of neuronal populations can be reduced to a relatively small number of features may be a clue about the mechanisms that underlie coordinated neural activity [43][44][45][46]. Common modal decomposition algorithms used in neuroscience [40,47] include the singular value decomposition (SVD), which is closely related to principal component analysis (PCA), independent component analysis (ICA), and nonnegative matrix factorization (NNMF). These techniques all solve for combinations of relatively few modes in space and time that reconstruct an estimate of the original high-dimensional data; their solutions differ by making different assumptions about the statistical structure of the modes.
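The flatten-factor-reshape recipe these methods share can be sketched in a few lines. The following is our illustration, not code from the paper: an SVD-based PCA of a toy image stack, where the data matrix, mode count, and variable names are arbitrary choices.

```python
import numpy as np

# Hypothetical (time x pixels) data matrix: 20 frames of 16x16 pixels,
# flattened so that each row is one frame.
rng = np.random.default_rng(0)
frames = rng.random((20, 16, 16))
X = frames.reshape(20, -1)

# SVD-based modal decomposition (the core of PCA): subtract the mean
# frame, then factor X = U S Vt, where rows of Vt are spatial modes and
# the scaled columns of U are their time courses.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

r = 3                                      # keep the top r modes
spatial_modes = Vt[:r].reshape(r, 16, 16)  # static spatial patterns
temporal_modes = U[:, :r] * S[:r]          # (20, r) time courses

# Rank-r reconstruction: an estimate of the data from only a few modes.
X_hat = temporal_modes @ Vt[:r] + X.mean(axis=0)
```

ICA and NNMF follow the same pattern with different constraints on the factors (statistical independence and nonnegativity, respectively).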
There are many exciting recent innovations in modal decomposition for analyzing large-scale neural data, some of which are extensions and derivatives of SVD, ICA, and NNMF. Interestingly, while some of these methods have explicit representations of the temporal dynamics (for instance, jPCA [44], dynamic mode decomposition (DMD) [48][49][50], and NNMF with temporal constraints [51][52][53][54]), they largely set out to achieve space/time separation. Applying PCA and NNMF to segments of synthetic and experimental data (Figure 1A) yields a set of spatial modes (Figure 1B; temporal modes not shown) that provide a representation of the activity. However, these representations are static modes and may not adequately summarize spatiotemporal data. As an illustrative example, the synthetic data in Figure 1 is a spatial Gaussian that grows, translates, then shrinks with time, and such spatiotemporal coherent features are poorly captured by PCA and NNMF decompositions of the data.
A complementary set of methods has been developed to describe spatiotemporal patterns in widefield neural activity by explicitly extracting propagating waves. Traveling waves are often characterized by their propagation speed and direction (see [55] and [56] for examples), and these measures are then aggregated over all of the waves observed in a recording to quantify trends in the wave dynamics. While this information has proven useful in studying the roles of waves, the approach is limited because waves need to be identified individually. Several related methods have used the computation of optical flow to convert widefield activity to time-varying vector fields [57,58]. This velocity field can then be analyzed using tools from vector calculus to identify fixed points, including classifying each fixed point as a source or a sink of activity. Nevertheless, these methods are constrained to identify only fixed points and cannot identify multiple local planar waves.
Our visualization approach is inspired by the similarity of the spatial flows observed in widefield optical imaging to flows of physical fluids. Humans have a deep intuition about fluid flows from everyday experience (e.g., the patterns of milk mixing in coffee, or a river flowing). Representing brain data as a flow allows us to leverage this intuition along with decades of methods from flow analysis and visualization. Propagation of neural activity has many commonalities with, and differences from, physical fluid flows. In both, there exist coherent structures whose boundaries may be invariant even as the activity changes with time. In fluid physics, these invariant manifolds are known as Lagrangian coherent structures (LCS) [59][60][61], which act as transport barriers in the flow, either repelling or attracting material. LCS are often visualized by computing ridges in the finite-time Lyapunov exponent (FTLE) field [62][63][64][65], although there are other computational approaches based on variational theory [66]. Noteworthy biological applications include the use of LCS to study the physics of jellyfish feeding [67] and to understand cardiovascular hemodynamics [68,69]. Unlike physical flows, neural activity is not governed by fundamental conservation laws; nevertheless, these dynamics are well described by time-varying vector fields [57,58,70,71].
In this work, we develop a visualization framework to capture the spatiotemporal dynamics of neural activity by extracting field lines in optical widefield imaging, which we call FLOW (flow lines in optical widefield imaging) portraits. FLOW portraits are generated by treating the frame-by-frame dynamics as time-varying optical flow vector fields, from which we compute and integrate the ridges of the FTLE field. To validate and develop intuition for our approach, we show that FLOW portraits give accurate and interpretable visual summaries of simple synthetic datasets. Next, we apply our methods to analyze bouts of activity from two widefield calcium imaging datasets in mice, both of which exhibit spontaneous, widespread activity across cortex. The first dataset comprises recordings of spontaneous cortical activity of GCaMP6s-expressing mouse pups during their first 8 postnatal days [38]. The second example is a recording of spontaneous cortical activity in a GCaMP6s-expressing adult mouse [35]. In both examples, we demonstrate that FLOW portraits extract meaningful and interpretable outlines of the dominant patterns in the cortical activity that contribute to our understanding of the animals' developmental and behavioral states.

FLOW Portraits
This work introduces FLOW (flow lines in optical widefield imaging) portraits, visualizations that provide a concise and intuitive summary of spatiotemporal dynamics, highlighting coherent structures in widefield recordings. Importantly, FLOW portraits differ from modal decomposition techniques in that they do not provide a basis in which to approximate the data and cannot quantitatively explain variance in the recordings. Instead, FLOW portraits explicitly convert the image stack into time-varying vector fields to extract patterns of activity propagation in the data (Figure 1). As our approach leverages and adapts analytic techniques from fluid dynamics [61] that are unfamiliar to most neuroscientists, this section describes how to compute the finite-time Lyapunov exponent (FTLE) from time-varying vector fields. We also build intuition for how the ridges of the FTLE field can be interpreted in the context of widefield calcium imaging, using several simple synthetic examples.
The steps of our approach to compute FLOW portraits are illustrated in Figure 2, Figure 3, and Supplemental Video 1. The input data is a video (i.e., image stack) of the relative change in fluorescence of the imaged optical protein indicator, ∆F/F, as it changes in time over many frames. The raw fluorescence may drift over the course of an experiment, so ∆F/F, which normalizes the change in fluorescence against a moving-window baseline, is considered a robust proxy for the magnitude of neural activation [72]. FLOW portraits are well suited to summarize data where optical activity is seen to diffuse or flow across the field of view, with varied patterns throughout the recording. To characterize the propagation of recorded neural activity across brain areas through space, we first compute the flow vector field using optic flow. Next, the FTLE is computed from the time-varying vector field using the standard integration method as outlined by Onu et al. [73]. Last, the FTLE field is post-processed to visualize ridge-like features that highlight the coherent features of a spatial flow [61,62]. Note that we refer to the processed FTLE ridges as FLOW portraits to avoid conflation with traditional LCS analysis in fluid dynamics [61]. Details of data collection, preprocessing, and computation are described in the Methods (Section 5).
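As a concrete sketch of the ∆F/F input, the following hypothetical Python snippet normalizes a raw stack against a moving-window baseline. The moving-average baseline, window length, and function name are our assumptions; published pipelines often use a moving percentile for the baseline F0 instead.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def delta_f_over_f(stack, window=11, eps=1e-6):
    """dF/F with a moving-window baseline along the time axis (a sketch).

    stack  : array of shape (T, H, W) of raw fluorescence.
    window : length of the moving-average baseline window, in frames.
    A moving mean stands in for the baseline F0 here; many pipelines
    use a moving percentile to be robust to transients.
    """
    f0 = uniform_filter1d(stack.astype(float), size=window, axis=0)
    return (stack - f0) / (f0 + eps)

# Tiny synthetic example: a constant baseline with one bright frame.
raw = np.ones((30, 4, 4))
raw[15] += 1.0                       # transient at frame 15
dff = delta_f_over_f(raw)
```

Quiet frames map to ∆F/F near zero, while the transient frame stands out relative to its local baseline.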

Optical flow of widefield imaging data
We describe the frame-by-frame spread of neural activity as time-varying vector fields, computed by optical flow. Specifically, as regions of high pixel intensity in ∆F/F move and diffuse across the field of view, these coherent motions can be converted into a vector field of velocities, dx/dt and dy/dt, at every pixel in the recording (Figure 2A). We denote this vector field as v(x, t), defined at every point in space x at time t. Motion velocities are commonly estimated from video data in computer vision using optical flow algorithms [74], and the biological visual systems of vertebrates and invertebrates also perceive moving scenes with computations akin to optical flow [75,76]. In addition, some prior work has explored optical flow computations in widefield calcium imaging data [57,58]. Here we use the Horn-Schunck (HS) method [77] because of its simplicity and its observed strong performance on our sample data.
Figure 3A and B show example snapshots of ∆F/F data and the extracted optical flow vector fields. The magnitude and direction of the vector at each pixel are computed by solving for the optimal vector field that describes the change from each frame to the subsequent frame (see schematic in Figure 2A). To minimize the effects of noise and numerical differentiation on the optical flow field, we apply temporal scaling and smoothing to the computed vector fields. Briefly, the magnitude of each optical flow vector is scaled proportionally to the relative change in the raw pixel intensity of the corresponding pixel over a prescribed time delay. This scaling attenuates the magnitudes of vectors that do not represent corresponding changes in the widefield imaging data. To mitigate the effects of pixel noise, we also apply temporal Gaussian smoothing to the scaled vector fields. The scaled and smoothed Horn-Schunck optical flow vector fields are used throughout the rest of the FLOW portrait algorithm wherever velocity data is required. This process of computing the optical flow vector field from widefield imaging data is analogous to extracting the motion vector field from particle image velocimetry (PIV) data [78,79] in experimental fluid dynamics. Both approaches approximate the velocity field from experimental observations of material transported through the studied flow.
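The Horn-Schunck computation itself can be sketched compactly. The following minimal implementation is our illustration of the classic HS iteration; the derivative estimates, smoothness weight, and iteration count are simplified choices, and the temporal scaling and smoothing steps described above are omitted.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=200):
    """Minimal Horn-Schunck optical flow between two frames (a sketch).

    Returns per-pixel velocities (u, v) in pixels/frame satisfying the
    brightness-constancy constraint Ix*u + Iy*v + It ~ 0, regularized by
    a smoothness penalty weighted by alpha.
    """
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    # Spatial derivatives averaged over both frames; temporal difference.
    Ix = 0.5 * (np.gradient(im1, axis=1) + np.gradient(im2, axis=1))
    Iy = 0.5 * (np.gradient(im1, axis=0) + np.gradient(im2, axis=0))
    It = im2 - im1
    # Neighborhood-averaging kernel from the original HS iteration.
    avg = np.array([[1/12, 1/6, 1/12],
                    [1/6,  0.0, 1/6],
                    [1/12, 1/6, 1/12]])
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v

# Example: a Gaussian blob shifted one pixel to the right between frames;
# the recovered flow should point rightward (positive u) around the blob.
yy, xx = np.mgrid[0:32, 0:32]
blob1 = np.exp(-((xx - 15)**2 + (yy - 16)**2) / 8.0)
blob2 = np.exp(-((xx - 16)**2 + (yy - 16)**2) / 8.0)
u, v = horn_schunck(blob1, blob2)
```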

The Finite Time Lyapunov Exponent (FTLE)
Once a flow velocity field, v(x, t), is computed, there are numerous computational approaches that can be used to study and characterize the flow. These include instantaneous metrics from vector calculus, such as the divergence and the curl of the vector field; modal decomposition techniques [80,81], such as POD and DMD; and Lagrangian metrics, such as the FTLE [59,61,62]. Although instantaneous metrics have the potential to extract relevant features from widefield imaging optical flow fields (Supplemental Figure 1; [57]), the unsteady nature of this data suggests that Lagrangian metrics may provide a more useful summary of the activity. Here we compute FTLE fields [62] to extract time-invariant features of flow-like widefield activity.
The FTLE field is a scalar field σ(x, t₀, T) defined at every point in space x and time t₀, with respect to some relevant time-scale of integration, T. The FTLE field is a measure of how much neighboring initial conditions separate when integrated through the velocity field v for a duration T. Thus, regions of high stretching for positive T (forward time) or negative T (backward time) provide time-varying analogs of stable and unstable manifolds, respectively [39,61,62]. The FTLE field is typically approximated numerically from flow field snapshots at discrete instants in time [62,65]. First, the flow map $\Phi_{t_0}^{T}$ is approximated on a discretized set of spatial points, typically the same discretized domain where the velocity field is defined. The flow map $\Phi_{t_0}^{T}$ describes the position of an initial condition $x(t_0)$ after it is integrated along the vector field v for a duration T (Figure 2B) and is defined as
$$\Phi_{t_0}^{T}(x(t_0)) = x(t_0) + \int_{t_0}^{t_0+T} v(x(\tau), \tau)\, d\tau.$$
Next, the flow map Jacobian $D\Phi_{t_0}^{T}$ is approximated via finite-difference derivatives with neighboring points in the flow. In two dimensions, the flow map Jacobian at a point x is:
$$D\Phi_{t_0}^{T}(x) = \begin{bmatrix} \partial \Phi_{x,t_0}^{T}/\partial x & \partial \Phi_{x,t_0}^{T}/\partial y \\ \partial \Phi_{y,t_0}^{T}/\partial x & \partial \Phi_{y,t_0}^{T}/\partial y \end{bmatrix},$$
where $\Phi_{x,t_0}^{T}$ denotes the x component of $\Phi_{t_0}^{T}$, and $\Phi_{y,t_0}^{T}$ denotes the y component. The finite-time Lyapunov exponent σ is finally computed from the largest eigenvalue $\lambda_{\max}$ of the Cauchy-Green deformation tensor $\Delta = (D\Phi_{t_0}^{T})^{\top} D\Phi_{t_0}^{T}$, whose square root is the maximum singular value of the flow map Jacobian:
$$\sigma = \frac{1}{|T|} \ln \sqrt{\lambda_{\max}(\Delta)}.$$
The FTLE value at a point $x_0$ determines the maximum stretching that may occur between $x_0$ and a perturbed location $x_0 + \delta x_0$ after time T:
$$\delta x(t_0 + T) = \Phi_{t_0}^{T}(x_0 + \delta x_0) - \Phi_{t_0}^{T}(x_0),$$
where the amplification of the perturbation is bounded by
$$\| \delta x(t_0 + T) \| \le e^{\sigma |T|}\, \| \delta x_0 \|.$$
The σ term is understood to depend on $x_0$, $t_0$, and T. The FTLE field is quite robust to noisy measurements of the vector field v(x, t) [59], since the computation involves integration in time, which tends to average out noise. This robustness was a major factor in its wide adoption in fluid mechanics, where experimentally acquired velocity fields often contain noise and outliers. The same robustness is appealing for optical widefield imaging.
Figures 2 and 3 illustrate the intuition behind this FTLE computation, and additional implementation details are provided in the Methods. The key insight in the FTLE computation is that virtual particles at every pixel location flow according to the vector field from t₀ to t₀ + T, and these integrated optical flow fields form a flow map (Figure 2B). This flow stretches neighboring virtual particles, so that initially equidistant particles are pulled apart in some directions and pushed together in others (see also Supplemental Movie 1). Relative deformations are described by the Cauchy-Green strain tensor at every pixel, and the FTLE is the time-normalized logarithm of the square root of this tensor's leading eigenvalue. The same procedure is repeated by reversing the ordering of frames to compute flow maps in backward time. The forward and backward FTLE fields computed for each example time snapshot are shown in Figure 3C and D.
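The numerical recipe above can be sketched in a few lines. This is our illustration, using forward-Euler advection of a steady analytic field rather than the interpolated, time-varying optical-flow fields used in the paper; all function and variable names are our own.

```python
import numpy as np

def ftle(u_fn, v_fn, xs, ys, T, n_steps=200):
    """FTLE field for a steady velocity field (a minimal sketch).

    u_fn, v_fn : callables (x, y) -> velocity components.
    xs, ys     : 1D grid coordinates; T < 0 gives the backward-time FTLE.
    Real pipelines would use RK4 integration and interpolated,
    time-varying optical-flow fields; forward Euler keeps this short.
    """
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    px, py = X.copy(), Y.copy()
    dt = T / n_steps
    for _ in range(n_steps):            # advect a particle from every pixel
        px, py = px + dt * u_fn(px, py), py + dt * v_fn(px, py)
    # Flow-map Jacobian via finite differences on the advected grid.
    dxdX = np.gradient(px, xs, axis=0)
    dxdY = np.gradient(px, ys, axis=1)
    dydX = np.gradient(py, xs, axis=0)
    dydY = np.gradient(py, ys, axis=1)
    sigma = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            J = np.array([[dxdX[i, j], dxdY[i, j]],
                          [dydX[i, j], dydY[i, j]]])
            lam_max = np.linalg.eigvalsh(J.T @ J).max()  # Cauchy-Green
            sigma[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
    return sigma

# Sanity check on a linear saddle flow dx/dt = x, dy/dt = -y: the flow
# map Jacobian is diag(e^T, e^-T), so the forward FTLE should be ~1
# everywhere for T = 1.
xs = np.linspace(-1, 1, 21)
ys = np.linspace(-1, 1, 21)
sig = ftle(lambda x, y: x, lambda x, y: -y, xs, ys, T=1.0)
```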
Drawing again on our analogy to physical fluid flows, ridges in the FTLE field correspond to time-varying analogs of invariant manifolds, and they approximate LCS [60,61]. In forward time, these features repel fluid material, similar to a stable manifold in a dynamical system. The opposite is true for backward-time ridges, where material is attracted in forward time, as with the unstable manifold. A similar interpretation can be extended to the FTLE of optical activity flows, where forward-time structures repel activity, while backward-time structures attract activity. However, additional care must be taken when interpreting the intensity of FTLE ridges for brain activity, since the induced velocity field is not divergence-free, as is typically the case when analyzing incompressible fluid systems. When the velocity field is incompressible, the determinant of the flow map Jacobian is equal to one, so the largest eigenvalue is greater than or equal to one. However, for compressible vector fields (as is the case for widefield imaging of neural activity), the divergence is nonzero and the product of the eigenvalues of the flow map Jacobian need not equal one. In this case, we may locally have two positive or two negative Lyapunov exponents. Here we consider only the non-negative Lyapunov exponents, which correspond to repelling ridges in forward time and attracting ridges in backward time (Figure 3C and D).

Ridge extraction for FLOW Portrait visualization
By aggregating the forward and backward FTLE ridges within a window in time, we summarize the coherent structures of propagating activity within that window in a single FLOW portrait. Ridges of an FTLE field have been shown to approximate LCS, and several mathematical definitions have been suggested to extract them from data [60,62,82,83]. We found that existing strategies for ridge extraction, applied to FTLE fields of widefield calcium imaging data, did not adequately extract ridge-like features. Therefore, we developed a post-processing approach to visualize ridges from the forward and backward mean FTLE fields.
Ridges lie along local maxima in a field, so we can approximate their locations by extracting maximal regions and computing the skeleton structure. To compute the dominant features over the entire recording, we first threshold the mean of all non-negative FTLE values (Figure 4A) to isolate local maxima in the field (Figure 4B). Next, we approximate ridges from the local FTLE maxima by performing a morphological skeletonization operation (Figure 4C). Lastly, these ridges are smoothed by applying morphological image processing (Figure 4D). The resulting visualization depicts the average approximate FTLE ridges in a recording window, summarizing the time-invariant patterns of activity. We refer to this visualization as a FLOW portrait because it is designed for the compressible vector fields typical of widefield imaging of calcium activity.
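A sketch of this post-processing chain using standard morphological tools; the percentile threshold, the small closing step, and the toy input are our choices, not the exact published pipeline.

```python
import numpy as np
from skimage.morphology import skeletonize, binary_closing, disk

def flow_ridges(mean_ftle, percentile=90):
    """Approximate FTLE ridges: threshold, smooth, then skeletonize.

    mean_ftle : 2D array of time-averaged non-negative FTLE values.
    Returns a boolean image of one-pixel-wide ridge lines. A sketch of
    the post-processing described above; parameter defaults are ours.
    """
    thresh = np.percentile(mean_ftle, percentile)
    maxima = mean_ftle >= thresh              # isolate maximal regions
    maxima = binary_closing(maxima, disk(2))  # fill small gaps
    return skeletonize(maxima)                # reduce regions to ridges

# Example: a wide vertical stripe reduces to a thin vertical ridge.
field = np.zeros((40, 40))
field[:, 18:23] = 1.0
ridges = flow_ridges(field, percentile=90)
```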
There are two parameters the user must choose: the integration time T for the flow map and the threshold percentile for FTLE values to include in the visualization. The choice of these parameters depends on knowledge of the timescales of relevant coherent activity propagation in each dataset. Larger integration time windows filter out shorter-timescale waves; lower percentile thresholds admit more ridges with less intense coherence, which can also admit more spurious ridges if the data are noisy. Supplementary Figure 4 shows how a range of these parameters yields different FLOW portraits. As a practical matter, we recommend repeating the computation for a range of parameter values so that the visually salient features in a dataset are reflected in the FLOW portraits.

How to interpret a FLOW portrait
To build intuition and illustrate how spatiotemporal patterns are visualized by FLOW portraits, we examine them for several simple synthetic datasets, each capturing a type of coherent activity commonly observed in widefield calcium imaging. The first example is a plane wave that starts in the middle of the field-of-view and travels to the right (Figure 5A). In the corresponding FLOW portrait, the forward-time FTLE structures delineate where the wave originates in the middle of the field-of-view, while the backward-time FTLE structures outline where the wave terminates (Figure 5B). This type of traveling plane wave closely resembles the spread of neural activity observed by widefield imaging (for instance, data from the mouse pup in Figure 5). The second synthetic dataset is a circular wave that initiates in the middle, then grows larger towards the edges (Supplemental Figure 2). Here, the forward-time FTLE structures mark the site of initiation, while the backward-time FTLE structures outline the maximal spatial extent of the circle's spread. Our third synthetic example combines both traveling and growing/shrinking wave fronts. As shown in Figure 1, Supplemental Video 1, and Supplemental Figure 2, the 2D Gaussian dataset includes a Gaussian blob that appears in the field-of-view, grows in diameter, translates to the right, then shrinks. Forward-time FTLE structures capture where the activity originates, including the back edge of the Gaussian as it starts to translate and the outside perimeter of the blob as it shrinks. Similarly, backward-time FTLE structures capture where the activity terminates, including the outside perimeter of the blob as it grows and the center of the shrinking blob.
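For readers who wish to reproduce such tests, a toy version of the third synthetic dataset can be generated as follows; the frame counts, grid size, and blob widths here are arbitrary choices of ours, not the paper's.

```python
import numpy as np

def gaussian_frame(cx, cy, sigma, size=64):
    """One frame containing a 2D Gaussian blob of synthetic activity."""
    yy, xx = np.mgrid[0:size, 0:size]
    return np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2 * sigma**2))

# A blob that grows in place, translates to the right, then shrinks.
frames = []
for s in np.linspace(2, 8, 10):          # grow in place
    frames.append(gaussian_frame(20, 32, s))
for cx in np.linspace(20, 44, 15):       # translate rightward
    frames.append(gaussian_frame(cx, 32, 8))
for s in np.linspace(8, 2, 10):          # shrink in place
    frames.append(gaussian_frame(44, 32, s))
stack = np.stack(frames)                 # (35, 64, 64) image stack
```

Feeding such a stack through the optical flow and FTLE steps above should reproduce the qualitative features described for Figure 1.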
In all of these examples, FLOW portraits are succinct summaries of spatiotemporal coherent activity, highlighting regions of activity initiation and termination, as well as the direction and spatial extent of activity spread. Specifically, activity originates from the forward-time FLOW ridges (orange lines, analogous to stable manifolds) and flows to the backward-time FLOW ridges (purple lines, analogous to unstable manifolds). This visualization captures features of coherent activity not accessible by established methods, including modal decomposition (Figure 1), instantaneous metrics like divergence and curl (Supplemental Figure 1), and source/sink classification of fixed points (Supplemental Figures 2 and 3). The forward- and backward-time FTLE structures carry more information than sources and sinks because they are not constrained to be fixed points; thus, these structures are able to delineate traveling fronts. The intersection of two or more FLOW structures, such as where the orange and purple ridges intersect in Figure 1B, can occur for several reasons. First, intersections of the forward and backward FTLE ridges are reflected as intersections in the FLOW portraits. Points where these FTLE ridges intersect correspond to time-dependent saddle points, as the forward and backward FTLE ridges are time-dependent analogs of the stable and unstable manifolds of the vector field. Second, two different spatiotemporal structures may occur at the same spatial location at different times during the recording, as in the case of the 2D Gaussian synthetic dataset.

Example 1: Pan-cortical waves
Pan-cortical waves are bouts of activity that propagate across large areas of the cortex [5,84-87] and are suggested to play a critical role in cortical development [38]. These events are defined heuristically as activity that propagates to include a large fraction of the imaged cortical surface. In Figure 6A, the gray bars highlight individual cortical wave events, defined as periods when the fraction of active cortex rises above 1/2 before falling back to the baseline (∼ 1/10). To contribute to our understanding of pan-cortical waves in development, we use FLOW portraits to summarize the activity during each wave event, thus facilitating direct comparisons across individual waves and developmental time points.
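This heuristic event definition can be implemented as a simple hysteresis detector on the fraction-active trace. The sketch below uses the 1/2 and ∼1/10 thresholds quoted above; the function name and event bookkeeping are our own.

```python
import numpy as np

def wave_events(active_fraction, high=0.5, low=0.1):
    """Find pan-cortical wave events in a fraction-active-cortex trace.

    An event is flagged when the trace rises above `high` (half the
    imaged cortex active) and ends when it falls back below `low`
    (~baseline). Returns a list of (start, end) frame indices.
    """
    events = []
    start = None
    in_event = False
    for t, f in enumerate(active_fraction):
        if not in_event and f >= high:
            in_event, start = True, t
        elif in_event and f <= low:
            events.append((start, t))
            in_event = False
    return events

# Synthetic trace with two clear wave events.
trace = np.concatenate([np.zeros(5), np.full(5, 0.8), np.zeros(5),
                        np.full(5, 0.6), np.zeros(5)])
events = wave_events(trace)   # -> [(5, 10), (15, 20)]
```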
We construct FLOW portraits to summarize the flow of activity during each pan-cortical wave. Spatial integration of the FTLE fields yields the FTLE intensity (Figure 6A, orange and purple traces), which indicates the relative amount of time-averaged flow throughout the recording. The resulting FLOW portraits for two pan-cortical waves can be seen in Figure 6B, alongside 12 frames of the ∆F/F data from each wave (see also Supplemental Movies 2 and 3). The portraits of every pan-cortical wave are shown in Supplemental Figure 5.
Each FLOW portrait provides a summary of the prominent activity observed during each wave event, highlighting the regions that repel (forward FTLE, orange) and attract (backward FTLE, purple) activity. Indeed, both waves shown in Figure 6B exhibit two stages of propagation, where activity spreads and pauses briefly at sensorimotor cortex (outlined by the purple rings) before spreading towards frontal cortex. This concise visualization allows us to easily compare such qualitative features of wave propagation without having to parse through the raw data manually.

Example 2: Sleep-state cortical activity changes in development
To further investigate the role of spontaneous cortical activity during development, we analyzed optical recordings of spontaneous calcium activity in mouse pups during the first 8 postnatal days of development. We computed FLOW portraits on bouts of spontaneous cortical activity during sleep in 12 animals of ages P1, P2, P3, P5, P7, and P8 (Figure 7). Briefly, the sleep state was determined by binning time points into three categories (sleep, wake, and moving-wake) using the power of the nuchal EMG spectrum [38,88,89]. We chose to focus on sleep-state cortical activity for its proposed developmental roles and observed changes during development [38]. For each animal, we computed FLOW portraits for up to the 10 longest bouts of sleep (fewer portraits were computed for short recordings with fewer than 10 sleep bouts).
Five example FLOW portraits for each animal are shown in Figure 7A, with the complete set in Supplemental Figure 6. This organization allows us to leverage FLOW portraits to examine developmental changes in cortical activity across long recordings from different animals. We observe a qualitative change between the portraits from the early postnatal days (P1-3) and those from the later days (P5-8).
The FLOW portraits from the early days show more diffuse activity, with less consolidated FTLE ridges. After P5, the FLOW portraits show cortical activity during sleep becoming more consolidated and following more defined flow patterns. We quantify this transition to more consolidated FTLE ridges after P5 by defining a ridge count score. Briefly, this metric is computed by counting the total number of disconnected ridges in a FLOW portrait and dividing this count by the total area of the FLOW portrait. We computed ridge count scores for all FLOW portraits shown in Supplemental Figure 6 and summarized the mean over each developmental day (Figure 7B). We found that the ridge count scores for forward FTLE structures, backward FTLE structures, and both combined all decreased between P1 and P5. Further, we found that the mean ridge count score over the early developmental days (P1-P3) was significantly different from that over the later developmental days (P5-P8) (p-values of 8.70 × 10⁻⁴, 7.16 × 10⁻⁵, and 1.62 × 10⁻⁷ for forward, backward, and combined, respectively; paired t-test).
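A minimal implementation of a ridge count score of this kind might look as follows; the choice of 8-connectivity and the reading of "total area" as total ridge area are our assumptions.

```python
import numpy as np
from scipy.ndimage import label

def ridge_count_score(ridge_mask):
    """Ridge count score: disconnected ridges per unit ridge area.

    ridge_mask : boolean image of FLOW-portrait ridge pixels.
    Counts connected components (8-connectivity, our assumption) and
    divides by the total ridge area; fewer, larger ridges give lower
    scores, matching the "more consolidated" interpretation.
    """
    structure = np.ones((3, 3), dtype=int)   # 8-connected neighborhood
    _, n_ridges = label(ridge_mask, structure=structure)
    area = ridge_mask.sum()
    return n_ridges / area if area > 0 else 0.0

# Two separate 5-pixel ridges: score = 2 ridges / 10 pixels = 0.2.
mask = np.zeros((20, 20), dtype=bool)
mask[2, 2:7] = True
mask[10, 10:15] = True
score = ridge_count_score(mask)
```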

Example 3: Cortical activity during spontaneous movement
Lastly, we analyze the FLOW portraits of spontaneous cortical activity in a head-fixed, behaving adult mouse [35]. To investigate how FLOW portraits align with an animal's behavior, we analyze IR videos of spontaneous facial and limb movements alongside cortical calcium activity. A movement score was assigned to each recording time point by taking the total pixel-wise difference between the current and next frames (the forward difference) and normalizing it by the maximum observed difference. During bouts of limb movement or whisking, the movement score approached the maximum score of 1; during periods of rest, it approached the minimum score of 0.
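The movement score described above can be sketched as follows; using the absolute difference and assigning the final frame (which has no forward difference) a score of 0 are implementation choices of ours.

```python
import numpy as np

def movement_score(frames):
    """Per-frame movement score from a behavior video (a sketch).

    Uses the total absolute pixel-wise forward difference between each
    frame and the next, normalized by the maximum observed difference,
    so scores lie in [0, 1]. The last frame has no forward difference
    and is assigned 0 here.
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0)).sum(axis=(1, 2))
    scores = diffs / max(diffs.max(), 1e-12)
    return np.append(scores, 0.0)

# Synthetic video: still, then one sudden change, then still again.
video = np.zeros((6, 8, 8))
video[3:] = 1.0                      # change between frames 2 and 3
scores = movement_score(video)
```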
We chose two bouts of spontaneous movement (gray shading in Figure 8A highlights the two bouts, i and ii) and computed the corresponding FLOW portraits (see also Supplemental Movies 4 and 5). Large variations in the movement score (Figure 8A, blue trace) can be observed throughout these bouts, indicating that the animal is continuously switching between resting and moving states.
We see signatures of these movement behaviors in the calcium activity, when we expect sensorimotor cortical regions to be more active than during periods of rest. Indeed, the FLOW portrait for each activity bout provides a clear summary of calcium activity surrounding the sensorimotor cortex (Figure 8B). During both bouts, a ring-like repelling (forward, orange) field line outlines the sensorimotor region, while attracting (backward, purple) field lines fill in the centers of the rings. We note that these patterns are different from those in our analysis of the pan-cortical waves in Example 1. Specifically, these features suggest that the dominant pattern of cortical calcium activity is a diffusion of activity from the entirety (or outer edges) of the sensorimotor regions towards the center. In other words, our FLOW portraits point to sensorimotor cortex as a terminus of cortical activity during spontaneous movement behaviors. Interestingly, compared to the overlaid Allen Mouse Brain Common Coordinate Framework (white lines in Figure 8B), the attracting (backward, purple) field lines lie close to the boundary between somatosensory and primary motor cortices. We note that the integration length and threshold percentile parameters chosen for these examples determine which ridges are highlighted in the FLOW portraits.

Discussion
This paper introduces FLOW portraits as a novel approach to visualize the spatiotemporal flow of coherent features in optical widefield calcium imaging data. Viewed at this meso-scale of temporal and spatial resolution, neural activity at the cortical surface is typified by multiple brain regions activating transiently and sometimes in spatial succession. Motivated by an analogy between this flow of neural activity over cortex and physical fluid flows, we leverage techniques well established for studying physical fluid flows, in particular the finite-time Lyapunov exponent (FTLE). Here we convert movies of ∆F/F over the cortical surface into vector fields, and the FTLE ridges in these vector fields form an intuitive map of dynamic calcium activity. Importantly, our FLOW portraits do not decompose the data into modes and are not models of the data. Instead, they capture succinct portraits of diverse, variable, and non-stationary spatiotemporal patterns, such as those often observed in spontaneous or task-driven widefield calcium imaging experiments.
The FLOW portrait analysis makes several assumptions that are usually true of physical fluid systems but often not met by neural data. Coherent propagation of neural activity on the cortex does not obey mass or energy conservation, so the extracted FTLE ridges are only approximate "material" accumulation lines. This approximation is particularly poor for long bouts of data and over long integration windows, so caution must be exercised in choosing these parameters in the analysis (the same is true of FTLE analysis in fluid flows). Because the integration window effectively low-pass filters the dynamics of the data, activity on a faster timescale may be attenuated, and local activity may integrate to appear more coherent. The optimal choice of FTLE parameters for visualizing widefield activity, and how these depend on spatiotemporal statistics, will be important to understand in future applications. Further, although widefield imaging offers much larger fields of view at a higher temporal resolution than many other imaging methods, there remains much unobservable neural activity. Brain areas outside the imaging window and underneath the cortical surface contribute to the imaged activity, yet the flow of neural activity among these regions cannot be captured by our analysis and may bias the extracted flow lines. This limitation is more severe in brains with sulci and gyri, as our analysis fundamentally assumes that neighboring pixels are also neighbors on the cortical sheet.
The quality and interpretability of FLOW portraits require the imaging data to have been acquired with sufficient temporal and spatial resolution to support the analysis. Specifically, we require that the temporal sampling be fast enough that successive frames of the movie are very similar. If the frame rate is too slow and neighboring frames differ substantially, then the optical flow computation infers inaccurate vector fields and can no longer disambiguate between gradual flow of activity and sudden jumps in activation.
Despite the relatively slow dynamics of GCaMP6s compared to single-neuron activity [27], the temporal dynamics of lasting neural synchrony at this meso-scale are adequately matched to the kinetics of the indicator protein in all the data we highlight here. The choice of calcium or voltage indicator also introduces filtering in time, so our analysis relies on the dynamics of the indicator being faster than the dynamics of the underlying flow across the brain. Similarly, the spatial resolution of the data need not support disambiguation of single neurons, but it is important that spatial averaging in the field of view does not obscure coherent features of interest.
We suggest our approach expands the toolbox of techniques to analyze and understand widefield imaging data, especially by facilitating direct comparison of multiple bouts of spatiotemporal activity that are interpretable in the context of behavior and development. This visualization framework can be developed to explicitly quantify features of the flow (for example, the ridge count score analysis in Figure 7). Such quantification may be of value in further work that connects features of FLOW portraits with states of relevance to behavior, development, or disease. The transformation of widefield calcium imaging data into a vector field representation suggests multiple avenues for the development of analytic tools. For instance, where multiple coherent waves are present and propagate locally, future work may develop visualizations of the direction of activity propagation, from individual forward FLOW ridges to backward FLOW ridges. Intriguingly, it may be possible to discover partial differential equations that govern the flow of activity through these vector fields using data-driven techniques [91,92].

Developing mouse datasets
These experimental procedures were conducted at the University of Washington, and all protocols were reviewed and approved by the University of Washington IACUC. Neonatal mice expressing GCaMP6s in cortical neurons were bred by crossing mice heterozygously expressing an Emx1-driven Cre (Emx1-Cre+/-, Jackson Labs ID 005628) with mice homozygously expressing GCaMP6s under control of a Cre-dependent promoter (Ai162+/+, donated by the Allen Institute, Jackson Labs ID 031562). This cross resulted in mice expressing GCaMP6s primarily in glutamatergic cortical neurons early in development. On the day of recording, mice were placed on a heating pad and anesthetized using 1-2% isoflurane carried by 100% O2, while the local anesthetic bupivacaine was delivered subcutaneously at the scalp. The skin over the cortex was removed over a window spanning from between the ears to just above the eyes of the pup, to reveal the skull. The periosteum was then removed with fine-tip forceps and cotton swabs. At this developmental stage, the skull is uncalcified and largely transparent, so thinning or cutting a window was unnecessary. A stainless steel U-shaped bracket was then attached to the skull with cyanoacrylate glue. The bracket was clamped in place to the heating pad and stage to stabilise the head. To prevent the skull from drying and to preserve clarity, the exposed skull was also sealed with a thin layer of cyanoacrylate. Silver wire hook leads were implanted into the nuchal muscle through the same incision to monitor neck electromyography (EMG).
Once the glue had dried, isoflurane anesthesia was removed, and the pup, along with the heating pad and stage, was positioned for imaging on a Nikon AZ100 with a 2X objective and 0.6X reducer. Nuchal EMG activity was amplified with an AM Systems Model 1700 amplifier (10 Hz high pass, 60 Hz notch, 10 kHz low pass) and sampled at 10 kHz using a Powerlab 4/26 and Labchart v8 (AD Instruments). GCaMP6s activity was excited using an Intensilight mercury lamp (Nikon), captured using a CCD camera (ORCA Flash 2.8), and recorded using the HCImage application (Hamamatsu). Frame capture rates varied from 10-50 Hz with corresponding maximum exposure times (100-20 ms, respectively). To further increase the signal-to-noise ratio, the camera was set to perform online hardware-based pixel binning, reducing a 1920×1440p image to 960×720p. Individual recordings began when the animal began cycling regularly between sleep and wake, and recordings typically lasted between 40-60 minutes, after which the pup was euthanized.
Ca2+ records were processed using MATLAB (Mathworks) to create ∆F/F image stacks for FLOW portrait analysis. Briefly, imaging runs were further downsampled by pixel binning the 960×720p image down to 480×360p. To compensate for slow drift, a moving window of 40 sec was used to calculate the baseline F for each frame; each pixel in F was set to the minimum value for that pixel across the 40-sec window.
∆F was calculated as the difference between the raw pixel intensity and this calculated moving minimum. The difference was then normalized to relative change by dividing by the baseline (∆F/F). A small Gaussian spatial blur was used to attenuate "speckled" noise. Region-of-interest (ROI) masks of the visible cortical surface were generated by excluding any pixel whose mean-to-variance ratio was greater than 400:1. This value was determined heuristically to optimize exclusion of pixels that displayed minimal change in fluorescence over time, such as those that lie outside the cortical window.
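The baseline and normalization steps above can be summarized in a few lines. Below is a minimal NumPy sketch (not the authors' MATLAB code); the function name `dff_stack`, the trailing placement of the minimum window, and the assumption of positive raw intensities are our own illustrative choices.

```python
import numpy as np

def dff_stack(raw, fps, window_s=40.0):
    """Compute dF/F for a (T, H, W) image stack using a sliding-minimum
    baseline, as a sketch of the preprocessing described above.
    Assumes raw intensities are strictly positive."""
    T = raw.shape[0]
    w = int(window_s * fps)
    F = np.empty_like(raw, dtype=float)
    for t in range(T):
        lo = max(0, t - w)
        # Baseline F: per-pixel minimum over the trailing window of frames.
        F[t] = raw[lo:t + 1].min(axis=0)
    # dF/F: change relative to the moving-minimum baseline.
    return (raw - F) / F
```

A Gaussian spatial blur and the ROI masking step would then be applied on top of this stack.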

Adult mouse dataset
These experimental procedures were conducted at UCL according to the UK Animals (Scientific Procedures) Act 1986 and under personal and project licenses granted by the Home Office following appropriate ethics review. The dataset and associated procedures were described previously [35]. In brief, the data were from an adult (30 weeks) male mouse expressing GCaMP6s in excitatory neurons (tetO-GCaMP6s; CaMK2a-tTA genotype [11]). The mouse was implanted with a metal headplate, a plastic light-isolation chamber, and a transparent covering over the dorsal skull. On the day of recording, the mouse was head-fixed under the microscope on a stable seat with a rubber wheel underneath the forelimbs. Video cameras captured the frontal aspect of the mouse as well as its eye. Imaging was conducted at 70 Hz with alternating blue and violet illumination, and the imaging data were corrected for hemodynamic components. The data were processed by singular value decomposition (SVD) compression.
The images were aligned to the Allen Common Coordinate Framework [90] by manually identifying bregma and the orientation of the midline in the images. Bregma was taken to be located at the coordinate 5.7 mm AP in the CCF. Since the pixel size in the camera was known (21.7 µm/pixel), the CCF region boundaries could then be overlaid on the images.
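As a sketch of this alignment arithmetic: given the bregma pixel, the midline angle, and the known pixel size, a CCF coordinate maps to an image pixel as below. The axis conventions, the function name `ccf_to_pixel`, and the rotation handling are our illustrative assumptions, not taken from the paper.

```python
import numpy as np

UM_PER_PIXEL = 21.7   # known camera pixel size
BREGMA_AP_MM = 5.7    # bregma's AP coordinate in the CCF

def ccf_to_pixel(ap_mm, ml_mm, bregma_px, midline_angle_rad=0.0):
    """Return (row, col) pixel coordinates for a CCF point given in mm.
    bregma_px is the manually identified (row, col) of bregma."""
    # Offsets from bregma in pixels (AP axis -> rows, ML axis -> cols;
    # this axis assignment is an assumption for illustration).
    dr = (BREGMA_AP_MM - ap_mm) * 1000.0 / UM_PER_PIXEL
    dc = ml_mm * 1000.0 / UM_PER_PIXEL
    # Rotate so the AP axis follows the manually identified midline.
    c, s = np.cos(midline_angle_rad), np.sin(midline_angle_rad)
    row = bregma_px[0] + c * dr - s * dc
    col = bregma_px[1] + s * dr + c * dc
    return row, col
```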

Pan-cortical wave segmentation
Pan-cortical waves, as defined by [38], are cortical activity events where recorded activity spreads over a large area of the imaged cortex. We defined a large cortical area as one where at least 50 percent of cortical pixels (pixels that show the cortex) are active. At any time point, a pixel is active if its intensity is more than one standard deviation above the temporal mean for that pixel. To extract pan-cortical wave events, we computed the fraction of active cortical pixels throughout the recording and noted the time points where the active area exceeded the 50 percent threshold. Each wave event was then bounded by the time point when the active area crossed 10 percent prior to crossing the 50 percent threshold and the time point when the active area fell back below this 10 percent lower bound following the peak. Overlapping wave events were merged into a single pan-cortical wave to avoid redundancy. Furthermore, events that lasted less than the FTLE integration length (T) plus the optical flow scaling delay (3.5 sec or 70 frames for the mouse pup data) were not analyzed, because the FTLE and optical flow computations require longer bouts of data.
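A minimal NumPy sketch of this segmentation logic (the authors' implementation is in MATLAB; the function name `segment_waves` and the exact treatment of overlapping crossings are our illustrative choices):

```python
import numpy as np

def segment_waves(dff, mask, hi=0.5, lo=0.1):
    """Sketch of pan-cortical wave segmentation.
    dff: (T, H, W) dF/F stack; mask: (H, W) boolean cortical ROI."""
    px = dff[:, mask]                        # (T, n_pixels)
    # A pixel is active when > 1 std above its temporal mean.
    active = px > px.mean(0) + px.std(0)
    frac = active.mean(axis=1)               # fraction of active cortical pixels
    events = []
    # Upward crossings of the hi threshold mark candidate wave events.
    for t in np.flatnonzero((frac[1:] >= hi) & (frac[:-1] < hi)):
        peak = t + 1
        start = peak
        while start > 0 and frac[start - 1] >= lo:
            start -= 1                       # walk back to the lo-crossing
        end = peak
        while end < len(frac) - 1 and frac[end + 1] >= lo:
            end += 1                         # walk forward to the lo-crossing
        # Crossings within the same lo-bounded interval yield identical
        # boundaries; keeping only the first merges overlapping events.
        if not events or start > events[-1][1]:
            events.append((start, end))
    return events
```

Events shorter than the integration length plus the scaling delay would then be discarded, as described above.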

Sleep bouts during development
Sleep-state cortical activity was segmented using the nuchal EMG as an indicator of state (sleep or awake). Time points were clustered into three groups based on the nuchal EMG power spectrum as in [38,88,89], where the lowest-power group is known to represent the sleep state. We defined a sleep bout as a period of continuous classification in the sleep state, and extracted the 10 longest bouts from each recording over the developmental time span. Any bout that did not meet the FTLE and optical flow length requirement (3.5 sec or 70 frames for the mouse pup data; 0.8 sec or 30 frames for the adult mouse data) was not analyzed further. In cases where fewer than 10 bouts met the length requirement, we included fewer sleep bouts for that recording.
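Given a boolean sleep-state vector (for example, from the EMG power-spectrum clustering above), the bout-extraction step can be sketched as follows; the function `longest_sleep_bouts` and its argument names are illustrative:

```python
import numpy as np

def longest_sleep_bouts(is_sleep, n_bouts=10, min_len=70):
    """Sketch: extract the n longest continuous sleep bouts from a boolean
    state vector, dropping bouts shorter than the FTLE/optic-flow minimum
    (e.g. 70 frames for the mouse pup data)."""
    is_sleep = np.asarray(is_sleep, bool)
    # Rising/falling edges of the sleep state, with zero-padding at the ends.
    edges = np.diff(np.r_[0, is_sleep.astype(int), 0])
    starts, ends = np.flatnonzero(edges == 1), np.flatnonzero(edges == -1)
    bouts = [(s, e) for s, e in zip(starts, ends) if e - s >= min_len]
    bouts.sort(key=lambda b: b[1] - b[0], reverse=True)  # longest first
    return bouts[:n_bouts]
```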

Movement event extraction
We extracted movement events from video of the face and front arms of the adult mouse during the widefield imaging experiment. We defined a movement score for each time point in the video based on the difference between the current time point and the previous time point. Each video frame was assigned a movement score given by the sum (over all pixels in the frame) of the difference between the current and previous frame. For time point t, the score is given by MovementScore_t = Σ_pixels (I_t − I_{t−1}), where I is the pixel intensity for each of the pixels in the frame. The time series of movement scores was normalized to the maximum observed value for ease of interpretation and visualization. Timestamps of video frames were determined by recording TTL pulses emitted by the camera on each exposure, for both calcium imaging and behavioral videos. We then compared cortical activity across varying movement regimes.
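A NumPy sketch of the movement score; note that the text leaves the sign convention of the pixel difference unspecified, so the absolute value used here is our assumption:

```python
import numpy as np

def movement_scores(frames):
    """Sketch of the movement score: per-frame sum over pixels of the
    frame-to-frame intensity difference, normalized to the maximum value."""
    frames = np.asarray(frames, dtype=float)
    # Absolute difference guards against positive and negative pixel
    # changes cancelling within a frame (our assumption).
    score = np.abs(np.diff(frames, axis=0)).sum(axis=(1, 2))
    score = np.r_[0.0, score]   # no difference is defined for the first frame
    return score / score.max()
```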

Horn-Schunck optical flow
We computed optical flow vector fields using the Horn-Schunck optical flow algorithm [77] implemented in MATLAB [93]. Two parameters must be supplied to the optical flow algorithm: the maximum number of iterations and the α smoothness parameter. Values for both parameters were selected such that the errors in the Horn-Schunck minimization problem (see [77] for details) were simultaneously minimized. We set the maximum number of iterations to 100 and α to 1 for all computations.
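For readers outside MATLAB, the Horn-Schunck iteration can be sketched in NumPy as below. This is a minimal textbook version of the algorithm [77], not the authors' implementation; the wrap-around neighborhood averaging at image borders is a simplification.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optic flow between two frames.
    Returns (u, v): per-pixel x- and y-velocity."""
    I1, I2 = I1.astype(float), I2.astype(float)
    # Spatial and temporal intensity gradients (simple finite differences).
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        # Local flow averages over the 4-neighborhood (wraps at borders).
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        # Horn-Schunck update: data term balanced by alpha-weighted smoothness.
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v
```

For a linear intensity ramp translating one pixel per frame, the recovered flow converges to a uniform unit velocity, as expected.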

Optical flow scaling and smoothing
To minimize the effects of noise on the optical flow fields, we applied an activity-based scaling to the magnitudes of the optical flow vectors. First, we created a time series of weights for each pixel by normalizing the change in raw pixel intensity between the current time and a fixed delay in the past to the maximum observed change. We chose time delays of 1.5 and 0.5 seconds for the developmental and adult mouse datasets, respectively, empirically matched to the time scale of large changes observed in the raw data. Next, we took a sliding windowed average of the weights, over a window of 0.25 seconds, to further reduce the effects of recording noise. We then scaled the magnitude of the optical flow vectors by applying the weights to the corresponding vectors. Lastly, we temporally smoothed the optical flow fields using a 5-point Gaussian window created with MATLAB's gausswin() function. The gausswin function takes an additional parameter, α, which is proportional to the inverse of the standard deviation of the Gaussian smoothing kernel. We set this parameter to 1.25 for all smoothing operations for its observed ability to reduce noise in the processed vector fields.
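The scaling and windowed-averaging steps can be sketched as follows. This is a NumPy stand-in for the authors' MATLAB pipeline: the final gausswin() temporal smoothing is omitted, and the absolute-value change and zero-weight guard are our assumptions.

```python
import numpy as np

def scale_flow(u, v, dff, fps, delay_s=1.5, avg_s=0.25):
    """Sketch of activity-based scaling: weight each optic-flow vector by
    the normalized intensity change over a delay_s lookback, then smooth
    the weights with a sliding average of avg_s seconds.
    u, v, dff: (T, H, W) arrays."""
    d = int(delay_s * fps)
    w = np.zeros_like(dff)
    # Change in pixel intensity relative to delay_s seconds in the past.
    w[d:] = np.abs(dff[d:] - dff[:-d])
    w /= max(w.max(), 1e-12)       # normalize to the maximum observed change
    # Sliding-window average along time to reduce recording noise.
    k = max(1, int(avg_s * fps))
    kernel = np.ones(k) / k
    w = np.apply_along_axis(lambda x: np.convolve(x, kernel, mode='same'), 0, w)
    return u * w, v * w
```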

Finite Time Lyapunov Exponent (FTLE) fields
We computed the FTLE of all vector fields using the LCS Tool [73] (https://github.com/jeixav/LCS-Tool) MATLAB software package. We computed the FTLE using an integration length of 2.0 seconds (40 frames) for the developing mouse data and an integration length of ∼0.5 seconds (15 frames) for the adult mouse data. Additionally, we used integration lengths of 15 frames, 12 frames, and 10 frames for the plane wave, the circular wave, and the traveling Gaussian examples, respectively. To choose the integration length T, we followed the criterion outlined in [62] of choosing a value such that the FTLE ridges are sufficiently resolved. Using a sample of each dataset, we computed the FTLE for a range of integration lengths (0 to 100 frames) and visualized the resulting FTLE fields. We then chose the smallest integration length for which the corresponding FTLE field had well-resolved, sharp ridges. Supplemental Figure 4B illustrates the effects of computing FLOW portraits with a range of integration lengths.
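To make the FTLE computation concrete, here is a minimal sketch for a steady analytic velocity field (the paper applies LCS Tool to time-varying optic-flow fields; forward-Euler advection and the function name `ftle` are our simplifications). For the saddle flow u = x, v = −y over a unit integration time, the FTLE should be approximately 1 everywhere:

```python
import numpy as np

def ftle(vel, x, y, T, n_steps=50):
    """Sketch of an FTLE field on a grid for a steady velocity field.
    vel(px, py) -> (u, v) evaluates the field at particle positions."""
    X, Y = np.meshgrid(x, y)
    px, py = X.copy(), Y.copy()
    dt = T / n_steps
    for _ in range(n_steps):                 # forward-Euler particle advection
        u, v = vel(px, py)
        px, py = px + dt * u, py + dt * v
    # Jacobian of the flow map via finite differences on the grid.
    dxdx = np.gradient(px, x, axis=1); dxdy = np.gradient(px, y, axis=0)
    dydx = np.gradient(py, x, axis=1); dydy = np.gradient(py, y, axis=0)
    out = np.zeros_like(px)
    for i in range(px.shape[0]):
        for j in range(px.shape[1]):
            J = np.array([[dxdx[i, j], dxdy[i, j]],
                          [dydx[i, j], dydy[i, j]]])
            C = J.T @ J                      # Cauchy-Green strain tensor
            lam = np.linalg.eigvalsh(C).max()
            out[i, j] = np.log(lam) / (2.0 * abs(T))
    return out
```

Backward-time FTLE fields are obtained by integrating with a negative T (i.e., advecting particles backward through the field).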

FLOW portrait construction
FLOW portraits are constructed through several image processing steps that aim to extract ridges from an FTLE field (see Figure 4 for a visualization of the intermediate processing steps). It is important to note that we process the forward and backward FTLE fields separately and overlay them on the mean ∆F/F image to create the final FLOW portrait.
We begin by averaging the FTLE time series to aggregate the flow features into mean forward and backward FTLE fields. Next, we isolate possible ridge-like features by thresholding the mean FTLE field at a chosen percentile to form a binarized image. This thresholding step is motivated by recognizing that a ridge can be thought of as a continuous path along a local maximum in the field [60,62]. Therefore, the binarized mean FTLE field is assumed to contain the ridges whose values exceed the chosen threshold. Throughout this work we denote the specified threshold value as a parameter named the threshold percentile. For each FLOW portrait analysis we choose the threshold percentile to extract the FTLE ridges (see black arrows in Figure 4A for example ridges). Figure 4A and B show the correspondence between the mean FTLE field and the binary versions (the threshold percentile was set to 95 percent). We used threshold percentiles between 90% and 93% for the mouse pup dataset and a threshold percentile of 93% for the adult mouse dataset. Additionally, we thresholded the plane wave, the circular wave, and the traveling Gaussian examples at the 91st, 93rd, and 85th percentiles, respectively. Supplemental Figure 4C illustrates FLOW portraits computed with a range of threshold percentiles.
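The averaging and thresholding steps amount to a two-line operation; a sketch, with `binarize_mean_ftle` as an illustrative name:

```python
import numpy as np

def binarize_mean_ftle(ftle_stack, threshold_percentile=93.0):
    """Average the FTLE time series (T, H, W), then keep pixels above the
    chosen percentile of the mean field (the thresholding step above)."""
    mean_ftle = ftle_stack.mean(axis=0)
    thr = np.percentile(mean_ftle, threshold_percentile)
    return mean_ftle > thr
```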
Next, we perform two sets of morphological image processing operations on each binarized mean FTLE field to produce the final FLOW portrait. The first set of operations denoises the approximate ridges extracted from the FTLE fields, while the second set smooths the ridges to produce the FLOW portrait. We found that these two series of operations provide strong approximations to the ridge features that we observe in the FTLE fields. We use the bwmorph() function in MATLAB for all morphological image processing operations (see https://www.mathworks.com/help/images/ref/bwmorph.html for details). This function applies a specified morphological operation iteratively, either a fixed number of times specified by the n parameter, or until the input image no longer changes (n = Inf). Unless otherwise specified, we performed morphological operations until the image no longer changed, with n = Inf. We refer the reader to the MATLAB documentation, Gonzalez et al. [94], and Haralick and Shapiro [95] for the mathematical details of each morphological processing operation used.
The first set of operations transforms the noisy, disconnected ridges in the binarized images into connected ridges that resemble those observed in the raw data. First, we perform the 'close' operation (morphological dilation followed by erosion) to close any gaps within the binary image. Next, we use the 'thin' operation to thin the blob-like structures seen in Figure 4B into a series of lines. Lastly, we skeletonize the image by applying the 'skel' operation (performed with n = 4). Together these operations convert the disconnected, blob-like structures seen in Figure 4B into the connected, single-pixel-wide structures in Figure 4C. These skeletonized structures can be thought of as approximating the centerlines of the FTLE ridges.
The second set of operations smooths the skeletonized image to produce the FLOW portrait. Here we perform the 'diag' operation to connect regions where two pixels lie corner-to-corner with an additional pixel. We then apply the 'spur' operation to remove any remaining single-pixel spurs from the ridges. Lastly, we close any gaps introduced by these operations with the 'close' operation.
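MATLAB's bwmorph operations have no one-to-one SciPy equivalents, but the 'close' and 'spur' steps can be roughly approximated as below ('thin', 'skel', and 'diag' are omitted). This is an illustrative stand-in, not the pipeline used in the paper.

```python
import numpy as np
from scipy import ndimage

def clean_ridges(binary, n_spur=5):
    """Rough scipy stand-in for part of the bwmorph pipeline: a morphological
    'close', then iterative endpoint removal mimicking bwmorph's 'spur'."""
    img = ndimage.binary_closing(binary, structure=np.ones((3, 3)))
    for _ in range(n_spur):
        # Count set 8-neighbors of each pixel (convolution counts self too,
        # so subtract the pixel itself).
        neighbors = ndimage.convolve(img.astype(int), np.ones((3, 3)),
                                     mode='constant') - img
        # Strip endpoint pixels (fewer than two set neighbors), which
        # removes spurs and isolated speckles.
        img = img & (neighbors >= 2)
    return img
```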
Lastly, we overlay the processed forward and backward images on the corresponding mean ∆F/F image to create the final FLOW portrait. An example FLOW portrait can be seen in Figure 4D.

Quantification of FLOW portrait consolidation during development
To quantify the consolidation of FLOW portraits during development, we computed a metric we denote the ridge count score: the number of disconnected ridges (objects) in the FLOW portrait divided by the total number of pixels included in the FLOW portrait. This score was computed for the forward FLOW, the backward FLOW, and both combined, for each sleep bout. We then took the mean ridge count score of all sleep bouts from animals of the same developmental age. Lastly, we used a paired t-test to determine whether the mean ridge count score for developmental days P1-P3 is statistically different from that for developmental days P5-P7.
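The ridge count score itself reduces to a connected-component count; a sketch using SciPy's 8-connected labeling (`ridge_count_score` is an illustrative name):

```python
import numpy as np
from scipy import ndimage

def ridge_count_score(flow_binary):
    """Sketch of the ridge count score: number of disconnected ridges
    divided by the total ridge area (in pixels)."""
    # 8-connected components, matching diagonal connectivity of ridges.
    labeled, n_ridges = ndimage.label(flow_binary, structure=np.ones((3, 3)))
    area = flow_binary.sum()
    return n_ridges / area if area else 0.0
```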

Code and data availability
Our code is publicly available without restriction, other than citation, on GitHub at https://github.com/natejlinden/FLOWPortrait. The code and data in this repository can reproduce all main analyses, findings, and figures from our paper.

Figure 3:
Steps to compute a FLOW portrait. Starting with widefield data preprocessed as ∆F/F (A), optical flow is used to convert the frame-by-frame changes in pixel intensity to a vector field, shown zoomed in for the smaller area outlined with the yellow box and at 1/6 spatial resolution for clarity (B). Next, the FTLE fields are computed in forward (C) and backward (D) time using an integration length of 2 seconds (40 frames); here we show only the non-negative Lyapunov exponents. Ridges of these fields highlight coherent structures of the flow (E), and these ridges are used to compute the final FLOW portrait (see Figure 4). The threshold percentile was set to 93 percent. The forward-time FTLE ridges (orange) highlight regions that repel flow, while the backward-time ridges (purple) show regions that attract activity. Note that ridges in neighboring frames are similar but do vary in time.

• Video 1 - The FLOW portraits for both waves highlight the regions where each wave begins and where it ends. Arrows (black and white) show the general direction of wave propagation. FLOW portraits were computed with integration lengths of 15 frames and 5 frames, and the threshold percentile was set to the 91st and 90th percentiles, for the plane wave and mouse pup datasets, respectively.
• Video 2 - This wave event is depicted in Figure 6B as wave ii. The video is played at 20 frames per second such that one second of video corresponds to one second of recording. The FLOW portrait was computed using an integration length of 2 seconds (40 frames) and a visualization threshold of 0.92.
• Video 3 - An example pan-cortical wave (left pane) in widefield imaging data recorded from a developing mouse and the corresponding FLOW portrait (right pane). This wave event is depicted in Figure 6B as wave i. Note that the FLOW portrait differs from that seen in the figure due to differences in thresholding. The video is played at 20 frames per second such that one second of video corresponds to one second of recording. The FLOW portrait was computed using an integration length of 2 seconds (40 frames) and a visualization threshold of 0.92.
• Video 4 - An example bout of spontaneous cortical activity (left pane) in widefield imaging data recorded from an adult mouse and the corresponding FLOW portrait (right pane). This bout is depicted in Figure 8 as bout i. The video is played at 20 frames per second such that one second of video corresponds to one second of recording. The FLOW portrait was computed using an integration length of approximately 0.4 seconds (15 frames) and a visualization threshold of 0.93.
• Video 5 - An example bout of spontaneous cortical activity (left pane) in widefield imaging data recorded from an adult mouse and the corresponding FLOW portrait (right pane). This bout is depicted in Figure 8 as bout ii. The video is played at 20 frames per second such that one second of video corresponds to one second of recording. The FLOW portrait was computed using an integration length of approximately 0.4 seconds (15 frames) and a visualization threshold of 0.93.
Supplemental Figure 4: Effects of the integration length and threshold percentile hyperparameters on FLOW portraits. (A) The synthetic traveling Gaussian example for which we illustrate the effects of hyperparameters on FLOW portraits. (B) Increasing the integration length resolves more detail in the FLOW portrait until a critical integration length is reached; integration lengths beyond the critical value begin to lose detail. The critical integration length is around 10 frames in this example. All FLOW portraits were computed with the threshold percentile fixed at 85%. (C) Thresholding the FTLE field enables isolation of FTLE ridges; however, ridge information is lost if the threshold percentile is too large. All FLOW portraits were computed with the integration length fixed at 10 frames.

Figure 1:
Figure 1: FLOW portraits capture coherent propagation of structures that are poorly represented by common modal decompositions that aim to achieve space-time factorization. (A) Three examples of spatiotemporal data for which we compare principal component analysis (PCA), non-negative matrix factorization (NNMF), and our FLOW portraits. One synthetic example is a two-dimensional Gaussian that grows, translates to the right, then shrinks. Two further in vivo examples are widefield calcium imaging data from a developing pup and an adult mouse. The dashed white lines at 0 sec indicate the midline of the brain. The mouse pup data includes a pan-cortical wave from a postnatal day 7 (P7) animal; scale bar is 1 mm. The adult mouse data shows spontaneous widefield calcium activity recorded in the dark; scale bar is 2 mm. (B) FLOW portraits show a succinct summary of the spatiotemporal flow in each example dataset, while spatial PCA and NNMF modes do not. The PCA modes are the first 4 spatial components; the NNMF modes are from a 4-mode solution to the factorization and are not ordered. Both sets of modes decompose the growth and translation of activity into static spatial images, from which the flow of the activity cannot be easily appreciated. In contrast, our FLOW portraits highlight regions of activity initiation and termination, as well as the direction and extent of activity spread. Orange structures ('f'; forward-time FTLE) capture regions where activity propagates from, and purple structures ('b'; backward-time FTLE) capture regions where activity propagates towards. Supplemental videos illustrating all datasets are available as Supplemental Videos 1-5. FLOW portraits were computed with integration lengths of 10 frames, 40 frames, and 15 frames, and the threshold percentile was set to the 85th, 93rd, and 93rd percentiles, for the synthetic, mouse pup, and adult mouse datasets, respectively.

Figure 2:
Figure 2: Finite Time Lyapunov Exponent (FTLE) fields are computed from spatiotemporal data. (A) An illustration of how optic flow is computed from successive frames of images by correlating the relative movement of pixel intensities. This procedure converts widefield imaging data into a vector field of velocities. (B) The flow map at every pixel location is a virtual particle at x integrated through the vector field for a duration of T, from t0 to t0 + T; in reverse time, particles are integrated from t0 to t0 − T. This integration stretches neighboring particles in some directions (r1) and compresses them in others (r2). (C) The flow map computation is repeated starting at each frame of the movie, at base time t0 + k∆t, where ∆t is the separation between frames; forward maps are orange and backward maps are purple. (D) The FTLE fields are computed from the Jacobians of these flow maps; example FTLE fields are illustrated for successive frames of widefield imaging data.

Figure 4:
Figure 4: Ridges in the FTLE field are extracted to form the FLOW portrait. (A) The forward and backward FTLE fields are separately averaged to aggregate flow structures over time. Black arrows indicate examples of FTLE ridges which are extracted in the following analysis. (B) Next, the mean FTLE fields are binarized using a threshold which is chosen by the user at a specified percentile (denoted the threshold percentile; here chosen to be the 95th percentile). The binary forward and backward FTLE fields are shown overlaid on the mean ∆F/F image. Pale orange and pale purple arrows show the same ridges as in (A); pale orange corresponds to the forward-time ridges and pale purple to those in backward time. (C) Ridges in the FTLE are approximated by performing a skeletonization procedure on the binarized FTLE fields. (D) Lastly, FLOW portraits are produced by further morphological image processing to smooth the approximate FTLE ridges. The final FLOW portrait highlights the example ridges observed in the original mean FTLE fields.

Figure 6:
Figure 6: Pan-cortical wave events in a P7 mouse pup are summarized as FLOW portraits. (A) Pan-cortical waves are defined as events where the fraction of active cortex (black trace) exceeds 50 percent. Briefly, the fraction of active cortex is defined as the fraction of pixels whose intensity is greater than one standard deviation above the mean (in time) for that pixel. FTLE intensity is defined as the sum of the FTLE values for each frame, normalized by the maximum value in time; this intensity is computed for both the forward and backward FTLE time series. (B) FLOW portraits are shown for two example waves, indicated by i and ii in A. Orange indicates forward-time FTLE ridges where calcium activity originates. Purple indicates backward-time FTLE ridges where calcium activity propagates towards. White arrows highlight the general direction of activity propagation during the cortical wave. The FLOW portraits are computed using an integration length of 2 seconds (40 frames) and a threshold percentile of 93 percent.

Figure 7:
Figure 7: FLOW portraits highlight developmental changes of sleep-state cortical activity. (A) FLOW portraits for 5 sleep bouts from 12 mouse pups (P1-P8) are shown. During the first 3 postnatal days, activity is diffuse, as indicated by many short-length structures in the FLOW portrait. As animals grow older (postnatal days 5-8), sleep-state cortical activity becomes more structured, as indicated by a consolidation of features in the FLOW portraits. Orange indicates repelling structures, and purple indicates attracting structures. All images are of the left hemisphere, such that the midline and anterior directions are oriented towards the bottom and left of the images, respectively. FLOW portraits were computed using an integration length of 2 seconds (40 frames) and a threshold percentile of 93 percent for all sleep bouts shown. (B) Quantification of the number of FLOW ridges seen versus developmental day. The ridge count score for a FLOW portrait is computed by counting the number of FLOW ridges, either forward, backward, or both combined, and dividing by the total area of FLOW ridges in that portrait. When the ridge count score is high there are many smaller ridges in the image, whereas when the score is low there are fewer ridges with a larger ridge area. Here, the mean ridge count score (black point) decreases between developmental day one and day five and then remains constant. Further, the mean ridge count score over days P1-P3 (0.0038, 0.0027, and 0.0022 for forward, backward, and combined, respectively) is significantly different from the mean ridge count score over days P5-P8 (0.0019, 0.0015, and 0.0010 for forward, backward, and combined, respectively; paired t-test, p-values 8.70 × 10^−4, 7.16 × 10^−5, and 1.62 × 10^−7 for forward, backward, and combined, respectively). Blue dashes show individual data points (the ridge count score for an individual FLOW portrait). Error bars show ±1 standard error of the mean.

Figure 8:
Figure 8: Examples of spontaneous cortical calcium activity associated with movements of an adult mouse, summarized as FLOW portraits. (A) A movement score extracted from IR video of the mouse moving spontaneously in the dark shows bouts of large movements among more quiescent periods. These bouts of movement do not necessarily correspond to times when a large fraction of the cortical surface is active (see Methods for threshold criteria). (B) FLOW portraits for two bouts involving spontaneous movements, labeled i and ii, show coherent structures indicating that activity appears in sensorimotor regions and is then attracted to the centers of these regions bilaterally. Boundaries aligned to the Allen Mouse Brain Common Coordinate Framework [90] are overlaid in white. FLOW portraits were computed with a 15-frame integration length and a threshold percentile of 93 percent.