Journal of The Royal Society Interface
Research article

Imaging and analysis of a three-dimensional spider web architecture

Published: https://doi.org/10.1098/rsif.2018.0193

    Abstract

    Spiders are abundant in nature and found in most ecosystems, comprising more than 47 000 species. This ecological success is in part due to the exceptional mechanics of the spider web, with its strength, toughness, elasticity and robustness, which originate from its hierarchical structure, from protein sequence design all the way to web architecture. It is a unique example in nature of high-performance material design. In particular, to survive in different environments, spiders have optimized and adapted their web architectures, which provide housing, protection and an efficient tool for catching prey. The most studied web in the literature is the two-dimensional (2D) orb web, which is composed of radial and spiral threads. However, only 10% of spider species are orb-web weavers, and three-dimensional (3D) webs, such as funnel webs, sheet webs or cobwebs, are much more abundant in nature. The complex spatial network and microscale size of silk fibres pose significant challenges to determining the topology of 3D webs, and only a limited number of previous studies have attempted to quantify their structure and properties. Here, we focus on developing an innovative experimental method to directly capture the complete digital 3D spider web architecture with micron-scale resolution. We built an automatic segmentation and scanning platform to obtain high-resolution 2D images of individual cross-sections of the web that were illuminated by a sheet laser. We then developed image processing algorithms to reconstruct the digital 3D fibrous network by analysing the 2D images. This digital network provides a model that contains all of the structural and topological features of the porous regions of a 3D web with high fidelity, and when combined with a mechanical model of silk materials, will allow us to directly simulate and predict the mechanical response of a realistic 3D web under mechanical loads. 
Our work provides a practical tool to capture the architecture of sophisticated 3D webs, and could lead to studies of the relation between architecture, material and biological functions for numerous 3D spider web applications.

    1. Introduction

    Silk exhibits exceptional properties, such as strength, toughness, elasticity and robustness, which cause it to surpass other biological and engineering materials [1–5]. Indeed, for millions of years, silk's properties have been tuned to carry out specific functions. In the case of spider webs, it provides protection, housing and a tool for prey foraging [4–6]. Silk's mechanical properties derive from its hierarchical structure, which ranges from its protein sequence to macroscale material architectures such as spider webs or cocoons [1,7]. For instance, spider silk in a web has to be strong enough to carry the weight of the spider and tough enough to withstand prey impact and environmental threats such as wind load or debris impact, while remaining robust despite the existence of defects [8,9]. The robustness of spider webs relies on their discrete architecture and the nonlinear behaviour of dragline spider silk [8], while cocoons protect their pupae through a multilayered non-woven silk structure [1]. In addition to its remarkable mechanical properties, silk is also biodegradable and biocompatible, which means that silk-based materials can be used for biomedical applications [10,11]. Spider webs have inspired numerous biomimetic material and structural designs in diverse fields such as art [12], architecture [13,14], batteries [15] and acoustics [16].

    Spiders (order Araneae) are extremely abundant in most ecosystems on the planet as a result of their evolutionary success, making up more than 47 000 different species [17] that have existed for over 380 million years [18–20]. Indeed, spiders have survived and proliferated in diverse environments due to the adaptive skills made possible by their silk [18]. Spiders can spin up to eight different types of silk that have different properties and functions, such as the flexibility and stickiness of viscid silk to catch prey, or the strength and stiffness of dragline silk for making the frame of the web or serving as a safety line [1,8,18,21]. Silk structures spun by spiders play a significant role in their survival. Web architectures are abundant and varied: from primal subterranean trapdoor burrows [19], minimalist silken T webs, triangular webs or vertical geometrical orb webs to more complex three-dimensional (3D) webs such as tangle webs, funnel webs, cobwebs and complex orb webs [18,21]. Extensive research has been done on the mechanics of silk and of the vertical orb web, the typically two-dimensional (2D) web built from stiff radial silk and from extensible and sticky spiral threads to catch aerial prey [8], despite the fact that orb-weaver spiders account for only 10% of spider species [21].

    The robustness of 2D vertical orb webs results from the nonlinear material behaviour of the radial threads (dragline silk), which helps to localize the deformation and ruptures caused by a point load (prey or debris) [8], and from the distribution of fibre thickness, which may allow sacrificial failure of thin spiral threads [9]. In addition to the effect of the nonlinear mechanics of silk, the geometry of the spider web, which defines how the individual silk fibres are connected and how they are anchored to substrates, plays an important role in carrying out its biological functions such as housing, protection and prey capture. Biologists suggest that spiders needed a 3D barrier to protect themselves from predators [21–23]. In the case of sheet webs and cobwebs, the geometric regularity and the use of capture silk are reduced, which leads them to capture prey differently compared to aerial orb webs [21,24]. For example, it has been observed that cobwebs catch ambulatory prey by catapulting the prey inside the scaffolding of the web, in contrast to orb webs, which capture prey by dissipating the energy of the prey stuck to the web's spiral threads [21]. 3D webs are energetically costly [24,25] and are often more permanent structures [21,26], requiring silk fibres that can withstand prolonged and repetitive stresses [21,27]. Therefore, no weak viscid silk can be included [21,28]. These ecological observations of the differences between 2D and 3D webs highlight the importance of the interplay between the biological function and the structural features of 3D spider webs, a subject that has remained elusive.

    In order to fully understand the structure–function relationship of 3D webs, the intricate architecture of 3D webs needs to be precisely revealed. Because of the complex irregular 3D network and the microscale thickness of the silk fibres, there is no existing imaging toolkit that allows direct quantification of the topology of web structures. Indeed, the extremely thin silk fibres (approx. 1 µm [1]) are only visible to human eyes under appropriate lighting conditions and angles [25]. Although it is possible to distinguish the fibres and visualize the webs by dusting the web or spraying it with water [29], these methods are not suitable for precise imaging because they apply unknown loads that can generate deformation, such as contraction of up to 50% from water [1]. Non-invasive imaging techniques such as CT (computed tomography) scans or ultrasound imaging do not work for webs because the fibres are too thin to be captured [12,30]. A confocal microscope can capture the web's architecture, but only partially, for small samples observed at very close range and only as a 2D projection [30,31]. Micro-CT machines have sufficient resolution and 3D imaging capability, but they are limited to small samples and are not suitable for capturing an entire spider web [32].

    For a tens-of-centimetre-scale 3D spider web, the Spider Web Scan (SWS) laser-supported tomographic method was recently proposed, developed by Studio Tomás Saraceno (STS) in collaboration with Peter Jäger (Senckenberg Institut) and the Photogrammetric Institute at the TU Darmstadt. The SWS approach consists of illuminating one plane of the spider web, housed in a clear Perspex box, using a red sheet laser to capture stereoscopic image pairs, and then repeating this process along the length of the web. The use of stereo cameras allowed for sharper and more precise images of fibres. STS developed this method and used it to capture the architecture of a 3D spider web (Latrodectus mactans) in a non-automated way [12,33–36]. One shortcoming of this method is that the laser had to be moved and an image taken manually for each step, which was tedious. Moreover, the derived web geometry lacked certain elements, as the gap between scans was too large (5 mm) [12,37], and the absence of automation made image processing labour intensive. The resulting structures were not sufficiently complete to be directly translated into a computer model of the web. Similarly, 3D webs had been scanned with infrared cameras that captured slices of the web illuminated by a sliding infrared sheet laser [38]. However, infrared cameras often have a lower resolution (640 × 480 pixels) than commercial visible-light cameras (5184 × 3456 pixels). In a low-resolution image, regions of dense fibres can become indistinguishable. As a result, this method was used to approximate the topology of a 3D spider web structure with missing elements, but it was capable of tracking spider movements during web construction [38].

    Due to these existing limitations, we have developed a new systematic and rigorous toolkit, inspired by the SWS approach [37], to directly capture the geometry of the 3D architecture of the spider web and generate a computer model automatically, in a fast, convenient and more precise way. We will use this method to reveal the structure of 3D spider webs and investigate the relation between architecture, material, and performance of numerous 3D spider webs, and the role of this interplay in fulfilling the functions of the web. Understanding 3D spider webs could contribute to structural and material optimization for bioinspired composite material design.

    Our paper is organized as follows. First, the experimental set-up of the laser scanning machine is presented. In the second part, an image processing method to study scanned fibres is described to visualize the intricate architecture of spider webs through a 3D model derived from the processed scans. In the third part, the web topology is analysed and quantified using the model. Finally, in the fourth part, future applications of our 3D spider web model are explored, by combining silk mechanics and web architecture to deepen our understanding of spider web ecological and evolutionary fitness. This future effort could lead to 3D-spider-web-inspired structures such as high-performance lightweight long-span structures, safety nets, or the designs of fibre-reinforced composite materials.

    2. Methods

    2.1. Spider web construction

    The spider built its web in a rectangular frame over a few days (figure 1a). Spiders build webs in dimly lit environments, often using branches, rocks and corners as support [18]. To build intact spider webs in controlled conditions, we built a rectangular 35.6 × 35.6 × 24.4 cm frame (figure 1a), adapted from the Spider Web Frame (SWF) method developed by STS [39]. We constructed the frame using carbon fibre tubes of 3.175 mm diameter. To connect the tubes, we built a tripod made of 0.889 mm diameter spring-back stainless steel wire. For each connection, we bent three wires to 90° to connect the perpendicular tubes. We added glue at the corners to strengthen the connections. The frame is reusable, dismountable, and was assembled from readily available components. The materials used for the frame were purchased from McMaster-Carr [40]. To prevent the spiders from escaping, we purchased large storage containers and filled them with water to surround the frame (figure 1a). Depending on the size of the spider, we needed to increase the ratio of the size of the container to the size of the frame because some spiders could still escape by jumping out of the container. We purchased spiders via Bugs In Cyberspace [41]. The spiders built their initial web structures over a few days but kept modifying them over time while waiting to catch prey in their webs. Spiders can survive without eating for weeks but need water, which occurs naturally on the web as dew [42]. We fed the spiders moths, flies and other insects. The spider web used for scanning was a tent web built by the Cyrtophora citricola spider.

    Figure 1.

    Figure 1. Experimental set-up. (a) Schematic of the rectangular 35.6 × 35.6 × 24.4 cm frame where the spider spun its web. The frame was placed in a container filled with water so that the spider did not escape. (b) Laser scanner set-up. The frame with the spider web was placed on a raised table in the middle of the moving rail. A sheet laser that lit up slices of the web and a high-resolution camera were fixed to the supporting stand, which moved on rails along the depth (z) direction of the spider web. The camera was always focused on the laser plane. (c) Scanning steps. Every 11 s, the supporting stand moved 0.5 mm over 3 s, then stopped for 3 s to allow the laser and camera to stabilize. The camera then took a scan of a slice of the web with a 2 s exposure. The next motion started 3 s after the scan. This process was repeated 660 times to scan the full web. (Online version in colour.)

    2.2. Construction and set-up of moving rail

    The camera and sheet laser moved along a moving rail (figure 1b). We machined and assembled the moving rail for web scanning at the machine shop of the Department of Civil and Environmental Engineering at MIT. The rail was mainly composed of two 1200 mm long, 12 mm diameter stainless steel linear shafts that were supported by two L-shaped aluminium beams at the two ends. We installed four linear bearings, connected to a supporting stand, on the linear shafts. The supporting stand, made of an aluminium plate, carried the camera and the sheet laser and moved linearly along the depth (z-axis) of the web with low friction on the shafts. A rubber belt, connected at its two ends to the supporting stand, provided the stand's linear motion. Compared to a linear screw, the rubber belt provided quieter and more stable motion. The belt was wound across a 24 V stepper motor with a holding torque of 6 Nm, which was sufficient to convert its rotating motion into the linear motion of the belt. A motion control resolution of less than 20 µm was achieved by using a motor with a step angle of 1.8°, a microstep subdivision of 16 and a belt gear wheel diameter of 2 cm. To avoid vibration of the web during scanning, the web frame did not move; instead, the camera and the sheet laser, attached to the supporting stand, moved linearly with a constant offset during scanning. Maintaining a constant distance between the camera head and the laser plane allowed for clear images without the need to adjust the camera focus during scanning. The rotation of the stepper motor was entirely controlled by a CNC controller board (Toshiba TB6560) connected to the serial port of a computer running Mach3 [43] software, which fully controlled the moving speed, distance travelled and direction.
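    The quoted sub-20 µm motion resolution follows directly from the stated drive parameters. As a quick check (a Python sketch of the arithmetic, not part of the original workflow): the belt advances by one wheel circumference per motor revolution, and each revolution is divided into full steps times microsteps.

```python
import math

def linear_resolution_um(step_angle_deg=1.8, microsteps=16, wheel_diameter_cm=2.0):
    """Linear travel per microstep of a belt drive: wheel circumference
    divided by the number of microsteps per motor revolution."""
    steps_per_rev = (360.0 / step_angle_deg) * microsteps  # 200 x 16 = 3200
    circumference_um = math.pi * wheel_diameter_cm * 1e4   # cm -> µm
    return circumference_um / steps_per_rev

res_um = linear_resolution_um()  # ≈ 19.6 µm, i.e. below the 20 µm target
```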

    2.3. Laser scanning process

    The camera captured images of 2D slices of the web, illuminated by the green sheet laser, while the supporting stand was stationary (figure 1b,c). Once the spider had finished its web in the carbon frame, we placed the frame containing the web on a raised table in the middle of the supporting stand of the moving rail machine. We used a green (532 nm) sheet laser with a width of 1 mm to illuminate each 2D plane of the 3D web. We selected this specific wavelength because the charge-coupled device (CCD) sensors in commercial cameras are typically twice as sensitive to green as to red or blue. We fixed a single high-resolution Canon EOS-1D X camera to the supporting stand so that the plane lit up by the laser was perpendicular to the axis of the camera. This set-up was simpler and cheaper for gathering and processing scans than the manual SWS set-up, which required two cameras [37]. The camera was equipped with a Canon 24-70 mm f/4 L EF IS USM lens at an aperture of f/22 and was positioned 61 cm away from the sheet laser. Each scanning step was composed of four stages: (i) the movement of the moving rail (together with the camera and the sheet laser) by 0.5 mm in 3 s; (ii) the stopping and stabilization of the moving rail for 3 s; (iii) the shutter action and exposure of the camera for 2 s; (iv) a break of 3 s before the next motion. The camera shots were timed with a Vello Wireless Shutterboss timer remote, which differed from the timer of the system used for the moving rail (Mach3 [43]). As a result, we used a 3 s pause between snapshots to ensure that the shutter action and exposure did not occur during the movement of the rail between images. By repeating this scanning step, we obtained a series of snapshots that each included a slice of the web structure. Here, we considered only a single displacement direction and speed. 
We chose to move the supporting stand every 11 s, allowing time for camera exposure, image capture, sliding and stabilization, which was equal to the time-lapse period of the camera. Figure 1b shows the laser scanning set-up and figure 1c the timeline of the workflow. The scans were shot in a dark room to obtain clear images of the fibres. The web was scanned with 660 images taken every 0.5 mm, which was sufficient to visualize the fibre architecture. The resolution was 5184 pixels × 3456 pixels × 24 BPP (bits per pixel). After scanning, we observed that the web exhibited a very dense region (‘tent’ web) and porous regions with distinguishable fibres.

    2.4. Image processing

    We used image processing to transform the 2D colour images into a 3D skeletonized binary image (figure 3). As the fibres were very thin, the laser light was scattered, which made the fibres look thicker and increased the noise of the image. This scattering also caused fibres close to the laser plane to be illuminated and captured by the camera. The images required processing to decrease the noise and determine the coordinates of the fibres in the laser plane. The image processing aims to transform a colour picture into a simpler binary image without losing any spatial information. This is equivalent to transforming a 3D matrix (colour) into a simplified 2D matrix of 0 and 1 (black and white). To reduce computational time, we divided the entire spider web structure into 100 samples that were analysed independently and subsequently assembled to make the full web. To describe our approach, we consider a representative 76.2 mm cubic volume obtained from the porous region, composed of 160 images each containing 1001 × 1001 pixels, such that each pixel corresponded to 76 µm. First, the colour channels of the images were split into three greyscale images: red, green and blue. In every image, the fibres that were cut directly by the laser plane appeared white, while the fibres that were merely close to the laser plane appeared green. To reduce the noise from the green glow and keep only the white fibres, the red and blue channels were merged by taking their pixel-wise minimum; as white is composed of all three colours, white fibres survive this operation while purely green pixels do not (figure 2). Each red-blue greyscale image was then filtered with a Gaussian blur with a standard deviation of 1.
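    The channel-merging step can be sketched in a few lines. The Python fragment below (an illustrative stand-in for the Matlab image processing toolbox used in the paper; the function name is ours) takes the pixel-wise minimum of the red and blue channels, which suppresses the green glow of out-of-plane fibres while preserving white in-plane fibres, then applies the Gaussian blur.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def suppress_green_glow(rgb):
    """Keep only near-white fibre pixels (cut by the laser plane): take the
    pixel-wise minimum of the red and blue channels, discarding the green
    glow of fibres merely close to the plane, then smooth the result."""
    red, blue = rgb[..., 0].astype(float), rgb[..., 2].astype(float)
    merged = np.minimum(red, blue)             # white survives; pure green -> 0
    return gaussian_filter(merged, sigma=1.0)  # Gaussian blur, std dev 1

# A pure-green glow pixel vanishes while a white fibre pixel is preserved:
img = np.zeros((5, 5, 3))
img[2, 2] = [255, 255, 255]  # white pixel: fibre cut by the laser plane
img[0, 0] = [0, 255, 0]      # green pixel: fibre near, but not in, the plane
out = suppress_green_glow(img)
```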

    Figure 2.

    Figure 2. Colour channel split image processing. Three greyscale images (red, green, blue) were split from the original colour picture. The blue and red images were combined by taking the pixel-wise minimum of the two.

    Figure 3.

    Figure 3. Image processing steps from the original colour image to the 3D image skeleton. The original colour image was transformed into a greyscale image using the minimum values of the red and blue colour channels. This greyscale image was smoothed to erase the sharp edges using Gaussian blur with a standard deviation of 1, which was binarized with a threshold of 0.75. All the binary images were stacked together to make a 3D image which was skeletonized. The thickness of the fibres was one voxel.

    The intensity values of the images were adjusted to increase contrast. Finally, the greyscale images were transformed into binary images by applying a normalized threshold of 0.75, which we chose after testing on numerous images. The binary images were stacked together into a 3D matrix of 1s and 0s, where 1 represents a white pixel and 0 a black pixel; the fibres were represented in white and the background in black. For this small sample of the web, the resulting 3D image was composed of 643 374 white voxels (voxels, the 3D equivalent of 2D pixels, represent cubic elements in 3D space). We dilated the 3D image to create a new image of 1 873 187 white voxels. Dilation is an image processing operation that enlarges objects in a 3D image by adding voxels to their boundaries, which allowed us to increase the continuity of the stacked images in the depth (z) direction. The dilated image (figure 4a(i)) was then skeletonized (figure 4a(ii)) using an algorithm adapted from [44] so that the thickness of the fibres became 1 voxel (figure 3), reducing the total number of white voxels to 40 775. The skeletonization process sometimes created additional branches due to noise during scanning and the non-uniformity of the fibre thickness; these branches were removed in a later step. This image processing, summarized in figure 3, transformed 160 colour 2D images into a set of about 40 000 points. The image processing was performed with Matlab and its image processing toolbox [45].
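    The binarization, stacking and dilation steps can be sketched as follows. This Python fragment (again an illustrative stand-in for the Matlab pipeline) uses SciPy's default 6-connected structuring element for the dilation; the subsequent skeletonization could, for example, be done with `skimage.morphology.skeletonize` (not shown here).

```python
import numpy as np
from scipy.ndimage import binary_dilation

def binarize_and_stack(grey_slices, threshold=0.75):
    """Binarize each greyscale slice (normalized to [0, 1]) at the chosen
    threshold, stack the slices into a 3D volume, then dilate once so that
    fibres remain continuous across adjacent slices in the depth direction."""
    volume = np.stack([s >= threshold for s in grey_slices], axis=-1)
    return binary_dilation(volume)  # default 6-connected structuring element

# A single bright voxel grows into a 7-voxel cross after one dilation:
slices = [np.zeros((5, 5)) for _ in range(3)]
slices[1][2, 2] = 1.0
volume = binarize_and_stack(slices)
```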

    Figure 4.

    Figure 4. Schematic of the image-to-fibre network process. (a) From the image-processed dilated 3D image to a straight-line fibre network. (i) The dilated 3D image was (ii) skeletonized, making the fibres 1 voxel thick. (iii) The segments of the skeleton were defined by their extremities and (iv) transformed into straight line segments. (v) The noise line segments generated by the non-uniformity of fibre thickness were deleted. Finally, (vi) adjacent line segments forming a polyline were regrouped into one line segment describing one fibre. (b) Verification and cleaning process. Fibres whose free-end extremities were closer than 3 mm were combined into one fibre. If a fibre's free-end extremity was closer than 10 mm to another fibre and the extension segment that linked the free end to that fibre was almost parallel to it (angle smaller than 15°), then the free-end fibre was extended and connected. Free-end fibres that were shorter than 20 mm or almost parallel to other fibres were deleted. (Online version in colour.)

    2.5. Image-to-line algorithm

    We derived a straight-line fibre network from the processed 3D image (figure 4a). The web fibres were in tension due to supercontraction that had occurred after construction, meaning that the fibres were not sagging and could be considered straight lines. The line-finding algorithm transforms a skeletonized image (figure 4a(ii)) into a list of nodes, which are the extremity points of lines, and a list of index pairs, each pair giving the node indices of the first and last points of a line. For this part of the analysis, we measured distances in ‘units’, with one unit in the xy plane equivalent to 76 µm (pixel size) and one unit in the depth (z) direction equivalent to 0.5 mm (gap size between slices). First, we looked for the connection and free-end voxels of the skeleton using an algorithm adapted from [44]. The connection voxels were those that linked at least three skeleton branches, while the free-end voxels were part of branches linked to only one other voxel (figure 4a(iii)). These extremity points were used to identify a network of straight-line segments (figure 4a(iv)). To remove the noise that remained in this network after skeletonization, short lines that had one free-end extremity and a length shorter than 20 units were deleted (figure 4a(v)). We categorized the remaining segments into two types: (1) segments in which at least one extremity was connected to exactly one other segment; (2) all other segments, whose extremities were connected either to two or more segments or to none. The category (1) segments were linked together to make a polyline composed of smaller segments, and these polylines were transformed into straight lines by linking their extremities (figure 4a(vi)). At the end of this step, all segments were of category (2) and each represented one complete fibre. Figure 4a illustrates these steps.
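    The detection of free-end and connection voxels can be illustrated by counting 26-connected neighbours in the skeleton volume. The Python sketch below is a minimal illustration of this idea, not the algorithm of [44]: a free-end voxel has exactly one neighbour, while a connection voxel joining at least three branches has three or more.

```python
import numpy as np
from scipy.ndimage import convolve

def classify_skeleton_voxels(skel):
    """Count the 26-connected neighbours of every skeleton voxel.
    Free-end voxels have exactly one neighbour; connection voxels,
    which join at least three branches, have three or more."""
    kernel = np.ones((3, 3, 3), dtype=int)
    kernel[1, 1, 1] = 0  # exclude the voxel itself from the count
    neighbours = convolve(skel.astype(int), kernel, mode='constant')
    free_ends = skel & (neighbours == 1)
    connections = skel & (neighbours >= 3)
    return free_ends, connections

# A straight 5-voxel line has two free ends and no connection voxels:
skel = np.zeros((5, 5, 5), dtype=bool)
skel[2, 2, :] = True
ends, conns = classify_skeleton_voxels(skel)
```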

    2.6. Verification and cleaning of the network

    We obtained a 3D fibre network model (figures 5 and 6) after linking the fibres, fixing them to the frame or deleting them (figure 4b). After obtaining a preliminary fibre network, we verified its consistency by checking that all the fibres were connected to at least two other fibres or attached to the boundary of the sample. We also identified the fibres that had at least one free-end extremity and were closer than 20 units to the boundary of the sample, and considered them as unconnected lines belonging to fibres in an adjacent section of the web or as fibres fixed to the frame. We likewise identified the anchor threads (fibres fixed to the frame), which were the fibres with one extremity close (less than 20 units) to the frame position. The fibre network model was scaled to its original dimensions by multiplying the xy coordinates by the pixel size and the z coordinates by the scanning gap size. We connected the remaining free-end lines if the distance between the free ends was within 3 mm, or if a segment extending a free end to a nearby fibre was shorter than 10 mm and almost parallel to that fibre (angle smaller than 15°). The last step of the process was to clean the web of any remaining unconnected fibres that were artefacts of image processing, by deleting unconnected fibres that were parallel to other fibres or shorter than 20 mm. These conditions for cleaning up the fibre network were found iteratively through the analysis of many samples within the overall web structure, and may need to be adjusted if scanning and imaging conditions are changed. Following this methodology, we obtained a fibre network described by a list of nodes and a list of node pairs representing the endpoints of each fibre. Figure 4b illustrates these steps. Figure 5 shows the resulting 3D spider web model. 
By repeating this process on all the samples composing the entire web, we automatically generated, for the first time, the architecture of a spider web for porous regions or tangle webs (figure 6). This network forms the basis for mesoscale bead–spring models, which we can use to study the response of a realistic web structure to mechanical loads.
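    The free-end merging rule can be sketched as follows. This Python fragment is a simplified illustration with hypothetical data structures, not the authors' implementation: it snaps pairs of free-end nodes closer than 3 mm onto a single node, joining the two fibres into one continuous path.

```python
import numpy as np

def merge_close_free_ends(nodes, fibres, free_ends, tol_mm=3.0):
    """Merge pairs of free-end nodes closer than tol_mm by snapping the
    second node of each pair onto the first. nodes: (N, 3) coordinates in
    mm; fibres: list of (i, j) node-index pairs; free_ends: indices of
    nodes used by exactly one fibre."""
    remap = {}
    for a_pos, a in enumerate(free_ends):
        for b in free_ends[a_pos + 1:]:
            if a not in remap and b not in remap:
                if np.linalg.norm(nodes[a] - nodes[b]) < tol_mm:
                    remap[b] = a  # snap node b onto node a
    return [(remap.get(i, i), remap.get(j, j)) for i, j in fibres]

# Two dangling fibres whose free ends are 1 mm apart become one path:
nodes = np.array([[0., 0, 0], [10, 0, 0], [11, 0, 0], [20, 0, 0]])
fibres = [(0, 1), (2, 3)]
merged = merge_close_free_ends(nodes, fibres, free_ends=[1, 2])
```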

    Figure 5.

    Figure 5. From data points to spider web network. Top view: plane of the scan. Front and left views: perpendicular to the scanned plane. (Left) Data points of the 3D image. (Right) Spider web fibre network. The fibres were connected and followed the path of the fibre data points on the left. (Online version in colour.)

    Figure 6.

    Figure 6. Comparison between the 3D spider web and its 3D network model. (a) Picture of the 3D spider web spun by a Cyrtophora citricola spider. (b) 3D spider web model architecture on VMD. Full web assembled from 100 cube samples (76.2 × 76.2 × 76.2 mm). Complex tangle webs at the top and bottom part of the structure and very dense tent web in the middle. (Online version in colour.)

    2.7. Bead–spring model

    We used the model derived from the scanning and image processing algorithms to map its topology to a mesoscale bead–spring model of the web. This bead–spring model can be used to carry out molecular dynamics simulations and investigate the mechanics of the 3D spider web. A similar approach was used to investigate the implications of silk's nonlinear behaviour in orb webs [8]. This mesoscopic ‘bead–spring’ method consists of approximating silk fibres as straight chains of beads, with an equilibrium spacing that can be selected based on the distance between scanning slices (0.5 mm). To describe the stress–strain behaviour of silk, we assigned to the springs connecting adjacent beads constitutive behaviours taken from previous experimental and numerical studies of spider dragline silk [8]. This model is an invaluable tool for investigating the impact of silk material behaviour and 3D fibre architecture on the overall functions of the web.
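    The discretization of one fibre into a bead chain can be sketched as follows (a minimal Python illustration assuming straight fibres and the 0.5 mm equilibrium spacing mentioned above; the silk constitutive law assigned to the springs is not shown).

```python
import numpy as np

def fibre_to_beads(p0, p1, spacing=0.5):
    """Discretize a straight fibre between endpoints p0 and p1 (mm) into a
    chain of beads with near-uniform spacing close to the 0.5 mm slice gap;
    adjacent beads would then be joined by springs carrying the silk
    stress-strain law."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    length = np.linalg.norm(p1 - p0)
    n_beads = max(2, int(round(length / spacing)) + 1)
    t = np.linspace(0.0, 1.0, n_beads)[:, None]
    return p0 + t * (p1 - p0)  # (n_beads, 3) bead positions

# A 3 mm fibre becomes 7 beads spaced 0.5 mm apart:
beads = fibre_to_beads([0, 0, 0], [3.0, 0, 0])
```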

    2.8. Limitations

    The laser scanning and line-finding algorithms are fully automatic; however, some intermediate steps require manual intervention, such as placing the frame and its web on the scanning table, initiating the acquisition of laser scanning images, and importing the images into a computer for processing. The automatic laser scanning and image processing cannot currently capture individual fibres within the localized dense regions that create the ‘tent’ of the Cyrtophora citricola web. In these tent regions, the fibres were too densely packed to be distinguished by image processing. However, we noticed that the dense region had regular features, such as parallel threads leading to an open hub. To complete the web, we will replace the dense region with this regular thread geometry in future work. This work is the first method proposed for automated scanning and modelling of 3D web networks, and we limited our imaging process to scanning in only one direction. We chose to scan along the depth direction (z-axis) because it allowed us to observe the high-density ‘tent’ region and the individual fibres that compose it. Scanning along a different axis (x or y) would make it even more difficult to capture distinct fibres in the tent region. The use of images from only one direction was sufficient to construct the entire web because our laser-sliding gap of 0.5 mm was smaller than the distances between fibres in the porous regions of the web, so that all of the fibres were distinguishable and continuous. We are currently developing a set-up that allows for two-directional scanning of the web using two perpendicular sets of cameras and sheet lasers. Multi-directional scanning will increase the precision of fibre connectivity and eliminate the need to rotate the frame for scanning in different directions, thereby reducing the risk of damaging the web.

    3. Results and discussion

    From the Cyrtophora citricola 3D spider web model, we quantified features of its topology. We focused on porous cube web regions, where we could precisely measure the topology, in contrast to the dense tent-like regions of the web. The porous web region selected made up 72% of the volume of the spider web frame (30 923 cm3). It was composed of 80 350 fibres, with lengths varying from 150 µm to 56 mm as shown in figure 7a. The total length of the porous web, i.e. all fibres combined, was 244 m for a box volume of 21 397 cm3, giving an average length-to-volume ratio of 11.3 mm cm−3. For an Araneus diadematus spider 2D orb web, the combined length of radial and spiral threads is about 12.9 m for a 30 × 30 × 5 cm frame, an average of 2.9 mm cm−3 [46]. The 3D web's length-to-volume ratio was about four times higher than that of the 2D web, which could explain the longer web construction time. The average fibre length in the 3D web was 3.0 mm, which was comparable to the 2.3 mm mesh spacing (distance between each spiral turn) for 2D webs [46]. Considering a fibre thickness of 4.34 µm [47] for the Cyrtophora citricola 3D spider web, the total fibre volume was approximately 3.6 mm3. This web weighed 4.7 mg, calculated using silk's density of 1.3 g cm−3 [48], from which we derived a web density of 2.2 × 10−7 g cm−3.
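    These reported quantities are mutually consistent, as a rough check shows (the snippet below is our verification of the arithmetic, not part of the paper; small differences come from rounding).

```python
import math

# Reported measurements for the porous region of the web
total_length_m = 244.0       # combined fibre length
box_volume_cm3 = 21397.0     # bounding box of the porous region
fibre_diameter_um = 4.34     # fibre thickness [47]
silk_density_g_cm3 = 1.3     # silk density [48]

# Length-to-volume ratio in mm per cm^3
ratio = total_length_m * 1000 / box_volume_cm3                     # ≈ 11.4

# Total fibre volume, treating fibres as cylinders of constant diameter
radius_mm = fibre_diameter_um / 2 / 1000
fibre_volume_mm3 = math.pi * radius_mm**2 * total_length_m * 1000  # ≈ 3.6

# Web mass (mg) and effective web density (g cm^-3)
mass_mg = fibre_volume_mm3 * silk_density_g_cm3                    # ≈ 4.7
web_density = mass_mg / 1000 / box_volume_cm3                      # ≈ 2.2e-7
```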

    Figure 7.

    Figure 7. (a) Fibre length distribution histogram showing that 84% of the fibres were shorter than 5 mm. Lengths varied from 150 µm to 56 mm with an average of 3.0 mm. (b) Connectivity distribution histogram showing that most of the nodes (72%) connected three fibres. The nodes of connectivity 1 represented the free-end dangling fibres, the fibres connected to the frame and actual broken fibres. Omitting connectivity 1 nodes, the connectivity distribution followed the power law of a scale-free network. (c) Visualization of nodes of connectivity 3, 4 and 5 from the 3D spider web network model. (Online version in colour.)

    We defined the connectivity of a node as the number of fibres to which it is connected. Figure 7c illustrates nodes of different connectivity, and figure 7b shows the distribution of connectivity in the web. Most nodes connected three fibres, and the maximum connectivity was 9; the highest-connectivity nodes originated from image noise after scanning. Nodes of connectivity 1 were connected to only one fibre, making that fibre a free-end fibre: some were artefacts created during the construction of the model, whereas others represented actual free-end dangling fibres or fibres connected to the frame. There were no connectivity 2 nodes because all the fibres were straight. Omitting connectivity 1 nodes, we observed that the connectivity distribution of our spider web fibre network followed the power law of a scale-free network, P(k) ∼ k−γ [49], with k the connectivity of the node and γ = 5.9. Scale-free networks have been observed in numerous fields, such as biology (e.g. cellular metabolic networks), social interactions (e.g. the spreading of ideas on social networks) and the World Wide Web (e.g. links between web pages) [49,50]. Compared to random networks, scale-free networks show higher robustness under accidental failure [50]. This is consistent with the function of spider webs, which have a low sensitivity to flaws and defects, a well-known property of 2D webs that avoid catastrophic failure through the sacrificial failure of a limited number of fibres [1,8].
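    The exponent γ of such a power law is typically estimated from the slope of the connectivity histogram on log–log axes. The sketch below (Python) illustrates that fit procedure, with connectivity 1 nodes omitted as in the text; the node counts are illustrative values we constructed to mimic a γ ≈ 5.9 distribution, not the measured data.

```python
import math

# Hypothetical node counts per connectivity k (k = 1 excluded, as in the
# text); these are illustrative numbers, not the measured distribution.
counts = {3: 20000, 4: 3670, 5: 985, 6: 335, 7: 135, 8: 61, 9: 31}

total = sum(counts.values())
xs = [math.log(k) for k in counts]                    # log k
ys = [math.log(n / total) for n in counts.values()]   # log P(k)

# Least-squares slope of log P(k) vs log k: for P(k) ~ k^-gamma,
# the slope equals -gamma.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
gamma = -slope
print(f"estimated gamma = {gamma:.1f}")
```

    In practice, a fit on binned count data like this is only a first estimate; maximum-likelihood estimators are usually preferred for power-law exponents, but the log–log slope conveys the idea.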

    This new automatic laser scanning method for tracing the complex topology of 3D spider webs can lead to critical new insights into the functions of spider webs [36,51]. We now have a powerful tool to understand how the properties of silk and the geometry of a 3D web influence the functions of catching prey and providing protection. Previous research showed how the interplay between the mechanical properties of silk fibres and the structure of 2D aerial orb webs influences the prey capture function of the web. The radial threads—made of strong and nonlinearly elastic dragline silk—are structural, while the spiral threads—made of viscid silk—are very extensible and tough. This arrangement of silk types, combined with the distribution of thicknesses, leads to a web that is robust against distributed loads (wind) and localized loads (impact of prey), avoiding catastrophic failure [8,9]. However, this interplay between structural and material properties and its influence on web function has yet to be investigated in 3D spider webs.

    Biologists observed that starved black widow (Latrodectus hesperus) spiders built cobwebs optimized for catching prey, while well-fed spiders built webs that offered better protection against predators [52]. The architecture of the web thus defines its functions [52]. Moreover, the 3D web is a relatively permanent structure [21,26]; it needs to be robust and able to preserve its functions in the event of structural defects. This raises some critical questions regarding the function, silk fibre material and structure of the 3D web: how do the geometry of the 3D web architecture and its silk properties help the web carry out its functions? How do structural defects influence these functions? How does supercontraction of the fibres change the structure and impact the web's functionality? Combining the 3D topology of the web with computational methods, such as mesoscale molecular dynamics simulations, will allow a better understanding and validation of observations reported in biological experiments. Models and simulations allow us to carry out non-destructive experiments on tunable webs. For example, future uses of this spider web model, obtained by our automatic scanning method, could include the study of prey-trapping mechanisms; investigations of the efficiency of web prey capture via ‘bouncing’ and entanglement; and determination of how long prey is retained. We can also use our model to explore the protection and robustness functions of the web by applying predator impacts or wind loads, determining whether the web fails catastrophically or only locally, and detecting whether the damaged web retains its functions.

    While we were able to observe the complexity of the spider web architecture in both the physical web and our model in figure 6, we could not infer the web-building process from the completed web structure. Using this optimized automatic laser scanning method during successive stages of web construction will lead to important insights into the ‘smart construction’ process of complex 3D spider webs. Spiders build webs that are commonly several times their body size, with some exceptions such as Darwin's bark spider (Caerostris darwini), which is capable of building webs that span rivers [53]. A spider web scaled up to human scale would be comparable in size to a multi-storey building. While humans need bulky scaffolding, or at least a crane, to construct such a building, the spider uses only its silk. Previous studies recorded the construction patterns of 3D web-building spiders, in particular theridiid spiders [29,54,55]. The construction stages for the Tidarren sisyphoides species include the exploration phase, anchorage of the retreat (for example, a leaf), scaffolding, and construction of the sheet and tangle webs [29]. Benjamin & Zschokke [54,55] used automated methods with infrared cameras to precisely track spider movement and web construction patterns, to understand their effect on spider and web evolution. They observed that gumfoot-web construction is similarly organized and stereotyped; its stages consist of the set-up of the retreat, exploration, scaffolding of the supporting structure and attachment of gumfoot lines [54,55]. Gumfoot threads are silk fibres glued with viscid silk to the bottom substrate; they detach and catapult prey toward the spider inside the web [21,54,55]. These spiders build their web structure and gumfoot threads continuously over days [54,55].
We can use our scanning and image processing method to derive precise topologies of webs at intermediate stages and investigate their mechanical behaviour using modelling and simulations, which we can compare to biological observations [54,55]. The structural mechanics of these web-building stages and temporary web architectures remain elusive. For example, it would be interesting to know whether the scaffolding fibres have functions other than as a construction tool: are the scaffold fibres retained in the permanent structure to signal the presence of an intruder? Are these fibres structural and necessary for maintaining the completed spider web? Do they contribute to the prey capture or protective functions of the web? Or are they recycled once the spider no longer needs scaffolding? Spiders carry the blueprints of their webs and silk in their genes, which have been tuned over millions of years and have contributed to spiders' ecological success [20,56]. Learning from this invaluable experience, we can use this model to study the mechanics and architecture of temporary web structures and understand the function of each stage of web construction. This can lead to web-building-inspired methods for constructing 3D fibre network architectures.

    A further application of this automatic web tracing method is in designing innovative, high-performance 3D spider web-inspired fibre networks for structural and material applications. Using the spider web model, we can explore different web-inspired architectures and material distributions to tune the fibre network for specific functions, such as impact resilience or reinforcement. As most spiders are solitary, it is difficult to mass-produce spider silk through spider farming [57], which requires the use of bioinspired or engineered materials in fibre network structures. However, synthetic fibres are not as strong, tough and extensible as dragline silk [4,58,59]: replacing the silk in the spider web structure with bioengineered materials may reduce its unique functionality, so the architecture and material distribution of the network should be adapted to the material change. We can develop a combined computational–experimental approach to build web-inspired fibre network structures, using modelling and simulations to predict the properties of the structure and experimental testing of 3D-printed prototypes to validate the simulations. New insights from 3D web-inspired architectures and engineered materials could lead to numerous applications, such as robust fibre networks with redundancy, control of vibration propagation, safety/fishing nets or fibre-reinforced hydrogels. For instance, adding a tangle-web-like structure on top of a safety net could help to redirect and decelerate falling objects towards the bottom of the net.

    4. Conclusion

    The study reported here presents a new method to trace 3D spider webs in an automatic, rapid and precise way. Our method improves on the previous manual laser scanning method developed by the authors by increasing sensitivity through a green-light laser, resulting in improved image resolution, and by automating the scanning process. Furthermore, we introduced a new image processing component to precisely map the real web architecture onto a meso-scale model. This new method to trace the topology of 3D spider webs can lead to critical insights into the functions of spider webs. Using the model as a basis for mesoscale bead–spring models of the entire web, we can study the interplay between silk fibre mechanics and fibre network architecture and how it influences the functions of the web. The laser scanning process can also be used to follow the stages of web construction and understand their functionality. Finally, this fibre network model can serve as a tool to deepen our understanding of spider web evolutionary fitness and to design innovative 3D spider web-inspired structures.

    Ethics

    The spiders were fed during web construction and released after building a web. None were harmed or killed during the study.

    Data accessibility

    The spider web images and model are available from the corresponding author on reasonable request.

    Authors' contributions

    I.S., Z.Q., T.S. and M.J.B. designed the research. Z.Q. designed and built the automated laser scanning machine. T.S. invented the original manual laser supported tomographic spider web scanning (SWS) and spider web frame (SWF) techniques with the support of A.K., and both provided recommendations on the design of the laser scanning machine and the SWF. A.B. and R.M. provided information about the spider and STS research. I.S. scanned the web. I.S. and Z.Q. developed the image processing method to translate the scans into a 3D model. I.S., Z.Q. and M.J.B. wrote the paper.

    Competing interests

    We declare we have no competing interests.

    Funding

    This work was supported by ONR and ARO-MURI, with additional support from NIH and Studio Tomás Saraceno GmbH, as well as a grant from MIT CAST.

    Acknowledgements

    We are grateful to Leila Kinney from MIT CAST for support and stimulating discussions. We acknowledge Stephen Rudolph for the help with using the machine shop, as well as the preliminary work by MIT CEE Capstone students Santé Nyambo, Billy Ndengeyingoma and Yvonne Wangare for constructing the darkroom and helping in designing the moving rail system. We also acknowledge the UROP student Jacob Higgins for helping with the rail assembly, Bogdan Andrei Demian in constructing the 3D digital web of a black widow spider, as well as Afnaan Qureshi for the collection of various spiders and web samples.

    Footnotes

    Published by the Royal Society. All rights reserved.

    References