 Methodology
 Open Access
 Published:
A 2D/3D image analysis system to track fluorescently labeled structures in rod-shaped cells: application to measure spindle pole asymmetry during mitosis
Cell Division volume 8, Article number: 6 (2013)
Abstract
Background
The yeast Schizosaccharomyces pombe is frequently used as a model for studying the cell cycle. The cells are rod-shaped and divide by medial fission. The process of cell division, or cytokinesis, is controlled by a network of signaling proteins called the Septation Initiation Network (SIN); SIN proteins associate with the spindle pole bodies (SPBs) during nuclear division (mitosis). Some SIN proteins associate with both SPBs early in mitosis, and then display strongly asymmetric signal intensity at the SPBs in late mitosis, just before cytokinesis. This asymmetry is thought to be important for correct regulation of SIN signaling and for the coordination of cytokinesis and mitosis. In order to study the dynamics of organelles or large protein complexes such as the SPB, which have been labeled with a fluorescent protein tag in living cells, a number of image analysis problems must be solved: the cell outline must be detected automatically, and the position and signal intensity associated with the structures of interest within the cell must be determined.
Results
We present a new 2D and 3D image analysis system that permits versatile and robust analysis of motile, fluorescently labeled structures in rod-shaped cells. We have implemented it as a user-friendly software package allowing fast and robust image analysis of large numbers of rod-shaped cells. We have developed new robust algorithms, which we combined with existing methodologies to facilitate fast and accurate analysis. Our software permits the detection and segmentation of rod-shaped cells in either static or dynamic (i.e. time-lapse) multichannel images. It enables tracking of two structures (for example SPBs) in two different image channels. For 2D or 3D static images, the locations of the structures are identified, and then intensity values are extracted together with several quantitative parameters, such as length, width, cell orientation, background fluorescence and the distance between the structures of interest. Furthermore, two kinds of kymographs of the tracked structures can be established, one representing their migration with respect to their relative position, the other representing their individual trajectories inside the cell. This software package, called “RodCellJ”, allowed us to analyze a large number of S. pombe cells to understand the rules that govern SIN protein asymmetry.
Conclusions
“RodCellJ” is freely available to the community as a package of several ImageJ plugins for the simultaneous, extensive analysis of the behavior of large numbers of rod-shaped cells. The integration of different image-processing techniques into a single package, together with the development of novel algorithms, not only speeds up the analysis compared with existing tools, but also yields higher accuracy. Its utility was demonstrated on both 2D and 3D static and dynamic images to study the septation initiation network of the yeast Schizosaccharomyces pombe. More generally, it can be used in any biological context where fluorescent-protein-labeled structures need to be analyzed in rod-shaped cells.
Availability
RodCellJ is freely available at http://bigwww.epfl.ch/algorithms.html.
Background
Common biological image-analysis tasks typically involve some form of cell or protein analysis. It is becoming increasingly apparent that spatial control of protein function plays a central role in many aspects of the life of an organism or an individual cell, influencing its development, proliferation, migration or communication between cells. To analyze spatial regulation, image-processing techniques are needed to detect and track fluorescently tagged proteins, and to measure the intensity of the signal at particular locations, as well as its size and shape. Many well-known model organisms, such as the fission yeast Schizosaccharomyces pombe and the bacterium Escherichia coli, are rod-shaped. In this paper we present an image analysis package to characterize motile structures in rod-shaped cells recorded in fluorescence images. We have successfully tested it on synthetic and real data. In the following subsection we briefly describe the biological application that we used to validate the implementation of our image analysis system.
Biological application: analyzing spindle pole asymmetry in S. pombe
Asymmetry is a key feature of many biological processes; for example, it is essential for specifying cell fate during development, and also for the maintenance of stem cells in the adult organism [1]. Asymmetric segregation of regulatory molecules is also important in simple, single-celled organisms, such as yeasts; for example, the correct pattern of mating-type switching in S. cerevisiae requires the sequestration of an RNA in the daughter cell [2]. The fission yeast S. pombe has proved to be an excellent model for the study of cell division, including the final step of the cell cycle, cytokinesis. The Septation Initiation Network (SIN) is a key regulator of cytokinesis (reviewed in [3]). SIN proteins associate with the poles of the mitotic spindle (SPBs) via a scaffold of three coiled-coil proteins. In the absence of SIN signaling, cytokinesis does not occur, and cells become multinucleated. In contrast, if SIN signaling is deregulated, cells undergo multiple rounds of septum formation and cytokinesis is uncoupled from its dependency on other cell cycle events. Some SIN proteins distribute asymmetrically on the SPBs during mitosis [4–6], which is thought to be important for regulation of SIN activity [7–9] (reviewed in [10]). We have applied our image analysis system to characterize spindle pole asymmetry in S. pombe.
Requirements for the image analysis
The task of screening images of cell populations and tracking the SPBs poses several problems. First, all the rod-shaped cells of interest in the images need to be segmented simultaneously, regardless of their orientation. Second, the image quality is inconsistent with respect to parameters such as contrast and noise level. The third problem is to identify the structures of interest (SOI). In order to analyze the signal intensities of SIN proteins associated with the two SPBs during mitosis, the two SPBs must be located and tracked individually. Since the signal associated with one of the SPBs approaches the threshold of detection at the end of mitosis, we use as a reference a second, SPB-associated protein whose fluorescence intensity does not vary significantly throughout mitosis. Therefore, the software must track two structures (the two SPBs) in two different channels (red for the reference protein, green for the SIN protein of interest). Finally, since the orientation of the mitotic spindle is variable with respect to the long axis of the cell, particularly in the earliest stages of mitosis, the software has been designed to analyze both 2D and 3D image stacks.
Related work and state of the art
Cell segmentation and protein tracking are widely studied subjects in biological image processing. Usually the two tasks are tackled separately. For cell segmentation, well-known models range from machine-learning algorithms [11] to level-set methods [12] and texture analysis [13]. Recent developments in wavelet theory have also contributed to the topic of cell segmentation [14, 15] as well as to research in object tracking [16]. Other popular models include graph cuts [17] or probabilistic approaches [18]. The implementation of our image analysis system, called “RodCellJ”, has several advantages over existing tools for cell segmentation and tracking of fluorescently labeled structures. Our model combines the tasks of cell segmentation and protein tracking into a single algorithm and introduces several steps to increase the robustness of the tracking routine. For example, unlike other “two-step” tracking algorithms that first detect structures and then link them through a minimizing criterion [19, 20], we implemented a dynamic programming approach to reconstruct the globally optimal track, which ensures robustness with respect to intensity variations and can be applied to data with a high noise level. Our segmentation approach has been optimized for the analysis of rod-shaped cells [21] by designing a novel parametric active contour model [22]. We also exploit the fact that the structures to be tracked remain inside the cell to increase the robustness of the analysis. Finally, since an asymmetry of signal intensities of only twofold can be considered significant, we implemented an algorithm that estimates the local shape of the structure being studied, taking into account spatial correlations, as well as background fluorescence, image noise and image quality.
By combining multiple analysis tasks into a single package, RodCellJ provides an additional benefit to the user: using different software packages to solve bioimage analysis tasks often results in file format conversion issues that force people to do a substantial part of the analysis by hand, which can be time-consuming and makes it harder to take full advantage of computational accuracy and speed. Though the software was designed with a specific task in mind, the algorithm has been implemented in a generic manner to make it useful to a wider community interested in rod-shaped cells. RodCellJ is easy to run and is freely available as an ImageJ [23] and Fiji [24] plugin. ImageJ is a popular open-source, public-license image analysis program that can easily be extended with additional plugins. The fact that it is open source facilitates the reproduction of results through its transparent processing pipeline. To our knowledge, no existing software integrates these image analysis tasks into a single algorithm, allowing efficient and rapid analysis of migrating structures inside rod-shaped cells, for example in the context of high-throughput screening.
Results and discussion
Algorithm
The goals in designing the algorithm were as follows. First, the algorithm for signal detection should account for background noise and background fluorescence within the cells, in order to provide a level of accuracy that cannot be achieved when evaluating the images by eye. Second, after image analysis, we wished to visualize the detected asymmetry of SPB-associated SIN proteins as kymographs representing the SPB migration with respect to each other, as well as the movement of SPBs with respect to the cell. Third, the relevant parameters of the cell being analyzed, such as its width, length, orientation angle and the locations of its extremity points, must also be extracted. Finally, information about the fluorescently labeled structures of interest, such as their exact position and intensity in each frame, should be displayed in a table and saved in a tab-delimited text file for further statistical evaluation (e.g. with Excel, Matlab, etc.). Though RodCellJ is capable of analyzing a time-lapse series of image stacks, if an image representing a single time point is used as input, RodCellJ only performs the cell segmentation and protein identification, including the calculation of cell- and protein-specific parameters. The algorithm proceeds towards the final result in a sequential manner, allowing the user to verify and edit intermediate results in a semi-automatic and user-friendly way to guarantee robustness and accuracy.
Modularisation
The different subtasks that the software carries out to quantify the fluorescent-protein-labeled, SPB-associated SIN protein signals are implemented in a modular way (Figure 1). Their execution is handled interactively through a graphical user interface (GUI), as shown in Figure 2. Its purpose is to guide the user through the different steps, allowing for the verification of intermediate results and giving the possibility of editing them. In the following sections the different modules and their underlying algorithms are explained in detail. We first describe the tracking algorithm and then the segmentation method, since this corresponds to the order of their implementation.
Structure-associated fluorescent-protein detection (static images) and tracking (dynamic images)
In this section we describe a robust algorithm to determine the spatiotemporal trajectory of structures (in this case, SPBs) in very noisy dynamic image sequences, such as time-lapse movies. Since the SPBs appear as spots in the image, we will refer to them as such in the following discussion. First, the images are processed with a spot-enhancing filter, the Laplacian of Gaussian (LoG) [25], whose smoothing characteristics also enable noise and background removal. The LoG has the advantage that it is fully characterized by only one parameter (σ), which is directly related to the size of the spot to be detected. Besides the noise level of the image, a spot does not have the same intensity throughout the image sequence; in some images the spot is no longer clearly distinguishable from the noise. To overcome these difficulties we track the spots with a dynamic programming algorithm [25], a robust method that is able to deal with time-varying signals in noisy conditions. For this purpose we take advantage of the aspects that characterize the behavior of the spots. The biological context allows us to make the following assumptions:

- the spots remain within the region of a (segmented) cell;
- the movement of the spots from frame to frame is limited to a few pixels;
- the starting point is defined by the spot detection in the first image of the time-lapse movie.
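As an illustration, the spot-enhancing step above can be sketched with an off-the-shelf LoG filter. This is a minimal sketch, not the plugin's implementation; the frame size, spot position, amplitude and σ are invented for the example:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def enhance_spots(image, sigma):
    """Spot-enhancing filter: negated Laplacian of Gaussian (LoG).
    Bright blobs of size ~sigma become strong positive peaks while
    smooth background and high-frequency noise are suppressed."""
    return -gaussian_laplace(image.astype(float), sigma=sigma)

# Toy frame: one bright Gaussian spot on a noisy background
rng = np.random.default_rng(0)
frame = rng.normal(10.0, 2.0, size=(64, 64))
yy, xx = np.mgrid[0:64, 0:64]
frame += 50.0 * np.exp(-((xx - 20) ** 2 + (yy - 40) ** 2) / (2 * 2.0 ** 2))

# The strongest filter response marks the spot location
peak = np.unravel_index(np.argmax(enhance_spots(frame, sigma=2.0)), frame.shape)
```

The single parameter σ is chosen to match the expected spot size, as described above.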
Dynamic programming
We can reformulate our problem of spot tracking in terms of finding the optimal path for a spot throughout the image sequences taking into account the characteristics of the data mentioned above, which in turn is the same as finding the optimal path between a pair of vertices in an acyclic weighted graph. In this context “optimality” refers to the assumptions made above.
We define a vertex as Υ_{ i } and the path from Υ_{ i } to Υ_{ j } as Ω_{ i j }. We observe that if the optimal path Ω_{ k l } passes through Υ_{ p }, then the two subpaths Ω_{ k p } and Ω_{ p l } also must be optimal. Therefore, the problem satisfies Bellman’s principle of optimality, which states that the globally optimum solution includes no suboptimal (local) solution. Hence, we can solve our problem by dynamic programming (DP) [26].
For an analytical formulation of the problem we first need to state the following conditions:

- A path Ω_{ij} has cost $\mathcal{C}(i,j)$.
- The graph contains n vertices numbered $0,1,\dots,n-1$ and has an edge from Υ_{i} to Υ_{j} only if i<j (causality condition).
- Υ_{0} is the source vertex and Υ_{n−1} is the destination (which is unknown).
If we define $\mathcal{G}(x)$ as the cost of the optimal path from Υ_{0} to Υ_{x}, then

$$\mathcal{G}(x) = \min_{0 \le i < x}\left(\mathcal{G}(i) + \mathcal{C}(i,x)\right), \qquad \mathcal{G}(0) = 0.$$
Thus, for every possible spot ${\mathcal{S}}_{k}$, with $k\in \{0,\dots ,K\}$, on the last frame n−1, we end up with a path Ω_{0 n−1,k} with cost ${\mathcal{G}}_{k}(n-1)$. Note that K is the number of possible spots on the last frame. The overall optimal path is then given by

$$\Omega^{*} = \Omega_{0\,n-1,\omega},$$

where ω verifies

$$\omega = \arg\min_{k\in\{0,\dots,K\}} {\mathcal{G}}_{k}(n-1).$$
The cost function is defined as follows:

$$\mathcal{C}(i,j) = \sum_{q} \lambda_{q}\, f_{q}(i,j),$$
where the λ_{q} are weighting factors that can be adjusted through the GUI and the f_{q}(i,j) are parameters relevant to the image data, such as intensity, intensity variation, migration distance and, if required, directional persistence. For example, the cost function heavily penalizes a spot that lies outside the segmented cell that surrounds the starting location. Therefore, the optimal path through the noisy data can be found without imposing absolute restrictions. An additional point to be aware of is that the spots are not necessarily visible until the last frame of the time-lapse movie, because in the usual asymmetric case the signal strength of one of the two spots decreases significantly at some point in the time trajectory. Therefore, we implemented an additional global criterion to detect the frame ${\mathcal{F}}_{t0}$ where a spot was last visible. This consists of calculating a threshold, depending on the noise and background level of the image sequence, below which a spot is no longer distinguishable from noise (Figure 3).
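The recursion above can be sketched as follows; this is a minimal illustration of the DP idea with only two cost terms (migration distance and intensity variation), not the published implementation, and the candidate spots and weights are invented for the example:

```python
import numpy as np

def track_spot(frames, lam_dist=1.0, lam_int=1.0):
    """Globally optimal spot track by dynamic programming (sketch).
    frames[t] is a list of candidate spots (x, y, intensity) in frame t.
    Returns the chosen candidate index in each frame."""
    n = len(frames)
    G = [np.zeros(len(f)) for f in frames]           # optimal cost to each vertex
    back = [np.zeros(len(f), dtype=int) for f in frames]
    for t in range(1, n):
        for j, (x, y, a) in enumerate(frames[t]):
            costs = [
                G[t - 1][i]
                + lam_dist * np.hypot(x - xi, y - yi)  # migration distance
                + lam_int * abs(a - ai)                # intensity variation
                for i, (xi, yi, ai) in enumerate(frames[t - 1])
            ]
            back[t][j] = int(np.argmin(costs))
            G[t][j] = min(costs)
    # Backtrack from the cheapest destination vertex (the unknown endpoint)
    path = [int(np.argmin(G[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two candidate spots per frame; the DP keeps the consistent track
frames = [
    [(10, 10, 100), (50, 50, 90)],
    [(11, 10, 98), (49, 51, 91)],
    [(12, 11, 97), (48, 50, 92)],
]
path = track_spot(frames)
```

In the real cost function, additional terms f_q (e.g. a heavy penalty for leaving the segmented cell) enter the same sum.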
Estimating the signal intensity of the spot
When labeled proteins appear as spots in images, a common method to estimate their signal intensities is to take the extreme value detected in a certain neighborhood (e.g. local minima/maxima). This method may yield inaccurate values because it does not account for the influence of local background, noise or the shape of the structure being analyzed. A much more precise method for fluorescent particle intensity estimation is to take into account the point spread function (PSF) of the microscope. According to [27, 28], an unbiased estimate of the PSF can be obtained by modeling it with a 2D Gaussian function, taking into account the diffraction limit of the microscope and the pixel size of the image. In our algorithm we implemented a model that accounts for these issues (the default microscope-related parameters can be specified by the user through the GUI of the plugin). After the detection of the local extrema we fit a 2D Gaussian to each of them, including an offset for noise/background estimation (5) and a rotation angle θ:
$$f(x,y) = B + A \exp\!\left(-\frac{\left((x-\mu_{x})\cos\theta + (y-\mu_{y})\sin\theta\right)^{2}}{2\sigma_{x}^{2}} - \frac{\left(-(x-\mu_{x})\sin\theta + (y-\mu_{y})\cos\theta\right)^{2}}{2\sigma_{y}^{2}}\right), \qquad (5)$$

where B captures the background and noise, x, y stand for the two dimensions of the image plane, and μ_{x}, μ_{y} and σ_{x}, σ_{y} are the center of the 2D Gaussian and its standard deviations, respectively. The initial values of σ_{x}, σ_{y} used to initialize the optimization algorithm can be estimated directly by dividing the diffraction limit by the corresponding pixel size. The value estimated for the protein intensity then corresponds to A. The rotation parameter θ describes the rotation with respect to the Cartesian coordinate system (i.e. if θ=0 we recover the general expression of a 2D Gaussian whose axes point in the x and y directions). This method also circumvents the discretization of the image in terms of intensities and spatial coordinates, yielding much more precise values than integer-only calculations. For the estimation of the parameters of the Gaussian we use the Levenberg-Marquardt algorithm [29].
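Such a fit can be sketched with a standard least-squares optimizer; `scipy.optimize.curve_fit` uses the Levenberg-Marquardt algorithm by default when no bounds are given. This is a minimal sketch on noiseless synthetic data, with all parameter values invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

def rotated_gaussian(coords, A, B, mx, my, sx, sy, theta):
    """2D Gaussian of amplitude A with offset B, center (mx, my),
    widths (sx, sy) and rotation theta, in the spirit of eq. (5)."""
    x, y = coords
    xr = (x - mx) * np.cos(theta) + (y - my) * np.sin(theta)
    yr = -(x - mx) * np.sin(theta) + (y - my) * np.cos(theta)
    return B + A * np.exp(-(xr**2 / (2 * sx**2) + yr**2 / (2 * sy**2)))

# Simulate a spot, then recover its amplitude A by fitting
yy, xx = np.mgrid[0:21, 0:21].astype(float)
truth = (40.0, 5.0, 10.3, 9.7, 1.8, 1.5, 0.3)   # A, B, mx, my, sx, sy, theta
data = rotated_gaussian((xx, yy), *truth)

# Initial guesses: amplitude/offset from the data range; widths would come
# from the diffraction limit divided by the pixel size (here simply 1.5)
p0 = (data.max() - data.min(), data.min(), 10.0, 10.0, 1.5, 1.5, 0.0)
popt, _ = curve_fit(rotated_gaussian, (xx.ravel(), yy.ravel()),
                    data.ravel(), p0=p0)
A_est = popt[0]   # estimated spot intensity
```

The fitted amplitude A, rather than any single pixel value, is then reported as the spot intensity.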
Cell segmentation with an active contour model
The method that we use for cell segmentation strongly depends on the nature of the image data. In order to use our algorithm, the cells need to be rod-shaped and immobile. There is no restriction with respect to the size and orientation of the cells (even within the same image, cells of different sizes can be segmented simultaneously as long as they are rod-shaped). The cells can have arbitrary and different orientations within the same image, and the number of cells that can be segmented in an experiment is unlimited, as long as they do not overlap each other in the image. The algorithm requires that the area delimited by the cell membrane (i.e. the inside of the cell) has a different intensity from the background of the image. To test our software package we used S. pombe cells expressing cdc7p-GFP. This protein associates with the SPBs during mitosis. There is also a significant cytoplasmic signal, which allows us to identify cells against the background. The protein is excluded from the nucleus, generating a dark “hole” in each cell (two in late mitotic cells).
Active contour model a.k.a. “snake”
Models that do not exploit shape information, such as watershed or region-growing approaches, may produce oversegmentation due to the noise level and possible intensity inhomogeneities within the cell background. The problem cannot be solved by a simple conversion to a binary image followed by “filling the holes”, because the cell’s cytoplasmic fluorescence intensities are not sufficiently consistent to enclose a convex set. Taking these considerations into account, we decided that an active contour model suits our needs best. Therefore, we took advantage of the cells’ rod shape to formulate a parametric shape model which ensures robust segmentation.
Since the position of the cells remains fixed throughout the image stack, we first calculate the (average) z-projection of the image stack, where the z-axis is perpendicular to the image plane. This reduces noise. Afterwards a contrast enhancement is performed. For the actual segmentation of the cells we designed an active contour model, often called a “snake” in the literature, in the form of a rod-shaped structure. Following the notation of the snakes described in [30, 31], we call it the “Rodscule”. The Rodscule is a surface snake, which means that an energy function depending on the image data enclosed by it can be associated to it. It consists of an inner rod Σ′ and an outer rod Σ (Figure 4).
The Rodscule optimizes an energy term that is minimal when the contrast between the image data averaged over Σ′ and Σ∖Σ′ is maximal. Here, Σ is the region enclosed by the outer rod and Σ′ the region enclosed by the inner rod. The energy term is given by (6), where the directions of x and y define the Cartesian coordinate system:

$$\mathcal{E}_{\mathcal{R}} = \frac{1}{|\Sigma\setminus\Sigma'|}\iint_{\Sigma\setminus\Sigma'} f(x,y)\,\mathrm{d}x\,\mathrm{d}y \;-\; \frac{1}{|\Sigma'|}\iint_{\Sigma'} f(x,y)\,\mathrm{d}x\,\mathrm{d}y. \qquad (6)$$

It is important that $|\Sigma'| = \frac{1}{2}|\Sigma|$; thus, neither of the two energy subterms dominates if we apply the Rodscule over a region of constant intensity (in that case ${\mathcal{E}}_{\mathcal{R}}=0$). For simplicity, we want the inner rod to have the same orientation as the outer rod. To find the minimum of ${\mathcal{E}}_{\mathcal{R}}$ we use a conjugate gradient-based method (the derivation of the gradient $\nabla {\mathcal{E}}_{\mathcal{R}}$ can be found in Additional file 1).
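On a discrete image, this contrast term can be sketched with binary masks for the inner rod and the outer rod. This is a minimal sketch assuming a ring-minus-inner sign convention (so that a bright cell inside the inner rod drives the energy down), not the actual continuous-domain implementation:

```python
import numpy as np

def rod_energy(image, inner_mask, outer_mask):
    """Surface-snake contrast energy (sketch): mean intensity of the
    outer ring minus mean intensity of the inner region. It is zero on
    a constant image and most negative when a bright rod exactly fills
    the inner region."""
    ring = outer_mask & ~inner_mask
    return image[ring].mean() - image[inner_mask].mean()

# A bright synthetic "cell" perfectly covered by the inner rod
img = np.zeros((40, 40))
img[15:25, 10:30] = 100.0
inner = np.zeros_like(img, dtype=bool)
inner[15:25, 10:30] = True
outer = np.zeros_like(img, dtype=bool)
outer[10:30, 5:35] = True
E_good = rod_energy(img, inner, outer)   # strongly negative: good fit
```

The actual Rodscule evaluates this energy analytically as a function of its control points and minimizes it by conjugate gradients.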
To minimize the computational cost we imposed some restrictions on the parametrization of our snake. First, we used as few parameters as possible, with the additional condition that they should be independent of each other. Second, we want the impact on the area of changing any parameter by a small δX to be the same for every parameter. This excludes the possibility of parametrizing the Rodscule by two points P, Q representing the centers of the two semicircles and a third point R, where the distance $\overline{\mathit{\text{PR}}}$ determines the radius of the semicircles (Figure 4).
From the Ovuscule [30] we know that an ellipse can be parametrized by three arbitrary points that define a triangle; these three points belong to the border of the ellipse. Since our rod shape can be defined by an ellipse whose four extremal points belong to the border of the rod shape, the Rodscule can be defined by exactly the same three points that define the ellipse and, hence, the Ovuscule (Figure 4). The two ellipses and the two rod shapes all have the same barycenter. The complete derivation of the expression of ${\mathcal{E}}_{\mathcal{R}}(P,Q,R)$, as well as a detailed description of the construction and implementation of the Rodscule, can be found in Additional file 1.
Validation
Cell segmentation on synthetic data simulating noisy conditions
In this experiment we validate our active contour model on synthetic data. We successively increased the level of additive Gaussian white noise in the artificially created phantom image shown in Figure 5 while running our algorithm on it. The top-left image of Figure 5 shows the initial configuration of the Rodscule, which remains the same throughout the experiment. In the remaining 5 images the standard deviation (std) of the noise was increased to {10, 30, 60, 90, 120} (with respect to an 8-bit gray-scale image, where the pixels take values between 0 and 255); the resulting signal-to-noise ratios (SNR) for the same 5 images are {26.2, 16.8, 11.1, 7.6, 5.3} dB. In all 5 images the Rodscule found the correct segmentation through the optimization process alone, demonstrating the robustness of the algorithm with respect to photometric noise. Figure 6 shows the same image as the bottom right of Figure 5 (std = 120, SNR = 5.3 dB), together with a close-up of a boundary region between the segmented artificial cell and its background, to emphasize the advantage of the Rodscule over segmentation by the human eye. While it seems very difficult to find the ground-truth segmentation in this image manually, the Rodscule does so thanks to the optimal way it minimizes the corresponding energy function. Figure 7 shows an analogous example of a real cell.
Segmentation of yeast cells
We demonstrate the utility of the segmentation algorithm on real data using images of the fission yeast S. pombe. The cells are typically rod-shaped, but their length can vary within the same experiment. In contrast to the previous experiment, we now want to segment many cells simultaneously. Since the outcome of the optimization algorithm for cell segmentation depends on the location of initialization, the two spots (SPBs) were detected before segmentation. Since they approximately define the longitudinal axis of the cell (they are oriented towards the two poles of the cell), we can use them to initialize the Rodscules. Figure 8 shows the result of such an experiment. In an experiment where 212 cells had to be segmented, we segmented 184 cells correctly, yielding a true-positive rate of 87%. In the cases where the segmentation fails, the user can reinitialize it with one mouse click and subsequent dragging of the mouse to change the initialization position. The editing of a cell takes about 3 seconds.
Protein tracking
For the evaluation of the protein tracking algorithm implemented with the DP routine, we refer to the results of prior work (Sage et al. [25]), where we showed that our tracking algorithm can trace particles in images where the peak signal-to-noise ratio (PSNR) is on average as low as 0 dB. The PSNR is defined as

$$\text{PSNR} = 10\log_{10}\left(\frac{A^{2}}{\sigma^{2}}\right),$$

with σ² being the noise variance and A the amplitude of the Gaussian-shaped spot. Figure 9 illustrates an example of a result obtained with our tracking routine. It shows a comparison between two particles, one yielding a high SNR, whereas the fluorescence of the second spot decreases with time (decreasing SNR).
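For concreteness, the PSNR definition amounts to the following one-liner (a sketch; the amplitude and noise values are invented for the example):

```python
import numpy as np

def psnr_db(amplitude, noise_std):
    """PSNR (in dB) of a Gaussian-shaped spot of peak amplitude A
    embedded in noise of standard deviation sigma."""
    return 10.0 * np.log10(amplitude**2 / noise_std**2)

# A spot whose amplitude equals the noise level sits at PSNR = 0 dB,
# the regime in which the DP tracker was shown to still work [25]
level = psnr_db(5.0, 5.0)
```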
Spot signal intensity estimation
For the evaluation of our routine for spot intensity estimation, in a first step we compare it to two existing standard techniques commonly used by biologists. For this purpose we created artificial images showing cells, each containing two simulated spots (SPBs) (Figure 10, bottom). The shape and intensities of the spindle pole bodies (i.e. the two white spots located towards the poles of each cell) are calculated using our 2D Gaussian approximation of the PSF described above. Table 1 shows the discretized Gaussian kernel that we used to carry out this experiment. In order to test the robustness of our algorithm we added an arbitrarily chosen offset value to the kernel of the 8-bit test image (i.e. intensity values between 0 and 255), as shown in Table 2. Additive Gaussian white noise (std = 15) was then added to the whole image. Figure 10 (top row) shows an example of such a corrupted Gaussian kernel used to simulate such proteins of interest. In our synthetic images we gave the cells significant cytoplasmic fluorescence. However, our spot intensity estimation can also be applied when this is not the case, since the intensity of the signal in the vicinity of the spot does not influence the algorithm. To further demonstrate the robustness of the algorithm we chose three different image/petri-dish backgrounds for testing, as shown in Figure 11. The two techniques against which we compared our method are the well-known “rolling ball” algorithm [32] and the maximum-intensity technique, where the maximum intensity value in a well-defined neighborhood (e.g. 5 × 5 pixels) of the protein is evaluated. Table 3 summarizes the results of the comparison. It shows the mean absolute error, ${e}_{\mathit{\text{mean abs}}}=\frac{1}{n}\sum _{k=1}^{n}\left|{I}_{k}-{I}_{\mathit{\text{real}}}\right|$, as well as the standard deviations obtained with the three different methods.
Here k is the index that runs over all n = 38 detected spots, I_{k} is the intensity obtained with the respective method, and I_{real} is the actual intensity that should be detected (i.e. 95.9719, the maximum of the Gaussian described in Table 1).
Looking at the values shown in Table 3, it becomes evident that our algorithm is very robust with respect to noise and, additionally, largely independent of the background of the image. Furthermore, because the algorithm approximates the PSF of the microscope, in theory it is totally independent of the vicinity of the 2D Gaussian and only depends on the values enclosed by it.
Even though there is great variability between the three test images shown in Figure 11 (bottom row), our algorithm introduces only little variability in terms of spot intensity estimation. The percentage error is ${e}_{\mathit{\text{PSF fit}}}=\frac{11.5199+10.8913+12.6662}{256}=0.14$, whereas with the rolling-ball algorithm we obtain e_{rol. ball} = 0.58 and with the maximum-intensity method e_{max. int.} = 0.96. Analyzing these results, we notice that the two latter methods are highly dependent on the background values. Furthermore, the maximum-intensity method is also very susceptible to the different kinds of noise in the image. The rolling-ball algorithm is less influenced by noise; however, it remains unsuitable if the background of the image is not uniform or if the image contains features other than the background, such as in our test images, where the cell has significant background fluorescence and the petri dish is also visible in the image.
Comparison of manual and automatic evaluation
A test protocol was designed to reflect the manual evaluation conditions in a realistic manner. For this purpose, four human observers (o∈{1,2,3,4}) manually evaluated the 3 test images (i∈{1,2,3}) shown in Figure 11 (bottom) using the standard image analysis tools of ImageJ. The goal was to measure the intensities of a previously defined number of spots in each image. The images were resized using bilinear interpolation to obtain a 10x magnification in order to facilitate the observers’ task. The observers were free to use the available tools (e.g. image zoom, contrast adjustment, etc.), knowing that the procedure was meant to simulate real working conditions (i.e. it was their choice how to manage the trade-off between time constraints and accuracy of the result). The time of evaluation per image was also measured. The whole procedure was repeated with the same images after a background subtraction corresponding to the rolling-ball algorithm [32]. In total, every observer evaluated each image three times (r∈{1,2,3}). We define the intra-observer and inter-observer variability as

$${\nu}_{\text{intra}}(s,o,p)=\underset{r}{\operatorname{std}}\left({x}_{s,o,r}(p)\right), \qquad {\nu}_{\text{inter}}(s,p)=\underset{o}{\operatorname{std}}\left(\underset{r}{\operatorname{mean}}\left({x}_{s,o,r}(p)\right)\right),$$
where s∈{orig, bckSub} refers to the original image or the image where the background was subtracted, respectively, and p runs over all the spots, P, considered in an image. x∈{(x_{1},x_{2}), I}, where I = I(x_{1},x_{2}) stands for the intensity and (x_{1},x_{2}) represents the spatial coordinate of a point in an image, implying that the inter-observer and intra-observer variability can be measured with respect to the evaluated intensity values themselves, I(x_{1},x_{2}), or their spatial coordinates (x_{1},x_{2}). The subscripts of x correspond to x_{s,o,r}. The variabilities with respect to intensity estimation are shown in Table 4. We notice that the very high inter-observer variability makes it difficult to obtain reproducible results. Although the mean intensity values calculated over all series of the 6 test images (Figure 11, bottom; 3 times without and 3 times with background subtraction) were 105.19 and 76.58, respectively (the true intensity is 95.97; all values are on an 8-bit scale, i.e. in the range [0, 255]), the high standard deviations of 20.33 and 58.90 again suggest that the manual method is unreliable and not well suited to obtaining results that are comparable with respect to a general norm independent of the operator or user. These results are clearly in favor of our method, which yields stable results with low standard deviations (see Table 3). Furthermore, due to the local convergence of our algorithm, there is no inter-operator variability when evaluating the results obtained by different users. The average time required by the users to measure the intensity of 38 spots was 1 min 37 sec, whereas (depending on the computer used) RodCellJ can evaluate the intensities about 100 times faster (cf. section “Computational aspects”).
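One way to compute such variabilities from the raw measurements can be sketched as follows; the array layout and the use of std/mean across repetitions and observers are assumptions for illustration, not the authors' exact evaluation script:

```python
import numpy as np

def observer_variability(x):
    """x has shape (observers, repetitions, spots): one measured value
    per observer o, repetition r and spot p.
    intra: spread of an observer's own repeated measurements.
    inter: spread across observers of their per-spot mean values."""
    intra = x.std(axis=1).mean()               # std over r, averaged over o, p
    inter = x.mean(axis=1).std(axis=0).mean()  # mean over r, std over o
    return inter, intra

# Two observers agreeing with each other but varying between repetitions
x = np.array([[[10.0], [12.0]],
              [[10.0], [12.0]]])
inter, intra = observer_variability(x)
```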
Kymographs
Once the spots have been tracked, two different kymographs can be established. The first kymograph (Figure 12, upper left) shows the movement of the spots with respect to the cell center (i.e. the barycenter of the Rodscule), whereas the second plots the movement of the spots with respect to each other (Figure 12, upper right). An illustration of these results is provided in Figure 12.
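The two representations can be sketched as follows: a minimal example assuming 2D spot coordinates, a precomputed cell center, and a unit vector along the long axis of the cell (function and variable names are hypothetical, not RodCellJ's API).

```python
import numpy as np

def kymograph_rows(spots, center, axis):
    """Compute one time point of the two kymograph representations.

    spots  : (n_spots, 2) array of spot coordinates
    center : (2,) cell center (barycenter of the fitted rod outline)
    axis   : (2,) unit vector along the long axis of the cell

    Returns the signed positions of the spots along the cell axis
    (first kymograph: movement with respect to the cell center) and
    the distance between the first two spots (second kymograph:
    movement of the spots with respect to each other).
    """
    along_axis = (spots - center) @ axis            # signed 1D positions
    spot_gap = np.linalg.norm(spots[0] - spots[1])  # spot-to-spot distance
    return along_axis, spot_gap

# One frame with two SPB-like spots in a horizontally oriented cell.
spots = np.array([[12.0, 5.0], [20.0, 5.0]])
center = np.array([16.0, 5.0])
axis = np.array([1.0, 0.0])
positions, distance = kymograph_rows(spots, center, axis)
# positions → [-4.0, 4.0]; distance → 8.0
```

Stacking these rows over all frames of a time-lapse movie yields the two kymograph images described above.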
Computational aspects
RodCellJ is designed to run on multiprocessor architectures, so several tracking and segmentation algorithms can be run in parallel. There is no limit on the number of cells that can be segmented or on the total number of spots that can be tracked per session. On a 2.8 GHz Intel Core i7 quad-core processor with 16 GB of RAM, segmenting one cell took 971 milliseconds on average, and tracking one spot through a time-lapse movie of 50 frames took 823 milliseconds (measured over 48 cells and 83 tracked spots). Fitting the spot shape with the PSF-approximation algorithm takes about 36 milliseconds per spot.
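The spot-shape fitting step can be illustrated with a least-squares fit of a 2D Gaussian approximation of the PSF [28], optimized with the Levenberg-Marquardt method [29]. This is a sketch in Python/SciPy, not RodCellJ's Java implementation; all names are illustrative, and the model assumed here is an isotropic Gaussian on a constant local background.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_spot(params, xx, yy):
    """Isotropic 2D Gaussian approximation of the microscope PSF."""
    amp, x0, y0, sigma, bg = params
    return amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2)
                        / (2 * sigma ** 2)) + bg

def fit_spot(patch):
    """Fit the Gaussian spot model to a small image patch around a
    detected spot; returns amplitude, sub-pixel center coordinates,
    width, and local background."""
    yy, xx = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    guess = [patch.max() - patch.min(),          # amplitude
             patch.shape[1] / 2, patch.shape[0] / 2,  # center
             1.5,                                # width (pixels)
             patch.min()]                        # background
    res = least_squares(
        lambda p: (gaussian_spot(p, xx, yy) - patch).ravel(),
        guess, method="lm")                      # Levenberg-Marquardt
    return res.x

# Synthetic 15x15 patch: one spot of amplitude 100 on background 10.
yy, xx = np.mgrid[0:15, 0:15]
patch = gaussian_spot([100.0, 7.3, 6.8, 1.8, 10.0], xx, yy)
amp, x0, y0, sigma, bg = fit_spot(patch)
```

Fitting the amplitude and background jointly is what makes the intensity estimate independent of the local background, as discussed in the validation above.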
Conclusions
We have presented a new image analysis system to fully characterize fluorescently labeled proteins in rod-shaped cells. The image analysis system was implemented as an ImageJ/Fiji plugin called RodCellJ. It is able to handle 2D or 3D, static and dynamic images. It includes new and robust state-of-the-art algorithms to semi-automatically segment rod-shaped cells and to detect and track up to four spots in two different channels located within the cells. We also presented a novel algorithm that approximates the microscope’s point-spread function to accurately estimate signal intensities independently of their shape and background. The software outputs several cell- and protein-specific parameters. In the case of dynamic image analysis, two different kymographs that represent spot migration can be displayed and saved. We successfully demonstrated the utility of this tool by measuring the asymmetry of the signal produced by an SPB-associated, GFP-labeled signal transduction protein in S. pombe. The rapidity and efficacy of this tool will allow it to be used for screening large numbers of mutant strains to study their effects upon SIN regulation. Though we have developed this software for the analysis of SPB behaviour during mitosis in fission yeast, it is applicable to tracking other large structures in the cell, for example nuclei. It could also be applied to track other fluorescently labeled complexes that adopt a spot-like morphology during the cell cycle.
Methods
Cell lines and microscopy
The strains used in this study were obtained from crosses between two strains, cdc7(ura4+)-EGFP ura4-D18 leu1-32 h+ and pcp1(ura4+)-mCherry ura4-D18 leu1-32 h−, to obtain a double mutant carrying both tagged alleles. Cells were grown in yeast-extract medium to early exponential phase (exponentially growing culture), and centrifugal elutriation was used to isolate small G2 cells. Cells were concentrated by filtration and, after a 1-hour recovery time, imaged using a PlanSApo 60x N.A. 1.42 objective lens mounted on a PerkinElmer spinning-disk confocal microscope. The culture was sampled for imaging during both the first and second mitoses after elutriation. Images were exported and their parameters were assessed using RodCellJ.
Abbreviations
SIN: Septation initiation network
SPB: Spindle pole body
GUI: Graphical user interface
LoG: Laplacian of Gaussian
STD: Standard deviation
SNR: Signal-to-noise ratio
PSNR: Peak signal-to-noise ratio
dB: Decibel
References
1. Knoblich JA: Asymmetric cell division: recent developments and their implications for tumour biology. Nat Rev Mol Cell Biol 2010, 11(12):849–860. 10.1038/nrm3010
2. Long RM, Singer RH, Meng X, Gonzalez I, Nasmyth K, Jansen RP: Mating type switching in yeast controlled by asymmetric localization of ASH1 mRNA. Science 1997, 277(5324):383–387. 10.1126/science.277.5324.383
3. Goyal A, Takaine M, Simanis V, Nakano K: Dividing the spoils of growth and the cell cycle: the fission yeast as a model for the study of cytokinesis. Cytoskeleton 2011, 68(2):69–88. 10.1002/cm.20500
4. Hou MC, Guertin DA, McCollum D: Initiation of cytokinesis is controlled through multiple modes of regulation of the Sid2p-Mob1p kinase complex. Mol Cell Biol 2004, 24(8):3262–3276. 10.1128/MCB.24.8.3262-3276.2004
5. Jwa M, Song K: Byr4, a dosage-dependent regulator of cytokinesis in S. pombe, interacts with a possible small GTPase pathway including Spg1 and Cdc16. Mol Cells 1998, 8(2):240–245.
6. Furge KA, Wong K, Armstrong J, Balasubramanian M, Albright CF: Byr4 and Cdc16 form a two-component GTPase-activating protein for the Spg1 GTPase that controls septation in fission yeast. Curr Biol 1998, 8(17):947–954. 10.1016/S0960-9822(98)70394-X
7. Schmidt S, Sohrmann M, Hofmann K, Woollard A, Simanis V: The Spg1p GTPase is an essential, dosage-dependent inducer of septum formation in Schizosaccharomyces pombe. Genes Dev 1997, 11(12):1519–1534. 10.1101/gad.11.12.1519
8. García-Cortés JC, McCollum D: Proper timing of cytokinesis is regulated by Schizosaccharomyces pombe Etd1. J Cell Biol 2009, 186(5):739–753. 10.1083/jcb.200902116
9. Singh NS, Shao N, McLean JR, Sevugan M, Ren L, Chew TG, Bimbo A, Sharma R, Tang X, Gould KL, et al.: SIN-inhibitory phosphatase complex promotes Cdc11p dephosphorylation and propagates SIN asymmetry in fission yeast. Curr Biol 2011, 21:1968–1978. 10.1016/j.cub.2011.10.051
10. Johnson AE, McCollum D, Gould KL: Polar opposites: fine-tuning cytokinesis through SIN asymmetry. Cytoskeleton (Hoboken) 2012, 69(10):686–699. 10.1002/cm.21044
11. Yin Z, Bise R, Chen M, Kanade T: Cell segmentation in microscopy imagery using a bag of local Bayesian classifiers. In Proceedings of the Seventh IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’10). Rotterdam, The Netherlands; April 14–17, 2010:125–128.
12. Chang H, Yang Q, Parvin B: Segmentation of heterogeneous blob objects through voting and level set formulation. Pattern Recognit Lett 2007, 28(13):1781–1787. 10.1016/j.patrec.2007.05.008
13. Ruberto CD, Rodriguez G, Vitulano S: Image segmentation by texture analysis. In Proceedings of the 10th International Conference on Image Analysis and Processing, issue 33. Washington: IEEE Computer Society; 1999:717–719.
14. Rajpoot KM, Rajpoot NM: Wavelet based segmentation of hyperspectral colon tissue imagery. In 7th International Multi Topic Conference (INMIC 2003). Islamabad, Pakistan; 2003:38–43.
15. Bernal AJ, Ferrando SE, Bernal LJ: Cell recognition using wavelet templates. In Canadian Conference on Electrical and Computer Engineering (CCECE 2008). Niagara Falls, Ontario, Canada; 4–7 May 2008:001219–001222.
16. Liu JC, Hwang WL, Chen MS, Tsai JW, Lin CH: Wavelet based active contour model for object tracking. In Proceedings of the 2001 IEEE International Conference on Image Processing (ICIP’01), vol. 3. Thessaloniki, Greece; October 7–10, 2001:206–209.
17. Freedman D, Turek MW: Illumination-invariant tracking via graph cuts. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 2. 2005:10–17.
18. Liang D, Huang Q, Yao H, Jiang S, Ji R, Gao W: Novel observation model for probabilistic object tracking. In 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). San Francisco, California, USA; 13–18 June 2010:1387–1394.
19. Dufour A, Shinin V, Tajbakhsh S, Guillen-Aghion N, Olivo-Marin JC, Zimmer C: Segmenting and tracking fluorescent cells in dynamic 3D microscopy with coupled active surfaces. IEEE Trans Image Process 2005, 14(9):1396–1410.
20. Meijering E, Dzyubachyk O, Smal I: Methods for cell and particle tracking. Methods Enzymol 2012, 504:183–200.
21. Zhang B, Enninga J, Olivo-Marin JC, Zimmer C: Automated super-resolution detection of fluorescent rods in 2D. In IEEE International Symposium on Biomedical Imaging: From Nano to Macro; 2006.
22. Kass M, Witkin A, Terzopoulos D: Snakes: active contour models. Int J Comput Vision 1988, 1(4):321–331. 10.1007/BF00133570
23. Schneider C, Rasband W, Eliceiri K: NIH Image to ImageJ: 25 years of image analysis. Nat Methods 2012, 9(7):671–675.
24. Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, Preibisch S, Rueden C, Saalfeld S, Schmid B, Tinevez JY, White DJ, Hartenstein V, Eliceiri K, Tomancak P, Cardona A: Fiji: an open-source platform for biological-image analysis. Nat Methods 2012, 9(7):676–682. 10.1038/nmeth.2019
25. Sage D, Neumann FR, Hediger F, Gasser SM, Unser M: Automatic tracking of individual fluorescence particles: application to the study of chromosome dynamics. IEEE Trans Image Process 2005, 14(9):1372–1383.
26. Bellman R: Dynamic Programming. 1st edition. Princeton: Princeton University Press; 1957.
27. Cheezum MK, Walker WF, Guilford WH: Quantitative comparison of algorithms for tracking single fluorescent particles. Biophys J 2001, 81(4):2378–2388. 10.1016/S0006-3495(01)75884-5
28. Zhang B, Zerubia J, Olivo-Marin JC: Gaussian approximations of fluorescence microscope point-spread function models. Appl Opt 2007, 46:1819–1829. 10.1364/AO.46.001819
29. Moré JJ: The Levenberg-Marquardt algorithm: implementation and theory. In Numerical Analysis. Edited by Watson GA. Berlin: Springer; 1977:105–116.
30. Thévenaz P, Delgado-Gonzalo R, Unser M: The Ovuscule. IEEE Trans Pattern Anal Mach Intell 2011, 33(2):382–393.
31. Thévenaz P, Unser M: Snakuscules. IEEE Trans Image Process 2008, 17(4):585–593.
32. Sternberg S: Biomedical image processing. Computer 1983, 16:22–34.
Acknowledgements
We thank David Romascano and Olivia Mariani for the manual evaluation of protein intensity estimation, and Irina Radu for critical reading of the manuscript and language correction. This research was supported by the Swiss National Science Foundation (SNSF) through a Sinergia grant.
Author information
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
DSc designed, wrote and implemented the algorithm and contributed significantly to the study design and the draft writing. DSa contributed to the study design, algorithm writing and critical revision of the manuscript. MU contributed to critical revision of the manuscript and the design of the active contour model (Rodscule). PW carried out the cell cultivation and cell imaging experiments and contributed to the draft writing and the validation of RodCellJ. VS participated in the project coordination, study design and manuscript revision. AC contributed to manual cell parameter evaluation. IX contributed to the study design. All authors read and approved the final manuscript.
Electronic supplementary material
Additional file 1: The Rodscule. A PDF containing a complete and detailed description of the active contour model, named “The Rodscule”, that was implemented in RodCellJ as a model for cell segmentation. It contains its mathematical description and the mathematical derivation of the gradient needed for the optimization algorithm. Additionally, a description of its implementation is provided, explaining the issues that had to be solved when discretizing the continuously defined active contour. (PDF 280 KB)
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Schmitter, D., Wachowicz, P., Sage, D. et al. A 2D/3D image analysis system to track fluorescently labeled structures in rod-shaped cells: application to measure spindle pole asymmetry during mitosis. Cell Div 8, 6 (2013). https://doi.org/10.1186/1747-1028-8-6
Keywords
 Cell segmentation
 Protein tracking
 Rod shape
 Kymograph
 Asymmetry
 Fluorescence time-lapse microscopy