

A robust method and affordable system for the 3D-surface reconstruction of patient torso to evaluate cosmetic outcome after Breast Conservative Therapy

Abstract

Breast cancer is the most common cancer among women around the world. Breast Conservative Therapy (BCT) is a treatment where the complete tumor and a margin of healthy tissue are surgically removed (partial mastectomy or lumpectomy) and the remaining breast tissue receives radiotherapy. Although this treatment usually provides good clinical results, some cosmetic defects such as asymmetry may nevertheless emerge. The objective of this work is to present a robust, mobile, and affordable three-dimensional (3D) surface reconstruction system that can be used to assess post-surgery breast cosmetic outcome after a lumpectomy. Preliminary results obtained with this system have allowed us to test some of the hypotheses we made in our multiscale predictive modeling of BCT outcome.

Background

Introduction and motivation

Our goal in this paper is to present a robust, mobile, and affordable three-dimensional (3D) surface reconstruction system that can be used to assess post-surgery breast cosmetic outcome after a lumpectomy.

Our motivation is to validate our predictive multiscale model of cosmetic outcome in a clinical study. This model provides the breast contour surface based on a combination of soft tissue mechanical deformation and a numerical simulation of healing. We look for a comparison of this prediction with the patient outcome at the millimeter scale. It is essential for the clinical study that our surface reconstruction system can be easily used by the nurse and/or medical doctor during the standard follow-up examinations of the patient and does not generate any additional cost.

Breast cancer is the most common cancer among women, with more than 10% being diagnosed with it in their lifetime. A lumpectomy is a procedure in which a surgeon removes the tumor with a circumferential margin of healthy tissue [1]. Although the recurrence rates after lumpectomy and mastectomy (removal of an entire breast) are similar (about 3% after mastectomy against 6% after lumpectomy), patients seem less anxious about recurrence after mastectomy [2]. However, lumpectomy is much less invasive than mastectomy. According to the quality of life outcome study in [3], lumpectomy has more emotional and physical advantages than mastectomy, because women do not permanently lose their breasts. Nevertheless, after undergoing a lumpectomy, the shape of the breast can show some defects such as asymmetries, concave deformities, or distortions of the nipple-areola complex [4]. Garbey et al. [5]–[8] developed a multiscale model with a virtual surgery toolbox (VST) interface used to predict the outcome of breast conservative therapy (BCT) before surgery. These predictions can facilitate the dialogue between the patient and the doctor when deciding on a procedure and, overall, should improve the quality of life of the patient.

There were several difficulties in our project for 3D surface reconstruction. One can enumerate our main design constraints as follows.

The system should:

  • be low cost and robust enough to provide convincing results that can assess even small cosmetic defects,

  • be movable on demand to a small standard examination room used by the surgeon for the follow-up of the patient,

  • be easy for a nurse to use without distracting her from her main tasks,

  • have minimal sensitivity to the breathing motion of the patient.

We also encountered several additional technical difficulties at the core of our study, related to skin color and to the lighting conditions offered by a standard examination room. In principle, the system should not be sensitive to skin color and potential reflections of light. For example, scars on the skin at the location of the lumpectomy change local imaging features. Temporary markers on the breast necessary for radiotherapy treatment may also impact surface imaging. Finally, there may be complicated 3D surfaces with lines of discontinuity due to possible hidden surfaces below the breast.

To answer this challenge, we first started from the classical digital stereotactic system idea using the projection of a structured light pattern on the torso of the patient. In this setup, two digital cameras are mounted in parallel to take simultaneous pictures. In principle, the deformation of the projected pattern gives enough geometric information to reconstruct the surface depth. The idea was appealing, especially since the geometric theory is rather simple. There is also high-quality open-source software available to do the reconstruction [9]. It gave us the opportunity to quickly build a low-cost system from two used D70s Nikon cameras (Nikon Co., Shinjuku, Tokyo, Japan) with a total cost on the order of $800, which was adequate for a small pilot study. The first surface reconstruction results obtained with the system were, however, very disappointing: the result was overloaded by numerical noise and presented multiple areas where the 3D reconstruction failed.

In this paper, we will show how we addressed some of the drawbacks of the method by using an image analysis adapted to our breast conservation application. We will also provide some quantitative validations, with errors on the surface on the order of 2 mm in the maximum norm in the region of interest for our BCT cosmetic evaluation.

In the `State of the art’ section, we will briefly discuss the state of the art in 3D surface reconstruction. In the `Methods’ section, we will present our system and our contribution with a method for preprocessing and postprocessing of the images adapted to the BCT application.

The `Results and discussion’ section will provide examples of the 3D surface reconstruction with a realistic torso mannequin as well as a patient. We will show that our system compared favorably with the Kinect system, which is less expensive than our system but also less accurate.

Finally, we will conclude our study and present some ideas for future improvement.

State of the art

The third dimension plays a decisive role in the analysis of dynamic or static objects. The ability to perform fast and accurate 3D reconstruction of an environment is central to many applications. Computer graphics [10] uses 3D imaging to interact with the environment and track people or objects in a scene. This technology has also expanded into the medical field, where motion can be analyzed for respiratory gating [11] or for the detection of sleep apnea [12]. Over the past decades, the domain has seen an explosion of applications driven by these advances.

The 3D modeling methods can be divided into two classes: active and passive systems. Active systems emit energy to retrieve information about the environment; the energy can be electromagnetic, infrared light, or acoustic waves. On the other hand, a passive system only receives information from the surroundings without emitting any specific signal. These methods require vision sensors (e.g., CCD cameras) to restore the 3D shape [13].

We will briefly describe some of the main 3D imaging technologies, starting with the method used in this paper.

Multiview stereoscopic reconstruction

In this passive technique, a depth measurement can be obtained by stereo matching across multiple camera views. We will focus here on binocular stereovision, which emulates human eyes.

Stereoscopy is a technique that reproduces the relief of a 3D world from at least two 2D images acquired at different angles. The reconstruction process will be detailed in the `Methods’ section.

Achieving 3D reconstruction using this method requires calibrated cameras [14]. The main problem of this approach is that some of the computational steps are extremely expensive. The resolution of the reconstruction depends on the density of points we are able to extract. A significant volume of image processing is then needed to match stereoscopic pairs [15].

Salvi et al. [15] show that the precision of reconstruction depends heavily on the calibration method that is implemented. We will later discuss additional significant sources of errors that are typical of the BCT study.

Active systems

Active systems can be considered as a passive binocular stereoscopic system where one of the cameras is replaced by a light transmitter projecting a known pattern.

Structured light

For structured light, a transmitter replaces one of the cameras. A structured light 3D scanner projects a known light pattern into the 3D space [9]. This pattern is then viewed by one or more cameras. The distortion of this pattern allows for a reconstruction of the 3D environment. The system geometry is then retrieved via a calibration step in order to know the direction of the projected pattern, the relative position between the transmitter and the receiver, and also the observed position [16]. The 3D coordinates are then computed by an active triangulation method.

To sweep the object's total surface, the system that consists of the camera and the projector can be shifted or rotated. Thus, the resolution of this method depends on the scanning step, the projected pattern, and also the resolution of the camera. Often, the accuracy of this method is better than that of a stereoscopic system [9].

Such systems provide high resolution and accurate reconstruction. However, the scanning is not instantaneous, which is a complication in our project given that we have to deal with breathing patients. Also, the projected pattern might be close to infrared light, and increased light absorption with darker skin results in suboptimal reconstruction.

In the gaming area, a structured light device has been developed by Microsoft: the Kinect sensor. This sensor platform is a motion sensing system that features an RGB camera to provide color to the reconstruction and a depth sensor which consists of an infrared laser projector and a monochrome sensor. It generates a 640×480 depth map at 30 Hz [17].

The Kinect works by projecting a known speckle pattern at near-infrared wavelengths [18]. To be accurate, the calibration between the projector and the camera has to be known. Each point has a corresponding speckle, whose size and shape depend on the distance and the direction to the 3D environment. The depth data are computed by the triangulation of each speckle between the projected pattern and the pattern observed by the camera.

While this device is inexpensive, the acquired data are very noisy [18] and contain many `holes’ where no depth has been computed. Furthermore, we have to deal with motion blur when the device is moved quickly. The optimal operation range is between 0.8 and 3.5 m.

This sensor was initially developed for gaming. However, many applications have been developed around this tool. Cui et al. [19],[20] proposed a method to capture a body model just using the Kinect sensor. To deal with the low resolution and the random noise of the camera, they implemented a super-resolution algorithm in which the color constraints were taken into account. Tong et al. [21] also presented a scanning system to model human bodies. They used several Kinects to capture different parts of the scene. Even so, their approach needed to consider the interference phenomena and misalignments in the registration steps. We will compare our solution using a stereoscopic system to this technology in the `Results and discussion’ section.

Time-of-flight

Time-of-flight (ToF) imaging refers to the measurement of depth by quantifying the phase change between the emitted light signal and the received signal once it bounces back from objects in the scene [22]. This is equivalent to computing the travel time between the light emission source and the received reflection. This method relies on modulation and demodulation processes. An infrared wave is directed to the target and the sensor detects the reflected light from the object.

For each pixel of the depth map, the phase difference between the sent and reflected waves is computed. Finally, the distance d to the object is calculated as follows [23]:

$$ d = \frac{c}{2f} \cdot \frac{\phi}{2\pi} $$
(1)

where

  • $c = 3.00 \times 10^{8}\ \mathrm{m\,s^{-1}}$ is the speed of light,

  • $\phi$ represents the computed phase difference,

  • $f$ corresponds to the signal frequency.

The quantity $c/(2f)$ is the maximum distance that can be measured without ambiguity because of phase superimposition. The main advantage of this method is the need for only one specific camera. Moreover, the high-energy light pulses are not very sensitive to the background illumination.
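As a quick numerical check of Equation (1), consider an illustrative modulation frequency of 30 MHz, a typical order of magnitude for ToF cameras but not a value taken from the paper:

```python
# Worked example of Equation (1): depth from the measured phase difference.
# The modulation frequency below is illustrative, not a value from the paper.
import math

c = 3.00e8         # speed of light (m/s)
f = 30e6           # signal modulation frequency (Hz), illustrative
phi = math.pi / 2  # measured phase difference (rad)

d = (c / (2 * f)) * (phi / (2 * math.pi))  # Equation (1)
d_max = c / (2 * f)                        # maximum unambiguous range

print(f"depth = {d:.2f} m, unambiguous range = {d_max:.2f} m")
# -> depth = 1.25 m, unambiguous range = 5.00 m
```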

Nevertheless, the accuracy depends on the object materials and colors since the IR light will be absorbed or reflected differently. Hansard et al. [22] show large differences according to the material properties: from 26.68±10.95 mm root mean square error for diffuse material object to 93.91±87.41 mm for specular material objects, and 131.07±73.65 mm for translucent objects with subsurface scattering.

Several teams are working on the use of such cameras in medicine. Placht et al. [24] and Le Fur et al. [25] proposed solutions to position patients during radiotherapy treatment by implementing a registration framework. Wentz et al. [26] and Schaller et al. [27] built systems that track the deformation of the patient surface due to breathing during radiotherapy. Falie et al. [12] used ToF cameras to detect sleep apnea instead of placing sensors on patients.

For our pilot study, the cost of such a device was not appealing. We will next present our experience with a classical stereoscopic system and focus on the algorithmic work we did in image processing.

Methods

Background

As mentioned earlier, stereoscopic reconstitution is a process that allows us to assess the third dimension from a stereoscopic pair of images. This technique is divided into three main stages. First, the system is calibrated to determine the intrinsic and extrinsic parameters (stage 1). The intrinsic parameters characterize the camera properties (e.g., focal lengths, optical center), while the extrinsic parameters describe the relative position and orientation of the two cameras. Then, points from one view have to be matched to points from the other view (stage 2). Finally, the 3D coordinates are computed (stage 3).

To mathematically formalize the geometric algorithm, three different coordinate systems are used (Figure 1):

  • (X,Y,Z), the 3D object coordinate system,

  • (x,y,z), the camera coordinate system,

  • (u,v), the image coordinate system.

Figure 1

Overview of the coordinate systems for stereoscopy.

Homogeneous coordinates are employed because they are natural in projective geometry and allow us to express a transformation between two coordinate frames as a single matrix multiplication.

Calibration of the stereoscopic system is a crucial step where the relation between the real world and the digital image coordinates is estimated. The accuracy of this phase will highly affect the final 3D reconstruction [15]. First and foremost, the intrinsic parameters are computed. They characterize the mapping of a point from the camera coordinate system to the image coordinates in each camera. Then, the extrinsic parameters are gauged. They describe the relative position and orientation of the two cameras. Finally, we can use the pinhole camera model.
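In homogeneous coordinates, the resulting pinhole model can be written as a single matrix product (standard computer vision notation, not reproduced from the paper; $f_x$, $f_y$ are the focal lengths in pixels and $(c_x, c_y)$ is the optical center):

$$ s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K \left[\, R \mid t \,\right] \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}, \qquad K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} $$

where the rotation $R$ and translation $t$ (the extrinsic parameters) bring the object coordinates $(X,Y,Z)$ into the camera frame, and $s$ is an arbitrary scale factor.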

However, real lenses usually present some distortions. Therefore, the above model is extended by adding radial and tangential distortion coefficients [28].

To solve the system of equations for the calibration problem, we can use a chessboard because its geometry is well known. To figure out the position of a corner, we need to know the number of rows and columns the chessboard has and the size of each square. The corners are easy to detect using computer vision algorithms.
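As an illustration of stage 1, here is a minimal calibration sketch with OpenCV (cf. [28]); the board dimensions, square size, and file names are placeholders we introduce for the example, not values from the paper:

```python
# Sketch of stage 1: calibrate each camera from chessboard views, then
# estimate the extrinsics (R, T) relating the two cameras.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per row and column (hypothetical board)
square = 25.0     # square size in mm (hypothetical)

# 3D corner positions in the chessboard's own frame (Z = 0 plane).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts = [], [], []
for fl, fr in zip(sorted(glob.glob("left_*.jpg")), sorted(glob.glob("right_*.jpg"))):
    gl = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:  # keep only views where both cameras see the board
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

image_size = gl.shape[::-1]
# Intrinsic parameters: camera matrix K and distortion coefficients d.
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, image_size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, image_size, None, None)
# Extrinsic parameters: rotation R and translation T between the cameras.
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```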

The next and most difficult challenge is stage two: searching for corresponding points in a stereoscopic pair. Several complications, such as occlusions, may appear; for example:

  • a 3D point is only depicted in one image,

  • an object may hide another one in one of the views,

  • points can appear in a different order, and distances change from one image to the other.

Imprecise mapping leads to an erroneous 3D reconstruction. Many algorithms have been implemented to match points from both images. Among these methods, one solution is to extract feature points (e.g., corners) from both images and match them [29]. Another technique uses a well-known correlation method between two rectangular windows: one is fixed in the first image, and the second is shifted in the second image until the maximum correlation between both windows is reached [30].

Our stereoscopic system is calibrated so that the epipolar constraint can be applied. Epipolar geometry is a powerful tool here: the epipolar constraint ensures that corresponding points lie along conjugate epipolar lines, which reduces the search from 2D to 1D. The point matching can be simplified further by rectifying the stereoscopic pair. Each image is subject to a transformation such that epipolar lines become collinear and parallel to the x-axis of the image, which is equivalent to mapping the epipoles to infinity. Therefore, for each point of the left image, we only need to look along the same row in the right image.
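Continuing the calibration sketch above, the rectification step can be expressed with OpenCV as follows (again an illustration under our assumptions, not the authors' code; `left_img` and `right_img` denote the acquired stereoscopic pair):

```python
# Sketch of rectification: after remapping, corresponding points lie on
# the same image row, so matching reduces to a 1D search along rows.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)

map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)

left_rect = cv2.remap(left_img, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_img, map2x, map2y, cv2.INTER_LINEAR)
```

The rectified projection matrices `P1` and `P2` are reused in the triangulation stage below.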

The third and last stage is straightforward. Once the images are rectified and the mapping finished, the passive triangulation method provides the 3D coordinates.

Contribution

We built a low-cost stereoscopic system. It was made of two D70s Nikon cameras, a video projector, and a laptop to project black and white stripes (see Figure 2). The choice of the camera did not matter, as we were able to use any digital reflex camera that provided 6 megapixels or more.

Figure 2

Our stereoscopic system. Our stereoscopic system is made up of two D70s Nikon cameras, a high-resolution video projector, and a laptop.

However, as mentioned earlier, we found that a classical approach to stereoscopic restitution did not provide an adequate level of resolution for our BCT application.

The patients’ skin appearance was fairly uniform, so the mapping process became almost impossible in stage 2, no matter how precise the calibration in stage 1. Using the idea of structured light [31], we decided to project vertical stripes on the patient in order to match points in an easier and more robust way. Pictures of this scene were then acquired simultaneously by the two cameras. We will describe successively the preprocessing and postprocessing steps we implemented to achieve an accurate reconstruction.

Preprocessing

The major challenge was to design an image processing algorithm that properly segmented the projected stripes. A simple threshold algorithm was not adequate. In fact, the video projector provided highly non-homogeneous illumination. The intensity of the light was concentrated in the middle of the beam and decreased as a function of the distance to the center of the projected beam.

By displaying a cross section of the acquired image, we observed that the light saturated at the middle of the image, and its intensity decayed sharply as the distance from the middle increased. This explained why a simple threshold would not be appropriate. To compensate, the image bias was corrected. The slope of the signal on either side of the large black line was computed. In Figure 3, the magenta and cyan lines show the slope on the left and right sides, respectively. Then, the intensity of each pixel in the image was corrected such that the slope became horizontal.

Figure 3

Cross section of a typically acquired image, showing that the intensity of the light is not uniform.
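In code, this bias correction amounts to fitting a linear illumination trend on each side of the central black line and flattening it. Here is a minimal per-row sketch under our reading of the method; `center_col` (the column of the large black line) and `image` are hypothetical names:

```python
# Sketch of the illumination bias correction: fit a linear trend to the
# gray-level profile on each side of the central black line and remove
# it, so the stripe signal sits on a flat baseline.
import numpy as np

def flatten_row(row, center):
    """row: 1D array of gray levels; center: column of the black line."""
    out = row.astype(float).copy()
    for sl in (slice(0, center), slice(center, row.size)):
        x = np.arange(sl.start, sl.stop)
        slope, _ = np.polyfit(x, out[sl], 1)  # illumination slope (cf. Figure 3)
        out[sl] -= slope * (x - x[0])         # make the trend horizontal
    return out

corrected = np.vstack([flatten_row(r, center_col) for r in image])
```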

This initial solution gave overall reliable detection of the stripe lines. Nonetheless, the segmentation of the vertical line edges failed in several areas such as around the nipples, at scar locations, or at other particular spots on the skin such as birthmarks.

We took advantage of the fact that the projected lines on the image could be interpreted as an ideal periodic rectangular signal. The Fourier transform (FT) of a rectangular function is a sine cardinal (sinc).

We will illustrate our technique with the image acquired on a mannequin representing a realistic woman's torso. The Fourier transform of the image is shown in Figure 4. The black circle on the plot coincides with the main spike of the sine cardinal in 1D.

Figure 4

Fourier transform. Result of the Fourier transform of the image; the black circle marks the main spike of the sinc.

The edges of the signal correspond to high frequencies in the Fourier domain. The idea was to remove as much of the projected lines as possible by constructing a low-pass filter: the high frequencies of the spectrum were attenuated by multiplying it with a simple Gaussian mask. The inverse Fourier transform (IFT) was then applied. Since the IFT of a Gaussian is also a Gaussian, this produces a blurred version of the image. Subtracting this blurred image from the original gray scale image, which amounts to a high-pass filter, gave us the result in Figure 5.

Figure 5

Result after subtraction. (a) Original gray scale image. (b) Blurred image. (c) Result of the image subtraction, which yields a clear image to process.
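The following sketch reproduces this high-pass step as we understand it (a reconstruction of the idea, not the authors' code): a Gaussian mask in the Fourier domain yields the blurred image, and the subtraction leaves the stripe edges. The width `sigma` and the input name `gray_image` are illustrative:

```python
# Sketch of the Fourier-domain high-pass: multiply the spectrum by a
# Gaussian (low-pass), invert to get the blurred image, then subtract
# the blurred image from the original.
import numpy as np

def highpass(gray, sigma=20.0):  # sigma in frequency-space pixels
    F = np.fft.fftshift(np.fft.fft2(gray))
    h, w = gray.shape
    y, x = np.indices((h, w))
    gauss = np.exp(-((x - w // 2) ** 2 + (y - h // 2) ** 2) / (2.0 * sigma ** 2))
    blurred = np.fft.ifft2(np.fft.ifftshift(F * gauss)).real  # low-pass image
    return gray.astype(float) - blurred                       # high-pass result

stripes = highpass(gray_image)
```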

This approach produced excellent line detection but needed to be refined further with image filtering.

The most effective tool that we found for the task was the Gabor filter [32], a well-known edge and texture detection filter in image processing. It is a linear filter whose impulse response is a sinusoid modulated by a Gaussian. The goal of this filter is to pick out, in the Fourier domain, the set of frequencies that characterize the region of interest. Figure 6 demonstrates the outcome of this filter. At this point, a threshold provides the final segmentation.

Figure 6

Image after applying the Gabor filter (a) and final segmentation obtained by thresholding (b).
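A minimal Gabor filtering sketch with OpenCV follows; all parameter values are illustrative and would in practice be tuned to the period of the projected stripes:

```python
# Sketch of the Gabor stage: a kernel tuned to vertical stripes, then a
# simple threshold to obtain the final segmentation.
import cv2
import numpy as np

kernel = cv2.getGaborKernel(
    ksize=(31, 31),  # kernel support (illustrative)
    sigma=4.0,       # width of the Gaussian envelope
    theta=0.0,       # orientation: respond to vertical structures
    lambd=10.0,      # wavelength, matched to the stripe period
    gamma=0.5,       # spatial aspect ratio
    psi=0.0)         # phase offset

response = cv2.filter2D(stripes.astype(np.float32), cv2.CV_32F, kernel)
resp8 = cv2.normalize(response, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, segmented = cv2.threshold(resp8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```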

3D coordinate computation

We were now able to use the processed images to run the 3D reconstruction computation. The calibration of the camera had been done with a chessboard to complete stage 1 of the algorithm.

To fulfill stage 2, these images were rectified in order to reach parallel and horizontal epipolar lines. As explained above, this rectification reduced the degree of complexity. A point in the left image and its corresponding point in the right image are on the same horizontal line, having the same y-coordinate.

In order to map stereoscopic pairs from the two images, a landmark was used. This landmark was defined as the largest vertical black line, which stood roughly in the middle of both images acquired by the stereotactic system. The position of this line was measured in each image. Then, we assumed that the closest line to the landmark in the left image was also the closest to the landmark in the right image, and so forth. This completed the second stage of the algorithm.
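Under our reading, this matching rule is purely ordinal with respect to the landmark; the following sketch captures the idea (the names and structure are ours):

```python
# Sketch of stage 2 matching: stripe edges are paired by their rank
# relative to the landmark (the widest black line) in each image.
import numpy as np

def signed_ranks(cols, landmark):
    """Rank stripe positions by their order relative to the landmark:
    ..., -2, -1 to its left and +1, +2, ... to its right."""
    cols = np.sort(np.asarray(cols, dtype=float))
    k = int(np.searchsorted(cols, landmark))
    return cols, np.concatenate([np.arange(k) - k, np.arange(cols.size - k) + 1])

def match_stripes(cols_l, lm_l, cols_r, lm_r):
    cl, rl = signed_ranks(cols_l, lm_l)
    cr, rr = signed_ranks(cols_r, lm_r)
    left, right = dict(zip(rl, cl)), dict(zip(rr, cr))
    common = sorted(set(rl) & set(rr))
    return [(left[r], right[r]) for r in common]  # (x_left, x_right) pairs
```

Because the images are rectified, each matched pair lies on the same row, so the pair of abscissae is all that stage 3 needs.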

Finally, from the known intrinsic and extrinsic parameters of our stereoscopic system, the 3D coordinates of a cloud of points lying on the surface were computed as explained above.
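With the rectified projection matrices `P1` and `P2` from the calibration sketch, this last step reduces to a single OpenCV call (a sketch for one image row `y`; the stripe positions and landmark columns are the hypothetical inputs of the matching sketch above):

```python
# Sketch of stage 3: passive triangulation of the matched stripe points.
import cv2
import numpy as np

pairs = match_stripes(cols_left, lm_left, cols_right, lm_right)
pts_l = np.float32([(xl, y) for xl, xr in pairs]).T  # 2xN left image points
pts_r = np.float32([(xr, y) for xl, xr in pairs]).T  # 2xN right image points

X_h = cv2.triangulatePoints(P1, P2, pts_l, pts_r)    # 4xN homogeneous points
X = (X_h[:3] / X_h[3]).T                             # Nx3 surface points
```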

Postprocessing

The set of points obtained from the 3D reconstruction was still affected by some numerical imperfections from the segmentation algorithm. We could assume, however, that the surface of the breast was relatively smooth. We used a fourth-order filter developed by Gottlieb and Shu [33] in numerical analysis to remove that noise without affecting the overall numerical accuracy of the reconstruction. The filter is expressed as follows:

$$ \sigma(\eta) = y^{4}\left(35 - 84y + 70y^{2} - 20y^{3}\right) $$
(2)

where $y = \tfrac{1}{2}\left(1 + \cos(\pi\eta)\right)$ and $\eta \in [0,1]$ denotes the normalized frequency.

This filter was applied in the vertical direction y only, i.e., `column wise’ for the depth map.

To fulfill the high order of accuracy provided in theory by this numerical technique, the underlying function to be filtered needs to be itself a smooth periodic function. We subtracted a first-order polynomial shift as in [34] to make the y-signal a C1-periodic function. We then applied the filter and added back the shift to remove only the high-frequency noise. This concludes the description of our modified algorithm for the 3D surface reconstruction. We next report on the result obtained with this method.
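Here is a sketch of this postprocessing as we understand it, applied to a depth map stored with the vertical direction along the columns (`depth_map` is a hypothetical name):

```python
# Sketch of the column-wise smoothing: subtract a first-order polynomial
# shift so the column is periodic, damp its Fourier coefficients with the
# fourth-order sigma of Equation (2), then add the shift back.
import numpy as np

def sigma(eta):
    y = 0.5 * (1.0 + np.cos(np.pi * eta))
    return y**4 * (35.0 - 84.0 * y + 70.0 * y**2 - 20.0 * y**3)

def smooth_column(z):
    n = z.size
    x = np.arange(n)
    trend = z[0] + (z[-1] - z[0]) * x / (n - 1)  # first-order shift, cf. [34]
    Z = np.fft.rfft(z - trend)
    eta = np.arange(Z.size) / (Z.size - 1)       # normalized frequency in [0, 1]
    return np.fft.irfft(Z * sigma(eta), n) + trend

depth_smoothed = np.apply_along_axis(smooth_column, 0, depth_map)  # y direction
```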

Results and discussion

Performance of the system

We used this system with our first patient enrolled in our BCT study. The patient was seated upright in a comfortable chair. Our cameras and video projector system were mounted on a small piece of furniture that could easily slide on the floor. The picture taking process took only a few minutes and did not interfere significantly with the standard clinical exam by the surgeon. We should also mention that the room for the clinical exam was small and had very standard lighting conditions. We did not use the flash and obtained pictures with a 3,008×2,000 resolution.

The reconstruction process was relatively tedious because of different spots on our patient’s body (e.g., birthmarks, scars, radiotherapy landmarks) that impeded line detection. It gave us the surface in Figure 7. This 3D reconstruction seemed relatively realistic but presented defects in the area below the breast, where illumination conditions were rather poor. Another problem arose from line discontinuities in the images just under the breast. A solution to this problem was to cut the image into two parts (the superior one and the inferior one) by following the curve described by the points of discontinuity and to reconstruct these two parts separately.

Figure 7

Reconstruction of the first patient. Results for the patient’s right breast 3D reconstruction (a) in the coronal plane and (b) in the sagittal plane.

Typically, the reconstruction took on the order of an hour of computing on a standard PC. Initially, additional time was needed on a case by case basis if the segmentation was not performed properly and/or the image quality was poor.

It turned out that this first patient in the BCT study came out of surgery with no visible cosmetic defect at all. Nevertheless, this reconstruction could be used to accurately measure volume change and global change in breast lift related to inflammation or change in tissue stiffness that often accompanies the natural healing process. Due to our specific clinical conditions and breathing motion, it was not possible to get a true measurement of the surface depth on this first patient.

To rigorously validate our model, we decided to use a rigid mannequin to obtain an accurate measure of the numerical error of the reconstruction.

Validation (vs. Kinect)

We acquired a plastic mannequin of a woman's torso with realistic dimensions. We manually introduced a defect on the left breast similar to what might be observed after a lumpectomy of significant size on a small breast (see Figure 8).

Figure 8

Reconstruction of a mannequin. Results for the mannequin 3D reconstruction. (a) Overall view. (b) View showing the artificial defect on the breast.

We did a 3D reconstruction of this mannequin with both our system and a Kinect. For the Kinect, we used RecFusion, a software package developed by a German company, which offers a solution to generate a 3D reconstruction. The software is based on the KinectFusion approach with some modifications detailed in [17]. The depth images acquired from the Kinect are registered to the current 3D reconstruction using the iterative closest point (ICP) algorithm. Once registered, the current 3D reconstruction is updated using the new depth measurements. Before the acquisition, a voxel resolution can be fixed as well as the reconstruction volume. To get a reasonably accurate reconstruction, the acquisition lasts at least 10 s, during which the model has to rotate so as to present new views to the sensor. The Kinect can also be shifted to scan the whole model. When the reconstruction is complete, the scan is stopped. Then, postprocessing is done to change the mesh visualization, remove small disconnected parts, or smooth the outcome. We note that this lengthy data acquisition method will not work directly with a patient due to breathing motion.

The reconstruction obtained with the Kinect was strikingly realistic (see Figure 9).

Figure 9

Reconstruction with Microsoft Kinect processed by RecFusion (a, b).

The result obtained with our system looks relatively good. However, in designing our multiscale BCT model of cosmetic outcome [6], we need rigorous quantitative assessments of shape and volume that reflect the potential tissue loss and the change of stiffness of the breast post-surgery.

We then decided to follow this simple protocol: we took a side view of the mannequin with a high-definition digital camera and extracted the two-dimensional contour of the torso. The error on this segmentation was expected to be on the order of 2 pixels, which was about 0.73 mm.

We projected our 3D reconstructions from both the Kinect and the stereoscopic system onto the same plane. We could then compare the contour of the torso in this plane of projection with the segmentation result of the side-view image.

The result is given in Figure 10. It shows that while the result with the Kinect looked visually quite good, its global error was significantly larger than that of our system. The error in the region of interest with our system, from top to bottom of the breast, was about 2 mm in maximum norm. We can still observe a somewhat larger error below the breast, as noticed with our patient result. One can also observe that the high-frequency oscillations in the error curves actually come from the segmentation imperfections of the 2D side view used for validation. Those oscillations are indeed negligible compared to the overall error obtained with our system.

Figure 10

Comparative evaluation. (a) Comparison of the cross section between the actual mannequin (blue curve), the Kinect reconstruction (green curve), and the stereoscopic reconstruction (red curve). (b) Error for the Kinect reconstruction (green curve) and the stereoscopic reconstruction (red curve).
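For completeness, here is a sketch of the quantitative comparison behind Figure 10; we assume the contours have been projected into the common plane and sampled as 2D point lists (all variable names are ours, not from the paper):

```python
# Sketch of the validation metric: distance from each reconstructed
# contour to the ground-truth contour segmented from the side view.
import numpy as np

def contour_error(reference, candidate):
    """reference: (N, 2), candidate: (M, 2) contour points in mm, in the
    same projection plane. Returns per-point distances and their maximum."""
    d = np.linalg.norm(candidate[:, None, :] - reference[None, :, :], axis=2)
    err = d.min(axis=1)    # nearest-neighbor distance to the reference
    return err, err.max()  # maximum norm, as reported in the text

err_stereo, max_stereo = contour_error(side_contour, stereo_contour)
err_kinect, max_kinect = contour_error(side_contour, kinect_contour)
```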

Conclusions

We designed a robust and affordable system to capture the 3D shape of a breast cancer patient torso in order to compare the clinical observations to the long-term multiscale prediction of breast conservative therapy outcome done in our lab (see Garbey et al. [6]). The accuracy of this system and data processing was validated on a torso model with a 2-mm accuracy at a distance on the order of 1.3 m from the model. This level of accuracy is adequate for our clinical study. Potential improvements for such a device rely first on improved equipment. It would be advantageous to use a high-quality video projector with more contrast than ours and better lighting conditions than the standard ones we have in the clinic. The use of additional cameras would broaden the field of view of the reconstruction and add useful redundancy without penalizing the elapsed time of the acquisition. An alternative would be to combine the Kinect output for a global view of the torso with a local stereotactic reconstruction of the breast by our system, which would be insensitive to breathing motion artifacts.

Our main contribution in this article is the new preprocessing and postprocessing algorithms we applied to structured light images to accurately recover the 3D surface of the skin. We speculate that these algorithmic improvements can benefit a broader set of image acquisition devices than our basic stereotactic hardware solution.

References

  1. American Cancer Society: Breast Cancer, Treatment Guidelines for Patients. Fort Washington, PA: National Comprehensive Cancer Network; 2011.

  2. Brewster AM, Hortobagyi GN, Broglio KR, Kau SW, Santa-Maria CA, Arun B, Buzdar AU, Booser DJ, Valero V, Bondy M, Esteva FJ: Residual risk of breast cancer recurrence 5 years after adjuvant therapy. J Natl Cancer Inst 2008, 100:1179–1183. 10.1093/jnci/djn233

  3. Gilles M: Breast conservative surgery pilot study: data acquisition design and image processing. Master's thesis. Department of Computer Science, University of Houston, Houston; 2012.

  4. Dewar JA, Benhamou S, Benhamou E, Arriagada R, Petit JY, Fontaine F, Sarrazin D: Cosmetic results following lumpectomy axillary dissection and radiotherapy for small breast cancers. Radiother Oncol 1988, 12(4):273–280. 10.1016/0167-8140(88)90016-3

  5. Thanoon D, Garbey M, Bass BL: Computational modeling of breast conserving surgery (BCS) starting from MRI imaging. In Computational Surgery and Dual Training. Edited by Garbey M, Bass BL, Berceli S, Collet C, Cerveri P. Heidelberg, Germany: Springer; 2014:67–86.

  6. Garbey M, Salmon R, Thanoon D, Bass B: Multiscale modeling and distributed computing to predict cosmesis outcome after a lumpectomy. J Comput Phys 2012, 244:321–335. 10.1016/j.jcp.2012.08.002

  7. Thanoon D: Computational framework for breast cancer. PhD thesis. Department of Computer Science, University of Houston, Houston; 2011.

  8. Garbey M, Thanoon D, Salmon R, Bass B: Multiscale modeling and computational surgery: application to breast conservative therapy. JSSCM 2011, 5:81–89.

  9. Scharstein D, Szeliski R: High-accuracy stereo depth maps using structured light. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; 18–20 June 2003, Madison, Wisconsin, USA. 2003, vol. 1:195–202.

  10. Kolb A, Barth E, Koch R, Larsen R: Time-of-flight cameras in computer graphics. Comput Graph 2010, 29(1):141–159.

  11. Dawood M, Lang N, Jiang X, Schafers K: Lung motion correction on respiratory gated 3-D PET/CT images. IEEE Trans Med Imaging 2006, 25(4):476–485. 10.1109/TMI.2006.870892

  12. Falie D, Ichin M, David L: Respiratory motion visualization and the sleep apnea diagnosis with the time of flight (ToF) camera. In Proceedings of the 1st WSEAS International Conference on Visualization, Imaging and Simulation; 7–9 November 2008, Bucharest, Romania. 2008.

  13. El-Mejbri EF, Grabowski H, Kunze H, Lossack R-S, Michelis A: 3D reconstruction of paper based assembly drawings: state of the art and approach. Graph Recogn Algorithms Appl 2002, 2390:1–12. 10.1007/3-540-45868-9_1

  14. Trucco E, Verri A: Introductory Techniques for 3-D Computer Vision. Upper Saddle River, New Jersey, USA: Prentice Hall; 1998.

  15. Salvi J, Armangué X, Batlle J: A comparative review of camera calibrating methods with accuracy evaluation. Pattern Recogn 2002, 35:1617–1635. 10.1016/S0031-3203(01)00126-1

  16. Zhang S, Huang P: Novel method for structured light system calibration. Opt Eng 2006, 45(8):083601. 10.1117/1.2336196

  17. Newcombe RA, Izadi S, Hilliges O, Molyneaux D, Kim D, Davison AJ, Kohli P, Shotton J, Hodges S, Fitzgibbon A: KinectFusion: real-time dense surface mapping and tracking. In 2011 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR); 26–29 October 2011, Basel, Switzerland. 2011:127–136.

  18. Khoshelham K, Elberink SO: Accuracy and resolution of Kinect depth data for indoor mapping applications. Sensors 2012, 12(2):1437–1454. 10.3390/s120201437

  19. Cui Y, Chang W, Nöll T, Stricker D: KinectAvatar: fully automatic body capture using a single Kinect. In 11th Asian Conference on Computer Vision (ACCV 2012); 5–9 November 2012, Daejeon, Korea. 2012.

  20. Cui Y, Stricker D: 3D shape scanning with a Kinect. In SIGGRAPH 2011; 7–11 August 2011, Vancouver, Canada. 2011.

  21. Tong J, Zhou J, Liu L, Pan Z, Yan H: Scanning 3D full human bodies using Kinects. IEEE Trans Vis Comput Graph 2012, 18(4):643–650. 10.1109/TVCG.2012.56

  22. Hansard M, Lee S, Choi O, Horaud R: Time-of-Flight Cameras: Principles, Methods and Applications. Heidelberg, Germany: Springer; 2012.

  23. Remondino F, Stoppa D: TOF Range-Imaging Cameras. Heidelberg, Germany: Springer; 2013.

  24. Placht S, Stancanello J, Schaller C, Balda M, Angelopoulou E: Fast time-of-flight camera based surface registration for radiotherapy patient positioning. Med Phys 2012, 39:4–17. 10.1118/1.3664006

  25. Le Fur E, Wentz T, El Kabbaj O, Roman J, Visvikis D, Pradier O: Évaluation du repositionnement en radiothérapie par caméra temps de vol [Evaluation of patient repositioning in radiotherapy with a time-of-flight camera]. In 24ème Congrès National de la SFRO; 3–5 October 2013, CNIT Paris La Défense, Paris, France. 2013.

  26. Wentz T, Fayad H, Bert J, Pradier O, Clement J-F, Vourch S, Boussion N, Visvikis D: Accuracy of dynamic patient surface monitoring using a time-of-flight camera and B-spline modelling for respiratory motion characterization. Phys Med Biol 2012, 57:4175–4193. 10.1088/0031-9155/57/13/4175

  27. Schaller C, Penne J, Hornegger J: Time-of-flight sensor for respiratory motion gating. Med Phys 2008, 35(7):3090–3093. 10.1118/1.2938521

  28. Bradski G, Kaehler A: Learning OpenCV: Computer Vision with the OpenCV Library. Sebastopol, CA, USA: O'Reilly; 2008.

  29. Ying C, Hong-e R, Ben-zhi D: An improved algorithm for feature point matching. In 2010 International Conference on Environmental Science and Information Application Technology (ESIAT), vol. 4; 17–18 July 2010, Wuhan, China. 2010:112–115.

  30. Hartley RI, Zisserman A: Multiple View Geometry in Computer Vision. 2nd edn. Cambridge, United Kingdom: Cambridge University Press; 2004. ISBN 0521540518.

  31. Liu Y, Zhang D, Guo J, Lin S: Stripe model: an efficient method to detect multi-form stripe structures. In Advances in Multimedia Modeling, vol. 7732. Edited by Li S, Saddik A, Wang M, Mei T, Sebe N, Yan S, Hong R, Gurrin C. Heidelberg, Germany: Springer; 2013:425–435.

  32. Weldon TP, Higgins WE: Design of multiple Gabor filters for texture segmentation. In 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-96), vol. 4; 7–10 May 1996, Atlanta, Georgia, USA. 1996:2243–2246.

  33. Gottlieb D, Shu C: On the Gibbs phenomenon and its resolution. SIAM Rev 1997, 39(4):644–668. 10.1137/S0036144596301390

  34. Dupros F, Garbey M, Fitzgibbon WE: A filtering technique for system of reaction-diffusion equations. Int J Numer Meth Fluids 2006, 52(1):1–29. 10.1002/fld.1082


Acknowledgements

This work was supported by the Atlantis Exchange Program and the Partner University Fund.

Author information


Corresponding author

Correspondence to Nicole Lepoutre.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

MG, MG, and BB set up the pilot study and the medical protocol. NL, MG, RS, CC, and MG participated in processing the acquired images to obtain the 3D surfaces. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Lepoutre, N., Gilles, M., Salmon, R. et al. A robust method and affordable system for the 3D-surface reconstruction of patient torso to evaluate cosmetic outcome after Breast Conservative Therapy. J Comput Surg 1, 11 (2014). https://doi.org/10.1186/s40244-014-0011-4
