BreastGAN: Artificial Intelligence-Enabled Breast Augmentation Simulation


Journal Article

Christian Chartier, McGill University Faculty of Medicine, Montreal, QC, Canada

Corresponding Author: Mr Christian Chartier, McGill University Faculty of Medicine, 3605 Rue de la Montagne, Montréal, QC H3G 2M1, Canada. E-mail: christian.chartier@mail.mcgill.ca; Instagram: @chrischarts8

Ayden Watt, BS, Department of Experimental Surgery, McGill University Faculty of Medicine, Montreal, QC, Canada

Owen Lin, McGill University, Montreal, QC, Canada

Akash Chandawarkar, MD, Manhattan Eye, Ear, and Throat Hospital, New York, NY, USA

James Lee, MD, MS, FRCSC, Division of Plastic and Reconstructive Surgery, McGill University Health Center, Montreal, QC, Canada

Elizabeth Hall-Findlay, MD, FRCSC. Dr Hall-Findlay is a plastic surgeon in private practice in Canmore, AB, Canada.

Aesthetic Surgery Journal Open Forum, Volume 4, 2022, ojab052, https://doi.org/10.1093/asjof/ojab052

Article history: Editorial decision: 02 December 2021; Published: 11 December 2021; Corrected and typeset: 13 January 2022.


Abstract

Background

Managing patient expectations is important to ensuring patient satisfaction in aesthetic medicine. To this end, computer technology developed to photograph, digitize, and manipulate three-dimensional (3D) objects has been applied to the female breast. However, the systems remain complex, physically cumbersome, and extremely expensive.

Objectives

The authors of the current study wish to introduce the plastic surgery community to BreastGAN, a portable, artificial intelligence (AI)-equipped tool trained on real clinical images to simulate breast augmentation outcomes.

Methods

Charts of all patients who underwent bilateral breast augmentation performed by the senior author were retrieved and analyzed. Frontal before and after images were collected from each patient’s chart, cropped in a standardized fashion, and used to train a neural network designed to manipulate before images to simulate a surgical result. AI-generated frontal after images were then compared with the real surgical results.

Results

Standardizing the evaluation of surgical results is a longstanding challenge that persists in the context of AI-synthesized after images. In this study, AI-generated images were comparable to real surgical results.

Conclusions

This study features a portable, cost-effective neural network trained on real clinical images and designed to simulate surgical results following bilateral breast augmentation. Tools trained on a larger dataset of standardized surgical image pairs will be the subject of future studies.

Managing patient expectations is important to ensuring patient satisfaction in aesthetic medicine.1 A disconnect between a patient’s preoperative assumption of postoperative result and what is surgically attainable can affect satisfaction and harm the surgeon-patient relationship. In breast surgery, additional technical variables, such as implant type, size, and profile, may further complicate the ability of the patient to imagine a realistic postoperative result. Current popular methods for forecasting surgical results, including arithmetic nomograms and the use of bra sizers, are largely inaccurate.2,3 As a result, the most common cause for elective reoperation among breast patients is unsatisfactory implant size.4,5 This has amplified plastic surgeons’ responsibility to help patients set realistic surgical goals in the preoperative setting—a longstanding communication challenge in the field.

Patients are more likely to undergo surgery if they have accurate information about the postoperative result.5 To this end, computer technology developed to photograph, digitize, and manipulate three-dimensional (3D) objects has been applied to the female breast.6 Currently available imaging systems can render a sequence of photographs as a 3D surface and simulate various surgical procedures (breast augmentation, reduction, mastopexy, etc.) to digitally generate a plausible result.7 In the years since the first versions of these systems were launched, developers have added features such as automated measurements, breast volume estimates, and breast implant selection.8 However, the systems remain complex, physically cumbersome, and extremely expensive, with prices ranging from $12,000 US to $49,000 US (June 2013 prices).7-10 Furthermore, they do not incorporate artificial intelligence (AI) technology, instead relying on linear finite elements (computer-generated imagery representing expected soft tissue changes in response to surgery), and may not be based on databases of true, attainable surgical results.6

The field of AI is predicated on the rigorous analysis of large datasets for the purpose of making predictions.11 Neural networks (NNs) are algorithms designed to replicate decision-making pathways in the human brain. Tasks already mastered by NNs in their current form include image classification, free text analysis, and defeating human experts at abstract strategy games such as chess and “Go.”12 More advanced tasks now being tackled by AI include the generation of “deep fakes,” fake images synthesized from a set of general constraints based on previously learned real images.13 These applications require a specific kind of NN, called a generative adversarial network (GAN).14 Generative modeling is a subfield of machine learning that concerns itself with automatically recognizing patterns in visual input (training) data with the main goal of generating “fake” examples (images) that are indistinguishable from the training images.14 Quintessential examples of GANs include models to turn images of horses into zebras, summer landscapes into winter landscapes, and non-emotive faces into smiling faces. The key to successful GAN development is a careful selection of standardized training images for the algorithm to “learn” from.

While there are few datasets of surgical images sufficiently large for AI studies, plastic surgery “before-and-after” images, primarily meant to be referenced by prospective patients or presented in journals or conferences, provide a large dataset of surgical outcomes. This sets the stage for GAN development on clinical images spanning the entire gamut of plastic surgery procedures (Figure 1).


Figure 1.

Overview of a clinical generative adversarial network (GAN). AI, artificial intelligence.


The authors of the current study have developed BreastGAN (Montreal, Canada), an AI-driven algorithm trained on real clinical images to automatically simulate breast augmentation outcomes.

METHODS

Database Creation

Charts of all patients who underwent bilateral breast augmentation (without mastopexy) performed by the senior author between January 2003 and January 2018, and who consented to their images being used for research purposes, were retrieved and analyzed. Patients provided written consent for the use and analysis of their data. No intervention was performed on any human or animal patients as part of this study.

Frontal before-and-after images were collected from each patient’s chart, cropped to limit background visualization, centered vertically on the midpoint between the sternal notch and the umbilicus, and centered horizontally on the midpoint between the elbow creases. In total, before-and-after image pairs were collected from 1235 patients and included in the final analysis. The database was split such that 75% (n = 926) of image pairs constituted the “training set,” while 25% (n = 309) of image pairs constituted the “test set.” No features such as implant type, size, or profile were included in the analysis. Features including skin quality/thickness, breast gland density, and level of ptosis only impacted the analysis and results to the extent that they could be observed on the clinical images by the algorithm during training.
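The 75/25 split described above can be sketched in a few lines (an illustrative snippet, not the study's code; the `split_pairs` helper and the fixed random seed are our own assumptions):

```python
import random

def split_pairs(pairs, train_frac=0.75, seed=0):
    """Shuffle before/after image pairs and split them into training and test sets."""
    rng = random.Random(seed)
    shuffled = pairs[:]  # copy so the caller's list is left untouched
    rng.shuffle(shuffled)
    n_train = round(len(shuffled) * train_frac)
    return shuffled[:n_train], shuffled[n_train:]

# With 1235 pairs, this reproduces the study's 926/309 split sizes.
pairs = [(f"before_{i}.jpg", f"after_{i}.jpg") for i in range(1235)]
train_set, test_set = split_pairs(pairs)
```

Splitting at the patient-pair level (rather than per image) keeps each patient's before and after images on the same side of the split, so no test patient leaks into training.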

BreastGAN Training

The GAN in this study was developed based on the “pix2pix” framework published by Isola et al and trained on an Nvidia K80 12GB GPU hosted on Google (Alphabet, Mountain View, CA) Colaboratory.15 The pix2pix GitHub (GitHub Inc., San Francisco, CA) repository was cloned from https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix.git and mounted in Google Colaboratory, a popular machine learning environment. All images in the database were resized to 850 by 950 pixels for the purpose of training. Training was run for up to 250 epochs, or iterations through the entire set of training image pairs. After each epoch, the surgical results generated by the model were retrieved to illustrate the model’s improvement throughout the training process.
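The pix2pix repository's “aligned” dataset loader expects each before/after pair concatenated side by side in a single image. The helper below is our own illustrative sketch of that preprocessing step, using the 850 by 950 pixel target from the Methods (Pillow assumed):

```python
from PIL import Image

TARGET = (850, 950)  # (width, height) used for training in this study

def make_pair_image(before_path, after_path, out_path):
    """Resize a before/after pair and concatenate them side by side,
    the layout expected by pix2pix's "aligned" dataset loader."""
    before = Image.open(before_path).convert("RGB").resize(TARGET)
    after = Image.open(after_path).convert("RGB").resize(TARGET)
    combined = Image.new("RGB", (TARGET[0] * 2, TARGET[1]))
    combined.paste(before, (0, 0))
    combined.paste(after, (TARGET[0], 0))  # after image on the right half
    combined.save(out_path)
```

Each combined image is then dropped into the repository's expected `train/` or `test/` folder before training begins.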

BreastGAN Testing

Testing consisted of introducing the algorithm to the 309 individual frontal before images constituting the test set, recording the outputs (309 AI-generated frontal after images), and comparing the AI-generated frontal after images to the real surgical results.

RESULTS

Training and Testing

A representative sample of AI-generated surgical results retrieved from training epochs 1, 50, 100, 150, 200, and 250 is shown in Figure 2.


Figure 2.

Sample of artificial intelligence-generated surgical results retrieved throughout the training process: (A) Epoch 1, (B) Epoch 50, (C) Epoch 100, (D) Epoch 150, (E) Epoch 200, and (F) Epoch 250.


Evaluation

Standardizing the evaluation of surgical results (test group) is a longstanding challenge that persists in the context of GAN-synthesized after images.16 While surveys and metrics such as per-pixel mean-squared error exist to compute the geometric distance between AI-generated and authentic after images, we are currently seeking a more holistic approach to the evaluation of surgical images. For now, randomly selected results of GAN testing are shown in Figure 3.
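For reference, the per-pixel mean-squared error mentioned above can be computed as follows (a generic sketch, not the study's evaluation code; NumPy assumed):

```python
import numpy as np

def per_pixel_mse(generated, authentic):
    """Mean-squared error between a GAN-generated after image and the
    authentic postoperative image (arrays of identical shape)."""
    generated = np.asarray(generated, dtype=np.float64)
    authentic = np.asarray(authentic, dtype=np.float64)
    if generated.shape != authentic.shape:
        raise ValueError("images must share the same dimensions")
    return float(np.mean((generated - authentic) ** 2))
```

A score of 0 indicates pixel-identical images; note, however, that a low MSE does not guarantee a clinically plausible result, which is why a more holistic evaluation is being sought.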


Figure 3.

Sample of BreastGAN testing results: (A) 35-year-old female, (B) 42-year-old female, (C) 44-year-old female, (D) 44-year-old female, and (E) 39-year-old female. Leftmost panels are true preoperative images; middle panels are BreastGAN-simulated postoperative results; rightmost panels are true postoperative images.


DISCUSSION

Previous studies have described the use of 3D surface-imaging systems for preoperative planning in plastic surgery.7,17-20 They offer a multitude of innovative features to help plastic surgeons better manage their patients’ expectations in the preoperative setting. This shift in the field’s approach to preoperative goal-setting is correlated with increased patient satisfaction.18 However, these tools remain too costly for some clinics and medical centers and can only be administered in a clinical setting.

In this study, we propose BreastGAN, an AI-equipped tool capable of simulating breast augmentation from a single preoperative image. It leverages technology similar to that of FaceApp (FaceApp Technology Limited, Limassol, Cyprus), an application designed to show users how they would look older, younger, or with different hairstyles, and can be similarly deployed as a mobile application. Consistent with findings published by Isola et al in the index study introducing GAN training on paired images, this study suggests that GANs can accomplish image translation tasks (eg, translating before images into after images) on clinical images.15 While BreastGAN in its current form may have fewer features than popular legacy 3D imaging systems, we believe access to our tool will empower patients, allowing them to consider whether to pursue a surgical procedure from the comfort of their own homes.

Readers should interpret this study’s findings in light of certain limitations. As with all studies describing AI-equipped tools, model outputs are only as reliable as the data used to train them. In this study, the GAN was trained on surgical images provided by the senior author. Each image synthesized by the model therefore reflects surgical results achieved by one plastic surgeon, which may not be reproducible by another surgeon. Furthermore, given the relatively limited training data available (thousands rather than tens of thousands of image pairs), the model in its current form has not been designed to output results tailored by implant size, incision type, or photographic angle. This will be the subject of future studies describing GANs trained on a wider array of clinical examples. For the reader’s reference, it has been our experience that suitable results can be achieved with as few as 1000 pairs of training images. To generate surgical results across multiple implant sizes from a single input image, at least 1000 additional pairs of images would be required per additional implant size desired (eg, 1000 pairs of images of patients who received implants smaller than 250 cc, 1000 pairs between 250 and 500 cc, and 1000 pairs between 500 and 750 cc). It is also worth noting that a GAN trained on images aggregated from multiple surgeons would output the average result achieved by the contributing surgeons. A version of this tool trained on images provided by other surgeons seeking to implement BreastGAN in their practice would output results consistent with their own surgical results. Furthermore, our results only include simulated breast augmentation in patients who were candidates for breast augmentation as determined by the senior author. Using a neural network to determine which surgery a patient is the best candidate for (augmentation vs augmentation mastopexy, for example) is the subject of an ongoing study.
As described in the Methods section, features such as skin quality/thickness, breast gland density, and level of ptosis only impacted the analysis and results to the extent that they could be observed on the clinical images by the algorithm during training.

In the absence of a rigorously standardized approach to photographing patients undergoing breast augmentation, the model is limited by the features of the images it was trained on. Training images were taken from a wide range of distances and angles and against a wide range of backgrounds, and some before-and-after image pairs were dissimilar. Subsequent studies will describe a methodology for capturing clinical images optimal for the training of AI tools. Lastly, BreastGAN in its current form has been trained to generate the entire projected postoperative image, not just the augmented breasts. This accounts for distortions in non-breast features including the neck/body contours, hairline, and elbow/armpit creases. Subsequent models may generate only the breast region to improve the plausibility of the results. Additionally, modifiable variables such as implant size and plane may be added as inputs to provide patients with AI-driven projections that help them make more informed decisions, thereby decreasing revisions and increasing postoperative satisfaction.

Community-Driven Research

The nature of studies involving the use of GANs—and AI more generally—is that they rely on standardized training data often unavailable in the context of plastic surgery. Thus, the authors of this study call on members of the surgical community to consider the standardization and wider distribution of their data and clinical photographs of consenting patients. This will greatly accelerate the pace of research at the intersection of AI and surgery and make possible the development of more accurate forms of tools such as the one described herein. Furthermore, the authors wish to extend an invitation to their peers interested in developing AI-equipped tools trained on data they have collected to contact our team. The principles underpinning this tool can be applied to other areas of aesthetic surgery, including fillers, neuromodulators, rhinoplasty, facial surgery, and body surgery.

CONCLUSIONS

The development of preoperative imaging tools capable of simulating breast augmentation has empowered patients by helping them visualize a plausible surgical result. However, these tools remain costly, non-portable, and based on graphical manipulations instead of a database of true surgical results. This has been the impetus for the development of BreastGAN, a generative adversarial network trained on paired before-and-after images of patients undergoing breast augmentation. This tool is portable and can be deployed through a mobile application. GANs trained on a larger dataset of standardized surgical image pairs will be the subject of future studies.

Disclosures

Dr Chandawarkar is a clinical consultant for Cypris Medical (Chicago, IL, USA). None of the remaining authors have a financial interest in any of the products, devices, or drugs mentioned in this manuscript.

Funding

No funding was received for this manuscript.

REFERENCES

1. Poulsen L, Klassen A, Jhanwar S, et al. Patient expectations of bariatric and body contouring surgery. Plast Reconstr Surg Glob Open. 2016;4(4):e694.

2. Creasman CN, Mordaunt D, Liolios T, Chiu C, Gabriel A, Maxwell GP. Four-dimensional breast imaging, part I: introduction of a technology-driven, evidence-based approach to breast augmentation planning. Aesthet Surg J. 2011;31(8):914-924.

3. Creasman CN, Mordaunt D, Liolios T, Chiu C, Gabriel A, Maxwell GP. Four-dimensional breast imaging, part II: clinical implementation and validation of a computer imaging system for breast augmentation planning. Aesthet Surg J. 2011;31(8):925-938.

4. Adams WP Jr, Mckee D. Matching the implant to the breast: a systematic review of implant size selection systems for breast augmentation. Plast Reconstr Surg. 2016;138(5):987-994.

5. Costa CR, Small KH, Adams WP Jr. Bra sizing and the plastic surgery herd effect: are breast augmentation patients getting accurate information? Aesthet Surg J. 2017;37(4):421-427.

6. Epstein MD, Scheflan M. Three-dimensional imaging and simulation in breast augmentation: what is the current state of the art? Clin Plast Surg. 2015;42(4):437-450.

7. Tzou CH, Artner NM, Pona I, et al. Comparison of three-dimensional surface-imaging systems. J Plast Reconstr Aesthet Surg. 2014;67(4):489-497.

8. Wood KL, Zoghbi Y, Margulies IG, Ashikari AY, Jacobs J, Salzberg CA. Is the Vectra 3D imaging system a reliable tool for predicting breast mass? Ann Plast Surg. 2020;85(S1 Suppl 1):S109-S113.

9. Wesselius TS, Verhulst AC, Vreeken RD, Xi T, Maal TJJ, Ulrich DJO. Accuracy of three software applications for breast volume calculations from three-dimensional surface images. Plast Reconstr Surg. 2018;142(4):858-865.

10. Overschmidt B, Qureshi AA, Parikh RP, Yan Y, Tenenbaum MM, Myckatyn TM. A prospective evaluation of three-dimensional image simulation: patient-reported outcomes and mammometrics in primary breast augmentation. Plast Reconstr Surg. 2018;142(2):133e-144e.

11. Chandawarkar A, Chartier C, Kanevsky J, Cress PE. A practical approach to artificial intelligence in plastic surgery. Aesthet Surg J Open Forum. 2020;2(1):1-7.

12. Silver D, Schrittwieser J, Simonyan K, et al. Mastering the game of Go without human knowledge. Nature. 2017;550(7676):354-359.

13. Crystal DT, Cuccolo NG, Ibrahim AMS, Furnas H, Lin SJ. Photographic and video deepfakes have arrived: how machine learning may influence plastic surgery. Plast Reconstr Surg. 2020;145(4):1079-1086.

14. Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks. Commun ACM. 2020;63(11):139-144.

15. Isola P, Zhu J-Y, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. Paper presented at: IEEE Conference on Computer Vision and Pattern Recognition; July 21-26, 2017; Honolulu, HI. Accessed July 12, 2021. https://openaccess.thecvf.com/content_cvpr_2017/html/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.html.

16. Salimans T, Goodfellow I, Zaremba W, Cheung V, Radford A, Chen X. Improved techniques for training GANs. NeurIPS. 2016;29:2234-2242.

17. Burke PH, Beard FH. Stereophotogrammetry of the face. A preliminary investigation into the accuracy of a simplified system evolved for contour mapping by photography. Am J Orthod. 1967;53(10):769-782.

18. Koban KC, Härtnagl F, Titze V, Schenck TL, Giunta RE. Chances and limitations of a low-cost mobile 3D scanner for breast imaging in comparison to an established 3D photogrammetric system. J Plast Reconstr Aesthet Surg. 2018;71(10):1417-1423.

19. Yasunaga Y, Tsuchiya A, Nakajima Y, Kondoh S, Noguchi M, Yuzuriha S. Three-dimensional simulation for breast augmentation of female asymmetric pectus excavatum: a case report. Aesthet Surg J Open Forum. 2019;1(2):1-6.

20. Killaars RC, Preuβ MLG, de Vos NJP, et al. Clinical assessment of breast volume: can 3D imaging be the gold standard? Plast Reconstr Surg Glob Open. 2020;8(11):e3236.


© 2021 The Aesthetic Society.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.

Subject: Breast Surgery

Issue Section: Breast Surgery > Original Article


See also the companion article: Commentary on: BreastGAN: Artificial Intelligence-Enabled Breast Augmentation Simulation.
