Everyone — both in the room and, hopefully, a significant number of people online — whether it's evening for you or early morning, thank you for joining us today. This is our fifth annual Midwest Radiation Oncology Symposium, and I'm so glad that all of you are here. For those of you joining us either in person or online, we have a fairly diverse group: physicians from radiation, medical, and surgical oncology; oncology nurses, nurse practitioners, and physician assistants; certainly our medical physicists, dosimetrists, and radiation therapists; fellows from different programs and our residents in both radiation oncology and medical physics; students; and certainly our international guests, which is why I said good evening.

I want to acknowledge the educational grant that we have received over multiple years from Varian Medical Systems — we're very appreciative of their ongoing support — and also the exhibitors who are here today: Regeneron, Servier Pharma, and Varian Medical Systems. During breaks, I hope you'll stop by and visit with the people representing their companies.

I want to thank the Symposium committee — several members have served in prior years, but we also have new members — and obviously I want to thank both Dr. Lin, our professor and vice chair of clinical research — well, research, I should say — and Brenda Ram, who wears many hats but is involved in all of the UNMC educational conferences. The last time I saw Brenda was in Maui, which was great until I couldn't get a flight out — but that's okay; not a bad place to get hung up.

I want to welcome our new faculty. Physicians who have joined us since our meeting last year are Patrice Cohen; Brianne Bower, who joined us last month; Kib Khan — Dr. Khan will be speaking; and William Low, who also joined our physician faculty. Physicists who started after last year's Symposium are Ray L., Dr. Mark Chan, Ham Mtan, and Ellie Bacon; both Dr. Low and Ellie Bacon will be part of our physician-physicist team at Kearney. And S. Mafus recently joined our research faculty.

I also want to acknowledge our new cancer center director, Dr. Joann Sweasy, who joined us in November. I was on the search committee — there was Dr. Sweasy and then a pretty large delta to our other candidates — but what I really like about Dr. Sweasy is that her background is in radiation DNA damage repair, so I kind of feel like we have a cancer center director who's one of
us. We're hoping to get a larger project off the ground so we can do a lot of direct research with Dr. Sweasy.

"We Don't Coast" is actually the Omaha Chamber of Commerce's slogan, but this was hot off the press — I received it Tuesday evening. Since we opened in June of 2017, the C.L. Werner Cancer Hospital has taken care of over 25,000 patients in the inpatient setting. Our researchers — physicians, physicists, and PhD cancer biologists — have received over $240 million in new grant funding. In the outpatient clinic setting, we've seen and treated over 83,000 patients. We've added 94 new physicians and research scientists — I saw this last night and thought we should have listed physics separately, but the physicists are included in the 94, so don't worry; we value what you contribute. I think we added nine since last year, so we're helping that new-recruitment part. And we've seen over a 152% increase in patients being enrolled on cancer clinical trials — that's important.

We have had success in making the top 50 in U.S. News & World Report. They have a peculiar metric system for how they gauge that, but they did make some changes, and I was pleased, when I was in Hawaii, to find out that we were ranked 45th in cancer. Since this tends to lag by about two to three years, I anticipate we'll move up — but congratulations to everybody who was part of that.

This is the present: that's a great view of the Fred & Pamela Buffett Cancer Center, which houses the C.L. Werner Cancer Hospital and the clinics and labs, including radiation oncology, on the left of the big building. In the background, the L-shaped building is where both the faculty offices and the research towers are located. But what's going to be happening with that eight-acre parcel of land, which took about two years to clear and dig out — probably a hundred years of building? Well, this is at least one vision of what will be there: the new main — not cancer, excuse me, that would be awesome — the new main inpatient and outpatient clinic space for Nebraska Medicine. You can see the Buffett Center in the background. It will have over 1.1 million square feet — I think this building is around 380,000 square feet — so substantially larger. The price tag is also substantially larger, estimated at about $2.2 billion, with a B. We're in the planning phase now, and hopefully we'll be looking at potentially breaking ground late calendar year 2025.
The artist's rendition here isn't nearly as good as the actual building you can see driving down Saddle Creek. That gray part in the middle used to be a steel foundry that they repurposed and built onto on either side, so the building is blocks long. It's essentially an incubator farm — a collaboration between investigators and industry to develop new projects and actually bring them to market. UNeMed will move their offices into that facility as well, but really we hope it will be a catalyst to bring growth forward in terms of bringing a lot of things to market.

We are planning to open the new cancer center facility on the University of Nebraska Kearney campus later this year — probably some component in late November or early December. Radiation oncology is on the right of that building. I was out there in May or June; it's a really impressive facility, and I'm really excited about it. In fact, the whole medical campus will have a second medical school campus based there as well, so we're excited about that. I suspect the name will change, but I won't say any more about that.

And so I want to welcome all of you to the 2024 Midwest Radiation Oncology Symposium. Please silence your phones — me included. We're going to do a little bit of a marathon: 28 presentations over the next two days. Also, Dr. Lin reminded me to say, for our attendees: if you want to put in questions for the speakers, please add them to the question-and-answer queue at the bottom.

A big focus of our meeting today and tomorrow is the interaction between artificial intelligence and radiation oncology. That was me taking the GEICO caveman — which was much better, actually, than the gecko — and, using lower tech, managing to turn that into something fun. I'll date myself here: I don't know if any of you know who this is, but there was a TV show called Lost in Space, and that was the robot, B-9. He was famous for alerting the stranded crew of the spaceship on a foreign planet — famous for not only verbally saying "Danger! Danger!" but waving his arms and flailing around. I suspect that's some pretty low-tech ductwork they used for the arms, but this was the '60s, and that was kind of the view of technology back then. It was followed by something called Colossus: The Forbin Project, which predated WarGames but had the same premise: we don't want humans to accidentally blow everybody up, so we'll turn it over to machines to decide when it's time to push the button.
And then, of course, the premise is that the machines decide humans are really stupid and need to be taken care of, so the machines will just run everything — that was the premise of some of those movies. I didn't put Terminator in, but obviously that would be another one.

With everything going on today, you know, there is inexpensive entertainment. These were two pictures I took off my deck this summer. The partial rainbow was much more impressive than the photo I had of the complete rainbow. For those of you who've never observed a double rainbow: the colors in the main rainbow from top to bottom are all familiar, but in the secondary rainbow they're in reverse order — the main rainbow goes from reddish on top to violet on the bottom, and it's just the opposite in the upper one. The photo on the right was one of those things — I went out to grill, and all of a sudden I grabbed my wife and said, "Get out here, get out here," because I'd never seen a red streak in the sky like that. It was the coolest thing I'd ever seen. It reminded me of a Terry Redlin painting — if you know Terry Redlin, he'll frequently have cloudy skies and yet things are really brightly lit, and you can't find where the light's coming from. It was a very neat photo.

Our keynote speaker this morning — whom I actually met in person for the first time yesterday, and with whom I had the distinct pleasure of a great dinner — is Dr. Clifton Fuller, a professor in the Department of Radiation Oncology at the University of Texas MD Anderson Cancer Center in Houston, Texas. He's going to be doing our first keynote presentation. Just a little bit of background: I thought about reading his 61-page CV, but then I thought timing might be a problem and Dr. Lin would be stroking out, and I don't want that to happen to her. He is on the head and neck oncology service and is heavily involved in research. We had a lot of fun discussing old technology — things that may seem new to me, he looks at as past tense — and where we are today, using that for perspective. It was a really enjoyable dinner last night, with a lot of discussion about where we've been but, more importantly, where we're going. So I'm going to stop at this point and turn this over to Dr. Fuller, and at the end of his session there will be time for Q&A, so I'll come back on for that.

Wherever works — right? Great. So, thank you all — Dr. Enke, Dr. Lin — thank you for inviting me. I have to say, I love the caveman picture, but I object to the use of my image without reimbursement. I do think that's a great analogy, though, because at the end of the
day, when we talk about AI, or AI prediction models, in a very basic sense we are cave people. Human beings have evolved over an incredibly long period of time, but what we've done is acquire progressively better tools. So I would argue that we're still just like those cave folks, only instead of sharp sticks and a piece of flint we now have writing and AI. As our tool sets develop, it's very important to remember that we're really not doing anything humans haven't done for an enormous amount of time: learning how to use these tools better.

These are my disclosures; I will show them to you faster than you can read them. As you heard, I'm a head and neck radiation oncologist. This is my group — I'm kind of obscured here, but I'm the middle-aged guy — and I work with a really fantastic team of folks. Also in our head and neck section, this is our head and neck multidisciplinary group, all focused on head and neck cancer. From a research perspective, I'm part of a series of collaborative efforts. We're part of a multi-institutional effort with the University of Iowa and the University of Illinois Chicago that's focused on computational models for toxicity minimization — you'll see a lot of work by our students in this group. I work with Philips and Elekta on adaptive replanning, and with Rice University on Markov decision processes for when to make those adaptations. And under Kristy Brock, whom many of y'all know, I'm part of an effort where we train physicist-scientists, physician-scientists, and surgeon-scientists together in image-guided therapy. These efforts underpin the environment in which my lab group — the folks you see here — operates.

Our major focus is on spatially aware methods for clinical decision support and treatment optimization. Now, why spatially aware? As radiation oncologists, what separates us from the other oncologic disciplines is that we have a decidedly spatial approach: where we radiate is an alterable, dose-dependent phenomenon, and we can guide it with a series of semantic — that is to say, text-based or clinical — variables, but also with all this imaging data, and the imaging data we use is very, very rich. I'm going to start by contextualizing that spatial awareness: we're becoming more spatial over time, but we're also seeing a sea change
in how we think about the science of AI — the science of decision-making is really changing a lot. I happened to be in grad school during what was called the second deep-learning winter. There was a period when deep learning really didn't work very well, but other AI techniques — which nowadays we'd generally class as machine learning — worked really, really effectively. The godfather of those techniques is a guy named Leo Breiman, who came up with classification and regression trees and random forests. Many of you have probably heard of these techniques; they're very computationally efficient and very effective. You might ask what Leo Breiman has to do with radiation oncology. Well, behind the scenes, once Breiman's techniques became available to radiation oncologists, we implemented those tools, took data from RTOG studies — this is Laurie Gaspar's work — and put all those data together and asked: how can we figure out which brain cancer patients are going to have the highest risk, which ones are going to do really badly, rather than just looking at classical, human-staged things like "it's a big tumor that I can't remove"? They threw all this data into an AI system — a machine learning system — and what came out was a little recursive tree, what we call in radiation oncology recursive partitioning analysis, or RPA. These RPA techniques really took off; they're essentially the basis of modern risk-staging systems. Where did we get those staging systems? Almost all of us who were of a certain era used these RPA classes for brain tumors. We didn't sit there and go, "Oh, it's AI, I wonder if I trust it." We just said: hey, there's this RPA system, it works really well, and it really classifies patients. When we think about these kinds of approaches, it's obvious we've been using AI — or AI-like techniques for statistical learning — for a long time; we just didn't brand them and hype them the same way.
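To make the recursive-partitioning idea concrete, here is a minimal sketch of an RPA-style classification tree using scikit-learn. The features (age, KPS, extracranial disease), the synthetic outcome, and all thresholds are invented for illustration — this is not the RTOG data or the published RPA classes.

```python
# Minimal sketch of Breiman-style recursive partitioning (the machinery
# behind RPA-type risk classes). All data and features are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(18, 90, n),           # age (years)
    rng.choice([60, 70, 80, 90], n),   # Karnofsky performance status
    rng.integers(0, 2, n),             # extracranial metastases (0/1)
])
# Hypothetical favorable-outcome label loosely tied to KPS, age, and ECM.
y = ((X[:, 1] >= 70) & (X[:, 0] < 65) & (X[:, 2] == 0)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=25).fit(X, y)
print(export_text(tree, feature_names=["age", "kps", "ecm"]))  # the "RPA" splits
```

The printed tree is exactly the kind of recursive split structure an RPA class system comes from: a handful of human-readable cut points.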
Breiman, the guy who came up with these techniques, also said there's a basic way you can think about the world: there's some phenomenon happening in nature, some alteration of that phenomenon, and some output that can be measured. In a very basic sense, that's science — the phenomenological basis of science. But there are two ways you can try to figure out how to predict y from x, and he called these the two cultures. One culture is the traditional one, the data-modeling culture: you say, I've got this phenomenon and I'm going to fit it to a model that I understand. Maybe it's a t-test, so you assume a Gaussian model; maybe it's a survival model and you fit a survival curve; maybe it's a logistic regression — it doesn't matter. You provide a model and then make a prediction based on what the model says. This is how we've classically done science: the hypothesis-testing method. But what was really coming out of AI was not just all these new tools but a different culture, one that said: I'm going to take these neural nets and decision trees and I'm going to predict y, but I'm not going to try to specify the mechanism or supply a model I already know — I'm going to generate a model from the data. That's the algorithmic modeling culture, and oftentimes one of the problems has been that it's kind of a black box. The algorithmic culture doesn't care, as long as the prediction is right. How do you get from point A to point B? The traditional answer is that you make all these measurements and present a model. No — I just get there the fastest way I can, and as long as I get there on time, who cares? This, I think, is the more important part about AI: understanding its cultural implications for us as humans, because otherwise these are just tools.
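As a toy illustration of the two cultures, the sketch below fits the same synthetic data both ways: a logistic regression (an assumed functional form — the data-modeling culture) and a random forest (structure learned from the data — the algorithmic culture). The data and the planted interaction are invented.

```python
# The "two cultures" side by side on the same synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
# Outcome with a nonlinear interaction a linear model will miss.
y = ((X[:, 0] * X[:, 1] > 0) ^ (X[:, 2] > 1)).astype(int)

for name, model in [("data modeling", LogisticRegression()),
                    ("algorithmic", RandomForestClassifier(n_estimators=200))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:14s} cross-validated AUC: {auc:.2f}")
```

Neither culture "wins" in general; the point is that the second never writes down a mechanism, only a predictor.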
What happens at the same time as you're thinking about these two cultures? I want you to think about the rise in information dimensionality. This is Richard Bellman, the founding editor of Mathematical Biosciences, an optimization-oriented publication, and he said: we're getting these complex models that are way higher-dimensional than a human can fit in their head. So we have this problem: as we increase the volume, complexity, and interrelatedness of the data, our capacity as humans stops at the caveman level — we can't really conceptualize or abstract those kinds of things. And this is a real problem, because we're past the counting-sticks era, or, as we were discussing, the grease-pencil era. Those were simpler models you could keep in your head, and so medical education and physics education were about memorization, because you had to carry your books with you. We're now far away from that. Any large-scale radiation oncology practice now operates at levels of dimensional representation where you can't keep all that information in your head. I was talking to a trainee, explaining that we used to hand-calculate dose on clinical setups. Now you can take an Ethos or a Halcyon or an MR-linac and autoplan from scratch on that patient immediately. Try explaining to your past self how you would hand-calc check that — you can't even start to get there. So in most cases this requires us to bring the information down to caveman level through dimensionality reduction, and this is the big problem we're going to talk about with AI prediction models: you have this rich 3D data set, this rich imaging or clinical or semantic data set, and we burn it down to a single point on a DVH; we burn the outcome down to a single "do you have dry mouth?"; and then we plot it on a logistic curve using the hypothesis-testing method we've always used and say, done — there's my model — because that's as simple a model as I can hold in my head. How many dose constraints can I remember? Not very many. So this dimensionality reduction is not a function of the AI process; it's a function of how well I can interpret it.

I'll start with our classic NTCP models, built in — I'm not going to say the caveman era, but let's say the grease-pencil era — when we talked about uniform doses to organs and these early radiation models. In the head and neck space — which is all the examples I'm going to use, because it's all I know — when we used to treat with those old 2D plans, the target volumes were either in or out of field, and so were the normal structures. If we hit your parotid, it got 50 Gy; it was binary. The dose to the parotid was very easily represented by mean dose, because those were homogeneous, fairly uniform plans. So if you asked, "What's the parotid dose?", the parotid was either in or out, and mean dose was a really good measure; those models worked really well in that era. By comparison, we've now frame-shifted the complexity of the data: it's no longer a uniform parotid dose for every patient but a patient-specific, customized dose based on the location of the tumor and the other structures. I've shifted from a low-complexity model to a high-complexity model, with every patient unique — and we're just talking about dose, just about where the dose grid went. Two-dimensional models made more sense on the left than on the right. Those standard phenomenological models — the classic ones, just like you saw from Leo Breiman — took the data and dimensionally reduced it: we took all that dose, made it into a DVH, got some single representation of the dose, fit it to our logistic curve, and did some fancy maths so we could maybe account for lack of uniformity — but we're still boiling it down to one number.
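Here is a hedged sketch of that classic reduction pipeline: a 3D dose grid collapsed to a mean organ dose, then mapped through a logistic NTCP curve. The TD50 and slope values below are placeholders, not fitted parotid parameters.

```python
# Classic dimensionality reduction: 3D dose grid -> one number -> NTCP.
import numpy as np

def mean_dose(dose_grid, organ_mask):
    """Collapse a 3D dose grid to a single number: mean dose in the organ."""
    return dose_grid[organ_mask].mean()

def ntcp_logistic(d_mean, td50=39.0, gamma50=1.0):
    """Logistic NTCP curve: toxicity probability at a given mean dose (Gy).
    td50 and gamma50 are illustrative, not published parotid values."""
    return 1.0 / (1.0 + np.exp(4.0 * gamma50 * (1.0 - d_mean / td50)))

rng = np.random.default_rng(2)
dose = rng.uniform(0, 70, size=(40, 40, 40))   # stand-in dose grid
mask = np.zeros_like(dose, dtype=bool)
mask[10:20, 10:20, 10:20] = True               # stand-in "parotid" mask
d = mean_dose(dose, mask)
print(f"mean 'parotid' dose: {d:.1f} Gy -> NTCP {ntcp_logistic(d):.2f}")
```

Everything spatial about the dose grid is gone by the second line of the pipeline — which is exactly the complaint that follows.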
That works really well, but it removes all the spatial information, and it doesn't account for differential sensitivity across the parts of the OAR. If one part of your parotid, or one part of your lung, is doing all of the important physiologic work and you knock out that part, the model assumes uniform sensitivity anyway. There are simple approaches you can take — use different parts, use substructures, model local response — but what we're really moving toward with AI is a way to conceptualize larger degrees of dimensionality. There are a few ways to do that: classic voxel-based approaches, or deep-learning approaches, which are their algorithmic cousin; then we'll talk about how you incorporate imaging, with some examples of decision models that are kind of fun. In a voxel-based analysis, instead of thinking of structures as uniform structures of uniform risk, you use the data-modeling-culture approach to create per-voxel p-values that tell you where the significant voxels are — letting these subregions be differentially important. This can be really powerful, because now you're not preemptively making that dimensional reduction and saying a parotid is a parotid is a parotid; you're letting the voxel information tell you what's important. Even though it's a hypothesis-generating, frequentist approach, it's very powerful, and there's great work by lots of groups showing these maps depend on the structure of the dose delivery and are statistically informative — though they're a little hard to scale in a lot of cases. On the other hand, if voxel-based analysis is the hypothesis-generating way, the algorithmic way is to say: I'm going to feed everything into a prediction model; I won't really know how it works; it will make a prediction about the patient; but then maybe I can come back and see what the deep-learning net is paying attention to. In both cases you're getting that rich spatial information out of the process, and for the deep nets that can be done with what are called saliency maps, or sensitivity maps, or attention maps. The idea is: okay, I've made this prediction, and it works sufficiently well for clinical use, but instead of leaving it a black box, I'm going to look back and see which voxels the deep-learning network was paying attention to. These are
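A toy version of the voxel-based idea: instead of one DVH point per patient, test every voxel's dose difference between toxic and non-toxic patients and keep the per-voxel p-value map. Cohort sizes, grid shape, and dose distributions here are all made up.

```python
# Toy voxel-based analysis: a per-voxel Welch t-test between cohorts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_tox, n_ok, shape = 30, 50, (16, 16, 16)
dose_tox = rng.normal(55, 8, size=(n_tox, *shape))   # patients with toxicity
dose_ok = rng.normal(50, 8, size=(n_ok, *shape))     # patients without

# Welch t-test at every voxel; p_map highlights dose-sensitive subregions
# instead of assuming the whole organ is uniformly sensitive.
t_map, p_map = stats.ttest_ind(dose_tox, dose_ok, axis=0, equal_var=False)
print("voxels with p < 0.01:", int((p_map < 0.01).sum()))
```

In a real analysis the dose grids would first be deformably registered to a common anatomy, and the p-map corrected for multiple comparisons.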
conceptually complementary approaches — different ways to skin the cat. Voxel-based analysis in many different organ sites appears to improve prediction over the classic logistic-regression techniques, but, like all of these kinds of approaches, it tends to fail when we move to external validation. This is one of the core checks we always have to do: if I train a model off my data using a black-box approach, it'll do really well — why? Because I trained it off my data. If your data are different, you're going to get a different answer, because I didn't train it off data that look like yours. So external validation is important. A similar idea is incorporating imaging data: rather than coming up with an individualized reduction of this information, you can use multi-level mixed-effects approaches — I won't go into the mechanics — where a single model links dose and local response, or dose and toxicity, using more elaborate and effective techniques. Where this is really going to come into play is in things like protons, where dose is not the only part of the story spatially. Right now, in photon space, you can assume most of the dose-response information is explicitly linked to dose in Gray. Once you move to particles, that's not the case: where is the end of range? LET information and RBE information may be differential compared to physical dose deposition. This also lets you use imaging changes to make observations even before you have actionable outcomes — you're talking about a biologic or surrogate indicator. That's really attractive, and I'd say it's still in the pipeline: these LET-based models are going to be more important moving forward. What I can tell you is that I've operated in the pre-RBE-modeled proton world, and there are potential side effects if you don't consider these things. Effective consideration of these factors in models is something we're moving toward, but most of the time it's still in research mode.
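As a loose illustration of folding LET into the proton dose story, here is a simple, linearly LET-weighted effective dose of the form D_eff = D · (1 + c · LET). The coefficient c is purely illustrative; this is a sketch of the idea, not a validated RBE model.

```python
# Hedged sketch: weighting physical proton dose by LET.
import numpy as np

def let_weighted_dose(dose, let, c=0.04):
    """Effective dose; c (per keV/um) couples dose to LET. Illustrative only."""
    return dose * (1.0 + c * let)

dose = np.array([2.0, 2.0, 2.0])   # same physical dose per voxel (Gy)
let = np.array([2.0, 5.0, 12.0])   # LET rising toward the end of range
print(let_weighted_dose(dose, let))  # end-of-range voxels are weighted up
```

The point of the sketch: two voxels with identical physical dose can carry different biologic weight once LET enters the model, which is why purely dose-based response models can mislead in particle therapy.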
I'll show you how this works in an example — this was just accepted for publication by Lisanne van Dijk, one of my postdocs, who is now faculty back in Groningen. The premise: let's learn about late xerostomia using patient-reported outcomes, accounting for this rich data rather than using the classic dose — or rather, dimensionality — reduction techniques. So we fed a deep-learning model with dose, CT imaging data over time, and segmentations, through a fairly classic deep-learning process, but on a platform that also lets us include clinical data — rich semantic data, not just imaging and dose. You're already making the supposition that an 80-year-old patient with xerostomia may differ from a 30-year-old patient with xerostomia in terms of radiation response and recovery, which I think is very valid. We used lots of different deep-learning techniques and compared them; the deep CNN worked really well. What we found is that different parts of the spatial information carry different amounts of importance. As you can imagine, if you remove the dose, you remove a lot of information and the model doesn't work very well — most of the information is in the dose — but there's also information in the segmentations, and in the images themselves. You can then mine that and pay attention to where the AI is paying attention. It turns out that it's not the whole parotid: it's subregions of the parotid, and subregions of the submandibular gland — the areas where the ducts and the stem cells are located. So if you have a choice between prioritizing this part of the parotid and that part, you should prioritize the one that is, at least phenomenologically, more attentionally important.
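Here is a minimal sketch, in PyTorch, of the kind of architecture described: dose, CT, and segmentation stacked as channels of one volume, with clinical covariates such as age concatenated before the prediction head. The layer sizes and names are invented; this is not the published model.

```python
# Sketch: multi-channel image volume + clinical covariates -> toxicity risk.
import torch
import torch.nn as nn

class DoseImageClinicalNet(nn.Module):
    def __init__(self, n_clinical=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 8, 3, padding=1),   # 3 channels: dose, CT, mask
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))         # crude global image summary
        self.head = nn.Linear(8 + n_clinical, 1)

    def forward(self, image, clinical):
        z = self.conv(image).flatten(1)       # learned image features
        z = torch.cat([z, clinical], dim=1)   # append clinical covariates
        return torch.sigmoid(self.head(z))    # P(late xerostomia)

net = DoseImageClinicalNet()
image = torch.randn(4, 3, 32, 32, 32)   # batch of dose/CT/mask volumes
clinical = torch.randn(4, 2)            # e.g., age, baseline function
print(net(image, clinical).shape)       # torch.Size([4, 1])
```

The design choice worth noting is the late fusion: the image branch never has to re-learn what "age" means, and the clinical branch never has to re-learn anatomy.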
These kinds of things become even more important when we talk about complex toxicities that aren't restricted to one organ at risk. That xerostomia model is already pretty complex — we're using four OARs and all of the imaging information, not constraining it to look only at the submandibular glands or only at the parotids; we're looking at the whole image. Does it show us what we already understand a priori about anatomy and physiology? Yes, and that's useful — but we're still talking about xerostomia and a limited number of glands. When you move to something like swallowing, it gets way more complex, because "do I radiate this swallowing muscle or that swallowing muscle" is always a nonzero-sum game in the head and neck: if hitting your pharyngeal constrictors causes dysphagia and I spare them, I'm pushing the dose through some other structure. So we have to balance. Are there differential radiosensitivity profiles for different muscles? Does high, medium, or low dose to different muscles lead to different outcomes? That means modeling the dose-response curves for swallowing dysfunction for all of those structures — and it turns out all of them have dose-response models. If you radiate a muscle to enough dose, it will stop working effectively, your swallowing function will be impaired, you'll aspirate, and you'll be at risk of dying — and that's not good. So we put all this data together, and we also saw that it's not just dose and dose response. Take a patient's age — this black line is all patients — and what we found is that age matters a lot for swallowing recovery. If you're a 30-year-old patient and I radiate you to 70 Gy, three years later you're probably going to be swallowing fine — not because I'm a good radiation oncologist, but because you're young and healthy and you recover. If you're 80 and I blast your pharyngeal constrictors, you're going to have problems even at lower doses. This is exactly why prediction models are necessary: I can't remember in my head what the swallowing-dysfunction dose constraint for 80-year-olds is. We have to implement these models clinically not because the AI is so good but because the humans are so bad — I can't remember those dose curves in any way I could get into a TPS so we can do some very slick dose reduction. For this paper we did a bunch of collapsing of the data so the physician could understand it, and it came out to something like mylohyoid volume receiving 69 Gy, plus age. There's no way you're going to remember that; it needs to be auto-imported into a TPS and then weighted just like a normal constraint.
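To illustrate the age effect numerically, here is a hedged logistic dose-response with a dose-by-age interaction term. The coefficients are invented to mirror the qualitative claim — the 80-year-old's curve sits far above the 30-year-old's at the same dose — and are not fitted values from the study.

```python
# Sketch: logistic dose-response with a dose x age interaction.
import numpy as np

def p_dysphagia(dose, age, b0=-8.0, b_dose=0.06, b_int=0.0012):
    """Toxicity probability; all coefficients are illustrative placeholders."""
    logit = b0 + b_dose * dose + b_int * dose * age
    return 1.0 / (1.0 + np.exp(-logit))

for age in (30, 80):
    print(f"age {age}: P(dysphagia) at 70 Gy = {p_dysphagia(70.0, age):.2f}")
# With these toy coefficients: ~0.22 at age 30 vs ~0.95 at age 80.
```

A model like this is trivial for a TPS to evaluate per patient; the whole point is that no human carries a family of age-conditioned dose curves in their head.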
It gets even more important as we start looking at the fact that almost all of these models are consequential late models. I've got xerostomia and it's bad — when did you get it? At two years. When was your swallowing function bad? It got really bad three years after radiation. But what's been happening in the meantime is physiologic change — subclinical injury to those organs at risk — that's potentially modifiable for a long time. If we look at people's muscles after we radiate them, we see imaging changes that are dose-response-associated far in advance of when they start to have swallowing problems: inflammation of the muscle that looks like T2 changes, and fibrosis that looks like T1 changes, months or even years before they start saying, "I'm really having problems," or developing aspiration pneumonia. What does that mean? It means we have rich information with which we could say, even before the patient has active swallowing dysfunction, that there has been radiobiologic injury. And that means we can build models that capture not only consequential injuries at some arbitrary time point — we can build continuous models of trajectories. It really moves us to the idea that symptoms after radiation, and our models of them, aren't static in time, because the injury is not static in time. Our models have to account for the fact that, both clinically and on imaging, there are periods when things get worse and periods when things get better. Maximum acute toxicity for us in head and neck is always at the end of radiation — does that represent how a patient's going to do five years down the road? Not always. So you need comparatively complex models.

Another good example: we've done a lot of work on a rare toxicity that we nevertheless see a lot of — osteoradionecrosis, which is bone death due to devascularization in the radiation field. We see it in about 6 to 7% of patients, which at MD Anderson means about 65 a year: we treat about 1,000 to 1,500 patients with head and neck radiation, and of those, a large fraction — about a thousand — get significant jaw dose. We've done the simple NTCP data-reduction curves just to get started, and we came up with an NTCP curve for osteoradionecrosis. We're really proud of that work; it's really good. But here's the problem: if you look at the whole DVH — these complex curves — it doesn't really track that a single dose parameter is driving all of it. One constraint is not going to solve this problem. If you compare the DVHs for folks with ORN against the controls, there's a trend, but it's not as if one sharp part of the curve drives all the injury. In fact, if we look at AI-based cluster analysis — this is work by Muhammad Hosseinian — we can use machine learning techniques to discriminate variable risk components, and if you asked me, "Where's the line between these intermediate risk groups?", I couldn't tell you by eye. I could tell you that patients with lots of dose on their DVH are going to do worse, but I couldn't risk-stratify the middle in any meaningful way, nor threshold that risk in any way I could put into a TPS or tell the patients. A patient comes in and I'm radiating their whole mandible — I can say, yes, your risk of osteoradionecrosis is really high. How high? Bad. If I'm totally missing the jaw, I can say your risk is real low. But everybody in the middle gets "go brush your teeth." It's incredible how little information we go on with these prediction models.
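A sketch of the cluster-analysis idea: treat each patient's whole cumulative DVH as a feature vector and let k-means propose risk groups, instead of hand-picking one dose cut point. The DVHs below are synthetic.

```python
# Sketch: clustering whole DVH curves rather than thresholding one point.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
dose_bins = np.linspace(0, 70, 36)
# 120 synthetic cumulative DVHs: fraction of mandible volume >= each dose,
# parameterized by a per-patient "half-dose" d50.
d50 = rng.uniform(20, 60, size=120)
dvh = 1.0 / (1.0 + np.exp((dose_bins[None, :] - d50[:, None]) / 5.0))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(dvh)
v30_idx = np.searchsorted(dose_bins, 30)
for k in range(3):
    print(f"cluster {k}: n={np.sum(labels == k)}, "
          f"mean V30={dvh[labels == k, v30_idx].mean():.2f}")
```

Each cluster is defined by the shape of the whole curve, which is how intermediate-risk groups can emerge that no single cut point separates.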
With an AI model, though, I can risk-stratify these patients very rapidly and very concretely, using the whole of the DVH rather than a single variable. And this is a simple model — a real, actionable, reasonable model I can take to folks. We can look at patients and show ORN instances from two DVH groups that look very similar if you compare mean DVHs, but that have — in this case by V30 — vastly different osteoradionecrosis injury, because the mandible is a complex structure whose vascular trees are non-uniform. We've been moving forward even from that, to the necessity of a temporal component: members of our group built an AI-based dosimetric and dental risk model that incorporates the entirety of the 3D dose grid and generates osteoradionecrosis-free survival. What's important to understand is that if you radiate a mandible and damage it, not only does the patient have a higher risk of ORN, but the time at which they get ORN changes substantially. If a patient is going to get ORN because we blasted their whole mandible, they get it fast; other patients carry a low but detectable risk over long periods of time. So when a patient asks, "Hey, Doc, am I going to have dry mouth?" — when are you going to have dry mouth, when are you going to have osteoradionecrosis, when are you going to have swallowing dysfunction — we've got to move that way, and AI enables it. Whatever sits in the background of the model — a deep-learning net, which we used as part of it, or more classic machine learning techniques — the magic is teaching ourselves how to think about high-complexity data in ways we can use clinically, and then generating something dynamic you can run any visualization you want on. This has just been accepted for publication, turned into dose-volume-histogram inputs for human planning. We also asked: if we're going to do that, how does AI drive decision support? We created and have tested a GUI where you can load in simple features — we did this very simply, for simplification — and we're in the process of adding a more elaborate API component where we can potentially load the whole dose grid, load the DICOM file, push a button, and generate an annualized risk of osteoradionecrosis, or any other toxicity of choice. It's a flexible model.
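For the time-to-event idea — risk as a curve over time rather than one number — here is a sketch using the lifelines Cox model as a stand-in learner. The covariates, effect sizes, and censoring scheme are invented for illustration; this is not the group's published ORN-free-survival model.

```python
# Sketch: ORN-free survival as a time-to-event problem (synthetic data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 300
df = pd.DataFrame({
    "v50_mandible": rng.uniform(0, 1, n),      # fraction of mandible >= 50 Gy
    "dental_extraction": rng.integers(0, 2, n),
})
# Synthetic hazards: more high-dose mandible volume -> earlier ORN.
hazard = 0.02 * np.exp(2.0 * df.v50_mandible + 0.7 * df.dental_extraction)
df["months_to_orn"] = rng.exponential(1.0 / hazard)
df["orn_event"] = (df.months_to_orn < 60).astype(int)
df.loc[df.orn_event == 0, "months_to_orn"] = 60.0   # censor at 5 years

cph = CoxPHFitter().fit(df, duration_col="months_to_orn",
                        event_col="orn_event")
cph.print_summary()   # hazard ratios per covariate
```

The output of a model like this is a per-patient event-free curve — "when", not just "whether" — which is the shift being described.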
Now, these AI models don't always do better. In our first efforts at using deep learning for ORN prediction, we took about 1,100 patients without ORN, matched them to 170 patients with ORN, and ran every deep-learning model we could think of. We thought these deep-learning models — the great, hot models, the AIs of today — would blow the doors off the old logistic regressions and the old machine learning techniques I learned in the '90s and 2000s. Turns out they didn't. Turns out they did terribly. Why? Because osteoradionecrosis is a pretty rare event state, and prediction of rare event states is really hard for some of these architectures. What we found is that no matter how much more training data we fed in, the models didn't improve, and none of the deep-learning models did as well as the old-school logistic regression and machine learning models. We talk about AI and get super hyped about these really sweet architectures — CNNs and those kinds of things — and they're all fantastic, but we don't need to throw the baby out with the bathwater. If simplified methods are more effective, use them: sometimes you just need a sharp rock on the end of a stick; sometimes you don't need a laser. So at the end of the day, the performance of these AI systems always has to be mediated — filtered through a process and an awareness of it. That's going to be done on two fronts by our folks: by our physics folks, who know and understand the application and the complexity of these data, and by the physicians, who, for the most part — let's be honest — use very strong but basic heuristics for making decisions on behalf of a patient.
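A small demonstration of the rare-event trap: at an event rate like ORN's, an unweighted learner can largely ignore the minority class while still looking "accurate"; class weighting is one simple counter-measure. The data and effect sizes are synthetic.

```python
# Sketch: class imbalance with a rare event, with and without weighting.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(1270, 10))
# ~5% event rate, loosely echoing 170 events vs 1100 controls.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1270) > 2.4).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for w in (None, "balanced"):
    model = LogisticRegression(class_weight=w).fit(X_tr, y_tr)
    print(f"class_weight={w}: event recall = "
          f"{recall_score(y_te, model.predict(X_te)):.2f}")
```

Event recall — not overall accuracy — is the number that matters when the event is the toxicity you're trying to prevent.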
When we think about artificial-intelligence utilization, all of these things come into play: what you expect from the model, what you expect of the situation, cognitive workload, perception — these drive your trust in artificial intelligence in both a general and a specific sense. If you're scared of things, you'll be scared of AI, because you're scared of new things; if you like shiny objects, you'll like AI, because you like new things. That's the general, personal level. But this trust has to be developed by a team that understands all of these elements in an effective way within a radiation oncology system, and the important thing is that at the end of the day there's also accountability — and our accountability is to the patients. When we talk about these AI methods — you're going to hear from some of Dr. Court's folks talking about the Radiation Planning Assistant — how do you make those things trustworthy and believable? You show the physicians what you're doing, you show what's under the hood, you admit the tool is not perfect, and, in a very fundamental sense, you take accountability to fix that tool. This is where I think there's such an important point: at the end of the day, you, the physicist or physician, take accountability. If I make a decision about a patient, my job is to be their fiduciary representative — to save their life and extend their quality of life as much as I can. That's my accountability to that patient in all respects. So if an AI system saves me cognitive workload, fine, but at the end of the day I have to do what's best for the patient, whether that means ignoring the AI or implementing it, and I have to be cognizant of that. You don't need to be an AI expert; you need to be a patient expert. As a physicist, you don't need to be an AI expert; you need to be a safety expert — because your fiduciary responsibility to the patient is the safe and effective delivery of the plan and the protection of that patient. And when we talk about accountability, it's funny: most of us don't know how to write the Monte Carlo code in our dose engines — we buy that from our vendor colleagues. But what do we do? We check it on a phantom. Monte Carlo is such a great statistical method that you're not going to beat it statistically — I'm not sitting there with a calculator checking Monte Carlo calculations — but I check it on a phantom. What does that mean we have to do? Pay attention to real-life use cases by tracking our own data, because if we implement a model and it doesn't work, we have to adopt new methods. I was talking with Chuck last night: for years, at the place I trained, we used pencil-beam calculations on the early IMRT systems and underdosed people's lung SBRT by 20% — 20% underdosing, because back then we didn't understand attenuation as well in treatment planning systems. What did we do? We paid attention to our failures. We paid attention to the fact that patients were recurring, and to the fact that we weren't seeing the central-lung toxicities other people were seeing, and we adapted. That same idea — serial acceptance testing, serial monitoring — is, I think, the core for AI applications.
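In the spirit of "check it on a phantom, then keep checking," here is a sketch of serial monitoring for a deployed model: log each case, compute a rolling concordance over a recent window, and alert when it drifts below the commissioned baseline. The thresholds and window size are arbitrary choices, not a standard.

```python
# Sketch: rolling performance monitor for a deployed prediction model.
from collections import deque

class ModelMonitor:
    def __init__(self, commissioned_auc=0.80, window=100, tolerance=0.05):
        self.baseline = commissioned_auc
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)   # (predicted_prob, observed) pairs

    def log_case(self, prob, observed):
        self.recent.append((prob, observed))

    def check(self):
        # Crude pairwise concordance over the window (a rolling AUC proxy).
        pos = [p for p, o in self.recent if o == 1]
        neg = [p for p, o in self.recent if o == 0]
        if not pos or not neg:
            return "insufficient data"
        auc = sum(p > q for p in pos for q in neg) / (len(pos) * len(neg))
        if auc < self.baseline - self.tolerance:
            return f"ALERT: rolling AUC {auc:.2f} below commissioned baseline"
        return f"OK: rolling AUC {auc:.2f}"

monitor = ModelMonitor()
monitor.log_case(0.9, 1)
monitor.log_case(0.2, 0)
print(monitor.check())
```

This is the software analogue of the phantom check: the model was commissioned once, but it gets re-verified continuously against your own outcomes.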
I'll show you a few examples of where I think we're going with this, and the model I like to talk about is something we call digital twins. In radiation oncology we're developing technology, and our colleagues across the hallway are developing technology too. The hottest technology we see in head and neck surgery is robotic transoral resection: the robot takes these little robot arms, it lasers or electrocauterizes, and you can shell out a tumor. The idea is that sometimes you can remove the head and neck cancer transorally and never have to give the patient radiation. The problem is that if you see risk factors after surgery, the patient needs radiation, or chemo and radiation. This approach has been shown, at least as far as we can tell, to offer equivalent survival — the patient is not at greater risk of the tumor coming back with robotic surgery versus radiation — but there's a significant difference in that these patients carry conditional risk. If I do the surgery and there are no risk factors, there's no radiation, and the patient has no radiation injuries — that's pretty cool. But if we do the surgery and then there are risk factors, they need adjuvant radiation afterward, which means they get two therapies instead of radiation alone, and their side-effect burden is pretty much the same by a series of patient-reported outcomes and objective measures — so you haven't helped the patient any. And if they get the surgery and there are multiple risk features, or high-risk features, the patient gets surgery and chemotherapy and radiation, and the outcomes are worse. I can go in and do the surgery, but I don't know yet whether I've helped the patient — and what's my job? To help the patient. So we looked at these scores — the MDADI, a dysphagia index. Patients who don't get radiation have very little in the way of radiation-associated side effects; patients who get surgery plus radiation do about the same as radiation alone or chemoradiation; patients who get trimodality therapy do worse, as you'd anticipate. So we said we can model this with machine learning techniques — a modified Markov decision process, led by Andrew Schaefer's group at Rice. You can model these as conditional decisions: take the data we have about short- and long-term swallowing function and generate models that say, at a certain risk of postoperative extranodal extension, it makes absolutely no sense to do one modality or the other. In some cases, if it's very clear you're not going to have these risk factors, you should do surgery every time; and at any risk above about 30%, you should do radiation. So we have those models. The problem is that physicians can't predict whether the patient will need radiation afterward, because we can't predict those risk factors.
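A toy version of the threshold policy that falls out of such a decision model: compare the expected utility of up-front surgery (which may cascade into trimodality therapy) against definitive radiation, as a function of the predicted probability of extranodal extension. The utility numbers below are invented so the crossover lands near the ~30% threshold mentioned; they are not the published values.

```python
# Sketch: expected-utility threshold for surgery vs. definitive radiation.
def best_modality(p_ene,
                  u_surgery_alone=1.00,   # surgery, no adjuvant therapy needed
                  u_trimodality=0.50,     # surgery + chemoRT (worst outcomes)
                  u_radiation=0.85):      # definitive (chemo)radiation
    # If ENE appears post-op, the "surgery" branch becomes trimodality.
    eu_surgery = (1 - p_ene) * u_surgery_alone + p_ene * u_trimodality
    return "surgery" if eu_surgery > u_radiation else "radiation"

for p in (0.1, 0.3, 0.5):
    print(f"P(ENE)={p:.1f} -> {best_modality(p)}")
# With these toy utilities the policy flips to radiation at P(ENE) >= 0.30.
```

The structure of the policy is the interesting part: the whole decision reduces to one predicted probability — which is exactly what makes the next point, that humans cannot estimate that probability, so damaging.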
0:57:00.400,0:57:06.480 And it turns out they're terrible. It's a coin flip; nobody can tell. Humans are terrible 0:57:06.480,0:57:13.640 at it. But you know what's really good at it? AI. So Ben Kann and his group at Harvard have these 0:57:13.640,0:57:20.080 models of extranodal extension where they take the imaging data, they feed it into a neural net, and 0:57:20.080,0:57:27.440 they can get performance with AUCs of around 0.91, way better than humans. And in fact, when they tested 0:57:27.440,0:57:34.360 it against radiologists (this is the model in black, with an AUC of 0.86, and these are the human observers), it's 0:57:34.360,0:57:37.920 way better. We're not talking about "it's as good as humans"; we're talking 0:57:37.920,0:57:44.960 about humans being bad and this thing being good. So why don't we use it? Because the physicians are 0:57:44.960,0:57:51.120 scared, because they say, "Well, I'm not sure about this case." Now, in fact, they're not sure about any 0:57:51.120,0:57:58.440 cases, but they can't admit that. "What if it misses?" Okay, they hold the accountability for the patient. "I just 0:57:58.440,0:58:04.760 don't trust it like I trust my colleagues." Their colleagues are worthless at this; they're bad at 0:58:04.760,0:58:12.360 this. So it speaks to the problem that we have to move to not just methods that work, but methods 0:58:12.360,0:58:17.320 that are trustworthy. So in the last few minutes I'll talk about how I think we solve that problem. The 0:58:17.320,0:58:23.440 clinical problem is trustworthiness, or uncertainty estimation. We were talking last night: one thing 0:58:23.440,0:58:30.320 that AI methods can potentially do that your surgeon, or you, can't do is tell you 0:58:30.320,0:58:38.160 how unsure they are in a quantized way. So if I've got this system decision and I can tell 0:58:38.160,0:58:44.360 you that, hey, this is a wild guess, this is me just pulling it out of my back 0:58:44.360,0:58:51.920 pocket, this is me totally just YOLOing it, you're going to take that information differently than 0:58:51.920,0:58:59.160 if I say, "I am very sure; I know exactly; I've seen this 100 times." AI systems are the same way. 0:58:59.160,0:59:04.760 The problem with our current AI systems is that they give you a black-box estimator and say, there 0:59:04.760,0:59:09.360 it is, and they tell you how well they do over a group of patients, but they don't tell you their 0:59:09.360,0:59:16.520 own certainty estimate. With uncertainty-quantified methods, the whole value shift is that 0:59:16.520,0:59:22.040 you have this model performance and you say, well, yeah, I can tell you my model is at 80%, but I haven't 0:59:22.040,0:59:27.240 seen many patients like this one, so I'm not very certain about that one; but I'm real certain about this one, 0:59:27.240,0:59:35.120 I've seen that one a lot, I really know that case. If we think about that, I feel like without 0:59:35.120,0:59:42.320 uncertainty quantification, AI is stalled. And the reality is you can 0:59:42.320,0:59:49.200 use these uncertainty quantification techniques on the back end of any model: 0:59:49.200,0:59:54.960 it can be the fanciest deep-learning net you've ever had, it can be the dumbest random forest 0:59:54.960,1:00:01.240 you've ever seen, as elaborate or as simple as you'd like to make it. These uncertainty 1:00:01.240,1:00:06.360 quantification models are very, very flexible; they're just a backend you put on things.
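As a toy version of that "backend on any model" point, here is a minimal sketch of per-case uncertainty bolted onto an ordinary random forest, using disagreement between trees as the certainty signal. scikit-learn is assumed, the data is synthetic, and tree-vote dispersion is just one of many possible uncertainty measures:

```python
# Minimal sketch of a UQ backend on "the dumbest random forest".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def predict_with_uncertainty(model, x):
    """Per-case prediction plus a certainty estimate from tree disagreement."""
    x = x.reshape(1, -1)
    # Each tree votes; the spread of votes is a cheap per-case uncertainty.
    votes = np.array([tree.predict(x)[0] for tree in model.estimators_])
    p = votes.mean()                        # fraction of trees voting class 1
    uncertainty = 1.0 - abs(2.0 * p - 1.0)  # 0 = unanimous, 1 = coin flip
    return int(p >= 0.5), uncertainty

label, unc = predict_with_uncertainty(model, X[0])
print(f"prediction={label}, per-case uncertainty={unc:.2f}")
# The same report-per-case pattern applies to deep nets (e.g. via Monte Carlo
# dropout or ensembles); the forest is used here only because it is compact.
```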
1:00:06.360,1:00:12.000 I think this is going to be something that you start to see: instead of just reporting the AUC or a 1:00:12.000,1:00:18.120 p-value, you're going to see this moving forward. This is done in a lot of areas; this is 1:00:18.120,1:00:25.960 work by Kevin Zhou in Meta-Radiology, where they were looking at retinal alterations, 1:00:26.680,1:00:31.040 and again, the traditional deep model just gives you a single prediction, but 1:00:31.040,1:00:36.040 you're always going to be using a test and a training data set and then some 1:00:36.040,1:00:42.160 generalizable comparator, so why not just generate this prediction certainty? And you can do 1:00:42.160,1:00:47.840 that at the level of the case, you can do that at the level of the voxel; it's very, 1:00:47.840,1:00:55.160 very doable. This is really useful because it really is good at detecting out-of-distribution 1:00:55.160,1:01:00.920 data. So let's say I've got a great model for oropharynx cancers, a complex model that I've 1:01:00.920,1:01:07.880 just built, like the one I showed you, and I feed a larynx cancer case into it. Is it going to work well? I 1:01:07.880,1:01:12.800 don't know. But it's really helpful if the system can flag for me and say, hey, you just gave me 1:01:12.800,1:01:19.400 something I don't understand. If I shove a prostate case in, it had better know that there's an error. 1:01:19.400,1:01:25.040 Most of the systems we currently have don't have that built in as a process, and 1:01:25.040,1:01:30.080 these are simple architectural things that can be done. And there's a host of different methods; 1:01:30.080,1:01:34.400 you can choose your own adventure on which uncertainty method you want to use. There's 1:01:34.400,1:01:38.600 not one that's better than another; they're all very good. We've already started 1:01:38.600,1:01:43.200 using this, because of one of the things we found. You know, Laurence Court's 1:01:43.200,1:01:48.200 group, which you'll hear about, really revolutionized this: we don't contour organs at risk anymore outside the 1:01:48.200,1:01:53.760 skull base; it's a solved problem. But contouring tumors is hard, and contouring tumors is hard 1:01:53.760,1:01:59.520 because humans are bad at contouring tumors, and so it's hard to train the models. But what we can do is 1:01:59.520,1:02:05.760 take multiple different inputs and generate an uncertainty map, and so we can generate 1:02:05.760,1:02:13.000 a pre-labeled tumor segmentation that doesn't just say tumor or not-tumor but says, well, here I'm 1:02:13.000,1:02:19.160 sure this is tumor, and here I'm suspicious. That really helps the physician, because 1:02:19.160,1:02:25.040 now you're not turning over agency to the AI; you're getting information 1:02:25.040,1:02:31.160 inputs that allow you to make a good heuristic decision for your patients. Recently, Kareem Wahid in 1:02:31.160,1:02:36.680 our group did a nice scoping article that just came out, looking at these uncertainty 1:02:36.680,1:02:41.120 quantification techniques, and again, they're mostly things like failure detection, but I think this 1:02:41.120,1:02:45.800 is the wave of the future; this really is moving forward. This allows 1:02:45.800,1:02:51.160 you to talk about direct assessment in terms of competing hazards.
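The out-of-distribution flag the speaker asks for ("you just gave me something I don't understand") can be sketched as a distance gate in front of any trained model. This is a minimal illustration using only NumPy; the Mahalanobis distance and the 99th-percentile cutoff are assumptions for the sketch, not the speaker's method:

```python
# Minimal sketch of out-of-distribution flagging in front of a trained model.
import numpy as np

class OODGate:
    """Flag inputs that look nothing like the training distribution."""

    def __init__(self, X_train: np.ndarray):
        self.mean = X_train.mean(axis=0)
        self.cov_inv = np.linalg.pinv(np.cov(X_train, rowvar=False))
        # Threshold = a high percentile of training-set distances.
        d_train = np.array([self._dist(x) for x in X_train])
        self.threshold = np.percentile(d_train, 99)

    def _dist(self, x: np.ndarray) -> float:
        diff = x - self.mean
        return float(np.sqrt(diff @ self.cov_inv @ diff))  # Mahalanobis distance

    def check(self, x: np.ndarray) -> bool:
        """True = looks in-distribution; False = route to a human instead."""
        return self._dist(x) <= self.threshold

# Usage sketch: gate = OODGate(X_oropharynx_features)
# if not gate.check(x_larynx_case): flag "I don't understand this case".
```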
1:02:51.160,1:02:56.680 So if you have this uncertainty quantification, and you're really sure about your NTCP model for the parotid but you're not 1:02:56.680,1:03:05.160 really sure for the mandible, you can make those hazard distinctions not just on the 1:03:05.160,1:03:11.560 dose, but you can talk about modeling the different risk profiles and your uncertainty within them. So what 1:03:11.560,1:03:19.280 this allows, I think (this is from Kaiju Wong at the Hutch), is this kind of rapprochement 1:03:19.280,1:03:26.440 of the two cultures of Leo Breiman: whether you use ML or you use some other technique, a universal 1:03:26.440,1:03:32.680 interface and then some uncertainty quantification 1:03:32.680,1:03:38.160 or visualization on the back end becomes the way to bridge those together. So it doesn't matter how you get to the prediction model of choice; 1:03:38.160,1:03:42.640 you're going to have to tell me how certain you are, and you're going to have to help me visualize it. So 1:03:42.640,1:03:47.840 this is where graphical exploratory tools for decision support come in. So with Lisanne van Dijk we 1:03:47.840,1:03:52.720 took this outcome prediction tool and tried to make it into a GUI: how well can we cure the 1:03:52.720,1:03:59.040 tumor? So we used a fundamental clinical model, we put in optional imaging components, and we said 1:03:59.040,1:04:04.440 we want to stratify patients for intensified or de-intensified treatment: who should get TORS, who 1:04:04.440,1:04:10.240 should get standard therapy, and whose disease outcomes are as expected. And we had a pretty big data set: 1:04:10.240,1:04:16.880 3,000 patients from MDACC. We split these, and we modeled that with a thousand patients 1:04:16.880,1:04:26.200 from UMC Groningen and 500 from TCIA as external validation. So this is the training cohort, 1:04:26.200,1:04:31.720 these are the validation cohorts, and they worked really, really well. But what you can see is that 1:04:31.720,1:04:37.840 we've also quantified all the uncertainty on our estimators in terms of survival, so that when we 1:04:37.840,1:04:42.200 come to these risk groupings you can modify them or change them or threshold them over 1:04:42.200,1:04:49.120 time. They're not static; it's a dynamic tool. And the resulting 1:04:49.120,1:04:56.880 GUI allows you to input the variables and then generate risk scores or risk models that are 1:04:56.880,1:05:02.720 dependent on your stratification. So you can pick overall survival, you can stratify your risk, and 1:05:02.720,1:05:11.000 you can generate the outputs with real-time estimates. So in just the 1:05:11.000,1:05:16.000 last few minutes we'll talk about how we move that even further forward: moving from 1:05:16.000,1:05:21.560 web-based models to now visualizing all this information at once. So as you develop these AI 1:05:21.560,1:05:26.040 models, you need to push them into platforms or GUIs that allow you to track these things over 1:05:26.040,1:05:34.560 time. So this is Liz Marai's work, our human-machine longitudinal symptom-plotting software. 1:05:34.560,1:05:40.000 It allows us to look at individual symptoms by clusters of patients and identify individual side 1:05:40.000,1:05:47.600 effects. These are mucus scores on the MD Anderson Symptom Inventory, these are breath scores, plotting 1:05:47.600,1:05:55.640 individual patient predictions across whole groups; so again, very rich visualization matching very 1:05:55.640,1:06:01.320 rich data.
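To ground the "risk groupings with quantified uncertainty" idea, here is a minimal sketch that stratifies patients by a model's risk score and reports survival estimates with confidence bands rather than bare point estimates. It assumes the lifelines library; the synthetic data and the 0.5 cutoff are illustrative only:

```python
# Minimal sketch: risk stratification with uncertainty on survival estimators.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
risk_score = rng.uniform(0, 1, 300)            # model-predicted risk per patient
time = rng.exponential(60 / (1 + risk_score))  # synthetic follow-up, months
event = rng.uniform(0, 1, 300) < 0.7           # True = event observed

for name, mask in [("low risk", risk_score < 0.5), ("high risk", risk_score >= 0.5)]:
    kmf = KaplanMeierFitter()
    kmf.fit(time[mask], event_observed=event[mask], label=name)
    # Survival at 24 months with a 95% confidence band, not just a number:
    s = kmf.survival_function_at_times(24).iloc[0]
    lo, hi = kmf.confidence_interval_survival_function_.loc[:24].iloc[-1]
    print(f"{name}: S(24 mo) = {s:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# Because the bands are carried along, the risk-group thresholds can be
# revisited over time instead of being frozen into the tool.
```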
And this allows us to do things like the following: you can take all of your AI pre-processing dose 1:06:01.320,1:06:08.440 features, insert your model here, cluster the patients based on some visualization criteria, 1:06:08.440,1:06:13.360 and then compare those models across tests and make decisions about whether you should give 1:06:13.360,1:06:19.000 induction chemo, whether you should give surgery up front, whatever clinical decision has been 1:06:19.000,1:06:25.800 represented within your data set. And not only that, this is a dynamic presentation of the kinetics 1:06:25.800,1:06:32.320 of the tumor response and a dynamic representation of the kinetics of the side effects. So we 1:06:32.320,1:06:39.280 tested this using a virtual tumor board using reinforcement learning, where we fed patients to 1:06:39.280,1:06:44.720 the model, and we said, okay, we don't really understand exactly why certain patients are 1:06:44.720,1:06:48.880 getting induction chemotherapy and why they're not, and is it benefiting them? And we couldn't 1:06:48.880,1:06:53.560 tell, because there's this big confounder: if you have really, really bad disease, you get 1:06:53.560,1:06:58.360 induction chemotherapy, because the disease is so bad; but if you're too sick to get induction 1:06:58.360,1:07:04.440 chemotherapy, we can't give it to you. So there's this kind of bias in who gets it, and we don't 1:07:04.440,1:07:10.000 have randomized data at MD Anderson. So we ran it through this reinforcement step: we put all the 1:07:10.000,1:07:16.960 features in, we ran it through iterative decision agents, and we used this architecture of a treatment 1:07:16.960,1:07:23.600 simulator to run a virtual tumor board. And what we found is that we could make decisions as good 1:07:23.600,1:07:30.160 as physicians' and potentially improve side-effect profiles through patient selection. So this is a 1:07:30.160,1:07:34.760 real-time thing, and the nice part is you can have this running in the background of your tumor board; 1:07:34.760,1:07:41.200 every week you just update it. And so if there are changes in how you model the patients, or if a 1:07:41.200,1:07:45.520 new drug comes out, your system tells you: I don't have enough information, I'm uncertain, 1:07:45.520,1:07:51.000 so I'm just going to collect data until I can do that. So on the test we improved the predicted survival 1:07:51.000,1:07:55.600 rate and the dysphagia rate by marginal amounts, but we showed the safety and feasibility of 1:07:55.600,1:08:00.680 the system. So this is what the GUI looks like. We're running this in the background at tumor 1:08:00.680,1:08:06.120 boards and using it for patient decisions, and it incorporates both risk models for tumor 1:08:06.120,1:08:11.200 recurrence and models for toxicity, so instead of just thinking about xerostomia 1:08:11.200,1:08:15.640 and then separately thinking about survival, these things are collated. So I'll show a 1:08:15.640,1:08:22.760 brief demo of this digital twin. This is a demo of our system DITTO, which uses a digital-twin 1:08:22.760,1:08:27.600 simulation to help physicians plan treatment for head and neck cancer patients. We can see that 1:08:27.600,1:08:32.440 our system is divided into three components. In addition to our header at the top, we have 1:08:32.440,1:08:41.560 panels for information on our system as well as details on data provenance. On the left we can 1:08:41.560,1:08:47.160 see the input panel, where the users set the model parameters and patient features.
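The virtual tumor board pairs a learned decision agent with a treatment simulator. Below is a deliberately toy sketch of that shape: tabular Q-learning against a fake simulator whose states, probabilities, and reward numbers are all invented for illustration and carry no clinical meaning:

```python
# Toy sketch of a decision agent learning an induction-chemo policy
# against a stand-in treatment simulator (all numbers invented).
import random

ACTIONS = ["no_induction", "induction_chemo"]
STATES = ["low_burden", "high_burden"]

def simulate(state: str, action: str) -> float:
    """Fake simulator: reward blends tumor control and a toxicity penalty."""
    p_control = {("low_burden", "no_induction"): 0.85,
                 ("low_burden", "induction_chemo"): 0.86,
                 ("high_burden", "no_induction"): 0.55,
                 ("high_burden", "induction_chemo"): 0.75}[(state, action)]
    toxicity = 0.15 if action == "induction_chemo" else 0.0
    controlled = random.random() < p_control
    return (1.0 if controlled else 0.0) - toxicity

# Tabular Q-learning over the one-step decision.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha = 0.05
for _ in range(20000):
    s, a = random.choice(STATES), random.choice(ACTIONS)  # uniform exploration
    Q[(s, a)] += alpha * (simulate(s, a) - Q[(s, a)])

for s in STATES:
    best = max(ACTIONS, key=lambda a: Q[(s, a)])
    print(s, "->", best, {a: round(Q[(s, a)], 2) for a in ACTIONS})
# With these invented numbers the agent learns to reserve induction chemo
# for high-burden patients, the kind of selection effect described above.
```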
At the top we have 1:08:47.160,1:08:52.680 settings to determine the strategy the digital physician uses; by default, it attempts to imitate 1:08:52.680,1:08:57.400 what a physician would do, using our previous patients. We then decide what stage in the 1:08:57.400,1:09:02.160 treatment we are looking at; in this example, 1:09:02.160,1:09:08.800 we will look at whether the patient should receive chemotherapy alongside radiation. Users can also decide to fix other treatment decisions, but these 1:09:08.800,1:09:14.800 are otherwise decided by the model. Below that, we have input features for the new patients, using 1:09:14.800,1:09:20.720 either buttons or free text, which is validated by the system. At the bottom we have spatial 1:09:20.720,1:09:25.360 diagrams that are used to input the patient's tumor location and the location of affected lymph nodes. 1:09:27.840,1:09:34.440 Once all changes are queued, we can update our model and view the results. So, pause that there. 1:09:34.440,1:09:40.280 We can see 1:09:40.280,1:09:45.360 explicit tables of all prediction outcomes for the selected models. Going to the second table, 1:09:45.360,1:09:50.120 we can see a higher risk of hospitalization for certain toxicities for the treated group, 1:09:50.120,1:09:55.440 which may explain the lower survival risk when tumor-control risk is similar. Moving to the 1:09:55.440,1:10:01.400 right panel, we have additional views. At the top we can see the treatment recommendation 1:10:01.400,1:10:06.440 from the digital twin and from similar patients; in this example, both agree that the patient would 1:10:06.440,1:10:11.640 receive concurrent chemotherapy. Below that, we see feature importances to explain why the digital 1:10:11.640,1:10:17.080 twin made its decision. In this case, we see that race is actually the most important feature when 1:10:17.080,1:10:21.640 imitating what a physician would do, which may indicate bias in the data, as this patient was 1:10:21.640,1:10:28.080 African-American; the difference in the prediction is also enough to change the predicted treatment. 1:10:28.080,1:10:35.600 Additionally, we can look at similar patients; this view shows... So, I think, hopefully I've shown 1:10:35.600,1:10:39.920 you that there's a lot of work we need to do, but at the end of the day the view looks very good 1:10:39.920,1:10:44.680 for these computational models in radiation oncology. I want to kind of demystify 1:10:44.680,1:10:48.880 these, because I think you're going to start to see these AI advances, just as you've seen here, 1:10:48.880,1:10:55.400 not as the complex diagrams that you see in the Red Journal but as pretty easily usable GUIs that 1:10:55.400,1:11:00.200 update in the background of Epic. This will end up being something that looks, we think, more like 1:11:00.200,1:11:04.680 the clinical decision-support tools you get from the pharmacy than some elaborate process you 1:11:04.680,1:11:08.200 have to be an expert in. So what do you have to do? You have to do just what you did as a 1:11:08.200,1:11:12.360 caveman: do the thing that's safe for the group, take care of our patients, and always put 1:11:12.360,1:11:25.240 accountability and safety first. So thank you for your time; I appreciate the opportunity to talk to you. 1:11:25.800,1:11:35.040 So we have time for, sorry, we have time for a few questions, and actually we have those up 1:11:35.040,1:11:43.080 on screen. I'm going to translate what I think the first one says.
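The race-as-top-feature finding in the demo is the kind of thing a simple feature-importance audit can surface. Here is a minimal sketch using permutation importance on synthetic data; the feature names, the bias injected into the labels, and the 0.01 flag threshold are all illustrative assumptions:

```python
# Minimal sketch: auditing a physician-imitation model for reliance
# on a protected attribute via permutation importance (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(size=n),      # 0: tumor-stage surrogate
    rng.normal(size=n),      # 1: nodal-burden surrogate
    rng.integers(0, 2, n),   # 2: race (protected attribute)
])
# Synthetic labels that partly track the protected column, mimicking
# the historical-practice bias the demo surfaced:
y = ((X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.5, size=n)) > 0.5).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

names = ["stage", "nodes", "race"]
for name, imp in sorted(zip(names, result.importances_mean), key=lambda t: -t[1]):
    flag = "  <-- protected attribute is driving decisions" if name == "race" and imp > 0.01 else ""
    print(f"{name}: {imp:.3f}{flag}")
```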
1:11:43.080,1:11:51.560 It says: one question comes to mind, as there are examples of dimensionality-reduction models in the field of machine 1:11:51.560,1:11:58.520 learning; is there any reason you're highlighting specific methods over 1:11:58.520,1:12:06.680 others? This is besides the fact that deep networks function more reliably than traditional models for 1:12:06.680,1:12:11.960 reducing data, with vision-transformer models giving much better performance. Yeah, 1:12:11.960,1:12:16.880 so it's a great point. I think, just as you heard from the commenter here, your 1:12:16.880,1:12:21.120 model is going to depend on your problem and your data. So if you're talking about, 1:12:21.120,1:12:25.160 let's say, a heterogeneous dose model with a complex imaging biomarker, 1:12:25.800,1:12:29.640 you're probably going to do better with deep-learning techniques; they're just 1:12:29.640,1:12:34.760 better at handling voxelized data. On the other hand, there are some of these semantic models where 1:12:34.760,1:12:40.920 those things fall apart very quickly, or there are very sparse survival models where deep learning 1:12:40.920,1:12:47.120 does terribly. So what you're trying to solve determines the tool you need. No one would 1:12:47.120,1:12:51.600 say, "I've got to use a t-test for my survival curve"; that would sound insane. Which is 1:12:51.600,1:12:57.000 better, the t-test or the survival curve? What's the problem we're trying to solve? So I think it's 1:12:57.000,1:13:03.360 imperative that you pick the approach that's tailored to the complexity of 1:13:03.360,1:13:09.720 your data. So I totally agree with the comment there. Yeah, and actually the second 1:13:09.720,1:13:17.640 and third are somewhat overlapping, but this one's easy, because you get to predict the future, 1:13:17.640,1:13:24.640 and we're not there yet, so you're okay whatever you say: what future advancements in AI do you 1:13:24.640,1:13:32.640 foresee having the greatest impact on radiation therapy? So I would 1:13:32.640,1:13:38.640 basically say the greatest impact in terms of our workflow will be automation: 1:13:38.640,1:13:44.840 high-complexity, low-performance human tasks like image segmentation are going to be less of your practice. 1:13:44.840,1:13:51.120 If you feel like your job is to draw circles on DICOM images, that is increasingly 1:13:51.120,1:13:55.640 going to be less a part of your job. I think it's also going to change the workflow, because you're 1:13:55.640,1:14:03.160 going to be more like someone who takes rich model inputs and makes real-time decisions. 1:14:03.160,1:14:07.080 What you're going to start seeing... I mean, one of the things we worked on is adaptive replanning 1:14:07.080,1:14:13.600 using Markov decision processes, so that every day there's a decision for the physics team and the 1:14:13.600,1:14:18.120 physician: do I replan this patient? So the real thing that's going to change is that your workflow 1:14:18.120,1:14:22.680 is going to be more like monitoring. The example I give to folks is that, 1:14:22.680,1:14:28.920 in the old days, the anesthesiologist sat there with a pump; now the anesthesiologist 1:14:28.920,1:14:32.760 sits in front of a series of monitors, paying attention to lots of different rich 1:14:32.760,1:14:37.600 inputs. Our workflow is going to change through automation.
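The daily "do I replan this patient?" decision can itself be framed as a small Markov decision process. The sketch below uses finite-horizon value iteration; the two states, the drift probability, and the cost numbers are invented purely to illustrate the framing, not drawn from the speaker's work:

```python
# Toy sketch: daily adaptive-replanning framed as a Markov decision process.
STATES = ["aligned", "drifted"]      # does today's anatomy still match the plan?
ACTIONS = ["treat_as_is", "replan"]
P_DRIFT = 0.2                        # chance anatomy drifts between fractions
COST = {("aligned", "treat_as_is"): 0.0,
        ("drifted", "treat_as_is"): 1.0,  # dosimetric penalty for treating off-plan
        ("aligned", "replan"): 0.3,       # replanning effort, no dosimetric gain
        ("drifted", "replan"): 0.3}       # effort, but it re-aligns the plan

def next_state(s: str, a: str) -> dict:
    """Transition distribution: replanning re-aligns; otherwise drift persists."""
    base = "aligned" if a == "replan" else s
    if base == "drifted":
        return {"drifted": 1.0}
    return {"aligned": 1 - P_DRIFT, "drifted": P_DRIFT}

def policy(fractions_left: int) -> dict:
    """Finite-horizon value iteration: best action per state for the day."""
    V = {s: 0.0 for s in STATES}
    best = {}
    for _ in range(fractions_left):
        newV = {}
        for s in STATES:
            q = {a: COST[(s, a)] + sum(p * V[s2]
                                       for s2, p in next_state(s, a).items())
                 for a in ACTIONS}
            best[s] = min(q, key=q.get)
            newV[s] = q[best[s]]
        V = newV
    return best

print(policy(fractions_left=30))  # e.g. {'aligned': 'treat_as_is', 'drifted': 'replan'}
```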
I think the biggest clinical impact, 1:14:37.600,1:14:42.720 and this is my personal opinion, is automated planning with biomarkers; I think 1:14:42.720,1:14:49.640 that is going to be a game-changer. Once you can model immune modulation and radiation effects, I think 1:14:49.640,1:14:55.200 that is potentially disruptive. Right now we're just talking about dose response in simple 1:14:55.200,1:15:00.480 models; once you move into immuno-radiation modeling and personalized dosing, the PULSAR 1:15:00.480,1:15:04.960 stuff Bob Timmerman's doing, a patient might walk in and you say, well, you're going to get nine gray 1:15:04.960,1:15:09.000 today and seven gray tomorrow and three gray in a week. I think that's going to revolutionize the 1:15:09.000,1:15:19.360 game. And one that just came in: what is your view of AI large language models making an impact 1:15:19.360,1:15:27.240 in radiation oncology? Yeah, it's going to be huge. LLMs especially (we've all seen ChatGPT) 1:15:27.240,1:15:34.600 are going to be kind of a front end that you load 1:15:34.600,1:15:40.200 onto all the other models. So a good example is these segment-anything models, where you just put 1:15:40.200,1:15:47.280 an image in and it decides what it is; they can use data that's gleaned from text in addition to DICOM 1:15:47.280,1:15:53.400 or picture data. So as you start to aggregate all of this text information through LLMs, it's going 1:15:53.400,1:15:58.720 to incrementally tweak, with every iteration, how well all the other models you 1:15:58.720,1:16:04.080 build work. So you're going to have this base image model, and the LLM is just going to juice it 1:16:04.080,1:16:09.160 every time it gets larger inputs. The way I describe 1:16:09.160,1:16:20.280 it to people is that LLMs are an accelerant on all the other models you're going to have. Oh, questions from the 1:16:20.280,1:16:30.080 audience? Really great talk, and it also reminds me a lot of the discussion last night, 1:16:30.080,1:16:40.240 but just extending from the LLM models, it kind of makes me want to ask you: do all these AI models know 1:16:40.240,1:16:48.680 what they are not able to do? Yeah, so the beautiful part about these is, 1:16:48.680,1:16:55.480 especially LLMs, LLMs are essentially probabilistic models, right? So they can also tell 1:16:55.480,1:17:02.640 you how close they think they are to whatever your baseline state is and how derivative they are of 1:17:02.640,1:17:08.440 the underlying data, and that's very helpful. So you can actually say, hey, you've given me 1:17:08.440,1:17:14.600 this result; how sure are you? The interesting thing is, and I'll give a good example, we 1:17:14.600,1:17:19.320 had a case the other day that was an ameloblastoma. We don't see ameloblastomas; we don't 1:17:19.320,1:17:28.960 have data on them. Would I trust the prediction of an LLM on ameloblastoma? No. Why? 1:17:28.960,1:17:34.080 Well, because I don't trust myself, and we don't know. So I think you start to get 1:17:34.080,1:17:39.520 to these portions where the real problem becomes that these models have about as much certainty 1:17:39.520,1:17:44.120 as whatever you've trained them on, and if the data doesn't exist, where you're going to see the 1:17:44.120,1:17:49.760 real problem is in the out-of-distribution, weird cases, the case that's not a bread-and-butter case.
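As a loose illustration of the "LLM as a front end that juices other models" idea, here is a sketch in which text-derived features are fused with an image model's output before a final prediction. Everything here is an assumption: extract_text_features is a hypothetical stand-in for an LLM call (faked with keyword checks so the sketch runs), and the data is synthetic:

```python
# Illustrative sketch: text-derived features augmenting an image model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_text_features(note: str) -> np.ndarray:
    """Hypothetical LLM front end mapping a clinical note to numeric features.
    Faked here with keyword checks purely so the sketch is runnable."""
    return np.array([float("extranodal" in note), float("T4" in note)])

# Pretend per-case image-model outputs (e.g., a tumor-risk score):
image_score = np.array([[0.2], [0.8], [0.7], [0.3]])
notes = ["T2 tumor, no extranodal spread",
         "T4 tumor with extranodal extension",
         "T4 tumor", "T2 tumor"]
labels = np.array([0, 1, 1, 0])  # synthetic outcomes

# Fuse: concatenate text features onto the image feature, then fit a model.
X = np.hstack([image_score, np.vstack([extract_text_features(n) for n in notes])])
fused = LogisticRegression().fit(X, labels)
print(fused.predict_proba(X)[:, 1].round(2))
# The image model is untouched; the text front end simply adds signal,
# which is the "accelerant" role described above.
```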
That's 1:17:49.760,1:17:53.920 when the physician is really going to earn their money. You're not going to earn the money 1:17:53.920,1:17:58.120 on the bread-and-butter easy case that everybody knows what to do with; you're going to earn it on that 1:17:58.120,1:18:01.760 weird case, or that patient who has something different. So recognizing why your patient is 1:18:01.760,1:18:10.640 your patient, and not the model's, is going to be a core physician and physicist skill. So, Doctor, what ethical 1:18:10.640,1:18:17.960 considerations should be taken into account when we use AI with radiation? So let me restate that: what 1:18:17.960,1:18:25.040 ethical considerations do we need to consider when using AI to help in decision making? So I 1:18:25.040,1:18:32.640 struggle with this immensely, because in some ways, with the utilization of an AI with uncertainty 1:18:32.640,1:18:39.720 quantification, I know how good or bad it thinks it is; I don't know how good or bad my physician 1:18:39.720,1:18:45.320 colleagues are. So when we talk about the ethical considerations, at the end of the day the ethical 1:18:45.320,1:18:50.960 responsibility always lies with the physician. Our role in healthcare is less about being the smartest 1:18:50.960,1:18:56.160 person in the room and more about being the person who's willing to take responsibility for the patient in 1:18:56.160,1:19:02.800 an ethical sense: I am doing this thing for you so I can first do no harm; I am taking 1:19:02.800,1:19:08.640 ethical and fiduciary responsibility for you as my patient; and you, the patient, will accept 1:19:08.640,1:19:14.080 the risk that I am willing to take as a human being. That's a human and relational role that 1:19:14.080,1:19:18.000 is never going to pass from the physician, just like billing will never pass from us. 1:19:18.000,1:19:21.440 So as long as there's billing there's going to be a physician, but as long as there is an ethical 1:19:21.440,1:19:27.520 responsibility, the buck stops with me. So if you use a model that you don't have sufficient 1:19:27.520,1:19:32.400 trust in, it's like referring to a colleague who's good or referring to a colleague who's bad. Whose 1:19:32.400,1:19:37.440 responsibility is that? That's mine. If 1:19:37.440,1:19:41.440 I send you to a physician I don't trust and you have a bad outcome, whose fault was that? I would 1:19:41.440,1:19:48.440 feel it's mine. So one of the really encouraging things I heard is when you said, 1:19:48.440,1:19:55.440 you know, I won't have to contour circles; I figure I just got eight hours back on the 1:19:55.440,1:20:02.040 weekend. So that was... Well, when we rolled out auto-segmentation for organs 1:20:02.040,1:20:06.840 at risk, that literally saved me somewhere between four and six hours a week. I mean, 1:20:06.840,1:20:15.240 that's half a day of my work that appeared back. So, well, I want to thank you for a great presentation.