Scaling Asset Performance Management: Insights from GE Vernova’s Monitoring & Diagnostics Center

GE Vernova

For utilities, the need to transition from time-consuming manual processes to lean and efficient systems is clear. But any utilities operator will know this is far from straightforward.


In this on-demand webinar, you’ll see how GE Vernova’s Gas Power monitoring and diagnostics center (M&D) — which spans over 1,000 power plants — transitioned to efficient processes using GE Vernova’s Asset Performance Management (APM) software.


Hear from Ben Myers (Global Monitoring & Diagnostics Leader) at GE Vernova’s Power business and Rahul Chadha (VP of Technology) at GE Vernova’s Software business as they discuss:


• How APM enables GE Vernova’s Gas Power to help assure reliable power for more than 400 million people globally.
• How APM’s deep insights from the fleet level to individual plants are leveraged to improve processes.
• Real-world examples of how APM in the cloud is used to prevent equipment failures through predictive maintenance.

--
TRANSCRIPT

Hello and welcome to today's webcast, titled "How GE Vernova's Monitoring and Diagnostic Center Scaled APM to More Than a Thousand Power Plants." I'm Aaron Larsen, executive editor of Power Magazine. I'll be moderating today's program, during which we'll hear from Ben Myers, Global Monitoring and Diagnostic Leader with GE Vernova Gas Power, and Rahul Chadha, Vice President of Technology with GE Vernova Software.
 
The presentation is expected to last about 30 to 35 minutes and will be followed by a Q&A session. Before we get started, I'll run through a few housekeeping items. In the webinar platform, you should see a chat area and a Q&A area. If you are experiencing any technical difficulty, you can ask for help using the chat function and our production staff will assist you. To submit questions for our speakers, please enter those in the Q&A section rather than in the chat area. You can enter questions at any time during the presentation, and we'll answer as many as possible at the end of the program. Any that we don't get to during the live session will be answered via email after the webinar.
 
Today's presentation will be archived on our server for up to a year, and future viewing will remain free of charge. You can use the same URL to reach the archive program as you did to reach the live program. PowerPoint slides will also be available upon request. A certificate of completion for professional development hours will be sent via email to every registered participant who attends. Before we begin, I'd like to thank GE Vernova for underwriting today's program. The company's generosity allows everyone to attend the presentation at no cost. And with that, Ben and Rahul as the experts, can you show us what's under the hood?  
 
Thank you so much, and good morning, good evening, good afternoon, everyone. Really excited and honored to be on the stage with all of you, representing our APM software together with monitoring and diagnostics. Switching to the next slide, I do want to start with who we are: Vernova. This is our new name, GE Vernova. You'll see our ticker symbol on the 2nd of April. We are a collection of multiple businesses. We divide ourselves into the power business, which is the first line at the top of the chart: steam power, gas power, nuclear, hydro. Ben Myers, my colleague, is from the gas power business. Then we have our renewables business, which consists of onshore and offshore wind. Then we have the electrification business, which is grid solutions and power conversion. And the software business: we just changed our name, and sorry, we have not updated our slide yet, but we are GE Vernova Software, which I am part of.
 
Team, I do want to start with a very high-level view of what we are going to be presenting today, showing and telling as well. This is our flagship software, Asset Performance Management, or APM, which is positioned to help all of our heavy-duty industrial segment customers with the energy transition challenge that we all face. We do have several modules; each block here is a module within the software. The one we will be deep-diving into today, which Ben uses a whole lot, is Reliability and Reliability Plus, which is basically condition monitoring and predictive monitoring software. We do have a module for emissions and generation management; I definitely promise to come back and discuss a lot more about that, but that's not our focus area. Today, our focus is going to be the APM and Reliability Plus conversation. With that, I pass it on to my colleague Ben.
 
Thanks, Rahul, and good day, everybody. What I want to speak to is the journey we've been on here at GE Vernova Gas Power in terms of being able to deliver engineering insights at scale using digital technology. We've been at this for over 25 years. The very earliest days, with the F-class machines, were really focused on just a few failure modes that the business was intent on understanding. It was as simple as a modem and an Oracle database, all very manual. Think of the dial-up kind of days and the constraints that we had there. As the business grew and the F-class fleet grew, with tons of growth in this space, we actually moved our monitoring from Schenectady, New York, to Atlanta. Mr. Chadha, Rahul, was involved with that move years ago, and it's grown ever since.
 
Many of you may have visited us in Atlanta at our monitoring center, or one of our other monitoring centers around the world. Over the last 20 or so years, we've followed the GE fleet in terms of its growth, its expansion, and newer, more sophisticated technologies like the HA gas turbines, but we've also expanded into other centerline types of equipment (steam turbines, generators, aeroderivatives, other OEM machines) and beyond the centerline equipment all the way to the balance of plant, looking at ways we can accelerate value creation for plant operators through these insights. About two years ago, we migrated from very much homegrown systems for monitoring to our software team's APM solution. So we went from centralized, on-prem data center solutions that were fully managed by GE to a cloud-based solution with full web interfaces for how the team interacts and is involved in collecting data and monitoring these machines today. So with that, how do we actually do this?
 
And what are we really doing? Again, it's engineering insights at scale. What I mean by that is we're developing analytics that do prediction and diagnostics of these machines and their failure modes, really focused on the power plant and within the fence. What this means is over a thousand sites monitored, over 6,000 machines and assets that we connect to today, and we're streaming over 15 billion data points every day. Some machines may have up to 200 analytics, and orchestrating these analytics, which run anywhere from every five minutes to once a day, means over a million analytic runs each and every day. But we do this with a team of 200 engineers in total, and the monitoring team consists of only about 40 engineers. To do this you need very efficient technology as the underlying platform, and that's what we really want to speak to in terms of how we've achieved it. With that, one thing I want to emphasize is that doing this at scale has meant influencing what these products and underlying technologies look like. We have needs that really stress test these types of systems.
 
And so we're always looking to continuously improve. I'll talk through some of the processes where we're using these systems and point out some of the areas where we continually look for improvements. So, the general construction of our architecture for how we do monitoring: this is not unlike monitoring of things you have in your personal life, connecting into the control systems. We're always focused on security; it is our number one priority. That means ensuring we have the right levels of defense, multiple levels of defense, in place to secure your data as it's transferred and to secure your systems as we're connecting into these plants. We're streaming this data to the cloud, and that's where we're both collecting and storing data, but we're also using the cloud for all our analytics operations. So we're orchestrating these million-plus analytic runs in the cloud. What I mean by orchestration is knowing exactly what analytic needs to run, exactly when, using what data, for a given serial number. There's a ton of back-end support and platform needed just to set all of that up.
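For illustration only, the orchestration bookkeeping Ben describes (which analytic runs when, on which data, for which serial number) might look roughly like the minimal Python sketch below; the class names, fields, and intervals are assumptions, not the actual APM implementation.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class AnalyticSchedule:
        asset_serial: str          # e.g. a specific gas turbine serial number
        analytic_name: str         # which analytic to run
        interval: timedelta        # anything from five minutes to once a day
        input_tags: list[str]      # sensor tags the analytic needs
        last_run: datetime         # when it last executed

    def due_runs(registry: list[AnalyticSchedule], now: datetime) -> list[dict]:
        """Return the work items an orchestrator would dispatch right now."""
        work = []
        for entry in registry:
            if now - entry.last_run >= entry.interval:
                work.append({
                    "asset": entry.asset_serial,
                    "analytic": entry.analytic_name,
                    # each run gets exactly the data window it needs
                    "window": (entry.last_run, now),
                    "tags": entry.input_tags,
                })
                entry.last_run = now
        return work

    # Example: one hypothetical combustion-dynamics check every 5 minutes on one unit.
    registry = [AnalyticSchedule("GT-0001", "lean_blowout_check",
                                 timedelta(minutes=5),
                                 ["combustor_dynamics_psi", "load_mw"],
                                 datetime(2024, 1, 1))]
    print(due_runs(registry, datetime(2024, 1, 1, 0, 10)))

Multiplied across thousands of assets and hundreds of analytics per machine, a schedule like this is what produces the million-plus daily runs mentioned above.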
 
And we've set that up in an automated fashion, so it's seamless, not only to run every day but also in terms of adding additional assets, which we're doing every day, every week. Where this all comes together, though, is the general user interface. This is really a common platform across the software team's APM product in terms of how we do visualization, and what it ultimately looks like for the team is the monitoring of all these power plants. Each dot represents a power plant across the globe that we're monitoring, and again, gigawatts of power: at the moment, over 179 GW online that we're monitoring. What this also allows us to do is respond quickly when a plant has an anomaly. The services we offer include not only the monitoring and diagnostic checks with predictive analytics, but also trip-type support. So a unit trips while online; we see that within seconds and are able to respond, with our engineers connecting with the site and having a dialog with each operations room.
 
What makes that dialog very rich are two things: the data we're able to stream from the machine, from the controller, and the standard work we've built around many years of these trips across a fleet of machines. One thing I would emphasize about what APM is allowing us to do is workflow these kinds of activities. It allows us to workflow not only the analytics, when they fire and say, hey, there's an anomaly that an engineer should check out; it also allows us to immediately visualize the data set associated with that anomaly. It collects all the information around the asset, so we understand the context not only from the operational data standpoint but also from the asset itself. And then we're able to do this with respect to trips tens of thousands of times a year, and with respect to analytic alerts, it's over 40,000 alerts that we receive and our engineers process every year.
 
The really interesting aspect of this is being able to not only do that, but do it at a very centralized location, so we can take the best practices we learn from each individual site and each plant and scale them across the entire fleet, all through this central team, and also centralized in terms of orchestration of the workflow. The other aspect, though, is that we can get down to the customized level. Where we need to do something bespoke for a unique situation around a plant or an asset, we're able to tailor instructions for the team and analytics for that specific asset. So it's fleet-level insights that we're able to provide, but it's also very customized to individual machines and their needs where that's needed. And here's an example of where we're doing this today; I wanted to give something very relatable.
 
This one is a gas turbine combustion system. We're monitoring, effectively, the acoustics or the pressure wave created by the combustion, and at what we'd call low dynamics, or relatively low frequencies, the combustor starts indicating that it's becoming unstable. We call it a potential lean blowout situation. What the team is really trying to do is not replicate what the operators are doing, but supplement what the operators may have challenges seeing, or what may be impossible or very difficult to see. In this situation, it may be a subtle shift in tones or in these dynamics that the analytics are able to pick up. And of course, we don't have engineers watching traces of data go across the screen; it's the analytics that are able to watch a thousand different machine sites all at once, and this really amplifies the effectiveness of the engineering team that I work with every day. The other aspect of this is that it's really difficult, when you're looking at control room HMI-type screens, to see a subtle shift in something that's occurring with the operability of the machine. What our analytics are capable of doing is looking at three things occurring at once.
 
Each may be a very subtle change, but three different things together help you triangulate what the failure mode may be and confirm there is, in fact, a problem. In examples like this, if you were to see a very slight change in vibration on a gas turbine or steam turbine rotor and then associate that with the performance of the machine changing, those subtle changes happening simultaneously indicate that you may have mass loss within the rotating steam path or, say, the compressor section of a gas turbine. That's really difficult to see with the human eye. It's these triangulations that the analytics are capable of doing that make this really powerful, along with the physics that you can bake in with what we would effectively call a digital twin.
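As a rough sketch of the "triangulation" idea Ben describes, only flagging when several subtle, independent deviations occur together, the rule below shows the general shape; the signal names, residual values, and thresholds are illustrative assumptions, not GE Vernova's actual analytics.

    def triangulate(residuals: dict[str, float],
                    thresholds: dict[str, float],
                    min_corroborating: int = 3) -> bool:
        """Flag an anomaly only when several independent signals deviate together.

        residuals  -- deviation of each signal from its expected value
                      (from a physics model or a learned baseline)
        thresholds -- per-signal deviation considered subtle but real
        """
        exceed = [name for name, r in residuals.items()
                  if abs(r) >= thresholds[name]]
        return len(exceed) >= min_corroborating

    # Illustrative case from the talk: a slight vibration shift plus performance
    # and output shifts together may point at compressor or steam-path mass loss.
    residuals = {"bearing_vibration_mils": 0.4,
                 "compressor_efficiency_pct": -0.6,
                 "output_mw": -1.8}
    thresholds = {"bearing_vibration_mils": 0.3,
                  "compressor_efficiency_pct": 0.5,
                  "output_mw": 1.5}
    print(triangulate(residuals, thresholds))   # True: three subtle shifts co-occur

Requiring corroboration across signals is what lets the analytics catch changes too subtle for any single HMI trend while keeping false alerts manageable.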
 
And of course, having designed these turbines, we design around physics, running analyses on components and at the system level. We're able to take those analysis models and run them simultaneously while the machine is running, creating that digital twin of how the machine should be operating and comparing it with how it is actually operating to see anomalies. And so with that, I'll hand it over to Rahul. Very impressive, Ben. Thank you very much. Team, what I want to share next is the analytics that Ben just touched upon: blueprints. There are two models in our APM software. One is that, with our journey so far, we have a package of analytics that we can bring to your door; we call it the predictive monitoring analytics package.
 
SmartSignal is a very well-known industrial name, and it has a lot of analytics blueprints already developed for different kinds of machines. The other model is bring your own analytics. A lot of our colleagues and a lot of our customers, even Ben, have been developing analytics; Gas Power has been developing analytics for the last 20 years. So they have analytics written in Fortran that need to be executed, as well as analytics written in the latest and greatest LLM models, or Python, or Java. The software does allow you to bring your own packages, deploy them, and use the production workflows to get the insights. We have many, many analytics packages under the hood. Technology-wise, there's image analytics, definitely; there's a lot of physics-based analytics; there's applied statistics; there is machine learning. We have several algorithms that different customers are using to get the specific insights they are looking for. Here is a quick view of our digital twin blueprints. This is primarily based on SmartSignal, which I mentioned, our out-of-the-box predictive monitoring package.
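For the bring-your-own-analytics model Rahul describes, a custom analytic typically reduces to a function with a well-defined input/output contract that the platform invokes on a schedule. The entry-point name and payload shapes below are assumed for illustration; they are not APM's actual plug-in API.

    import statistics

    def run_analytic(timeseries: dict[str, list[float]],
                     config: dict) -> dict:
        """Hypothetical custom-analytic entry point.

        timeseries -- tag name mapped to samples for the evaluation window
        config     -- per-asset tuning (limits, asset metadata, and so on)
        Returns an alert payload the workflow layer could turn into a case.
        """
        tag = config["tag"]
        samples = timeseries[tag]
        mean = statistics.fmean(samples)
        limit = config["limit"]
        return {
            "alert": mean > limit,
            "tag": tag,
            "observed_mean": round(mean, 3),
            "limit": limit,
        }

    # Example invocation the platform might make once per evaluation window.
    print(run_analytic({"exhaust_spread_degF": [42.0, 44.5, 47.0, 51.5]},
                       {"tag": "exhaust_spread_degF", "limit": 45.0}))

Whether the math inside is Fortran ported forward, Python, Java, or a trained model, the platform only needs this kind of contract to schedule it and route its output into the alerting workflow.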
 
We have two-plus million twins. These cover different kinds of equipment and different makes and models; this is OEM-agnostic. We can monitor, for example, heavy-duty gas turbines from OEMs across the world, including GE machines. Similarly, there is a number of assets, almost 7,000 (this is a slightly older slide, so it says 6,500), that are monitored by our IMS team, a service we provide so that when customers are starting the journey, they can lean on or get trained by experts who have done this for years, and then we pass it on to them. Some of the benefits are on the lower part of the slide: a cumulative benefit of $1.6 billion. This is available on our website by industry; you will see industries like power generation, oil and gas, chemical, aerospace, and food and beverage. This technology applies wherever there's an asset-heavy industry. And there's a rough figure for the number of catches per week.
 
With that, I do want to jump into the software itself, because I know you all are very excited to see what it looks like. So, Ben, perhaps you start with your view, and then I'll bring it back with the platform view. Yeah, sure. Let me go back to the map. I'm not going to deep-dive through all the aspects of APM, because it's quite comprehensive, as is what the team does, but maybe I'll walk you through a real-world example of how the team would operate. What we get excited about is some of the integrations the tool does in addition to what's built in. So, for example, the lean blowout on a gas turbine that I talked to earlier: what that's going to do is generate an alert for our engineer.
 
What we have set up is multiple tiers of engineers supporting this. We use a frontline team to accept the alert in the workflow in APM and then begin to triage it. They have standard work to understand: is this something that is real, or is this potentially an anomaly associated with just a fluctuation in environmental conditions that's okay? They also have a choice as to whether they need to pass it on for more in-depth review. So, sitting in GE Vernova Gas Power, I've got my team of a dozen people working on the front line, I've got 30-plus additional engineers who can support them with more in-depth review, and I've got 4,000 more engineers supporting us who are designing and helping maintain these power plants every day.
 
What we do to escalate this across that ecosystem of engineers is integrate our APM solution, in the workflow, with the ServiceNow platform we use across engineering for task management. This allows us to go from the team dedicated to monitoring, focused in APM, into the platform where the other engineers or other personnel across the business who may support us do their business every day. And this is really powerful for us in terms of not having to swivel-chair between tools and knowing where you do your work. As I mentioned, as we do this, it all links back to an APM case, and that case contains all the details on the asset, the situation, and the trend data available behind it. And maybe, Rahul, as you jump into this, I don't know if you have a case or any sort of trend data; I can pull that up if the team would like to see it. That would be very nice.
 
And Ben, while you're pulling that up, if you can also show an integration; perhaps the one you're using is ServiceNow. We can come back to that. So while Ben pulls that up, there will be a little flicker on the screen, because now I'm going to share my screen to show you under the hood how this works, a little bit of the secret sauce. Please let me know when you can see my screen. Got it. Yep. So team, what you see here is a quick view of our platform over the last day. As you can see, these are very, very big numbers. I understand when somebody asks, what's the frequency of the data? We are ingesting almost 1.9 trillion data points every single day, so I've just taken a one-day view. As you can see, this platform consists of several technologies under the hood that function in tandem, in real time, ingesting all those data points. That's almost 400 GB of online storage when you talk about the daily ingestion side.
 
This is made up of 21 million tags, sensors we are streaming from across the globe. One of the heaviest usages is definitely Ben's, the Gas Power usage, which I'll come to. This is my entire system in one view, running in our US West PoP; APM is hosted in two PoPs today, one in America and one in Europe. These are all my production tenants, 151, or 152 today, that are actually connected live and in use. M&D is one of our biggest tenants and definitely one of our biggest users as well, but we do have other industrial giants from power generation as well as the oil and gas and chemical industries, as I was mentioning earlier, who are using the same capabilities. Now, from this big view, I do want to jump into a bit of the M&D view that you have been seeing from us, if I can figure that out quickly. Let me just switch screens. Right here. Share.
 
Now, what this is, is a narrowing from those 152 tenants to exactly one tenant and its data points. Where I want to start is, again, Ben mentioned 13 billion points; Ben, yesterday you ingested 45 billion points. So that's your ingestion. It goes up and down, and it also depends on the number of generating units. Every green dot on the map that Ben was sharing is a generating unit. We do have a lot of standby units; on standby units the data ingestion is a little less, on generating units it is a little more, so it fluctuates. Then there's the number of queries that are fired: 107 billion queries, you read that right. A lot of that is automated systems running the analytics packages we talked about, but all of this data is used across the globe by all of Vernova Gas Power's engineers, whether they are using it for monitoring, for their analysis, or for working with customers; externally, we also use it for NPI development for our next-gen machines. A lot of this data gets used for that.
 
So that's how many analyses are being run. As you can see, it's a large usage and a large number of packages, and it takes an army to keep it running as well. Where we want to help our customers is to accelerate that journey and not have them go through the learning curve that we have gone through. All of this comes as a package with the APM SaaS model, and with the APM on-prem model as well. That's how we partner with our customers in accelerating that journey. With that, Ben, I will pass it back to you, and then we'll get into the Q&A. Yeah, I'll jump back into APM real quick here. I apologize about the glitch; just give it a minute. So again, just quickly, this is a view that one of the engineers might receive in terms of the alert that's been fired. In this case it's, again, a combustion issue.
 
What they quickly go and do is click into the trends associated with this, and they also have their standard work built into the system. So they're able to go look at the trends and see that real-time data; they can manipulate it to see different windows, but this is the standard window we've determined for this particular issue. Then what they may do is go and create a case where they're going to close this out. This, again, is our mechanism to either escalate into deeper paths of engineering or to close out the case with notes that may get reflected back to the site. What that looks like is a button click to open a ServiceNow case that includes not only the details around the alarm that occurred, but also details around the machine and so on. So this is something I just want to emphasize in terms of what we're able to do with respect to integrations.
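The "button click" Ben mentions amounts to pushing the alert context into the ticketing system; a minimal sketch using ServiceNow's standard Table REST API is below. The instance name, credentials, and field mapping are placeholders, and the actual APM integration is configured in the product rather than hand-coded like this.

    import requests

    def open_servicenow_case(alert: dict, instance: str, user: str, password: str) -> str:
        """Create a ServiceNow incident carrying the APM alert context."""
        url = f"https://{instance}.service-now.com/api/now/table/incident"
        payload = {
            "short_description": f"APM alert: {alert['analytic']} on {alert['asset']}",
            "description": (
                f"Asset: {alert['asset']}\n"
                f"Analytic: {alert['analytic']}\n"
                f"Observed: {alert['observed']}\n"
                f"Trend link: {alert['trend_url']}"
            ),
        }
        resp = requests.post(url, json=payload, auth=(user, password),
                             headers={"Accept": "application/json"}, timeout=30)
        resp.raise_for_status()
        # sys_id of the created record, useful for cross-linking back to the APM case
        return resp.json()["result"]["sys_id"]

    # Hypothetical usage (all values are placeholders):
    # sys_id = open_servicenow_case(
    #     {"analytic": "lean_blowout_check", "asset": "GT-0001",
    #      "observed": "dynamics tone shift", "trend_url": "https://apm.example/case/123"},
    #     "mycompany", "svc_apm", "********")

Carrying the asset, analytic, and trend link in the ticket is what removes the swivel-chair step between the monitoring tool and the broader engineering workflow.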
 
The way my business process works is that we communicate to you all, as operators, what we would recommend in terms of solutions. What gets really interesting with the APM product, and it doesn't fit my process because I'm not operating the plants, is being able to integrate this not only with the workflows we use for engineering support or diagnosing an issue, but also with the workflows you may use for doing your actual maintenance, like an SAP or a Maximo system. That gets really powerful in terms of closing the loop: we found this anomaly, we worked through it and troubleshot it, and as an engineering team we recommend going and doing this.
 
We'd suggest changing out this valve, let's say. Then you put that into Maximo and you can close the loop on whether that was the effective solution. That's the whole constant-learning, continuous-improvement path that you follow with these analytics. Very well said. Thank you, Ben. And with that, I think we are ready to move to the Q&A session. All right. Thank you, gentlemen. That was a great presentation with a lot of information, and I appreciate you taking the time to explain all that to us. I'd also like to point out to attendees that there are four items loaded in the webinar platform under the handouts tab, so please go in and access those. You can download them; do that before you exit the platform today so that you get that additional information. Again, that's under the handouts tab, which is up in the same area as the Q&A and chat functions, and there are four items in there that you can access.
 
During the course of the presentation, we've had a number of questions come in, and I want to encourage attendees: there's still time to ask your question. Just enter those under the Q&A tab and we'll get to them as we progress through the Q&A session. Any questions that we don't get to due to time constraints will be answered individually via email following the presentation. So, starting with some of the questions that have come in, first on the list: what are the data limitations, number of points, sample frequency? Does data collection require a direct Ethernet connection to a sensor or computer, or can data be collected via remote cellular connections? Well, there are a lot of questions in there. Can you start with the data limitations, as far as number of points and sample frequency? Very good.
 
I'll take that question, and Ben, please jump in, as you have large data sources as well. Very good question. I suspect the engineer asking this has lived through a very complex IoT ecosystem. On data ingestion, as I showed you a little earlier, we have a large data ingestion platform, and we do understand that we have high-frequency data sets as well as, say, one-second data sets. The example Ben was referring to, the lean blowout, comes from a system that is calculating FFTs and generating a large amount of those data sets. We collect those data sets in batches and transfer them to the online system where we run those analyses. Part of the FFT analysis is also a one-second data set that we can feed into the real-time monitoring system, which can then generate alerts and insights that people can react to. Those data sets are continuous data sets that are being ingested; there is no limitation. Yes, we charge by the drink, so it's not that it's all free.
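To make the FFT-based combustion-dynamics processing concrete, the sketch below turns a high-rate pressure signal into a band amplitude that a downstream rule could watch for a lean-blowout tone; the sampling rate, band edges, and synthetic signal are illustrative assumptions, not the production analytic.

    import numpy as np

    def dynamics_band_amplitude(pressure_psi: np.ndarray, fs_hz: float,
                                band: tuple[float, float]) -> float:
        """Peak spectral magnitude of a combustor pressure signal in one band.

        pressure_psi -- high-rate dynamic pressure samples from one combustor
        fs_hz        -- sampling rate of the sensor
        band         -- (low, high) frequency band of interest, e.g. a low-
                        frequency band associated with lean-blowout precursors
        """
        windowed = pressure_psi * np.hanning(len(pressure_psi))
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(pressure_psi), d=1.0 / fs_hz)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return float(spectrum[mask].max())

    # Example: a synthetic 25.6 kHz signal with an 80 Hz tone buried in noise.
    fs = 25600.0
    t = np.arange(0, 1.0, 1.0 / fs)
    signal = 0.05 * np.random.randn(t.size) + 0.2 * np.sin(2 * np.pi * 80 * t)
    print(dynamics_band_amplitude(signal, fs, (50.0, 150.0)))

Summaries like this band amplitude, computed on site from the high-rate data, are the kind of one-second results that can then be streamed continuously into the real-time monitoring system.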
 
As you consume more, you pay more. Then, in terms of the connectivity models we are using: I can tell you, as an engineer, what changed things for us was that Windows 95 modem that used to make those screeching noises. We still have those modems in some places, by the way. The world evolves, but it doesn't evolve the same way everywhere. So we have an array of connectivity devices. Security is a given for us, so we have models for each of those. But going back to what kinds of technologies we are using: we are using connections as modern as fiber in some cases, and we do have some modem connections still trailing along that we are collecting data sets from. As you can imagine, the data throughput and speed will be a little different between those two kinds of connectivity. But we do have a robust method of not just managing and making sure data is captured, but also using the tools embedded in our software to manage the health of data quality and data connectivity as well. Ben, you may want to add something.
 
Yeah, maybe I'll touch on the question around the sensors and how we connect, because there's just a plethora of what that looks like, and it continues to evolve with more and more innovation in sensor technologies. Often these sensor systems are coming back, like what Rahul mentioned with the combustion dynamics monitoring, to a centralized controller, and we're able to connect with those. Most often, the easiest platforms to connect to are historian-type devices. In the application my team specifically runs, we effectively apply a historian on the site and connect it directly into, say, a Mark VIe controller, which gives us unique abilities to harness information off a turbine or a system that you may not be able to get off a standard historian with OPC-type connections. But again, the centralization of data on site makes this easier. There are also solutions that are able to multiplex these data sources on the back end. So dream as big as you can in terms of sensor technologies.
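As a bare-bones illustration of the OPC-style collection Ben mentions, the sketch below polls a few tags with the open-source python-opcua client; the endpoint and node IDs are placeholders, and in practice an on-site historian or collector sits in this path rather than a hand-rolled script.

    from opcua import Client   # open-source python-opcua package

    def poll_tags(endpoint: str, node_ids: list[str]) -> dict[str, float]:
        """Read current values for a handful of tags from an OPC UA server."""
        client = Client(endpoint)          # e.g. a plant historian's OPC UA endpoint
        client.connect()
        try:
            return {nid: client.get_node(nid).get_value() for nid in node_ids}
        finally:
            client.disconnect()

    # Placeholder endpoint and node IDs, for illustration only.
    # readings = poll_tags("opc.tcp://site-historian.example:4840",
    #                      ["ns=2;s=GT1.CompressorDischargePressure",
    #                       "ns=2;s=GT1.ExhaustTemperature"])

Whatever the sensor technology, the pattern is the same: land the signals in one place on site, then stream that centralized store up to the cloud.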
 
It's often just about how you centralize all those connections back into one place. All right. Thanks, guys. I know around the industry there's a lot of buzz about artificial intelligence, or AI, and machine learning and things like that. Here's a question that asks what different AI or machine learning techniques are used for predictive analytics. Another very good question, my friend. You guys are engineers. It goes from something as simple as linear regression, and I'll start there because that's what I started with in my career, to things as complex as neural networks, VBMs and SVMs, deep learning models, and image analytics. Now we are also playing with LLMs; I wouldn't say we have put something into production with them, but we are evaluating them. We have a significant number of algorithm packages in use, not just on the AI/ML side but also on the physics-based side. There is one more question I would like to address while we are on analytics. Yes, we do have thermodynamic models running; I think there's a question around performance and steam turbines. We have in-house-built GE models, which we call gas turbine performance models.
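At the simple end Rahul mentions, a predictive check often amounts to learning the expected value of one measurement from the operating conditions and alerting on the residual. A minimal sketch with scikit-learn is below; the variables, training values, and limit are illustrative assumptions only.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Healthy-baseline history: operating conditions -> expected bearing temperature.
    # Columns: load (MW), ambient temperature (degF). All values illustrative.
    X_train = np.array([[120, 60], [150, 70], [180, 80], [200, 90], [160, 75]])
    y_train = np.array([165.0, 172.0, 181.0, 189.0, 175.0])   # bearing temp, degF

    model = LinearRegression().fit(X_train, y_train)

    def residual_alert(load_mw: float, ambient_f: float,
                       measured_temp_f: float, limit_f: float = 5.0) -> bool:
        """Alert when the measurement drifts from what the healthy baseline predicts."""
        expected = model.predict(np.array([[load_mw, ambient_f]]))[0]
        return abs(measured_temp_f - expected) > limit_f

    print(residual_alert(170, 78, 188.5))   # True once drift exceeds the limit

More sophisticated techniques (SVMs, neural networks, deep learning) replace the baseline model, but the alert-on-residual pattern is broadly the same.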
 
That is part of the package. Plus we have GateCycle, another tool well known in the industry, which helps you model your total plant, specifically your steam cycles and your bottoming cycles, and helps you identify losses with respect to your thermodynamics, your specific heat losses. All of this is included. If you are interested, do follow up with us and we can share some analytics examples in this space. Okay, and you did mention some of the thermal losses, and the next question on the list actually addresses that. What are your capabilities for identifying thermal performance losses in the steam cycle, such as on main turbines, condensers, feedwater heaters, and cooling towers? Can you go into that a little bit more? Yes, I can.
 
Actually, there is a performance intelligence module as part of APM. I would highly encourage this engineer to connect with us; send us an email directly and we will be more than happy to walk you through specifically this element of our software. We have good references where customers have been able to use the software on combined cycle boilers, oil-fired boilers, and coal boilers to understand losses and recover them by acting on specific recommendations. All right. For the next question, this individual would like to see an example of an issue and how it was resolved, with a timeline. Is there a way you can demonstrate that? I don't know if we have anything right at our fingertips? I'll speak to one that just came into my inbox this morning, highlighting where we were helping the customer. But first I'll break what we are able to do with these data and analytics into roughly two spaces. One is the predictive aspect: see things ahead of time and be able to respond to them.
 
That's where it's optimal in terms of planning, or being able to alter operations to avoid a failure. So that's the predictive piece. The other piece is just being able to harness the data; it allows you to diagnose issues very, very rapidly. Yesterday, one of our engineers, who actually sits in Atlanta with me, was supporting a site where we had detected they were having high vibrations during startup. The site didn't reach out to us; we ended up reaching out to them with recommendations, because the startup on their steam turbine was being inhibited by these high vibrations. What they found, and this is all yesterday afternoon, was that the machine was doing what we call rolling off turning gear: it spools up to about 200 rpm when turning gear speed is on the order of ten rpm. What was causing this was a steam leak at a non-return valve that was generating the energy to roll the rotor off. What resulted in the vibration was a rub when it rolled off; the rubbing was bowing the rotor, and then they go to start up and that results in really high vibration. This all happened within hours yesterday, allowing them to understand what actions they needed to take to get a clean and smooth startup. It turns out that non-return valve had a foreign object in it, which, without being able to tie these couple of things together, is really difficult to figure out. So this is an example of diagnostics that we're able to do right there on the spot, within hours.
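A diagnosis like the one Ben walks through ultimately comes down to correlating a few simple signatures. Purely as an illustration, with signal names, limits, and the message text all assumed rather than taken from the product, such a rule might look like:

    def turning_gear_rolloff(speed_rpm: float, turning_gear_engaged: bool,
                             vibration_mils: float,
                             expected_tg_rpm: float = 10.0,
                             vib_limit_mils: float = 3.0) -> str | None:
        """Flag the roll-off / rub signature described in the example.

        The unit should idle near turning-gear speed; spooling well above it
        while the turning gear is engaged, combined with elevated vibration,
        points at something (e.g. steam leaking past a non-return valve)
        driving the rotor and possibly rubbing it.
        """
        if turning_gear_engaged and speed_rpm > 10 * expected_tg_rpm:
            if vibration_mils > vib_limit_mils:
                return "rolling off turning gear with elevated vibration - possible rub"
            return "rolling off turning gear - check for steam leakage past valves"
        return None

    print(turning_gear_rolloff(speed_rpm=200.0, turning_gear_engaged=True,
                               vibration_mils=4.2))

The value of the centralized data is that these signatures can be tied together across controller, vibration, and performance signals within hours rather than days.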
 
What's further going on here, though, is the ability to harness this data centrally and have centralized engineering teams supporting, and amplifying, what the site can do. It's not just what the sites can troubleshoot themselves; they've got people behind them, with data, looking at the same problem, maybe from a different angle, helping solve it more rapidly. Yeah, I can see how the teamwork would really be important. And Ben, if I may add, I want to share my screen because I do want to share a bit of a marketing website as well, but it feeds right into the question. Yes, we do have a website; if we have not published this link, we will publish it again for you. Here are case studies for different equipment types, which you can see by industry and by asset. I just pulled up a very simple case study answering a very specific question: what did we see, how did we contact the customer, and what was the benefit of working with them. I'm not going to walk through the example; I think Ben did a very good job walking through his. But we have many, many examples like this. And the key is, as we all know with condition monitoring and predictive monitoring, you have to demonstrate your value to your mothership.
 
This is where APM comes into the picture. We have a cost-benefit analysis module baked into every alert so that you can capture the value of what was avoided, what was prevented from happening, and demonstrate your value internally and externally. So we will be more than happy to share this website; it gives you several examples like this, and we're more than happy to take any questions on it afterwards as well. And I may touch on a question that comes later on, from an individual asking about, say, a ten-megawatt machine: does a product like APM make economic sense for it?
 
The economics depend on what that ten megawatts is doing in terms of your revenue. It could be responsible for, or critical to, an extreme amount of revenue, especially in, say, an oil and gas refinery situation, or if it's a critical asset at a combined cycle plant, or if it's critical to the resilience of your plant for, say, black start capability. So absolutely, a ten-megawatt machine can be very complex technically and a strong application for predictive analytics, and if that asset is tied to revenue in a critical way, yeah, it makes a lot of sense. Okay, great point. The next question is about access to the data analytics modules. Can customer personnel access them and the reports directly, or do customers need to rely on GE personnel to generate the reports? Reports can be generated by our customers directly; they don't have to depend on GE people.
 
Those tools are built into the software as well. But I do want to mention that a lot of our customers also lean on our IMS monitoring service, at least at the beginning of the journey, so that they can learn the job and the software itself, and then they take it over after a period of six months or a year, as they feel comfortable. But yes, you can generate your own reports. Right. There's a question here about edge devices: are any of the analytics being performed by edge devices at customer sites? Let's talk about the M&D use case first, because they do a lot with edge, and then I'll talk a little more broadly about where we are moving. So Ben, I'll leave it to you first on the edge side. Yeah, I have a lot of fun debates with my team on where we apply edge. We've got that capability today, and we deploy it using proprietary algorithms that we built, in fact, many years ago for doing what we call edge analytics, but they have their time and place. They're really well suited for critical, timely, high-sampling-rate types of data sets and functions. So yes, we use that.
 
I mentioned our trip support earlier: to capture that right in the moment and trigger an alarm for our engineers, we don't want latency of minutes to the cloud, so we trigger an edge analytic and an immediate notification to our engineering team. So that's an example. Yeah. And as for where we are going: there will definitely always be a need for edge analytics. At the same time, cloud technologies are evolving as well. There is now what is called streaming analytics, interrogating data as it is moving in. So the timeliness aspect that Ben touched on can be addressed by having an ingestion pipeline that can pick up the bits that are of interest to you, and we see a lot of this moving to that model. One thing Ben will hopefully agree with me on: maintaining and managing edges is quite complex.
 
And cybersecurity is making it even more difficult, with one-way traffic or data diodes, so it's not that easy to maintain those edges. So keep the edge as simple as possible and move the capability to where you can manage it better. Where we see the technology growing is more on the streaming-analytics side of the house, where we are able to run the analytics continuously and detect the same signatures that were handled by the edge in the past. Yeah, I'll totally agree with you on the maintenance aspect of it, and I live that every day, because as engineers we're always looking to improve. So you put this edge analytic out there on the device on site, and my engineers find a way to improve the math, and now you've got to go update the math on that site and every other site where you've done this, as opposed to just one place, centrally, in the cloud.
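The streaming-analytics direction Rahul describes, checking data as it is ingested rather than on an edge box, can be sketched as a simple rule applied inside the ingestion pipeline; the event shape and the trip heuristic below are illustrative assumptions, not the product's actual logic.

    from typing import Iterable, Iterator

    def trip_events(stream: Iterable[dict],
                    rpm_drop_per_s: float = 300.0) -> Iterator[dict]:
        """Yield a notification the moment incoming samples show a trip signature.

        stream -- ingested samples, e.g. {"asset": ..., "t": ..., "speed_rpm": ...}
        A sudden large speed drop between consecutive samples is treated as a trip.
        """
        last = {}
        for sample in stream:
            prev = last.get(sample["asset"])
            if prev is not None:
                dt = sample["t"] - prev["t"]
                if dt > 0 and (prev["speed_rpm"] - sample["speed_rpm"]) / dt > rpm_drop_per_s:
                    yield {"asset": sample["asset"], "t": sample["t"], "event": "trip"}
            last[sample["asset"]] = sample

    # Example: a unit falling from full speed within a couple of seconds.
    samples = [{"asset": "GT-0001", "t": 0.0, "speed_rpm": 3600.0},
               {"asset": "GT-0001", "t": 1.0, "speed_rpm": 3580.0},
               {"asset": "GT-0001", "t": 2.0, "speed_rpm": 2900.0}]
    print(list(trip_events(samples)))

Because the rule lives in the central pipeline, improving it is a single deployment rather than an update pushed to every site's edge device.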
 
Okay, I'm going to combine a few questions, because I've noticed a lot of people asking what you can monitor and what you do monitor. One individual asks, do you monitor any wind sites? Another person asked about nuclear sites, and another person asks about heat recovery steam generators and duct burners. So do you monitor all of those things? I would like to call out my friend Katie; I think she asked about the wind turbines, and I know Katie well. So, Katie, as you are very aware, we do monitor wind turbines, just not with Ben Myers. Within our portfolio, we have 12 businesses; if you want, I can switch back to that slide quickly. Everyone kind of has their own OEM model, but the same underlying technology is used, with more analytics from that specific domain. So yes, we are monitoring wind with the same technology. The nuclear team is also using APM for monitoring nuclear steam turbine generators, and a lot of moving equipment as well; we work with them very closely. Steam turbines I think I have already mentioned, both in combined cycle,
 
oil-fired, and gas-fired plants, plus concentrated solar plants. Steam turbines cover a lot of space with our technology. So the answer is absolutely yes, but with Ben it is primarily the power generation fossil side. Right, fossil being mostly combined cycles and gas turbines. Yeah, I was just in Schenectady a couple of weeks ago with the wind team, talking about their monitoring. Really impressive, like tens of thousands of assets, so almost an order of magnitude more than the 5,000 or 6,000 that my team monitors today. And they're not only making plans for how they continue to create value for their existing monitored fleet, but also getting ready for more and more as we hit the hockey stick in terms of wind growth. All right, thanks, guys. There's a question here about virtual power plants: do you anticipate moving into the VPP space? Very good question. Now comes the very exciting stuff the world is talking about: the energy transition. Team, I think you have probably seen announcements from our side, from Vernova Software, on GridOS, and we're definitely working on DERMS, ADMS, DMS, and EMS. In that conjunction, we are partnered with our colleagues in Grid Software, and we are looking to work more closely in this virtual power plant space, because it will entail all of the equipment we talked about just a little while ago.
 
So absolutely, we are part of that journey. In some regards we are leading that journey through our grid software business, but we are one Vernova. I think our big message is that we want to digitally transform in order to transition to this new sustainability world, and we want to be the partner for the entire power generation and electrification industries in that regard. So absolutely, the answer is yes, more to come on that, and we are working with the GridOS team on the specific VPP topic. Okay. There's a question here about the end-to-end lifecycle cost. I assume there are probably a lot of variables that factor into this, and maybe every case is site-specific, but can you talk at all about the lifecycle cost for APM? Lifecycle cost for APM, okay. One way I'd frame it is that there is an investment required to have software like APM do what you need for your business drivers, and that entails a cost. Definitely, there's a subscription cost to it.
 
There is a maintenance cost to it, and there's definitely also an operations cost for our customers who are using the outputs of that software and driving the outcomes, the business deliverables, within their own sites. What we have seen is that all three pillars have to be instituted. I cannot give you a specific number for what the lifecycle cost is, but what I can tell you is that we have seen more and more organizations like Ben's, where our customers have their own monitoring team and their own management team working internally with the sites. They are the change agents; they are the front lines. However, for the technology, they partner with GE Vernova Software so that they can accelerate that journey, and we are getting more and more customers. So I would like to believe that the software delivers a lot more value than what is spent on it; the lifecycle cost has to be justified by the ROI, and there is an ROI. There is a lot of low-hanging fruit to go after to improve your availability, your reliability, and your maintenance cost, and we do have stories published with our customers on how to go about doing that. That's how I would respond; if I have not answered your question correctly, make it a little more specific and we can get into the details.
 
I may add that the upfront cost of connecting an asset is pretty standard and steady; you have that investment for each asset you add, in terms of just making the connection and setting up the flow of data. In terms of the operational cost as you add assets, it's a very, very shallow curve, and that's something you harness by being able to pull all the data back into one cloud-based source, run the same set of analytics, or very similar analytics, across an asset class, use the same workflows, and so on. So that curve of operational cost as you add assets is very, very shallow. Here's a question about, I think, connectivity: do any of your sites use a data diode, and if so, how does that impact your data collection and ingestion?
 
Yeah, I think this is becoming more and more the norm in terms of air gapping. We already have in place standard firewalls, with ports and services uniquely identified for how we connect, and very secure edge devices for data collection, buffering, and pushing data to the cloud. But I mentioned the lines of defense earlier, and a data diode is a good one; you want multiple lines of defense, and the air gap is good. So we do that today. In terms of what it ultimately entails, it may obviously require additional hardware for the data diode. We use different solutions to achieve the level of service someone needs, and what I mean by level of service is that sometimes there needs to be some talk-back toward, in my case, a Mark VIe to grab specific data sets, for example for an edge analytic like we're talking about, and that requires an additional set of hardware. All doable, and done pretty widely today. So, Ben, I would add, for the friend who asked: we connect into 106 countries today, with 7,000 assets across the globe. Believe me, we have seen data diodes since 2007; we have been dealing with them for the longest time.
 
It is discussed a lot more now than in the past, but it definitely adds complexity: it increases the number of servers or edge devices you have at your site, as well as the maintenance and upkeep involved in adding more tags and sensors. The good thing is, we do have a model for that; Ben supports it across the globe, and we have been doing it for several years. Okay, I'm going to combine another couple of questions here. One is about sources: what about data from multiple sources outside of GE's assets? Are there any that you are not able to connect to? And further down, somebody asks about Yokogawa and Honeywell; can you connect to these types of sources directly or not? Maybe I'll start and just point out for you, Rahul, that with the APM product from our software team, we probably connect to more non-GE assets than GE assets. My team is obviously very focused on GE assets, where we can add the most value, especially with our engineering prowess. But a platform like APM is built to be non-discriminatory in terms of OEM. Yes. And again, a very good question. As you can see, in this space you have to be very flexible. Even if you have an older device, and I'm going archaic here,
 
we do connect to those, including our own Mark Vs. We use third-party integration software like Matrikon very heavily, as well as our own iFIX or CIMPLICITY, to convert these older methods and protocols into newer OPC UA/DA kinds of protocols for ingesting data. So we do have solutions, and we can partner with you on how to connect to almost all of these different devices from the different vendors you will see, including GE. Okay, here's a quick question about data historians: does GE offer a data historian product? Since 2001, my friend, that's the answer. We saw this need very early on, and we believe we have a very robust historian product as well. In fact, Ben runs it on all his edge devices; we call that the on-site monitor, or OSM, and we have what, 950 or so running all across the globe, streaming data back into our cloud. So that's our model. But Historian is a very established product from GE Vernova Software, and it's used in heavy-duty industry, the car industry, the food and beverage industry, you name it, and so on.
 
It's been used by several of our customers. All right, very good. We're coming up on the top of the hour, so it does not look like we're going to get through all the questions, but as I said, any that we don't get to will be answered via email following the presentation. I think we've got time for a couple more. Maybe: how much time is needed in order to get insights, and what are the prerequisites to start? We do monitor this KPI; we call it time to live. It depends on the complexity and readiness of the site, but we are pushing it toward 12 weeks from the start of the engagement to getting insights on your specific assets. That does include data connectivity, setting up a tenant, deploying the models, training the models, and then getting the insights as well. Some assets go faster if you have good data collection, some assets go slower, but that's a KPI we're always looking to push down. Our goal is to get into a four-to-eight-week kind of timeframe, but today we sit at around 16. All right, one last one here.
 
Can the predictive models be built by the customer team in the UI? The answer is yes. You can bring your own analytics; that is what I was trying to say earlier, and you ask the right question. You can bring your own models; if you're a Python shop, you can drop those models in as well. And if you want to build your own predictive models using the SmartSignal software, we have something called Blueprint Center where you can develop the model and deploy it, using the tools available to you, into the production environment without having to engage GE. Okay. Actually, I'm going to ask one more, because I think this will be a quick answer. Is the historian able to provide an API to connect to a customer's own system? The answer is yes. Okay, very good.
 
Gentlemen, thank you so much for your time. Do you have any last words you want to leave the audience with before we wrap up the webinar? Just appreciation for everybody taking the time with us. Yes, thank you very much. Really good questions; keep them coming, it makes us better engineers. All right. So as I said, we will answer the questions that we didn't get to; there are still several in the queue, but we will answer those via email. Again, I want to thank GE Vernova for underwriting today's program. Rahul, Ben, I think you guys did a great job; a lot of great information was presented, so thank you for that. We hope everybody who attended found the presentation beneficial, and I hope you have a great rest of your day. Thanks, Aaron. Thank you.

How Can We Help You?

Let our experts show you how GE Vernova’s Software business can accelerate your operational excellence program and energy transition.
