GE Vernova

Scaling AI/ML Programs for Advanced Asset Maintenance

Learn how GE Vernova customers deploy Asset Performance Management (APM) software with SmartSignal predictive analytics to help reduce asset downtime, decrease overall O&M spend, enable remote monitoring of any asset type, and further digitize Operations to meet performance goals.

In this on-demand webinar, experts from GE Vernova, AWS Energy Solutions, and Italian energy company A2A discuss:

- Emerging trends in advanced pattern recognition
- How GE Vernova’s top-rated Asset Health & Reliability software can be used on any asset type
- How A2A is using GE Vernova’s SmartSignal analytics
- AWS’ vision for the future of advanced pattern recognition in the cloud for an enterprise



--
TRANSCRIPT
Good afternoon and possibly good evening, everybody. Thank you for joining us. It's early October, so we know you're all busy coming off summer trips or just trying to wrap up the year, so thank you for being here. Today we're going to get into some really good content. We have a lot of experts on the call, and the conversation today is going to be about scaling AI/ML programs for advanced asset maintenance. We'll get to introductions, but we have a group here with a lot of expertise: from A2A, you'll hear from Massimiliano later; from ourselves, we have Truman, who is our technical product manager for SmartSignal, our predictive analytics solution; and you'll hear from Sid as well. We'll get into those intros shortly.
 
This topic is really important because, globally, we talk about what it means to scale and grow your asset performance programs. A lot of the hot topics today are around predictive, and it's moving into prescriptive. If you've been following along or use our APM, you know we're a bit more than that, but we also understand that predictive is a huge part of where you want to go. So we're going to get into this conversation, and I just want to say thank you.
 
First and foremost, AWS: if you haven't seen the news, earlier this year we entered into a strategic collaboration agreement, and we've been working with AWS really closely to modernize, re-architect, and optimize our platform, with a move to microservices and more scale. That's enabling a lot of what you're going to hear today, and at the end we'll talk a little about the future of what we're doing together. During this time the chat is open, so feel free to ask questions. If we don't answer them live, which we will try to do, we will follow up with you in writing.
 
We have about five sections today, and on a couple of them we're going to go really deep, which is going to be exciting. We'll go through introductions, then talk about some of the enterprise trends we're seeing collaboratively, with a fun little title here: "To infinity... and the cloud." Truman will talk about our predictive analytics and what we're doing in SaaS delivery. Then we'll hear from Massimiliano at A2A about their journey with APM and specifically predictive, and we'll wrap it up by talking about what's next. So without further ado, again, I'll be moderating.
 
My name is Ryan Finger. I'm the director of product marketing here for platform as well as APM, and I work closely with our global partners. I'll pass it over to you, Massimiliano. Hello, everybody. Just to add some more context to what Ryan just said: I joined A2A over five years ago and I've been working at our monitoring center for our combined cycle sites ever since. We have been using on-prem SmartSignal in the first place, and cloud-based APM Reliability later on, to monitor and support our power plants. Overall, I have 15 years of experience in the power industry, and I'll give you some more details about our journey later in this presentation.
 
And I'm Truman Hwang, the technical product manager for GE Vernova's predictive analytics, SmartSignal, within the APM product suite. I have over eight years of experience in the digital transformation software space and over ten years in the oil and gas and chemical spaces; I'm a reliability and maintenance electrical engineer. Hi, thanks, Ryan, and thanks, Massimiliano and Truman. Good morning.
 
Good afternoon, everyone, depending on where you're joining from. I'm Sid, from the AWS, Amazon Web Services, energy vertical, working as an EMEA energy technical lead, based out of Dubai. I have 16 years of industry experience across the energy value chain, and I'm happy to be here representing AWS with GE, strengthening our partnership and talking about SmartSignal. So thank you, Ryan, for the opportunity.
 
So without wasting time, I'll start with the presentation and the first section, which is more around aligning the technology with the processes. What we see in the market today is that the energy industry is changing really fast. The main drivers here are digital transformation and IoT integration, where companies are connecting more of their assets through sensors. Much of this data is sitting in silos as we speak and not capitalized on by companies, even though there is much value in it. With the AWS cloud, this data can be processed in real time, giving companies insights that can improve their operations with solutions like SmartSignal.
 
Another trend we see in the market is the growth and adoption of SaaS, as this gives energy companies, and all of you, the flexibility to scale up your solutions without heavy investments upfront. This is especially useful for asset management and predictive maintenance use cases. Last but not least, another trend we have seen since the start of the year, including around COP, is quite a lot of movement around AI and advanced analytics, with most energy players trying to improve their decision making by adopting AI/ML in their day-to-day operations.
 
What we see is that most of these energy companies have increased their investments in these technologies by more than 2x in the last year or so. In addition, we also see the old challenges: an aging workforce, increasing costs, and shrinking margins across oil and gas downstream, which the industry continues to face. That said, having covered the overall industry trends, as a foundational piece to address this domain, specifically in asset management, AWS with our partner GE and their SmartSignal solution offers tools to companies like yours to monitor performance, predict failures, and optimize performance, all while keeping the data secure and easy to manage. Having talked about the trends, I would like to step back and talk about the shortfalls of traditional maintenance models, which we see from our energy customers and hear about from them day in, day out. Energy customers are still using traditional maintenance methods like reactive and preventive, which have limitations of their own.
 
Based on our experience with customers, the biggest issue is limited visibility into asset health. Without real-time monitoring, companies often face unexpected failures that can be really costly. Information not being shared effectively is another problem: people are working in silos, processes are not connected well with each other, and systems are not talking to each other, which leads to poor decision making at the enterprise level. Another issue we see is unplanned downtime; traditional models do not help minimize it because they don't provide reliable insights into when assets might fail, and that affects metrics like mean time between failures, leading to higher risk, higher maintenance costs, and even the loss of equipment. Finally, with traditional maintenance models, operating costs increase over the longer run, and the core reason is that companies are not able to predict failures accurately. This leads to spending more on both operating and capital expenses. We really believe that solutions like GE's SmartSignal solve these issues by using AI to predict failures and giving companies better control over their operations and budgets.
 
Moving on and talking a bit about the future and the predictive maintenance you are all here for today, I would say that predictive maintenance is indeed the next step beyond the traditional methods. With AWS and GE SmartSignal coming together, and with companies like yours, we can predict potential failures before they actually happen. That will not only reduce the need for reactive or even preventive maintenance, but will also help you secure your assets, increase their life, and reduce the operational expenses you incur due to unplanned downtime and unplanned maintenance. What you see in this slide is how you move from early design stages to identifying potential failures long before they become catastrophic. By catching issues earlier, companies can schedule maintenance at the right time, reducing downtime and avoiding expensive repairs.
 
This approach, powered by AI and ML at its heart, helps extend the life of assets and makes maintenance more efficient. We are providing the infrastructure to support this shift and allow companies like yours to use SmartSignal to monitor assets in real time and act proactively. Before we step forward, I just want to talk broadly about APM: APM is not just a maintenance tool, and it's not only about maintenance. We believe it offers a complete view of asset health and performance. As we see it, APM has four stages. The very first stage is all about monitoring: here you are setting up the baseline and the asset health, and responding quickly to alarms and events, and so on. Stage two is more about reliability and preventive maintenance, where you are trying to predict when maintenance is needed. The third stage is predictive maintenance, where you can use AI/ML to predict anomalies and, with that, make smart decisions. And the highest maturity, stage four, is about operational excellence, where you combine everything together: you use prescriptive analytics with the broader business strategies, connecting everything, and then companies are able to optimize maintenance cycles, reduce energy usage, reduce opex, and achieve better operational excellence and performance management. What we have seen at AWS is that companies across the globe are at different levels of maturity when it comes to these four stages of APM, and some of these problems are being solved by SmartSignal, and eventually by GE APM.
 
Together, we at AWS and GE believe we can help you move through these stages no matter which stage or maturity level of the journey you are at: improving reliability, reducing cost, and optimizing performance. Just to recap, GE and AWS are offering a powerful solution with SmartSignal, combining predictive maintenance with real-time data and cloud scalability. We are helping energy companies overcome the limitations of traditional maintenance. Whether it's reducing downtime, optimizing maintenance cycles, or improving asset life, at the end of the day it's all about driving business value, which we believe can be done together.
 
With this, I will pass it back to Ryan to build on this section before we move forward. Ryan, over to you. Yeah, thanks, Sid. This is why I love having the experts on the call, and I always joke that I'm not an engineer, but I think this sums up really well where the GE and AWS collaboration fits. Today is obviously very heavy on predictive and where we're heading with AI, but this step change from asset health all the way up to a full APM is worth pausing on, because, as Sid said, when we're working together with customers, they might be at various stages, right? Either you have your data siloed in a historian and you need to find a way to connect it with your EAM, right?
 
Or you need to find a way to get your EAM connected back to your APM. There are a lot of activities that need to happen before you even step into what we would call a predictive or prescriptive maintenance stage, right? And what I get excited about, in terms of where we're going with our APM, and I hit on it a little bit in the beginning, is that modernization with microservices. GE is known for having very highly performant applications, and we're in lockstep with the market and where we have to go in terms of scale, speed, and data security. That's what's very exciting about this, and I think this graphic does a great job summarizing it. I'd also say, from an outside-the-industry perspective, when you think about the levels of maturity, there are a lot of trends and a lot of hype around certain technologies.
 
But what I often saw when I used to work in financial services, on the technology side, is that people tend to jump into a forward-looking technology without thinking about the process that goes behind it. So they're still left with silos, even though they think they have a short-term solution. So I love this, because when you look at an APM program, it could be one piece of this, multiple pieces of this, or all of it, right? And really it comes down to what your goal is as a company. So, really well summarized, and with that we're going to pass it over to our product expert, Truman, and get deeper into the tech. Again, the Q&A box is open; please ask questions and we'll answer them as we go. So over to you, Truman. Thank you so much, Ryan, and thank you, Sid. It's really important that you both set that value context: understanding your risk and then identifying a tool that helps address that risk. And I love that slide that shows stages one through four, everything that we've got in our APM portfolio, and really understanding where the gaps are that you'd like to fill. And that's just a starting point.
 
Right, and then you can continue on from there. So with that in mind, let's dive into the stage three and four portion, where, within GE Vernova's APM solution, we have our digital twins. Let's talk a little about how that's beneficial for you as a customer, and how you can leverage the cloud. With your instrumentation and sensors and their data already connected in the cloud, perhaps, there's already value there, and the time to value is already evident. But by bringing these sensors into GE Vernova's advanced analytical digital twin, which we call SmartSignal, we can utilize AI/ML modeling to generate multivariate, pinpointed estimates for every sensor connected to your asset. So, how does our software help augment what your operators and engineers are already doing? It's through a well-built digital twin model, accessible anywhere but securely as well, capturing various operational, process, and environmental states.
 
SmartSignal then generates an accurate expected value with SME-defined dynamic bands, which we call residuals. That's what you see there in the mint green sections. SME-identified failure modes and rules on exceeding those bands are what result in diagnostic advisories that you can take action on, all of this occurring despite never having crossed any of the standard fixed alarm thresholds, which are depicted there in the dotted red lines. So here's another example. As we proceed through this example, you can see that as you're exceeding your bands, you may initially have a priority four, but as it progresses, because you're also exceeding on your lube oil pressure and your lube oil temperature as well, you get much more pinpointed alerts that point to a cooling loss in this example. We then take advantage of linear regression techniques to forecast out, giving customers another data point to take corrective actions accordingly, so you can understand at what point in the future you might actually cross those dotted red lines, those DCS alarms.
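To make those mechanics a bit more concrete, here is a minimal, hypothetical sketch of the two ideas just described: flagging samples whose residual (actual minus expected) leaves a dynamic band, and using a simple linear regression to project when a drifting signal might reach a fixed DCS alarm threshold. The function names, band width, and numbers are illustrative assumptions, not SmartSignal's actual implementation.

```python
# Illustrative sketch only (not GE Vernova's implementation): residual-based
# band exceedance plus a simple linear-regression forecast of when a sensor
# might reach a fixed DCS alarm threshold. All names and numbers are hypothetical.
import numpy as np

def detect_band_exceedance(actual, expected, band_width):
    """Flag samples where the residual (actual - expected) leaves the band."""
    residual = actual - expected
    return np.abs(residual) > band_width

def forecast_threshold_crossing(times, actual, dcs_alarm):
    """Fit a straight line to recent values and estimate when it reaches the alarm."""
    slope, intercept = np.polyfit(times, actual, 1)
    if slope <= 0:
        return None  # not trending toward the alarm
    return (dcs_alarm - intercept) / slope

# Hypothetical lube-oil temperature readings (deg C), sampled hourly.
t = np.arange(24.0)
expected = np.full(24, 60.0)   # digital-twin estimate (kept flat to stay short)
actual = 60.0 + 0.4 * t        # slow upward drift
flags = detect_band_exceedance(actual, expected, band_width=3.0)
eta = forecast_threshold_crossing(t, actual, dcs_alarm=85.0)
print(f"first band exceedance at hour {np.argmax(flags)}, "
      f"projected DCS alarm crossing at hour {eta:.1f}")
```

In practice the expected value would come from a multivariate model of the asset rather than a flat baseline; the constant value here is only to keep the example short.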
 
Okay, let's just step away for one brief moment to continue, focusing about those failure mode diagnostics. It's important to understand, that a data only perspective towards advanced analytics of critical assets can can take you far. But marrying that with specific failure modes identified by SMEs that have utilized that have utilized advanced analytics that produces far greater results because it allows you to narrow your focus to issues and deviations and alerts that matter. Right? Otherwise, you may be, you know, the solution might provide to a lot of anomalies, but without tying those, again to specific failure modes, it takes a little longer to figure out what exactly is this showing. Shown here is just one example of that. One of the hundreds of blueprints that GE Vernova has developed and has already ready to deploy in our library across multiple industries. Such as wind, power, mining, hydro, chemicals, and many more. And, pinpoint the failure modes that they support identifying. Coming back to how what what that means in the cloud space, right. We're able to leverage additional features such as on demand analytic template updates, capability to maintain multiple versions of this of these blueprints, adding additional flexibility to how you can deploy your analytics and utilize the templates.  
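As a rough illustration of marrying band exceedances with SME-defined failure modes, here is a deliberately simplified, hypothetical rule: individual sensor exceedances are combined into a named failure-mode diagnostic whose priority escalates as more corroborating sensors deviate, loosely mirroring the cooling-loss example above. The rule, sensor names, and priority scheme are invented for illustration and do not reflect SmartSignal's actual blueprint format.

```python
# Hypothetical sketch: combining per-sensor band exceedances into an
# SME-style failure-mode diagnostic with an escalating priority.
# Sensor names, the rule, and priority levels are illustrative only.
from dataclasses import dataclass

@dataclass
class Exceedance:
    sensor: str
    exceeding: bool  # residual currently outside its dynamic band

def cooling_loss_diagnostic(exceedances):
    """Return (failure_mode, priority) or None.

    Priority 4 (lowest) on a single corroborating sensor, escalating
    toward priority 1 as more related sensors leave their bands.
    """
    related = {"lube_oil_temperature", "lube_oil_pressure", "bearing_temperature"}
    hits = [e.sensor for e in exceedances if e.exceeding and e.sensor in related]
    if not hits:
        return None
    priority = max(1, 4 - (len(hits) - 1))  # more evidence -> higher priority
    return ("cooling loss", priority)

readings = [
    Exceedance("lube_oil_temperature", True),
    Exceedance("lube_oil_pressure", True),
    Exceedance("bearing_temperature", False),
]
print(cooling_loss_diagnostic(readings))  # ('cooling loss', 3)
```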
 
Additionally, the micro apps and services running SmartSignal in the cloud leverage quarterly release cycles to ensure customers are running with the latest functional updates and bug fixes. With that, I want to reserve most of our time for Massimiliano, as we are really privileged to have him speak to A2A's journey in advanced analytics and their journey in the cloud. Thank you, Truman. I thank everybody for joining once again. Before going to the main part of my speech, which will be about giving you some real experience to support what Truman just showed, I would like to give you a short introduction to our company. A2A was founded in 2008 by the merger of two older public companies, and as of today, A2A is one of Italy's largest utilities.
 
At a high level, some four years ago the company designed and announced a long-term business strategy with sustainability at the center, and with a target to become net zero by 2040. More specifically, the business strategy has two main pillars: the first is called energy transition and the second circular economy. We do business and provide services in several areas. The main business here at the M&D center is the power generation section of our company: we have several combined cycle units and one of Italy's largest hydro fleets, and we have also been growing very fast in solar and wind lately. Then there is the environment, by which we mean waste collection, waste management, and owning and managing material recovery and waste-to-energy facilities. We also manage large distribution networks for both gas and electricity, which we also sell.
 
We have district heating, the water cycle, public lighting, and smart infrastructure. As I said, I joined the company five years ago and I've been part of the monitoring and diagnostics center, which is in charge of monitoring and supporting our combined cycle sites and leverages cloud-based APM Reliability to pursue this goal. The locations of these sites are shown here on this map of Italy; as you can see, most of them are in the north of the country, and we also have a site in central Italy. Although we are 100% cloud-based APM Reliability users now, our journey with predictive analytics started a long time ago, even before A2A was actually founded. It started at a power company that was at that time named Edipower, which was later acquired by A2A in 2007. First as a POC and then with a full fleet rollout, Edipower decided to adopt on-prem SmartSignal models to start gathering predictive insights and predictive warnings for their power plants. So when this company was acquired by A2A a few years later, A2A started their journey with SmartSignal analytics as well. Cloud-based APM Reliability was then adopted in 2017 for a couple of sites, namely Cassano and Chivasso.
 
That came alongside a much larger revamping package aimed at increasing reliability and flexibility for those power stations to meet the always-changing market demands. From 2018 to 2020 we carried out a full migration of the existing on-premises SmartSignal analytics to the cloud and, generally speaking, completed a fleet rollout of our APM Reliability analytics. Halfway through this journey, in 2019, the monitoring center was established. Up to that point, it was the Paris-based GE IMS service that was in charge of our APM models and alerts. After the in-house monitoring center was established, we gradually took over from IMS: we started with processing our own alerts, then moved to model maintenance, all the way up to a full rework of our blueprints when needed, and eventually to developing ad hoc analytics for very specific use cases when we need to. So this is, in general, our journey and the technical solution that has been there to support us in gathering insights from our data and getting early warnings of actual failures at our power plants.
 
So in general, we've been using both cloud-based APM Reliability and, as I said, on-premises SmartSignal to begin with; I also personally started with on-prem SmartSignal for a year or two before we moved to the cloud. What I would like to do here is provide a sort of comparison and tell you something more about the main differences we found when migrating to the cloud. The first advantage we saw is what we call information management: it's easier for us to manage our alerts in the cloud, to browse, filter, sort, and so forth. Then there is a very useful case management tool that serves as a sort of repository to keep track of all the relevant information and past issues you share with production sites. In the cloud you also have access, as Truman said earlier, to a very comprehensive digital twin blueprint and proactive diagnostics library you can use to develop your own models, and you can also rework them if you need to before deploying them. Then, there are some nice new features that make your analysis easier.
 
There's one that I will show you in detail later on, called event frame analysis, and there will be a real use case where we took huge advantage of this visualization tool to understand what was going on. But probably the main difference and the main advantage we experienced is with our IT infrastructure. I've been an APM user for over five years and, honestly, I know little or nothing about our IT backbone, and I don't need to care about it. If you work in the cloud, you will not need any physical machines to run your analytics on, you will not need to maintain any machine, and you will not need to worry about product upgrades; sometimes you don't even notice them, as they happen seamlessly. And in case you have an issue, from minor glitches to a major interruption, it's easier and quicker to get support from the GE service team.
 
Let me go a bit deeper into a couple of real use cases. The first one is about a gas turbine. In other words, it is a real application of the blueprint Truman showed earlier today. It's not the very same blueprint; rather, it is an adaptation of a blueprint that was originally designed for steady-state monitoring. For this specific use case, the blueprint was re-adapted to monitor a transient state of the machine, that is, the machine's start-up, and our re-adaptation gives us a solution that allows us to monitor such start-ups. This deviation was caught during a couple of start-ups. It's basically about a failure of the GT inlet bleed valve: what we found was that this valve, at one of our power stations, was taking much longer than usual to respond to opening requests from the control system. I'll show you the supporting evidence in a while. We raised a notification with this evidence almost immediately to our colleagues on site, and they inspected the valve and found unusual friction from some internal moving parts, which was fixed by replacing some items, with better lubrication as well.
 
This early warning and prompt action helped us prevent more serious issues with the valve, which would probably have led to a start-up trip, and maybe also to unstable combustion in steady-state operation, potentially leading to higher emissions. So let me switch to the next slide, which gives some insight into the evidence behind this use case. As I said earlier, this is the event frame visualization. Basically, what I'm doing here is showing some relevant signals in a side-by-side visualization for eight different start-ups, so that it's easy for a user, an analyst, or an SME to understand how the relevant signals were behaving and how each single start-up compared to the others. If you look at the bottom of the screen, you'll see the signal for the valve position; you actually get two lines. The blue line is the actual value from your DCS, from your control system, while the green line is the estimated, expected value calculated by our model. And it's easy to see at first glance that in some of these start-ups the valve was taking much longer than expected to react to a control system request. The next visualization is what GE calls the analysis view. What I'm doing here is something different.
 
I picked a specific event from the three I highlighted in my previous image, in particular one that occurred on January 9th, 2023, and I'm displaying a couple of signals overlaid. Specifically, we have the blue line that shows our control system request, and the orange line that gives us the actual valve position. It's very easy to see that it was taking some ten minutes for this valve to follow a control system opening request, which is very unusual. It was very much an early warning of a valve failure that was caught in time, and that gave our people the opportunity to fix it before more serious possible consequences. The second use case is probably a more traditional one, as we're going to see a sharp rise in vibrations for a condensate pump motor. I say more traditional because, at least in our experience, and I think Truman will be able to confirm this, SmartSignal analytics were first designed to catch small deviations propagating over time in steady-state operation. So this is a steady-state use case, compared to the previous one, which was a transient use case and, as I said, a re-adaptation of original blueprints to monitor transient situations. Back to this second use case: we're picking a condensate pump motor, and as you'll see on my next slide, we detected a rise in vibration in August 2023. We again notified this major deviation to our colleagues on site, and the site investigation revealed a significant motor failure; but though significant, that failure was caught in time. By acting promptly, we were able to prevent much larger damage to the equipment, as the motor was taken out of service in time, replaced with a spare one, and sent out to be repaired.
 
For sure, we reduced the maintenance cost needed to restore the motor. There was also a chance, small but still real, of causing a full unit trip because of this failure, and that was again prevented by this early warning and the prompt action of our colleagues. Here is the supporting evidence, which will give us a better understanding of how fast SmartSignal actually works. You'll see basically three boxes on this image. The first one is the light green box, and again you get two lines: the blue one is the actual value, and the green one is the expected value calculated by our models. The difference between these values is called the residual and is shown at the bottom of the screen. The light green box was normal operation with very, very small residuals; that means this piece of equipment was working properly, but also that our model was a true digital twin of our actual piece of equipment. Then, fast-forwarding to the red box, you'll notice a sharp change in the actual value, with vibration levels rising from two millimeters per second up to three millimeters per second, a big change if you look at it that way, but still below the DCS alarms.
 
So nobody would have noticed it otherwise, but we were able to catch this deviation thanks to SmartSignal's processing of our data. We raised a notification, as I said earlier, and the motor was replaced with a spare one. Then we go to the dark green box, where we see there is still a minor but persistent deviation between actual and estimated values. That is something you can expect, because you're using your original digital twin to replicate a slightly different piece of equipment, since you replaced the motor with a different one. So the deviation in the dark green box should not be interpreted as something not working properly.
 
Rather, it means you need to perform a model retrain to regain the full monitoring capabilities of your digital twin and return to a situation closer to the light green box. That's all for my part. Thanks again for joining; if you have questions, I'll be glad to answer them. Back to you, Ryan. Yeah, thank you, Massimiliano, and I just want to double down on what Truman said: thank you for joining us. We know you're very busy, A2A is very busy, so having you with us has been great. And like you said, A2A has been a long-time partner in this; we go back and forth a lot and have a lot of great conversations, so we want to thank you for the continued feedback and work with us. I'm going to wrap this up here. There are a ton of questions coming in, which we'll moderate and answer live, but before we get to that point, I want to put a bow on the topic today. You heard from Sid, you heard from Truman, and now you heard from Massimiliano. I just want to come back to the work we're doing when we think about cloud APM and specifically analytics in the cloud or a SaaS deployment.
 
So, why GE Vernova and AWS? A lot of you have been working with us for a long time, and we've been around for 130 years. As you can see, the branding has changed; we're one GE Vernova now as of April, and we span the entire energy lifecycle, in metals and mining and also renewables, with the technology that we provide. We have over 300 customers today. And when we talk about models, there are a few good questions in here about modeling and bring-your-own-analytics that we'll get to. We are ingesting 30 billion-plus data points a day, executing over a million analytics a day, and monitoring over a thousand plants, whether that's gas turbines or anything in between. So when you think about all of that, plus the models that we have for predictive, heading into prescriptive, we have a really strong foundation there.
 
One of the questions we get, and I'll pause on this one, is: what are you doing with generative AI? What are you doing with LLMs? What are you doing with all these emerging technologies? I know Truman's on the call, so people have questions. Before I introduce the AWS side: within this strategic collaboration agreement, we are actively working with customers, and internally today, to provide a differentiated approach to generative AI, whether that be a Copilot within applications or in our platform itself. What we tend to hear is a lot of overhype and under-delivery around this technology. So when we look at our expertise on the left, what comes with that is also the responsibility to make sure we're doing this right. I know everyone has questions, but on that first curve that Sid shared, there are a lot of step changes to take; there's a lot that needs to be done for your organization to shift from a reactive maintenance methodology into predictive or even prescriptive, and we understand that. What you'll see on the next page is a couple of reports you can look at on why we're different and why we help support that whole lifecycle of APM, along with the work we're driving towards with Copilots and some other emerging technology. On the AWS side, they are a growing energy and utilities business unit. You heard from Sid, who has 16 years of experience in the space.
 
A lot of their team does as well, and as you know, AWS is a leader in cloud infrastructure. So when we talk about what we're doing with our platform and how we're supporting scale, security, and speed, we've done a lot of work since our V5 release in 2022 to move to a composable and scalable architecture via microservices. Think about the shift a lot of you might be undergoing from an on-premises model into the cloud; this is really why we're collaborating, because we see the need to support customers like you in getting to that point of scale. The combined expertise, our OEM and energy expertise plus AWS's cloud infrastructure expertise, really made it a no-brainer for us to enter this collaborative agreement. With that, before we go into questions, and I know there are questions in the chat and there might be some follow-ups, you'll find our information here. Really, the ask is this: if you're currently on-prem, you're a GE customer, and you're asking whether moving to SaaS and getting to the cloud is the right thing to do, reach out. If you're looking into predictive solutions or a full cloud APM, you can reach out to us. Or if you want to engage with Sid and the AWS team on more of an enterprise question: what's my digital roadmap for cloud, with APM included?
 
Those are really the areas where you can reach out to us and engage in the conversation. Like I said, we're performing POCs, migrating customers, and scaling customers today, so please reach out; we're happy to answer any follow-up questions. And Sid, before we go to Q&A, is there anything you want to add about this collaboration before we start answering questions? No, Ryan, you have perfectly summarized the collaboration and the power that GE Vernova and AWS bring to the table. The only message I have for our viewers and customers on the call is: whether you are not yet on SmartSignal, you want to go for it, or you have it on-prem, think big, start small, try and do a POC, test and validate the business case, and eventually scale.
 
That is the philosophy on which Amazon was built, the philosophy on which we build at AWS, and what we share with our customers as well: you need to train your mental model to move things fast and take decisions faster, and to do that you need to always think big, start small, and scale. That's pretty much it from my side. Thank you. Great. So let's get into questions, and Truman, this first one is for you. What is the expected runtime needed when deploying the digital blueprints to new applications, for example a new pump? Yeah, that's a perfect question that we get all the time.
 
I think I even started off my presentation talking about time to value, and right away, with only about two weeks of runtime data brought into the model, you're able to leverage it. That's yet another example of how it's not just a data-only approach, but data married with blueprints that have SME-built failure modes. So even with limited runtime, you've got coverage with those rules and failure modes, and then as the model gets more data brought into it, it gets more and more pinpointed, more and more narrow, in terms of how it's able to detect anomalies. So I hope that answers the question; thank you for asking. Yeah, and I think the cool part about that, which we talk about with Truman as well, is the composability of the blueprints. So if there are deviations, or you want to add more models to it, you're able to, and then you can essentially click to deploy those across multiple pumps and compare them.
 
So that's a great answer, for sure. Let's go to the next one, and Sid, this is going to be for you. Given our company's expertise with on-prem APM solutions like SmartSignal, what's the most effective strategy for piloting, let's just say, something in the cloud to ensure a successful transition and maximize its value? Should we focus on migrating specific equipment or adopt a more holistic approach? Yeah, thank you for the question, it's a very good one. I'll go back to what I just said: at Amazon we always believe in think big, start small, and scale. So the most effective strategy for piloting your cloud APM would be a phased approach. You start with specific critical equipment that offers high value for predictive maintenance. This will allow you to demonstrate quick wins, gather performance data, fine-tune things, and take advantage of cloud-native capabilities such as scalability and real-time analytics.
 
Our suggestion to customers in such a situation is always to start by migrating high-risk or high-value assets, such as a turbine or a pump, where predictive maintenance has the potential to minimize downtime, impact operational costs, and substantially move the needle. Once you validate the cloud APM's performance and value, you can scale the solution to the broader asset class and eventually adopt a more holistic approach for enterprise-wide integration. This strategy will not just help you scale, but also help you balance the risks, enable gradual learning, train your team, get people used to the change, fix the change management piece, and maximize the benefit of cloud capabilities like real-time monitoring, AI-driven insights, security, scalability, and so on. I hope that answers the question. Thanks. Yeah, a good response. Massimiliano, I wanted to ask you: you and A2A went through this transition to SaaS, so I'm curious about your thoughts on how you went about it. I'm sure you have some good insights from that project. Well, part of my presentation was about the differences and advantages we're seeing now.
 
So I would like to take this chance to speak a bit more about the transition. We actually went for a side-by-side approach, but I do agree with what Sid just said. If I had to go back and do it again, well, there's always something you learn in the process, and if you're given the same assignment once again, you'll do things differently; you always want to improve, it's a continuous improvement process. So, what would I change? First, your data: you need to take care of your data. You need to understand first what you are interested in, then decide which use cases you want to address in the first place, and you need to be able to give your GE partner all the information they need, documents and data.
 
Take care of your data: make sure all your signals work, the sampling rate is adequate, and so on. After doing so, address those use cases, prove the value, and then I'd go for a fleet rollout. That's what I would do. We didn't do exactly that; we went for a side-by-side approach, which worked, but in my opinion it's not the most efficient thing you can do. Thank you. Thank you for that, Massimiliano, really good real-world context. Let's jump into the next one. The first part of this question is: does GE APM also help you understand the cause of the failure or performance degradation after detecting the issue? So, Truman, you're on the technical side, thoughts on that? Yeah, thanks, Nathan, for this question. Our blueprints have those failure modes, right, and what you'll see in those failure mode diagnostics, as we call them, is the progression of your degradation. So let's say it first identifies maybe a slight exceedance from that mint green envelope around the estimated value.
 
But there are different rules built in, such that as the issue progresses, the priority of the alert can increase, as in that example I showed, and I think, Ryan, we're going to send out a copy, or you'll be able to look at the recording again and go back to it. As you saw, other sensors contribute towards that diagnostic, with different rules applied to those sensors as well. You'll get additional understanding of, and priority on, the alert you're receiving: things such as diagnostic alert density and diagnostic alert counts all contribute to helping you understand your performance degradation, and then the time to action, projecting out with linear regression techniques to understand where you think you might be relative to, say, a DCS alarm value or something like that. Perfect, great answer. And I think part of this question, too, is that when you look at our APM, a holistic APM, we're talking about predictive today, but we also offer root cause analysis, production loss analysis, and reliability analysis, your Monte Carlo, your Weibull. So there's a lot of flexibility.
 
So when we say enterprise APM, we don't just mean predictive at scale. We have tools in our solution that can also help you go back and work through that RCA and further determine the cause of failure, right alongside those analytics. So, great answer. The next one is: given that a plant or site has multiple machine types with different sensors, can custom ML models be built to cater to each asset type, improving precision in predictions and reducing false negatives? I'll start with an answer on this one, and obviously Sid, Massimiliano, and Truman can chime in after. Our cloud platform offers bring-your-own-analytics capabilities; you'll hear that a lot from me. I was actually just talking yesterday to Viva, our technical product manager for that solution. So if you have models you're running, or you have models in Excel or any other system, and you want to bring those in alongside your SmartSignal models, you can absolutely do that. It's not limitless; we're not an open development platform where you can bring anything you want, but we're pretty flexible in terms of importing those models. So whether that's trying to bring wind assets into your APM today or expanding your modeling around a turbine, we can definitely do that. Truman, Massimiliano, Sid?
 
Any thoughts? I don't know, Massimiliano, if you're doing any bring-your-own-analytics today as well. We're not bringing our own analytics into APM; we're running those on our own dedicated platform, owned and managed by our company. Still, in our experience, it was very useful and efficient to rework the existing blueprints to shorten the deployment process. A blueprint is the closest thing you have to your real piece of equipment, and if you pick a very standard piece of equipment, for example a GE 9FA gas turbine, that blueprint will probably do the job. But if you go to, say, a steam turbine or a heat recovery steam generator, the things we have in our fleet are very diverse; they're similar, but not exactly the same. So you'll have a blueprint, but you'll probably need to adapt it to your real pieces of equipment. That was not straightforward in the first place, but once you learn how to do it, it was very beneficial and easy for us to tweak and sometimes rework GE blueprints and analytics: to incorporate more signals, to slightly change the diagnostics, to add some other failure modes, and so on. So it's open, and there is something, I would say small but not that small, you can do to build a better digital twin when your equipment is a bit different. For something completely custom, I'll stick with Ryan's answer, but that's not our real experience; the blueprint library is very extensive, and you'll probably rework something, but you can start from that. Awesome. Truman, any thoughts from you?
 
Yeah, I love Massimiliano's answer, and I'll add on to it. I think you understand from his response that the blueprint library pretty much covers the different asset types; even within the asset types, sometimes there are manufacturer specifics, sometimes there are not, but after that there's a lot of personalization that can occur, or actually needs to occur, to exactly that point of reducing false negatives. That's the flexibility of our analytic. Let me go back to what I first mentioned when I was presenting: first understand your problem and then generate the solution.
 
That really will drive how you create your model. If you want a model that's quick to deploy, to get quick time to value, you could start there and then adjust the model to be more narrow to reduce those false negatives, once you understand what your true use case or needs are. So it's definitely not one of those situations where there's only one way to deploy it and then you're stuck with it. As you understand how this tool benefits you and you see where your gaps are, the analytic, the solution, will essentially mold to your needs. So, thanks. And Sid, any thoughts? I know with microservices and AWS there's a lot of flexibility with SaaS and cloud analytics. Any quick thoughts on bringing your own model and the benefits there? Yeah, I think the answer has already been well summarized, so some points will be a reiteration of what's been said.
 
At AWS, we do support the development of custom ML models built upon the Lego-block services we have in AWS. These models can be trained for different machine types, with varying sensors and from various vendors, and this flexibility will allow you to cater to each asset type and obviously help you enhance precision in the predictions, reducing false negatives. The platform leverages AWS's modular cloud services, enabling the creation and deployment of custom-built models quite easily. These models can cover a lot of different use cases at the end of the day, from anomaly detection to predictions, diagnostics, and so forth. When creating these models, I would always encourage you to work backwards from the use case you're trying to solve, to define the problem statement and the challenge, because at the end of the day that is the key. Then the system will be able to better capture your unique operating conditions, improve accuracy accordingly based on your scope and limitations, and minimize false positives and negatives across the board. That's pretty much it from our side.
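As a rough sketch of the "one model per machine type" idea, the following hypothetical example fits an independent baseline (per-sensor mean and standard deviation) for each asset type and flags readings that deviate strongly from that type's own normal behavior; judging each machine type against its own baseline is one simple way to reduce false negatives compared with a single fleet-wide threshold. This is not a description of any specific AWS service or of SmartSignal; the class, thresholds, and numbers are assumptions for illustration only.

```python
# Hypothetical sketch: a separate statistical baseline per asset type, so each
# machine type is judged against its own normal behavior rather than a single
# fleet-wide threshold (which tends to miss or over-flag deviations).
import numpy as np

class AssetTypeBaseline:
    """Per-asset-type baseline: per-sensor mean/std learned from healthy data."""

    def __init__(self, z_threshold: float = 4.0):
        self.z_threshold = z_threshold
        self.mean = None
        self.std = None

    def fit(self, healthy_data: np.ndarray) -> "AssetTypeBaseline":
        self.mean = healthy_data.mean(axis=0)
        self.std = healthy_data.std(axis=0) + 1e-9  # avoid divide-by-zero
        return self

    def is_anomalous(self, reading: np.ndarray) -> bool:
        z = np.abs((reading - self.mean) / self.std)
        return bool(np.any(z > self.z_threshold))

# Train one baseline per asset type from (hypothetical) healthy history:
# columns are [vibration mm/s, temperature deg C].
rng = np.random.default_rng(0)
baselines = {
    "condensate_pump": AssetTypeBaseline().fit(rng.normal([2.0, 60.0], [0.1, 1.0], (500, 2))),
    "gas_turbine": AssetTypeBaseline().fit(rng.normal([3.5, 420.0], [0.2, 5.0], (500, 2))),
}
# A 3 mm/s vibration reading is abnormal for the pump, but normal for the turbine.
print(baselines["condensate_pump"].is_anomalous(np.array([3.0, 61.0])))  # True
print(baselines["gas_turbine"].is_anomalous(np.array([3.0, 421.0])))     # False
```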
 
Great. I know we only have a minute, so I'm going to sneak one more answer in here and then we're going to wrap up. The last question we have is: does APM do RAM dashboards and reports, and can it help with asset lifecycle extension to evaluate asset investment plans? First and foremost, we are not an asset investment planning platform. We integrate and work with the EAM systems, and we have an APM strategy application to help perform those analyses, but it's not something that's fully native. So yes, we can help with RAM right out of the box in our applications; in terms of trying to create a lifecycle analysis, we coordinate and integrate with those EAM systems to do so. So again, if you have a more technical question, reach out and we can get you the right answers.
 
And last but not least, one reminder before we all drift off. First, thank you again to Massimiliano; we know it takes a lot of time and we really appreciate it. Truman, thank you, and Sid as well. So please reach out: if you have an account contact and you're interested, or you have more questions, reach out to your customer success manager, your account contact, or someone on the services side. If you have more questions, please do let us know. And if you have questions for Massimiliano, or you're watching this recording and you're interested in his use case, shoot them over to me.
 
We can see if we can get you some answers from him as well. So again, thank you, we really appreciate it. AWS, thank you for the collaboration; Massimiliano, thank you for the partnership with A2A; and Truman, thank you for bringing your expertise. Thank you so much, and we'll see you all soon. Thanks. Thank you, everyone. Take care. Bye bye, everybody. Bye bye. Thank you.