WEBINAR

Cloud-First Data Management with Proficy Historian & APM for Asset-Intensive Organizations

Cloud-First Data Management
Asset-intensive organizations continue to face increased pressure to produce. And beyond that, to produce in a way that is efficient, reliable, safe, and sustainable. To do so, organizations are being tasked with making sense of more data than ever before to improve operations, predict potential downtime, maintain compliance, and fine-tune processes to support demand.

To help organizations meet these pressures, GE Vernova’s Software business provides cloud-first process historian and Asset Performance Management (APM) software, with proven integration, to help manage data better than ever before.

This webinar explores how a cloud-first approach to data management with GE Vernova’s Proficy Historian and Asset Performance Management (APM) can assist organizations, covering:
  • What is unique about the Proficy and APM architecture
  • Data collection and storage
  • Analysis, insights, and visualization
  • Asset and process optimization for any asset criticality
  • Improved decision-making via centralized time series data
  • Integration of new technology like Artificial Intelligence (AI)




--
TRANSCRIPT

Good day, everybody. Thank you for joining us. I'm really excited to have you. Whether you're new to our webinars or have been to a few, this is going to be a really good topic for you all. So thank you for joining and taking time out of your day. We should have plenty of time for questions, and we have a pretty concise webinar here with our experts on the historian, connectivity, and also the predictive maintenance side. So really what we're here to discuss today is scalable data foundations for advanced maintenance. And the way I want to kick this off is a big "did you know": since the Vernova spin-off, and since digital is now part of Vernova, I always like to make it clear that we have our own historian offering. It's called Proficy Historian.
 
So we're here today to, one, talk about the work we have ongoing with our Proficy team; two, discuss how we're connecting and doing data a little bit differently; and three, show how you can build off this foundation for some advanced maintenance techniques, and really talk about the future and why getting data right is super important for where we are today in the energy space. Before we get rolling, there are a couple of housekeeping items. This is being recorded, so at the end, if we don't get to your questions, or maybe if you hop off early, this recording will be available shortly after. Please submit your comments and questions in the Q&A chat box. The questions you submit will only be visible to us and the person asking. If we have time to answer live, we will answer them live. So please put them in the chat; we'll be monitoring, and if we don't get to a question during the session, we'll follow up with an email and get it answered. You also have access to the resource center.
 
There are a lot of great pieces in there around Proficy Historian, APM, SmartSignal, and really around the topics we'll be discussing here over the next 40 to 45 minutes. So without further ado, we're going to get into the agenda. We're going to do a quick introduction as we get through. We're going to frame up the problem we're here to discuss and really give a view on what we talk about internally as it comes to asset data and industrial data. Brian is going to get into Proficy Historian: why it matters, why it's different, and where it's heading. We'll discuss how we're connecting Proficy with our APM to provide more advanced maintenance techniques. And then, really, the real-world impacts here, and why we're doing APM and Historian and working so tightly together as one Vernova. And then finally, the big picture, which is where we're heading and how getting data right lays the foundation for future maintenance techniques and data usability. So, all of this before I hand it over to the experts for an introduction.
 
My name's Ryan. I lead product marketing here on the software side for the Power and Energy Resources software business, primarily SaaS, APM, and then our AI and gen AI initiatives. And now I'll hand it over to Brian and the team for intros before we get started. Go ahead, everybody.
 
My name is Brian Johnson. I work in our manufacturing division here in GE Vernova, delivering software for grid, energy transition, and manufacturing operations. I've been in the role for about three years now, and I really focus on data management and, like Ryan said, our Proficy Historian. So, happy to be here, and I think we have a really great presentation for you.
 
I'm Andrew May, our product manager within the Power and Energy Resources software business for our APM connectivity. I've been in this role for about four years now, and prior to that I was in our services organization. I've been with GE for about 11 years at this point. Thanks for having me, and I look forward to the conversation. Hey, everybody. Good morning, good afternoon, good evening.
 
My name is Truman Hwang. I am the technical product manager for our advanced analytics offering known as SmartSignal. Really happy to be in this space, providing insight across many industries as you look to expand the reliability and availability of your assets. I've been with GE for over eight years, both on the services side deploying analytics, and currently as a product manager here.
 
Perfect. So let's get right into it. Let's frame up the problem, and then I'll get out of the way. I'll monitor the Q&A box as we go, so as questions come up I will help moderate, and we'll spend most of the time with the experts here. But it's really important to frame up the problem, and I always like to discuss it this way. I've not been at GE for ten-plus years; I've been at GE just over three. My background is really enterprise SaaS software. So I like to take a step back before we get into the really important stuff that you're all here to learn about, which is "how can this help me?" Right? The stuff before "how can this help me" is really: what is the energy space dealing with? And this is not unique to energy, right? This is any industry that's undergoing a transition and going more digital.
 
So what we see today, what we hear, and why we brought this webinar together is that even in organizations that are using a historian, or just APM, or a mix, or a bunch of different systems, there's still an inherent problem with inconsistent data. So what Brian is going to get into is really that data foundation piece, with a story of how they're doing that differently. What we also hear as we push through is that we have siloed systems. Right? So when you think about using a historian with APM, when you have four or five different providers, or you want to bring in another vendor that can help orchestrate data, a big chunk of this is how you deal with those silos, right? And we'll get into that on the next slide and how we're thinking about that. And then again, you have the lack of context.
 
So inconsistent data, siloed systems, and lack of context is a recipe for disaster. So we're going to talk through, as we progress, how we're going to help with operational risk reduction through this work, how we're going to help increase your decision-making speed, and also increase efficiency. What this is meant to show is that there's a whole lot going on with your data. A lot of the time it's very, very difficult to make sense of that data, and there are a lot of options out there to make sense of it. So really, where we're heading, and what we're going to discuss throughout this webinar, is data solutions to those challenges. And from a GE Vernova perspective, we're going to talk very heavily about asset data. As you see the expansion into SaaS, the progression of cloud solutions, and where we're heading as an APM and historian offering, which Brian will touch on, there is a very big opportunity to help customers do more with data, especially with the expertise that GE Vernova provides. So when you look at the left side here, what we're really going to walk through is how, between Proficy Historian, APM, and SmartSignal, we can help make sense of the data here on the left and turn it into actionable insights, whether that's through Historian itself or through SmartSignal or other elements of our platform.
 
How can we help you increase your reliability, improve your maintenance outcomes, take faster action on your assets, and ultimately expand what you can do with your data? At the end of this, we're going to wrap up with where we're heading on the AI front and why getting data right really matters to the progression of these use cases. So without further ado, I'm going to pass it over to Brian. He's going to get into Proficy. Again, the Q&A box is open; please fire away, and we'll answer as we go. Awesome. Thank you, Ryan. So yeah, happy to be here today to talk about Proficy Historian. I'm sure that many of us on the call are either familiar with what a historian does or are directly related to the historian in our operation, whether that's OSI PI, or you are a current customer of Proficy, or one of the many others out in the market. So what we'll cover today is: what do we get out of having a historian, and from the Proficy angle specifically, what do we get? How is it extensible? How does it fit into the operations that you're doing today?
 
And then what would you get moving forward? How does it make your life easier? And really, how do we break down those silos? How do we make the data easier to manage? How do we make it easier to access? Because I think in today's day and age, with the prevalence of AI and agentic AI, or training LLMs to get better use out of them, the quality of your data matters, and the data that you're able to collect and then use to train those models or to gain better insights is becoming more and more important. Proficy Historian is well positioned to address all of those key pieces, and so we'll dig in now. What you see here on the slide is our standard architecture. Historian really starts at a site where data is collected. You can see on the left-hand side we can either be in the cloud for Proficy Historian, where we're the only historian on the market native to AWS, or you can actually purchase us from the marketplace.
 
And we just had one of the world's largest utility providers do just that: purchase Historian through the marketplace, and they're using many millions of tags in their operation. So we're scalable from just a few tags up to 100 million tags, or anywhere in between, and we can bring in a lot of data. On the next slide, we'll look at the key capabilities that Historian offers. But when you're thinking about how you would fit Proficy Historian into your enterprise layout, for example: we run right next to the machine. You can see on the left-hand side we bring the data in, whether that's starting on your factory floor, or where your power is generated, or wherever the data is coming from. That's where we want a historian to be. So you can start on the factory floor.
 
For Proficy specifically, it doesn't necessarily have to be a Proficy Historian on the factory floor, but I would love it if it was, frankly. We can be additive to the environment that you already have. So if you have a PI historian in place and you're looking to aggregate data from multiple different facilities in your footprint, or if you're multinational, for example, and you'd love to have a global view of your data, Proficy Historian using our enterprise capability is a really easy way to do that. If you follow the yellow lines across, the data is either coming from one factory floor or it's coming from third-party historians, like we just said. It's all flowing up to a centralized historian, where we can aggregate all of the data coming off of your other enterprise systems and give you a single source of truth. Not only is it redundant, so you'll have a source for disaster recovery in case anything happens in your source system, or if there's a natural disaster that affects your operation, but it gives you a single place to start to run analytics, like we talked about: training those models, breaking down those silos.
 
Maybe you're measuring the same quality or the same metrics with the same tag values in factory A and factory B, but they're named something different, for example, and you need a sort of unified-namespace type operation to bring everything together. We can do that within our enterprise-level historian. So you can then run metrics that really give you the breadth of your operation across your organization, so you get that holistic picture and are able to make better decisions faster. It doesn't necessarily have to be on-prem based either, right? We could move from an on-prem primary historian to an on-prem, Windows-based enterprise historian, but there's also our cloud capability, and our SaaS capability that's coming at the end of next quarter.
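To make that unified-namespace idea concrete, here is a minimal Python sketch. Everything in it (site names, tag names, and the mapping table) is invented for illustration; Proficy's enterprise historian handles this kind of mapping in-product rather than in user code.

```python
# Toy illustration of a unified namespace: two sites measure the same
# quantity under different tag names, and we normalize to one canonical
# name so enterprise-level metrics can be computed across sites.
# All tag names here are invented for illustration.

SITE_TAG_ALIASES = {
    "factory_a": {"FA.PMP01.OutletTempF": "pump01.outlet_temp_f"},
    "factory_b": {"B-PUMP-1-TEMP-OUT": "pump01.outlet_temp_f"},
}

def normalize(site: str, tag: str, value: float, ts: str) -> dict:
    """Map a site-local tag name to its canonical enterprise name."""
    canonical = SITE_TAG_ALIASES.get(site, {}).get(tag, tag)
    return {"site": site, "tag": canonical, "value": value, "ts": ts}

samples = [
    normalize("factory_a", "FA.PMP01.OutletTempF", 141.2, "2025-01-01T00:00:00Z"),
    normalize("factory_b", "B-PUMP-1-TEMP-OUT", 139.8, "2025-01-01T00:00:00Z"),
]

# Both sites now roll up under one tag for a cross-site metric.
vals = [s["value"] for s in samples if s["tag"] == "pump01.outlet_temp_f"]
print(f"enterprise avg outlet temp: {sum(vals) / len(vals):.1f} F")
```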
 
We'll talk about that on a future slide. We really give you the extensibility to have your historian however you need it, however it's going to work best for you, and to give you the time to value and operational efficiency that you're looking for. So, on our cloud platform: we're native to AWS, and GE Vernova is a partner of AWS, so we build all of our infrastructure there. It's downloadable from the marketplace. So if you have an agreement with AWS today where you need to maintain a certain amount of spend to keep a certain rate, we can help you with that: if you're downloading from the marketplace, you're getting the immediate benefit of spending that money with AWS. And it also makes the infrastructure much easier to manage.
 
Our cloud historian is deployable in about an hour, and you're up and running from download to data collection. The way Historian collects that data is to bring it in and store it in proprietary flat files inside our Historian archiver. On top of that, you're able to apply compression to the data coming in from the collectors, whether that's to a Windows version on-prem, in a tower computer under your desk, or to the cloud, and you're also able to apply compression to the archive files themselves, so you can double-compress the data. That gives you a really, really efficient base from which to operate. Not only do you not have to buy more hardware, but you're able to collect very fast for a very long time and report off of it as if it was collected today.
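Proficy's archive format and compression algorithms are proprietary and not detailed in the webinar, but a deadband filter is one common collector-side compression technique in historians; this toy sketch just shows the general idea of dropping samples that don't move meaningfully from the last stored value.

```python
# Toy deadband compression: keep a sample only when it moves more than
# `deadband` from the last stored value. This illustrates the general idea
# of collector-side compression; Proficy's actual algorithm is proprietary
# and may differ.

def deadband_compress(samples, deadband=0.5):
    stored = []
    last = None
    for ts, value in samples:
        if last is None or abs(value - last) > deadband:
            stored.append((ts, value))
            last = value
    return stored

# A mostly flat signal with an occasional excursion compresses heavily.
raw = [(t, 100.0 + (0.1 if t % 7 else 2.0)) for t in range(60)]
kept = deadband_compress(raw)
print(f"kept {len(kept)} of {len(raw)} samples "
      f"({100 * (1 - len(kept) / len(raw)):.0f}% reduction)")
```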
 
We have customers that are running data from 15 or 20 years ago, and they're still able to report off of it as if it was collected that day, which is a really impressive capability, and I'm not sure others on the market can offer that. So we're able to expand to your footprint and fit where you need us to. And because we can run in this hybrid configuration, if either hybrid cloud or moving primarily to cloud is imperative for your business (and we've seen many businesses recently focusing on moving to cloud and divesting from on-prem infrastructure), it's important to have that capability. It's definitely something we can do here, and if you're thinking in that sort of vein, Proficy Historian is definitely worth a look, I think. So let's talk about some of the key capabilities of Proficy Historian. You may be saying: Brian, that architecture is pretty interesting; we're dabbling in cloud in our business, or we're looking at how we aggregate multiple historians up into a single historian to save space, make reporting easier, and build redundancy or disaster recovery into our operation. But what specifically does your historian do? Let's answer that question. If we start with the data storage element: for data historians, data storage is pretty bread and butter. It really is the main purpose. But you can see we can scale up to 100 million tags.
 
We're really very fast at data collection: you can see a million writes or 60 million reads per minute, and if you use our cloud option, it has been tested up to a billion samples per minute of ingestion. And yes, you read and heard that correctly: a billion samples per minute. So really, really incredibly fast data collection on cloud. And because of the way cloud operates, because it auto-scales and is right-sized for your operation, you get what you need when you need it. It's a pretty compelling offering, I think, for folks that are looking for that high-speed data collection but also the flexibility to not have to over-purchase or over-provision hardware. You're always right where you need to be. Now, obviously we collect time series data. We calculate data sets with our calculation collector. We're able to handle forecasted data as well. We have customers in the energy space, or in generation, that are using green energy sources, and calculation or forecasting of data is super important there.
 
But we have that capability. We also handle alarm and event data, so if you're looking for that sort of capability, we bring that to the table as well. We have another tool within the Proficy portfolio that we're working on called our Data Hub, which is bringing in a calculated event engine. If you're familiar with PI event frames, for example, it's very much an analog to that, where we're able to take the capability from Historian and our other tools and bring it to the table and say: listen, we can forecast out, we understand where this alarm is coming from, or we think you're going to have an alarm in ten minutes based on the data that we're collecting. So really fun capabilities there. And then, like we talked about, data replication across data centers.
 
It's so important to know that your data can be where it needs to be, whether that's mirrored in an on-prem system where you have a primary and a backup, or in the cloud, where by default your data is being put into multiple AZs (availability zones). So not only do you have the cloud-side replication and disaster recovery, but we're building it into the Historian as well. And then, with tools like APM, Historian connects natively, whether it's using an API to get data out into tools like TrendMiner or Seeq or other analysis tools. From the manufacturing business, we have our own cloud Operations Hub, which is a really awesome low-code/no-code tool to get you from data collection to actual insights into your data in a hurry, and it doesn't require any specialized knowledge. There's training available, but it doesn't require special training to use. So, knowing that we're able to collect your data very fast, we're also able to send it wherever you need it to go, whether that's using Snowflake into a data lake, or other tools, or REST APIs, or any of the distributors that we have available. We're going to talk about Kafka in a moment.
 
That's one of the big new developments in the Historian space for us. We can move your data where it needs to go in a very fast, efficient way so that you're able to get the best out of it. So I think Historian has a great capability set that's continuously growing, and we're going to cover our latest release on the next slide. When I think about Historian from my product management side, there are a few categories that I like to cover in every one of my releases, and you can see them here: user enhancements; enterprise model support, so bringing in better asset modeling capability and making data more intuitive to use; and then security enhancements, which is a big one. In today's day and age, even what we consider to be the most secure servers in the world need better cybersecurity. From the Proficy side, we have a red team that does testing, and we have a cybersecurity team that helps us identify available exploits and make corrections as fast as possible. If exploits are discovered, we issue patches and make the necessary corrections as soon as possible. So these will always be features of the different releases that we have.
 
And the keys for 2025, if you're just hearing about Historian now and you like what you've heard so far: we've made a lot of enhancements. There's our Data Hub, a new multi-component tool that we're building, which includes a unified namespace, a directory service, and an enterprise data model, which initially will be ISA-95 based, though we're looking at other data models as we go forward. This is something we're just building, so we're really in the infancy of getting it to market, but there's a lot of capability there. And then there's a data fabric component as well, to be able to move your data around. So if you are already looking to connect data, or to bring in data from systems and different subsystems, whether it's from your ERP system or other tools, we're using a lot of different protocols, whether it's Kafka, MQTT, or many others, to move the data around within Proficy to the tools that are available. Then there's cloud scalability, like we talked about: a billion points per minute of ingestion, really the fastest on the market, I think, which is really incredible. And then our SaaS capability.
 
If you like the thought of having cloud-based flexibility, but you really dread the thought of having to manage infrastructure on your side, SaaS is probably the way to go. We give a lot of key benefits with our SaaS offering: not only do you not have to manage the infrastructure, but you really get all of the benefits of Historian with none of the downsides. It's a really pretty impressive offering and something that I hope folks on the call will consider. And so, for my final slide here, I think it's important to cover new and great capabilities. For 2025, we've introduced what we're calling a Kafka producer and consumer. In our traditional parlance, that's a collector in terms of a consumer and a distributor in terms of a producer. So if you need that pub/sub data and really high-speed data collection, Kafka is probably the way to go. I know it's used a lot in the utility space, and that's why we brought it into the Historian: we have a lot of customers that need not only to subscribe to certain messages and data streams and get closer to real-time data, but also the scalability and the breadth that Kafka can offer as a way to bring in higher-speed data. It's really very impressive, and now it's available within Historian itself.
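Historian's own Kafka collector and distributor are configured in-product, so the sketch below is just a generic illustration of the pub/sub pattern Brian describes, using the kafka-python library; the broker address, topic name, and message shape are all assumptions.

```python
# Generic Kafka pub/sub sketch (kafka-python): a producer streaming tag
# samples to a topic, and a consumer subscribing to them. Broker address,
# topic name, and message shape are assumptions; Historian's own Kafka
# collector/distributor is configured in-product, not in user code.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"          # assumed broker address
TOPIC = "historian.tag-samples"    # assumed topic name

producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"tag": "pump01.outlet_temp_f",
                      "ts": "2025-01-01T00:00:00Z", "value": 141.2})
producer.flush()

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for msg in consumer:  # streams indefinitely; each msg.value is one sample
    print(msg.value)
    break
```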
 
So you can see the super-high throughput and the scalability. If you're using things like Hadoop or Spark or Storm, or really just looking for any other sort of data lake integration, Kafka is a quick way to get that. So bringing it into the Historian made a lot of sense: being able to understand where we send the data, and then what you can do with it as a customer. If you're streaming data very fast off of your data sources, we want you to be able to get it to a place where you can derive insights, and Kafka is an easy way to do that. And then, obviously, getting real-time analytics from the data streaming is super important in high-speed data applications, and we bring that to the table. So yeah, that's my story. Over to you; hopefully folks have been really excited. Thank you so much. Brian, really good overview. I have a question here.
 
And I think it's really timely with the problem we've framed up. What do you think about Proficy Historian and where it falls into an overall analytics strategy? Obviously Truman will talk about it from an APM angle, but when someone thinks about an analytics strategy, how can they use Proficy as part of it? Where does it fit? How can it help? Absolutely. I think time series data in both manufacturing and electrification is the backbone that you're going to drive a lot of analytics off of, right? You're able to collect data very fast with Proficy Historian and store it very efficiently for a very long time, which means that if you're training AI models, for example, you have a lot of data you can bring to bear against those models over a long period of time, to show them different variation or to get better training for the model.
 
And even if you were just developing C-suite analytics, for example: because Historian is able to move across so many different spaces, bring in data from different places, and use REST APIs, for example, to connect to third-party data sources, you're able to report on a lot more data and derive much better insights, I think, than you would with other tools. And because we connect to all sorts of different third-party tools on the market, we can be additive to your environment. You don't have to rip and replace to bring in a Proficy Historian. So, for example, if you're a PI customer today and you're looking to aggregate other sources of data, or to bring things together into a single source to then report off of, we're definitely something you should be looking at.
 
And so in terms of a modern data strategy, like we talked about in the beginning: breaking down the silos, bringing everything together, and then really driving awesome insights based on the data you're able to collect is where we really shine as a product. I think we have one last question before we move over to Andrew. This one just came in from Kinshasa: how does a user add historical data, or can a user add historical data? Absolutely. We have a bunch of tools. Not only do we have collectors, which run off of different protocols like the Kafka we see here on the screen, or MQTT, or OPC in different flavors; we can also bring in data using REST, which is a pretty powerful method, and I think, going forward, looking at modern software, it's probably the place to be. We also have an Excel add-in, where if you can get your data into a tabular format, you can bring it in, or bring it out from Historian, manipulate it, and then add it back in using Excel. And if you're coming from something like PI or a competitive historian, we also have tools that work that way.
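As a sketch of the REST backfill pattern Brian just mentioned: the endpoint path, payload shape, and auth below are placeholders, not Proficy's documented API, but the general shape of pushing tabular historical samples over REST looks like this.

```python
# Hypothetical REST ingestion sketch: the base URL, endpoint path, payload
# shape, and auth are placeholders, not Proficy's documented API. It shows
# only the general pattern of backfilling historical samples over REST.
import requests

BASE_URL = "https://historian.example.com/api/v1"  # placeholder URL
TOKEN = "..."  # obtained via your auth flow

rows = [  # e.g. exported from a spreadsheet into tabular form
    {"tag": "pump01.outlet_temp_f", "ts": "2020-06-01T00:00:00Z", "value": 138.4},
    {"tag": "pump01.outlet_temp_f", "ts": "2020-06-01T00:01:00Z", "value": 138.9},
]

resp = requests.post(
    f"{BASE_URL}/tags/data",  # placeholder endpoint
    json={"samples": rows},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("backfill accepted:", resp.status_code)
```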
 
We also have an ETL tool, which, in the last year, working with a large global chip manufacturer, we got to about 25 times its previous throughput. So bringing data into Historian is much faster and easier than it has ever been, and it shouldn't be a barrier to entry. Absolutely not. This is perfect, and we'll keep going. Andrew, over to you. There are some questions in the chat we'll address, so if you asked one, I think a few of these tie into these next few sections. Andrew, take it away. Great, thanks. So Brian talked through how we collect and store data from a GE Vernova perspective. Now the question is: what do we really do with that data from our application portfolio? My section is really about how we connect Proficy and APM together so that we can start utilizing some of that data in the reliability-centered and condition-based maintenance practices within our portfolio.
 
I've got two sets of architectures on the slide here, and I'm going to start with the one on the left-hand side. At a really high level: I've got a historian where I'm collecting data (and Brian had a slide that shows where we can get data from and into the historian). Then, how do we get that data into our APM application that's on-premise? We have an entire portfolio called OT Connect, which some customers may be using today, that has a native adapter. Using the technologies that Brian was talking through, we've designed an off-the-shelf adapter that connects to the Historian and makes the data available to the APM application. When we move over into APM, we're not replicating the data from the historian. When there's a workflow or a need for the data, we queue that request up as a message, send it over to the Historian, get a response, and route it back to the particular point of need within the APM application. That's very important when you consider that, on-premise, it's all your own infrastructure.
 
We made the design choice not to end up with the equivalent of two historians for certain types of tags that you may want in the APM application. Rather, we're making the data available on an as-needed basis, whether that's a user who wants to look at a trend of the data; a policy that wants to evaluate the condition of a particular set of signals, look at the health of a piece of equipment, and update it; or an asset health index that you want to update once a day or once an hour to say, based on current conditions for this particular tag and its thresholds, what is the health of my asset going forward. So, on-premise, it is a point of access over into the Historian, and it's a native adapter that we test with every release, for both the Historian releases and our APM application.
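Here is a minimal sketch of that on-demand pattern, with in-memory queues standing in for the real message bus and a fake read function standing in for the adapter's call into the Historian; none of this reflects OT Connect's actual interfaces.

```python
# Minimal sketch of the on-demand read pattern described above: instead of
# replicating the archive into APM, a read request is queued, served by the
# historian adapter, and the response routed back to the point of need.
# The queues and the fake read function are stand-ins for the real adapter.
import queue

requests_q: "queue.Queue[dict]" = queue.Queue()
responses_q: "queue.Queue[dict]" = queue.Queue()

def historian_read(tag: str, start: str, end: str) -> list:
    """Stand-in for the adapter's call into the Historian archive."""
    return [(start, 141.2), (end, 142.0)]

# APM side: a policy needs a trend, so it enqueues a request...
requests_q.put({"tag": "pump01.outlet_temp_f",
                "start": "2025-01-01T00:00:00Z",
                "end": "2025-01-01T01:00:00Z"})

# Adapter side: serve the request and route the data back.
req = requests_q.get()
responses_q.put({"tag": req["tag"],
                 "data": historian_read(req["tag"], req["start"], req["end"])})

print(responses_q.get())  # the policy evaluates this; no second archive
```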
 
As we move over to cloud, things get a little bit more interesting, because now you potentially have a disparate system. In this diagram, I have the Proficy Historian within your customer network, and then we have our cloud services with our APM application. There are really two methods of getting data over into our cloud APM application. One is a native solution that Brian's portfolio offers, called a server-to-cloud collector. That is a collector you deploy from the historian perspective: you configure the information required to connect to our time series data store within our cloud, and you configure which tags you want to send to our APM time series for availability from the application. For a customer who is used to managing historian collectors because they have a large portfolio, this is a really good option, because your infrastructure team may already be familiar with it. There's also a product line we have called Edge, which is really a system with remote management capability that also has protocol adapters that can connect to the Historian, pick up the changes in the data as they come into the Historian, and replicate them over to our time series instance. The advantage of this one is that if you have a team who's familiar with the APM application and management of APM, the Edge system has a remote management capability. Plus, it also has the in-stream analytics capability Brian talked a little bit about on the Kafka slide.
 
The Edge platform has the capability to run in-stream analytics as you're subscribing to data changes, to do things like compression, or analysis using an AI model, and then push the outputs to APM for consumption. Once the data hits APM in the cloud, or APM on-prem, it becomes available to all the modules you may be using within APM today, via either Policy, the health indicator, or, when you're in the cloud, a different set of applications. Which really takes me to our next slide: all right, we've got the data connected up; now what can I do with it? I've mentioned things like health indexes, and for folks who aren't aware of what a health index is, who aren't necessarily part of our APM space today: a health index takes different inputs from the portfolio of data you have on an asset. So when we start talking about asset performance management, you have things like your signal data coming out of the process historian.
 
You may have things like rounds data, where you have your operators walking your plant line, walking your facility, and taking notes and measurements on non-instrumented points that are of interest to you from a management standpoint. And you start aggregating that together to say: how healthy is a particular point in my process? When we integrate the historian, a lot of times what we'll do is say, okay, based on certain thresholds, is this particular piece of equipment healthy or not healthy? The example I like to use, just because everyone understands temperature, is a bearing metal temperature, which obviously, from a design standpoint, has some thresholds for how high it can go before you need to be concerned. Equate that to cooking on the stove: sometimes you need to use a potholder to move a pot off the stove, because it has a metal handle and it's really hot, and sometimes you don't.
 
Within APM you can set an index to say: can I grab that pot handle without having something insulated between my hand and the pot handle, because it's not hot enough yet? Or do I need to be concerned about grabbing that pot handle? And you start aggregating all kinds of signals together into an overall aggregated health index. I'm not going to touch on the predictive maintenance piece, because I don't want to steal Truman's thunder; that's his, with the real-world use cases and starting to get into the AI piece. The one thing I really want to emphasize, which a lot of folks don't realize, is that as you're aggregating that process historian data in, we have within our APM application a whole analysis section where you can do time series analysis. It's a really robust design where you can do things like trending a time series, parallel-axis trends, x-y correlations, spider web charts, polar charts, and bar charts, giving you a really comprehensive way to take that time series data, look at it, be informed of a change in health on a piece of equipment, and then start diving into the problem to say: what do I want to do about this?
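As a toy version of that threshold-based health index: APM's actual scoring rules and weights aren't spelled out in the webinar, so the thresholds, linear falloff, and equal weighting below are invented purely to illustrate the "can I grab the pot handle?" idea.

```python
# Toy health index: score each signal against invented thresholds, then
# aggregate into one 0-100 value. APM's real scoring rules and weights
# are not specified in the webinar; everything here is illustrative.

THRESHOLDS = {  # tag: (warning, alarm) upper limits, invented
    "bearing_metal_temp_f": (180.0, 210.0),
    "vibration_in_s": (0.3, 0.5),
}

def signal_score(value: float, warn: float, alarm: float) -> float:
    if value < warn:
        return 100.0
    if value >= alarm:
        return 0.0
    # linear falloff between the warning and alarm limits
    return 100.0 * (alarm - value) / (alarm - warn)

readings = {"bearing_metal_temp_f": 192.0, "vibration_in_s": 0.2}

scores = [signal_score(readings[t], *THRESHOLDS[t]) for t in readings]
health_index = sum(scores) / len(scores)  # equal weights, for simplicity
print(f"asset health index: {health_index:.0f}/100")
```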
 
There's an issue that I need to see. I'm starting to aggregate all this information, I'm starting to see trends that I don't like, and I'm analyzing those trends to make sure my equipment is running the way I want it to run. And, do we have any questions, Ryan? I'm not seeing the chat. There's one from Eric here; I know you started to touch on it a little bit, and I might chime in. You mentioned the reliability-centered maintenance or condition-based maintenance applications that customers might have access to in this ecosystem. Yeah. So what I would say is, for reliability-centered maintenance and condition-based maintenance, we don't term our applications directly with those terms. But when we talk about condition-based maintenance: using tools like a health index or like Policy Designer, you can start to set conditional thresholds for when you want to be informed that you need to start taking a maintenance action.
 
When we start to move more towards reliability-centered maintenance, that's where you're really starting to get into predictive maintenance and some of our more advanced tools that will give you earlier indication of an issue. And the question, just for this webinar topic, is: okay, we've got those applications; how does this all tie together with the Historian piece? We need to get the data from somewhere, right? And the Historian offering is a really good place to get that data to start feeding into those workflows, either reliability-centered or condition-based, in order to execute your strategy. And I emphasize your strategy because everyone's is different. You may have different thresholds than any other customer, but the foundational components are all here to choose from. Do you want to do reliability-centered maintenance for your high-capital assets?
 
Do you want to do condition-based maintenance for your low-capital assets that you know aren't going to cost you an arm and a leg to replace, leveraging different practices for maintenance? So it's a good question. Yeah, really good coverage, Andrew, and before we go to Truman, I'll expand on that. So think about our applications: our portfolio is in APM, right? We have APM Strategy, which really helps you look at a bunch of parameters around, to Andrew's point, your assets. Is it a high-criticality, high-cost, high-O&M-spend asset? That's really where you can dive in and say: hey, is this where I do RCM or CBM or predictive maintenance? That helps you really get a stance on what you need to cover and why. Then there's our APM Health application, which you'll see with the health indexes and Rounds Pro; those are both part of what we call APM Health in our portfolio. APM Health is a mix that includes mobile data collection.
 
So your operator rounds, your images, your temperatures, vibration: all of that feeds into a health index, which goes back to Andrew's metaphor of, hey, can I grab this pot or not? So we have ways for condition-based maintenance to extend the data collected. If you don't want to go full predictive, we have the ability to support that with APM Health. Among the other things we'll talk about near the end here, on the application side, with the expansion of time series data, we also see customers expanding into leveraging fixed or mobile cameras for condition-based maintenance.
 
Maybe they have an infrared camera set up on an asset, and they just want to pull that data and convert it into time series to help with their health index. We can talk a little bit about that at the end, time permitting, but with the expansion of what condition-based means, we're seeing a lot of great customer use cases. So across the portfolio, to Andrew's point, there are bits and pieces: whether you want to do an RCM-type approach, a condition-based approach, a predictive approach, or, with Proficy, look at the tags and historical data, we offer that capability. It really depends on where you want to go, but our applications all tie into: hey, what is the best strategy for me and my organization? So with that, I know we are at 37 minutes, and I want to hand it over to Truman to get into the predictive piece. A few questions just came in; we'll answer some of these at the end, and then we'll go from there.
 
Awesome. Thanks, Ryan. Thanks, Andrew. Yeah, that's the perfect lead-in for me. I hope everybody's seeing the journey as we present here. We have the data, and always keep in mind that what we're talking about is the data connectivity; we're seeing how it goes end to end, from the sensor on the asset all the way through to, as I'll talk about, producing analytic advisories and diagnostics against that data to drive additional availability and reliability. So, hey everybody, hope you all are doing great, and thanks again for spending your valuable time with us. Let's now look through the lens of analytics at what you can do with this data, the challenges that some users face in deployment, and, what really matters and what we're all here for, how connectivity affects analytics accuracy and capabilities. Let's start with a high-level overview, just looking at building analytic models from the ground up. Say you have a data science team on site, or maybe they've already built a model. As a team embarks on development, they start with collecting all the data from the various data sources relevant to the asset or assets. Next, there's an important step of cleansing the data. This includes steps such as identifying the tags to use: sometimes you have duplicate tags, maybe coming from duplicate data sources, and they all show up; that's just a simple example.
 
In those circumstances you'll typically choose the tag that is closest to the asset source, the one that most closely mimics what the asset sensor is showing. The exploration step is where the team dives deeper into the data sets, identifying trends, patterns, and characteristics, which ultimately results in narrowing down a set of validated tags. Then, with that narrowed list of tags and the data behind it, we get to the main step that most people think of when developing an analytic model; as the name right there says: modeling. Here the team builds all of the relationships and rules from the data, in a very iterative process, with the resulting model providing results that the end user can interpret. And so I hope that you picked up, through this overview of building an analytic, several places where connectivity plays a role, things that Brian and Andrew have already spoken towards. It's quite an intensive process, and a very good process, but what I want to show here is SmartSignal and its capabilities. SmartSignal is an analytic AI/ML solution with over 20 years of development behind it. Through those years, we've removed many of the steps we saw on the previous slide with the advent of what we call digital twin blueprints, and that's the focus I'll highlight as I talk through SmartSignal. There's a whole lot more that I'd love to talk with you about.
 
But we'll focus on blueprints, which provide a template that allows for efficient mapping of those connected tags coming through Proficy Historian, the user's asset sensors, to SME-identified failure modes. These failure modes effectively use the available SmartSignal tools, which include the most important aspect, similarity-based modeling, but we layer on top of it several industry-specific and industry-agnostic rules and syntax, culminating in insights well in advance of any standard alarm thresholds, such as those at the DCS level, all while still maintaining the accuracy to keep false alerts at a minimum. So here's an example of one of our existing blueprints and the level of coverage you would have right out of the box, on specific failure modes that are pre-built and ready to map, connect, and deploy.
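SmartSignal's production implementation is proprietary and far more sophisticated, but here is a numpy sketch of similarity-based modeling in its simplest textbook form: estimate the expected multivariate state as a similarity-weighted blend of known healthy states, then watch the residuals. The states, kernel, and bandwidth here are all invented for illustration.

```python
# Sketch of similarity-based modeling (SBM) in its simplest form: estimate
# the expected state as a similarity-weighted blend of healthy historical
# states, then flag large residuals. Illustrative only; SmartSignal's
# production implementation is proprietary.
import numpy as np

# Memory matrix: rows are healthy historical states, e.g. [temp, pressure].
D = np.array([[140.0, 30.0],
              [150.0, 32.0],
              [160.0, 34.0],
              [170.0, 36.0]])

def sbm_estimate(x: np.ndarray, bandwidth: float = 10.0) -> np.ndarray:
    """Similarity-weighted estimate of the expected state for input x."""
    dist = np.linalg.norm(D - x, axis=1)
    w = np.exp(-(dist / bandwidth) ** 2)   # Gaussian similarity kernel
    return (w @ D) / w.sum()

x = np.array([158.0, 39.0])               # current reading: pressure high
expected = sbm_estimate(x)
residual = x - expected
print("expected:", np.round(expected, 1), "residual:", np.round(residual, 1))
# A persistently large residual on one sensor is an early warning, well
# before a fixed DCS alarm threshold would trip.
```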
 
So again, as I mentioned, I'm going to move on, but please do take a look at some of the links in the resources tab, and reach out to me anytime after the webinar, or ask some questions in the chat too, so we can dive even deeper into GE Vernova's SmartSignal technology. SmartSignal solves this time-to-insight problem I've been talking about from an analytic deployment angle. So now let's look at the connectivity side. This is a challenge that SmartSignal with Proficy looks to take head-on. I did a little bit of polling of several folks in our services deployment team, and many of the responses echoed what I've got there as a quote, with a little bit of change in what you see there: data connectivity is the largest variable in deploying analytics, oftentimes becoming a critical-path blocker, consuming multiple resource teams to get it right, and oftentimes rework gets built into timelines to accommodate it. So anything that reduces this timeline is important to project cost and to what matters most: the time to customer value.
 
I sense that there are a lot of nodding heads right now for those watching. With Proficy Historian pre-connected, and sensor data centralized yet still flexible, with sensor tags that, as Brian and Andrew have expertly pointed out, are available, manageable, scalable, and easily digestible, you really now have an end-to-end solution that prevents common pitfalls when it comes to analytic connectivity. Three of those pitfalls I'm highlighting here; you can see them on the screen: intermittency, synchronization, and collaboration issues. With SmartSignal, over the 20-plus years of development I mentioned, we've built in several robust tools that can help regenerate sensor data virtually when you have some of these intermittent sensor outages. However, intermittency still degrades your capability to quickly and accurately pinpoint the root cause. As for synchronization and collaboration issues, I'll speak to those together: it's important that users and applications use the same data, properly aligned, no matter who or what application is looking at it. So I'm going to flash back to Andrew's slide here, the one he showed at the end of his part. With SmartSignal within APM, analysts can now cross-collaborate with operators, process engineers, and reliability engineers to ensure they're looking through the lens of synchronized data. Sometimes all it takes is a slight offset of that data, if it's not aligned, to have different conclusions drawn, and with that, you get time lost.
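Here is a small pandas illustration of that synchronization pitfall, with made-up timestamps and values: two systems record the same signal with a 30-second clock offset, disagree when compared raw, and line up once resampled onto a shared grid.

```python
# Small pandas illustration of the synchronization pitfall: two systems
# sample the same signal, one offset by 30 seconds. Compared raw, the
# timestamps never match; resampled onto a shared minute grid, they agree.
# Timestamps and values are made up.
import pandas as pd

idx_a = pd.date_range("2025-01-01 00:00:00", periods=5, freq="1min")
idx_b = idx_a + pd.Timedelta(seconds=30)   # same signal, offset clock
vals = [100.0, 101.0, 103.0, 107.0, 115.0]

a = pd.Series(vals, index=idx_a)
b = pd.Series(vals, index=idx_b)

raw_diff = (a - b).abs().max()             # NaN: no timestamp ever matches
aligned = pd.concat([a, b], axis=1).resample("1min").mean()
aligned_diff = (aligned[0] - aligned[1]).abs().max()

print("raw comparison across mismatched clocks:", raw_diff)   # nan
print("max disagreement after alignment:", aligned_diff)      # 0.0
```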
 
You realize after research: oh, hey, it was just a data synchronization issue all along that was the culprit. Or, even worse, perhaps you miss a failure because of it: one team says, oh, it looks okay from my end, another team is seeing something different, and with incorrect conclusions drawn after research, we just allow it to move on, and potentially a failure results. Those are all the situations where, with synchronized data that doesn't have any sort of intermittency, you're setting your analytics, and any of the other things you're seeing on the screen, up for success. So with that, I'll pass along to Ryan to wrap up everything that Brian, Andrew, and I have talked through. Yeah, thanks, Truman. And we have a quick question, Truman, before I go on, and this is a great one, because even I mess around with things I probably shouldn't as a product marketer.
 
If someone has their own existing analytics or data science team, or folks building analytics for their assets, how can they integrate that with Proficy or APM? How can those be brought in and used, if folks are doing bring-your-own in some areas? Yeah, that's a great question. Thanks, Ryan. I probably should have touched on that, and I know Brian touched on it from a Proficy Historian perspective. I'll flip back to this slide: there is a capability within APM for, let's say, your data scientists. I shared that screen showing the model build, if you're doing it from the ground up, and perhaps you already have a team that's created something within, say, Python. Yes, those models can get brought into APM. They can utilize the power that Andrew showed with the analysis section, where you can trend all that data as well. So yeah, that's absolutely possible.
 
And even more so, being connected with all of the APM tools is absolutely possible. Great question. Awesome. I think it's important to add to what Truman said as well that, because Proficy Historian and APM come from the same family, we work much better together. There's no need for extra wrangling and connectivity, so you can run all of this analysis much more easily than you would otherwise. Yeah. One last thing I wish I'd remembered to present on: Brian mentioned security, and there are things that Brian and his team learn from a security perspective, so with that cross-collaboration the enhancement is almost more than doubled, because we're catching things and sharing them across the two teams to make sure those holes are plugged up. So I'll wrap this up and then open it up. I know we have 11 minutes; we'll try not to take all that time. And this is probably a section where folks are going to have some questions, because it is a hot topic: AI, gen AI, agentic AI.
 
The future of industrial data and industrial automation. So I wanted to wrap up by going through what you just heard, to reframe it. Everybody, in every industry (including in my time on the banking front), struggles with data access, data accuracy, and using that data effectively. There's a lot out there and a lot you can do, but where do you start? How do you use it? You heard Brian talk through how they're collecting that data and how much data they can collect. You heard Andrew really go through where this fits in terms of condition-based maintenance, or predictive, or even emerging techniques. And then you heard Truman really talk about how we can action this data very effectively and quickly through blueprints and our expertise.
 
So there are ways and steps you can take to start using that data and getting more from it. But look at the whole ecosystem and strategic vision: when you look at the data that you have and what you need out of it, you've got your operator rounds data, you have other field data, you have time series, which we've talked about a lot today. You have alerts and cases. You have work history, whether it's in IBM or SAP or another EAM system. And then you have all these things that you'll see on the bottom, which are the analytics that Truman just talked about: bring your own analytics. You have the emergence of AI, which is a super hot topic. And then, on top of that, you've got to figure out how to use that data and manage it effectively.
 
So MLOps and AIOps, under that DataOps family, are really important. The way we look at the energy space and how we can help manage this data really falls around performance, emissions, reliability, and safety. That covers our portfolio, what you've heard, and what we're driving towards: a new level of asset intelligence. And I just wanted to share a few things that we have in the works on top of this data (Brian, Truman, and Andrew, please chime in). I'd be remiss not to talk about how we're approaching some of these AI topics as we progress, and this is very asset-centric. I mean, you heard SmartSignal is using AI/ML.
 
We have our CERius product using AI on the emissions side. We also have products that we're extending, and you'll hear more about this, with computer vision: using fixed or mobile cameras to ingest images, turn those images into time series, and leverage that data in APM. So think about a health index, or condition-based, or predictive: we're expanding our ability to capture this data the right way and pull it back into this ecosystem. And then you look at things happening on the AI and gen AI front. We're hearing a lot from our customers on the ability to, as Truman touched on, bring your own models or create your own models. We are hearing a ton about interfacing with generative AI, which I know is a hot topic. But amongst all of this are the issues of data cleanliness, data access, and data security; think about corporate parameters and what your organization might be driving towards. So the way we're thinking about it is: how can we amplify this data conversation you've heard today even further? We're looking at that across our APM portfolio, and we're actively engaged with customers today on the AI and gen AI front.
 
And Brian, I don't know if you want to hit quickly on what Proficy is looking at in terms of data structure plus AI, and why you're excited about this ecosystem that we have. Yeah, absolutely. There's actually a lot of work going on within Proficy. I saw a demo from the Historian engineering team this morning where we're building AI into the product to write SQL queries, bring data together, and build out metrics based on just asking the Historian for your data: say, hey, what was the pressure on this pump, or how does the inlet temperature of sub A relate to sub B, that sort of thing. So we're bringing in a lot of generative AI in terms of being able to interrogate the data in a natural-language sort of way. Going forward, you won't even have to learn a query language; the product itself will take care of that for you. And then we're building on top of that for security and different things: we're ISO 27001 certified.
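The demo itself isn't public, so this is only a generic sketch of the idea: a stand-in llm_complete function (wire it to the LLM of your choice) translates a plain-English question into SQL, which is then executed. The schema and data are a toy sqlite table, not Proficy's store, and every name here is invented.

```python
# Generic sketch of natural-language querying over historian-style data:
# an LLM turns a plain-English question into SQL, which is executed
# against the archive. `llm_complete` is a stand-in returning a canned
# translation; the schema and data are a toy sqlite table.
import sqlite3

def llm_complete(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned translation."""
    return ("SELECT MAX(value) FROM samples "
            "WHERE tag = 'pump01.discharge_pressure_psi'")

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE samples (tag TEXT, ts TEXT, value REAL)")
db.executemany("INSERT INTO samples VALUES (?, ?, ?)", [
    ("pump01.discharge_pressure_psi", "2025-01-01T00:00:00Z", 31.2),
    ("pump01.discharge_pressure_psi", "2025-01-01T00:01:00Z", 33.9),
])

question = "What was the peak pressure on pump 1?"
sql = llm_complete(f"Translate to SQL over samples(tag, ts, value): {question}")
print(sql, "->", db.execute(sql).fetchone()[0])
```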
 
So not only are we bringing the latest toolset to bear, but we're bringing the latest security as well. It's a really compelling combination of product features to help drive your asset management efforts forward. Perfect. Truman, any final thoughts on where we're heading on the expertise side and the predictive lens? Yeah, I think this is a perfect slide, Ryan, that kind of sums it up: you've got data from all sorts of places, and really it's about making sure that it is digestible. It's one of those analysis-paralysis situations. Everybody's use cases are different, and you can see ten-plus items here; having a platform that helps bring all that data together and gives you the tools for the ones that matter to you, or are relevant to you, is key. All right, Andrew, any final thoughts before we wrap it up? I believe we have the questions answered in the chat. So, Andrew, any final thoughts on where we're heading and what's exciting you in the industrial data space? I would say the age-old problem is that there's lots of data out there, and if it's not stored right and accessible, it becomes challenging.
 
Everyone asks, well, how does AI change that? And it doesn't. What changes is that we need to make that data and information accessible to AI. Holistically, we have a really great foundation for starting to aggregate all of this data together, for human consumption and also for AI and ML, and the latest technologies in parsing and analyzing data are really coming together cohesively to offer you a good view into your asset intelligence, in particular around the performance, emissions, reliability, and safety integrity of your systems. You may not need every single one of the things on the slide, but most folks on the call need a good portion of these to run their plants the way they want to run them. All right.
 
Awesome. Well, with that, we are at time. I know we have the questions answered. Our contact information is available through the resource center if there's anything you want to learn more about. For Proficy questions, please reach out to Brian directly; for anything with connectivity, please reach out to Andrew; and for anything around our APM applications, and SmartSignal in particular, Truman is the right person to connect with. I just want to say thank you all for your time. Again, we want to reiterate that although we are Vernova, we have our own Historian offering, and we are working very closely together to bring more advanced data management to everyone in the market. So we're really excited to expand on this. Thank you for tuning in, and we appreciate all the questions.