PegaWorld | 55:48
PegaWorld iNspire 2024: Building a Robust and Resilient Automation CoE at Navy Federal Credit Union
As the largest credit union in the United States, Navy Federal’s ability to adapt to changes in the technological landscape is paramount. Their secret? A dedicated automation CoE has served them well by ensuring their bot development and usage is scalable, resilient, and flexible enough to serve their users’ ever-evolving needs. Join us for a conversation about Navy Federal’s success with attended and unattended RPA, and learn about their strategies for staying current.
(upbeat music) - Hi everybody. Welcome, thank you so much for being here. Welcome to Building a Robust and Resilient Automation CoE with Navy Federal Credit Union. I'm Stephanie Hawkins. I help out with product marketing here at Pega and I am joined by my friends from Navy Federal Credit Union, Michael Royston and Kenneth Hearn. I have to say, I have worked with these two for a while and worked with the team at Navy Federal for several years and several Pega Worlds. And I'm always just consistently impressed by the quality of information they have to share. They are doing amazing things over there, really instituting a lot of best practices and I think they have some really good stories to share today. So I'm excited to have 'em share their learnings with you.
So thanks for being here. So maybe we can start out by just both of you telling us a little bit about yourselves, your work at Navy Federal and kind of your department. - So, Navy Federal Credit Union was established in 1933 with just seven members. We're now serving over 13.6 million members, all part of our military community. So I started my career over 18 years ago as a software developer. And then for the past 10 years, I've been focusing on process automation. I joined Navy Federal a little over five years ago, and in 2019, I joined the team that started our automation center of excellence. - Great, thanks. - I'm Mike Royston.
I've been at Navy Federal 18 years. I've worked all over the company, including in our IT department and also doing robotics for a decent amount of that. - Perfect. And you know, the three of us have spent quite a bit of time kind of talking about all of the different histories with Pega and robotics at Navy Federal. So I think we wanna kind of dive right in and get you guys to the good stuff. So first of all, can we talk a little bit about your very first robotics projects? Can you talk about those? - Right, so we started with an RPA, or unattended bot, that was picking up information from our mortgage application system and putting it into our mortgage loan origination system. Now we have, you know, at least 18 separate RPA and RDA solutions averaging over 500,000 hours a year.
- So those 18 different attended automations, they're deployed across probably around 10,000 different desktops for end users inside the Credit Union, and the unattended ones, they run on anywhere from two or three dedicated VMs up to about 12 dedicated VMs, depending on the solution. - Okay, yep. And sort of right off the bat, you had some kind of interesting lessons learned, right? Can you talk about those? - Yeah, so our very first automation was supposed to be saving 30 minutes per case. And when we actually put it into production and measured it afterwards, it wasn't saving 30 minutes at all and we couldn't understand why. And so when we dove into it, we realized that the users were going back in and double checking the work the bot had done. And so the entire time savings was negated by them not trusting the bot. So, you know, that really made us realize that confidence in the bot is very important for, you know, successful deployments.
- So what did you do at that time to sort of resolve that confidence issue? - It was mostly a communications issue. You know, not all the users necessarily understood what was going on, like why there were bots being deployed. They didn't know how much they were supposed to rely on the bots, you know, were they supposed to trust the information. So really, day one, communicating with the end users was like the key takeaway. - So it sounds like there was sort of a perception problem, where there was a perception about RPA amongst employees versus kind of the reality of what it could do. Was that something you struggled with? - Yeah, I think really the, even the value is something that sometimes people have trouble understanding. And a good example of that was we had an RDA that probably only saved about a minute every time it ran.
And so when people first got it, they were like, oh, this is cool. But nobody was raving about it. The first time we had an outage, everyone was calling like, oh, you know, we need the bot. And then all it took was that one time. And now those users are very appreciative and they understand. Not suggesting you should intentionally cause an outage, but like it took them not having the automation to realize that those little 30 second, one minute time savers every time, like they really add up. - And so another example that we had, we built an attended automation for a team that self-admittedly weren't very tech savvy. And so many of 'em, when they were looking at what we built for them, they didn't really see the value, the benefit of the time savings it could create for them. But it wasn't until they had one team member who kind of championed the tool, they learned it, learned how to use it efficiently, and they were able to double their performance compared to anybody else on the team. And so at that point in time, when the other team members saw one person doubling the work, it inspired them to be able to adopt the technology themselves.
That particular individual set up some kind of training classes to be able to show how they were using the tool, the value of it, and then the rest of the team came on board and were all able to double their productivity. - I love that story 'cause it just, it's so interesting how the adoption and the sort of organizational strategy really matters in the deployment of RPA. So I would love to switch gears, because one thing that we've seen Navy Federal does really, really well is their center of excellence, which they actually call their automation center of excellence. So can you both tell me a little bit about your automation center of excellence? - So when I joined the team in 2019, I had no Pega experience at all, but we started off with a vision to be able to set up an enablement program for citizen development. So in order to do that, we needed to establish guardrails, guidance, design patterns, and then set up training to be able to teach the different delivery teams, the different developers, how to follow through on those types of things. We set up training programs to be able to teach them those design patterns and best practices, and even came up with our own training projects to go along with that. - One of the things we did with that training program that I think was fairly unique, I hadn't heard of other places doing it, but we built an entire fake mock environment internal to our system for the training. So one of the issues you always have, you know, with robotics on the web is, you know, if you're doing it to an external website, that site might change.
If you're doing it with an internal site, that site might change. So for us, we wanted to make sure that we had, you know, a production and non-production, you know, not really production, but a mock production environment and a mock lower region. And we wanted to know that that site was, A, an example that's a real use case for Navy Federal, at least similar to one, and B, that we controlled those environments so we could make sure that, you know, our training documentation and steps were gonna work for years to come and not, you know, be randomly broken because, you know, Google pushes an update or something. - That's really interesting. So when you first started out with robotics, were you just using robotics, or were you using robotics and Pega platform, or how did that kind of work? - So our initial thoughts were that we were gonna be doing mostly robotics to begin with. Even with our first use case, we had an application for our real estate lending department where there was a loan application process and a loan origination process. And our loan originators were having to copy information from one system to the other, and we thought it'd be a good fit to be able to build a robotics solution to help reduce that manual copy and paste. When we looked into it a little bit further, we realized that the loan application system had all the APIs necessary to be able to pull the information out of it.
So we ended up building a case management solution where we executed those APIs to retrieve the data about the loan, transformed it, and then passed it over to unattended robots to be able to do the data entry side of it. With that, we realized that we have Robot Manager to help us be able to orchestrate those robots and help out with that process. It made it much more efficient. - Interesting. - Yeah, that was our first dive into, you know, realizing that robotics alone, you know, didn't seem to be a good solution across the board, that really having Robot Manager be that orchestrator and coming up with, you know, some design patterns for how things call Robot Manager and how the return calls go essentially would lead us to having, you know, more scalable RPA. And really, you know, since then, our company, we really have gone much heavier in platform, but we still have robotics, we still do robotics, and you know, it's important that it's together. I mean, that's one of the key, you know, successes for our CoE is it's not a Pega platform CoE or a robotics CoE, it's the automation center of excellence. So, you know, we want to make sure that we don't have the silos between robotics and platform, and Robot Manager is the place where that ties together. - And that's really unique, because we often see these CoEs being siloed. So just a robotics CoE or just a platform CoE, and the fact that you've combined them seems to be a really strong approach and a strong indicator of your success.
So that's great. So moving on from there, what are some of the more advanced use cases? - So Michael talked about that training project that we set up to be able to teach our citizen developers with. So that came from a real use case. We had our finance department, they were reviewing checks that are being deposited by members, and so they have to go through and disposition those checks, approved for deposit or declined, and pick the various reasons of why they were declined. From there, they'd have to copy information off of the check, go look up a note template out of Excel, and then enter in a note into a secondary system. So we worked with them to build an attended automation that allowed them to merely just disposition the check in the original system, and the attended robot would then copy all that information out, generate the note and enter it into the second system, keeping them from having to do that swivel chair back and forth between the two different applications. - Interesting. And did you run into kind of the same problem that you had with the first one, with the trust issue, in introducing this to workers?
- So the team that we partnered with to implement that, they had a well established process improvement team that had proper communication training and their team members were accustomed to getting these types of improvements and automations delivered to them. So they adopted that one relatively quickly. - Okay. - Another big part, and you know, with it being an attended bot is that, you know, they see the actions happening and you know, that's another thing that's kind of a, a lessons learned we've had over time is it's a lot easier to build trust with an attended, you know, bot and then move it to unattended afterwards. So, you know, you get a lot of advantages to doing that and a lot of times it's easier to do an unattended bot because you know, it's just quicker and easier. You're not deploying it to a bunch of users, but when you do deploy it to a bunch of users, you get kind of free production validation, right? Because you know, you're gonna have users catching when there's errors and reporting it instead of having to have a support person monitoring the unattended or monitoring the logs. And then they get that comfort, they know how the bot works. When we say okay, now we're gonna move that bot to unattended, you don't have to do that step anymore, you don't have that discomfort.
They've seen the bot run, they know it works. So that's kind of one of our, you know, best practice recommendations: if possible, release it as an attended bot, get that trust, move it to unattended. - I love that, that's kind of like a secret hack for whether to pick attended or unattended, because I never would've thought of that as, you know, a deciding factor between the two. Usually you just kind of think of technology reasons. - Yeah, I think, you know, as long as you designed the automation in a good way, moving it from attended to unattended is actually not a large development effort. So it actually works pretty smoothly. - And it seems like it has a big payoff. So talking back about best practices, are there any best practices around sort of designing these things correctly? - So when we established-- - One more slide.
- Yep. - There we go. - So when we set up our different design patterns and stuff, we kind of created a couple different project structures that we use inside of our solutions. So first of all, we have reusable toolbox libraries. So these are common automations that any application might take advantage of. The next level on top of that would be the actual adapter applications. So here's where we do the adapter project inside of robotics, all the different automations that you would need to interact with that application, but no business logic would be stored inside of that. Going up on top of that would be the controller project. The controller project is where we have all the business logic established.
And if you see over on the right hand side, we kind of create the project structure with a named project. So if it's an attended robot, then, we would call it an RDA in that situation, it's the controller project, which has all the business logic in it, interacting with the different adapter projects that might be involved there, and then leveraging those toolbox projects. To be able to convert it from an RDA to an RPA, to be unattended, you take that same controller project, create a new named project, and make whatever changes might be necessary inside of the controller so that it can run unattended. We also make sure that all of our code is checked into GitHub as our repository for the source control, so that when we're deploying packages out there, we actually have a change set that's associated with the package that's deployed. It helps us when we're doing automation playback and things like that as well, to kind of triage the production issues that we have. - That's great. So you know, building on these kinds of different use cases, one use case we talk about often at Pega is using RPA as kind of a stand-in for an API when there are systems that lack connections. Is this something that you've done? - Yeah, I think that's kind of like our, the primary use case for unattended bots specifically is a stand-in for a non-existent API.
And so part of our ACE or CoE governance process is, when we're building an unattended bot, or a bot in general, we wanna also be putting in a project for the long-term solution. So whether it's a vendor, where we need to put in a project with the vendor to make sure that they're gonna build the API functionality we need, or if it's an internal API and we need to submit that to our internal API team, we go ahead and submit that entry when we're starting to build the robot. So that way, when that API gets deployed, we can swap it out. And so that's also why it's important to follow some standards with that too. So, you know, on our Robot Manager side, when we have a case for each, you know, API replacement, we have a fairly flat structure. So when you're creating the case from your external system, it's very similar to calling the future API: your JSON body has your inputs, you're gonna get back your outputs, and it's almost just like calling any other API. The other thing that we added in there is, because the bots do take some time to operate, you're not gonna get a millisecond response back. We don't want the source systems, in a lot of cases our Pega platform app, we don't want it pinging over and over to say, hey, are you done yet, are you done yet? So we utilize a callback. So whenever something creates a case in our RPA instance, it can optionally send a callback URL, and then when the case gets processed by the bot, we call back and say, hey, it's done, you know, here's the status.
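As a rough sketch of that case-plus-callback pattern, the flow looks something like the following. Everything here, the names, the endpoint, the in-memory queue, is a hypothetical plain-Python illustration, not Pega Robot Manager's actual API:

```python
# Hypothetical, in-memory illustration of RPA standing in for a missing API:
# the caller creates a case whose JSON body carries the inputs, and the bot
# notifies a callback URL when done instead of making the caller poll.
import json

PENDING_CASES = []   # cases queued for the unattended bot
CALLBACK_LOG = []    # stands in for HTTP POSTs back to the source system

def create_case(inputs, callback_url=None):
    """What the source system does: almost like calling the future API."""
    case = {
        "caseId": f"C-{len(PENDING_CASES) + 1}",
        "inputs": inputs,              # same shape the real API would take
        "callbackUrl": callback_url,   # optional: where to report completion
        "status": "Pending",
    }
    PENDING_CASES.append(case)
    return case

def bot_worker():
    """What the unattended bot does: process the case, then call back."""
    while PENDING_CASES:
        case = PENDING_CASES.pop(0)
        case["outputs"] = {"confirmation": f"entered-{case['inputs']['loanId']}"}
        case["status"] = "Resolved"
        if case["callbackUrl"]:
            # In reality this would be an HTTP POST to the callback URL.
            CALLBACK_LOG.append((case["callbackUrl"], json.dumps(case)))

case = create_case({"loanId": "12345"}, callback_url="https://example.test/done")
bot_worker()
```

When the real API ships, only the inside of `create_case` would need to change to call it directly, which is the swap-out the speakers describe.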
You know, so that really helps fill that stopgap for the non-existent API. And you know, when it comes to cost, you know, you can develop and deploy a bot in a couple of weeks that will hold you over for the three, six, nine, 12 months, you know, that it takes to do a project with the vendor or to build this API. So you know, the speed to market for using RPA as a stopgap for APIs, I think it pays for itself very easily. - Interesting. And so if that's kind of the classic use case for unattended, what would you both say is the equivalent for attended, or as you call it, RDA? - So we try to think of it as like a virtual assistant. So a lot of times when the users have processes that they're running on their desktop, it helps them to do interweaving between the actual knowledge work they're doing and the robot helping 'em out. We've actually even talked with some of our different pods on branding as well. One team in particular named theirs and kind of made it popular, really similar to what you'd see with other virtual assistants outside of their desktops. - Did they give it a name?
- Yes, Ralph was the one that I'm talking about. - Okay, good name. - So another thing I think that, you know, maybe people who are looking at the slides might be saying, wait, it says there even with APIs, there's times unattended or attended is preferred, and be like, well, why would that be? And I agree, you know, it's kind of a shock, but in reality, a lot of enterprise platforms and SaaS solutions that you buy, out of the box, they're not gonna have extremely specific access controls to their API systems. Oftentimes when you request a service account from whichever platform you're working with, in order to call the APIs, you're almost an admin in the system if they give that to you. You know, if you wanna have very granular access controls to the different endpoints of the API, that has to be built and configured, and a lot of times, that's not there. So sometimes there's an advantage to say, look, the user's already logged into the application, they already have their role-based access in that application. An attended bot's just gonna do what they were about to do anyway. So yeah, it'd be great to take that process out of their hands and do an API call, but you've also just taken on some factor of risk, you've now opened up the door that says, hey, we've created this system integration that possibly is overprivileged. And again, not that that should be the long-term solution, you should always be going towards the long-term solution.
But again, just like waiting for an API to be developed, sometimes you have to wait for an API to be properly, you know, provisioned within the system and within the access control of that platform before you expose it to, you know, a user being able to access that capability. - Makes sense. So these are great use cases. I would love to kind of circle back to the CoE conversation, 'cause I think you have some really interesting information to share there. So one thing, you know, I've been curious about is how do you decide which use cases to build something for? - So that comes back to our automation center of excellence framework. So, you know, when we started with our framework, it was to be technology agnostic. You know, when we first started, we hadn't even decided, you know, where we were gonna go with Pega yet. It was just the idea, it's idea through production, right?
That's the framework, it's the entire thing. You know, which technology you build it in influences the design, influences the development, you know, the code reviews, the parts right in the middle, but outside of it, you know, it's this common flow. And so we encourage people not to come to us and say, I want to build a bot for this, or I want to build a platform app for this, or anything like that. You should come into the intake system with a problem, you know, this takes too long, we have an error rate, you know, this is too slow. Come in with a problem and then let, you know, the experts in the technologies work with the experts from the business and come up with a solution that's gonna be the right technology, like the right tools. So that's like a key part is, you know, don't come in with an expectation of a solution, bring your problem and let's work, let's figure out what we can do to build a solution that's gonna help. - So when we originally set up our CoE, we were focusing on the citizen development side of it. As our usage for Pega grew within Navy Federal, our professional development teams started utilizing it as well. So we've used the same governance, same guardrails, no matter who's the delivery agent.
So our teams continue to evolve. We've actually combined our larger delivery team inside of our IT department and our governance team. And so we provide that same service for our internal delivery, the different deliveries within the professional vertical product areas, and our citizen developers delivering directly within the business area. - It's also helpful that we have our repository of existing code for robotics that we mentioned, that helps with doing the decisioning on those use cases. If someone comes in and there's a possibility to do it a couple different ways, but we already have an adapter project that opens a certain webpage, logs in, navigates around, you know, you've got this jumpstart on the level of effort it's gonna take to actually get it built. So that's like another factor. And of course, one of the other services, just, you know, talking about the CoE, that we offer is code reviews. And so that's another thing that we work on with the teams. We do unofficial code reviews throughout development, whenever the developer wants us to look at the code, and we get very in depth on that.
And then we have our larger CoE reviews with a larger audience across the company before anything goes to production. That's another one of those that's less about building trust with the end users and more about building trust with, you know, management, the change management program. You know, they can have confidence from the higher management perspective that, you know, we're following our procedures and processes and that, you know, they're getting some security out of that. - Okay, so lots of good learnings there. So problem, not solution, start with the problem, not the solution. I feel like that sounds like something Yoda would say. And then the code repository, code review centrally, and you know, there was something you touched on, which was governance, and I would love to kind of talk a little bit more about that. So that's something you have done really well at Navy Federal. Can you expand on that a bit?
- Yeah, I think, you know, the governance and reining in of the shadow IT, you know, is definitely one of the things that we got buy-in from the start to do. And we've seen a lot of different approaches over many years of experience with this. And you know, coming from that background myself, I know, you know, people will find a way to get things done. So anytime that management wants to take this kind of heavy handed approach, we're gonna shut this down, we're gonna shut down shadow IT, you know, we're gonna take away their tools, like, you know, it's like pulling weeds. You're not solving the problem, you're just, you know, temporarily patching it. So our goal from day one with the CoE was we want to provide tools, a framework, and you know, a place for people to come to get the work done in an approved setting. So it's, you know, it's the carrot, not the stick. It's, you know, here is a place to come do your work, get credit for it, not have to be in the shadows.
I mean, that's another big thing in kind of shadow IT world. A lot of times your developers are, they're like an analyst or you know, some other random title out in the business because there's no officially sanctioned development outside of the IT department. So, you know, that was another place where we try to become champions in that space and push to have a business developer job title out in the business unit so that when you're a citizen developer, you can be officially recognized, people can see what you've done, and hopefully progress a career in development if that's where you want to go. Versus you know, being hidden in the shadows with a title that's not what you really do. You know, a lot of that was bringing things into the light in a positive way, not trying to shame people for what they're doing. - I love that, it's really smart, and I love that business developer job title. It's a unique approach to dealing with that problem of people just kind of going rogue. So, you know, one question I have for you both is the question of the value of the technology. So that's something people are always curious, like how much value am I gonna get out of this?
How am I gonna measure the value or the ROI from this? So I know you've had some interesting, an interesting journey, let's say, with how you've measured the value of your robotics. I would love to have you tell that story. - Yeah, I mean, at a high level, talking about our whole framework and intake, from day one at intake, when we're doing the scoring, you know, we're considering hours saved, member satisfaction, employee satisfaction, reduction of errors, increase in revenue, decrease in costs. So, you know, those are like day one things that we want to be looking at. But when you try to put numbers on a dashboard, usually ROI, hours saved, is what everyone cares about. And so, you know, in our experience though, you know, having empathy is a big part of what our CoE is about, and empathy for your users. And so, you know, sometimes you have to remember that, you know, saving every person in a contact center 30 seconds is gonna give you great ROI numbers, but you know, is it changing their life? Probably not.
At the same time, you've got a team of a few people doing very high value things that have to be done at the end of the month or beginning of the month. You know, you could make a much larger impact on people, and it's not always about the dollars saved for some of those examples. - So on that end of the month processing, we actually had a team, they built some automations, it only saved a couple hours and it only ran once a month. But for them, that process had to be executed on that last day of the month, and with the number of resources they had within their team, it was more work than they could actually accomplish. So looking at the way we analyze some of those hours saved, you can kind of see on the chart, you can show where, you know, the process that's only saving a couple seconds being run by hundreds of users, at an initial look, looks way more powerful than the one that's only saving a couple hours a month. But when you start comparing the hours saved relative to the size of the team, the resources that are available to actually run those processes, it gives you another perspective to look at how much value that automation is actually creating. - I love that, just, you know, calling out, again, that idea of considering empathy when you're considering value, because that's huge, and it's so true that if you take away the one task that takes a little longer, but people actually like doing, you haven't, you know, really delivered value truly, because you kind of made their lives worse. So that's really interesting.
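The comparison described above, raw hours saved versus hours saved relative to team capacity, can be sketched with made-up numbers. Every figure here is purely illustrative, not Navy Federal's actual data:

```python
# Illustrative only: raw hours saved can be misleading; hours saved as a
# share of the team's available capacity gives another perspective on value.

HOURS_PER_PERSON_MONTH = 160  # rough working hours in a month (assumption)

def share_of_capacity(hours_saved_per_month, team_size):
    """Hours saved expressed as a percentage of the team's monthly capacity."""
    return 100.0 * hours_saved_per_month / (team_size * HOURS_PER_PERSON_MONTH)

# A 30-second saver run constantly by hundreds of contact-center users...
big_team = share_of_capacity(hours_saved_per_month=400, team_size=800)
# ...versus a couple of hours saved once a month for a three-person team.
small_team = share_of_capacity(hours_saved_per_month=2, team_size=3)

print(f"contact center: {big_team:.2f}% of capacity")
print(f"small end-of-month team: {small_team:.2f}% of capacity")
# Relative to team capacity, the small team's automation is the bigger win here.
```

With these invented inputs, 400 hours across an 800-person team is a smaller slice of capacity than 2 hours for a 3-person team, which is exactly the reframing Kenneth describes.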
So you both just have, obviously, so much experience with this and so much to share. I'm curious what your take is on, what are some things companies should watch out for? So what are some things that can go wrong, for example, if you don't have safeguards in place? - So one particular thing that comes to mind, there was a team, they had an automation that was running like large batch processes, and the developer went to do an improvement on it, thought it to be fairly simple. He went to take a single threaded process and made it multi-threaded, which made the process run significantly faster. But without any sort of review or governance over that, he didn't realize the downstream effects of it, and it effectively created a denial of service for anyone else trying to use that same service that the automation was hitting. So having these types of reviews and governance allows us to think about the other stakeholders inside the process, provide communication, and realize the impact that those things might cause. - And we're laughing about that now, but were you laughing about it at the time? - No, I don't think anyone laughed. - Yeah.
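As a hedged sketch of one way a review might catch that failure mode (this is generic Python, not the actual system involved), the denial-of-service risk of naive multi-threading can be avoided by bounding concurrency with a semaphore, so the shared downstream service never sees more than an agreed number of simultaneous calls:

```python
# Hedged sketch: bounding a bot's concurrency with a semaphore so a shared
# downstream service never sees more than a fixed number of simultaneous
# calls. The service and the numbers here are hypothetical.
import threading
import time

MAX_CONCURRENT_CALLS = 3               # limit agreed with the service owner
limiter = threading.Semaphore(MAX_CONCURRENT_CALLS)
lock = threading.Lock()
in_flight = 0
peak = 0                               # highest concurrency actually observed

def call_downstream(item):
    """Stand-in for one call against the shared service."""
    global in_flight, peak
    with limiter:                      # blocks once 3 calls are in flight
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        time.sleep(0.005)              # simulate the service doing work
        with lock:
            in_flight -= 1

threads = [threading.Thread(target=call_downstream, args=(i,)) for i in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("peak concurrent calls:", peak)  # stays at or below MAX_CONCURRENT_CALLS
```

The work still gets the speedup of running in parallel, but the blast radius on the shared service is capped, which is the kind of constraint a governance review would surface.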
- You know, one of the other kind of downfalls when you don't have the governance, and of course, one of the risks of like the shadow IT, is people build stuff and they leave the company. And I mean, there's nothing worse than finding out that a process broke and the person who built it doesn't work here anymore. And actually, we don't even know where the code is, because his computer's gone and he didn't check it into source control. Like, you know, those are things that happen a lot. And so, you know, having a governance program means that, you know, we have these standards for the coding. Those standards mean that when a new employee comes in, as long as they know Pega in this case, they can look at it and see what it does. You know, you don't have to dive through, you know, a bunch of spaghetti trying to figure out what's happening. Everyone has to follow the standards, and that means that anyone can understand what's in the code. Nothing, you know, is gonna be that far outside the box. The other thing that we put in our initial framework, again, before we even knew we were gonna use Pega, is business continuity plans.
You know, whenever you're deploying a bot, especially a large time saving bot, the business needs to consider what happens if it does go down. You know, what happens if it's just completely unavailable? What happens if, you know, a Windows patch comes out that breaks it, and it might be a couple days or a week before we can get it fixed? They always have to remember that, you know, and that's something that happened a lot of times with old shadow IT stuff. There's just a script, and once it starts running, everyone kind of forgets how to even do the process. So having a business continuity plan is really important, and making sure that the business is prepared for, you know, what happens if this goes away. - Right, that's really smart, that's very smart. So as you move forward with your robotics projects, and you've obviously already done a lot, what are you excited about kinda digging into? - I think the thing I'm most excited about is actually moving to the new version, the version 22, version 23.
We've worked with the Pega engineering team a lot, giving feedback on what we've done in our testing, things that we've liked and didn't like, and we're really excited to get into the new version and take advantage of the new features. Usually when there are little version changes, it's like, oh, one new feature, two new features. No, this is a whole new studio, a whole new experience. It's very exciting to have something new to work on. - It is, we're excited to have you guys on it. - So one of the things I'm pretty excited about and proud of is some of the automations that we built. The one we talked about, the loan export process, was built over five years ago and it's still running and providing value today. We have upgraded it through different versions of the runtime and done some enhancements on it as well, but with relatively little support, it's still running and providing value for the company. - And for something like robotics, which is often just thought of as a stopgap, to have something with lasting value like that is really cool.
- Oh yes. - Yeah. All right, so, lightning round. What is your number one top piece of advice for anyone in the audience who is considering starting to use robotics at their company? - So I'd say a really important thing is to be in contact with Pega. When you have an issue and you go to the forum and ask a question, and someone answers it, now that's out there for other people. I don't know how many times I've solved my problem because I went to the forum and there was an answer there. Also submit tickets when you have issues. Again, it sounds like I'm plugging the new version and I'm just an end user, but we communicated a lot with Pega Engineering about what we wanted to see, and they really built those features into the new product. So if you're not talking to Pega and sharing your issues and successes and how you're doing things, then you're not actively participating in making the product better, which benefits all of us.
You feel so proud to see a feature in the system that you suggested; it's really cool and something to take advantage of. - I love that. - So for me, setting up some sort of CoE or governance whenever you bring in a new technology was critical. Having those design standards and coding standards that we followed has made us able to upgrade through different versions of the runtime quickly and easily. If you just let people start building various things, you end up with so much more work later on trying to unify that code and get it reverted to following those standards, and you have problems trying to get through version changes. - Awesome, those are both great learnings. Well, I can turn it over to the audience. Do any of you have any questions for Michael and Kenneth about everything they've been working on at Navy Federal? - Thank you for presenting.
How big is your team and what type of roles are on your team? - So our team now is called Digital Process Automation. We have about 80 individuals inside of it, broken up into five different teams. We have one for the governance and architecture side. We have another team for reusable components. Then we have our support teams, covering both application support and platform support, and then our delivery teams as well. Inside of that we'll have developers, engineers, architects, scrum masters, analysts, and testers. - And then of course from the citizen development side, we have project managers, analysts, and developers in pods out in each of the different business units.
So there's the centralized team, and within the centralized team, the governance team, but then there's also development coming from all those different other avenues as well. - Actually, feel free to come back and talk about your citizen development experience, because that could be a presentation in and of itself; there's a lot to that. - Sure. In general, that was the initial plan for the automation framework that became the automation center of excellence. There are so many aspects to professional code development that can be either eliminated or at least very condensed using guardrails in Pega, especially Pega Platform with the whole guardrail system. With Pega Robotics, limiting developers to not using C# scripts, for example, means that if you're only able to use the out-of-the-box functionality, you're already limiting a lot of the security concerns you have when you have to do in-depth code reviews. So when a citizen developer comes in and uses a low-code tool to build an application or an automation, you've already eliminated a lot of the complexity of the software development lifecycle that a professional developer would be handling in .NET or Java. So that really was where we said the idea for citizen development works, as long as we have a centralized body that has these checkpoints. For us, it's the intake and solution design review, where we approve their solution design before they start building, and the official code reviews before anything goes to a production environment. So we have those control gates in place. That really allows us to say it's okay that a citizen developer built this, because between the guardrails of the system and the checks and balances we have in place, we are confident these solutions are okay to go into production.
- Having experience doing shadow development prior, we also wanted to make sure that our framework was not only effective but efficient as well. You can build something on your own without any governance really quickly and get it out there, but there's a lot of risk that goes along with that. So that's why you need to have those guardrails and the governance in place, but you also wanna make sure it's not slowing down the overall process. - Yeah, I think a lot of the initial negativity, or people being scared to move to citizen development, is that they feel like there's this big risk, or they feel like they're not professional developers so they can't do it. But I think Pega, the PegaWorld keynote, those are examples. There's so much technology now going into App Studio and the low-code, GenAI type stuff that being a professional developer is not a barrier to building an application. I think anyone who uses Pega now knows that. So that's kind of out the door, saying that you need professional development experience to build an app; that's just not true.
So really it's more about how safe is it? What risk are we putting ourselves at by letting citizen developers build and deploy? And for that, you're only as safe as the governance you have in place. That's the whole point of our program: to make sure we have that governance in place so that citizen-developed applications are not going to take down production systems or cause other outages to any of our systems. - Thanks for the question, that was a great one. Maybe that's our session for next year. - [Attendee] Thanks for the presentation, it was very good. My name's (indistinct) and I have a question on testing. Could you shed some light on the testing that you do for your bots, especially automated testing, and what are the recommendations on CoE?
- So on the robotics side, we would set up the adapter projects that we talked about earlier with that design structure. For the adapter projects, we'd actually isolate out the automations that could be exposed to the controller projects. We use a naming prefix on 'em; we call them basically robotic APIs. From there, for every one of the APIs you would build inside your adapter project, you'd actually have to have a corresponding unit test, so that if another developer were to come in and make changes to that adapter project, we'd be able to regression test all the changes inside of it. When it comes to the end-to-end solutions, especially the attended ones, a lot of the testing was focused around UAT testing within the business area. We usually have the lower environments for all the applications we're building the automations against, so we know the adapter is doing what it's supposed to inside the application. But the business logic that you're implementing inside the controller would usually require the sign-off of the business areas, to make sure that it was doing what they intended it to do. - And that design that I talked about, having essentially the functions set up as either API or non-API, meaning they're exposed to the other adapters, that's another thing that they added in the new version. In the new version, you can have interfaces, which essentially let you make your functions public or private, which is the same concept we were doing using the naming conventions before.
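The pattern described here is specific to Pega Robotics, where automations are built visually rather than written as code, but the underlying idea, exposing a stable set of entry points via a naming prefix ("robotic APIs") and pairing each with a regression test, can be sketched in ordinary code. Everything below (class, method, and prefix names) is a hypothetical analogy, not Navy Federal's or Pega's actual implementation:

```python
import unittest

# Convention marking which automations are "exposed" to controller
# projects; anything else is an internal helper (hypothetical prefix).
API_PREFIX = "api_"

class AccountAdapter:
    """Stands in for an adapter project wrapping one target application."""

    def api_get_balance(self, account_id: str) -> float:
        # In a real adapter this would drive the target application's UI.
        return {"12345": 250.0}.get(account_id, 0.0)

    def _scrape_field(self, name: str) -> str:
        # Leading underscore: internal helper, not exposed to controllers.
        return f"<{name}>"

def exposed_apis(adapter) -> list[str]:
    """List the 'public' robotic APIs by naming convention alone."""
    return [n for n in dir(adapter) if n.startswith(API_PREFIX)]

class AccountAdapterTests(unittest.TestCase):
    # One regression test per exposed API, so later changes to the
    # adapter can be verified before controllers consume them.
    def test_api_get_balance(self):
        self.assertEqual(AccountAdapter().api_get_balance("12345"), 250.0)
        self.assertEqual(AccountAdapter().api_get_balance("00000"), 0.0)
```

The interfaces feature mentioned next makes this visibility explicit in the tool itself, rather than relying on a prefix convention.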
- And we just got a demo of it today at the innovation hub. I haven't gotten to test it myself yet, but I know they also have a whole new unit testing feature inside of the tool, where all of your exposed functions are listed on the side when you have your unit tests set up; you just hit run unit tests, and it comes down the side and executes all of 'em. We were doing it with a separate unit testing project at the adapter level previously, but it seems like in the new version there might be even more of those tools built into the product, which again, we're excited to test out. - [Attendee] Thank you. - I, oh, did I cut somebody off? Oh, I'm sorry. (laughs) You guys are popular today. I noticed you have 15,000 WFI seats. We have a much smaller number, but I'm curious how you're using that to either prioritize or learn about where your next opportunity is.
- So we actually have WFI deployed across almost every department, and some of 'em fully deployed. Our branch, call center, and real estate lending areas all have WFI deployed. It's probably a whole other session to go into that, and the program manager and product owner for that area could speak to it better than I can. But they do a lot of the research on what they can do, looking for opportunities with the opportunity finder, and many of the robotic automations that we've built have come from their findings out of WFI. - And also, of course, utilize it after the bot's deployed to see that change. You should be able to go back in using WFI, create a new bookmarked process for an attended or unattended automation, and actually see those start and end points that were taking X amount of time now take Y amount of time. And again, like we talked about, if those don't match up with the business case that you came in with for the solution, then now you have something to go look at. Maybe people are double-checking the bot, or one of those types of things. It's a great tool for identifying opportunities, but also for measuring them after the fact. - A fun utility we created for the robotics side: if you're familiar with WFI, it has a task API that lets you post whenever an event started and ended for a user.
So we actually built a workflow within robotics where you could call that task API from robotics. Some of our teams will actually use that so they can see within the WFI results exactly when the robot starts and stops running the different automations for the end users; it shows up as a workflow. It's a pretty neat one. (laughs) - I have three questions now. Do you guys build any ROI calculators? Like, you have projects in the pipeline, right? So do you build any ROI calculator where, if I give inputs, you could showcase it? - Yes, for the majority of our automations, we do look for that before and after. So using WFI or other methods, we figure out how long it's taking to do the process completely manually end to end, and then afterwards, when we automate it, how long it's taking to run, or if it's fully automated, the fact that we save all that time. And we actually have a dashboard we call the command center, which reports out every single automation that's been built, whether it's platform or robotics, how many times it's been executed, and how often it runs.
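The pattern just described, a bot posting its own start and end events to the Workforce Intelligence task API so its runs show up like any other workflow, can be sketched roughly as below. The endpoint URL, payload field names, and helper functions are all hypothetical stand-ins; the real WFI task API's contract is not specified here:

```python
import json
from datetime import datetime, timezone
from typing import Optional

# Hypothetical endpoint; the actual WFI task API URL and schema differ.
WFI_TASK_URL = "https://wfi.example.com/api/tasks"

def task_event(user: str, automation: str, event: str,
               when: Optional[datetime] = None) -> dict:
    """Build a start/end event payload for one automation run."""
    if event not in ("start", "end"):
        raise ValueError(f"unknown event type: {event}")
    when = when or datetime.now(timezone.utc)
    return {
        "user": user,
        "task": automation,
        "event": event,
        "timestamp": when.isoformat(),
    }

def post_event(payload: dict, send=print) -> None:
    """Send the event; `send` would be an HTTP POST in a real deployment."""
    send(json.dumps(payload))

# A bot would bracket its work with the two events, so WFI records
# exactly when the automation started and stopped for that user:
#   post_event(task_event("jdoe", "loan-export", "start"))
#   ... run the automation ...
#   post_event(task_event("jdoe", "loan-export", "end"))
```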
So that's where we get that number at the beginning, showing over 500,000 hours saved through robotics alone. We also do our scoring at the beginning, where we're looking for suitability but also the value, an estimate of how much time is going to be saved. We've started to extend it beyond just the time savings, looking at things like risk factors, exception rates, satisfaction of the employee, satisfaction of our members, and using that to determine overall value. Navy Federal doesn't currently have a standard for how we'll do that, but we're working on creating one with some of the tools that we're putting in place. - Okay, prior to building that, right? - Sorry, yeah, I can't hear you through the mic. - Oh, okay, okay. So there are two ways we could get the ROI, right? Once we build the application, we could get the numbers ourselves; that is one thing. But what about prior to building it, right?
Think of it like, for management, we have to showcase how much ROI you could save before going to build. - Yeah. Correct, there are the assessments that we do upfront to determine whether or not we should move forward with the project, based on how much potential ROI savings there are, and then the actual validation of that. Especially when we're working with Robot Manager for the unattended robots, you have a case that's created every single time it's executed, and then the resolution status of that case. So we can see the actual time savings by number of executions and success and failure rates, or other outcomes from it. - Okay, okay. And the next question is bot security. So Windows login security, right? The other apps will have a normal user login, but bots need to automatically log in, right?
So do you have any security implemented around that? - Yeah. So one of the concepts that we worked on with our security team, our Active Directory team, and our identity access management teams from the start was that this is always gonna be one of those tricky areas, where there's user accounts and there's service accounts. - Yep. - User accounts are users, service accounts are service accounts, and robotic accounts are somewhat of a hybrid, right? They need to have interactive login to log onto the machine. That automatically flags them, red flag, as a service account: why is your service account logging in? And of course, they're not a true user; you shouldn't set them up as true users. - Yeah.
- And so what we did initially was we had the concept of a robotic employee. The idea was to put them in their own OU in Active Directory so that we could have policies that applied to that. Since then, we had a bit of a shift, where they want to classify them as functional accounts so that they can include not just our Pega robotic employees, but also, say, our SQA testing accounts and our performance testing accounts, and categorize all of those that are a hybrid of a user and service account so that they can have their own set of rules and security on top of them. Using that, we have AD-based policies that apply to them, things like the secure attention sequence exception that's needed so the RPA service can grab control of the screen and log in. That's an example of something neither a user nor a service account needs; only we need that. - Understood. - In addition to that, we have dedicated VMs and the AD users that go along with the corresponding VMs, and that unattended robot can only run a single application. So going back to our design structure of the unattended bots using case management to orchestrate: if you had a process that needed to touch two different applications, those would be two different unattended robots.
One to, say, pull information from the first application, and a second robot would get assigned another case to actually do the write to the second application. So we know that that VM, that AD user, will only ever access one application. - Got it, got it. One final question. Unattended bots are always easy to implement, but attended bots work hand in hand with the user, right? So did you get any chance to implement attended bot use cases? - Yeah, we have. I think there's 12 individual ones in production, some of those deployed out to entire departments. Like our branch one that was mentioned, Ralph, I think there's 5,000 people that have that automation on their desktop. And just to elaborate a little, too, we have a bunch of different types.
Like there's some, like Kenneth mentioned, where it's triggered by you just going on the page doing your normal work. When you click a button, we essentially intercept that button and go do something. We also have ones where it's like, I'm sitting down to do this process, so you go down to your tray and launch it; you've got your own custom UI and you see the bot working. And then we have ones that use Pega Platform as the UI. So you open up Pega Platform, you put in the account number that you're working, you hit go, and Pega calls all the APIs, pulls all the data in, and then the RDA takes over on your machine and fills out the forms for you. Some people think of RDA, I even used to, as a macro, because that's what we used to do before we had robotics: everything was a macro. You have an executable on your machine that you launch, or you have a button in your Excel or a button on your mainframe emulator, and you launch the macro. With robotics, I think we shouldn't differentiate there, and we should realize it can do so much more than just being a click-a-button macro. It can even be picking up things in the background without you actively choosing to do it.
So there's a lot of opportunities, I think, to deploy RDA that could be taken advantage of. - [Attendee] WFI, (indistinct). - Sorry. - WFI (indistinct). - Okay, WFI, because WFI was based off of the same runtime as Pega Robotics, and you were able to kind of track the activities of users using it. (indistinct) - I know we're at time, but specifically around attended automations, did you have any challenges around support of the automations? For example, educating users on when there's an issue with the automation on their machine versus an actual issue with their machine, and how you handled that? - Yeah, well, bots never have any errors; people make the mistakes. Yes, we've had that.
So as part of our training, with the different pods that we've created, especially the citizen development pods, part of what they have to create is knowledge base articles for the automations they've built. This not only gives the user a place to go and look for steps, but gives our service desk teams a place to go to. So whenever someone calls into our service desk, they can look up that knowledge base article; it gives them step-by-steps on how to grant access, known mistakes the end user might be making and how to correct them, how to get their errors resolved, and also what team to escalate to. A lot of times, for the citizen development teams, it would get escalated back to their developers to help triage, whereas for the things created by the professional development teams, we'd have our level two and level three support teams triaging those. So a lot of it is around making sure we had everything documented so that our support teams could help navigate 'em to where they need to be. - At the end of the day, though, if you have a bot that goes to any website and that website is down, you're gonna get people saying Pega is down, the bot's broke. You know what I mean?
It's like my mom calling every video game a Nintendo. You can't train 'em not to say that; it's just built in. - What about less binary cases than it being up or down, like performance? Like they feel that it's running slower than usual, or something like that. How are you handling that? - So I think one of the things that we do for a lot of the RDAs that helps give them a realistic perception of whether it's performing is adding little UI elements that give them a feel for what's going on. One that's not specifically performance: in the past, there have been times when the extension got corrupted, especially on VDIs. It took us a while to get to where that would stop happening. And so we said we should add a visual indicator on the webpage that says, hey, Pega's hooked in and looking, instead of having them look at the extension. Having that visual indicator means I see the Pega bot there, I'm good to execute.
And then when you're executing something, and let's say the bot is actually creating a platform case, we have a small little UI element that comes up and spins, and it actually shows the seconds. It doesn't just spin; it actually says one second, two seconds. So instead of someone just being like, the bot feels slower today, they can say, oh, creating the case was taking three seconds and it usually takes one second. Just putting a number on it lets someone say it's taking three seconds, instead of saying I feel like it's slower. Stuff like that. I think the more transparency we have with the users, and the more information they have, the less likely you are to get false positives, one-off complaints. - Mike brought up one of my favorite things to do with attended robotics: modify webpage. (laughs) You can add in additional images, buttons, dropdowns, text boxes-- - JavaScript injection. - JavaScript. It's a lot of fun, and it gives the user that visual feel that the automation is actually running. A lot of times they're toggling back and forth between two applications just to see one additional field.
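The "put a number on it" idea, showing elapsed seconds against what a step usually takes so users can report "three seconds instead of one" rather than "it feels slow", can be sketched as below. In the actual attended bots this is an on-page UI element injected into the application; the class names, the 2x-baseline threshold, and the fake clock here are all hypothetical illustration:

```python
class FakeClock:
    """Stand-in clock for the example; a real bot would use time.monotonic."""
    def __init__(self) -> None:
        self.now = 0.0
    def __call__(self) -> float:
        return self.now

class StepTimer:
    """Tracks elapsed time for one bot step against a typical baseline."""
    def __init__(self, baseline_seconds: float, clock) -> None:
        self.baseline = baseline_seconds
        self.clock = clock          # injectable so the example is testable
        self.started_at = 0.0

    def start(self) -> None:
        self.started_at = self.clock()

    def elapsed(self) -> float:
        return self.clock() - self.started_at

    def status(self) -> str:
        """What the on-screen indicator would show for this step."""
        secs = self.elapsed()
        flag = " (slower than usual)" if secs > 2 * self.baseline else ""
        return f"Creating case... {secs:.0f}s{flag}"

# Example: a step that usually takes about one second.
clock = FakeClock()
timer = StepTimer(baseline_seconds=1.0, clock=clock)
timer.start()
```

The point is the concrete number: a user reading this indicator can report an exact duration instead of a vague impression.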
You append that to the original web application. And updating the title bar on things, so that they know when they start that application that it's running: one of the common mistakes is the runtime got shut off on their machine, and they're going through their process thinking this robot's just gonna do what it's supposed to do, and they had no visual indication that the runtime was never there, that the robot was never monitoring for anything. - [Attendee] You could also say, I feel you're wrong. (laughs) - I guess we're going for a long time, we're well over time, but I'm gonna go ahead with the question. So two questions. First one, just out of curiosity, how many of your bots are in customer service versus finance, or the rest of the departments? - I'd say there's probably a heavier leaning towards our back offices. So our real estate lending origination side, multiple ones in lending. Overall, a lot of our Pega stuff goes towards lending; those are some of the primary use cases that we have. The bots that are more for the larger group-facing areas, like our branch and stuff, are really like that virtual assistant. They help you with a couple of things, especially things that maybe you don't do all the time.
They help you, if you're familiar with start my day, with pulling up a couple applications and sorting 'em, sizing 'em. - Yeah, our branch citizen development pod is probably one of the oldest and most successful pods out there, and they're doing attended robotics for our branches. So that's probably the majority of our customer service facing-- - Robotics, got it. Then maybe a last question from my side. I was in a different session, and the gentleman who had chosen the RPA technology there was sharing that it was really important for them to have RPA and BPM in one place, and their CoE kind of spanned across both. Was that an important factor for you? Was it important for you to also think about BPM as part of your RPA decision making, and how do you use it today? - Yeah, I think that was one of the topics we talked about: we don't have two separate CoEs, it's just one. You do need to have the technology expertise, 'cause platform and robotics are different, but taking one step back, it's all about automating a solution and following best practices and design patterns.
So I think it is crucial to not have them separate in silos, because the integration between the two is why we chose to go with Pega for robotics, because of that integration with the BPM side. - We don't find it as often where it's just robotics or just platform. We find many cases where it's a hybrid between the two, and having that one CoE covering both allows you to look at the problem and determine the best solution to create for it. - Great, thank you. (upbeat music)