Webinar

Better Treasury Outcomes Through Data Visibility


Gaining better treasury outcomes is all about data. With many treasuries still relying on legacy technologies to bridge the visibility gap across a multitude of banking accounts, companies struggle to gain a complete picture of their cash flow and position.

In this 30-minute workshop, join Trovata’s CTO, Joseph Drambarean, as he shares how you can gain valuable insights into your cash and banking information using Trovata’s automated reporting and forecasting capabilities. By bridging the gap between your bank accounts and cash management platform with Trovata’s open banking APIs, you too can uncover mission-critical insights that can propel your business forward.

Mike Hewitt: Hello, everyone! And welcome to this Trovata webinar “Better Treasury Outcomes through Data Visibility”. I’m Mike Hewitt from Treasury Dragons, and we’re delighted to partner with Trovata on today’s topic. It’s a really important one. So many treasuries have data that’s trapped, imprisoned in various silos. And when we can release that data and make it available for analysis, do more with it, it really improves the performance of treasury. So we really are gonna look at how we can improve treasury performance. And I’m delighted to have with me Joseph Drambarean, JD, from Trovata. And in a couple of moments, I’m gonna ask JD to walk us through how we can analyze this data better, of course, using the Trovata platform. But these lessons about analyzing and using data are useful for all of us.

Very quickly, a quick explanation of the platform. On the right-hand side of your screen, you can see some tabs. Do introduce yourself in the comment section. If you have questions for JD, please do type those into that Questions tab. And I’ll ask the most relevant live on air towards the end of JD’s workshop. And if you’re having any issues at all remembering any of what you see, don’t worry because, at the end of this webinar, the whole thing will be available as a replay, an on-demand replay. So you can watch it again or indeed share this link with friends and colleagues if you’d like to spread the good news about what’s possible with data.

So if you’ve all got a good idea of what the platform can do, let’s just have a look at what we’re gonna be talking about in the next few minutes. I’m gonna be asking JD first of all why data is so important (it really is the topic of the moment), and why a legacy TMS, which we might think could handle some of this stuff, may face obstacles to managing data quite as well as modern technology, some of the best-of-breed solutions, and of course, Trovata. I’m then gonna ask JD to give us a real insight into how this works in practice. He’s gonna be sharing some screen, showing us how he works with data. And then, we’ll close with a Q&A session. So I guess, JD, now that you’re online, let’s start with that first question. Why is data so important? Why should treasurers care about data that might be locked in those silos?

Joseph Drambarean: Yeah, that’s a great question, Mike. And it’s great to see you again. I’m excited to be talking about this topic. Data is really foundational to everything that we do, really, from a day-to-day analysis perspective. And one of the things that… Especially over the last year and a half as we’ve all dealt with COVID and every implication that has brought to our workplace, the availability of data in a clean way that gives you access to insights, that can drive decision-making is ultimately what this is all about. Now, that isn’t to say that it’s impossible to make those types of decisions on legacy data platforms. You just have limitations that may drive decision-making in a way that doesn’t take advantage of all of the insights that are at your disposal in today’s modern technology avenues that we have through cloud computing, through big data, as well as through modern database platforms. So I’d say that data is critical because it’s at the foundation of everything that we do. Now, old… Oh, go ahead!

Mike Hewitt: No, no, please! I’m just gonna say… So what I’ve seen over the past few years, a lot of treasurers, they can use data, but it tends to be a relatively slow process of getting it out of one place, maybe putting it into a spreadsheet. And you can access the data, but maybe the reporting is a bit slower. You’re getting more of a historical picture of what was happening at the time you exported the data rather than real-time. Is that one of the issues you’re seeing?

Joseph Drambarean: One of the things that we’re gonna see in this exploration today is this concept of the penalties of working with large amounts of data when you do it by hand and why your platform should be almost like a robotic suit that you’re wearing, that allows for you to jump higher, punch harder, move faster. And when you’re doing everything by hand, collating all of that data, cleansing it, and normalizing it, you introduce a tremendous amount of opportunity for human error. You also introduce latency just in the process of getting that data from all of its different resources. And then, ultimately, when you share it, when you provide it to stakeholders, when you give it to folks on your team, you’re obviously not equipped to be a big data platform. So when it comes to security, when it comes to extensibility, being able to do it in a performant way, you are wearing handcuffs at every stage and it really holds you down. And that’s why from a modern big data platform perspective, the concepts that we’re going to be wrestling with are extensibility, performance, scalability. And these are all things that are really key to the modern operator.

Mike Hewitt: This all makes absolute sense. But I can almost hear some of the questions right now from treasuries, who may have invested quite a lot of cash in a big TMS. Maybe, they did that a year ago, two years ago, three years ago. So I guess that their question, to be devil’s advocate, is, “Hang on! Isn’t all this stuff that my TMS should be able to do anyway?”

Joseph Drambarean: That’s a great question. And I think that one of the things that we’ve seen in the marketplace is that the legacy solutions were built on a different plane of understanding whether it’s from a technology perspective but also from where the banks were moving. Even five years ago, if you would have asked all of the heads of strategy at every single bank, they probably would have said that modern data platforms were coming in the future but weren’t being worked on. Fast forward five years later. Now, every single bank in the world has an API strategy. Some have already moved on to streaming strategies and providing real-time data. And that kind of modern technology stack really requires a platform that can keep up with that kind of pace. And it’s not just a matter of how you store the data and how you make it available to your end-users. It’s also how you integrate with some of these modern solutions. So that’s really where some of that dichotomy comes into play. Those lengthy implementation times of having to connect to the SWIFT network or having to set up BAI statements with every single one of the banks, that’s not gonna go away for the legacy players. And adopting some of the new protocols, some of the new standards, it’s going to take quite a bit of work. In addition, those older models are built to be more monolithic. They were assumed to be either installed on-prem or built on the cloud infrastructure that are self-hosted, which means that from a security perspective, from a scalability perspective, you’re single-threaded through those providers, whereas modern solutions take advantage of the public cloud, take advantage of the best-in-breed, best-in-class with regards to big data storage, big data compute, availability of machine learning tools, all of the things that you would want out of the box from a modern solution. 
And then, one of the things that has also been clear in this fast-evolving market is that user experience is really starting to matter in a way that impacts the day-to-day workflows of our operators. And having the ability to change that user experience with rapid release cycles, react to customer feedback, be able to implement the new solutions without having to manage a wide fleet of custom solutions, that really has become an important factor. So I’d say that the dichotomy is starting to become very clear, where you see the line between old and new. New is everything you would expect a modern application experience and platform to be. And that’s the interesting thing about all of this. It’s that we’ve gotten used to amazing consumer experiences that drive some of these best practices, whether it’s from a big data or search perspective or modern user experience, but that has not made its way to the enterprise just yet. However, we’re kind of bringing that message that it is here and you can take advantage of it today.

Mike Hewitt: Which makes complete sense. And I guess one of the things we’re particularly interested in is how this proves itself in real life. Joseph, I think I’ve lost your feed. Are you still there, and can you hear me?

Joseph Drambarean: Yep, I’m still here.

Mike Hewitt: Okay. As long as you’re there and everyone else can see you, that’s marvelous. I guess the average treasury is saying, “This is kind of theory, and it’s all very well. But can we see some proof?” And I think we may well have a video to share that gives us a sense of what you’re doing at the moment. Guys, while this video is running, do feel free to go to the poll tab and just vote and tell JD and myself a little bit about where you are at the moment in your journey towards greater automation. We’ve got about 90 seconds of video. And then, JD, I’m gonna turn the floor over to you and ask you to sort of walk the talk if you like and show us how this works if that’s okay.

Joseph Drambarean: Absolutely!

Mike Hewitt: Great! Let’s see this short video before we switch over to JD.

[Video]

Overseeing my company’s cash flow goes way beyond the hours I was spending manually collecting data. Processes that used to take me more than twenty hours a week are now automated in an instant. I no longer have to download data from all our bank accounts and format it into spreadsheets. Downloading transaction statements from my bank portals was not only a major headache but was also holding me back from agile real-time reporting. With Trovata, I don’t worry about day-to-day data management and focus more on strategic decisions that drive my company forward. I can explore thousands of transactions in milliseconds, even break down multibank data by different formats for custom reporting and share my insights with a simple click. Instead of relying on scattered spreadsheets and bank portals, I can intuitively analyze my company’s cash operations from Trovata’s secure cash management platform. Manual data collection doesn’t have to be a part of my job ever again. Multibank transactions are automatically consolidated and formatted, eliminating the weeks I would have spent gathering data and fixing spreadsheet errors. Confidently manage your financial operations with Trovata’s scalable API-connected platform! To find out more, speak with Trovata today.

Joseph Drambarean: Mike, I think you might be on mute.

Mike Hewitt: Do you know what? Every time we do one of these, that’s my classic error. I am a cynic. And I’m always gonna say, “Look, actually show me! Show me how this works.” So I am gonna hand the screen over to you and ask you to share your screen and give us some insight into how this all happens in practice. So I’m gonna go away, go on mute again, and leave it to you for the next few minutes. JD, it’s yours!

Joseph Drambarean: Sounds great! Let me know if you can see my screen.

Mike Hewitt: I can, and I think we all can.

Joseph Drambarean: Perfect! So one of the things that we were talking about just a moment ago was this concept of effortless reporting, being able to traverse all of your accounts in a way that stitches together different data formats across different types of institutions all in one view and gives you access to all that information through the power of search and through the power of a big data platform, whether that means personalizing your experience, being able to pull in any number of reports and have access to that on a fingertips basis, and then also giving you the edge of being able to do custom reporting (for example, if you wanted to see across all of your ACHs and be able to do instant analysis across 100 days of that flow), being able to do that kind of work on an instant basis with no penalty with regards to the novelty of that idea, and being able to take all of that information and put it together. That’s really at the heart of Trovata: being able to traverse all of that data. But in addition to that, we have access to more expansive sets of data. Because of the fact that we integrate natively with bank APIs, we have access to information that is provided by the banks that will not be provided in traditional more legacy file formats. And we continue to advance that area where banks continue to add more functionality, add more types of breakdowns. In some cases, remittance information is pulling in even invoice data. And we’re able to pull all of that information in and index it and take advantage of it from a search perspective.

Now, a lot of what Trovata does is built on the foundation of search. And we’ve really invested a lot in this experience to try to make it feel as modern and as Google-like as possible, whether it means as you’re typing in, you get dynamic autosuggest across all of your transactions, being able to narrow in on specific areas of your cash flows, or supporting concepts like natural language search. Here, I’m able to type in random information and be able to find within a quick view all of the data for this particular vendor, which is Weebly, on Fridays across my entire data set. I can even continue to narrow down that information and introduce operators that whittle it down even further. And then, we build powerful filters that allow for you to break it down either by different currencies, or by your accounts, or by the institutions that are driving that cash flow. And all of this is to drive, conceptually, this notion of labeling (that we call tags) that allows for you to keep an eye on these transactions not only historically but also going into the future. So as new transactions come in, we automatically tag these transactions so that you never have to put in any effort in maintaining them. There is no penalty with regards to how many tags you can create. I have quite a few here. And you can see that I create all kinds of hierarchy, whether it’s the one that I just created a moment ago or creating parent-child relationships where I might have a top of the house reporting module that can then be broken down by different nested tags that give you roll-ups of all of that data and instantly provide analysis, which then, of course, you can further dig into, whether you want to see this kind of information on a weekly or daily basis. You can introduce random numbers of periods that give you deep insight into all of that data. And as you’ve been seeing, I’m jumping around through this data. We’re dealing with, in this case, tens of thousands of transactions. 
We support millions of transactions of data. Some of our biggest customers have enormous volume in the e-commerce space and in the payment space. All of that scales because of our cloud data experience built on the power of Amazon Web Services. And then, of course, all of this is really coming to a head when you use our business intelligence, being able to traverse all of this data through search and get active reporting.
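[Editor's note] The tagging model JD describes — a saved search that labels matching transactions automatically, including ones that arrive later — can be sketched in a few lines. This is a hypothetical illustration only: the rule fields, matching logic, and sample transactions are invented for the example and are not Trovata's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class TagRule:
    """A saved search that labels matching transactions (hypothetical sketch)."""
    name: str
    keywords: list                # substrings to match in the description
    min_amount: float = None
    max_amount: float = None

    def matches(self, txn: dict) -> bool:
        desc = txn["description"].lower()
        if not any(k.lower() in desc for k in self.keywords):
            return False
        if self.min_amount is not None and txn["amount"] < self.min_amount:
            return False
        if self.max_amount is not None and txn["amount"] > self.max_amount:
            return False
        return True

def apply_tags(transactions, rules):
    """Tag every transaction; rerun as new data arrives so tags stay current."""
    for txn in transactions:
        txn["tags"] = [r.name for r in rules if r.matches(txn)]
    return transactions

# Illustrative rules and transactions
rules = [
    TagRule("payroll", keywords=["adp", "payroll"]),
    TagRule("ach-out", keywords=["ach"], max_amount=0.0),  # debits only
]
txns = [
    {"description": "ADP PAYROLL 2021-06", "amount": -84000.0},
    {"description": "ACH DEBIT WEEBLY", "amount": -29.0},
    {"description": "WIRE IN CUSTOMER", "amount": 125000.0},
]
tagged = apply_tags(txns, rules)
```

Because a tag is just a stored predicate, re-running it over incoming transactions gives the "no maintenance effort" property described above, and nesting rules gives the parent-child roll-ups.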

Some of these use cases… I think I just showed one where I was showing a specific vendor. Let’s see what happens when I type in that vendor. And I get an instant analysis of all the transactions for that vendor. But let’s say that I needed to understand deeper the cash flow across a dimension. Maybe, I needed to understand my exposure for this specific vendor across the currencies that it applies to. And maybe, I needed to drill in and focus in on a specific period and then, ultimately, see the specific transactions that are at play for that vendor. All of this kind of at your fingertips analysis is available out of the box. We provide all of it based on our search experience and on the platform performance of our big data stack. And ultimately, all of this information can be created in a canonical way. It can be exported. It can be sent to a stakeholder list through email. And you can even go further. If, for example, I wasn’t satisfied and I wanted to see how this breaks down across my accounts as well, you can dig in further and further and see all of these different dimensions and be able to see it using our big data analysis. And really, some of these are very powerful. When it comes to data aggregation, it’s something that, from a human perspective, would take days upon days to kind of collate this kind of information. And then, also number-crunching across all of these different dimensions. And we do it within seconds.

Now, all of this information also can be used on a forward basis. You can create forecasts built on that same conceptual model of labeling transactions using our powerful search methodology and then taking those specific labels and using them as logical units that can be then forward cast into a forecast that can be used for all kinds of different breakdowns. You can create as many forecasts as you’d like. I’ll jump into one here real quick. That is an example of a complex corporate cash forecast. And ultimately, it’s built on those building blocks of using the specific tags that I mentioned a moment ago and crunching the numbers across every single one of them using our custom AI and ML modeling that takes all of that data and points it into a one-year outlook that then you can easily traverse using our UI and be able to jump into any of the areas here, see details on a moment’s notice, and then also see a variance analysis on how this specific forecast did against actuals. This kind of experience that is intended to be effortless across all of your data, across all of your accounts, whether it’s international, whether it’s local here in the United States, it’s built on those foundations of big data.

And speaking of international, you can at any time create a local reporting environment. So, for example, if you’re in an EU office and you’re taking advantage of Trovata and you need to see everything here within Trovata in EU format, that can be done. Within a few clicks, your entire experience is converted into the EU format. And you can see all of your positions, all of your accounts rolled up into euro. All of that is intended to be, like I said, modular, quick in terms of its implementation. Of course, Trovata is zero IT required. We do everything white-glove from the moment of integration with the banks. And ultimately, you sit on a foundation that gives you access to all this data but also can then be extended. So we provide a developer experience that allows for you to use our APIs to connect to other downstream systems, whether it’s ERP systems or cash management systems on the month-end close side of the house like BlackLine. You can also connect using our payments platform to initiate payments directly through bank APIs. So we’ve extended this platform to be beyond just business intelligence. So I’ll stop there, see if there are any questions coming through. But hopefully, that was helpful. Mike!

Mike Hewitt: It’s enormously helpful. Thank you! Thank you very much! And good to see you back on screen. Interesting insight into your desktop, too, which I always find fascinating. Yours is, fortunately, very clean, very neat, admirable. We have questions from the audience and, of course, a couple of my own. [Participant’s name] is asking how you access invoice data. Do you integrate with ERPs as well? I think you mentioned that you did, but maybe… Can you give us a little bit of depth on that? How does that ERP integration work? What sort of sources do you find that you’re connected to?

Joseph Drambarean: That’s right! Yeah, so we natively integrate with a few ERP systems: Oracle Cloud, NetSuite, Sage Intacct, and Sage products generally. We pull in AR and AP data that can be used in conjunction with your cash data coming in from the banks. We also support importing data from Excel spreadsheets that may be driving models that are custom to your business. So all of that can be placed into the forecast and collated together into one common forecast that’s taking all of those data sources and putting them together in harmony.

Mike Hewitt: Excellent! [Participant’s name], I hope that answers your question. Feel free to come back with a follow-up should you wish. And one for me. And I think you touched on this, JD, about IT input. We all know that treasury teams are always scrambling to try and get resources from elsewhere in the company. You refer to this, I think, as a sort of very light touch in terms of IT, that you basically handle everything. But to be absolutely realistic, if I’ve got my TMS in place, or I’ve got some legacy systems, I come to you guys and I say, “Look, I really want to get some of this data processing power in my organization,” can I really sort of not occupy any IT? I mean, how does a typical implementation really work?

Joseph Drambarean: Right, in terms of an implementation, there is no IT involved. The process of setting up your Trovata instance happens within seconds, and that deployment happens at a global scale. The process of integrating with the banks is all handled by our service team. And it’s built on the partnerships that we’ve established with the banks. So we have procedures that we follow to integrate with the bank APIs. However, we also support integrating with legacy formats as well. So if there are specific banks that may not have an API, we also can connect via SFTP and pull in BAI statements, CAMT statements and be able to support those legacy formats as well. They then get the benefit of high performance and indexing and all of the things that you saw today. So there really is no limit to the type of data integrations that we have at our disposal. We’ve even done integrations with, as you saw, a crypto wallet. So it’s really very wide in terms of our capability.
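[Editor's note] The hybrid approach JD describes — modern bank APIs and legacy SFTP file feeds landing in one normalized store — can be sketched as a connector layer. Everything here is illustrative: the class names, the JSON field names, and the BAI-style record layout are simplified stand-ins, not a real bank's API or the full BAI2 specification.

```python
class BankAPIConnector:
    """Modern path: JSON records from a bank's REST API (shape is illustrative)."""
    def __init__(self, raw_records):
        self.raw = raw_records

    def fetch(self):
        return [{"account": r["accountId"],
                 "amount": float(r["amount"]),
                 "description": r["narrative"]} for r in self.raw]

class BAIFileConnector:
    """Legacy path: BAI2-style '16' detail records pulled over SFTP.
    The field layout here is heavily simplified for illustration."""
    def __init__(self, lines):
        self.lines = lines

    def fetch(self):
        txns = []
        for line in self.lines:
            code, type_code, amount, *rest = line.rstrip("/").split(",")
            if code != "16":
                continue  # only '16' detail records carry transactions
            # BAI amounts are unsigned integers in cents; in this sketch,
            # type codes >= 400 are treated as debits
            signed = -int(amount) if int(type_code) >= 400 else int(amount)
            txns.append({"account": "ACME-OPERATING",
                         "amount": signed / 100.0,
                         "description": rest[-1]})
        return txns

def ingest(connectors):
    """Merge every feed into one normalized list, ready for search and tagging."""
    out = []
    for c in connectors:
        out.extend(c.fetch())
    return out

feeds = [
    BankAPIConnector([{"accountId": "CHK-001", "amount": "1250.00",
                       "narrative": "WIRE IN CUSTOMER"}]),
    BAIFileConnector(["16,451,2900,Z,ACH DEBIT WEEBLY/"]),
]
transactions = ingest(feeds)
```

The point of the abstraction is the one made in the answer above: once both paths emit the same schema, everything downstream (indexing, search, tags, forecasts) is indifferent to whether a given bank is on APIs yet.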

Mike Hewitt: Really interesting. Thank you! Another question has just come in from [participant’s name]. Hello, [participant’s name]! If you are thinking about asking a question of your own, just click on the Questions tab, type it in, and we will get to you if we have time. So [participant’s name] wants to know, “How does Trovata forecast based on transaction data in the database?” So I’m guessing there, as you mentioned sort of AI and machine learning just without… Most of us probably wouldn’t understand a lot of the algorithm inside it. But just give us a flavor of how Trovata handles this forecasting thing based on the existing data.

Joseph Drambarean: Yeah, that’s a great question. We, of course, analyze the historical data. And depending on how much data we have, what we do is, we treat that data through a set of modeling techniques that gives us insight into which of the models will be best for a given set of data. And each tag, as you saw, may have a different history. It may have noisy history. It may have very cyclical history. And those present different types of challenges from a modeling perspective. So we have an array of modeling techniques that we use to choose the right model for the specific use case that has again been forwarded through your tag. And then, once we have the right model through our proprietary technique, we then take that and cast it on a forward basis and give you the optionality to then also make edits. So if there are areas where maybe the model was a little bit too aggressive and you need to trim it, give it a little haircut, you can do that and adjust the models on a manual basis as well. So it’s really intended to be a powerful joining of machine and human to create a forecast that takes the best of business intelligence that you are bringing in, that you know from the business itself and then automating the process of analyzing the transaction history, which gives you a good foundation and a basis for creating a forward rollover.
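[Editor's note] The model-selection idea JD outlines — try several candidate models per tag, pick whichever fits that tag's history best, then cast it forward — can be sketched with deliberately simple stand-in models. Trovata's actual models are proprietary; the candidates and backtest below are invented purely to show the selection mechanic.

```python
# Candidate model factories: each takes a history and returns a predictor.
def mean_model(history):
    avg = sum(history) / len(history)
    return lambda h: avg

def last_value_model(history):
    return lambda h: h[-1]

def moving_average_model(history, window=3):
    return lambda h: sum(h[-window:]) / min(window, len(h))

def backtest_error(model_factory, series, holdout=4):
    """Mean absolute error predicting each held-out point one step ahead."""
    errors = []
    for i in range(len(series) - holdout, len(series)):
        train = series[:i]
        predict = model_factory(train)
        errors.append(abs(predict(train) - series[i]))
    return sum(errors) / len(errors)

def choose_model(series):
    """Pick the candidate with the lowest backtest error on this tag's history."""
    candidates = {
        "mean": mean_model,
        "last_value": last_value_model,
        "moving_average": moving_average_model,
    }
    scores = {name: backtest_error(f, series) for name, f in candidates.items()}
    best = min(scores, key=scores.get)
    return best, candidates[best](series)

# A steadily trending weekly tag: recency-sensitive candidates win the backtest.
history = [100, 102, 104, 106, 108, 110, 112, 114]
name, model = choose_model(history)
forecast_next = model(history)
```

A noisy tag would favor a smoothing model instead, which is the "right model for the specific use case" behavior described above; the manual-edit step JD mentions would then overlay human adjustments on the selected model's output.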

Mike Hewitt: Great. Thanks for that, JD! [Participant’s name], I hope that that answers your question. Again, just to remind that you can upvote questions. We’re getting quite a few coming in now. We may not be able to get to everyone. If you see a question you really want answered, do upvote it and I’ll try and get to it. And even as I speak, [participant’s name], your question has been upvoted. What percentage of banks is actually API-enabled already? A very good question. If we talk about APIs, it’s great. But is every bank already willing to share data through APIs? What are you seeing at the moment? How many banks are kind of with the program as far as APIs go?

Joseph Drambarean: I’d say that within the top 50, every single one of the banks has an API option that’s either fully developed and very mature or in the process of being released within the next year or two. I’d say that if you continue down that scale and you look at the long tail, every single one of the banks is starting to publish API strategy decks and providing insight into the future of their technology. Now, from a regional perspective, there may be some banks that will take a little bit longer and will eventually get to the API format. And that’s why the hybrid approach (having the ability to connect with the modern stack as well as the legacy stack) is just so critical. It’s because as new connections come online, you want to have that nimble ability to be able to transition to a more modern connection and be able to take advantage of that metadata. And as you do that and you have a platform that can support you in that journey, your data transformation can follow in tow and you can take advantage of all of the platform capabilities that you have from a Trovata perspective, whether it’s API extensibility, whether it’s the search capabilities, whether it’s AI/ML, and not have to sacrifice any of that just because a bank might not have an API connection available. So we are able to walk with you through that data transformation journey.

Mike Hewitt: Excellent! [Participant’s name], I hope that answers your question. The next one is being upvoted by a couple of people from [participant’s name] who wants to know, “Does the system or how does the system connect with real-time payment companies?” I mean, he mentions Fiserv. But of course, there are plenty available. Do you connect with other payment services like that?

Joseph Drambarean: Trovata actually supports real-time payments out of the box. So we have direct connections with bank partners that are offering real-time payment capabilities via APIs. And we natively integrate with their APIs allowing for you to initiate real-time payments directly from the Trovata platform.

Mike Hewitt: Excellent! [Participant’s name], I hope that answers your question. And we have just time, I think, for one more. [Participant’s name] is back with a second question, but it’s a good one. So thanks, [participant’s name], for this. The question is, “Are all the forecasts derived by your models, or can I use my own scenarios as well?” He talks about training averages, that kind of thing. How much sort of user modification is possible at the moment? Is that something you encourage, or is that likely to cause problems?

Joseph Drambarean: The forecast experience is intended to be an approach that synthesizes a dramatic amount of data. And if you have your own custom models maybe that you’re doing within an Excel environment, all of that output can be copied and placed directly within your forecast. So there is no penalty whatsoever with regards to the source of the data. Now, in some cases, our models might be really beneficial. For example, if it’s very noisy data that’s just difficult to make sense of, it might make sense to use Trovata’s AI to take advantage of the modeling for that specific use case. However, if you have a very precise model because of business intelligence that has been accumulated over the years and you know precisely where that data is going to be cast based on trends, it makes a ton of sense to take that data and place it directly within the forecast. So I’d say that there is no penalty whatsoever with regards to the data source, and we welcome all of the different formats.

Mike Hewitt: Tremendous. Thanks for that, JD! So we’re drawing to a close now. Just to share with you a couple of results from that poll. Terrifyingly, 30% of the people on this webinar are still pasting balances into a spreadsheet. If you’re doing that, then I suggest you give Trovata a call. And I’m sure they can help you get rid of that practice. But for now, JD, thank you very much for that really informative half an hour! And to our audience, thank you very much for being here, thank you for your questions! If you have further questions, don’t be afraid to keep typing them. We can get to them offline after the event. Do vote in the poll. There’s still time for that. And of course, do feel free to follow up with Trovata, you can see a link there on the screen, with any further questions you may have. So from JD and myself, thanks very much and goodbye! Thanks.

Joseph Drambarean: Thank you, Mike!

Mike Hewitt: Bye!

Speaker

Joseph Drambarean
Chief Technology Officer, Trovata

Working with key Fortune-level brands, including Capital One, Marriott International, Microsoft, Harley-Davidson, and Allstate Insurance, Joseph Drambarean has helped brands navigate the digital landscape by creating and executing innovative digital strategies, as well as enterprise product integrations that incorporate cloud architecture, analytical insights, industry-leading UI/UX, and technical recommendations designed to bring measurable ROI.