Good morning. I'm in Westchester, New York. I thought I'd continue the journal processing discussion and build on a theme we've talked about a little bit: normalization and denormalization. This is part of database theory, and there's a whole lot that could be said in this space; I'm not going to go into all of it.
And when it comes to financial ledger processing, I haven't found many people who have actually studied these concepts or determined in depth exactly what normalization and denormalization mean there. Let's review for a moment, though. When data is normalized, it's optimized for storage and for update purposes: we store a value only one time, and we refer to that value through keys on other data structures, so that all the data is stored once wherever it can be. In a ledger process, the account titles, the cost center titles, the legal entity titles, all of those descriptions are stored in normalized structures in one place and referred to through keys: the legal entity identifier, the cost center identifier, or the account identifier. So we do some level of normalization in ledger processing, and we probably always have. Clear back in the earliest bookkeeping days, they didn't write down the company or the account description on the ledger; they only wrote down the account number, and then referred to the account description when they produced a report of some kind.
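To make that concrete, here is a minimal Python sketch of normalized reference data; the table contents, keys, and field names are hypothetical, just an illustration of the pattern described above:

```python
# Hypothetical normalized reference data: each description is stored exactly
# once and is referred to from other structures only by its key.
accounts = {"1000": "Cash", "4000": "Revenue"}
cost_centers = {"CC01": "Operations"}
legal_entities = {"LE01": "Acme Holdings Inc."}

# A journal line carries only the keys; descriptions are resolved at report time.
journal_line = {"entity": "LE01", "cost_center": "CC01",
                "account": "1000", "amount": 250.00}

def describe(line):
    """Resolve keys to titles the way a report writer would."""
    return (legal_entities[line["entity"]],
            cost_centers[line["cost_center"]],
            accounts[line["account"]],
            line["amount"])

print(describe(journal_line))
# ('Acme Holdings Inc.', 'Operations', 'Cash', 250.0)
```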
But when it comes to transactions, or business events, the storage tends to pretty much already be a normalized structure. Yes, you could define a second table that held combinations of attributes for a business event and try to detect when the same combination occurred again. That combination table probably can't include something like a timestamp, because two transactions for the same customer and the same business event happening at the exact same time is highly unlikely. If you exclude the timestamp, though, you might find combinations of the other attributes that could be normalized.
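As a sketch of that combination-table idea (the events and attribute names here are made up), hashing the repeating attributes while leaving the timestamp out shows how two events can share one stored combination:

```python
# Hypothetical combination table: store each repeating attribute set once,
# keyed by a hash; events keep only the hash plus the varying timestamp.
import hashlib
import json

events = [
    {"customer": "C1", "account": "4000", "type": "sale", "ts": "2024-01-01T09:00"},
    {"customer": "C1", "account": "4000", "type": "sale", "ts": "2024-01-02T14:30"},
]

combos = {}             # hash -> attribute combination (stored once)
normalized_events = []  # events reduced to a combination key plus timestamp

for e in events:
    attrs = {k: v for k, v in e.items() if k != "ts"}  # timestamp excluded
    key = hashlib.sha256(json.dumps(attrs, sort_keys=True).encode()).hexdigest()[:12]
    combos[key] = attrs
    normalized_events.append({"combo": key, "ts": e["ts"]})

print(len(combos), "stored combination for", len(events), "events")
```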
But by and large, event storage is already fairly normalized. In the world of balances, though, I don't quite know how to describe balances. Are they normalized, or are they denormalized? We aggregate the values associated with an amount in some way; we accumulate those values. So in some sense we're normalizing when we make a balance, because we've accumulated all of the amounts and we're storing the attributes associated with those transactions only one time, in the posting key. But we've also lost detail. We've destroyed detail.
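A small Python illustration of that point, with made-up transactions and a simplified two-part posting key: once amounts are summed into balances, the individual events can no longer be recovered from the balance alone:

```python
# Aggregating transactions into balances keyed by a (simplified) posting key.
from collections import defaultdict

transactions = [
    {"entity": "LE01", "account": "1000", "amount": 250.00},
    {"entity": "LE01", "account": "1000", "amount": -75.00},
    {"entity": "LE01", "account": "4000", "amount": -250.00},
]

balances = defaultdict(float)
for txn in transactions:
    posting_key = (txn["entity"], txn["account"])  # attributes stored once per balance
    balances[posting_key] += txn["amount"]

print(dict(balances))
# {('LE01', '1000'): 175.0, ('LE01', '4000'): -250.0}
# The 175.0 balance cannot tell us it came from 250.00 and -75.00.
```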
That's what the problem of transparency is: we can't understand why a balance is what it is, because we need the business events to understand that. So although a balance is normalized in a sense, we've lost detail. There isn't really true normalization there, because true normalization, I think, requires that you don't lose information through the data storage process. This idea of normalization becomes more important as we go forward and think about a metric engine. Our metric engine will be much more flexible if we can maintain access to many more attributes than just the posting key when it comes to reporting processes. If we have something that allows us to access all sorts of other identifiers and other kinds of attributes, about customers and vendors and products and environments, all of those things would enhance our reporting process if they're available in addition to our posting key.
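To sketch what that flexibility buys (again with invented events and attribute names): if the events retain attributes such as customer and product, a report can aggregate over any of them, not only the posting key:

```python
# Aggregate amounts over any attribute set, not only the posting key.
from collections import defaultdict

events = [
    {"account": "4000", "customer": "C1", "product": "P1", "amount": -100.0},
    {"account": "4000", "customer": "C2", "product": "P1", "amount": -150.0},
    {"account": "4000", "customer": "C1", "product": "P2", "amount": -50.0},
]

def metric(events, group_by):
    """Sum amounts grouped by the requested attributes."""
    out = defaultdict(float)
    for e in events:
        out[tuple(e[a] for a in group_by)] += e["amount"]
    return dict(out)

print(metric(events, ["account"]))              # the classic ledger view
print(metric(events, ["customer"]))             # a customer view
print(metric(events, ["product", "customer"]))  # product by customer
```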
So we'll talk more about this idea of data modeling, an important concept in the normalization of a ledger, as we go forward and talk more about the metric engine.