Okay, so maybe it's time to wrap up our miniseries here on hairy, scary problems. Let's recap for just a minute. We've talked about reclassification problems, where rule changes alter the attributes assigned to transactions or postings. When a rule changes, we have the possibility of producing a switch view, which simply moves us from one classification to another at the point the rule change is made, or a recast view, which restates history, either backward or forward, as if everything had always carried the new attribute. Neither of those is a true reclass view, which requires that new business events be generated to affect balances based upon rule changes, so that we can see how things posted into one position and then moved into another position over time. We've also talked a bit about backdating processes and how those can complicate the problem, and about how lookups and effective-dated joins can help us alleviate some of the problems and produce more flexible reporting environments.
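To make the effective-dated lookup idea concrete, here is a minimal sketch in Python. The rule table, attribute values, and dates are all illustrative, not from any real system: the point is just that a lookup keyed by effective date lets each transaction resolve to the rule that was in force on its date, rather than the latest rule.

```python
from bisect import bisect_right
from datetime import date

# Hypothetical effective-dated rule history: each entry says which
# attribute applies starting on its effective date. The dates and
# attribute names here are purely illustrative.
RULE_HISTORY = [
    (date(2023, 1, 1), "Retail"),
    (date(2023, 7, 1), "Commercial"),  # reclassification effective July 1
]

def effective_attribute(as_of: date) -> str:
    """Return the attribute whose effective date is the latest one
    on or before `as_of` -- the core of an effective-dated join."""
    dates = [d for d, _ in RULE_HISTORY]
    idx = bisect_right(dates, as_of) - 1  # last entry not after as_of
    if idx < 0:
        raise ValueError(f"no rule effective on {as_of}")
    return RULE_HISTORY[idx][1]
```

A transaction dated June 30 resolves to "Retail" while one dated July 1 resolves to "Commercial", so reports can be rerun against history without overwriting it.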
The last thing we just talked about is data modeling and how normalization can affect this. All of this complexity, though: where does it leave us? How do we solve this problem? Well, this is where new thoughts in this area are starting to become interesting to me, basically around artificial intelligence, AI: the ability of machines to understand and comprehend more quickly the amounts of data we need to work with, and to help us make sense of that data more quickly. So where does AI play in this space?
Well, a lot of the AI work today, particularly at IBM, has been focused on qualitative data: textual data, textual analytics, understanding meaning within text, natural language processing. That's a big and important aspect, and a big problem to be solved, with big implications, which I think we'll talk about in another segment: how the intersection of qualitative and quantitative data must come together to help us solve these problems. But this particular problem space we're talking about here has certain types of rules, and it's not naturally a natural-language problem; the kinds of rules we're talking about here aren't expressed as natural language.
They're expressed as reference data and rules for extracting and processing data. So there's an aspect here where I think we have to apply artificial intelligence to help us make sense of the basis of the data we're processing. It has to help us intersect the customer view of the data with the financial view, the risk view, the management reporting view, and all the other kinds of views we're interested in getting out of the data, the points where these balances start to approach each other. When a rule changes, how is that change going to affect where the balances are posting? How do we make sense of which are historical values and which are future values? And how do we even go so far as to generate the business events required by a true reclass view?
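That last step, generating the business events a true reclass view requires, can be sketched very simply. This is an illustrative example, not any specific system's implementation; the event shape and names are assumptions. The idea is that a rule change produces a balanced pair of events: one backing the balance out of the old position, one posting it into the new one, so history shows where the balance sat before and after the change.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Event:
    """A hypothetical business event: an amount posted to an
    attribute (position) as of an effective date."""
    effective: date
    attribute: str
    amount: float

def reclass_events(balance: float, old_attr: str, new_attr: str,
                   change_date: date) -> list[Event]:
    """Generate the offsetting pair of events a reclass view needs
    when a rule change moves a balance between positions."""
    return [
        Event(change_date, old_attr, -balance),  # reverse out of old position
        Event(change_date, new_attr, balance),   # post into new position
    ]
```

Because the two events net to zero, total balances are preserved while the movement between positions becomes visible in history, which is exactly what the switch and recast views cannot show.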
This is going to require more computing capacity than we've ever applied before, and more sophisticated computing. But I do think we're on the cusp of being able to solve these things in a much more effective manner: to create a metric engine that would help us make sense of our quantitative world in a much more organized, less costly fashion; to gain much more insight and transparency from our measurement of the world, from understanding our actions and the actions of others, and the value of those actions; and to measure all of that so we can make choices about what we do and where we apply our energies and efforts. All of those things, I think, will come together to make this series of episodes irrelevant at some point as hairy, scary monster problems. We'll look back and say, those aren't such big problems anymore. We figured out how to solve them.