Fireside chat with Verdentum’s Chief Scientist,
Dr. Shambhawi Pandey
ELJ: How would you describe Verdentum in one sentence?
S: Verdentum transforms impact analysis and impact finance through tech.
ELJ: ‘Tech’ meaning…?
S: Meaning scientific approaches. At the most basic level, we’re trying to compute the Y outcome of an X input. We do this using causal inference and impact analysis, grounded in science.
ELJ: Sounds impressive. But what exactly is impact analysis?
S: Impact analysis involves using scientific tools to compute the outcome of a certain intervention or process. As a population, we suffer from shortsightedness when it comes to understanding the impact of something over a long time period. At Verdentum, we’re future-minded, so for us impact analysis helps us understand how a certain ‘task’ can change the future.
ELJ: We definitely need longer-term thinking. What do you mean by ‘task’?
S: Any organisational operation. It could simply mean existence. For example, the existence of this building: what is the impact of this building’s existence on planetary boundaries? Or let’s take another example: A cotton bag has higher emissions during production than a plastic bag, but in the long run, a cotton bag will last; you can keep using it over and over again. It’s only when you use the cotton bag over a long period of time that you’d reduce the emissions compared to a plastic bag. So outcomes differ based on the timeframe. We want to understand how much it differs. And we want to know what planning is required to reduce the impact. Impact analysis is really just a way of looking into the future using scientific tools.
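To make the timeframe point concrete, here is a back-of-envelope sketch in Python. The emission figures are illustrative assumptions, not Verdentum data; the point is only that the comparison flips once the cotton bag is reused enough times.

```python
# Break-even sketch for the cotton-vs-plastic bag example above.
# Both emission figures are hypothetical placeholders.
COTTON_BAG_KG_CO2 = 2.5    # one-off production emissions (assumed)
PLASTIC_BAG_KG_CO2 = 0.05  # emissions per single-use plastic bag (assumed)

# Each reuse of the cotton bag replaces one plastic bag, so the
# break-even point is the ratio of the two footprints.
break_even_uses = COTTON_BAG_KG_CO2 / PLASTIC_BAG_KG_CO2
print(f"cotton bag beats plastic after ~{break_even_uses:.0f} uses")
# -> cotton bag beats plastic after ~50 uses
```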
ELJ: Let’s drill down into the details. What are some of those scientific tools?
S: We currently use life-cycle assessments (LCAs) and the TNFD framework in our analysis. We are data-driven. We try to get the data from the root level and then build our models using that data. We do all sorts of analysis and create a large variety of prediction models. We try to predict the future impact even where there are data gaps. Our methodologies are constantly evolving.
ELJ: Who would then use these models?
S: We want it to be used by all sorts of companies across sectors. So we need something that is adaptable and uses general metrics - in other words, something that either uses available frameworks, like TNFD, or builds on a framework we create ourselves based on recent literature and our own insights.
ELJ: We constantly hear in the news about data. Why is data so important to FIs, philanthropists and governments today? What’s all the fuss about?
S: If you go ten years back, there was almost no data - and ten years before that, there was hardly any internet! The fuss about data is because we've come to a point where we feel we need collective human knowledge to accelerate the progress we’ve made so far, and to integrate it into a generalised structure. We don't want to reinvent the wheel or repeat what’s already been done - if somebody else has already done it, let’s look at the data and learn from it rather than repeating it unnecessarily. Let’s be efficient with our time. We’re working on solving urgent issues, after all. We don’t have all the time in the world. So for Verdentum, working with data is about using collective knowledge to our advantage.
ELJ: So if anything, there’s too much data today and we need to harness it to be effective?
S: The availability of data today exists in a way it didn't before. To stay ahead, you need access to this data. It’s about new variations in the data and new features. And about trying to really understand the data – if the data isn’t rich and extensive, your analysis will be lacking and, frankly, you’re out of the race. It’s a competition. There’s a need to get ahead: the future will be based on data. Most people realise this. We can train models to develop the science itself; you just need to provide the data and it will spit out the equations at you! It’s an easier way round now, but only if you have the right access to the data points.
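As a loose illustration of that ‘data in, equations out’ idea, here is a minimal sketch that recovers a simple law from noisy synthetic data using ordinary polynomial regression. Real equation-discovery systems rely on symbolic regression over much richer model classes, and nothing below reflects Verdentum’s actual tooling.

```python
# Minimal "provide the data, get the equation back" sketch:
# fit a polynomial to noisy observations of y = 3x^2 + 2x + 1.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 200)
y = 3 * x**2 + 2 * x + 1 + rng.normal(0, 2.0, size=x.shape)

# The fit "spits out" the coefficients of the underlying equation.
a, b, c = np.polyfit(x, y, deg=2)
print(f"recovered equation: y = {a:.2f}x^2 + {b:.2f}x + {c:.2f}")
```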
ELJ: What if you don’t have access to the data points? For example, asset managers with large portfolios - maybe thousands of companies - might not have the information required to effectively analyse their portfolios for the whole range of ESG issues. How could Verdentum’s tech circumvent this issue?
S: We use different data science techniques. How we fill gaps when building models depends on the scenario. It can be as simple as data augmentation. We look at the value of the data. It’s a known practice in data science: data augmentation is about creating new data points from existing data points.
Let me give an analogy. Suppose you want to train a model that can predict whether a picture is a picture of the sky or a picture of something else, but you might not have enough pictures to train your model. So you take existing pictures, vary them slightly, and use them as filler pictures. This creates more data points. Does that make sense? It’s just about creating new data points from the existing ones. You’re looking at the mean and standard deviation of the data and adding noise to create more data points.
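Here is a minimal sketch of that noise-based augmentation, assuming simple tabular data rather than images; the dataset and the augment helper are hypothetical, but the mechanism - jittering existing samples with noise sized from the data’s own statistics - is the one described above.

```python
# Noise-based data augmentation: synthesise new points by jittering
# existing samples with Gaussian noise scaled by each feature's std.
import numpy as np

rng = np.random.default_rng(0)

# Original dataset: 5 samples, 3 features (figures are illustrative).
data = rng.normal(loc=[10.0, 0.5, 100.0], scale=[2.0, 0.1, 15.0], size=(5, 3))

def augment(samples: np.ndarray, n_new: int, noise_scale: float = 0.1) -> np.ndarray:
    """Create n_new synthetic points around randomly chosen samples."""
    std = samples.std(axis=0)
    picks = samples[rng.integers(0, len(samples), size=n_new)]
    return picks + rng.normal(0.0, noise_scale * std, size=picks.shape)

augmented = np.vstack([data, augment(data, n_new=20)])
print(augmented.shape)  # (25, 3): 5 original + 20 synthetic points
```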
At Verdentum, we take a more extensive approach. We go bottom up. We try to generate the missing data ourselves. We use empirical relations and correlations between different factors, and from these we predict the data points that were initially missing. It can be very comprehensive. And we can use a surrogate model to generate the data.
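As a hedged illustration of that bottom-up gap-filling, here is a sketch in which a linear surrogate is fitted on records where a metric is observed and then used to predict it where it is missing. The linear form, the feature names and all figures are assumptions for illustration only.

```python
# Surrogate-model gap-filling: learn the empirical relationship on
# complete records, then predict the metric for incomplete ones.
import numpy as np

rng = np.random.default_rng(1)

# Two known features for all 100 records (e.g. energy and water use).
X = rng.uniform(1.0, 10.0, size=(100, 2))
# Target metric (e.g. emissions), observed only for the first 70.
y_full = 4.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.5, 100)
observed, missing = slice(0, 70), slice(70, 100)

# Fit a least-squares surrogate on the observed records.
A = np.column_stack([X[observed], np.ones(70)])
w, *_ = np.linalg.lstsq(A, y_full[observed], rcond=None)

# Predict the initially missing data points from the correlations.
y_filled = np.column_stack([X[missing], np.ones(30)]) @ w
print(f"mean abs error on held-out records: "
      f"{np.abs(y_filled - y_full[missing]).mean():.3f}")
```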
ELJ: So if I’m an investor thinking about my ESG strategy, how could Verdentum help?
S: What really makes us different is that we’re creating generic solutions that can be used in a variety of scenarios and can be tailored to different end users: governments, asset owners, banks, philanthropists and so on. We identify the problems first, before generating the data. We can’t go around aimlessly looking for solutions - we need to plan a strategy for where exactly the problem area is and how to tackle that particular problem. We try to identify the problem regions first and then look for solutions; instead of a head-on approach, we go for a reverse approach. We look at the problem from a different perspective.
ELJ: And if I’m a bank?
S: Great question! As you know, the CEO and I recently ran a training for ShareAction’s Biodiversity and Banks Roundtable for banks across Europe. During this roundtable, I learnt that banks were worried about the TNFD and how to do their risk assessments for these new metrics. We explained that this is not new - there is pre-existing literature on these biodiversity metrics. We showed some examples from the chemical industry of how biodiversity metrics are already computed. We explained to the banks that whenever they feel they need new metrics, we can provide them. We really went into detail on the process of a risk assessment against a biodiversity metric provided by TNFD: what data is needed for the metric and what goes through our models when we compute the metric as the model’s output.
ELJ: What is it exactly that banks need to know?
S: If there are companies A, B and C in a particular sector, banks want to know how to get the maximum return on investment with the minimum risk. The metrics help evaluate the risk of investing in a certain industry, region or sector – for TNFD, this is location-specific. So if a bank invests in a property that is in a region at risk of flooding, the valuation will decrease. We can run a scenario analysis using our tools to compare the investment options and understand the risk-return ratio when factoring in ESG topics like biodiversity and climate change; this can help banks make decisions.
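To give a flavour of that kind of scenario comparison, here is a toy sketch that folds a location-specific flood risk into an expected return and ranks the options. The Investment class, the probabilities and every figure are hypothetical; a real TNFD-aligned assessment would use far richer, location-specific data.

```python
# Toy scenario analysis: rank investments by a risk-adjusted return
# that discounts for a location-specific flood scenario.
from dataclasses import dataclass

@dataclass
class Investment:
    name: str
    expected_return: float    # annual return, ignoring climate risk
    flood_probability: float  # chance of a flood event per year
    flood_loss: float         # fraction of value lost if flooded

    def risk_adjusted_return(self) -> float:
        """Expected return once the flood scenario is factored in."""
        return self.expected_return - self.flood_probability * self.flood_loss

options = [
    Investment("Property A (floodplain)", 0.08, 0.10, 0.40),
    Investment("Property B (high ground)", 0.06, 0.01, 0.40),
]
for inv in sorted(options, key=lambda i: i.risk_adjusted_return(), reverse=True):
    print(f"{inv.name}: {inv.risk_adjusted_return():.1%} risk-adjusted")
# Property B edges out A once the flood scenario is priced in.
```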